diff --git a/.clang-tidy b/.clang-tidy index 219ac263ab3..896052915f7 100644 --- a/.clang-tidy +++ b/.clang-tidy @@ -37,6 +37,7 @@ Checks: [ '-cert-oop54-cpp', '-cert-oop57-cpp', + '-clang-analyzer-optin.core.EnumCastOutOfRange', # https://github.com/abseil/abseil-cpp/issues/1667 '-clang-analyzer-optin.performance.Padding', '-clang-analyzer-unix.Malloc', @@ -94,6 +95,7 @@ Checks: [ '-modernize-pass-by-value', '-modernize-return-braced-init-list', '-modernize-use-auto', + '-modernize-use-constraints', # This is a good check, but clang-tidy crashes, see https://github.com/llvm/llvm-project/issues/91872 '-modernize-use-default-member-init', '-modernize-use-emplace', '-modernize-use-nodiscard', @@ -121,7 +123,8 @@ Checks: [ '-readability-magic-numbers', '-readability-named-parameter', '-readability-redundant-declaration', - '-readability-redundant-inline-specifier', + '-readability-redundant-inline-specifier', # useful but incompatible with __attribute__((always_inline)) (aka. ALWAYS_INLINE, base/base/defines.h). + # ALWAYS_INLINE only has an effect if combined with `inline`: https://godbolt.org/z/Eefd74qdM '-readability-redundant-member-init', # Useful but triggers another problem. Imagine a struct S with multiple String members. Structs are often instantiated via designated # initializer S s{.s1 = [...], .s2 = [...], [...]}. In this case, compiler warning `missing-field-initializers` requires to specify all members which are not in-struct # initialized (example: s1 in struct S { String s1; String s2{};}; is not in-struct initialized, therefore it must be specified at instantiation time). As explicitly @@ -132,12 +135,7 @@ Checks: [ '-readability-uppercase-literal-suffix', '-readability-use-anyofallof', - '-zircon-*', - - # This is a good check, but clang-tidy crashes, see https://github.com/llvm/llvm-project/issues/91872 - '-modernize-use-constraints', - # https://github.com/abseil/abseil-cpp/issues/1667 - '-clang-analyzer-optin.core.EnumCastOutOfRange' + '-zircon-*' ] WarningsAsErrors: '*' diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md index f9765c1d57b..3d7c34af551 100644 --- a/.github/PULL_REQUEST_TEMPLATE.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -11,6 +11,7 @@ tests/ci/cancel_and_rerun_workflow_lambda/app.py - Backward Incompatible Change - Build/Testing/Packaging Improvement - Documentation (changelog entry is not required) +- Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC) - Bug Fix (user-visible misbehavior in an official stable release) - CI Fix or Improvement (changelog entry is not required) - Not for changelog (changelog entry is not required) @@ -71,10 +72,10 @@ At a minimum, the following information should be added (but add more as needed) - [ ] Exclude: All with Aarch64 --- - [ ] do not test (only style check) +- [ ] upload all binary artifacts from build jobs - [ ] disable merge-commit (no merge from master before tests) - [ ] disable CI cache (job reuse) -- [ ] allow: batch 1 for multi-batch jobs -- [ ] allow: batch 2 -- [ ] allow: batch 3 -- [ ] allow: batch 4, 5 and 6 +- [ ] allow: batch 1, 2 for multi-batch jobs +- [ ] allow: batch 3, 4 +- [ ] allow: batch 5, 6 diff --git a/CHANGELOG.md b/CHANGELOG.md index 207b88f7860..b10f521b8ac 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,4 +1,5 @@ ### Table of Contents +**[ClickHouse release v24.5, 2024-05-30](#245)**
**[ClickHouse release v24.4, 2024-04-30](#244)**
**[ClickHouse release v24.3 LTS, 2024-03-26](#243)**
**[ClickHouse release v24.2, 2024-02-29](#242)**
@@ -7,6 +8,179 @@ # 2024 Changelog +### ClickHouse release 24.5, 2024-05-30 + +#### Backward Incompatible Change +* Renamed "inverted indexes" to "full-text indexes" which is a less technical / more user-friendly name. This also changes internal table metadata and breaks tables with existing (experimental) inverted indexes. Please make sure to drop such indexes before the upgrade and re-create them afterwards. [#62884](https://github.com/ClickHouse/ClickHouse/pull/62884) ([Robert Schulze](https://github.com/rschu1ze)). +* Usage of the functions `neighbor`, `runningAccumulate`, `runningDifferenceStartingWithFirstValue`, and `runningDifference` is deprecated (because they are error-prone). Proper window functions should be used instead. To re-enable them, set `allow_deprecated_functions = 1` or set `compatibility = '24.4'` or lower. [#63132](https://github.com/ClickHouse/ClickHouse/pull/63132) ([Nikita Taranov](https://github.com/nickitat)). +* Queries from `system.columns` will work faster if there is a large number of columns, but many databases or tables are not granted `SHOW TABLES`. Note that in previous versions, if you granted `SHOW COLUMNS` to individual columns without granting `SHOW TABLES` to the corresponding tables, the `system.columns` table would show these columns, but in the new version, it will skip the table entirely. Also removed the trace log messages "Access granted" and "Access denied" that slowed down queries. [#63439](https://github.com/ClickHouse/ClickHouse/pull/63439) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Setting `replace_long_file_name_to_hash` is enabled by default for `MergeTree` tables. [#64457](https://github.com/ClickHouse/ClickHouse/pull/64457) ([Anton Popov](https://github.com/CurtizJ)). The data written with this setting can be read by server versions since 23.9. After you use ClickHouse with this setting enabled, you cannot downgrade to versions 23.8 and earlier. + +#### New Feature +* Add the `Form` format to read/write a single record in the `application/x-www-form-urlencoded` format. [#60199](https://github.com/ClickHouse/ClickHouse/pull/60199) ([Shaun Struwig](https://github.com/Blargian)). +* Added the possibility to compress data in CROSS JOIN. [#60459](https://github.com/ClickHouse/ClickHouse/pull/60459) ([p1rattttt](https://github.com/p1rattttt)). +* Added the possibility to process `CROSS JOIN` in temporary files if the size exceeds limits. [#63432](https://github.com/ClickHouse/ClickHouse/pull/63432) ([p1rattttt](https://github.com/p1rattttt)). +* Support JOIN with inequality conditions that involve columns from both the left and right tables, e.g. `t1.y < t2.y`. To enable, `SET allow_experimental_join_condition = 1` (a minimal sketch follows below). [#60920](https://github.com/ClickHouse/ClickHouse/pull/60920) ([lgbo](https://github.com/lgbo-ustc)). +* Maps can now have `Float32`, `Float64`, `Array(T)`, `Map(K, V)` and `Tuple(T1, T2, ...)` as keys. Closes [#54537](https://github.com/ClickHouse/ClickHouse/issues/54537). [#59318](https://github.com/ClickHouse/ClickHouse/pull/59318) ([李扬](https://github.com/taiyang-li)). +* Introduce bulk loading to `EmbeddedRocksDB` by creating and ingesting SST files instead of relying on the RocksDB built-in memtable. This helps to increase import speed, especially for long-running INSERT queries to StorageEmbeddedRocksDB tables. Also, introduce `EmbeddedRocksDB` table settings. [#59163](https://github.com/ClickHouse/ClickHouse/pull/59163) [#63324](https://github.com/ClickHouse/ClickHouse/pull/63324) ([Duc Canh Le](https://github.com/canhld94)).
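+
+A minimal sketch of the new inequality JOIN conditions mentioned above (the tables `t1`, `t2` and the columns `key`, `y` are hypothetical):
+
+```sql
+SET allow_experimental_join_condition = 1;
+
+-- `t1` and `t2` are hypothetical tables; the ON clause combines an equality
+-- condition with an inequality condition over columns from both sides:
+SELECT t1.key, t1.y AS left_y, t2.y AS right_y
+FROM t1
+JOIN t2 ON t1.key = t2.key AND t1.y < t2.y;
+```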
+* Users can now parse CRLF line endings in the TSV format using the setting `input_format_tsv_crlf_end_of_line`. Closes [#56257](https://github.com/ClickHouse/ClickHouse/issues/56257). [#59747](https://github.com/ClickHouse/ClickHouse/pull/59747) ([Shaun Struwig](https://github.com/Blargian)). +* Added a new setting `input_format_force_null_for_omitted_fields` that forces NULL values for omitted fields. [#60887](https://github.com/ClickHouse/ClickHouse/pull/60887) ([Constantine Peresypkin](https://github.com/pkit)). +* Previously, our S3 storage and the s3 table function didn't support selecting from archive files such as tarballs, zip, and 7z. Now they allow iterating over files inside archives in S3. [#62259](https://github.com/ClickHouse/ClickHouse/pull/62259) ([Daniil Ivanik](https://github.com/divanik)). +* Support for the conditional function `clamp`. [#62377](https://github.com/ClickHouse/ClickHouse/pull/62377) ([skyoct](https://github.com/skyoct)). +* Add `NPy` output format. [#62430](https://github.com/ClickHouse/ClickHouse/pull/62430) ([豪肥肥](https://github.com/HowePa)). +* Added the `Raw` format as a synonym for `TSVRaw`. [#63394](https://github.com/ClickHouse/ClickHouse/pull/63394) ([Unalian](https://github.com/Unalian)). +* Added a new SQL function `generateSnowflakeID` for generating Twitter-style Snowflake IDs. [#63577](https://github.com/ClickHouse/ClickHouse/pull/63577) ([Danila Puzov](https://github.com/kazalika)). +* Added a new SQL function `generateUUIDv7` to generate version 7 UUIDs, i.e. timestamp-based UUIDs with a random component. Also added a new function `UUIDToNum` to extract bytes from a UUID, and a new function `UUIDv7ToDateTime` to extract the timestamp component from a version 7 UUID. [#62852](https://github.com/ClickHouse/ClickHouse/pull/62852) ([Alexey Petrunyaka](https://github.com/pet74alex)). +* On Linux and macOS, if the program has stdout redirected to a file with a compression extension, the corresponding compression method is now used instead of none (making it behave similarly to `INTO OUTFILE`). [#63662](https://github.com/ClickHouse/ClickHouse/pull/63662) ([v01dXYZ](https://github.com/v01dXYZ)). +* Changed the warning on a high number of attached tables to differentiate between tables, views, and dictionaries. [#64180](https://github.com/ClickHouse/ClickHouse/pull/64180) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)). +* Provide support for the `azureBlobStorage` function in the ClickHouse server to use Azure workload identity to authenticate against Azure Blob Storage. If the `use_workload_identity` parameter is set in the config, [workload identity](https://github.com/Azure/azure-sdk-for-cpp/tree/main/sdk/identity/azure-identity#authenticate-azure-hosted-applications) is used for authentication. [#57881](https://github.com/ClickHouse/ClickHouse/pull/57881) ([Vinay Suryadevara](https://github.com/vinay92-ch)). +* Add TTL information to the `system.parts_columns` table. [#63200](https://github.com/ClickHouse/ClickHouse/pull/63200) ([litlig](https://github.com/litlig)). + +#### Experimental Features +* Implement the `Dynamic` data type, which allows storing values of any type inside it without knowing all of them in advance. The `Dynamic` type is available under the setting `allow_experimental_dynamic_type`. Reference: [#54864](https://github.com/ClickHouse/ClickHouse/issues/54864). [#63058](https://github.com/ClickHouse/ClickHouse/pull/63058) ([Kruglov Pavel](https://github.com/Avogar)). +* Allowed creating a `MaterializedMySQL` database without a connection to MySQL. 
[#63397](https://github.com/ClickHouse/ClickHouse/pull/63397) ([Kirill](https://github.com/kirillgarbar)). +* Automatically mark a replica of a Replicated database as lost and start recovery if some DDL task fails more than `max_retries_before_automatic_recovery` (100 by default) times in a row with the same error. Also, fixed a bug that could cause skipping DDL entries when an exception is thrown during an early stage of entry execution. [#63549](https://github.com/ClickHouse/ClickHouse/pull/63549) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Account failed files in `s3queue_tracked_file_ttl_sec` and `s3queue_tracked_files_limit` for `StorageS3Queue`. [#63638](https://github.com/ClickHouse/ClickHouse/pull/63638) ([Kseniia Sumarokova](https://github.com/kssenii)). + +#### Performance Improvement +* Added a native Parquet reader, which can decode Parquet binary data into ClickHouse columns directly. The feature can be activated by setting `input_format_parquet_use_native_reader` to true (a short sketch follows below). [#60361](https://github.com/ClickHouse/ClickHouse/pull/60361) ([ZhiHong Zhang](https://github.com/copperybean)). +* Less contention in the filesystem cache (part 4). Allow keeping the filesystem cache not filled to the limit by doing additional eviction in the background (controlled by `keep_free_space_size(elements)_ratio`). This releases pressure from space reservation for queries (in the `tryReserve` method). It is also done in a lock-free way as much as possible, i.e. it should not block normal cache usage. [#61250](https://github.com/ClickHouse/ClickHouse/pull/61250) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Skip merging of newly created projection blocks during `INSERT`s. [#59405](https://github.com/ClickHouse/ClickHouse/pull/59405) ([Nikita Taranov](https://github.com/nickitat)). +* Process `...UTF8` string functions in plain ASCII mode if the input strings are all ASCII. Inspired by https://github.com/apache/doris/pull/29799. Overall speedup of 1.07x-1.62x. Note that peak memory usage decreased in some cases. [#61632](https://github.com/ClickHouse/ClickHouse/pull/61632) ([李扬](https://github.com/taiyang-li)). +* Improved performance of selection (`{}`) globs in StorageS3. [#62120](https://github.com/ClickHouse/ClickHouse/pull/62120) ([Andrey Zvonov](https://github.com/zvonand)). +* `HostResolver` kept each IP address several times. If a remote host has several IPs and, for some reason (for example, firewall rules), access over some IPs is allowed while over others it is forbidden, only the first record of the forbidden IPs was marked as failed, so on each attempt these IPs had a chance to be chosen (and fail again). In addition, the DNS cache is dropped every 120 seconds, after which the failed IPs could be chosen again. This behavior is now fixed. [#62652](https://github.com/ClickHouse/ClickHouse/pull/62652) ([Anton Ivashkin](https://github.com/ianton-ru)). +* Function `splitByRegexp` is now faster when the regular expression argument is a single-character, trivial regular expression (in this case, it now falls back internally to `splitByChar`). [#62696](https://github.com/ClickHouse/ClickHouse/pull/62696) ([Robert Schulze](https://github.com/rschu1ze)). +* Aggregation with 8-bit and 16-bit keys became faster: added min/max in FixedHashTable to limit the array index and reduce the number of `isZero()` calls during iteration. [#62746](https://github.com/ClickHouse/ClickHouse/pull/62746) ([Jiebin Sun](https://github.com/jiebinn)).
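+
+A short sketch of enabling the native Parquet reader described above (the file name `data.parquet` is hypothetical):
+
+```sql
+SET input_format_parquet_use_native_reader = 1;
+
+-- `data.parquet` is a hypothetical file; the native reader decodes the
+-- Parquet binary data into ClickHouse columns directly:
+SELECT count() FROM file('data.parquet', 'Parquet');
+```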
+* Add a new configuration `prefer_merge_sort_block_bytes` to control the memory usage and speed up sorting by up to 2 times when merging, when there are many columns. [#62904](https://github.com/ClickHouse/ClickHouse/pull/62904) ([LiuNeng](https://github.com/liuneng1994)). +* `clickhouse-local` will start faster. In previous versions, it mistakenly did not delete temporary directories. Now it does. This closes [#62941](https://github.com/ClickHouse/ClickHouse/issues/62941). [#63074](https://github.com/ClickHouse/ClickHouse/pull/63074) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Micro-optimizations for the new analyzer. [#63429](https://github.com/ClickHouse/ClickHouse/pull/63429) ([Raúl Marín](https://github.com/Algunenano)). +* Index analysis will work if `DateTime` is compared to `DateTime64`. This closes [#63441](https://github.com/ClickHouse/ClickHouse/issues/63441). [#63443](https://github.com/ClickHouse/ClickHouse/pull/63443) [#63532](https://github.com/ClickHouse/ClickHouse/pull/63532) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Speed up indices of type `set` a little (around 1.5 times) by removing garbage. [#64098](https://github.com/ClickHouse/ClickHouse/pull/64098) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Optimized vertical merges in tables with sparse columns. [#64311](https://github.com/ClickHouse/ClickHouse/pull/64311) ([Anton Popov](https://github.com/CurtizJ)). +* Improve filtering of sparse columns: reduce redundant calls to `ColumnSparse::filter` to improve performance. [#64426](https://github.com/ClickHouse/ClickHouse/pull/64426) ([Jiebin Sun](https://github.com/jiebinn)). +* Remove copying of data when writing to the filesystem cache. [#63401](https://github.com/ClickHouse/ClickHouse/pull/63401) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Now backups with Azure Blob Storage will use multicopy. [#64116](https://github.com/ClickHouse/ClickHouse/pull/64116) ([alesapin](https://github.com/alesapin)). +* Allow using native copy for Azure even with different containers. [#64154](https://github.com/ClickHouse/ClickHouse/pull/64154) ([alesapin](https://github.com/alesapin)). +* Finally enable native copy for Azure. [#64182](https://github.com/ClickHouse/ClickHouse/pull/64182) ([alesapin](https://github.com/alesapin)). +* Improve the iteration over sparse columns to reduce calls of `size`. [#64497](https://github.com/ClickHouse/ClickHouse/pull/64497) ([Jiebin Sun](https://github.com/jiebinn)). + +#### Improvement +* Allow using `clickhouse-local` and its shortcuts `clickhouse` and `ch` with a query or queries file as a positional argument. Examples: `ch "SELECT 1"`, `ch --param_test Hello "SELECT {test:String}"`, `ch query.sql`. This closes [#62361](https://github.com/ClickHouse/ClickHouse/issues/62361). [#63081](https://github.com/ClickHouse/ClickHouse/pull/63081) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Enable plain_rewritable metadata for local and Azure (azure_blob_storage) object storages. [#63365](https://github.com/ClickHouse/ClickHouse/pull/63365) ([Julia Kartseva](https://github.com/jkartseva)). +* Support English-style Unicode quotes, e.g. “Hello”, ‘world’ (see the sketch below). This is questionable in general but helpful when you type your query in a word processor, such as Google Docs. This closes [#58634](https://github.com/ClickHouse/ClickHouse/issues/58634). [#63381](https://github.com/ClickHouse/ClickHouse/pull/63381) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
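+
+A minimal sketch of the Unicode-quotes support from the previous entry (assuming, per its example, that English-style single quotes behave like ordinary string-literal quotes):
+
+```sql
+-- Equivalent to: SELECT 'Hello, world' AS s;
+SELECT ‘Hello, world’ AS s;
+```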
+* Allow trailing commas in the columns list in the INSERT query. For example, `INSERT INTO test (a, b, c, ) VALUES ...`. [#63803](https://github.com/ClickHouse/ClickHouse/pull/63803) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Better exception messages for the `Regexp` format. [#63804](https://github.com/ClickHouse/ClickHouse/pull/63804) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Allow trailing commas in the `Values` format. For example, this query is allowed: `INSERT INTO test (a, b, c) VALUES (4, 5, 6,);`. [#63810](https://github.com/ClickHouse/ClickHouse/pull/63810) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Make RabbitMQ NACK broken messages. Closes [#45350](https://github.com/ClickHouse/ClickHouse/issues/45350). [#60312](https://github.com/ClickHouse/ClickHouse/pull/60312) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix a crash in asynchronous stack unwinding (such as when using the sampling query profiler) while interpreting debug info. This closes [#60460](https://github.com/ClickHouse/ClickHouse/issues/60460). [#60468](https://github.com/ClickHouse/ClickHouse/pull/60468) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Distinct messages for the S3 'no key' error for the disk and storage cases. [#61108](https://github.com/ClickHouse/ClickHouse/pull/61108) ([Sema Checherinda](https://github.com/CheSema)). +* The progress bar will work for trivial queries with LIMIT from `system.zeros`, `system.zeros_mt` (it already works for `system.numbers` and `system.numbers_mt`), and the `generateRandom` table function. As a bonus, if the total number of records is greater than the `max_rows_to_read` limit, it will throw an exception earlier. This closes [#58183](https://github.com/ClickHouse/ClickHouse/issues/58183). [#61823](https://github.com/ClickHouse/ClickHouse/pull/61823) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Support for "Merge Key" in YAML configurations (this is a weird feature of YAML, please never mind). [#62685](https://github.com/ClickHouse/ClickHouse/pull/62685) ([Azat Khuzhin](https://github.com/azat)). +* Enhance the error message when a non-deterministic function is used with a Replicated source. [#62896](https://github.com/ClickHouse/ClickHouse/pull/62896) ([Grégoire Pineau](https://github.com/lyrixx)). +* Fix interserver secret for Distributed over Distributed from `remote`. [#63013](https://github.com/ClickHouse/ClickHouse/pull/63013) ([Azat Khuzhin](https://github.com/azat)). +* Support `include_from` for YAML files. However, you'd better use `config.d`. [#63106](https://github.com/ClickHouse/ClickHouse/pull/63106) ([Eduard Karacharov](https://github.com/korowa)). +* Keep the previous data in the terminal after picking from skim suggestions. [#63261](https://github.com/ClickHouse/ClickHouse/pull/63261) ([FlameFactory](https://github.com/FlameFactory)). +* Width of fields (in Pretty formats or the `visibleWidth` function) now correctly ignores ANSI escape sequences. [#63270](https://github.com/ClickHouse/ClickHouse/pull/63270) ([Shaun Struwig](https://github.com/Blargian)). +* Replace the error code `NUMBER_OF_ARGUMENTS_DOESNT_MATCH` with more accurate error codes where appropriate. [#63406](https://github.com/ClickHouse/ClickHouse/pull/63406) ([Yohann Jardin](https://github.com/yohannj)). +* `os_user` and `client_hostname` are now correctly set up for queries for command line suggestions in clickhouse-client. This closes [#63430](https://github.com/ClickHouse/ClickHouse/issues/63430). 
[#63433](https://github.com/ClickHouse/ClickHouse/pull/63433) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Automatically correct `max_block_size` to the default value if it is zero. [#63587](https://github.com/ClickHouse/ClickHouse/pull/63587) ([Antonio Andelic](https://github.com/antonio2368)). +* Add a `build_id` ALIAS column to `trace_log` to facilitate auto-renaming upon detecting binary changes. This is to address [#52086](https://github.com/ClickHouse/ClickHouse/issues/52086). [#63656](https://github.com/ClickHouse/ClickHouse/pull/63656) ([Zimu Li](https://github.com/woodlzm)). +* Enable the truncate operation for object storage disks. [#63693](https://github.com/ClickHouse/ClickHouse/pull/63693) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* The loading of the keywords list is now dependent on the server revision and will be disabled for old versions of the ClickHouse server. [#63786](https://github.com/ClickHouse/ClickHouse/pull/63786) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* ClickHouse disks now read a server setting to obtain the actual metadata format version. [#63831](https://github.com/ClickHouse/ClickHouse/pull/63831) ([Sema Checherinda](https://github.com/CheSema)). +* Disable pretty format restrictions (`output_format_pretty_max_rows`/`output_format_pretty_max_value_width`) when stdout is not a TTY. [#63942](https://github.com/ClickHouse/ClickHouse/pull/63942) ([Azat Khuzhin](https://github.com/azat)). +* Exception handling now works when ClickHouse is used inside AWS Lambda. Author: [Alexey Coolnev](https://github.com/acoolnev). [#64014](https://github.com/ClickHouse/ClickHouse/pull/64014) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Throw `CANNOT_DECOMPRESS` instead of `CORRUPTED_DATA` on invalid compressed data passed via HTTP. [#64036](https://github.com/ClickHouse/ClickHouse/pull/64036) ([vdimir](https://github.com/vdimir)). +* A tip for a single large number in Pretty formats now works for Nullable and LowCardinality. This closes [#61993](https://github.com/ClickHouse/ClickHouse/issues/61993). [#64084](https://github.com/ClickHouse/ClickHouse/pull/64084) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Added a knob `metadata_storage_type` to keep free space on the metadata storage disk. [#64128](https://github.com/ClickHouse/ClickHouse/pull/64128) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Add metrics, logs, and thread names around parts filtering with indices. [#64130](https://github.com/ClickHouse/ClickHouse/pull/64130) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Metrics to track the number of directories created and removed by the `plain_rewritable` metadata storage, and the number of entries in the local-to-remote in-memory map. [#64175](https://github.com/ClickHouse/ClickHouse/pull/64175) ([Julia Kartseva](https://github.com/jkartseva)). +* Ignore `allow_suspicious_primary_key` on `ATTACH` and verify on `ALTER`. [#64202](https://github.com/ClickHouse/ClickHouse/pull/64202) ([Azat Khuzhin](https://github.com/azat)). +* The query cache now considers identical queries with different settings as different. This increases robustness in cases where different settings (e.g. `limit` or `additional_table_filters`) would affect the query result. [#64205](https://github.com/ClickHouse/ClickHouse/pull/64205) ([Robert Schulze](https://github.com/rschu1ze)). +* Test that the non-standard error code `QPSLimitExceeded` is supported and is a retryable error. 
[#64225](https://github.com/ClickHouse/ClickHouse/pull/64225) ([Sema Checherinda](https://github.com/CheSema)). +* Settings from the user config don't affect merges and mutations for MergeTree on top of object storage. [#64456](https://github.com/ClickHouse/ClickHouse/pull/64456) ([alesapin](https://github.com/alesapin)). +* Test that `totalqpslimitexceeded` is a retryable S3 error. [#64520](https://github.com/ClickHouse/ClickHouse/pull/64520) ([Sema Checherinda](https://github.com/CheSema)). + +#### Build/Testing/Packaging Improvement +* ClickHouse is built with clang-18. A lot of new checks from clang-tidy-18 have been enabled. [#60469](https://github.com/ClickHouse/ClickHouse/pull/60469) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Experimentally support loongarch64 as a new platform for ClickHouse. [#63733](https://github.com/ClickHouse/ClickHouse/pull/63733) ([qiangxuhui](https://github.com/qiangxuhui)). +* The Dockerfile was reviewed by the Docker official library in https://github.com/docker-library/official-images/pull/15846. [#63400](https://github.com/ClickHouse/ClickHouse/pull/63400) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Information about every symbol in every translation unit will be collected in the CI database for every build in the CI. This closes [#63494](https://github.com/ClickHouse/ClickHouse/issues/63494). [#63495](https://github.com/ClickHouse/ClickHouse/pull/63495) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Update the Apache DataSketches library. It resolves [#63858](https://github.com/ClickHouse/ClickHouse/issues/63858). [#63923](https://github.com/ClickHouse/ClickHouse/pull/63923) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Enable gRPC support for aarch64 Linux when cross-compiling the binary. [#64072](https://github.com/ClickHouse/ClickHouse/pull/64072) ([alesapin](https://github.com/alesapin)). +* Fix unwinding on SIGSEGV on aarch64 (due to the small signal stack). [#64058](https://github.com/ClickHouse/ClickHouse/pull/64058) ([Azat Khuzhin](https://github.com/azat)). + +#### Bug Fix (user-visible misbehavior in an official stable release) +* Disabled the `enable_vertical_final` setting by default. This feature should not be used because it has a bug: [#64543](https://github.com/ClickHouse/ClickHouse/issues/64543). [#64544](https://github.com/ClickHouse/ClickHouse/pull/64544) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Fix making backups when multiple shards are used [#57684](https://github.com/ClickHouse/ClickHouse/pull/57684) ([Vitaly Baranov](https://github.com/vitlibar)). +* Fix passing projections/indexes/primary key from the columns list of a CREATE query into the inner table of a MV [#59183](https://github.com/ClickHouse/ClickHouse/pull/59183) ([Azat Khuzhin](https://github.com/azat)). +* Fix incorrect merge in `boundRatio` [#60532](https://github.com/ClickHouse/ClickHouse/pull/60532) ([Tao Wang](https://github.com/wangtZJU)). +* Fix crash when calling some functions on const low-cardinality columns [#61966](https://github.com/ClickHouse/ClickHouse/pull/61966) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix queries with FINAL giving a wrong result when the table does not use adaptive granularity [#62432](https://github.com/ClickHouse/ClickHouse/pull/62432) ([Duc Canh Le](https://github.com/canhld94)). +* Improve detection of cgroups v2 support for memory controllers [#62903](https://github.com/ClickHouse/ClickHouse/pull/62903) ([Robert Schulze](https://github.com/rschu1ze)).
+* Fix subsequent use of external tables in client [#62964](https://github.com/ClickHouse/ClickHouse/pull/62964) ([Azat Khuzhin](https://github.com/azat)). +* Fix crash with untuple and unresolved lambda [#63131](https://github.com/ClickHouse/ClickHouse/pull/63131) ([Raúl Marín](https://github.com/Algunenano)). +* Fix the server prematurely listening for connections [#63181](https://github.com/ClickHouse/ClickHouse/pull/63181) ([alesapin](https://github.com/alesapin)). +* Fix intersecting parts when restarting after a DROP PART command [#63202](https://github.com/ClickHouse/ClickHouse/pull/63202) ([Han Fei](https://github.com/hanfei1991)). +* Correctly load SQL security defaults during startup [#63209](https://github.com/ClickHouse/ClickHouse/pull/63209) ([pufit](https://github.com/pufit)). +* Fix JOIN filter push-down (filter join fix) [#63234](https://github.com/ClickHouse/ClickHouse/pull/63234) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix infinite loop in AzureObjectStorage::listObjects [#63257](https://github.com/ClickHouse/ClickHouse/pull/63257) ([Julia Kartseva](https://github.com/jkartseva)). +* Make CROSS JOIN ignore the `join_algorithm` setting [#63273](https://github.com/ClickHouse/ClickHouse/pull/63273) ([vdimir](https://github.com/vdimir)). +* Fix finalize of WriteBufferToFileSegment and StatusFile [#63346](https://github.com/ClickHouse/ClickHouse/pull/63346) ([vdimir](https://github.com/vdimir)). +* Fix a logical error during SELECT query after ALTER in a rare case [#63353](https://github.com/ClickHouse/ClickHouse/pull/63353) ([alesapin](https://github.com/alesapin)). +* Fix `X-ClickHouse-Timezone` header with `session_timezone` [#63377](https://github.com/ClickHouse/ClickHouse/pull/63377) ([Andrey Zvonov](https://github.com/zvonand)). +* Fix debug assert when using grouping WITH ROLLUP and LowCardinality types [#63398](https://github.com/ClickHouse/ClickHouse/pull/63398) ([Raúl Marín](https://github.com/Algunenano)). +* Small fixes for `group_by_use_nulls` [#63405](https://github.com/ClickHouse/ClickHouse/pull/63405) ([vdimir](https://github.com/vdimir)). +* Fix backup/restore of a projection part in the case when the projection was removed from the table metadata but the part still has it [#63426](https://github.com/ClickHouse/ClickHouse/pull/63426) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix MySQL dictionary source [#63481](https://github.com/ClickHouse/ClickHouse/pull/63481) ([vdimir](https://github.com/vdimir)). +* Insert QueryFinish on AsyncInsertFlush with no data [#63483](https://github.com/ClickHouse/ClickHouse/pull/63483) ([Raúl Marín](https://github.com/Algunenano)). +* Fix empty `used_dictionaries` in `system.query_log` [#63487](https://github.com/ClickHouse/ClickHouse/pull/63487) ([Eduard Karacharov](https://github.com/korowa)). +* Make `MergeTreePrefetchedReadPool` safer [#63513](https://github.com/ClickHouse/ClickHouse/pull/63513) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix crash on exit with Sentry enabled (due to OpenSSL being destroyed before Sentry) [#63548](https://github.com/ClickHouse/ClickHouse/pull/63548) ([Azat Khuzhin](https://github.com/azat)). +* Fix Array and Map support with keyed hashing [#63628](https://github.com/ClickHouse/ClickHouse/pull/63628) ([Salvatore Mesoraca](https://github.com/aiven-sal)). +* Fix filter pushdown for Parquet and maybe StorageMerge [#63642](https://github.com/ClickHouse/ClickHouse/pull/63642) ([Michael Kolupaev](https://github.com/al13n321)).
+* Prevent conversion to Replicated if the ZooKeeper path already exists [#63670](https://github.com/ClickHouse/ClickHouse/pull/63670) ([Kirill](https://github.com/kirillgarbar)). +* Analyzer: views read only necessary columns [#63688](https://github.com/ClickHouse/ClickHouse/pull/63688) ([Maksim Kita](https://github.com/kitaisreal)). +* Analyzer: Forbid WINDOW redefinition [#63694](https://github.com/ClickHouse/ClickHouse/pull/63694) ([Dmitry Novik](https://github.com/novikd)). +* flatten_nested was broken with the experimental Replicated database. [#63695](https://github.com/ClickHouse/ClickHouse/pull/63695) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix [#63653](https://github.com/ClickHouse/ClickHouse/issues/63653) [#63722](https://github.com/ClickHouse/ClickHouse/pull/63722) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Allow cast from Array(Nothing) to Map(Nothing, Nothing) [#63753](https://github.com/ClickHouse/ClickHouse/pull/63753) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix ILLEGAL_COLUMN in partial_merge join [#63755](https://github.com/ClickHouse/ClickHouse/pull/63755) ([vdimir](https://github.com/vdimir)). +* Fix: remove redundant DISTINCT with window functions [#63776](https://github.com/ClickHouse/ClickHouse/pull/63776) ([Igor Nikonov](https://github.com/devcrafter)). +* Fix possible crash with SYSTEM UNLOAD PRIMARY KEY [#63778](https://github.com/ClickHouse/ClickHouse/pull/63778) ([Raúl Marín](https://github.com/Algunenano)). +* Fix a query with a duplicating cyclic alias. [#63791](https://github.com/ClickHouse/ClickHouse/pull/63791) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Make `TokenIterator` lazy as it should be [#63801](https://github.com/ClickHouse/ClickHouse/pull/63801) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Add `endpoint_subpath` S3 URI setting [#63806](https://github.com/ClickHouse/ClickHouse/pull/63806) ([Julia Kartseva](https://github.com/jkartseva)). +* Fix deadlock in `ParallelReadBuffer` [#63814](https://github.com/ClickHouse/ClickHouse/pull/63814) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix JOIN filter push-down for equivalent columns [#63819](https://github.com/ClickHouse/ClickHouse/pull/63819) ([Maksim Kita](https://github.com/kitaisreal)). +* Remove data from all disks after DROP with the Lazy database. [#63848](https://github.com/ClickHouse/ClickHouse/pull/63848) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Fix incorrect result when reading from an MV with parallel replicas and the new analyzer [#63861](https://github.com/ClickHouse/ClickHouse/pull/63861) ([Nikita Taranov](https://github.com/nickitat)). +* Fixes in the `find_super_nodes` and `find_big_family` commands of keeper-client [#63862](https://github.com/ClickHouse/ClickHouse/pull/63862) ([Alexander Gololobov](https://github.com/davenger)). +* Update lambda execution name [#63864](https://github.com/ClickHouse/ClickHouse/pull/63864) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix SIGSEGV due to CPU/Real profiler [#63865](https://github.com/ClickHouse/ClickHouse/pull/63865) ([Azat Khuzhin](https://github.com/azat)). +* Fix `EXPLAIN CURRENT TRANSACTION` query [#63926](https://github.com/ClickHouse/ClickHouse/pull/63926) ([Anton Popov](https://github.com/CurtizJ)). +* Fix analyzer: there's turtles all the way down... [#63930](https://github.com/ClickHouse/ClickHouse/pull/63930) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
+* Allow certain ALTER TABLE commands for `plain_rewritable` disk [#63933](https://github.com/ClickHouse/ClickHouse/pull/63933) ([Julia Kartseva](https://github.com/jkartseva)). +* Fix recursive CTEs in distributed queries [#63939](https://github.com/ClickHouse/ClickHouse/pull/63939) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix reading of columns of type `Tuple(Map(LowCardinality(...)))` [#63956](https://github.com/ClickHouse/ClickHouse/pull/63956) ([Anton Popov](https://github.com/CurtizJ)). +* Analyzer: Fix COLUMNS resolve [#63962](https://github.com/ClickHouse/ClickHouse/pull/63962) ([Dmitry Novik](https://github.com/novikd)). +* Fix LIMIT BY and `skip_unused_shards` with the analyzer [#63983](https://github.com/ClickHouse/ClickHouse/pull/63983) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* A fix for some trash (experimental Kusto) [#63992](https://github.com/ClickHouse/ClickHouse/pull/63992) ([Yong Wang](https://github.com/kashwy)). +* Deserialize untrusted binary inputs in a safer way [#64024](https://github.com/ClickHouse/ClickHouse/pull/64024) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix query analysis for queries with the setting `final` = 1 for Distributed tables over tables from families other than MergeTree (a minimal sketch of the `final` setting follows after this list). [#64037](https://github.com/ClickHouse/ClickHouse/pull/64037) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Add missing settings to recoverLostReplica [#64040](https://github.com/ClickHouse/ClickHouse/pull/64040) ([Raúl Marín](https://github.com/Algunenano)). +* Fix SQL security access checks with the analyzer [#64079](https://github.com/ClickHouse/ClickHouse/pull/64079) ([pufit](https://github.com/pufit)). +* Fix analyzer: only the interpolate expression should be used for the DAG [#64096](https://github.com/ClickHouse/ClickHouse/pull/64096) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fix Azure backup writing multipart blocks as 1 MiB (the read buffer size) instead of `max_upload_part_size` (in the non-native copy case) [#64117](https://github.com/ClickHouse/ClickHouse/pull/64117) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Correctly fall back during backup copy [#64153](https://github.com/ClickHouse/ClickHouse/pull/64153) ([Antonio Andelic](https://github.com/antonio2368)). +* Prevent LOGICAL_ERROR on CREATE TABLE as a Materialized View [#64174](https://github.com/ClickHouse/ClickHouse/pull/64174) ([Raúl Marín](https://github.com/Algunenano)). +* Query Cache: Consider identical queries against different databases as different [#64199](https://github.com/ClickHouse/ClickHouse/pull/64199) ([Robert Schulze](https://github.com/rschu1ze)). +* Ignore `text_log` for Keeper [#64218](https://github.com/ClickHouse/ClickHouse/pull/64218) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix ARRAY JOIN with Distributed. [#64226](https://github.com/ClickHouse/ClickHouse/pull/64226) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix CNF reduction with mutually exclusive atoms [#64256](https://github.com/ClickHouse/ClickHouse/pull/64256) ([Eduard Karacharov](https://github.com/korowa)). +* Fix logical error `Bad cast` for Buffer table with PREWHERE. [#64388](https://github.com/ClickHouse/ClickHouse/pull/64388) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
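+
+A minimal sketch of the query-level `final` setting touched by the fix above (the table name `t` is hypothetical; `final = 1` makes the query behave as if `FINAL` were appended to every applicable table):
+
+```sql
+-- `t` is a hypothetical ReplacingMergeTree table with duplicate keys:
+CREATE TABLE t (key UInt64, value String) ENGINE = ReplacingMergeTree ORDER BY key;
+INSERT INTO t VALUES (1, 'old'), (1, 'new');
+
+-- Returns the deduplicated row, like SELECT * FROM t FINAL:
+SELECT * FROM t SETTINGS final = 1;
+```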
+ + ### ClickHouse release 24.4, 2024-04-30 #### Upgrade Notes diff --git a/base/base/BorrowedObjectPool.h b/base/base/BorrowedObjectPool.h index 05a23d5835e..f5ef28582b2 100644 --- a/base/base/BorrowedObjectPool.h +++ b/base/base/BorrowedObjectPool.h @@ -86,7 +86,7 @@ public: } /// Return object into pool. Client must return same object that was borrowed. - inline void returnObject(T && object_to_return) + void returnObject(T && object_to_return) { { std::lock_guard lock(objects_mutex); @@ -99,20 +99,20 @@ public: } /// Max pool size - inline size_t maxSize() const + size_t maxSize() const { return max_size; } /// Allocated objects size by the pool. If allocatedObjectsSize == maxSize then pool is full. - inline size_t allocatedObjectsSize() const + size_t allocatedObjectsSize() const { std::lock_guard lock(objects_mutex); return allocated_objects_size; } /// Returns allocatedObjectsSize == maxSize - inline bool isFull() const + bool isFull() const { std::lock_guard lock(objects_mutex); return allocated_objects_size == max_size; @@ -120,7 +120,7 @@ public: /// Borrowed objects size. If borrowedObjectsSize == allocatedObjectsSize and pool is full. /// Then client will wait during borrowObject function call. - inline size_t borrowedObjectsSize() const + size_t borrowedObjectsSize() const { std::lock_guard lock(objects_mutex); return borrowed_objects_size; @@ -129,7 +129,7 @@ public: private: template - inline T allocateObjectForBorrowing(const std::unique_lock &, FactoryFunc && func) + T allocateObjectForBorrowing(const std::unique_lock &, FactoryFunc && func) { ++allocated_objects_size; ++borrowed_objects_size; @@ -137,7 +137,7 @@ private: return std::forward(func)(); } - inline T borrowFromObjects(const std::unique_lock &) + T borrowFromObjects(const std::unique_lock &) { T dst; detail::moveOrCopyIfThrow(std::move(objects.back()), dst); diff --git a/contrib/aws b/contrib/aws index eb96e740453..deeaa9e7c5f 160000 --- a/contrib/aws +++ b/contrib/aws @@ -1 +1 @@ -Subproject commit eb96e740453ae27afa1f367ba19f99bdcb38484d +Subproject commit deeaa9e7c5fe690e3dacc4005d7ecfa7a66a32bb diff --git a/docs/_includes/install/deb_repo.sh b/docs/_includes/install/deb_repo.sh deleted file mode 100644 index 21106e9fc47..00000000000 --- a/docs/_includes/install/deb_repo.sh +++ /dev/null @@ -1,11 +0,0 @@ -sudo apt-get install apt-transport-https ca-certificates dirmngr -sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 - -echo "deb https://repo.clickhouse.com/deb/stable/ main/" | sudo tee \ - /etc/apt/sources.list.d/clickhouse.list -sudo apt-get update - -sudo apt-get install -y clickhouse-server clickhouse-client - -sudo service clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. diff --git a/docs/_includes/install/rpm_repo.sh b/docs/_includes/install/rpm_repo.sh deleted file mode 100644 index e3fd1232047..00000000000 --- a/docs/_includes/install/rpm_repo.sh +++ /dev/null @@ -1,7 +0,0 @@ -sudo yum install yum-utils -sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG -sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/clickhouse.repo -sudo yum install clickhouse-server clickhouse-client - -sudo /etc/init.d/clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. 
diff --git a/docs/_includes/install/tgz_repo.sh b/docs/_includes/install/tgz_repo.sh deleted file mode 100644 index 0994510755b..00000000000 --- a/docs/_includes/install/tgz_repo.sh +++ /dev/null @@ -1,19 +0,0 @@ -export LATEST_VERSION=$(curl -s https://repo.clickhouse.com/tgz/stable/ | \ - grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1) -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz - -tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz -sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz -sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-server-$LATEST_VERSION.tgz -sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh -sudo /etc/init.d/clickhouse-server start - -tar -xzvf clickhouse-client-$LATEST_VERSION.tgz -sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh diff --git a/docs/changelogs/v23.8.1.2992-lts.md b/docs/changelogs/v23.8.1.2992-lts.md index 05385d9c52b..62326533a79 100644 --- a/docs/changelogs/v23.8.1.2992-lts.md +++ b/docs/changelogs/v23.8.1.2992-lts.md @@ -33,7 +33,7 @@ sidebar_label: 2023 * Add input format One that doesn't read any data and always returns single row with column `dummy` with type `UInt8` and value `0` like `system.one`. It can be used together with `_file/_path` virtual columns to list files in file/s3/url/hdfs/etc table functions without reading any data. [#53209](https://github.com/ClickHouse/ClickHouse/pull/53209) ([Kruglov Pavel](https://github.com/Avogar)). * Add tupleConcat function. Closes [#52759](https://github.com/ClickHouse/ClickHouse/issues/52759). [#53239](https://github.com/ClickHouse/ClickHouse/pull/53239) ([Nikolay Degterinsky](https://github.com/evillique)). * Support `TRUNCATE DATABASE` operation. [#53261](https://github.com/ClickHouse/ClickHouse/pull/53261) ([Bharat Nallan](https://github.com/bharatnc)). -* Add max_threads_for_indexes setting to limit number of threads used for primary key processing. [#53313](https://github.com/ClickHouse/ClickHouse/pull/53313) ([jorisgio](https://github.com/jorisgio)). +* Add max_threads_for_indexes setting to limit number of threads used for primary key processing. [#53313](https://github.com/ClickHouse/ClickHouse/pull/53313) ([Joris Giovannangeli](https://github.com/jorisgio)). * Add experimental support for HNSW as approximate neighbor search method. [#53447](https://github.com/ClickHouse/ClickHouse/pull/53447) ([Davit Vardanyan](https://github.com/davvard)). * Re-add SipHash keyed functions. [#53525](https://github.com/ClickHouse/ClickHouse/pull/53525) ([Salvatore Mesoraca](https://github.com/aiven-sal)). * ([#52755](https://github.com/ClickHouse/ClickHouse/issues/52755) , [#52895](https://github.com/ClickHouse/ClickHouse/issues/52895)) Added functions `arrayRotateLeft`, `arrayRotateRight`, `arrayShiftLeft`, `arrayShiftRight`. [#53557](https://github.com/ClickHouse/ClickHouse/pull/53557) ([Mikhail Koviazin](https://github.com/mkmkme)). @@ -72,7 +72,7 @@ sidebar_label: 2023 * Add ability to log when max_partitions_per_insert_block is reached ... 
[#50948](https://github.com/ClickHouse/ClickHouse/pull/50948) ([Sean Haynes](https://github.com/seandhaynes)). * Added a bunch of custom commands (mostly to make ClickHouse debugging easier). [#51117](https://github.com/ClickHouse/ClickHouse/pull/51117) ([pufit](https://github.com/pufit)). * Updated check for connection_string as connection string with sas does not always begin with DefaultEndPoint and updated connection url to include sas token after adding container to url. [#51141](https://github.com/ClickHouse/ClickHouse/pull/51141) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). -* Fix description for filtering sets in full_sorting_merge join. [#51329](https://github.com/ClickHouse/ClickHouse/pull/51329) ([Tanay Tummalapalli](https://github.com/ttanay)). +* Fix description for filtering sets in full_sorting_merge join. [#51329](https://github.com/ClickHouse/ClickHouse/pull/51329) ([ttanay](https://github.com/ttanay)). * The sizes of the (index) uncompressed/mark, mmap and query caches can now be configured dynamically at runtime. [#51446](https://github.com/ClickHouse/ClickHouse/pull/51446) ([Robert Schulze](https://github.com/rschu1ze)). * Fixed memory consumption in `Aggregator` when `max_block_size` is huge. [#51566](https://github.com/ClickHouse/ClickHouse/pull/51566) ([Nikita Taranov](https://github.com/nickitat)). * Add `SYSTEM SYNC FILESYSTEM CACHE` command. It will compare in-memory state of filesystem cache with what it has on disk and fix in-memory state if needed. [#51622](https://github.com/ClickHouse/ClickHouse/pull/51622) ([Kseniia Sumarokova](https://github.com/kssenii)). @@ -80,10 +80,10 @@ sidebar_label: 2023 * Support reading tuple subcolumns from file/s3/hdfs/url/azureBlobStorage table functions. [#51806](https://github.com/ClickHouse/ClickHouse/pull/51806) ([Kruglov Pavel](https://github.com/Avogar)). * Function `arrayIntersect` now returns the values sorted like the first argument. Closes [#27622](https://github.com/ClickHouse/ClickHouse/issues/27622). [#51850](https://github.com/ClickHouse/ClickHouse/pull/51850) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). * Add new queries, which allow to create/drop of access entities in specified access storage or move access entities from one access storage to another. [#51912](https://github.com/ClickHouse/ClickHouse/pull/51912) ([pufit](https://github.com/pufit)). -* ALTER TABLE FREEZE are not replicated in Replicated engine. [#52064](https://github.com/ClickHouse/ClickHouse/pull/52064) ([Mike Kot](https://github.com/myrrc)). +* ALTER TABLE FREEZE is not replicated in the Replicated engine. [#52064](https://github.com/ClickHouse/ClickHouse/pull/52064) ([Mikhail Kot](https://github.com/myrrc)). * Added possibility to flush logs to the disk on crash - Added logs buffer configuration. [#52174](https://github.com/ClickHouse/ClickHouse/pull/52174) ([Alexey Gerasimchuck](https://github.com/Demilivor)). -* Fix S3 table function does not work for pre-signed URL. close [#50846](https://github.com/ClickHouse/ClickHouse/issues/50846). [#52310](https://github.com/ClickHouse/ClickHouse/pull/52310) ([chen](https://github.com/xiedeyantu)). -* System.events and system.metrics tables add column name as an alias to event and metric. close [#51257](https://github.com/ClickHouse/ClickHouse/issues/51257). [#52315](https://github.com/ClickHouse/ClickHouse/pull/52315) ([chen](https://github.com/xiedeyantu)). +* Fix the S3 table function not working for pre-signed URLs. 
Closes [#50846](https://github.com/ClickHouse/ClickHouse/issues/50846). [#52310](https://github.com/ClickHouse/ClickHouse/pull/52310) ([Jensen](https://github.com/xiedeyantu)). +* The `system.events` and `system.metrics` tables add the column `name` as an alias to `event` and `metric`. Closes [#51257](https://github.com/ClickHouse/ClickHouse/issues/51257). [#52315](https://github.com/ClickHouse/ClickHouse/pull/52315) ([Jensen](https://github.com/xiedeyantu)). * Added support of syntax `CREATE UNIQUE INDEX` in parser for better SQL compatibility. `UNIQUE` index is not supported. Set `create_index_ignore_unique=1` to ignore UNIQUE keyword in queries. [#52320](https://github.com/ClickHouse/ClickHouse/pull/52320) ([Ilya Yatsishin](https://github.com/qoega)). * Add support of predefined macro (`{database}` and `{table}`) in some kafka engine settings: topic, consumer, client_id, etc. [#52386](https://github.com/ClickHouse/ClickHouse/pull/52386) ([Yury Bogomolov](https://github.com/ybogo)). * Disable updating fs cache during backup/restore. Filesystem cache must not be updated during backup/restore, it seems it just slows down the process without any profit (because the BACKUP command can read a lot of data and it's no use to put all the data to the filesystem cache and immediately evict it). [#52402](https://github.com/ClickHouse/ClickHouse/pull/52402) ([Vitaly Baranov](https://github.com/vitlibar)). @@ -107,7 +107,7 @@ sidebar_label: 2023 * Use the same default paths for `clickhouse_keeper` (symlink) as for `clickhouse_keeper` (executable). [#52861](https://github.com/ClickHouse/ClickHouse/pull/52861) ([Vitaly Baranov](https://github.com/vitlibar)). * CVE-2016-2183: disable 3DES. [#52893](https://github.com/ClickHouse/ClickHouse/pull/52893) ([Kenji Noguchi](https://github.com/knoguchi)). * Load filesystem cache metadata on startup in parallel. Configured by `load_metadata_threads` (default: 1) cache config setting. Related to [#52037](https://github.com/ClickHouse/ClickHouse/issues/52037). [#52943](https://github.com/ClickHouse/ClickHouse/pull/52943) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Improve error message for table function remote. Closes [#40220](https://github.com/ClickHouse/ClickHouse/issues/40220). [#52959](https://github.com/ClickHouse/ClickHouse/pull/52959) ([jiyoungyoooo](https://github.com/jiyoungyoooo)). +* Improve the error message for the `remote` table function. Closes [#40220](https://github.com/ClickHouse/ClickHouse/issues/40220). [#52959](https://github.com/ClickHouse/ClickHouse/pull/52959) ([Jiyoung Yoo](https://github.com/jiyoungyoooo)). * Added the possibility to specify custom storage policy in the `SETTINGS` clause of `RESTORE` queries. [#52970](https://github.com/ClickHouse/ClickHouse/pull/52970) ([Victor Krasnov](https://github.com/sirvickr)). * Add the ability to throttle the S3 requests on backup operations (`BACKUP` and `RESTORE` commands now honor `s3_max_[get/put]_[rps/burst]`). [#52974](https://github.com/ClickHouse/ClickHouse/pull/52974) ([Daniel Pozo Escalona](https://github.com/danipozo)). * Add settings to ignore ON CLUSTER clause in queries for management of replicated user-defined functions or access control entities with replicated storage. [#52975](https://github.com/ClickHouse/ClickHouse/pull/52975) ([Aleksei Filatov](https://github.com/aalexfvk)). @@ -127,7 +127,7 @@ sidebar_label: 2023 * Server settings asynchronous_metrics_update_period_s and asynchronous_heavy_metrics_update_period_s configured to 0 now fail gracefully instead of crash the server. 
[#53428](https://github.com/ClickHouse/ClickHouse/pull/53428) ([Robert Schulze](https://github.com/rschu1ze)). * Previously the caller could register the same watch callback multiple times. In that case each entry was consuming memory and the same callback was called multiple times which didn't make much sense. In order to avoid this the caller could have some logic to not add the same watch multiple times. With this change this deduplication is done internally if the watch callback is passed via shared_ptr. [#53452](https://github.com/ClickHouse/ClickHouse/pull/53452) ([Alexander Gololobov](https://github.com/davenger)). * The ClickHouse server now respects memory limits changed via cgroups when reloading its configuration. [#53455](https://github.com/ClickHouse/ClickHouse/pull/53455) ([Robert Schulze](https://github.com/rschu1ze)). -* Add ability to turn off flush of Distributed tables on `DETACH`/`DROP`/server shutdown. [#53501](https://github.com/ClickHouse/ClickHouse/pull/53501) ([Azat Khuzhin](https://github.com/azat)). +* Add ability to turn off flush of Distributed tables on `DETACH`/`DROP`/server shutdown (`flush_on_detach` setting for `Distributed`). [#53501](https://github.com/ClickHouse/ClickHouse/pull/53501) ([Azat Khuzhin](https://github.com/azat)). * Domainrfc support ipv6(ip literal within square brackets). [#53506](https://github.com/ClickHouse/ClickHouse/pull/53506) ([Chen768959](https://github.com/Chen768959)). * Use filter by file/path before reading in url/file/hdfs table functins. [#53529](https://github.com/ClickHouse/ClickHouse/pull/53529) ([Kruglov Pavel](https://github.com/Avogar)). * Use longer timeout for S3 CopyObject requests. [#53533](https://github.com/ClickHouse/ClickHouse/pull/53533) ([Michael Kolupaev](https://github.com/al13n321)). @@ -186,71 +186,71 @@ sidebar_label: 2023 #### Bug Fix (user-visible misbehavior in an official stable release) -* Do not reset Annoy index during build-up with > 1 mark [#51325](https://github.com/ClickHouse/ClickHouse/pull/51325) ([Tian Xinhui](https://github.com/xinhuitian)). -* Fix usage of temporary directories during RESTORE [#51493](https://github.com/ClickHouse/ClickHouse/pull/51493) ([Azat Khuzhin](https://github.com/azat)). -* Fix binary arithmetic for Nullable(IPv4) [#51642](https://github.com/ClickHouse/ClickHouse/pull/51642) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Support IPv4 and IPv6 as dictionary attributes [#51756](https://github.com/ClickHouse/ClickHouse/pull/51756) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Bug fix for checksum of compress marks [#51777](https://github.com/ClickHouse/ClickHouse/pull/51777) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). -* Fix mistakenly comma parsing as part of datetime in CSV best effort parsing [#51950](https://github.com/ClickHouse/ClickHouse/pull/51950) ([Kruglov Pavel](https://github.com/Avogar)). -* Don't throw exception when exec udf has parameters [#51961](https://github.com/ClickHouse/ClickHouse/pull/51961) ([Nikita Taranov](https://github.com/nickitat)). -* Fix recalculation of skip indexes and projections in `ALTER DELETE` queries [#52530](https://github.com/ClickHouse/ClickHouse/pull/52530) ([Anton Popov](https://github.com/CurtizJ)). -* MaterializedMySQL: Fix the infinite loop in ReadBuffer::read [#52621](https://github.com/ClickHouse/ClickHouse/pull/52621) ([Val Doroshchuk](https://github.com/valbok)). 
-* Load suggestion only with `clickhouse` dialect [#52628](https://github.com/ClickHouse/ClickHouse/pull/52628) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* init and destroy ares channel on demand.. [#52634](https://github.com/ClickHouse/ClickHouse/pull/52634) ([Arthur Passos](https://github.com/arthurpassos)). -* RFC: Fix filtering by virtual columns with OR expression [#52653](https://github.com/ClickHouse/ClickHouse/pull/52653) ([Azat Khuzhin](https://github.com/azat)). -* Fix crash in function `tuple` with one sparse column argument [#52659](https://github.com/ClickHouse/ClickHouse/pull/52659) ([Anton Popov](https://github.com/CurtizJ)). -* Fix named collections on cluster 23.7 [#52687](https://github.com/ClickHouse/ClickHouse/pull/52687) ([Al Korgun](https://github.com/alkorgun)). -* Fix reading of unnecessary column in case of multistage `PREWHERE` [#52689](https://github.com/ClickHouse/ClickHouse/pull/52689) ([Anton Popov](https://github.com/CurtizJ)). -* Fix unexpected sort result on multi columns with nulls first direction [#52761](https://github.com/ClickHouse/ClickHouse/pull/52761) ([copperybean](https://github.com/copperybean)). -* Fix data race in Keeper reconfiguration [#52804](https://github.com/ClickHouse/ClickHouse/pull/52804) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix sorting of sparse columns with large limit [#52827](https://github.com/ClickHouse/ClickHouse/pull/52827) ([Anton Popov](https://github.com/CurtizJ)). -* clickhouse-keeper: fix implementation of server with poll() [#52833](https://github.com/ClickHouse/ClickHouse/pull/52833) ([Andy Fiddaman](https://github.com/citrus-it)). -* make regexp analyzer recognize named capturing groups [#52840](https://github.com/ClickHouse/ClickHouse/pull/52840) ([Han Fei](https://github.com/hanfei1991)). -* Fix possible assert in ~PushingAsyncPipelineExecutor in clickhouse-local [#52862](https://github.com/ClickHouse/ClickHouse/pull/52862) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix reading of empty `Nested(Array(LowCardinality(...)))` [#52949](https://github.com/ClickHouse/ClickHouse/pull/52949) ([Anton Popov](https://github.com/CurtizJ)). -* Added new tests for session_log and fixed the inconsistency between login and logout. [#52958](https://github.com/ClickHouse/ClickHouse/pull/52958) ([Alexey Gerasimchuck](https://github.com/Demilivor)). -* Fix password leak in show create mysql table [#52962](https://github.com/ClickHouse/ClickHouse/pull/52962) ([Duc Canh Le](https://github.com/canhld94)). -* Convert sparse to full in CreateSetAndFilterOnTheFlyStep [#53000](https://github.com/ClickHouse/ClickHouse/pull/53000) ([vdimir](https://github.com/vdimir)). -* Fix rare race condition with empty key prefix directory deletion in fs cache [#53055](https://github.com/ClickHouse/ClickHouse/pull/53055) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix ZstdDeflatingWriteBuffer truncating the output sometimes [#53064](https://github.com/ClickHouse/ClickHouse/pull/53064) ([Michael Kolupaev](https://github.com/al13n321)). -* Fix query_id in part_log with async flush queries [#53103](https://github.com/ClickHouse/ClickHouse/pull/53103) ([Raúl Marín](https://github.com/Algunenano)). -* Fix possible error from cache "Read unexpected size" [#53121](https://github.com/ClickHouse/ClickHouse/pull/53121) ([Kseniia Sumarokova](https://github.com/kssenii)). 
-* Disable the new parquet encoder [#53130](https://github.com/ClickHouse/ClickHouse/pull/53130) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Not-ready Set [#53162](https://github.com/ClickHouse/ClickHouse/pull/53162) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix character escaping in the PostgreSQL engine [#53250](https://github.com/ClickHouse/ClickHouse/pull/53250) ([Nikolay Degterinsky](https://github.com/evillique)). -* #2 Added new tests for session_log and fixed the inconsistency between login and logout. [#53255](https://github.com/ClickHouse/ClickHouse/pull/53255) ([Alexey Gerasimchuck](https://github.com/Demilivor)). -* #3 Fixed inconsistency between login success and logout [#53302](https://github.com/ClickHouse/ClickHouse/pull/53302) ([Alexey Gerasimchuck](https://github.com/Demilivor)). -* Fix adding sub-second intervals to DateTime [#53309](https://github.com/ClickHouse/ClickHouse/pull/53309) ([Michael Kolupaev](https://github.com/al13n321)). -* Fix "Context has expired" error in dictionaries [#53342](https://github.com/ClickHouse/ClickHouse/pull/53342) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix incorrect normal projection AST format [#53347](https://github.com/ClickHouse/ClickHouse/pull/53347) ([Amos Bird](https://github.com/amosbird)). -* Forbid use_structure_from_insertion_table_in_table_functions when execute Scalar [#53348](https://github.com/ClickHouse/ClickHouse/pull/53348) ([flynn](https://github.com/ucasfl)). -* Fix loading lazy database during system.table select query [#53372](https://github.com/ClickHouse/ClickHouse/pull/53372) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). -* Fixed system.data_skipping_indices for MaterializedMySQL [#53381](https://github.com/ClickHouse/ClickHouse/pull/53381) ([Filipp Ozinov](https://github.com/bakwc)). -* Fix processing single carriage return in TSV file segmentation engine [#53407](https://github.com/ClickHouse/ClickHouse/pull/53407) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix 'Context has expired' error properly [#53433](https://github.com/ClickHouse/ClickHouse/pull/53433) ([Michael Kolupaev](https://github.com/al13n321)). -* Fix timeout_overflow_mode when having subquery in the rhs of IN [#53439](https://github.com/ClickHouse/ClickHouse/pull/53439) ([Duc Canh Le](https://github.com/canhld94)). -* Fix an unexpected behavior in [#53152](https://github.com/ClickHouse/ClickHouse/issues/53152) [#53440](https://github.com/ClickHouse/ClickHouse/pull/53440) ([Zhiguo Zhou](https://github.com/ZhiguoZh)). -* Fix JSON_QUERY Function parse error while path is all number [#53470](https://github.com/ClickHouse/ClickHouse/pull/53470) ([KevinyhZou](https://github.com/KevinyhZou)). -* Fix wrong columns order for queries with parallel FINAL. [#53489](https://github.com/ClickHouse/ClickHouse/pull/53489) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fixed SELECTing from ReplacingMergeTree with do_not_merge_across_partitions_select_final [#53511](https://github.com/ClickHouse/ClickHouse/pull/53511) ([Vasily Nemkov](https://github.com/Enmk)). -* bugfix: Flush async insert queue first on shutdown [#53547](https://github.com/ClickHouse/ClickHouse/pull/53547) ([joelynch](https://github.com/joelynch)). -* Fix crash in join on sparse column [#53548](https://github.com/ClickHouse/ClickHouse/pull/53548) ([vdimir](https://github.com/vdimir)). 
-* Fix possible UB in Set skipping index for functions with incorrect args [#53559](https://github.com/ClickHouse/ClickHouse/pull/53559) ([Azat Khuzhin](https://github.com/azat)). -* Fix possible UB in inverted indexes (experimental feature) [#53560](https://github.com/ClickHouse/ClickHouse/pull/53560) ([Azat Khuzhin](https://github.com/azat)). -* Fix: interpolate expression takes source column instead of same name aliased from select expression. [#53572](https://github.com/ClickHouse/ClickHouse/pull/53572) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Fix number of dropped granules in EXPLAIN PLAN index=1 [#53616](https://github.com/ClickHouse/ClickHouse/pull/53616) ([wangxiaobo](https://github.com/wzb5212)). -* Correctly handle totals and extremes with `DelayedSource` [#53644](https://github.com/ClickHouse/ClickHouse/pull/53644) ([Antonio Andelic](https://github.com/antonio2368)). -* Prepared set cache in mutation pipeline stuck [#53645](https://github.com/ClickHouse/ClickHouse/pull/53645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix bug on mutations with subcolumns of type JSON in predicates of UPDATE and DELETE queries. [#53677](https://github.com/ClickHouse/ClickHouse/pull/53677) ([VanDarkholme7](https://github.com/VanDarkholme7)). -* Fix filter pushdown for full_sorting_merge join [#53699](https://github.com/ClickHouse/ClickHouse/pull/53699) ([vdimir](https://github.com/vdimir)). -* Try to fix bug with NULL::LowCardinality(Nullable(...)) NOT IN [#53706](https://github.com/ClickHouse/ClickHouse/pull/53706) ([Andrey Zvonov](https://github.com/zvonand)). -* Fix: sorted distinct with sparse columns [#53711](https://github.com/ClickHouse/ClickHouse/pull/53711) ([Igor Nikonov](https://github.com/devcrafter)). -* transform: correctly handle default column with multiple rows [#53742](https://github.com/ClickHouse/ClickHouse/pull/53742) ([Salvatore Mesoraca](https://github.com/aiven-sal)). -* Fix fuzzer crash in parseDateTime() [#53764](https://github.com/ClickHouse/ClickHouse/pull/53764) ([Robert Schulze](https://github.com/rschu1ze)). -* Materialized postgres: fix uncaught exception in getCreateTableQueryImpl [#53832](https://github.com/ClickHouse/ClickHouse/pull/53832) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix possible segfault while using PostgreSQL engine [#53847](https://github.com/ClickHouse/ClickHouse/pull/53847) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix named_collection_admin alias [#54066](https://github.com/ClickHouse/ClickHouse/pull/54066) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix rows_before_limit_at_least for DelayedSource. [#54122](https://github.com/ClickHouse/ClickHouse/pull/54122) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix results of queries utilizing the Annoy index when the part has more than one mark. [#51325](https://github.com/ClickHouse/ClickHouse/pull/51325) ([Tian Xinhui](https://github.com/xinhuitian)). +* Fix usage of temporary directories during RESTORE. [#51493](https://github.com/ClickHouse/ClickHouse/pull/51493) ([Azat Khuzhin](https://github.com/azat)). +* Fixed binary arithmetic for Nullable(IPv4). [#51642](https://github.com/ClickHouse/ClickHouse/pull/51642) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Support IPv4 and IPv6 as dictionary attributes. [#51756](https://github.com/ClickHouse/ClickHouse/pull/51756) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). 
+* Updated checkDataPart to read compressed marks as a compressed file by checking the file extension. Resolves [#51337](https://github.com/ClickHouse/ClickHouse/issues/51337). [#51777](https://github.com/ClickHouse/ClickHouse/pull/51777) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fix a comma mistakenly being parsed as part of a datetime in CSV best-effort datetime parsing. Closes [#51059](https://github.com/ClickHouse/ClickHouse/issues/51059). [#51950](https://github.com/ClickHouse/ClickHouse/pull/51950) ([Kruglov Pavel](https://github.com/Avogar)). +* Fixed an exception when an executable UDF was provided with a parameter. [#51961](https://github.com/ClickHouse/ClickHouse/pull/51961) ([Nikita Taranov](https://github.com/nickitat)). +* Fixed recalculation of skip indexes and projections in `ALTER DELETE` queries. [#52530](https://github.com/ClickHouse/ClickHouse/pull/52530) ([Anton Popov](https://github.com/CurtizJ)). +* Fixed the infinite loop in ReadBuffer when the pos overflows the end of the buffer in MaterializedMySQL. [#52621](https://github.com/ClickHouse/ClickHouse/pull/52621) ([Val Doroshchuk](https://github.com/valbok)). +* Do not try to load suggestions in `clickhouse-local` when the dialect is not `clickhouse`. [#52628](https://github.com/ClickHouse/ClickHouse/pull/52628) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Remove mutex from CaresPTRResolver and create `ares_channel` on demand. Trying to fix: https://github.com/ClickHouse/ClickHouse/pull/52327#issuecomment-1643021543. [#52634](https://github.com/ClickHouse/ClickHouse/pull/52634) ([Arthur Passos](https://github.com/arthurpassos)). +* Fix filtering by virtual columns with OR expression (e.g. by `_table` for the `Merge` engine). [#52653](https://github.com/ClickHouse/ClickHouse/pull/52653) ([Azat Khuzhin](https://github.com/azat)). +* Fix crash in function `tuple` with one sparse column argument. [#52659](https://github.com/ClickHouse/ClickHouse/pull/52659) ([Anton Popov](https://github.com/CurtizJ)). +* Fix named-collection-related statements: `if [not] exists`, `on cluster`. Closes [#51609](https://github.com/ClickHouse/ClickHouse/issues/51609). [#52687](https://github.com/ClickHouse/ClickHouse/pull/52687) ([Al Korgun](https://github.com/alkorgun)). +* Fix reading of an unnecessary column in case of multistage `PREWHERE`. [#52689](https://github.com/ClickHouse/ClickHouse/pull/52689) ([Anton Popov](https://github.com/CurtizJ)). +* Fix unexpected sort results on multiple columns with the nulls-first direction. [#52761](https://github.com/ClickHouse/ClickHouse/pull/52761) ([ZhiHong Zhang](https://github.com/copperybean)). +* Keeper fix: fix data race during reconfiguration. [#52804](https://github.com/ClickHouse/ClickHouse/pull/52804) ([Antonio Andelic](https://github.com/antonio2368)). +* Fixed sorting of sparse columns in case of `ORDER BY ... LIMIT n` clause and large values of `n`. [#52827](https://github.com/ClickHouse/ClickHouse/pull/52827) ([Anton Popov](https://github.com/CurtizJ)). +* Keeper fix: platforms that used poll() would delay responding to requests until the client sent a heartbeat. [#52833](https://github.com/ClickHouse/ClickHouse/pull/52833) ([Andy Fiddaman](https://github.com/citrus-it)). +* Make regexp analyzer recognize named capturing groups. [#52840](https://github.com/ClickHouse/ClickHouse/pull/52840) ([Han Fei](https://github.com/hanfei1991)). +* Fix possible assert in ~PushingAsyncPipelineExecutor in clickhouse-local.
[#52862](https://github.com/ClickHouse/ClickHouse/pull/52862) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix reading of empty `Nested(Array(LowCardinality(...)))` columns (added by `ALTER TABLE ... ADD COLUMN ...` query and not materialized in parts) from compact parts of `MergeTree` tables. [#52949](https://github.com/ClickHouse/ClickHouse/pull/52949) ([Anton Popov](https://github.com/CurtizJ)). +* Fixed the record inconsistency in session_log between login and logout. [#52958](https://github.com/ClickHouse/ClickHouse/pull/52958) ([Alexey Gerasimchuck](https://github.com/Demilivor)). +* Fix password leak in `SHOW CREATE` of MySQL tables. [#52962](https://github.com/ClickHouse/ClickHouse/pull/52962) ([Duc Canh Le](https://github.com/canhld94)). +* Fix possible crash in full sorting merge join on sparse columns, closes [#52978](https://github.com/ClickHouse/ClickHouse/issues/52978). [#53000](https://github.com/ClickHouse/ClickHouse/pull/53000) ([vdimir](https://github.com/vdimir)). +* Fix very rare race condition with empty key prefix directory deletion in fs cache. [#53055](https://github.com/ClickHouse/ClickHouse/pull/53055) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fixed `output_format_parquet_compression_method='zstd'` sometimes producing invalid Parquet files. In older versions, use setting `output_format_parquet_use_custom_encoder = 0` as a workaround. [#53064](https://github.com/ClickHouse/ClickHouse/pull/53064) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix query_id in part_log with async flush queries. [#53103](https://github.com/ClickHouse/ClickHouse/pull/53103) ([Raúl Marín](https://github.com/Algunenano)). +* Fix possible error from filesystem cache "Read unexpected size". [#53121](https://github.com/ClickHouse/ClickHouse/pull/53121) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Disable the new parquet encoder: it has a bug. [#53130](https://github.com/ClickHouse/ClickHouse/pull/53130) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix the error `Not-ready Set is passed as the second argument for function 'in'`, which could happen with limited `max_result_rows` and `result_overflow_mode = 'break'`. [#53162](https://github.com/ClickHouse/ClickHouse/pull/53162) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix character escaping in the PostgreSQL engine (`\'` -> `''`, `\\` -> `\`). Closes [#49821](https://github.com/ClickHouse/ClickHouse/issues/49821). [#53250](https://github.com/ClickHouse/ClickHouse/pull/53250) ([Nikolay Degterinsky](https://github.com/evillique)). +* Fixed the record inconsistency in session_log between login and logout. [#53255](https://github.com/ClickHouse/ClickHouse/pull/53255) ([Alexey Gerasimchuck](https://github.com/Demilivor)). +* Fixed the record inconsistency in session_log between login and logout. [#53302](https://github.com/ClickHouse/ClickHouse/pull/53302) ([Alexey Gerasimchuck](https://github.com/Demilivor)). +* Fixed adding intervals of a fraction of a second to DateTime producing an incorrect result. [#53309](https://github.com/ClickHouse/ClickHouse/pull/53309) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix the "Context has expired" error in dictionaries when using subqueries. [#53342](https://github.com/ClickHouse/ClickHouse/pull/53342) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Fix incorrect normal projection AST format when a single function is used in ORDER BY. This fixes [#52607](https://github.com/ClickHouse/ClickHouse/issues/52607).
[#53347](https://github.com/ClickHouse/ClickHouse/pull/53347) ([Amos Bird](https://github.com/amosbird)). +* Forbid `use_structure_from_insertion_table_in_table_functions` when executing scalar subqueries. Closes [#52494](https://github.com/ClickHouse/ClickHouse/issues/52494). [#53348](https://github.com/ClickHouse/ClickHouse/pull/53348) ([flynn](https://github.com/ucasfl)). +* Avoid loading tables from lazy database when not needed. Follow-up to [#43840](https://github.com/ClickHouse/ClickHouse/issues/43840). [#53372](https://github.com/ClickHouse/ClickHouse/pull/53372) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). +* Fixed `system.data_skipping_indices` columns `data_compressed_bytes` and `data_uncompressed_bytes` for MaterializedMySQL. [#53381](https://github.com/ClickHouse/ClickHouse/pull/53381) ([Filipp Ozinov](https://github.com/bakwc)). +* Fix processing single carriage return in TSV file segmentation engine that could lead to parsing errors. Closes [#53320](https://github.com/ClickHouse/ClickHouse/issues/53320). [#53407](https://github.com/ClickHouse/ClickHouse/pull/53407) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix the "Context has expired" error when using subqueries with functions `file()` (regular function, not table function), `joinGet()`, `joinGetOrNull()`, `connectionId()`. [#53433](https://github.com/ClickHouse/ClickHouse/pull/53433) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix `timeout_overflow_mode` when there is a subquery in the right-hand side of `IN`. [#53439](https://github.com/ClickHouse/ClickHouse/pull/53439) ([Duc Canh Le](https://github.com/canhld94)). +* Fix an unexpected behavior described in [#53152](https://github.com/ClickHouse/ClickHouse/issues/53152). [#53440](https://github.com/ClickHouse/ClickHouse/pull/53440) ([Zhiguo Zhou](https://github.com/ZhiguoZh)). +* Fix the JSON_QUERY function failing to parse a JSON string when the path is numeric. For example, the query `SELECT JSON_QUERY('{"123":"abcd"}', '$.123')` would throw the exception `DB::Exception: Unable to parse JSONPath: While processing JSON_QUERY('{"123":"abcd"}', '$.123'). (BAD_ARGUMENTS)`. [#53470](https://github.com/ClickHouse/ClickHouse/pull/53470) ([KevinyhZou](https://github.com/KevinyhZou)). +* Fix possible crash for queries with parallel `FINAL` where `ORDER BY` and `PRIMARY KEY` are different in table definition. [#53489](https://github.com/ClickHouse/ClickHouse/pull/53489) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fixed ReplacingMergeTree to properly process single-partition cases when `do_not_merge_across_partitions_select_final=1`. Previously `SELECT` could return rows that were marked as deleted. [#53511](https://github.com/ClickHouse/ClickHouse/pull/53511) ([Vasily Nemkov](https://github.com/Enmk)). +* Fix bug in flushing of async insert queue on graceful shutdown. [#53547](https://github.com/ClickHouse/ClickHouse/pull/53547) ([joelynch](https://github.com/joelynch)). +* Fix crash in join on sparse column. [#53548](https://github.com/ClickHouse/ClickHouse/pull/53548) ([vdimir](https://github.com/vdimir)). +* Fix possible UB in Set skipping index for functions with incorrect args. [#53559](https://github.com/ClickHouse/ClickHouse/pull/53559) ([Azat Khuzhin](https://github.com/azat)). +* Fix possible UB in inverted indexes (experimental feature). [#53560](https://github.com/ClickHouse/ClickHouse/pull/53560) ([Azat Khuzhin](https://github.com/azat)). +* Fixed a bug in `INTERPOLATE` when the interpolated column is aliased with the same name as a source column.
[#53572](https://github.com/ClickHouse/ClickHouse/pull/53572) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fixed a bug in EXPLAIN PLAN index=1 where the number of dropped granules was incorrect. [#53616](https://github.com/ClickHouse/ClickHouse/pull/53616) ([wangxiaobo](https://github.com/wzb5212)). +* Correctly handle totals and extremes when `DelayedSource` is used. [#53644](https://github.com/ClickHouse/ClickHouse/pull/53644) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix `Pipeline stuck` error in mutation with `IN (subquery WITH TOTALS)` where ready set was taken from cache. [#53645](https://github.com/ClickHouse/ClickHouse/pull/53645) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Allow to use JSON subcolumns in predicates of UPDATE and DELETE queries. [#53677](https://github.com/ClickHouse/ClickHouse/pull/53677) ([zps](https://github.com/VanDarkholme7)). +* Fix possible logical error exception during filter pushdown for full_sorting_merge join. [#53699](https://github.com/ClickHouse/ClickHouse/pull/53699) ([vdimir](https://github.com/vdimir)). +* Fix NULL::LowCardinality(Nullable(...)) with IN. [#53706](https://github.com/ClickHouse/ClickHouse/pull/53706) ([Andrey Zvonov](https://github.com/zvonand)). +* Fixes possible crashes in `DISTINCT` queries with enabled `optimize_distinct_in_order` and sparse columns. [#53711](https://github.com/ClickHouse/ClickHouse/pull/53711) ([Igor Nikonov](https://github.com/devcrafter)). +* Correctly handle default column with multiple rows in transform. [#53742](https://github.com/ClickHouse/ClickHouse/pull/53742) ([Salvatore Mesoraca](https://github.com/aiven-sal)). +* Fix crash in SQL function parseDateTime() with non-const timezone argument. [#53764](https://github.com/ClickHouse/ClickHouse/pull/53764) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix uncaught exception in `getCreateTableQueryImpl`. [#53832](https://github.com/ClickHouse/ClickHouse/pull/53832) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix possible segfault while using PostgreSQL engine. Closes [#36919](https://github.com/ClickHouse/ClickHouse/issues/36919). [#53847](https://github.com/ClickHouse/ClickHouse/pull/53847) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fix `named_collection_admin` alias to `named_collection_control` not working from config. [#54066](https://github.com/ClickHouse/ClickHouse/pull/54066) ([Kseniia Sumarokova](https://github.com/kssenii)). +* A distributed query could miss `rows_before_limit_at_least` in the query result in case it was executed on a replica with a delay more than `max_replica_delay_for_distributed_queries`. [#54122](https://github.com/ClickHouse/ClickHouse/pull/54122) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). #### NO CL ENTRY @@ -272,7 +272,7 @@ sidebar_label: 2023 * Add more checks into ThreadStatus ctor. [#42019](https://github.com/ClickHouse/ClickHouse/pull/42019) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Refactor Query Tree visitor [#46740](https://github.com/ClickHouse/ClickHouse/pull/46740) ([Dmitry Novik](https://github.com/novikd)). * Revert "Revert "Randomize JIT settings in tests"" [#48282](https://github.com/ClickHouse/ClickHouse/pull/48282) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix outdated cache configuration in s3 tests: s3_storage_policy_by_defau... [#48424](https://github.com/ClickHouse/ClickHouse/pull/48424) ([Kseniia Sumarokova](https://github.com/kssenii)). 
+* Fix outdated cache configuration in s3 tests: s3_storage_policy_by_defau… [#48424](https://github.com/ClickHouse/ClickHouse/pull/48424) ([Kseniia Sumarokova](https://github.com/kssenii)). * Fix IN with decimal in analyzer [#48754](https://github.com/ClickHouse/ClickHouse/pull/48754) ([vdimir](https://github.com/vdimir)). * Some unclear change in StorageBuffer::reschedule() for something [#49723](https://github.com/ClickHouse/ClickHouse/pull/49723) ([DimasKovas](https://github.com/DimasKovas)). * MergeTree & SipHash checksum big-endian support [#50276](https://github.com/ClickHouse/ClickHouse/pull/50276) ([ltrk2](https://github.com/ltrk2)). @@ -540,7 +540,7 @@ sidebar_label: 2023 * Do not warn about arch_sys_counter clock [#53739](https://github.com/ClickHouse/ClickHouse/pull/53739) ([Artur Malchanau](https://github.com/Hexta)). * Add some profile events [#53741](https://github.com/ClickHouse/ClickHouse/pull/53741) ([Kseniia Sumarokova](https://github.com/kssenii)). * Support clang-18 (Wmissing-field-initializers) [#53751](https://github.com/ClickHouse/ClickHouse/pull/53751) ([Raúl Marín](https://github.com/Algunenano)). -* Upgrade openSSL to v3.0.10 [#53756](https://github.com/ClickHouse/ClickHouse/pull/53756) ([bhavnajindal](https://github.com/bhavnajindal)). +* Upgrade openSSL to v3.0.10 [#53756](https://github.com/ClickHouse/ClickHouse/pull/53756) ([Bhavna Jindal](https://github.com/bhavnajindal)). * Improve JSON-handling on s390x [#53760](https://github.com/ClickHouse/ClickHouse/pull/53760) ([ltrk2](https://github.com/ltrk2)). * Reduce API calls to SSM client [#53762](https://github.com/ClickHouse/ClickHouse/pull/53762) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). * Remove branch references from .gitmodules [#53763](https://github.com/ClickHouse/ClickHouse/pull/53763) ([Robert Schulze](https://github.com/rschu1ze)). @@ -588,3 +588,4 @@ sidebar_label: 2023 * tests: mark 02152_http_external_tables_memory_tracking as no-parallel [#54155](https://github.com/ClickHouse/ClickHouse/pull/54155) ([Azat Khuzhin](https://github.com/azat)). * The external logs have had colliding arguments [#54165](https://github.com/ClickHouse/ClickHouse/pull/54165) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). * Rename macro [#54169](https://github.com/ClickHouse/ClickHouse/pull/54169) ([Kseniia Sumarokova](https://github.com/kssenii)). + diff --git a/docs/changelogs/v23.8.10.43-lts.md b/docs/changelogs/v23.8.10.43-lts.md index 0093467d129..0750901da8a 100644 --- a/docs/changelogs/v23.8.10.43-lts.md +++ b/docs/changelogs/v23.8.10.43-lts.md @@ -16,17 +16,17 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Background merges correctly use temporary data storage in the cache [#57275](https://github.com/ClickHouse/ClickHouse/pull/57275) ([vdimir](https://github.com/vdimir)). -* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)). -* Fix double destroy call on exception throw in addBatchLookupTable8 [#58745](https://github.com/ClickHouse/ClickHouse/pull/58745) ([Raúl Marín](https://github.com/Algunenano)). -* Fix JSONExtract function for LowCardinality(Nullable) columns [#58808](https://github.com/ClickHouse/ClickHouse/pull/58808) ([vdimir](https://github.com/vdimir)). -* Fix: LIMIT BY and LIMIT in distributed query [#59153](https://github.com/ClickHouse/ClickHouse/pull/59153) ([Igor Nikonov](https://github.com/devcrafter)). 
-* Fix translate() with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)). -* Fix error "Read beyond last offset" for AsynchronousBoundedReadBuffer [#59630](https://github.com/ClickHouse/ClickHouse/pull/59630) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix query start time on non initial queries [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)). -* Fix leftPad / rightPad function with FixedString input [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)). -* rabbitmq: fix having neither acked nor nacked messages [#59775](https://github.com/ClickHouse/ClickHouse/pull/59775) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix cosineDistance crash with Nullable [#60150](https://github.com/ClickHouse/ClickHouse/pull/60150) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#57565](https://github.com/ClickHouse/ClickHouse/issues/57565): Background merges correctly use temporary data storage in the cache. [#57275](https://github.com/ClickHouse/ClickHouse/pull/57275) ([vdimir](https://github.com/vdimir)). +* Backported in [#57476](https://github.com/ClickHouse/ClickHouse/issues/57476): Fix possible broken skipping indexes after materialization in MergeTree compact parts. [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)). +* Backported in [#58777](https://github.com/ClickHouse/ClickHouse/issues/58777): Fix double destroy call on exception throw in addBatchLookupTable8. [#58745](https://github.com/ClickHouse/ClickHouse/pull/58745) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#58856](https://github.com/ClickHouse/ClickHouse/issues/58856): Fix possible crash in JSONExtract function extracting `LowCardinality(Nullable(T))` type. [#58808](https://github.com/ClickHouse/ClickHouse/pull/58808) ([vdimir](https://github.com/vdimir)). +* Backported in [#59194](https://github.com/ClickHouse/ClickHouse/issues/59194): The combination of LIMIT BY and LIMIT could produce an incorrect result in distributed queries (parallel replicas included). [#59153](https://github.com/ClickHouse/ClickHouse/pull/59153) ([Igor Nikonov](https://github.com/devcrafter)). +* Backported in [#59429](https://github.com/ClickHouse/ClickHouse/issues/59429): Fix translate() with FixedString input. Could lead to crashes as it'd return a String column (vs the expected FixedString). This issue was found by YohannJardin through the ClickHouse Bug Bounty Program. [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#60128](https://github.com/ClickHouse/ClickHouse/issues/60128): Fix error `Read beyond last offset` for `AsynchronousBoundedReadBuffer`. [#59630](https://github.com/ClickHouse/ClickHouse/pull/59630) ([Vitaly Baranov](https://github.com/vitlibar)). +* Backported in [#59836](https://github.com/ClickHouse/ClickHouse/issues/59836): Fix query start time on non-initial queries. [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59758](https://github.com/ClickHouse/ClickHouse/issues/59758): Fix leftPad / rightPad function with FixedString input. [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#60304](https://github.com/ClickHouse/ClickHouse/issues/60304): Fix having neither acked nor nacked messages. If an exception happens during the read-write phase, messages will be nacked. [#59775](https://github.com/ClickHouse/ClickHouse/pull/59775) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Backported in [#60171](https://github.com/ClickHouse/ClickHouse/issues/60171): Fix cosineDistance crash with Nullable. [#60150](https://github.com/ClickHouse/ClickHouse/pull/60150) ([Raúl Marín](https://github.com/Algunenano)). #### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v23.8.11.28-lts.md b/docs/changelogs/v23.8.11.28-lts.md index acc284caa72..3da3d10cfa5 100644 --- a/docs/changelogs/v23.8.11.28-lts.md +++ b/docs/changelogs/v23.8.11.28-lts.md @@ -12,11 +12,11 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix buffer overflow in CompressionCodecMultiple [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Remove nonsense from SQL/JSON [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix crash in arrayEnumerateRanked [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)). -* Fix crash when using input() in INSERT SELECT JOIN [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)). -* Remove recursion when reading from S3 [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#60983](https://github.com/ClickHouse/ClickHouse/issues/60983): Fix buffer overflow that can happen if the attacker asks the HTTP server to decompress data with a composition of codecs and size triggering numeric overflow. Fix buffer overflow that can happen inside codec NONE on wrong input data. This was submitted by the TIANGONG research team through our [Bug Bounty program](https://github.com/ClickHouse/ClickHouse/issues/38986). [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#60986](https://github.com/ClickHouse/ClickHouse/issues/60986): Functions for SQL/JSON were able to read uninitialized memory. This closes [#60017](https://github.com/ClickHouse/ClickHouse/issues/60017). Found by Fuzzer. [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#60816](https://github.com/ClickHouse/ClickHouse/issues/60816): Fix crash in arrayEnumerateRanked. [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#60837](https://github.com/ClickHouse/ClickHouse/issues/60837): Fix crash when using input() in INSERT SELECT JOIN (see the sketch after this list). Closes [#60035](https://github.com/ClickHouse/ClickHouse/issues/60035). [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#60911](https://github.com/ClickHouse/ClickHouse/issues/60911): Avoid segfault if too many keys are skipped when reading from S3. [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)).
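As a rough illustration of the `input()` crash fixed in [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765), a minimal sketch of the query shape involved; the table and column names here are hypothetical, not taken from the linked report:

```sql
-- Feeding data through the input() table function into an
-- INSERT ... SELECT that also performs a JOIN previously could
-- crash the server; the inserted rows arrive with the INSERT itself.
INSERT INTO target
SELECT src.id, ref.name
FROM input('id UInt64') AS src
JOIN ref ON src.id = ref.id;
```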
#### NO CL ENTRY diff --git a/docs/changelogs/v23.8.12.13-lts.md b/docs/changelogs/v23.8.12.13-lts.md index dbb36fdc00e..0329d4349f3 100644 --- a/docs/changelogs/v23.8.12.13-lts.md +++ b/docs/changelogs/v23.8.12.13-lts.md @@ -9,9 +9,9 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Improve isolation of query cache entries under re-created users or role switches [#58611](https://github.com/ClickHouse/ClickHouse/pull/58611) ([Robert Schulze](https://github.com/rschu1ze)). -* Fix string search with const position [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` for incorrect UTF-8 [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)). +* Backported in [#61439](https://github.com/ClickHouse/ClickHouse/issues/61439): The query cache now denies access to entries when the user is re-created or assumes another role. This prevents attacks where 1. a user with the same name as a dropped user may access the old user's cache entries or 2. a user with a different role may access cache entries of a role with a different row policy. [#58611](https://github.com/ClickHouse/ClickHouse/pull/58611) ([Robert Schulze](https://github.com/rschu1ze)). +* Backported in [#61572](https://github.com/ClickHouse/ClickHouse/issues/61572): Fix string search with constant start position which previously could lead to memory corruption. [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#61854](https://github.com/ClickHouse/ClickHouse/issues/61854): Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` when specifying an incorrect UTF-8 sequence. Example: [#61714](https://github.com/ClickHouse/ClickHouse/issues/61714#issuecomment-2012768202). [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)). #### CI Fix or Improvement (changelog entry is not required) diff --git a/docs/changelogs/v23.8.13.25-lts.md b/docs/changelogs/v23.8.13.25-lts.md index 3452621556a..e9c6e2e9f28 100644 --- a/docs/changelogs/v23.8.13.25-lts.md +++ b/docs/changelogs/v23.8.13.25-lts.md @@ -15,11 +15,11 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix REPLACE/MOVE PARTITION with zero-copy replication [#54193](https://github.com/ClickHouse/ClickHouse/pull/54193) ([Alexander Tokmakov](https://github.com/tavplubix)). -* Fix ATTACH query with external ON CLUSTER [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)). -* Cancel merges before removing moved parts [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Mark CANNOT_PARSE_ESCAPE_SEQUENCE error as parse error to be able to skip it in row input formats [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)). -* Try to fix segfault in Hive engine [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#62898](https://github.com/ClickHouse/ClickHouse/issues/62898): Fixed a bug in zero-copy replication (an experimental feature) that could cause `The specified key does not exist` errors and data loss after REPLACE/MOVE PARTITION.
A similar issue might happen with TTL moves between disks. [#54193](https://github.com/ClickHouse/ClickHouse/pull/54193) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Backported in [#61964](https://github.com/ClickHouse/ClickHouse/issues/61964): Fix the ATTACH query with the ON CLUSTER clause when the database does not exist on the initiator node. Closes [#55009](https://github.com/ClickHouse/ClickHouse/issues/55009). [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#62527](https://github.com/ClickHouse/ClickHouse/issues/62527): Fix data race between `MOVE PARTITION` query and merges resulting in intersecting parts. [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Backported in [#62238](https://github.com/ClickHouse/ClickHouse/issues/62238): Fix skipping escape-sequence parsing errors during JSON data parsing while using `input_format_allow_errors_num/ratio` settings. [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#62673](https://github.com/ClickHouse/ClickHouse/issues/62673): Fix segmentation fault when using Hive table engine. Reference [#62154](https://github.com/ClickHouse/ClickHouse/issues/62154), [#62560](https://github.com/ClickHouse/ClickHouse/issues/62560). [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)). #### CI Fix or Improvement (changelog entry is not required) diff --git a/docs/changelogs/v23.8.14.6-lts.md b/docs/changelogs/v23.8.14.6-lts.md index 0053502a9dc..3236c931e51 100644 --- a/docs/changelogs/v23.8.14.6-lts.md +++ b/docs/changelogs/v23.8.14.6-lts.md @@ -9,6 +9,6 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Set server name for SSL handshake in MongoDB engine [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)). -* Use user specified db instead of "config" for MongoDB wire protocol version check [#63126](https://github.com/ClickHouse/ClickHouse/pull/63126) ([Alexander Gololobov](https://github.com/davenger)). +* Backported in [#63172](https://github.com/ClickHouse/ClickHouse/issues/63172): Setting server_name might help with the recently reported SSL handshake error when connecting to MongoDB Atlas: `Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: error:10000438:SSL routines:OPENSSL_internal:TLSV1_ALERT_INTERNAL_ERROR`. [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)). +* Backported in [#63164](https://github.com/ClickHouse/ClickHouse/issues/63164): The wire protocol version check for MongoDB used to try accessing the "config" database, but this can fail if the user doesn't have permissions for it. The fix is to use the database name provided by the user; see the sketch below. [#63126](https://github.com/ClickHouse/ClickHouse/pull/63126) ([Alexander Gololobov](https://github.com/davenger)).
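To make the two MongoDB fixes above concrete, a hedged sketch of a table definition they affect; the host, database, collection, and credentials are invented for illustration, and the engine arguments follow the usual `MongoDB(host:port, database, collection, user, password[, options])` form:

```sql
-- With the fixes, the wire protocol version check runs against 'mydb'
-- (the database named here) instead of the 'config' database (#63126),
-- and server_name is set for the SSL handshake when ssl=true (#63122).
CREATE TABLE mongo_events
(
    key UInt64,
    data String
)
ENGINE = MongoDB('mongo1.example.net:27017', 'mydb', 'events', 'app_user', 'secret', 'ssl=true');
```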
diff --git a/docs/changelogs/v23.8.2.7-lts.md b/docs/changelogs/v23.8.2.7-lts.md index 317e2c6d56a..a6f74e7998c 100644 --- a/docs/changelogs/v23.8.2.7-lts.md +++ b/docs/changelogs/v23.8.2.7-lts.md @@ -9,8 +9,8 @@ sidebar_label: 2023 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix: parallel replicas over distributed don't read from all replicas [#54199](https://github.com/ClickHouse/ClickHouse/pull/54199) ([Igor Nikonov](https://github.com/devcrafter)). -* Fix: allow IPv6 for bloom filter [#54200](https://github.com/ClickHouse/ClickHouse/pull/54200) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Backported in [#54209](https://github.com/ClickHouse/ClickHouse/issues/54209): Parallel reading from replicas over a Distributed table was using only one replica per shard. [#54199](https://github.com/ClickHouse/ClickHouse/pull/54199) ([Igor Nikonov](https://github.com/devcrafter)). +* Backported in [#54233](https://github.com/ClickHouse/ClickHouse/issues/54233): Allow IPv6 in the bloom filter index; fixes a backward-compatibility issue. [#54200](https://github.com/ClickHouse/ClickHouse/pull/54200) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). #### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v23.8.3.48-lts.md b/docs/changelogs/v23.8.3.48-lts.md index af669c5adc8..91514f48a25 100644 --- a/docs/changelogs/v23.8.3.48-lts.md +++ b/docs/changelogs/v23.8.3.48-lts.md @@ -18,19 +18,19 @@ sidebar_label: 2023 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix: moved to prewhere condition actions can lose column [#53492](https://github.com/ClickHouse/ClickHouse/pull/53492) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Fix: parallel replicas over distributed with prefer_localhost_replica=1 [#54334](https://github.com/ClickHouse/ClickHouse/pull/54334) ([Igor Nikonov](https://github.com/devcrafter)). -* Fix possible error 'URI contains invalid characters' in s3 table function [#54373](https://github.com/ClickHouse/ClickHouse/pull/54373) ([Kruglov Pavel](https://github.com/Avogar)). -* Check for overflow before addition in `analysisOfVariance` function [#54385](https://github.com/ClickHouse/ClickHouse/pull/54385) ([Antonio Andelic](https://github.com/antonio2368)). -* reproduce and fix the bug in removeSharedRecursive [#54430](https://github.com/ClickHouse/ClickHouse/pull/54430) ([Sema Checherinda](https://github.com/CheSema)). -* Fix aggregate projections with normalized states [#54480](https://github.com/ClickHouse/ClickHouse/pull/54480) ([Amos Bird](https://github.com/amosbird)). -* Fix possible parsing error in WithNames formats with disabled input_format_with_names_use_header [#54513](https://github.com/ClickHouse/ClickHouse/pull/54513) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix zero copy garbage [#54550](https://github.com/ClickHouse/ClickHouse/pull/54550) ([Alexander Tokmakov](https://github.com/tavplubix)). -* Fix race in `ColumnUnique` [#54575](https://github.com/ClickHouse/ClickHouse/pull/54575) ([Nikita Taranov](https://github.com/nickitat)). -* Fix serialization of `ColumnDecimal` [#54601](https://github.com/ClickHouse/ClickHouse/pull/54601) ([Nikita Taranov](https://github.com/nickitat)). -* Fix virtual columns having incorrect values after ORDER BY [#54811](https://github.com/ClickHouse/ClickHouse/pull/54811) ([Michael Kolupaev](https://github.com/al13n321)).
-* Fix Keeper segfault during shutdown [#54841](https://github.com/ClickHouse/ClickHouse/pull/54841) ([Antonio Andelic](https://github.com/antonio2368)). -* Rebuild minmax_count_projection when partition key gets modified [#54943](https://github.com/ClickHouse/ClickHouse/pull/54943) ([Amos Bird](https://github.com/amosbird)). +* Backported in [#54974](https://github.com/ClickHouse/ClickHouse/issues/54974): Fixed an issue where, during PREWHERE optimization, a compound condition actions DAG could lose an output column of an intermediate step even though this column is required as an input column of a later step. [#53492](https://github.com/ClickHouse/ClickHouse/pull/53492) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Backported in [#54996](https://github.com/ClickHouse/ClickHouse/issues/54996): Parallel replicas either executed completely on the local replica or produced an incorrect result when `prefer_localhost_replica=1`. Fixes [#54276](https://github.com/ClickHouse/ClickHouse/issues/54276). [#54334](https://github.com/ClickHouse/ClickHouse/pull/54334) ([Igor Nikonov](https://github.com/devcrafter)). +* Backported in [#54516](https://github.com/ClickHouse/ClickHouse/issues/54516): Fix possible error 'URI contains invalid characters' in s3 table function. Closes [#54345](https://github.com/ClickHouse/ClickHouse/issues/54345). [#54373](https://github.com/ClickHouse/ClickHouse/pull/54373) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#54418](https://github.com/ClickHouse/ClickHouse/issues/54418): Check for overflow when handling group number argument for `analysisOfVariance` to avoid crashes. Crash found using WINGFUZZ. [#54385](https://github.com/ClickHouse/ClickHouse/pull/54385) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#54527](https://github.com/ClickHouse/ClickHouse/issues/54527): Reproduce and fix the bug in removeSharedRecursive described in [#54135](https://github.com/ClickHouse/ClickHouse/issues/54135). [#54430](https://github.com/ClickHouse/ClickHouse/pull/54430) ([Sema Checherinda](https://github.com/CheSema)). +* Backported in [#54854](https://github.com/ClickHouse/ClickHouse/issues/54854): Fix incorrect aggregation projection optimization when using variant aggregate states. This optimization was accidentally enabled but not properly implemented, because after https://github.com/ClickHouse/ClickHouse/pull/39420 the comparison of DataTypeAggregateFunction is normalized. This fixes [#54406](https://github.com/ClickHouse/ClickHouse/issues/54406). [#54480](https://github.com/ClickHouse/ClickHouse/pull/54480) ([Amos Bird](https://github.com/amosbird)). +* Backported in [#54599](https://github.com/ClickHouse/ClickHouse/issues/54599): Fix parsing error in WithNames formats while reading a subset of columns with disabled input_format_with_names_use_header. Closes [#52591](https://github.com/ClickHouse/ClickHouse/issues/52591). [#54513](https://github.com/ClickHouse/ClickHouse/pull/54513) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#54594](https://github.com/ClickHouse/ClickHouse/issues/54594): Starting from version 23.5, zero-copy replication could leave some garbage in ZooKeeper and on S3. It might happen on removal of Outdated parts that were mutated. The issue is indicated by `Failed to get mutation parent on {} for part {}, refusing to remove blobs` log messages. [#54550](https://github.com/ClickHouse/ClickHouse/pull/54550) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Backported in [#54627](https://github.com/ClickHouse/ClickHouse/issues/54627): Fix unsynchronised write to a shared variable in `ColumnUnique`. [#54575](https://github.com/ClickHouse/ClickHouse/pull/54575) ([Nikita Taranov](https://github.com/nickitat)). +* Backported in [#54625](https://github.com/ClickHouse/ClickHouse/issues/54625): Fix serialization of `ColumnDecimal`. [#54601](https://github.com/ClickHouse/ClickHouse/pull/54601) ([Nikita Taranov](https://github.com/nickitat)). +* Backported in [#54945](https://github.com/ClickHouse/ClickHouse/issues/54945): Fixed virtual columns (e.g. _file) showing incorrect values with ORDER BY. [#54811](https://github.com/ClickHouse/ClickHouse/pull/54811) ([Michael Kolupaev](https://github.com/al13n321)). +* Backported in [#54872](https://github.com/ClickHouse/ClickHouse/issues/54872): Keeper fix: correctly capture a variable in callback to avoid segfaults during shutdown. [#54841](https://github.com/ClickHouse/ClickHouse/pull/54841) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#54950](https://github.com/ClickHouse/ClickHouse/issues/54950): Fix a projection optimization error if the table's partition key was ALTERed by extending its Enum type (see the sketch after this section). The fix is to rebuild `minmax_count_projection` when the partition key gets modified. This fixes [#54941](https://github.com/ClickHouse/ClickHouse/issues/54941). [#54943](https://github.com/ClickHouse/ClickHouse/pull/54943) ([Amos Bird](https://github.com/amosbird)). #### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v23.8.4.69-lts.md b/docs/changelogs/v23.8.4.69-lts.md index 065a57549be..a6d8d8bb03b 100644 --- a/docs/changelogs/v23.8.4.69-lts.md +++ b/docs/changelogs/v23.8.4.69-lts.md @@ -11,26 +11,26 @@ sidebar_label: 2023 * Backported in [#55673](https://github.com/ClickHouse/ClickHouse/issues/55673): If the database is already initialized, it doesn't need to be initialized again upon subsequent launches. This can potentially fix the issue of infinite container restarts when the database fails to load within 1000 attempts (relevant for very large databases and multi-node setups). [#50724](https://github.com/ClickHouse/ClickHouse/pull/50724) ([Alexander Nikolaev](https://github.com/AlexNik)). * Backported in [#55293](https://github.com/ClickHouse/ClickHouse/issues/55293): Resource with source code including submodules is built in Darwin special build task. It may be used to build ClickHouse without checkouting submodules. [#51435](https://github.com/ClickHouse/ClickHouse/pull/51435) ([Ilya Yatsishin](https://github.com/qoega)). * Backported in [#55366](https://github.com/ClickHouse/ClickHouse/issues/55366): Solve issue with launching standalone clickhouse-keeper from clickhouse-server package. [#55226](https://github.com/ClickHouse/ClickHouse/pull/55226) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). -* Backported in [#55725](https://github.com/ClickHouse/ClickHouse/issues/55725): Fix integration check python script to use gh api url - Add Readme for CI tests. [#55716](https://github.com/ClickHouse/ClickHouse/pull/55716) ([Max K.](https://github.com/mkaynov)). +* Backported in [#55725](https://github.com/ClickHouse/ClickHouse/issues/55725): Fix integration check python script to use gh api url - Add Readme for CI tests. [#55716](https://github.com/ClickHouse/ClickHouse/pull/55716) ([Max K.](https://github.com/maxknv)).
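For the `minmax_count_projection` fix [#54943](https://github.com/ClickHouse/ClickHouse/pull/54943) noted above, a minimal sketch of the scenario under assumed, illustrative table and column names:

```sql
-- Extending an Enum that is part of the partition key previously broke
-- projection analysis; the projection is now rebuilt on such an ALTER.
CREATE TABLE events
(
    kind Enum8('click' = 1, 'view' = 2),
    ts DateTime
)
ENGINE = MergeTree
PARTITION BY kind
ORDER BY ts;

ALTER TABLE events MODIFY COLUMN kind Enum8('click' = 1, 'view' = 2, 'scroll' = 3);
SELECT count() FROM events;  -- previously could fail with a projection error
```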
#### Bug Fix (user-visible misbehavior in an official stable release) -* Fix "Invalid number of rows in Chunk" in MaterializedPostgreSQL [#54844](https://github.com/ClickHouse/ClickHouse/pull/54844) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Move obsolete format settings to separate section [#54855](https://github.com/ClickHouse/ClickHouse/pull/54855) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix: insert quorum w/o keeper retries [#55026](https://github.com/ClickHouse/ClickHouse/pull/55026) ([Igor Nikonov](https://github.com/devcrafter)). -* Prevent attaching parts from tables with different projections or indices [#55062](https://github.com/ClickHouse/ClickHouse/pull/55062) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Proper cleanup in case of exception in ctor of ShellCommandSource [#55103](https://github.com/ClickHouse/ClickHouse/pull/55103) ([Alexander Gololobov](https://github.com/davenger)). -* Fix deadlock in LDAP assigned role update [#55119](https://github.com/ClickHouse/ClickHouse/pull/55119) ([Julian Maicher](https://github.com/jmaicher)). -* Fix for background download in fs cache [#55252](https://github.com/ClickHouse/ClickHouse/pull/55252) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix functions execution over sparse columns [#55275](https://github.com/ClickHouse/ClickHouse/pull/55275) ([Azat Khuzhin](https://github.com/azat)). -* Fix bug with inability to drop detached partition in replicated merge tree on top of S3 without zero copy [#55309](https://github.com/ClickHouse/ClickHouse/pull/55309) ([alesapin](https://github.com/alesapin)). -* Fix trash optimization (up to a certain extent) [#55353](https://github.com/ClickHouse/ClickHouse/pull/55353) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix parsing of arrays in cast operator [#55417](https://github.com/ClickHouse/ClickHouse/pull/55417) ([Anton Popov](https://github.com/CurtizJ)). -* Fix filtering by virtual columns with OR filter in query [#55418](https://github.com/ClickHouse/ClickHouse/pull/55418) ([Azat Khuzhin](https://github.com/azat)). -* Fix MongoDB connection issues [#55419](https://github.com/ClickHouse/ClickHouse/pull/55419) ([Nikolay Degterinsky](https://github.com/evillique)). -* Destroy fiber in case of exception in cancelBefore in AsyncTaskExecutor [#55516](https://github.com/ClickHouse/ClickHouse/pull/55516) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix crash in QueryNormalizer with cyclic aliases [#55602](https://github.com/ClickHouse/ClickHouse/pull/55602) ([vdimir](https://github.com/vdimir)). -* Fix filtering by virtual columns with OR filter in query (resubmit) [#55678](https://github.com/ClickHouse/ClickHouse/pull/55678) ([Azat Khuzhin](https://github.com/azat)). +* Backported in [#55304](https://github.com/ClickHouse/ClickHouse/issues/55304): Fix "Invalid number of rows in Chunk" in MaterializedPostgreSQL (which could happen with PostgreSQL version >= 13). [#54844](https://github.com/ClickHouse/ClickHouse/pull/54844) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Backported in [#55018](https://github.com/ClickHouse/ClickHouse/issues/55018): Move obsolete format settings to a separate section and use them together with all format settings to avoid `Unknown setting` exceptions when obsolete format settings are used. Closes [#54792](https://github.com/ClickHouse/ClickHouse/issues/54792).
[#54855](https://github.com/ClickHouse/ClickHouse/pull/54855) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#55097](https://github.com/ClickHouse/ClickHouse/issues/55097): Insert quorum could be marked as satisfied incorrectly in case of keeper retries while waiting for the quorum. Fixes [#54543](https://github.com/ClickHouse/ClickHouse/issues/54543). [#55026](https://github.com/ClickHouse/ClickHouse/pull/55026) ([Igor Nikonov](https://github.com/devcrafter)). +* Backported in [#55473](https://github.com/ClickHouse/ClickHouse/issues/55473): Prevent attaching partitions from tables that don't have the same indices or projections defined. [#55062](https://github.com/ClickHouse/ClickHouse/pull/55062) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Backported in [#55461](https://github.com/ClickHouse/ClickHouse/issues/55461): If an exception happens in `ShellCommandSource` constructor after some of the `send_data_threads` are started, they need to be join()-ed, otherwise abort() will be triggered in `ThreadFromGlobalPool` destructor. Fixes [#55091](https://github.com/ClickHouse/ClickHouse/issues/55091). [#55103](https://github.com/ClickHouse/ClickHouse/pull/55103) ([Alexander Gololobov](https://github.com/davenger)). +* Backported in [#55412](https://github.com/ClickHouse/ClickHouse/issues/55412): Fix deadlock in LDAP assigned role update for non-existing ClickHouse roles. [#55119](https://github.com/ClickHouse/ClickHouse/pull/55119) ([Julian Maicher](https://github.com/jmaicher)). +* Backported in [#55323](https://github.com/ClickHouse/ClickHouse/issues/55323): Fix for background download in fs cache. [#55252](https://github.com/ClickHouse/ClickHouse/pull/55252) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Backported in [#55349](https://github.com/ClickHouse/ClickHouse/issues/55349): Fix functions execution over sparse columns (fixes `DB::Exception: isDefaultAt is not implemented for Function: while executing 'FUNCTION Capture` error). [#55275](https://github.com/ClickHouse/ClickHouse/pull/55275) ([Azat Khuzhin](https://github.com/azat)). +* Backported in [#55475](https://github.com/ClickHouse/ClickHouse/issues/55475): Fix an issue with the inability to drop a detached partition in `ReplicatedMergeTree` engines family on top of S3 (without zero-copy replication). Fixes issue [#55225](https://github.com/ClickHouse/ClickHouse/issues/55225). Fix bug with abandoned blobs on S3 for complex data types like Arrays or Nested columns. Partially fixes [#52393](https://github.com/ClickHouse/ClickHouse/issues/52393). Many kudos to @alifirat for examples. [#55309](https://github.com/ClickHouse/ClickHouse/pull/55309) ([alesapin](https://github.com/alesapin)). +* Backported in [#55399](https://github.com/ClickHouse/ClickHouse/issues/55399): An optimization introduced one year ago was wrong. This closes [#55272](https://github.com/ClickHouse/ClickHouse/issues/55272). [#55353](https://github.com/ClickHouse/ClickHouse/pull/55353) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#55437](https://github.com/ClickHouse/ClickHouse/issues/55437): Fix parsing of arrays in cast operator (`::`). [#55417](https://github.com/ClickHouse/ClickHouse/pull/55417) ([Anton Popov](https://github.com/CurtizJ)). +* Backported in [#55635](https://github.com/ClickHouse/ClickHouse/issues/55635): Fix filtering by virtual columns with OR filter in query (`_part*` filtering for `MergeTree`, `_path`/`_file` for various `File`/`HDFS`/...
engines, `_table` for `Merge`). [#55418](https://github.com/ClickHouse/ClickHouse/pull/55418) ([Azat Khuzhin](https://github.com/azat)). +* Backported in [#55445](https://github.com/ClickHouse/ClickHouse/issues/55445): Fix connection issues that occurred with some versions of MongoDB. Closes [#55376](https://github.com/ClickHouse/ClickHouse/issues/55376), [#55232](https://github.com/ClickHouse/ClickHouse/issues/55232). [#55419](https://github.com/ClickHouse/ClickHouse/pull/55419) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#55534](https://github.com/ClickHouse/ClickHouse/issues/55534): Fix possible deadlock caused by a fiber that was not destroyed in case of an exception in async task cancellation. Closes [#55185](https://github.com/ClickHouse/ClickHouse/issues/55185). [#55516](https://github.com/ClickHouse/ClickHouse/pull/55516) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#55747](https://github.com/ClickHouse/ClickHouse/issues/55747): Fix crash in QueryNormalizer with cyclic aliases. [#55602](https://github.com/ClickHouse/ClickHouse/pull/55602) ([vdimir](https://github.com/vdimir)). +* Backported in [#55760](https://github.com/ClickHouse/ClickHouse/issues/55760): Fix filtering by virtual columns with OR filter in query (_part* filtering for MergeTree, _path/_file for various File/HDFS/... engines, _table for Merge). [#55678](https://github.com/ClickHouse/ClickHouse/pull/55678) ([Azat Khuzhin](https://github.com/azat)). #### NO CL CATEGORY @@ -46,6 +46,6 @@ sidebar_label: 2023 * Clean data dir and always start an old server version in aggregate functions compatibility test. [#55105](https://github.com/ClickHouse/ClickHouse/pull/55105) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * check if block is empty after async insert retries [#55143](https://github.com/ClickHouse/ClickHouse/pull/55143) ([Han Fei](https://github.com/hanfei1991)). * MaterializedPostgreSQL: remove back check [#55297](https://github.com/ClickHouse/ClickHouse/pull/55297) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Remove existing moving/ dir if allow_remove_stale_moving_parts is off [#55480](https://github.com/ClickHouse/ClickHouse/pull/55480) ([Mike Kot](https://github.com/myrrc)). +* Remove existing moving/ dir if allow_remove_stale_moving_parts is off [#55480](https://github.com/ClickHouse/ClickHouse/pull/55480) ([Mikhail Kot](https://github.com/myrrc)). * Bump curl to 8.4 [#55492](https://github.com/ClickHouse/ClickHouse/pull/55492) ([Robert Schulze](https://github.com/rschu1ze)). diff --git a/docs/changelogs/v23.8.5.16-lts.md b/docs/changelogs/v23.8.5.16-lts.md index 4a23b8892be..32ddbd6031d 100644 --- a/docs/changelogs/v23.8.5.16-lts.md +++ b/docs/changelogs/v23.8.5.16-lts.md @@ -12,9 +12,9 @@ sidebar_label: 2023 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix storage Iceberg files retrieval [#55144](https://github.com/ClickHouse/ClickHouse/pull/55144) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Try to fix possible segfault in Native ORC input format [#55891](https://github.com/ClickHouse/ClickHouse/pull/55891) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix window functions in case of sparse columns. [#55895](https://github.com/ClickHouse/ClickHouse/pull/55895) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Backported in [#55736](https://github.com/ClickHouse/ClickHouse/issues/55736): Fix Iceberg metadata parsing: delete files were not checked.
[#55144](https://github.com/ClickHouse/ClickHouse/pull/55144) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Backported in [#55969](https://github.com/ClickHouse/ClickHouse/issues/55969): Try to fix possible segfault in Native ORC input format. Closes [#55873](https://github.com/ClickHouse/ClickHouse/issues/55873). [#55891](https://github.com/ClickHouse/ClickHouse/pull/55891) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#55907](https://github.com/ClickHouse/ClickHouse/issues/55907): Fix window functions in case of sparse columns. Previously some queries with window functions returned invalid results or made ClickHouse crash when the columns were sparse. [#55895](https://github.com/ClickHouse/ClickHouse/pull/55895) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). #### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v23.8.6.16-lts.md b/docs/changelogs/v23.8.6.16-lts.md index 6eb752e987c..df6c03cd668 100644 --- a/docs/changelogs/v23.8.6.16-lts.md +++ b/docs/changelogs/v23.8.6.16-lts.md @@ -9,11 +9,11 @@ sidebar_label: 2023 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix rare case of CHECKSUM_DOESNT_MATCH error [#54549](https://github.com/ClickHouse/ClickHouse/pull/54549) ([alesapin](https://github.com/alesapin)). -* Fix: avoid using regex match, possibly containing alternation, as a key condition. [#54696](https://github.com/ClickHouse/ClickHouse/pull/54696) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Fix a crash during table loading on startup [#56232](https://github.com/ClickHouse/ClickHouse/pull/56232) ([Nikolay Degterinsky](https://github.com/evillique)). -* Fix segfault in signal handler for Keeper [#56266](https://github.com/ClickHouse/ClickHouse/pull/56266) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix buffer overflow in T64 [#56434](https://github.com/ClickHouse/ClickHouse/pull/56434) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#54583](https://github.com/ClickHouse/ClickHouse/issues/54583): Fix a rare bug in replicated merge tree which could lead to self-recovering `CHECKSUM_DOESNT_MATCH` error in logs. [#54549](https://github.com/ClickHouse/ClickHouse/pull/54549) ([alesapin](https://github.com/alesapin)). +* Backported in [#56253](https://github.com/ClickHouse/ClickHouse/issues/56253): Fixed a bug where the match() function (regex) with a pattern containing alternation produced an incorrect key condition. [#54696](https://github.com/ClickHouse/ClickHouse/pull/54696) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Backported in [#56322](https://github.com/ClickHouse/ClickHouse/issues/56322): Fix a crash during table loading on startup. Closes [#55767](https://github.com/ClickHouse/ClickHouse/issues/55767). [#56232](https://github.com/ClickHouse/ClickHouse/pull/56232) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#56292](https://github.com/ClickHouse/ClickHouse/issues/56292): Fix segfault in signal handler for Keeper. [#56266](https://github.com/ClickHouse/ClickHouse/pull/56266) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#56443](https://github.com/ClickHouse/ClickHouse/issues/56443): Fix crash due to buffer overflow while decompressing malformed data using `T64` codec. This issue was found with [ClickHouse Bug Bounty Program](https://github.com/ClickHouse/ClickHouse/issues/38986) by https://twitter.com/malacupa.
[#56434](https://github.com/ClickHouse/ClickHouse/pull/56434) ([Alexey Milovidov](https://github.com/alexey-milovidov)). #### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v23.8.7.24-lts.md b/docs/changelogs/v23.8.7.24-lts.md index 37862c17315..042484e2404 100644 --- a/docs/changelogs/v23.8.7.24-lts.md +++ b/docs/changelogs/v23.8.7.24-lts.md @@ -12,12 +12,12 @@ sidebar_label: 2023 #### Bug Fix (user-visible misbehavior in an official stable release) -* Select from system tables when table based on table function. [#55540](https://github.com/ClickHouse/ClickHouse/pull/55540) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). -* Fix incomplete query result for UNION in view() function. [#56274](https://github.com/ClickHouse/ClickHouse/pull/56274) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix crash in case of adding a column with type Object(JSON) [#56307](https://github.com/ClickHouse/ClickHouse/pull/56307) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). -* Fix segfault during Kerberos initialization [#56401](https://github.com/ClickHouse/ClickHouse/pull/56401) ([Nikolay Degterinsky](https://github.com/evillique)). -* Fix: RabbitMQ OpenSSL dynamic loading issue [#56703](https://github.com/ClickHouse/ClickHouse/pull/56703) ([Igor Nikonov](https://github.com/devcrafter)). -* Fix crash in FPC codec [#56795](https://github.com/ClickHouse/ClickHouse/pull/56795) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#56581](https://github.com/ClickHouse/ClickHouse/issues/56581): Prevent reference to a remote data source for the `data_paths` column in `system.tables` if the table is created with a table function using explicit column description. [#55540](https://github.com/ClickHouse/ClickHouse/pull/55540) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Backported in [#56877](https://github.com/ClickHouse/ClickHouse/issues/56877): Fix incomplete query result for `UNION` in `view()` table function. [#56274](https://github.com/ClickHouse/ClickHouse/pull/56274) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Backported in [#56409](https://github.com/ClickHouse/ClickHouse/issues/56409): Prohibit adding a column with type `Object(JSON)` to an existing table. This closes [#56095](https://github.com/ClickHouse/ClickHouse/issues/56095). This closes [#49944](https://github.com/ClickHouse/ClickHouse/issues/49944). [#56307](https://github.com/ClickHouse/ClickHouse/pull/56307) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). +* Backported in [#56756](https://github.com/ClickHouse/ClickHouse/issues/56756): Fix a segfault caused by a thrown exception in Kerberos initialization during the creation of the Kafka table. Closes [#56073](https://github.com/ClickHouse/ClickHouse/issues/56073). [#56401](https://github.com/ClickHouse/ClickHouse/pull/56401) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#56748](https://github.com/ClickHouse/ClickHouse/issues/56748): Fixed an issue where the RabbitMQ table engine wasn't able to connect to RabbitMQ over a secure connection. [#56703](https://github.com/ClickHouse/ClickHouse/pull/56703) ([Igor Nikonov](https://github.com/devcrafter)). +* Backported in [#56839](https://github.com/ClickHouse/ClickHouse/issues/56839): The server crashed when decompressing malformed data using the `FPC` codec. This issue was found with [ClickHouse Bug Bounty Program](https://github.com/ClickHouse/ClickHouse/issues/38986) by https://twitter.com/malacupa.
[#56795](https://github.com/ClickHouse/ClickHouse/pull/56795) ([Alexey Milovidov](https://github.com/alexey-milovidov)). #### NO CL CATEGORY diff --git a/docs/changelogs/v23.8.8.20-lts.md b/docs/changelogs/v23.8.8.20-lts.md index 345cfcccf17..f45498cb61f 100644 --- a/docs/changelogs/v23.8.8.20-lts.md +++ b/docs/changelogs/v23.8.8.20-lts.md @@ -16,9 +16,9 @@ sidebar_label: 2023 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix ON CLUSTER queries without database on initial node [#56484](https://github.com/ClickHouse/ClickHouse/pull/56484) ([Nikolay Degterinsky](https://github.com/evillique)). -* Fix buffer overflow in Gorilla codec [#57107](https://github.com/ClickHouse/ClickHouse/pull/57107) ([Nikolay Degterinsky](https://github.com/evillique)). -* Close interserver connection on any exception before authentication [#57142](https://github.com/ClickHouse/ClickHouse/pull/57142) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#57111](https://github.com/ClickHouse/ClickHouse/issues/57111): Fix ON CLUSTER queries without the database being present on an initial node. Closes [#55009](https://github.com/ClickHouse/ClickHouse/issues/55009). [#56484](https://github.com/ClickHouse/ClickHouse/pull/56484) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#57169](https://github.com/ClickHouse/ClickHouse/issues/57169): Fix crash due to buffer overflow while decompressing malformed data using `Gorilla` codec. This issue was found with [ClickHouse Bug Bounty Program](https://github.com/ClickHouse/ClickHouse/issues/38986) by https://twitter.com/malacupa. [#57107](https://github.com/ClickHouse/ClickHouse/pull/57107) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#57175](https://github.com/ClickHouse/ClickHouse/issues/57175): Close interserver connection for any exception that happens before the authentication. This issue was found with [ClickHouse Bug Bounty Program](https://github.com/ClickHouse/ClickHouse/issues/38986) by https://twitter.com/malacupa. [#57142](https://github.com/ClickHouse/ClickHouse/pull/57142) ([Antonio Andelic](https://github.com/antonio2368)). #### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v23.8.9.54-lts.md b/docs/changelogs/v23.8.9.54-lts.md index 00607c60c39..db13238f4ad 100644 --- a/docs/changelogs/v23.8.9.54-lts.md +++ b/docs/changelogs/v23.8.9.54-lts.md @@ -11,29 +11,29 @@ sidebar_label: 2024 * Backported in [#57668](https://github.com/ClickHouse/ClickHouse/issues/57668): Output valid JSON/XML on exception during HTTP query execution. Add setting `http_write_exception_in_output_format` to enable/disable this behaviour (enabled by default). [#52853](https://github.com/ClickHouse/ClickHouse/pull/52853) ([Kruglov Pavel](https://github.com/Avogar)). * Backported in [#58491](https://github.com/ClickHouse/ClickHouse/issues/58491): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)). * Backported in [#57238](https://github.com/ClickHouse/ClickHouse/issues/57238): Fetching a part waits until that part is fully committed on the remote replica. It is better not to send a part in the PreActive state. In the case of zero-copy replication, this is a mandatory restriction.
[#56808](https://github.com/ClickHouse/ClickHouse/pull/56808) ([Sema Checherinda](https://github.com/CheSema)). -* Backported in [#57655](https://github.com/ClickHouse/ClickHouse/issues/57655): Handle sigabrt case when getting PostgreSQl table structure with empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)). +* Backported in [#57655](https://github.com/ClickHouse/ClickHouse/issues/57655): Handle SIGABRT case when getting PostgreSQL table structure with empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mikhail Kot](https://github.com/myrrc)). #### Build/Testing/Packaging Improvement * Backported in [#57582](https://github.com/ClickHouse/ClickHouse/issues/57582): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). #### Bug Fix (user-visible misbehavior in an official stable release) -* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)). -* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)). -* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)). -* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)). -* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)). -* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)). -* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). -* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)). -* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)). -* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)). -* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
-* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#58324](https://github.com/ClickHouse/ClickHouse/issues/58324): Flatten only true Nested type if flatten_nested=1, not all Array(Tuple). [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#57395](https://github.com/ClickHouse/ClickHouse/issues/57395): Fix ALTER COLUMN with ALIAS that previously threw the `NO_SUCH_COLUMN_IN_TABLE` exception. Closes [#50927](https://github.com/ClickHouse/ClickHouse/issues/50927). [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#57449](https://github.com/ClickHouse/ClickHouse/issues/57449): Now ALTER columns which are incompatible with columns used in some projections will be forbidden. Previously it could result in incorrect data. This fixes [#56932](https://github.com/ClickHouse/ClickHouse/issues/56932). This PR also allows RENAME of index columns, and improves the exception message by providing clear information on the affected indices or projections causing the prevention. [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)). +* Backported in [#57281](https://github.com/ClickHouse/ClickHouse/issues/57281): Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column. Closes [#42918](https://github.com/ClickHouse/ClickHouse/issues/42918). [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)). +* Backported in [#57247](https://github.com/ClickHouse/ClickHouse/issues/57247): Fix incorrect JOIN plan optimization with partially materialized normal projection. This fixes [#57194](https://github.com/ClickHouse/ClickHouse/issues/57194). [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)). +* Backported in [#57346](https://github.com/ClickHouse/ClickHouse/issues/57346): Fix `ReadonlyReplica` metric for some cases (e.g. when a table cannot be initialized because of difference in local and Keeper data). [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#58434](https://github.com/ClickHouse/ClickHouse/issues/58434): Fix working with read buffers in StreamingFormatExecutor, previously it could lead to segfaults in Kafka and other streaming engines. [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#57539](https://github.com/ClickHouse/ClickHouse/issues/57539): Fix parsing of `SYSTEM STOP LISTEN TCP SECURE`. [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)). +* Backported in [#57779](https://github.com/ClickHouse/ClickHouse/issues/57779): Ignore the ON CLUSTER clause in GRANT/REVOKE queries for management of replicated access entities, controlled by the `ignore_on_cluster_for_replicated_access_entities_queries` setting (replicated access entities are kept under the `/clickhouse/access/` path).
[#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Backported in [#58256](https://github.com/ClickHouse/ClickHouse/issues/58256): Disable system.kafka_consumers by default (due to possible live memory leak). [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)). +* Backported in [#57923](https://github.com/ClickHouse/ClickHouse/issues/57923): Fix invalid memory access in BLAKE3. [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#58084](https://github.com/ClickHouse/ClickHouse/issues/58084): Normalize function names in `CREATE INDEX` query. Avoid `Existing table metadata in ZooKeeper differs in skip indexes` errors if an alias was used instead of the canonical function name when creating an index. [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Backported in [#58110](https://github.com/ClickHouse/ClickHouse/issues/58110): Keeper fix: Leader should correctly fail on preprocessing a request if it is not initialized. [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#58155](https://github.com/ClickHouse/ClickHouse/issues/58155): Fix Integer overflow in Poco::UTF32Encoding. [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)). +* Backported in [#58188](https://github.com/ClickHouse/ClickHouse/issues/58188): Parallel parsing for `JSONCompactEachRow` could work incorrectly in previous versions. This closes [#58180](https://github.com/ClickHouse/ClickHouse/issues/58180). [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#58301](https://github.com/ClickHouse/ClickHouse/issues/58301): Fix parallel parsing for JSONCompactEachRow. [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)). #### NO CL ENTRY diff --git a/docs/changelogs/v24.1.1.2048-stable.md b/docs/changelogs/v24.1.1.2048-stable.md index 8e4647da86e..c509ce0058e 100644 --- a/docs/changelogs/v24.1.1.2048-stable.md +++ b/docs/changelogs/v24.1.1.2048-stable.md @@ -133,56 +133,56 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Add join keys conversion for nested lowcardinality [#51550](https://github.com/ClickHouse/ClickHouse/pull/51550) ([vdimir](https://github.com/vdimir)). -* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix a bug with projections and the aggregate_functions_null_for_empty setting during insertion. [#56944](https://github.com/ClickHouse/ClickHouse/pull/56944) ([Amos Bird](https://github.com/amosbird)). -* Fixed potential exception due to stale profile UUID [#57263](https://github.com/ClickHouse/ClickHouse/pull/57263) ([Vasily Nemkov](https://github.com/Enmk)). -* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)). -* Ignore MVs with dropped target table during pushing to views [#57520](https://github.com/ClickHouse/ClickHouse/pull/57520) ([Kruglov Pavel](https://github.com/Avogar)).
-* [RFC] Eliminate possible race between ALTER_METADATA and MERGE_PARTS [#57755](https://github.com/ClickHouse/ClickHouse/pull/57755) ([Azat Khuzhin](https://github.com/azat)). -* Fix the exprs order bug in group by with rollup [#57786](https://github.com/ClickHouse/ClickHouse/pull/57786) ([Chen768959](https://github.com/Chen768959)). -* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)). -* Allow users to work with symlinks in user_files_path (again) [#58447](https://github.com/ClickHouse/ClickHouse/pull/58447) ([Duc Canh Le](https://github.com/canhld94)). -* Fix segfault when graphite table does not have agg function [#58453](https://github.com/ClickHouse/ClickHouse/pull/58453) ([Duc Canh Le](https://github.com/canhld94)). -* Delay reading from StorageKafka to allow multiple reads in materialized views [#58477](https://github.com/ClickHouse/ClickHouse/pull/58477) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Fix a stupid case of intersecting parts [#58482](https://github.com/ClickHouse/ClickHouse/pull/58482) ([Alexander Tokmakov](https://github.com/tavplubix)). -* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)). -* Enable ordinary databases while restoration [#58520](https://github.com/ClickHouse/ClickHouse/pull/58520) ([Jihyuk Bok](https://github.com/tomahawk28)). -* Fix hive threadpool read ORC/Parquet/... Failed [#58537](https://github.com/ClickHouse/ClickHouse/pull/58537) ([sunny](https://github.com/sunny19930321)). -* Hide credentials in system.backup_log base_backup_name column [#58550](https://github.com/ClickHouse/ClickHouse/pull/58550) ([Daniel Pozo Escalona](https://github.com/danipozo)). -* toStartOfInterval for milli- microsencods values rounding [#58557](https://github.com/ClickHouse/ClickHouse/pull/58557) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). -* Disable max_joined_block_rows in ConcurrentHashJoin [#58595](https://github.com/ClickHouse/ClickHouse/pull/58595) ([vdimir](https://github.com/vdimir)). -* Fix join using nullable in old analyzer [#58596](https://github.com/ClickHouse/ClickHouse/pull/58596) ([vdimir](https://github.com/vdimir)). -* `makeDateTime64()`: Allow non-const fraction argument [#58597](https://github.com/ClickHouse/ClickHouse/pull/58597) ([Robert Schulze](https://github.com/rschu1ze)). -* Fix possible NULL dereference during symbolizing inline frames [#58607](https://github.com/ClickHouse/ClickHouse/pull/58607) ([Azat Khuzhin](https://github.com/azat)). -* Improve isolation of query cache entries under re-created users or role switches [#58611](https://github.com/ClickHouse/ClickHouse/pull/58611) ([Robert Schulze](https://github.com/rschu1ze)). -* Fix broken partition key analysis when doing projection optimization [#58638](https://github.com/ClickHouse/ClickHouse/pull/58638) ([Amos Bird](https://github.com/amosbird)). -* Query cache: Fix per-user quota [#58731](https://github.com/ClickHouse/ClickHouse/pull/58731) ([Robert Schulze](https://github.com/rschu1ze)). -* Fix stream partitioning in parallel window functions [#58739](https://github.com/ClickHouse/ClickHouse/pull/58739) ([Dmitry Novik](https://github.com/novikd)). 
-* Fix double destroy call on exception throw in addBatchLookupTable8 [#58745](https://github.com/ClickHouse/ClickHouse/pull/58745) ([Raúl Marín](https://github.com/Algunenano)). -* Don't process requests in Keeper during shutdown [#58765](https://github.com/ClickHouse/ClickHouse/pull/58765) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix Segfault in `SlabsPolygonIndex::find` [#58771](https://github.com/ClickHouse/ClickHouse/pull/58771) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). -* Fix JSONExtract function for LowCardinality(Nullable) columns [#58808](https://github.com/ClickHouse/ClickHouse/pull/58808) ([vdimir](https://github.com/vdimir)). -* Table CREATE DROP Poco::Logger memory leak fix [#58831](https://github.com/ClickHouse/ClickHouse/pull/58831) ([Maksim Kita](https://github.com/kitaisreal)). -* Fix HTTP compressors finalization [#58846](https://github.com/ClickHouse/ClickHouse/pull/58846) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Multiple read file log storage in mv [#58877](https://github.com/ClickHouse/ClickHouse/pull/58877) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Restriction for the access key id for s3. [#58900](https://github.com/ClickHouse/ClickHouse/pull/58900) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). -* Fix possible crash in clickhouse-local during loading suggestions [#58907](https://github.com/ClickHouse/ClickHouse/pull/58907) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix crash when indexHint() is used [#58911](https://github.com/ClickHouse/ClickHouse/pull/58911) ([Dmitry Novik](https://github.com/novikd)). -* Fix StorageURL forgetting headers on server restart [#58933](https://github.com/ClickHouse/ClickHouse/pull/58933) ([Michael Kolupaev](https://github.com/al13n321)). -* Analyzer: fix storage replacement with insertion block [#58958](https://github.com/ClickHouse/ClickHouse/pull/58958) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Fix seek in ReadBufferFromZipArchive [#58966](https://github.com/ClickHouse/ClickHouse/pull/58966) ([Michael Kolupaev](https://github.com/al13n321)). -* `DROP INDEX` of inverted index now removes all relevant files from persistence [#59040](https://github.com/ClickHouse/ClickHouse/pull/59040) ([mochi](https://github.com/MochiXu)). -* Fix data race on query_factories_info [#59049](https://github.com/ClickHouse/ClickHouse/pull/59049) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Disable "Too many redirects" error retry [#59099](https://github.com/ClickHouse/ClickHouse/pull/59099) ([skyoct](https://github.com/skyoct)). -* Fix aggregation issue in mixed x86_64 and ARM clusters [#59132](https://github.com/ClickHouse/ClickHouse/pull/59132) ([Harry Lee](https://github.com/HarryLeeIBM)). -* Fix not started database shutdown deadlock [#59137](https://github.com/ClickHouse/ClickHouse/pull/59137) ([Sergei Trifonov](https://github.com/serxa)). -* Fix: LIMIT BY and LIMIT in distributed query [#59153](https://github.com/ClickHouse/ClickHouse/pull/59153) ([Igor Nikonov](https://github.com/devcrafter)). -* Fix crash with nullable timezone for `toString` [#59190](https://github.com/ClickHouse/ClickHouse/pull/59190) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). -* Fix abort in iceberg metadata on bad file paths [#59275](https://github.com/ClickHouse/ClickHouse/pull/59275) ([Kruglov Pavel](https://github.com/Avogar)). 
-* Fix architecture name in select of Rust target [#59307](https://github.com/ClickHouse/ClickHouse/pull/59307) ([p1rattttt](https://github.com/p1rattttt)). -* Fix not-ready set for system.tables [#59351](https://github.com/ClickHouse/ClickHouse/pull/59351) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix lazy initialization in RabbitMQ [#59352](https://github.com/ClickHouse/ClickHouse/pull/59352) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix possible errors when joining sub-types with low cardinality (e.g., Array(LowCardinality(T)) with Array(T)). [#51550](https://github.com/ClickHouse/ClickHouse/pull/51550) ([vdimir](https://github.com/vdimir)). +* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple). [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix a bug with projections and the `aggregate_functions_null_for_empty` setting during insertion. This is an addition to [#42198](https://github.com/ClickHouse/ClickHouse/issues/42198) and [#49873](https://github.com/ClickHouse/ClickHouse/issues/49873). The bug was found by fuzzer in [#56666](https://github.com/ClickHouse/ClickHouse/issues/56666). This PR also fixes potential issues with projections and the `transform_null_in` setting. [#56944](https://github.com/ClickHouse/ClickHouse/pull/56944) ([Amos Bird](https://github.com/amosbird)). +* Fixed a (rare) exception that occurred when a user's assigned profiles were updated right after the user logged in, which could cause a missing entry in `session_log` or problems with logging in. [#57263](https://github.com/ClickHouse/ClickHouse/pull/57263) ([Vasily Nemkov](https://github.com/Enmk)). +* Fix working with read buffers in StreamingFormatExecutor, previously it could lead to segfaults in Kafka and other streaming engines. [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)). +* Ignore MVs with dropped target table during pushing to views in insert to a source table. [#57520](https://github.com/ClickHouse/ClickHouse/pull/57520) ([Kruglov Pavel](https://github.com/Avogar)). +* Eliminate possible race between ALTER_METADATA and MERGE_PARTS (that leads to checksum mismatch - CHECKSUM_DOESNT_MATCH). [#57755](https://github.com/ClickHouse/ClickHouse/pull/57755) ([Azat Khuzhin](https://github.com/azat)). +* Fix the expression order bug in GROUP BY with ROLLUP. [#57786](https://github.com/ClickHouse/ClickHouse/pull/57786) ([Chen768959](https://github.com/Chen768959)). +* Fix a bug in zero-copy-replication (an experimental feature) that could lead to `The specified key does not exist` error and data loss. It could happen when dropping a replica with broken or unexpected/ignored detached parts. Fixes [#57985](https://github.com/ClickHouse/ClickHouse/issues/57985). [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Fix a bug where users could not work with symlinks in `user_files_path`. [#58447](https://github.com/ClickHouse/ClickHouse/pull/58447) ([Duc Canh Le](https://github.com/canhld94)). +* Fix segfault when graphite table does not have agg function. [#58453](https://github.com/ClickHouse/ClickHouse/pull/58453) ([Duc Canh Le](https://github.com/canhld94)). +* Fix reading multiple times from KafkaEngine in materialized views. [#58477](https://github.com/ClickHouse/ClickHouse/pull/58477) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Fix `Part ...
intersects part ...` error that might occur in `ReplicatedMergeTree` when the server was restarted just after [automatically] dropping [an empty] part and adjacent parts were merged. The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/56282. [#58482](https://github.com/ClickHouse/ClickHouse/pull/58482) ([Alexander Tokmakov](https://github.com/tavplubix)). +* Disable MergeTreePrefetchedReadPool for LIMIT-only queries, because the time spent filling per-thread tasks can be greater than the whole query execution time for big tables with a small limit. [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)). +* While a restore is underway in ClickHouse, allow restoring databases with the `Ordinary` engine. [#58520](https://github.com/ClickHouse/ClickHouse/pull/58520) ([Jihyuk Bok](https://github.com/tomahawk28)). +* Fix read buffer creation in Hive engine when the thread_pool read method is used. Closes [#57978](https://github.com/ClickHouse/ClickHouse/issues/57978). [#58537](https://github.com/ClickHouse/ClickHouse/pull/58537) ([sunny](https://github.com/sunny19930321)). +* Hide credentials in `base_backup_name` column of `system.backup_log`. [#58550](https://github.com/ClickHouse/ClickHouse/pull/58550) ([Daniel Pozo Escalona](https://github.com/danipozo)). +* While executing queries like `SELECT toStartOfInterval(toDateTime64('2023-10-09 10:11:12.000999', 6), toIntervalMillisecond(1));`, the result was not rounded to 1 millisecond previously. This PR solves the issue and also addresses some problems appearing in https://github.com/ClickHouse/ClickHouse/pull/56738. [#58557](https://github.com/ClickHouse/ClickHouse/pull/58557) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). +* Fix logical error in `parallel_hash` working with `max_joined_block_size_rows`. [#58595](https://github.com/ClickHouse/ClickHouse/pull/58595) ([vdimir](https://github.com/vdimir)). +* Fix error in join with `USING` when one of the tables has a `Nullable` key. [#58596](https://github.com/ClickHouse/ClickHouse/pull/58596) ([vdimir](https://github.com/vdimir)). +* The (optional) `fraction` argument in function `makeDateTime64()` can now be non-const. This was possible already with ClickHouse <= 23.8. [#58597](https://github.com/ClickHouse/ClickHouse/pull/58597) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix possible server crash during symbolizing inline frames. [#58607](https://github.com/ClickHouse/ClickHouse/pull/58607) ([Azat Khuzhin](https://github.com/azat)). +* The query cache now denies access to entries when the user is re-created or assumes another role. This prevents attacks where (1) a user with the same name as a dropped user may access the old user's cache entries or (2) a user with a different role may access cache entries of a role with a different row policy. [#58611](https://github.com/ClickHouse/ClickHouse/pull/58611) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix broken partition key analysis when doing projection optimization with `force_index_by_date = 1`. This fixes [#58620](https://github.com/ClickHouse/ClickHouse/issues/58620). We don't need partition key analysis for projections after https://github.com/ClickHouse/ClickHouse/pull/56502. [#58638](https://github.com/ClickHouse/ClickHouse/pull/58638) ([Amos Bird](https://github.com/amosbird)). +* The query cache now behaves properly when per-user quotas are defined and `SYSTEM DROP QUERY CACHE` was run.
[#58731](https://github.com/ClickHouse/ClickHouse/pull/58731) ([Robert Schulze](https://github.com/rschu1ze)). +* Fix data stream partitioning for window functions when there are different window descriptions with similar prefixes but different partitioning. Fixes [#58714](https://github.com/ClickHouse/ClickHouse/issues/58714). [#58739](https://github.com/ClickHouse/ClickHouse/pull/58739) ([Dmitry Novik](https://github.com/novikd)). +* Fix double destroy call on exception throw in addBatchLookupTable8. [#58745](https://github.com/ClickHouse/ClickHouse/pull/58745) ([Raúl Marín](https://github.com/Algunenano)). +* Keeper fix: don't process requests during shutdown because it will lead to invalid state. [#58765](https://github.com/ClickHouse/ClickHouse/pull/58765) ([Antonio Andelic](https://github.com/antonio2368)). +* Fix a crash in the polygon dictionary. Fixes [#58612](https://github.com/ClickHouse/ClickHouse/issues/58612). [#58771](https://github.com/ClickHouse/ClickHouse/pull/58771) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). +* Fix possible crash in JSONExtract function extracting `LowCardinality(Nullable(T))` type. [#58808](https://github.com/ClickHouse/ClickHouse/pull/58808) ([vdimir](https://github.com/vdimir)). +* Fix a `Poco::Logger` memory leak on table CREATE/DROP. Closes [#57931](https://github.com/ClickHouse/ClickHouse/issues/57931). Closes [#58496](https://github.com/ClickHouse/ClickHouse/issues/58496). [#58831](https://github.com/ClickHouse/ClickHouse/pull/58831) ([Maksim Kita](https://github.com/kitaisreal)). +* Fix HTTP compressors. Follow-up [#58475](https://github.com/ClickHouse/ClickHouse/issues/58475). [#58846](https://github.com/ClickHouse/ClickHouse/pull/58846) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fix reading multiple times from FileLog engine in materialized views. [#58877](https://github.com/ClickHouse/ClickHouse/pull/58877) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Prevent specifying an `access_key_id` that does not match the [correct pattern](https://docs.aws.amazon.com/IAM/latest/APIReference/API_AccessKey.html). [#58900](https://github.com/ClickHouse/ClickHouse/pull/58900) ([MikhailBurdukov](https://github.com/MikhailBurdukov)). +* Fix possible crash in clickhouse-local during loading suggestions. Closes [#58825](https://github.com/ClickHouse/ClickHouse/issues/58825). [#58907](https://github.com/ClickHouse/ClickHouse/pull/58907) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix crash when `indexHint` function is used without arguments in the filters. [#58911](https://github.com/ClickHouse/ClickHouse/pull/58911) ([Dmitry Novik](https://github.com/novikd)). +* Fixed URL and S3 engines losing the `headers` argument on server restart. [#58933](https://github.com/ClickHouse/ClickHouse/pull/58933) ([Michael Kolupaev](https://github.com/al13n321)). +* Fix analyzer: insertion from a SELECT with a subquery referencing the insertion table should process only the insertion block for all table expressions. Fixes [#58080](https://github.com/ClickHouse/ClickHouse/issues/58080). Follow-up to [#50857](https://github.com/ClickHouse/ClickHouse/issues/50857). [#58958](https://github.com/ClickHouse/ClickHouse/pull/58958) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). +* Fixed reading parquet files from archives. [#58966](https://github.com/ClickHouse/ClickHouse/pull/58966) ([Michael Kolupaev](https://github.com/al13n321)).
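For readers of the last entry above: ClickHouse addresses a member of an archive through the `::` syntax of the `file` table function, which is the code path the archive-reading fix concerns (the pre-fix wording of the same entry called it a seek fix in `ReadBufferFromZipArchive`). A minimal sketch, assuming a hypothetical archive `data.zip` that contains `data.parquet`:

```sql
-- Hypothetical archive layout: data.zip contains data.parquet.
-- Reading a Parquet member of a zip archive requires seeking inside
-- ReadBufferFromZipArchive, which is what the fix above repairs.
SELECT count()
FROM file('data.zip :: data.parquet', 'Parquet');
```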
+* Experimental feature of inverted indices: `ALTER TABLE DROP INDEX` for an inverted index now removes all inverted index files from the new part (issue [#59039](https://github.com/ClickHouse/ClickHouse/issues/59039)). [#59040](https://github.com/ClickHouse/ClickHouse/pull/59040) ([mochi](https://github.com/MochiXu)). +* Fix data race on collecting factories info for system.query_log. [#59049](https://github.com/ClickHouse/ClickHouse/pull/59049) ([Kseniia Sumarokova](https://github.com/kssenii)). +* Fixes [#58967](https://github.com/ClickHouse/ClickHouse/issues/58967). [#59099](https://github.com/ClickHouse/ClickHouse/pull/59099) ([skyoct](https://github.com/skyoct)). +* Fixed wrong aggregation results in mixed x86_64 and ARM clusters. [#59132](https://github.com/ClickHouse/ClickHouse/pull/59132) ([Harry Lee](https://github.com/HarryLeeIBM)). +* Fix a deadlock that can happen during the shutdown of the server due to metadata loading failure. [#59137](https://github.com/ClickHouse/ClickHouse/pull/59137) ([Sergei Trifonov](https://github.com/serxa)). +* The combination of LIMIT BY and LIMIT could produce an incorrect result in distributed queries (parallel replicas included). [#59153](https://github.com/ClickHouse/ClickHouse/pull/59153) ([Igor Nikonov](https://github.com/devcrafter)). +* Fixes a crash in `toString()` with a nullable timezone. Fixes [#59126](https://github.com/ClickHouse/ClickHouse/issues/59126). [#59190](https://github.com/ClickHouse/ClickHouse/pull/59190) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). +* Fix abort in iceberg metadata on bad file paths. [#59275](https://github.com/ClickHouse/ClickHouse/pull/59275) ([Kruglov Pavel](https://github.com/Avogar)). +* Fix architecture name in select of Rust target. [#59307](https://github.com/ClickHouse/ClickHouse/pull/59307) ([p1rattttt](https://github.com/p1rattttt)). +* Fix `Not-ready Set` for queries from `system.tables` with `table IN (subquery)` filter expression. Fixes [#59342](https://github.com/ClickHouse/ClickHouse/issues/59342). [#59351](https://github.com/ClickHouse/ClickHouse/pull/59351) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Fix lazy initialization in RabbitMQ that could lead to logical error and not initialized state. [#59352](https://github.com/ClickHouse/ClickHouse/pull/59352) ([Kruglov Pavel](https://github.com/Avogar)). #### NO CL ENTRY diff --git a/docs/changelogs/v24.1.2.5-stable.md b/docs/changelogs/v24.1.2.5-stable.md index bac25c9b9ed..080e24da6f0 100644 --- a/docs/changelogs/v24.1.2.5-stable.md +++ b/docs/changelogs/v24.1.2.5-stable.md @@ -9,6 +9,6 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix translate() with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)). -* Fix stacktraces for binaries without debug symbols [#59444](https://github.com/ClickHouse/ClickHouse/pull/59444) ([Azat Khuzhin](https://github.com/azat)). +* Backported in [#59425](https://github.com/ClickHouse/ClickHouse/issues/59425): Fix translate() with FixedString input. Could lead to crashes as it'd return a String column (vs the expected FixedString). This issue was found through the ClickHouse Bug Bounty Program by YohannJardin. [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59478](https://github.com/ClickHouse/ClickHouse/issues/59478): Fix stacktraces for binaries without debug symbols.
[#59444](https://github.com/ClickHouse/ClickHouse/pull/59444) ([Azat Khuzhin](https://github.com/azat)). diff --git a/docs/changelogs/v24.1.3.31-stable.md b/docs/changelogs/v24.1.3.31-stable.md index e898fba5c87..ec73672c8d5 100644 --- a/docs/changelogs/v24.1.3.31-stable.md +++ b/docs/changelogs/v24.1.3.31-stable.md @@ -13,13 +13,13 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix `ASTAlterCommand::formatImpl` in case of column specific settings... [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Make MAX use the same rules as permutation for complex types [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)). -* Fix corner case when passing `update_insert_deduplication_token_in_dependent_materialized_views` [#59544](https://github.com/ClickHouse/ClickHouse/pull/59544) ([Jordi Villar](https://github.com/jrdi)). -* Fix incorrect result of arrayElement / map[] on empty value [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)). -* Fix crash in topK when merging empty states [#59603](https://github.com/ClickHouse/ClickHouse/pull/59603) ([Raúl Marín](https://github.com/Algunenano)). -* Maintain function alias in RewriteSumFunctionWithSumAndCountVisitor [#59658](https://github.com/ClickHouse/ClickHouse/pull/59658) ([Raúl Marín](https://github.com/Algunenano)). -* Fix leftPad / rightPad function with FixedString input [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59726](https://github.com/ClickHouse/ClickHouse/issues/59726): Fix formatting of alter commands in case of column specific settings. [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Backported in [#59585](https://github.com/ClickHouse/ClickHouse/issues/59585): Make MAX use the same rules as permutation for complex types. [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59579](https://github.com/ClickHouse/ClickHouse/issues/59579): Fix a corner case when passing `update_insert_deduplication_token_in_dependent_materialized_views` setting. There is one corner case not covered due to the absence of tables in the path. [#59544](https://github.com/ClickHouse/ClickHouse/pull/59544) ([Jordi Villar](https://github.com/jrdi)). +* Backported in [#59647](https://github.com/ClickHouse/ClickHouse/issues/59647): Fix incorrect result of arrayElement / map[] on empty value. [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59639](https://github.com/ClickHouse/ClickHouse/issues/59639): Fix crash in topK when merging empty states. [#59603](https://github.com/ClickHouse/ClickHouse/pull/59603) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59696](https://github.com/ClickHouse/ClickHouse/issues/59696): Maintain function alias in RewriteSumFunctionWithSumAndCountVisitor. [#59658](https://github.com/ClickHouse/ClickHouse/pull/59658) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59764](https://github.com/ClickHouse/ClickHouse/issues/59764): Fix leftPad / rightPad function with FixedString input.
[#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)). #### NO CL ENTRY diff --git a/docs/changelogs/v24.1.4.20-stable.md b/docs/changelogs/v24.1.4.20-stable.md index 8612a485f12..1baec2178b1 100644 --- a/docs/changelogs/v24.1.4.20-stable.md +++ b/docs/changelogs/v24.1.4.20-stable.md @@ -15,10 +15,10 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix digest calculation in Keeper [#59439](https://github.com/ClickHouse/ClickHouse/pull/59439) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix distributed table with a constant sharding key [#59606](https://github.com/ClickHouse/ClickHouse/pull/59606) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix query start time on non initial queries [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)). -* Fix parsing of partition expressions surrounded by parens [#59901](https://github.com/ClickHouse/ClickHouse/pull/59901) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). +* Backported in [#59457](https://github.com/ClickHouse/ClickHouse/issues/59457): Keeper fix: fix digest calculation for nodes. [#59439](https://github.com/ClickHouse/ClickHouse/pull/59439) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#59682](https://github.com/ClickHouse/ClickHouse/issues/59682): Fix distributed table with a constant sharding key. [#59606](https://github.com/ClickHouse/ClickHouse/pull/59606) ([Vitaly Baranov](https://github.com/vitlibar)). +* Backported in [#59842](https://github.com/ClickHouse/ClickHouse/issues/59842): Fix query start time on non-initial queries. [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#59937](https://github.com/ClickHouse/ClickHouse/issues/59937): Fix parsing of partition expressions that are surrounded by parentheses, e.g.: `ALTER TABLE test DROP PARTITION ('2023-10-19')`. [#59901](https://github.com/ClickHouse/ClickHouse/pull/59901) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). #### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v24.1.5.6-stable.md b/docs/changelogs/v24.1.5.6-stable.md index ce46c51e2f4..caf246fcab6 100644 --- a/docs/changelogs/v24.1.5.6-stable.md +++ b/docs/changelogs/v24.1.5.6-stable.md @@ -9,7 +9,7 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* UniqExactSet read crash fix [#59928](https://github.com/ClickHouse/ClickHouse/pull/59928) ([Maksim Kita](https://github.com/kitaisreal)). +* Backported in [#59959](https://github.com/ClickHouse/ClickHouse/issues/59959): Fix crash during deserialization of aggregation function states that internally use `UniqExactSet`. Introduced in https://github.com/ClickHouse/ClickHouse/pull/59009. [#59928](https://github.com/ClickHouse/ClickHouse/pull/59928) ([Maksim Kita](https://github.com/kitaisreal)).
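For context on the `UniqExactSet` entry above: serialized aggregate-function states are written and read back through `AggregateFunction` columns, among other paths, and `uniqExact` is one of the functions whose state is backed by `UniqExactSet`. A minimal sketch of where such states get (de)serialized, with hypothetical table and column names:

```sql
-- Hypothetical table persisting a uniqExact state.
CREATE TABLE uniq_states
(
    k UInt8,
    st AggregateFunction(uniqExact, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY k;

-- Inserting serializes the state; background merges and reads
-- deserialize it, which is the code path the fix above concerns.
INSERT INTO uniq_states SELECT 1, uniqExactState(number) FROM numbers(1000);

SELECT uniqExactMerge(st) FROM uniq_states;
```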
#### NOT FOR CHANGELOG / INSIGNIFICANT diff --git a/docs/changelogs/v24.1.7.18-stable.md b/docs/changelogs/v24.1.7.18-stable.md index 603a83a67be..3bc94538174 100644 --- a/docs/changelogs/v24.1.7.18-stable.md +++ b/docs/changelogs/v24.1.7.18-stable.md @@ -9,10 +9,10 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix deadlock in parallel parsing when lots of rows are skipped due to errors [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix_max_query_size_for_kql_compound_operator: [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)). -* Fix crash with different allow_experimental_analyzer value in subqueries [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)). -* Fix Keeper reconfig for standalone binary [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#61330](https://github.com/ClickHouse/ClickHouse/issues/61330): Fix deadlock in parallel parsing when lots of rows are skipped due to errors. [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#61008](https://github.com/ClickHouse/ClickHouse/issues/61008): Fix the issue of `max_query_size` for KQL compound operator like mv-expand. Related to [#59626](https://github.com/ClickHouse/ClickHouse/issues/59626). [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)). +* Backported in [#61019](https://github.com/ClickHouse/ClickHouse/issues/61019): Fix crash when `allow_experimental_analyzer` setting value is changed in the subqueries. [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)). +* Backported in [#61293](https://github.com/ClickHouse/ClickHouse/issues/61293): Keeper: fix runtime reconfig for standalone binary. [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)). #### CI Fix or Improvement (changelog entry is not required) diff --git a/docs/changelogs/v24.1.8.22-stable.md b/docs/changelogs/v24.1.8.22-stable.md index f780de41c40..e615c60a942 100644 --- a/docs/changelogs/v24.1.8.22-stable.md +++ b/docs/changelogs/v24.1.8.22-stable.md @@ -9,12 +9,12 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix possible incorrect result of aggregate function `uniqExact` [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)). -* Fix consecutive keys optimization for nullable keys [#61393](https://github.com/ClickHouse/ClickHouse/pull/61393) ([Anton Popov](https://github.com/CurtizJ)). -* Fix bug when reading system.parts using UUID (issue 61220). [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)). -* Fix client `-s` argument [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). -* Fix string search with const position [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` for incorrect UTF-8 [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)). 
+* Backported in [#61451](https://github.com/ClickHouse/ClickHouse/issues/61451): Fix possible incorrect result of aggregate function `uniqExact`. [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)). +* Backported in [#61844](https://github.com/ClickHouse/ClickHouse/issues/61844): Fixed possible wrong result of aggregation with nullable keys. [#61393](https://github.com/ClickHouse/ClickHouse/pull/61393) ([Anton Popov](https://github.com/CurtizJ)). +* Backported in [#61746](https://github.com/ClickHouse/ClickHouse/issues/61746): Fix incorrect results when filtering `system.parts` or `system.parts_columns` using UUID. [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)). +* Backported in [#61696](https://github.com/ClickHouse/ClickHouse/issues/61696): Fix the `clickhouse-client -s` argument; it was broken by being defined twice. [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). +* Backported in [#61576](https://github.com/ClickHouse/ClickHouse/issues/61576): Fix string search with constant start position which previously could lead to memory corruption. [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#61858](https://github.com/ClickHouse/ClickHouse/issues/61858): Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` when specifying an incorrect UTF-8 sequence. Example: [#61714](https://github.com/ClickHouse/ClickHouse/issues/61714#issuecomment-2012768202). [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)). #### CI Fix or Improvement (changelog entry is not required) diff --git a/docs/changelogs/v24.2.1.2248-stable.md b/docs/changelogs/v24.2.1.2248-stable.md index 02affe12c43..edcd3da3852 100644 --- a/docs/changelogs/v24.2.1.2248-stable.md +++ b/docs/changelogs/v24.2.1.2248-stable.md @@ -60,7 +60,7 @@ sidebar_label: 2024 * Support negative positional arguments. Closes [#57736](https://github.com/ClickHouse/ClickHouse/issues/57736). [#58292](https://github.com/ClickHouse/ClickHouse/pull/58292) ([flynn](https://github.com/ucasfl)). * Implement auto-adjustment for asynchronous insert timeouts. The following settings are introduced: async_insert_poll_timeout_ms, async_insert_use_adaptive_busy_timeout, async_insert_busy_timeout_min_ms, async_insert_busy_timeout_max_ms, async_insert_busy_timeout_increase_rate, async_insert_busy_timeout_decrease_rate. [#58486](https://github.com/ClickHouse/ClickHouse/pull/58486) ([Julia Kartseva](https://github.com/jkartseva)). * Allow to define `volume_priority` in `storage_configuration`. [#58533](https://github.com/ClickHouse/ClickHouse/pull/58533) ([Andrey Zvonov](https://github.com/zvonand)). -* Add support for Date32 type in T64 codec. [#58738](https://github.com/ClickHouse/ClickHouse/pull/58738) ([Hongbin Ma](https://github.com/binmahone)). +* Add support for Date32 type in T64 codec. [#58738](https://github.com/ClickHouse/ClickHouse/pull/58738) ([Hongbin Ma (Mahone)](https://github.com/binmahone)). * Support `LEFT JOIN`, `ALL INNER JOIN`, and simple subqueries for parallel replicas (only with analyzer). New setting `parallel_replicas_prefer_local_join` chooses local `JOIN` execution (by default) vs `GLOBAL JOIN`. All tables should exist on every replica from `cluster_for_parallel_replicas`.
New settings `min_external_table_block_size_rows` and `min_external_table_block_size_bytes` are used to squash small blocks that are sent for temporary tables (only with analyzer). [#58916](https://github.com/ClickHouse/ClickHouse/pull/58916) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). * Allow trailing commas in types with several items. [#59119](https://github.com/ClickHouse/ClickHouse/pull/59119) ([Aleksandr Musorin](https://github.com/AVMusorin)). * Allow parallel and distributed processing for `S3Queue` table engine. For distributed processing use setting `s3queue_total_shards_num` (by default `1`). Setting `s3queue_processing_threads_num` previously was not allowed for Ordered processing mode, now it is allowed. Warning: settings `s3queue_processing_threads_num`(processing threads per each shard) and `s3queue_total_shards_num` for ordered mode change how metadata is stored (make the number of `max_processed_file` nodes equal to `s3queue_processing_threads_num * s3queue_total_shards_num`), so they must be the same for all shards and cannot be changed once at least one shard is created. [#59167](https://github.com/ClickHouse/ClickHouse/pull/59167) ([Kseniia Sumarokova](https://github.com/kssenii)). @@ -123,60 +123,60 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Non ready set in TTL WHERE. [#57430](https://github.com/ClickHouse/ClickHouse/pull/57430) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix quantilesGK bug [#58216](https://github.com/ClickHouse/ClickHouse/pull/58216) ([李扬](https://github.com/taiyang-li)). -* Disable parallel replicas JOIN with CTE (not analyzer) [#59239](https://github.com/ClickHouse/ClickHouse/pull/59239) ([Raúl Marín](https://github.com/Algunenano)). -* Fix bug with `intDiv` for decimal arguments [#59243](https://github.com/ClickHouse/ClickHouse/pull/59243) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). -* Fix translate() with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)). -* Fix digest calculation in Keeper [#59439](https://github.com/ClickHouse/ClickHouse/pull/59439) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix stacktraces for binaries without debug symbols [#59444](https://github.com/ClickHouse/ClickHouse/pull/59444) ([Azat Khuzhin](https://github.com/azat)). -* Fix `ASTAlterCommand::formatImpl` in case of column specific settings... [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Fix `SELECT * FROM [...] ORDER BY ALL` with Analyzer [#59462](https://github.com/ClickHouse/ClickHouse/pull/59462) ([zhongyuankai](https://github.com/zhongyuankai)). -* Fix possible uncaught exception during distributed query cancellation [#59487](https://github.com/ClickHouse/ClickHouse/pull/59487) ([Azat Khuzhin](https://github.com/azat)). -* Make MAX use the same rules as permutation for complex types [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)). -* Fix corner case when passing `update_insert_deduplication_token_in_dependent_materialized_views` [#59544](https://github.com/ClickHouse/ClickHouse/pull/59544) ([Jordi Villar](https://github.com/jrdi)). -* Fix incorrect result of arrayElement / map[] on empty value [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)). 
-* Fix crash in topK when merging empty states [#59603](https://github.com/ClickHouse/ClickHouse/pull/59603) ([Raúl Marín](https://github.com/Algunenano)). -* Fix distributed table with a constant sharding key [#59606](https://github.com/ClickHouse/ClickHouse/pull/59606) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix_kql_issue_found_by_wingfuzz [#59626](https://github.com/ClickHouse/ClickHouse/pull/59626) ([Yong Wang](https://github.com/kashwy)). -* Fix error "Read beyond last offset" for AsynchronousBoundedReadBuffer [#59630](https://github.com/ClickHouse/ClickHouse/pull/59630) ([Vitaly Baranov](https://github.com/vitlibar)). -* Maintain function alias in RewriteSumFunctionWithSumAndCountVisitor [#59658](https://github.com/ClickHouse/ClickHouse/pull/59658) ([Raúl Marín](https://github.com/Algunenano)). -* Fix query start time on non initial queries [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)). -* Validate types of arguments for `minmax` skipping index [#59733](https://github.com/ClickHouse/ClickHouse/pull/59733) ([Anton Popov](https://github.com/CurtizJ)). -* Fix leftPad / rightPad function with FixedString input [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)). -* Fix AST fuzzer issue in function `countMatches` [#59752](https://github.com/ClickHouse/ClickHouse/pull/59752) ([Robert Schulze](https://github.com/rschu1ze)). -* rabbitmq: fix having neither acked nor nacked messages [#59775](https://github.com/ClickHouse/ClickHouse/pull/59775) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix StorageURL doing some of the query execution in single thread [#59833](https://github.com/ClickHouse/ClickHouse/pull/59833) ([Michael Kolupaev](https://github.com/al13n321)). -* s3queue: fix uninitialized value [#59897](https://github.com/ClickHouse/ClickHouse/pull/59897) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix parsing of partition expressions surrounded by parens [#59901](https://github.com/ClickHouse/ClickHouse/pull/59901) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Fix crash in JSONColumnsWithMetadata format over http [#59925](https://github.com/ClickHouse/ClickHouse/pull/59925) ([Kruglov Pavel](https://github.com/Avogar)). -* Do not rewrite sum() to count() if return value differs in analyzer [#59926](https://github.com/ClickHouse/ClickHouse/pull/59926) ([Azat Khuzhin](https://github.com/azat)). -* UniqExactSet read crash fix [#59928](https://github.com/ClickHouse/ClickHouse/pull/59928) ([Maksim Kita](https://github.com/kitaisreal)). -* ReplicatedMergeTree invalid metadata_version fix [#59946](https://github.com/ClickHouse/ClickHouse/pull/59946) ([Maksim Kita](https://github.com/kitaisreal)). -* Fix data race in `StorageDistributed` [#59987](https://github.com/ClickHouse/ClickHouse/pull/59987) ([Nikita Taranov](https://github.com/nickitat)). -* Run init scripts when option is enabled rather than disabled [#59991](https://github.com/ClickHouse/ClickHouse/pull/59991) ([jktng](https://github.com/jktng)). -* Fix scale conversion for DateTime64 [#60004](https://github.com/ClickHouse/ClickHouse/pull/60004) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). -* Fix INSERT into SQLite with single quote (by escaping single quotes with a quote instead of backslash) [#60015](https://github.com/ClickHouse/ClickHouse/pull/60015) ([Azat Khuzhin](https://github.com/azat)). 
-* Fix several logical errors in arrayFold [#60022](https://github.com/ClickHouse/ClickHouse/pull/60022) ([Raúl Marín](https://github.com/Algunenano)). -* Fix optimize_uniq_to_count removing the column alias [#60026](https://github.com/ClickHouse/ClickHouse/pull/60026) ([Raúl Marín](https://github.com/Algunenano)). -* Fix possible exception from s3queue table on drop [#60036](https://github.com/ClickHouse/ClickHouse/pull/60036) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix formatting of NOT with single literals [#60042](https://github.com/ClickHouse/ClickHouse/pull/60042) ([Raúl Marín](https://github.com/Algunenano)). -* Use max_query_size from context in DDLLogEntry instead of hardcoded 4096 [#60083](https://github.com/ClickHouse/ClickHouse/pull/60083) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix inconsistent formatting of queries [#60095](https://github.com/ClickHouse/ClickHouse/pull/60095) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix inconsistent formatting of explain in subqueries [#60102](https://github.com/ClickHouse/ClickHouse/pull/60102) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix cosineDistance crash with Nullable [#60150](https://github.com/ClickHouse/ClickHouse/pull/60150) ([Raúl Marín](https://github.com/Algunenano)). -* Allow casting of bools in string representation to to true bools [#60160](https://github.com/ClickHouse/ClickHouse/pull/60160) ([Robert Schulze](https://github.com/rschu1ze)). -* Fix system.s3queue_log [#60166](https://github.com/ClickHouse/ClickHouse/pull/60166) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix arrayReduce with nullable aggregate function name [#60188](https://github.com/ClickHouse/ClickHouse/pull/60188) ([Raúl Marín](https://github.com/Algunenano)). -* Fix actions execution during preliminary filtering (PK, partition pruning) [#60196](https://github.com/ClickHouse/ClickHouse/pull/60196) ([Azat Khuzhin](https://github.com/azat)). -* Hide sensitive info for s3queue [#60233](https://github.com/ClickHouse/ClickHouse/pull/60233) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Revert "Replace `ORDER BY ALL` by `ORDER BY *`" [#60248](https://github.com/ClickHouse/ClickHouse/pull/60248) ([Robert Schulze](https://github.com/rschu1ze)). -* Fix http exception codes. [#60252](https://github.com/ClickHouse/ClickHouse/pull/60252) ([Austin Kothig](https://github.com/kothiga)). -* s3queue: fix bug (also fixes flaky test_storage_s3_queue/test.py::test_shards_distributed) [#60282](https://github.com/ClickHouse/ClickHouse/pull/60282) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix use-of-uninitialized-value and invalid result in hashing functions with IPv6 [#60359](https://github.com/ClickHouse/ClickHouse/pull/60359) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix OptimizeDateOrDateTimeConverterWithPreimageVisitor with null arguments [#60453](https://github.com/ClickHouse/ClickHouse/pull/60453) ([Raúl Marín](https://github.com/Algunenano)). -* Merging [#59674](https://github.com/ClickHouse/ClickHouse/issues/59674). [#60470](https://github.com/ClickHouse/ClickHouse/pull/60470) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Correctly check keys in s3Cluster [#60477](https://github.com/ClickHouse/ClickHouse/pull/60477) ([Antonio Andelic](https://github.com/antonio2368)). +* Support `IN (subquery)` in table TTL expression. Initially, it was allowed to create such a TTL expression, but any TTL merge would fail with `Not-ready Set` error in the background. 
Now, TTL is correctly applied. The subquery is executed for every TTL merge, and its result is not cached or reused by other merges. Use such a configuration with special care, because subqueries in TTL may lead to high memory consumption and, possibly, a non-deterministic result of TTL merge on different replicas (which is correctly handled by replication, however); see the first sketch at the end of this list. [#57430](https://github.com/ClickHouse/ClickHouse/pull/57430) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix quantilesGK bug, close [#57683](https://github.com/ClickHouse/ClickHouse/issues/57683). [#58216](https://github.com/ClickHouse/ClickHouse/pull/58216) ([李扬](https://github.com/taiyang-li)).
+* Disable parallel replicas JOIN with CTE (not analyzer). [#59239](https://github.com/ClickHouse/ClickHouse/pull/59239) ([Raúl Marín](https://github.com/Algunenano)).
+* Fixes a bug with the function `intDiv` for decimal arguments. Fixes [#56414](https://github.com/ClickHouse/ClickHouse/issues/56414). [#59243](https://github.com/ClickHouse/ClickHouse/pull/59243) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
+* Fix translate() with FixedString input. Could lead to crashes as it'd return a String column (vs the expected FixedString). This issue was found by YohannJardin through the ClickHouse Bug Bounty Program. [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)).
+* Keeper fix: fix digest calculation for nodes. [#59439](https://github.com/ClickHouse/ClickHouse/pull/59439) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix stacktraces for binaries without debug symbols. [#59444](https://github.com/ClickHouse/ClickHouse/pull/59444) ([Azat Khuzhin](https://github.com/azat)).
+* Fix formatting of alter commands in case of column-specific settings. [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+* `SELECT * FROM [...] ORDER BY ALL SETTINGS allow_experimental_analyzer = 1` now works. [#59462](https://github.com/ClickHouse/ClickHouse/pull/59462) ([zhongyuankai](https://github.com/zhongyuankai)).
+* Fix possible uncaught exception during distributed query cancellation. Closes [#59169](https://github.com/ClickHouse/ClickHouse/issues/59169). [#59487](https://github.com/ClickHouse/ClickHouse/pull/59487) ([Azat Khuzhin](https://github.com/azat)).
+* Make MAX use the same rules as permutation for complex types. [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix a corner case when passing the `update_insert_deduplication_token_in_dependent_materialized_views` setting. There is one corner case not covered, due to the absence of tables in the path. [#59544](https://github.com/ClickHouse/ClickHouse/pull/59544) ([Jordi Villar](https://github.com/jrdi)).
+* Fix incorrect result of arrayElement / map[] on empty value. [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix crash in topK when merging empty states. [#59603](https://github.com/ClickHouse/ClickHouse/pull/59603) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix distributed table with a constant sharding key. [#59606](https://github.com/ClickHouse/ClickHouse/pull/59606) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix segmentation fault in KQL parser when the input query exceeds the `max_query_size`. Also re-enable the KQL dialect.
Fixes [#59036](https://github.com/ClickHouse/ClickHouse/issues/59036) and [#59037](https://github.com/ClickHouse/ClickHouse/issues/59037). [#59626](https://github.com/ClickHouse/ClickHouse/pull/59626) ([Yong Wang](https://github.com/kashwy)).
+* Fix error `Read beyond last offset` for `AsynchronousBoundedReadBuffer`. [#59630](https://github.com/ClickHouse/ClickHouse/pull/59630) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Maintain function alias in RewriteSumFunctionWithSumAndCountVisitor. [#59658](https://github.com/ClickHouse/ClickHouse/pull/59658) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix query start time on non-initial queries. [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)).
+* Validate types of arguments for `minmax` skipping index. [#59733](https://github.com/ClickHouse/ClickHouse/pull/59733) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix leftPad / rightPad function with FixedString input. [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)).
+* Fixed an exception in function `countMatches` with non-const `FixedString` haystack arguments, e.g. `SELECT countMatches(materialize(toFixedString('foobarfoo', 9)), 'foo');`. [#59752](https://github.com/ClickHouse/ClickHouse/pull/59752) ([Robert Schulze](https://github.com/rschu1ze)).
+* Fix having neither acked nor nacked messages in the RabbitMQ engine. If an exception happens during the read-write phase, messages will be nacked. [#59775](https://github.com/ClickHouse/ClickHouse/pull/59775) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fixed queries that read a Parquet file over HTTP (url()/URL()) executing in one thread instead of max_threads. [#59833](https://github.com/ClickHouse/ClickHouse/pull/59833) ([Michael Kolupaev](https://github.com/al13n321)).
+* Fixed uninitialized value in s3 queue, which happened during upgrade to a new version if the table had Ordered mode and resulted in an error "Existing table metadata in ZooKeeper differs in s3queue_processing_threads_num setting". [#59897](https://github.com/ClickHouse/ClickHouse/pull/59897) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix parsing of partition expressions that are surrounded by parentheses, e.g.: `ALTER TABLE test DROP PARTITION ('2023-10-19')`. [#59901](https://github.com/ClickHouse/ClickHouse/pull/59901) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+* Fix crash in JSONColumnsWithMetadata format over HTTP. Closes [#59853](https://github.com/ClickHouse/ClickHouse/issues/59853). [#59925](https://github.com/ClickHouse/ClickHouse/pull/59925) ([Kruglov Pavel](https://github.com/Avogar)).
+* Do not rewrite sum() to count() if the return value differs in analyzer. [#59926](https://github.com/ClickHouse/ClickHouse/pull/59926) ([Azat Khuzhin](https://github.com/azat)).
+* Fix crash during deserialization of aggregation function states that internally use `UniqExactSet`. Introduced in https://github.com/ClickHouse/ClickHouse/pull/59009. [#59928](https://github.com/ClickHouse/ClickHouse/pull/59928) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fix invalid `metadata_version` node initialization in ZooKeeper when creating a non-first `ReplicatedMergeTree` replica. Closes [#54902](https://github.com/ClickHouse/ClickHouse/issues/54902). [#59946](https://github.com/ClickHouse/ClickHouse/pull/59946) ([Maksim Kita](https://github.com/kitaisreal)).
+* Fixed data race on cluster object between `StorageDistributed` and `Context::reloadClusterConfig()`. The former held a const reference to its member while the latter destroyed the object (in the process of replacing it with a new one). [#59987](https://github.com/ClickHouse/ClickHouse/pull/59987) ([Nikita Taranov](https://github.com/nickitat)).
+* Fixes [#59989](https://github.com/ClickHouse/ClickHouse/issues/59989): runs init scripts when force-enabled or when no database exists, rather than the inverse. [#59991](https://github.com/ClickHouse/ClickHouse/pull/59991) ([jktng](https://github.com/jktng)).
+* Fix scale conversion for DateTime64 values (for example, `DateTime64(6)` -> `DateTime64(3)`); see the reconstruction sketch at the end of this list. [#60004](https://github.com/ClickHouse/ClickHouse/pull/60004) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
+* Fix INSERT into SQLite with single quote (by properly escaping single quotes with a quote instead of a backslash); see the escaping sketch at the end of this list. [#60015](https://github.com/ClickHouse/ClickHouse/pull/60015) ([Azat Khuzhin](https://github.com/azat)).
+* Fix several logical errors in arrayFold. Fixes support for Nullable and LowCardinality. [#60022](https://github.com/ClickHouse/ClickHouse/pull/60022) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix optimize_uniq_to_count removing the column alias. [#60026](https://github.com/ClickHouse/ClickHouse/pull/60026) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix possible error while dropping an s3queue table, like "no node shard0". [#60036](https://github.com/ClickHouse/ClickHouse/pull/60036) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix formatting of NOT with single literals. [#60042](https://github.com/ClickHouse/ClickHouse/pull/60042) ([Raúl Marín](https://github.com/Algunenano)).
+* Use max_query_size from context when parsing changed settings in DDLWorker. Previously, with a large number of changed settings, DDLWorker could fail with a `Max query size exceeded` error and not process log entries. [#60083](https://github.com/ClickHouse/ClickHouse/pull/60083) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix inconsistent formatting of queries containing tables named `table`. Fix wrong formatting of queries with `UNION ALL`, `INTERSECT`, and `EXCEPT` when their structure wasn't linear. This closes [#52349](https://github.com/ClickHouse/ClickHouse/issues/52349). Fix wrong formatting of `SYSTEM` queries, including `SYSTEM ... DROP FILESYSTEM CACHE`, `SYSTEM ... REFRESH/START/STOP/CANCEL/TEST VIEW`, `SYSTEM ENABLE/DISABLE FAILPOINT`. Fix formatting of parameterized DDL queries. Fix the formatting of the `DESCRIBE FILESYSTEM CACHE` query. Fix incorrect formatting of the `SET param_...` (a query setting a parameter). Fix incorrect formatting of `CREATE INDEX` queries. Fix inconsistent formatting of `CREATE USER` and similar queries. Fix inconsistent formatting of `CREATE SETTINGS PROFILE`. Fix incorrect formatting of `ALTER ... MODIFY REFRESH`. Fix inconsistent formatting of window functions if frame offsets were expressions. Fix inconsistent formatting of `RESPECT NULLS` and `IGNORE NULLS` if they were used after a function that implements an operator (such as `plus`). Fix idiotic formatting of `SYSTEM SYNC REPLICA ... LIGHTWEIGHT FROM ...`. Fix inconsistent formatting of invalid queries with `GROUP BY GROUPING SETS ... WITH ROLLUP/CUBE/TOTALS`. Fix inconsistent formatting of `GRANT CURRENT GRANTS`. Fix inconsistent formatting of `CREATE TABLE (... COLLATE)`.
Additionally, I fixed the incorrect formatting of `EXPLAIN` in subqueries ([#60102](https://github.com/ClickHouse/ClickHouse/issues/60102)). Fixed incorrect formatting of lambda functions ([#60012](https://github.com/ClickHouse/ClickHouse/issues/60012)). Added a check so there is no way to miss these abominations in the future. [#60095](https://github.com/ClickHouse/ClickHouse/pull/60095) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Queries like `SELECT * FROM (EXPLAIN ...)` were formatted incorrectly. [#60102](https://github.com/ClickHouse/ClickHouse/pull/60102) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fix cosineDistance crash with Nullable. [#60150](https://github.com/ClickHouse/ClickHouse/pull/60150) ([Raúl Marín](https://github.com/Algunenano)).
+* Boolean values in string representation now cast to true bools. E.g. this query previously threw an exception but now works: `SELECT true = 'true'`. [#60160](https://github.com/ClickHouse/ClickHouse/pull/60160) ([Robert Schulze](https://github.com/rschu1ze)).
+* Fix non-filled column `table_uuid` in `system.s3queue_log`. Added columns `database` and `table`. Renamed `table_uuid` to `uuid`. [#60166](https://github.com/ClickHouse/ClickHouse/pull/60166) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix arrayReduce with nullable aggregate function name. [#60188](https://github.com/ClickHouse/ClickHouse/pull/60188) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix actions execution during preliminary filtering (PK, partition pruning). [#60196](https://github.com/ClickHouse/ClickHouse/pull/60196) ([Azat Khuzhin](https://github.com/azat)).
+* Hide sensitive info for `S3Queue` table engine. [#60233](https://github.com/ClickHouse/ClickHouse/pull/60233) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Restore the previous syntax `ORDER BY ALL`, which had temporarily (for a few days) been replaced by `ORDER BY *`. [#60248](https://github.com/ClickHouse/ClickHouse/pull/60248) ([Robert Schulze](https://github.com/rschu1ze)).
+* Fixed a minor bug that caused all HTTP return codes to be 200 (success) instead of a relevant code on exception. [#60252](https://github.com/ClickHouse/ClickHouse/pull/60252) ([Austin Kothig](https://github.com/kothiga)).
+* Fix bug in `S3Queue` table engine with ordered parallel mode. [#60282](https://github.com/ClickHouse/ClickHouse/pull/60282) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix use-of-uninitialized-value and invalid result in hashing functions with IPv6. [#60359](https://github.com/ClickHouse/ClickHouse/pull/60359) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix OptimizeDateOrDateTimeConverterWithPreimageVisitor with null arguments. [#60453](https://github.com/ClickHouse/ClickHouse/pull/60453) ([Raúl Marín](https://github.com/Algunenano)).
+* Fixed a minor bug that prevented distributed table queries sent from either KQL or PRQL dialect clients from being executed on replicas. [#60470](https://github.com/ClickHouse/ClickHouse/pull/60470) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fix incomplete results with s3Cluster when multiple threads are used. [#60477](https://github.com/ClickHouse/ClickHouse/pull/60477) ([Antonio Andelic](https://github.com/antonio2368)).
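+
+As referenced from the TTL entry above, a minimal sketch of a table TTL that uses `IN (subquery)` (the names `events`, `expired_keys`, `key`, and `ts` are illustrative, not taken from the PR):
+
+```sql
+-- The subquery below is re-executed for every TTL merge; its result is not cached.
+CREATE TABLE events
+(
+    key UInt64,
+    ts DateTime
+)
+ENGINE = MergeTree
+ORDER BY key
+TTL ts + INTERVAL 1 MONTH DELETE WHERE key IN (SELECT key FROM expired_keys);
+```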
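+
+A hypothetical reconstruction of the truncated example from the DateTime64 scale-conversion entry above (only `CREATE TABLE test (result DateTime64(3)) ENGINE = Memory` survives from the original snippet; the rest is an assumption):
+
+```sql
+CREATE TABLE test (result DateTime64(3)) ENGINE = Memory;
+-- Inserting a DateTime64(6) value must rescale the subsecond part rather than
+-- reinterpret the raw ticks:
+INSERT INTO test SELECT toDateTime64('2024-01-01 00:00:00.123456', 6);
+SELECT result FROM test; -- expected: 2024-01-01 00:00:00.123
+```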
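+
+A minimal sketch of the escaping described in the SQLite entry above (the table name `t` is illustrative):
+
+```sql
+-- SQLite expects a single quote inside a string literal to be escaped by
+-- doubling it, so the engine now generates the first form, not the second:
+INSERT INTO t VALUES ('it''s fine');   -- valid in SQLite
+-- INSERT INTO t VALUES ('it\'s fine'); -- previously generated, invalid
+```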
#### CI Fix or Improvement (changelog entry is not required) diff --git a/docs/changelogs/v24.2.2.71-stable.md b/docs/changelogs/v24.2.2.71-stable.md index b9aa5be626b..e17c22ab176 100644 --- a/docs/changelogs/v24.2.2.71-stable.md +++ b/docs/changelogs/v24.2.2.71-stable.md @@ -12,21 +12,21 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* PartsSplitter invalid ranges for the same part [#60041](https://github.com/ClickHouse/ClickHouse/pull/60041) ([Maksim Kita](https://github.com/kitaisreal)). -* Try to avoid calculation of scalar subqueries for CREATE TABLE. [#60464](https://github.com/ClickHouse/ClickHouse/pull/60464) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix deadlock in parallel parsing when lots of rows are skipped due to errors [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix_max_query_size_for_kql_compound_operator: [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)). -* Reduce the number of read rows from `system.numbers` [#60546](https://github.com/ClickHouse/ClickHouse/pull/60546) ([JackyWoo](https://github.com/JackyWoo)). -* Don't output number tips for date types [#60577](https://github.com/ClickHouse/ClickHouse/pull/60577) ([Raúl Marín](https://github.com/Algunenano)). -* Fix buffer overflow in CompressionCodecMultiple [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Remove nonsense from SQL/JSON [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Prevent setting custom metadata headers on unsupported multipart upload operations [#60748](https://github.com/ClickHouse/ClickHouse/pull/60748) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)). -* Fix crash in arrayEnumerateRanked [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)). -* Fix crash when using input() in INSERT SELECT JOIN [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix crash with different allow_experimental_analyzer value in subqueries [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)). -* Remove recursion when reading from S3 [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix multiple bugs in groupArraySorted [#61203](https://github.com/ClickHouse/ClickHouse/pull/61203) ([Raúl Marín](https://github.com/Algunenano)). -* Fix Keeper reconfig for standalone binary [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#60640](https://github.com/ClickHouse/ClickHouse/issues/60640): Fixed a bug in parallel optimization for queries with `FINAL`, which could give an incorrect result in rare cases. [#60041](https://github.com/ClickHouse/ClickHouse/pull/60041) ([Maksim Kita](https://github.com/kitaisreal)). +* Backported in [#61085](https://github.com/ClickHouse/ClickHouse/issues/61085): Avoid calculation of scalar subqueries for `CREATE TABLE`. Fixes [#59795](https://github.com/ClickHouse/ClickHouse/issues/59795) and [#59930](https://github.com/ClickHouse/ClickHouse/issues/59930). Attempt to re-implement https://github.com/ClickHouse/ClickHouse/pull/57855. 
[#60464](https://github.com/ClickHouse/ClickHouse/pull/60464) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). +* Backported in [#61332](https://github.com/ClickHouse/ClickHouse/issues/61332): Fix deadlock in parallel parsing when lots of rows are skipped due to errors. [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#61010](https://github.com/ClickHouse/ClickHouse/issues/61010): Fix the issue of `max_query_size` for KQL compound operator like mv-expand. Related to [#59626](https://github.com/ClickHouse/ClickHouse/issues/59626). [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)). +* Backported in [#61002](https://github.com/ClickHouse/ClickHouse/issues/61002): Reduce the number of read rows from `system.numbers`. Fixes [#59418](https://github.com/ClickHouse/ClickHouse/issues/59418). [#60546](https://github.com/ClickHouse/ClickHouse/pull/60546) ([JackyWoo](https://github.com/JackyWoo)). +* Backported in [#60629](https://github.com/ClickHouse/ClickHouse/issues/60629): Don't output number tips for date types. [#60577](https://github.com/ClickHouse/ClickHouse/pull/60577) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#60793](https://github.com/ClickHouse/ClickHouse/issues/60793): Fix buffer overflow that can happen if the attacker asks the HTTP server to decompress data with a composition of codecs and size triggering numeric overflow. Fix buffer overflow that can happen inside codec NONE on wrong input data. This was submitted by TIANGONG research team through our [Bug Bounty program](https://github.com/ClickHouse/ClickHouse/issues/38986). [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#60785](https://github.com/ClickHouse/ClickHouse/issues/60785): Functions for SQL/JSON were able to read uninitialized memory. This closes [#60017](https://github.com/ClickHouse/ClickHouse/issues/60017). Found by Fuzzer. [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)). +* Backported in [#60805](https://github.com/ClickHouse/ClickHouse/issues/60805): Do not set aws custom metadata `x-amz-meta-*` headers on UploadPart & CompleteMultipartUpload calls. [#60748](https://github.com/ClickHouse/ClickHouse/pull/60748) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)). +* Backported in [#60822](https://github.com/ClickHouse/ClickHouse/issues/60822): Fix crash in arrayEnumerateRanked. [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#60843](https://github.com/ClickHouse/ClickHouse/issues/60843): Fix crash when using input() in INSERT SELECT JOIN. Closes [#60035](https://github.com/ClickHouse/ClickHouse/issues/60035). [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)). +* Backported in [#60919](https://github.com/ClickHouse/ClickHouse/issues/60919): Fix crash when `allow_experimental_analyzer` setting value is changed in the subqueries. [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)). +* Backported in [#60906](https://github.com/ClickHouse/ClickHouse/issues/60906): Avoid segfault if too many keys are skipped when reading from S3. 
[#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)). +* Backported in [#61307](https://github.com/ClickHouse/ClickHouse/issues/61307): Fix multiple bugs in groupArraySorted. [#61203](https://github.com/ClickHouse/ClickHouse/pull/61203) ([Raúl Marín](https://github.com/Algunenano)). +* Backported in [#61295](https://github.com/ClickHouse/ClickHouse/issues/61295): Keeper: fix runtime reconfig for standalone binary. [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)). #### CI Fix or Improvement (changelog entry is not required) diff --git a/docs/changelogs/v24.2.3.70-stable.md b/docs/changelogs/v24.2.3.70-stable.md index cd88877e254..1a50355e0b9 100644 --- a/docs/changelogs/v24.2.3.70-stable.md +++ b/docs/changelogs/v24.2.3.70-stable.md @@ -15,28 +15,28 @@ sidebar_label: 2024 #### Bug Fix (user-visible misbehavior in an official stable release) -* Fix possible incorrect result of aggregate function `uniqExact` [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)). -* Fix ATTACH query with external ON CLUSTER [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)). -* Fix consecutive keys optimization for nullable keys [#61393](https://github.com/ClickHouse/ClickHouse/pull/61393) ([Anton Popov](https://github.com/CurtizJ)). -* fix issue of actions dag split [#61458](https://github.com/ClickHouse/ClickHouse/pull/61458) ([Raúl Marín](https://github.com/Algunenano)). -* Disable async_insert_use_adaptive_busy_timeout correctly with compatibility settings [#61468](https://github.com/ClickHouse/ClickHouse/pull/61468) ([Raúl Marín](https://github.com/Algunenano)). -* Fix bug when reading system.parts using UUID (issue 61220). [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)). -* Fix ALTER QUERY MODIFY SQL SECURITY [#61480](https://github.com/ClickHouse/ClickHouse/pull/61480) ([pufit](https://github.com/pufit)). -* Fix client `-s` argument [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). -* Fix string search with const position [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)). -* Cancel merges before removing moved parts [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` for incorrect UTF-8 [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)). -* Mark CANNOT_PARSE_ESCAPE_SEQUENCE error as parse error to be able to skip it in row input formats [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)). -* Crash in Engine Merge if Row Policy does not have expression [#61971](https://github.com/ClickHouse/ClickHouse/pull/61971) ([Ilya Golshtein](https://github.com/ilejn)). -* Fix data race on scalars in Context [#62305](https://github.com/ClickHouse/ClickHouse/pull/62305) ([Kruglov Pavel](https://github.com/Avogar)). -* Try to fix segfault in Hive engine [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)). 
-* Fix memory leak in groupArraySorted [#62597](https://github.com/ClickHouse/ClickHouse/pull/62597) ([Antonio Andelic](https://github.com/antonio2368)).
-* Fix GCD codec [#62853](https://github.com/ClickHouse/ClickHouse/pull/62853) ([Nikita Taranov](https://github.com/nickitat)).
-* Fix temporary data in cache incorrectly processing failure of cache key directory creation [#62925](https://github.com/ClickHouse/ClickHouse/pull/62925) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix incorrect judgement of of monotonicity of function abs [#63097](https://github.com/ClickHouse/ClickHouse/pull/63097) ([Duc Canh Le](https://github.com/canhld94)).
-* Make sanity check of settings worse [#63119](https://github.com/ClickHouse/ClickHouse/pull/63119) ([Raúl Marín](https://github.com/Algunenano)).
-* Set server name for SSL handshake in MongoDB engine [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)).
-* Format SQL security option only in `CREATE VIEW` queries. [#63136](https://github.com/ClickHouse/ClickHouse/pull/63136) ([pufit](https://github.com/pufit)).
+* Backported in [#61453](https://github.com/ClickHouse/ClickHouse/issues/61453): Fix possible incorrect result of aggregate function `uniqExact`. [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)).
+* Backported in [#61946](https://github.com/ClickHouse/ClickHouse/issues/61946): Fix the ATTACH query with the ON CLUSTER clause when the database does not exist on the initiator node. Closes [#55009](https://github.com/ClickHouse/ClickHouse/issues/55009). [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Backported in [#61846](https://github.com/ClickHouse/ClickHouse/issues/61846): Fixed possible wrong result of aggregation with nullable keys. [#61393](https://github.com/ClickHouse/ClickHouse/pull/61393) ([Anton Popov](https://github.com/CurtizJ)).
+* Backported in [#61591](https://github.com/ClickHouse/ClickHouse/issues/61591): Fix `ActionsDAG::split` failing to ensure that execution of the first and then the second part on a block is equivalent to execution of the initial DAG. [#61458](https://github.com/ClickHouse/ClickHouse/pull/61458) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#61648](https://github.com/ClickHouse/ClickHouse/issues/61648): Disable async_insert_use_adaptive_busy_timeout correctly with compatibility settings. [#61468](https://github.com/ClickHouse/ClickHouse/pull/61468) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#61748](https://github.com/ClickHouse/ClickHouse/issues/61748): Fix incorrect results when filtering `system.parts` or `system.parts_columns` using UUID. [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)).
+* Backported in [#61963](https://github.com/ClickHouse/ClickHouse/issues/61963): Fix the `ALTER QUERY MODIFY SQL SECURITY` queries to override the table's DDL correctly. [#61480](https://github.com/ClickHouse/ClickHouse/pull/61480) ([pufit](https://github.com/pufit)).
+* Backported in [#61699](https://github.com/ClickHouse/ClickHouse/issues/61699): Fix `clickhouse-client -s` argument; it was broken by being defined twice. [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Backported in [#61578](https://github.com/ClickHouse/ClickHouse/issues/61578): Fix string search with constant start position which previously could lead to memory corruption. [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)).
+* Backported in [#62531](https://github.com/ClickHouse/ClickHouse/issues/62531): Fix data race between `MOVE PARTITION` query and merges resulting in intersecting parts. [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+* Backported in [#61860](https://github.com/ClickHouse/ClickHouse/issues/61860): Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` when specifying incorrect UTF-8 sequence. Example: [#61714](https://github.com/ClickHouse/ClickHouse/issues/61714#issuecomment-2012768202). [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)).
+* Backported in [#62242](https://github.com/ClickHouse/ClickHouse/issues/62242): Fix skipping escape sequence parsing errors during JSON data parsing while using `input_format_allow_errors_num/ratio` settings. [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#62218](https://github.com/ClickHouse/ClickHouse/issues/62218): Fixes a crash in the Merge engine if a row policy does not have an expression. [#61971](https://github.com/ClickHouse/ClickHouse/pull/61971) ([Ilya Golshtein](https://github.com/ilejn)).
+* Backported in [#62342](https://github.com/ClickHouse/ClickHouse/issues/62342): Fix data race on scalars in Context. [#62305](https://github.com/ClickHouse/ClickHouse/pull/62305) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#62677](https://github.com/ClickHouse/ClickHouse/issues/62677): Fix segmentation fault when using Hive table engine. Reference [#62154](https://github.com/ClickHouse/ClickHouse/issues/62154), [#62560](https://github.com/ClickHouse/ClickHouse/issues/62560). [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Backported in [#62639](https://github.com/ClickHouse/ClickHouse/issues/62639): Fix memory leak in groupArraySorted. Fix [#62536](https://github.com/ClickHouse/ClickHouse/issues/62536). [#62597](https://github.com/ClickHouse/ClickHouse/pull/62597) ([Antonio Andelic](https://github.com/antonio2368)).
+* Backported in [#63054](https://github.com/ClickHouse/ClickHouse/issues/63054): Fixed a bug in the GCD codec implementation that may lead to server crashes. [#62853](https://github.com/ClickHouse/ClickHouse/pull/62853) ([Nikita Taranov](https://github.com/nickitat)).
+* Backported in [#63030](https://github.com/ClickHouse/ClickHouse/issues/63030): Fix incorrect behaviour of temporary data in cache when creation of the cache key base directory fails with `no space left on device`. [#62925](https://github.com/ClickHouse/ClickHouse/pull/62925) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#63142](https://github.com/ClickHouse/ClickHouse/issues/63142): Fix incorrect judgement of monotonicity of function `abs`. [#63097](https://github.com/ClickHouse/ClickHouse/pull/63097) ([Duc Canh Le](https://github.com/canhld94)).
+* Backported in [#63183](https://github.com/ClickHouse/ClickHouse/issues/63183): Sanity check: Clamp values instead of throwing. [#63119](https://github.com/ClickHouse/ClickHouse/pull/63119) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#63176](https://github.com/ClickHouse/ClickHouse/issues/63176): Setting server_name might help with the recently reported SSL handshake error when connecting to MongoDB Atlas: `Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: error:10000438:SSL routines:OPENSSL_internal:TLSV1_ALERT_INTERNAL_ERROR`. [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)).
+* Backported in [#63191](https://github.com/ClickHouse/ClickHouse/issues/63191): Fix a bug where the `SQL SECURITY` statement appears in all `CREATE` queries if the server setting `ignore_empty_sql_security_in_create_view_query=true`. See https://github.com/ClickHouse/ClickHouse/pull/63134. [#63136](https://github.com/ClickHouse/ClickHouse/pull/63136) ([pufit](https://github.com/pufit)).

#### CI Fix or Improvement (changelog entry is not required)

diff --git a/docs/changelogs/v24.3.1.2672-lts.md b/docs/changelogs/v24.3.1.2672-lts.md
index 006ab941203..a70a33971c2 100644
--- a/docs/changelogs/v24.3.1.2672-lts.md
+++ b/docs/changelogs/v24.3.1.2672-lts.md
@@ -20,7 +20,7 @@ sidebar_label: 2024
#### New Feature
* Topk/topkweighed support mode, which return count of values and it's error. [#54508](https://github.com/ClickHouse/ClickHouse/pull/54508) ([UnamedRus](https://github.com/UnamedRus)).
-* Add generate_series as a table function. This function generates table with an arithmetic progression with natural numbers. [#59390](https://github.com/ClickHouse/ClickHouse/pull/59390) ([divanik](https://github.com/divanik)).
+* Add generate_series as a table function. This function generates a table with an arithmetic progression of natural numbers. [#59390](https://github.com/ClickHouse/ClickHouse/pull/59390) ([Daniil Ivanik](https://github.com/divanik)).
* Support reading and writing backups as tar archives. [#59535](https://github.com/ClickHouse/ClickHouse/pull/59535) ([josh-hildred](https://github.com/josh-hildred)).
* Implemented support for S3Express buckets. [#59965](https://github.com/ClickHouse/ClickHouse/pull/59965) ([Nikita Taranov](https://github.com/nickitat)).
* Allow to attach parts from a different disk * attach partition from the table on other disks using copy instead of hard link (such as instant table) * attach partition using copy when the hard link fails even on the same disk. [#60112](https://github.com/ClickHouse/ClickHouse/pull/60112) ([Unalian](https://github.com/Unalian)).
@@ -133,75 +133,75 @@ sidebar_label: 2024
#### Bug Fix (user-visible misbehavior in an official stable release)
-* Fix function execution over const and LowCardinality with GROUP BY const for analyzer [#59986](https://github.com/ClickHouse/ClickHouse/pull/59986) ([Azat Khuzhin](https://github.com/azat)).
-* Fix finished_mutations_to_keep=0 for MergeTree (as docs says 0 is to keep everything) [#60031](https://github.com/ClickHouse/ClickHouse/pull/60031) ([Azat Khuzhin](https://github.com/azat)).
-* PartsSplitter invalid ranges for the same part [#60041](https://github.com/ClickHouse/ClickHouse/pull/60041) ([Maksim Kita](https://github.com/kitaisreal)).
-* Azure Blob Storage : Fix issues endpoint and prefix [#60251](https://github.com/ClickHouse/ClickHouse/pull/60251) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
-* fix LRUResource Cache bug (Hive cache) [#60262](https://github.com/ClickHouse/ClickHouse/pull/60262) ([shanfengp](https://github.com/Aed-p)).
-* Force reanalysis if parallel replicas changed [#60362](https://github.com/ClickHouse/ClickHouse/pull/60362) ([Raúl Marín](https://github.com/Algunenano)). -* Fix usage of plain metadata type with new disks configuration option [#60396](https://github.com/ClickHouse/ClickHouse/pull/60396) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Try to fix logical error 'Cannot capture column because it has incompatible type' in mapContainsKeyLike [#60451](https://github.com/ClickHouse/ClickHouse/pull/60451) ([Kruglov Pavel](https://github.com/Avogar)). -* Try to avoid calculation of scalar subqueries for CREATE TABLE. [#60464](https://github.com/ClickHouse/ClickHouse/pull/60464) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix deadlock in parallel parsing when lots of rows are skipped due to errors [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix_max_query_size_for_kql_compound_operator: [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)). -* Keeper fix: add timeouts when waiting for commit logs [#60544](https://github.com/ClickHouse/ClickHouse/pull/60544) ([Antonio Andelic](https://github.com/antonio2368)). -* Reduce the number of read rows from `system.numbers` [#60546](https://github.com/ClickHouse/ClickHouse/pull/60546) ([JackyWoo](https://github.com/JackyWoo)). -* Don't output number tips for date types [#60577](https://github.com/ClickHouse/ClickHouse/pull/60577) ([Raúl Marín](https://github.com/Algunenano)). -* Fix reading from MergeTree with non-deterministic functions in filter [#60586](https://github.com/ClickHouse/ClickHouse/pull/60586) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix logical error on bad compatibility setting value type [#60596](https://github.com/ClickHouse/ClickHouse/pull/60596) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix inconsistent aggregate function states in mixed x86-64 / ARM clusters [#60610](https://github.com/ClickHouse/ClickHouse/pull/60610) ([Harry Lee](https://github.com/HarryLeeIBM)). -* fix(prql): Robust panic handler [#60615](https://github.com/ClickHouse/ClickHouse/pull/60615) ([Maximilian Roos](https://github.com/max-sixty)). -* Fix `intDiv` for decimal and date arguments [#60672](https://github.com/ClickHouse/ClickHouse/pull/60672) ([Yarik Briukhovetskyi](https://github.com/yariks5s)). -* Fix: expand CTE in alter modify query [#60682](https://github.com/ClickHouse/ClickHouse/pull/60682) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)). -* Fix system.parts for non-Atomic/Ordinary database engine (i.e. Memory) [#60689](https://github.com/ClickHouse/ClickHouse/pull/60689) ([Azat Khuzhin](https://github.com/azat)). -* Fix "Invalid storage definition in metadata file" for parameterized views [#60708](https://github.com/ClickHouse/ClickHouse/pull/60708) ([Azat Khuzhin](https://github.com/azat)). -* Fix buffer overflow in CompressionCodecMultiple [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Remove nonsense from SQL/JSON [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Remove wrong sanitize checking in aggregate function quantileGK [#60740](https://github.com/ClickHouse/ClickHouse/pull/60740) ([李扬](https://github.com/taiyang-li)). 
-* Fix insert-select + insert_deduplication_token bug by setting streams to 1 [#60745](https://github.com/ClickHouse/ClickHouse/pull/60745) ([Jordi Villar](https://github.com/jrdi)). -* Prevent setting custom metadata headers on unsupported multipart upload operations [#60748](https://github.com/ClickHouse/ClickHouse/pull/60748) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)). -* Fix toStartOfInterval [#60763](https://github.com/ClickHouse/ClickHouse/pull/60763) ([Andrey Zvonov](https://github.com/zvonand)). -* Fix crash in arrayEnumerateRanked [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)). -* Fix crash when using input() in INSERT SELECT JOIN [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix crash with different allow_experimental_analyzer value in subqueries [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)). -* Remove recursion when reading from S3 [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix possible stuck on error in HashedDictionaryParallelLoader [#60926](https://github.com/ClickHouse/ClickHouse/pull/60926) ([vdimir](https://github.com/vdimir)). -* Fix async RESTORE with Replicated database [#60934](https://github.com/ClickHouse/ClickHouse/pull/60934) ([Antonio Andelic](https://github.com/antonio2368)). -* fix csv format not support tuple [#60994](https://github.com/ClickHouse/ClickHouse/pull/60994) ([shuai.xu](https://github.com/shuai-xu)). -* Fix deadlock in async inserts to `Log` tables via native protocol [#61055](https://github.com/ClickHouse/ClickHouse/pull/61055) ([Anton Popov](https://github.com/CurtizJ)). -* Fix lazy execution of default argument in dictGetOrDefault for RangeHashedDictionary [#61196](https://github.com/ClickHouse/ClickHouse/pull/61196) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix multiple bugs in groupArraySorted [#61203](https://github.com/ClickHouse/ClickHouse/pull/61203) ([Raúl Marín](https://github.com/Algunenano)). -* Fix Keeper reconfig for standalone binary [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix usage of session_token in S3 engine [#61234](https://github.com/ClickHouse/ClickHouse/pull/61234) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix possible incorrect result of aggregate function `uniqExact` [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)). -* Fix bugs in show database [#61269](https://github.com/ClickHouse/ClickHouse/pull/61269) ([Raúl Marín](https://github.com/Algunenano)). -* Fix logical error in RabbitMQ storage with MATERIALIZED columns [#61320](https://github.com/ClickHouse/ClickHouse/pull/61320) ([vdimir](https://github.com/vdimir)). -* Fix CREATE OR REPLACE DICTIONARY [#61356](https://github.com/ClickHouse/ClickHouse/pull/61356) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix crash in ObjectJson parsing array with nulls [#61364](https://github.com/ClickHouse/ClickHouse/pull/61364) ([vdimir](https://github.com/vdimir)). -* Fix ATTACH query with external ON CLUSTER [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)). 
-* Fix consecutive keys optimization for nullable keys [#61393](https://github.com/ClickHouse/ClickHouse/pull/61393) ([Anton Popov](https://github.com/CurtizJ)). -* fix issue of actions dag split [#61458](https://github.com/ClickHouse/ClickHouse/pull/61458) ([Raúl Marín](https://github.com/Algunenano)). -* Fix finishing a failed RESTORE [#61466](https://github.com/ClickHouse/ClickHouse/pull/61466) ([Vitaly Baranov](https://github.com/vitlibar)). -* Disable async_insert_use_adaptive_busy_timeout correctly with compatibility settings [#61468](https://github.com/ClickHouse/ClickHouse/pull/61468) ([Raúl Marín](https://github.com/Algunenano)). -* Allow queuing in restore pool [#61475](https://github.com/ClickHouse/ClickHouse/pull/61475) ([Nikita Taranov](https://github.com/nickitat)). -* Fix bug when reading system.parts using UUID (issue 61220). [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)). -* Fix ALTER QUERY MODIFY SQL SECURITY [#61480](https://github.com/ClickHouse/ClickHouse/pull/61480) ([pufit](https://github.com/pufit)). -* Fix crash in window view [#61526](https://github.com/ClickHouse/ClickHouse/pull/61526) ([Alexey Milovidov](https://github.com/alexey-milovidov)). -* Fix `repeat` with non native integers [#61527](https://github.com/ClickHouse/ClickHouse/pull/61527) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix client `-s` argument [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)). -* Reset part level upon attach from disk on MergeTree [#61536](https://github.com/ClickHouse/ClickHouse/pull/61536) ([Arthur Passos](https://github.com/arthurpassos)). -* Fix crash in arrayPartialReverseSort [#61539](https://github.com/ClickHouse/ClickHouse/pull/61539) ([Raúl Marín](https://github.com/Algunenano)). -* Fix string search with const position [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix addDays cause an error when used datetime64 [#61561](https://github.com/ClickHouse/ClickHouse/pull/61561) ([Shuai li](https://github.com/loneylee)). -* disallow LowCardinality input type for JSONExtract [#61617](https://github.com/ClickHouse/ClickHouse/pull/61617) ([Julia Kartseva](https://github.com/jkartseva)). -* Fix `system.part_log` for async insert with deduplication [#61620](https://github.com/ClickHouse/ClickHouse/pull/61620) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix Non-ready set for system.parts. [#61666](https://github.com/ClickHouse/ClickHouse/pull/61666) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Don't allow the same expression in ORDER BY with and without WITH FILL [#61667](https://github.com/ClickHouse/ClickHouse/pull/61667) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix actual_part_name for REPLACE_RANGE (`Entry actual part isn't empty yet`) [#61675](https://github.com/ClickHouse/ClickHouse/pull/61675) ([Alexander Tokmakov](https://github.com/tavplubix)). -* Fix columns after executing MODIFY QUERY for a materialized view with internal table [#61734](https://github.com/ClickHouse/ClickHouse/pull/61734) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` for incorrect UTF-8 [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)). -* Fix RANGE frame is not supported for Nullable columns. 
[#61766](https://github.com/ClickHouse/ClickHouse/pull/61766) ([YuanLiu](https://github.com/ditgittube)).
-* Revert "Revert "Fix bug when reading system.parts using UUID (issue 61220)."" [#61779](https://github.com/ClickHouse/ClickHouse/pull/61779) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+* Fix function execution over const and LowCardinality with GROUP BY const for analyzer. [#59986](https://github.com/ClickHouse/ClickHouse/pull/59986) ([Azat Khuzhin](https://github.com/azat)).
+* Fix finished_mutations_to_keep=0 for MergeTree (as the docs say, 0 means keep everything). [#60031](https://github.com/ClickHouse/ClickHouse/pull/60031) ([Azat Khuzhin](https://github.com/azat)).
+* Fixed a bug in parallel optimization for queries with `FINAL`, which could give an incorrect result in rare cases. [#60041](https://github.com/ClickHouse/ClickHouse/pull/60041) ([Maksim Kita](https://github.com/kitaisreal)).
+* Updated to not include account_name in endpoint if the flag `endpoint_contains_account_name` is set, and fixed an issue with an empty container name. [#60251](https://github.com/ClickHouse/ClickHouse/pull/60251) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
+* Fix a bug in the LRUResource cache implementation that can be triggered by incorrect component usage; the error can't be triggered with current ClickHouse usage. Closes [#60122](https://github.com/ClickHouse/ClickHouse/issues/60122). [#60262](https://github.com/ClickHouse/ClickHouse/pull/60262) ([shanfengp](https://github.com/Aed-p)).
+* Force reanalysis of the query if parallel replicas aren't supported in a subquery. [#60362](https://github.com/ClickHouse/ClickHouse/pull/60362) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix usage of plain metadata type for new disks configuration option. [#60396](https://github.com/ClickHouse/ClickHouse/pull/60396) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix logical error 'Cannot capture column because it has incompatible type' in mapContainsKeyLike. [#60451](https://github.com/ClickHouse/ClickHouse/pull/60451) ([Kruglov Pavel](https://github.com/Avogar)).
+* Avoid calculation of scalar subqueries for `CREATE TABLE`. Fixes [#59795](https://github.com/ClickHouse/ClickHouse/issues/59795) and [#59930](https://github.com/ClickHouse/ClickHouse/issues/59930). Attempt to re-implement https://github.com/ClickHouse/ClickHouse/pull/57855. [#60464](https://github.com/ClickHouse/ClickHouse/pull/60464) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix deadlock in parallel parsing when lots of rows are skipped due to errors. [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix the issue of `max_query_size` for KQL compound operator like mv-expand. Related to [#59626](https://github.com/ClickHouse/ClickHouse/issues/59626). [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)).
+* Keeper fix: add timeouts when waiting for commit logs. Keeper could get stuck if the log successfully gets replicated but never committed. [#60544](https://github.com/ClickHouse/ClickHouse/pull/60544) ([Antonio Andelic](https://github.com/antonio2368)).
+* Reduce the number of read rows from `system.numbers`. Fixes [#59418](https://github.com/ClickHouse/ClickHouse/issues/59418). [#60546](https://github.com/ClickHouse/ClickHouse/pull/60546) ([JackyWoo](https://github.com/JackyWoo)).
+* Don't output number tips for date types.
[#60577](https://github.com/ClickHouse/ClickHouse/pull/60577) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix unexpected result during reading from tables with virtual columns when the filter contains non-deterministic functions. Closes [#61106](https://github.com/ClickHouse/ClickHouse/issues/61106). [#60586](https://github.com/ClickHouse/ClickHouse/pull/60586) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix logical error on bad compatibility setting value type. Closes [#60590](https://github.com/ClickHouse/ClickHouse/issues/60590). [#60596](https://github.com/ClickHouse/ClickHouse/pull/60596) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fixed potentially inconsistent aggregate function states in mixed x86-64 / ARM clusters. [#60610](https://github.com/ClickHouse/ClickHouse/pull/60610) ([Harry Lee](https://github.com/HarryLeeIBM)).
+* Isolates the ClickHouse binary from any panics in `prqlc`. [#60615](https://github.com/ClickHouse/ClickHouse/pull/60615) ([Maximilian Roos](https://github.com/max-sixty)).
+* Fix a bug where `intDiv` with decimal and date/datetime arguments leads to a crash. Closes [#60653](https://github.com/ClickHouse/ClickHouse/issues/60653). [#60672](https://github.com/ClickHouse/ClickHouse/pull/60672) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
+* Fix a bug where an attempt to run 'ALTER TABLE ... MODIFY QUERY' with a CTE ends up with a "Table [CTE] does not exist" exception (Code: 60). [#60682](https://github.com/ClickHouse/ClickHouse/pull/60682) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
+* Fix system.parts for non-Atomic/Ordinary database engines (i.e. Memory - a major user is `clickhouse-local`). [#60689](https://github.com/ClickHouse/ClickHouse/pull/60689) ([Azat Khuzhin](https://github.com/azat)).
+* Fix "Invalid storage definition in metadata file" for parameterized views. [#60708](https://github.com/ClickHouse/ClickHouse/pull/60708) ([Azat Khuzhin](https://github.com/azat)).
+* Fix buffer overflow that can happen if the attacker asks the HTTP server to decompress data with a composition of codecs and size triggering numeric overflow. Fix buffer overflow that can happen inside codec NONE on wrong input data. This was submitted by the TIANGONG research team through our [Bug Bounty program](https://github.com/ClickHouse/ClickHouse/issues/38986). [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Functions for SQL/JSON were able to read uninitialized memory. This closes [#60017](https://github.com/ClickHouse/ClickHouse/issues/60017). Found by Fuzzer. [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Remove wrong sanitize checking in aggregate function quantileGK: `sampled_len` in `ApproxSampler` is not guaranteed to be less than `default_compress_threshold`; `default_compress_threshold` is just a soft limitation while executing `ApproxSampler::insert`. This issue was reproduced in https://github.com/oap-project/gluten/pull/4829. [#60740](https://github.com/ClickHouse/ClickHouse/pull/60740) ([李扬](https://github.com/taiyang-li)).
+* Fix the issue causing undesired deduplication on insert-select queries passing a custom `insert_deduplication_token`. The change sets streams to 1 in those cases to prevent the issue from happening at the expense of ignoring `max_insert_threads > 1`. [#60745](https://github.com/ClickHouse/ClickHouse/pull/60745) ([Jordi Villar](https://github.com/jrdi)).
+* Do not set aws custom metadata `x-amz-meta-*` headers on UploadPart & CompleteMultipartUpload calls. [#60748](https://github.com/ClickHouse/ClickHouse/pull/60748) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
+* One more fix for toStartOfInterval returning a wrong result for intervals smaller than a second. [#60763](https://github.com/ClickHouse/ClickHouse/pull/60763) ([Andrey Zvonov](https://github.com/zvonand)).
+* Fix crash in arrayEnumerateRanked. [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix crash when using input() in INSERT SELECT JOIN. Closes [#60035](https://github.com/ClickHouse/ClickHouse/issues/60035). [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix crash when the `allow_experimental_analyzer` setting value is changed in subqueries. [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)).
+* Avoid segfault if too many keys are skipped when reading from S3. [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix a possible hang on error while reloading a dictionary with `SHARDS`. [#60926](https://github.com/ClickHouse/ClickHouse/pull/60926) ([vdimir](https://github.com/vdimir)).
+* Fix async RESTORE with Replicated database. [#60934](https://github.com/ClickHouse/ClickHouse/pull/60934) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix CSV writing tuples in a wrong format that could not be read back. [#60994](https://github.com/ClickHouse/ClickHouse/pull/60994) ([shuai.xu](https://github.com/shuai-xu)).
+* Fixed deadlock in async inserts to `Log` tables via native protocol. [#61055](https://github.com/ClickHouse/ClickHouse/pull/61055) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix lazy execution of the default argument in dictGetOrDefault for RangeHashedDictionary that could lead to a nullptr dereference on bad column types in FunctionsConversion. Closes [#56661](https://github.com/ClickHouse/ClickHouse/issues/56661). [#61196](https://github.com/ClickHouse/ClickHouse/pull/61196) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix multiple bugs in groupArraySorted. [#61203](https://github.com/ClickHouse/ClickHouse/pull/61203) ([Raúl Marín](https://github.com/Algunenano)).
+* Keeper: fix runtime reconfig for standalone binary. [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix usage of session_token in S3 engine. Fixes https://github.com/ClickHouse/ClickHouse/pull/57850#issuecomment-1966404710. [#61234](https://github.com/ClickHouse/ClickHouse/pull/61234) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix possible incorrect result of aggregate function `uniqExact`. [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix bugs in show database. [#61269](https://github.com/ClickHouse/ClickHouse/pull/61269) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix possible `LOGICAL_ERROR` in case a storage with `RabbitMQ` engine has unsupported `MATERIALIZED|ALIAS|DEFAULT` columns. [#61320](https://github.com/ClickHouse/ClickHouse/pull/61320) ([vdimir](https://github.com/vdimir)).
+* Fix `CREATE OR REPLACE DICTIONARY` with `lazy_load` turned off; see the sketch below. [#61356](https://github.com/ClickHouse/ClickHouse/pull/61356) ([Vitaly Baranov](https://github.com/vitlibar)).
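A minimal sketch of the statement the last entry refers to, with a hypothetical dictionary `d` and source table `d_source` (the server-level lazy-load setting is `dictionaries_lazy_load`; the entry abbreviates it as `lazy_load`):

``` sql
-- With lazy loading disabled, CREATE OR REPLACE DICTIONARY used to misbehave.
CREATE OR REPLACE DICTIONARY d
(
    id UInt64,
    value String
)
PRIMARY KEY id
SOURCE(CLICKHOUSE(TABLE 'd_source'))
LAYOUT(FLAT())
LIFETIME(0);
```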
+* Fix possible crash in the `Object('json')` data type when parsing an array with `null`s. [#61364](https://github.com/ClickHouse/ClickHouse/pull/61364) ([vdimir](https://github.com/vdimir)).
+* Fix the ATTACH query with the ON CLUSTER clause when the database does not exist on the initiator node. Closes [#55009](https://github.com/ClickHouse/ClickHouse/issues/55009). [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Fixed possible wrong result of aggregation with nullable keys. [#61393](https://github.com/ClickHouse/ClickHouse/pull/61393) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix `ActionsDAG::split` not guaranteeing that executing the first and then the second part on a block is equivalent to executing the initial DAG. [#61458](https://github.com/ClickHouse/ClickHouse/pull/61458) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix finishing a failed RESTORE. [#61466](https://github.com/ClickHouse/ClickHouse/pull/61466) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Disable async_insert_use_adaptive_busy_timeout correctly with compatibility settings. [#61468](https://github.com/ClickHouse/ClickHouse/pull/61468) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix deadlock during `restore database` execution if `restore_threads` was set to 1. [#61475](https://github.com/ClickHouse/ClickHouse/pull/61475) ([Nikita Taranov](https://github.com/nickitat)).
+* Fix incorrect results when filtering `system.parts` or `system.parts_columns` using UUID. [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)).
+* Fix the `ALTER QUERY MODIFY SQL SECURITY` queries to override the table's DDL correctly. [#61480](https://github.com/ClickHouse/ClickHouse/pull/61480) ([pufit](https://github.com/pufit)).
+* The experimental "window view" feature (it is disabled by default), which should not be used in production, could lead to a crash. The issue was identified by YohannJardin via the Bugcrowd program. [#61526](https://github.com/ClickHouse/ClickHouse/pull/61526) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Fix `repeat` with non-native integers (e.g. `UInt256`). [#61527](https://github.com/ClickHouse/ClickHouse/pull/61527) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix the `clickhouse-client -s` argument, which was broken by being defined twice. [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
+* Fix too high part level reported in [#58558](https://github.com/ClickHouse/ClickHouse/issues/58558) by resetting MergeTree part levels upon attach from disk just like `ReplicatedMergeTree` [does](https://github.com/ClickHouse/ClickHouse/blob/9cd7e6155c7027baccd6dc5380d0813db94b03cc/src/Storages/MergeTree/ReplicatedMergeTreeSink.cpp#L838). [#61536](https://github.com/ClickHouse/ClickHouse/pull/61536) ([Arthur Passos](https://github.com/arthurpassos)).
+* Fix crash in arrayPartialReverseSort. [#61539](https://github.com/ClickHouse/ClickHouse/pull/61539) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix string search with a constant start position, which previously could lead to memory corruption. [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix the issue where the function `addDays` (and similar functions) reports an error when the first parameter is `DateTime64`.
[#61561](https://github.com/ClickHouse/ClickHouse/pull/61561) ([Shuai li](https://github.com/loneylee)).
+* Disallow the LowCardinality type for the column containing JSON input in the JSONExtract function. [#61617](https://github.com/ClickHouse/ClickHouse/pull/61617) ([Julia Kartseva](https://github.com/jkartseva)).
+* Add parts to `system.part_log` when created using async insert with deduplication. [#61620](https://github.com/ClickHouse/ClickHouse/pull/61620) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix `Not-ready Set` error while reading from `system.parts` (with `IN subquery`). Was introduced in [#60510](https://github.com/ClickHouse/ClickHouse/issues/60510). [#61666](https://github.com/ClickHouse/ClickHouse/pull/61666) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Don't allow the same expression in ORDER BY with and without WITH FILL. Such an invalid expression could lead to the logical error `Invalid number of rows in Chunk`. [#61667](https://github.com/ClickHouse/ClickHouse/pull/61667) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fixed `Entry actual part isn't empty yet. This is a bug. (LOGICAL_ERROR)` that might happen in rare cases after executing `REPLACE PARTITION`, `MOVE PARTITION TO TABLE` or `ATTACH PARTITION FROM`. [#61675](https://github.com/ClickHouse/ClickHouse/pull/61675) ([Alexander Tokmakov](https://github.com/tavplubix)).
+* Fix columns after executing `ALTER TABLE MODIFY QUERY` for a materialized view with an internal table. A materialized view must have the same columns as its internal table if any, however `MODIFY QUERY` could break that rule before this PR, causing the materialized view to be inconsistent. [#61734](https://github.com/ClickHouse/ClickHouse/pull/61734) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` when specifying an incorrect UTF-8 sequence. Example: [#61714](https://github.com/ClickHouse/ClickHouse/issues/61714#issuecomment-2012768202). [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)).
+* Fix the `RANGE` frame not being supported for Nullable columns, e.g. `SELECT number, sum(number) OVER (ORDER BY number ASC RANGE BETWEEN CURRENT ROW AND 1 FOLLOWING) AS sum FROM values('number Nullable(Int8)', 1, 1, 2, 3, NULL)`. [#61766](https://github.com/ClickHouse/ClickHouse/pull/61766) ([YuanLiu](https://github.com/ditgittube)).
+* Fix incorrect results when filtering `system.parts` or `system.parts_columns` using UUID. [#61779](https://github.com/ClickHouse/ClickHouse/pull/61779) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).

#### CI Fix or Improvement (changelog entry is not required)

@@ -526,7 +526,7 @@ sidebar_label: 2024
* No "please" [#61916](https://github.com/ClickHouse/ClickHouse/pull/61916) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update version_date.tsv and changelogs after v23.12.6.19-stable [#61917](https://github.com/ClickHouse/ClickHouse/pull/61917) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v24.1.8.22-stable [#61918](https://github.com/ClickHouse/ClickHouse/pull/61918) ([robot-clickhouse](https://github.com/robot-clickhouse)).
-* Fix flaky test_broken_projestions/test.py::test_broken_ignored_replic... [#61932](https://github.com/ClickHouse/ClickHouse/pull/61932) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix flaky test_broken_projestions/test.py::test_broken_ignored_replic… [#61932](https://github.com/ClickHouse/ClickHouse/pull/61932) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Check is Rust avaiable for build, if not, suggest a way to disable Rust support [#61938](https://github.com/ClickHouse/ClickHouse/pull/61938) ([Azat Khuzhin](https://github.com/azat)).
* CI: new ci menu in PR body [#61948](https://github.com/ClickHouse/ClickHouse/pull/61948) ([Max K.](https://github.com/maxknv)).
* Remove flaky test `01193_metadata_loading` [#61961](https://github.com/ClickHouse/ClickHouse/pull/61961) ([Nikita Taranov](https://github.com/nickitat)).

diff --git a/docs/changelogs/v24.3.2.23-lts.md b/docs/changelogs/v24.3.2.23-lts.md
index 4d59a1cedf6..d8adc63c8ac 100644
--- a/docs/changelogs/v24.3.2.23-lts.md
+++ b/docs/changelogs/v24.3.2.23-lts.md
@@ -9,9 +9,9 @@ sidebar_label: 2024

#### Bug Fix (user-visible misbehavior in an official stable release)

-* Fix logical error in group_by_use_nulls + grouping set + analyzer + materialize/constant [#61567](https://github.com/ClickHouse/ClickHouse/pull/61567) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix external table cannot parse data type Bool [#62115](https://github.com/ClickHouse/ClickHouse/pull/62115) ([Duc Canh Le](https://github.com/canhld94)).
-* Revert "Merge pull request [#61564](https://github.com/ClickHouse/ClickHouse/issues/61564) from liuneng1994/optimize_in_single_value" [#62135](https://github.com/ClickHouse/ClickHouse/pull/62135) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#62078](https://github.com/ClickHouse/ClickHouse/issues/62078): Fix logical error 'Unexpected return type from materialize. Expected Nullable. Got UInt8' while using group_by_use_nulls with analyzer and materialize/constant in grouping set. Closes [#61531](https://github.com/ClickHouse/ClickHouse/issues/61531). [#61567](https://github.com/ClickHouse/ClickHouse/pull/61567) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#62122](https://github.com/ClickHouse/ClickHouse/issues/62122): Fix external table not being able to parse the data type Bool. [#62115](https://github.com/ClickHouse/ClickHouse/pull/62115) ([Duc Canh Le](https://github.com/canhld94)).
+* Backported in [#62147](https://github.com/ClickHouse/ClickHouse/issues/62147): Revert "Merge pull request [#61564](https://github.com/ClickHouse/ClickHouse/issues/61564) from liuneng1994/optimize_in_single_value". The feature is broken and can't be disabled individually. [#62135](https://github.com/ClickHouse/ClickHouse/pull/62135) ([Raúl Marín](https://github.com/Algunenano)).

#### CI Fix or Improvement (changelog entry is not required)

diff --git a/docs/changelogs/v24.3.3.102-lts.md b/docs/changelogs/v24.3.3.102-lts.md
index dc89ac24208..1cdbde67031 100644
--- a/docs/changelogs/v24.3.3.102-lts.md
+++ b/docs/changelogs/v24.3.3.102-lts.md
@@ -17,36 +17,36 @@ sidebar_label: 2024

#### Bug Fix (user-visible misbehavior in an official stable release)

-* Cancel merges before removing moved parts [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Mark CANNOT_PARSE_ESCAPE_SEQUENCE error as parse error to be able to skip it in row input formats [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)).
-* Crash in Engine Merge if Row Policy does not have expression [#61971](https://github.com/ClickHouse/ClickHouse/pull/61971) ([Ilya Golshtein](https://github.com/ilejn)). -* ReadWriteBufferFromHTTP set right header host when redirected [#62068](https://github.com/ClickHouse/ClickHouse/pull/62068) ([Sema Checherinda](https://github.com/CheSema)). -* Analyzer: Fix query parameter resolution [#62186](https://github.com/ClickHouse/ClickHouse/pull/62186) ([Dmitry Novik](https://github.com/novikd)). -* Fixing NULL random seed for generateRandom with analyzer. [#62248](https://github.com/ClickHouse/ClickHouse/pull/62248) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix PartsSplitter [#62268](https://github.com/ClickHouse/ClickHouse/pull/62268) ([Nikita Taranov](https://github.com/nickitat)). -* Analyzer: Fix alias to parametrized view resolution [#62274](https://github.com/ClickHouse/ClickHouse/pull/62274) ([Dmitry Novik](https://github.com/novikd)). -* Analyzer: Fix name resolution from parent scopes [#62281](https://github.com/ClickHouse/ClickHouse/pull/62281) ([Dmitry Novik](https://github.com/novikd)). -* Fix argMax with nullable non native numeric column [#62285](https://github.com/ClickHouse/ClickHouse/pull/62285) ([Raúl Marín](https://github.com/Algunenano)). -* Fix data race on scalars in Context [#62305](https://github.com/ClickHouse/ClickHouse/pull/62305) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix analyzer with positional arguments in distributed query [#62362](https://github.com/ClickHouse/ClickHouse/pull/62362) ([flynn](https://github.com/ucasfl)). -* Fix filter pushdown from additional_table_filters in Merge engine in analyzer [#62398](https://github.com/ClickHouse/ClickHouse/pull/62398) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix GLOBAL IN table queries with analyzer. [#62409](https://github.com/ClickHouse/ClickHouse/pull/62409) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix scalar subquery in LIMIT [#62567](https://github.com/ClickHouse/ClickHouse/pull/62567) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Try to fix segfault in Hive engine [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)). -* Fix memory leak in groupArraySorted [#62597](https://github.com/ClickHouse/ClickHouse/pull/62597) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix argMin/argMax combinator state [#62708](https://github.com/ClickHouse/ClickHouse/pull/62708) ([Raúl Marín](https://github.com/Algunenano)). -* Fix temporary data in cache failing because of cache lock contention optimization [#62715](https://github.com/ClickHouse/ClickHouse/pull/62715) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix FINAL modifier is not respected in CTE with analyzer [#62811](https://github.com/ClickHouse/ClickHouse/pull/62811) ([Duc Canh Le](https://github.com/canhld94)). -* Fix crash in function `formatRow` with `JSON` format and HTTP interface [#62840](https://github.com/ClickHouse/ClickHouse/pull/62840) ([Anton Popov](https://github.com/CurtizJ)). -* Fix GCD codec [#62853](https://github.com/ClickHouse/ClickHouse/pull/62853) ([Nikita Taranov](https://github.com/nickitat)). -* Disable optimize_rewrite_aggregate_function_with_if for sum(nullable) [#62912](https://github.com/ClickHouse/ClickHouse/pull/62912) ([Raúl Marín](https://github.com/Algunenano)). 
-* Fix temporary data in cache incorrectly processing failure of cache key directory creation [#62925](https://github.com/ClickHouse/ClickHouse/pull/62925) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix optimize_rewrite_aggregate_function_with_if implicit cast [#62999](https://github.com/ClickHouse/ClickHouse/pull/62999) ([Raúl Marín](https://github.com/Algunenano)).
-* Do not remove server constants from GROUP BY key for secondary query. [#63047](https://github.com/ClickHouse/ClickHouse/pull/63047) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Fix incorrect judgement of of monotonicity of function abs [#63097](https://github.com/ClickHouse/ClickHouse/pull/63097) ([Duc Canh Le](https://github.com/canhld94)).
-* Set server name for SSL handshake in MongoDB engine [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)).
-* Use user specified db instead of "config" for MongoDB wire protocol version check [#63126](https://github.com/ClickHouse/ClickHouse/pull/63126) ([Alexander Gololobov](https://github.com/davenger)).
-* Format SQL security option only in `CREATE VIEW` queries. [#63136](https://github.com/ClickHouse/ClickHouse/pull/63136) ([pufit](https://github.com/pufit)).
+* Backported in [#62533](https://github.com/ClickHouse/ClickHouse/issues/62533): Fix data race between `MOVE PARTITION` query and merges resulting in intersecting parts. [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+* Backported in [#62244](https://github.com/ClickHouse/ClickHouse/issues/62244): Fix skipping escape sequence parsing errors during JSON data parsing while using `input_format_allow_errors_num/ratio` settings. [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#62220](https://github.com/ClickHouse/ClickHouse/issues/62220): Fix a crash in the Merge engine if a row policy does not have an expression. [#61971](https://github.com/ClickHouse/ClickHouse/pull/61971) ([Ilya Golshtein](https://github.com/ilejn)).
+* Backported in [#62234](https://github.com/ClickHouse/ClickHouse/issues/62234): `ReadWriteBufferFromHTTP`: set the right `Host` header when redirected. [#62068](https://github.com/ClickHouse/ClickHouse/pull/62068) ([Sema Checherinda](https://github.com/CheSema)).
+* Backported in [#62278](https://github.com/ClickHouse/ClickHouse/issues/62278): Fix query parameter resolution with `allow_experimental_analyzer` enabled. Closes [#62113](https://github.com/ClickHouse/ClickHouse/issues/62113). [#62186](https://github.com/ClickHouse/ClickHouse/pull/62186) ([Dmitry Novik](https://github.com/novikd)).
+* Backported in [#62354](https://github.com/ClickHouse/ClickHouse/issues/62354): Fix `generateRandom` with `NULL` in the seed argument. Fixes [#62092](https://github.com/ClickHouse/ClickHouse/issues/62092). [#62248](https://github.com/ClickHouse/ClickHouse/pull/62248) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#62412](https://github.com/ClickHouse/ClickHouse/issues/62412): When some index columns are not loaded into memory for some parts of a *MergeTree table, queries with `FINAL` might produce wrong results. Now we explicitly choose only the common prefix of index columns for all parts to avoid this issue; see the sketch below. [#62268](https://github.com/ClickHouse/ClickHouse/pull/62268) ([Nikita Taranov](https://github.com/nickitat)).
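For illustration, a minimal sketch of the query shape affected by the `FINAL` entry above, assuming a hypothetical `ReplacingMergeTree` table `events` (not taken from the PR):

``` sql
-- Before the fix, FINAL could produce wrong results when index columns
-- were not loaded into memory for some of the parts.
SELECT key, max(value) FROM events FINAL GROUP BY key;
```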
+* Backported in [#62733](https://github.com/ClickHouse/ClickHouse/issues/62733): Fix inability to address a parametrized view in SELECT queries via aliases. [#62274](https://github.com/ClickHouse/ClickHouse/pull/62274) ([Dmitry Novik](https://github.com/novikd)).
+* Backported in [#62407](https://github.com/ClickHouse/ClickHouse/issues/62407): Fix name resolution in the case when an identifier is resolved to an executed scalar subquery. [#62281](https://github.com/ClickHouse/ClickHouse/pull/62281) ([Dmitry Novik](https://github.com/novikd)).
+* Backported in [#62331](https://github.com/ClickHouse/ClickHouse/issues/62331): Fix argMax with a nullable non-native numeric column. [#62285](https://github.com/ClickHouse/ClickHouse/pull/62285) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#62344](https://github.com/ClickHouse/ClickHouse/issues/62344): Fix data race on scalars in Context. [#62305](https://github.com/ClickHouse/ClickHouse/pull/62305) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#62484](https://github.com/ClickHouse/ClickHouse/issues/62484): Resolve positional arguments only on the initiator node. Closes [#62289](https://github.com/ClickHouse/ClickHouse/issues/62289). [#62362](https://github.com/ClickHouse/ClickHouse/pull/62362) ([flynn](https://github.com/ucasfl)).
+* Backported in [#62442](https://github.com/ClickHouse/ClickHouse/issues/62442): Fix filter pushdown from additional_table_filters in Merge engine in analyzer. Closes [#62229](https://github.com/ClickHouse/ClickHouse/issues/62229). [#62398](https://github.com/ClickHouse/ClickHouse/pull/62398) ([Kruglov Pavel](https://github.com/Avogar)).
+* Backported in [#62475](https://github.com/ClickHouse/ClickHouse/issues/62475): Fix `Unknown expression or table expression identifier` error for `GLOBAL IN table` queries (with new analyzer). Fixes [#62286](https://github.com/ClickHouse/ClickHouse/issues/62286). [#62409](https://github.com/ClickHouse/ClickHouse/pull/62409) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#62612](https://github.com/ClickHouse/ClickHouse/issues/62612): Fix an error `LIMIT expression must be constant` in queries with a constant expression in `LIMIT`/`OFFSET` which contains a scalar subquery. Fixes [#62294](https://github.com/ClickHouse/ClickHouse/issues/62294). [#62567](https://github.com/ClickHouse/ClickHouse/pull/62567) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#62679](https://github.com/ClickHouse/ClickHouse/issues/62679): Fix segmentation fault when using the Hive table engine. Reference [#62154](https://github.com/ClickHouse/ClickHouse/issues/62154), [#62560](https://github.com/ClickHouse/ClickHouse/issues/62560). [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Backported in [#62641](https://github.com/ClickHouse/ClickHouse/issues/62641): Fix memory leak in groupArraySorted. Fix [#62536](https://github.com/ClickHouse/ClickHouse/issues/62536). [#62597](https://github.com/ClickHouse/ClickHouse/pull/62597) ([Antonio Andelic](https://github.com/antonio2368)).
+* Backported in [#62770](https://github.com/ClickHouse/ClickHouse/issues/62770): Fix argMin/argMax combinator state; see the sketch below. [#62708](https://github.com/ClickHouse/ClickHouse/pull/62708) ([Raúl Marín](https://github.com/Algunenano)).
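A brief sketch of what "combinator state" means in the entry above, assuming a hypothetical table `t(shard UInt8, ts DateTime, value String)`: the `-State`/`-Merge` combinators produce and later combine partial aggregation states, which is the code path the fix concerns:

``` sql
-- Partial argMax states are computed per shard and merged afterwards.
SELECT argMaxMerge(st)
FROM
(
    SELECT argMaxState(value, ts) AS st
    FROM t
    GROUP BY shard
);
```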
+* Backported in [#62750](https://github.com/ClickHouse/ClickHouse/issues/62750): Fix temporary data in cache failing because of a small value of setting `filesystem_cache_reserve_space_wait_lock_timeout_milliseconds`. Introduced a separate setting `temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds`. [#62715](https://github.com/ClickHouse/ClickHouse/pull/62715) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#62993](https://github.com/ClickHouse/ClickHouse/issues/62993): Fix an error when `FINAL` is not applied when specified in a CTE (new analyzer). Fixes [#62779](https://github.com/ClickHouse/ClickHouse/issues/62779). [#62811](https://github.com/ClickHouse/ClickHouse/pull/62811) ([Duc Canh Le](https://github.com/canhld94)).
+* Backported in [#62859](https://github.com/ClickHouse/ClickHouse/issues/62859): Fixed crash in function `formatRow` with `JSON` format in queries executed via the HTTP interface. [#62840](https://github.com/ClickHouse/ClickHouse/pull/62840) ([Anton Popov](https://github.com/CurtizJ)).
+* Backported in [#63056](https://github.com/ClickHouse/ClickHouse/issues/63056): Fixed a bug in the GCD codec implementation that may lead to server crashes. [#62853](https://github.com/ClickHouse/ClickHouse/pull/62853) ([Nikita Taranov](https://github.com/nickitat)).
+* Backported in [#62960](https://github.com/ClickHouse/ClickHouse/issues/62960): Disable optimize_rewrite_aggregate_function_with_if for sum(nullable). [#62912](https://github.com/ClickHouse/ClickHouse/pull/62912) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#63032](https://github.com/ClickHouse/ClickHouse/issues/63032): Fix incorrect behaviour of temporary data in cache when creation of the cache key base directory fails with `no space left on device`. [#62925](https://github.com/ClickHouse/ClickHouse/pull/62925) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#63148](https://github.com/ClickHouse/ClickHouse/issues/63148): Fix optimize_rewrite_aggregate_function_with_if implicit cast. [#62999](https://github.com/ClickHouse/ClickHouse/pull/62999) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#63146](https://github.com/ClickHouse/ClickHouse/issues/63146): Fix `Not found column in block` error for distributed queries with server-side constants in the `GROUP BY` key. Fixes [#62682](https://github.com/ClickHouse/ClickHouse/issues/62682). [#63047](https://github.com/ClickHouse/ClickHouse/pull/63047) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Backported in [#63144](https://github.com/ClickHouse/ClickHouse/issues/63144): Fix incorrect judgement of monotonicity of function `abs`. [#63097](https://github.com/ClickHouse/ClickHouse/pull/63097) ([Duc Canh Le](https://github.com/canhld94)).
+* Backported in [#63178](https://github.com/ClickHouse/ClickHouse/issues/63178): Setting server_name might help with the recently reported SSL handshake error when connecting to MongoDB Atlas: `Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: error:10000438:SSL routines:OPENSSL_internal:TLSV1_ALERT_INTERNAL_ERROR`. [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)).
+* Backported in [#63170](https://github.com/ClickHouse/ClickHouse/issues/63170): The wire protocol version check for MongoDB used to try accessing the "config" database, but this can fail if the user doesn't have permissions for it. The fix is to use the database name provided by the user.
[#63126](https://github.com/ClickHouse/ClickHouse/pull/63126) ([Alexander Gololobov](https://github.com/davenger)).
+* Backported in [#63193](https://github.com/ClickHouse/ClickHouse/issues/63193): Fix a bug when an `SQL SECURITY` statement appears in all `CREATE` queries if the server setting `ignore_empty_sql_security_in_create_view_query=true`. See https://github.com/ClickHouse/ClickHouse/pull/63134. [#63136](https://github.com/ClickHouse/ClickHouse/pull/63136) ([pufit](https://github.com/pufit)).

#### CI Fix or Improvement (changelog entry is not required)

diff --git a/docs/changelogs/v24.4.1.2088-stable.md b/docs/changelogs/v24.4.1.2088-stable.md
index b8d83f1a31f..06e704356d4 100644
--- a/docs/changelogs/v24.4.1.2088-stable.md
+++ b/docs/changelogs/v24.4.1.2088-stable.md
@@ -106,75 +106,75 @@ sidebar_label: 2024

#### Bug Fix (user-visible misbehavior in an official stable release)

-* Fix parser error when using COUNT(*) with FILTER clause [#61357](https://github.com/ClickHouse/ClickHouse/pull/61357) ([Duc Canh Le](https://github.com/canhld94)).
-* Fix logical error in group_by_use_nulls + grouping set + analyzer + materialize/constant [#61567](https://github.com/ClickHouse/ClickHouse/pull/61567) ([Kruglov Pavel](https://github.com/Avogar)).
-* Cancel merges before removing moved parts [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
-* Try to fix abort in arrow [#61720](https://github.com/ClickHouse/ClickHouse/pull/61720) ([Kruglov Pavel](https://github.com/Avogar)).
-* Search for convert_to_replicated flag at the correct path [#61769](https://github.com/ClickHouse/ClickHouse/pull/61769) ([Kirill](https://github.com/kirillgarbar)).
-* Fix possible connections data-race for distributed_foreground_insert/distributed_background_insert_batch [#61867](https://github.com/ClickHouse/ClickHouse/pull/61867) ([Azat Khuzhin](https://github.com/azat)).
-* Mark CANNOT_PARSE_ESCAPE_SEQUENCE error as parse error to be able to skip it in row input formats [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)).
-* Fix writing exception message in output format in HTTP when http_wait_end_of_query is used [#61951](https://github.com/ClickHouse/ClickHouse/pull/61951) ([Kruglov Pavel](https://github.com/Avogar)).
-* Proper fix for LowCardinality together with JSONExtact functions [#61957](https://github.com/ClickHouse/ClickHouse/pull/61957) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
-* Crash in Engine Merge if Row Policy does not have expression [#61971](https://github.com/ClickHouse/ClickHouse/pull/61971) ([Ilya Golshtein](https://github.com/ilejn)).
-* Fix WriteBufferAzureBlobStorage destructor uncaught exception [#61988](https://github.com/ClickHouse/ClickHouse/pull/61988) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
-* Fix CREATE TABLE w/o columns definition for ReplicatedMergeTree [#62040](https://github.com/ClickHouse/ClickHouse/pull/62040) ([Azat Khuzhin](https://github.com/azat)).
-* Fix optimize_skip_unused_shards_rewrite_in for composite sharding key [#62047](https://github.com/ClickHouse/ClickHouse/pull/62047) ([Azat Khuzhin](https://github.com/azat)).
-* ReadWriteBufferFromHTTP set right header host when redirected [#62068](https://github.com/ClickHouse/ClickHouse/pull/62068) ([Sema Checherinda](https://github.com/CheSema)).
-* Fix external table cannot parse data type Bool [#62115](https://github.com/ClickHouse/ClickHouse/pull/62115) ([Duc Canh Le](https://github.com/canhld94)). -* Revert "Merge pull request [#61564](https://github.com/ClickHouse/ClickHouse/issues/61564) from liuneng1994/optimize_in_single_value" [#62135](https://github.com/ClickHouse/ClickHouse/pull/62135) ([Raúl Marín](https://github.com/Algunenano)). -* Add test for [#35215](https://github.com/ClickHouse/ClickHouse/issues/35215) [#62180](https://github.com/ClickHouse/ClickHouse/pull/62180) ([Raúl Marín](https://github.com/Algunenano)). -* Analyzer: Fix query parameter resolution [#62186](https://github.com/ClickHouse/ClickHouse/pull/62186) ([Dmitry Novik](https://github.com/novikd)). -* Fix restoring parts while readonly [#62207](https://github.com/ClickHouse/ClickHouse/pull/62207) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix crash in index definition containing sql udf [#62225](https://github.com/ClickHouse/ClickHouse/pull/62225) ([vdimir](https://github.com/vdimir)). -* Fixing NULL random seed for generateRandom with analyzer. [#62248](https://github.com/ClickHouse/ClickHouse/pull/62248) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Correctly handle const columns in DistinctTransfom [#62250](https://github.com/ClickHouse/ClickHouse/pull/62250) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix PartsSplitter [#62268](https://github.com/ClickHouse/ClickHouse/pull/62268) ([Nikita Taranov](https://github.com/nickitat)). -* Analyzer: Fix alias to parametrized view resolution [#62274](https://github.com/ClickHouse/ClickHouse/pull/62274) ([Dmitry Novik](https://github.com/novikd)). -* Analyzer: Fix name resolution from parent scopes [#62281](https://github.com/ClickHouse/ClickHouse/pull/62281) ([Dmitry Novik](https://github.com/novikd)). -* Fix argMax with nullable non native numeric column [#62285](https://github.com/ClickHouse/ClickHouse/pull/62285) ([Raúl Marín](https://github.com/Algunenano)). -* Fix BACKUP and RESTORE of a materialized view in Ordinary database [#62295](https://github.com/ClickHouse/ClickHouse/pull/62295) ([Vitaly Baranov](https://github.com/vitlibar)). -* Fix data race on scalars in Context [#62305](https://github.com/ClickHouse/ClickHouse/pull/62305) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix primary key in materialized view [#62319](https://github.com/ClickHouse/ClickHouse/pull/62319) ([Murat Khairulin](https://github.com/mxwell)). -* Do not build multithread insert pipeline for tables without support [#62333](https://github.com/ClickHouse/ClickHouse/pull/62333) ([vdimir](https://github.com/vdimir)). -* Fix analyzer with positional arguments in distributed query [#62362](https://github.com/ClickHouse/ClickHouse/pull/62362) ([flynn](https://github.com/ucasfl)). -* Fix filter pushdown from additional_table_filters in Merge engine in analyzer [#62398](https://github.com/ClickHouse/ClickHouse/pull/62398) ([Kruglov Pavel](https://github.com/Avogar)). -* Fix GLOBAL IN table queries with analyzer. [#62409](https://github.com/ClickHouse/ClickHouse/pull/62409) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Respect settings truncate_on_insert/create_new_file_on_insert in s3/hdfs/azure engines during partitioned write [#62425](https://github.com/ClickHouse/ClickHouse/pull/62425) ([Kruglov Pavel](https://github.com/Avogar)). 
-* Fix backup restore path for AzureBlobStorage [#62447](https://github.com/ClickHouse/ClickHouse/pull/62447) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)). -* Fix SimpleSquashingChunksTransform [#62451](https://github.com/ClickHouse/ClickHouse/pull/62451) ([Nikita Taranov](https://github.com/nickitat)). -* Fix capture of nested lambda. [#62462](https://github.com/ClickHouse/ClickHouse/pull/62462) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Fix validation of special MergeTree columns [#62498](https://github.com/ClickHouse/ClickHouse/pull/62498) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)). -* Avoid crash when reading protobuf with recursive types [#62506](https://github.com/ClickHouse/ClickHouse/pull/62506) ([Raúl Marín](https://github.com/Algunenano)). -* Fix a bug moving one partition from one to itself [#62524](https://github.com/ClickHouse/ClickHouse/pull/62524) ([helifu](https://github.com/helifu)). -* Fix scalar subquery in LIMIT [#62567](https://github.com/ClickHouse/ClickHouse/pull/62567) ([Nikolai Kochetov](https://github.com/KochetovNicolai)). -* Try to fix segfault in Hive engine [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)). -* Fix memory leak in groupArraySorted [#62597](https://github.com/ClickHouse/ClickHouse/pull/62597) ([Antonio Andelic](https://github.com/antonio2368)). -* Fix crash in largestTriangleThreeBuckets [#62646](https://github.com/ClickHouse/ClickHouse/pull/62646) ([Raúl Marín](https://github.com/Algunenano)). -* Fix tumble[Start,End] and hop[Start,End] for bigger resolutions [#62705](https://github.com/ClickHouse/ClickHouse/pull/62705) ([Jordi Villar](https://github.com/jrdi)). -* Fix argMin/argMax combinator state [#62708](https://github.com/ClickHouse/ClickHouse/pull/62708) ([Raúl Marín](https://github.com/Algunenano)). -* Fix temporary data in cache failing because of cache lock contention optimization [#62715](https://github.com/ClickHouse/ClickHouse/pull/62715) ([Kseniia Sumarokova](https://github.com/kssenii)). -* Fix crash in function `mergeTreeIndex` [#62762](https://github.com/ClickHouse/ClickHouse/pull/62762) ([Anton Popov](https://github.com/CurtizJ)). -* fix: update: nested materialized columns: size check fixes [#62773](https://github.com/ClickHouse/ClickHouse/pull/62773) ([Eliot Hautefeuille](https://github.com/hileef)). -* Fix FINAL modifier is not respected in CTE with analyzer [#62811](https://github.com/ClickHouse/ClickHouse/pull/62811) ([Duc Canh Le](https://github.com/canhld94)). -* Fix crash in function `formatRow` with `JSON` format and HTTP interface [#62840](https://github.com/ClickHouse/ClickHouse/pull/62840) ([Anton Popov](https://github.com/CurtizJ)). -* Azure: fix building final url from endpoint object [#62850](https://github.com/ClickHouse/ClickHouse/pull/62850) ([Daniel Pozo Escalona](https://github.com/danipozo)). -* Fix GCD codec [#62853](https://github.com/ClickHouse/ClickHouse/pull/62853) ([Nikita Taranov](https://github.com/nickitat)). -* Fix LowCardinality(Nullable) key in hyperrectangle [#62866](https://github.com/ClickHouse/ClickHouse/pull/62866) ([Amos Bird](https://github.com/amosbird)). -* Fix fromUnixtimestamp in joda syntax while the input value beyond UInt32 [#62901](https://github.com/ClickHouse/ClickHouse/pull/62901) ([KevinyhZou](https://github.com/KevinyhZou)). 
-* Disable optimize_rewrite_aggregate_function_with_if for sum(nullable) [#62912](https://github.com/ClickHouse/ClickHouse/pull/62912) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix PREWHERE for StorageBuffer with different source table column types. [#62916](https://github.com/ClickHouse/ClickHouse/pull/62916) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Fix temporary data in cache incorrectly processing failure of cache key directory creation [#62925](https://github.com/ClickHouse/ClickHouse/pull/62925) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* gRPC: fix crash on IPv6 peer connection [#62978](https://github.com/ClickHouse/ClickHouse/pull/62978) ([Konstantin Bogdanov](https://github.com/thevar1able)).
-* Fix possible CHECKSUM_DOESNT_MATCH (and others) during replicated fetches [#62987](https://github.com/ClickHouse/ClickHouse/pull/62987) ([Azat Khuzhin](https://github.com/azat)).
-* Fix terminate with uncaught exception in temporary data in cache [#62998](https://github.com/ClickHouse/ClickHouse/pull/62998) ([Kseniia Sumarokova](https://github.com/kssenii)).
-* Fix optimize_rewrite_aggregate_function_with_if implicit cast [#62999](https://github.com/ClickHouse/ClickHouse/pull/62999) ([Raúl Marín](https://github.com/Algunenano)).
-* Fix unhandled exception in ~RestorerFromBackup [#63040](https://github.com/ClickHouse/ClickHouse/pull/63040) ([Vitaly Baranov](https://github.com/vitlibar)).
-* Do not remove server constants from GROUP BY key for secondary query. [#63047](https://github.com/ClickHouse/ClickHouse/pull/63047) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
-* Fix incorrect judgement of of monotonicity of function abs [#63097](https://github.com/ClickHouse/ClickHouse/pull/63097) ([Duc Canh Le](https://github.com/canhld94)).
-* Make sanity check of settings worse [#63119](https://github.com/ClickHouse/ClickHouse/pull/63119) ([Raúl Marín](https://github.com/Algunenano)).
-* Set server name for SSL handshake in MongoDB engine [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)).
-* Use user specified db instead of "config" for MongoDB wire protocol version check [#63126](https://github.com/ClickHouse/ClickHouse/pull/63126) ([Alexander Gololobov](https://github.com/davenger)).
-* Format SQL security option only in `CREATE VIEW` queries. [#63136](https://github.com/ClickHouse/ClickHouse/pull/63136) ([pufit](https://github.com/pufit)).
+* Fix parser error when using COUNT(*) with FILTER clause. [#61357](https://github.com/ClickHouse/ClickHouse/pull/61357) ([Duc Canh Le](https://github.com/canhld94)).
+* Fix logical error 'Unexpected return type from materialize. Expected Nullable. Got UInt8' while using group_by_use_nulls with analyzer and materialize/constant in grouping set. Closes [#61531](https://github.com/ClickHouse/ClickHouse/issues/61531). [#61567](https://github.com/ClickHouse/ClickHouse/pull/61567) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix data race between `MOVE PARTITION` query and merges resulting in intersecting parts. [#61610](https://github.com/ClickHouse/ClickHouse/pull/61610) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+* Try to fix abort in arrow. [#61720](https://github.com/ClickHouse/ClickHouse/pull/61720) ([Kruglov Pavel](https://github.com/Avogar)).
+* Search for MergeTree to ReplicatedMergeTree conversion flag at the correct location for tables with custom storage policy.
[#61769](https://github.com/ClickHouse/ClickHouse/pull/61769) ([Kirill](https://github.com/kirillgarbar)).
+* Fix possible connections data-race for distributed_foreground_insert/distributed_background_insert_batch that leads to crashes. [#61867](https://github.com/ClickHouse/ClickHouse/pull/61867) ([Azat Khuzhin](https://github.com/azat)).
+* Fix skipping escape sequence parsing errors during JSON data parsing while using `input_format_allow_errors_num/ratio` settings. [#61883](https://github.com/ClickHouse/ClickHouse/pull/61883) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix writing exception message in output format in HTTP when http_wait_end_of_query is used. Closes [#55101](https://github.com/ClickHouse/ClickHouse/issues/55101). [#61951](https://github.com/ClickHouse/ClickHouse/pull/61951) ([Kruglov Pavel](https://github.com/Avogar)).
+* Revert https://github.com/ClickHouse/ClickHouse/pull/61617 and fix the problem with usage of LowCardinality columns together with the JSONExtract function. Previously the user could receive either an incorrect result or a logical error. [#61957](https://github.com/ClickHouse/ClickHouse/pull/61957) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
+* Fix a crash in the Merge engine if a row policy does not have an expression. [#61971](https://github.com/ClickHouse/ClickHouse/pull/61971) ([Ilya Golshtein](https://github.com/ilejn)).
+* Implemented preFinalize and updated finalizeImpl & the destructor of WriteBufferAzureBlobStorage to avoid an uncaught exception in the destructor. [#61988](https://github.com/ClickHouse/ClickHouse/pull/61988) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
+* Fix CREATE TABLE w/o columns definition for ReplicatedMergeTree (columns will be obtained from the replica). [#62040](https://github.com/ClickHouse/ClickHouse/pull/62040) ([Azat Khuzhin](https://github.com/azat)).
+* Fix optimize_skip_unused_shards_rewrite_in for composite sharding key (could lead to `NOT_FOUND_COLUMN_IN_BLOCK` and `TYPE_MISMATCH`). [#62047](https://github.com/ClickHouse/ClickHouse/pull/62047) ([Azat Khuzhin](https://github.com/azat)).
+* `ReadWriteBufferFromHTTP`: set the right `Host` header when redirected. [#62068](https://github.com/ClickHouse/ClickHouse/pull/62068) ([Sema Checherinda](https://github.com/CheSema)).
+* Fix external table not being able to parse the data type Bool. [#62115](https://github.com/ClickHouse/ClickHouse/pull/62115) ([Duc Canh Le](https://github.com/canhld94)).
+* Revert "Merge pull request [#61564](https://github.com/ClickHouse/ClickHouse/issues/61564) from liuneng1994/optimize_in_single_value". The feature is broken and can't be disabled individually. [#62135](https://github.com/ClickHouse/ClickHouse/pull/62135) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix override of MergeTree virtual columns. [#62180](https://github.com/ClickHouse/ClickHouse/pull/62180) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix query parameter resolution with `allow_experimental_analyzer` enabled. Closes [#62113](https://github.com/ClickHouse/ClickHouse/issues/62113). [#62186](https://github.com/ClickHouse/ClickHouse/pull/62186) ([Dmitry Novik](https://github.com/novikd)).
+* Make `RESTORE ON CLUSTER` wait for each `ReplicatedMergeTree` table to stop being readonly before attaching any restored parts to it. Earlier it didn't wait, and it could try to attach some parts at nearly the same time as checking other replicas during the table's startup.
In rare cases, some parts might not be attached at all during `RESTORE ON CLUSTER` because of that issue. [#62207](https://github.com/ClickHouse/ClickHouse/pull/62207) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix crash on `CREATE TABLE` with `INDEX` containing an SQL UDF in the expression, close [#62134](https://github.com/ClickHouse/ClickHouse/issues/62134). [#62225](https://github.com/ClickHouse/ClickHouse/pull/62225) ([vdimir](https://github.com/vdimir)).
+* Fix `generateRandom` with `NULL` in the seed argument. Fixes [#62092](https://github.com/ClickHouse/ClickHouse/issues/62092). [#62248](https://github.com/ClickHouse/ClickHouse/pull/62248) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix buffer overflow when `DISTINCT` is used with constant values. [#62250](https://github.com/ClickHouse/ClickHouse/pull/62250) ([Antonio Andelic](https://github.com/antonio2368)).
+* When some index columns are not loaded into memory for some parts of a *MergeTree table, queries with `FINAL` might produce wrong results. Now we explicitly choose only the common prefix of index columns for all parts to avoid this issue. [#62268](https://github.com/ClickHouse/ClickHouse/pull/62268) ([Nikita Taranov](https://github.com/nickitat)).
+* Fix inability to address a parametrized view in SELECT queries via aliases. [#62274](https://github.com/ClickHouse/ClickHouse/pull/62274) ([Dmitry Novik](https://github.com/novikd)).
+* Fix name resolution in the case when an identifier is resolved to an executed scalar subquery. [#62281](https://github.com/ClickHouse/ClickHouse/pull/62281) ([Dmitry Novik](https://github.com/novikd)).
+* Fix argMax with a nullable non-native numeric column. [#62285](https://github.com/ClickHouse/ClickHouse/pull/62285) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix BACKUP and RESTORE of a materialized view in an Ordinary database. [#62295](https://github.com/ClickHouse/ClickHouse/pull/62295) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix data race on scalars in Context. [#62305](https://github.com/ClickHouse/ClickHouse/pull/62305) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix displaying of materialized_view primary_key in system.tables. Previously it was shown empty even when a CREATE query included PRIMARY KEY. [#62319](https://github.com/ClickHouse/ClickHouse/pull/62319) ([Murat Khairulin](https://github.com/mxwell)).
+* Do not build a multithread insert pipeline for engines without `max_insert_threads` support. Fix the inserted rows order in queries like `INSERT INTO FUNCTION file/s3(...) SELECT * FROM ORDER BY col`. [#62333](https://github.com/ClickHouse/ClickHouse/pull/62333) ([vdimir](https://github.com/vdimir)).
+* Resolve positional arguments only on the initiator node. Closes [#62289](https://github.com/ClickHouse/ClickHouse/issues/62289). [#62362](https://github.com/ClickHouse/ClickHouse/pull/62362) ([flynn](https://github.com/ucasfl)).
+* Fix filter pushdown from additional_table_filters in Merge engine in analyzer. Closes [#62229](https://github.com/ClickHouse/ClickHouse/issues/62229). [#62398](https://github.com/ClickHouse/ClickHouse/pull/62398) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix `Unknown expression or table expression identifier` error for `GLOBAL IN table` queries (with new analyzer); see the sketch below. Fixes [#62286](https://github.com/ClickHouse/ClickHouse/issues/62286). [#62409](https://github.com/ClickHouse/ClickHouse/pull/62409) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
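A minimal sketch of the `GLOBAL IN table` query shape from the last entry, assuming hypothetical tables `dist_events` (Distributed) and `allowed_ids` (local):

``` sql
-- With the new analyzer, this form used to fail with
-- "Unknown expression or table expression identifier".
SET allow_experimental_analyzer = 1;
SELECT count() FROM dist_events WHERE id GLOBAL IN allowed_ids;
```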
+* Respect settings truncate_on_insert/create_new_file_on_insert in s3/hdfs/azure engines during partitioned write. Closes [#61492](https://github.com/ClickHouse/ClickHouse/issues/61492). [#62425](https://github.com/ClickHouse/ClickHouse/pull/62425) ([Kruglov Pavel](https://github.com/Avogar)).
+* Fix backup restore path for AzureBlobStorage to include the specified blob path. [#62447](https://github.com/ClickHouse/ClickHouse/pull/62447) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
+* Fixed a rare bug in `SimpleSquashingChunksTransform` that may lead to a loss of the last chunk of data in a stream. [#62451](https://github.com/ClickHouse/ClickHouse/pull/62451) ([Nikita Taranov](https://github.com/nickitat)).
+* Fix excessive memory usage for queries with nested lambdas. Fixes [#62036](https://github.com/ClickHouse/ClickHouse/issues/62036). [#62462](https://github.com/ClickHouse/ClickHouse/pull/62462) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix validation of special columns (`ver`, `is_deleted`, `sign`) in MergeTree engines on table creation and alter queries. Fixes [#62463](https://github.com/ClickHouse/ClickHouse/issues/62463). [#62498](https://github.com/ClickHouse/ClickHouse/pull/62498) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
+* Avoid crash when reading protobuf with recursive types. [#62506](https://github.com/ClickHouse/ClickHouse/pull/62506) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix a bug moving one partition from a table to itself. Fixes [#62459](https://github.com/ClickHouse/ClickHouse/issues/62459). [#62524](https://github.com/ClickHouse/ClickHouse/pull/62524) ([helifu](https://github.com/helifu)).
+* Fix an error `LIMIT expression must be constant` in queries with a constant expression in `LIMIT`/`OFFSET` which contains a scalar subquery. Fixes [#62294](https://github.com/ClickHouse/ClickHouse/issues/62294). [#62567](https://github.com/ClickHouse/ClickHouse/pull/62567) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix segmentation fault when using the Hive table engine. Reference [#62154](https://github.com/ClickHouse/ClickHouse/issues/62154), [#62560](https://github.com/ClickHouse/ClickHouse/issues/62560). [#62578](https://github.com/ClickHouse/ClickHouse/pull/62578) ([Nikolay Degterinsky](https://github.com/evillique)).
+* Fix memory leak in groupArraySorted. Fix [#62536](https://github.com/ClickHouse/ClickHouse/issues/62536). [#62597](https://github.com/ClickHouse/ClickHouse/pull/62597) ([Antonio Andelic](https://github.com/antonio2368)).
+* Fix crash in largestTriangleThreeBuckets. [#62646](https://github.com/ClickHouse/ClickHouse/pull/62646) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix `tumble[Start,End]` and `hop[Start,End]` functions for resolutions bigger than a day. [#62705](https://github.com/ClickHouse/ClickHouse/pull/62705) ([Jordi Villar](https://github.com/jrdi)).
+* Fix argMin/argMax combinator state. [#62708](https://github.com/ClickHouse/ClickHouse/pull/62708) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix temporary data in cache failing because of a small value of setting `filesystem_cache_reserve_space_wait_lock_timeout_milliseconds`. Introduced a separate setting `temporary_data_in_cache_reserve_space_wait_lock_timeout_milliseconds`. [#62715](https://github.com/ClickHouse/ClickHouse/pull/62715) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fixed crash in table function `mergeTreeIndex` after offloading some of the columns from the suffix of the primary key.
[#62762](https://github.com/ClickHouse/ClickHouse/pull/62762) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix size checks when updating materialized nested columns (fixes [#62731](https://github.com/ClickHouse/ClickHouse/issues/62731)). [#62773](https://github.com/ClickHouse/ClickHouse/pull/62773) ([Eliot Hautefeuille](https://github.com/hileef)).
+* Fix an error when `FINAL` is not applied when specified in a CTE (new analyzer). Fixes [#62779](https://github.com/ClickHouse/ClickHouse/issues/62779). [#62811](https://github.com/ClickHouse/ClickHouse/pull/62811) ([Duc Canh Le](https://github.com/canhld94)).
+* Fixed crash in function `formatRow` with `JSON` format in queries executed via the HTTP interface. [#62840](https://github.com/ClickHouse/ClickHouse/pull/62840) ([Anton Popov](https://github.com/CurtizJ)).
+* Fix failure to start when the storage account URL has a trailing slash. [#62850](https://github.com/ClickHouse/ClickHouse/pull/62850) ([Daniel Pozo Escalona](https://github.com/danipozo)).
+* Fixed a bug in the GCD codec implementation that may lead to server crashes. [#62853](https://github.com/ClickHouse/ClickHouse/pull/62853) ([Nikita Taranov](https://github.com/nickitat)).
+* Fix incorrect key analysis when LowCardinality(Nullable) keys appear in the middle of a hyperrectangle. This fixes [#62848](https://github.com/ClickHouse/ClickHouse/issues/62848). [#62866](https://github.com/ClickHouse/ClickHouse/pull/62866) ([Amos Bird](https://github.com/amosbird)).
+* When the function `fromUnixTimestampInJodaSyntax` is used to convert an input `Int64` or `UInt64` value to `DateTime`, it sometimes returned a wrong result: the function first converted the input value to `UInt32`, so inputs exceeding the maximum value of the `UInt32` type overflowed. For example, for a table `test_tbl(a Int64, b UInt64)` with a row (`10262736196`, `10262736196`), the conversion produced a wrong result. [#62901](https://github.com/ClickHouse/ClickHouse/pull/62901) ([KevinyhZou](https://github.com/KevinyhZou)).
+* Disable optimize_rewrite_aggregate_function_with_if for sum(nullable). [#62912](https://github.com/ClickHouse/ClickHouse/pull/62912) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix the `Unexpected return type` error for queries that read from `StorageBuffer` with `PREWHERE` when the source table has different types. Fixes [#62545](https://github.com/ClickHouse/ClickHouse/issues/62545). [#62916](https://github.com/ClickHouse/ClickHouse/pull/62916) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix incorrect behaviour of temporary data in cache when creation of the cache key base directory fails with `no space left on device`. [#62925](https://github.com/ClickHouse/ClickHouse/pull/62925) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fixed server crash on IPv6 gRPC client connection. [#62978](https://github.com/ClickHouse/ClickHouse/pull/62978) ([Konstantin Bogdanov](https://github.com/thevar1able)).
+* Fix possible CHECKSUM_DOESNT_MATCH (and others) during replicated fetches. [#62987](https://github.com/ClickHouse/ClickHouse/pull/62987) ([Azat Khuzhin](https://github.com/azat)).
+* Fix terminate with uncaught exception in temporary data in cache. [#62998](https://github.com/ClickHouse/ClickHouse/pull/62998) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Fix optimize_rewrite_aggregate_function_with_if implicit cast.
[#62999](https://github.com/ClickHouse/ClickHouse/pull/62999) ([Raúl Marín](https://github.com/Algunenano)).
+* Fix possible crash after an unsuccessful RESTORE. This PR fixes [#62985](https://github.com/ClickHouse/ClickHouse/issues/62985). [#63040](https://github.com/ClickHouse/ClickHouse/pull/63040) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix `Not found column in block` error for distributed queries with server-side constants in the `GROUP BY` key. Fixes [#62682](https://github.com/ClickHouse/ClickHouse/issues/62682). [#63047](https://github.com/ClickHouse/ClickHouse/pull/63047) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
+* Fix incorrect judgement of monotonicity of function `abs`. [#63097](https://github.com/ClickHouse/ClickHouse/pull/63097) ([Duc Canh Le](https://github.com/canhld94)).
+* Sanity check: Clamp values instead of throwing. [#63119](https://github.com/ClickHouse/ClickHouse/pull/63119) ([Raúl Marín](https://github.com/Algunenano)).
+* Setting server_name might help with the recently reported SSL handshake error when connecting to MongoDB Atlas: `Poco::Exception. Code: 1000, e.code() = 0, SSL Exception: error:10000438:SSL routines:OPENSSL_internal:TLSV1_ALERT_INTERNAL_ERROR`. [#63122](https://github.com/ClickHouse/ClickHouse/pull/63122) ([Alexander Gololobov](https://github.com/davenger)).
+* The wire protocol version check for MongoDB used to try accessing the "config" database, but this can fail if the user doesn't have permissions for it. The fix is to use the database name provided by the user. [#63126](https://github.com/ClickHouse/ClickHouse/pull/63126) ([Alexander Gololobov](https://github.com/davenger)).
+* Fix a bug when an `SQL SECURITY` statement appears in all `CREATE` queries if the server setting `ignore_empty_sql_security_in_create_view_query=true`. See https://github.com/ClickHouse/ClickHouse/pull/63134. [#63136](https://github.com/ClickHouse/ClickHouse/pull/63136) ([pufit](https://github.com/pufit)).

#### CI Fix or Improvement (changelog entry is not required)

diff --git a/docs/en/development/contrib.md b/docs/en/development/contrib.md
index 5f96466bbec..db3eabaecfc 100644
--- a/docs/en/development/contrib.md
+++ b/docs/en/development/contrib.md
@@ -7,21 +7,43 @@ description: A list of third-party libraries used

# Third-Party Libraries Used

-ClickHouse utilizes third-party libraries for different purposes, e.g., to connect to other databases, to decode (encode) data during load (save) from (to) disk or to implement certain specialized SQL functions. To be independent of the available libraries in the target system, each third-party library is imported as a Git submodule into ClickHouse's source tree and compiled and linked with ClickHouse. A list of third-party libraries and their licenses can be obtained by the following query:
+ClickHouse utilizes third-party libraries for different purposes, e.g., to connect to other databases, to decode/encode data during load/save from/to disk, or to implement certain specialized SQL functions.
+To be independent of the available libraries in the target system, each third-party library is imported as a Git submodule into ClickHouse's source tree and compiled and linked with ClickHouse.
+A list of third-party libraries and their licenses can be obtained by the following query:

``` sql
SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en';
```

-Note that the listed libraries are the ones located in the `contrib/` directory of the ClickHouse repository.
Depending on the build options, some of the libraries may have not been compiled, and as a result, their functionality may not be available at runtime.
+Note that the listed libraries are the ones located in the `contrib/` directory of the ClickHouse repository.
+Depending on the build options, some of the libraries may not have been compiled, and, as a result, their functionality may not be available at runtime.

[Example](https://play.clickhouse.com/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==)

-## Adding new third-party libraries and maintaining patches in third-party libraries {#adding-third-party-libraries}
+## Adding and maintaining third-party libraries

-1. Each third-party library must reside in a dedicated directory under the `contrib/` directory of the ClickHouse repository. Avoid dumps/copies of external code, instead use Git submodule feature to pull third-party code from an external upstream repository.
-2. Submodules are listed in `.gitmodule`. If the external library can be used as-is, you may reference the upstream repository directly. Otherwise, i.e. the external library requires patching/customization, create a fork of the official repository in the [ClickHouse organization in GitHub](https://github.com/ClickHouse).
-3. In the latter case, create a branch with `clickhouse/` prefix from the branch you want to integrate, e.g. `clickhouse/master` (for `master`) or `clickhouse/release/vX.Y.Z` (for a `release/vX.Y.Z` tag). The purpose of this branch is to isolate customization of the library from upstream work. For example, pulls from the upstream repository into the fork will leave all `clickhouse/` branches unaffected. Submodules in `contrib/` must only track `clickhouse/` branches of forked third-party repositories.
-4. To patch a fork of a third-party library, create a dedicated branch with `clickhouse/` prefix in the fork, e.g. `clickhouse/fix-some-desaster`. Finally, merge the patch branch into the custom tracking branch (e.g. `clickhouse/master` or `clickhouse/release/vX.Y.Z`) using a PR.
-5. Always create patches of third-party libraries with the official repository in mind. Once a PR of a patch branch to the `clickhouse/` branch in the fork repository is done and the submodule version in ClickHouse official repository is bumped, consider opening another PR from the patch branch to the upstream library repository. This ensures, that 1) the contribution has more than a single use case and importance, 2) others will also benefit from it, 3) the change will not remain a maintenance burden solely on ClickHouse developers.
-9. To update a submodule with changes in the upstream repository, first merge upstream `master` (or a new `versionX.Y.Z` tag) into the `clickhouse`-tracking branch in the fork repository. Conflicts with patches/customization will need to be resolved in this merge (see Step 4.). Once the merge is done, bump the submodule in ClickHouse to point to the new hash in the fork.
+Each third-party library must reside in a dedicated directory under the `contrib/` directory of the ClickHouse repository.
+Avoid dumping copies of external code into the library directory.
+Instead, create a Git submodule to pull third-party code from an external upstream repository.
+
+All submodules used by ClickHouse are listed in the `.gitmodules` file.
+If the library can be used as-is (the default case), you can reference the upstream repository directly.
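+For illustration, registering a new library as a submodule might look like the following sketch (`libfoo` and its URL are hypothetical placeholders, not a real ClickHouse dependency):
+
+```bash
+# Add the library as a submodule under contrib/
+git submodule add https://github.com/example/libfoo.git contrib/libfoo
+
+# If a ClickHouse fork is used, point the submodule at the custom tracking branch
+git config -f .gitmodules submodule.contrib/libfoo.branch clickhouse/master
+
+git add .gitmodules contrib/libfoo
+git commit -m "Add libfoo submodule"
+```
+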
+If the library needs patching, create a fork of the upstream repository in the [ClickHouse organization on GitHub](https://github.com/ClickHouse).
+
+In the latter case, we aim to isolate custom patches as much as possible from upstream commits.
+To that end, create a branch with prefix `clickhouse/` from the branch or tag you want to integrate, e.g. `clickhouse/master` (for branch `master`) or `clickhouse/release/vX.Y.Z` (for tag `release/vX.Y.Z`).
+This ensures that pulls from the upstream repository into the fork will leave custom `clickhouse/` branches unaffected.
+Submodules in `contrib/` must only track `clickhouse/` branches of forked third-party repositories.
+
+Patches are only applied against `clickhouse/` branches of external libraries.
+For that, push the patch as a branch with the `clickhouse/` prefix, e.g. `clickhouse/fix-some-disaster`.
+Then create a PR from the new branch against the custom tracking branch with the `clickhouse/` prefix (e.g. `clickhouse/master` or `clickhouse/release/vX.Y.Z`) and merge the patch.
+
+Create patches of third-party libraries with the official repository in mind and consider contributing the patch back to the upstream repository.
+This makes sure that others will also benefit from the patch and that it will not be a maintenance burden for the ClickHouse team.
+
+To pull upstream changes into the submodule, you can use two methods:
+- (less work but less clean): merge upstream `master` into the corresponding `clickhouse/` tracking branch in the forked repository. You will need to resolve merge conflicts with previous custom patches. This method can be used when the `clickhouse/` branch tracks an upstream development branch like `master`, `main`, `dev`, etc.
+- (more work but cleaner): create a new branch with the `clickhouse/` prefix from the upstream commit or tag you would like to integrate. Then re-apply all existing patches using new PRs (or squash them into a single PR). This method can be used when the `clickhouse/` branch tracks a specific upstream version branch or tag. It is cleaner in the sense that custom patches and upstream changes are better isolated from each other.
+
+Once the submodule has been updated, bump the submodule in ClickHouse to point to the new hash in the fork.
diff --git a/docs/en/getting-started/install.md b/docs/en/getting-started/install.md
index 6525c29306a..67752f223ce 100644
--- a/docs/en/getting-started/install.md
+++ b/docs/en/getting-started/install.md
@@ -111,29 +111,10 @@ clickhouse-client # or "clickhouse-client --password" if you've set up a passwor
```
-Deprecated Method for installing deb-packages - -``` bash -sudo apt-get install apt-transport-https ca-certificates dirmngr -sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 - -echo "deb https://repo.clickhouse.com/deb/stable/ main/" | sudo tee \ - /etc/apt/sources.list.d/clickhouse.list -sudo apt-get update - -sudo apt-get install -y clickhouse-server clickhouse-client - -sudo service clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. -``` - -
- -
-Migration Method for installing the deb-packages
+Method for installing deb-packages on old distributions

```bash
-sudo apt-key del E0C56BD4
+sudo apt-get install apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
    /etc/apt/sources.list.d/clickhouse.list
@@ -240,22 +221,6 @@ sudo systemctl start clickhouse-keeper
sudo systemctl status clickhouse-keeper
```
- -Deprecated Method for installing rpm-packages - -``` bash -sudo yum install yum-utils -sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG -sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/clickhouse.repo -sudo yum install clickhouse-server clickhouse-client - -sudo /etc/init.d/clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. -``` - -
- You can replace `stable` with `lts` to use different [release kinds](/knowledgebase/production) based on your needs. Then run these commands to install packages: @@ -308,33 +273,6 @@ tar -xzvf "clickhouse-client-$LATEST_VERSION-${ARCH}.tgz" \ sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh" ``` -
- -Deprecated Method for installing tgz archives - -``` bash -export LATEST_VERSION=$(curl -s https://repo.clickhouse.com/tgz/stable/ | \ - grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1) -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz - -tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz -sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz -sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-server-$LATEST_VERSION.tgz -sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh -sudo /etc/init.d/clickhouse-server start - -tar -xzvf clickhouse-client-$LATEST_VERSION.tgz -sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh -``` -
- For production environments, it’s recommended to use the latest `stable`-version. You can find its number on GitHub page https://github.com/ClickHouse/ClickHouse/tags with postfix `-stable`. ### From Docker Image {#from-docker-image} diff --git a/docs/en/sql-reference/data-types/ipv4.md b/docs/en/sql-reference/data-types/ipv4.md index 637ed543e08..98ba9f4abac 100644 --- a/docs/en/sql-reference/data-types/ipv4.md +++ b/docs/en/sql-reference/data-types/ipv4.md @@ -57,6 +57,18 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1; └──────────────────┴───────────┘ ``` +IPv4 addresses can be directly compared to IPv6 addresses: + +```sql +SELECT toIPv4('127.0.0.1') = toIPv6('::ffff:127.0.0.1'); +``` + +```text +┌─equals(toIPv4('127.0.0.1'), toIPv6('::ffff:127.0.0.1'))─┐ +│ 1 │ +└─────────────────────────────────────────────────────────┘ +``` + **See Also** - [Functions for Working with IPv4 and IPv6 Addresses](../functions/ip-address-functions.md) diff --git a/docs/en/sql-reference/data-types/ipv6.md b/docs/en/sql-reference/data-types/ipv6.md index 642a7db81fc..d3b7cc72a1a 100644 --- a/docs/en/sql-reference/data-types/ipv6.md +++ b/docs/en/sql-reference/data-types/ipv6.md @@ -57,6 +57,19 @@ SELECT toTypeName(from), hex(from) FROM hits LIMIT 1; └──────────────────┴──────────────────────────────────┘ ``` +IPv6 addresses can be directly compared to IPv4 addresses: + +```sql +SELECT toIPv4('127.0.0.1') = toIPv6('::ffff:127.0.0.1'); +``` + +```text +┌─equals(toIPv4('127.0.0.1'), toIPv6('::ffff:127.0.0.1'))─┐ +│ 1 │ +└─────────────────────────────────────────────────────────┘ +``` + + **See Also** - [Functions for Working with IPv4 and IPv6 Addresses](../functions/ip-address-functions.md) diff --git a/docs/en/sql-reference/functions/date-time-functions.md b/docs/en/sql-reference/functions/date-time-functions.md index 6ad26f452ad..4092c83954a 100644 --- a/docs/en/sql-reference/functions/date-time-functions.md +++ b/docs/en/sql-reference/functions/date-time-functions.md @@ -1235,6 +1235,168 @@ Result: - [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter. +## toStartOfMillisecond + +Rounds down a date with time to the start of the milliseconds. + +**Syntax** + +``` sql +toStartOfMillisecond(value, [timezone]) +``` + +**Arguments** + +- `value` — Date and time. [DateTime64](../../sql-reference/data-types/datetime64.md). +- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md). + +**Returned value** + +- Input value with sub-milliseconds. [DateTime64](../../sql-reference/data-types/datetime64.md). 
+
+**Examples**
+
+Query without timezone:
+
+``` sql
+WITH toDateTime64('2020-01-01 10:20:30.999999999', 9) AS dt64
+SELECT toStartOfMillisecond(dt64);
+```
+
+Result:
+
+``` text
+┌────toStartOfMillisecond(dt64)─┐
+│ 2020-01-01 10:20:30.999000000 │
+└───────────────────────────────┘
+```
+
+Query with timezone:
+
+``` sql
+WITH toDateTime64('2020-01-01 10:20:30.999999999', 9) AS dt64
+SELECT toStartOfMillisecond(dt64, 'Asia/Istanbul');
+```
+
+Result:
+
+``` text
+┌─toStartOfMillisecond(dt64, 'Asia/Istanbul')─┐
+│               2020-01-01 12:20:30.999000000 │
+└─────────────────────────────────────────────┘
+```
+
+**See also**
+
+- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter.
+
+## toStartOfMicrosecond
+
+Rounds down a date with time to the start of the microseconds.
+
+**Syntax**
+
+``` sql
+toStartOfMicrosecond(value, [timezone])
+```
+
+**Arguments**
+
+- `value` — Date and time. [DateTime64](../../sql-reference/data-types/datetime64.md).
+- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- Input value with sub-microseconds. [DateTime64](../../sql-reference/data-types/datetime64.md).
+
+**Examples**
+
+Query without timezone:
+
+``` sql
+WITH toDateTime64('2020-01-01 10:20:30.999999999', 9) AS dt64
+SELECT toStartOfMicrosecond(dt64);
+```
+
+Result:
+
+``` text
+┌────toStartOfMicrosecond(dt64)─┐
+│ 2020-01-01 10:20:30.999999000 │
+└───────────────────────────────┘
+```
+
+Query with timezone:
+
+``` sql
+WITH toDateTime64('2020-01-01 10:20:30.999999999', 9) AS dt64
+SELECT toStartOfMicrosecond(dt64, 'Asia/Istanbul');
+```
+
+Result:
+
+``` text
+┌─toStartOfMicrosecond(dt64, 'Asia/Istanbul')─┐
+│               2020-01-01 12:20:30.999999000 │
+└─────────────────────────────────────────────┘
+```
+
+**See also**
+
+- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter.
+
+## toStartOfNanosecond
+
+Rounds down a date with time to the start of the nanoseconds.
+
+**Syntax**
+
+``` sql
+toStartOfNanosecond(value, [timezone])
+```
+
+**Arguments**
+
+- `value` — Date and time. [DateTime64](../../sql-reference/data-types/datetime64.md).
+- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- Input value with nanoseconds. [DateTime64](../../sql-reference/data-types/datetime64.md).
+
+**Examples**
+
+Query without timezone:
+
+``` sql
+WITH toDateTime64('2020-01-01 10:20:30.999999999', 9) AS dt64
+SELECT toStartOfNanosecond(dt64);
+```
+
+Result:
+
+``` text
+┌─────toStartOfNanosecond(dt64)─┐
+│ 2020-01-01 10:20:30.999999999 │
+└───────────────────────────────┘
+```
+
+Query with timezone:
+
+``` sql
+WITH toDateTime64('2020-01-01 10:20:30.999999999', 9) AS dt64
+SELECT toStartOfNanosecond(dt64, 'Asia/Istanbul');
+```
+
+Result:
+
+``` text
+┌─toStartOfNanosecond(dt64, 'Asia/Istanbul')─┐
+│              2020-01-01 12:20:30.999999999 │
+└────────────────────────────────────────────┘
+```
+
+**See also**
+
+- [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) server configuration parameter.
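+
+As the three examples above illustrate, the sub-second functions differ only in which fractional digits they zero out. A side-by-side sketch (the expected values in the comments restate the documented results above):
+
+``` sql
+WITH toDateTime64('2020-01-01 10:20:30.999999999', 9) AS dt64
+SELECT
+    toStartOfMillisecond(dt64),  -- 2020-01-01 10:20:30.999000000
+    toStartOfMicrosecond(dt64),  -- 2020-01-01 10:20:30.999999000
+    toStartOfNanosecond(dt64);   -- 2020-01-01 10:20:30.999999999
+```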
+ ## toStartOfFiveMinutes Rounds down a date with time to the start of the five-minute interval. @@ -3953,6 +4115,43 @@ Result: │ 2023-03-16 18:00:00.000 │ └─────────────────────────────────────────────────────────────────────────┘ ``` + +## UTCTimestamp + +Returns the current date and time at the moment of query analysis. The function is a constant expression. + +:::note +This function gives the same result that `now('UTC')` would. It was added only for MySQL support and [`now`](#now-now) is the preferred usage. +::: + +**Syntax** + +```sql +UTCTimestamp() +``` + +Alias: `UTC_timestamp`. + +**Returned value** + +- Returns the current date and time at the moment of query analysis. [DateTime](../data-types/datetime.md). + +**Example** + +Query: + +```sql +SELECT UTCTimestamp(); +``` + +Result: + +```response +┌──────UTCTimestamp()─┐ +│ 2024-05-28 08:32:09 │ +└─────────────────────┘ +``` + ## timeDiff Returns the difference between two dates or dates with time values. The difference is calculated in units of seconds. It is same as `dateDiff` and was added only for MySQL support. `dateDiff` is preferred. diff --git a/docs/en/sql-reference/functions/json-functions.md b/docs/en/sql-reference/functions/json-functions.md index 8359d5f9fbc..5d73c9a83b3 100644 --- a/docs/en/sql-reference/functions/json-functions.md +++ b/docs/en/sql-reference/functions/json-functions.md @@ -4,13 +4,13 @@ sidebar_position: 105 sidebar_label: JSON --- -There are two sets of functions to parse JSON. - - `simpleJSON*` (`visitParam*`) is made to parse a special very limited subset of a JSON, but these functions are extremely fast. - - `JSONExtract*` is made to parse normal JSON. +There are two sets of functions to parse JSON: + - [`simpleJSON*` (`visitParam*`)](#simplejson--visitparam-functions) which is made for parsing a limited subset of JSON extremely fast. + - [`JSONExtract*`](#jsonextract-functions) which is made for parsing ordinary JSON. -# simpleJSON/visitParam functions +## simpleJSON / visitParam functions -ClickHouse has special functions for working with simplified JSON. All these JSON functions are based on strong assumptions about what the JSON can be, but they try to do as little as possible to get the job done. +ClickHouse has special functions for working with simplified JSON. All these JSON functions are based on strong assumptions about what the JSON can be. They try to do as little as possible to get the job done as quickly as possible. The following assumptions are made: @@ -19,7 +19,7 @@ The following assumptions are made: 3. Fields are searched for on any nesting level, indiscriminately. If there are multiple matching fields, the first occurrence is used. 4. The JSON does not have space characters outside of string literals. -## simpleJSONHas +### simpleJSONHas Checks whether there is a field named `field_name`. The result is `UInt8`. @@ -29,14 +29,16 @@ Checks whether there is a field named `field_name`. The result is `UInt8`. simpleJSONHas(json, field_name) ``` +Alias: `visitParamHas`. + **Parameters** -- `json`: The JSON in which the field is searched for. [String](../data-types/string.md#string) -- `field_name`: The name of the field to search for. [String literal](../syntax#string) +- `json` — The JSON in which the field is searched for. [String](../data-types/string.md#string) +- `field_name` — The name of the field to search for. [String literal](../syntax#string) **Returned value** -It returns `1` if the field exists, `0` otherwise. +- Returns `1` if the field exists, `0` otherwise. 
[UInt8](../data-types/int-uint.md). **Example** @@ -55,11 +57,13 @@ SELECT simpleJSONHas(json, 'foo') FROM jsons; SELECT simpleJSONHas(json, 'bar') FROM jsons; ``` +Result: + ```response 1 0 ``` -## simpleJSONExtractUInt +### simpleJSONExtractUInt Parses `UInt64` from the value of the field named `field_name`. If this is a string field, it tries to parse a number from the beginning of the string. If the field does not exist, or it exists but does not contain a number, it returns `0`. @@ -69,14 +73,16 @@ Parses `UInt64` from the value of the field named `field_name`. If this is a str simpleJSONExtractUInt(json, field_name) ``` +Alias: `visitParamExtractUInt`. + **Parameters** -- `json`: The JSON in which the field is searched for. [String](../data-types/string.md#string) -- `field_name`: The name of the field to search for. [String literal](../syntax#string) +- `json` — The JSON in which the field is searched for. [String](../data-types/string.md#string) +- `field_name` — The name of the field to search for. [String literal](../syntax#string) **Returned value** -It returns the number parsed from the field if the field exists and contains a number, `0` otherwise. +- Returns the number parsed from the field if the field exists and contains a number, `0` otherwise. [UInt64](../data-types/int-uint.md). **Example** @@ -98,6 +104,8 @@ INSERT INTO jsons VALUES ('{"baz":2}'); SELECT simpleJSONExtractUInt(json, 'foo') FROM jsons ORDER BY json; ``` +Result: + ```response 0 4 @@ -106,7 +114,7 @@ SELECT simpleJSONExtractUInt(json, 'foo') FROM jsons ORDER BY json; 5 ``` -## simpleJSONExtractInt +### simpleJSONExtractInt Parses `Int64` from the value of the field named `field_name`. If this is a string field, it tries to parse a number from the beginning of the string. If the field does not exist, or it exists but does not contain a number, it returns `0`. @@ -116,14 +124,16 @@ Parses `Int64` from the value of the field named `field_name`. If this is a stri simpleJSONExtractInt(json, field_name) ``` +Alias: `visitParamExtractInt`. + **Parameters** -- `json`: The JSON in which the field is searched for. [String](../data-types/string.md#string) -- `field_name`: The name of the field to search for. [String literal](../syntax#string) +- `json` — The JSON in which the field is searched for. [String](../data-types/string.md#string) +- `field_name` — The name of the field to search for. [String literal](../syntax#string) **Returned value** -It returns the number parsed from the field if the field exists and contains a number, `0` otherwise. +- Returns the number parsed from the field if the field exists and contains a number, `0` otherwise. [Int64](../data-types/int-uint.md). **Example** @@ -145,6 +155,8 @@ INSERT INTO jsons VALUES ('{"baz":2}'); SELECT simpleJSONExtractInt(json, 'foo') FROM jsons ORDER BY json; ``` +Result: + ```response 0 -4 @@ -153,7 +165,7 @@ SELECT simpleJSONExtractInt(json, 'foo') FROM jsons ORDER BY json; 5 ``` -## simpleJSONExtractFloat +### simpleJSONExtractFloat Parses `Float64` from the value of the field named `field_name`. If this is a string field, it tries to parse a number from the beginning of the string. If the field does not exist, or it exists but does not contain a number, it returns `0`. @@ -163,14 +175,16 @@ Parses `Float64` from the value of the field named `field_name`. If this is a st simpleJSONExtractFloat(json, field_name) ``` +Alias: `visitParamExtractFloat`. + **Parameters** -- `json`: The JSON in which the field is searched for. 
[String](../data-types/string.md#string) -- `field_name`: The name of the field to search for. [String literal](../syntax#string) +- `json` — The JSON in which the field is searched for. [String](../data-types/string.md#string) +- `field_name` — The name of the field to search for. [String literal](../syntax#string) **Returned value** -It returns the number parsed from the field if the field exists and contains a number, `0` otherwise. +- Returns the number parsed from the field if the field exists and contains a number, `0` otherwise. [Float64](../data-types/float.md/#float32-float64). **Example** @@ -192,6 +206,8 @@ INSERT INTO jsons VALUES ('{"baz":2}'); SELECT simpleJSONExtractFloat(json, 'foo') FROM jsons ORDER BY json; ``` +Result: + ```response 0 -4000 @@ -200,7 +216,7 @@ SELECT simpleJSONExtractFloat(json, 'foo') FROM jsons ORDER BY json; 5 ``` -## simpleJSONExtractBool +### simpleJSONExtractBool Parses a true/false value from the value of the field named `field_name`. The result is `UInt8`. @@ -210,10 +226,12 @@ Parses a true/false value from the value of the field named `field_name`. The re simpleJSONExtractBool(json, field_name) ``` +Alias: `visitParamExtractBool`. + **Parameters** -- `json`: The JSON in which the field is searched for. [String](../data-types/string.md#string) -- `field_name`: The name of the field to search for. [String literal](../syntax#string) +- `json` — The JSON in which the field is searched for. [String](../data-types/string.md#string) +- `field_name` — The name of the field to search for. [String literal](../syntax#string) **Returned value** @@ -240,6 +258,8 @@ SELECT simpleJSONExtractBool(json, 'bar') FROM jsons ORDER BY json; SELECT simpleJSONExtractBool(json, 'foo') FROM jsons ORDER BY json; ``` +Result: + ```response 0 1 @@ -247,7 +267,7 @@ SELECT simpleJSONExtractBool(json, 'foo') FROM jsons ORDER BY json; 0 ``` -## simpleJSONExtractRaw +### simpleJSONExtractRaw Returns the value of the field named `field_name` as a `String`, including separators. @@ -257,14 +277,16 @@ Returns the value of the field named `field_name` as a `String`, including separ simpleJSONExtractRaw(json, field_name) ``` +Alias: `visitParamExtractRaw`. + **Parameters** -- `json`: The JSON in which the field is searched for. [String](../data-types/string.md#string) -- `field_name`: The name of the field to search for. [String literal](../syntax#string) +- `json` — The JSON in which the field is searched for. [String](../data-types/string.md#string) +- `field_name` — The name of the field to search for. [String literal](../syntax#string) **Returned value** -It returns the value of the field as a [`String`](../data-types/string.md#string), including separators if the field exists, or an empty `String` otherwise. +- Returns the value of the field as a string, including separators if the field exists, or an empty string otherwise. [`String`](../data-types/string.md#string) **Example** @@ -286,6 +308,8 @@ INSERT INTO jsons VALUES ('{"baz":2}'); SELECT simpleJSONExtractRaw(json, 'foo') FROM jsons ORDER BY json; ``` +Result: + ```response "-4e3" @@ -294,7 +318,7 @@ SELECT simpleJSONExtractRaw(json, 'foo') FROM jsons ORDER BY json; {"def":[1,2,3]} ``` -## simpleJSONExtractString +### simpleJSONExtractString Parses `String` in double quotes from the value of the field named `field_name`. @@ -304,14 +328,16 @@ Parses `String` in double quotes from the value of the field named `field_name`. simpleJSONExtractString(json, field_name) ``` +Alias: `visitParamExtractString`. 
+
+**Parameters**
+
-- `json`: The JSON in which the field is searched for. [String](../data-types/string.md#string)
-- `field_name`: The name of the field to search for. [String literal](../syntax#string)
+- `json` — The JSON in which the field is searched for. [String](../data-types/string.md#string)
+- `field_name` — The name of the field to search for. [String literal](../syntax#string)

**Returned value**

-It returns the value of a field as a [`String`](../data-types/string.md#string), including separators. The value is unescaped. It returns an empty `String`: if the field doesn't contain a double quoted string, if unescaping fails or if the field doesn't exist.
+- Returns the unescaped value of a field as a string, including separators. An empty string is returned if the field doesn't contain a double quoted string, if unescaping fails or if the field doesn't exist. [String](../data-types/string.md).

**Implementation details**

@@ -336,6 +362,8 @@ INSERT INTO jsons VALUES ('{"foo":"hello}');
SELECT simpleJSONExtractString(json, 'foo') FROM jsons ORDER BY json;
```

+Result:
+
```response
\n\0


```

-## visitParamHas
+## JSONExtract functions

-This function is [an alias of `simpleJSONHas`](./json-functions#simplejsonhas).
+The following functions are based on [simdjson](https://github.com/lemire/simdjson) and are designed for more complex JSON parsing requirements.

-## visitParamExtractUInt
+### isValidJSON

-This function is [an alias of `simpleJSONExtractUInt`](./json-functions#simplejsonextractuint).
+Checks whether the passed string is valid JSON.

-## visitParamExtractInt
+**Syntax**

-This function is [an alias of `simpleJSONExtractInt`](./json-functions#simplejsonextractint).
+```sql
+isValidJSON(json)
+```

-## visitParamExtractFloat
-
-This function is [an alias of `simpleJSONExtractFloat`](./json-functions#simplejsonextractfloat).
-
-## visitParamExtractBool
-
-This function is [an alias of `simpleJSONExtractBool`](./json-functions#simplejsonextractbool).
-
-## visitParamExtractRaw
-
-This function is [an alias of `simpleJSONExtractRaw`](./json-functions#simplejsonextractraw).
-
-## visitParamExtractString
-
-This function is [an alias of `simpleJSONExtractString`](./json-functions#simplejsonextractstring).
-
-# JSONExtract functions
-
-The following functions are based on [simdjson](https://github.com/lemire/simdjson) designed for more complex JSON parsing requirements.
-
-## isValidJSON(json)
-
-Checks that passed string is a valid json.
-
-Examples:
+**Examples**

``` sql
SELECT isValidJSON('{"a": "hello", "b": [-100, 200.0, 300]}') = 1
SELECT isValidJSON('not a json') = 0
```

-## JSONHas(json\[, indices_or_keys\]...)
+### JSONHas

-If the value exists in the JSON document, `1` will be returned.
+If the value exists in the JSON document, `1` will be returned. If the value does not exist, `0` will be returned.

-If the value does not exist, `0` will be returned.
+**Syntax**

-Examples:
+```sql
+JSONHas(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns `1` if the value exists in `json`, otherwise `0`. [UInt8](../data-types/int-uint.md).
+
+**Examples**
+
+Query:

``` sql
SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 1
SELECT JSONHas('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 4) = 0
```

-`indices_or_keys` is a list of zero or more arguments each of them can be either string or integer.
-
-- String = access object member by key.
-- Positive integer = access the n-th member/key from the beginning.
-- Negative integer = access the n-th member/key from the end.
-
-Minimum index of the element is 1. Thus the element 0 does not exist.
-
-You may use integers to access both JSON arrays and JSON objects.
-
-So, for example:
+The minimum index of the element is 1. Thus the element 0 does not exist. You may use integers to access both JSON arrays and JSON objects. For example:

``` sql
SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'a'
@@ -419,26 +435,62 @@ SELECT JSONExtractKey('{"a": "hello", "b": [-100, 200.0, 300]}', -2) = 'a'
SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 1) = 'hello'
```

-## JSONLength(json\[, indices_or_keys\]...)
+### JSONLength

-Return the length of a JSON array or a JSON object.
+Returns the length of a JSON array or a JSON object. If the value does not exist or has the wrong type, `0` will be returned.

-If the value does not exist or has a wrong type, `0` will be returned.
+**Syntax**

-Examples:
+```sql
+JSONLength(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns the length of the JSON array or JSON object. Returns `0` if the value does not exist or has the wrong type. [UInt64](../data-types/int-uint.md).
+
+**Examples**

``` sql
SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 3
SELECT JSONLength('{"a": "hello", "b": [-100, 200.0, 300]}') = 2
```

-## JSONType(json\[, indices_or_keys\]...)
+### JSONType

-Return the type of a JSON value.
+Returns the type of a JSON value. If the value does not exist, `Null` will be returned.

-If the value does not exist, `Null` will be returned.
+**Syntax**

-Examples:
+```sql
+JSONType(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns the type of a JSON value as a string; if the value doesn't exist, it returns `Null`. [String](../data-types/string.md).
+
+**Examples**

``` sql
SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}') = 'Object'
SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'String'
SELECT JSONType('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = 'Array'
```

-## JSONExtractUInt(json\[, indices_or_keys\]...)
+### JSONExtractUInt

-## JSONExtractInt(json\[, indices_or_keys\]...)
+Parses JSON and extracts a value of UInt type.

-## JSONExtractFloat(json\[, indices_or_keys\]...)
+**Syntax**

-## JSONExtractBool(json\[, indices_or_keys\]...)
-
-Parses a JSON and extract a value. These functions are similar to `visitParam` functions.
-
-If the value does not exist or has a wrong type, `0` will be returned.
-
-Examples:
-
-``` sql
-SELECT JSONExtractInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 1) = -100
-SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) = 200.0
-SELECT JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) = 300
+```sql
+JSONExtractUInt(json [, indices_or_keys]...)
```

-## JSONExtractString(json\[, indices_or_keys\]...)
+**Parameters**

-Parses a JSON and extract a string. This function is similar to `visitParamExtractString` functions.
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).

-If the value does not exist or has a wrong type, an empty string will be returned.
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.

-The value is unescaped. If unescaping failed, it returns an empty string.
+**Returned value**

-Examples:
+- Returns a UInt value if it exists, otherwise it returns `Null`. [UInt64](../data-types/int-uint.md).
+
+**Examples**
+
+Query:
+
+``` sql
+SELECT JSONExtractUInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) as x, toTypeName(x);
+```
+
+Result:
+
+```response
+┌───x─┬─toTypeName(x)─┐
+│ 300 │ UInt64        │
+└─────┴───────────────┘
+```
+
+### JSONExtractInt
+
+Parses JSON and extracts a value of Int type.
+
+**Syntax**
+
+```sql
+JSONExtractInt(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns an Int value if it exists, otherwise it returns `Null`. [Int64](../data-types/int-uint.md).
+
+**Examples**
+
+Query:
+
+``` sql
+SELECT JSONExtractInt('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', -1) as x, toTypeName(x);
+```
+
+Result:
+
+```response
+┌───x─┬─toTypeName(x)─┐
+│ 300 │ Int64         │
+└─────┴───────────────┘
+```
+
+### JSONExtractFloat
+
+Parses JSON and extracts a value of Float type.
+
+**Syntax**
+
+```sql
+JSONExtractFloat(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer.
[String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns a Float value if it exists, otherwise it returns `Null`. [Float64](../data-types/float.md).
+
+**Examples**
+
+Query:
+
+``` sql
+SELECT JSONExtractFloat('{"a": "hello", "b": [-100, 200.0, 300]}', 'b', 2) as x, toTypeName(x);
+```
+
+Result:
+
+```response
+┌───x─┬─toTypeName(x)─┐
+│ 200 │ Float64       │
+└─────┴───────────────┘
+```
+
+### JSONExtractBool
+
+Parses JSON and extracts a boolean value. If the value does not exist or has a wrong type, `0` will be returned.
+
+**Syntax**
+
+```sql
+JSONExtractBool(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns a Boolean value if it exists, otherwise it returns `0`. [Bool](../data-types/boolean.md).
+
+**Example**
+
+Query:
+
+``` sql
+SELECT JSONExtractBool('{"passed": true}', 'passed');
+```
+
+Result:
+
+```response
+┌─JSONExtractBool('{"passed": true}', 'passed')─┐
+│                                             1 │
+└───────────────────────────────────────────────┘
+```
+
+### JSONExtractString
+
+Parses JSON and extracts a string. This function is similar to the [`visitParamExtractString`](#simplejsonextractstring) function. If the value does not exist or has a wrong type, an empty string will be returned.
+
+**Syntax**
+
+```sql
+JSONExtractString(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns an unescaped string from `json`. If unescaping fails, if the value does not exist, or if it has a wrong type, an empty string is returned. [String](../data-types/string.md).
+
+**Examples**

``` sql
SELECT JSONExtractString('{"a": "hello", "b": [-100, 200.0, 300]}', 'a') = 'hello'
@@ -484,16 +692,35 @@ SELECT JSONExtractString('{"abc":"\\u263"}', 'abc') = ''
SELECT JSONExtractString('{"abc":"hello}', 'abc') = ''
```

-## JSONExtract(json\[, indices_or_keys...\], Return_type)
+### JSONExtract

-Parses a JSON and extract a value of the given ClickHouse data type.
+Parses JSON and extracts a value of the given ClickHouse data type. This function is a generalized version of the previous `JSONExtract` functions. Meaning: `JSONExtract(..., 'String')` returns exactly the same as `JSONExtractString()`, and `JSONExtract(..., 'Float64')` returns exactly the same as `JSONExtractFloat()`.

-This is a generalization of the previous `JSONExtract` functions.
-This means `JSONExtract(..., 'String')` returns exactly the same as `JSONExtractString()`, `JSONExtract(..., 'Float64')` returns exactly the same as `JSONExtractFloat()`.
-Examples:
+**Syntax**
+
+```sql
+JSONExtract(json [, indices_or_keys...], return_type)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+- `return_type` — A string specifying the type of the value to extract. [String](../data-types/string.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns a value of the specified return type if it exists, otherwise it returns `0`, `Null`, or an empty string, depending on the specified return type. [UInt64](../data-types/int-uint.md), [Int64](../data-types/int-uint.md), [Float64](../data-types/float.md), [Bool](../data-types/boolean.md) or [String](../data-types/string.md).
+
+**Examples**

``` sql
SELECT JSONExtract('{"a": "hello", "b": [-100, 200.0, 300]}', 'Tuple(String, Array(Float64))') = ('hello',[-100,200,300])
@@ -506,17 +733,38 @@ SELECT JSONExtract('{"day": "Thursday"}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday
SELECT JSONExtract('{"day": 5}', 'day', 'Enum8(\'Sunday\' = 0, \'Monday\' = 1, \'Tuesday\' = 2, \'Wednesday\' = 3, \'Thursday\' = 4, \'Friday\' = 5, \'Saturday\' = 6)') = 'Friday'
```

-## JSONExtractKeysAndValues(json\[, indices_or_keys...\], Value_type)
+### JSONExtractKeysAndValues

-Parses key-value pairs from a JSON where the values are of the given ClickHouse data type.
+Parses key-value pairs from JSON where the values are of the given ClickHouse data type.

-Example:
+**Syntax**
+
+```sql
+JSONExtractKeysAndValues(json [, indices_or_keys...], value_type)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+- `value_type` — A string specifying the type of the value to extract. [String](../data-types/string.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns an array of parsed key-value pairs. [Array](../data-types/array.md)([Tuple](../data-types/tuple.md)(`value_type`)).
+
+**Example**

``` sql
SELECT JSONExtractKeysAndValues('{"x": {"a": 5, "b": 7, "c": 11}}', 'x', 'Int8') = [('a',5),('b',7),('c',11)];
```

-## JSONExtractKeys
+### JSONExtractKeys

Parses a JSON string and extracts the keys.

**Syntax**

``` sql
JSONExtractKeys(json[, a, b, c...])
```

-**Arguments**
+**Parameters**

- `json` — [String](../data-types/string.md) with valid JSON.
- `a, b, c...` — Comma-separated indices or keys that specify the path to the inner field in a nested JSON object. Each argument can be either a [String](../data-types/string.md) to get the field by the key or an [Integer](../data-types/int-uint.md) to get the N-th field (indexed from 1, negative integers count from the end). If not set, the whole JSON is parsed as the top-level object. Optional parameter.

**Returned value**

-Array with the keys of the JSON. [Array](../data-types/array.md)([String](../data-types/string.md)).
+- Returns an array with the keys of the JSON. [Array](../data-types/array.md)([String](../data-types/string.md)).

**Example**

@@ -552,31 +800,67 @@ text
└────────────────────────────────────────────────────────────┘
```

-## JSONExtractRaw(json\[, indices_or_keys\]...)
+### JSONExtractRaw

-Returns a part of JSON as unparsed string.
+Returns part of the JSON as an unparsed string. If the part does not exist or has the wrong type, an empty string will be returned.

-If the part does not exist or has a wrong type, an empty string will be returned.
+**Syntax**

-Example:
+```sql
+JSONExtractRaw(json [, indices_or_keys]...)
+```
+
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns part of the JSON as an unparsed string. If the part does not exist or has the wrong type, an empty string is returned. [String](../data-types/string.md).
+
+**Example**

``` sql
SELECT JSONExtractRaw('{"a": "hello", "b": [-100, 200.0, 300]}', 'b') = '[-100, 200.0, 300]';
```

-## JSONExtractArrayRaw(json\[, indices_or_keys...\])
+### JSONExtractArrayRaw

-Returns an array with elements of JSON array, each represented as unparsed string.
+Returns an array with the elements of the JSON array, each represented as an unparsed string. If the part does not exist or isn't an array, then an empty array will be returned.

-If the part does not exist or isn’t array, an empty array will be returned.
+**Syntax**

-Example:
+```sql
+JSONExtractArrayRaw(json [, indices_or_keys...])
+```

-``` sql
+**Parameters**
+
+- `json` — JSON string to parse. [String](../data-types/string.md).
+- `indices_or_keys` — A list of zero or more arguments, each of which can be either string or integer. [String](../data-types/string.md), [Int*](../data-types/int-uint.md).
+
+`indices_or_keys` type:
+- String = access object member by key.
+- Positive integer = access the n-th member/key from the beginning.
+- Negative integer = access the n-th member/key from the end.
+
+**Returned value**
+
+- Returns an array with the elements of the JSON array, each represented as an unparsed string. An empty array is returned if the part does not exist or is not an array. [Array](../data-types/array.md)([String](../data-types/string.md)).
+
+**Example**
+
+```sql
SELECT JSONExtractArrayRaw('{"a": "hello", "b": [-100, 200.0, "hello"]}', 'b') = ['-100', '200.0', '"hello"'];
```

-## JSONExtractKeysAndValuesRaw
+### JSONExtractKeysAndValuesRaw

Extracts raw data from a JSON object.

@@ -640,13 +924,30 @@ Result:
└───────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

-## JSON_EXISTS(json, path)
+### JSON_EXISTS

-If the value exists in the JSON document, `1` will be returned.
+If the value exists in the JSON document, `1` will be returned. If the value does not exist, `0` will be returned.

-If the value does not exist, `0` will be returned.
+**Syntax**

-Examples:
+```sql
+JSON_EXISTS(json, path)
+```
+
+**Parameters**
+
+- `json` — A string with valid JSON. [String](../data-types/string.md).
+- `path` — A string representing the path. [String](../data-types/string.md).
+
+:::note
+Before version 21.11 the order of arguments was wrong, i.e. JSON_EXISTS(path, json)
+:::
+
+**Returned value**
+
+- Returns `1` if the value exists in the JSON document, otherwise `0`.
+
+**Examples**

``` sql
SELECT JSON_EXISTS('{"hello":1}', '$.hello');
@@ -655,17 +956,32 @@ SELECT JSON_EXISTS('{"hello":["world"]}', '$.hello[*]');
SELECT JSON_EXISTS('{"hello":["world"]}', '$.hello[0]');
```

+### JSON_QUERY
+
+Parses a JSON and extracts a value as a JSON array or JSON object. If the value does not exist, an empty string will be returned.
+
+**Syntax**
+
+```sql
+JSON_QUERY(json, path)
+```
+
+**Parameters**
+
+- `json` — A string with valid JSON. [String](../data-types/string.md).
+- `path` — A string representing the path. [String](../data-types/string.md).
+
:::note
Before version 21.11 the order of arguments was wrong, i.e. JSON_QUERY(path, json)
:::

-## JSON_QUERY(json, path)
+**Returned value**

-Parses a JSON and extract a value as JSON array or JSON object.
+- Returns the extracted value as a JSON array or JSON object. An empty string is returned if the value does not exist. [String](../data-types/string.md).

-If the value does not exist, an empty string will be returned.
+**Example**

-Example:
+Query:

``` sql
SELECT JSON_QUERY('{"hello":"world"}', '$.hello');
@@ -682,17 +998,38 @@ Result:
[2]
String
```

+
+### JSON_VALUE
+
+Parses a JSON and extracts a value as a JSON scalar. If the value does not exist, an empty string will be returned by default.
+
+This function is controlled by the following settings:
+
+- if `function_json_value_return_type_allow_nullable` is set to `true`, `NULL` will be returned instead of an empty string. If the value is a complex type (such as a struct, array, or map), an empty string will be returned by default.
+- if `function_json_value_return_type_allow_complex` is set to `true`, the complex value will be returned.
+
+**Syntax**
+
+```sql
+JSON_VALUE(json, path)
+```
+
+**Parameters**
+
+- `json` — A string with valid JSON. [String](../data-types/string.md).
+- `path` — A string representing the path. [String](../data-types/string.md).
+
:::note
-Before version 21.11 the order of arguments was wrong, i.e. JSON_QUERY(path, json)
+Before version 21.11 the order of arguments was wrong, i.e. JSON_VALUE(path, json)
:::

-## JSON_VALUE(json, path)
+**Returned value**

-Parses a JSON and extract a value as JSON scalar.
+- Returns the extracted value as a JSON scalar if it exists, otherwise an empty string is returned. [String](../data-types/string.md).

-If the value does not exist, an empty string will be returned by default, and by SET `function_json_value_return_type_allow_nullable` = `true`, `NULL` will be returned. If the value is complex type (such as: struct, array, map), an empty string will be returned by default, and by SET `function_json_value_return_type_allow_complex` = `true`, the complex value will be returned.
+**Example**

-Example:
+Query:

``` sql
SELECT JSON_VALUE('{"hello":"world"}', '$.hello');
@@ -712,11 +1049,7 @@ world
String
```

-:::note
-Before version 21.11 the order of arguments was wrong, i.e. JSON_VALUE(path, json)
-:::
-
-## toJSONString
+### toJSONString

Serializes a value to its JSON representation. Various data types and nested structures are supported.
64-bit [integers](../data-types/int-uint.md) or bigger (like `UInt64` or `Int128`) are enclosed in quotes by default. [output_format_json_quote_64bit_integers](../../operations/settings/settings.md#session_settings-output_format_json_quote_64bit_integers) controls this behavior.
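+
+As a short sketch of the 64-bit quoting behavior just described (assuming the default value of the setting; the exact rendering may vary with the output format):
+
+``` sql
+SELECT toJSONString(toUInt64(42));
+-- "42"  (quoted, because output_format_json_quote_64bit_integers defaults to 1)
+
+SELECT toJSONString(toUInt64(42)) SETTINGS output_format_json_quote_64bit_integers = 0;
+-- 42  (unquoted once the setting is disabled)
+```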
@@ -762,7 +1095,7 @@ Result: - [output_format_json_quote_denormals](../../operations/settings/settings.md#settings-output_format_json_quote_denormals) -## JSONArrayLength +### JSONArrayLength Returns the number of elements in the outermost JSON array. The function returns NULL if input JSON string is invalid. @@ -795,7 +1128,7 @@ SELECT ``` -## jsonMergePatch +### jsonMergePatch Returns the merged JSON object string which is formed by merging multiple JSON objects. diff --git a/docs/en/sql-reference/functions/type-conversion-functions.md b/docs/en/sql-reference/functions/type-conversion-functions.md index 81625e24a64..5dd1d5ceebe 100644 --- a/docs/en/sql-reference/functions/type-conversion-functions.md +++ b/docs/en/sql-reference/functions/type-conversion-functions.md @@ -994,25 +994,681 @@ Result: └─────────────────────────────────────────────┘ ``` -## reinterpretAsUInt(8\|16\|32\|64) +## reinterpretAsUInt8 -## reinterpretAsInt(8\|16\|32\|64) +Performs byte reinterpretation by treating the input value as a value of type UInt8. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. -## reinterpretAsFloat(32\|64) +**Syntax** + +```sql +reinterpretAsUInt8(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as UInt8. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as UInt8. [UInt8](../data-types/int-uint.md/#uint8-uint16-uint32-uint64-uint128-uint256-int8-int16-int32-int64-int128-int256). + +**Example** + +Query: + +```sql +SELECT + toInt8(257) AS x, + toTypeName(x), + reinterpretAsUInt8(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌─x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 1 │ Int8 │ 1 │ UInt8 │ +└───┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsUInt16 + +Performs byte reinterpretation by treating the input value as a value of type UInt16. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsUInt16(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as UInt16. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as UInt16. [UInt16](../data-types/int-uint.md/#uint8-uint16-uint32-uint64-uint128-uint256-int8-int16-int32-int64-int128-int256). + +**Example** + +Query: + +```sql +SELECT + toUInt8(257) AS x, + toTypeName(x), + reinterpretAsUInt16(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌─x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 1 │ UInt8 │ 1 │ UInt16 │ +└───┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsUInt32 + +Performs byte reinterpretation by treating the input value as a value of type UInt32. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. 
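+
+To make the "bytes, not values" semantics concrete, here is a hedged sketch: reinterpreting the raw bytes of a 4-byte string as `UInt32` yields a number unrelated to any numeric conversion of the input (the concrete value assumes little-endian byte order):
+
+```sql
+SELECT reinterpretAsUInt32(toFixedString('abcd', 4)) AS res;
+-- bytes 'a'=0x61, 'b'=0x62, 'c'=0x63, 'd'=0x64 read little-endian: 0x64636261 = 1684234849
+```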
+ +**Syntax** + +```sql +reinterpretAsUInt32(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as UInt32. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as UInt32. [UInt32](../data-types/int-uint.md/#uint8-uint16-uint32-uint64-uint128-uint256-int8-int16-int32-int64-int128-int256). + +**Example** + +Query: + +```sql +SELECT + toUInt16(257) AS x, + toTypeName(x), + reinterpretAsUInt32(x) AS res, + toTypeName(res) +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ UInt16 │ 257 │ UInt32 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsUInt64 + +Performs byte reinterpretation by treating the input value as a value of type UInt64. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsUInt64(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as UInt64. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as UInt64. [UInt64](../data-types/int-uint.md/#uint8-uint16-uint32-uint64-uint128-uint256-int8-int16-int32-int64-int128-int256). + +**Example** + +Query: + +```sql +SELECT + toUInt32(257) AS x, + toTypeName(x), + reinterpretAsUInt64(x) AS res, + toTypeName(res) +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ UInt32 │ 257 │ UInt64 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsUInt128 + +Performs byte reinterpretation by treating the input value as a value of type UInt128. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsUInt128(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as UInt128. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as UInt128. [UInt128](../data-types/int-uint.md/#uint8-uint16-uint32-uint64-uint128-uint256-int8-int16-int32-int64-int128-int256). + +**Example** + +Query: + +```sql +SELECT + toUInt64(257) AS x, + toTypeName(x), + reinterpretAsUInt128(x) AS res, + toTypeName(res) +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ UInt64 │ 257 │ UInt128 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsUInt256 + +Performs byte reinterpretation by treating the input value as a value of type UInt256. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. 
+ +**Syntax** + +```sql +reinterpretAsUInt256(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as UInt256. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as UInt256. [UInt256](../data-types/int-uint.md/#uint8-uint16-uint32-uint64-uint128-uint256-int8-int16-int32-int64-int128-int256). + +**Example** + +Query: + +```sql +SELECT + toUInt128(257) AS x, + toTypeName(x), + reinterpretAsUInt256(x) AS res, + toTypeName(res) +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ UInt128 │ 257 │ UInt256 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsInt8 + +Performs byte reinterpretation by treating the input value as a value of type Int8. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsInt8(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as Int8. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Int8. [Int8](../data-types/int-uint.md/#int-ranges). + +**Example** + +Query: + +```sql +SELECT + toUInt8(257) AS x, + toTypeName(x), + reinterpretAsInt8(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌─x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 1 │ UInt8 │ 1 │ Int8 │ +└───┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsInt16 + +Performs byte reinterpretation by treating the input value as a value of type Int16. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsInt16(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as Int16. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Int16. [Int16](../data-types/int-uint.md/#int-ranges). + +**Example** + +Query: + +```sql +SELECT + toInt8(257) AS x, + toTypeName(x), + reinterpretAsInt16(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌─x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 1 │ Int8 │ 1 │ Int16 │ +└───┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsInt32 + +Performs byte reinterpretation by treating the input value as a value of type Int32. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsInt32(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as Int32. 
[(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Int32. [Int32](../data-types/int-uint.md/#int-ranges). + +**Example** + +Query: + +```sql +SELECT + toInt16(257) AS x, + toTypeName(x), + reinterpretAsInt32(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ Int16 │ 257 │ Int32 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsInt64 + +Performs byte reinterpretation by treating the input value as a value of type Int64. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsInt64(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as Int64. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Int64. [Int64](../data-types/int-uint.md/#int-ranges). + +**Example** + +Query: + +```sql +SELECT + toInt32(257) AS x, + toTypeName(x), + reinterpretAsInt64(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ Int32 │ 257 │ Int64 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsInt128 + +Performs byte reinterpretation by treating the input value as a value of type Int128. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsInt128(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as Int128. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Int128. [Int128](../data-types/int-uint.md/#int-ranges). + +**Example** + +Query: + +```sql +SELECT + toInt64(257) AS x, + toTypeName(x), + reinterpretAsInt128(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ Int64 │ 257 │ Int128 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsInt256 + +Performs byte reinterpretation by treating the input value as a value of type Int256. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsInt256(x) +``` + +**Parameters** + +- `x`: value to byte reinterpret as Int256. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Int256. 
[Int256](../data-types/int-uint.md/#int-ranges). + +**Example** + +Query: + +```sql +SELECT + toInt128(257) AS x, + toTypeName(x), + reinterpretAsInt256(x) AS res, + toTypeName(res); +``` + +Result: + +```response +┌───x─┬─toTypeName(x)─┬─res─┬─toTypeName(res)─┐ +│ 257 │ Int128 │ 257 │ Int256 │ +└─────┴───────────────┴─────┴─────────────────┘ +``` + +## reinterpretAsFloat32 + +Performs byte reinterpretation by treating the input value as a value of type Float32. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsFloat32(x) +``` + +**Parameters** + +- `x`: value to reinterpret as Float32. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Float32. [Float32](../data-types/float.md). + +**Example** + +Query: + +```sql +SELECT reinterpretAsUInt32(toFloat32(0.2)) as x, reinterpretAsFloat32(x); +``` + +Result: + +```response +┌──────────x─┬─reinterpretAsFloat32(x)─┐ +│ 1045220557 │ 0.2 │ +└────────────┴─────────────────────────┘ +``` + +## reinterpretAsFloat64 + +Performs byte reinterpretation by treating the input value as a value of type Float64. Unlike [`CAST`](#castx-t), the function does not attempt to preserve the original value - if the target type is not able to represent the input type, the output is meaningless. + +**Syntax** + +```sql +reinterpretAsFloat64(x) +``` + +**Parameters** + +- `x`: value to reinterpret as Float64. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Reinterpreted value `x` as Float64. [Float64](../data-types/float.md). + +**Example** + +Query: + +```sql +SELECT reinterpretAsUInt64(toFloat64(0.2)) as x, reinterpretAsFloat64(x); +``` + +Result: + +```response +┌───────────────────x─┬─reinterpretAsFloat64(x)─┐ +│ 4596373779694328218 │ 0.2 │ +└─────────────────────┴─────────────────────────┘ +``` ## reinterpretAsDate +Accepts a string, fixed string or numeric value and interprets the bytes as a number in host order (little endian). It returns a date from the interpreted number as the number of days since the beginning of the Unix Epoch. + +**Syntax** + +```sql +reinterpretAsDate(x) +``` + +**Parameters** + +- `x`: number of days since the beginning of the Unix Epoch. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md). + +**Returned value** + +- Date. [Date](../data-types/date.md). + +**Implementation details** + +:::note +If the provided string isn’t long enough, the function works as if the string is padded with the necessary number of null bytes. If the string is longer than needed, the extra bytes are ignored. 
+:::
+
+**Example**
+
+Query:
+
+```sql
+SELECT reinterpretAsDate(65), reinterpretAsDate('A');
+```
+
+Result:
+
+```response
+┌─reinterpretAsDate(65)─┬─reinterpretAsDate('A')─┐
+│ 1970-03-07 │ 1970-03-07 │
+└───────────────────────┴────────────────────────┘
+```
+
 ## reinterpretAsDateTime
 
-These functions accept a string and interpret the bytes placed at the beginning of the string as a number in host order (little endian). If the string isn’t long enough, the functions work as if the string is padded with the necessary number of null bytes. If the string is longer than needed, the extra bytes are ignored. A date is interpreted as the number of days since the beginning of the Unix Epoch, and a date with time is interpreted as the number of seconds since the beginning of the Unix Epoch.
+This function accepts a string and interprets the bytes placed at the beginning of the string as a number in host order (little endian). It returns a date with time, interpreted as the number of seconds since the beginning of the Unix Epoch.
+
+**Syntax**
+
+```sql
+reinterpretAsDateTime(x)
+```
+
+**Parameters**
+
+- `x`: number of seconds since the beginning of the Unix Epoch. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md), [UUID](../data-types/uuid.md), [String](../data-types/string.md) or [FixedString](../data-types/fixedstring.md).
+
+**Returned value**
+
+- Date and Time. [DateTime](../data-types/datetime.md).
+
+**Implementation details**
+
+:::note
+If the provided string isn’t long enough, the function works as if the string is padded with the necessary number of null bytes. If the string is longer than needed, the extra bytes are ignored.
+:::
+
+**Example**
+
+Query:
+
+```sql
+SELECT reinterpretAsDateTime(65), reinterpretAsDateTime('A');
+```
+
+Result:
+
+```response
+┌─reinterpretAsDateTime(65)─┬─reinterpretAsDateTime('A')─┐
+│ 1970-01-01 01:01:05 │ 1970-01-01 01:01:05 │
+└───────────────────────────┴────────────────────────────┘
+```
 
 ## reinterpretAsString
 
-This function accepts a number or date or date with time and returns a string containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a string that is one byte long.
+This function accepts a number, date or date with time and returns a string containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a string that is one byte long.
+
+**Syntax**
+
+```sql
+reinterpretAsString(x)
+```
+
+**Parameters**
+
+- `x`: value to reinterpret as a string. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md).
+
+**Returned value**
+
+- String containing bytes representing `x`. [String](../data-types/string.md).
+ +**Example** + +Query: + +```sql +SELECT + reinterpretAsString(toDateTime('1970-01-01 01:01:05')), + reinterpretAsString(toDate('1970-03-07')); +``` + +Result: + +```response +┌─reinterpretAsString(toDateTime('1970-01-01 01:01:05'))─┬─reinterpretAsString(toDate('1970-03-07'))─┐ +│ A │ A │ +└────────────────────────────────────────────────────────┴───────────────────────────────────────────┘ +``` ## reinterpretAsFixedString -This function accepts a number or date or date with time and returns a FixedString containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a FixedString that is one byte long. +This function accepts a number, date or date with time and returns a FixedString containing bytes representing the corresponding value in host order (little endian). Null bytes are dropped from the end. For example, a UInt32 type value of 255 is a FixedString that is one byte long. + +**Syntax** + +```sql +reinterpretAsFixedString(x) +``` + +**Parameters** + +- `x`: value to reinterpret to string. [(U)Int*](../data-types/int-uint.md), [Float](../data-types/float.md), [Date](../data-types/date.md), [DateTime](../data-types/datetime.md). + +**Returned value** + +- Fixed string containing bytes representing `x`. [FixedString](../data-types/fixedstring.md). + +**Example** + +Query: + +```sql +SELECT + reinterpretAsFixedString(toDateTime('1970-01-01 01:01:05')), + reinterpretAsFixedString(toDate('1970-03-07')); +``` + +Result: + +```response +┌─reinterpretAsFixedString(toDateTime('1970-01-01 01:01:05'))─┬─reinterpretAsFixedString(toDate('1970-03-07'))─┐ +│ A │ A │ +└─────────────────────────────────────────────────────────────┴────────────────────────────────────────────────┘ +``` ## reinterpretAsUUID @@ -1020,7 +1676,7 @@ This function accepts a number or date or date with time and returns a FixedStri In addition to the UUID functions listed here, there is dedicated [UUID function documentation](../functions/uuid-functions.md). ::: -Accepts 16 bytes string and returns UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the function works as if the string is padded with the necessary number of null bytes to the end. If the string is longer than 16 bytes, the extra bytes at the end are ignored. +Accepts a 16 byte string and returns a UUID containing bytes representing the corresponding value in network byte order (big-endian). If the string isn't long enough, the function works as if the string is padded with the necessary number of null bytes to the end. If the string is longer than 16 bytes, the extra bytes at the end are ignored. **Syntax** diff --git a/docs/en/sql-reference/functions/url-functions.md b/docs/en/sql-reference/functions/url-functions.md index 47890e0b271..8b3e4f44840 100644 --- a/docs/en/sql-reference/functions/url-functions.md +++ b/docs/en/sql-reference/functions/url-functions.md @@ -6,7 +6,33 @@ sidebar_label: URLs # Functions for Working with URLs -All these functions do not follow the RFC. They are maximally simplified for improved performance. +:::note +The functions mentioned in this section are optimized for maximum performance and for the most part do not follow the RFC-3986 standard. Functions which implement RFC-3986 have `RFC` appended to their function name and are generally slower. 
+::: + +You can generally use the non-`RFC` function variants when working with publicly registered domains that contain neither user strings nor `@` symbols. +The table below details which symbols in a URL can (`✔`) or cannot (`✗`) be parsed by the respective `RFC` and non-`RFC` variants: + +|Symbol | non-`RFC`| `RFC` | +|-------|----------|-------| +| ' ' | ✗ |✗ | +| \t | ✗ |✗ | +| < | ✗ |✗ | +| > | ✗ |✗ | +| % | ✗ |✔* | +| { | ✗ |✗ | +| } | ✗ |✗ | +| \| | ✗ |✗ | +| \\\ | ✗ |✗ | +| ^ | ✗ |✗ | +| ~ | ✗ |✔* | +| [ | ✗ |✗ | +| ] | ✗ |✔ | +| ; | ✗ |✔* | +| = | ✗ |✔* | +| & | ✗ |✔* | + +symbols marked `*` are sub-delimiters in RFC 3986 and allowed for user info following the `@` symbol. ## Functions that Extract Parts of a URL @@ -16,21 +42,23 @@ If the relevant part isn’t present in a URL, an empty string is returned. Extracts the protocol from a URL. -Examples of typical returned values: http, https, ftp, mailto, tel, magnet... +Examples of typical returned values: http, https, ftp, mailto, tel, magnet. ### domain Extracts the hostname from a URL. +**Syntax** + ``` sql domain(url) ``` **Arguments** -- `url` — URL. [String](../data-types/string.md). +- `url` — URL. [String](../../sql-reference/data-types/string.md). -The URL can be specified with or without a scheme. Examples: +The URL can be specified with or without a protocol. Examples: ``` text svn+ssh://some.svn-hosting.com:80/repo/trunk @@ -48,7 +76,7 @@ clickhouse.com **Returned values** -- Host name if ClickHouse can parse the input string as a URL, otherwise an empty string. [String](../data-types/string.md). +- Host name if the input string can be parsed as a URL, otherwise an empty string. [String](../data-types/string.md). **Example** @@ -62,9 +90,103 @@ SELECT domain('svn+ssh://some.svn-hosting.com:80/repo/trunk'); └────────────────────────────────────────────────────────┘ ``` +### domainRFC + +Extracts the hostname from a URL. Similar to [domain](#domain), but RFC 3986 conformant. + +**Syntax** + +``` sql +domainRFC(url) +``` + +**Arguments** + +- `url` — URL. [String](../data-types/string.md). + +**Returned values** + +- Host name if the input string can be parsed as a URL, otherwise an empty string. [String](../data-types/string.md). + +**Example** + +``` sql +SELECT + domain('http://user:password@example.com:8080/path?query=value#fragment'), + domainRFC('http://user:password@example.com:8080/path?query=value#fragment'); +``` + +``` text +┌─domain('http://user:password@example.com:8080/path?query=value#fragment')─┬─domainRFC('http://user:password@example.com:8080/path?query=value#fragment')─┐ +│ │ example.com │ +└───────────────────────────────────────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────┘ +``` + ### domainWithoutWWW -Returns the domain and removes no more than one ‘www.’ from the beginning of it, if present. +Returns the domain without leading `www.` if present. + +**Syntax** + +```sql +domainWithoutWWW(url) +``` + +**Arguments** + +- `url` — URL. [String](../data-types/string.md). + +**Returned values** + +- Domain name if the input string can be parsed as a URL (without leading `www.`), otherwise an empty string. [String](../data-types/string.md). 
+
+**Example**
+
+``` sql
+SELECT domainWithoutWWW('http://paul@www.example.com:80/');
+```
+
+``` text
+┌─domainWithoutWWW('http://paul@www.example.com:80/')─┐
+│ example.com │
+└─────────────────────────────────────────────────────┘
+```
+
+### domainWithoutWWWRFC
+
+Returns the domain without leading `www.` if present. Similar to [domainWithoutWWW](#domainwithoutwww) but conforms to RFC 3986.
+
+**Syntax**
+
+```sql
+domainWithoutWWWRFC(url)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../data-types/string.md).
+
+**Returned values**
+
+- Domain name if the input string can be parsed as a URL (without leading `www.`), otherwise an empty string. [String](../data-types/string.md).
+
+**Example**
+
+Query:
+
+```sql
+SELECT
+    domainWithoutWWW('http://user:password@www.example.com:8080/path?query=value#fragment'),
+    domainWithoutWWWRFC('http://user:password@www.example.com:8080/path?query=value#fragment');
+```
+
+Result:
+
+```response
+┌─domainWithoutWWW('http://user:password@www.example.com:8080/path?query=value#fragment')─┬─domainWithoutWWWRFC('http://user:password@www.example.com:8080/path?query=value#fragment')─┐
+│ │ example.com │
+└─────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────────────────────────────────────┘
+```
 
 ### topLevelDomain
 
@@ -76,63 +198,314 @@ topLevelDomain(url)
 
 **Arguments**
 
-- `url` — URL. [String](../data-types/string.md).
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
 
-The URL can be specified with or without a scheme. Examples:
+:::note
+The URL can be specified with or without a protocol. Examples:
 
 ``` text
 svn+ssh://some.svn-hosting.com:80/repo/trunk
 some.svn-hosting.com:80/repo/trunk
 https://clickhouse.com/time/
 ```
+:::
 
 **Returned values**
 
-- Domain name if ClickHouse can parse the input string as a URL. Otherwise, an empty string. [String](../data-types/string.md).
+- Domain name if the input string can be parsed as a URL. Otherwise, an empty string. [String](../../sql-reference/data-types/string.md).
 
 **Example**
 
+Query:
+
 ``` sql
 SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk');
 ```
 
+Result:
+
 ``` text
 ┌─topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk')─┐
 │ com │
 └────────────────────────────────────────────────────────────────────┘
 ```
 
+### topLevelDomainRFC
+
+Extracts the top-level domain from a URL.
+Similar to [topLevelDomain](#topleveldomain), but conforms to RFC 3986.
+
+**Syntax**
+
+``` sql
+topLevelDomainRFC(url)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+
+:::note
+The URL can be specified with or without a protocol. Examples:
+
+``` text
+svn+ssh://some.svn-hosting.com:80/repo/trunk
+some.svn-hosting.com:80/repo/trunk
+https://clickhouse.com/time/
+```
+:::
+
+**Returned values**
+
+- Domain name if the input string can be parsed as a URL. Otherwise, an empty string. [String](../../sql-reference/data-types/string.md).
+
+**Example**
+
+Query:
+
+``` sql
+SELECT topLevelDomain('http://foo:foo%41bar@foo.com'), topLevelDomainRFC('http://foo:foo%41bar@foo.com');
+```
+
+Result:
+
+``` text
+┌─topLevelDomain('http://foo:foo%41bar@foo.com')─┬─topLevelDomainRFC('http://foo:foo%41bar@foo.com')─┐
+│ │ com │
+└────────────────────────────────────────────────┴───────────────────────────────────────────────────┘
+```
+
 ### firstSignificantSubdomain
 
-Returns the “first significant subdomain”. The first significant subdomain is a second-level domain if it is ‘com’, ‘net’, ‘org’, or ‘co’. Otherwise, it is a third-level domain. For example, `firstSignificantSubdomain (‘https://news.clickhouse.com/’) = ‘clickhouse’, firstSignificantSubdomain (‘https://news.clickhouse.com.tr/’) = ‘clickhouse’`. The list of “insignificant” second-level domains and other implementation details may change in the future.
+Returns the “first significant subdomain”.
+The first significant subdomain is a second-level domain for `com`, `net`, `org`, or `co`, otherwise it is a third-level domain.
+For example, `firstSignificantSubdomain('https://news.clickhouse.com/') = 'clickhouse'` and `firstSignificantSubdomain('https://news.clickhouse.com.tr/') = 'clickhouse'`.
+The list of "insignificant" second-level domains and other implementation details may change in the future.
+
+**Syntax**
+
+```sql
+firstSignificantSubdomain(url)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- The first significant subdomain. [String](../data-types/string.md).
+
+**Example**
+
+Query:
+
+```sql
+SELECT firstSignificantSubdomain('http://www.example.com/a/b/c?a=b')
+```
+
+Result:
+
+```response
+┌─firstSignificantSubdomain('http://www.example.com/a/b/c?a=b')─┐
+│ example │
+└───────────────────────────────────────────────────────────────┘
+```
+
+### firstSignificantSubdomainRFC
+
+Returns the “first significant subdomain”.
+The first significant subdomain is a second-level domain for `com`, `net`, `org`, or `co`, otherwise it is a third-level domain.
+For example, `firstSignificantSubdomain('https://news.clickhouse.com/') = 'clickhouse'` and `firstSignificantSubdomain('https://news.clickhouse.com.tr/') = 'clickhouse'`.
+The list of "insignificant" second-level domains and other implementation details may change in the future.
+Similar to [firstSignificantSubdomain](#firstsignificantsubdomain) but conforms to RFC 1034.
+
+**Syntax**
+
+```sql
+firstSignificantSubdomainRFC(url)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- The first significant subdomain. [String](../data-types/string.md).
+
+**Example**
+
+Query:
+
+```sql
+SELECT
+    firstSignificantSubdomain('http://user:password@example.com:8080/path?query=value#fragment'),
+    firstSignificantSubdomainRFC('http://user:password@example.com:8080/path?query=value#fragment');
+```
+
+Result:
+
+```response
+┌─firstSignificantSubdomain('http://user:password@example.com:8080/path?query=value#fragment')─┬─firstSignificantSubdomainRFC('http://user:password@example.com:8080/path?query=value#fragment')─┐
+│ │ example │
+└──────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────┘
+```
 
 ### cutToFirstSignificantSubdomain
 
-Returns the part of the domain that includes top-level subdomains up to the “first significant subdomain” (see the explanation above).
+Returns the part of the domain that includes top-level subdomains up to the [“first significant subdomain”](#firstsignificantsubdomain).
 
-For example:
+**Syntax**
 
+```sql
+cutToFirstSignificantSubdomain(url)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- Part of the domain that includes top-level subdomains up to the first significant subdomain if possible, otherwise returns an empty string.
[String](../data-types/string.md). + +**Example** + +Query: + +```sql +SELECT + cutToFirstSignificantSubdomain('https://news.clickhouse.com.tr/'), + cutToFirstSignificantSubdomain('www.tr'), + cutToFirstSignificantSubdomain('tr'); +``` + +Result: + +```response +┌─cutToFirstSignificantSubdomain('https://news.clickhouse.com.tr/')─┬─cutToFirstSignificantSubdomain('www.tr')─┬─cutToFirstSignificantSubdomain('tr')─┐ +│ clickhouse.com.tr │ tr │ │ +└───────────────────────────────────────────────────────────────────┴──────────────────────────────────────────┴──────────────────────────────────────┘ +``` + +### cutToFirstSignificantSubdomainRFC + +Returns the part of the domain that includes top-level subdomains up to the [“first significant subdomain”](#firstsignificantsubdomain). +Similar to [cutToFirstSignificantSubdomain](#cuttofirstsignificantsubdomain) but conforms to RFC 3986. + +**Syntax** + +```sql +cutToFirstSignificantSubdomainRFC(url) +``` + +**Arguments** + +- `url` — URL. [String](../../sql-reference/data-types/string.md). + +**Returned value** + +- Part of the domain that includes top-level subdomains up to the first significant subdomain if possible, otherwise returns an empty string. [String](../data-types/string.md). + +**Example** + +Query: + +```sql +SELECT + cutToFirstSignificantSubdomain('http://user:password@example.com:8080'), + cutToFirstSignificantSubdomainRFC('http://user:password@example.com:8080'); +``` + +Result: + +```response +┌─cutToFirstSignificantSubdomain('http://user:password@example.com:8080')─┬─cutToFirstSignificantSubdomainRFC('http://user:password@example.com:8080')─┐ +│ │ example.com │ +└─────────────────────────────────────────────────────────────────────────┴────────────────────────────────────────────────────────────────────────────┘ +``` -- `cutToFirstSignificantSubdomain('https://news.clickhouse.com.tr/') = 'clickhouse.com.tr'`. -- `cutToFirstSignificantSubdomain('www.tr') = 'tr'`. -- `cutToFirstSignificantSubdomain('tr') = ''`. ### cutToFirstSignificantSubdomainWithWWW -Returns the part of the domain that includes top-level subdomains up to the “first significant subdomain”, without stripping "www". +Returns the part of the domain that includes top-level subdomains up to the "first significant subdomain", without stripping `www`. -For example: +**Syntax** -- `cutToFirstSignificantSubdomainWithWWW('https://news.clickhouse.com.tr/') = 'clickhouse.com.tr'`. -- `cutToFirstSignificantSubdomainWithWWW('www.tr') = 'www.tr'`. -- `cutToFirstSignificantSubdomainWithWWW('tr') = ''`. +```sql +cutToFirstSignificantSubdomainWithWWW(url) +``` + +**Arguments** + +- `url` — URL. [String](../../sql-reference/data-types/string.md). + +**Returned value** + +- Part of the domain that includes top-level subdomains up to the first significant subdomain (with `www`) if possible, otherwise returns an empty string. [String](../data-types/string.md). 
+
+**Example**
+
+Query:
+
+```sql
+SELECT
+    cutToFirstSignificantSubdomainWithWWW('https://news.clickhouse.com.tr/'),
+    cutToFirstSignificantSubdomainWithWWW('www.tr'),
+    cutToFirstSignificantSubdomainWithWWW('tr');
+```
+
+Result:
+
+```response
+┌─cutToFirstSignificantSubdomainWithWWW('https://news.clickhouse.com.tr/')─┬─cutToFirstSignificantSubdomainWithWWW('www.tr')─┬─cutToFirstSignificantSubdomainWithWWW('tr')─┐
+│ clickhouse.com.tr │ www.tr │ │
+└──────────────────────────────────────────────────────────────────────────┴─────────────────────────────────────────────────┴─────────────────────────────────────────────┘
+```
+
+### cutToFirstSignificantSubdomainWithWWWRFC
+
+Returns the part of the domain that includes top-level subdomains up to the "first significant subdomain", without stripping `www`.
+Similar to [cutToFirstSignificantSubdomainWithWWW](#cuttofirstsignificantsubdomainwithwww) but conforms to RFC 3986.
+
+**Syntax**
+
+```sql
+cutToFirstSignificantSubdomainWithWWWRFC(url)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- Part of the domain that includes top-level subdomains up to the first significant subdomain (with "www") if possible, otherwise returns an empty string. [String](../data-types/string.md).
+
+**Example**
+
+Query:
+
+```sql
+SELECT
+    cutToFirstSignificantSubdomainWithWWW('http:%2F%2Fwwwww.nova@mail.ru/economicheskiy'),
+    cutToFirstSignificantSubdomainWithWWWRFC('http:%2F%2Fwwwww.nova@mail.ru/economicheskiy');
+```
+
+Result:
+
+```response
+┌─cutToFirstSignificantSubdomainWithWWW('http:%2F%2Fwwwww.nova@mail.ru/economicheskiy')─┬─cutToFirstSignificantSubdomainWithWWWRFC('http:%2F%2Fwwwww.nova@mail.ru/economicheskiy')─┐
+│ │ mail.ru │
+└───────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────────────────────────────────────────────────────────────────┘
+```
 
 ### cutToFirstSignificantSubdomainCustom
 
-Returns the part of the domain that includes top-level subdomains up to the first significant subdomain. Accepts custom [TLD list](https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains) name.
+Returns the part of the domain that includes top-level subdomains up to the first significant subdomain.
+Accepts custom [TLD list](https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains) name.
+This function can be useful if you need a fresh TLD list or if you have a custom list.
 
-Can be useful if you need fresh TLD list or you have custom.
-
-Configuration example:
+**Configuration example**
 
 ```xml
 
 
@@ -146,17 +519,17 @@ Configuration example:
 
 **Syntax**
 
 ``` sql
-cutToFirstSignificantSubdomainCustom(URL, TLD)
+cutToFirstSignificantSubdomainCustom(url, tld)
 ```
 
 **Arguments**
 
-- `URL` — URL. [String](../data-types/string.md).
-- `TLD` — Custom TLD list name. [String](../data-types/string.md).
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `tld` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).
 
 **Returned value**
 
-- Part of the domain that includes top-level subdomains up to the first significant subdomain. [String](../data-types/string.md).
+- Part of the domain that includes top-level subdomains up to the first significant subdomain. [String](../../sql-reference/data-types/string.md).
 
 **Example**
 
@@ -178,13 +551,39 @@ Result:
 
 - [firstSignificantSubdomain](#firstsignificantsubdomain).
 
+### cutToFirstSignificantSubdomainCustomRFC
+
+Returns the part of the domain that includes top-level subdomains up to the first significant subdomain.
+Accepts custom [TLD list](https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains) name.
+This function can be useful if you need a fresh TLD list or if you have a custom list.
+Similar to [cutToFirstSignificantSubdomainCustom](#cuttofirstsignificantsubdomaincustom) but conforms to RFC 3986.
+
+**Syntax**
+
+``` sql
+cutToFirstSignificantSubdomainCustomRFC(url, tld)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `tld` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- Part of the domain that includes top-level subdomains up to the first significant subdomain. [String](../../sql-reference/data-types/string.md).
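+
+**Example**
+
+A minimal usage sketch: assuming a custom TLD list named `public_suffix_list` is configured (as in the configuration example for [cutToFirstSignificantSubdomainCustom](#cuttofirstsignificantsubdomaincustom)), the query below should return `example.com`:
+
+``` sql
+SELECT cutToFirstSignificantSubdomainCustomRFC('www.foo.example.com', 'public_suffix_list');
+```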
+
+**See Also**
+
+- [firstSignificantSubdomain](#firstsignificantsubdomain).
+
 ### cutToFirstSignificantSubdomainCustomWithWWW
 
-Returns the part of the domain that includes top-level subdomains up to the first significant subdomain without stripping `www`. Accepts custom TLD list name.
+Returns the part of the domain that includes top-level subdomains up to the first significant subdomain without stripping `www`.
+Accepts custom TLD list name.
+It can be useful if you need a fresh TLD list or if you have a custom list.
 
-Can be useful if you need fresh TLD list or you have custom.
-
-Configuration example:
+**Configuration example**
 
 ```xml
 
 
@@ -198,13 +597,13 @@ Configuration example:
 
 **Syntax**
 
 ```sql
-cutToFirstSignificantSubdomainCustomWithWWW(URL, TLD)
+cutToFirstSignificantSubdomainCustomWithWWW(url, tld)
 ```
 
 **Arguments**
 
-- `URL` — URL. [String](../data-types/string.md).
-- `TLD` — Custom TLD list name. [String](../data-types/string.md).
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `tld` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).
 
 **Returned value**
 
@@ -230,10 +629,36 @@ Result:
 
 - [firstSignificantSubdomain](#firstsignificantsubdomain).
 
+### cutToFirstSignificantSubdomainCustomWithWWWRFC
+
+Returns the part of the domain that includes top-level subdomains up to the first significant subdomain without stripping `www`.
+Accepts custom TLD list name.
+It can be useful if you need a fresh TLD list or if you have a custom list.
+Similar to [cutToFirstSignificantSubdomainCustomWithWWW](#cuttofirstsignificantsubdomaincustomwithwww) but conforms to RFC 3986.
+
+**Syntax**
+
+```sql
+cutToFirstSignificantSubdomainCustomWithWWWRFC(url, tld)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `tld` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- Part of the domain that includes top-level subdomains up to the first significant subdomain without stripping `www`. [String](../../sql-reference/data-types/string.md).
+
+**See Also**
+
+- [firstSignificantSubdomain](#firstsignificantsubdomain).
+
 ### firstSignificantSubdomainCustom
 
-Returns the first significant subdomain. Accepts customs TLD list name.
-
+Returns the first significant subdomain.
+Accepts custom TLD list name. Can be useful if you need a fresh TLD list or if you have a custom one.
 Configuration example:
 
 ```xml
 
 
@@ -250,17 +675,17 @@ Configuration example:
 
 **Syntax**
 
 ```sql
-firstSignificantSubdomainCustom(URL, TLD)
+firstSignificantSubdomainCustom(url, tld)
 ```
 
 **Arguments**
 
-- `URL` — URL. [String](../data-types/string.md).
-- `TLD` — Custom TLD list name. [String](../data-types/string.md).
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `tld` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).
 
 **Returned value**
 
-- First significant subdomain. [String](../data-types/string.md).
+- First significant subdomain. [String](../../sql-reference/data-types/string.md).
 
 **Example**
 
@@ -282,47 +707,156 @@ Result:
 
 - [firstSignificantSubdomain](#firstsignificantsubdomain).
 
-### port(URL\[, default_port = 0\])
+### firstSignificantSubdomainCustomRFC
 
-Returns the port or `default_port` if there is no port in the URL (or in case of validation error).
+Returns the first significant subdomain.
+Accepts custom TLD list name.
+Can be useful if you need a fresh TLD list or if you have a custom one.
+Similar to [firstSignificantSubdomainCustom](#firstsignificantsubdomaincustom) but conforms to RFC 3986.
+
+**Syntax**
+
+```sql
+firstSignificantSubdomainCustomRFC(url, tld)
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `tld` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- First significant subdomain. [String](../../sql-reference/data-types/string.md).
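+
+**Example**
+
+A minimal usage sketch: assuming a custom TLD list named `public_suffix_list` is configured (as in the configuration example for [firstSignificantSubdomainCustom](#firstsignificantsubdomaincustom)), the query below should return `example`:
+
+```sql
+SELECT firstSignificantSubdomainCustomRFC('https://news.example.com.tr/', 'public_suffix_list');
+```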
+
+**See Also**
+
+- [firstSignificantSubdomain](#firstsignificantsubdomain).
+
+### port
+
+Returns the port or `default_port` if the URL contains no port or cannot be parsed.
+
+**Syntax**
+
+```sql
+port(url [, default_port = 0])
+```
+
+**Arguments**
+
+- `url` — URL. [String](../data-types/string.md).
+- `default_port` — The default port number to be returned. [UInt16](../data-types/int-uint.md).
+
+**Returned value**
+
+- Port or the default port if there is no port in the URL or in case of a validation error. [UInt16](../data-types/int-uint.md).
+
+**Example**
+
+Query:
+
+```sql
+SELECT port('http://paul@www.example.com:80/');
+```
+
+Result:
+
+```response
+┌─port('http://paul@www.example.com:80/')─┐
+│ 80 │
+└─────────────────────────────────────────┘
+```
+
+### portRFC
+
+Returns the port or `default_port` if the URL contains no port or cannot be parsed.
+Similar to [port](#port), but RFC 3986 conformant.
+
+**Syntax**
+
+```sql
+portRFC(url [, default_port = 0])
+```
+
+**Arguments**
+
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `default_port` — The default port number to be returned. [UInt16](../data-types/int-uint.md).
+
+**Returned value**
+
+- Port or the default port if there is no port in the URL or in case of a validation error. [UInt16](../data-types/int-uint.md).
+
+**Example**
+
+Query:
+
+```sql
+SELECT
+    port('http://user:password@example.com:8080'),
+    portRFC('http://user:password@example.com:8080');
+```
+
+Result:
+
+```response
+┌─port('http://user:password@example.com:8080')─┬─portRFC('http://user:password@example.com:8080')─┐
+│ 0 │ 8080 │
+└───────────────────────────────────────────────┴──────────────────────────────────────────────────┘
+```
 
 ### path
 
-Returns the path. Example: `/top/news.html` The path does not include the query string.
+Returns the path without the query string.
+
+Example: `/top/news.html`.
 
 ### pathFull
 
-The same as above, but including query string and fragment. Example: /top/news.html?page=2#comments
+The same as above, but including query string and fragment.
+
+Example: `/top/news.html?page=2#comments`.
 
 ### queryString
 
-Returns the query string. Example: page=1&lr=213. query-string does not include the initial question mark, as well as # and everything after #.
+Returns the query string without the initial question mark, `#` and everything after `#`.
+
+Example: `page=1&lr=213`.
 
 ### fragment
 
-Returns the fragment identifier. fragment does not include the initial hash symbol.
+Returns the fragment identifier without the initial hash symbol.
 
 ### queryStringAndFragment
 
-Returns the query string and fragment identifier. Example: page=1#29390.
+Returns the query string and fragment identifier.
 
-### extractURLParameter(URL, name)
+Example: `page=1#29390`.
 
-Returns the value of the ‘name’ parameter in the URL, if present. Otherwise, an empty string. If there are many parameters with this name, it returns the first occurrence. This function works under the assumption that the parameter name is encoded in the URL exactly the same way as in the passed argument.
+### extractURLParameter(url, name)
 
-### extractURLParameters(URL)
+Returns the value of the `name` parameter in the URL, if present, otherwise an empty string is returned.
+If there are multiple parameters with this name, the first occurrence is returned.
+The function assumes that the parameter name in the `url` argument is encoded in the same way as in the `name` argument.
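+
+For example, a quick usage sketch, which should return `213`:
+
+``` sql
+SELECT extractURLParameter('http://example.com/?page=1&lr=213', 'lr');
+```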
 
-Returns an array of name=value strings corresponding to the URL parameters. The values are not decoded in any way.
+### extractURLParameters(url)
 
-### extractURLParameterNames(URL)
+Returns an array of `name=value` strings corresponding to the URL parameters.
+The values are not decoded.
 
-Returns an array of name strings corresponding to the names of URL parameters. The values are not decoded in any way.
+### extractURLParameterNames(url)
 
-### URLHierarchy(URL)
+Returns an array of name strings corresponding to the names of URL parameters.
+The values are not decoded.
 
-Returns an array containing the URL, truncated at the end by the symbols /,? in the path and query-string. Consecutive separator characters are counted as one. The cut is made in the position after all the consecutive separator characters.
+### URLHierarchy(url)
 
-### URLPathHierarchy(URL)
+Returns an array containing the URL, truncated at the end by the symbols `/` and `?` in the path and query string.
+Consecutive separator characters are counted as one.
+The cut is made in the position after all the consecutive separator characters.
+
+### URLPathHierarchy(url)
 
 The same as above, but without the protocol and host in the result. The / element (root) is not included.
 
@@ -334,9 +868,10 @@ URLPathHierarchy('https://example.com/browse/CONV-6788') =
 ]
 ```
 
-### encodeURLComponent(URL)
+### encodeURLComponent(url)
 
 Returns the encoded URL.
+
 Example:
 
 ``` sql
@@ -349,9 +884,10 @@ SELECT encodeURLComponent('http://127.0.0.1:8123/?query=SELECT 1;') AS EncodedUR
 └──────────────────────────────────────────────────────────┘
 ```
 
-### decodeURLComponent(URL)
+### decodeURLComponent(url)
 
 Returns the decoded URL.
+
 Example:
 
 ``` sql
@@ -364,9 +900,10 @@ SELECT decodeURLComponent('http://127.0.0.1:8123/?query=SELECT%201%3B') AS Decod
 └────────────────────────────────────────┘
 ```
 
-### encodeURLFormComponent(URL)
+### encodeURLFormComponent(url)
 
 Returns the encoded URL. Follows rfc-1866, space(` `) is encoded as plus(`+`).
+
 Example:
 
 ``` sql
@@ -379,9 +916,10 @@ SELECT encodeURLFormComponent('http://127.0.0.1:8123/?query=SELECT 1 2+3') AS En
 └───────────────────────────────────────────────────────────┘
 ```
 
-### decodeURLFormComponent(URL)
+### decodeURLFormComponent(url)
 
 Returns the decoded URL. Follows rfc-1866, plain plus(`+`) is decoded as space(` `).
+
+Example:
+
 ``` sql
 
@@ -401,12 +939,12 @@ Extracts network locality (`username:password@host:port`) from a URL.
 
 **Syntax**
 
 ``` sql
-netloc(URL)
+netloc(url)
 ```
 
 **Arguments**
 
-- `url` — URL. [String](../data-types/string.md).
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
 
 **Returned value**
 
@@ -428,44 +966,45 @@ Result:
 └───────────────────────────────────────────┘
 ```
 
-## Functions that Remove Part of a URL
+## Functions that remove part of a URL
 
 If the URL does not have anything similar, the URL remains unchanged.
 
 ### cutWWW
 
-Removes no more than one ‘www.’ from the beginning of the URL’s domain, if present.
+Removes leading `www.` (if present) from the URL’s domain.
 
 ### cutQueryString
 
-Removes query string. The question mark is also removed.
+Removes the query string, including the question mark.
 
 ### cutFragment
 
-Removes the fragment identifier. The number sign is also removed.
+Removes the fragment identifier, including the number sign.
 
 ### cutQueryStringAndFragment
 
-Removes the query string and fragment identifier. The question mark and number sign are also removed.
+Removes the query string and fragment identifier, including the question mark and number sign.
 
-### cutURLParameter(URL, name)
+### cutURLParameter(url, name)
 
-Removes the `name` parameter from URL, if present. This function does not encode or decode characters in parameter names, e.g. `Client ID` and `Client%20ID` are treated as different parameter names.
+Removes the `name` parameter from a URL, if present.
+This function does not encode or decode characters in parameter names, e.g. `Client ID` and `Client%20ID` are treated as different parameter names.
 
 **Syntax**
 
 ``` sql
-cutURLParameter(URL, name)
+cutURLParameter(url, name)
 ```
 
 **Arguments**
 
-- `url` — URL. [String](../data-types/string.md).
-- `name` — name of URL parameter. [String](../data-types/string.md) or [Array](../data-types/array.md) of Strings.
+- `url` — URL. [String](../../sql-reference/data-types/string.md).
+- `name` — name of URL parameter. [String](../../sql-reference/data-types/string.md) or [Array](../../sql-reference/data-types/array.md) of Strings.
 
 **Returned value**
 
-- URL with `name` URL parameter removed. [String](../data-types/string.md).
+- `url` with the `name` URL parameter removed. [String](../data-types/string.md).
 
 **Example**
 
diff --git a/docs/en/sql-reference/functions/uuid-functions.md b/docs/en/sql-reference/functions/uuid-functions.md
index 2707f0bf8d4..0323ae728a9 100644
--- a/docs/en/sql-reference/functions/uuid-functions.md
+++ b/docs/en/sql-reference/functions/uuid-functions.md
@@ -126,149 +126,6 @@ SELECT generateUUIDv7(1), generateUUIDv7(2);
 └──────────────────────────────────────┴──────────────────────────────────────┘
 ```
 
-## generateUUIDv7ThreadMonotonic
-
-Generates a [UUID](../data-types/uuid.md) of [version 7](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04).
-
-The generated UUID contains the current Unix timestamp in milliseconds (48 bits), followed by version "7" (4 bits), a counter (42 bit) to distinguish UUIDs within a millisecond (including a variant field "2", 2 bit), and a random field (32 bits).
-For any given timestamp (unix_ts_ms), the counter starts at a random value and is incremented by 1 for each new UUID until the timestamp changes.
-In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to a random new start value.
- -This function behaves like [generateUUIDv7](#generateUUIDv7) but gives no guarantee on counter monotony across different simultaneous requests. -Monotonicity within one timestamp is guaranteed only within the same thread calling this function to generate UUIDs. - -``` - 0 1 2 3 - 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| unix_ts_ms | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| unix_ts_ms | ver | counter_high_bits | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -|var| counter_low_bits | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| rand_b | -└─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┘ -``` - -:::note -As of April 2024, version 7 UUIDs are in draft status and their layout may change in future. -::: - -**Syntax** - -``` sql -generateUUIDv7ThreadMonotonic([expr]) -``` - -**Arguments** - -- `expr` — An arbitrary [expression](../syntax.md#syntax-expressions) used to bypass [common subexpression elimination](../functions/index.md#common-subexpression-elimination) if the function is called multiple times in a query. The value of the expression has no effect on the returned UUID. Optional. - -**Returned value** - -A value of type UUIDv7. - -**Usage example** - -First, create a table with a column of type UUID, then insert a generated UUIDv7 into the table. - -``` sql -CREATE TABLE tab (uuid UUID) ENGINE = Memory; - -INSERT INTO tab SELECT generateUUIDv7ThreadMonotonic(); - -SELECT * FROM tab; -``` - -Result: - -```response -┌─────────────────────────────────uuid─┐ -│ 018f05e2-e3b2-70cb-b8be-64b09b626d32 │ -└──────────────────────────────────────┘ -``` - -**Example with multiple UUIDs generated per row** - -```sql -SELECT generateUUIDv7ThreadMonotonic(1), generateUUIDv7ThreadMonotonic(2); - -┌─generateUUIDv7ThreadMonotonic(1)─────┬─generateUUIDv7ThreadMonotonic(2)─────┐ -│ 018f05e1-14ee-7bc5-9906-207153b400b1 │ 018f05e1-14ee-7bc5-9906-2072b8e96758 │ -└──────────────────────────────────────┴──────────────────────────────────────┘ -``` - -## generateUUIDv7NonMonotonic - -Generates a [UUID](../data-types/uuid.md) of [version 7](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04). - -The generated UUID contains the current Unix timestamp in milliseconds (48 bits), followed by version "7" (4 bits) and a random field (76 bits, including a 2-bit variant field "2"). - -This function is the fastest `generateUUIDv7*` function but it gives no monotonicity guarantees within a timestamp. - -``` - 0 1 2 3 - 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| unix_ts_ms | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| unix_ts_ms | ver | rand_a | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -|var| rand_b | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| rand_b | -└─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┘ -``` - -:::note -As of April 2024, version 7 UUIDs are in draft status and their layout may change in future. -::: - -**Syntax** - -``` sql -generateUUIDv7NonMonotonic([expr]) -``` - -**Arguments** - -- `expr` — An arbitrary [expression](../syntax.md#syntax-expressions) used to bypass [common subexpression elimination](../functions/index.md#common-subexpression-elimination) if the function is called multiple times in a query. 
The value of the expression has no effect on the returned UUID. Optional. - -**Returned value** - -A value of type UUIDv7. - -**Example** - -First, create a table with a column of type UUID, then insert a generated UUIDv7 into the table. - -``` sql -CREATE TABLE tab (uuid UUID) ENGINE = Memory; - -INSERT INTO tab SELECT generateUUIDv7NonMonotonic(); - -SELECT * FROM tab; -``` - -Result: - -```response -┌─────────────────────────────────uuid─┐ -│ 018f05af-f4a8-778f-beee-1bedbc95c93b │ -└──────────────────────────────────────┘ -``` - -**Example with multiple UUIDs generated per row** - -```sql -SELECT generateUUIDv7NonMonotonic(1), generateUUIDv7NonMonotonic(2); - -┌─generateUUIDv7NonMonotonic(1) ───────┬─generateUUIDv7(2)NonMonotonic────────┐ -│ 018f05b1-8c2e-7567-a988-48d09606ae8c │ 018f05b1-8c2e-7946-895b-fcd7635da9a0 │ -└──────────────────────────────────────┴──────────────────────────────────────┘ -``` - ## empty Checks whether the input UUID is empty. @@ -746,71 +603,6 @@ SELECT generateSnowflakeID(1), generateSnowflakeID(2); └────────────────────────┴────────────────────────┘ ``` -## generateSnowflakeIDThreadMonotonic - -Generates a [Snowflake ID](https://en.wikipedia.org/wiki/Snowflake_ID). - -The generated Snowflake ID contains the current Unix timestamp in milliseconds 41 (+ 1 top zero bit) bits, followed by machine id (10 bits), a counter (12 bits) to distinguish IDs within a millisecond. -For any given timestamp (unix_ts_ms), the counter starts at 0 and is incremented by 1 for each new Snowflake ID until the timestamp changes. -In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to 0. - -This function behaves like `generateSnowflakeID` but gives no guarantee on counter monotony across different simultaneous requests. -Monotonicity within one timestamp is guaranteed only within the same thread calling this function to generate Snowflake IDs. - -``` - 0 1 2 3 - 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -|0| timestamp | -├─┼ ┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| | machine_id | machine_seq_num | -└─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┘ -``` - -**Syntax** - -``` sql -generateSnowflakeIDThreadMonotonic([expr]) -``` - -**Arguments** - -- `expr` — An arbitrary [expression](../../sql-reference/syntax.md#syntax-expressions) used to bypass [common subexpression elimination](../../sql-reference/functions/index.md#common-subexpression-elimination) if the function is called multiple times in a query. The value of the expression has no effect on the returned Snowflake ID. Optional. - -**Returned value** - -A value of type UInt64. - -**Example** - -First, create a table with a column of type UInt64, then insert a generated Snowflake ID into the table. 
- -``` sql -CREATE TABLE tab (id UInt64) ENGINE = Memory; - -INSERT INTO tab SELECT generateSnowflakeIDThreadMonotonic(); - -SELECT * FROM tab; -``` - -Result: - -```response -┌──────────────────id─┐ -│ 7199082832006627328 │ -└─────────────────────┘ -``` - -**Example with multiple Snowflake IDs generated per row** - -```sql -SELECT generateSnowflakeIDThreadMonotonic(1), generateSnowflakeIDThreadMonotonic(2); - -┌─generateSnowflakeIDThreadMonotonic(1)─┬─generateSnowflakeIDThreadMonotonic(2)─┐ -│ 7199082940311945216 │ 7199082940316139520 │ -└───────────────────────────────────────┴───────────────────────────────────────┘ -``` - ## snowflakeToDateTime Extracts the timestamp component of a [Snowflake ID](https://en.wikipedia.org/wiki/Snowflake_ID) in [DateTime](../data-types/datetime.md) format. diff --git a/docs/en/sql-reference/statements/alter/view.md b/docs/en/sql-reference/statements/alter/view.md index 83e8e9311b4..fb7a5bd7c03 100644 --- a/docs/en/sql-reference/statements/alter/view.md +++ b/docs/en/sql-reference/statements/alter/view.md @@ -79,8 +79,6 @@ ORDER BY ts, event_type; │ 2020-01-03 00:00:00 │ imp │ │ 2 │ 0 │ └─────────────────────┴────────────┴─────────┴────────────┴──────┘ -SET allow_experimental_alter_materialized_view_structure=1; - ALTER TABLE mv MODIFY QUERY SELECT toStartOfDay(ts) ts, event_type, browser, count() events_cnt, @@ -178,7 +176,6 @@ SELECT * FROM mv; └───┘ ``` ```sql -set allow_experimental_alter_materialized_view_structure=1; ALTER TABLE mv MODIFY QUERY SELECT a * 2 as a FROM src_table; INSERT INTO src_table (a) VALUES (3), (4); SELECT * FROM mv; diff --git a/docs/en/sql-reference/statements/system.md b/docs/en/sql-reference/statements/system.md index 9fec5420f97..7efbff1b42b 100644 --- a/docs/en/sql-reference/statements/system.md +++ b/docs/en/sql-reference/statements/system.md @@ -206,6 +206,32 @@ Enables background data distribution when inserting data into distributed tables SYSTEM START DISTRIBUTED SENDS [db.] [ON CLUSTER cluster_name] ``` +### STOP LISTEN + +Closes the socket and gracefully terminates the existing connections to the server on the specified port with the specified protocol. + +However, if the corresponding protocol settings were not specified in the clickhouse-server configuration, this command will have no effect. + +```sql +SYSTEM STOP LISTEN [ON CLUSTER cluster_name] [QUERIES ALL | QUERIES DEFAULT | QUERIES CUSTOM | TCP | TCP WITH PROXY | TCP SECURE | HTTP | HTTPS | MYSQL | GRPC | POSTGRESQL | PROMETHEUS | CUSTOM 'protocol'] +``` + +- If `CUSTOM 'protocol'` modifier is specified, the custom protocol with the specified name defined in the protocols section of the server configuration will be stopped. +- If `QUERIES ALL [EXCEPT .. [,..]]` modifier is specified, all protocols are stopped, unless specified with `EXCEPT` clause. +- If `QUERIES DEFAULT [EXCEPT .. [,..]]` modifier is specified, all default protocols are stopped, unless specified with `EXCEPT` clause. +- If `QUERIES CUSTOM [EXCEPT .. [,..]]` modifier is specified, all custom protocols are stopped, unless specified with `EXCEPT` clause. + +### START LISTEN + +Allows new connections to be established on the specified protocols. + +However, if the server on the specified port and protocol was not stopped using the SYSTEM STOP LISTEN command, this command will have no effect. 
+ +```sql +SYSTEM START LISTEN [ON CLUSTER cluster_name] [QUERIES ALL | QUERIES DEFAULT | QUERIES CUSTOM | TCP | TCP WITH PROXY | TCP SECURE | HTTP | HTTPS | MYSQL | GRPC | POSTGRESQL | PROMETHEUS | CUSTOM 'protocol'] +``` + + ## Managing MergeTree Tables ClickHouse can manage background processes in [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables. @@ -463,30 +489,16 @@ Will do sync syscall. SYSTEM SYNC FILE CACHE [ON CLUSTER cluster_name] ``` +### UNLOAD PRIMARY KEY -## SYSTEM STOP LISTEN - -Closes the socket and gracefully terminates the existing connections to the server on the specified port with the specified protocol. - -However, if the corresponding protocol settings were not specified in the clickhouse-server configuration, this command will have no effect. +Unload the primary keys for the given table or for all tables. ```sql -SYSTEM STOP LISTEN [ON CLUSTER cluster_name] [QUERIES ALL | QUERIES DEFAULT | QUERIES CUSTOM | TCP | TCP WITH PROXY | TCP SECURE | HTTP | HTTPS | MYSQL | GRPC | POSTGRESQL | PROMETHEUS | CUSTOM 'protocol'] +SYSTEM UNLOAD PRIMARY KEY [db.]name ``` -- If `CUSTOM 'protocol'` modifier is specified, the custom protocol with the specified name defined in the protocols section of the server configuration will be stopped. -- If `QUERIES ALL [EXCEPT .. [,..]]` modifier is specified, all protocols are stopped, unless specified with `EXCEPT` clause. -- If `QUERIES DEFAULT [EXCEPT .. [,..]]` modifier is specified, all default protocols are stopped, unless specified with `EXCEPT` clause. -- If `QUERIES CUSTOM [EXCEPT .. [,..]]` modifier is specified, all custom protocols are stopped, unless specified with `EXCEPT` clause. - -## SYSTEM START LISTEN - -Allows new connections to be established on the specified protocols. - -However, if the server on the specified port and protocol was not stopped using the SYSTEM STOP LISTEN command, this command will have no effect. - ```sql -SYSTEM START LISTEN [ON CLUSTER cluster_name] [QUERIES ALL | QUERIES DEFAULT | QUERIES CUSTOM | TCP | TCP WITH PROXY | TCP SECURE | HTTP | HTTPS | MYSQL | GRPC | POSTGRESQL | PROMETHEUS | CUSTOM 'protocol'] +SYSTEM UNLOAD PRIMARY KEY ``` ## Managing Refreshable Materialized Views {#refreshable-materialized-views} @@ -495,7 +507,7 @@ Commands to control background tasks performed by [Refreshable Materialized View Keep an eye on [`system.view_refreshes`](../../operations/system-tables/view_refreshes.md) while using them. -### SYSTEM REFRESH VIEW +### REFRESH VIEW Trigger an immediate out-of-schedule refresh of a given view. @@ -503,7 +515,7 @@ Trigger an immediate out-of-schedule refresh of a given view. SYSTEM REFRESH VIEW [db.]name ``` -### SYSTEM STOP VIEW, SYSTEM STOP VIEWS +### STOP VIEW, STOP VIEWS Disable periodic refreshing of the given view or all refreshable views. If a refresh is in progress, cancel it too. @@ -514,7 +526,7 @@ SYSTEM STOP VIEW [db.]name SYSTEM STOP VIEWS ``` -### SYSTEM START VIEW, SYSTEM START VIEWS +### START VIEW, START VIEWS Enable periodic refreshing for the given view or all refreshable views. No immediate refresh is triggered. @@ -525,22 +537,10 @@ SYSTEM START VIEW [db.]name SYSTEM START VIEWS ``` -### SYSTEM CANCEL VIEW +### CANCEL VIEW If there's a refresh in progress for the given view, interrupt and cancel it. Otherwise do nothing. ```sql SYSTEM CANCEL VIEW [db.]name ``` - -### SYSTEM UNLOAD PRIMARY KEY - -Unload the primary keys for the given table or for all tables. 
-
-```sql
-SYSTEM UNLOAD PRIMARY KEY [db.]name
-```
-
-```sql
-SYSTEM UNLOAD PRIMARY KEY
-```
\ No newline at end of file
diff --git a/docs/en/sql-reference/table-functions/loop.md b/docs/en/sql-reference/table-functions/loop.md
new file mode 100644
index 00000000000..3a9367b2d10
--- /dev/null
+++ b/docs/en/sql-reference/table-functions/loop.md
@@ -0,0 +1,55 @@
+# loop
+
+The `loop` table function returns the results of a query in an infinite loop.
+
+**Syntax**
+
+``` sql
+SELECT ... FROM loop(database, table);
+SELECT ... FROM loop(database.table);
+SELECT ... FROM loop(table);
+SELECT ... FROM loop(other_table_function(...));
+```
+
+**Parameters**
+
+- `database` — database name.
+- `table` — table name.
+- `other_table_function(...)` — another table function.
+  Example: `SELECT * FROM loop(numbers(10));`
+  `other_table_function(...)` here is `numbers(10)`.
+
+**Returned value**
+
+The results of the underlying query, repeated in an infinite loop.
+
+**Examples**
+
+Selecting data from ClickHouse:
+
+``` sql
+SELECT * FROM loop(test_database, test_table);
+SELECT * FROM loop(test_database.test_table);
+SELECT * FROM loop(test_table);
+```
+
+Or using another table function:
+
+``` sql
+SELECT * FROM loop(numbers(3)) LIMIT 7;
+   ┌─number─┐
+1. │      0 │
+2. │      1 │
+3. │      2 │
+   └────────┘
+   ┌─number─┐
+4. │      0 │
+5. │      1 │
+6. │      2 │
+   └────────┘
+   ┌─number─┐
+7. │      0 │
+   └────────┘
+```
+``` sql
+SELECT * FROM loop(mysql('localhost:3306', 'test', 'test', 'user', 'password'));
+...
+```
\ No newline at end of file
diff --git a/docs/ru/getting-started/install.md b/docs/ru/getting-started/install.md
index 59650826659..aee445da843 100644
--- a/docs/ru/getting-started/install.md
+++ b/docs/ru/getting-started/install.md
@@ -38,26 +38,6 @@ sudo service clickhouse-server start
 clickhouse-client # or "clickhouse-client --password" if you've set up a password.
 ```
 
-
- -Устаревший способ установки deb-пакетов - -``` bash -sudo apt-get install apt-transport-https ca-certificates dirmngr -sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 - -echo "deb https://repo.clickhouse.com/deb/stable/ main/" | sudo tee \ - /etc/apt/sources.list.d/clickhouse.list -sudo apt-get update - -sudo apt-get install -y clickhouse-server clickhouse-client - -sudo service clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. -``` - -
- Чтобы использовать различные [версии ClickHouse](../faq/operations/production.md) в зависимости от ваших потребностей, вы можете заменить `stable` на `lts` или `testing`. Также вы можете вручную скачать и установить пакеты из [репозитория](https://packages.clickhouse.com/deb/pool/stable). @@ -110,22 +90,6 @@ sudo systemctl status clickhouse-server clickhouse-client # илм "clickhouse-client --password" если установлен пароль ``` -
- -Устаревший способ установки rpm-пакетов - -``` bash -sudo yum install yum-utils -sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG -sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/clickhouse.repo -sudo yum install clickhouse-server clickhouse-client - -sudo /etc/init.d/clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. -``` - -
- Для использования наиболее свежих версий нужно заменить `stable` на `testing` (рекомендуется для тестовых окружений). Также иногда доступен `prestable`. Для непосредственной установки пакетов необходимо выполнить следующие команды: @@ -178,33 +142,6 @@ tar -xzvf "clickhouse-client-$LATEST_VERSION-${ARCH}.tgz" \ sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh" ``` -
- -Устаревший способ установки из архивов tgz - -``` bash -export LATEST_VERSION=$(curl -s https://repo.clickhouse.com/tgz/stable/ | \ - grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1) -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz - -tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz -sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz -sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-server-$LATEST_VERSION.tgz -sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh -sudo /etc/init.d/clickhouse-server start - -tar -xzvf clickhouse-client-$LATEST_VERSION.tgz -sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh -``` -
- Для продуктивных окружений рекомендуется использовать последнюю `stable`-версию. Её номер также можно найти на github с на вкладке https://github.com/ClickHouse/ClickHouse/tags c постфиксом `-stable`. ### Из Docker образа {#from-docker-image} diff --git a/docs/ru/sql-reference/functions/uuid-functions.md b/docs/ru/sql-reference/functions/uuid-functions.md index a7fe6592338..7fe90263599 100644 --- a/docs/ru/sql-reference/functions/uuid-functions.md +++ b/docs/ru/sql-reference/functions/uuid-functions.md @@ -112,113 +112,6 @@ SELECT generateUUIDv7(1), generateUUIDv7(2) └──────────────────────────────────────┴──────────────────────────────────────┘ ``` -## generateUUIDv7ThreadMonotonic {#uuidv7threadmonotonic-function-generate} - -Генерирует идентификатор [UUID версии 7](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04). Генерируемый UUID состоит из 48-битной временной метки (Unix time в миллисекундах), маркеров версии 7 и варианта 2, монотонно возрастающего счётчика для данной временной метки и случайных данных в указанной ниже последовательности. Для каждой новой временной метки счётчик стартует с нового случайного значения, а для следующих UUIDv7 он увеличивается на единицу. В случае переполнения счётчика временная метка принудительно увеличивается на 1, и счётчик снова стартует со случайного значения. Данная функция является ускоренным аналогом функции `generateUUIDv7` за счёт потери гарантии монотонности счётчика при одной и той же метке времени между одновременно исполняемыми разными запросами. Монотонность счётчика гарантируется только в пределах одного треда, исполняющего данную функцию для генерации нескольких UUID. - -**Синтаксис** - -``` sql -generateUUIDv7ThreadMonotonic([x]) -``` - -**Аргументы** - -- `x` — [выражение](../syntax.md#syntax-expressions), возвращающее значение одного из [поддерживаемых типов данных](../data-types/index.md#data_types). Значение используется, чтобы избежать [склейки одинаковых выражений](index.md#common-subexpression-elimination), если функция вызывается несколько раз в одном запросе. Необязательный параметр. - -**Возвращаемое значение** - -Значение типа [UUID](../../sql-reference/functions/uuid-functions.md). - -**Пример использования** - -Этот пример демонстрирует, как создать таблицу с UUID-колонкой и добавить в нее сгенерированный UUIDv7. - -``` sql -CREATE TABLE t_uuid (x UUID) ENGINE=TinyLog - -INSERT INTO t_uuid SELECT generateUUIDv7ThreadMonotonic() - -SELECT * FROM t_uuid -``` - -``` text -┌────────────────────────────────────x─┐ -│ 018f05e2-e3b2-70cb-b8be-64b09b626d32 │ -└──────────────────────────────────────┘ -``` - -**Пример использования, для генерации нескольких значений в одной строке** - -```sql -SELECT generateUUIDv7ThreadMonotonic(1), generateUUIDv7ThreadMonotonic(7) - -┌─generateUUIDv7ThreadMonotonic(1)─────┬─generateUUIDv7ThreadMonotonic(2)─────┐ -│ 018f05e1-14ee-7bc5-9906-207153b400b1 │ 018f05e1-14ee-7bc5-9906-2072b8e96758 │ -└──────────────────────────────────────┴──────────────────────────────────────┘ -``` - -## generateUUIDv7NonMonotonic {#uuidv7nonmonotonic-function-generate} - -Генерирует идентификатор [UUID версии 7](https://datatracker.ietf.org/doc/html/draft-peabody-dispatch-new-uuid-format-04). 
Генерируемый UUID состоит из 48-битной временной метки (Unix time в миллисекундах), маркеров версии 7 и варианта 2, и случайных данных в следующей последовательности: -``` - 0 1 2 3 - 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| unix_ts_ms | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| unix_ts_ms | ver | rand_a | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -|var| rand_b | -├─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┼─┤ -| rand_b | -└─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┴─┘ -``` -::::note -На апрель 2024 года UUIDv7 находится в статусе черновика и его раскладка по битам может в итоге измениться. -:::: - -**Синтаксис** - -``` sql -generateUUIDv7NonMonotonic([x]) -``` - -**Аргументы** - -- `x` — [выражение](../syntax.md#syntax-expressions), возвращающее значение одного из [поддерживаемых типов данных](../data-types/index.md#data_types). Значение используется, чтобы избежать [склейки одинаковых выражений](index.md#common-subexpression-elimination), если функция вызывается несколько раз в одном запросе. Необязательный параметр. - -**Возвращаемое значение** - -Значение типа [UUID](../../sql-reference/functions/uuid-functions.md). - -**Пример использования** - -Этот пример демонстрирует, как создать таблицу с UUID-колонкой и добавить в нее сгенерированный UUIDv7. - -``` sql -CREATE TABLE t_uuid (x UUID) ENGINE=TinyLog - -INSERT INTO t_uuid SELECT generateUUIDv7NonMonotonic() - -SELECT * FROM t_uuid -``` - -``` text -┌────────────────────────────────────x─┐ -│ 018f05af-f4a8-778f-beee-1bedbc95c93b │ -└──────────────────────────────────────┘ -``` - -**Пример использования, для генерации нескольких значений в одной строке** - -```sql -SELECT generateUUIDv7NonMonotonic(1), generateUUIDv7NonMonotonic(7) -┌─generateUUIDv7NonMonotonic(1)────────┬─generateUUIDv7NonMonotonic(2)────────┐ -│ 018f05b1-8c2e-7567-a988-48d09606ae8c │ 018f05b1-8c2e-7946-895b-fcd7635da9a0 │ -└──────────────────────────────────────┴──────────────────────────────────────┘ -``` - ## empty {#empty} Проверяет, является ли входной UUID пустым. diff --git a/docs/zh/getting-started/install.md b/docs/zh/getting-started/install.md index e65cfea62cd..7e4fb6826e4 100644 --- a/docs/zh/getting-started/install.md +++ b/docs/zh/getting-started/install.md @@ -38,26 +38,6 @@ sudo service clickhouse-server start clickhouse-client # or "clickhouse-client --password" if you've set up a password. ``` -
- -Deprecated Method for installing deb-packages - -``` bash -sudo apt-get install apt-transport-https ca-certificates dirmngr -sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4 - -echo "deb https://repo.clickhouse.com/deb/stable/ main/" | sudo tee \ - /etc/apt/sources.list.d/clickhouse.list -sudo apt-get update - -sudo apt-get install -y clickhouse-server clickhouse-client - -sudo service clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. -``` - -
- 如果您想使用最新的版本,请用`testing`替代`stable`(我们只推荐您用于测试环境)。 你也可以从这里手动下载安装包:[下载](https://packages.clickhouse.com/deb/pool/stable)。 @@ -95,22 +75,6 @@ sudo /etc/init.d/clickhouse-server start clickhouse-client # or "clickhouse-client --password" if you set up a password. ``` -
- -Deprecated Method for installing rpm-packages - -``` bash -sudo yum install yum-utils -sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG -sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/clickhouse.repo -sudo yum install clickhouse-server clickhouse-client - -sudo /etc/init.d/clickhouse-server start -clickhouse-client # or "clickhouse-client --password" if you set up a password. -``` - -
- 如果您想使用最新的版本,请用`testing`替代`stable`(我们只推荐您用于测试环境)。`prestable`有时也可用。 然后运行命令安装: @@ -164,34 +128,6 @@ tar -xzvf "clickhouse-client-$LATEST_VERSION-${ARCH}.tgz" \ sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh" ``` -
- -Deprecated Method for installing tgz archives - -``` bash -export LATEST_VERSION=$(curl -s https://repo.clickhouse.com/tgz/stable/ | \ - grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1) -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz -curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz - -tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz -sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-common-static-dbg-$LATEST_VERSION.tgz -sudo clickhouse-common-static-dbg-$LATEST_VERSION/install/doinst.sh - -tar -xzvf clickhouse-server-$LATEST_VERSION.tgz -sudo clickhouse-server-$LATEST_VERSION/install/doinst.sh -sudo /etc/init.d/clickhouse-server start - -tar -xzvf clickhouse-client-$LATEST_VERSION.tgz -sudo clickhouse-client-$LATEST_VERSION/install/doinst.sh -``` - -
- 对于生产环境,建议使用最新的`stable`版本。你可以在GitHub页面https://github.com/ClickHouse/ClickHouse/tags找到它,它以后缀`-stable`标志。 ### `Docker`安装包 {#from-docker-image} diff --git a/packages/clickhouse-server.init b/packages/clickhouse-server.init index f215e52b6f3..0ac9cf7ae1f 100755 --- a/packages/clickhouse-server.init +++ b/packages/clickhouse-server.init @@ -1,10 +1,11 @@ #!/bin/sh ### BEGIN INIT INFO # Provides: clickhouse-server +# Required-Start: $network +# Required-Stop: $network +# Should-Start: $time # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 -# Should-Start: $time $network -# Should-Stop: $network # Short-Description: clickhouse-server daemon ### END INIT INFO # diff --git a/programs/keeper-client/Commands.cpp b/programs/keeper-client/Commands.cpp index a109912e6e0..860840a2d06 100644 --- a/programs/keeper-client/Commands.cpp +++ b/programs/keeper-client/Commands.cpp @@ -10,6 +10,7 @@ namespace DB namespace ErrorCodes { + extern const int LOGICAL_ERROR; extern const int KEEPER_EXCEPTION; } @@ -441,7 +442,7 @@ void ReconfigCommand::execute(const DB::ASTKeeperQuery * query, DB::KeeperClient new_members = query->args[1].safeGet(); break; default: - UNREACHABLE(); + throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected operation: {}", operation); } auto response = client->zookeeper->reconfig(joining, leaving, new_members); diff --git a/programs/library-bridge/LibraryBridgeHandlers.h b/programs/library-bridge/LibraryBridgeHandlers.h index 1db71eb24cb..62fbf2caede 100644 --- a/programs/library-bridge/LibraryBridgeHandlers.h +++ b/programs/library-bridge/LibraryBridgeHandlers.h @@ -23,7 +23,7 @@ public: void handleRequest(HTTPServerRequest & request, HTTPServerResponse & response, const ProfileEvents::Event & write_event) override; private: - static constexpr inline auto FORMAT = "RowBinary"; + static constexpr auto FORMAT = "RowBinary"; const size_t keep_alive_timeout; LoggerPtr log; diff --git a/programs/main.cpp b/programs/main.cpp index bc8476e4ce4..c270388f17f 100644 --- a/programs/main.cpp +++ b/programs/main.cpp @@ -155,8 +155,8 @@ auto instructionFailToString(InstructionFail fail) ret("AVX2"); case InstructionFail::AVX512: ret("AVX512"); +#undef ret } - UNREACHABLE(); } diff --git a/programs/server/MetricsTransmitter.h b/programs/server/MetricsTransmitter.h index 23420117b56..24069a60071 100644 --- a/programs/server/MetricsTransmitter.h +++ b/programs/server/MetricsTransmitter.h @@ -56,10 +56,10 @@ private: std::condition_variable cond; std::optional thread; - static inline constexpr auto profile_events_path_prefix = "ClickHouse.ProfileEvents."; - static inline constexpr auto profile_events_cumulative_path_prefix = "ClickHouse.ProfileEventsCumulative."; - static inline constexpr auto current_metrics_path_prefix = "ClickHouse.Metrics."; - static inline constexpr auto asynchronous_metrics_path_prefix = "ClickHouse.AsynchronousMetrics."; + static constexpr auto profile_events_path_prefix = "ClickHouse.ProfileEvents."; + static constexpr auto profile_events_cumulative_path_prefix = "ClickHouse.ProfileEventsCumulative."; + static constexpr auto current_metrics_path_prefix = "ClickHouse.Metrics."; + static constexpr auto asynchronous_metrics_path_prefix = "ClickHouse.AsynchronousMetrics."; }; } diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp index 223bc1f77e7..8fcb9d87a93 100644 --- a/programs/server/Server.cpp +++ b/programs/server/Server.cpp @@ -792,9 +792,32 @@ try LOG_INFO(log, "Background threads finished in {} ms", watch.elapsedMilliseconds()); }); + /// This object will 
periodically calculate some metrics. + ServerAsynchronousMetrics async_metrics( + global_context, + server_settings.asynchronous_metrics_update_period_s, + server_settings.asynchronous_heavy_metrics_update_period_s, + [&]() -> std::vector + { + std::vector metrics; + + std::lock_guard lock(servers_lock); + metrics.reserve(servers_to_start_before_tables.size() + servers.size()); + + for (const auto & server : servers_to_start_before_tables) + metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads()}); + + for (const auto & server : servers) + metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads()}); + return metrics; + } + ); + /// NOTE: global context should be destroyed *before* GlobalThreadPool::shutdown() /// Otherwise GlobalThreadPool::shutdown() will hang, since Context holds some threads. SCOPE_EXIT({ + async_metrics.stop(); + /** Ask to cancel background jobs all table engines, * and also query_log. * It is important to do early, not in destructor of Context, because @@ -921,27 +944,6 @@ try } } - /// This object will periodically calculate some metrics. - ServerAsynchronousMetrics async_metrics( - global_context, - server_settings.asynchronous_metrics_update_period_s, - server_settings.asynchronous_heavy_metrics_update_period_s, - [&]() -> std::vector - { - std::vector metrics; - - std::lock_guard lock(servers_lock); - metrics.reserve(servers_to_start_before_tables.size() + servers.size()); - - for (const auto & server : servers_to_start_before_tables) - metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads()}); - - for (const auto & server : servers) - metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads()}); - return metrics; - } - ); - zkutil::validateZooKeeperConfig(config()); bool has_zookeeper = zkutil::hasZooKeeperConfig(config()); @@ -1748,6 +1750,11 @@ try } + if (config().has(DB::PlacementInfo::PLACEMENT_CONFIG_PREFIX)) + { + PlacementInfo::PlacementInfo::instance().initialize(config()); + } + { std::lock_guard lock(servers_lock); /// We should start interserver communications before (and more important shutdown after) tables. @@ -2096,11 +2103,6 @@ try load_metadata_tasks); } - if (config().has(DB::PlacementInfo::PLACEMENT_CONFIG_PREFIX)) - { - PlacementInfo::PlacementInfo::instance().initialize(config()); - } - /// Do not keep tasks in server, they should be kept inside databases. Used here to make dependent tasks only. 
load_metadata_tasks.clear(); load_metadata_tasks.shrink_to_fit(); diff --git a/src/Access/AccessEntityIO.cpp b/src/Access/AccessEntityIO.cpp index b0dfd74c53b..1b073329296 100644 --- a/src/Access/AccessEntityIO.cpp +++ b/src/Access/AccessEntityIO.cpp @@ -144,8 +144,7 @@ AccessEntityPtr deserializeAccessEntity(const String & definition, const String catch (Exception & e) { e.addMessage("Could not parse " + file_path); - e.rethrow(); - UNREACHABLE(); + throw; } } diff --git a/src/Access/AccessRights.cpp b/src/Access/AccessRights.cpp index c10931f554c..2127f4ada70 100644 --- a/src/Access/AccessRights.cpp +++ b/src/Access/AccessRights.cpp @@ -258,7 +258,7 @@ namespace case TABLE_LEVEL: return AccessFlags::allFlagsGrantableOnTableLevel(); case COLUMN_LEVEL: return AccessFlags::allFlagsGrantableOnColumnLevel(); } - UNREACHABLE(); + chassert(false); } } diff --git a/src/Access/IAccessStorage.cpp b/src/Access/IAccessStorage.cpp index 8e51481e415..8d4e7d3073e 100644 --- a/src/Access/IAccessStorage.cpp +++ b/src/Access/IAccessStorage.cpp @@ -257,8 +257,7 @@ std::vector IAccessStorage::insert(const std::vector & mu } e.addMessage("After successfully inserting {}/{}: {}", successfully_inserted.size(), multiple_entities.size(), successfully_inserted_str); } - e.rethrow(); - UNREACHABLE(); + throw; } } @@ -361,8 +360,7 @@ std::vector IAccessStorage::remove(const std::vector & ids, bool thr } e.addMessage("After successfully removing {}/{}: {}", removed_names.size(), ids.size(), removed_names_str); } - e.rethrow(); - UNREACHABLE(); + throw; } } @@ -458,8 +456,7 @@ std::vector IAccessStorage::update(const std::vector & ids, const Up } e.addMessage("After successfully updating {}/{}: {}", names_of_updated.size(), ids.size(), names_of_updated_str); } - e.rethrow(); - UNREACHABLE(); + throw; } } diff --git a/src/AggregateFunctions/AggregateFunctionGroupArray.cpp b/src/AggregateFunctions/AggregateFunctionGroupArray.cpp index c21b1d376d9..16907e0f24f 100644 --- a/src/AggregateFunctions/AggregateFunctionGroupArray.cpp +++ b/src/AggregateFunctions/AggregateFunctionGroupArray.cpp @@ -60,14 +60,13 @@ struct GroupArrayTrait template constexpr const char * getNameByTrait() { - if (Trait::last) + if constexpr (Trait::last) return "groupArrayLast"; - if (Trait::sampler == Sampler::NONE) - return "groupArray"; - else if (Trait::sampler == Sampler::RNG) - return "groupArraySample"; - - UNREACHABLE(); + switch (Trait::sampler) + { + case Sampler::NONE: return "groupArray"; + case Sampler::RNG: return "groupArraySample"; + } } template diff --git a/src/AggregateFunctions/AggregateFunctionSequenceNextNode.cpp b/src/AggregateFunctions/AggregateFunctionSequenceNextNode.cpp index bed10333af0..b0240225138 100644 --- a/src/AggregateFunctions/AggregateFunctionSequenceNextNode.cpp +++ b/src/AggregateFunctions/AggregateFunctionSequenceNextNode.cpp @@ -341,7 +341,7 @@ public: value[i] = Node::read(buf, arena); } - inline std::optional getBaseIndex(Data & data) const + std::optional getBaseIndex(Data & data) const { if (data.value.size() == 0) return {}; @@ -414,7 +414,6 @@ public: break; return (i == events_size) ? 
base - i : unmatched_idx; } - UNREACHABLE(); } void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override diff --git a/src/AggregateFunctions/AggregateFunctionSum.h b/src/AggregateFunctions/AggregateFunctionSum.h index 58aaddf357a..2ce03c530c2 100644 --- a/src/AggregateFunctions/AggregateFunctionSum.h +++ b/src/AggregateFunctions/AggregateFunctionSum.h @@ -463,7 +463,6 @@ public: return "sumWithOverflow"; else if constexpr (Type == AggregateFunctionTypeSumKahan) return "sumKahan"; - UNREACHABLE(); } explicit AggregateFunctionSum(const DataTypes & argument_types_) diff --git a/src/AggregateFunctions/Combinators/AggregateFunctionIf.cpp b/src/AggregateFunctions/Combinators/AggregateFunctionIf.cpp index 9b5ee79a533..3e21ffa3418 100644 --- a/src/AggregateFunctions/Combinators/AggregateFunctionIf.cpp +++ b/src/AggregateFunctions/Combinators/AggregateFunctionIf.cpp @@ -73,7 +73,7 @@ private: using Base = AggregateFunctionNullBase>; - inline bool singleFilter(const IColumn ** columns, size_t row_num) const + bool singleFilter(const IColumn ** columns, size_t row_num) const { const IColumn * filter_column = columns[num_arguments - 1]; @@ -261,7 +261,7 @@ public: filter_is_only_null = arguments.back()->onlyNull(); } - static inline bool singleFilter(const IColumn ** columns, size_t row_num, size_t num_arguments) + static bool singleFilter(const IColumn ** columns, size_t row_num, size_t num_arguments) { return assert_cast(*columns[num_arguments - 1]).getData()[row_num]; } diff --git a/src/AggregateFunctions/QuantileTDigest.h b/src/AggregateFunctions/QuantileTDigest.h index 9d84f079daa..d5a4f6b576a 100644 --- a/src/AggregateFunctions/QuantileTDigest.h +++ b/src/AggregateFunctions/QuantileTDigest.h @@ -138,7 +138,7 @@ class QuantileTDigest compress(); } - inline bool canBeMerged(const BetterFloat & l_mean, const Value & r_mean) + bool canBeMerged(const BetterFloat & l_mean, const Value & r_mean) { return l_mean == r_mean || (!std::isinf(l_mean) && !std::isinf(r_mean)); } diff --git a/src/AggregateFunctions/QuantileTiming.h b/src/AggregateFunctions/QuantileTiming.h index 45fbf38258f..eef15828fc0 100644 --- a/src/AggregateFunctions/QuantileTiming.h +++ b/src/AggregateFunctions/QuantileTiming.h @@ -262,7 +262,7 @@ namespace detail UInt64 count_big[BIG_SIZE]; /// Get value of quantile by index in array `count_big`. - static inline UInt16 indexInBigToValue(size_t i) + static UInt16 indexInBigToValue(size_t i) { return (i * BIG_PRECISION) + SMALL_THRESHOLD + (intHash32<0>(i) % BIG_PRECISION - (BIG_PRECISION / 2)); /// A small randomization so that it is not noticeable that all the values are even. 
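In the `AggregateFunctionSum.h` hunk above, the `if constexpr` chain is expected to cover every sum variant, which is what makes the removed trailing `UNREACHABLE()` dead code: the name-selection function returns one of `sum`, `sumWithOverflow`, or `sumKahan`. These correspond to user-facing aggregate functions; a quick hedged illustration (any numeric values work):

```sql
SELECT sum(x), sumWithOverflow(x), sumKahan(x)
FROM (SELECT number / 10 AS x FROM numbers(10));
```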
diff --git a/src/AggregateFunctions/ThetaSketchData.h b/src/AggregateFunctions/ThetaSketchData.h index f32386d945b..99dca27673d 100644 --- a/src/AggregateFunctions/ThetaSketchData.h +++ b/src/AggregateFunctions/ThetaSketchData.h @@ -24,14 +24,14 @@ private: std::unique_ptr sk_update; std::unique_ptr sk_union; - inline datasketches::update_theta_sketch * getSkUpdate() + datasketches::update_theta_sketch * getSkUpdate() { if (!sk_update) sk_update = std::make_unique(datasketches::update_theta_sketch::builder().build()); return sk_update.get(); } - inline datasketches::theta_union * getSkUnion() + datasketches::theta_union * getSkUnion() { if (!sk_union) sk_union = std::make_unique(datasketches::theta_union::builder().build()); diff --git a/src/AggregateFunctions/UniqVariadicHash.h b/src/AggregateFunctions/UniqVariadicHash.h index 840380e7f0f..5bb245397d4 100644 --- a/src/AggregateFunctions/UniqVariadicHash.h +++ b/src/AggregateFunctions/UniqVariadicHash.h @@ -38,7 +38,7 @@ bool isAllArgumentsContiguousInMemory(const DataTypes & argument_types); template <> struct UniqVariadicHash { - static inline UInt64 apply(size_t num_args, const IColumn ** columns, size_t row_num) + static UInt64 apply(size_t num_args, const IColumn ** columns, size_t row_num) { UInt64 hash; @@ -65,7 +65,7 @@ struct UniqVariadicHash template <> struct UniqVariadicHash { - static inline UInt64 apply(size_t num_args, const IColumn ** columns, size_t row_num) + static UInt64 apply(size_t num_args, const IColumn ** columns, size_t row_num) { UInt64 hash; @@ -94,7 +94,7 @@ struct UniqVariadicHash template <> struct UniqVariadicHash { - static inline UInt128 apply(size_t num_args, const IColumn ** columns, size_t row_num) + static UInt128 apply(size_t num_args, const IColumn ** columns, size_t row_num) { const IColumn ** column = columns; const IColumn ** columns_end = column + num_args; @@ -114,7 +114,7 @@ struct UniqVariadicHash template <> struct UniqVariadicHash { - static inline UInt128 apply(size_t num_args, const IColumn ** columns, size_t row_num) + static UInt128 apply(size_t num_args, const IColumn ** columns, size_t row_num) { const auto & tuple_columns = assert_cast(columns[0])->getColumns(); diff --git a/src/AggregateFunctions/UniquesHashSet.h b/src/AggregateFunctions/UniquesHashSet.h index d6fc2bb6634..d5241547711 100644 --- a/src/AggregateFunctions/UniquesHashSet.h +++ b/src/AggregateFunctions/UniquesHashSet.h @@ -105,14 +105,14 @@ private: } } - inline size_t buf_size() const { return 1ULL << size_degree; } /// NOLINT - inline size_t max_fill() const { return 1ULL << (size_degree - 1); } /// NOLINT - inline size_t mask() const { return buf_size() - 1; } + size_t buf_size() const { return 1ULL << size_degree; } /// NOLINT + size_t max_fill() const { return 1ULL << (size_degree - 1); } /// NOLINT + size_t mask() const { return buf_size() - 1; } - inline size_t place(HashValue x) const { return (x >> UNIQUES_HASH_BITS_FOR_SKIP) & mask(); } + size_t place(HashValue x) const { return (x >> UNIQUES_HASH_BITS_FOR_SKIP) & mask(); } /// The value is divided by 2 ^ skip_degree - inline bool good(HashValue hash) const { return hash == ((hash >> skip_degree) << skip_degree); } + bool good(HashValue hash) const { return hash == ((hash >> skip_degree) << skip_degree); } HashValue hash(Value key) const { return static_cast(Hash()(key)); } diff --git a/src/Analyzer/ArrayJoinNode.cpp b/src/Analyzer/ArrayJoinNode.cpp index 59389d4f2a8..27d7229d46a 100644 --- a/src/Analyzer/ArrayJoinNode.cpp +++ b/src/Analyzer/ArrayJoinNode.cpp 
@@ -24,6 +24,9 @@ void ArrayJoinNode::dumpTreeImpl(WriteBuffer & buffer, FormatState & format_stat buffer << std::string(indent, ' ') << "ARRAY_JOIN id: " << format_state.getNodeId(this); buffer << ", is_left: " << is_left; + if (hasAlias()) + buffer << ", alias: " << getAlias(); + buffer << '\n' << std::string(indent + 2, ' ') << "TABLE EXPRESSION\n"; getTableExpression()->dumpTreeImpl(buffer, format_state, indent + 4); diff --git a/src/Analyzer/Passes/AggregateFunctionsArithmericOperationsPass.cpp b/src/Analyzer/Passes/AggregateFunctionsArithmericOperationsPass.cpp index f96ba22eb7a..9153bc4eca2 100644 --- a/src/Analyzer/Passes/AggregateFunctionsArithmericOperationsPass.cpp +++ b/src/Analyzer/Passes/AggregateFunctionsArithmericOperationsPass.cpp @@ -173,13 +173,13 @@ private: return arithmetic_function_clone; } - inline void resolveOrdinaryFunctionNode(FunctionNode & function_node, const String & function_name) const + void resolveOrdinaryFunctionNode(FunctionNode & function_node, const String & function_name) const { auto function = FunctionFactory::instance().get(function_name, getContext()); function_node.resolveAsFunction(function->build(function_node.getArgumentColumns())); } - static inline void resolveAggregateFunctionNode(FunctionNode & function_node, const QueryTreeNodePtr & argument, const String & aggregate_function_name) + static void resolveAggregateFunctionNode(FunctionNode & function_node, const QueryTreeNodePtr & argument, const String & aggregate_function_name) { auto function_aggregate_function = function_node.getAggregateFunction(); diff --git a/src/Analyzer/Passes/ComparisonTupleEliminationPass.cpp b/src/Analyzer/Passes/ComparisonTupleEliminationPass.cpp index f8233f473f8..ebefc12ae53 100644 --- a/src/Analyzer/Passes/ComparisonTupleEliminationPass.cpp +++ b/src/Analyzer/Passes/ComparisonTupleEliminationPass.cpp @@ -184,7 +184,7 @@ private: return result_function; } - inline QueryTreeNodePtr makeEqualsFunction(QueryTreeNodePtr lhs_argument, QueryTreeNodePtr rhs_argument) const + QueryTreeNodePtr makeEqualsFunction(QueryTreeNodePtr lhs_argument, QueryTreeNodePtr rhs_argument) const { return makeComparisonFunction(std::move(lhs_argument), std::move(rhs_argument), "equals"); } diff --git a/src/Analyzer/Passes/ConvertQueryToCNFPass.cpp b/src/Analyzer/Passes/ConvertQueryToCNFPass.cpp index 96bc62212fd..5951e8fc5ea 100644 --- a/src/Analyzer/Passes/ConvertQueryToCNFPass.cpp +++ b/src/Analyzer/Passes/ConvertQueryToCNFPass.cpp @@ -99,6 +99,23 @@ bool checkIfGroupAlwaysTrueGraph(const Analyzer::CNF::OrGroup & group, const Com return false; } +bool checkIfGroupAlwaysTrueAtoms(const Analyzer::CNF::OrGroup & group) +{ + /// Filters out groups containing mutually exclusive atoms, + /// since these groups are always True + + for (const auto & atom : group) + { + auto negated(atom); + negated.negative = !atom.negative; + if (group.contains(negated)) + { + return true; + } + } + return false; +} + bool checkIfAtomAlwaysFalseFullMatch(const Analyzer::CNF::AtomicFormula & atom, const ConstraintsDescription::QueryTreeData & query_tree_constraints) { const auto constraint_atom_ids = query_tree_constraints.getAtomIds(atom.node_with_hash); @@ -644,7 +661,8 @@ void optimizeWithConstraints(Analyzer::CNF & cnf, const QueryTreeNodes & table_e cnf.filterAlwaysTrueGroups([&](const auto & group) { /// remove always true groups from CNF - return !checkIfGroupAlwaysTrueFullMatch(group, query_tree_constraints) && !checkIfGroupAlwaysTrueGraph(group, compare_graph); + return 
!checkIfGroupAlwaysTrueFullMatch(group, query_tree_constraints) + && !checkIfGroupAlwaysTrueGraph(group, compare_graph) && !checkIfGroupAlwaysTrueAtoms(group); }) .filterAlwaysFalseAtoms([&](const Analyzer::CNF::AtomicFormula & atom) { diff --git a/src/Analyzer/Passes/FunctionToSubcolumnsPass.cpp b/src/Analyzer/Passes/FunctionToSubcolumnsPass.cpp index 6248f462979..15ac8d642a4 100644 --- a/src/Analyzer/Passes/FunctionToSubcolumnsPass.cpp +++ b/src/Analyzer/Passes/FunctionToSubcolumnsPass.cpp @@ -215,7 +215,7 @@ public: } private: - inline void resolveOrdinaryFunctionNode(FunctionNode & function_node, const String & function_name) const + void resolveOrdinaryFunctionNode(FunctionNode & function_node, const String & function_name) const { auto function = FunctionFactory::instance().get(function_name, getContext()); function_node.resolveAsFunction(function->build(function_node.getArgumentColumns())); diff --git a/src/Analyzer/Passes/NormalizeCountVariantsPass.cpp b/src/Analyzer/Passes/NormalizeCountVariantsPass.cpp index 0d6f3fc2d87..e70e08e65f4 100644 --- a/src/Analyzer/Passes/NormalizeCountVariantsPass.cpp +++ b/src/Analyzer/Passes/NormalizeCountVariantsPass.cpp @@ -59,7 +59,7 @@ public: } } private: - static inline void resolveAsCountAggregateFunction(FunctionNode & function_node) + static void resolveAsCountAggregateFunction(FunctionNode & function_node) { AggregateFunctionProperties properties; auto aggregate_function = AggregateFunctionFactory::instance().get("count", NullsAction::EMPTY, {}, {}, properties); diff --git a/src/Analyzer/Passes/RewriteAggregateFunctionWithIfPass.cpp b/src/Analyzer/Passes/RewriteAggregateFunctionWithIfPass.cpp index 513dd0054d6..a82ad3dced1 100644 --- a/src/Analyzer/Passes/RewriteAggregateFunctionWithIfPass.cpp +++ b/src/Analyzer/Passes/RewriteAggregateFunctionWithIfPass.cpp @@ -108,7 +108,7 @@ public: } private: - static inline void resolveAsAggregateFunctionWithIf(FunctionNode & function_node, const DataTypes & argument_types) + static void resolveAsAggregateFunctionWithIf(FunctionNode & function_node, const DataTypes & argument_types) { auto result_type = function_node.getResultType(); diff --git a/src/Analyzer/Passes/RewriteSumFunctionWithSumAndCountPass.cpp b/src/Analyzer/Passes/RewriteSumFunctionWithSumAndCountPass.cpp index 917256bf4b1..5646d26f7f6 100644 --- a/src/Analyzer/Passes/RewriteSumFunctionWithSumAndCountPass.cpp +++ b/src/Analyzer/Passes/RewriteSumFunctionWithSumAndCountPass.cpp @@ -110,7 +110,7 @@ private: function_node.resolveAsFunction(function->build(function_node.getArgumentColumns())); } - static inline void resolveAsAggregateFunctionNode(FunctionNode & function_node, const DataTypePtr & argument_type) + static void resolveAsAggregateFunctionNode(FunctionNode & function_node, const DataTypePtr & argument_type) { AggregateFunctionProperties properties; const auto aggregate_function = AggregateFunctionFactory::instance().get(function_node.getFunctionName(), diff --git a/src/Analyzer/Passes/SumIfToCountIfPass.cpp b/src/Analyzer/Passes/SumIfToCountIfPass.cpp index 1a4712aa697..852cbe75c4a 100644 --- a/src/Analyzer/Passes/SumIfToCountIfPass.cpp +++ b/src/Analyzer/Passes/SumIfToCountIfPass.cpp @@ -156,7 +156,7 @@ public: } private: - static inline void resolveAsCountIfAggregateFunction(FunctionNode & function_node, const DataTypePtr & argument_type) + static void resolveAsCountIfAggregateFunction(FunctionNode & function_node, const DataTypePtr & argument_type) { AggregateFunctionProperties properties; auto aggregate_function = 
AggregateFunctionFactory::instance().get( @@ -165,7 +165,7 @@ private: function_node.resolveAsAggregateFunction(std::move(aggregate_function)); } - inline QueryTreeNodePtr getMultiplyFunction(QueryTreeNodePtr left, QueryTreeNodePtr right) + QueryTreeNodePtr getMultiplyFunction(QueryTreeNodePtr left, QueryTreeNodePtr right) { auto multiply_function_node = std::make_shared("multiply"); auto & multiply_arguments_nodes = multiply_function_node->getArguments().getNodes(); diff --git a/src/Analyzer/Resolve/ExpressionsStack.h b/src/Analyzer/Resolve/ExpressionsStack.h new file mode 100644 index 00000000000..82a27aa8b83 --- /dev/null +++ b/src/Analyzer/Resolve/ExpressionsStack.h @@ -0,0 +1,124 @@ +#pragma once + +#include +#include +#include + +namespace DB +{ + +class ExpressionsStack +{ +public: + void push(const QueryTreeNodePtr & node) + { + if (node->hasAlias()) + { + const auto & node_alias = node->getAlias(); + alias_name_to_expressions[node_alias].push_back(node); + } + + if (const auto * function = node->as()) + { + if (AggregateFunctionFactory::instance().isAggregateFunctionName(function->getFunctionName())) + ++aggregate_functions_counter; + } + + expressions.emplace_back(node); + } + + void pop() + { + const auto & top_expression = expressions.back(); + const auto & top_expression_alias = top_expression->getAlias(); + + if (!top_expression_alias.empty()) + { + auto it = alias_name_to_expressions.find(top_expression_alias); + auto & alias_expressions = it->second; + alias_expressions.pop_back(); + + if (alias_expressions.empty()) + alias_name_to_expressions.erase(it); + } + + if (const auto * function = top_expression->as()) + { + if (AggregateFunctionFactory::instance().isAggregateFunctionName(function->getFunctionName())) + --aggregate_functions_counter; + } + + expressions.pop_back(); + } + + [[maybe_unused]] const QueryTreeNodePtr & getRoot() const + { + return expressions.front(); + } + + const QueryTreeNodePtr & getTop() const + { + return expressions.back(); + } + + [[maybe_unused]] bool hasExpressionWithAlias(const std::string & alias) const + { + return alias_name_to_expressions.contains(alias); + } + + bool hasAggregateFunction() const + { + return aggregate_functions_counter > 0; + } + + QueryTreeNodePtr getExpressionWithAlias(const std::string & alias) const + { + auto expression_it = alias_name_to_expressions.find(alias); + if (expression_it == alias_name_to_expressions.end()) + return {}; + + return expression_it->second.front(); + } + + [[maybe_unused]] size_t size() const + { + return expressions.size(); + } + + bool empty() const + { + return expressions.empty(); + } + + void dump(WriteBuffer & buffer) const + { + buffer << expressions.size() << '\n'; + + for (const auto & expression : expressions) + { + buffer << "Expression "; + buffer << expression->formatASTForErrorMessage(); + + const auto & alias = expression->getAlias(); + if (!alias.empty()) + buffer << " alias " << alias; + + buffer << '\n'; + } + } + + [[maybe_unused]] String dump() const + { + WriteBufferFromOwnString buffer; + dump(buffer); + + return buffer.str(); + } + +private: + QueryTreeNodes expressions; + size_t aggregate_functions_counter = 0; + std::unordered_map alias_name_to_expressions; +}; + +} diff --git a/src/Analyzer/Resolve/IdentifierLookup.h b/src/Analyzer/Resolve/IdentifierLookup.h new file mode 100644 index 00000000000..8dd70c188e9 --- /dev/null +++ b/src/Analyzer/Resolve/IdentifierLookup.h @@ -0,0 +1,195 @@ +#pragma once + +#include +#include +#include + +#include +#include + +namespace 
DB +{ + +/// Identifier lookup context +enum class IdentifierLookupContext : uint8_t +{ + EXPRESSION = 0, + FUNCTION, + TABLE_EXPRESSION, +}; + +inline const char * toString(IdentifierLookupContext identifier_lookup_context) +{ + switch (identifier_lookup_context) + { + case IdentifierLookupContext::EXPRESSION: return "EXPRESSION"; + case IdentifierLookupContext::FUNCTION: return "FUNCTION"; + case IdentifierLookupContext::TABLE_EXPRESSION: return "TABLE_EXPRESSION"; + } +} + +inline const char * toStringLowercase(IdentifierLookupContext identifier_lookup_context) +{ + switch (identifier_lookup_context) + { + case IdentifierLookupContext::EXPRESSION: return "expression"; + case IdentifierLookupContext::FUNCTION: return "function"; + case IdentifierLookupContext::TABLE_EXPRESSION: return "table expression"; + } +} + +/** Structure that represent identifier lookup during query analysis. + * Lookup can be in query expression, function, table context. + */ +struct IdentifierLookup +{ + Identifier identifier; + IdentifierLookupContext lookup_context; + + bool isExpressionLookup() const + { + return lookup_context == IdentifierLookupContext::EXPRESSION; + } + + bool isFunctionLookup() const + { + return lookup_context == IdentifierLookupContext::FUNCTION; + } + + bool isTableExpressionLookup() const + { + return lookup_context == IdentifierLookupContext::TABLE_EXPRESSION; + } + + String dump() const + { + return identifier.getFullName() + ' ' + toString(lookup_context); + } +}; + +inline bool operator==(const IdentifierLookup & lhs, const IdentifierLookup & rhs) +{ + return lhs.identifier.getFullName() == rhs.identifier.getFullName() && lhs.lookup_context == rhs.lookup_context; +} + +[[maybe_unused]] inline bool operator!=(const IdentifierLookup & lhs, const IdentifierLookup & rhs) +{ + return !(lhs == rhs); +} + +struct IdentifierLookupHash +{ + size_t operator()(const IdentifierLookup & identifier_lookup) const + { + return std::hash()(identifier_lookup.identifier.getFullName()) ^ static_cast(identifier_lookup.lookup_context); + } +}; + +enum class IdentifierResolvePlace : UInt8 +{ + NONE = 0, + EXPRESSION_ARGUMENTS, + ALIASES, + JOIN_TREE, + /// Valid only for table lookup + CTE, + /// Valid only for table lookup + DATABASE_CATALOG +}; + +inline const char * toString(IdentifierResolvePlace resolved_identifier_place) +{ + switch (resolved_identifier_place) + { + case IdentifierResolvePlace::NONE: return "NONE"; + case IdentifierResolvePlace::EXPRESSION_ARGUMENTS: return "EXPRESSION_ARGUMENTS"; + case IdentifierResolvePlace::ALIASES: return "ALIASES"; + case IdentifierResolvePlace::JOIN_TREE: return "JOIN_TREE"; + case IdentifierResolvePlace::CTE: return "CTE"; + case IdentifierResolvePlace::DATABASE_CATALOG: return "DATABASE_CATALOG"; + } +} + +struct IdentifierResolveResult +{ + IdentifierResolveResult() = default; + + QueryTreeNodePtr resolved_identifier; + IdentifierResolvePlace resolve_place = IdentifierResolvePlace::NONE; + bool resolved_from_parent_scopes = false; + + [[maybe_unused]] bool isResolved() const + { + return resolve_place != IdentifierResolvePlace::NONE; + } + + [[maybe_unused]] bool isResolvedFromParentScopes() const + { + return resolved_from_parent_scopes; + } + + [[maybe_unused]] bool isResolvedFromExpressionArguments() const + { + return resolve_place == IdentifierResolvePlace::EXPRESSION_ARGUMENTS; + } + + [[maybe_unused]] bool isResolvedFromAliases() const + { + return resolve_place == IdentifierResolvePlace::ALIASES; + } + + [[maybe_unused]] bool 
isResolvedFromJoinTree() const + { + return resolve_place == IdentifierResolvePlace::JOIN_TREE; + } + + [[maybe_unused]] bool isResolvedFromCTEs() const + { + return resolve_place == IdentifierResolvePlace::CTE; + } + + void dump(WriteBuffer & buffer) const + { + if (!resolved_identifier) + { + buffer << "unresolved"; + return; + } + + buffer << resolved_identifier->formatASTForErrorMessage() << " place " << toString(resolve_place) << " resolved from parent scopes " << resolved_from_parent_scopes; + } + + [[maybe_unused]] String dump() const + { + WriteBufferFromOwnString buffer; + dump(buffer); + + return buffer.str(); + } +}; + +struct IdentifierResolveState +{ + IdentifierResolveResult resolve_result; + bool cyclic_identifier_resolve = false; +}; + +struct IdentifierResolveSettings +{ + /// Allow to check join tree during identifier resolution + bool allow_to_check_join_tree = true; + + /// Allow to check CTEs during table identifier resolution + bool allow_to_check_cte = true; + + /// Allow to check parent scopes during identifier resolution + bool allow_to_check_parent_scopes = true; + + /// Allow to check database catalog during table identifier resolution + bool allow_to_check_database_catalog = true; + + /// Allow to resolve subquery during identifier resolution + bool allow_to_resolve_subquery_during_identifier_resolution = true; +}; + +} diff --git a/src/Analyzer/Resolve/IdentifierResolveScope.cpp b/src/Analyzer/Resolve/IdentifierResolveScope.cpp new file mode 100644 index 00000000000..ae363b57047 --- /dev/null +++ b/src/Analyzer/Resolve/IdentifierResolveScope.cpp @@ -0,0 +1,184 @@ +#include + +#include +#include +#include + +namespace DB +{ +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + +IdentifierResolveScope::IdentifierResolveScope(QueryTreeNodePtr scope_node_, IdentifierResolveScope * parent_scope_) + : scope_node(std::move(scope_node_)) + , parent_scope(parent_scope_) +{ + if (parent_scope) + { + subquery_depth = parent_scope->subquery_depth; + context = parent_scope->context; + projection_mask_map = parent_scope->projection_mask_map; + } + else + projection_mask_map = std::make_shared>(); + + if (auto * union_node = scope_node->as()) + { + context = union_node->getContext(); + } + else if (auto * query_node = scope_node->as()) + { + context = query_node->getContext(); + group_by_use_nulls = context->getSettingsRef().group_by_use_nulls && + (query_node->isGroupByWithGroupingSets() || query_node->isGroupByWithRollup() || query_node->isGroupByWithCube()); + } + + if (context) + join_use_nulls = context->getSettingsRef().join_use_nulls; + else if (parent_scope) + join_use_nulls = parent_scope->join_use_nulls; + + aliases.alias_name_to_expression_node = &aliases.alias_name_to_expression_node_before_group_by; +} + +[[maybe_unused]] const IdentifierResolveScope * IdentifierResolveScope::getNearestQueryScope() const +{ + const IdentifierResolveScope * scope_to_check = this; + while (scope_to_check != nullptr) + { + if (scope_to_check->scope_node->getNodeType() == QueryTreeNodeType::QUERY) + break; + + scope_to_check = scope_to_check->parent_scope; + } + + return scope_to_check; +} + +IdentifierResolveScope * IdentifierResolveScope::getNearestQueryScope() +{ + IdentifierResolveScope * scope_to_check = this; + while (scope_to_check != nullptr) + { + if (scope_to_check->scope_node->getNodeType() == QueryTreeNodeType::QUERY) + break; + + scope_to_check = scope_to_check->parent_scope; + } + + return scope_to_check; +} + +AnalysisTableExpressionData & 
IdentifierResolveScope::getTableExpressionDataOrThrow(const QueryTreeNodePtr & table_expression_node) +{ + auto it = table_expression_node_to_data.find(table_expression_node); + if (it == table_expression_node_to_data.end()) + { + throw Exception(ErrorCodes::LOGICAL_ERROR, + "Table expression {} data must be initialized. In scope {}", + table_expression_node->formatASTForErrorMessage(), + scope_node->formatASTForErrorMessage()); + } + + return it->second; +} + +const AnalysisTableExpressionData & IdentifierResolveScope::getTableExpressionDataOrThrow(const QueryTreeNodePtr & table_expression_node) const +{ + auto it = table_expression_node_to_data.find(table_expression_node); + if (it == table_expression_node_to_data.end()) + { + throw Exception(ErrorCodes::LOGICAL_ERROR, + "Table expression {} data must be initialized. In scope {}", + table_expression_node->formatASTForErrorMessage(), + scope_node->formatASTForErrorMessage()); + } + + return it->second; +} + +void IdentifierResolveScope::pushExpressionNode(const QueryTreeNodePtr & node) +{ + bool had_aggregate_function = expressions_in_resolve_process_stack.hasAggregateFunction(); + expressions_in_resolve_process_stack.push(node); + if (group_by_use_nulls && had_aggregate_function != expressions_in_resolve_process_stack.hasAggregateFunction()) + aliases.alias_name_to_expression_node = &aliases.alias_name_to_expression_node_before_group_by; +} + +void IdentifierResolveScope::popExpressionNode() +{ + bool had_aggregate_function = expressions_in_resolve_process_stack.hasAggregateFunction(); + expressions_in_resolve_process_stack.pop(); + if (group_by_use_nulls && had_aggregate_function != expressions_in_resolve_process_stack.hasAggregateFunction()) + aliases.alias_name_to_expression_node = &aliases.alias_name_to_expression_node_after_group_by; +} + +/// Dump identifier resolve scope +[[maybe_unused]] void IdentifierResolveScope::dump(WriteBuffer & buffer) const +{ + buffer << "Scope node " << scope_node->formatASTForErrorMessage() << '\n'; + buffer << "Identifier lookup to resolve state " << identifier_lookup_to_resolve_state.size() << '\n'; + for (const auto & [identifier, state] : identifier_lookup_to_resolve_state) + { + buffer << "Identifier " << identifier.dump() << " resolve result "; + state.resolve_result.dump(buffer); + buffer << '\n'; + } + + buffer << "Expression argument name to node " << expression_argument_name_to_node.size() << '\n'; + for (const auto & [alias_name, node] : expression_argument_name_to_node) + buffer << "Alias name " << alias_name << " node " << node->formatASTForErrorMessage() << '\n'; + + buffer << "Alias name to expression node table size " << aliases.alias_name_to_expression_node->size() << '\n'; + for (const auto & [alias_name, node] : *aliases.alias_name_to_expression_node) + buffer << "Alias name " << alias_name << " expression node " << node->dumpTree() << '\n'; + + buffer << "Alias name to function node table size " << aliases.alias_name_to_lambda_node.size() << '\n'; + for (const auto & [alias_name, node] : aliases.alias_name_to_lambda_node) + buffer << "Alias name " << alias_name << " lambda node " << node->formatASTForErrorMessage() << '\n'; + + buffer << "Alias name to table expression node table size " << aliases.alias_name_to_table_expression_node.size() << '\n'; + for (const auto & [alias_name, node] : aliases.alias_name_to_table_expression_node) + buffer << "Alias name " << alias_name << " node " << node->formatASTForErrorMessage() << '\n'; + + buffer << "CTE name to query node table size " << 
cte_name_to_query_node.size() << '\n'; + for (const auto & [cte_name, node] : cte_name_to_query_node) + buffer << "CTE name " << cte_name << " node " << node->formatASTForErrorMessage() << '\n'; + + buffer << "WINDOW name to window node table size " << window_name_to_window_node.size() << '\n'; + for (const auto & [window_name, node] : window_name_to_window_node) + buffer << "CTE name " << window_name << " node " << node->formatASTForErrorMessage() << '\n'; + + buffer << "Nodes with duplicated aliases size " << aliases.nodes_with_duplicated_aliases.size() << '\n'; + for (const auto & node : aliases.nodes_with_duplicated_aliases) + buffer << "Alias name " << node->getAlias() << " node " << node->formatASTForErrorMessage() << '\n'; + + buffer << "Expression resolve process stack " << '\n'; + expressions_in_resolve_process_stack.dump(buffer); + + buffer << "Table expressions in resolve process size " << table_expressions_in_resolve_process.size() << '\n'; + for (const auto & node : table_expressions_in_resolve_process) + buffer << "Table expression " << node->formatASTForErrorMessage() << '\n'; + + buffer << "Non cached identifier lookups during expression resolve " << non_cached_identifier_lookups_during_expression_resolve.size() << '\n'; + for (const auto & identifier_lookup : non_cached_identifier_lookups_during_expression_resolve) + buffer << "Identifier lookup " << identifier_lookup.dump() << '\n'; + + buffer << "Table expression node to data " << table_expression_node_to_data.size() << '\n'; + for (const auto & [table_expression_node, table_expression_data] : table_expression_node_to_data) + buffer << "Table expression node " << table_expression_node->formatASTForErrorMessage() << " data " << table_expression_data.dump() << '\n'; + + buffer << "Use identifier lookup to result cache " << use_identifier_lookup_to_result_cache << '\n'; + buffer << "Subquery depth " << subquery_depth << '\n'; +} + +[[maybe_unused]] String IdentifierResolveScope::dump() const +{ + WriteBufferFromOwnString buffer; + dump(buffer); + + return buffer.str(); +} +} diff --git a/src/Analyzer/Resolve/IdentifierResolveScope.h b/src/Analyzer/Resolve/IdentifierResolveScope.h new file mode 100644 index 00000000000..ab2e27cc14d --- /dev/null +++ b/src/Analyzer/Resolve/IdentifierResolveScope.h @@ -0,0 +1,231 @@ +#pragma once + +#include +#include +#include + +#include +#include +#include +#include + +namespace DB +{ + +/** Projection names is name of query tree node that is used in projection part of query node. + * Example: SELECT id FROM test_table; + * `id` is projection name of column node + * + * Example: SELECT id AS id_alias FROM test_table; + * `id_alias` is projection name of column node + * + * Calculation of projection names is done during expression nodes resolution. This is done this way + * because after identifier node is resolved we lose information about identifier name. We could + * potentially save this information in query tree node itself, but that would require to clone it in some cases. + * Example: SELECT big_scalar_subquery AS a, a AS b, b AS c; + * All 3 nodes in projection are the same big_scalar_subquery, but they have different projection names. + * If we want to save it in query tree node, we have to clone subquery node that could lead to performance degradation. + * + * Possible solution is to separate query node metadata and query node content. So only node metadata could be cloned + * if we want to change projection name. 
This solution does not seem to be easy for client of query tree because projection + * name will be part of interface. If we potentially could hide projection names calculation in analyzer without introducing additional + * changes in query tree structure that would be preferable. + * + * Currently each resolve method returns projection names array. Resolve method must compute projection names of node. + * If node is resolved as list node this is case for `untuple` function or `matcher` result projection names array must contain projection names + * for result nodes. + * If node is not resolved as list node, projection names array contain single projection name for node. + * + * Rules for projection names: + * 1. If node has alias. It is node projection name. + * Except scenario where `untuple` function has alias. Example: SELECT untuple(expr) AS alias, alias. + * + * 2. For constant it is constant value string representation. + * + * 3. For identifier: + * If identifier is resolved from JOIN TREE, we want to remove additional identifier qualifications. + * Example: SELECT default.test_table.id FROM test_table. + * Result projection name is `id`. + * + * Example: SELECT t1.id FROM test_table_1 AS t1, test_table_2 AS t2 + * In example both test_table_1, test_table_2 have `id` column. + * In such case projection name is `t1.id` because if additional qualification is removed then column projection name `id` will be ambiguous. + * + * Example: SELECT default.test_table_1.id FROM test_table_1 AS t1, test_table_2 AS t2 + * In such case projection name is `test_table_1.id` because we remove unnecessary database qualification, but table name qualification cannot be removed + * because otherwise column projection name `id` will be ambiguous. + * + * If identifier is not resolved from JOIN TREE. Identifier name is projection name. + * Except scenario where `untuple` function resolved using identifier. Example: SELECT untuple(expr) AS alias, alias. + * Example: SELECT sum(1, 1) AS value, value. + * In such case both nodes have `value` projection names. + * + * Example: SELECT id AS value, value FROM test_table. + * In such case both nodes have have `value` projection names. + * + * Special case is `untuple` function. If `untuple` function specified with alias, then result nodes will have alias.tuple_column_name projection names. + * Example: SELECT cast(tuple(1), 'Tuple(id UInt64)') AS value, untuple(value) AS a; + * Result projection names are `value`, `a.id`. + * + * If `untuple` function does not have alias then result nodes will have `tupleElement(untuple_expression_projection_name, 'tuple_column_name') projection names. + * + * Example: SELECT cast(tuple(1), 'Tuple(id UInt64)') AS value, untuple(value); + * Result projection names are `value`, `tupleElement(value, 'id')`; + * + * 4. For function: + * Projection name consists from function_name(parameters_projection_names)(arguments_projection_names). + * Additionally if function is window function. Window node projection name is used with OVER clause. + * Example: function_name (parameters_names)(argument_projection_names) OVER window_name; + * Example: function_name (parameters_names)(argument_projection_names) OVER (PARTITION BY id ORDER BY id). + * Example: function_name (parameters_names)(argument_projection_names) OVER (window_name ORDER BY id). + * + * 5. For lambda: + * If it is standalone lambda that returns single expression, function projection name is used. + * Example: WITH (x -> x + 1) AS lambda SELECT lambda(1). 
+  * The projection name is `lambda(1)`.
+  *
+  * If it is a standalone lambda that returns a list, the projection names of the list nodes are used.
+  * Example: WITH (x -> *) AS lambda SELECT lambda(1) FROM test_table;
+  * If test_table has two columns `id`, `value`, then the result projection names are `id`, `value`.
+  *
+  * If the lambda is an argument of a function,
+  * then the projection name consists of lambda(tuple(lambda_arguments)(lambda_body_projection_name));
+  *
+  * 6. For a matcher:
+  * The projection names of the matched nodes are used as the matcher projection names.
+  *
+  * Matched nodes must be qualified if needed.
+  * Example: SELECT * FROM test_table_1 AS t1, test_table_2 AS t2.
+  * In this example tables test_table_1 and test_table_2 both have `id`, `value` columns.
+  * Matched nodes after unqualified matcher resolution must be qualified to avoid ambiguous projection names.
+  * Result projection names must be `t1.id`, `t1.value`, `t2.id`, `t2.value`.
+  *
+  * There are special cases:
+  * 1. For a lambda inside an APPLY matcher transformer:
+  * Example: SELECT * APPLY x -> toString(x) FROM test_table.
+  * In such a case the lambda argument projection name `x` will be replaced by the matched node projection name.
+  * If the table has two columns `id` and `value`, then the result projection names are `toString(id)`, `toString(value)`;
+  *
+  * 2. For an unqualified matcher when the JOIN tree contains a JOIN with USING.
+  * Example: SELECT * FROM test_table_1 AS t1 INNER JOIN test_table_2 AS t2 USING(id);
+  * Result projection names must be `id`, `t1.value`, `t2.value`.
+  *
+  * 7. For a subquery:
+  * The subquery projection name consists of the `_subquery_` prefix and an implementation-specific unique number suffix.
+  * Example: SELECT (SELECT 1), (SELECT 1 UNION DISTINCT SELECT 1);
+  * Result projection names can be `_subquery_1`, `_subquery_2`;
+  *
+  * 8. For a table:
+  * A table node can be used in expression context only as the right argument of the IN function. In that case the identifier is used
+  * as the table node projection name.
+  * Example: SELECT id IN test_table FROM test_table;
+  * Result projection name is `in(id, test_table)`.
+  */
+using ProjectionName = String;
+using ProjectionNames = std::vector<ProjectionName>;
+constexpr auto PROJECTION_NAME_PLACEHOLDER = "__projection_name_placeholder";
+
+struct IdentifierResolveScope
+{
+    /// Construct identifier resolve scope using scope node, and parent scope
+    IdentifierResolveScope(QueryTreeNodePtr scope_node_, IdentifierResolveScope * parent_scope_);
+
+    QueryTreeNodePtr scope_node;
+
+    IdentifierResolveScope * parent_scope = nullptr;
+
+    ContextPtr context;
+
+    /// Identifier lookup to result
+    std::unordered_map<IdentifierLookup, IdentifierResolveState, IdentifierLookupHash> identifier_lookup_to_resolve_state;
+
+    /// Argument can be expression like constant, column, function or table expression
+    std::unordered_map<std::string, QueryTreeNodePtr> expression_argument_name_to_node;
+
+    ScopeAliases aliases;
+
+    /// Table column name to column node. Valid only during table ALIAS columns resolve.
+    ColumnNameToColumnNodeMap column_name_to_column_node;
+
+    /// CTE name to query node
+    std::unordered_map<std::string, QueryTreeNodePtr> cte_name_to_query_node;
+
+    /// Window name to window node
+    std::unordered_map<std::string, QueryTreeNodePtr> window_name_to_window_node;
+
+    /// Current scope expression in resolve process stack
+    ExpressionsStack expressions_in_resolve_process_stack;
+
+    /// Table expressions in resolve process
+    std::unordered_set table_expressions_in_resolve_process;
+
+    /// Non-cached identifier lookups during expression resolve
+    std::unordered_set<IdentifierLookup, IdentifierLookupHash> non_cached_identifier_lookups_during_expression_resolve;
+
+    /// Table expression node to data
+    std::unordered_map<QueryTreeNodePtr, AnalysisTableExpressionData> table_expression_node_to_data;
+
+    QueryTreeNodePtrWithHashIgnoreTypesSet nullable_group_by_keys;
+    /// Here we count the number of nullable GROUP BY keys we met while resolving an expression.
+    /// E.g. for a query `SELECT tuple(tuple(number)) FROM numbers(10) GROUP BY (number, tuple(number)) with cube`
+    /// both `number` and `tuple(number)` would be in nullable_group_by_keys.
+    /// But when we resolve `tuple(tuple(number))` we should figure out that `tuple(number)` is already a key,
+    /// and we should not convert `number` to nullable.
    size_t found_nullable_group_by_key_in_scope = 0;
+
+    /** It's possible that after a JOIN, a column in the projection has a type different from the column in the source table.
+      * (For example, after join_use_nulls, or a USING column cast to a supertype.)
+      * However, the column in the projection still refers to the table as its source.
+      * This map is used to revert these columns back to their original columns in the source table.
+      */
+    QueryTreeNodePtrWithHashMap<QueryTreeNodePtr> join_columns_with_changed_types;
+
+    /// Use identifier lookup to result cache
+    bool use_identifier_lookup_to_result_cache = true;
+
+    /// Apply nullability to aggregation keys
+    bool group_by_use_nulls = false;
+    /// Join returns NULLs instead of default values
+    bool join_use_nulls = false;
+
+    /// JOINs count
+    size_t joins_count = 0;
+
+    /// Subquery depth
+    size_t subquery_depth = 0;
+
+    /** Scope join tree node for expression.
+      * Valid only during analysis construction for single expression.
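+      *
+      * A hedged sketch of such a single-expression call (the caller-side names are assumptions;
+      * the entry point is QueryAnalyzer::resolve from this change):
+      *     QueryAnalyzer analyzer(true);  // only_analyze_
+      *     analyzer.resolve(expression_node, table_expression_node, context);
+      * resolve() stores table_expression_node in this field before resolving the
+      * expression against that table expression's columns.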
+ */ + QueryTreeNodePtr expression_join_tree_node; + + /// Node hash to mask id map + std::shared_ptr> projection_mask_map; + + struct ResolvedFunctionsCache + { + FunctionOverloadResolverPtr resolver; + FunctionBasePtr function_base; + }; + + std::map functions_cache; + + [[maybe_unused]] const IdentifierResolveScope * getNearestQueryScope() const; + + IdentifierResolveScope * getNearestQueryScope(); + + AnalysisTableExpressionData & getTableExpressionDataOrThrow(const QueryTreeNodePtr & table_expression_node); + + const AnalysisTableExpressionData & getTableExpressionDataOrThrow(const QueryTreeNodePtr & table_expression_node) const; + + void pushExpressionNode(const QueryTreeNodePtr & node); + + void popExpressionNode(); + + /// Dump identifier resolve scope + [[maybe_unused]] void dump(WriteBuffer & buffer) const; + + [[maybe_unused]] String dump() const; +}; + +} diff --git a/src/Analyzer/Resolve/QueryAnalysisPass.cpp b/src/Analyzer/Resolve/QueryAnalysisPass.cpp new file mode 100644 index 00000000000..36c747555fc --- /dev/null +++ b/src/Analyzer/Resolve/QueryAnalysisPass.cpp @@ -0,0 +1,22 @@ +#include +#include +#include + +namespace DB +{ + +QueryAnalysisPass::QueryAnalysisPass(QueryTreeNodePtr table_expression_, bool only_analyze_) + : table_expression(std::move(table_expression_)) + , only_analyze(only_analyze_) +{} + +QueryAnalysisPass::QueryAnalysisPass(bool only_analyze_) : only_analyze(only_analyze_) {} + +void QueryAnalysisPass::run(QueryTreeNodePtr & query_tree_node, ContextPtr context) +{ + QueryAnalyzer analyzer(only_analyze); + analyzer.resolve(query_tree_node, table_expression, context); + createUniqueTableAliases(query_tree_node, table_expression, context); +} + +} diff --git a/src/Analyzer/Passes/QueryAnalysisPass.cpp b/src/Analyzer/Resolve/QueryAnalyzer.cpp similarity index 83% rename from src/Analyzer/Passes/QueryAnalysisPass.cpp rename to src/Analyzer/Resolve/QueryAnalyzer.cpp index b7c223303eb..d84626c4be6 100644 --- a/src/Analyzer/Passes/QueryAnalysisPass.cpp +++ b/src/Analyzer/Resolve/QueryAnalyzer.cpp @@ -1,16 +1,3 @@ -#include - -#include - -#include -#include -#include - -#include -#include -#include - -#include #include #include #include @@ -20,42 +7,26 @@ #include #include #include -#include #include -#include -#include -#include - #include #include #include #include -#include - #include #include -#include - #include -#include #include #include #include -#include -#include -#include -#include -#include #include #include #include -#include #include #include #include @@ -85,6 +56,11 @@ #include #include +#include +#include +#include +#include + namespace ProfileEvents { extern const Event ScalarSubqueriesGlobalCacheHit; @@ -94,7 +70,6 @@ namespace ProfileEvents namespace DB { - namespace ErrorCodes { extern const int UNSUPPORTED_METHOD; @@ -130,1466 +105,146 @@ namespace ErrorCodes extern const int INVALID_IDENTIFIER; } -/** Query analyzer implementation overview. Please check documentation in QueryAnalysisPass.h first. - * And additional documentation for each method, where special cases are described in detail. - * - * Each node in query must be resolved. For each query tree node resolved state is specific. - * - * For constant node no resolve process exists, it is resolved during construction. - * - * For table node no resolve process exists, it is resolved during construction. 
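
For orientation before the per-node-type rules continue below: a hedged sketch of how this analysis is driven end to end. Only the QueryAnalysisPass API comes from QueryAnalysisPass.cpp above; query_tree_node, context and the other variables are assumed caller state.

    // Hypothetical driver; the pass resolves the tree and then calls createUniqueTableAliases().
    QueryAnalysisPass analysis_pass(false /* only_analyze_ */);
    analysis_pass.run(query_tree_node, context);

    // The second constructor targets a single expression resolved against one table expression.
    QueryAnalysisPass expression_pass(table_expression_node, false /* only_analyze_ */);
    expression_pass.run(expression_node, context);
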
- * - * For function node to be resolved parameters and arguments must be resolved, function node must be initialized with concrete aggregate or - * non aggregate function and with result type. - * - * For lambda node there can be 2 different cases. - * 1. Standalone: WITH (x -> x + 1) AS lambda SELECT lambda(1); Such lambdas are inlined in query tree during query analysis pass. - * 2. Function arguments: WITH (x -> x + 1) AS lambda SELECT arrayMap(lambda, [1, 2, 3]); For such lambda resolution must - * set concrete lambda arguments (initially they are identifier nodes) and resolve lambda expression body. - * - * For query node resolve process must resolve all its inner nodes. - * - * For matcher node resolve process must replace it with matched nodes. - * - * For identifier node resolve process must replace it with concrete non identifier node. This part is most complex because - * for identifier resolution scopes and identifier lookup context play important part. - * - * ClickHouse SQL support lexical scoping for identifier resolution. Scope can be defined by query node or by expression node. - * Expression nodes that can define scope are lambdas and table ALIAS columns. - * - * Identifier lookup context can be expression, function, table. - * - * Examples: WITH (x -> x + 1) as func SELECT func() FROM func; During function `func` resolution identifier lookup is performed - * in function context. - * - * If there are no information of identifier context rules are following: - * 1. Try to resolve identifier in expression context. - * 2. Try to resolve identifier in function context, if it is allowed. Example: SELECT func(arguments); Here func identifier cannot be resolved in function context - * because query projection does not support that. - * 3. Try to resolve identifier in table context, if it is allowed. Example: SELECT table; Here table identifier cannot be resolved in function context - * because query projection does not support that. - * - * TODO: This does not supported properly before, because matchers could not be resolved from aliases. - * - * Identifiers are resolved with following rules: - * Resolution starts with current scope. - * 1. Try to resolve identifier from expression scope arguments. Lambda expression arguments are greatest priority. - * 2. Try to resolve identifier from aliases. - * 3. Try to resolve identifier from join tree if scope is query, or if there are registered table columns in scope. - * Steps 2 and 3 can be changed using prefer_column_name_to_alias setting. - * 4. If it is table lookup, try to resolve identifier from CTE. - * If identifier could not be resolved in current scope, resolution must be continued in parent scopes. - * 5. Try to resolve identifier from parent scopes. - * - * Additional rules about aliases and scopes. - * 1. Parent scope cannot refer alias from child scope. - * 2. Child scope can refer to alias in parent scope. - * - * Example: SELECT arrayMap(x -> x + 1 AS a, [1,2,3]), a; Identifier a is unknown in parent scope. - * Example: SELECT a FROM (SELECT 1 as a); Here we do not refer to alias a from child query scope. But we query it projection result, similar to tables. - * Example: WITH 1 as a SELECT (SELECT a) as b; Here in child scope identifier a is resolved using alias from parent scope. - * - * Additional rules about identifier binding. - * Bind for identifier to entity means that identifier first part match some node during analysis. - * If other parts of identifier cannot be resolved in that node, exception must be thrown. 
- * - * Example: - * CREATE TABLE test_table (id UInt64, compound_value Tuple(value UInt64)) ENGINE=TinyLog; - * SELECT compound_value.value, 1 AS compound_value FROM test_table; - * Identifier first part compound_value bound to entity with alias compound_value, but nested identifier part cannot be resolved from entity, - * lookup should not be continued, and exception must be thrown because if lookup continues that way identifier can be resolved from join tree. - * - * TODO: This was not supported properly before analyzer because nested identifier could not be resolved from alias. - * - * More complex example: - * CREATE TABLE test_table (id UInt64, value UInt64) ENGINE=TinyLog; - * WITH cast(('Value'), 'Tuple (value UInt64') AS value SELECT (SELECT value FROM test_table); - * Identifier first part value bound to test_table column value, but nested identifier part cannot be resolved from it, - * lookup should not be continued, and exception must be thrown because if lookup continues identifier can be resolved from parent scope. - * - * TODO: Update exception messages - * TODO: Table identifiers with optional UUID. - * TODO: Lookup functions arrayReduce(sum, [1, 2, 3]); - * TODO: Support function identifier resolve from parent query scope, if lambda in parent scope does not capture any columns. - */ +QueryAnalyzer::QueryAnalyzer(bool only_analyze_) : only_analyze(only_analyze_) {} +QueryAnalyzer::~QueryAnalyzer() = default; -namespace +void QueryAnalyzer::resolve(QueryTreeNodePtr & node, const QueryTreeNodePtr & table_expression, ContextPtr context) { + IdentifierResolveScope scope(node, nullptr /*parent_scope*/); -/// Identifier lookup context -enum class IdentifierLookupContext : uint8_t -{ - EXPRESSION = 0, - FUNCTION, - TABLE_EXPRESSION, -}; + if (!scope.context) + scope.context = context; -const char * toString(IdentifierLookupContext identifier_lookup_context) -{ - switch (identifier_lookup_context) + auto node_type = node->getNodeType(); + + switch (node_type) { - case IdentifierLookupContext::EXPRESSION: return "EXPRESSION"; - case IdentifierLookupContext::FUNCTION: return "FUNCTION"; - case IdentifierLookupContext::TABLE_EXPRESSION: return "TABLE_EXPRESSION"; + case QueryTreeNodeType::QUERY: + { + if (table_expression) + throw Exception(ErrorCodes::LOGICAL_ERROR, + "For query analysis table expression must be empty"); + + resolveQuery(node, scope); + break; + } + case QueryTreeNodeType::UNION: + { + if (table_expression) + throw Exception(ErrorCodes::LOGICAL_ERROR, + "For union analysis table expression must be empty"); + + resolveUnion(node, scope); + break; + } + case QueryTreeNodeType::IDENTIFIER: + [[fallthrough]]; + case QueryTreeNodeType::CONSTANT: + [[fallthrough]]; + case QueryTreeNodeType::COLUMN: + [[fallthrough]]; + case QueryTreeNodeType::FUNCTION: + [[fallthrough]]; + case QueryTreeNodeType::LIST: + { + if (table_expression) + { + scope.expression_join_tree_node = table_expression; + validateTableExpressionModifiers(scope.expression_join_tree_node, scope); + initializeTableExpressionData(scope.expression_join_tree_node, scope); + } + + if (node_type == QueryTreeNodeType::LIST) + resolveExpressionNodeList(node, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/); + else + resolveExpressionNode(node, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/); + + break; + } + case QueryTreeNodeType::TABLE_FUNCTION: + { + QueryExpressionsAliasVisitor expressions_alias_visitor(scope.aliases); + resolveTableFunction(node, scope, 
expressions_alias_visitor, false /*nested_table_function*/); + break; + } + default: + { + throw Exception(ErrorCodes::BAD_ARGUMENTS, + "Node {} with type {} is not supported by query analyzer. " + "Supported nodes are query, union, identifier, constant, column, function, list.", + node->formatASTForErrorMessage(), + node->getNodeTypeName()); + } } } -const char * toStringLowercase(IdentifierLookupContext identifier_lookup_context) +std::optional QueryAnalyzer::getColumnSideFromJoinTree(const QueryTreeNodePtr & resolved_identifier, const JoinNode & join_node) { - switch (identifier_lookup_context) - { - case IdentifierLookupContext::EXPRESSION: return "expression"; - case IdentifierLookupContext::FUNCTION: return "function"; - case IdentifierLookupContext::TABLE_EXPRESSION: return "table expression"; - } -} - -/** Structure that represent identifier lookup during query analysis. - * Lookup can be in query expression, function, table context. - */ -struct IdentifierLookup -{ - Identifier identifier; - IdentifierLookupContext lookup_context; - - bool isExpressionLookup() const - { - return lookup_context == IdentifierLookupContext::EXPRESSION; - } - - bool isFunctionLookup() const - { - return lookup_context == IdentifierLookupContext::FUNCTION; - } - - bool isTableExpressionLookup() const - { - return lookup_context == IdentifierLookupContext::TABLE_EXPRESSION; - } - - String dump() const - { - return identifier.getFullName() + ' ' + toString(lookup_context); - } -}; - -inline bool operator==(const IdentifierLookup & lhs, const IdentifierLookup & rhs) -{ - return lhs.identifier.getFullName() == rhs.identifier.getFullName() && lhs.lookup_context == rhs.lookup_context; -} - -[[maybe_unused]] inline bool operator!=(const IdentifierLookup & lhs, const IdentifierLookup & rhs) -{ - return !(lhs == rhs); -} - -struct IdentifierLookupHash -{ - size_t operator()(const IdentifierLookup & identifier_lookup) const - { - return std::hash()(identifier_lookup.identifier.getFullName()) ^ static_cast(identifier_lookup.lookup_context); - } -}; - -enum class IdentifierResolvePlace : UInt8 -{ - NONE = 0, - EXPRESSION_ARGUMENTS, - ALIASES, - JOIN_TREE, - /// Valid only for table lookup - CTE, - /// Valid only for table lookup - DATABASE_CATALOG -}; - -const char * toString(IdentifierResolvePlace resolved_identifier_place) -{ - switch (resolved_identifier_place) - { - case IdentifierResolvePlace::NONE: return "NONE"; - case IdentifierResolvePlace::EXPRESSION_ARGUMENTS: return "EXPRESSION_ARGUMENTS"; - case IdentifierResolvePlace::ALIASES: return "ALIASES"; - case IdentifierResolvePlace::JOIN_TREE: return "JOIN_TREE"; - case IdentifierResolvePlace::CTE: return "CTE"; - case IdentifierResolvePlace::DATABASE_CATALOG: return "DATABASE_CATALOG"; - } -} - -struct IdentifierResolveResult -{ - IdentifierResolveResult() = default; - - QueryTreeNodePtr resolved_identifier; - IdentifierResolvePlace resolve_place = IdentifierResolvePlace::NONE; - bool resolved_from_parent_scopes = false; - - [[maybe_unused]] bool isResolved() const - { - return resolve_place != IdentifierResolvePlace::NONE; - } - - [[maybe_unused]] bool isResolvedFromParentScopes() const - { - return resolved_from_parent_scopes; - } - - [[maybe_unused]] bool isResolvedFromExpressionArguments() const - { - return resolve_place == IdentifierResolvePlace::EXPRESSION_ARGUMENTS; - } - - [[maybe_unused]] bool isResolvedFromAliases() const - { - return resolve_place == IdentifierResolvePlace::ALIASES; - } - - [[maybe_unused]] bool isResolvedFromJoinTree() const - 
{ - return resolve_place == IdentifierResolvePlace::JOIN_TREE; - } - - [[maybe_unused]] bool isResolvedFromCTEs() const - { - return resolve_place == IdentifierResolvePlace::CTE; - } - - void dump(WriteBuffer & buffer) const - { - if (!resolved_identifier) - { - buffer << "unresolved"; - return; - } - - buffer << resolved_identifier->formatASTForErrorMessage() << " place " << toString(resolve_place) << " resolved from parent scopes " << resolved_from_parent_scopes; - } - - [[maybe_unused]] String dump() const - { - WriteBufferFromOwnString buffer; - dump(buffer); - - return buffer.str(); - } -}; - -struct IdentifierResolveState -{ - IdentifierResolveResult resolve_result; - bool cyclic_identifier_resolve = false; -}; - -struct IdentifierResolveSettings -{ - /// Allow to check join tree during identifier resolution - bool allow_to_check_join_tree = true; - - /// Allow to check CTEs during table identifier resolution - bool allow_to_check_cte = true; - - /// Allow to check parent scopes during identifier resolution - bool allow_to_check_parent_scopes = true; - - /// Allow to check database catalog during table identifier resolution - bool allow_to_check_database_catalog = true; - - /// Allow to resolve subquery during identifier resolution - bool allow_to_resolve_subquery_during_identifier_resolution = true; -}; - -struct StringTransparentHash -{ - using is_transparent = void; - using hash = std::hash; - - [[maybe_unused]] size_t operator()(const char * data) const - { - return hash()(data); - } - - size_t operator()(std::string_view data) const - { - return hash()(data); - } - - size_t operator()(const std::string & data) const - { - return hash()(data); - } -}; - -using ColumnNameToColumnNodeMap = std::unordered_map>; - -struct TableExpressionData -{ - std::string table_expression_name; - std::string table_expression_description; - std::string database_name; - std::string table_name; - bool should_qualify_columns = true; - NamesAndTypes column_names_and_types; - ColumnNameToColumnNodeMap column_name_to_column_node; - std::unordered_set subcolumn_names; /// Subset columns that are subcolumns of other columns - std::unordered_set> column_identifier_first_parts; - - bool hasFullIdentifierName(IdentifierView identifier_view) const - { - return column_name_to_column_node.contains(identifier_view.getFullName()); - } - - bool canBindIdentifier(IdentifierView identifier_view) const - { - return column_identifier_first_parts.contains(identifier_view.at(0)); - } - - [[maybe_unused]] void dump(WriteBuffer & buffer) const - { - buffer << "Table expression name " << table_expression_name; - - if (!table_expression_description.empty()) - buffer << " table expression description " << table_expression_description; - - if (!database_name.empty()) - buffer << " database name " << database_name; - - if (!table_name.empty()) - buffer << " table name " << table_name; - - buffer << " should qualify columns " << should_qualify_columns; - buffer << " columns size " << column_name_to_column_node.size() << '\n'; - - for (const auto & [column_name, column_node] : column_name_to_column_node) - buffer << "Column name " << column_name << " column node " << column_node->dumpTree() << '\n'; - } - - [[maybe_unused]] String dump() const - { - WriteBufferFromOwnString buffer; - dump(buffer); - - return buffer.str(); - } -}; - -class ExpressionsStack -{ -public: - void push(const QueryTreeNodePtr & node) - { - if (node->hasAlias()) - { - const auto & node_alias = node->getAlias(); - 
alias_name_to_expressions[node_alias].push_back(node); - } - - if (const auto * function = node->as()) - { - if (AggregateFunctionFactory::instance().isAggregateFunctionName(function->getFunctionName())) - ++aggregate_functions_counter; - } - - expressions.emplace_back(node); - } - - void pop() - { - const auto & top_expression = expressions.back(); - const auto & top_expression_alias = top_expression->getAlias(); - - if (!top_expression_alias.empty()) - { - auto it = alias_name_to_expressions.find(top_expression_alias); - auto & alias_expressions = it->second; - alias_expressions.pop_back(); - - if (alias_expressions.empty()) - alias_name_to_expressions.erase(it); - } - - if (const auto * function = top_expression->as()) - { - if (AggregateFunctionFactory::instance().isAggregateFunctionName(function->getFunctionName())) - --aggregate_functions_counter; - } - - expressions.pop_back(); - } - - [[maybe_unused]] const QueryTreeNodePtr & getRoot() const - { - return expressions.front(); - } - - const QueryTreeNodePtr & getTop() const - { - return expressions.back(); - } - - [[maybe_unused]] bool hasExpressionWithAlias(const std::string & alias) const - { - return alias_name_to_expressions.contains(alias); - } - - bool hasAggregateFunction() const - { - return aggregate_functions_counter > 0; - } - - QueryTreeNodePtr getExpressionWithAlias(const std::string & alias) const - { - auto expression_it = alias_name_to_expressions.find(alias); - if (expression_it == alias_name_to_expressions.end()) - return {}; - - return expression_it->second.front(); - } - - [[maybe_unused]] size_t size() const - { - return expressions.size(); - } - - bool empty() const - { - return expressions.empty(); - } - - void dump(WriteBuffer & buffer) const - { - buffer << expressions.size() << '\n'; - - for (const auto & expression : expressions) - { - buffer << "Expression "; - buffer << expression->formatASTForErrorMessage(); - - const auto & alias = expression->getAlias(); - if (!alias.empty()) - buffer << " alias " << alias; - - buffer << '\n'; - } - } - - [[maybe_unused]] String dump() const - { - WriteBufferFromOwnString buffer; - dump(buffer); - - return buffer.str(); - } - -private: - QueryTreeNodes expressions; - size_t aggregate_functions_counter = 0; - std::unordered_map alias_name_to_expressions; -}; - -struct ScopeAliases -{ - /// Alias name to query expression node - std::unordered_map alias_name_to_expression_node_before_group_by; - std::unordered_map alias_name_to_expression_node_after_group_by; - - std::unordered_map * alias_name_to_expression_node = nullptr; - - /// Alias name to lambda node - std::unordered_map alias_name_to_lambda_node; - - /// Alias name to table expression node - std::unordered_map alias_name_to_table_expression_node; - - /// Expressions like `x as y` where we can't say whether it's a function, expression or table. 
- std::unordered_map transitive_aliases; - - /// Nodes with duplicated aliases - std::unordered_set nodes_with_duplicated_aliases; - std::vector cloned_nodes_with_duplicated_aliases; - - std::unordered_map & getAliasMap(IdentifierLookupContext lookup_context) - { - switch (lookup_context) - { - case IdentifierLookupContext::EXPRESSION: return *alias_name_to_expression_node; - case IdentifierLookupContext::FUNCTION: return alias_name_to_lambda_node; - case IdentifierLookupContext::TABLE_EXPRESSION: return alias_name_to_table_expression_node; - } - } - - enum class FindOption - { - FIRST_NAME, - FULL_NAME, - }; - - const std::string & getKey(const Identifier & identifier, FindOption find_option) - { - switch (find_option) - { - case FindOption::FIRST_NAME: return identifier.front(); - case FindOption::FULL_NAME: return identifier.getFullName(); - } - } - - QueryTreeNodePtr * find(IdentifierLookup lookup, FindOption find_option) - { - auto & alias_map = getAliasMap(lookup.lookup_context); - const std::string * key = &getKey(lookup.identifier, find_option); - - auto it = alias_map.find(*key); - - if (it != alias_map.end()) - return &it->second; - - if (lookup.lookup_context == IdentifierLookupContext::TABLE_EXPRESSION) - return {}; - - while (it == alias_map.end()) - { - auto jt = transitive_aliases.find(*key); - if (jt == transitive_aliases.end()) - return {}; - - key = &(getKey(jt->second, find_option)); - it = alias_map.find(*key); - } - - return &it->second; - } - - const QueryTreeNodePtr * find(IdentifierLookup lookup, FindOption find_option) const - { - return const_cast(this)->find(lookup, find_option); - } -}; - - -/** Projection names is name of query tree node that is used in projection part of query node. - * Example: SELECT id FROM test_table; - * `id` is projection name of column node - * - * Example: SELECT id AS id_alias FROM test_table; - * `id_alias` is projection name of column node - * - * Calculation of projection names is done during expression nodes resolution. This is done this way - * because after identifier node is resolved we lose information about identifier name. We could - * potentially save this information in query tree node itself, but that would require to clone it in some cases. - * Example: SELECT big_scalar_subquery AS a, a AS b, b AS c; - * All 3 nodes in projection are the same big_scalar_subquery, but they have different projection names. - * If we want to save it in query tree node, we have to clone subquery node that could lead to performance degradation. - * - * Possible solution is to separate query node metadata and query node content. So only node metadata could be cloned - * if we want to change projection name. This solution does not seem to be easy for client of query tree because projection - * name will be part of interface. If we potentially could hide projection names calculation in analyzer without introducing additional - * changes in query tree structure that would be preferable. - * - * Currently each resolve method returns projection names array. Resolve method must compute projection names of node. - * If node is resolved as list node this is case for `untuple` function or `matcher` result projection names array must contain projection names - * for result nodes. - * If node is not resolved as list node, projection names array contain single projection name for node. - * - * Rules for projection names: - * 1. If node has alias. It is node projection name. - * Except scenario where `untuple` function has alias. 
Example: SELECT untuple(expr) AS alias, alias. - * - * 2. For constant it is constant value string representation. - * - * 3. For identifier: - * If identifier is resolved from JOIN TREE, we want to remove additional identifier qualifications. - * Example: SELECT default.test_table.id FROM test_table. - * Result projection name is `id`. - * - * Example: SELECT t1.id FROM test_table_1 AS t1, test_table_2 AS t2 - * In example both test_table_1, test_table_2 have `id` column. - * In such case projection name is `t1.id` because if additional qualification is removed then column projection name `id` will be ambiguous. - * - * Example: SELECT default.test_table_1.id FROM test_table_1 AS t1, test_table_2 AS t2 - * In such case projection name is `test_table_1.id` because we remove unnecessary database qualification, but table name qualification cannot be removed - * because otherwise column projection name `id` will be ambiguous. - * - * If identifier is not resolved from JOIN TREE. Identifier name is projection name. - * Except scenario where `untuple` function resolved using identifier. Example: SELECT untuple(expr) AS alias, alias. - * Example: SELECT sum(1, 1) AS value, value. - * In such case both nodes have `value` projection names. - * - * Example: SELECT id AS value, value FROM test_table. - * In such case both nodes have have `value` projection names. - * - * Special case is `untuple` function. If `untuple` function specified with alias, then result nodes will have alias.tuple_column_name projection names. - * Example: SELECT cast(tuple(1), 'Tuple(id UInt64)') AS value, untuple(value) AS a; - * Result projection names are `value`, `a.id`. - * - * If `untuple` function does not have alias then result nodes will have `tupleElement(untuple_expression_projection_name, 'tuple_column_name') projection names. - * - * Example: SELECT cast(tuple(1), 'Tuple(id UInt64)') AS value, untuple(value); - * Result projection names are `value`, `tupleElement(value, 'id')`; - * - * 4. For function: - * Projection name consists from function_name(parameters_projection_names)(arguments_projection_names). - * Additionally if function is window function. Window node projection name is used with OVER clause. - * Example: function_name (parameters_names)(argument_projection_names) OVER window_name; - * Example: function_name (parameters_names)(argument_projection_names) OVER (PARTITION BY id ORDER BY id). - * Example: function_name (parameters_names)(argument_projection_names) OVER (window_name ORDER BY id). - * - * 5. For lambda: - * If it is standalone lambda that returns single expression, function projection name is used. - * Example: WITH (x -> x + 1) AS lambda SELECT lambda(1). - * Projection name is `lambda(1)`. - * - * If is it standalone lambda that returns list, projection names of list nodes are used. - * Example: WITH (x -> *) AS lambda SELECT lambda(1) FROM test_table; - * If test_table has two columns `id`, `value`. Then result projection names are `id`, `value`. - * - * If lambda is argument of function. - * Then projection name consists from lambda(tuple(lambda_arguments)(lambda_body_projection_name)); - * - * 6. For matcher: - * Matched nodes projection names are used as matcher projection names. - * - * Matched nodes must be qualified if needed. - * Example: SELECT * FROM test_table_1 AS t1, test_table_2 AS t2. - * In example table test_table_1 and test_table_2 both have `id`, `value` columns. 
- * Matched nodes after unqualified matcher resolve must be qualified to avoid ambiguous projection names. - * Result projection names must be `t1.id`, `t1.value`, `t2.id`, `t2.value`. - * - * There are special cases - * 1. For lambda inside APPLY matcher transformer: - * Example: SELECT * APPLY x -> toString(x) FROM test_table. - * In such case lambda argument projection name `x` will be replaced by matched node projection name. - * If table has two columns `id` and `value`. Then result projection names are `toString(id)`, `toString(value)`; - * - * 2. For unqualified matcher when JOIN tree contains JOIN with USING. - * Example: SELECT * FROM test_table_1 AS t1 INNER JOIN test_table_2 AS t2 USING(id); - * Result projection names must be `id`, `t1.value`, `t2.value`. - * - * 7. For subquery: - * For subquery projection name consists of `_subquery_` prefix and implementation specific unique number suffix. - * Example: SELECT (SELECT 1), (SELECT 1 UNION DISTINCT SELECT 1); - * Result projection name can be `_subquery_1`, `subquery_2`; - * - * 8. For table: - * Table node can be used in expression context only as right argument of IN function. In that case identifier is used - * as table node projection name. - * Example: SELECT id IN test_table FROM test_table; - * Result projection name is `in(id, test_table)`. - */ -using ProjectionName = String; -using ProjectionNames = std::vector; -constexpr auto PROJECTION_NAME_PLACEHOLDER = "__projection_name_placeholder"; - -struct IdentifierResolveScope -{ - /// Construct identifier resolve scope using scope node, and parent scope - IdentifierResolveScope(QueryTreeNodePtr scope_node_, IdentifierResolveScope * parent_scope_) - : scope_node(std::move(scope_node_)) - , parent_scope(parent_scope_) - { - if (parent_scope) - { - subquery_depth = parent_scope->subquery_depth; - context = parent_scope->context; - projection_mask_map = parent_scope->projection_mask_map; - } - else - projection_mask_map = std::make_shared>(); - - if (auto * union_node = scope_node->as()) - { - context = union_node->getContext(); - } - else if (auto * query_node = scope_node->as()) - { - context = query_node->getContext(); - group_by_use_nulls = context->getSettingsRef().group_by_use_nulls && - (query_node->isGroupByWithGroupingSets() || query_node->isGroupByWithRollup() || query_node->isGroupByWithCube()); - } - - if (context) - join_use_nulls = context->getSettingsRef().join_use_nulls; - else if (parent_scope) - join_use_nulls = parent_scope->join_use_nulls; - - aliases.alias_name_to_expression_node = &aliases.alias_name_to_expression_node_before_group_by; - } - - QueryTreeNodePtr scope_node; - - IdentifierResolveScope * parent_scope = nullptr; - - ContextPtr context; - - /// Identifier lookup to result - std::unordered_map identifier_lookup_to_resolve_state; - - /// Argument can be expression like constant, column, function or table expression - std::unordered_map expression_argument_name_to_node; - - ScopeAliases aliases; - - /// Table column name to column node. Valid only during table ALIAS columns resolve. 
- ColumnNameToColumnNodeMap column_name_to_column_node; - - /// CTE name to query node - std::unordered_map cte_name_to_query_node; - - /// Window name to window node - std::unordered_map window_name_to_window_node; - - /// Current scope expression in resolve process stack - ExpressionsStack expressions_in_resolve_process_stack; - - /// Table expressions in resolve process - std::unordered_set table_expressions_in_resolve_process; - - /// Current scope expression - std::unordered_set non_cached_identifier_lookups_during_expression_resolve; - - /// Table expression node to data - std::unordered_map table_expression_node_to_data; - - QueryTreeNodePtrWithHashIgnoreTypesSet nullable_group_by_keys; - /// Here we count the number of nullable GROUP BY keys we met resolving expression. - /// E.g. for a query `SELECT tuple(tuple(number)) FROM numbers(10) GROUP BY (number, tuple(number)) with cube` - /// both `number` and `tuple(number)` would be in nullable_group_by_keys. - /// But when we resolve `tuple(tuple(number))` we should figure out that `tuple(number)` is already a key, - /// and we should not convert `number` to nullable. - size_t found_nullable_group_by_key_in_scope = 0; - - /** It's possible that after a JOIN, a column in the projection has a type different from the column in the source table. - * (For example, after join_use_nulls or USING column casted to supertype) - * However, the column in the projection still refers to the table as its source. - * This map is used to revert these columns back to their original columns in the source table. - */ - QueryTreeNodePtrWithHashMap join_columns_with_changed_types; - - /// Use identifier lookup to result cache - bool use_identifier_lookup_to_result_cache = true; - - /// Apply nullability to aggregation keys - bool group_by_use_nulls = false; - /// Join retutns NULLs instead of default values - bool join_use_nulls = false; - - /// JOINs count - size_t joins_count = 0; - - /// Subquery depth - size_t subquery_depth = 0; - - /** Scope join tree node for expression. - * Valid only during analysis construction for single expression. - */ - QueryTreeNodePtr expression_join_tree_node; - - /// Node hash to mask id map - std::shared_ptr> projection_mask_map; - - struct ResolvedFunctionsCache - { - FunctionOverloadResolverPtr resolver; - FunctionBasePtr function_base; - }; - - std::map functions_cache; - - [[maybe_unused]] const IdentifierResolveScope * getNearestQueryScope() const - { - const IdentifierResolveScope * scope_to_check = this; - while (scope_to_check != nullptr) - { - if (scope_to_check->scope_node->getNodeType() == QueryTreeNodeType::QUERY) - break; - - scope_to_check = scope_to_check->parent_scope; - } - - return scope_to_check; - } - - IdentifierResolveScope * getNearestQueryScope() - { - IdentifierResolveScope * scope_to_check = this; - while (scope_to_check != nullptr) - { - if (scope_to_check->scope_node->getNodeType() == QueryTreeNodeType::QUERY) - break; - - scope_to_check = scope_to_check->parent_scope; - } - - return scope_to_check; - } - - TableExpressionData & getTableExpressionDataOrThrow(const QueryTreeNodePtr & table_expression_node) - { - auto it = table_expression_node_to_data.find(table_expression_node); - if (it == table_expression_node_to_data.end()) - { - throw Exception(ErrorCodes::LOGICAL_ERROR, - "Table expression {} data must be initialized. 
In scope {}", - table_expression_node->formatASTForErrorMessage(), - scope_node->formatASTForErrorMessage()); - } - - return it->second; - } - - const TableExpressionData & getTableExpressionDataOrThrow(const QueryTreeNodePtr & table_expression_node) const - { - auto it = table_expression_node_to_data.find(table_expression_node); - if (it == table_expression_node_to_data.end()) - { - throw Exception(ErrorCodes::LOGICAL_ERROR, - "Table expression {} data must be initialized. In scope {}", - table_expression_node->formatASTForErrorMessage(), - scope_node->formatASTForErrorMessage()); - } - - return it->second; - } - - void pushExpressionNode(const QueryTreeNodePtr & node) - { - bool had_aggregate_function = expressions_in_resolve_process_stack.hasAggregateFunction(); - expressions_in_resolve_process_stack.push(node); - if (group_by_use_nulls && had_aggregate_function != expressions_in_resolve_process_stack.hasAggregateFunction()) - aliases.alias_name_to_expression_node = &aliases.alias_name_to_expression_node_before_group_by; - } - - void popExpressionNode() - { - bool had_aggregate_function = expressions_in_resolve_process_stack.hasAggregateFunction(); - expressions_in_resolve_process_stack.pop(); - if (group_by_use_nulls && had_aggregate_function != expressions_in_resolve_process_stack.hasAggregateFunction()) - aliases.alias_name_to_expression_node = &aliases.alias_name_to_expression_node_after_group_by; - } - - /// Dump identifier resolve scope - [[maybe_unused]] void dump(WriteBuffer & buffer) const - { - buffer << "Scope node " << scope_node->formatASTForErrorMessage() << '\n'; - buffer << "Identifier lookup to resolve state " << identifier_lookup_to_resolve_state.size() << '\n'; - for (const auto & [identifier, state] : identifier_lookup_to_resolve_state) - { - buffer << "Identifier " << identifier.dump() << " resolve result "; - state.resolve_result.dump(buffer); - buffer << '\n'; - } - - buffer << "Expression argument name to node " << expression_argument_name_to_node.size() << '\n'; - for (const auto & [alias_name, node] : expression_argument_name_to_node) - buffer << "Alias name " << alias_name << " node " << node->formatASTForErrorMessage() << '\n'; - - buffer << "Alias name to expression node table size " << aliases.alias_name_to_expression_node->size() << '\n'; - for (const auto & [alias_name, node] : *aliases.alias_name_to_expression_node) - buffer << "Alias name " << alias_name << " expression node " << node->dumpTree() << '\n'; - - buffer << "Alias name to function node table size " << aliases.alias_name_to_lambda_node.size() << '\n'; - for (const auto & [alias_name, node] : aliases.alias_name_to_lambda_node) - buffer << "Alias name " << alias_name << " lambda node " << node->formatASTForErrorMessage() << '\n'; - - buffer << "Alias name to table expression node table size " << aliases.alias_name_to_table_expression_node.size() << '\n'; - for (const auto & [alias_name, node] : aliases.alias_name_to_table_expression_node) - buffer << "Alias name " << alias_name << " node " << node->formatASTForErrorMessage() << '\n'; - - buffer << "CTE name to query node table size " << cte_name_to_query_node.size() << '\n'; - for (const auto & [cte_name, node] : cte_name_to_query_node) - buffer << "CTE name " << cte_name << " node " << node->formatASTForErrorMessage() << '\n'; - - buffer << "WINDOW name to window node table size " << window_name_to_window_node.size() << '\n'; - for (const auto & [window_name, node] : window_name_to_window_node) - buffer << "CTE name " << window_name << " node 
" << node->formatASTForErrorMessage() << '\n'; - - buffer << "Nodes with duplicated aliases size " << aliases.nodes_with_duplicated_aliases.size() << '\n'; - for (const auto & node : aliases.nodes_with_duplicated_aliases) - buffer << "Alias name " << node->getAlias() << " node " << node->formatASTForErrorMessage() << '\n'; - - buffer << "Expression resolve process stack " << '\n'; - expressions_in_resolve_process_stack.dump(buffer); - - buffer << "Table expressions in resolve process size " << table_expressions_in_resolve_process.size() << '\n'; - for (const auto & node : table_expressions_in_resolve_process) - buffer << "Table expression " << node->formatASTForErrorMessage() << '\n'; - - buffer << "Non cached identifier lookups during expression resolve " << non_cached_identifier_lookups_during_expression_resolve.size() << '\n'; - for (const auto & identifier_lookup : non_cached_identifier_lookups_during_expression_resolve) - buffer << "Identifier lookup " << identifier_lookup.dump() << '\n'; - - buffer << "Table expression node to data " << table_expression_node_to_data.size() << '\n'; - for (const auto & [table_expression_node, table_expression_data] : table_expression_node_to_data) - buffer << "Table expression node " << table_expression_node->formatASTForErrorMessage() << " data " << table_expression_data.dump() << '\n'; - - buffer << "Use identifier lookup to result cache " << use_identifier_lookup_to_result_cache << '\n'; - buffer << "Subquery depth " << subquery_depth << '\n'; - } - - [[maybe_unused]] String dump() const - { - WriteBufferFromOwnString buffer; - dump(buffer); - - return buffer.str(); - } -}; - - -/** Visitor that extracts expression and function aliases from node and initialize scope tables with it. - * Does not go into child lambdas and queries. - * - * Important: - * Identifier nodes with aliases are added both in alias to expression and alias to function map. - * - * These is necessary because identifier with alias can give alias name to any query tree node. - * - * Example: - * WITH (x -> x + 1) AS id, id AS value SELECT value(1); - * In this example id as value is identifier node that has alias, during scope initialization we cannot derive - * that id is actually lambda or expression. - * - * There are no easy solution here, without trying to make full featured expression resolution at this stage. - * Example: - * WITH (x -> x + 1) AS id, id AS id_1, id_1 AS id_2 SELECT id_2(1); - * Example: SELECT a, b AS a, b AS c, 1 AS c; - * - * It is client responsibility after resolving identifier node with alias, make following actions: - * 1. If identifier node was resolved in function scope, remove alias from scope expression map. - * 2. If identifier node was resolved in expression scope, remove alias from scope function map. - * - * That way we separate alias map initialization and expressions resolution. 
- */ -class QueryExpressionsAliasVisitor : public InDepthQueryTreeVisitor -{ -public: - explicit QueryExpressionsAliasVisitor(ScopeAliases & aliases_) - : aliases(aliases_) - {} - - void visitImpl(QueryTreeNodePtr & node) - { - updateAliasesIfNeeded(node, false /*is_lambda_node*/); - } - - bool needChildVisit(const QueryTreeNodePtr &, const QueryTreeNodePtr & child) - { - if (auto * lambda_node = child->as()) - { - updateAliasesIfNeeded(child, true /*is_lambda_node*/); - return false; - } - else if (auto * query_tree_node = child->as()) - { - if (query_tree_node->isCTE()) - return false; - - updateAliasesIfNeeded(child, false /*is_lambda_node*/); - return false; - } - else if (auto * union_node = child->as()) - { - if (union_node->isCTE()) - return false; - - updateAliasesIfNeeded(child, false /*is_lambda_node*/); - return false; - } - - return true; - } -private: - void addDuplicatingAlias(const QueryTreeNodePtr & node) - { - aliases.nodes_with_duplicated_aliases.emplace(node); - auto cloned_node = node->clone(); - aliases.cloned_nodes_with_duplicated_aliases.emplace_back(cloned_node); - aliases.nodes_with_duplicated_aliases.emplace(cloned_node); - } - - void updateAliasesIfNeeded(const QueryTreeNodePtr & node, bool is_lambda_node) - { - if (!node->hasAlias()) - return; - - // We should not resolve expressions to WindowNode - if (node->getNodeType() == QueryTreeNodeType::WINDOW) - return; - - const auto & alias = node->getAlias(); - - if (is_lambda_node) - { - if (aliases.alias_name_to_expression_node->contains(alias)) - addDuplicatingAlias(node); - - auto [_, inserted] = aliases.alias_name_to_lambda_node.insert(std::make_pair(alias, node)); - if (!inserted) - addDuplicatingAlias(node); - - return; - } - - if (aliases.alias_name_to_lambda_node.contains(alias)) - addDuplicatingAlias(node); - - auto [_, inserted] = aliases.alias_name_to_expression_node->insert(std::make_pair(alias, node)); - if (!inserted) - addDuplicatingAlias(node); - - /// If node is identifier put it into transitive aliases map. - if (const auto * identifier = typeid_cast(node.get())) - aliases.transitive_aliases.insert(std::make_pair(alias, identifier->getIdentifier())); - } - - ScopeAliases & aliases; -}; - -class TableExpressionsAliasVisitor : public InDepthQueryTreeVisitor -{ -public: - explicit TableExpressionsAliasVisitor(IdentifierResolveScope & scope_) - : scope(scope_) - {} - - void visitImpl(QueryTreeNodePtr & node) - { - updateAliasesIfNeeded(node); - } - - static bool needChildVisit(const QueryTreeNodePtr & node, const QueryTreeNodePtr & child) - { - auto node_type = node->getNodeType(); - - switch (node_type) - { - case QueryTreeNodeType::ARRAY_JOIN: - { - const auto & array_join_node = node->as(); - return child.get() == array_join_node.getTableExpression().get(); - } - case QueryTreeNodeType::JOIN: - { - const auto & join_node = node->as(); - return child.get() == join_node.getLeftTableExpression().get() || child.get() == join_node.getRightTableExpression().get(); - } - default: - { - break; - } - } - - return false; - } - -private: - void updateAliasesIfNeeded(const QueryTreeNodePtr & node) - { - if (!node->hasAlias()) - return; - - const auto & node_alias = node->getAlias(); - auto [_, inserted] = scope.aliases.alias_name_to_table_expression_node.emplace(node_alias, node); - if (!inserted) - throw Exception(ErrorCodes::MULTIPLE_EXPRESSIONS_FOR_ALIAS, - "Multiple table expressions with same alias {}. 
In scope {}", - node_alias, - scope.scope_node->formatASTForErrorMessage()); - } - - IdentifierResolveScope & scope; -}; - -class QueryAnalyzer -{ -public: - explicit QueryAnalyzer(bool only_analyze_) : only_analyze(only_analyze_) {} - - void resolve(QueryTreeNodePtr & node, const QueryTreeNodePtr & table_expression, ContextPtr context) - { - IdentifierResolveScope scope(node, nullptr /*parent_scope*/); - - if (!scope.context) - scope.context = context; - - auto node_type = node->getNodeType(); - - switch (node_type) - { - case QueryTreeNodeType::QUERY: - { - if (table_expression) - throw Exception(ErrorCodes::LOGICAL_ERROR, - "For query analysis table expression must be empty"); - - resolveQuery(node, scope); - break; - } - case QueryTreeNodeType::UNION: - { - if (table_expression) - throw Exception(ErrorCodes::LOGICAL_ERROR, - "For union analysis table expression must be empty"); - - resolveUnion(node, scope); - break; - } - case QueryTreeNodeType::IDENTIFIER: - [[fallthrough]]; - case QueryTreeNodeType::CONSTANT: - [[fallthrough]]; - case QueryTreeNodeType::COLUMN: - [[fallthrough]]; - case QueryTreeNodeType::FUNCTION: - [[fallthrough]]; - case QueryTreeNodeType::LIST: - { - if (table_expression) - { - scope.expression_join_tree_node = table_expression; - validateTableExpressionModifiers(scope.expression_join_tree_node, scope); - initializeTableExpressionData(scope.expression_join_tree_node, scope); - } - - if (node_type == QueryTreeNodeType::LIST) - resolveExpressionNodeList(node, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/); - else - resolveExpressionNode(node, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/); - - break; - } - case QueryTreeNodeType::TABLE_FUNCTION: - { - QueryExpressionsAliasVisitor expressions_alias_visitor(scope.aliases); - resolveTableFunction(node, scope, expressions_alias_visitor, false /*nested_table_function*/); - break; - } - default: - { - throw Exception(ErrorCodes::BAD_ARGUMENTS, - "Node {} with type {} is not supported by query analyzer. 
" - "Supported nodes are query, union, identifier, constant, column, function, list.", - node->formatASTForErrorMessage(), - node->getNodeTypeName()); - } - } - } - -private: - /// Utility functions - - static bool isExpressionNodeType(QueryTreeNodeType node_type); - - static bool isFunctionExpressionNodeType(QueryTreeNodeType node_type); - - static bool isSubqueryNodeType(QueryTreeNodeType node_type); - - static bool isTableExpressionNodeType(QueryTreeNodeType node_type); - - static DataTypePtr getExpressionNodeResultTypeOrNull(const QueryTreeNodePtr & query_tree_node); - - static ProjectionName calculateFunctionProjectionName(const QueryTreeNodePtr & function_node, - const ProjectionNames & parameters_projection_names, - const ProjectionNames & arguments_projection_names); - - static ProjectionName calculateWindowProjectionName(const QueryTreeNodePtr & window_node, - const QueryTreeNodePtr & parent_window_node, - const String & parent_window_name, - const ProjectionNames & partition_by_projection_names, - const ProjectionNames & order_by_projection_names, - const ProjectionName & frame_begin_offset_projection_name, - const ProjectionName & frame_end_offset_projection_name); - - static ProjectionName calculateSortColumnProjectionName(const QueryTreeNodePtr & sort_column_node, - const ProjectionName & sort_expression_projection_name, - const ProjectionName & fill_from_expression_projection_name, - const ProjectionName & fill_to_expression_projection_name, - const ProjectionName & fill_step_expression_projection_name); - - static void collectCompoundExpressionValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, - const DataTypePtr & compound_expression_type, - const Identifier & valid_identifier_prefix, - std::unordered_set & valid_identifiers_result); - - static void collectTableExpressionValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, - const QueryTreeNodePtr & table_expression, - const TableExpressionData & table_expression_data, - std::unordered_set & valid_identifiers_result); - - static void collectScopeValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, - const IdentifierResolveScope & scope, - bool allow_expression_identifiers, - bool allow_function_identifiers, - bool allow_table_expression_identifiers, - std::unordered_set & valid_identifiers_result); - - static void collectScopeWithParentScopesValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, - const IdentifierResolveScope & scope, - bool allow_expression_identifiers, - bool allow_function_identifiers, - bool allow_table_expression_identifiers, - std::unordered_set & valid_identifiers_result); - - static std::vector collectIdentifierTypoHints(const Identifier & unresolved_identifier, const std::unordered_set & valid_identifiers); - - static QueryTreeNodePtr wrapExpressionNodeInTupleElement(QueryTreeNodePtr expression_node, IdentifierView nested_path); - - QueryTreeNodePtr tryGetLambdaFromSQLUserDefinedFunctions(const std::string & function_name, ContextPtr context); - - void evaluateScalarSubqueryIfNeeded(QueryTreeNodePtr & query_tree_node, IdentifierResolveScope & scope); - - static void mergeWindowWithParentWindow(const QueryTreeNodePtr & window_node, const QueryTreeNodePtr & parent_window_node, IdentifierResolveScope & scope); - - void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope); - - static void 
convertLimitOffsetExpression(QueryTreeNodePtr & expression_node, const String & expression_description, IdentifierResolveScope & scope); - - static void validateTableExpressionModifiers(const QueryTreeNodePtr & table_expression_node, IdentifierResolveScope & scope); - - static void validateJoinTableExpressionWithoutAlias(const QueryTreeNodePtr & join_node, const QueryTreeNodePtr & table_expression_node, IdentifierResolveScope & scope); - - static void checkDuplicateTableNamesOrAlias(const QueryTreeNodePtr & join_node, QueryTreeNodePtr & left_table_expr, QueryTreeNodePtr & right_table_expr, IdentifierResolveScope & scope); - - static std::pair recursivelyCollectMaxOrdinaryExpressions(QueryTreeNodePtr & node, QueryTreeNodes & into); - - static void expandGroupByAll(QueryNode & query_tree_node_typed); - - void expandOrderByAll(QueryNode & query_tree_node_typed, const Settings & settings); - - static std::string - rewriteAggregateFunctionNameIfNeeded(const std::string & aggregate_function_name, NullsAction action, const ContextPtr & context); - - static std::optional getColumnSideFromJoinTree(const QueryTreeNodePtr & resolved_identifier, const JoinNode & join_node) - { - if (resolved_identifier->getNodeType() == QueryTreeNodeType::CONSTANT) - return {}; - - if (resolved_identifier->getNodeType() == QueryTreeNodeType::FUNCTION) - { - const auto & resolved_function = resolved_identifier->as(); - - const auto & argument_nodes = resolved_function.getArguments().getNodes(); - - std::optional result; - for (const auto & argument_node : argument_nodes) - { - auto table_side = getColumnSideFromJoinTree(argument_node, join_node); - if (table_side && result && *table_side != *result) - { - throw Exception(ErrorCodes::AMBIGUOUS_IDENTIFIER, - "Ambiguous identifier {}. 
In scope {}", - resolved_identifier->formatASTForErrorMessage(), - join_node.formatASTForErrorMessage()); - } - result = table_side; - } - return result; - } - - const auto * column_src = resolved_identifier->as().getColumnSource().get(); - - if (join_node.getLeftTableExpression().get() == column_src) - return JoinTableSide::Left; - if (join_node.getRightTableExpression().get() == column_src) - return JoinTableSide::Right; + if (resolved_identifier->getNodeType() == QueryTreeNodeType::CONSTANT) return {}; - } - static QueryTreeNodePtr convertJoinedColumnTypeToNullIfNeeded( - const QueryTreeNodePtr & resolved_identifier, - const JoinKind & join_kind, - std::optional resolved_side, - IdentifierResolveScope & scope) + if (resolved_identifier->getNodeType() == QueryTreeNodeType::FUNCTION) { - if (resolved_identifier->getNodeType() == QueryTreeNodeType::COLUMN && - JoinCommon::canBecomeNullable(resolved_identifier->getResultType()) && - (isFull(join_kind) || - (isLeft(join_kind) && resolved_side && *resolved_side == JoinTableSide::Right) || - (isRight(join_kind) && resolved_side && *resolved_side == JoinTableSide::Left))) + const auto & resolved_function = resolved_identifier->as(); + + const auto & argument_nodes = resolved_function.getArguments().getNodes(); + + std::optional result; + for (const auto & argument_node : argument_nodes) { - auto nullable_resolved_identifier = resolved_identifier->clone(); - auto & resolved_column = nullable_resolved_identifier->as(); - auto new_result_type = makeNullableOrLowCardinalityNullable(resolved_column.getColumnType()); - resolved_column.setColumnType(new_result_type); - if (resolved_column.hasExpression()) + auto table_side = getColumnSideFromJoinTree(argument_node, join_node); + if (table_side && result && *table_side != *result) { - auto & resolved_expression = resolved_column.getExpression(); - if (!resolved_expression->getResultType()->equals(*new_result_type)) - resolved_expression = buildCastFunction(resolved_expression, new_result_type, scope.context, true); + throw Exception(ErrorCodes::AMBIGUOUS_IDENTIFIER, + "Ambiguous identifier {}. 
In scope {}", + resolved_identifier->formatASTForErrorMessage(), + join_node.formatASTForErrorMessage()); } - if (!nullable_resolved_identifier->isEqual(*resolved_identifier)) - scope.join_columns_with_changed_types[nullable_resolved_identifier] = resolved_identifier; - return nullable_resolved_identifier; + result = table_side; } - return nullptr; + return result; } - /// Resolve identifier functions + const auto * column_src = resolved_identifier->as().getColumnSource().get(); - static QueryTreeNodePtr tryResolveTableIdentifierFromDatabaseCatalog(const Identifier & table_identifier, ContextPtr context); + if (join_node.getLeftTableExpression().get() == column_src) + return JoinTableSide::Left; + if (join_node.getRightTableExpression().get() == column_src) + return JoinTableSide::Right; + return {}; +} - QueryTreeNodePtr tryResolveIdentifierFromCompoundExpression(const Identifier & expression_identifier, - size_t identifier_bind_size, - const QueryTreeNodePtr & compound_expression, - String compound_expression_source, - IdentifierResolveScope & scope, - bool can_be_not_found = false); - - QueryTreeNodePtr tryResolveIdentifierFromExpressionArguments(const IdentifierLookup & identifier_lookup, IdentifierResolveScope & scope); - - static bool tryBindIdentifierToAliases(const IdentifierLookup & identifier_lookup, const IdentifierResolveScope & scope); - - QueryTreeNodePtr tryResolveIdentifierFromAliases(const IdentifierLookup & identifier_lookup, - IdentifierResolveScope & scope, - IdentifierResolveSettings identifier_resolve_settings); - - QueryTreeNodePtr tryResolveIdentifierFromTableColumns(const IdentifierLookup & identifier_lookup, IdentifierResolveScope & scope); - - static bool tryBindIdentifierToTableExpression(const IdentifierLookup & identifier_lookup, - const QueryTreeNodePtr & table_expression_node, - const IdentifierResolveScope & scope); - - static bool tryBindIdentifierToTableExpressions(const IdentifierLookup & identifier_lookup, - const QueryTreeNodePtr & table_expression_node, - const IdentifierResolveScope & scope); - - QueryTreeNodePtr tryResolveIdentifierFromTableExpression(const IdentifierLookup & identifier_lookup, - const QueryTreeNodePtr & table_expression_node, - IdentifierResolveScope & scope); - - QueryTreeNodePtr tryResolveIdentifierFromJoin(const IdentifierLookup & identifier_lookup, - const QueryTreeNodePtr & table_expression_node, - IdentifierResolveScope & scope); - - QueryTreeNodePtr matchArrayJoinSubcolumns( - const QueryTreeNodePtr & array_join_column_inner_expression, - const ColumnNode & array_join_column_expression_typed, - const QueryTreeNodePtr & resolved_expression, - IdentifierResolveScope & scope); - - QueryTreeNodePtr tryResolveExpressionFromArrayJoinExpressions(const QueryTreeNodePtr & resolved_expression, - const QueryTreeNodePtr & table_expression_node, - IdentifierResolveScope & scope); - - QueryTreeNodePtr tryResolveIdentifierFromArrayJoin(const IdentifierLookup & identifier_lookup, - const QueryTreeNodePtr & table_expression_node, - IdentifierResolveScope & scope); - - QueryTreeNodePtr tryResolveIdentifierFromJoinTreeNode(const IdentifierLookup & identifier_lookup, - const QueryTreeNodePtr & join_tree_node, - IdentifierResolveScope & scope); - - QueryTreeNodePtr tryResolveIdentifierFromJoinTree(const IdentifierLookup & identifier_lookup, - IdentifierResolveScope & scope); - - IdentifierResolveResult tryResolveIdentifierInParentScopes(const IdentifierLookup & identifier_lookup, IdentifierResolveScope & scope); - - IdentifierResolveResult 
tryResolveIdentifier(const IdentifierLookup & identifier_lookup, - IdentifierResolveScope & scope, - IdentifierResolveSettings identifier_resolve_settings = {}); - - QueryTreeNodePtr tryResolveIdentifierFromStorage( - const Identifier & identifier, - const QueryTreeNodePtr & table_expression_node, - const TableExpressionData & table_expression_data, - IdentifierResolveScope & scope, - size_t identifier_column_qualifier_parts, - bool can_be_not_found = false); - - /// Resolve query tree nodes functions - - void qualifyColumnNodesWithProjectionNames(const QueryTreeNodes & column_nodes, - const QueryTreeNodePtr & table_expression_node, - const IdentifierResolveScope & scope); - - static GetColumnsOptions buildGetColumnsOptions(QueryTreeNodePtr & matcher_node, const ContextPtr & context); - - using QueryTreeNodesWithNames = std::vector>; - - QueryTreeNodesWithNames getMatchedColumnNodesWithNames(const QueryTreeNodePtr & matcher_node, - const QueryTreeNodePtr & table_expression_node, - const NamesAndTypes & matched_columns, - const IdentifierResolveScope & scope); - - void updateMatchedColumnsFromJoinUsing(QueryTreeNodesWithNames & result_matched_column_nodes_with_names, const QueryTreeNodePtr & source_table_expression, IdentifierResolveScope & scope); - - QueryTreeNodesWithNames resolveQualifiedMatcher(QueryTreeNodePtr & matcher_node, IdentifierResolveScope & scope); - - QueryTreeNodesWithNames resolveUnqualifiedMatcher(QueryTreeNodePtr & matcher_node, IdentifierResolveScope & scope); - - ProjectionNames resolveMatcher(QueryTreeNodePtr & matcher_node, IdentifierResolveScope & scope); - - ProjectionName resolveWindow(QueryTreeNodePtr & window_node, IdentifierResolveScope & scope); - - ProjectionNames resolveLambda(const QueryTreeNodePtr & lambda_node, - const QueryTreeNodePtr & lambda_node_to_resolve, - const QueryTreeNodes & lambda_arguments, - IdentifierResolveScope & scope); - - ProjectionNames resolveFunction(QueryTreeNodePtr & function_node, IdentifierResolveScope & scope); - - ProjectionNames resolveExpressionNode(QueryTreeNodePtr & node, IdentifierResolveScope & scope, bool allow_lambda_expression, bool allow_table_expression); - - ProjectionNames resolveExpressionNodeList(QueryTreeNodePtr & node_list, IdentifierResolveScope & scope, bool allow_lambda_expression, bool allow_table_expression); - - ProjectionNames resolveSortNodeList(QueryTreeNodePtr & sort_node_list, IdentifierResolveScope & scope); - - void resolveGroupByNode(QueryNode & query_node_typed, IdentifierResolveScope & scope); - - void resolveInterpolateColumnsNodeList(QueryTreeNodePtr & interpolate_node_list, IdentifierResolveScope & scope); - - void resolveWindowNodeList(QueryTreeNodePtr & window_node_list, IdentifierResolveScope & scope); - - NamesAndTypes resolveProjectionExpressionNodeList(QueryTreeNodePtr & projection_node_list, IdentifierResolveScope & scope); - - void initializeQueryJoinTreeNode(QueryTreeNodePtr & join_tree_node, IdentifierResolveScope & scope); - - void initializeTableExpressionData(const QueryTreeNodePtr & table_expression_node, IdentifierResolveScope & scope); - - void resolveTableFunction(QueryTreeNodePtr & table_function_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & expressions_visitor, bool nested_table_function); - - void resolveArrayJoin(QueryTreeNodePtr & array_join_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & expressions_visitor); - - void resolveJoin(QueryTreeNodePtr & join_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & 
expressions_visitor); - - void resolveQueryJoinTreeNode(QueryTreeNodePtr & join_tree_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & expressions_visitor); - - void resolveQuery(const QueryTreeNodePtr & query_node, IdentifierResolveScope & scope); - - void resolveUnion(const QueryTreeNodePtr & union_node, IdentifierResolveScope & scope); - - /// Lambdas that are currently in resolve process - std::unordered_set lambdas_in_resolve_process; - - /// CTEs that are currently in resolve process - std::unordered_set ctes_in_resolve_process; - - /// Function name to user defined lambda map - std::unordered_map function_name_to_user_defined_lambda; - - /// Array join expressions counter - size_t array_join_expressions_counter = 1; - - /// Subquery counter - size_t subquery_counter = 1; - - /// Global expression node to projection name map - std::unordered_map node_to_projection_name; - - /// Global resolve expression node to projection names map - std::unordered_map resolved_expressions; - - /// Global resolve expression node to tree size - std::unordered_map node_to_tree_size; - - /// Global scalar subquery to scalar value map - std::unordered_map scalar_subquery_to_scalar_value_local; - std::unordered_map scalar_subquery_to_scalar_value_global; - - const bool only_analyze; -}; +QueryTreeNodePtr QueryAnalyzer::convertJoinedColumnTypeToNullIfNeeded( + const QueryTreeNodePtr & resolved_identifier, + const JoinKind & join_kind, + std::optional resolved_side, + IdentifierResolveScope & scope) +{ + if (resolved_identifier->getNodeType() == QueryTreeNodeType::COLUMN && + JoinCommon::canBecomeNullable(resolved_identifier->getResultType()) && + (isFull(join_kind) || + (isLeft(join_kind) && resolved_side && *resolved_side == JoinTableSide::Right) || + (isRight(join_kind) && resolved_side && *resolved_side == JoinTableSide::Left))) + { + auto nullable_resolved_identifier = resolved_identifier->clone(); + auto & resolved_column = nullable_resolved_identifier->as(); + auto new_result_type = makeNullableOrLowCardinalityNullable(resolved_column.getColumnType()); + resolved_column.setColumnType(new_result_type); + if (resolved_column.hasExpression()) + { + auto & resolved_expression = resolved_column.getExpression(); + if (!resolved_expression->getResultType()->equals(*new_result_type)) + resolved_expression = buildCastFunction(resolved_expression, new_result_type, scope.context, true); + } + if (!nullable_resolved_identifier->isEqual(*resolved_identifier)) + scope.join_columns_with_changed_types[nullable_resolved_identifier] = resolved_identifier; + return nullable_resolved_identifier; + } + return nullptr; +} /// Utility functions implementation - bool QueryAnalyzer::isExpressionNodeType(QueryTreeNodeType node_type) { return node_type == QueryTreeNodeType::CONSTANT || node_type == QueryTreeNodeType::COLUMN || node_type == QueryTreeNodeType::FUNCTION @@ -1858,7 +513,7 @@ void QueryAnalyzer::collectCompoundExpressionValidIdentifiersForTypoCorrection( void QueryAnalyzer::collectTableExpressionValidIdentifiersForTypoCorrection( const Identifier & unresolved_identifier, const QueryTreeNodePtr & table_expression, - const TableExpressionData & table_expression_data, + const AnalysisTableExpressionData & table_expression_data, std::unordered_set & valid_identifiers_result) { for (const auto & [column_name, column_node] : table_expression_data.column_name_to_column_node) @@ -2858,7 +1513,7 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromExpressionArguments(cons bool 
QueryAnalyzer::tryBindIdentifierToAliases(const IdentifierLookup & identifier_lookup, const IdentifierResolveScope & scope) { - return scope.aliases.find(identifier_lookup, ScopeAliases::FindOption::FIRST_NAME) != nullptr; + return scope.aliases.find(identifier_lookup, ScopeAliases::FindOption::FIRST_NAME) != nullptr || scope.aliases.array_join_aliases.contains(identifier_lookup.identifier.front()); } /** Resolve identifier from scope aliases. @@ -3114,7 +1769,7 @@ bool QueryAnalyzer::tryBindIdentifierToTableExpressions(const IdentifierLookup & QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromStorage( const Identifier & identifier, const QueryTreeNodePtr & table_expression_node, - const TableExpressionData & table_expression_data, + const AnalysisTableExpressionData & table_expression_data, IdentifierResolveScope & scope, size_t identifier_column_qualifier_parts, bool can_be_not_found) @@ -3889,12 +2544,40 @@ QueryTreeNodePtr QueryAnalyzer::tryResolveIdentifierFromArrayJoin(const Identifi { auto & array_join_column_expression_typed = array_join_column_expression->as(); - if (array_join_column_expression_typed.getAlias() == identifier_lookup.identifier.getFullName()) + IdentifierView identifier_view(identifier_lookup.identifier); + + if (identifier_view.isCompound() && from_array_join_node.hasAlias() && identifier_view.front() == from_array_join_node.getAlias()) + identifier_view.popFirst(); + + const auto & alias_or_name = array_join_column_expression_typed.hasAlias() + ? array_join_column_expression_typed.getAlias() + : array_join_column_expression_typed.getColumnName(); + + if (identifier_view.front() == alias_or_name) + identifier_view.popFirst(); + else if (identifier_view.getFullName() == alias_or_name) + identifier_view.popFirst(identifier_view.getPartsSize()); /// Clear + else + continue; + + if (identifier_view.empty()) { auto array_join_column = std::make_shared(array_join_column_expression_typed.getColumn(), array_join_column_expression_typed.getColumnSource()); return array_join_column; } + + /// Resolve subcolumns. Example : SELECT x.y.z FROM tab ARRAY JOIN arr AS x + auto compound_expr = tryResolveIdentifierFromCompoundExpression( + identifier_lookup.identifier, + identifier_lookup.identifier.getPartsSize() - identifier_view.getPartsSize() /*identifier_bind_size*/, + array_join_column_expression, + {} /* compound_expression_source */, + scope, + true /* can_be_not_found */); + + if (compound_expr) + return compound_expr; } if (!resolved_identifier) @@ -4357,7 +3040,7 @@ QueryAnalyzer::QueryTreeNodesWithNames QueryAnalyzer::getMatchedColumnNodesWithN /** Use resolved columns from table expression data in nearest query scope if available. * It is important for ALIAS columns to use column nodes with resolved ALIAS expression. */ - const TableExpressionData * table_expression_data = nullptr; + const AnalysisTableExpressionData * table_expression_data = nullptr; const auto * nearest_query_scope = scope.getNearestQueryScope(); if (nearest_query_scope) table_expression_data = &nearest_query_scope->getTableExpressionDataOrThrow(table_expression_node); @@ -6284,7 +4967,7 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi * * 4. If node has alias, update its value in scope alias map. Deregister alias from expression_aliases_in_resolve_process. 
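+ * Note: when resolveExpressionNode is called with ignore_alias set, step 4 is skipped: the node's alias is
+ * neither written into the scope alias table nor cached in resolved_expressions. This mode is used for
+ * resolving ARRAY JOIN expression lists.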
*/ -ProjectionNames QueryAnalyzer::resolveExpressionNode(QueryTreeNodePtr & node, IdentifierResolveScope & scope, bool allow_lambda_expression, bool allow_table_expression) +ProjectionNames QueryAnalyzer::resolveExpressionNode(QueryTreeNodePtr & node, IdentifierResolveScope & scope, bool allow_lambda_expression, bool allow_table_expression, bool ignore_alias) { checkStackSize(); @@ -6334,7 +5017,7 @@ ProjectionNames QueryAnalyzer::resolveExpressionNode(QueryTreeNodePtr & node, Id * To support both (SELECT 1) AS expression in projection and (SELECT 1) as subquery in IN, do not use * alias table because in alias table subquery could be evaluated as scalar. */ - bool use_alias_table = true; + bool use_alias_table = !ignore_alias; if (is_duplicated_alias || (allow_table_expression && isSubqueryNodeType(node->getNodeType()))) use_alias_table = false; @@ -6634,7 +5317,8 @@ ProjectionNames QueryAnalyzer::resolveExpressionNode(QueryTreeNodePtr & node, Id if (is_duplicated_alias) scope.non_cached_identifier_lookups_during_expression_resolve.erase({Identifier{node_alias}, IdentifierLookupContext::EXPRESSION}); - resolved_expressions.emplace(node, result_projection_names); + if (!ignore_alias) + resolved_expressions.emplace(node, result_projection_names); scope.popExpressionNode(); bool expression_was_root = scope.expressions_in_resolve_process_stack.empty(); @@ -7109,7 +5793,7 @@ void QueryAnalyzer::initializeTableExpressionData(const QueryTreeNodePtr & table if (table_expression_data_it != scope.table_expression_node_to_data.end()) return; - TableExpressionData table_expression_data; + AnalysisTableExpressionData table_expression_data; if (table_node) { @@ -7569,22 +6253,25 @@ void QueryAnalyzer::resolveArrayJoin(QueryTreeNodePtr & array_join_node, Identif for (auto & array_join_expression : array_join_nodes) { auto array_join_expression_alias = array_join_expression->getAlias(); - if (!array_join_expression_alias.empty() && scope.aliases.alias_name_to_expression_node->contains(array_join_expression_alias)) - throw Exception(ErrorCodes::MULTIPLE_EXPRESSIONS_FOR_ALIAS, - "ARRAY JOIN expression {} with duplicate alias {}. In scope {}", - array_join_expression->formatASTForErrorMessage(), - array_join_expression_alias, - scope.scope_node->formatASTForErrorMessage()); - /// Add array join expression into scope - expressions_visitor.visit(array_join_expression); + for (const auto & elem : array_join_nodes) + { + if (elem->hasAlias()) + scope.aliases.array_join_aliases.insert(elem->getAlias()); + + for (auto & child : elem->getChildren()) + { + if (child) + expressions_visitor.visit(child); + } + } std::string identifier_full_name; if (auto * identifier_node = array_join_expression->as()) identifier_full_name = identifier_node->getIdentifier().getFullName(); - resolveExpressionNode(array_join_expression, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/); + resolveExpressionNode(array_join_expression, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/, true /*ignore_alias*/); auto process_array_join_expression = [&](QueryTreeNodePtr & expression) { @@ -7651,27 +6338,7 @@ void QueryAnalyzer::resolveArrayJoin(QueryTreeNodePtr & array_join_node, Identif } } - /** Allow to resolve ARRAY JOIN columns from aliases with types after ARRAY JOIN only after ARRAY JOIN expression list is resolved, because - * during resolution of ARRAY JOIN expression list we must use column type before ARRAY JOIN. 
- * - * Example: SELECT id, value_element FROM test_table ARRAY JOIN [[1,2,3]] AS value_element, value_element AS value - * It is expected that `value_element AS value` expression inside ARRAY JOIN expression list will be - * resolved as `value_element` expression with type before ARRAY JOIN. - * And it is expected that `value_element` inside projection expression list will be resolved as `value_element` expression - * with type after ARRAY JOIN. - */ array_join_nodes = std::move(array_join_column_expressions); - for (auto & array_join_column_expression : array_join_nodes) - { - auto it = scope.aliases.alias_name_to_expression_node->find(array_join_column_expression->getAlias()); - if (it != scope.aliases.alias_name_to_expression_node->end()) - { - auto & array_join_column_expression_typed = array_join_column_expression->as(); - auto array_join_column = std::make_shared(array_join_column_expression_typed.getColumn(), - array_join_column_expression_typed.getColumnSource()); - it->second = std::move(array_join_column); - } - } } void QueryAnalyzer::checkDuplicateTableNamesOrAlias(const QueryTreeNodePtr & join_node, QueryTreeNodePtr & left_table_expr, QueryTreeNodePtr & right_table_expr, IdentifierResolveScope & scope) @@ -8446,19 +7113,3 @@ void QueryAnalyzer::resolveUnion(const QueryTreeNodePtr & union_node, Identifier } } - -QueryAnalysisPass::QueryAnalysisPass(QueryTreeNodePtr table_expression_, bool only_analyze_) - : table_expression(std::move(table_expression_)) - , only_analyze(only_analyze_) -{} - -QueryAnalysisPass::QueryAnalysisPass(bool only_analyze_) : only_analyze(only_analyze_) {} - -void QueryAnalysisPass::run(QueryTreeNodePtr & query_tree_node, ContextPtr context) -{ - QueryAnalyzer analyzer(only_analyze); - analyzer.resolve(query_tree_node, table_expression, context); - createUniqueTableAliases(query_tree_node, table_expression, context); -} - -} diff --git a/src/Analyzer/Resolve/QueryAnalyzer.h b/src/Analyzer/Resolve/QueryAnalyzer.h new file mode 100644 index 00000000000..e2c4c8df46b --- /dev/null +++ b/src/Analyzer/Resolve/QueryAnalyzer.h @@ -0,0 +1,378 @@ +#pragma once + +#include +#include +#include +#include + +#include +#include + +#include + +namespace DB +{ + +struct GetColumnsOptions; +struct IdentifierResolveScope; +struct AnalysisTableExpressionData; +class QueryExpressionsAliasVisitor ; + +class QueryNode; +class JoinNode; +class ColumnNode; + +using ProjectionName = String; +using ProjectionNames = std::vector; + +struct Settings; + +/** Query analyzer implementation overview. Please check documentation in QueryAnalysisPass.h first. + * And additional documentation for each method, where special cases are described in detail. + * + * Each node in query must be resolved. For each query tree node resolved state is specific. + * + * For constant node no resolve process exists, it is resolved during construction. + * + * For table node no resolve process exists, it is resolved during construction. + * + * For function node to be resolved parameters and arguments must be resolved, function node must be initialized with concrete aggregate or + * non aggregate function and with result type. + * + * For lambda node there can be 2 different cases. + * 1. Standalone: WITH (x -> x + 1) AS lambda SELECT lambda(1); Such lambdas are inlined in query tree during query analysis pass. + * 2. 
Function arguments: WITH (x -> x + 1) AS lambda SELECT arrayMap(lambda, [1, 2, 3]); For such a lambda, resolution must
+ * set concrete lambda arguments (initially they are identifier nodes) and resolve the lambda expression body.
+ *
+ * For a query node, the resolve process must resolve all its inner nodes.
+ *
+ * For a matcher node, the resolve process must replace it with the matched nodes.
+ *
+ * For an identifier node, the resolve process must replace it with a concrete non-identifier node. This part is the most complex,
+ * because scopes and the identifier lookup context play an important part in identifier resolution.
+ *
+ * ClickHouse SQL supports lexical scoping for identifier resolution. A scope can be defined by a query node or by an expression node.
+ * Expression nodes that can define a scope are lambdas and table ALIAS columns.
+ *
+ * The identifier lookup context can be expression, function, or table.
+ *
+ * Example: WITH (x -> x + 1) as func SELECT func() FROM func; During resolution of the function `func`, identifier lookup is performed
+ * in function context.
+ *
+ * If there is no information about the identifier context, the rules are as follows:
+ * 1. Try to resolve identifier in expression context.
+ * 2. Try to resolve identifier in function context, if it is allowed. Example: SELECT func(arguments); Here the func identifier cannot be resolved in function context
+ * because query projection does not support that.
+ * 3. Try to resolve identifier in table context, if it is allowed. Example: SELECT table; Here the table identifier cannot be resolved in table context
+ * because query projection does not support that.
+ *
+ * TODO: This was not supported properly before, because matchers could not be resolved from aliases.
+ *
+ * Identifiers are resolved with the following rules:
+ * Resolution starts with the current scope.
+ * 1. Try to resolve identifier from expression scope arguments. Lambda expression arguments have the greatest priority.
+ * 2. Try to resolve identifier from aliases.
+ * 3. Try to resolve identifier from the join tree if the scope is a query, or if there are registered table columns in the scope.
+ * The order of steps 2 and 3 can be changed using the prefer_column_name_to_alias setting.
+ * 4. If it is a table lookup, try to resolve identifier from CTE.
+ * If the identifier could not be resolved in the current scope, resolution must be continued in parent scopes.
+ * 5. Try to resolve identifier from parent scopes.
+ *
+ * Additional rules about aliases and scopes.
+ * 1. A parent scope cannot refer to an alias from a child scope.
+ * 2. A child scope can refer to an alias in a parent scope.
+ *
+ * Example: SELECT arrayMap(x -> x + 1 AS a, [1,2,3]), a; Identifier a is unknown in the parent scope.
+ * Example: SELECT a FROM (SELECT 1 as a); Here we do not refer to the alias a from the child query scope; we query its projection result, similar to tables.
+ * Example: WITH 1 as a SELECT (SELECT a) as b; Here, in the child scope, identifier a is resolved using the alias from the parent scope.
+ *
+ * Additional rules about identifier binding.
+ * Binding an identifier to an entity means that the identifier's first part matches some node during analysis.
+ * If the other parts of the identifier cannot be resolved in that node, an exception must be thrown.
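+ * In other words, binding is all-or-nothing: once the first identifier part binds to some node, the remaining
+ * parts must resolve within that node, and the lookup never falls back to another candidate entity.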
+ * + * Example: + * CREATE TABLE test_table (id UInt64, compound_value Tuple(value UInt64)) ENGINE=TinyLog; + * SELECT compound_value.value, 1 AS compound_value FROM test_table; + * Identifier first part compound_value bound to entity with alias compound_value, but nested identifier part cannot be resolved from entity, + * lookup should not be continued, and exception must be thrown because if lookup continues that way identifier can be resolved from join tree. + * + * TODO: This was not supported properly before analyzer because nested identifier could not be resolved from alias. + * + * More complex example: + * CREATE TABLE test_table (id UInt64, value UInt64) ENGINE=TinyLog; + * WITH cast(('Value'), 'Tuple (value UInt64') AS value SELECT (SELECT value FROM test_table); + * Identifier first part value bound to test_table column value, but nested identifier part cannot be resolved from it, + * lookup should not be continued, and exception must be thrown because if lookup continues identifier can be resolved from parent scope. + * + * TODO: Update exception messages + * TODO: Table identifiers with optional UUID. + * TODO: Lookup functions arrayReduce(sum, [1, 2, 3]); + * TODO: Support function identifier resolve from parent query scope, if lambda in parent scope does not capture any columns. + */ + +class QueryAnalyzer +{ +public: + explicit QueryAnalyzer(bool only_analyze_); + ~QueryAnalyzer(); + + void resolve(QueryTreeNodePtr & node, const QueryTreeNodePtr & table_expression, ContextPtr context); + +private: + /// Utility functions + + static bool isExpressionNodeType(QueryTreeNodeType node_type); + + static bool isFunctionExpressionNodeType(QueryTreeNodeType node_type); + + static bool isSubqueryNodeType(QueryTreeNodeType node_type); + + static bool isTableExpressionNodeType(QueryTreeNodeType node_type); + + static DataTypePtr getExpressionNodeResultTypeOrNull(const QueryTreeNodePtr & query_tree_node); + + static ProjectionName calculateFunctionProjectionName(const QueryTreeNodePtr & function_node, + const ProjectionNames & parameters_projection_names, + const ProjectionNames & arguments_projection_names); + + static ProjectionName calculateWindowProjectionName(const QueryTreeNodePtr & window_node, + const QueryTreeNodePtr & parent_window_node, + const String & parent_window_name, + const ProjectionNames & partition_by_projection_names, + const ProjectionNames & order_by_projection_names, + const ProjectionName & frame_begin_offset_projection_name, + const ProjectionName & frame_end_offset_projection_name); + + static ProjectionName calculateSortColumnProjectionName(const QueryTreeNodePtr & sort_column_node, + const ProjectionName & sort_expression_projection_name, + const ProjectionName & fill_from_expression_projection_name, + const ProjectionName & fill_to_expression_projection_name, + const ProjectionName & fill_step_expression_projection_name); + + static void collectCompoundExpressionValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, + const DataTypePtr & compound_expression_type, + const Identifier & valid_identifier_prefix, + std::unordered_set & valid_identifiers_result); + + static void collectTableExpressionValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, + const QueryTreeNodePtr & table_expression, + const AnalysisTableExpressionData & table_expression_data, + std::unordered_set & valid_identifiers_result); + + static void collectScopeValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, + const 
IdentifierResolveScope & scope, + bool allow_expression_identifiers, + bool allow_function_identifiers, + bool allow_table_expression_identifiers, + std::unordered_set & valid_identifiers_result); + + static void collectScopeWithParentScopesValidIdentifiersForTypoCorrection(const Identifier & unresolved_identifier, + const IdentifierResolveScope & scope, + bool allow_expression_identifiers, + bool allow_function_identifiers, + bool allow_table_expression_identifiers, + std::unordered_set & valid_identifiers_result); + + static std::vector collectIdentifierTypoHints(const Identifier & unresolved_identifier, const std::unordered_set & valid_identifiers); + + static QueryTreeNodePtr wrapExpressionNodeInTupleElement(QueryTreeNodePtr expression_node, IdentifierView nested_path); + + QueryTreeNodePtr tryGetLambdaFromSQLUserDefinedFunctions(const std::string & function_name, ContextPtr context); + + void evaluateScalarSubqueryIfNeeded(QueryTreeNodePtr & query_tree_node, IdentifierResolveScope & scope); + + static void mergeWindowWithParentWindow(const QueryTreeNodePtr & window_node, const QueryTreeNodePtr & parent_window_node, IdentifierResolveScope & scope); + + void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope); + + static void convertLimitOffsetExpression(QueryTreeNodePtr & expression_node, const String & expression_description, IdentifierResolveScope & scope); + + static void validateTableExpressionModifiers(const QueryTreeNodePtr & table_expression_node, IdentifierResolveScope & scope); + + static void validateJoinTableExpressionWithoutAlias(const QueryTreeNodePtr & join_node, const QueryTreeNodePtr & table_expression_node, IdentifierResolveScope & scope); + + static void checkDuplicateTableNamesOrAlias(const QueryTreeNodePtr & join_node, QueryTreeNodePtr & left_table_expr, QueryTreeNodePtr & right_table_expr, IdentifierResolveScope & scope); + + static std::pair recursivelyCollectMaxOrdinaryExpressions(QueryTreeNodePtr & node, QueryTreeNodes & into); + + static void expandGroupByAll(QueryNode & query_tree_node_typed); + + void expandOrderByAll(QueryNode & query_tree_node_typed, const Settings & settings); + + static std::string + rewriteAggregateFunctionNameIfNeeded(const std::string & aggregate_function_name, NullsAction action, const ContextPtr & context); + + static std::optional getColumnSideFromJoinTree(const QueryTreeNodePtr & resolved_identifier, const JoinNode & join_node); + + static QueryTreeNodePtr convertJoinedColumnTypeToNullIfNeeded( + const QueryTreeNodePtr & resolved_identifier, + const JoinKind & join_kind, + std::optional resolved_side, + IdentifierResolveScope & scope); + + /// Resolve identifier functions + + static QueryTreeNodePtr tryResolveTableIdentifierFromDatabaseCatalog(const Identifier & table_identifier, ContextPtr context); + + QueryTreeNodePtr tryResolveIdentifierFromCompoundExpression(const Identifier & expression_identifier, + size_t identifier_bind_size, + const QueryTreeNodePtr & compound_expression, + String compound_expression_source, + IdentifierResolveScope & scope, + bool can_be_not_found = false); + + QueryTreeNodePtr tryResolveIdentifierFromExpressionArguments(const IdentifierLookup & identifier_lookup, IdentifierResolveScope & scope); + + static bool tryBindIdentifierToAliases(const IdentifierLookup & identifier_lookup, const IdentifierResolveScope & scope); + + QueryTreeNodePtr tryResolveIdentifierFromAliases(const IdentifierLookup & identifier_lookup, + 
IdentifierResolveScope & scope, + IdentifierResolveSettings identifier_resolve_settings); + + QueryTreeNodePtr tryResolveIdentifierFromTableColumns(const IdentifierLookup & identifier_lookup, IdentifierResolveScope & scope); + + static bool tryBindIdentifierToTableExpression(const IdentifierLookup & identifier_lookup, + const QueryTreeNodePtr & table_expression_node, + const IdentifierResolveScope & scope); + + static bool tryBindIdentifierToTableExpressions(const IdentifierLookup & identifier_lookup, + const QueryTreeNodePtr & table_expression_node, + const IdentifierResolveScope & scope); + + QueryTreeNodePtr tryResolveIdentifierFromTableExpression(const IdentifierLookup & identifier_lookup, + const QueryTreeNodePtr & table_expression_node, + IdentifierResolveScope & scope); + + QueryTreeNodePtr tryResolveIdentifierFromJoin(const IdentifierLookup & identifier_lookup, + const QueryTreeNodePtr & table_expression_node, + IdentifierResolveScope & scope); + + QueryTreeNodePtr matchArrayJoinSubcolumns( + const QueryTreeNodePtr & array_join_column_inner_expression, + const ColumnNode & array_join_column_expression_typed, + const QueryTreeNodePtr & resolved_expression, + IdentifierResolveScope & scope); + + QueryTreeNodePtr tryResolveExpressionFromArrayJoinExpressions(const QueryTreeNodePtr & resolved_expression, + const QueryTreeNodePtr & table_expression_node, + IdentifierResolveScope & scope); + + QueryTreeNodePtr tryResolveIdentifierFromArrayJoin(const IdentifierLookup & identifier_lookup, + const QueryTreeNodePtr & table_expression_node, + IdentifierResolveScope & scope); + + QueryTreeNodePtr tryResolveIdentifierFromJoinTreeNode(const IdentifierLookup & identifier_lookup, + const QueryTreeNodePtr & join_tree_node, + IdentifierResolveScope & scope); + + QueryTreeNodePtr tryResolveIdentifierFromJoinTree(const IdentifierLookup & identifier_lookup, + IdentifierResolveScope & scope); + + IdentifierResolveResult tryResolveIdentifierInParentScopes(const IdentifierLookup & identifier_lookup, IdentifierResolveScope & scope); + + IdentifierResolveResult tryResolveIdentifier(const IdentifierLookup & identifier_lookup, + IdentifierResolveScope & scope, + IdentifierResolveSettings identifier_resolve_settings = {}); + + QueryTreeNodePtr tryResolveIdentifierFromStorage( + const Identifier & identifier, + const QueryTreeNodePtr & table_expression_node, + const AnalysisTableExpressionData & table_expression_data, + IdentifierResolveScope & scope, + size_t identifier_column_qualifier_parts, + bool can_be_not_found = false); + + /// Resolve query tree nodes functions + + void qualifyColumnNodesWithProjectionNames(const QueryTreeNodes & column_nodes, + const QueryTreeNodePtr & table_expression_node, + const IdentifierResolveScope & scope); + + static GetColumnsOptions buildGetColumnsOptions(QueryTreeNodePtr & matcher_node, const ContextPtr & context); + + using QueryTreeNodesWithNames = std::vector>; + + QueryTreeNodesWithNames getMatchedColumnNodesWithNames(const QueryTreeNodePtr & matcher_node, + const QueryTreeNodePtr & table_expression_node, + const NamesAndTypes & matched_columns, + const IdentifierResolveScope & scope); + + void updateMatchedColumnsFromJoinUsing(QueryTreeNodesWithNames & result_matched_column_nodes_with_names, const QueryTreeNodePtr & source_table_expression, IdentifierResolveScope & scope); + + QueryTreeNodesWithNames resolveQualifiedMatcher(QueryTreeNodePtr & matcher_node, IdentifierResolveScope & scope); + + QueryTreeNodesWithNames resolveUnqualifiedMatcher(QueryTreeNodePtr & 
matcher_node, IdentifierResolveScope & scope); + + ProjectionNames resolveMatcher(QueryTreeNodePtr & matcher_node, IdentifierResolveScope & scope); + + ProjectionName resolveWindow(QueryTreeNodePtr & window_node, IdentifierResolveScope & scope); + + ProjectionNames resolveLambda(const QueryTreeNodePtr & lambda_node, + const QueryTreeNodePtr & lambda_node_to_resolve, + const QueryTreeNodes & lambda_arguments, + IdentifierResolveScope & scope); + + ProjectionNames resolveFunction(QueryTreeNodePtr & function_node, IdentifierResolveScope & scope); + + ProjectionNames resolveExpressionNode(QueryTreeNodePtr & node, IdentifierResolveScope & scope, bool allow_lambda_expression, bool allow_table_expression, bool ignore_alias = false); + + ProjectionNames resolveExpressionNodeList(QueryTreeNodePtr & node_list, IdentifierResolveScope & scope, bool allow_lambda_expression, bool allow_table_expression); + + ProjectionNames resolveSortNodeList(QueryTreeNodePtr & sort_node_list, IdentifierResolveScope & scope); + + void resolveGroupByNode(QueryNode & query_node_typed, IdentifierResolveScope & scope); + + void resolveInterpolateColumnsNodeList(QueryTreeNodePtr & interpolate_node_list, IdentifierResolveScope & scope); + + void resolveWindowNodeList(QueryTreeNodePtr & window_node_list, IdentifierResolveScope & scope); + + NamesAndTypes resolveProjectionExpressionNodeList(QueryTreeNodePtr & projection_node_list, IdentifierResolveScope & scope); + + void initializeQueryJoinTreeNode(QueryTreeNodePtr & join_tree_node, IdentifierResolveScope & scope); + + void initializeTableExpressionData(const QueryTreeNodePtr & table_expression_node, IdentifierResolveScope & scope); + + void resolveTableFunction(QueryTreeNodePtr & table_function_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & expressions_visitor, bool nested_table_function); + + void resolveArrayJoin(QueryTreeNodePtr & array_join_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & expressions_visitor); + + void resolveJoin(QueryTreeNodePtr & join_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & expressions_visitor); + + void resolveQueryJoinTreeNode(QueryTreeNodePtr & join_tree_node, IdentifierResolveScope & scope, QueryExpressionsAliasVisitor & expressions_visitor); + + void resolveQuery(const QueryTreeNodePtr & query_node, IdentifierResolveScope & scope); + + void resolveUnion(const QueryTreeNodePtr & union_node, IdentifierResolveScope & scope); + + /// Lambdas that are currently in resolve process + std::unordered_set lambdas_in_resolve_process; + + /// CTEs that are currently in resolve process + std::unordered_set ctes_in_resolve_process; + + /// Function name to user defined lambda map + std::unordered_map function_name_to_user_defined_lambda; + + /// Array join expressions counter + size_t array_join_expressions_counter = 1; + + /// Subquery counter + size_t subquery_counter = 1; + + /// Global expression node to projection name map + std::unordered_map node_to_projection_name; + + /// Global resolve expression node to projection names map + std::unordered_map resolved_expressions; + + /// Global resolve expression node to tree size + std::unordered_map node_to_tree_size; + + /// Global scalar subquery to scalar value map + std::unordered_map scalar_subquery_to_scalar_value_local; + std::unordered_map scalar_subquery_to_scalar_value_global; + + const bool only_analyze; +}; + +} diff --git a/src/Analyzer/Resolve/QueryExpressionsAliasVisitor.h b/src/Analyzer/Resolve/QueryExpressionsAliasVisitor.h 
new file mode 100644
index 00000000000..45d081e34ea
--- /dev/null
+++ b/src/Analyzer/Resolve/QueryExpressionsAliasVisitor.h
@@ -0,0 +1,119 @@
+#pragma once
+
+#include
+#include
+#include
+
+namespace DB
+{
+
+/** Visitor that extracts expression and function aliases from a node and initializes scope tables with them.
+ * Does not go into child lambdas and queries.
+ *
+ * Important:
+ * Identifier nodes with aliases are added both to the alias-to-expression and the alias-to-function map.
+ *
+ * This is necessary because an identifier with an alias can give its alias name to any query tree node.
+ *
+ * Example:
+ * WITH (x -> x + 1) AS id, id AS value SELECT value(1);
+ * In this example `id AS value` is an identifier node that has an alias; during scope initialization we cannot derive
+ * whether id is actually a lambda or an expression.
+ *
+ * There is no easy solution here without attempting full-featured expression resolution at this stage.
+ * Example:
+ * WITH (x -> x + 1) AS id, id AS id_1, id_1 AS id_2 SELECT id_2(1);
+ * Example: SELECT a, b AS a, b AS c, 1 AS c;
+ *
+ * It is the client's responsibility, after resolving an identifier node with an alias, to take the following actions:
+ * 1. If the identifier node was resolved in function scope, remove the alias from the scope expression map.
+ * 2. If the identifier node was resolved in expression scope, remove the alias from the scope function map.
+ *
+ * That way we separate alias map initialization from expression resolution.
+ */
+class QueryExpressionsAliasVisitor : public InDepthQueryTreeVisitor<QueryExpressionsAliasVisitor>
+{
+public:
+    explicit QueryExpressionsAliasVisitor(ScopeAliases & aliases_)
+        : aliases(aliases_)
+    {}
+
+    void visitImpl(QueryTreeNodePtr & node)
+    {
+        updateAliasesIfNeeded(node, false /*is_lambda_node*/);
+    }
+
+    bool needChildVisit(const QueryTreeNodePtr &, const QueryTreeNodePtr & child)
+    {
+        if (auto * lambda_node = child->as<LambdaNode>())
+        {
+            updateAliasesIfNeeded(child, true /*is_lambda_node*/);
+            return false;
+        }
+        else if (auto * query_tree_node = child->as<QueryNode>())
+        {
+            if (query_tree_node->isCTE())
+                return false;
+
+            updateAliasesIfNeeded(child, false /*is_lambda_node*/);
+            return false;
+        }
+        else if (auto * union_node = child->as<UnionNode>())
+        {
+            if (union_node->isCTE())
+                return false;
+
+            updateAliasesIfNeeded(child, false /*is_lambda_node*/);
+            return false;
+        }
+
+        return true;
+    }
+private:
+    void addDuplicatingAlias(const QueryTreeNodePtr & node)
+    {
+        aliases.nodes_with_duplicated_aliases.emplace(node);
+        auto cloned_node = node->clone();
+        aliases.cloned_nodes_with_duplicated_aliases.emplace_back(cloned_node);
+        aliases.nodes_with_duplicated_aliases.emplace(cloned_node);
+    }
+
+    void updateAliasesIfNeeded(const QueryTreeNodePtr & node, bool is_lambda_node)
+    {
+        if (!node->hasAlias())
+            return;
+
+        // We should not resolve expressions to WindowNode
+        if (node->getNodeType() == QueryTreeNodeType::WINDOW)
+            return;
+
+        const auto & alias = node->getAlias();
+
+        if (is_lambda_node)
+        {
+            if (aliases.alias_name_to_expression_node->contains(alias))
+                addDuplicatingAlias(node);
+
+            auto [_, inserted] = aliases.alias_name_to_lambda_node.insert(std::make_pair(alias, node));
+            if (!inserted)
+                addDuplicatingAlias(node);
+
+            return;
+        }
+
+        if (aliases.alias_name_to_lambda_node.contains(alias))
+            addDuplicatingAlias(node);
+
+        auto [_, inserted] = aliases.alias_name_to_expression_node->insert(std::make_pair(alias, node));
+        if (!inserted)
+            addDuplicatingAlias(node);
+
+        /// If the node is an identifier, put it into the transitive aliases map.
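+        /// Example: for WITH (x -> x + 1) AS id, id AS id_1, the entry transitive_aliases["id_1"] = id allows
+        /// a later lookup of id_1 (e.g. in function context) to be chased through the alias chain to the lambda.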
+        if (const auto * identifier = typeid_cast<const IdentifierNode *>(node.get()))
+            aliases.transitive_aliases.insert(std::make_pair(alias, identifier->getIdentifier()));
+    }
+
+    ScopeAliases & aliases;
+};
+
+}
diff --git a/src/Analyzer/Resolve/ScopeAliases.h b/src/Analyzer/Resolve/ScopeAliases.h
new file mode 100644
index 00000000000..baab843988b
--- /dev/null
+++ b/src/Analyzer/Resolve/ScopeAliases.h
@@ -0,0 +1,91 @@
+#pragma once
+
+#include
+#include
+
+namespace DB
+{
+
+struct ScopeAliases
+{
+    /// Alias name to query expression node
+    std::unordered_map<std::string, QueryTreeNodePtr> alias_name_to_expression_node_before_group_by;
+    std::unordered_map<std::string, QueryTreeNodePtr> alias_name_to_expression_node_after_group_by;
+
+    std::unordered_map<std::string, QueryTreeNodePtr> * alias_name_to_expression_node = nullptr;
+
+    /// Alias name to lambda node
+    std::unordered_map<std::string, QueryTreeNodePtr> alias_name_to_lambda_node;
+
+    /// Alias name to table expression node
+    std::unordered_map<std::string, QueryTreeNodePtr> alias_name_to_table_expression_node;
+
+    /// Expressions like `x AS y`, where we can't say whether it's a function, expression or table.
+    std::unordered_map<std::string, Identifier> transitive_aliases;
+
+    /// Nodes with duplicated aliases
+    std::unordered_set<QueryTreeNodePtr> nodes_with_duplicated_aliases;
+    std::vector<QueryTreeNodePtr> cloned_nodes_with_duplicated_aliases;
+
+    /// Names which are aliases from ARRAY JOIN.
+    /// This is needed to properly qualify columns from matchers and avoid name collisions.
+    std::unordered_set<std::string> array_join_aliases;
+
+    std::unordered_map<std::string, QueryTreeNodePtr> & getAliasMap(IdentifierLookupContext lookup_context)
+    {
+        switch (lookup_context)
+        {
+            case IdentifierLookupContext::EXPRESSION: return *alias_name_to_expression_node;
+            case IdentifierLookupContext::FUNCTION: return alias_name_to_lambda_node;
+            case IdentifierLookupContext::TABLE_EXPRESSION: return alias_name_to_table_expression_node;
+        }
+    }
+
+    enum class FindOption
+    {
+        FIRST_NAME,
+        FULL_NAME,
+    };
+
+    const std::string & getKey(const Identifier & identifier, FindOption find_option)
+    {
+        switch (find_option)
+        {
+            case FindOption::FIRST_NAME: return identifier.front();
+            case FindOption::FULL_NAME: return identifier.getFullName();
+        }
+    }
+
+    QueryTreeNodePtr * find(IdentifierLookup lookup, FindOption find_option)
+    {
+        auto & alias_map = getAliasMap(lookup.lookup_context);
+        const std::string * key = &getKey(lookup.identifier, find_option);
+
+        auto it = alias_map.find(*key);
+
+        if (it != alias_map.end())
+            return &it->second;
+
+        if (lookup.lookup_context == IdentifierLookupContext::TABLE_EXPRESSION)
+            return {};
+
+        while (it == alias_map.end())
+        {
+            auto jt = transitive_aliases.find(*key);
+            if (jt == transitive_aliases.end())
+                return {};
+
+            key = &(getKey(jt->second, find_option));
+            it = alias_map.find(*key);
+        }
+
+        return &it->second;
+    }
+
+    const QueryTreeNodePtr * find(IdentifierLookup lookup, FindOption find_option) const
+    {
+        return const_cast<ScopeAliases *>(this)->find(lookup, find_option);
+    }
+};
+
+}
diff --git a/src/Analyzer/Resolve/TableExpressionData.h b/src/Analyzer/Resolve/TableExpressionData.h
new file mode 100644
index 00000000000..18cbfa32366
--- /dev/null
+++ b/src/Analyzer/Resolve/TableExpressionData.h
@@ -0,0 +1,83 @@
+#pragma once
+
+#include
+#include
+
+namespace DB
+{
+
+struct StringTransparentHash
+{
+    using is_transparent = void;
+    using hash = std::hash<std::string_view>;
+
+    [[maybe_unused]] size_t operator()(const char * data) const
+    {
+        return hash()(data);
+    }
+
+    size_t operator()(std::string_view data) const
+    {
+        return hash()(data);
+    }
+
+    size_t operator()(const std::string & data) const
+    {
+        return hash()(data);
+    }
+};
+
+using ColumnNameToColumnNodeMap = std::unordered_map<std::string, ColumnNodePtr, StringTransparentHash, std::equal_to<>>;
+
+struct AnalysisTableExpressionData
+{
+    std::string table_expression_name;
+    std::string table_expression_description;
+    std::string database_name;
+    std::string table_name;
+    bool should_qualify_columns = true;
+    NamesAndTypes column_names_and_types;
+    ColumnNameToColumnNodeMap column_name_to_column_node;
+    std::unordered_set<std::string> subcolumn_names; /// Subset of columns that are subcolumns of other columns
+    std::unordered_set<std::string, StringTransparentHash, std::equal_to<>> column_identifier_first_parts;
+
+    bool hasFullIdentifierName(IdentifierView identifier_view) const
+    {
+        return column_name_to_column_node.contains(identifier_view.getFullName());
+    }
+
+    bool canBindIdentifier(IdentifierView identifier_view) const
+    {
+        return column_identifier_first_parts.contains(identifier_view.at(0));
+    }
+
+    [[maybe_unused]] void dump(WriteBuffer & buffer) const
+    {
+        buffer << "Table expression name " << table_expression_name;
+
+        if (!table_expression_description.empty())
+            buffer << " table expression description " << table_expression_description;
+
+        if (!database_name.empty())
+            buffer << " database name " << database_name;
+
+        if (!table_name.empty())
+            buffer << " table name " << table_name;
+
+        buffer << " should qualify columns " << should_qualify_columns;
+        buffer << " columns size " << column_name_to_column_node.size() << '\n';
+
+        for (const auto & [column_name, column_node] : column_name_to_column_node)
+            buffer << "Column name " << column_name << " column node " << column_node->dumpTree() << '\n';
+    }
+
+    [[maybe_unused]] String dump() const
+    {
+        WriteBufferFromOwnString buffer;
+        dump(buffer);
+
+        return buffer.str();
+    }
+};
+
+}
diff --git a/src/Analyzer/Resolve/TableExpressionsAliasVisitor.h b/src/Analyzer/Resolve/TableExpressionsAliasVisitor.h
new file mode 100644
index 00000000000..cab79806465
--- /dev/null
+++ b/src/Analyzer/Resolve/TableExpressionsAliasVisitor.h
@@ -0,0 +1,71 @@
+#pragma once
+
+#include
+#include
+#include
+#include
+
+namespace DB
+{
+
+namespace ErrorCodes
+{
+    extern const int MULTIPLE_EXPRESSIONS_FOR_ALIAS;
+}
+
+class TableExpressionsAliasVisitor : public InDepthQueryTreeVisitor<TableExpressionsAliasVisitor>
+{
+public:
+    explicit TableExpressionsAliasVisitor(IdentifierResolveScope & scope_)
+        : scope(scope_)
+    {}
+
+    void visitImpl(QueryTreeNodePtr & node)
+    {
+        updateAliasesIfNeeded(node);
+    }
+
+    static bool needChildVisit(const QueryTreeNodePtr & node, const QueryTreeNodePtr & child)
+    {
+        auto node_type = node->getNodeType();
+
+        switch (node_type)
+        {
+            case QueryTreeNodeType::ARRAY_JOIN:
+            {
+                const auto & array_join_node = node->as<ArrayJoinNode &>();
+                return child.get() == array_join_node.getTableExpression().get();
+            }
+            case QueryTreeNodeType::JOIN:
+            {
+                const auto & join_node = node->as<JoinNode &>();
+                return child.get() == join_node.getLeftTableExpression().get() || child.get() == join_node.getRightTableExpression().get();
+            }
+            default:
+            {
+                break;
+            }
+        }
+
+        return false;
+    }
+
+private:
+    void updateAliasesIfNeeded(const QueryTreeNodePtr & node)
+    {
+        if (!node->hasAlias())
+            return;
+
+        const auto & node_alias = node->getAlias();
+        auto [_, inserted] = scope.aliases.alias_name_to_table_expression_node.emplace(node_alias, node);
+        if (!inserted)
+            throw Exception(ErrorCodes::MULTIPLE_EXPRESSIONS_FOR_ALIAS,
+                "Multiple table expressions with same alias {}. 
In scope {}", + node_alias, + scope.scope_node->formatASTForErrorMessage()); + } + + IdentifierResolveScope & scope; +}; + +} diff --git a/src/Backups/BackupIO_S3.cpp b/src/Backups/BackupIO_S3.cpp index 92e208ba464..92f086295a0 100644 --- a/src/Backups/BackupIO_S3.cpp +++ b/src/Backups/BackupIO_S3.cpp @@ -188,6 +188,7 @@ void BackupReaderS3::copyFileToDisk(const String & path_in_backup, size_t file_s fs::path(s3_uri.key) / path_in_backup, 0, file_size, + /* dest_s3_client= */ destination_disk->getS3StorageClient(), /* dest_bucket= */ blob_path[1], /* dest_key= */ blob_path[0], s3_settings.request_settings, @@ -252,18 +253,20 @@ void BackupWriterS3::copyFileFromDisk(const String & path_in_backup, DiskPtr src { LOG_TRACE(log, "Copying file {} from disk {} to S3", src_path, src_disk->getName()); copyS3File( - client, + src_disk->getS3StorageClient(), /* src_bucket */ blob_path[1], /* src_key= */ blob_path[0], start_pos, length, - s3_uri.bucket, - fs::path(s3_uri.key) / path_in_backup, + /* dest_s3_client= */ client, + /* dest_bucket= */ s3_uri.bucket, + /* dest_key= */ fs::path(s3_uri.key) / path_in_backup, s3_settings.request_settings, read_settings, blob_storage_log, {}, - threadPoolCallbackRunnerUnsafe(getBackupsIOThreadPool().get(), "BackupWriterS3")); + threadPoolCallbackRunnerUnsafe(getBackupsIOThreadPool().get(), "BackupWriterS3"), + /*for_disk_s3=*/false); return; /// copied! } } @@ -281,8 +284,9 @@ void BackupWriterS3::copyFile(const String & destination, const String & source, /* src_key= */ fs::path(s3_uri.key) / source, 0, size, - s3_uri.bucket, - fs::path(s3_uri.key) / destination, + /* dest_s3_client= */ client, + /* dest_bucket= */ s3_uri.bucket, + /* dest_key= */ fs::path(s3_uri.key) / destination, s3_settings.request_settings, read_settings, blob_storage_log, diff --git a/src/BridgeHelper/CatBoostLibraryBridgeHelper.h b/src/BridgeHelper/CatBoostLibraryBridgeHelper.h index 55dfd715f00..5d5c6d01705 100644 --- a/src/BridgeHelper/CatBoostLibraryBridgeHelper.h +++ b/src/BridgeHelper/CatBoostLibraryBridgeHelper.h @@ -14,8 +14,8 @@ namespace DB class CatBoostLibraryBridgeHelper final : public LibraryBridgeHelper { public: - static constexpr inline auto PING_HANDLER = "/catboost_ping"; - static constexpr inline auto MAIN_HANDLER = "/catboost_request"; + static constexpr auto PING_HANDLER = "/catboost_ping"; + static constexpr auto MAIN_HANDLER = "/catboost_request"; explicit CatBoostLibraryBridgeHelper( ContextPtr context_, @@ -38,11 +38,11 @@ protected: bool bridgeHandShake() override; private: - static constexpr inline auto CATBOOST_LIST_METHOD = "catboost_list"; - static constexpr inline auto CATBOOST_REMOVEMODEL_METHOD = "catboost_removeModel"; - static constexpr inline auto CATBOOST_REMOVEALLMODELS_METHOD = "catboost_removeAllModels"; - static constexpr inline auto CATBOOST_GETTREECOUNT_METHOD = "catboost_GetTreeCount"; - static constexpr inline auto CATBOOST_LIB_EVALUATE_METHOD = "catboost_libEvaluate"; + static constexpr auto CATBOOST_LIST_METHOD = "catboost_list"; + static constexpr auto CATBOOST_REMOVEMODEL_METHOD = "catboost_removeModel"; + static constexpr auto CATBOOST_REMOVEALLMODELS_METHOD = "catboost_removeAllModels"; + static constexpr auto CATBOOST_GETTREECOUNT_METHOD = "catboost_GetTreeCount"; + static constexpr auto CATBOOST_LIB_EVALUATE_METHOD = "catboost_libEvaluate"; Poco::URI createRequestURI(const String & method) const; diff --git a/src/BridgeHelper/ExternalDictionaryLibraryBridgeHelper.h b/src/BridgeHelper/ExternalDictionaryLibraryBridgeHelper.h index 
5632fd2a28e..63816aa63ef 100644 --- a/src/BridgeHelper/ExternalDictionaryLibraryBridgeHelper.h +++ b/src/BridgeHelper/ExternalDictionaryLibraryBridgeHelper.h @@ -25,8 +25,8 @@ public: String dict_attributes; }; - static constexpr inline auto PING_HANDLER = "/extdict_ping"; - static constexpr inline auto MAIN_HANDLER = "/extdict_request"; + static constexpr auto PING_HANDLER = "/extdict_ping"; + static constexpr auto MAIN_HANDLER = "/extdict_request"; ExternalDictionaryLibraryBridgeHelper(ContextPtr context_, const Block & sample_block, const Field & dictionary_id_, const LibraryInitData & library_data_); @@ -62,14 +62,14 @@ protected: ReadWriteBufferFromHTTP::OutStreamCallback getInitLibraryCallback() const; private: - static constexpr inline auto EXT_DICT_LIB_NEW_METHOD = "extDict_libNew"; - static constexpr inline auto EXT_DICT_LIB_CLONE_METHOD = "extDict_libClone"; - static constexpr inline auto EXT_DICT_LIB_DELETE_METHOD = "extDict_libDelete"; - static constexpr inline auto EXT_DICT_LOAD_ALL_METHOD = "extDict_loadAll"; - static constexpr inline auto EXT_DICT_LOAD_IDS_METHOD = "extDict_loadIds"; - static constexpr inline auto EXT_DICT_LOAD_KEYS_METHOD = "extDict_loadKeys"; - static constexpr inline auto EXT_DICT_IS_MODIFIED_METHOD = "extDict_isModified"; - static constexpr inline auto EXT_DICT_SUPPORTS_SELECTIVE_LOAD_METHOD = "extDict_supportsSelectiveLoad"; + static constexpr auto EXT_DICT_LIB_NEW_METHOD = "extDict_libNew"; + static constexpr auto EXT_DICT_LIB_CLONE_METHOD = "extDict_libClone"; + static constexpr auto EXT_DICT_LIB_DELETE_METHOD = "extDict_libDelete"; + static constexpr auto EXT_DICT_LOAD_ALL_METHOD = "extDict_loadAll"; + static constexpr auto EXT_DICT_LOAD_IDS_METHOD = "extDict_loadIds"; + static constexpr auto EXT_DICT_LOAD_KEYS_METHOD = "extDict_loadKeys"; + static constexpr auto EXT_DICT_IS_MODIFIED_METHOD = "extDict_isModified"; + static constexpr auto EXT_DICT_SUPPORTS_SELECTIVE_LOAD_METHOD = "extDict_supportsSelectiveLoad"; Poco::URI createRequestURI(const String & method) const; diff --git a/src/BridgeHelper/IBridgeHelper.h b/src/BridgeHelper/IBridgeHelper.h index 6812bd04a03..8ce1c0e143a 100644 --- a/src/BridgeHelper/IBridgeHelper.h +++ b/src/BridgeHelper/IBridgeHelper.h @@ -16,9 +16,9 @@ class IBridgeHelper: protected WithContext { public: - static constexpr inline auto DEFAULT_HOST = "127.0.0.1"; - static constexpr inline auto DEFAULT_FORMAT = "RowBinary"; - static constexpr inline auto PING_OK_ANSWER = "Ok."; + static constexpr auto DEFAULT_HOST = "127.0.0.1"; + static constexpr auto DEFAULT_FORMAT = "RowBinary"; + static constexpr auto PING_OK_ANSWER = "Ok."; static const inline std::string PING_METHOD = Poco::Net::HTTPRequest::HTTP_GET; static const inline std::string MAIN_METHOD = Poco::Net::HTTPRequest::HTTP_POST; diff --git a/src/BridgeHelper/LibraryBridgeHelper.h b/src/BridgeHelper/LibraryBridgeHelper.h index 8940f9d1c9e..0c56fe7a221 100644 --- a/src/BridgeHelper/LibraryBridgeHelper.h +++ b/src/BridgeHelper/LibraryBridgeHelper.h @@ -37,7 +37,7 @@ protected: Poco::URI createBaseURI() const override; - static constexpr inline size_t DEFAULT_PORT = 9012; + static constexpr size_t DEFAULT_PORT = 9012; const Poco::Util::AbstractConfiguration & config; LoggerPtr log; diff --git a/src/BridgeHelper/XDBCBridgeHelper.h b/src/BridgeHelper/XDBCBridgeHelper.h index b557e12b85b..5f4c7fd8381 100644 --- a/src/BridgeHelper/XDBCBridgeHelper.h +++ b/src/BridgeHelper/XDBCBridgeHelper.h @@ -52,12 +52,12 @@ class XDBCBridgeHelper : public IXDBCBridgeHelper { public: - 
static constexpr inline auto DEFAULT_PORT = BridgeHelperMixin::DEFAULT_PORT;
-    static constexpr inline auto PING_HANDLER = "/ping";
-    static constexpr inline auto MAIN_HANDLER = "/";
-    static constexpr inline auto COL_INFO_HANDLER = "/columns_info";
-    static constexpr inline auto IDENTIFIER_QUOTE_HANDLER = "/identifier_quote";
-    static constexpr inline auto SCHEMA_ALLOWED_HANDLER = "/schema_allowed";
+    static constexpr auto DEFAULT_PORT = BridgeHelperMixin::DEFAULT_PORT;
+    static constexpr auto PING_HANDLER = "/ping";
+    static constexpr auto MAIN_HANDLER = "/";
+    static constexpr auto COL_INFO_HANDLER = "/columns_info";
+    static constexpr auto IDENTIFIER_QUOTE_HANDLER = "/identifier_quote";
+    static constexpr auto SCHEMA_ALLOWED_HANDLER = "/schema_allowed";
 
     XDBCBridgeHelper(
         ContextPtr context_,
@@ -256,7 +256,7 @@ protected:
 struct JDBCBridgeMixin
 {
-    static constexpr inline auto DEFAULT_PORT = 9019;
+    static constexpr auto DEFAULT_PORT = 9019;
 
     static String configPrefix()
     {
@@ -287,7 +287,7 @@ struct JDBCBridgeMixin
 struct ODBCBridgeMixin
 {
-    static constexpr inline auto DEFAULT_PORT = 9018;
+    static constexpr auto DEFAULT_PORT = 9018;
 
     static String configPrefix()
    {
diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
index f2e10a27b75..2b5078111ee 100644
--- a/src/CMakeLists.txt
+++ b/src/CMakeLists.txt
@@ -215,6 +215,7 @@ add_object_library(clickhouse_databases_mysql Databases/MySQL)
 add_object_library(clickhouse_disks Disks)
 add_object_library(clickhouse_analyzer Analyzer)
 add_object_library(clickhouse_analyzer_passes Analyzer/Passes)
+add_object_library(clickhouse_analyzer_passes Analyzer/Resolve)
 add_object_library(clickhouse_planner Planner)
 add_object_library(clickhouse_interpreters Interpreters)
 add_object_library(clickhouse_interpreters_cache Interpreters/Cache)
diff --git a/src/Columns/ColumnSparse.cpp b/src/Columns/ColumnSparse.cpp
index 49947be312d..cecd956fb95 100644
--- a/src/Columns/ColumnSparse.cpp
+++ b/src/Columns/ColumnSparse.cpp
@@ -8,7 +8,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
@@ -323,7 +322,9 @@ ColumnPtr ColumnSparse::filter(const Filter & filt, ssize_t) const
     size_t res_offset = 0;
     auto offset_it = begin();
 
-    for (size_t i = 0; i < _size; ++i, ++offset_it)
+    /// Replace the `++offset_it` with `offset_it.increaseCurrentRow()` and `offset_it.increaseCurrentOffset()`
+    /// to remove the redundant `isDefault()` in `operator++` of `Iterator` and reuse the following `isDefault()`.
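+    /// Note: only the row counter advances on every iteration; the offset advances inside the
+    /// non-default branch below, where the isDefault() result has already been computed.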
+ for (size_t i = 0; i < _size; ++i, offset_it.increaseCurrentRow()) { if (!offset_it.isDefault()) { @@ -338,6 +339,7 @@ ColumnPtr ColumnSparse::filter(const Filter & filt, ssize_t) const { values_filter.push_back(0); } + offset_it.increaseCurrentOffset(); } else { diff --git a/src/Columns/ColumnSparse.h b/src/Columns/ColumnSparse.h index 7d3200da35f..12b2def7cf1 100644 --- a/src/Columns/ColumnSparse.h +++ b/src/Columns/ColumnSparse.h @@ -181,14 +181,16 @@ public: { public: Iterator(const PaddedPODArray & offsets_, size_t size_, size_t current_offset_, size_t current_row_) - : offsets(offsets_), size(size_), current_offset(current_offset_), current_row(current_row_) + : offsets(offsets_), offsets_size(offsets.size()), size(size_), current_offset(current_offset_), current_row(current_row_) { } - bool ALWAYS_INLINE isDefault() const { return current_offset == offsets.size() || current_row != offsets[current_offset]; } + bool ALWAYS_INLINE isDefault() const { return current_offset == offsets_size || current_row != offsets[current_offset]; } size_t ALWAYS_INLINE getValueIndex() const { return isDefault() ? 0 : current_offset + 1; } size_t ALWAYS_INLINE getCurrentRow() const { return current_row; } size_t ALWAYS_INLINE getCurrentOffset() const { return current_offset; } + size_t ALWAYS_INLINE increaseCurrentRow() { return ++current_row; } + size_t ALWAYS_INLINE increaseCurrentOffset() { return ++current_offset; } bool operator==(const Iterator & other) const { @@ -209,6 +211,7 @@ public: private: const PaddedPODArray & offsets; + const size_t offsets_size; const size_t size; size_t current_offset; size_t current_row; diff --git a/src/Common/CPUID.h b/src/Common/CPUID.h index d7a714ec5af..b49f7706904 100644 --- a/src/Common/CPUID.h +++ b/src/Common/CPUID.h @@ -69,9 +69,9 @@ union CPUInfo UInt32 edx; } registers; - inline explicit CPUInfo(UInt32 op) noexcept { cpuid(op, info); } + explicit CPUInfo(UInt32 op) noexcept { cpuid(op, info); } - inline CPUInfo(UInt32 op, UInt32 sub_op) noexcept { cpuid(op, sub_op, info); } + CPUInfo(UInt32 op, UInt32 sub_op) noexcept { cpuid(op, sub_op, info); } }; inline bool haveRDTSCP() noexcept diff --git a/src/Common/ColumnsHashingImpl.h b/src/Common/ColumnsHashingImpl.h index f74a56292ae..0e013decf1f 100644 --- a/src/Common/ColumnsHashingImpl.h +++ b/src/Common/ColumnsHashingImpl.h @@ -453,7 +453,7 @@ protected: /// Return the columns which actually contain the values of the keys. /// For a given key column, if it is nullable, we return its nested /// column. Otherwise we return the key column itself. 
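// Editorial sketch: a simplified, hypothetical model of the ColumnSparse iterator
// change above. A combined `operator++` must call `isDefault()` again to decide
// whether to advance the offset index; splitting the increment lets the loop body
// reuse its own `isDefault()` result. Names below are illustrative only.
#include <cstddef>
#include <vector>

struct SparseOffsetsCursor
{
    const std::vector<size_t> & offsets; /// rows that hold non-default values
    size_t current_offset = 0;
    size_t current_row = 0;

    bool isDefault() const { return current_offset == offsets.size() || current_row != offsets[current_offset]; }
    void increaseCurrentRow() { ++current_row; }
    void increaseCurrentOffset() { ++current_offset; }
};

/// The caller advances the row unconditionally and the offset only after it has
/// consumed a non-default value, so isDefault() runs once per row instead of twice.
inline size_t countNonDefault(SparseOffsetsCursor cursor, size_t size)
{
    size_t res = 0;
    for (size_t i = 0; i < size; ++i, cursor.increaseCurrentRow())
    {
        if (!cursor.isDefault())
        {
            ++res;
            cursor.increaseCurrentOffset();
        }
    }
    return res;
}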
- inline const ColumnRawPtrs & getActualColumns() const + const ColumnRawPtrs & getActualColumns() const { return actual_columns; } diff --git a/src/Common/CombinedCardinalityEstimator.h b/src/Common/CombinedCardinalityEstimator.h index 0e53755d773..132f00de8eb 100644 --- a/src/Common/CombinedCardinalityEstimator.h +++ b/src/Common/CombinedCardinalityEstimator.h @@ -292,13 +292,13 @@ private: } template - inline T & getContainer() + T & getContainer() { return *reinterpret_cast(address & mask); } template - inline const T & getContainer() const + const T & getContainer() const { return *reinterpret_cast(address & mask); } @@ -309,7 +309,7 @@ private: address |= static_cast(t); } - inline details::ContainerType getContainerType() const + details::ContainerType getContainerType() const { return static_cast(address & ~mask); } diff --git a/src/Common/CompactArray.h b/src/Common/CompactArray.h index 613dc3d0b90..7b2bd658d2e 100644 --- a/src/Common/CompactArray.h +++ b/src/Common/CompactArray.h @@ -116,7 +116,7 @@ public: /** Return the current cell number and the corresponding content. */ - inline std::pair get() const + std::pair get() const { if ((current_bucket_index == 0) || is_eof) throw Exception(ErrorCodes::NO_AVAILABLE_DATA, "No available data."); diff --git a/src/Common/CounterInFile.h b/src/Common/CounterInFile.h index 854bf7cc675..0a11e52be2c 100644 --- a/src/Common/CounterInFile.h +++ b/src/Common/CounterInFile.h @@ -37,7 +37,7 @@ namespace fs = std::filesystem; class CounterInFile { private: - static inline constexpr size_t SMALL_READ_WRITE_BUFFER_SIZE = 16; + static constexpr size_t SMALL_READ_WRITE_BUFFER_SIZE = 16; public: /// path - the name of the file, including the path diff --git a/src/Common/CurrentMetrics.cpp b/src/Common/CurrentMetrics.cpp index e73ac307a35..731c72d65f2 100644 --- a/src/Common/CurrentMetrics.cpp +++ b/src/Common/CurrentMetrics.cpp @@ -127,6 +127,9 @@ M(DestroyAggregatesThreads, "Number of threads in the thread pool for destroy aggregate states.") \ M(DestroyAggregatesThreadsActive, "Number of threads in the thread pool for destroy aggregate states running a task.") \ M(DestroyAggregatesThreadsScheduled, "Number of queued or active jobs in the thread pool for destroy aggregate states.") \ + M(ConcurrentHashJoinPoolThreads, "Number of threads in the thread pool for concurrent hash join.") \ + M(ConcurrentHashJoinPoolThreadsActive, "Number of threads in the thread pool for concurrent hash join running a task.") \ + M(ConcurrentHashJoinPoolThreadsScheduled, "Number of queued or active jobs in the thread pool for concurrent hash join.") \ M(HashedDictionaryThreads, "Number of threads in the HashedDictionary thread pool.") \ M(HashedDictionaryThreadsActive, "Number of threads in the HashedDictionary thread pool running a task.") \ M(HashedDictionaryThreadsScheduled, "Number of queued or active jobs in the HashedDictionary thread pool.") \ @@ -174,6 +177,11 @@ M(ObjectStorageAzureThreads, "Number of threads in the AzureObjectStorage thread pool.") \ M(ObjectStorageAzureThreadsActive, "Number of threads in the AzureObjectStorage thread pool running a task.") \ M(ObjectStorageAzureThreadsScheduled, "Number of queued or active jobs in the AzureObjectStorage thread pool.") \ + \ + M(DiskPlainRewritableAzureDirectoryMapSize, "Number of local-to-remote path entries in the 'plain_rewritable' in-memory map for AzureObjectStorage.") \ + M(DiskPlainRewritableLocalDirectoryMapSize, "Number of local-to-remote path entries in the 'plain_rewritable' in-memory map for 
LocalObjectStorage.") \ + M(DiskPlainRewritableS3DirectoryMapSize, "Number of local-to-remote path entries in the 'plain_rewritable' in-memory map for S3ObjectStorage.") \ + \ M(MergeTreePartsLoaderThreads, "Number of threads in the MergeTree parts loader thread pool.") \ M(MergeTreePartsLoaderThreadsActive, "Number of threads in the MergeTree parts loader thread pool running a task.") \ M(MergeTreePartsLoaderThreadsScheduled, "Number of queued or active jobs in the MergeTree parts loader thread pool.") \ diff --git a/src/Common/CurrentThread.h b/src/Common/CurrentThread.h index e2b627a7f29..53b61ba315f 100644 --- a/src/Common/CurrentThread.h +++ b/src/Common/CurrentThread.h @@ -64,7 +64,7 @@ public: static ProfileEvents::Counters & getProfileEvents(); inline ALWAYS_INLINE static MemoryTracker * getMemoryTracker() { - if (unlikely(!current_thread)) + if (!current_thread) [[unlikely]] return nullptr; return ¤t_thread->memory_tracker; } diff --git a/src/Common/DateLUTImpl.cpp b/src/Common/DateLUTImpl.cpp index 392ee64dcbf..c87d44a4b95 100644 --- a/src/Common/DateLUTImpl.cpp +++ b/src/Common/DateLUTImpl.cpp @@ -41,7 +41,6 @@ UInt8 getDayOfWeek(const cctz::civil_day & date) case cctz::weekday::saturday: return 6; case cctz::weekday::sunday: return 7; } - UNREACHABLE(); } inline cctz::time_point lookupTz(const cctz::time_zone & cctz_time_zone, const cctz::civil_day & date) diff --git a/src/Common/HashTable/FixedHashTable.h b/src/Common/HashTable/FixedHashTable.h index 49675aaafbc..9666706ba20 100644 --- a/src/Common/HashTable/FixedHashTable.h +++ b/src/Common/HashTable/FixedHashTable.h @@ -115,6 +115,12 @@ class FixedHashTable : private boost::noncopyable, protected Allocator, protecte { static constexpr size_t NUM_CELLS = 1ULL << (sizeof(Key) * 8); + /// We maintain min and max values inserted into the hash table to then limit the amount of cells to traverse to the [min; max] range. + /// Both values could be efficiently calculated only within `emplace` calls (and not when we populate the hash table in `read` method for example), so we update them only within `emplace` and track if any other method was called. + bool only_emplace_was_used_to_insert_data = true; + size_t min = NUM_CELLS - 1; + size_t max = 0; + protected: friend class const_iterator; friend class iterator; @@ -170,6 +176,8 @@ protected: /// Skip empty cells in the main buffer. const auto * buf_end = container->buf + container->NUM_CELLS; + if (container->canUseMinMaxOptimization()) + buf_end = container->buf + container->max + 1; while (ptr < buf_end && ptr->isZero(*container)) ++ptr; @@ -261,7 +269,7 @@ public: return true; } - inline const value_type & get() const + const value_type & get() const { if (!is_initialized || is_eof) throw DB::Exception(DB::ErrorCodes::NO_AVAILABLE_DATA, "No available data"); @@ -297,12 +305,7 @@ public: if (!buf) return end(); - const Cell * ptr = buf; - auto buf_end = buf + NUM_CELLS; - while (ptr < buf_end && ptr->isZero(*this)) - ++ptr; - - return const_iterator(this, ptr); + return const_iterator(this, firstPopulatedCell()); } const_iterator cbegin() const { return begin(); } @@ -312,18 +315,13 @@ public: if (!buf) return end(); - Cell * ptr = buf; - auto buf_end = buf + NUM_CELLS; - while (ptr < buf_end && ptr->isZero(*this)) - ++ptr; - - return iterator(this, ptr); + return iterator(this, const_cast(firstPopulatedCell())); } const_iterator end() const { /// Avoid UBSan warning about adding zero to nullptr. It is valid in C++20 (and earlier) but not valid in C. 
- return const_iterator(this, buf ? buf + NUM_CELLS : buf); + return const_iterator(this, buf ? lastPopulatedCell() : buf); } const_iterator cend() const @@ -333,7 +331,7 @@ public: iterator end() { - return iterator(this, buf ? buf + NUM_CELLS : buf); + return iterator(this, buf ? lastPopulatedCell() : buf); } @@ -350,6 +348,8 @@ public: new (&buf[x]) Cell(x, *this); inserted = true; + if (x < min) min = x; + if (x > max) max = x; this->increaseSize(); } @@ -377,6 +377,26 @@ public: bool ALWAYS_INLINE has(const Key & x) const { return !buf[x].isZero(*this); } bool ALWAYS_INLINE has(const Key &, size_t hash_value) const { return !buf[hash_value].isZero(*this); } + /// Decide whether to use the min/max optimization. `max < min` means the FixedHashTable is empty. The flag `only_emplace_was_used_to_insert_data` + /// tells whether the FixedHashTable was filled exclusively via `emplace()`, since only `emplace()` maintains `min` and `max`. + bool ALWAYS_INLINE canUseMinMaxOptimization() const { return ((max >= min) && only_emplace_was_used_to_insert_data); } + + const Cell * ALWAYS_INLINE firstPopulatedCell() const + { + const Cell * ptr = buf; + if (!canUseMinMaxOptimization()) + { + while (ptr < buf + NUM_CELLS && ptr->isZero(*this)) + ++ptr; + } + else + ptr = buf + min; + + return ptr; + } + + Cell * ALWAYS_INLINE lastPopulatedCell() const { return canUseMinMaxOptimization() ? buf + max + 1 : buf + NUM_CELLS; } + void write(DB::WriteBuffer & wb) const { Cell::State::write(wb); @@ -433,6 +453,7 @@ public: x.read(rb); new (&buf[place_value]) Cell(x, *this); } + only_emplace_was_used_to_insert_data = false; } void readText(DB::ReadBuffer & rb) @@ -455,6 +476,7 @@ public: x.readText(rb); new (&buf[place_value]) Cell(x, *this); } + only_emplace_was_used_to_insert_data = false; } size_t size() const { return this->getSize(buf, *this, NUM_CELLS); } @@ -493,7 +515,11 @@ public: } const Cell * data() const { return buf; } - Cell * data() { return buf; } + Cell * data() + { + only_emplace_was_used_to_insert_data = false; + return buf; + } #ifdef DBMS_HASH_MAP_COUNT_COLLISIONS size_t getCollisions() const { return 0; } diff --git a/src/Common/HashTable/HashTable.h b/src/Common/HashTable/HashTable.h index 9050b7ef6d7..a600f57b06a 100644 --- a/src/Common/HashTable/HashTable.h +++ b/src/Common/HashTable/HashTable.h @@ -844,7 +844,7 @@ public: return true; } - inline const value_type & get() const + const value_type & get() const { if (!is_initialized || is_eof) throw DB::Exception(DB::ErrorCodes::NO_AVAILABLE_DATA, "No available data"); diff --git a/src/Common/HashTable/PackedHashMap.h b/src/Common/HashTable/PackedHashMap.h index 0d25addb58e..72eb721b274 100644 --- a/src/Common/HashTable/PackedHashMap.h +++ b/src/Common/HashTable/PackedHashMap.h @@ -69,7 +69,7 @@ struct PackedHashMapCell : public HashMapCellvalue.first, state); } static bool isZero(const Key key, const State & /*state*/) { return ZeroTraits::check(key); } - static inline bool bitEqualsByValue(key_type a, key_type b) { return a == b; } + static bool bitEqualsByValue(key_type a, key_type b) { return a == b; } template auto get() const diff --git a/src/Common/HashTable/SmallTable.h b/src/Common/HashTable/SmallTable.h index 3229e4748ea..63a6b932dd0 100644 --- a/src/Common/HashTable/SmallTable.h +++ b/src/Common/HashTable/SmallTable.h @@ -112,7 +112,7 @@ public: return true; } - inline const value_type & get() const + const value_type & get() const { if (!is_initialized || is_eof) throw DB::Exception(DB::ErrorCodes::NO_AVAILABLE_DATA, "No available data"); diff --git 
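// Editorial sketch: a simplified, hypothetical model of the [min; max] bound
// added to FixedHashTable above. A direct-addressed table remembers the smallest
// and largest key ever emplaced, so iteration can skip the empty tails; any other
// insertion path would have to give the optimization up, which is what the
// only_emplace_was_used_to_insert_data flag tracks in the real code.
#include <array>
#include <cstddef>
#include <cstdint>

struct TinyFixedSetExample /// direct-addressed over uint8_t keys, 256 cells
{
    static constexpr size_t NUM_CELLS = 256;
    std::array<bool, NUM_CELLS> occupied{};
    size_t min = NUM_CELLS - 1; /// empty state is encoded as max < min
    size_t max = 0;

    void emplace(uint8_t key)
    {
        occupied[key] = true;
        if (key < min) min = key;
        if (key > max) max = key;
    }

    size_t countOccupied() const
    {
        if (max < min)
            return 0; /// nothing was inserted yet
        size_t res = 0;
        for (size_t i = min; i <= max; ++i) /// scan [min, max] instead of all 256 cells
            res += occupied[i];
        return res;
    }
};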
a/src/Common/HyperLogLogCounter.h b/src/Common/HyperLogLogCounter.h index bacd4cc7288..9b2b33dc918 100644 --- a/src/Common/HyperLogLogCounter.h +++ b/src/Common/HyperLogLogCounter.h @@ -128,13 +128,13 @@ public: { } - inline void update(UInt8 cur_rank, UInt8 new_rank) + void update(UInt8 cur_rank, UInt8 new_rank) { denominator -= static_cast(1.0) / (1ULL << cur_rank); denominator += static_cast(1.0) / (1ULL << new_rank); } - inline void update(UInt8 rank) + void update(UInt8 rank) { denominator += static_cast(1.0) / (1ULL << rank); } @@ -166,13 +166,13 @@ public: rank_count[0] = static_cast(initial_value); } - inline void update(UInt8 cur_rank, UInt8 new_rank) + void update(UInt8 cur_rank, UInt8 new_rank) { --rank_count[cur_rank]; ++rank_count[new_rank]; } - inline void update(UInt8 rank) + void update(UInt8 rank) { ++rank_count[rank]; } @@ -429,13 +429,13 @@ public: private: /// Extract subset of bits in [begin, end[ range. - inline HashValueType extractBitSequence(HashValueType val, UInt8 begin, UInt8 end) const + HashValueType extractBitSequence(HashValueType val, UInt8 begin, UInt8 end) const { return (val >> begin) & ((1ULL << (end - begin)) - 1); } /// Rank is number of trailing zeros. - inline UInt8 calculateRank(HashValueType val) const + UInt8 calculateRank(HashValueType val) const { if (unlikely(val == 0)) return max_rank; @@ -448,7 +448,7 @@ private: return zeros_plus_one; } - inline HashValueType getHash(Value key) const + HashValueType getHash(Value key) const { /// NOTE: this should be OK, since value is the same as key for HLL. return static_cast( @@ -496,7 +496,7 @@ private: throw Poco::Exception("Internal error", DB::ErrorCodes::LOGICAL_ERROR); } - inline double applyCorrection(double raw_estimate) const + double applyCorrection(double raw_estimate) const { double fixed_estimate; @@ -525,7 +525,7 @@ private: /// Correction used in HyperLogLog++ algorithm. /// Source: "HyperLogLog in Practice: Algorithmic Engineering of a State of The Art Cardinality Estimation Algorithm" /// (S. Heule et al., Proceedings of the EDBT 2013 Conference). - inline double applyBiasCorrection(double raw_estimate) const + double applyBiasCorrection(double raw_estimate) const { double fixed_estimate; @@ -540,7 +540,7 @@ private: /// Calculation of unique values using LinearCounting algorithm. /// Source: "A Linear-time Probabilistic Counting Algorithm for Database Applications" /// (Whang et al., ACM Trans. Database Syst., pp. 208-229, 1990). - inline double applyLinearCorrection(double raw_estimate) const + double applyLinearCorrection(double raw_estimate) const { double fixed_estimate; diff --git a/src/Common/IntervalKind.cpp b/src/Common/IntervalKind.cpp index 22c7db504c3..1548d5cf9a5 100644 --- a/src/Common/IntervalKind.cpp +++ b/src/Common/IntervalKind.cpp @@ -34,8 +34,6 @@ Int64 IntervalKind::toAvgNanoseconds() const default: return toAvgSeconds() * NANOSECONDS_PER_SECOND; } - - UNREACHABLE(); } Int32 IntervalKind::toAvgSeconds() const @@ -54,7 +52,6 @@ Int32 IntervalKind::toAvgSeconds() const case IntervalKind::Kind::Quarter: return 7889238; /// Exactly 1/4 of a year. 
case IntervalKind::Kind::Year: return 31556952; /// The average length of a Gregorian year is equal to 365.2425 days } - UNREACHABLE(); } Float64 IntervalKind::toSeconds() const @@ -80,7 +77,6 @@ Float64 IntervalKind::toSeconds() const default: throw Exception(ErrorCodes::BAD_ARGUMENTS, "Not possible to get precise number of seconds in non-precise interval"); } - UNREACHABLE(); } bool IntervalKind::isFixedLength() const @@ -99,7 +95,6 @@ bool IntervalKind::isFixedLength() const case IntervalKind::Kind::Quarter: case IntervalKind::Kind::Year: return false; } - UNREACHABLE(); } IntervalKind IntervalKind::fromAvgSeconds(Int64 num_seconds) @@ -141,7 +136,6 @@ const char * IntervalKind::toKeyword() const case IntervalKind::Kind::Quarter: return "QUARTER"; case IntervalKind::Kind::Year: return "YEAR"; } - UNREACHABLE(); } @@ -161,7 +155,6 @@ const char * IntervalKind::toLowercasedKeyword() const case IntervalKind::Kind::Quarter: return "quarter"; case IntervalKind::Kind::Year: return "year"; } - UNREACHABLE(); } @@ -192,7 +185,6 @@ const char * IntervalKind::toDateDiffUnit() const case IntervalKind::Kind::Year: return "year"; } - UNREACHABLE(); } @@ -223,7 +215,6 @@ const char * IntervalKind::toNameOfFunctionToIntervalDataType() const case IntervalKind::Kind::Year: return "toIntervalYear"; } - UNREACHABLE(); } @@ -257,7 +248,6 @@ const char * IntervalKind::toNameOfFunctionExtractTimePart() const case IntervalKind::Kind::Year: return "toYear"; } - UNREACHABLE(); } diff --git a/src/Common/IntervalTree.h b/src/Common/IntervalTree.h index fbd1de3197e..db7f5238921 100644 --- a/src/Common/IntervalTree.h +++ b/src/Common/IntervalTree.h @@ -23,7 +23,7 @@ struct Interval Interval(IntervalStorageType left_, IntervalStorageType right_) : left(left_), right(right_) { } - inline bool contains(IntervalStorageType point) const { return left <= point && point <= right; } + bool contains(IntervalStorageType point) const { return left <= point && point <= right; } }; template @@ -290,7 +290,7 @@ private: IntervalStorageType middle_element; - inline bool hasValue() const { return sorted_intervals_range_size != 0; } + bool hasValue() const { return sorted_intervals_range_size != 0; } }; using IntervalWithEmptyValue = Interval; @@ -585,7 +585,7 @@ private: } } - inline size_t findFirstIteratorNodeIndex() const + size_t findFirstIteratorNodeIndex() const { size_t nodes_size = nodes.size(); size_t result_index = 0; @@ -602,7 +602,7 @@ private: return result_index; } - inline size_t findLastIteratorNodeIndex() const + size_t findLastIteratorNodeIndex() const { if (unlikely(nodes.empty())) return 0; @@ -618,7 +618,7 @@ private: return result_index; } - inline void increaseIntervalsSize() + void increaseIntervalsSize() { /// Before the tree is built, we store the total number of intervals in our first node to allow tree iteration.
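// Editorial sketch: the pattern behind the UNREACHABLE() removals above. When a
// switch covers every enumerator, clang accepts the function without a trailing
// return or unreachable marker, and -Wswitch flags any enumerator added later
// (GCC is stricter about -Wreturn-type here, one reason the macro existed).
// Hypothetical enum and function, loosely mirroring IntervalKind::toKeyword:
#include <cstdint>

enum class UnitExample : uint8_t { Second, Minute, Hour };

const char * toKeywordExample(UnitExample unit)
{
    switch (unit)
    {
        case UnitExample::Second: return "SECOND";
        case UnitExample::Minute: return "MINUTE";
        case UnitExample::Hour: return "HOUR";
    }
    /// No UNREACHABLE() needed: every enumerator returns above, and -Wswitch
    /// keeps the coverage exhaustive as the enum evolves.
}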
++intervals_size; @@ -630,7 +630,7 @@ private: size_t intervals_size = 0; bool tree_is_built = false; - static inline const Interval & getInterval(const IntervalWithValue & interval_with_value) + static const Interval & getInterval(const IntervalWithValue & interval_with_value) { if constexpr (is_empty_value) return interval_with_value; @@ -639,7 +639,7 @@ private: } template - static inline bool callCallback(const IntervalWithValue & interval, IntervalCallback && callback) + static bool callCallback(const IntervalWithValue & interval, IntervalCallback && callback) { if constexpr (is_empty_value) return callback(interval); @@ -647,7 +647,7 @@ private: return callback(interval.first, interval.second); } - static inline void + static void intervalsToPoints(const std::vector & intervals, std::vector & temporary_points_storage) { for (const auto & interval_with_value : intervals) @@ -658,7 +658,7 @@ private: } } - static inline IntervalStorageType pointsMedian(std::vector & points) + static IntervalStorageType pointsMedian(std::vector & points) { size_t size = points.size(); size_t middle_element_index = size / 2; diff --git a/src/Common/JSONParsers/SimdJSONParser.h b/src/Common/JSONParsers/SimdJSONParser.h index a8594710d20..827d142266a 100644 --- a/src/Common/JSONParsers/SimdJSONParser.h +++ b/src/Common/JSONParsers/SimdJSONParser.h @@ -26,62 +26,62 @@ class SimdJSONBasicFormatter { public: explicit SimdJSONBasicFormatter(PaddedPODArray & buffer_) : buffer(buffer_) {} - inline void comma() { oneChar(','); } + void comma() { oneChar(','); } /** Start an array, prints [ **/ - inline void startArray() { oneChar('['); } + void startArray() { oneChar('['); } /** End an array, prints ] **/ - inline void endArray() { oneChar(']'); } + void endArray() { oneChar(']'); } /** Start an object, prints { **/ - inline void startObject() { oneChar('{'); } + void startObject() { oneChar('{'); } /** End an object, prints } **/ - inline void endObject() { oneChar('}'); } + void endObject() { oneChar('}'); } /** Prints a true **/ - inline void trueAtom() + void trueAtom() { const char * s = "true"; buffer.insert(s, s + 4); } /** Prints a false **/ - inline void falseAtom() + void falseAtom() { const char * s = "false"; buffer.insert(s, s + 5); } /** Prints a null **/ - inline void nullAtom() + void nullAtom() { const char * s = "null"; buffer.insert(s, s + 4); } /** Prints a number **/ - inline void number(int64_t x) + void number(int64_t x) { char number_buffer[24]; auto res = std::to_chars(number_buffer, number_buffer + sizeof(number_buffer), x); buffer.insert(number_buffer, res.ptr); } /** Prints a number **/ - inline void number(uint64_t x) + void number(uint64_t x) { char number_buffer[24]; auto res = std::to_chars(number_buffer, number_buffer + sizeof(number_buffer), x); buffer.insert(number_buffer, res.ptr); } /** Prints a number **/ - inline void number(double x) + void number(double x) { char number_buffer[24]; auto res = std::to_chars(number_buffer, number_buffer + sizeof(number_buffer), x); buffer.insert(number_buffer, res.ptr); } /** Prints a key (string + colon) **/ - inline void key(std::string_view unescaped) + void key(std::string_view unescaped) { string(unescaped); oneChar(':'); } /** Prints a string. The string is escaped as needed.
**/ - inline void string(std::string_view unescaped) + void string(std::string_view unescaped) { oneChar('\"'); size_t i = 0; @@ -165,7 +165,7 @@ public: oneChar('\"'); } - inline void oneChar(char c) + void oneChar(char c) { buffer.push_back(c); } @@ -182,7 +182,7 @@ class SimdJSONElementFormatter public: explicit SimdJSONElementFormatter(PaddedPODArray & buffer_) : format(buffer_) {} /** Append an element to the builder (to be printed) **/ - inline void append(simdjson::dom::element value) + void append(simdjson::dom::element value) { switch (value.type()) { @@ -224,7 +224,7 @@ public: } } /** Append an array to the builder (to be printed) **/ - inline void append(simdjson::dom::array value) + void append(simdjson::dom::array value) { format.startArray(); auto iter = value.begin(); @@ -241,7 +241,7 @@ public: format.endArray(); } - inline void append(simdjson::dom::object value) + void append(simdjson::dom::object value) { format.startObject(); auto pair = value.begin(); @@ -258,7 +258,7 @@ public: format.endObject(); } - inline void append(simdjson::dom::key_value_pair kv) + void append(simdjson::dom::key_value_pair kv) { format.key(kv.key); append(kv.value); diff --git a/src/Common/NamedCollections/NamedCollectionUtils.cpp b/src/Common/NamedCollections/NamedCollectionUtils.cpp index 21fa9b64c22..5dbdeb10795 100644 --- a/src/Common/NamedCollections/NamedCollectionUtils.cpp +++ b/src/Common/NamedCollections/NamedCollectionUtils.cpp @@ -16,6 +16,7 @@ #include #include #include +#include #include diff --git a/src/Common/NamedCollections/NamedCollections.cpp b/src/Common/NamedCollections/NamedCollections.cpp index 6ee47fd6523..04d2099f4df 100644 --- a/src/Common/NamedCollections/NamedCollections.cpp +++ b/src/Common/NamedCollections/NamedCollections.cpp @@ -6,7 +6,6 @@ #include #include #include -#include namespace DB @@ -14,170 +13,12 @@ namespace DB namespace ErrorCodes { - extern const int NAMED_COLLECTION_DOESNT_EXIST; - extern const int NAMED_COLLECTION_ALREADY_EXISTS; extern const int NAMED_COLLECTION_IS_IMMUTABLE; extern const int BAD_ARGUMENTS; } namespace Configuration = NamedCollectionConfiguration; - -NamedCollectionFactory & NamedCollectionFactory::instance() -{ - static NamedCollectionFactory instance; - return instance; -} - -bool NamedCollectionFactory::exists(const std::string & collection_name) const -{ - std::lock_guard lock(mutex); - return existsUnlocked(collection_name, lock); -} - -bool NamedCollectionFactory::existsUnlocked( - const std::string & collection_name, - std::lock_guard & /* lock */) const -{ - return loaded_named_collections.contains(collection_name); -} - -NamedCollectionPtr NamedCollectionFactory::get(const std::string & collection_name) const -{ - std::lock_guard lock(mutex); - auto collection = tryGetUnlocked(collection_name, lock); - if (!collection) - { - throw Exception( - ErrorCodes::NAMED_COLLECTION_DOESNT_EXIST, - "There is no named collection `{}`", - collection_name); - } - return collection; -} - -NamedCollectionPtr NamedCollectionFactory::tryGet(const std::string & collection_name) const -{ - std::lock_guard lock(mutex); - return tryGetUnlocked(collection_name, lock); -} - -MutableNamedCollectionPtr NamedCollectionFactory::getMutable( - const std::string & collection_name) const -{ - std::lock_guard lock(mutex); - auto collection = tryGetUnlocked(collection_name, lock); - if (!collection) - { - throw Exception( - ErrorCodes::NAMED_COLLECTION_DOESNT_EXIST, - "There is no named collection `{}`", - collection_name); - } - else if 
(!collection->isMutable()) - { - throw Exception( - ErrorCodes::NAMED_COLLECTION_IS_IMMUTABLE, - "Cannot get collection `{}` for modification, " - "because collection was defined as immutable", - collection_name); - } - return collection; -} - -MutableNamedCollectionPtr NamedCollectionFactory::tryGetUnlocked( - const std::string & collection_name, - std::lock_guard & /* lock */) const -{ - auto it = loaded_named_collections.find(collection_name); - if (it == loaded_named_collections.end()) - return nullptr; - return it->second; -} - -void NamedCollectionFactory::add( - const std::string & collection_name, - MutableNamedCollectionPtr collection) -{ - std::lock_guard lock(mutex); - addUnlocked(collection_name, collection, lock); -} - -void NamedCollectionFactory::add(NamedCollectionsMap collections) -{ - std::lock_guard lock(mutex); - for (const auto & [collection_name, collection] : collections) - addUnlocked(collection_name, collection, lock); -} - -void NamedCollectionFactory::addUnlocked( - const std::string & collection_name, - MutableNamedCollectionPtr collection, - std::lock_guard & /* lock */) -{ - auto [it, inserted] = loaded_named_collections.emplace(collection_name, collection); - if (!inserted) - { - throw Exception( - ErrorCodes::NAMED_COLLECTION_ALREADY_EXISTS, - "A named collection `{}` already exists", - collection_name); - } -} - -void NamedCollectionFactory::remove(const std::string & collection_name) -{ - std::lock_guard lock(mutex); - bool removed = removeIfExistsUnlocked(collection_name, lock); - if (!removed) - { - throw Exception( - ErrorCodes::NAMED_COLLECTION_DOESNT_EXIST, - "There is no named collection `{}`", - collection_name); - } -} - -void NamedCollectionFactory::removeIfExists(const std::string & collection_name) -{ - std::lock_guard lock(mutex); - removeIfExistsUnlocked(collection_name, lock); // NOLINT -} - -bool NamedCollectionFactory::removeIfExistsUnlocked( - const std::string & collection_name, - std::lock_guard & lock) -{ - auto collection = tryGetUnlocked(collection_name, lock); - if (!collection) - return false; - - if (!collection->isMutable()) - { - throw Exception( - ErrorCodes::NAMED_COLLECTION_IS_IMMUTABLE, - "Cannot get collection `{}` for modification, " - "because collection was defined as immutable", - collection_name); - } - loaded_named_collections.erase(collection_name); - return true; -} - -void NamedCollectionFactory::removeById(NamedCollectionUtils::SourceId id) -{ - std::lock_guard lock(mutex); - std::erase_if( - loaded_named_collections, - [&](const auto & value) { return value.second->getSourceId() == id; }); -} - -NamedCollectionsMap NamedCollectionFactory::getAll() const -{ - std::lock_guard lock(mutex); - return loaded_named_collections; -} - class NamedCollection::Impl { private: diff --git a/src/Common/NamedCollections/NamedCollections.h b/src/Common/NamedCollections/NamedCollections.h index de27f4e6083..c253c56594f 100644 --- a/src/Common/NamedCollections/NamedCollections.h +++ b/src/Common/NamedCollections/NamedCollections.h @@ -93,59 +93,4 @@ private: mutable std::mutex mutex; }; -/** - * A factory of immutable named collections. 
- */ -class NamedCollectionFactory : boost::noncopyable -{ -public: - static NamedCollectionFactory & instance(); - - bool exists(const std::string & collection_name) const; - - NamedCollectionPtr get(const std::string & collection_name) const; - - NamedCollectionPtr tryGet(const std::string & collection_name) const; - - MutableNamedCollectionPtr getMutable(const std::string & collection_name) const; - - void add(const std::string & collection_name, MutableNamedCollectionPtr collection); - - void add(NamedCollectionsMap collections); - - void update(NamedCollectionsMap collections); - - void remove(const std::string & collection_name); - - void removeIfExists(const std::string & collection_name); - - void removeById(NamedCollectionUtils::SourceId id); - - NamedCollectionsMap getAll() const; - -private: - bool existsUnlocked( - const std::string & collection_name, - std::lock_guard & lock) const; - - MutableNamedCollectionPtr tryGetUnlocked( - const std::string & collection_name, - std::lock_guard & lock) const; - - void addUnlocked( - const std::string & collection_name, - MutableNamedCollectionPtr collection, - std::lock_guard & lock); - - bool removeIfExistsUnlocked( - const std::string & collection_name, - std::lock_guard & lock); - - mutable NamedCollectionsMap loaded_named_collections; - - mutable std::mutex mutex; - bool is_initialized = false; -}; - - } diff --git a/src/Common/NamedCollections/NamedCollectionsFactory.cpp b/src/Common/NamedCollections/NamedCollectionsFactory.cpp new file mode 100644 index 00000000000..dd69952429f --- /dev/null +++ b/src/Common/NamedCollections/NamedCollectionsFactory.cpp @@ -0,0 +1,169 @@ +#include +#include + +namespace DB +{ + +namespace ErrorCodes +{ + extern const int NAMED_COLLECTION_DOESNT_EXIST; + extern const int NAMED_COLLECTION_ALREADY_EXISTS; + extern const int NAMED_COLLECTION_IS_IMMUTABLE; +} + +NamedCollectionFactory & NamedCollectionFactory::instance() +{ + static NamedCollectionFactory instance; + return instance; +} + +bool NamedCollectionFactory::exists(const std::string & collection_name) const +{ + std::lock_guard lock(mutex); + return existsUnlocked(collection_name, lock); +} + +bool NamedCollectionFactory::existsUnlocked( + const std::string & collection_name, + std::lock_guard & /* lock */) const +{ + return loaded_named_collections.contains(collection_name); +} + +NamedCollectionPtr NamedCollectionFactory::get(const std::string & collection_name) const +{ + std::lock_guard lock(mutex); + auto collection = tryGetUnlocked(collection_name, lock); + if (!collection) + { + throw Exception( + ErrorCodes::NAMED_COLLECTION_DOESNT_EXIST, + "There is no named collection `{}`", + collection_name); + } + return collection; +} + +NamedCollectionPtr NamedCollectionFactory::tryGet(const std::string & collection_name) const +{ + std::lock_guard lock(mutex); + return tryGetUnlocked(collection_name, lock); +} + +MutableNamedCollectionPtr NamedCollectionFactory::getMutable( + const std::string & collection_name) const +{ + std::lock_guard lock(mutex); + auto collection = tryGetUnlocked(collection_name, lock); + if (!collection) + { + throw Exception( + ErrorCodes::NAMED_COLLECTION_DOESNT_EXIST, + "There is no named collection `{}`", + collection_name); + } + else if (!collection->isMutable()) + { + throw Exception( + ErrorCodes::NAMED_COLLECTION_IS_IMMUTABLE, + "Cannot get collection `{}` for modification, " + "because collection was defined as immutable", + collection_name); + } + return collection; +} + +MutableNamedCollectionPtr 
NamedCollectionFactory::tryGetUnlocked( + const std::string & collection_name, + std::lock_guard & /* lock */) const +{ + auto it = loaded_named_collections.find(collection_name); + if (it == loaded_named_collections.end()) + return nullptr; + return it->second; +} + +void NamedCollectionFactory::add( + const std::string & collection_name, + MutableNamedCollectionPtr collection) +{ + std::lock_guard lock(mutex); + addUnlocked(collection_name, collection, lock); +} + +void NamedCollectionFactory::add(NamedCollectionsMap collections) +{ + std::lock_guard lock(mutex); + for (const auto & [collection_name, collection] : collections) + addUnlocked(collection_name, collection, lock); +} + +void NamedCollectionFactory::addUnlocked( + const std::string & collection_name, + MutableNamedCollectionPtr collection, + std::lock_guard & /* lock */) +{ + auto [it, inserted] = loaded_named_collections.emplace(collection_name, collection); + if (!inserted) + { + throw Exception( + ErrorCodes::NAMED_COLLECTION_ALREADY_EXISTS, + "A named collection `{}` already exists", + collection_name); + } +} + +void NamedCollectionFactory::remove(const std::string & collection_name) +{ + std::lock_guard lock(mutex); + bool removed = removeIfExistsUnlocked(collection_name, lock); + if (!removed) + { + throw Exception( + ErrorCodes::NAMED_COLLECTION_DOESNT_EXIST, + "There is no named collection `{}`", + collection_name); + } +} + +void NamedCollectionFactory::removeIfExists(const std::string & collection_name) +{ + std::lock_guard lock(mutex); + removeIfExistsUnlocked(collection_name, lock); // NOLINT +} + +bool NamedCollectionFactory::removeIfExistsUnlocked( + const std::string & collection_name, + std::lock_guard & lock) +{ + auto collection = tryGetUnlocked(collection_name, lock); + if (!collection) + return false; + + if (!collection->isMutable()) + { + throw Exception( + ErrorCodes::NAMED_COLLECTION_IS_IMMUTABLE, + "Cannot get collection `{}` for modification, " + "because collection was defined as immutable", + collection_name); + } + loaded_named_collections.erase(collection_name); + return true; +} + +void NamedCollectionFactory::removeById(NamedCollectionUtils::SourceId id) +{ + std::lock_guard lock(mutex); + std::erase_if( + loaded_named_collections, + [&](const auto & value) { return value.second->getSourceId() == id; }); +} + +NamedCollectionsMap NamedCollectionFactory::getAll() const +{ + std::lock_guard lock(mutex); + return loaded_named_collections; +} + +} diff --git a/src/Common/NamedCollections/NamedCollectionsFactory.h b/src/Common/NamedCollections/NamedCollectionsFactory.h new file mode 100644 index 00000000000..2d64a03bde3 --- /dev/null +++ b/src/Common/NamedCollections/NamedCollectionsFactory.h @@ -0,0 +1,58 @@ +#pragma once +#include + +namespace DB +{ + +class NamedCollectionFactory : boost::noncopyable +{ +public: + static NamedCollectionFactory & instance(); + + bool exists(const std::string & collection_name) const; + + NamedCollectionPtr get(const std::string & collection_name) const; + + NamedCollectionPtr tryGet(const std::string & collection_name) const; + + MutableNamedCollectionPtr getMutable(const std::string & collection_name) const; + + void add(const std::string & collection_name, MutableNamedCollectionPtr collection); + + void add(NamedCollectionsMap collections); + + void update(NamedCollectionsMap collections); + + void remove(const std::string & collection_name); + + void removeIfExists(const std::string & collection_name); + + void removeById(NamedCollectionUtils::SourceId id); + + 
NamedCollectionsMap getAll() const; + +private: + bool existsUnlocked( + const std::string & collection_name, + std::lock_guard & lock) const; + + MutableNamedCollectionPtr tryGetUnlocked( + const std::string & collection_name, + std::lock_guard & lock) const; + + void addUnlocked( + const std::string & collection_name, + MutableNamedCollectionPtr collection, + std::lock_guard & lock); + + bool removeIfExistsUnlocked( + const std::string & collection_name, + std::lock_guard & lock); + + mutable NamedCollectionsMap loaded_named_collections; + + mutable std::mutex mutex; + bool is_initialized = false; +}; + +} diff --git a/src/Common/PODArray.h b/src/Common/PODArray.h index b4069027ad1..ece5114a998 100644 --- a/src/Common/PODArray.h +++ b/src/Common/PODArray.h @@ -284,7 +284,7 @@ public: } template - inline void assertNotIntersects(It1 from_begin [[maybe_unused]], It2 from_end [[maybe_unused]]) + void assertNotIntersects(It1 from_begin [[maybe_unused]], It2 from_end [[maybe_unused]]) { #if !defined(NDEBUG) const char * ptr_begin = reinterpret_cast(&*from_begin); diff --git a/src/Common/PoolBase.h b/src/Common/PoolBase.h index d6fc1656eca..fb0c75e7c95 100644 --- a/src/Common/PoolBase.h +++ b/src/Common/PoolBase.h @@ -174,7 +174,7 @@ public: items.emplace_back(std::make_shared(allocObject(), *this)); } - inline size_t size() + size_t size() { std::lock_guard lock(mutex); return items.size(); diff --git a/src/Common/ProfileEvents.cpp b/src/Common/ProfileEvents.cpp index 8c8e2163aad..f73e16c517d 100644 --- a/src/Common/ProfileEvents.cpp +++ b/src/Common/ProfileEvents.cpp @@ -195,6 +195,8 @@ M(SelectedMarks, "Number of marks (index granules) selected to read from a MergeTree table.") \ M(SelectedRows, "Number of rows SELECTed from all tables.") \ M(SelectedBytes, "Number of bytes (uncompressed; for columns as they stored in memory) SELECTed from all tables.") \ + M(RowsReadByMainReader, "Number of rows read from MergeTree tables by the main reader (after PREWHERE step).") \ + M(RowsReadByPrewhereReaders, "Number of rows read from MergeTree tables (in total) by prewhere readers.") \ \ M(WaitMarksLoadMicroseconds, "Time spent loading marks") \ M(BackgroundLoadingMarksTasks, "Number of background tasks for loading marks") \ @@ -417,6 +419,13 @@ The server successfully detected this situation and will download merged part fr M(DiskS3PutObject, "Number of DiskS3 API PutObject calls.") \ M(DiskS3GetObject, "Number of DiskS3 API GetObject calls.") \ \ + M(DiskPlainRewritableAzureDirectoryCreated, "Number of directories created by the 'plain_rewritable' metadata storage for AzureObjectStorage.") \ + M(DiskPlainRewritableAzureDirectoryRemoved, "Number of directories removed by the 'plain_rewritable' metadata storage for AzureObjectStorage.") \ + M(DiskPlainRewritableLocalDirectoryCreated, "Number of directories created by the 'plain_rewritable' metadata storage for LocalObjectStorage.") \ + M(DiskPlainRewritableLocalDirectoryRemoved, "Number of directories removed by the 'plain_rewritable' metadata storage for LocalObjectStorage.") \ + M(DiskPlainRewritableS3DirectoryCreated, "Number of directories created by the 'plain_rewritable' metadata storage for S3ObjectStorage.") \ + M(DiskPlainRewritableS3DirectoryRemoved, "Number of directories removed by the 'plain_rewritable' metadata storage for S3ObjectStorage.") \ + \ M(S3Clients, "Number of created S3 clients.") \ M(TinyS3Clients, "Number of S3 clients copies which reuse an existing auth provider from another client.") \ \ diff --git 
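// Editorial sketch: the `*Unlocked(..., std::lock_guard &)` methods moved above
// use a lock-token idiom worth calling out. Requiring a std::lock_guard reference
// makes it awkward to call the unlocked helper without actually holding the
// mutex; the parameter is never read. A minimal hypothetical registry with the
// same shape:
#include <map>
#include <mutex>
#include <string>

class RegistryExample
{
public:
    bool exists(const std::string & name) const
    {
        std::lock_guard lock(mutex); /// take the lock once at the public boundary
        return existsUnlocked(name, lock);
    }

private:
    /// The unnamed parameter only proves at the call site that `mutex` is held.
    bool existsUnlocked(const std::string & name, std::lock_guard<std::mutex> &) const
    {
        return entries.find(name) != entries.end();
    }

    mutable std::mutex mutex;
    std::map<std::string, int> entries;
};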
a/src/Common/RadixSort.h b/src/Common/RadixSort.h index a30e19d8212..238321ec76e 100644 --- a/src/Common/RadixSort.h +++ b/src/Common/RadixSort.h @@ -385,7 +385,7 @@ private: * PASS is counted from least significant (0), so the first pass is NUM_PASSES - 1. */ template - static inline void radixSortMSDInternal(Element * arr, size_t size, size_t limit) + static void radixSortMSDInternal(Element * arr, size_t size, size_t limit) { /// The beginning of every i-1-th bucket. 0th element will be equal to 1st. /// Last element will point to array end. @@ -528,7 +528,7 @@ private: // A helper to choose sorting algorithm based on array length template - static inline void radixSortMSDInternalHelper(Element * arr, size_t size, size_t limit) + static void radixSortMSDInternalHelper(Element * arr, size_t size, size_t limit) { if (size <= INSERTION_SORT_THRESHOLD) insertionSortInternal(arr, size); diff --git a/src/Common/SpaceSaving.h b/src/Common/SpaceSaving.h index 7a740ae6c9b..81ac4e71e8c 100644 --- a/src/Common/SpaceSaving.h +++ b/src/Common/SpaceSaving.h @@ -131,12 +131,12 @@ public: ~SpaceSaving() { destroyElements(); } - inline size_t size() const + size_t size() const { return counter_list.size(); } - inline size_t capacity() const + size_t capacity() const { return m_capacity; } diff --git a/src/Common/TargetSpecific.cpp b/src/Common/TargetSpecific.cpp index 49f396c0926..8540c9a9986 100644 --- a/src/Common/TargetSpecific.cpp +++ b/src/Common/TargetSpecific.cpp @@ -54,8 +54,6 @@ String toString(TargetArch arch) case TargetArch::AMXTILE: return "amxtile"; case TargetArch::AMXINT8: return "amxint8"; } - - UNREACHABLE(); } } diff --git a/src/Common/ThreadProfileEvents.cpp b/src/Common/ThreadProfileEvents.cpp index 6a63d484cd9..23b41f23bde 100644 --- a/src/Common/ThreadProfileEvents.cpp +++ b/src/Common/ThreadProfileEvents.cpp @@ -75,7 +75,6 @@ const char * TasksStatsCounters::metricsProviderString(MetricsProvider provider) case MetricsProvider::Netlink: return "netlink"; } - UNREACHABLE(); } bool TasksStatsCounters::checkIfAvailable() diff --git a/src/Common/ThreadProfileEvents.h b/src/Common/ThreadProfileEvents.h index 26aeab08302..0af3ccb4c80 100644 --- a/src/Common/ThreadProfileEvents.h +++ b/src/Common/ThreadProfileEvents.h @@ -107,7 +107,7 @@ struct RUsageCounters } private: - static inline UInt64 getClockMonotonic() + static UInt64 getClockMonotonic() { struct timespec ts; if (0 != clock_gettime(CLOCK_MONOTONIC, &ts)) diff --git a/src/Common/Volnitsky.h b/src/Common/Volnitsky.h index 3a148983790..3f8e1927493 100644 --- a/src/Common/Volnitsky.h +++ b/src/Common/Volnitsky.h @@ -54,16 +54,16 @@ namespace VolnitskyTraits /// min haystack size to use main algorithm instead of fallback static constexpr size_t min_haystack_size_for_algorithm = 20000; - static inline bool isFallbackNeedle(const size_t needle_size, size_t haystack_size_hint = 0) + static bool isFallbackNeedle(const size_t needle_size, size_t haystack_size_hint = 0) { return needle_size < 2 * sizeof(Ngram) || needle_size >= std::numeric_limits::max() || (haystack_size_hint && haystack_size_hint < min_haystack_size_for_algorithm); } - static inline Ngram toNGram(const UInt8 * const pos) { return unalignedLoad(pos); } + static Ngram toNGram(const UInt8 * const pos) { return unalignedLoad(pos); } template - static inline bool putNGramASCIICaseInsensitive(const UInt8 * pos, int offset, Callback && putNGramBase) + static bool putNGramASCIICaseInsensitive(const UInt8 * pos, int offset, Callback && putNGramBase) { struct Chars { @@ -115,7 
+115,7 @@ namespace VolnitskyTraits } template - static inline bool putNGramUTF8CaseInsensitive( + static bool putNGramUTF8CaseInsensitive( const UInt8 * pos, int offset, const UInt8 * begin, size_t size, Callback && putNGramBase) { const UInt8 * end = begin + size; @@ -349,7 +349,7 @@ namespace VolnitskyTraits } template - static inline bool putNGram(const UInt8 * pos, int offset, [[maybe_unused]] const UInt8 * begin, size_t size, Callback && putNGramBase) + static bool putNGram(const UInt8 * pos, int offset, [[maybe_unused]] const UInt8 * begin, size_t size, Callback && putNGramBase) { if constexpr (CaseSensitive) { @@ -580,7 +580,7 @@ public: return true; } - inline bool searchOne(const UInt8 * haystack, const UInt8 * haystack_end) const + bool searchOne(const UInt8 * haystack, const UInt8 * haystack_end) const { const size_t fallback_size = fallback_needles.size(); for (size_t i = 0; i < fallback_size; ++i) @@ -609,7 +609,7 @@ public: return false; } - inline size_t searchOneFirstIndex(const UInt8 * haystack, const UInt8 * haystack_end) const + size_t searchOneFirstIndex(const UInt8 * haystack, const UInt8 * haystack_end) const { const size_t fallback_size = fallback_needles.size(); @@ -647,7 +647,7 @@ public: } template - inline UInt64 searchOneFirstPosition(const UInt8 * haystack, const UInt8 * haystack_end, const CountCharsCallback & count_chars) const + UInt64 searchOneFirstPosition(const UInt8 * haystack, const UInt8 * haystack_end, const CountCharsCallback & count_chars) const { const size_t fallback_size = fallback_needles.size(); @@ -682,7 +682,7 @@ public: } template - inline void searchOneAll(const UInt8 * haystack, const UInt8 * haystack_end, AnsType * answer, const CountCharsCallback & count_chars) const + void searchOneAll(const UInt8 * haystack, const UInt8 * haystack_end, AnsType * answer, const CountCharsCallback & count_chars) const { const size_t fallback_size = fallback_needles.size(); for (size_t i = 0; i < fallback_size; ++i) diff --git a/src/Common/ZooKeeper/IKeeper.cpp b/src/Common/ZooKeeper/IKeeper.cpp index 7d2602bde1e..7cca262baca 100644 --- a/src/Common/ZooKeeper/IKeeper.cpp +++ b/src/Common/ZooKeeper/IKeeper.cpp @@ -146,8 +146,6 @@ const char * errorMessage(Error code) case Error::ZSESSIONMOVED: return "Session moved to another server, so operation is ignored"; case Error::ZNOTREADONLY: return "State-changing request is passed to read-only server"; } - - UNREACHABLE(); } bool isHardwareError(Error zk_return_code) diff --git a/src/Common/ZooKeeper/IKeeper.h b/src/Common/ZooKeeper/IKeeper.h index ec49c94808e..ddd30c4eef2 100644 --- a/src/Common/ZooKeeper/IKeeper.h +++ b/src/Common/ZooKeeper/IKeeper.h @@ -491,12 +491,12 @@ public: incrementErrorMetrics(code); } - inline static Exception createDeprecated(const std::string & msg, Error code_) + static Exception createDeprecated(const std::string & msg, Error code_) { return Exception(msg, code_, 0); } - inline static Exception fromPath(Error code_, const std::string & path) + static Exception fromPath(Error code_, const std::string & path) { return Exception(code_, "Coordination error: {}, path {}", errorMessage(code_), path); } @@ -504,7 +504,7 @@ public: /// Message must be a compile-time constant template requires std::is_convertible_v - inline static Exception fromMessage(Error code_, T && message) + static Exception fromMessage(Error code_, T && message) { return Exception(std::forward(message), code_); } diff --git a/src/Common/tests/gtest_named_collections.cpp b/src/Common/tests/gtest_named_collections.cpp 
index e2482f6ba8b..8a8a364961b 100644 --- a/src/Common/tests/gtest_named_collections.cpp +++ b/src/Common/tests/gtest_named_collections.cpp @@ -1,5 +1,5 @@ #include -#include +#include #include #include #include diff --git a/src/Compression/CompressionCodecDeflateQpl.cpp b/src/Compression/CompressionCodecDeflateQpl.cpp index 7e0653c69f8..f1b5b24e866 100644 --- a/src/Compression/CompressionCodecDeflateQpl.cpp +++ b/src/Compression/CompressionCodecDeflateQpl.cpp @@ -466,7 +466,6 @@ void CompressionCodecDeflateQpl::doDecompressData(const char * source, UInt32 so sw_codec->doDecompressData(source, source_size, dest, uncompressed_size); return; } - UNREACHABLE(); } void CompressionCodecDeflateQpl::flushAsynchronousDecompressRequests() diff --git a/src/Compression/CompressionCodecDoubleDelta.cpp b/src/Compression/CompressionCodecDoubleDelta.cpp index e6e8db4c699..cbd8cd57a62 100644 --- a/src/Compression/CompressionCodecDoubleDelta.cpp +++ b/src/Compression/CompressionCodecDoubleDelta.cpp @@ -21,6 +21,11 @@ namespace DB { +namespace ErrorCodes +{ + extern const int BAD_ARGUMENTS; +} + /** NOTE DoubleDelta is surprisingly bad name. The only excuse is that it comes from an academic paper. * Most people will think that "double delta" is just applying delta transform twice. * But in fact it is something more than applying delta transform twice. @@ -142,9 +147,9 @@ namespace ErrorCodes { extern const int CANNOT_COMPRESS; extern const int CANNOT_DECOMPRESS; - extern const int BAD_ARGUMENTS; extern const int ILLEGAL_SYNTAX_FOR_CODEC_TYPE; extern const int ILLEGAL_CODEC_PARAMETER; + extern const int LOGICAL_ERROR; } namespace @@ -163,9 +168,8 @@ inline Int64 getMaxValueForByteSize(Int8 byte_size) case sizeof(UInt64): return std::numeric_limits::max(); default: - assert(false && "only 1, 2, 4 and 8 data sizes are supported"); + throw Exception(ErrorCodes::LOGICAL_ERROR, "only 1, 2, 4 and 8 data sizes are supported"); } - UNREACHABLE(); } struct WriteSpec diff --git a/src/Coordination/KeeperReconfiguration.cpp b/src/Coordination/KeeperReconfiguration.cpp index e3642913a7a..05211af6704 100644 --- a/src/Coordination/KeeperReconfiguration.cpp +++ b/src/Coordination/KeeperReconfiguration.cpp @@ -5,6 +5,12 @@ namespace DB { + +namespace ErrorCodes +{ + extern const int LOGICAL_ERROR; +} + ClusterUpdateActions joiningToClusterUpdates(const ClusterConfigPtr & cfg, std::string_view joining) { ClusterUpdateActions out; @@ -79,7 +85,7 @@ String serializeClusterConfig(const ClusterConfigPtr & cfg, const ClusterUpdateA new_config.emplace_back(RaftServerConfig{*cfg->get_server(priority->id)}); } else - UNREACHABLE(); + throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected update"); } for (const auto & item : cfg->get_servers()) diff --git a/src/Coordination/KeeperServer.cpp b/src/Coordination/KeeperServer.cpp index 8d21ce2ab01..736a01443ce 100644 --- a/src/Coordination/KeeperServer.cpp +++ b/src/Coordination/KeeperServer.cpp @@ -990,7 +990,7 @@ KeeperServer::ConfigUpdateState KeeperServer::applyConfigUpdate( raft_instance->set_priority(update->id, update->priority, /*broadcast on live leader*/true); return Accepted; } - UNREACHABLE(); + std::unreachable(); } ClusterUpdateActions KeeperServer::getRaftConfigurationDiff(const Poco::Util::AbstractConfiguration & config) diff --git a/src/Coordination/Standalone/Context.cpp b/src/Coordination/Standalone/Context.cpp index 4b14b038852..2af8a015c2d 100644 --- a/src/Coordination/Standalone/Context.cpp +++ b/src/Coordination/Standalone/Context.cpp @@ -478,4 +478,9 @@ bool 
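// Editorial sketch: the hunks above replace UNREACHABLE() in two different ways,
// and the distinction matters. For a data-dependent default (DoubleDelta's byte
// size) a thrown logic error keeps defined behavior in release builds; for a
// branch that is provably dead (KeeperServer's exhaustive if-chain), the C++23
// std::unreachable() hint is acceptable, but reaching it is undefined behavior.
// Hypothetical, simplified versions of both:
#include <cstdint>
#include <stdexcept>
#include <utility>

inline std::int64_t maxValueForByteSizeExample(int byte_size)
{
    switch (byte_size)
    {
        case 1: return 0xFF;
        case 2: return 0xFFFF;
        case 4: return 0xFFFFFFFFLL;
        default:
            /// Depends on input data: prefer an exception over undefined behavior.
            throw std::logic_error("only 1, 2 and 4 byte sizes are supported in this sketch");
    }
}

enum class UpdateExample { Add, Remove };

inline const char * applyUpdateExample(UpdateExample update)
{
    if (update == UpdateExample::Add)
        return "added";
    if (update == UpdateExample::Remove)
        return "removed";
    std::unreachable(); /// both enumerators handled above; marks the tail as dead code
}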
Context::hasTraceCollector() const return false; } +bool Context::isBackgroundOperationContext() const +{ + return false; +} + } diff --git a/src/Coordination/Standalone/Context.h b/src/Coordination/Standalone/Context.h index 7e4d1794f7d..79a3e32a72d 100644 --- a/src/Coordination/Standalone/Context.h +++ b/src/Coordination/Standalone/Context.h @@ -170,6 +170,8 @@ public: const ServerSettings & getServerSettings() const; bool hasTraceCollector() const; + + bool isBackgroundOperationContext() const; }; } diff --git a/src/Core/Field.h b/src/Core/Field.h index 4424d669c4d..a78b589c883 100644 --- a/src/Core/Field.h +++ b/src/Core/Field.h @@ -667,8 +667,6 @@ public: case Types::AggregateFunctionState: return f(field.template get()); case Types::CustomType: return f(field.template get()); } - - UNREACHABLE(); } String dump() const; @@ -855,13 +853,13 @@ template <> struct Field::EnumToType { usi template <> struct Field::EnumToType { using Type = CustomType; }; template <> struct Field::EnumToType { using Type = UInt64; }; -inline constexpr bool isInt64OrUInt64FieldType(Field::Types::Which t) +constexpr bool isInt64OrUInt64FieldType(Field::Types::Which t) { return t == Field::Types::Int64 || t == Field::Types::UInt64; } -inline constexpr bool isInt64OrUInt64orBoolFieldType(Field::Types::Which t) +constexpr bool isInt64OrUInt64orBoolFieldType(Field::Types::Which t) { return t == Field::Types::Int64 || t == Field::Types::UInt64 diff --git a/src/Core/Joins.h b/src/Core/Joins.h index ccdd6eefab7..96d2b51325c 100644 --- a/src/Core/Joins.h +++ b/src/Core/Joins.h @@ -19,16 +19,16 @@ enum class JoinKind : uint8_t const char * toString(JoinKind kind); -inline constexpr bool isLeft(JoinKind kind) { return kind == JoinKind::Left; } -inline constexpr bool isRight(JoinKind kind) { return kind == JoinKind::Right; } -inline constexpr bool isInner(JoinKind kind) { return kind == JoinKind::Inner; } -inline constexpr bool isFull(JoinKind kind) { return kind == JoinKind::Full; } -inline constexpr bool isCrossOrComma(JoinKind kind) { return kind == JoinKind::Comma || kind == JoinKind::Cross; } -inline constexpr bool isRightOrFull(JoinKind kind) { return kind == JoinKind::Right || kind == JoinKind::Full; } -inline constexpr bool isLeftOrFull(JoinKind kind) { return kind == JoinKind::Left || kind == JoinKind::Full; } -inline constexpr bool isInnerOrRight(JoinKind kind) { return kind == JoinKind::Inner || kind == JoinKind::Right; } -inline constexpr bool isInnerOrLeft(JoinKind kind) { return kind == JoinKind::Inner || kind == JoinKind::Left; } -inline constexpr bool isPaste(JoinKind kind) { return kind == JoinKind::Paste; } +constexpr bool isLeft(JoinKind kind) { return kind == JoinKind::Left; } +constexpr bool isRight(JoinKind kind) { return kind == JoinKind::Right; } +constexpr bool isInner(JoinKind kind) { return kind == JoinKind::Inner; } +constexpr bool isFull(JoinKind kind) { return kind == JoinKind::Full; } +constexpr bool isCrossOrComma(JoinKind kind) { return kind == JoinKind::Comma || kind == JoinKind::Cross; } +constexpr bool isRightOrFull(JoinKind kind) { return kind == JoinKind::Right || kind == JoinKind::Full; } +constexpr bool isLeftOrFull(JoinKind kind) { return kind == JoinKind::Left || kind == JoinKind::Full; } +constexpr bool isInnerOrRight(JoinKind kind) { return kind == JoinKind::Inner || kind == JoinKind::Right; } +constexpr bool isInnerOrLeft(JoinKind kind) { return kind == JoinKind::Inner || kind == JoinKind::Left; } +constexpr bool isPaste(JoinKind kind) { return kind == JoinKind::Paste; } /// 
Allows more optimal JOIN for typical cases. enum class JoinStrictness : uint8_t @@ -66,7 +66,7 @@ enum class ASOFJoinInequality : uint8_t const char * toString(ASOFJoinInequality asof_join_inequality); -inline constexpr ASOFJoinInequality getASOFJoinInequality(std::string_view func_name) +constexpr ASOFJoinInequality getASOFJoinInequality(std::string_view func_name) { ASOFJoinInequality inequality = ASOFJoinInequality::None; @@ -82,7 +82,7 @@ inline constexpr ASOFJoinInequality getASOFJoinInequality(std::string_view func_ return inequality; } -inline constexpr ASOFJoinInequality reverseASOFJoinInequality(ASOFJoinInequality inequality) +constexpr ASOFJoinInequality reverseASOFJoinInequality(ASOFJoinInequality inequality) { if (inequality == ASOFJoinInequality::Less) return ASOFJoinInequality::Greater; diff --git a/src/Core/Settings.h b/src/Core/Settings.h index 28b068b9e37..6e4a1eb6452 100644 --- a/src/Core/Settings.h +++ b/src/Core/Settings.h @@ -394,7 +394,7 @@ class IColumn; M(Bool, allow_experimental_analyzer, true, "Allow experimental analyzer.", 0) \ M(Bool, analyzer_compatibility_join_using_top_level_identifier, false, "Force to resolve identifier in JOIN USING from projection (for example, in `SELECT a + 1 AS b FROM t1 JOIN t2 USING (b)` join will be performed by `t1.a + 1 = t2.b`, rather than `t1.b = t2.b`).", 0) \ M(Bool, prefer_global_in_and_join, false, "If enabled, all IN/JOIN operators will be rewritten as GLOBAL IN/JOIN. It's useful when the to-be-joined tables are only available on the initiator and we need to always scatter their data on-the-fly during distributed processing with the GLOBAL keyword. It's also useful to reduce the need to access the external sources joining external tables.", 0) \ - M(Bool, enable_vertical_final, true, "If enable, remove duplicated rows during FINAL by marking rows as deleted and filtering them later instead of merging rows", 0) \ + M(Bool, enable_vertical_final, false, "Not recommended. If enabled, remove duplicated rows during FINAL by marking rows as deleted and filtering them later instead of merging rows", 0) \ \ \ /** Limits during query execution are part of the settings. \ @@ -925,7 +925,7 @@ class IColumn; M(Int64, ignore_cold_parts_seconds, 0, "Only available in ClickHouse Cloud. Exclude new data parts from SELECT queries until they're either pre-warmed (see cache_populated_by_fetch) or this many seconds old. Only for Replicated-/SharedMergeTree.", 0) \ M(Int64, prefer_warmed_unmerged_parts_seconds, 0, "Only available in ClickHouse Cloud. If a merged part is less than this many seconds old and is not pre-warmed (see cache_populated_by_fetch), but all its source parts are available and pre-warmed, SELECT queries will read from those parts instead. Only for ReplicatedMergeTree. Note that this only checks whether CacheWarmer processed the part; if the part was fetched into cache by something else, it'll still be considered cold until CacheWarmer gets to it; if it was warmed, then evicted from cache, it'll still be considered warm.", 0) \ M(Bool, iceberg_engine_ignore_schema_evolution, false, "Ignore schema evolution in Iceberg table engine and read all data using latest schema saved on table creation. 
Note that it can lead to incorrect result", 0) \ - M(Bool, allow_deprecated_functions, false, "Allow usage of deprecated functions", 0) \ + M(Bool, allow_deprecated_error_prone_window_functions, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)", 0) \ // End of COMMON_SETTINGS // Please add settings related to formats into the FORMAT_FACTORY_SETTINGS, move obsolete settings to OBSOLETE_SETTINGS and obsolete format settings to OBSOLETE_FORMAT_SETTINGS. diff --git a/src/Core/SettingsChangesHistory.h b/src/Core/SettingsChangesHistory.h index ecb4960a06a..1d664d08627 100644 --- a/src/Core/SettingsChangesHistory.h +++ b/src/Core/SettingsChangesHistory.h @@ -85,7 +85,8 @@ namespace SettingsChangesHistory /// It's used to implement `compatibility` setting (see https://github.com/ClickHouse/ClickHouse/issues/35972) static std::map settings_changes_history = { - {"24.6", {{"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"}, + {"24.6", {{"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."}, + {"hdfs_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in HDFS engine instead of empty query result"}, {"azure_throw_on_zero_files_match", false, false, "Allow to throw an error when ListObjects request cannot match any files in AzureBlobStorage engine instead of empty query result"}, {"s3_validate_request_settings", true, true, "Allow to disable S3 request settings validation"}, {"azure_skip_empty_files", false, false, "Allow to skip empty files in azure table engine"}, @@ -94,7 +95,7 @@ static std::map sett {"s3_ignore_file_doesnt_exist", false, false, "Allow to return 0 rows when the requested files don't exist instead of throwing an exception in S3 table engine"}, {"min_untracked_memory", 4_MiB, 4_KiB, "A new setting."}, }}, - {"24.5", {{"allow_deprecated_functions", true, false, "Allow usage of deprecated functions"}, + {"24.5", {{"allow_deprecated_error_prone_window_functions", true, false, "Allow usage of deprecated error prone window functions (neighbor, runningAccumulate, runningDifferenceStartingWithFirstValue, runningDifference)"}, {"allow_experimental_join_condition", false, false, "Support join with inequal conditions which involve columns from both left and right table. e.g. t1.y < t2.y."}, {"input_format_tsv_crlf_end_of_line", false, false, "Enables reading of CRLF line endings with TSV formats"}, {"output_format_parquet_use_custom_encoder", false, true, "Enable custom Parquet encoder."}, @@ -102,7 +103,6 @@ static std::map sett {"cross_join_min_bytes_to_compress", 0, 1_GiB, "Minimal size of block to compress in CROSS JOIN. Zero value means - disable this threshold. 
This block is compressed when any of the two thresholds (by rows or by bytes) are reached."}, {"http_max_chunk_size", 0, 0, "Internal limitation"}, {"prefer_external_sort_block_bytes", 0, DEFAULT_BLOCK_SIZE * 256, "Prefer maximum block bytes for external sort, reduce the memory usage during merging."}, - {"input_format_parquet_use_native_reader", false, false, "When reading Parquet files, to use native reader instead of arrow reader."}, {"input_format_force_null_for_omitted_fields", false, false, "Disable type-defaults for omitted fields when needed"}, {"cast_string_to_dynamic_use_inference", false, false, "Add setting to allow converting String to Dynamic through parsing"}, {"allow_experimental_dynamic_type", false, false, "Add new experimental Dynamic type"}, diff --git a/src/Daemon/BaseDaemon.h b/src/Daemon/BaseDaemon.h index a0f47c44460..3d34d404595 100644 --- a/src/Daemon/BaseDaemon.h +++ b/src/Daemon/BaseDaemon.h @@ -40,7 +40,7 @@ class BaseDaemon : public Poco::Util::ServerApplication, public Loggers friend class SignalListener; public: - static inline constexpr char DEFAULT_GRAPHITE_CONFIG_NAME[] = "graphite"; + static constexpr char DEFAULT_GRAPHITE_CONFIG_NAME[] = "graphite"; BaseDaemon(); ~BaseDaemon() override; diff --git a/src/DataTypes/DataTypeDecimalBase.h b/src/DataTypes/DataTypeDecimalBase.h index 642d2de833f..997c554059b 100644 --- a/src/DataTypes/DataTypeDecimalBase.h +++ b/src/DataTypes/DataTypeDecimalBase.h @@ -147,7 +147,7 @@ public: static T getScaleMultiplier(UInt32 scale); - inline DecimalUtils::DataTypeDecimalTrait getTrait() const + DecimalUtils::DataTypeDecimalTrait getTrait() const { return {precision, scale}; } diff --git a/src/DataTypes/Serializations/ISerialization.cpp b/src/DataTypes/Serializations/ISerialization.cpp index dbe27a5f3f6..bbb1d1a6cd1 100644 --- a/src/DataTypes/Serializations/ISerialization.cpp +++ b/src/DataTypes/Serializations/ISerialization.cpp @@ -36,7 +36,6 @@ String ISerialization::kindToString(Kind kind) case Kind::SPARSE: return "Sparse"; } - UNREACHABLE(); } ISerialization::Kind ISerialization::stringToKind(const String & str) diff --git a/src/DataTypes/Serializations/SerializationLowCardinality.cpp b/src/DataTypes/Serializations/SerializationLowCardinality.cpp index 2b88c5d68d6..40071c4607a 100644 --- a/src/DataTypes/Serializations/SerializationLowCardinality.cpp +++ b/src/DataTypes/Serializations/SerializationLowCardinality.cpp @@ -516,8 +516,14 @@ void SerializationLowCardinality::deserializeBinaryBulkWithMultipleStreams( size_t limit, DeserializeBinaryBulkSettings & settings, DeserializeBinaryBulkStatePtr & state, - SubstreamsCache * /* cache */) const + SubstreamsCache * cache) const { + if (auto cached_column = getFromSubstreamsCache(cache, settings.path)) + { + column = cached_column; + return; + } + auto mutable_column = column->assumeMutable(); ColumnLowCardinality & low_cardinality_column = typeid_cast(*mutable_column); @@ -671,6 +677,7 @@ void SerializationLowCardinality::deserializeBinaryBulkWithMultipleStreams( } column = std::move(mutable_column); + addToSubstreamsCache(cache, settings.path, column); } void SerializationLowCardinality::serializeBinary(const Field & field, WriteBuffer & ostr, const FormatSettings & settings) const diff --git a/src/Databases/DatabaseReplicated.cpp b/src/Databases/DatabaseReplicated.cpp index cc946fc22c4..f5aff604dcb 100644 --- a/src/Databases/DatabaseReplicated.cpp +++ b/src/Databases/DatabaseReplicated.cpp @@ -936,7 +936,7 @@ void DatabaseReplicated::recoverLostReplica(const ZooKeeperPtr & 
current_zookeep query_context->setSetting("allow_experimental_window_functions", 1); query_context->setSetting("allow_experimental_geo_types", 1); query_context->setSetting("allow_experimental_map_type", 1); - query_context->setSetting("allow_deprecated_functions", 1); + query_context->setSetting("allow_deprecated_error_prone_window_functions", 1); query_context->setSetting("allow_suspicious_low_cardinality_types", 1); query_context->setSetting("allow_suspicious_fixed_string_types", 1); diff --git a/src/Dictionaries/CacheDictionaryStorage.h b/src/Dictionaries/CacheDictionaryStorage.h index 01217c58e31..a960a916027 100644 --- a/src/Dictionaries/CacheDictionaryStorage.h +++ b/src/Dictionaries/CacheDictionaryStorage.h @@ -754,7 +754,7 @@ private: std::vector attributes; - inline void setCellDeadline(Cell & cell, TimePoint now) + void setCellDeadline(Cell & cell, TimePoint now) { if (configuration.lifetime.min_sec == 0 && configuration.lifetime.max_sec == 0) { @@ -774,7 +774,7 @@ private: cell.deadline = std::chrono::system_clock::to_time_t(deadline); } - inline size_t getCellIndex(const KeyType key) const + size_t getCellIndex(const KeyType key) const { const size_t hash = DefaultHash()(key); const size_t index = hash & size_overlap_mask; @@ -783,7 +783,7 @@ private: using KeyStateAndCellIndex = std::pair; - inline KeyStateAndCellIndex getKeyStateAndCellIndex(const KeyType key, const time_t now) const + KeyStateAndCellIndex getKeyStateAndCellIndex(const KeyType key, const time_t now) const { size_t place_value = getCellIndex(key); const size_t place_value_end = place_value + max_collision_length; @@ -810,7 +810,7 @@ private: return std::make_pair(KeyState::not_found, place_value & size_overlap_mask); } - inline size_t getCellIndexForInsert(const KeyType & key) const + size_t getCellIndexForInsert(const KeyType & key) const { size_t place_value = getCellIndex(key); const size_t place_value_end = place_value + max_collision_length; diff --git a/src/Dictionaries/DictionaryHelpers.h b/src/Dictionaries/DictionaryHelpers.h index 8bf190d3edc..64fc05e99ab 100644 --- a/src/Dictionaries/DictionaryHelpers.h +++ b/src/Dictionaries/DictionaryHelpers.h @@ -44,7 +44,7 @@ public: { } - inline bool isConstant() const { return default_values_column == nullptr; } + bool isConstant() const { return default_values_column == nullptr; } Field getDefaultValue(size_t row) const { @@ -450,17 +450,17 @@ public: keys_size = key_columns.front()->size(); } - inline size_t getKeysSize() const + size_t getKeysSize() const { return keys_size; } - inline size_t getCurrentKeyIndex() const + size_t getCurrentKeyIndex() const { return current_key_index; } - inline KeyType extractCurrentKey() + KeyType extractCurrentKey() { assert(current_key_index < keys_size); diff --git a/src/Dictionaries/Embedded/RegionsNames.h b/src/Dictionaries/Embedded/RegionsNames.h index 0053c74745a..0e4c1fe8b88 100644 --- a/src/Dictionaries/Embedded/RegionsNames.h +++ b/src/Dictionaries/Embedded/RegionsNames.h @@ -48,14 +48,14 @@ public: }; private: - static inline constexpr const char * languages[] = + static constexpr const char * languages[] = { #define M(NAME, FALLBACK, NUM) #NAME, FOR_EACH_LANGUAGE(M) #undef M }; - static inline constexpr Language fallbacks[] = + static constexpr Language fallbacks[] = { #define M(NAME, FALLBACK, NUM) Language::FALLBACK, FOR_EACH_LANGUAGE(M) diff --git a/src/Dictionaries/ICacheDictionaryStorage.h b/src/Dictionaries/ICacheDictionaryStorage.h index dcd7434946f..532154cd190 100644 --- 
a/src/Dictionaries/ICacheDictionaryStorage.h +++ b/src/Dictionaries/ICacheDictionaryStorage.h @@ -26,15 +26,15 @@ struct KeyState : state(state_) {} - inline bool isFound() const { return state == State::found; } - inline bool isExpired() const { return state == State::expired; } - inline bool isNotFound() const { return state == State::not_found; } - inline bool isDefault() const { return is_default; } - inline void setDefault() { is_default = true; } - inline void setDefaultValue(bool is_default_value) { is_default = is_default_value; } + bool isFound() const { return state == State::found; } + bool isExpired() const { return state == State::expired; } + bool isNotFound() const { return state == State::not_found; } + bool isDefault() const { return is_default; } + void setDefault() { is_default = true; } + void setDefaultValue(bool is_default_value) { is_default = is_default_value; } /// Valid only if keyState is found or expired - inline size_t getFetchedColumnIndex() const { return fetched_column_index; } - inline void setFetchedColumnIndex(size_t fetched_column_index_value) { fetched_column_index = fetched_column_index_value; } + size_t getFetchedColumnIndex() const { return fetched_column_index; } + void setFetchedColumnIndex(size_t fetched_column_index_value) { fetched_column_index = fetched_column_index_value; } private: State state = not_found; size_t fetched_column_index = 0; diff --git a/src/Dictionaries/IPAddressDictionary.cpp b/src/Dictionaries/IPAddressDictionary.cpp index 1bc6d16c932..a67118caaf8 100644 --- a/src/Dictionaries/IPAddressDictionary.cpp +++ b/src/Dictionaries/IPAddressDictionary.cpp @@ -66,7 +66,7 @@ namespace return buf; } - inline UInt8 prefixIPv6() const + UInt8 prefixIPv6() const { return isv6 ? prefix : prefix + 96; } diff --git a/src/Dictionaries/RegExpTreeDictionary.cpp b/src/Dictionaries/RegExpTreeDictionary.cpp index 2e93a8e6001..ab999202e42 100644 --- a/src/Dictionaries/RegExpTreeDictionary.cpp +++ b/src/Dictionaries/RegExpTreeDictionary.cpp @@ -474,7 +474,7 @@ public: } // Checks if no more values can be added for a given attribute - inline bool full(const String & attr_name, std::unordered_set * const defaults = nullptr) const + bool full(const String & attr_name, std::unordered_set * const defaults = nullptr) const { if (collect_values_limit) { @@ -490,7 +490,7 @@ public: } // Returns the number of full attributes - inline size_t attributesFull() const { return n_full_attributes; } + size_t attributesFull() const { return n_full_attributes; } }; std::pair processBackRefs(const String & data, const re2::RE2 & searcher, const std::vector & pieces) diff --git a/src/Dictionaries/SSDCacheDictionaryStorage.h b/src/Dictionaries/SSDCacheDictionaryStorage.h index f0b56cbf529..e96bdc4ac55 100644 --- a/src/Dictionaries/SSDCacheDictionaryStorage.h +++ b/src/Dictionaries/SSDCacheDictionaryStorage.h @@ -134,7 +134,7 @@ public: /// Reset block with new block_data /// block_data must be filled with zeroes if it is new block - inline void reset(char * new_block_data) + void reset(char * new_block_data) { block_data = new_block_data; current_block_offset = block_header_size; @@ -142,13 +142,13 @@ public: } /// Check if it is enough place to write key in block - inline bool enoughtPlaceToWriteKey(const SSDCacheSimpleKey & cache_key) const + bool enoughtPlaceToWriteKey(const SSDCacheSimpleKey & cache_key) const { return (current_block_offset + (sizeof(cache_key.key) + sizeof(cache_key.size) + cache_key.size)) <= block_size; } /// Check if it is enough place to write 
key in block - inline bool enoughtPlaceToWriteKey(const SSDCacheComplexKey & cache_key) const + bool enoughtPlaceToWriteKey(const SSDCacheComplexKey & cache_key) const { const StringRef & key = cache_key.key; size_t complex_key_size = sizeof(key.size) + key.size; @@ -159,7 +159,7 @@ public: /// Write key and returns offset in ssd cache block where data is written /// It is client responsibility to check if there is enough place in block to write key /// Returns true if key was written and false if there was not enough place to write key - inline bool writeKey(const SSDCacheSimpleKey & cache_key, size_t & offset_in_block) + bool writeKey(const SSDCacheSimpleKey & cache_key, size_t & offset_in_block) { assert(cache_key.size > 0); @@ -188,7 +188,7 @@ public: return true; } - inline bool writeKey(const SSDCacheComplexKey & cache_key, size_t & offset_in_block) + bool writeKey(const SSDCacheComplexKey & cache_key, size_t & offset_in_block) { assert(cache_key.size > 0); @@ -223,20 +223,20 @@ public: return true; } - inline size_t getKeysSize() const { return keys_size; } + size_t getKeysSize() const { return keys_size; } /// Write keys size into block header - inline void writeKeysSize() + void writeKeysSize() { char * keys_size_offset_data = block_data + block_header_check_sum_size; std::memcpy(keys_size_offset_data, &keys_size, sizeof(size_t)); } /// Get check sum from block header - inline size_t getCheckSum() const { return unalignedLoad(block_data); } + size_t getCheckSum() const { return unalignedLoad(block_data); } /// Calculate check sum in block - inline size_t calculateCheckSum() const + size_t calculateCheckSum() const { size_t calculated_check_sum = static_cast(CityHash_v1_0_2::CityHash64(block_data + block_header_check_sum_size, block_size - block_header_check_sum_size)); @@ -244,7 +244,7 @@ public: } /// Check if check sum from block header matched calculated check sum in block - inline bool checkCheckSum() const + bool checkCheckSum() const { size_t calculated_check_sum = calculateCheckSum(); size_t check_sum = getCheckSum(); @@ -253,16 +253,16 @@ public: } /// Write check sum in block header - inline void writeCheckSum() + void writeCheckSum() { size_t check_sum = static_cast(CityHash_v1_0_2::CityHash64(block_data + block_header_check_sum_size, block_size - block_header_check_sum_size)); std::memcpy(block_data, &check_sum, sizeof(size_t)); } - inline size_t getBlockSize() const { return block_size; } + size_t getBlockSize() const { return block_size; } /// Returns block data - inline char * getBlockData() const { return block_data; } + char * getBlockData() const { return block_data; } /// Read keys that were serialized in block /// It is client responsibility to ensure that simple or complex keys were written in block @@ -405,16 +405,16 @@ public: current_write_block.writeCheckSum(); } - inline char * getPlace(SSDCacheIndex index) const + char * getPlace(SSDCacheIndex index) const { return buffer.m_data + index.block_index * block_size + index.offset_in_block; } - inline size_t getCurrentBlockIndex() const { return current_block_index; } + size_t getCurrentBlockIndex() const { return current_block_index; } - inline const char * getData() const { return buffer.m_data; } + const char * getData() const { return buffer.m_data; } - inline size_t getSizeInBytes() const { return block_size * partition_blocks_size; } + size_t getSizeInBytes() const { return block_size * partition_blocks_size; } void readKeys(PaddedPODArray & keys) const { @@ -431,7 +431,7 @@ public: } } - inline void 
reset() + void reset() { current_block_index = 0; current_write_block.reset(buffer.m_data); @@ -750,9 +750,9 @@ public: } } - inline size_t getCurrentBlockIndex() const { return current_block_index; } + size_t getCurrentBlockIndex() const { return current_block_index; } - inline void reset() + void reset() { current_block_index = 0; } @@ -788,7 +788,7 @@ private: int fd = -1; }; - inline static int preallocateDiskSpace(int fd, size_t offset, size_t len) + static int preallocateDiskSpace(int fd, size_t offset, size_t len) { #if defined(OS_FREEBSD) return posix_fallocate(fd, offset, len); @@ -797,7 +797,7 @@ private: #endif } - inline static char * getRequestBuffer(const iocb & request) + static char * getRequestBuffer(const iocb & request) { char * result = nullptr; @@ -810,7 +810,7 @@ private: return result; } - inline static ssize_t eventResult(io_event & event) + static ssize_t eventResult(io_event & event) { ssize_t bytes_written; @@ -985,9 +985,9 @@ private: size_t in_memory_partition_index; CellState state; - inline bool isInMemory() const { return state == in_memory; } - inline bool isOnDisk() const { return state == on_disk; } - inline bool isDefaultValue() const { return state == default_value; } + bool isInMemory() const { return state == in_memory; } + bool isOnDisk() const { return state == on_disk; } + bool isDefaultValue() const { return state == default_value; } }; struct KeyToBlockOffset @@ -1366,7 +1366,7 @@ private: } } - inline void setCellDeadline(Cell & cell, TimePoint now) + void setCellDeadline(Cell & cell, TimePoint now) { if (configuration.lifetime.min_sec == 0 && configuration.lifetime.max_sec == 0) { @@ -1383,7 +1383,7 @@ private: cell.deadline = std::chrono::system_clock::to_time_t(deadline); } - inline void eraseKeyFromIndex(KeyType key) + void eraseKeyFromIndex(KeyType key) { auto it = index.find(key); diff --git a/src/Disks/DiskEncrypted.h b/src/Disks/DiskEncrypted.h index 27000dcc8af..9b575c65bce 100644 --- a/src/Disks/DiskEncrypted.h +++ b/src/Disks/DiskEncrypted.h @@ -350,6 +350,13 @@ public: return delegate; } +#if USE_AWS_S3 + std::shared_ptr getS3StorageClient() const override + { + return delegate->getS3StorageClient(); + } +#endif + private: String wrappedPath(const String & path) const { diff --git a/src/Disks/IDisk.h b/src/Disks/IDisk.h index 614fe413503..658acb01c74 100644 --- a/src/Disks/IDisk.h +++ b/src/Disks/IDisk.h @@ -14,7 +14,6 @@ #include #include -#include #include #include #include @@ -116,13 +115,18 @@ public: /// Default constructor. 
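The hunks just above and below thread a new S3 client accessor through the disk hierarchy: IDisk::getS3StorageClient() throws NOT_IMPLEMENTED by default, while wrapper disks such as DiskEncrypted simply forward to their delegate. A minimal sketch of that delegation pattern, using simplified stand-in names rather than the real ClickHouse types:

#include <memory>
#include <stdexcept>

struct S3Client {}; /// stand-in for the real S3 client type

struct Disk
{
    virtual ~Disk() = default;

    /// The base class refuses by default, mirroring the NOT_IMPLEMENTED throw in IDisk.
    virtual std::shared_ptr<S3Client> getS3StorageClient() const
    {
        throw std::runtime_error("getS3StorageClient() is not implemented for this disk type");
    }
};

struct WrappingDisk : Disk
{
    std::shared_ptr<Disk> delegate;

    /// Wrapper disks (encrypted, cached) forward to the disk they wrap.
    std::shared_ptr<S3Client> getS3StorageClient() const override { return delegate->getS3StorageClient(); }
};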
IDisk(const String & name_, const Poco::Util::AbstractConfiguration & config, const String & config_prefix) : name(name_) - , copying_thread_pool(CurrentMetrics::IDiskCopierThreads, CurrentMetrics::IDiskCopierThreadsActive, CurrentMetrics::IDiskCopierThreadsScheduled, config.getUInt(config_prefix + ".thread_pool_size", 16)) + , copying_thread_pool( + CurrentMetrics::IDiskCopierThreads, + CurrentMetrics::IDiskCopierThreadsActive, + CurrentMetrics::IDiskCopierThreadsScheduled, + config.getUInt(config_prefix + ".thread_pool_size", 16)) { } explicit IDisk(const String & name_) : name(name_) - , copying_thread_pool(CurrentMetrics::IDiskCopierThreads, CurrentMetrics::IDiskCopierThreadsActive, CurrentMetrics::IDiskCopierThreadsScheduled, 16) + , copying_thread_pool( + CurrentMetrics::IDiskCopierThreads, CurrentMetrics::IDiskCopierThreadsActive, CurrentMetrics::IDiskCopierThreadsScheduled, 16) { } @@ -466,6 +470,17 @@ public: virtual DiskPtr getDelegateDiskIfExists() const { return nullptr; } +#if USE_AWS_S3 + virtual std::shared_ptr getS3StorageClient() const + { + throw Exception( + ErrorCodes::NOT_IMPLEMENTED, + "Method getS3StorageClient() is not implemented for disk type: {}", + getDataSourceDescription().toString()); + } +#endif + + protected: friend class DiskDecorator; diff --git a/src/Disks/IO/CachedOnDiskReadBufferFromFile.cpp b/src/Disks/IO/CachedOnDiskReadBufferFromFile.cpp index 1fe369832ac..e9c642666d3 100644 --- a/src/Disks/IO/CachedOnDiskReadBufferFromFile.cpp +++ b/src/Disks/IO/CachedOnDiskReadBufferFromFile.cpp @@ -274,6 +274,11 @@ bool CachedOnDiskReadBufferFromFile::canStartFromCache(size_t current_offset, co return current_write_offset > current_offset; } +String CachedOnDiskReadBufferFromFile::toString(ReadType type) +{ + return String(magic_enum::enum_name(type)); +} + CachedOnDiskReadBufferFromFile::ImplementationBufferPtr CachedOnDiskReadBufferFromFile::getReadBufferForFileSegment(FileSegment & file_segment) { diff --git a/src/Disks/IO/CachedOnDiskReadBufferFromFile.h b/src/Disks/IO/CachedOnDiskReadBufferFromFile.h index 3433698a162..119fa166214 100644 --- a/src/Disks/IO/CachedOnDiskReadBufferFromFile.h +++ b/src/Disks/IO/CachedOnDiskReadBufferFromFile.h @@ -129,19 +129,7 @@ private: ReadType read_type = ReadType::REMOTE_FS_READ_BYPASS_CACHE; - static String toString(ReadType type) - { - switch (type) - { - case ReadType::CACHED: - return "CACHED"; - case ReadType::REMOTE_FS_READ_BYPASS_CACHE: - return "REMOTE_FS_READ_BYPASS_CACHE"; - case ReadType::REMOTE_FS_READ_AND_PUT_IN_CACHE: - return "REMOTE_FS_READ_AND_PUT_IN_CACHE"; - } - UNREACHABLE(); - } + static String toString(ReadType type); size_t first_offset = 0; String nextimpl_step_log_info; diff --git a/src/Disks/IO/IOUringReader.h b/src/Disks/IO/IOUringReader.h index 89e71e4b215..359b3badc45 100644 --- a/src/Disks/IO/IOUringReader.h +++ b/src/Disks/IO/IOUringReader.h @@ -61,12 +61,12 @@ private: void monitorRing(); - template inline void failPromise(std::promise & promise, const Exception & ex) + template void failPromise(std::promise & promise, const Exception & ex) { promise.set_exception(std::make_exception_ptr(ex)); } - inline std::future makeFailedResult(const Exception & ex) + std::future makeFailedResult(const Exception & ex) { auto promise = std::promise{}; failPromise(promise, ex); diff --git a/src/Disks/ObjectStorages/Cached/CachedObjectStorage.h b/src/Disks/ObjectStorages/Cached/CachedObjectStorage.h index a4d263e92eb..f06f78fbe4a 100644 --- a/src/Disks/ObjectStorages/Cached/CachedObjectStorage.h +++ 
b/src/Disks/ObjectStorages/Cached/CachedObjectStorage.h @@ -127,6 +127,13 @@ public: } #endif +#if USE_AWS_S3 + std::shared_ptr getS3StorageClient() override + { + return object_storage->getS3StorageClient(); + } +#endif + private: FileCacheKey getCacheKey(const std::string & path) const; diff --git a/src/Disks/ObjectStorages/Cached/registerDiskCache.cpp b/src/Disks/ObjectStorages/Cached/registerDiskCache.cpp index 6e0453f5f02..917a12eaaaa 100644 --- a/src/Disks/ObjectStorages/Cached/registerDiskCache.cpp +++ b/src/Disks/ObjectStorages/Cached/registerDiskCache.cpp @@ -4,7 +4,7 @@ #include #include #include -#include +#include #include #include diff --git a/src/Disks/ObjectStorages/DiskObjectStorage.cpp b/src/Disks/ObjectStorages/DiskObjectStorage.cpp index abf0c1fad0b..5803a985000 100644 --- a/src/Disks/ObjectStorages/DiskObjectStorage.cpp +++ b/src/Disks/ObjectStorages/DiskObjectStorage.cpp @@ -582,6 +582,12 @@ UInt64 DiskObjectStorage::getRevision() const return metadata_helper->getRevision(); } +#if USE_AWS_S3 +std::shared_ptr DiskObjectStorage::getS3StorageClient() const +{ + return object_storage->getS3StorageClient(); +} +#endif DiskPtr DiskObjectStorageReservation::getDisk(size_t i) const { diff --git a/src/Disks/ObjectStorages/DiskObjectStorage.h b/src/Disks/ObjectStorages/DiskObjectStorage.h index 2a27ddf89a7..ffef0a007da 100644 --- a/src/Disks/ObjectStorages/DiskObjectStorage.h +++ b/src/Disks/ObjectStorages/DiskObjectStorage.h @@ -6,6 +6,8 @@ #include #include +#include "config.h" + namespace CurrentMetrics { @@ -210,6 +212,10 @@ public: bool supportsChmod() const override { return metadata_storage->supportsChmod(); } void chmod(const String & path, mode_t mode) override; +#if USE_AWS_S3 + std::shared_ptr getS3StorageClient() const override; +#endif + private: /// Create actual disk object storage transaction for operations diff --git a/src/Disks/ObjectStorages/IObjectStorage.cpp b/src/Disks/ObjectStorages/IObjectStorage.cpp index fd1269df79b..ce5f06e8f25 100644 --- a/src/Disks/ObjectStorages/IObjectStorage.cpp +++ b/src/Disks/ObjectStorages/IObjectStorage.cpp @@ -18,6 +18,11 @@ namespace ErrorCodes extern const int LOGICAL_ERROR; } +const MetadataStorageMetrics & IObjectStorage::getMetadataStorageMetrics() const +{ + throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method 'getMetadataStorageMetrics' is not implemented"); +} + bool IObjectStorage::existsOrHasAnyChild(const std::string & path) const { RelativePathsWithMetadata files; diff --git a/src/Disks/ObjectStorages/IObjectStorage.h b/src/Disks/ObjectStorages/IObjectStorage.h index d4ac6ea0239..7bc9e4073db 100644 --- a/src/Disks/ObjectStorages/IObjectStorage.h +++ b/src/Disks/ObjectStorages/IObjectStorage.h @@ -1,10 +1,10 @@ #pragma once -#include #include #include #include #include +#include #include #include @@ -13,17 +13,18 @@ #include #include -#include -#include -#include -#include -#include -#include #include #include -#include -#include +#include +#include +#include +#include +#include #include +#include +#include +#include +#include #include "config.h" #if USE_AZURE_BLOB_STORAGE @@ -31,6 +32,10 @@ #include #endif +#if USE_AWS_S3 +#include +#endif + namespace DB { @@ -111,6 +116,8 @@ public: virtual std::string getDescription() const = 0; + virtual const MetadataStorageMetrics & getMetadataStorageMetrics() const; + /// Object exists or not virtual bool exists(const StoredObject & object) const = 0; @@ -257,6 +264,13 @@ public: } #endif +#if USE_AWS_S3 + virtual std::shared_ptr getS3StorageClient() + { + throw 
Exception(ErrorCodes::NOT_IMPLEMENTED, "This function is only implemented for S3ObjectStorage"); + } +#endif + private: mutable std::mutex throttlers_mutex; diff --git a/src/Disks/ObjectStorages/MetadataStorageFromPlainObjectStorageOperations.cpp b/src/Disks/ObjectStorages/MetadataStorageFromPlainObjectStorageOperations.cpp index a28f4e7a882..7e4b1f69962 100644 --- a/src/Disks/ObjectStorages/MetadataStorageFromPlainObjectStorageOperations.cpp +++ b/src/Disks/ObjectStorages/MetadataStorageFromPlainObjectStorageOperations.cpp @@ -52,11 +52,16 @@ void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::execute(std: [[maybe_unused]] auto result = path_map.emplace(path, std::move(key_prefix)); chassert(result.second); + auto metric = object_storage->getMetadataStorageMetrics().directory_map_size; + CurrentMetrics::add(metric, 1); writeString(path.string(), *buf); buf->finalize(); write_finalized = true; + + auto event = object_storage->getMetadataStorageMetrics().directory_created; + ProfileEvents::increment(event); } void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::undo(std::unique_lock &) @@ -65,6 +70,9 @@ void MetadataStorageFromPlainObjectStorageCreateDirectoryOperation::undo(std::un if (write_finalized) { path_map.erase(path); + auto metric = object_storage->getMetadataStorageMetrics().directory_map_size; + CurrentMetrics::sub(metric, 1); + object_storage->removeObject(StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME)); } else if (write_created) @@ -165,7 +173,15 @@ void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::execute(std: auto object_key = ObjectStorageKey::createAsRelative(key_prefix, PREFIX_PATH_FILE_NAME); auto object = StoredObject(object_key.serialize(), path / PREFIX_PATH_FILE_NAME); object_storage->removeObject(object); + path_map.erase(path_it); + auto metric = object_storage->getMetadataStorageMetrics().directory_map_size; + CurrentMetrics::sub(metric, 1); + + removed = true; + + auto event = object_storage->getMetadataStorageMetrics().directory_removed; + ProfileEvents::increment(event); } void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::undo(std::unique_lock &) @@ -185,6 +201,8 @@ void MetadataStorageFromPlainObjectStorageRemoveDirectoryOperation::undo(std::un buf->finalize(); path_map.emplace(path, std::move(key_prefix)); + auto metric = object_storage->getMetadataStorageMetrics().directory_map_size; + CurrentMetrics::add(metric, 1); } } diff --git a/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.cpp b/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.cpp index 3e772271b99..cc77ca5364b 100644 --- a/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.cpp +++ b/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.cpp @@ -50,6 +50,8 @@ MetadataStorageFromPlainObjectStorage::PathMap loadPathPrefixMap(const std::stri res.first->second, remote_path.parent_path().string()); } + auto metric = object_storage->getMetadataStorageMetrics().directory_map_size; + CurrentMetrics::add(metric, result.size()); return result; } @@ -134,6 +136,12 @@ MetadataStorageFromPlainRewritableObjectStorage::MetadataStorageFromPlainRewrita object_storage->setKeysGenerator(keys_gen); } +MetadataStorageFromPlainRewritableObjectStorage::~MetadataStorageFromPlainRewritableObjectStorage() +{ + auto metric = object_storage->getMetadataStorageMetrics().directory_map_size; + CurrentMetrics::sub(metric, path_map->size()); +} + std::vector 
MetadataStorageFromPlainRewritableObjectStorage::getDirectChildrenOnDisk( const std::string & storage_key, const RelativePathsWithMetadata & remote_paths, const std::string & local_path) const { diff --git a/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.h b/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.h index 4415a68c24e..661968d7044 100644 --- a/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.h +++ b/src/Disks/ObjectStorages/MetadataStorageFromPlainRewritableObjectStorage.h @@ -14,6 +14,7 @@ private: public: MetadataStorageFromPlainRewritableObjectStorage(ObjectStoragePtr object_storage_, String storage_path_prefix_); + ~MetadataStorageFromPlainRewritableObjectStorage() override; MetadataStorageType getType() const override { return MetadataStorageType::PlainRewritable; } diff --git a/src/Disks/ObjectStorages/MetadataStorageMetrics.h b/src/Disks/ObjectStorages/MetadataStorageMetrics.h new file mode 100644 index 00000000000..365fd3c8145 --- /dev/null +++ b/src/Disks/ObjectStorages/MetadataStorageMetrics.h @@ -0,0 +1,24 @@ +#pragma once + +#include +#include +#include + +namespace DB +{ + +struct MetadataStorageMetrics +{ + const ProfileEvents::Event directory_created = ProfileEvents::end(); + const ProfileEvents::Event directory_removed = ProfileEvents::end(); + + CurrentMetrics::Metric directory_map_size = CurrentMetrics::end(); + + template + static MetadataStorageMetrics create() + { + return MetadataStorageMetrics{}; + } +}; + +} diff --git a/src/Disks/ObjectStorages/MetadataStorageTransactionState.cpp b/src/Disks/ObjectStorages/MetadataStorageTransactionState.cpp index 245578b5d9e..a37f4ce7e65 100644 --- a/src/Disks/ObjectStorages/MetadataStorageTransactionState.cpp +++ b/src/Disks/ObjectStorages/MetadataStorageTransactionState.cpp @@ -17,7 +17,6 @@ std::string toString(MetadataStorageTransactionState state) case MetadataStorageTransactionState::PARTIALLY_ROLLED_BACK: return "PARTIALLY_ROLLED_BACK"; } - UNREACHABLE(); } } diff --git a/src/Disks/ObjectStorages/ObjectStorageFactory.cpp b/src/Disks/ObjectStorages/ObjectStorageFactory.cpp index d7884c2911b..8210255decb 100644 --- a/src/Disks/ObjectStorages/ObjectStorageFactory.cpp +++ b/src/Disks/ObjectStorages/ObjectStorageFactory.cpp @@ -23,6 +23,7 @@ #include #include #include +#include #include #include @@ -85,7 +86,9 @@ ObjectStoragePtr createObjectStorage( DataSourceDescription{DataSourceType::ObjectStorage, type, MetadataStorageType::PlainRewritable, /*description*/ ""} .toString()); - return std::make_shared>(std::forward(args)...); + auto metadata_storage_metrics = DB::MetadataStorageMetrics::create(); + return std::make_shared>( + std::move(metadata_storage_metrics), std::forward(args)...); } else return std::make_shared(std::forward(args)...); @@ -256,8 +259,9 @@ void registerS3PlainRewritableObjectStorage(ObjectStorageFactory & factory) auto client = getClient(config, config_prefix, context, *settings, true); auto key_generator = getKeyGenerator(uri, config, config_prefix); + auto metadata_storage_metrics = DB::MetadataStorageMetrics::create(); auto object_storage = std::make_shared>( - std::move(client), std::move(settings), uri, s3_capabilities, key_generator, name); + std::move(metadata_storage_metrics), std::move(client), std::move(settings), uri, s3_capabilities, key_generator, name); /// NOTE: should we still perform this check for clickhouse-disks? 
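The new MetadataStorageMetrics struct (above) defaults every field to an end() sentinel, and the factory obtains a concrete instance via MetadataStorageMetrics::create<...>(); the per-backend bindings are supplied as template specializations in createMetadataStorageMetrics.h further below. A compressed sketch of that pattern, with simplified stand-in types and placeholder values (not real event ids):

#include <cstdint>

using Event = std::uint64_t;
using Metric = std::uint64_t;
constexpr Event no_event = 0;   /// stand-in for ProfileEvents::end()
constexpr Metric no_metric = 0; /// stand-in for CurrentMetrics::end()

struct Metrics
{
    Event directory_created = no_event;
    Event directory_removed = no_event;
    Metric directory_map_size = no_metric;

    template <typename Storage>
    static Metrics create() { return Metrics{}; } /// generic fallback: all sentinels
};

struct S3Storage {};

/// One explicit specialization per backend binds the real counters:
template <>
inline Metrics Metrics::create<S3Storage>()
{
    return Metrics{.directory_created = 1, .directory_removed = 2, .directory_map_size = 3};
}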
if (!skip_access_check) diff --git a/src/Disks/ObjectStorages/PlainRewritableObjectStorage.h b/src/Disks/ObjectStorages/PlainRewritableObjectStorage.h index 2b116cff443..5f000afe625 100644 --- a/src/Disks/ObjectStorages/PlainRewritableObjectStorage.h +++ b/src/Disks/ObjectStorages/PlainRewritableObjectStorage.h @@ -16,8 +16,9 @@ class PlainRewritableObjectStorage : public BaseObjectStorage { public: template <typename... Args> - explicit PlainRewritableObjectStorage(Args &&... args) + explicit PlainRewritableObjectStorage(MetadataStorageMetrics && metadata_storage_metrics_, Args &&... args) : BaseObjectStorage(std::forward<Args>(args)...) + , metadata_storage_metrics(std::move(metadata_storage_metrics_)) /// A basic key generator is required for checking S3 capabilities, /// it will be reset later by metadata storage. , key_generator(createObjectStorageKeysGeneratorAsIsWithPrefix(BaseObjectStorage::getCommonKeyPrefix())) @@ -26,6 +27,8 @@ public: std::string getName() const override { return "PlainRewritable" + BaseObjectStorage::getName(); } + const MetadataStorageMetrics & getMetadataStorageMetrics() const override { return metadata_storage_metrics; } + bool isWriteOnce() const override { return false; } bool isPlain() const override { return true; } @@ -37,6 +40,7 @@ public: void setKeysGenerator(ObjectStorageKeysGeneratorPtr gen) override { key_generator = gen; } private: + MetadataStorageMetrics metadata_storage_metrics; ObjectStorageKeysGeneratorPtr key_generator; }; diff --git a/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp b/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp index 69485bd4d01..ae719f5cde4 100644 --- a/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp +++ b/src/Disks/ObjectStorages/S3/S3ObjectStorage.cpp @@ -259,7 +259,10 @@ std::unique_ptr S3ObjectStorage::writeObject( /// NOLIN throw Exception(ErrorCodes::BAD_ARGUMENTS, "S3 doesn't support append to files"); S3Settings::RequestSettings request_settings = s3_settings.get()->request_settings; - if (auto query_context = CurrentThread::getQueryContext()) + /// NOTE: For background operations, settings are not propagated from the session or query. They are taken from + /// the default user's .xml config. This behavior is obscure and unclear, so for background operations it is always better + /// to rely on the settings from the disk. 
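The NOTE above motivates the guard in the next hunk: query-level S3 request settings are applied only when the current thread carries a genuine query context. One plausible way the flag itself could be carried is sketched below; this is a hypothetical simplification (the standalone Keeper Context earlier in this diff simply hard-codes false):

class Context
{
    /// Hypothetical flag: set when the context is created for merges/mutations rather than a user query.
    bool is_background_operation = false;

public:
    void setBackgroundOperationContext() { is_background_operation = true; }
    bool isBackgroundOperationContext() const { return is_background_operation; }
};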
+ if (auto query_context = CurrentThread::getQueryContext(); query_context && !query_context->isBackgroundOperationContext()) { request_settings.updateFromSettingsIfChanged(query_context->getSettingsRef()); } @@ -495,13 +498,14 @@ void S3ObjectStorage::copyObjectToAnotherObjectStorage( // NOLINT try { copyS3File( - current_client, - uri.bucket, - object_from.remote_path, - 0, - size, - dest_s3->uri.bucket, - object_to.remote_path, + /*src_s3_client=*/current_client, + /*src_bucket=*/uri.bucket, + /*src_key=*/object_from.remote_path, + /*src_offset=*/0, + /*src_size=*/size, + /*dest_s3_client=*/current_client, + /*dest_bucket=*/dest_s3->uri.bucket, + /*dest_key=*/object_to.remote_path, settings_ptr->request_settings, patchSettings(read_settings), BlobStorageLogWriter::create(disk_name), @@ -535,13 +539,15 @@ void S3ObjectStorage::copyObject( // NOLINT auto size = S3::getObjectSize(*current_client, uri.bucket, object_from.remote_path, {}, settings_ptr->request_settings); auto scheduler = threadPoolCallbackRunnerUnsafe(getThreadPoolWriter(), "S3ObjStor_copy"); - copyS3File(current_client, - uri.bucket, - object_from.remote_path, - 0, - size, - uri.bucket, - object_to.remote_path, + copyS3File( + /*src_s3_client=*/current_client, + /*src_bucket=*/uri.bucket, + /*src_key=*/object_from.remote_path, + /*src_offset=*/0, + /*src_size=*/size, + /*dest_s3_client=*/current_client, + /*dest_bucket=*/uri.bucket, + /*dest_key=*/object_to.remote_path, settings_ptr->request_settings, patchSettings(read_settings), BlobStorageLogWriter::create(disk_name), @@ -578,6 +584,7 @@ void S3ObjectStorage::applyNewSettings( auto settings_from_config = getSettings(config, config_prefix, context, context->getSettingsRef().s3_validate_request_settings); auto modified_settings = std::make_unique(*s3_settings.get()); modified_settings->auth_settings.updateFrom(settings_from_config->auth_settings); + modified_settings->request_settings = settings_from_config->request_settings; if (auto endpoint_settings = context->getStorageS3Settings().getSettings(uri.uri.toString(), context->getUserName())) modified_settings->auth_settings.updateFrom(endpoint_settings->auth_settings); @@ -616,6 +623,11 @@ ObjectStorageKey S3ObjectStorage::generateObjectKeyForPath(const std::string & p return key_generator->generate(path, /* is_directory */ false); } +std::shared_ptr S3ObjectStorage::getS3StorageClient() +{ + return client.get(); +} + } #endif diff --git a/src/Disks/ObjectStorages/S3/S3ObjectStorage.h b/src/Disks/ObjectStorages/S3/S3ObjectStorage.h index 062ddd4e2a2..6eacf3a1eee 100644 --- a/src/Disks/ObjectStorages/S3/S3ObjectStorage.h +++ b/src/Disks/ObjectStorages/S3/S3ObjectStorage.h @@ -168,6 +168,7 @@ public: bool isReadOnly() const override { return s3_settings.get()->read_only; } + std::shared_ptr getS3StorageClient() override; private: void setNewSettings(std::unique_ptr && s3_settings_); diff --git a/src/Disks/ObjectStorages/Web/WebObjectStorage.h b/src/Disks/ObjectStorages/Web/WebObjectStorage.h index 9d3b9a3a8f0..9ca2950dae0 100644 --- a/src/Disks/ObjectStorages/Web/WebObjectStorage.h +++ b/src/Disks/ObjectStorages/Web/WebObjectStorage.h @@ -3,6 +3,8 @@ #include "config.h" #include + +#include #include namespace Poco diff --git a/src/Disks/ObjectStorages/createMetadataStorageMetrics.h b/src/Disks/ObjectStorages/createMetadataStorageMetrics.h new file mode 100644 index 00000000000..6dddc227ade --- /dev/null +++ b/src/Disks/ObjectStorages/createMetadataStorageMetrics.h @@ -0,0 +1,67 @@ +#pragma once + +#if USE_AWS_S3 +# include 
+#endif +#if USE_AZURE_BLOB_STORAGE && !defined(CLICKHOUSE_KEEPER_STANDALONE_BUILD) +# include +#endif +#ifndef CLICKHOUSE_KEEPER_STANDALONE_BUILD +# include +#endif +#include + +namespace ProfileEvents +{ +extern const Event DiskPlainRewritableAzureDirectoryCreated; +extern const Event DiskPlainRewritableAzureDirectoryRemoved; +extern const Event DiskPlainRewritableLocalDirectoryCreated; +extern const Event DiskPlainRewritableLocalDirectoryRemoved; +extern const Event DiskPlainRewritableS3DirectoryCreated; +extern const Event DiskPlainRewritableS3DirectoryRemoved; +} + +namespace CurrentMetrics +{ +extern const Metric DiskPlainRewritableAzureDirectoryMapSize; +extern const Metric DiskPlainRewritableLocalDirectoryMapSize; +extern const Metric DiskPlainRewritableS3DirectoryMapSize; +} + +namespace DB +{ + +#if USE_AWS_S3 +template <> +inline MetadataStorageMetrics MetadataStorageMetrics::create() +{ + return MetadataStorageMetrics{ + .directory_created = ProfileEvents::DiskPlainRewritableS3DirectoryCreated, + .directory_removed = ProfileEvents::DiskPlainRewritableS3DirectoryRemoved, + .directory_map_size = CurrentMetrics::DiskPlainRewritableS3DirectoryMapSize}; +} +#endif + +#if USE_AZURE_BLOB_STORAGE && !defined(CLICKHOUSE_KEEPER_STANDALONE_BUILD) +template <> +inline MetadataStorageMetrics MetadataStorageMetrics::create() +{ + return MetadataStorageMetrics{ + .directory_created = ProfileEvents::DiskPlainRewritableAzureDirectoryCreated, + .directory_removed = ProfileEvents::DiskPlainRewritableAzureDirectoryRemoved, + .directory_map_size = CurrentMetrics::DiskPlainRewritableAzureDirectoryMapSize}; +} +#endif + +#ifndef CLICKHOUSE_KEEPER_STANDALONE_BUILD +template <> +inline MetadataStorageMetrics MetadataStorageMetrics::create() +{ + return MetadataStorageMetrics{ + .directory_created = ProfileEvents::DiskPlainRewritableLocalDirectoryCreated, + .directory_removed = ProfileEvents::DiskPlainRewritableLocalDirectoryRemoved, + .directory_map_size = CurrentMetrics::DiskPlainRewritableLocalDirectoryMapSize}; +} +#endif + +} diff --git a/src/Disks/VolumeJBOD.cpp b/src/Disks/VolumeJBOD.cpp index d0e9d32ff5e..f8b9a57affe 100644 --- a/src/Disks/VolumeJBOD.cpp +++ b/src/Disks/VolumeJBOD.cpp @@ -112,7 +112,6 @@ DiskPtr VolumeJBOD::getDisk(size_t /* index */) const return disks_by_size.top().disk; } } - UNREACHABLE(); } ReservationPtr VolumeJBOD::reserve(UInt64 bytes) @@ -164,7 +163,6 @@ ReservationPtr VolumeJBOD::reserve(UInt64 bytes) return reservation; } } - UNREACHABLE(); } bool VolumeJBOD::areMergesAvoided() const diff --git a/src/Formats/EscapingRuleUtils.cpp b/src/Formats/EscapingRuleUtils.cpp index 89a7a31d033..9577ca2a8df 100644 --- a/src/Formats/EscapingRuleUtils.cpp +++ b/src/Formats/EscapingRuleUtils.cpp @@ -62,7 +62,6 @@ String escapingRuleToString(FormatSettings::EscapingRule escaping_rule) case FormatSettings::EscapingRule::Raw: return "Raw"; } - UNREACHABLE(); } void skipFieldByEscapingRule(ReadBuffer & buf, FormatSettings::EscapingRule escaping_rule, const FormatSettings & format_settings) diff --git a/src/Functions/DivisionUtils.h b/src/Functions/DivisionUtils.h index ff07309e248..7fd5b7476e1 100644 --- a/src/Functions/DivisionUtils.h +++ b/src/Functions/DivisionUtils.h @@ -68,7 +68,7 @@ struct DivideIntegralImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { using CastA = std::conditional_t && std::is_same_v, uint8_t, A>; using CastB = std::conditional_t && std::is_same_v, uint8_t, B>; @@ 
-120,7 +120,7 @@ struct ModuloImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { if constexpr (std::is_floating_point_v) { @@ -175,7 +175,7 @@ struct PositiveModuloImpl : ModuloImpl using ResultType = typename NumberTraits::ResultOfPositiveModulo::Type; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { auto res = ModuloImpl::template apply(a, b); if constexpr (is_signed_v) diff --git a/src/Functions/FunctionBinaryArithmetic.h b/src/Functions/FunctionBinaryArithmetic.h index 6203999fa37..5d19ba44d9b 100644 --- a/src/Functions/FunctionBinaryArithmetic.h +++ b/src/Functions/FunctionBinaryArithmetic.h @@ -284,7 +284,7 @@ struct BinaryOperation private: template - static inline void apply(const A * __restrict a, const B * __restrict b, ResultType * __restrict c, size_t i) + static void apply(const A * __restrict a, const B * __restrict b, ResultType * __restrict c, size_t i) { if constexpr (op_case == OpCase::Vector) c[i] = Op::template apply(a[i], b[i]); @@ -432,7 +432,7 @@ template struct FixedStringReduceOperationImpl { template - static void inline process(const UInt8 * __restrict a, const UInt8 * __restrict b, UInt16 * __restrict result, size_t size, size_t N) + static void process(const UInt8 * __restrict a, const UInt8 * __restrict b, UInt16 * __restrict result, size_t size, size_t N) { if constexpr (op_case == OpCase::Vector) vectorVector(a, b, result, size, N); @@ -503,7 +503,7 @@ struct StringReduceOperationImpl } } - static inline UInt64 constConst(std::string_view a, std::string_view b) + static UInt64 constConst(std::string_view a, std::string_view b) { return process( reinterpret_cast(a.data()), @@ -643,7 +643,7 @@ public: private: template - static inline void processWithRightNullmapImpl(const auto & a, const auto & b, ResultContainerType & c, size_t size, const NullMap * right_nullmap, ApplyFunc apply_func) + static void processWithRightNullmapImpl(const auto & a, const auto & b, ResultContainerType & c, size_t size, const NullMap * right_nullmap, ApplyFunc apply_func) { if (right_nullmap) { diff --git a/src/Functions/FunctionSQLJSON.h b/src/Functions/FunctionSQLJSON.h index 37db514fd1f..83ed874c47b 100644 --- a/src/Functions/FunctionSQLJSON.h +++ b/src/Functions/FunctionSQLJSON.h @@ -44,27 +44,27 @@ class DefaultJSONStringSerializer public: explicit DefaultJSONStringSerializer(ColumnString & col_str_) : col_str(col_str_) { } - inline void addRawData(const char * ptr, size_t len) + void addRawData(const char * ptr, size_t len) { out << std::string_view(ptr, len); } - inline void addRawString(std::string_view str) + void addRawString(std::string_view str) { out << str; } /// serialize the json element into stringstream - inline void addElement(const Element & element) + void addElement(const Element & element) { out << element.getElement(); } - inline void commit() + void commit() { auto out_str = out.str(); col_str.insertData(out_str.data(), out_str.size()); } - inline void rollback() {} + void rollback() {} private: ColumnString & col_str; std::stringstream out; // STYLE_CHECK_ALLOW_STD_STRING_STREAM @@ -82,27 +82,27 @@ public: prev_offset = offsets.empty() ? 0 : offsets.back(); } /// Put the data into column's buffer directly. 
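Many hunks in this file and its neighbours do nothing but drop a redundant `inline`. The rule they rely on is standard C++: a function defined inside a class body is implicitly inline, and a constexpr function is implicitly inline as well, so the keyword adds nothing in either case. A minimal illustration (names invented for the sketch):

#include <cstddef>

struct Serializer
{
    void addRawData(const char *, std::size_t) {} /// defined in-class => implicitly inline
};

constexpr bool isLeft(int kind) { return kind == 0; } /// constexpr => implicitly inline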
- inline void addRawData(const char * ptr, size_t len) + void addRawData(const char * ptr, size_t len) { chars.insert(ptr, ptr + len); } - inline void addRawString(std::string_view str) + void addRawString(std::string_view str) { chars.insert(str.data(), str.data() + str.size()); } /// serialize the json element into column's buffer directly - inline void addElement(const Element & element) + void addElement(const Element & element) { formatter.append(element.getElement()); } - inline void commit() + void commit() { chars.push_back(0); offsets.push_back(chars.size()); } - inline void rollback() + void rollback() { chars.resize(prev_offset); } diff --git a/src/Functions/FunctionsAES.h b/src/Functions/FunctionsAES.h index 14745460658..524b4f82acd 100644 --- a/src/Functions/FunctionsAES.h +++ b/src/Functions/FunctionsAES.h @@ -59,7 +59,7 @@ enum class CipherMode : uint8_t template struct KeyHolder { - inline StringRef setKey(size_t cipher_key_size, StringRef key) const + StringRef setKey(size_t cipher_key_size, StringRef key) const { if (key.size != cipher_key_size) throw Exception(ErrorCodes::BAD_ARGUMENTS, "Invalid key size: {} expected {}", key.size, cipher_key_size); @@ -71,7 +71,7 @@ struct KeyHolder template <> struct KeyHolder { - inline StringRef setKey(size_t cipher_key_size, StringRef key) + StringRef setKey(size_t cipher_key_size, StringRef key) { if (key.size < cipher_key_size) throw Exception(ErrorCodes::BAD_ARGUMENTS, "Invalid key size: {} expected {}", key.size, cipher_key_size); diff --git a/src/Functions/FunctionsBitToArray.cpp b/src/Functions/FunctionsBitToArray.cpp index 566ce16d1a7..adabda1a7f8 100644 --- a/src/Functions/FunctionsBitToArray.cpp +++ b/src/Functions/FunctionsBitToArray.cpp @@ -79,7 +79,7 @@ public: private: template - inline static void writeBitmask(T x, WriteBuffer & out) + static void writeBitmask(T x, WriteBuffer & out) { using UnsignedT = make_unsigned_t; UnsignedT u_x = x; diff --git a/src/Functions/FunctionsCodingIP.cpp b/src/Functions/FunctionsCodingIP.cpp index 54f7b6dd1f4..e01967274f4 100644 --- a/src/Functions/FunctionsCodingIP.cpp +++ b/src/Functions/FunctionsCodingIP.cpp @@ -785,7 +785,7 @@ private: #include - static inline void applyCIDRMask(const char * __restrict src, char * __restrict dst_lower, char * __restrict dst_upper, UInt8 bits_to_keep) + static void applyCIDRMask(const char * __restrict src, char * __restrict dst_lower, char * __restrict dst_upper, UInt8 bits_to_keep) { __m128i mask = _mm_loadu_si128(reinterpret_cast(getCIDRMaskIPv6(bits_to_keep).data())); __m128i lower = _mm_and_si128(_mm_loadu_si128(reinterpret_cast(src)), mask); @@ -916,7 +916,7 @@ public: class FunctionIPv4CIDRToRange : public IFunction { private: - static inline std::pair applyCIDRMask(UInt32 src, UInt8 bits_to_keep) + static std::pair applyCIDRMask(UInt32 src, UInt8 bits_to_keep) { if (bits_to_keep >= 8 * sizeof(UInt32)) return { src, src }; diff --git a/src/Functions/FunctionsComparison.h b/src/Functions/FunctionsComparison.h index 57aebc11da0..4bee19ba87a 100644 --- a/src/Functions/FunctionsComparison.h +++ b/src/Functions/FunctionsComparison.h @@ -1176,8 +1176,7 @@ public: /// You can compare the date, datetime, or datatime64 and an enumeration with a constant string. 
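The FunctionsComparison.h hunks below widen the accepted type pairs so that an IPv4 value can be compared against an IPv6 value, with the IPv4 side CAST to IPv6 first. Conceptually that cast is the usual IPv4-mapped embedding (::ffff:a.b.c.d); the sketch below uses simplified stand-in types, not ClickHouse's IPv4/IPv6 columns:

#include <array>
#include <cstdint>
#include <cstring>

using IPv6 = std::array<std::uint8_t, 16>;

/// Widen a big-endian IPv4 address to its IPv4-mapped IPv6 form ::ffff:a.b.c.d.
IPv6 mapIPv4(std::uint32_t ipv4_be)
{
    IPv6 out{};
    out[10] = 0xFF;
    out[11] = 0xFF;
    std::memcpy(out.data() + 12, &ipv4_be, 4);
    return out;
}

bool equalIPv4IPv6(std::uint32_t ipv4_be, const IPv6 & ipv6) { return mapIPv4(ipv4_be) == ipv6; }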
|| ((left.isDate() || left.isDate32() || left.isDateTime() || left.isDateTime64()) && (right.isDate() || right.isDate32() || right.isDateTime() || right.isDateTime64()) && left.idx == right.idx) /// only date vs date, or datetime vs datetime || (left.isUUID() && right.isUUID()) - || (left.isIPv4() && right.isIPv4()) - || (left.isIPv6() && right.isIPv6()) + || ((left.isIPv4() || left.isIPv6()) && (right.isIPv4() || right.isIPv6())) || (left.isEnum() && right.isEnum() && arguments[0]->getName() == arguments[1]->getName()) /// only equivalent enum type values can be compared against || (left_tuple && right_tuple && left_tuple->getElements().size() == right_tuple->getElements().size()) || (arguments[0]->equals(*arguments[1])))) @@ -1266,6 +1265,8 @@ public: const bool left_is_float = which_left.isFloat(); const bool right_is_float = which_right.isFloat(); + const bool left_is_ipv4 = which_left.isIPv4(); + const bool right_is_ipv4 = which_right.isIPv4(); const bool left_is_ipv6 = which_left.isIPv6(); const bool right_is_ipv6 = which_right.isIPv6(); const bool left_is_fixed_string = which_left.isFixedString(); @@ -1323,10 +1324,13 @@ public: { return res; } - else if (((left_is_ipv6 && right_is_fixed_string) || (right_is_ipv6 && left_is_fixed_string)) && fixed_string_size == IPV6_BINARY_LENGTH) + else if ( (((left_is_ipv6 && right_is_fixed_string) || (right_is_ipv6 && left_is_fixed_string)) && fixed_string_size == IPV6_BINARY_LENGTH) || ((left_is_ipv4 || left_is_ipv6) && (right_is_ipv4 || right_is_ipv6)) ) { - /// Special treatment for FixedString(16) as a binary representation of IPv6 - - /// CAST is customized for this case + /// Special treatment for FixedString(16) as a binary representation of IPv6 & for comparing IPv4 & IPv6 values - + /// CAST is customized for these cases ColumnPtr left_column = left_is_ipv6 ? col_with_type_and_name_left.column : castColumn(col_with_type_and_name_left, right_type); ColumnPtr right_column = right_is_ipv6 ? diff --git a/src/Functions/FunctionsConsistentHashing.h b/src/Functions/FunctionsConsistentHashing.h index 6f2eec5be98..306b6395dc5 100644 --- a/src/Functions/FunctionsConsistentHashing.h +++ b/src/Functions/FunctionsConsistentHashing.h @@ -83,7 +83,7 @@ private: using BucketsType = typename Impl::BucketsType; template <typename T> - inline BucketsType checkBucketsRange(T buckets) const + BucketsType checkBucketsRange(T buckets) const { if (unlikely(buckets <= 0)) throw Exception(ErrorCodes::BAD_ARGUMENTS, "The second argument of function {} (number of buckets) must be positive number", getName()); diff --git a/src/Functions/FunctionsLogical.cpp b/src/Functions/FunctionsLogical.cpp index 7e7ae76d6eb..2f5ce6deebf 100644 --- a/src/Functions/FunctionsLogical.cpp +++ b/src/Functions/FunctionsLogical.cpp @@ -170,7 +170,7 @@ public: : vec(in[in.size() - N]->getData()), next(in) {} /// Returns a combination of values in the i-th row of all columns stored in the constructor. 
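In the FunctionsLogical hunks that follow, AndImpl::ternaryApply stays std::min and OrImpl::ternaryApply stays std::max. That works because the ternary encoding is ordered so that False < Null < True, making Kleene three-valued AND/OR collapse to min/max. A self-contained check of that identity (the concrete enumerator values are an assumption of the sketch):

#include <algorithm>
#include <cassert>
#include <cstdint>

enum Ternary : std::uint8_t { False = 0, Null = 1, True = 2 }; /// assumed ordering for the sketch

constexpr std::uint8_t ternaryAnd(std::uint8_t a, std::uint8_t b) { return std::min(a, b); }
constexpr std::uint8_t ternaryOr(std::uint8_t a, std::uint8_t b) { return std::max(a, b); }

int main()
{
    assert(ternaryAnd(True, Null) == Null);   /// TRUE AND NULL -> NULL
    assert(ternaryAnd(False, Null) == False); /// FALSE AND NULL -> FALSE (AND saturates at False)
    assert(ternaryOr(True, Null) == True);    /// TRUE OR NULL -> TRUE (OR saturates at True)
    assert(ternaryOr(False, Null) == Null);   /// FALSE OR NULL -> NULL
}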
- inline ResultValueType apply(const size_t i) const + ResultValueType apply(const size_t i) const { return Op::ternaryApply(vec[i], next.apply(i)); } @@ -315,7 +315,7 @@ public: TernaryValueBuilder::build(in[in.size() - 1], vec.data()); } - inline ResultValueType apply(const size_t i) const { return vec[i]; } + ResultValueType apply(const size_t i) const { return vec[i]; } private: UInt8Container vec; diff --git a/src/Functions/FunctionsLogical.h b/src/Functions/FunctionsLogical.h index 41464329f79..3c2eb3ee0b8 100644 --- a/src/Functions/FunctionsLogical.h +++ b/src/Functions/FunctionsLogical.h @@ -84,47 +84,47 @@ struct AndImpl { using ResultType = UInt8; - static inline constexpr bool isSaturable() { return true; } + static constexpr bool isSaturable() { return true; } /// Final value in two-valued logic (no further operations with True, False will change this value) - static inline constexpr bool isSaturatedValue(bool a) { return !a; } + static constexpr bool isSaturatedValue(bool a) { return !a; } /// Final value in three-valued logic (no further operations with True, False, Null will change this value) - static inline constexpr bool isSaturatedValueTernary(UInt8 a) { return a == Ternary::False; } + static constexpr bool isSaturatedValueTernary(UInt8 a) { return a == Ternary::False; } - static inline constexpr ResultType apply(UInt8 a, UInt8 b) { return a & b; } + static constexpr ResultType apply(UInt8 a, UInt8 b) { return a & b; } - static inline constexpr ResultType ternaryApply(UInt8 a, UInt8 b) { return std::min(a, b); } + static constexpr ResultType ternaryApply(UInt8 a, UInt8 b) { return std::min(a, b); } /// Will use three-valued logic for NULLs (see above) or default implementation (any operation with NULL returns NULL). - static inline constexpr bool specialImplementationForNulls() { return true; } + static constexpr bool specialImplementationForNulls() { return true; } }; struct OrImpl { using ResultType = UInt8; - static inline constexpr bool isSaturable() { return true; } - static inline constexpr bool isSaturatedValue(bool a) { return a; } - static inline constexpr bool isSaturatedValueTernary(UInt8 a) { return a == Ternary::True; } - static inline constexpr ResultType apply(UInt8 a, UInt8 b) { return a | b; } - static inline constexpr ResultType ternaryApply(UInt8 a, UInt8 b) { return std::max(a, b); } - static inline constexpr bool specialImplementationForNulls() { return true; } + static constexpr bool isSaturable() { return true; } + static constexpr bool isSaturatedValue(bool a) { return a; } + static constexpr bool isSaturatedValueTernary(UInt8 a) { return a == Ternary::True; } + static constexpr ResultType apply(UInt8 a, UInt8 b) { return a | b; } + static constexpr ResultType ternaryApply(UInt8 a, UInt8 b) { return std::max(a, b); } + static constexpr bool specialImplementationForNulls() { return true; } }; struct XorImpl { using ResultType = UInt8; - static inline constexpr bool isSaturable() { return false; } - static inline constexpr bool isSaturatedValue(bool) { return false; } - static inline constexpr bool isSaturatedValueTernary(UInt8) { return false; } - static inline constexpr ResultType apply(UInt8 a, UInt8 b) { return a != b; } - static inline constexpr ResultType ternaryApply(UInt8 a, UInt8 b) { return a != b; } - static inline constexpr bool specialImplementationForNulls() { return false; } + static constexpr bool isSaturable() { return false; } + static constexpr bool isSaturatedValue(bool) { return false; } + static constexpr bool 
isSaturatedValueTernary(UInt8) { return false; } + static constexpr ResultType apply(UInt8 a, UInt8 b) { return a != b; } + static constexpr ResultType ternaryApply(UInt8 a, UInt8 b) { return a != b; } + static constexpr bool specialImplementationForNulls() { return false; } #if USE_EMBEDDED_COMPILER - static inline llvm::Value * apply(llvm::IRBuilder<> & builder, llvm::Value * a, llvm::Value * b) + static llvm::Value * apply(llvm::IRBuilder<> & builder, llvm::Value * a, llvm::Value * b) { return builder.CreateXor(a, b); } @@ -136,13 +136,13 @@ struct NotImpl { using ResultType = UInt8; - static inline ResultType apply(A a) + static ResultType apply(A a) { return !static_cast(a); } #if USE_EMBEDDED_COMPILER - static inline llvm::Value * apply(llvm::IRBuilder<> & builder, llvm::Value * a) + static llvm::Value * apply(llvm::IRBuilder<> & builder, llvm::Value * a) { return builder.CreateNot(a); } diff --git a/src/Functions/FunctionsRound.h b/src/Functions/FunctionsRound.h index 99f3a14dfec..d2dac467bff 100644 --- a/src/Functions/FunctionsRound.h +++ b/src/Functions/FunctionsRound.h @@ -149,8 +149,6 @@ struct IntegerRoundingComputation return x; } } - - UNREACHABLE(); } static ALWAYS_INLINE T compute(T x, T scale) @@ -163,8 +161,6 @@ struct IntegerRoundingComputation case ScaleMode::Negative: return computeImpl(x, scale); } - - UNREACHABLE(); } static ALWAYS_INLINE void compute(const T * __restrict in, size_t scale, T * __restrict out) requires std::integral @@ -247,8 +243,6 @@ inline float roundWithMode(float x, RoundingMode mode) case RoundingMode::Ceil: return ceilf(x); case RoundingMode::Trunc: return truncf(x); } - - UNREACHABLE(); } inline double roundWithMode(double x, RoundingMode mode) @@ -260,8 +254,6 @@ inline double roundWithMode(double x, RoundingMode mode) case RoundingMode::Ceil: return ceil(x); case RoundingMode::Trunc: return trunc(x); } - - UNREACHABLE(); } template @@ -296,7 +288,7 @@ class FloatRoundingComputation : public BaseFloatRoundingComputation using Base = BaseFloatRoundingComputation; public: - static inline void compute(const T * __restrict in, const typename Base::VectorType & scale, T * __restrict out) + static void compute(const T * __restrict in, const typename Base::VectorType & scale, T * __restrict out) { auto val = Base::load(in); diff --git a/src/Functions/FunctionsStringSimilarity.cpp b/src/Functions/FunctionsStringSimilarity.cpp index aadf5c246fc..7b3f2337c89 100644 --- a/src/Functions/FunctionsStringSimilarity.cpp +++ b/src/Functions/FunctionsStringSimilarity.cpp @@ -275,7 +275,7 @@ struct NgramDistanceImpl } template - static inline auto dispatchSearcher(Callback callback, Args &&... args) + static auto dispatchSearcher(Callback callback, Args &&... 
args) { if constexpr (!UTF8) return callback(std::forward(args)..., readASCIICodePoints, calculateASCIIHash); diff --git a/src/Functions/FunctionsTimeWindow.cpp b/src/Functions/FunctionsTimeWindow.cpp index 1c9f28c9724..f93a885ee65 100644 --- a/src/Functions/FunctionsTimeWindow.cpp +++ b/src/Functions/FunctionsTimeWindow.cpp @@ -232,7 +232,6 @@ struct TimeWindowImpl default: throw Exception(ErrorCodes::SYNTAX_ERROR, "Fraction seconds are unsupported by windows yet"); } - UNREACHABLE(); } template @@ -422,7 +421,6 @@ struct TimeWindowImpl default: throw Exception(ErrorCodes::SYNTAX_ERROR, "Fraction seconds are unsupported by windows yet"); } - UNREACHABLE(); } template diff --git a/src/Functions/FunctionsTimeWindow.h b/src/Functions/FunctionsTimeWindow.h index 6183d25c8bd..7522bd374a2 100644 --- a/src/Functions/FunctionsTimeWindow.h +++ b/src/Functions/FunctionsTimeWindow.h @@ -97,7 +97,7 @@ template<> \ template <> \ struct AddTime \ { \ - static inline auto execute(UInt16 d, Int64 delta, const DateLUTImpl & time_zone) \ + static auto execute(UInt16 d, Int64 delta, const DateLUTImpl & time_zone) \ { \ return time_zone.add##INTERVAL_KIND##s(ExtendedDayNum(d), delta); \ } \ @@ -110,7 +110,7 @@ template<> \ template <> struct AddTime { - static inline NO_SANITIZE_UNDEFINED ExtendedDayNum execute(UInt16 d, UInt64 delta, const DateLUTImpl &) + static NO_SANITIZE_UNDEFINED ExtendedDayNum execute(UInt16 d, UInt64 delta, const DateLUTImpl &) { return ExtendedDayNum(static_cast(d + delta * 7)); } @@ -120,7 +120,7 @@ template<> \ template <> \ struct AddTime \ { \ - static inline NO_SANITIZE_UNDEFINED UInt32 execute(UInt32 t, Int64 delta, const DateLUTImpl &) \ + static NO_SANITIZE_UNDEFINED UInt32 execute(UInt32 t, Int64 delta, const DateLUTImpl &) \ { return static_cast(t + delta * (INTERVAL)); } \ }; ADD_TIME(Day, 86400) @@ -133,7 +133,7 @@ template<> \ template <> \ struct AddTime \ { \ - static inline NO_SANITIZE_UNDEFINED Int64 execute(Int64 t, UInt64 delta, const UInt32 scale) \ + static NO_SANITIZE_UNDEFINED Int64 execute(Int64 t, UInt64 delta, const UInt32 scale) \ { \ if (scale < (DEF_SCALE)) \ { \ diff --git a/src/Functions/GCDLCMImpl.h b/src/Functions/GCDLCMImpl.h index df531363c31..094c248497b 100644 --- a/src/Functions/GCDLCMImpl.h +++ b/src/Functions/GCDLCMImpl.h @@ -26,7 +26,7 @@ struct GCDLCMImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { throwIfDivisionLeadsToFPE(typename NumberTraits::ToInteger::Type(a), typename NumberTraits::ToInteger::Type(b)); throwIfDivisionLeadsToFPE(typename NumberTraits::ToInteger::Type(b), typename NumberTraits::ToInteger::Type(a)); diff --git a/src/Functions/GregorianDate.cpp b/src/Functions/GregorianDate.cpp index eb7ef4abe56..91861e8bbd2 100644 --- a/src/Functions/GregorianDate.cpp +++ b/src/Functions/GregorianDate.cpp @@ -20,12 +20,12 @@ namespace ErrorCodes namespace { - inline constexpr bool is_leap_year(int32_t year) + constexpr bool is_leap_year(int32_t year) { return (year % 4 == 0) && ((year % 400 == 0) || (year % 100 != 0)); } - inline constexpr uint8_t monthLength(bool is_leap_year, uint8_t month) + constexpr uint8_t monthLength(bool is_leap_year, uint8_t month) { switch (month) { @@ -49,7 +49,7 @@ namespace /** Integer division truncated toward negative infinity. 
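 * For example, div(-3, 2) == -2, whereas C++ '/' truncates toward zero and gives -1; combined with mod() below this preserves div(x, y) * y + mod(x, y) == x, since mod(-3, 2) == 1.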
*/ template - inline constexpr I div(I x, J y) + constexpr I div(I x, J y) { const auto y_cast = static_cast(y); if (x > 0 && y_cast < 0) @@ -63,7 +63,7 @@ namespace /** Integer modulus, satisfying div(x, y)*y + mod(x, y) == x. */ template - inline constexpr I mod(I x, J y) + constexpr I mod(I x, J y) { const auto y_cast = static_cast(y); const auto r = x % y_cast; @@ -76,7 +76,7 @@ namespace /** Like std::min(), but the type of operands may differ. */ template - inline constexpr I min(I x, J y) + constexpr I min(I x, J y) { const auto y_cast = static_cast(y); return x < y_cast ? x : y_cast; diff --git a/src/Functions/PolygonUtils.h b/src/Functions/PolygonUtils.h index c4851718da6..57f1243537d 100644 --- a/src/Functions/PolygonUtils.h +++ b/src/Functions/PolygonUtils.h @@ -381,8 +381,6 @@ bool PointInPolygonWithGrid::contains(CoordinateType x, Coordina case CellType::complexPolygon: return boost::geometry::within(Point(x, y), polygons[cell.index_of_inner_polygon]); } - - UNREACHABLE(); } diff --git a/src/Functions/TransformDateTime64.h b/src/Functions/TransformDateTime64.h index 896e9d8ca48..b52ccd3cce0 100644 --- a/src/Functions/TransformDateTime64.h +++ b/src/Functions/TransformDateTime64.h @@ -53,7 +53,7 @@ public: {} template - inline auto NO_SANITIZE_UNDEFINED execute(const DateTime64 & t, Args && ... args) const + auto NO_SANITIZE_UNDEFINED execute(const DateTime64 & t, Args && ... args) const { /// Type conversion from float to integer may be required. /// We are Ok with implementation specific result for out of range and denormals conversion. @@ -90,14 +90,14 @@ public: template requires(!std::same_as) - inline auto execute(const T & t, Args &&... args) const + auto execute(const T & t, Args &&... args) const { return wrapped_transform.execute(t, std::forward(args)...); } template - inline auto NO_SANITIZE_UNDEFINED executeExtendedResult(const DateTime64 & t, Args && ... args) const + auto NO_SANITIZE_UNDEFINED executeExtendedResult(const DateTime64 & t, Args && ... args) const { /// Type conversion from float to integer may be required. /// We are Ok with implementation specific result for out of range and denormals conversion. @@ -131,7 +131,7 @@ public: template requires (!std::same_as) - inline auto executeExtendedResult(const T & t, Args && ... args) const + auto executeExtendedResult(const T & t, Args && ... args) const { return wrapped_transform.executeExtendedResult(t, std::forward(args)...); } diff --git a/src/Functions/UserDefined/UserDefinedSQLObjectsZooKeeperStorage.cpp b/src/Functions/UserDefined/UserDefinedSQLObjectsZooKeeperStorage.cpp index 568e0b9b5d2..766d63eafb0 100644 --- a/src/Functions/UserDefined/UserDefinedSQLObjectsZooKeeperStorage.cpp +++ b/src/Functions/UserDefined/UserDefinedSQLObjectsZooKeeperStorage.cpp @@ -35,7 +35,6 @@ namespace case UserDefinedSQLObjectType::Function: return "function_"; } - UNREACHABLE(); } constexpr std::string_view sql_extension = ".sql"; diff --git a/src/Functions/abs.cpp b/src/Functions/abs.cpp index 0cd313caf1e..9ac2363f765 100644 --- a/src/Functions/abs.cpp +++ b/src/Functions/abs.cpp @@ -12,7 +12,7 @@ struct AbsImpl using ResultType = std::conditional_t, A, typename NumberTraits::ResultOfAbs::Type>; static constexpr bool allow_string_or_fixed_string = false; - static inline NO_SANITIZE_UNDEFINED ResultType apply(A a) + static NO_SANITIZE_UNDEFINED ResultType apply(A a) { if constexpr (is_decimal) return a < A(0) ? 
A(-a) : a; diff --git a/src/Functions/array/arrayIndex.h b/src/Functions/array/arrayIndex.h index 395f96bbffb..fa9b3dc92dd 100644 --- a/src/Functions/array/arrayIndex.h +++ b/src/Functions/array/arrayIndex.h @@ -322,7 +322,7 @@ private: } template - static inline void invokeCheckNullMaps( + static void invokeCheckNullMaps( const ColumnString::Chars & data, const ColumnArray::Offsets & offsets, const ColumnString::Offsets & str_offsets, const ColumnString::Chars & values, OffsetT item_offsets, @@ -339,7 +339,7 @@ private: } public: - static inline void process( + static void process( const ColumnString::Chars & data, const ColumnArray::Offsets & offsets, const ColumnString::Offsets & string_offsets, const ColumnString::Chars & item_values, Offset item_offsets, PaddedPODArray & result, @@ -348,7 +348,7 @@ public: invokeCheckNullMaps(data, offsets, string_offsets, item_values, item_offsets, result, data_map, item_map); } - static inline void process( + static void process( const ColumnString::Chars & data, const ColumnArray::Offsets & offsets, const ColumnString::Offsets & string_offsets, const ColumnString::Chars & item_values, const ColumnString::Offsets & item_offsets, PaddedPODArray & result, @@ -467,10 +467,10 @@ private: NullMaps maps; ResultColumnPtr result { ResultColumnType::create() }; - inline void moveResult() { result_column = std::move(result); } + void moveResult() { result_column = std::move(result); } }; - static inline bool allowArguments(const DataTypePtr & inner_type, const DataTypePtr & arg) + static bool allowArguments(const DataTypePtr & inner_type, const DataTypePtr & arg) { auto inner_type_decayed = removeNullable(removeLowCardinality(inner_type)); auto arg_decayed = removeNullable(removeLowCardinality(arg)); @@ -633,7 +633,7 @@ private: * (s1, s1, s2, ...), (s2, s1, s2, ...), (s3, s1, s2, ...) */ template - static inline ColumnPtr executeIntegral(const ColumnsWithTypeAndName & arguments) + static ColumnPtr executeIntegral(const ColumnsWithTypeAndName & arguments) { const ColumnArray * const left = checkAndGetColumn(arguments[0].column.get()); @@ -658,14 +658,14 @@ private: } template - static inline bool executeIntegral(ExecutionData& data) + static bool executeIntegral(ExecutionData& data) { return (executeIntegralExpanded(data) || ...); } /// Invoke executeIntegralImpl with such parameters: (A, other1), (A, other2), ... 
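/// Note: the unary right fold over '||' below short-circuits left to right, so the pack expansion stops at the first executeIntegralImpl instantiation that returns true.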
template - static inline bool executeIntegralExpanded(ExecutionData& data) + static bool executeIntegralExpanded(ExecutionData& data) { return (executeIntegralImpl(data) || ...); } diff --git a/src/Functions/array/arrayNorm.cpp b/src/Functions/array/arrayNorm.cpp index e87eff6add1..ca1e8f21aee 100644 --- a/src/Functions/array/arrayNorm.cpp +++ b/src/Functions/array/arrayNorm.cpp @@ -25,19 +25,19 @@ struct L1Norm struct ConstParams {}; template - inline static ResultType accumulate(ResultType result, ResultType value, const ConstParams &) + static ResultType accumulate(ResultType result, ResultType value, const ConstParams &) { return result + fabs(value); } template - inline static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) + static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) { return result + other_result; } template - inline static ResultType finalize(ResultType result, const ConstParams &) + static ResultType finalize(ResultType result, const ConstParams &) { return result; } @@ -50,19 +50,19 @@ struct L2Norm struct ConstParams {}; template - inline static ResultType accumulate(ResultType result, ResultType value, const ConstParams &) + static ResultType accumulate(ResultType result, ResultType value, const ConstParams &) { return result + value * value; } template - inline static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) + static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) { return result + other_result; } template - inline static ResultType finalize(ResultType result, const ConstParams &) + static ResultType finalize(ResultType result, const ConstParams &) { return sqrt(result); } @@ -73,7 +73,7 @@ struct L2SquaredNorm : L2Norm static constexpr auto name = "L2Squared"; template - inline static ResultType finalize(ResultType result, const ConstParams &) + static ResultType finalize(ResultType result, const ConstParams &) { return result; } @@ -91,19 +91,19 @@ struct LpNorm }; template - inline static ResultType accumulate(ResultType result, ResultType value, const ConstParams & params) + static ResultType accumulate(ResultType result, ResultType value, const ConstParams & params) { return result + static_cast(std::pow(fabs(value), params.power)); } template - inline static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) + static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) { return result + other_result; } template - inline static ResultType finalize(ResultType result, const ConstParams & params) + static ResultType finalize(ResultType result, const ConstParams & params) { return static_cast(std::pow(result, params.inverted_power)); } @@ -116,19 +116,19 @@ struct LinfNorm struct ConstParams {}; template - inline static ResultType accumulate(ResultType result, ResultType value, const ConstParams &) + static ResultType accumulate(ResultType result, ResultType value, const ConstParams &) { return fmax(result, fabs(value)); } template - inline static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) + static ResultType combine(ResultType result, ResultType other_result, const ConstParams &) { return fmax(result, other_result); } template - inline static ResultType finalize(ResultType result, const ConstParams &) + static ResultType finalize(ResultType result, const ConstParams &) { return result; } diff --git 
a/src/Functions/bitAnd.cpp b/src/Functions/bitAnd.cpp index 8efc5181919..c6ab9023142 100644 --- a/src/Functions/bitAnd.cpp +++ b/src/Functions/bitAnd.cpp @@ -20,7 +20,7 @@ struct BitAndImpl static constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { return static_cast(a) & static_cast(b); } @@ -28,7 +28,7 @@ struct BitAndImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { if (!left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitAndImpl expected an integral type"); diff --git a/src/Functions/bitBoolMaskAnd.cpp b/src/Functions/bitBoolMaskAnd.cpp index 11c0c1d1b7d..bd89b6eb69a 100644 --- a/src/Functions/bitBoolMaskAnd.cpp +++ b/src/Functions/bitBoolMaskAnd.cpp @@ -25,7 +25,7 @@ struct BitBoolMaskAndImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply([[maybe_unused]] A left, [[maybe_unused]] B right) + static Result apply([[maybe_unused]] A left, [[maybe_unused]] B right) { // Should be a logical error, but this function is callable from SQL. // Need to investigate this. diff --git a/src/Functions/bitBoolMaskOr.cpp b/src/Functions/bitBoolMaskOr.cpp index 7940bf3e2ca..1ddf2d258f8 100644 --- a/src/Functions/bitBoolMaskOr.cpp +++ b/src/Functions/bitBoolMaskOr.cpp @@ -25,7 +25,7 @@ struct BitBoolMaskOrImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply([[maybe_unused]] A left, [[maybe_unused]] B right) + static Result apply([[maybe_unused]] A left, [[maybe_unused]] B right) { if constexpr (!std::is_same_v || !std::is_same_v) // Should be a logical error, but this function is callable from SQL. diff --git a/src/Functions/bitCount.cpp b/src/Functions/bitCount.cpp index f1a3ac897c1..68555b1386c 100644 --- a/src/Functions/bitCount.cpp +++ b/src/Functions/bitCount.cpp @@ -13,7 +13,7 @@ struct BitCountImpl using ResultType = std::conditional_t<(sizeof(A) * 8 >= 256), UInt16, UInt8>; static constexpr bool allow_string_or_fixed_string = true; - static inline ResultType apply(A a) + static ResultType apply(A a) { /// We count bits in the value representation in memory. For example, we support floats. /// We need to avoid sign-extension when converting signed numbers to larger type. So, uint8_t(-1) has 8 bits. diff --git a/src/Functions/bitHammingDistance.cpp b/src/Functions/bitHammingDistance.cpp index f00f38b61af..f8a1a95ae14 100644 --- a/src/Functions/bitHammingDistance.cpp +++ b/src/Functions/bitHammingDistance.cpp @@ -19,7 +19,7 @@ struct BitHammingDistanceImpl static constexpr bool allow_string_integer = false; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a, B b) + static NO_SANITIZE_UNDEFINED Result apply(A a, B b) { /// Note: it's unspecified if signed integers should be promoted with sign-extension or with zero-fill. /// This behavior can change in the future. 
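The mechanical change running through the hunks above and below relies on two language rules: a function defined inside a class definition is implicitly inline, and a constexpr function is implicitly inline as well, so the explicit specifier is redundant. A minimal sketch of the equivalence, using a hypothetical Demo struct that is not part of this patch:

struct Demo
{
    // Defined inside the class body, so implicitly inline; 'static inline' adds nothing.
    static int twice(int x) { return 2 * x; }

    // constexpr functions are implicitly inline too, so 'inline constexpr' is redundant.
    static constexpr bool isSaturable() { return true; }
};

// Either definition can live in a header included by many translation units
// without violating the one-definition rule, exactly as if 'inline' were spelled out.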
diff --git a/src/Functions/bitNot.cpp b/src/Functions/bitNot.cpp index 62ebdc7c52a..44dc77bb7bb 100644 --- a/src/Functions/bitNot.cpp +++ b/src/Functions/bitNot.cpp @@ -19,7 +19,7 @@ struct BitNotImpl using ResultType = typename NumberTraits::ResultOfBitNot::Type; static constexpr bool allow_string_or_fixed_string = true; - static inline ResultType NO_SANITIZE_UNDEFINED apply(A a) + static ResultType NO_SANITIZE_UNDEFINED apply(A a) { return ~static_cast(a); } @@ -27,7 +27,7 @@ struct BitNotImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool) { if (!arg->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitNotImpl expected an integral type"); diff --git a/src/Functions/bitOr.cpp b/src/Functions/bitOr.cpp index 9e19fc55219..22ce15d892d 100644 --- a/src/Functions/bitOr.cpp +++ b/src/Functions/bitOr.cpp @@ -19,7 +19,7 @@ struct BitOrImpl static constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { return static_cast(a) | static_cast(b); } @@ -27,7 +27,7 @@ struct BitOrImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { if (!left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitOrImpl expected an integral type"); diff --git a/src/Functions/bitRotateLeft.cpp b/src/Functions/bitRotateLeft.cpp index c72466b8d49..2fe2c4e0f1d 100644 --- a/src/Functions/bitRotateLeft.cpp +++ b/src/Functions/bitRotateLeft.cpp @@ -20,7 +20,7 @@ struct BitRotateLeftImpl static const constexpr bool allow_string_integer = false; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) + static NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) { if constexpr (is_big_int_v || is_big_int_v) throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Bit rotate is not implemented for big integers"); @@ -32,7 +32,7 @@ struct BitRotateLeftImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { if (!left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitRotateLeftImpl expected an integral type"); diff --git a/src/Functions/bitRotateRight.cpp b/src/Functions/bitRotateRight.cpp index 045758f9a31..a2f0fe12324 100644 --- a/src/Functions/bitRotateRight.cpp +++ b/src/Functions/bitRotateRight.cpp @@ -20,7 +20,7 @@ struct BitRotateRightImpl static const constexpr bool allow_string_integer = false; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) + static NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) { if constexpr (is_big_int_v || is_big_int_v) throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Bit rotate is not implemented for big integers"); @@ -32,7 +32,7 @@ struct BitRotateRightImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value 
* left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { if (!left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitRotateRightImpl expected an integral type"); diff --git a/src/Functions/bitShiftLeft.cpp b/src/Functions/bitShiftLeft.cpp index 7b3748edb5c..c366a1ecb44 100644 --- a/src/Functions/bitShiftLeft.cpp +++ b/src/Functions/bitShiftLeft.cpp @@ -20,7 +20,7 @@ struct BitShiftLeftImpl static const constexpr bool allow_string_integer = true; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) + static NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) { if constexpr (is_big_int_v) throw Exception(ErrorCodes::NOT_IMPLEMENTED, "BitShiftLeft is not implemented for big integers as second argument"); @@ -145,7 +145,7 @@ struct BitShiftLeftImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { if (!left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitShiftLeftImpl expected an integral type"); diff --git a/src/Functions/bitShiftRight.cpp b/src/Functions/bitShiftRight.cpp index 21a0f7584aa..1c37cd3bf4c 100644 --- a/src/Functions/bitShiftRight.cpp +++ b/src/Functions/bitShiftRight.cpp @@ -21,7 +21,7 @@ struct BitShiftRightImpl static const constexpr bool allow_string_integer = true; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) + static NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) { if constexpr (is_big_int_v) throw Exception(ErrorCodes::NOT_IMPLEMENTED, "BitShiftRight is not implemented for big integers as second argument"); @@ -31,7 +31,7 @@ struct BitShiftRightImpl return static_cast(a) >> static_cast(b); } - static inline NO_SANITIZE_UNDEFINED void bitShiftRightForBytes(const UInt8 * op_pointer, const UInt8 * begin, UInt8 * out, const size_t shift_right_bits) + static NO_SANITIZE_UNDEFINED void bitShiftRightForBytes(const UInt8 * op_pointer, const UInt8 * begin, UInt8 * out, const size_t shift_right_bits) { while (op_pointer > begin) { @@ -123,7 +123,7 @@ struct BitShiftRightImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool is_signed) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool is_signed) { if (!left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitShiftRightImpl expected an integral type"); diff --git a/src/Functions/bitSwapLastTwo.cpp b/src/Functions/bitSwapLastTwo.cpp index d8957598c62..4ff436d5708 100644 --- a/src/Functions/bitSwapLastTwo.cpp +++ b/src/Functions/bitSwapLastTwo.cpp @@ -21,7 +21,7 @@ struct BitSwapLastTwoImpl using ResultType = UInt8; static constexpr const bool allow_string_or_fixed_string = false; - static inline ResultType NO_SANITIZE_UNDEFINED apply([[maybe_unused]] A a) + static ResultType NO_SANITIZE_UNDEFINED apply([[maybe_unused]] A a) { if constexpr (!std::is_same_v) // Should be a logical error, but this function is callable from SQL. 
@@ -35,7 +35,7 @@ struct BitSwapLastTwoImpl
 #if USE_EMBEDDED_COMPILER
     static constexpr bool compilable = true;
 
-    static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool)
+    static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool)
     {
         if (!arg->getType()->isIntegerTy())
             throw Exception(ErrorCodes::LOGICAL_ERROR, "__bitSwapLastTwo expected an integral type");
diff --git a/src/Functions/bitTest.cpp b/src/Functions/bitTest.cpp
index 4c9c6aa2dfb..78ec9c8b773 100644
--- a/src/Functions/bitTest.cpp
+++ b/src/Functions/bitTest.cpp
@@ -21,7 +21,7 @@ struct BitTestImpl
     static const constexpr bool allow_string_integer = false;
 
     template <typename Result = ResultType>
-    NO_SANITIZE_UNDEFINED static inline Result apply(A a [[maybe_unused]], B b [[maybe_unused]])
+    NO_SANITIZE_UNDEFINED static Result apply(A a [[maybe_unused]], B b [[maybe_unused]])
     {
         if constexpr (is_big_int_v<A> || is_big_int_v<B>)
             throw Exception(ErrorCodes::NOT_IMPLEMENTED, "bitTest is not implemented for big integers as second argument");
diff --git a/src/Functions/bitTestAll.cpp b/src/Functions/bitTestAll.cpp
index a2dcef3eb96..92f63bfa262 100644
--- a/src/Functions/bitTestAll.cpp
+++ b/src/Functions/bitTestAll.cpp
@@ -9,7 +9,7 @@ namespace
 struct BitTestAllImpl
 {
     template <typename A, typename B>
-    static inline UInt8 apply(A a, B b) { return (a & b) == b; }
+    static UInt8 apply(A a, B b) { return (a & b) == b; }
 };
 
 struct NameBitTestAll { static constexpr auto name = "bitTestAll"; };
diff --git a/src/Functions/bitTestAny.cpp b/src/Functions/bitTestAny.cpp
index 6b20d6c184c..c8f445d524e 100644
--- a/src/Functions/bitTestAny.cpp
+++ b/src/Functions/bitTestAny.cpp
@@ -9,7 +9,7 @@ namespace
 struct BitTestAnyImpl
 {
     template <typename A, typename B>
-    static inline UInt8 apply(A a, B b) { return (a & b) != 0; }
+    static UInt8 apply(A a, B b) { return (a & b) != 0; }
 };
 
 struct NameBitTestAny { static constexpr auto name = "bitTestAny"; };
diff --git a/src/Functions/bitWrapperFunc.cpp b/src/Functions/bitWrapperFunc.cpp
index 99c06172c30..d243a6724a8 100644
--- a/src/Functions/bitWrapperFunc.cpp
+++ b/src/Functions/bitWrapperFunc.cpp
@@ -21,7 +21,7 @@ struct BitWrapperFuncImpl
     using ResultType = UInt8;
     static constexpr const bool allow_string_or_fixed_string = false;
 
-    static inline ResultType NO_SANITIZE_UNDEFINED apply(A a [[maybe_unused]])
+    static ResultType NO_SANITIZE_UNDEFINED apply(A a [[maybe_unused]])
     {
         // Should be a logical error, but this function is callable from SQL.
         // Need to investigate this.
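For reference, the two predicates in the bitTestAll/bitTestAny hunks above differ only in the final comparison: (a & b) == b holds when every bit of the mask b is set in a, while (a & b) != 0 holds when at least one is. A small standalone sketch (hypothetical values, not from this patch):

#include <cassert>
#include <cstdint>

// Same predicates as BitTestAllImpl / BitTestAnyImpl, on fixed-width types.
static bool testAll(uint64_t a, uint64_t b) { return (a & b) == b; }
static bool testAny(uint64_t a, uint64_t b) { return (a & b) != 0; }

int main()
{
    assert(testAll(0b1011, 0b0011));   // bits 0 and 1 are both set in a
    assert(!testAll(0b1011, 0b0110));  // bit 2 of the mask is missing in a
    assert(testAny(0b1011, 0b0110));   // bit 1 is shared
    assert(!testAny(0b1000, 0b0011));  // no overlap at all
}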
diff --git a/src/Functions/bitXor.cpp b/src/Functions/bitXor.cpp index 78c4c64d06e..43004c6f676 100644 --- a/src/Functions/bitXor.cpp +++ b/src/Functions/bitXor.cpp @@ -19,7 +19,7 @@ struct BitXorImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { return static_cast(a) ^ static_cast(b); } @@ -27,7 +27,7 @@ struct BitXorImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { if (!left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "BitXorImpl expected an integral type"); diff --git a/src/Functions/dateName.cpp b/src/Functions/dateName.cpp index 4d7a4f0b53d..c06dfe15dc4 100644 --- a/src/Functions/dateName.cpp +++ b/src/Functions/dateName.cpp @@ -214,7 +214,7 @@ private: template struct QuarterWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { writeText(ToQuarterImpl::execute(source, timezone), buffer); } @@ -223,7 +223,7 @@ private: template struct MonthWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { const auto month = ToMonthImpl::execute(source, timezone); static constexpr std::string_view month_names[] = @@ -249,7 +249,7 @@ private: template struct WeekWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { writeText(ToISOWeekImpl::execute(source, timezone), buffer); } @@ -258,7 +258,7 @@ private: template struct DayOfYearWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { writeText(ToDayOfYearImpl::execute(source, timezone), buffer); } @@ -267,7 +267,7 @@ private: template struct DayWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { writeText(ToDayOfMonthImpl::execute(source, timezone), buffer); } @@ -276,7 +276,7 @@ private: template struct WeekDayWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { const auto day = ToDayOfWeekImpl::execute(source, 0, timezone); static constexpr std::string_view day_names[] = @@ -297,7 +297,7 @@ private: template struct HourWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { writeText(ToHourImpl::execute(source, timezone), buffer); } @@ -306,7 +306,7 @@ private: template struct MinuteWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { writeText(ToMinuteImpl::execute(source, timezone), buffer); } @@ -315,7 +315,7 @@ private: template struct 
SecondWriter { - static inline void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) + static void write(WriteBuffer & buffer, Time source, const DateLUTImpl & timezone) { writeText(ToSecondImpl::execute(source, timezone), buffer); } diff --git a/src/Functions/divide.cpp b/src/Functions/divide.cpp index ca552256cd1..7c67245c382 100644 --- a/src/Functions/divide.cpp +++ b/src/Functions/divide.cpp @@ -16,7 +16,7 @@ struct DivideFloatingImpl static const constexpr bool allow_string_integer = false; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) + static NO_SANITIZE_UNDEFINED Result apply(A a [[maybe_unused]], B b [[maybe_unused]]) { return static_cast(a) / b; } @@ -24,7 +24,7 @@ struct DivideFloatingImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { if (left->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "DivideFloatingImpl expected a floating-point type"); diff --git a/src/Functions/divideDecimal.cpp b/src/Functions/divideDecimal.cpp index 1d0db232062..c8d2c5edc8a 100644 --- a/src/Functions/divideDecimal.cpp +++ b/src/Functions/divideDecimal.cpp @@ -18,7 +18,7 @@ struct DivideDecimalsImpl static constexpr auto name = "divideDecimal"; template - static inline Decimal256 + static Decimal256 execute(FirstType a, SecondType b, UInt16 scale_a, UInt16 scale_b, UInt16 result_scale) { if (b.value == 0) diff --git a/src/Functions/factorial.cpp b/src/Functions/factorial.cpp index b814e8198e6..7ff9126c004 100644 --- a/src/Functions/factorial.cpp +++ b/src/Functions/factorial.cpp @@ -19,7 +19,7 @@ struct FactorialImpl static const constexpr bool allow_decimal = false; static const constexpr bool allow_string_or_fixed_string = false; - static inline NO_SANITIZE_UNDEFINED ResultType apply(A a) + static NO_SANITIZE_UNDEFINED ResultType apply(A a) { if constexpr (std::is_floating_point_v || is_over_big_int) throw Exception( diff --git a/src/Functions/generateSnowflakeID.cpp b/src/Functions/generateSnowflakeID.cpp index c3f7701a05a..f1e47ea1158 100644 --- a/src/Functions/generateSnowflakeID.cpp +++ b/src/Functions/generateSnowflakeID.cpp @@ -123,61 +123,37 @@ SnowflakeIdRange getRangeOfAvailableIds(const SnowflakeId & available, size_t in return {begin, end}; } -struct GlobalCounterPolicy +struct Data { - static constexpr auto name = "generateSnowflakeID"; - static constexpr auto description = R"(Generates a Snowflake ID. The generated Snowflake ID contains the current Unix timestamp in milliseconds 41 (+ 1 top zero bit) bits, followed by machine id (10 bits), a counter (12 bits) to distinguish IDs within a millisecond. For any given timestamp (unix_ts_ms), the counter starts at 0 and is incremented by 1 for each new Snowflake ID until the timestamp changes. In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to 0. Function generateSnowflakeID guarantees that the counter field within a timestamp increments monotonically across all function invocations in concurrently running threads and queries.)"; - /// Guarantee counter monotonicity within one timestamp across all threads generating Snowflake IDs simultaneously. 
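/// The retained Data (just below) reserves ids atomically: reserveRange() loads the lowest free id, sizes the [begin, end) range for input_rows_count ids, and publishes 'end' via compare_exchange_weak, retrying from the refreshed value whenever another thread wins the race.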
- struct Data + static inline std::atomic lowest_available_snowflake_id = 0; + + SnowflakeId reserveRange(size_t input_rows_count) { - static inline std::atomic lowest_available_snowflake_id = 0; - - SnowflakeId reserveRange(size_t input_rows_count) + uint64_t available_snowflake_id = lowest_available_snowflake_id.load(); + SnowflakeIdRange range; + do { - uint64_t available_snowflake_id = lowest_available_snowflake_id.load(); - SnowflakeIdRange range; - do - { - range = getRangeOfAvailableIds(toSnowflakeId(available_snowflake_id), input_rows_count); - } - while (!lowest_available_snowflake_id.compare_exchange_weak(available_snowflake_id, fromSnowflakeId(range.end))); - /// if CAS failed --> another thread updated `lowest_available_snowflake_id` and we re-try - /// else --> our thread reserved ID range [begin, end) and return the beginning of the range - - return range.begin; + range = getRangeOfAvailableIds(toSnowflakeId(available_snowflake_id), input_rows_count); } - }; -}; + while (!lowest_available_snowflake_id.compare_exchange_weak(available_snowflake_id, fromSnowflakeId(range.end))); + /// CAS failed --> another thread updated `lowest_available_snowflake_id` and we re-try + /// else --> our thread reserved ID range [begin, end) and return the beginning of the range -struct ThreadLocalCounterPolicy -{ - static constexpr auto name = "generateSnowflakeIDThreadMonotonic"; - static constexpr auto description = R"(Generates a Snowflake ID. The generated Snowflake ID contains the current Unix timestamp in milliseconds 41 (+ 1 top zero bit) bits, followed by machine id (10 bits), a counter (12 bits) to distinguish IDs within a millisecond. For any given timestamp (unix_ts_ms), the counter starts at 0 and is incremented by 1 for each new Snowflake ID until the timestamp changes. In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to 0. This function behaves like generateSnowflakeID but gives no guarantee on counter monotony across different simultaneous requests. Monotonicity within one timestamp is guaranteed only within the same thread calling this function to generate Snowflake IDs.)"; - - /// Guarantee counter monotonicity within one timestamp within the same thread. Faster than GlobalCounterPolicy if a query uses multiple threads. 
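/// (The thread-local policy below, and the generateSnowflakeIDThreadMonotonic registration further down, are removed: per its own description it gave no cross-thread monotonicity, so only the process-wide atomic counter above survives.)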
- struct Data - { - static inline thread_local uint64_t lowest_available_snowflake_id = 0; - - SnowflakeId reserveRange(size_t input_rows_count) - { - SnowflakeIdRange range = getRangeOfAvailableIds(toSnowflakeId(lowest_available_snowflake_id), input_rows_count); - lowest_available_snowflake_id = fromSnowflakeId(range.end); - return range.begin; - } - }; + return range.begin; + } }; } -template -class FunctionGenerateSnowflakeID : public IFunction, public FillPolicy +class FunctionGenerateSnowflakeID : public IFunction { public: + static constexpr auto name = "generateSnowflakeID"; + static FunctionPtr create(ContextPtr /*context*/) { return std::make_shared(); } - String getName() const override { return FillPolicy::name; } + String getName() const override { return name; } size_t getNumberOfArguments() const override { return 0; } bool isDeterministic() const override { return false; } bool isDeterministicInScopeOfQuery() const override { return false; } @@ -205,7 +181,7 @@ public: { vec_to.resize(input_rows_count); - typename FillPolicy::Data data; + Data data; SnowflakeId snowflake_id = data.reserveRange(input_rows_count); /// returns begin of available snowflake ids range for (UInt64 & to_row : vec_to) @@ -229,27 +205,16 @@ public: }; -template -void registerSnowflakeIDGenerator(auto & factory) -{ - static constexpr auto doc_syntax_format = "{}([expression])"; - static constexpr auto example_format = "SELECT {}()"; - static constexpr auto multiple_example_format = "SELECT {f}(1), {f}(2)"; - - FunctionDocumentation::Description description = FillPolicy::description; - FunctionDocumentation::Syntax syntax = fmt::format(doc_syntax_format, FillPolicy::name); - FunctionDocumentation::Arguments arguments = {{"expression", "The expression is used to bypass common subexpression elimination if the function is called multiple times in a query but otherwise ignored. Optional."}}; - FunctionDocumentation::ReturnedValue returned_value = "A value of type UInt64"; - FunctionDocumentation::Examples examples = {{"single", fmt::format(example_format, FillPolicy::name), ""}, {"multiple", fmt::format(multiple_example_format, fmt::arg("f", FillPolicy::name)), ""}}; - FunctionDocumentation::Categories categories = {"Snowflake ID"}; - - factory.template registerFunction>({description, syntax, arguments, returned_value, examples, categories}, FunctionFactory::CaseInsensitive); -} - REGISTER_FUNCTION(GenerateSnowflakeID) { - registerSnowflakeIDGenerator(factory); - registerSnowflakeIDGenerator(factory); + FunctionDocumentation::Description description = R"(Generates a Snowflake ID. The generated Snowflake ID contains the current Unix timestamp in milliseconds 41 (+ 1 top zero bit) bits, followed by machine id (10 bits), a counter (12 bits) to distinguish IDs within a millisecond. For any given timestamp (unix_ts_ms), the counter starts at 0 and is incremented by 1 for each new Snowflake ID until the timestamp changes. In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to 0. Function generateSnowflakeID guarantees that the counter field within a timestamp increments monotonically across all function invocations in concurrently running threads and queries.)"; + FunctionDocumentation::Syntax syntax = "generateSnowflakeID([expression])"; + FunctionDocumentation::Arguments arguments = {{"expression", "The expression is used to bypass common subexpression elimination if the function is called multiple times in a query but otherwise ignored. 
Optional."}}; + FunctionDocumentation::ReturnedValue returned_value = "A value of type UInt64"; + FunctionDocumentation::Examples examples = {{"single", "SELECT generateSnowflakeID()", "7201148511606784000"}, {"multiple", "SELECT generateSnowflakeID(1), generateSnowflakeID(2)", ""}}; + FunctionDocumentation::Categories categories = {"Snowflake ID"}; + + factory.registerFunction({description, syntax, arguments, returned_value, examples, categories}); } } diff --git a/src/Functions/generateUUIDv7.cpp b/src/Functions/generateUUIDv7.cpp index f2a82431c0a..b226c0840f4 100644 --- a/src/Functions/generateUUIDv7.cpp +++ b/src/Functions/generateUUIDv7.cpp @@ -73,20 +73,6 @@ void setVariant(UUID & uuid) UUIDHelpers::getLowBytes(uuid) = (UUIDHelpers::getLowBytes(uuid) & rand_b_bits_mask) | variant_2_mask; } -struct FillAllRandomPolicy -{ - static constexpr auto name = "generateUUIDv7NonMonotonic"; - static constexpr auto description = R"(Generates a UUID of version 7. The generated UUID contains the current Unix timestamp in milliseconds (48 bits), followed by version "7" (4 bits), and a random field (74 bit, including a 2-bit variant field "2") to distinguish UUIDs within a millisecond. This function is the fastest generateUUIDv7* function but it gives no monotonicity guarantees within a timestamp.)"; - struct Data - { - void generate(UUID & uuid, uint64_t ts) - { - setTimestampAndVersion(uuid, ts); - setVariant(uuid); - } - }; -}; - struct CounterFields { uint64_t last_timestamp = 0; @@ -133,44 +119,21 @@ struct CounterFields }; -struct GlobalCounterPolicy +struct Data { - static constexpr auto name = "generateUUIDv7"; - static constexpr auto description = R"(Generates a UUID of version 7. The generated UUID contains the current Unix timestamp in milliseconds (48 bits), followed by version "7" (4 bits), a counter (42 bit, including a variant field "2", 2 bit) to distinguish UUIDs within a millisecond, and a random field (32 bits). For any given timestamp (unix_ts_ms), the counter starts at a random value and is incremented by 1 for each new UUID until the timestamp changes. In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to a random new start value. Function generateUUIDv7 guarantees that the counter field within a timestamp increments monotonically across all function invocations in concurrently running threads and queries.)"; - /// Guarantee counter monotonicity within one timestamp across all threads generating UUIDv7 simultaneously. - struct Data + static inline CounterFields fields; + static inline SharedMutex mutex; /// works a little bit faster than std::mutex here + std::lock_guard guard; + + Data() + : guard(mutex) + {} + + void generate(UUID & uuid, uint64_t timestamp) { - static inline CounterFields fields; - static inline SharedMutex mutex; /// works a little bit faster than std::mutex here - std::lock_guard guard; - - Data() - : guard(mutex) - {} - - void generate(UUID & uuid, uint64_t timestamp) - { - fields.generate(uuid, timestamp); - } - }; -}; - -struct ThreadLocalCounterPolicy -{ - static constexpr auto name = "generateUUIDv7ThreadMonotonic"; - static constexpr auto description = R"(Generates a UUID of version 7. The generated UUID contains the current Unix timestamp in milliseconds (48 bits), followed by version "7" (4 bits), a counter (42 bit, including a variant field "2", 2 bit) to distinguish UUIDs within a millisecond, and a random field (32 bits). 
For any given timestamp (unix_ts_ms), the counter starts at a random value and is incremented by 1 for each new UUID until the timestamp changes. In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to a random new start value. This function behaves like generateUUIDv7 but gives no guarantee on counter monotony across different simultaneous requests. Monotonicity within one timestamp is guaranteed only within the same thread calling this function to generate UUIDs.)"; - - /// Guarantee counter monotonicity within one timestamp within the same thread. Faster than GlobalCounterPolicy if a query uses multiple threads. - struct Data - { - static inline thread_local CounterFields fields; - - void generate(UUID & uuid, uint64_t timestamp) - { - fields.generate(uuid, timestamp); - } - }; + fields.generate(uuid, timestamp); + } }; } @@ -181,11 +144,12 @@ DECLARE_AVX2_SPECIFIC_CODE(__VA_ARGS__) DECLARE_SEVERAL_IMPLEMENTATIONS( -template -class FunctionGenerateUUIDv7Base : public IFunction, public FillPolicy +class FunctionGenerateUUIDv7Base : public IFunction { public: - String getName() const final { return FillPolicy::name; } + static constexpr auto name = "generateUUIDv7"; + + String getName() const final { return name; } size_t getNumberOfArguments() const final { return 0; } bool isDeterministic() const override { return false; } bool isDeterministicInScopeOfQuery() const final { return false; } @@ -221,7 +185,7 @@ public: uint64_t timestamp = getTimestampMillisecond(); for (UUID & uuid : vec_to) { - typename FillPolicy::Data data; + Data data; data.generate(uuid, timestamp); } } @@ -231,19 +195,18 @@ public: ) // DECLARE_SEVERAL_IMPLEMENTATIONS #undef DECLARE_SEVERAL_IMPLEMENTATIONS -template -class FunctionGenerateUUIDv7Base : public TargetSpecific::Default::FunctionGenerateUUIDv7Base +class FunctionGenerateUUIDv7Base : public TargetSpecific::Default::FunctionGenerateUUIDv7Base { public: - using Self = FunctionGenerateUUIDv7Base; - using Parent = TargetSpecific::Default::FunctionGenerateUUIDv7Base; + using Self = FunctionGenerateUUIDv7Base; + using Parent = TargetSpecific::Default::FunctionGenerateUUIDv7Base; explicit FunctionGenerateUUIDv7Base(ContextPtr context) : selector(context) { selector.registerImplementation(); #if USE_MULTITARGET_CODE - using ParentAVX2 = TargetSpecific::AVX2::FunctionGenerateUUIDv7Base; + using ParentAVX2 = TargetSpecific::AVX2::FunctionGenerateUUIDv7Base; selector.registerImplementation(); #endif } @@ -262,27 +225,16 @@ private: ImplementationSelector selector; }; -template -void registerUUIDv7Generator(auto & factory) -{ - static constexpr auto doc_syntax_format = "{}([expression])"; - static constexpr auto example_format = "SELECT {}()"; - static constexpr auto multiple_example_format = "SELECT {f}(1), {f}(2)"; - - FunctionDocumentation::Description description = FillPolicy::description; - FunctionDocumentation::Syntax syntax = fmt::format(doc_syntax_format, FillPolicy::name); - FunctionDocumentation::Arguments arguments = {{"expression", "The expression is used to bypass common subexpression elimination if the function is called multiple times in a query but otherwise ignored. 
Optional."}}; - FunctionDocumentation::ReturnedValue returned_value = "A value of type UUID version 7."; - FunctionDocumentation::Examples examples = {{"single", fmt::format(example_format, FillPolicy::name), ""}, {"multiple", fmt::format(multiple_example_format, fmt::arg("f", FillPolicy::name)), ""}}; - FunctionDocumentation::Categories categories = {"UUID"}; - - factory.template registerFunction>({description, syntax, arguments, returned_value, examples, categories}, FunctionFactory::CaseInsensitive); -} - REGISTER_FUNCTION(GenerateUUIDv7) { - registerUUIDv7Generator(factory); - registerUUIDv7Generator(factory); - registerUUIDv7Generator(factory); + FunctionDocumentation::Description description = R"(Generates a UUID of version 7. The generated UUID contains the current Unix timestamp in milliseconds (48 bits), followed by version "7" (4 bits), a counter (42 bit, including a variant field "2", 2 bit) to distinguish UUIDs within a millisecond, and a random field (32 bits). For any given timestamp (unix_ts_ms), the counter starts at a random value and is incremented by 1 for each new UUID until the timestamp changes. In case the counter overflows, the timestamp field is incremented by 1 and the counter is reset to a random new start value. Function generateUUIDv7 guarantees that the counter field within a timestamp increments monotonically across all function invocations in concurrently running threads and queries.)"; + FunctionDocumentation::Syntax syntax = "SELECT generateUUIDv7()"; + FunctionDocumentation::Arguments arguments = {{"expression", "The expression is used to bypass common subexpression elimination if the function is called multiple times in a query but otherwise ignored. Optional."}}; + FunctionDocumentation::ReturnedValue returned_value = "A value of type UUID version 7."; + FunctionDocumentation::Examples examples = {{"single", "SELECT generateUUIDv7()", ""}, {"multiple", "SELECT generateUUIDv7(1), generateUUIDv7(2)", ""}}; + FunctionDocumentation::Categories categories = {"UUID"}; + + factory.registerFunction({description, syntax, arguments, returned_value, examples, categories}); } + } diff --git a/src/Functions/greatCircleDistance.cpp b/src/Functions/greatCircleDistance.cpp index 1c12317f510..1bd71f19f76 100644 --- a/src/Functions/greatCircleDistance.cpp +++ b/src/Functions/greatCircleDistance.cpp @@ -94,13 +94,13 @@ struct Impl } } - static inline NO_SANITIZE_UNDEFINED size_t toIndex(T x) + static NO_SANITIZE_UNDEFINED size_t toIndex(T x) { /// Implementation specific behaviour on overflow or infinite value. 
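/// Converting a floating-point value that is out of range for size_t (including NaN and infinities) is undefined behaviour in standard C++; NO_SANITIZE_UNDEFINED above deliberately exempts this lookup-table fast path from UBSan.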
return static_cast(x); } - static inline T degDiff(T f) + static T degDiff(T f) { f = std::abs(f); if (f > 180) @@ -108,7 +108,7 @@ struct Impl return f; } - inline T fastCos(T x) + T fastCos(T x) { T y = std::abs(x) * (T(COS_LUT_SIZE) / T(PI) / T(2.0)); size_t i = toIndex(y); @@ -117,7 +117,7 @@ struct Impl return cos_lut[i] + (cos_lut[i + 1] - cos_lut[i]) * y; } - inline T fastSin(T x) + T fastSin(T x) { T y = std::abs(x) * (T(COS_LUT_SIZE) / T(PI) / T(2.0)); size_t i = toIndex(y); @@ -128,7 +128,7 @@ struct Impl /// fast implementation of asin(sqrt(x)) /// max error in floats 0.00369%, in doubles 0.00072% - inline T fastAsinSqrt(T x) + T fastAsinSqrt(T x) { if (x < T(0.122)) { diff --git a/src/Functions/greatest.cpp b/src/Functions/greatest.cpp index 93fd7e24853..87a48c887b4 100644 --- a/src/Functions/greatest.cpp +++ b/src/Functions/greatest.cpp @@ -15,7 +15,7 @@ struct GreatestBaseImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { return static_cast(a) > static_cast(b) ? static_cast(a) : static_cast(b); @@ -24,7 +24,7 @@ struct GreatestBaseImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool is_signed) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool is_signed) { if (!left->getType()->isIntegerTy()) { @@ -46,7 +46,7 @@ struct GreatestSpecialImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { static_assert(std::is_same_v, "ResultType != Result"); return accurate::greaterOp(a, b) ? static_cast(a) : static_cast(b); diff --git a/src/Functions/h3GetUnidirectionalEdge.cpp b/src/Functions/h3GetUnidirectionalEdge.cpp index 4e41cdbfef6..9e253e87104 100644 --- a/src/Functions/h3GetUnidirectionalEdge.cpp +++ b/src/Functions/h3GetUnidirectionalEdge.cpp @@ -108,7 +108,7 @@ public: /// suppress asan errors generated by the following: /// 'NEW_ADJUSTMENT_III' defined in '../contrib/h3/src/h3lib/lib/algos.c:142:24 /// 'NEW_DIGIT_III' defined in '../contrib/h3/src/h3lib/lib/algos.c:121:24 - __attribute__((no_sanitize_address)) static inline UInt64 getUnidirectionalEdge(const UInt64 origin, const UInt64 dest) + __attribute__((no_sanitize_address)) static UInt64 getUnidirectionalEdge(const UInt64 origin, const UInt64 dest) { const UInt64 res = cellsToDirectedEdge(origin, dest); return res; diff --git a/src/Functions/initialQueryID.cpp b/src/Functions/initialQueryID.cpp index 469f37cf614..9c9390d4e50 100644 --- a/src/Functions/initialQueryID.cpp +++ b/src/Functions/initialQueryID.cpp @@ -19,16 +19,16 @@ public: explicit FunctionInitialQueryID(const String & initial_query_id_) : initial_query_id(initial_query_id_) {} - inline String getName() const override { return name; } + String getName() const override { return name; } - inline size_t getNumberOfArguments() const override { return 0; } + size_t getNumberOfArguments() const override { return 0; } DataTypePtr getReturnTypeImpl(const DataTypes & /*arguments*/) const override { return std::make_shared(); } - inline bool isDeterministic() const override { return false; } + bool isDeterministic() const override { return false; } bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return false; } diff --git a/src/Functions/intDiv.cpp 
b/src/Functions/intDiv.cpp index 38939556fa5..6b5bb00eacd 100644 --- a/src/Functions/intDiv.cpp +++ b/src/Functions/intDiv.cpp @@ -80,7 +80,7 @@ struct DivideIntegralByConstantImpl private: template - static inline void apply(const A * __restrict a, const B * __restrict b, ResultType * __restrict c, size_t i) + static void apply(const A * __restrict a, const B * __restrict b, ResultType * __restrict c, size_t i) { if constexpr (op_case == OpCase::Vector) c[i] = Op::template apply(a[i], b[i]); diff --git a/src/Functions/intDivOrZero.cpp b/src/Functions/intDivOrZero.cpp index 96ff6ea80fc..f32eac17127 100644 --- a/src/Functions/intDivOrZero.cpp +++ b/src/Functions/intDivOrZero.cpp @@ -13,7 +13,7 @@ struct DivideIntegralOrZeroImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { if (unlikely(divisionLeadsToFPE(a, b))) return 0; diff --git a/src/Functions/intExp10.cpp b/src/Functions/intExp10.cpp index 6944c4701bc..733f9d55702 100644 --- a/src/Functions/intExp10.cpp +++ b/src/Functions/intExp10.cpp @@ -19,7 +19,7 @@ struct IntExp10Impl using ResultType = UInt64; static constexpr const bool allow_string_or_fixed_string = false; - static inline ResultType apply([[maybe_unused]] A a) + static ResultType apply([[maybe_unused]] A a) { if constexpr (is_big_int_v || std::is_same_v) throw DB::Exception(ErrorCodes::NOT_IMPLEMENTED, "IntExp10 is not implemented for big integers"); diff --git a/src/Functions/intExp2.cpp b/src/Functions/intExp2.cpp index 4e5cc60a731..7e016a0dbd2 100644 --- a/src/Functions/intExp2.cpp +++ b/src/Functions/intExp2.cpp @@ -20,7 +20,7 @@ struct IntExp2Impl using ResultType = UInt64; static constexpr bool allow_string_or_fixed_string = false; - static inline ResultType apply([[maybe_unused]] A a) + static ResultType apply([[maybe_unused]] A a) { if constexpr (is_big_int_v) throw DB::Exception(ErrorCodes::NOT_IMPLEMENTED, "intExp2 not implemented for big integers"); @@ -31,7 +31,7 @@ struct IntExp2Impl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool) { if (!arg->getType()->isIntegerTy()) throw Exception(ErrorCodes::LOGICAL_ERROR, "IntExp2Impl expected an integral type"); diff --git a/src/Functions/isValidUTF8.cpp b/src/Functions/isValidUTF8.cpp index e7aba672356..d5f5e6a8986 100644 --- a/src/Functions/isValidUTF8.cpp +++ b/src/Functions/isValidUTF8.cpp @@ -65,9 +65,9 @@ SOFTWARE. 
 */
 #ifndef __SSE4_1__
-    static inline UInt8 isValidUTF8(const UInt8 * data, UInt64 len) { return DB::UTF8::isValidUTF8(data, len); }
+    static UInt8 isValidUTF8(const UInt8 * data, UInt64 len) { return DB::UTF8::isValidUTF8(data, len); }
 #else
-    static inline UInt8 isValidUTF8(const UInt8 * data, UInt64 len)
+    static UInt8 isValidUTF8(const UInt8 * data, UInt64 len)
     {
         /*
          * Map high nibble of "First Byte" to legal character length minus 1
diff --git a/src/Functions/jumpConsistentHash.cpp b/src/Functions/jumpConsistentHash.cpp
index ffc21eb5cea..fbac5d4fdd5 100644
--- a/src/Functions/jumpConsistentHash.cpp
+++ b/src/Functions/jumpConsistentHash.cpp
@@ -29,7 +29,7 @@ struct JumpConsistentHashImpl
     using BucketsType = ResultType;
     static constexpr auto max_buckets = static_cast<UInt64>(std::numeric_limits<BucketsType>::max());
 
-    static inline ResultType apply(UInt64 hash, BucketsType n)
+    static ResultType apply(UInt64 hash, BucketsType n)
     {
         return JumpConsistentHash(hash, n);
     }
diff --git a/src/Functions/kostikConsistentHash.cpp b/src/Functions/kostikConsistentHash.cpp
index 47a9a928976..42004ed40d9 100644
--- a/src/Functions/kostikConsistentHash.cpp
+++ b/src/Functions/kostikConsistentHash.cpp
@@ -17,7 +17,7 @@ struct KostikConsistentHashImpl
     using BucketsType = ResultType;
     static constexpr auto max_buckets = 32768;
 
-    static inline ResultType apply(UInt64 hash, BucketsType n)
+    static ResultType apply(UInt64 hash, BucketsType n)
     {
         return ConsistentHashing(hash, n);
     }
diff --git a/src/Functions/least.cpp b/src/Functions/least.cpp
index f5680d4d468..babb8378d80 100644
--- a/src/Functions/least.cpp
+++ b/src/Functions/least.cpp
@@ -15,7 +15,7 @@ struct LeastBaseImpl
     static const constexpr bool allow_string_integer = false;
 
     template <typename Result = ResultType>
-    static inline Result apply(A a, B b)
+    static Result apply(A a, B b)
     {
         /** gcc 4.9.2 successfully vectorizes a loop from this function. */
         return static_cast<Result>(a) < static_cast<Result>(b) ? static_cast<Result>(a) : static_cast<Result>(b);
@@ -24,7 +24,7 @@
 #if USE_EMBEDDED_COMPILER
     static constexpr bool compilable = true;
 
-    static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool is_signed)
+    static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool is_signed)
     {
         if (!left->getType()->isIntegerTy())
         {
@@ -46,7 +46,7 @@ struct LeastSpecialImpl
     static const constexpr bool allow_string_integer = false;
 
     template <typename Result = ResultType>
-    static inline Result apply(A a, B b)
+    static Result apply(A a, B b)
     {
         static_assert(std::is_same_v<Result, ResultType>, "ResultType != Result");
         return accurate::lessOp(a, b) ? static_cast<Result>(a) : static_cast<Result>(b);
diff --git a/src/Functions/minus.cpp b/src/Functions/minus.cpp
index 04877a42b18..f3b9b8a7bcb 100644
--- a/src/Functions/minus.cpp
+++ b/src/Functions/minus.cpp
@@ -13,7 +13,7 @@ struct MinusImpl
     static const constexpr bool allow_string_integer = false;
 
     template <typename Result = ResultType>
-    static inline NO_SANITIZE_UNDEFINED Result apply(A a, B b)
+    static NO_SANITIZE_UNDEFINED Result apply(A a, B b)
     {
         if constexpr (is_big_int_v<A> || is_big_int_v<B>)
         {
@@ -28,7 +28,7 @@
     /// Apply operation and check overflow. It's used for Decimal operations. @returns true if overflowed, false otherwise.
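    /// common::subOverflow follows the __builtin_sub_overflow pattern: the possibly wrapped difference is written to c and the return value reports whether overflow occurred.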
template - static inline bool apply(A a, B b, Result & c) + static bool apply(A a, B b, Result & c) { return common::subOverflow(static_cast(a), b, c); } @@ -36,7 +36,7 @@ struct MinusImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { return left->getType()->isIntegerTy() ? b.CreateSub(left, right) : b.CreateFSub(left, right); } diff --git a/src/Functions/modulo.cpp b/src/Functions/modulo.cpp index cbc2ec2cd0a..ebc1c4f5275 100644 --- a/src/Functions/modulo.cpp +++ b/src/Functions/modulo.cpp @@ -105,7 +105,7 @@ struct ModuloByConstantImpl private: template - static inline void apply(const A * __restrict a, const B * __restrict b, ResultType * __restrict c, size_t i) + static void apply(const A * __restrict a, const B * __restrict b, ResultType * __restrict c, size_t i) { if constexpr (op_case == OpCase::Vector) c[i] = Op::template apply(a[i], b[i]); diff --git a/src/Functions/moduloOrZero.cpp b/src/Functions/moduloOrZero.cpp index 3551ae74c5f..cd7873b3b9e 100644 --- a/src/Functions/moduloOrZero.cpp +++ b/src/Functions/moduloOrZero.cpp @@ -15,7 +15,7 @@ struct ModuloOrZeroImpl static const constexpr bool allow_string_integer = false; template - static inline Result apply(A a, B b) + static Result apply(A a, B b) { if constexpr (std::is_floating_point_v) { diff --git a/src/Functions/multiply.cpp b/src/Functions/multiply.cpp index 4dc8cd10f31..67b6fff6b58 100644 --- a/src/Functions/multiply.cpp +++ b/src/Functions/multiply.cpp @@ -14,7 +14,7 @@ struct MultiplyImpl static const constexpr bool allow_string_integer = false; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a, B b) + static NO_SANITIZE_UNDEFINED Result apply(A a, B b) { if constexpr (is_big_int_v || is_big_int_v) { @@ -29,7 +29,7 @@ struct MultiplyImpl /// Apply operation and check overflow. It's used for Decimal operations. @returns true if overflowed, false otherwise. template - static inline bool apply(A a, B b, Result & c) + static bool apply(A a, B b, Result & c) { if constexpr (std::is_same_v || std::is_same_v) { @@ -43,7 +43,7 @@ struct MultiplyImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { return left->getType()->isIntegerTy() ? 
b.CreateMul(left, right) : b.CreateFMul(left, right); } diff --git a/src/Functions/multiplyDecimal.cpp b/src/Functions/multiplyDecimal.cpp index ed6487c6683..7e30a893d72 100644 --- a/src/Functions/multiplyDecimal.cpp +++ b/src/Functions/multiplyDecimal.cpp @@ -17,7 +17,7 @@ struct MultiplyDecimalsImpl static constexpr auto name = "multiplyDecimal"; template - static inline Decimal256 + static Decimal256 execute(FirstType a, SecondType b, UInt16 scale_a, UInt16 scale_b, UInt16 result_scale) { if (a.value == 0 || b.value == 0) diff --git a/src/Functions/negate.cpp b/src/Functions/negate.cpp index bd47780dea8..2c9b461274d 100644 --- a/src/Functions/negate.cpp +++ b/src/Functions/negate.cpp @@ -11,7 +11,7 @@ struct NegateImpl using ResultType = std::conditional_t, A, typename NumberTraits::ResultOfNegate::Type>; static constexpr const bool allow_string_or_fixed_string = false; - static inline NO_SANITIZE_UNDEFINED ResultType apply(A a) + static NO_SANITIZE_UNDEFINED ResultType apply(A a) { return -static_cast(a); } @@ -19,7 +19,7 @@ struct NegateImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * arg, bool) { return arg->getType()->isIntegerTy() ? b.CreateNeg(arg) : b.CreateFNeg(arg); } diff --git a/src/Functions/neighbor.cpp b/src/Functions/neighbor.cpp index abe6d39422d..62f129109f9 100644 --- a/src/Functions/neighbor.cpp +++ b/src/Functions/neighbor.cpp @@ -36,11 +36,11 @@ public: static FunctionPtr create(ContextPtr context) { - if (!context->getSettingsRef().allow_deprecated_functions) + if (!context->getSettingsRef().allow_deprecated_error_prone_window_functions) throw Exception( ErrorCodes::DEPRECATED_FUNCTION, "Function {} is deprecated since its usage is error-prone (see docs)." - "Please use proper window function or set `allow_deprecated_functions` setting to enable it", + "Please use proper window function or set `allow_deprecated_error_prone_window_functions` setting to enable it", name); return std::make_shared(); diff --git a/src/Functions/plus.cpp b/src/Functions/plus.cpp index cd9cf6cec5c..ffb0fe2ade7 100644 --- a/src/Functions/plus.cpp +++ b/src/Functions/plus.cpp @@ -14,7 +14,7 @@ struct PlusImpl static const constexpr bool is_commutative = true; template - static inline NO_SANITIZE_UNDEFINED Result apply(A a, B b) + static NO_SANITIZE_UNDEFINED Result apply(A a, B b) { /// Next everywhere, static_cast - so that there is no wrong result in expressions of the form Int64 c = UInt32(a) * Int32(-1). if constexpr (is_big_int_v || is_big_int_v) @@ -30,7 +30,7 @@ struct PlusImpl /// Apply operation and check overflow. It's used for Decimal operations. @returns true if overflowed, false otherwise. template - static inline bool apply(A a, B b, Result & c) + static bool apply(A a, B b, Result & c) { return common::addOverflow(static_cast(a), b, c); } @@ -38,7 +38,7 @@ struct PlusImpl #if USE_EMBEDDED_COMPILER static constexpr bool compilable = true; - static inline llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) + static llvm::Value * compile(llvm::IRBuilder<> & b, llvm::Value * left, llvm::Value * right, bool) { return left->getType()->isIntegerTy() ?
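The `neighbor.cpp` hunk (like the `runningAccumulate.cpp` and `runningDifference.h` hunks further down) renames the opt-in setting that gates these deprecated, error-prone functions. A self-contained sketch of the gating pattern, with `Settings` and the function type reduced to mocks:

```cpp
// Mock of the factory-gating pattern: creation fails unless the caller has
// explicitly opted in via the (renamed) setting.
#include <memory>
#include <stdexcept>
#include <string>

struct Settings
{
    bool allow_deprecated_error_prone_window_functions = false;
};

struct FunctionNeighbor
{
    static constexpr auto name = "neighbor";

    static std::shared_ptr<FunctionNeighbor> create(const Settings & settings)
    {
        if (!settings.allow_deprecated_error_prone_window_functions)
            throw std::runtime_error(
                std::string("Function ") + name
                + " is deprecated since its usage is error-prone (see docs). "
                  "Please use a proper window function or set "
                  "`allow_deprecated_error_prone_window_functions` to enable it");
        return std::make_shared<FunctionNeighbor>();
    }
};

int main()
{
    Settings settings;
    settings.allow_deprecated_error_prone_window_functions = true;
    auto f = FunctionNeighbor::create(settings); // succeeds only when opted in
    return f ? 0 : 1;
}
```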
b.CreateAdd(left, right) : b.CreateFAdd(left, right); } diff --git a/src/Functions/queryID.cpp b/src/Functions/queryID.cpp index 704206e1de5..5d0ac719797 100644 --- a/src/Functions/queryID.cpp +++ b/src/Functions/queryID.cpp @@ -19,16 +19,16 @@ public: explicit FunctionQueryID(const String & query_id_) : query_id(query_id_) {} - inline String getName() const override { return name; } + String getName() const override { return name; } - inline size_t getNumberOfArguments() const override { return 0; } + size_t getNumberOfArguments() const override { return 0; } DataTypePtr getReturnTypeImpl(const DataTypes & /*arguments*/) const override { return std::make_shared(); } - inline bool isDeterministic() const override { return false; } + bool isDeterministic() const override { return false; } bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return false; } diff --git a/src/Functions/repeat.cpp b/src/Functions/repeat.cpp index 84597f4eadc..7f2fe646062 100644 --- a/src/Functions/repeat.cpp +++ b/src/Functions/repeat.cpp @@ -22,14 +22,14 @@ namespace struct RepeatImpl { /// Safety threshold against DoS. - static inline void checkRepeatTime(UInt64 repeat_time) + static void checkRepeatTime(UInt64 repeat_time) { static constexpr UInt64 max_repeat_times = 1'000'000; if (repeat_time > max_repeat_times) throw Exception(ErrorCodes::TOO_LARGE_STRING_SIZE, "Too many times to repeat ({}), maximum is: {}", repeat_time, max_repeat_times); } - static inline void checkStringSize(UInt64 size) + static void checkStringSize(UInt64 size) { static constexpr UInt64 max_string_size = 1 << 30; if (size > max_string_size) diff --git a/src/Functions/roundAge.cpp b/src/Functions/roundAge.cpp index cca92c19b0c..38eda9f3383 100644 --- a/src/Functions/roundAge.cpp +++ b/src/Functions/roundAge.cpp @@ -12,7 +12,7 @@ struct RoundAgeImpl using ResultType = UInt8; static constexpr const bool allow_string_or_fixed_string = false; - static inline ResultType apply(A x) + static ResultType apply(A x) { return x < 1 ? 0 : (x < 18 ? 17 diff --git a/src/Functions/roundDuration.cpp b/src/Functions/roundDuration.cpp index 918f0b3425d..963080ba0d2 100644 --- a/src/Functions/roundDuration.cpp +++ b/src/Functions/roundDuration.cpp @@ -12,7 +12,7 @@ struct RoundDurationImpl using ResultType = UInt16; static constexpr bool allow_string_or_fixed_string = false; - static inline ResultType apply(A x) + static ResultType apply(A x) { return x < 1 ? 0 : (x < 10 ? 1 diff --git a/src/Functions/roundToExp2.cpp b/src/Functions/roundToExp2.cpp index 607c67b742e..eb0df8884c5 100644 --- a/src/Functions/roundToExp2.cpp +++ b/src/Functions/roundToExp2.cpp @@ -65,7 +65,7 @@ struct RoundToExp2Impl using ResultType = T; static constexpr const bool allow_string_or_fixed_string = false; - static inline T apply(T x) + static T apply(T x) { return roundDownToPowerOfTwo(x); } diff --git a/src/Functions/runningAccumulate.cpp b/src/Functions/runningAccumulate.cpp index 9bf387d3357..d585affd91b 100644 --- a/src/Functions/runningAccumulate.cpp +++ b/src/Functions/runningAccumulate.cpp @@ -39,11 +39,11 @@ public: static FunctionPtr create(ContextPtr context) { - if (!context->getSettingsRef().allow_deprecated_functions) + if (!context->getSettingsRef().allow_deprecated_error_prone_window_functions) throw Exception( ErrorCodes::DEPRECATED_FUNCTION, "Function {} is deprecated since its usage is error-prone (see docs)." 
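`RepeatImpl`'s guards above reject pathological arguments before any allocation happens, which is the whole DoS defense. A standalone sketch with `std::length_error` standing in for ClickHouse's exception types (the `checkStringSize` message is an assumption, since the diff truncates that function):

```cpp
#include <cstdint>
#include <stdexcept>
#include <string>

// Safety thresholds against DoS, as in repeat.cpp: fail fast on absurd sizes.
static void checkRepeatTime(uint64_t repeat_time)
{
    static constexpr uint64_t max_repeat_times = 1'000'000;
    if (repeat_time > max_repeat_times)
        throw std::length_error(
            "Too many times to repeat (" + std::to_string(repeat_time)
            + "), maximum is: " + std::to_string(max_repeat_times));
}

static void checkStringSize(uint64_t size)
{
    static constexpr uint64_t max_string_size = 1 << 30; // 1 GiB cap
    if (size > max_string_size)
        throw std::length_error("Too large string size: " + std::to_string(size));
}

int main()
{
    checkRepeatTime(10);   // fine
    checkStringSize(1024); // fine
    try { checkRepeatTime(UINT64_MAX); } catch (const std::length_error &) { return 0; }
    return 1;
}
```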
- "Please use proper window function or set `allow_deprecated_functions` setting to enable it", + "Please use proper window function or set `allow_deprecated_error_prone_window_functions` setting to enable it", name); return std::make_shared(); diff --git a/src/Functions/runningDifference.h b/src/Functions/runningDifference.h index d3704aa97ca..fe477d13744 100644 --- a/src/Functions/runningDifference.h +++ b/src/Functions/runningDifference.h @@ -139,11 +139,11 @@ public: static FunctionPtr create(ContextPtr context) { - if (!context->getSettingsRef().allow_deprecated_functions) + if (!context->getSettingsRef().allow_deprecated_error_prone_window_functions) throw Exception( ErrorCodes::DEPRECATED_FUNCTION, "Function {} is deprecated since its usage is error-prone (see docs)." - "Please use proper window function or set `allow_deprecated_functions` setting to enable it", + "Please use proper window function or set `allow_deprecated_error_prone_window_functions` setting to enable it", name); return std::make_shared>(); diff --git a/src/Functions/sign.cpp b/src/Functions/sign.cpp index 6c849760eed..3dd2ac8e3aa 100644 --- a/src/Functions/sign.cpp +++ b/src/Functions/sign.cpp @@ -11,7 +11,7 @@ struct SignImpl using ResultType = Int8; static constexpr bool allow_string_or_fixed_string = false; - static inline NO_SANITIZE_UNDEFINED ResultType apply(A a) + static NO_SANITIZE_UNDEFINED ResultType apply(A a) { if constexpr (is_decimal || std::is_floating_point_v) return a < A(0) ? -1 : a == A(0) ? 0 : 1; diff --git a/src/Functions/space.cpp b/src/Functions/space.cpp index 4cfa629aa33..83183c991bc 100644 --- a/src/Functions/space.cpp +++ b/src/Functions/space.cpp @@ -27,7 +27,7 @@ private: static constexpr auto space = ' '; /// Safety threshold against DoS. - static inline void checkRepeatTime(size_t repeat_time) + static void checkRepeatTime(size_t repeat_time) { static constexpr auto max_repeat_times = 1'000'000uz; if (repeat_time > max_repeat_times) diff --git a/src/Functions/tokenExtractors.cpp b/src/Functions/tokenExtractors.cpp index a29d759d2ca..e7dcb5cced3 100644 --- a/src/Functions/tokenExtractors.cpp +++ b/src/Functions/tokenExtractors.cpp @@ -116,7 +116,7 @@ public: private: template - inline void executeImpl( + void executeImpl( const ExtractorType & extractor, StringColumnType & input_data_column, ResultStringColumnType & result_data_column, diff --git a/src/IO/BufferBase.h b/src/IO/BufferBase.h index e98f00270e2..62fe011c0b6 100644 --- a/src/IO/BufferBase.h +++ b/src/IO/BufferBase.h @@ -37,13 +37,13 @@ public: { Buffer(Position begin_pos_, Position end_pos_) : begin_pos(begin_pos_), end_pos(end_pos_) {} - inline Position begin() const { return begin_pos; } - inline Position end() const { return end_pos; } - inline size_t size() const { return size_t(end_pos - begin_pos); } - inline void resize(size_t size) { end_pos = begin_pos + size; } - inline bool empty() const { return size() == 0; } + Position begin() const { return begin_pos; } + Position end() const { return end_pos; } + size_t size() const { return size_t(end_pos - begin_pos); } + void resize(size_t size) { end_pos = begin_pos + size; } + bool empty() const { return size() == 0; } - inline void swap(Buffer & other) noexcept + void swap(Buffer & other) noexcept { std::swap(begin_pos, other.begin_pos); std::swap(end_pos, other.end_pos); @@ -71,21 +71,21 @@ public: } /// get buffer - inline Buffer & internalBuffer() { return internal_buffer; } + Buffer & internalBuffer() { return internal_buffer; } /// get the part of the buffer 
from which you can read / write data - inline Buffer & buffer() { return working_buffer; } + Buffer & buffer() { return working_buffer; } /// get (for reading and modifying) the position in the buffer - inline Position & position() { return pos; } + Position & position() { return pos; } /// offset in bytes of the cursor from the beginning of the buffer - inline size_t offset() const { return size_t(pos - working_buffer.begin()); } + size_t offset() const { return size_t(pos - working_buffer.begin()); } /// How many bytes are available for read/write - inline size_t available() const { return size_t(working_buffer.end() - pos); } + size_t available() const { return size_t(working_buffer.end() - pos); } - inline void swap(BufferBase & other) noexcept + void swap(BufferBase & other) noexcept { internal_buffer.swap(other.internal_buffer); working_buffer.swap(other.working_buffer); diff --git a/src/IO/CompressionMethod.cpp b/src/IO/CompressionMethod.cpp index b8e1134d422..22913125e99 100644 --- a/src/IO/CompressionMethod.cpp +++ b/src/IO/CompressionMethod.cpp @@ -52,7 +52,6 @@ std::string toContentEncodingName(CompressionMethod method) case CompressionMethod::None: return ""; } - UNREACHABLE(); } CompressionMethod chooseHTTPCompressionMethod(const std::string & list) diff --git a/src/IO/HTTPHeaderEntries.h b/src/IO/HTTPHeaderEntries.h index 5862f1ead15..36b2ccc4ba5 100644 --- a/src/IO/HTTPHeaderEntries.h +++ b/src/IO/HTTPHeaderEntries.h @@ -10,7 +10,7 @@ struct HTTPHeaderEntry std::string value; HTTPHeaderEntry(const std::string & name_, const std::string & value_) : name(name_), value(value_) {} - inline bool operator==(const HTTPHeaderEntry & other) const { return name == other.name && value == other.value; } + bool operator==(const HTTPHeaderEntry & other) const { return name == other.name && value == other.value; } }; using HTTPHeaderEntries = std::vector; diff --git a/src/IO/HadoopSnappyReadBuffer.h b/src/IO/HadoopSnappyReadBuffer.h index 73e52f2c503..7d6e6db2fa7 100644 --- a/src/IO/HadoopSnappyReadBuffer.h +++ b/src/IO/HadoopSnappyReadBuffer.h @@ -37,7 +37,7 @@ public: Status readBlock(size_t * avail_in, const char ** next_in, size_t * avail_out, char ** next_out); - inline void reset() + void reset() { buffer_length = 0; block_length = -1; @@ -73,7 +73,7 @@ class HadoopSnappyReadBuffer : public CompressedReadBufferWrapper public: using Status = HadoopSnappyDecoder::Status; - inline static String statusToString(Status status) + static String statusToString(Status status) { switch (status) { @@ -88,7 +88,6 @@ public: case Status::TOO_LARGE_COMPRESSED_BLOCK: return "TOO_LARGE_COMPRESSED_BLOCK"; } - UNREACHABLE(); } explicit HadoopSnappyReadBuffer( diff --git a/src/IO/IReadableWriteBuffer.h b/src/IO/IReadableWriteBuffer.h index dda5fc07c8e..db379fef969 100644 --- a/src/IO/IReadableWriteBuffer.h +++ b/src/IO/IReadableWriteBuffer.h @@ -8,7 +8,7 @@ namespace DB struct IReadableWriteBuffer { /// Returns getReadBufferImpl() on the first call. Subsequent calls return nullptr.
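These and the later `UNREACHABLE()` deletions all follow one pattern: the switch covers every enumerator and every case returns, so control can never fall off the end. Under clang (which ClickHouse builds with) no trailing return is needed, and `-Wswitch` plus `WarningsAsErrors` turns a newly added, unhandled enumerator into a build failure. A minimal sketch with an abbreviated enumerator set:

```cpp
// Exhaustive switch over an enum class: no UNREACHABLE() needed.
// Note: clang accepts this; GCC's -Wreturn-type is stricter here.
#include <string>

enum class CompressionMethod { None, Gzip, Zstd };

std::string toContentEncodingName(CompressionMethod method)
{
    switch (method)
    {
        case CompressionMethod::None: return "";
        case CompressionMethod::Gzip: return "gzip";
        case CompressionMethod::Zstd: return "zstd";
    }
    // Adding an enumerator without a case now fails the build via -Wswitch
    // instead of hiding behind a runtime trap.
}

int main()
{
    return toContentEncodingName(CompressionMethod::Gzip) == "gzip" ? 0 : 1;
}
```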
- inline std::unique_ptr tryGetReadBuffer() + std::unique_ptr tryGetReadBuffer() { if (!can_reread) return nullptr; diff --git a/src/IO/PeekableReadBuffer.h b/src/IO/PeekableReadBuffer.h index 2ee209ffd6c..e831956956f 100644 --- a/src/IO/PeekableReadBuffer.h +++ b/src/IO/PeekableReadBuffer.h @@ -83,9 +83,9 @@ private: bool peekNext(); - inline bool useSubbufferOnly() const { return !peeked_size; } - inline bool currentlyReadFromOwnMemory() const { return working_buffer.begin() != sub_buf->buffer().begin(); } - inline bool checkpointInOwnMemory() const { return checkpoint_in_own_memory; } + bool useSubbufferOnly() const { return !peeked_size; } + bool currentlyReadFromOwnMemory() const { return working_buffer.begin() != sub_buf->buffer().begin(); } + bool checkpointInOwnMemory() const { return checkpoint_in_own_memory; } void checkStateCorrect() const; diff --git a/src/IO/ReadBuffer.h b/src/IO/ReadBuffer.h index 056e25a5fbe..73f5335411f 100644 --- a/src/IO/ReadBuffer.h +++ b/src/IO/ReadBuffer.h @@ -85,7 +85,7 @@ public: } - inline void nextIfAtEnd() + void nextIfAtEnd() { if (!hasPendingData()) next(); diff --git a/src/IO/S3/Requests.h b/src/IO/S3/Requests.h index 424cf65caf2..3b03356a8fb 100644 --- a/src/IO/S3/Requests.h +++ b/src/IO/S3/Requests.h @@ -169,7 +169,7 @@ using DeleteObjectsRequest = ExtendedRequest; class ComposeObjectRequest : public ExtendedRequest { public: - inline const char * GetServiceRequestName() const override { return "ComposeObject"; } + const char * GetServiceRequestName() const override { return "ComposeObject"; } AWS_S3_API Aws::String SerializePayload() const override; diff --git a/src/IO/S3/copyS3File.cpp b/src/IO/S3/copyS3File.cpp index cff6fa5ad21..24e14985758 100644 --- a/src/IO/S3/copyS3File.cpp +++ b/src/IO/S3/copyS3File.cpp @@ -652,14 +652,25 @@ namespace const std::optional> & object_metadata_, ThreadPoolCallbackRunnerUnsafe schedule_, bool for_disk_s3_, - BlobStorageLogWriterPtr blob_storage_log_) - : UploadHelper(client_ptr_, dest_bucket_, dest_key_, request_settings_, object_metadata_, schedule_, for_disk_s3_, blob_storage_log_, getLogger("copyS3File")) + BlobStorageLogWriterPtr blob_storage_log_, + std::function fallback_method_) + : UploadHelper( + client_ptr_, + dest_bucket_, + dest_key_, + request_settings_, + object_metadata_, + schedule_, + for_disk_s3_, + blob_storage_log_, + getLogger("copyS3File")) , src_bucket(src_bucket_) , src_key(src_key_) , offset(src_offset_) , size(src_size_) , supports_multipart_copy(client_ptr_->supportsMultiPartCopy()) , read_settings(read_settings_) + , fallback_method(std::move(fallback_method_)) { } @@ -682,14 +693,7 @@ namespace size_t size; bool supports_multipart_copy; const ReadSettings read_settings; - - CreateReadBuffer getSourceObjectReadBuffer() - { - return [&] - { - return std::make_unique(client_ptr, src_bucket, src_key, "", request_settings, read_settings); - }; - } + std::function fallback_method; void performSingleOperationCopy() { @@ -744,28 +748,21 @@ namespace if (outcome.GetError().GetExceptionName() == "EntityTooLarge" || outcome.GetError().GetExceptionName() == "InvalidRequest" || outcome.GetError().GetExceptionName() == "InvalidArgument" || + outcome.GetError().GetExceptionName() == "AccessDenied" || (outcome.GetError().GetExceptionName() == "InternalError" && outcome.GetError().GetResponseCode() == Aws::Http::HttpResponseCode::GATEWAY_TIMEOUT && outcome.GetError().GetMessage().contains("use the Rewrite method in the JSON API"))) { - if (!supports_multipart_copy) + if 
(!supports_multipart_copy || outcome.GetError().GetExceptionName() == "AccessDenied") { - LOG_INFO(log, "Multipart upload using copy is not supported, will try regular upload for Bucket: {}, Key: {}, Object size: {}", - dest_bucket, - dest_key, - size); - copyDataToS3File( - getSourceObjectReadBuffer(), - offset, - size, - client_ptr, + LOG_INFO( + log, + "Multipart upload using copy is not supported, will try regular upload for Bucket: {}, Key: {}, Object size: " + "{}", dest_bucket, dest_key, - request_settings, - blob_storage_log, - object_metadata, - schedule, - for_disk_s3); + size); + fallback_method(); break; } else @@ -859,17 +856,29 @@ void copyDataToS3File( ThreadPoolCallbackRunnerUnsafe schedule, bool for_disk_s3) { - CopyDataToFileHelper helper{create_read_buffer, offset, size, dest_s3_client, dest_bucket, dest_key, settings, object_metadata, schedule, for_disk_s3, blob_storage_log}; + CopyDataToFileHelper helper{ + create_read_buffer, + offset, + size, + dest_s3_client, + dest_bucket, + dest_key, + settings, + object_metadata, + schedule, + for_disk_s3, + blob_storage_log}; helper.performCopy(); } void copyS3File( - const std::shared_ptr & s3_client, + const std::shared_ptr & src_s3_client, const String & src_bucket, const String & src_key, size_t src_offset, size_t src_size, + std::shared_ptr dest_s3_client, const String & dest_bucket, const String & dest_key, const S3Settings::RequestSettings & settings, @@ -879,19 +888,50 @@ void copyS3File( ThreadPoolCallbackRunnerUnsafe schedule, bool for_disk_s3) { - if (settings.allow_native_copy) + if (!dest_s3_client) + dest_s3_client = src_s3_client; + + std::function fallback_method = [&] { - CopyFileHelper helper{s3_client, src_bucket, src_key, src_offset, src_size, dest_bucket, dest_key, settings, read_settings, object_metadata, schedule, for_disk_s3, blob_storage_log}; - helper.performCopy(); - } - else + auto create_read_buffer + = [&] { return std::make_unique(src_s3_client, src_bucket, src_key, "", settings, read_settings); }; + + copyDataToS3File( + create_read_buffer, + src_offset, + src_size, + dest_s3_client, + dest_bucket, + dest_key, + settings, + blob_storage_log, + object_metadata, + schedule, + for_disk_s3); + }; + + if (!settings.allow_native_copy) { - auto create_read_buffer = [&] - { - return std::make_unique(s3_client, src_bucket, src_key, "", settings, read_settings); - }; - copyDataToS3File(create_read_buffer, src_offset, src_size, s3_client, dest_bucket, dest_key, settings, blob_storage_log, object_metadata, schedule, for_disk_s3); + fallback_method(); + return; } + + CopyFileHelper helper{ + src_s3_client, + src_bucket, + src_key, + src_offset, + src_size, + dest_bucket, + dest_key, + settings, + read_settings, + object_metadata, + schedule, + for_disk_s3, + blob_storage_log, + std::move(fallback_method)}; + helper.performCopy(); } } diff --git a/src/IO/S3/copyS3File.h b/src/IO/S3/copyS3File.h index d5da4d260b1..85b3870ddbf 100644 --- a/src/IO/S3/copyS3File.h +++ b/src/IO/S3/copyS3File.h @@ -31,11 +31,12 @@ using CreateReadBuffer = std::function()>; /// /// read_settings - is used for throttling in case native copy is not possible void copyS3File( - const std::shared_ptr & s3_client, + const std::shared_ptr & src_s3_client, const String & src_bucket, const String & src_key, size_t src_offset, size_t src_size, + std::shared_ptr dest_s3_client, const String & dest_bucket, const String & dest_key, const S3Settings::RequestSettings & settings, diff --git a/src/IO/WriteBuffer.h b/src/IO/WriteBuffer.h index
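The `copyS3File` refactor builds a single `fallback_method` closure up front and reuses it on both paths: when native copy is disabled by settings, and when the server rejects the server-side copy (now including `AccessDenied`). A simplified mock of that control flow, with the S3 machinery replaced by prints:

```cpp
// Mock of the restructured copyS3File() flow: one fallback closure, two
// call sites, no duplicated upload logic.
#include <functional>
#include <iostream>

enum class CopyOutcome { Ok, AccessDenied };

void copyS3File(bool allow_native_copy, CopyOutcome native_copy_result)
{
    // Stands in for copyDataToS3File(): read via the source client, upload
    // via the destination client.
    std::function<void()> fallback_method = [] { std::cout << "regular upload\n"; };

    if (!allow_native_copy)
    {
        fallback_method();
        return;
    }

    // Stands in for CopyFileHelper::performSingleOperationCopy(): on errors
    // such as AccessDenied the helper now invokes the fallback it was given.
    if (native_copy_result == CopyOutcome::Ok)
        std::cout << "native copy\n";
    else
        fallback_method();
}

int main()
{
    copyS3File(/*allow_native_copy=*/true, CopyOutcome::AccessDenied); // regular upload
    copyS3File(/*allow_native_copy=*/true, CopyOutcome::Ok);           // native copy
}
```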
1ceb938e454..ef4e0058ec3 100644 --- a/src/IO/WriteBuffer.h +++ b/src/IO/WriteBuffer.h @@ -41,7 +41,7 @@ public: * If direct write is performed into [position(), buffer().end()) and its length is not enough, * you need to fill it first (e.g. with a write call), after which the capacity is regained. */ - inline void next() + void next() { if (!offset()) return; @@ -69,7 +69,7 @@ public: /// Calling finalize() in the destructor of derived classes is a bad practice. virtual ~WriteBuffer(); - inline void nextIfAtEnd() + void nextIfAtEnd() { if (!hasPendingData()) next(); @@ -96,7 +96,7 @@ public: } } - inline void write(char x) + void write(char x) { if (finalized) throw Exception{ErrorCodes::LOGICAL_ERROR, "Cannot write to finalized buffer"}; diff --git a/src/IO/ZstdDeflatingAppendableWriteBuffer.h b/src/IO/ZstdDeflatingAppendableWriteBuffer.h index d9c4f32d6da..34cdf03df25 100644 --- a/src/IO/ZstdDeflatingAppendableWriteBuffer.h +++ b/src/IO/ZstdDeflatingAppendableWriteBuffer.h @@ -27,7 +27,7 @@ class ZstdDeflatingAppendableWriteBuffer : public BufferWithOwnMemory; /// Frame end block. If we read a non-empty file and see no such flag, we should add it. - static inline constexpr ZSTDLastBlock ZSTD_CORRECT_TERMINATION_LAST_BLOCK = {0x01, 0x00, 0x00}; + static constexpr ZSTDLastBlock ZSTD_CORRECT_TERMINATION_LAST_BLOCK = {0x01, 0x00, 0x00}; ZstdDeflatingAppendableWriteBuffer( std::unique_ptr out_, diff --git a/src/Interpreters/AggregatedDataVariants.cpp b/src/Interpreters/AggregatedDataVariants.cpp index 87cfdda5948..8f82f15248f 100644 --- a/src/Interpreters/AggregatedDataVariants.cpp +++ b/src/Interpreters/AggregatedDataVariants.cpp @@ -117,8 +117,6 @@ size_t AggregatedDataVariants::size() const APPLY_FOR_AGGREGATED_VARIANTS(M) #undef M } - - UNREACHABLE(); } size_t AggregatedDataVariants::sizeWithoutOverflowRow() const @@ -136,8 +134,6 @@ size_t AggregatedDataVariants::sizeWithoutOverflowRow() const APPLY_FOR_AGGREGATED_VARIANTS(M) #undef M } - - UNREACHABLE(); } const char * AggregatedDataVariants::getMethodName() const @@ -155,8 +151,6 @@ const char * AggregatedDataVariants::getMethodName() const APPLY_FOR_AGGREGATED_VARIANTS(M) #undef M } - - UNREACHABLE(); } bool AggregatedDataVariants::isTwoLevel() const @@ -174,8 +168,6 @@ bool AggregatedDataVariants::isTwoLevel() const APPLY_FOR_AGGREGATED_VARIANTS(M) #undef M } - - UNREACHABLE(); } bool AggregatedDataVariants::isConvertibleToTwoLevel() const diff --git a/src/Interpreters/Cache/FileSegment.cpp b/src/Interpreters/Cache/FileSegment.cpp index 9459029dc4c..61a356fa3c3 100644 --- a/src/Interpreters/Cache/FileSegment.cpp +++ b/src/Interpreters/Cache/FileSegment.cpp @@ -799,7 +799,6 @@ String FileSegment::stateToString(FileSegment::State state) case FileSegment::State::DETACHED: return "DETACHED"; } - UNREACHABLE(); } bool FileSegment::assertCorrectness() const diff --git a/src/Interpreters/ClientInfo.h b/src/Interpreters/ClientInfo.h index c2ed9f7ffa4..3054667e264 100644 --- a/src/Interpreters/ClientInfo.h +++ b/src/Interpreters/ClientInfo.h @@ -130,6 +130,16 @@ public: UInt64 count_participating_replicas{0}; UInt64 number_of_current_replica{0}; + enum class BackgroundOperationType : uint8_t + { + NOT_A_BACKGROUND_OPERATION = 0, + MERGE = 1, + MUTATION = 2, + }; + + /// Set when this ClientInfo and its context were created for a background operation (not a real query) + BackgroundOperationType background_operation_type{BackgroundOperationType::NOT_A_BACKGROUND_OPERATION}; + bool empty() const { return query_kind == QueryKind::NO_QUERY; } /** Serialization and
deserialization. diff --git a/src/Interpreters/ComparisonGraph.cpp b/src/Interpreters/ComparisonGraph.cpp index 4eacbae7a30..d53ff4b0227 100644 --- a/src/Interpreters/ComparisonGraph.cpp +++ b/src/Interpreters/ComparisonGraph.cpp @@ -309,7 +309,6 @@ ComparisonGraphCompareResult ComparisonGraph::pathToCompareResult(Path pat case Path::GREATER: return inverse ? ComparisonGraphCompareResult::LESS : ComparisonGraphCompareResult::GREATER; case Path::GREATER_OR_EQUAL: return inverse ? ComparisonGraphCompareResult::LESS_OR_EQUAL : ComparisonGraphCompareResult::GREATER_OR_EQUAL; } - UNREACHABLE(); } template diff --git a/src/Interpreters/ConcurrentHashJoin.cpp b/src/Interpreters/ConcurrentHashJoin.cpp index 96be70c5527..53987694e46 100644 --- a/src/Interpreters/ConcurrentHashJoin.cpp +++ b/src/Interpreters/ConcurrentHashJoin.cpp @@ -1,10 +1,9 @@ -#include -#include #include #include #include #include #include +#include #include #include #include @@ -15,10 +14,20 @@ #include #include #include +#include #include +#include #include +#include +#include #include -#include + +namespace CurrentMetrics +{ +extern const Metric ConcurrentHashJoinPoolThreads; +extern const Metric ConcurrentHashJoinPoolThreadsActive; +extern const Metric ConcurrentHashJoinPoolThreadsScheduled; +} namespace DB { @@ -36,20 +45,82 @@ static UInt32 toPowerOfTwo(UInt32 x) return static_cast(1) << (32 - std::countl_zero(x - 1)); } -ConcurrentHashJoin::ConcurrentHashJoin(ContextPtr context_, std::shared_ptr table_join_, size_t slots_, const Block & right_sample_block, bool any_take_last_row_) +ConcurrentHashJoin::ConcurrentHashJoin( + ContextPtr context_, std::shared_ptr table_join_, size_t slots_, const Block & right_sample_block, bool any_take_last_row_) : context(context_) , table_join(table_join_) , slots(toPowerOfTwo(std::min(static_cast(slots_), 256))) + , pool(std::make_unique( + CurrentMetrics::ConcurrentHashJoinPoolThreads, + CurrentMetrics::ConcurrentHashJoinPoolThreadsActive, + CurrentMetrics::ConcurrentHashJoinPoolThreadsScheduled, + slots)) { - for (size_t i = 0; i < slots; ++i) - { - auto inner_hash_join = std::make_shared(); + hash_joins.resize(slots); - inner_hash_join->data = std::make_unique(table_join_, right_sample_block, any_take_last_row_, 0, fmt::format("concurrent{}", i)); - /// Non zero `max_joined_block_rows` allows to process block partially and return not processed part. - /// TODO: It's not handled properly in ConcurrentHashJoin case, so we set it to 0 to disable this feature. - inner_hash_join->data->setMaxJoinedBlockRows(0); - hash_joins.emplace_back(std::move(inner_hash_join)); + try + { + for (size_t i = 0; i < slots; ++i) + { + pool->scheduleOrThrow( + [&, idx = i, thread_group = CurrentThread::getGroup()]() + { + SCOPE_EXIT_SAFE({ + if (thread_group) + CurrentThread::detachFromGroupIfNotDetached(); + }); + + if (thread_group) + CurrentThread::attachToGroupIfDetached(thread_group); + setThreadName("ConcurrentJoin"); + + auto inner_hash_join = std::make_shared(); + inner_hash_join->data = std::make_unique( + table_join_, right_sample_block, any_take_last_row_, 0, fmt::format("concurrent{}", idx)); + /// Non zero `max_joined_block_rows` allows to process block partially and return not processed part. + /// TODO: It's not handled properly in ConcurrentHashJoin case, so we set it to 0 to disable this feature. + inner_hash_join->data->setMaxJoinedBlockRows(0); + hash_joins[idx] = std::move(inner_hash_join); + }); + } + pool->wait(); + } + catch (...) 
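`ConcurrentHashJoin` now constructs its per-slot hash joins concurrently on a thread pool, and (in the destructor that follows) tears the potentially huge hash tables down the same way instead of destroying them sequentially on the calling thread. A sketch of both ideas with `std::thread` standing in for ClickHouse's `ThreadPool`:

```cpp
// Sketch: parallel construction and parallel destruction of per-slot state.
#include <memory>
#include <thread>
#include <vector>

struct InnerHashJoin { explicit InnerHashJoin(size_t) { /* expensive setup */ } };

int main()
{
    const size_t slots = 8;
    std::vector<std::shared_ptr<InnerHashJoin>> hash_joins(slots);

    std::vector<std::thread> workers;
    workers.reserve(slots);
    for (size_t i = 0; i < slots; ++i)
        workers.emplace_back([&hash_joins, i] { hash_joins[i] = std::make_shared<InnerHashJoin>(i); });
    for (auto & w : workers)
        w.join();

    // Destruction: moving each shared_ptr into a task destroys the hash
    // table inside the worker, off the main thread.
    workers.clear();
    for (size_t i = 0; i < slots; ++i)
        workers.emplace_back([join = std::move(hash_joins[i])]() mutable { join.reset(); });
    for (auto & w : workers)
        w.join();
}
```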
+ { + tryLogCurrentException(__PRETTY_FUNCTION__); + pool->wait(); + throw; + } +} + +ConcurrentHashJoin::~ConcurrentHashJoin() +{ + try + { + for (size_t i = 0; i < slots; ++i) + { + // Hash tables destruction may be very time-consuming. + // Without the following code, they would be destroyed in the current thread (i.e. sequentially). + // `InternalHashJoin` is moved here and will be destroyed in the destructor of the lambda function. + pool->scheduleOrThrow( + [join = std::move(hash_joins[i]), thread_group = CurrentThread::getGroup()]() + { + SCOPE_EXIT_SAFE({ + if (thread_group) + CurrentThread::detachFromGroupIfNotDetached(); + }); + + if (thread_group) + CurrentThread::attachToGroupIfDetached(thread_group); + setThreadName("ConcurrentJoin"); + }); + } + pool->wait(); + } + catch (...) + { + tryLogCurrentException(__PRETTY_FUNCTION__); + pool->wait(); } } diff --git a/src/Interpreters/ConcurrentHashJoin.h b/src/Interpreters/ConcurrentHashJoin.h index 40796376d23..c797ff27ece 100644 --- a/src/Interpreters/ConcurrentHashJoin.h +++ b/src/Interpreters/ConcurrentHashJoin.h @@ -10,6 +10,7 @@ #include #include #include +#include namespace DB { @@ -39,7 +40,7 @@ public: const Block & right_sample_block, bool any_take_last_row_ = false); - ~ConcurrentHashJoin() override = default; + ~ConcurrentHashJoin() override; std::string getName() const override { return "ConcurrentHashJoin"; } const TableJoin & getTableJoin() const override { return *table_join; } @@ -66,6 +67,7 @@ private: ContextPtr context; std::shared_ptr table_join; size_t slots; + std::unique_ptr pool; std::vector> hash_joins; std::mutex totals_mutex; diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp index e1d82a8f604..5c9ae4716b9 100644 --- a/src/Interpreters/Context.cpp +++ b/src/Interpreters/Context.cpp @@ -2386,6 +2386,17 @@ void Context::setCurrentQueryId(const String & query_id) client_info.initial_query_id = client_info.current_query_id; } +void Context::setBackgroundOperationTypeForContext(ClientInfo::BackgroundOperationType background_operation) +{ + chassert(background_operation != ClientInfo::BackgroundOperationType::NOT_A_BACKGROUND_OPERATION); + client_info.background_operation_type = background_operation; +} + +bool Context::isBackgroundOperationContext() const +{ + return client_info.background_operation_type != ClientInfo::BackgroundOperationType::NOT_A_BACKGROUND_OPERATION; +} + void Context::killCurrentQuery() const { if (auto elem = getProcessListElement()) diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h index 814534f7035..87a7baa0469 100644 --- a/src/Interpreters/Context.h +++ b/src/Interpreters/Context.h @@ -760,6 +760,12 @@ public: void setCurrentDatabaseNameInGlobalContext(const String & name); void setCurrentQueryId(const String & query_id); + /// FIXME: for background operations (like Merge and Mutation) we also use the same Context object and even setup + /// query_id for it (table_uuid::result_part_name). 
We can distinguish queries from background operations in some way like + bool is_background = query_id.contains("::"), but it's much worse than an explicit enum check with a clearer purpose + void setBackgroundOperationTypeForContext(ClientInfo::BackgroundOperationType background_operation); + bool isBackgroundOperationContext() const; + void killCurrentQuery() const; bool isCurrentQueryKilled() const; diff --git a/src/Interpreters/DDLTask.h b/src/Interpreters/DDLTask.h index 5a8a5bfb184..0b0460b26c8 100644 --- a/src/Interpreters/DDLTask.h +++ b/src/Interpreters/DDLTask.h @@ -133,10 +133,10 @@ struct DDLTaskBase virtual void createSyncedNodeIfNeed(const ZooKeeperPtr & /*zookeeper*/) {} - inline String getActiveNodePath() const { return fs::path(entry_path) / "active" / host_id_str; } - inline String getFinishedNodePath() const { return fs::path(entry_path) / "finished" / host_id_str; } - inline String getShardNodePath() const { return fs::path(entry_path) / "shards" / getShardID(); } - inline String getSyncedNodePath() const { return fs::path(entry_path) / "synced" / host_id_str; } + String getActiveNodePath() const { return fs::path(entry_path) / "active" / host_id_str; } + String getFinishedNodePath() const { return fs::path(entry_path) / "finished" / host_id_str; } + String getShardNodePath() const { return fs::path(entry_path) / "shards" / getShardID(); } + String getSyncedNodePath() const { return fs::path(entry_path) / "synced" / host_id_str; } static String getLogEntryName(UInt32 log_entry_number); static UInt32 getLogEntryNumber(const String & log_entry_name); diff --git a/src/Interpreters/DatabaseCatalog.h b/src/Interpreters/DatabaseCatalog.h index 5caa034e0e9..37125d9900c 100644 --- a/src/Interpreters/DatabaseCatalog.h +++ b/src/Interpreters/DatabaseCatalog.h @@ -284,7 +284,7 @@ private: static constexpr UInt64 bits_for_first_level = 4; using UUIDToStorageMap = std::array; - static inline size_t getFirstLevelIdx(const UUID & uuid) + static size_t getFirstLevelIdx(const UUID & uuid) { return UUIDHelpers::getHighBytes(uuid) >> (64 - bits_for_first_level); } diff --git a/src/Interpreters/FilesystemCacheLog.cpp b/src/Interpreters/FilesystemCacheLog.cpp index 80fe1c3a8ef..90756f1c84a 100644 --- a/src/Interpreters/FilesystemCacheLog.cpp +++ b/src/Interpreters/FilesystemCacheLog.cpp @@ -15,18 +15,7 @@ namespace DB static String typeToString(FilesystemCacheLogElement::CacheType type) { - switch (type) - { - case FilesystemCacheLogElement::CacheType::READ_FROM_CACHE: - return "READ_FROM_CACHE"; - case FilesystemCacheLogElement::CacheType::READ_FROM_FS_AND_DOWNLOADED_TO_CACHE: - return "READ_FROM_FS_AND_DOWNLOADED_TO_CACHE"; - case FilesystemCacheLogElement::CacheType::READ_FROM_FS_BYPASSING_CACHE: - return "READ_FROM_FS_BYPASSING_CACHE"; - case FilesystemCacheLogElement::CacheType::WRITE_THROUGH_CACHE: - return "WRITE_THROUGH_CACHE"; - } - UNREACHABLE(); + return String(magic_enum::enum_name(type)); } ColumnsDescription FilesystemCacheLogElement::getColumnsDescription() diff --git a/src/Interpreters/HashJoin.cpp b/src/Interpreters/HashJoin.cpp index 3a21c13db5e..75da8bbc3e7 100644 --- a/src/Interpreters/HashJoin.cpp +++ b/src/Interpreters/HashJoin.cpp @@ -705,7 +705,6 @@ namespace APPLY_FOR_JOIN_VARIANTS(M) #undef M } - UNREACHABLE(); } } @@ -2641,8 +2640,6 @@ private: default: throw Exception(ErrorCodes::UNSUPPORTED_JOIN_KEYS, "Unsupported JOIN keys (type: {})", parent.data->type); } - - UNREACHABLE(); } template diff --git a/src/Interpreters/HashJoin.h
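The `FilesystemCacheLog` hunk above swaps a hand-maintained switch for `magic_enum::enum_name`, which derives the string from the enumerator itself, so there is nothing to keep in sync and no need for a trailing `UNREACHABLE()`. A standalone sketch (the include path may differ between magic_enum versions):

```cpp
#include <iostream>
#include <string>

#include <magic_enum.hpp> // header-only reflection library used by ClickHouse

enum class CacheType
{
    READ_FROM_CACHE,
    READ_FROM_FS_AND_DOWNLOADED_TO_CACHE,
    READ_FROM_FS_BYPASSING_CACHE,
    WRITE_THROUGH_CACHE,
};

static std::string typeToString(CacheType type)
{
    // enum_name() returns a std::string_view with the enumerator's spelling.
    return std::string(magic_enum::enum_name(type));
}

int main()
{
    std::cout << typeToString(CacheType::WRITE_THROUGH_CACHE) << '\n'; // WRITE_THROUGH_CACHE
}
```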
b/src/Interpreters/HashJoin.h index 86db8943926..a0996556f9a 100644 --- a/src/Interpreters/HashJoin.h +++ b/src/Interpreters/HashJoin.h @@ -322,8 +322,6 @@ public: APPLY_FOR_JOIN_VARIANTS(M) #undef M } - - UNREACHABLE(); } size_t getTotalByteCountImpl(Type which) const @@ -338,8 +336,6 @@ public: APPLY_FOR_JOIN_VARIANTS(M) #undef M } - - UNREACHABLE(); } size_t getBufferSizeInCells(Type which) const @@ -354,8 +350,6 @@ public: APPLY_FOR_JOIN_VARIANTS(M) #undef M } - - UNREACHABLE(); } /// NOLINTEND(bugprone-macro-parentheses) }; diff --git a/src/Interpreters/InterpreterTransactionControlQuery.cpp b/src/Interpreters/InterpreterTransactionControlQuery.cpp index d31ace758c4..13872fbe3f5 100644 --- a/src/Interpreters/InterpreterTransactionControlQuery.cpp +++ b/src/Interpreters/InterpreterTransactionControlQuery.cpp @@ -33,7 +33,6 @@ BlockIO InterpreterTransactionControlQuery::execute() case ASTTransactionControl::SET_SNAPSHOT: return executeSetSnapshot(session_context, tcl.snapshot); } - UNREACHABLE(); } BlockIO InterpreterTransactionControlQuery::executeBegin(ContextMutablePtr session_context) diff --git a/src/Interpreters/JIT/CHJIT.cpp b/src/Interpreters/JIT/CHJIT.cpp index 046d0b4fc10..21c773ee1d7 100644 --- a/src/Interpreters/JIT/CHJIT.cpp +++ b/src/Interpreters/JIT/CHJIT.cpp @@ -119,9 +119,9 @@ public: return result; } - inline size_t getAllocatedSize() const { return allocated_size; } + size_t getAllocatedSize() const { return allocated_size; } - inline size_t getPageSize() const { return page_size; } + size_t getPageSize() const { return page_size; } ~PageArena() { @@ -177,10 +177,10 @@ private: { } - inline void * base() const { return pages_base; } - inline size_t pagesSize() const { return pages_size; } - inline size_t pageSize() const { return page_size; } - inline size_t blockSize() const { return pages_size * page_size; } + void * base() const { return pages_base; } + size_t pagesSize() const { return pages_size; } + size_t pageSize() const { return page_size; } + size_t blockSize() const { return pages_size * page_size; } private: void * pages_base; @@ -298,7 +298,7 @@ public: return true; } - inline size_t allocatedSize() const + size_t allocatedSize() const { size_t data_size = rw_page_arena.getAllocatedSize() + ro_page_arena.getAllocatedSize(); size_t code_size = ex_page_arena.getAllocatedSize(); diff --git a/src/Interpreters/JIT/CHJIT.h b/src/Interpreters/JIT/CHJIT.h index fc883802426..89d446fd3b3 100644 --- a/src/Interpreters/JIT/CHJIT.h +++ b/src/Interpreters/JIT/CHJIT.h @@ -85,7 +85,7 @@ public: /** Total compiled code size for modules that are currently valid.
*/ - inline size_t getCompiledCodeSize() const { return compiled_code_size.load(std::memory_order_relaxed); } + size_t getCompiledCodeSize() const { return compiled_code_size.load(std::memory_order_relaxed); } private: diff --git a/src/Interpreters/JIT/CompileDAG.h b/src/Interpreters/JIT/CompileDAG.h index 13ec763b6fc..8db4ac5e110 100644 --- a/src/Interpreters/JIT/CompileDAG.h +++ b/src/Interpreters/JIT/CompileDAG.h @@ -65,17 +65,17 @@ public: nodes.emplace_back(std::move(node)); } - inline size_t getNodesCount() const { return nodes.size(); } - inline size_t getInputNodesCount() const { return input_nodes_count; } + size_t getNodesCount() const { return nodes.size(); } + size_t getInputNodesCount() const { return input_nodes_count; } - inline Node & operator[](size_t index) { return nodes[index]; } - inline const Node & operator[](size_t index) const { return nodes[index]; } + Node & operator[](size_t index) { return nodes[index]; } + const Node & operator[](size_t index) const { return nodes[index]; } - inline Node & front() { return nodes.front(); } - inline const Node & front() const { return nodes.front(); } + Node & front() { return nodes.front(); } + const Node & front() const { return nodes.front(); } - inline Node & back() { return nodes.back(); } - inline const Node & back() const { return nodes.back(); } + Node & back() { return nodes.back(); } + const Node & back() const { return nodes.back(); } private: std::vector nodes; diff --git a/src/Interpreters/JoinUtils.h b/src/Interpreters/JoinUtils.h index ff48f34d82c..f15ee2c2fb2 100644 --- a/src/Interpreters/JoinUtils.h +++ b/src/Interpreters/JoinUtils.h @@ -49,7 +49,7 @@ public: return nullptr; } - inline bool isRowFiltered(size_t row) const + bool isRowFiltered(size_t row) const { return !assert_cast(*column).getData()[row]; } diff --git a/src/Interpreters/SetVariants.cpp b/src/Interpreters/SetVariants.cpp index 64796a013f1..c600d096160 100644 --- a/src/Interpreters/SetVariants.cpp +++ b/src/Interpreters/SetVariants.cpp @@ -41,8 +41,6 @@ size_t SetVariantsTemplate::getTotalRowCount() const APPLY_FOR_SET_VARIANTS(M) #undef M } - - UNREACHABLE(); } template @@ -57,8 +55,6 @@ size_t SetVariantsTemplate::getTotalByteCount() const APPLY_FOR_SET_VARIANTS(M) #undef M } - - UNREACHABLE(); } template diff --git a/src/Interpreters/TreeCNFConverter.h b/src/Interpreters/TreeCNFConverter.h index 8258412f1a6..ec4b029eee9 100644 --- a/src/Interpreters/TreeCNFConverter.h +++ b/src/Interpreters/TreeCNFConverter.h @@ -164,6 +164,12 @@ public: void pushNotIn(CNFQuery::AtomicFormula & atom); +/// Reduces CNF groups by removing mutually exclusive atoms +/// found across groups, in case other atoms are identical. +/// Might require multiple passes to complete reduction. 
+/// +/// Example: +/// (x OR y) AND (x OR !y) -> x template TAndGroup reduceOnceCNFStatements(const TAndGroup & groups) { @@ -175,10 +181,19 @@ TAndGroup reduceOnceCNFStatements(const TAndGroup & groups) bool inserted = false; for (const auto & atom : group) { - copy.erase(atom); using AtomType = std::decay_t; AtomType negative_atom(atom); negative_atom.negative = !atom.negative; + + // Skipping erase-insert for mutually exclusive atoms within a + // single group, since it won't insert the negative atom, which + // would break the logic of this rule + if (copy.contains(negative_atom)) + { + continue; + } + + copy.erase(atom); copy.insert(negative_atom); if (groups.contains(copy)) @@ -209,6 +224,10 @@ bool isCNFGroupSubset(const TOrGroup & left, const TOrGroup & right) return true; } +/// Removes CNF groups if a subset group is found in the CNF. +/// +/// Example: +/// (x OR y) AND (x) -> x template TAndGroup filterCNFSubsets(const TAndGroup & groups) { diff --git a/src/Interpreters/WhereConstraintsOptimizer.cpp b/src/Interpreters/WhereConstraintsOptimizer.cpp index 979a4f4dbf5..456cf76b987 100644 --- a/src/Interpreters/WhereConstraintsOptimizer.cpp +++ b/src/Interpreters/WhereConstraintsOptimizer.cpp @@ -91,6 +91,22 @@ bool checkIfGroupAlwaysTrueGraph(const CNFQuery::OrGroup & group, const Comparis return false; } +bool checkIfGroupAlwaysTrueAtoms(const CNFQuery::OrGroup & group) +{ + /// Filters out groups containing mutually exclusive atoms, + /// since these groups are always True + + for (const auto & atom : group) + { + auto negated(atom); + negated.negative = !atom.negative; + if (group.contains(negated)) + { + return true; + } + } + return false; +} bool checkIfAtomAlwaysFalseFullMatch(const CNFQuery::AtomicFormula & atom, const ConstraintsDescription & constraints_description) { @@ -158,7 +174,8 @@ void WhereConstraintsOptimizer::perform() .filterAlwaysTrueGroups([&compare_graph, this](const auto & group) { /// remove always true groups from CNF - return !checkIfGroupAlwaysTrueFullMatch(group, metadata_snapshot->getConstraints()) && !checkIfGroupAlwaysTrueGraph(group, compare_graph); + return !checkIfGroupAlwaysTrueFullMatch(group, metadata_snapshot->getConstraints()) + && !checkIfGroupAlwaysTrueGraph(group, compare_graph) && !checkIfGroupAlwaysTrueAtoms(group); }) .filterAlwaysFalseAtoms([&compare_graph, this](const auto & atom) { diff --git a/src/Interpreters/examples/hash_map_string_3.cpp b/src/Interpreters/examples/hash_map_string_3.cpp index 57e36bed545..44ee3542bd9 100644 --- a/src/Interpreters/examples/hash_map_string_3.cpp +++ b/src/Interpreters/examples/hash_map_string_3.cpp @@ -96,7 +96,7 @@ inline bool operator==(StringRef_CompareAlwaysTrue, StringRef_CompareAlwaysTrue) struct FastHash64 { - static inline uint64_t mix(uint64_t h) + static uint64_t mix(uint64_t h) { h ^= h >> 23; h *= 0x2127599bf4325c37ULL; diff --git a/src/Parsers/ASTExplainQuery.h b/src/Parsers/ASTExplainQuery.h index 701bde8cebd..eb095b5dbbc 100644 --- a/src/Parsers/ASTExplainQuery.h +++ b/src/Parsers/ASTExplainQuery.h @@ -40,8 +40,6 @@ public: case TableOverride: return "EXPLAIN TABLE OVERRIDE"; case CurrentTransaction: return "EXPLAIN CURRENT TRANSACTION"; } - - UNREACHABLE(); } static ExplainKind fromString(const String & str) diff --git a/src/Parsers/Lexer.cpp b/src/Parsers/Lexer.cpp index 34855a7ce20..5f2bd50524c 100644 --- a/src/Parsers/Lexer.cpp +++ b/src/Parsers/Lexer.cpp @@ -42,7 +42,7 @@ Token quotedString(const char *& pos, const char * const token_begin, const char continue; } - UNREACHABLE(); +
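The CNF rules above are instances of standard Boolean identities, and the new in-group guard exists because a group already containing both an atom and its negation is a tautology; that case is exactly what `checkIfGroupAlwaysTrueAtoms` removes. As a short derivation:

```latex
% Reduction applied by reduceOnceCNFStatements (resolution on x):
(x \lor y) \land (x \lor \lnot y)
    \equiv x \lor (y \land \lnot y)  % distributivity
    \equiv x \lor \bot               % complement
    \equiv x                         % identity

% Absorption applied by filterCNFSubsets:
(x \lor y) \land x \equiv x

% Tautology removal in checkIfGroupAlwaysTrueAtoms:
\cdots \land (a \lor \lnot a \lor y) \land \cdots
    \equiv \cdots \land \top \land \cdots
```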
chassert(false); } } @@ -538,8 +538,6 @@ const char * getTokenName(TokenType type) APPLY_FOR_TOKENS(M) #undef M } - - UNREACHABLE(); } diff --git a/src/Planner/PlannerExpressionAnalysis.cpp b/src/Planner/PlannerExpressionAnalysis.cpp index 2a95234057c..f0a2845c3e8 100644 --- a/src/Planner/PlannerExpressionAnalysis.cpp +++ b/src/Planner/PlannerExpressionAnalysis.cpp @@ -51,7 +51,7 @@ FilterAnalysisResult analyzeFilter(const QueryTreeNodePtr & filter_expression_no return result; } -bool isDeterministicConstant(const ConstantNode & root) +bool canRemoveConstantFromGroupByKey(const ConstantNode & root) { const auto & source_expression = root.getSourceExpression(); if (!source_expression) @@ -64,15 +64,20 @@ bool isDeterministicConstant(const ConstantNode & root) const auto * node = nodes.top(); nodes.pop(); + if (node->getNodeType() == QueryTreeNodeType::QUERY) + /// Allow removing constants from scalar subqueries. We send them to all the shards. + continue; + const auto * constant_node = node->as(); const auto * function_node = node->as(); if (constant_node) { - if (!isDeterministicConstant(*constant_node)) + if (!canRemoveConstantFromGroupByKey(*constant_node)) return false; } else if (function_node) { + /// Do not allow removing constants like `hostName()` if (!function_node->getFunctionOrThrow()->isDeterministic()) return false; @@ -122,7 +127,7 @@ std::optional analyzeAggregation(const QueryTreeNodeP bool is_secondary_query = planner_context->getQueryContext()->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY; bool is_distributed_query = planner_context->getQueryContext()->isDistributed(); - bool check_deterministic_constants = is_secondary_query || is_distributed_query; + bool check_constants_for_group_by_key = is_secondary_query || is_distributed_query; if (query_node.hasGroupBy()) { @@ -139,7 +144,7 @@ std::optional analyzeAggregation(const QueryTreeNodeP const auto * constant_key = grouping_set_key_node->as(); group_by_with_constant_keys |= (constant_key != nullptr); - if (constant_key && !aggregates_descriptions.empty() && (!check_deterministic_constants || isDeterministicConstant(*constant_key))) + if (constant_key && !aggregates_descriptions.empty() && (!check_constants_for_group_by_key || canRemoveConstantFromGroupByKey(*constant_key))) continue; auto expression_dag_nodes = actions_visitor.visit(before_aggregation_actions, grouping_set_key_node); @@ -191,7 +196,7 @@ std::optional analyzeAggregation(const QueryTreeNodeP const auto * constant_key = group_by_key_node->as(); group_by_with_constant_keys |= (constant_key != nullptr); - if (constant_key && !aggregates_descriptions.empty() && (!check_deterministic_constants || isDeterministicConstant(*constant_key))) + if (constant_key && !aggregates_descriptions.empty() && (!check_constants_for_group_by_key || canRemoveConstantFromGroupByKey(*constant_key))) continue; auto expression_dag_nodes = actions_visitor.visit(before_aggregation_actions, group_by_key_node); diff --git a/src/Processors/Formats/Impl/CustomSeparatedRowInputFormat.h b/src/Processors/Formats/Impl/CustomSeparatedRowInputFormat.h index ab16aaa56ad..58f78e5af42 100644 --- a/src/Processors/Formats/Impl/CustomSeparatedRowInputFormat.h +++ b/src/Processors/Formats/Impl/CustomSeparatedRowInputFormat.h @@ -80,7 +80,7 @@ public: bool allowVariableNumberOfColumns() const override { return format_settings.custom.allow_variable_number_of_columns; } bool checkForSuffixImpl(bool check_eof); - inline void skipSpaces() { if (ignore_spaces) skipWhitespaceIfAny(*buf, 
true); } + void skipSpaces() { if (ignore_spaces) skipWhitespaceIfAny(*buf, true); } EscapingRule getEscapingRule() const override { return format_settings.custom.escaping_rule; } diff --git a/src/Processors/Formats/Impl/MsgPackRowInputFormat.cpp b/src/Processors/Formats/Impl/MsgPackRowInputFormat.cpp index 98cbdeaaa4b..6b7f1f5206c 100644 --- a/src/Processors/Formats/Impl/MsgPackRowInputFormat.cpp +++ b/src/Processors/Formats/Impl/MsgPackRowInputFormat.cpp @@ -657,7 +657,6 @@ DataTypePtr MsgPackSchemaReader::getDataType(const msgpack::object & object) throw Exception(ErrorCodes::BAD_ARGUMENTS, "Msgpack extension type {:x} is not supported", object_ext.type()); } } - UNREACHABLE(); } std::optional MsgPackSchemaReader::readRowAndGetDataTypes() diff --git a/src/Processors/Formats/Impl/TemplateRowInputFormat.h b/src/Processors/Formats/Impl/TemplateRowInputFormat.h index 38870473289..9a7bc03ea78 100644 --- a/src/Processors/Formats/Impl/TemplateRowInputFormat.h +++ b/src/Processors/Formats/Impl/TemplateRowInputFormat.h @@ -84,7 +84,7 @@ public: void readPrefix(); void skipField(EscapingRule escaping_rule); - inline void skipSpaces() { if (ignore_spaces) skipWhitespaceIfAny(*buf); } + void skipSpaces() { if (ignore_spaces) skipWhitespaceIfAny(*buf); } template ReturnType tryReadPrefixOrSuffix(size_t & input_part_beg, size_t input_part_end); diff --git a/src/Processors/IProcessor.cpp b/src/Processors/IProcessor.cpp index 8b160153733..5ab5e5277aa 100644 --- a/src/Processors/IProcessor.cpp +++ b/src/Processors/IProcessor.cpp @@ -36,8 +36,6 @@ std::string IProcessor::statusToName(Status status) case Status::ExpandPipeline: return "ExpandPipeline"; } - - UNREACHABLE(); } } diff --git a/src/Processors/Merges/Algorithms/AggregatingSortedAlgorithm.cpp b/src/Processors/Merges/Algorithms/AggregatingSortedAlgorithm.cpp index 857f5040b79..a77bb0dabfc 100644 --- a/src/Processors/Merges/Algorithms/AggregatingSortedAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/AggregatingSortedAlgorithm.cpp @@ -76,9 +76,6 @@ static void preprocessChunk(Chunk & chunk, const AggregatingSortedAlgorithm::Col auto num_rows = chunk.getNumRows(); auto columns = chunk.detachColumns(); - for (auto & column : columns) - column = column->convertToFullColumnIfConst(); - for (const auto & desc : def.columns_to_simple_aggregate) if (desc.nested_type) columns[desc.column_number] = recursiveRemoveLowCardinality(columns[desc.column_number]); @@ -266,6 +263,7 @@ AggregatingSortedAlgorithm::AggregatingSortedAlgorithm( void AggregatingSortedAlgorithm::initialize(Inputs inputs) { + removeConstAndSparse(inputs); merged_data.initialize(header, inputs); for (auto & input : inputs) @@ -277,6 +275,7 @@ void AggregatingSortedAlgorithm::initialize(Inputs inputs) void AggregatingSortedAlgorithm::consume(Input & input, size_t source_num) { + removeConstAndSparse(input); preprocessChunk(input.chunk, columns_definition); updateCursor(input, source_num); } diff --git a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp index a5befca7233..466adf93538 100644 --- a/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/FinishAggregatingInOrderAlgorithm.cpp @@ -40,6 +40,7 @@ FinishAggregatingInOrderAlgorithm::FinishAggregatingInOrderAlgorithm( void FinishAggregatingInOrderAlgorithm::initialize(Inputs inputs) { + removeConstAndSparse(inputs); current_inputs = std::move(inputs); states.resize(num_inputs); for 
(size_t i = 0; i < num_inputs; ++i) @@ -48,6 +49,7 @@ void FinishAggregatingInOrderAlgorithm::initialize(Inputs inputs) void FinishAggregatingInOrderAlgorithm::consume(Input & input, size_t source_num) { + removeConstAndSparse(input); if (!input.chunk.hasRows()) return; diff --git a/src/Processors/Merges/Algorithms/IMergingAlgorithm.h b/src/Processors/Merges/Algorithms/IMergingAlgorithm.h index 6e352c3f104..9a1c7c24270 100644 --- a/src/Processors/Merges/Algorithms/IMergingAlgorithm.h +++ b/src/Processors/Merges/Algorithms/IMergingAlgorithm.h @@ -39,7 +39,6 @@ public: void set(Chunk chunk_) { - convertToFullIfSparse(chunk_); chunk = std::move(chunk_); skip_last_row = false; } @@ -47,6 +46,18 @@ public: using Inputs = std::vector; + static void removeConstAndSparse(Input & input) + { + convertToFullIfConst(input.chunk); + convertToFullIfSparse(input.chunk); + } + + static void removeConstAndSparse(Inputs & inputs) + { + for (auto & input : inputs) + removeConstAndSparse(input); + } + virtual const char * getName() const = 0; virtual void initialize(Inputs inputs) = 0; virtual void consume(Input & input, size_t source_num) = 0; diff --git a/src/Processors/Merges/Algorithms/IMergingAlgorithmWithSharedChunks.cpp b/src/Processors/Merges/Algorithms/IMergingAlgorithmWithSharedChunks.cpp index fe5186736b5..47b7ddf38dc 100644 --- a/src/Processors/Merges/Algorithms/IMergingAlgorithmWithSharedChunks.cpp +++ b/src/Processors/Merges/Algorithms/IMergingAlgorithmWithSharedChunks.cpp @@ -17,18 +17,9 @@ IMergingAlgorithmWithSharedChunks::IMergingAlgorithmWithSharedChunks( { } -static void prepareChunk(Chunk & chunk) -{ - auto num_rows = chunk.getNumRows(); - auto columns = chunk.detachColumns(); - for (auto & column : columns) - column = column->convertToFullColumnIfConst(); - - chunk.setColumns(std::move(columns), num_rows); -} - void IMergingAlgorithmWithSharedChunks::initialize(Inputs inputs) { + removeConstAndSparse(inputs); merged_data->initialize(header, inputs); for (size_t source_num = 0; source_num < inputs.size(); ++source_num) @@ -36,8 +27,6 @@ void IMergingAlgorithmWithSharedChunks::initialize(Inputs inputs) if (!inputs[source_num].chunk) continue; - prepareChunk(inputs[source_num].chunk); - auto & source = sources[source_num]; source.skip_last_row = inputs[source_num].skip_last_row; @@ -55,7 +44,7 @@ void IMergingAlgorithmWithSharedChunks::initialize(Inputs inputs) void IMergingAlgorithmWithSharedChunks::consume(Input & input, size_t source_num) { - prepareChunk(input.chunk); + removeConstAndSparse(input); auto & source = sources[source_num]; source.skip_last_row = input.skip_last_row; diff --git a/src/Processors/Merges/Algorithms/MergingSortedAlgorithm.cpp b/src/Processors/Merges/Algorithms/MergingSortedAlgorithm.cpp index d17a4d859ee..3a9cf7ee141 100644 --- a/src/Processors/Merges/Algorithms/MergingSortedAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/MergingSortedAlgorithm.cpp @@ -49,17 +49,16 @@ void MergingSortedAlgorithm::addInput() void MergingSortedAlgorithm::initialize(Inputs inputs) { + removeConstAndSparse(inputs); merged_data.initialize(header, inputs); current_inputs = std::move(inputs); for (size_t source_num = 0; source_num < current_inputs.size(); ++source_num) { auto & chunk = current_inputs[source_num].chunk; - if (!chunk) continue; - convertToFullIfConst(chunk); cursors[source_num] = SortCursorImpl(header, chunk.getColumns(), description, source_num); } @@ -83,7 +82,7 @@ void MergingSortedAlgorithm::initialize(Inputs inputs) void MergingSortedAlgorithm::consume(Input & 
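`removeConstAndSparse` centralizes the materialization that `AggregatingSortedAlgorithm`, `SummingSortedAlgorithm`, and the shared-chunks algorithms previously each did in their own `preprocessChunk`/`prepareChunk` helpers. A hedged mock of the helper's shape (the real code calls `IColumn::convertToFullColumnIfConst` plus the sparse equivalent on `DB::Chunk`):

```cpp
// Mock: materialize const columns once, before cursors are built.
#include <memory>
#include <vector>

struct Column
{
    bool is_const = false;
    std::shared_ptr<Column> materialize() const { return std::make_shared<Column>(); }
};

using ColumnPtr = std::shared_ptr<Column>;
struct Chunk { std::vector<ColumnPtr> columns; };

static void removeConstAndSparse(Chunk & chunk)
{
    for (auto & column : chunk.columns)
        if (column->is_const)
            column = column->materialize(); // full (non-const) copy
}

int main()
{
    Chunk chunk;
    chunk.columns.push_back(std::make_shared<Column>());
    chunk.columns.front()->is_const = true;
    removeConstAndSparse(chunk); // chunk now holds only full columns
    return chunk.columns.front()->is_const ? 1 : 0;
}
```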
input, size_t source_num) { - convertToFullIfConst(input.chunk); + removeConstAndSparse(input); current_inputs[source_num].swap(input); cursors[source_num].reset(current_inputs[source_num].chunk.getColumns(), header); diff --git a/src/Processors/Merges/Algorithms/SummingSortedAlgorithm.cpp b/src/Processors/Merges/Algorithms/SummingSortedAlgorithm.cpp index 7329821cf97..e2c6371c44f 100644 --- a/src/Processors/Merges/Algorithms/SummingSortedAlgorithm.cpp +++ b/src/Processors/Merges/Algorithms/SummingSortedAlgorithm.cpp @@ -387,9 +387,6 @@ static void preprocessChunk(Chunk & chunk, const SummingSortedAlgorithm::Columns auto num_rows = chunk.getNumRows(); auto columns = chunk.detachColumns(); - for (auto & column : columns) - column = column->convertToFullColumnIfConst(); - for (const auto & desc : def.columns_to_aggregate) { if (desc.nested_type) @@ -704,6 +701,7 @@ SummingSortedAlgorithm::SummingSortedAlgorithm( void SummingSortedAlgorithm::initialize(Inputs inputs) { + removeConstAndSparse(inputs); merged_data.initialize(header, inputs); for (auto & input : inputs) @@ -715,6 +713,7 @@ void SummingSortedAlgorithm::initialize(Inputs inputs) void SummingSortedAlgorithm::consume(Input & input, size_t source_num) { + removeConstAndSparse(input); preprocessChunk(input.chunk, columns_definition); updateCursor(input, source_num); } diff --git a/src/Processors/QueryPlan/ReadFromLoopStep.cpp b/src/Processors/QueryPlan/ReadFromLoopStep.cpp new file mode 100644 index 00000000000..10436490a2a --- /dev/null +++ b/src/Processors/QueryPlan/ReadFromLoopStep.cpp @@ -0,0 +1,156 @@ +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include + +namespace DB +{ + namespace ErrorCodes + { + extern const int TOO_MANY_RETRIES_TO_FETCH_PARTS; + } + class PullingPipelineExecutor; + + class LoopSource : public ISource + { + public: + + LoopSource( + const Names & column_names_, + const SelectQueryInfo & query_info_, + const StorageSnapshotPtr & storage_snapshot_, + ContextPtr & context_, + QueryProcessingStage::Enum processed_stage_, + StoragePtr inner_storage_, + size_t max_block_size_, + size_t num_streams_) + : ISource(storage_snapshot_->getSampleBlockForColumns(column_names_)) + , column_names(column_names_) + , query_info(query_info_) + , storage_snapshot(storage_snapshot_) + , processed_stage(processed_stage_) + , context(context_) + , inner_storage(std::move(inner_storage_)) + , max_block_size(max_block_size_) + , num_streams(num_streams_) + { + } + + String getName() const override { return "Loop"; } + + Chunk generate() override + { + while (true) + { + if (!loop) + { + QueryPlan plan; + auto storage_snapshot_ = inner_storage->getStorageSnapshotForQuery(inner_storage->getInMemoryMetadataPtr(), nullptr, context); + inner_storage->read( + plan, + column_names, + storage_snapshot_, + query_info, + context, + processed_stage, + max_block_size, + num_streams); + auto builder = plan.buildQueryPipeline( + QueryPlanOptimizationSettings::fromContext(context), + BuildQueryPipelineSettings::fromContext(context)); + QueryPlanResourceHolder resources; + auto pipe = QueryPipelineBuilder::getPipe(std::move(*builder), resources); + query_pipeline = QueryPipeline(std::move(pipe)); + executor = std::make_unique(query_pipeline); + loop = true; + } + Chunk chunk; + if (executor->pull(chunk)) + { + if (chunk) + { + retries_count = 0; + return chunk; + } + + } + else + { + ++retries_count; + if (retries_count > max_retries_count) + throw 
Exception(ErrorCodes::TOO_MANY_RETRIES_TO_FETCH_PARTS, "Too many retries to pull from storage"); + loop = false; + executor.reset(); + query_pipeline.reset(); + } + } + } + + private: + + const Names column_names; + SelectQueryInfo query_info; + const StorageSnapshotPtr storage_snapshot; + QueryProcessingStage::Enum processed_stage; + ContextPtr context; + StoragePtr inner_storage; + size_t max_block_size; + size_t num_streams; + // Add retries: if inner_storage fails to pull X times in a row, better to fail here than to hang + size_t retries_count = 0; + size_t max_retries_count = 3; + bool loop = false; + QueryPipeline query_pipeline; + std::unique_ptr<PullingPipelineExecutor> executor; + }; + + ReadFromLoopStep::ReadFromLoopStep( + const Names & column_names_, + const SelectQueryInfo & query_info_, + const StorageSnapshotPtr & storage_snapshot_, + const ContextPtr & context_, + QueryProcessingStage::Enum processed_stage_, + StoragePtr inner_storage_, + size_t max_block_size_, + size_t num_streams_) + : SourceStepWithFilter( + DataStream{.header = storage_snapshot_->getSampleBlockForColumns(column_names_)}, + column_names_, + query_info_, + storage_snapshot_, + context_) + , column_names(column_names_) + , processed_stage(processed_stage_) + , inner_storage(std::move(inner_storage_)) + , max_block_size(max_block_size_) + , num_streams(num_streams_) + { + } + + Pipe ReadFromLoopStep::makePipe() + { + return Pipe(std::make_shared<LoopSource>( + column_names, query_info, storage_snapshot, context, processed_stage, inner_storage, max_block_size, num_streams)); + } + + void ReadFromLoopStep::initializePipeline(QueryPipelineBuilder & pipeline, const BuildQueryPipelineSettings &) + { + auto pipe = makePipe(); + + if (pipe.empty()) + { + assert(output_stream != std::nullopt); + pipe = Pipe(std::make_shared<NullSource>(output_stream->header)); + } + + pipeline.init(std::move(pipe)); + } + +} diff --git a/src/Processors/QueryPlan/ReadFromLoopStep.h b/src/Processors/QueryPlan/ReadFromLoopStep.h new file mode 100644 index 00000000000..4eee0ca5605 --- /dev/null +++ b/src/Processors/QueryPlan/ReadFromLoopStep.h @@ -0,0 +1,37 @@ +#pragma once +#include +#include +#include +#include + +namespace DB +{ + + class ReadFromLoopStep final : public SourceStepWithFilter + { + public: + ReadFromLoopStep( + const Names & column_names_, + const SelectQueryInfo & query_info_, + const StorageSnapshotPtr & storage_snapshot_, + const ContextPtr & context_, + QueryProcessingStage::Enum processed_stage_, + StoragePtr inner_storage_, + size_t max_block_size_, + size_t num_streams_); + + String getName() const override { return "ReadFromLoop"; } + + void initializePipeline(QueryPipelineBuilder & pipeline, const BuildQueryPipelineSettings &) override; + + private: + + Pipe makePipe(); + + const Names column_names; + QueryProcessingStage::Enum processed_stage; + StoragePtr inner_storage; + size_t max_block_size; + size_t num_streams; + }; +} diff --git a/src/Processors/QueryPlan/ReadFromMergeTree.cpp b/src/Processors/QueryPlan/ReadFromMergeTree.cpp index 6f0fa55c349..caba1d32988 100644 --- a/src/Processors/QueryPlan/ReadFromMergeTree.cpp +++ b/src/Processors/QueryPlan/ReadFromMergeTree.cpp @@ -381,7 +381,7 @@ Pipe ReadFromMergeTree::readFromPoolParallelReplicas( auto algorithm = std::make_unique(i); auto processor = std::make_unique<MergeTreeSelectProcessor>( - pool, std::move(algorithm), storage_snapshot, prewhere_info, + pool, std::move(algorithm), prewhere_info, actions_settings, block_size_copy, reader_settings); auto source = std::make_shared<MergeTreeSource>(std::move(processor)); @@ -480,7 +480,7 @@
Pipe ReadFromMergeTree::readFromPool( auto algorithm = std::make_unique(i); auto processor = std::make_unique<MergeTreeSelectProcessor>( - pool, std::move(algorithm), storage_snapshot, prewhere_info, + pool, std::move(algorithm), prewhere_info, actions_settings, block_size_copy, reader_settings); auto source = std::make_shared<MergeTreeSource>(std::move(processor)); @@ -592,7 +592,7 @@ Pipe ReadFromMergeTree::readInOrder( algorithm = std::make_unique(i); auto processor = std::make_unique<MergeTreeSelectProcessor>( - pool, std::move(algorithm), storage_snapshot, prewhere_info, + pool, std::move(algorithm), prewhere_info, actions_settings, block_size, reader_settings); processor->addPartLevelToChunk(isQueryWithFinal()); @@ -1136,8 +1136,6 @@ static void addMergingFinal( return std::make_shared<GraphiteRollupSortedTransform>(header, num_outputs, sort_description, max_block_size_rows, /*max_block_size_bytes=*/0, merging_params.graphite_params, now); } - - UNREACHABLE(); }; pipe.addTransform(get_merging_processor()); @@ -2125,8 +2123,6 @@ static const char * indexTypeToString(ReadFromMergeTree::IndexType type) case ReadFromMergeTree::IndexType::Skip: return "Skip"; } - - UNREACHABLE(); } static const char * readTypeToString(ReadFromMergeTree::ReadType type) @@ -2142,8 +2138,6 @@ static const char * readTypeToString(ReadFromMergeTree::ReadType type) case ReadFromMergeTree::ReadType::ParallelReplicas: return "Parallel"; } - - UNREACHABLE(); } void ReadFromMergeTree::describeActions(FormatSettings & format_settings) const diff --git a/src/Processors/QueryPlan/TotalsHavingStep.cpp b/src/Processors/QueryPlan/TotalsHavingStep.cpp index d1bd70fd0b2..ac5e144bf4a 100644 --- a/src/Processors/QueryPlan/TotalsHavingStep.cpp +++ b/src/Processors/QueryPlan/TotalsHavingStep.cpp @@ -86,8 +86,6 @@ static String totalsModeToString(TotalsMode totals_mode, double auto_include_thr case TotalsMode::AFTER_HAVING_AUTO: return "after_having_auto threshold " + std::to_string(auto_include_threshold); } - - UNREACHABLE(); } void TotalsHavingStep::describeActions(FormatSettings & settings) const diff --git a/src/Processors/Transforms/ColumnGathererTransform.cpp b/src/Processors/Transforms/ColumnGathererTransform.cpp index b6bcec26c0c..15f8355bdc7 100644 --- a/src/Processors/Transforms/ColumnGathererTransform.cpp +++ b/src/Processors/Transforms/ColumnGathererTransform.cpp @@ -2,6 +2,7 @@ #include #include #include +#include #include #include @@ -20,11 +21,13 @@ ColumnGathererStream::ColumnGathererStream( size_t num_inputs, ReadBuffer & row_sources_buf_, size_t block_preferred_size_rows_, - size_t block_preferred_size_bytes_) + size_t block_preferred_size_bytes_, + bool is_result_sparse_) : sources(num_inputs) , row_sources_buf(row_sources_buf_) , block_preferred_size_rows(block_preferred_size_rows_) , block_preferred_size_bytes(block_preferred_size_bytes_) + , is_result_sparse(is_result_sparse_) { if (num_inputs == 0) throw Exception(ErrorCodes::EMPTY_DATA_PASSED, "There are no streams to gather"); @@ -36,17 +39,23 @@ void ColumnGathererStream::initialize(Inputs inputs) source_columns.reserve(inputs.size()); for (size_t i = 0; i < inputs.size(); ++i) { - if (inputs[i].chunk) - { - sources[i].update(inputs[i].chunk.detachColumns().at(0)); - source_columns.push_back(sources[i].column); - } + if (!inputs[i].chunk) + continue; + + if (!is_result_sparse) + convertToFullIfSparse(inputs[i].chunk); + + sources[i].update(inputs[i].chunk.detachColumns().at(0)); + source_columns.push_back(sources[i].column); } if (source_columns.empty()) return; result_column = source_columns[0]->cloneEmpty(); + if (is_result_sparse &&
!result_column->isSparse()) + result_column = ColumnSparse::create(std::move(result_column)); + if (result_column->hasDynamicStructure()) result_column->takeDynamicStructureFromSourceColumns(source_columns); } @@ -146,7 +155,12 @@ void ColumnGathererStream::consume(Input & input, size_t source_num) { auto & source = sources[source_num]; if (input.chunk) + { + if (!is_result_sparse) + convertToFullIfSparse(input.chunk); + source.update(input.chunk.getColumns().at(0)); + } if (0 == source.size) { @@ -159,10 +173,11 @@ ColumnGathererTransform::ColumnGathererTransform( size_t num_inputs, ReadBuffer & row_sources_buf_, size_t block_preferred_size_rows_, - size_t block_preferred_size_bytes_) + size_t block_preferred_size_bytes_, + bool is_result_sparse_) : IMergingTransform( num_inputs, header, header, /*have_all_inputs_=*/ true, /*limit_hint_=*/ 0, /*always_read_till_end_=*/ false, - num_inputs, row_sources_buf_, block_preferred_size_rows_, block_preferred_size_bytes_) + num_inputs, row_sources_buf_, block_preferred_size_rows_, block_preferred_size_bytes_, is_result_sparse_) , log(getLogger("ColumnGathererStream")) { if (header.columns() != 1) diff --git a/src/Processors/Transforms/ColumnGathererTransform.h b/src/Processors/Transforms/ColumnGathererTransform.h index 4e56cffa46a..ec5691316ce 100644 --- a/src/Processors/Transforms/ColumnGathererTransform.h +++ b/src/Processors/Transforms/ColumnGathererTransform.h @@ -60,7 +60,8 @@ public: size_t num_inputs, ReadBuffer & row_sources_buf_, size_t block_preferred_size_rows_, - size_t block_preferred_size_bytes_); + size_t block_preferred_size_bytes_, + bool is_result_sparse_); const char * getName() const override { return "ColumnGathererStream"; } void initialize(Inputs inputs) override; @@ -97,6 +98,7 @@ private: const size_t block_preferred_size_rows; const size_t block_preferred_size_bytes; + const bool is_result_sparse; Source * source_to_fully_copy = nullptr; @@ -113,7 +115,8 @@ public: size_t num_inputs, ReadBuffer & row_sources_buf_, size_t block_preferred_size_rows_, - size_t block_preferred_size_bytes_); + size_t block_preferred_size_bytes_, + bool is_result_sparse_); String getName() const override { return "ColumnGathererTransform"; } @@ -145,7 +148,6 @@ void ColumnGathererStream::gather(Column & column_res) next_required_source = -1; - /// We use do ... while here to ensure there will be at least one iteration of this loop. /// Because the column_res.byteSize() could be bigger than block_preferred_size_bytes already at this point. 
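/// (Sketch of the intent, inferred from the surrounding code rather than stated by this diff:
/// the do/while form is kept so the body runs at least once per gather() call even when the
/// byte budget is already exceeded on entry; each iteration appends one run of rows described
/// by row_sources_buf, and the loop stops once block_preferred_size_rows or
/// block_preferred_size_bytes is reached or the row sources are exhausted.)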
do diff --git a/src/Processors/Transforms/FillingTransform.cpp b/src/Processors/Transforms/FillingTransform.cpp index 05fd2a7254f..bb38c3e1dc5 100644 --- a/src/Processors/Transforms/FillingTransform.cpp +++ b/src/Processors/Transforms/FillingTransform.cpp @@ -67,7 +67,6 @@ static FillColumnDescription::StepFunction getStepFunction( FOR_EACH_INTERVAL_KIND(DECLARE_CASE) #undef DECLARE_CASE } - UNREACHABLE(); } static bool tryConvertFields(FillColumnDescription & descr, const DataTypePtr & type) diff --git a/src/Processors/Transforms/FilterTransform.cpp b/src/Processors/Transforms/FilterTransform.cpp index 0793bb3db5b..e8e7f99ce53 100644 --- a/src/Processors/Transforms/FilterTransform.cpp +++ b/src/Processors/Transforms/FilterTransform.cpp @@ -14,6 +14,7 @@ namespace DB namespace ErrorCodes { extern const int ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER; + extern const int LOGICAL_ERROR; } static void replaceFilterToConstant(Block & block, const String & filter_column_name) @@ -81,7 +82,11 @@ static std::unique_ptr combineFilterAndIndices( auto mutable_holder = ColumnUInt8::create(num_rows, 0); auto & data = mutable_holder->getData(); for (auto idx : selected_by_indices) + { + if (idx >= num_rows) + throw Exception(ErrorCodes::LOGICAL_ERROR, "Index {} out of range {}", idx, num_rows); data[idx] = 1; + } /// AND two filters auto * begin = data.data(); diff --git a/src/Processors/Transforms/buildPushingToViewsChain.cpp b/src/Processors/Transforms/buildPushingToViewsChain.cpp index cdcfad4442c..a1a886fb4f7 100644 --- a/src/Processors/Transforms/buildPushingToViewsChain.cpp +++ b/src/Processors/Transforms/buildPushingToViewsChain.cpp @@ -898,8 +898,6 @@ static std::exception_ptr addStorageToException(std::exception_ptr ptr, const St { return std::current_exception(); } - - UNREACHABLE(); } void FinalizingViewsTransform::work() diff --git a/src/Server/HTTPHandler.h b/src/Server/HTTPHandler.h index ae4cf034276..a96402247a2 100644 --- a/src/Server/HTTPHandler.h +++ b/src/Server/HTTPHandler.h @@ -77,12 +77,12 @@ private: bool exception_is_written = false; std::function exception_writer; - inline bool hasDelayed() const + bool hasDelayed() const { return out_maybe_delayed_and_compressed != out_maybe_compressed.get(); } - inline void finalize() + void finalize() { if (finalized) return; @@ -94,7 +94,7 @@ private: out->finalize(); } - inline bool isFinalized() const + bool isFinalized() const { return finalized; } diff --git a/src/Storages/Cache/ExternalDataSourceCache.h b/src/Storages/Cache/ExternalDataSourceCache.h index a5dea2f63db..4c8c7974005 100644 --- a/src/Storages/Cache/ExternalDataSourceCache.h +++ b/src/Storages/Cache/ExternalDataSourceCache.h @@ -70,7 +70,7 @@ public: void initOnce(ContextPtr context, const String & root_dir_, size_t limit_size_, size_t bytes_read_before_flush_); - inline bool isInitialized() const { return initialized; } + bool isInitialized() const { return initialized; } std::pair, std::unique_ptr> createReader(ContextPtr context, IRemoteFileMetadataPtr remote_file_metadata, std::unique_ptr & read_buffer, bool is_random_accessed); diff --git a/src/Storages/Cache/RemoteCacheController.h b/src/Storages/Cache/RemoteCacheController.h index 782a6b89519..22b3d64b1db 100644 --- a/src/Storages/Cache/RemoteCacheController.h +++ b/src/Storages/Cache/RemoteCacheController.h @@ -45,41 +45,41 @@ public: */ void waitMoreData(size_t start_offset_, size_t end_offset_); - inline size_t size() const { return current_offset; } + size_t size() const { return current_offset; } - inline const 
std::filesystem::path & getLocalPath() { return local_path; } - inline String getRemotePath() const { return file_metadata_ptr->remote_path; } + const std::filesystem::path & getLocalPath() { return local_path; } + String getRemotePath() const { return file_metadata_ptr->remote_path; } - inline UInt64 getLastModificationTimestamp() const { return file_metadata_ptr->last_modification_timestamp; } + UInt64 getLastModificationTimestamp() const { return file_metadata_ptr->last_modification_timestamp; } bool isModified(IRemoteFileMetadataPtr file_metadata_); - inline void markInvalid() + void markInvalid() { std::lock_guard lock(mutex); valid = false; } - inline bool isValid() + bool isValid() { std::lock_guard lock(mutex); return valid; } - inline bool isEnable() + bool isEnable() { std::lock_guard lock(mutex); return is_enable; } - inline void disable() + void disable() { std::lock_guard lock(mutex); is_enable = false; } - inline void enable() + void enable() { std::lock_guard lock(mutex); is_enable = true; } IRemoteFileMetadataPtr getFileMetadata() { return file_metadata_ptr; } - inline size_t getFileSize() const { return file_metadata_ptr->file_size; } + size_t getFileSize() const { return file_metadata_ptr->file_size; } void startBackgroundDownload(std::unique_ptr in_readbuffer_, BackgroundSchedulePool & thread_pool); diff --git a/src/Storages/Hive/HiveFile.h b/src/Storages/Hive/HiveFile.h index 2c8a2e020a9..a9468ce7d3d 100644 --- a/src/Storages/Hive/HiveFile.h +++ b/src/Storages/Hive/HiveFile.h @@ -65,8 +65,8 @@ public: {ORC_INPUT_FORMAT, FileFormat::ORC}, }; - static inline bool isFormatClass(const String & format_class) { return VALID_HDFS_FORMATS.contains(format_class); } - static inline FileFormat toFileFormat(const String & format_class) + static bool isFormatClass(const String & format_class) { return VALID_HDFS_FORMATS.contains(format_class); } + static FileFormat toFileFormat(const String & format_class) { if (isFormatClass(format_class)) { diff --git a/src/Storages/Kafka/KafkaConsumer.h b/src/Storages/Kafka/KafkaConsumer.h index f160d1c0855..a3bc97779b3 100644 --- a/src/Storages/Kafka/KafkaConsumer.h +++ b/src/Storages/Kafka/KafkaConsumer.h @@ -82,17 +82,17 @@ public: auto pollTimeout() const { return poll_timeout; } - inline bool hasMorePolledMessages() const + bool hasMorePolledMessages() const { return (stalled_status == NOT_STALLED) && (current != messages.end()); } - inline bool polledDataUnusable() const + bool polledDataUnusable() const { return (stalled_status != NOT_STALLED) && (stalled_status != NO_MESSAGES_RETURNED); } - inline bool isStalled() const { return stalled_status != NOT_STALLED; } + bool isStalled() const { return stalled_status != NOT_STALLED; } void storeLastReadMessageOffset(); void resetToLastCommitted(const char * msg); diff --git a/src/Storages/Kafka/StorageKafka.cpp b/src/Storages/Kafka/StorageKafka.cpp index 03a30d47d91..f5c5d093ce1 100644 --- a/src/Storages/Kafka/StorageKafka.cpp +++ b/src/Storages/Kafka/StorageKafka.cpp @@ -29,6 +29,7 @@ #include #include #include +#include #include #include #include diff --git a/src/Storages/MergeTree/BackgroundJobsAssignee.cpp b/src/Storages/MergeTree/BackgroundJobsAssignee.cpp index 56a4378cf9a..0a69bf1109f 100644 --- a/src/Storages/MergeTree/BackgroundJobsAssignee.cpp +++ b/src/Storages/MergeTree/BackgroundJobsAssignee.cpp @@ -93,7 +93,6 @@ String BackgroundJobsAssignee::toString(Type type) case Type::Moving: return "Moving"; } - UNREACHABLE(); } void BackgroundJobsAssignee::start() diff --git 
a/src/Storages/MergeTree/BackgroundProcessList.h b/src/Storages/MergeTree/BackgroundProcessList.h index c9a4887cca3..bf29aaf32d0 100644 --- a/src/Storages/MergeTree/BackgroundProcessList.h +++ b/src/Storages/MergeTree/BackgroundProcessList.h @@ -87,7 +87,7 @@ public: virtual void onEntryCreate(const Entry & /* entry */) {} virtual void onEntryDestroy(const Entry & /* entry */) {} - virtual inline ~BackgroundProcessList() = default; + virtual ~BackgroundProcessList() = default; }; } diff --git a/src/Storages/MergeTree/IMergeTreeDataPart.h b/src/Storages/MergeTree/IMergeTreeDataPart.h index a337bdbe1c5..bd3814bf415 100644 --- a/src/Storages/MergeTree/IMergeTreeDataPart.h +++ b/src/Storages/MergeTree/IMergeTreeDataPart.h @@ -462,23 +462,23 @@ public: /// File with compression codec name which was used to compress part columns /// by default. Some columns may have their own compression codecs, but /// default will be stored in this file. - static inline constexpr auto DEFAULT_COMPRESSION_CODEC_FILE_NAME = "default_compression_codec.txt"; + static constexpr auto DEFAULT_COMPRESSION_CODEC_FILE_NAME = "default_compression_codec.txt"; /// "delete-on-destroy.txt" is deprecated. It is no longer being created, only is removed. - static inline constexpr auto DELETE_ON_DESTROY_MARKER_FILE_NAME_DEPRECATED = "delete-on-destroy.txt"; + static constexpr auto DELETE_ON_DESTROY_MARKER_FILE_NAME_DEPRECATED = "delete-on-destroy.txt"; - static inline constexpr auto UUID_FILE_NAME = "uuid.txt"; + static constexpr auto UUID_FILE_NAME = "uuid.txt"; /// File that contains information about kinds of serialization of columns /// and information that helps to choose kind of serialization later during merging /// (number of rows, number of rows with default values, etc). - static inline constexpr auto SERIALIZATION_FILE_NAME = "serialization.json"; + static constexpr auto SERIALIZATION_FILE_NAME = "serialization.json"; /// Version used for transactions. - static inline constexpr auto TXN_VERSION_METADATA_FILE_NAME = "txn_version.txt"; + static constexpr auto TXN_VERSION_METADATA_FILE_NAME = "txn_version.txt"; - static inline constexpr auto METADATA_VERSION_FILE_NAME = "metadata_version.txt"; + static constexpr auto METADATA_VERSION_FILE_NAME = "metadata_version.txt"; /// One of part files which is used to check how many references (I'd like /// to say hardlinks, but it will confuse even more) we have for the part @@ -490,7 +490,7 @@ public: /// it was mutation without any change for source part. In this case we /// really don't need to remove data from remote FS and need only decrement /// reference counter locally. - static inline constexpr auto FILE_FOR_REFERENCES_CHECK = "checksums.txt"; + static constexpr auto FILE_FOR_REFERENCES_CHECK = "checksums.txt"; /// Checks that all TTLs (table min/max, column ttls, so on) for part /// calculated. 
Part without calculated TTL may exist if TTL was added after diff --git a/src/Storages/MergeTree/KeyCondition.cpp b/src/Storages/MergeTree/KeyCondition.cpp index bd8642b9f66..9666da574fb 100644 --- a/src/Storages/MergeTree/KeyCondition.cpp +++ b/src/Storages/MergeTree/KeyCondition.cpp @@ -2964,8 +2964,6 @@ String KeyCondition::RPNElement::toString(std::string_view column_name, bool pri case ALWAYS_TRUE: return "true"; } - - UNREACHABLE(); } diff --git a/src/Storages/MergeTree/MergeFromLogEntryTask.cpp b/src/Storages/MergeTree/MergeFromLogEntryTask.cpp index e8d55f75b08..2db0c0af3d7 100644 --- a/src/Storages/MergeTree/MergeFromLogEntryTask.cpp +++ b/src/Storages/MergeTree/MergeFromLogEntryTask.cpp @@ -312,6 +312,7 @@ ReplicatedMergeMutateTaskBase::PrepareResult MergeFromLogEntryTask::prepare() task_context = Context::createCopy(storage.getContext()); task_context->makeQueryContext(); task_context->setCurrentQueryId(getQueryId()); + task_context->setBackgroundOperationTypeForContext(ClientInfo::BackgroundOperationType::MERGE); /// Add merge to list merge_mutate_entry = storage.getContext()->getMergeList().insert( diff --git a/src/Storages/MergeTree/MergePlainMergeTreeTask.cpp b/src/Storages/MergeTree/MergePlainMergeTreeTask.cpp index 866a63911c3..a7070c80df9 100644 --- a/src/Storages/MergeTree/MergePlainMergeTreeTask.cpp +++ b/src/Storages/MergeTree/MergePlainMergeTreeTask.cpp @@ -168,6 +168,7 @@ ContextMutablePtr MergePlainMergeTreeTask::createTaskContext() const context->makeQueryContext(); auto queryId = getQueryId(); context->setCurrentQueryId(queryId); + context->setBackgroundOperationTypeForContext(ClientInfo::BackgroundOperationType::MERGE); return context; } diff --git a/src/Storages/MergeTree/MergeTask.cpp b/src/Storages/MergeTree/MergeTask.cpp index e43b6c615b3..f1f856da3a2 100644 --- a/src/Storages/MergeTree/MergeTask.cpp +++ b/src/Storages/MergeTree/MergeTask.cpp @@ -536,6 +536,7 @@ bool MergeTask::VerticalMergeStage::prepareVerticalMergeForAllColumns() const std::unique_ptr reread_buf = wbuf_readable ? 
wbuf_readable->tryGetReadBuffer() : nullptr; if (!reread_buf) throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot read temporary file {}", ctx->rows_sources_uncompressed_write_buf->getFileName()); + auto * reread_buffer_raw = dynamic_cast<ReadBufferFromFile *>(reread_buf.get()); if (!reread_buffer_raw) { @@ -556,6 +557,7 @@ bool MergeTask::VerticalMergeStage::prepareVerticalMergeForAllColumns() const ctx->it_name_and_type = global_ctx->gathering_columns.cbegin(); const auto & settings = global_ctx->context->getSettingsRef(); + size_t max_delayed_streams = 0; if (global_ctx->new_data_part->getDataPartStorage().supportParallelWrite()) { @@ -564,20 +566,20 @@ bool MergeTask::VerticalMergeStage::prepareVerticalMergeForAllColumns() const else max_delayed_streams = DEFAULT_DELAYED_STREAMS_FOR_PARALLEL_WRITE; } + ctx->max_delayed_streams = max_delayed_streams; + bool all_parts_on_remote_disks = std::ranges::all_of(global_ctx->future_part->parts, [](const auto & part) { return part->isStoredOnRemoteDisk(); }); + ctx->use_prefetch = all_parts_on_remote_disks && global_ctx->data->getSettings()->vertical_merge_remote_filesystem_prefetch; + + if (ctx->use_prefetch && ctx->it_name_and_type != global_ctx->gathering_columns.end()) + ctx->prepared_pipe = createPipeForReadingOneColumn(ctx->it_name_and_type->name); + return false; } -void MergeTask::VerticalMergeStage::prepareVerticalMergeForOneColumn() const +Pipe MergeTask::VerticalMergeStage::createPipeForReadingOneColumn(const String & column_name) const { - const auto & [column_name, column_type] = *ctx->it_name_and_type; - Names column_names{column_name}; - - ctx->progress_before = global_ctx->merge_list_element_ptr->progress.load(std::memory_order_relaxed); - - global_ctx->column_progress = std::make_unique<MergeStageProgress>(ctx->progress_before, ctx->column_sizes->columnWeight(column_name)); - Pipes pipes; for (size_t part_num = 0; part_num < global_ctx->future_part->parts.size(); ++part_num) { @@ -586,20 +588,42 @@ void MergeTask::VerticalMergeStage::prepareVerticalMergeForOneColumn() const *global_ctx->data, global_ctx->storage_snapshot, global_ctx->future_part->parts[part_num], - column_names, + Names{column_name}, /*mark_ranges=*/ {}, + global_ctx->input_rows_filtered, /*apply_deleted_mask=*/ true, ctx->read_with_direct_io, - /*take_column_types_from_storage=*/ true, - /*quiet=*/ false, - global_ctx->input_rows_filtered); + ctx->use_prefetch); pipes.emplace_back(std::move(pipe)); } - auto pipe = Pipe::unitePipes(std::move(pipes)); + return Pipe::unitePipes(std::move(pipes)); +} + +void MergeTask::VerticalMergeStage::prepareVerticalMergeForOneColumn() const +{ + const auto & column_name = ctx->it_name_and_type->name; + + ctx->progress_before = global_ctx->merge_list_element_ptr->progress.load(std::memory_order_relaxed); + global_ctx->column_progress = std::make_unique<MergeStageProgress>(ctx->progress_before, ctx->column_sizes->columnWeight(column_name)); + + Pipe pipe; + if (ctx->prepared_pipe) + { + pipe = std::move(*ctx->prepared_pipe); + + auto next_column_it = std::next(ctx->it_name_and_type); + if (next_column_it != global_ctx->gathering_columns.end()) + ctx->prepared_pipe = createPipeForReadingOneColumn(next_column_it->name); + } + else + { + pipe = createPipeForReadingOneColumn(column_name); + } ctx->rows_sources_read_buf->seek(0, 0); + bool is_result_sparse = global_ctx->new_data_part->getSerialization(column_name)->getKind() == ISerialization::Kind::SPARSE; const auto data_settings = global_ctx->data->getSettings(); auto transform = std::make_unique<ColumnGathererTransform>( @@ -607,7 +631,8 @@ void
MergeTask::VerticalMergeStage::prepareVerticalMergeForOneColumn() const pipe.numOutputPorts(), *ctx->rows_sources_read_buf, data_settings->merge_max_block_size, - data_settings->merge_max_block_size_bytes); + data_settings->merge_max_block_size_bytes, + is_result_sparse); pipe.addTransform(std::move(transform)); @@ -953,11 +978,10 @@ void MergeTask::ExecuteAndFinalizeHorizontalPart::createMergedStream() part, global_ctx->merging_column_names, /*mark_ranges=*/ {}, + global_ctx->input_rows_filtered, /*apply_deleted_mask=*/ true, ctx->read_with_direct_io, - /*take_column_types_from_storage=*/ true, - /*quiet=*/ false, - global_ctx->input_rows_filtered); + /*prefetch=*/ false); if (global_ctx->metadata_snapshot->hasSortingKey()) { diff --git a/src/Storages/MergeTree/MergeTask.h b/src/Storages/MergeTree/MergeTask.h index c8b0662e3eb..1294fa30449 100644 --- a/src/Storages/MergeTree/MergeTask.h +++ b/src/Storages/MergeTree/MergeTask.h @@ -299,7 +299,9 @@ private: Float64 progress_before = 0; std::unique_ptr column_to{nullptr}; + std::optional prepared_pipe; size_t max_delayed_streams = 0; + bool use_prefetch = false; std::list> delayed_streams; size_t column_elems_written{0}; QueryPipeline column_parts_pipeline; @@ -340,6 +342,8 @@ private: bool executeVerticalMergeForOneColumn() const; void finalizeVerticalMergeForOneColumn() const; + Pipe createPipeForReadingOneColumn(const String & column_name) const; + VerticalMergeRuntimeContextPtr ctx; GlobalRuntimeContextPtr global_ctx; }; diff --git a/src/Storages/MergeTree/MergeTreeBlockReadUtils.h b/src/Storages/MergeTree/MergeTreeBlockReadUtils.h index b19c42c8db8..c1514416301 100644 --- a/src/Storages/MergeTree/MergeTreeBlockReadUtils.h +++ b/src/Storages/MergeTree/MergeTreeBlockReadUtils.h @@ -41,13 +41,13 @@ struct MergeTreeBlockSizePredictor void update(const Block & sample_block, const Columns & columns, size_t num_rows, double decay = calculateDecay()); /// Return current block size (after update()) - inline size_t getBlockSize() const + size_t getBlockSize() const { return block_size_bytes; } /// Predicts what number of rows should be read to exhaust byte quota per column - inline size_t estimateNumRowsForMaxSizeColumn(size_t bytes_quota) const + size_t estimateNumRowsForMaxSizeColumn(size_t bytes_quota) const { double max_size_per_row = std::max(std::max(max_size_per_row_fixed, 1), max_size_per_row_dynamic); return (bytes_quota > block_size_rows * max_size_per_row) @@ -56,14 +56,14 @@ struct MergeTreeBlockSizePredictor } /// Predicts what number of rows should be read to exhaust byte quota per block - inline size_t estimateNumRows(size_t bytes_quota) const + size_t estimateNumRows(size_t bytes_quota) const { return (bytes_quota > block_size_bytes) ? static_cast((bytes_quota - block_size_bytes) / std::max(1, static_cast(bytes_per_row_current))) : 0; } - inline void updateFilteredRowsRation(size_t rows_was_read, size_t rows_was_filtered, double decay = calculateDecay()) + void updateFilteredRowsRation(size_t rows_was_read, size_t rows_was_filtered, double decay = calculateDecay()) { double alpha = std::pow(1. 
- decay, rows_was_read); double current_ration = rows_was_filtered / std::max(1.0, static_cast(rows_was_read)); diff --git a/src/Storages/MergeTree/MergeTreeData.cpp b/src/Storages/MergeTree/MergeTreeData.cpp index 4b3093eeaac..b6373a22d9c 100644 --- a/src/Storages/MergeTree/MergeTreeData.cpp +++ b/src/Storages/MergeTree/MergeTreeData.cpp @@ -1177,8 +1177,6 @@ String MergeTreeData::MergingParams::getModeName() const case Graphite: return "Graphite"; case VersionedCollapsing: return "VersionedCollapsing"; } - - UNREACHABLE(); } Int64 MergeTreeData::getMaxBlockNumber() const diff --git a/src/Storages/MergeTree/MergeTreeDataWriter.cpp b/src/Storages/MergeTree/MergeTreeDataWriter.cpp index 426e36ce9a9..df4087b8546 100644 --- a/src/Storages/MergeTree/MergeTreeDataWriter.cpp +++ b/src/Storages/MergeTree/MergeTreeDataWriter.cpp @@ -360,8 +360,6 @@ Block MergeTreeDataWriter::mergeBlock( return std::make_shared( block, 1, sort_description, block_size + 1, /*block_size_bytes=*/0, merging_params.graphite_params, time(nullptr)); } - - UNREACHABLE(); }; auto merging_algorithm = get_merging_algorithm(); diff --git a/src/Storages/MergeTree/MergeTreeIndexGranularityInfo.h b/src/Storages/MergeTree/MergeTreeIndexGranularityInfo.h index 85006c3ffde..87445c99ade 100644 --- a/src/Storages/MergeTree/MergeTreeIndexGranularityInfo.h +++ b/src/Storages/MergeTree/MergeTreeIndexGranularityInfo.h @@ -64,8 +64,8 @@ public: std::string describe() const; }; -constexpr inline auto getNonAdaptiveMrkSizeWide() { return sizeof(UInt64) * 2; } -constexpr inline auto getAdaptiveMrkSizeWide() { return sizeof(UInt64) * 3; } +constexpr auto getNonAdaptiveMrkSizeWide() { return sizeof(UInt64) * 2; } +constexpr auto getAdaptiveMrkSizeWide() { return sizeof(UInt64) * 3; } inline size_t getAdaptiveMrkSizeCompact(size_t columns_num); } diff --git a/src/Storages/MergeTree/MergeTreeRangeReader.cpp b/src/Storages/MergeTree/MergeTreeRangeReader.cpp index 9e344a7161d..e88ded5437d 100644 --- a/src/Storages/MergeTree/MergeTreeRangeReader.cpp +++ b/src/Storages/MergeTree/MergeTreeRangeReader.cpp @@ -28,6 +28,12 @@ # pragma clang diagnostic ignored "-Wreserved-identifier" #endif +namespace ProfileEvents +{ +extern const Event RowsReadByMainReader; +extern const Event RowsReadByPrewhereReaders; +} + namespace DB { namespace ErrorCodes @@ -807,12 +813,14 @@ MergeTreeRangeReader::MergeTreeRangeReader( IMergeTreeReader * merge_tree_reader_, MergeTreeRangeReader * prev_reader_, const PrewhereExprStep * prewhere_info_, - bool last_reader_in_chain_) + bool last_reader_in_chain_, + bool main_reader_) : merge_tree_reader(merge_tree_reader_) , index_granularity(&(merge_tree_reader->data_part_info_for_read->getIndexGranularity())) , prev_reader(prev_reader_) , prewhere_info(prewhere_info_) , last_reader_in_chain(last_reader_in_chain_) + , main_reader(main_reader_) , is_initialized(true) { if (prev_reader) @@ -1150,6 +1158,10 @@ MergeTreeRangeReader::ReadResult MergeTreeRangeReader::startReadingChain(size_t result.adjustLastGranule(); fillVirtualColumns(result, leading_begin_part_offset, leading_end_part_offset); + + ProfileEvents::increment(ProfileEvents::RowsReadByMainReader, main_reader * result.numReadRows()); + ProfileEvents::increment(ProfileEvents::RowsReadByPrewhereReaders, (!main_reader) * result.numReadRows()); + return result; } @@ -1258,6 +1270,9 @@ Columns MergeTreeRangeReader::continueReadingChain(const ReadResult & result, si throw Exception(ErrorCodes::LOGICAL_ERROR, "RangeReader read {} rows, but {} expected.", num_rows, 
result.total_rows_per_granule); + ProfileEvents::increment(ProfileEvents::RowsReadByMainReader, main_reader * num_rows); + ProfileEvents::increment(ProfileEvents::RowsReadByPrewhereReaders, (!main_reader) * num_rows); + return columns; } diff --git a/src/Storages/MergeTree/MergeTreeRangeReader.h b/src/Storages/MergeTree/MergeTreeRangeReader.h index b282ada6038..7acc8cd88b4 100644 --- a/src/Storages/MergeTree/MergeTreeRangeReader.h +++ b/src/Storages/MergeTree/MergeTreeRangeReader.h @@ -101,7 +101,8 @@ public: IMergeTreeReader * merge_tree_reader_, MergeTreeRangeReader * prev_reader_, const PrewhereExprStep * prewhere_info_, - bool last_reader_in_chain_); + bool last_reader_in_chain_, + bool main_reader_); MergeTreeRangeReader() = default; @@ -326,6 +327,7 @@ private: Block result_sample_block; /// Block with columns that are returned by this step. bool last_reader_in_chain = false; + bool main_reader = false; /// Whether it is the main reader or one of the readers for prewhere steps bool is_initialized = false; LoggerPtr log = getLogger("MergeTreeRangeReader"); diff --git a/src/Storages/MergeTree/MergeTreeReadTask.cpp b/src/Storages/MergeTree/MergeTreeReadTask.cpp index 08b30e445e2..177a325ea5a 100644 --- a/src/Storages/MergeTree/MergeTreeReadTask.cpp +++ b/src/Storages/MergeTree/MergeTreeReadTask.cpp @@ -83,7 +83,8 @@ MergeTreeReadTask::createRangeReaders(const Readers & task_readers, const Prewhe { last_reader = task_readers.main->getColumns().empty() && (i + 1 == prewhere_actions.steps.size()); - MergeTreeRangeReader current_reader(task_readers.prewhere[i].get(), prev_reader, prewhere_actions.steps[i].get(), last_reader); + MergeTreeRangeReader current_reader( + task_readers.prewhere[i].get(), prev_reader, prewhere_actions.steps[i].get(), last_reader, /*main_reader_=*/false); new_range_readers.prewhere.push_back(std::move(current_reader)); prev_reader = &new_range_readers.prewhere.back(); @@ -91,7 +92,7 @@ MergeTreeReadTask::createRangeReaders(const Readers & task_readers, const Prewhe if (!last_reader) { - new_range_readers.main = MergeTreeRangeReader(task_readers.main.get(), prev_reader, nullptr, true); + new_range_readers.main = MergeTreeRangeReader(task_readers.main.get(), prev_reader, nullptr, true, /*main_reader_=*/true); } else { diff --git a/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp b/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp index fce733d47b7..78b67de1a7e 100644 --- a/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp +++ b/src/Storages/MergeTree/MergeTreeSelectProcessor.cpp @@ -26,14 +26,12 @@ namespace ErrorCodes MergeTreeSelectProcessor::MergeTreeSelectProcessor( MergeTreeReadPoolPtr pool_, MergeTreeSelectAlgorithmPtr algorithm_, - const StorageSnapshotPtr & storage_snapshot_, const PrewhereInfoPtr & prewhere_info_, const ExpressionActionsSettings & actions_settings_, const MergeTreeReadTask::BlockSizeParams & block_size_params_, const MergeTreeReaderSettings & reader_settings_) : pool(std::move(pool_)) , algorithm(std::move(algorithm_)) - , storage_snapshot(storage_snapshot_) , prewhere_info(prewhere_info_) , actions_settings(actions_settings_) , prewhere_actions(getPrewhereActions(prewhere_info, actions_settings, reader_settings_.enable_multiple_prewhere_read_steps)) diff --git a/src/Storages/MergeTree/MergeTreeSelectProcessor.h b/src/Storages/MergeTree/MergeTreeSelectProcessor.h index 6b663e0fd36..8f41f5deacb 100644 --- a/src/Storages/MergeTree/MergeTreeSelectProcessor.h +++ b/src/Storages/MergeTree/MergeTreeSelectProcessor.h @@ -41,7 +41,6 @@ public: 
MergeTreeSelectProcessor( MergeTreeReadPoolPtr pool_, MergeTreeSelectAlgorithmPtr algorithm_, - const StorageSnapshotPtr & storage_snapshot_, const PrewhereInfoPtr & prewhere_info_, const ExpressionActionsSettings & actions_settings_, const MergeTreeReadTask::BlockSizeParams & block_size_params_, @@ -71,7 +70,6 @@ private: const MergeTreeReadPoolPtr pool; const MergeTreeSelectAlgorithmPtr algorithm; - const StorageSnapshotPtr storage_snapshot; const PrewhereInfoPtr prewhere_info; const ExpressionActionsSettings actions_settings; diff --git a/src/Storages/MergeTree/MergeTreeSequentialSource.cpp b/src/Storages/MergeTree/MergeTreeSequentialSource.cpp index fbb48b37482..865371b7d2c 100644 --- a/src/Storages/MergeTree/MergeTreeSequentialSource.cpp +++ b/src/Storages/MergeTree/MergeTreeSequentialSource.cpp @@ -42,8 +42,7 @@ public: std::optional mark_ranges_, bool apply_deleted_mask, bool read_with_direct_io_, - bool take_column_types_from_storage, - bool quiet = false); + bool prefetch); ~MergeTreeSequentialSource() override; @@ -96,8 +95,7 @@ MergeTreeSequentialSource::MergeTreeSequentialSource( std::optional mark_ranges_, bool apply_deleted_mask, bool read_with_direct_io_, - bool take_column_types_from_storage, - bool quiet) + bool prefetch) : ISource(storage_snapshot_->getSampleBlockForColumns(columns_to_read_)) , storage(storage_) , storage_snapshot(storage_snapshot_) @@ -107,16 +105,13 @@ MergeTreeSequentialSource::MergeTreeSequentialSource( , mark_ranges(std::move(mark_ranges_)) , mark_cache(storage.getContext()->getMarkCache()) { - if (!quiet) - { - /// Print column name but don't pollute logs in case of many columns. - if (columns_to_read.size() == 1) - LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part, column {}", - data_part->getMarksCount(), data_part->name, data_part->rows_count, columns_to_read.front()); - else - LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part", - data_part->getMarksCount(), data_part->name, data_part->rows_count); - } + /// Print column name but don't pollute logs in case of many columns. 
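+ /// Illustrative example of the resulting log line (part and column names are hypothetical):
+ /// "Reading 24 marks from part all_1_1_0, total 196608 rows starting from the beginning of the part, column event_time"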
+ if (columns_to_read.size() == 1) + LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part, column {}", + data_part->getMarksCount(), data_part->name, data_part->rows_count, columns_to_read.front()); + else + LOG_DEBUG(log, "Reading {} marks from part {}, total {} rows starting from the beginning of the part", + data_part->getMarksCount(), data_part->name, data_part->rows_count); auto alter_conversions = storage.getAlterConversionsForPart(data_part); @@ -131,21 +126,12 @@ MergeTreeSequentialSource::MergeTreeSequentialSource( storage.supportsSubcolumns(), columns_to_read); - NamesAndTypesList columns_for_reader; - if (take_column_types_from_storage) - { - auto options = GetColumnsOptions(GetColumnsOptions::AllPhysical) - .withExtendedObjects() - .withVirtuals() - .withSubcolumns(storage.supportsSubcolumns()); + auto options = GetColumnsOptions(GetColumnsOptions::AllPhysical) + .withExtendedObjects() + .withVirtuals() + .withSubcolumns(storage.supportsSubcolumns()); - columns_for_reader = storage_snapshot->getColumnsByNames(options, columns_to_read); - } - else - { - /// take columns from data_part - columns_for_reader = data_part->getColumns().addTypes(columns_to_read); - } + auto columns_for_reader = storage_snapshot->getColumnsByNames(options, columns_to_read); const auto & context = storage.getContext(); ReadSettings read_settings = context->getReadSettings(); @@ -191,6 +177,9 @@ MergeTreeSequentialSource::MergeTreeSequentialSource( reader_settings, /*avg_value_size_hints=*/ {}, /*profile_callback=*/ {}); + + if (prefetch) + reader->prefetchBeginOfRange(Priority{}); } static void fillBlockNumberColumns( @@ -313,11 +302,10 @@ Pipe createMergeTreeSequentialSource( MergeTreeData::DataPartPtr data_part, Names columns_to_read, std::optional mark_ranges, + std::shared_ptr> filtered_rows_count, bool apply_deleted_mask, bool read_with_direct_io, - bool take_column_types_from_storage, - bool quiet, - std::shared_ptr> filtered_rows_count) + bool prefetch) { /// The part might have some rows masked by lightweight deletes @@ -329,7 +317,7 @@ Pipe createMergeTreeSequentialSource( auto column_part_source = std::make_shared(type, storage, storage_snapshot, data_part, columns_to_read, std::move(mark_ranges), - /*apply_deleted_mask=*/ false, read_with_direct_io, take_column_types_from_storage, quiet); + /*apply_deleted_mask=*/ false, read_with_direct_io, prefetch); Pipe pipe(std::move(column_part_source)); @@ -408,11 +396,10 @@ public: data_part, columns_to_read, std::move(mark_ranges), + /*filtered_rows_count=*/ nullptr, apply_deleted_mask, /*read_with_direct_io=*/ false, - /*take_column_types_from_storage=*/ true, - /*quiet=*/ false, - /*filtered_rows_count=*/ nullptr); + /*prefetch=*/ false); pipeline.init(Pipe(std::move(source))); } diff --git a/src/Storages/MergeTree/MergeTreeSequentialSource.h b/src/Storages/MergeTree/MergeTreeSequentialSource.h index a5e36a7726f..e6f055f776c 100644 --- a/src/Storages/MergeTree/MergeTreeSequentialSource.h +++ b/src/Storages/MergeTree/MergeTreeSequentialSource.h @@ -23,11 +23,10 @@ Pipe createMergeTreeSequentialSource( MergeTreeData::DataPartPtr data_part, Names columns_to_read, std::optional mark_ranges, + std::shared_ptr> filtered_rows_count, bool apply_deleted_mask, bool read_with_direct_io, - bool take_column_types_from_storage, - bool quiet, - std::shared_ptr> filtered_rows_count); + bool prefetch); class QueryPlan; diff --git a/src/Storages/MergeTree/MergeTreeSettings.h b/src/Storages/MergeTree/MergeTreeSettings.h 
index a00508fd1c1..cea999d0d98 100644 --- a/src/Storages/MergeTree/MergeTreeSettings.h +++ b/src/Storages/MergeTree/MergeTreeSettings.h @@ -35,7 +35,7 @@ struct Settings; M(UInt64, min_bytes_for_wide_part, 10485760, "Minimal uncompressed size in bytes to create part in wide format instead of compact", 0) \ M(UInt64, min_rows_for_wide_part, 0, "Minimal number of rows to create part in wide format instead of compact", 0) \ M(Float, ratio_of_defaults_for_sparse_serialization, 0.9375f, "Minimal ratio of number of default values to number of all values in column to store it in sparse serializations. If >= 1, columns will be always written in full serialization.", 0) \ - M(Bool, replace_long_file_name_to_hash, false, "If the file name for column is too long (more than 'max_file_name_length' bytes) replace it to SipHash128", 0) \ + M(Bool, replace_long_file_name_to_hash, true, "If the file name for column is too long (more than 'max_file_name_length' bytes) replace it to SipHash128", 0) \ M(UInt64, max_file_name_length, 127, "The maximal length of the file name to keep it as is without hashing", 0) \ M(UInt64, min_bytes_for_full_part_storage, 0, "Only available in ClickHouse Cloud", 0) \ M(UInt64, min_rows_for_full_part_storage, 0, "Only available in ClickHouse Cloud", 0) \ @@ -148,6 +148,7 @@ struct Settings; M(UInt64, vertical_merge_algorithm_min_rows_to_activate, 16 * 8192, "Minimal (approximate) sum of rows in merging parts to activate Vertical merge algorithm.", 0) \ M(UInt64, vertical_merge_algorithm_min_bytes_to_activate, 0, "Minimal (approximate) uncompressed size in bytes in merging parts to activate Vertical merge algorithm.", 0) \ M(UInt64, vertical_merge_algorithm_min_columns_to_activate, 11, "Minimal amount of non-PK columns to activate Vertical merge algorithm.", 0) \ + M(Bool, vertical_merge_remote_filesystem_prefetch, true, "If true prefetching of data from remote filesystem is used for the next column during merge", 0) \ M(UInt64, max_postpone_time_for_failed_mutations_ms, 5ULL * 60 * 1000, "The maximum postpone time for failed mutations.", 0) \ \ /** Compatibility settings */ \ diff --git a/src/Storages/MergeTree/MutateFromLogEntryTask.cpp b/src/Storages/MergeTree/MutateFromLogEntryTask.cpp index 3415b08cebb..8d40658bb2c 100644 --- a/src/Storages/MergeTree/MutateFromLogEntryTask.cpp +++ b/src/Storages/MergeTree/MutateFromLogEntryTask.cpp @@ -206,6 +206,7 @@ ReplicatedMergeMutateTaskBase::PrepareResult MutateFromLogEntryTask::prepare() task_context = Context::createCopy(storage.getContext()); task_context->makeQueryContext(); task_context->setCurrentQueryId(getQueryId()); + task_context->setBackgroundOperationTypeForContext(ClientInfo::BackgroundOperationType::MUTATION); merge_mutate_entry = storage.getContext()->getMergeList().insert( storage.getStorageID(), diff --git a/src/Storages/MergeTree/MutatePlainMergeTreeTask.cpp b/src/Storages/MergeTree/MutatePlainMergeTreeTask.cpp index 0b19aebe36d..2fd02708421 100644 --- a/src/Storages/MergeTree/MutatePlainMergeTreeTask.cpp +++ b/src/Storages/MergeTree/MutatePlainMergeTreeTask.cpp @@ -139,6 +139,7 @@ ContextMutablePtr MutatePlainMergeTreeTask::createTaskContext() const context->makeQueryContext(); auto queryId = getQueryId(); context->setCurrentQueryId(queryId); + context->setBackgroundOperationTypeForContext(ClientInfo::BackgroundOperationType::MUTATION); return context; } diff --git a/src/Storages/MergeTree/PartMovesBetweenShardsOrchestrator.cpp b/src/Storages/MergeTree/PartMovesBetweenShardsOrchestrator.cpp index 
78fcfabb704..4228d7b70b6 100644 --- a/src/Storages/MergeTree/PartMovesBetweenShardsOrchestrator.cpp +++ b/src/Storages/MergeTree/PartMovesBetweenShardsOrchestrator.cpp @@ -616,8 +616,6 @@ PartMovesBetweenShardsOrchestrator::Entry PartMovesBetweenShardsOrchestrator::st } } } - - UNREACHABLE(); } void PartMovesBetweenShardsOrchestrator::removePins(const Entry & entry, zkutil::ZooKeeperPtr zk) diff --git a/src/Storages/NamedCollectionsHelpers.cpp b/src/Storages/NamedCollectionsHelpers.cpp index c1e744e8d79..47b69d79ad8 100644 --- a/src/Storages/NamedCollectionsHelpers.cpp +++ b/src/Storages/NamedCollectionsHelpers.cpp @@ -1,6 +1,7 @@ #include "NamedCollectionsHelpers.h" #include #include +#include #include #include #include diff --git a/src/Storages/ObjectStorage/ReadBufferIterator.cpp b/src/Storages/ObjectStorage/ReadBufferIterator.cpp index 5e89a0a1b9d..78cdc442f64 100644 --- a/src/Storages/ObjectStorage/ReadBufferIterator.cpp +++ b/src/Storages/ObjectStorage/ReadBufferIterator.cpp @@ -254,21 +254,17 @@ ReadBufferIterator::Data ReadBufferIterator::next() } } - LOG_TEST(getLogger("KSSENII"), "Will read columns from {}", current_object_info->getPath()); - std::unique_ptr<ReadBuffer> read_buf; CompressionMethod compression_method; using ObjectInfoInArchive = StorageObjectStorageSource::ArchiveIterator::ObjectInfoInArchive; if (const auto * object_info_in_archive = dynamic_cast<const ObjectInfoInArchive *>(current_object_info.get())) { - LOG_TEST(getLogger("KSSENII"), "Will read columns from {} from archive", current_object_info->getPath()); compression_method = chooseCompressionMethod(filename, configuration->compression_method); const auto & archive_reader = object_info_in_archive->archive_reader; read_buf = archive_reader->readFile(object_info_in_archive->path_in_archive, /*throw_on_not_found=*/true); } else { - LOG_TEST(getLogger("KSSENII"), "Will read columns from {} from s3", current_object_info->getPath()); compression_method = chooseCompressionMethod(filename, configuration->compression_method); read_buf = object_storage->readObject( StoredObject(current_object_info->getPath()), diff --git a/src/Storages/StorageLoop.cpp b/src/Storages/StorageLoop.cpp new file mode 100644 index 00000000000..2062749e60b --- /dev/null +++ b/src/Storages/StorageLoop.cpp @@ -0,0 +1,49 @@ +#include "StorageLoop.h" +#include +#include +#include + + +namespace DB +{ + namespace ErrorCodes + { + + } + StorageLoop::StorageLoop( + const StorageID & table_id_, + StoragePtr inner_storage_) + : IStorage(table_id_) + , inner_storage(std::move(inner_storage_)) + { + StorageInMemoryMetadata storage_metadata = inner_storage->getInMemoryMetadata(); + setInMemoryMetadata(storage_metadata); + } + + + void StorageLoop::read( + QueryPlan & query_plan, + const Names & column_names, + const StorageSnapshotPtr & storage_snapshot, + SelectQueryInfo & query_info, + ContextPtr context, + QueryProcessingStage::Enum processed_stage, + size_t max_block_size, + size_t num_streams) + { + query_info.optimize_trivial_count = false; + + query_plan.addStep(std::make_unique<ReadFromLoopStep>( + column_names, query_info, storage_snapshot, context, processed_stage, inner_storage, max_block_size, num_streams + )); + } + + void registerStorageLoop(StorageFactory & factory) + { + factory.registerStorage("Loop", [](const StorageFactory::Arguments & args) + { + StoragePtr inner_storage; + return std::make_shared<StorageLoop>(args.table_id, inner_storage); + }); + } +} diff --git a/src/Storages/StorageLoop.h b/src/Storages/StorageLoop.h new file mode 100644 index 00000000000..48760b169c2 --- /dev/null +++ 
b/src/Storages/StorageLoop.h @@ -0,0 +1,33 @@ +#pragma once +#include "config.h" +#include + + +namespace DB +{ + + class StorageLoop final : public IStorage + { + public: + StorageLoop( + const StorageID & table_id, + StoragePtr inner_storage_); + + std::string getName() const override { return "Loop"; } + + void read( + QueryPlan & query_plan, + const Names & column_names, + const StorageSnapshotPtr & storage_snapshot, + SelectQueryInfo & query_info, + ContextPtr context, + QueryProcessingStage::Enum processed_stage, + size_t max_block_size, + size_t num_streams) override; + + bool supportsTrivialCountOptimization(const StorageSnapshotPtr &, ContextPtr) const override { return false; } + + private: + StoragePtr inner_storage; + }; +} diff --git a/src/Storages/StorageReplicatedMergeTree.h b/src/Storages/StorageReplicatedMergeTree.h index 9d086e1dc37..f96206ce657 100644 --- a/src/Storages/StorageReplicatedMergeTree.h +++ b/src/Storages/StorageReplicatedMergeTree.h @@ -307,7 +307,7 @@ public: /// Get best replica having this partition on a same type remote disk String getSharedDataReplica(const IMergeTreeDataPart & part, const DataSourceDescription & data_source_description) const; - inline const String & getReplicaName() const { return replica_name; } + const String & getReplicaName() const { return replica_name; } /// Restores table metadata if ZooKeeper lost it. /// Used only on restarted readonly replicas (not checked). All active (Active) parts are moved to detached/ diff --git a/src/Storages/System/StorageSystemNamedCollections.cpp b/src/Storages/System/StorageSystemNamedCollections.cpp index 156fa5e5a9b..0836560dff0 100644 --- a/src/Storages/System/StorageSystemNamedCollections.cpp +++ b/src/Storages/System/StorageSystemNamedCollections.cpp @@ -9,7 +9,7 @@ #include #include #include -#include +#include namespace DB diff --git a/src/Storages/UVLoop.h b/src/Storages/UVLoop.h index dd1d64973d1..907a3fc0b13 100644 --- a/src/Storages/UVLoop.h +++ b/src/Storages/UVLoop.h @@ -57,9 +57,9 @@ public: } } - inline uv_loop_t * getLoop() { return loop_ptr.get(); } + uv_loop_t * getLoop() { return loop_ptr.get(); } - inline const uv_loop_t * getLoop() const { return loop_ptr.get(); } + const uv_loop_t * getLoop() const { return loop_ptr.get(); } private: std::unique_ptr loop_ptr; diff --git a/src/Storages/WindowView/StorageWindowView.cpp b/src/Storages/WindowView/StorageWindowView.cpp index a9ec1f6c694..8bca1c97aad 100644 --- a/src/Storages/WindowView/StorageWindowView.cpp +++ b/src/Storages/WindowView/StorageWindowView.cpp @@ -297,7 +297,6 @@ namespace CASE_WINDOW_KIND(Year) #undef CASE_WINDOW_KIND } - UNREACHABLE(); } class AddingAggregatedChunkInfoTransform : public ISimpleTransform @@ -920,7 +919,6 @@ UInt32 StorageWindowView::getWindowLowerBound(UInt32 time_sec) CASE_WINDOW_KIND(Year) #undef CASE_WINDOW_KIND } - UNREACHABLE(); } UInt32 StorageWindowView::getWindowUpperBound(UInt32 time_sec) @@ -948,7 +946,6 @@ UInt32 StorageWindowView::getWindowUpperBound(UInt32 time_sec) CASE_WINDOW_KIND(Year) #undef CASE_WINDOW_KIND } - UNREACHABLE(); } void StorageWindowView::addFireSignal(std::set & signals) diff --git a/src/Storages/registerStorages.cpp b/src/Storages/registerStorages.cpp index 0fb00c08acc..47542b7b47e 100644 --- a/src/Storages/registerStorages.cpp +++ b/src/Storages/registerStorages.cpp @@ -25,6 +25,7 @@ void registerStorageLiveView(StorageFactory & factory); void registerStorageGenerateRandom(StorageFactory & factory); void registerStorageExecutable(StorageFactory & factory); void 
registerStorageWindowView(StorageFactory & factory); +void registerStorageLoop(StorageFactory & factory); #if USE_RAPIDJSON || USE_SIMDJSON void registerStorageFuzzJSON(StorageFactory & factory); #endif @@ -120,6 +121,7 @@ void registerStorages() registerStorageGenerateRandom(factory); registerStorageExecutable(factory); registerStorageWindowView(factory); + registerStorageLoop(factory); #if USE_RAPIDJSON || USE_SIMDJSON registerStorageFuzzJSON(factory); #endif diff --git a/src/TableFunctions/ITableFunction.h b/src/TableFunctions/ITableFunction.h index 1946d8e8905..ed7f80e5df9 100644 --- a/src/TableFunctions/ITableFunction.h +++ b/src/TableFunctions/ITableFunction.h @@ -39,7 +39,7 @@ class Context; class ITableFunction : public std::enable_shared_from_this<ITableFunction> { public: - static inline std::string getDatabaseName() { return "_table_function"; } + static std::string getDatabaseName() { return "_table_function"; } /// Get the main function name. virtual std::string getName() const = 0; diff --git a/src/TableFunctions/TableFunctionLoop.cpp b/src/TableFunctions/TableFunctionLoop.cpp new file mode 100644 index 00000000000..0281002e50f --- /dev/null +++ b/src/TableFunctions/TableFunctionLoop.cpp @@ -0,0 +1,156 @@ +#include "config.h" +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include +#include "registerTableFunctions.h" + +namespace DB +{ + namespace ErrorCodes + { + extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH; + extern const int ILLEGAL_TYPE_OF_ARGUMENT; + extern const int UNKNOWN_TABLE; + } + namespace + { + class TableFunctionLoop : public ITableFunction + { + public: + static constexpr auto name = "loop"; + std::string getName() const override { return name; } + private: + StoragePtr executeImpl(const ASTPtr & ast_function, ContextPtr context, const String & table_name, ColumnsDescription cached_columns, bool is_insert_query) const override; + const char * getStorageTypeName() const override { return "Loop"; } + ColumnsDescription getActualTableStructure(ContextPtr context, bool is_insert_query) const override; + void parseArguments(const ASTPtr & ast_function, ContextPtr context) override; + + // save the inner table function AST + ASTPtr inner_table_function_ast; + // save database and table + std::string loop_database_name; + std::string loop_table_name; + }; + + } + + void TableFunctionLoop::parseArguments(const ASTPtr & ast_function, ContextPtr context) + { + const auto & args_func = ast_function->as<ASTFunction &>(); + + if (!args_func.arguments) + throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Table function 'loop' must have arguments."); + + auto & args = args_func.arguments->children; + if (args.empty()) + throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "No arguments provided for table function 'loop'"); + + if (args.size() == 1) + { + if (const auto * id = args[0]->as<ASTIdentifier>()) + { + String id_name = id->name(); + + size_t dot_pos = id_name.find('.'); + if (id_name.find('.', dot_pos + 1) != String::npos) + throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "There is more than one dot"); + if (dot_pos != String::npos) + { + loop_database_name = id_name.substr(0, dot_pos); + loop_table_name = id_name.substr(dot_pos + 1); + } + else + { + loop_table_name = id_name; + } + } + else if (const auto * func = args[0]->as<ASTFunction>()) + { + inner_table_function_ast = args[0]; + } + else + { + throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Expected identifier or function for argument 1 of function 'loop', got {}", args[0]->getID()); + } + }
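+ // Summary of the accepted invocation forms (sketch; table names are illustrative):
+ //   loop(t)             -- table in the current database
+ //   loop(db.t)          -- database-qualified table (the single-dot case above)
+ //   loop(numbers(10))   -- an inner table function
+ //   loop(db, t)         -- two-argument form, handled below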
+ // loop(database, table) + else if (args.size() == 2) + { + args[0] = evaluateConstantExpressionForDatabaseName(args[0], context); + args[1] = evaluateConstantExpressionOrIdentifierAsLiteral(args[1], context); + + loop_database_name = checkAndGetLiteralArgument<String>(args[0], "database"); + loop_table_name = checkAndGetLiteralArgument<String>(args[1], "table"); + } + else + { + throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Table function 'loop' must have 1 or 2 arguments."); + } + } + + ColumnsDescription TableFunctionLoop::getActualTableStructure(ContextPtr /*context*/, bool /*is_insert_query*/) const + { + return ColumnsDescription(); + } + + StoragePtr TableFunctionLoop::executeImpl( + const ASTPtr & /*ast_function*/, + ContextPtr context, + const std::string & table_name, + ColumnsDescription cached_columns, + bool is_insert_query) const + { + StoragePtr storage; + if (!loop_table_name.empty()) + { + String database_name = loop_database_name; + if (database_name.empty()) + database_name = context->getCurrentDatabase(); + + auto database = DatabaseCatalog::instance().getDatabase(database_name); + storage = database->tryGetTable(loop_table_name, context); + if (!storage) + throw Exception(ErrorCodes::UNKNOWN_TABLE, "Table '{}' not found in database '{}'", loop_table_name, database_name); + } + + else + { + auto inner_table_function = TableFunctionFactory::instance().get(inner_table_function_ast, context); + storage = inner_table_function->execute( + inner_table_function_ast, + context, + table_name, + std::move(cached_columns), + is_insert_query); + } + auto res = std::make_shared<StorageLoop>( + StorageID(getDatabaseName(), table_name), + storage + ); + res->startup(); + return res; + } + + void registerTableFunctionLoop(TableFunctionFactory & factory) + { + factory.registerFunction<TableFunctionLoop>( + {.documentation + = {.description=R"(The table function can be used to continuously output query results in an infinite loop.)", + .examples{{"loop", "SELECT * FROM loop(numbers(3)) LIMIT 7", "0" + "1" + "2" + "0" + "1" + "2" + "0"}} + }}); + } + +} diff --git a/src/TableFunctions/registerTableFunctions.cpp b/src/TableFunctions/registerTableFunctions.cpp index 26b9a771416..ca4913898f9 100644 --- a/src/TableFunctions/registerTableFunctions.cpp +++ b/src/TableFunctions/registerTableFunctions.cpp @@ -11,6 +11,7 @@ void registerTableFunctions() registerTableFunctionMerge(factory); registerTableFunctionRemote(factory); registerTableFunctionNumbers(factory); + registerTableFunctionLoop(factory); registerTableFunctionGenerateSeries(factory); registerTableFunctionNull(factory); registerTableFunctionZeros(factory); diff --git a/src/TableFunctions/registerTableFunctions.h b/src/TableFunctions/registerTableFunctions.h index 4a89b3afbb3..efde4d6dcdc 100644 --- a/src/TableFunctions/registerTableFunctions.h +++ b/src/TableFunctions/registerTableFunctions.h @@ -8,6 +8,7 @@ class TableFunctionFactory; void registerTableFunctionMerge(TableFunctionFactory & factory); void registerTableFunctionRemote(TableFunctionFactory & factory); void registerTableFunctionNumbers(TableFunctionFactory & factory); +void registerTableFunctionLoop(TableFunctionFactory & factory); void registerTableFunctionGenerateSeries(TableFunctionFactory & factory); void registerTableFunctionNull(TableFunctionFactory & factory); void registerTableFunctionZeros(TableFunctionFactory & factory); diff --git a/tests/ci/bugfix_validate_check.py b/tests/ci/bugfix_validate_check.py index 7aaf18e7765..d41fdaf05ff 100644 --- a/tests/ci/bugfix_validate_check.py +++ 
@@ -109,12 +109,12 @@ def main():
         test_script = jobs_scripts[test_job]
         if report_file.exists():
             report_file.unlink()
-        extra_timeout_option = ""
-        if test_job == JobNames.STATELESS_TEST_RELEASE:
-            extra_timeout_option = str(3600)
         # "bugfix" must be present in the check name, as the integration test runner checks for it
         check_name = f"Validate bugfix: {test_job}"
-        command = f"python3 {test_script} '{check_name}' {extra_timeout_option} --validate-bugfix --report-to-file {report_file}"
+        command = (
+            f"python3 {test_script} '{check_name}' "
+            f"--validate-bugfix --report-to-file {report_file}"
+        )
         print(f"Going to validate job [{test_job}], command [{command}]")
         _ = subprocess.run(
             command,
diff --git a/tests/ci/cancel_and_rerun_workflow_lambda/app.py b/tests/ci/cancel_and_rerun_workflow_lambda/app.py
index 9ee884c801a..578ade5c8a0 100644
--- a/tests/ci/cancel_and_rerun_workflow_lambda/app.py
+++ b/tests/ci/cancel_and_rerun_workflow_lambda/app.py
@@ -9,7 +9,7 @@ from threading import Thread
 from typing import Any, Dict, List, Optional

 import requests
-from lambda_shared.pr import Labels, check_pr_description
+from lambda_shared.pr import Labels
 from lambda_shared.token import get_cached_access_token

 NEED_RERUN_OR_CANCELL_WORKFLOWS = {
@@ -321,21 +321,21 @@ def main(event):
         return

     if action == "edited":
-        print("PR is edited, check if the body is correct")
-        error, _ = check_pr_description(
-            pull_request["body"], pull_request["base"]["repo"]["full_name"]
-        )
-        if error:
-            print(
-                f"The PR's body is wrong, is going to comment it. The error is: {error}"
-            )
-            post_json = {
-                "body": "This is an automatic comment. The PR descriptions does not "
-                f"match the [template]({pull_request['base']['repo']['html_url']}/"
-                "blob/master/.github/PULL_REQUEST_TEMPLATE.md?plain=1).\n\n"
-                f"Please, edit it accordingly.\n\nThe error is: {error}"
-            }
-            _exec_post_with_retry(pull_request["comments_url"], token, json=post_json)
+        print("PR is edited - do nothing")
+        # error, _ = check_pr_description(
+        #     pull_request["body"], pull_request["base"]["repo"]["full_name"]
+        # )
+        # if error:
+        #     print(
+        #         f"The PR's body is wrong, is going to comment it. The error is: {error}"
+        #     )
+        #     post_json = {
+        #         "body": "This is an automatic comment. The PR descriptions does not "
+        #         f"match the [template]({pull_request['base']['repo']['html_url']}/"
+        #         "blob/master/.github/PULL_REQUEST_TEMPLATE.md?plain=1).\n\n"
+        #         f"Please, edit it accordingly.\n\nThe error is: {error}"
+        #     }
+        #     _exec_post_with_retry(pull_request["comments_url"], token, json=post_json)
         return

     if action == "synchronize":
diff --git a/tests/ci/cherry_pick.py b/tests/ci/cherry_pick.py
index 7f267d5ed1a..e470621e2c5 100644
--- a/tests/ci/cherry_pick.py
+++ b/tests/ci/cherry_pick.py
@@ -33,9 +33,10 @@ from subprocess import CalledProcessError
 from typing import List, Optional

 import __main__
+
 from env_helper import TEMP_PATH
 from get_robot_token import get_best_robot_token
-from git_helper import git_runner, is_shallow
+from git_helper import GIT_PREFIX, git_runner, is_shallow
 from github_helper import GitHub, PullRequest, PullRequests, Repository
 from lambda_shared_package.lambda_shared.pr import Labels
 from ssh import SSHKey
@@ -90,7 +91,7 @@ close it.
         name: str,
         pr: PullRequest,
         repo: Repository,
-        backport_created_label: str = Labels.PR_BACKPORTS_CREATED,
+        backport_created_label: str,
     ):
         self.name = name
         self.pr = pr
@@ -104,10 +105,6 @@ close it.
self.backport_created_label = backport_created_label - self.git_prefix = ( # All commits to cherrypick are done as robot-clickhouse - "git -c user.email=robot-clickhouse@users.noreply.github.com " - "-c user.name=robot-clickhouse -c commit.gpgsign=false" - ) self.pre_check() def pre_check(self): @@ -118,11 +115,12 @@ close it. if branch_updated: self._backported = True - def pop_prs(self, prs: PullRequests) -> None: + def pop_prs(self, prs: PullRequests) -> PullRequests: """the method processes all prs and pops the ReleaseBranch related prs""" to_pop = [] # type: List[int] for i, pr in enumerate(prs): if self.name not in pr.head.ref: + # this pr is not for the current branch continue if pr.head.ref.startswith(f"cherrypick/{self.name}"): self.cherrypick_pr = pr @@ -131,19 +129,22 @@ close it. self.backport_pr = pr to_pop.append(i) else: - logging.error( - "head ref of PR #%s isn't starting with known suffix", - pr.number, - ) + assert False, f"BUG! Invalid PR's branch [{pr.head.ref}]" + + # Cherry-pick or backport PR found, set @backported flag for current release branch + self._backported = True + for i in reversed(to_pop): # Going from the tail to keep the order and pop greater index first prs.pop(i) + return prs def process( # pylint: disable=too-many-return-statements self, dry_run: bool ) -> None: if self.backported: return + if not self.cherrypick_pr: if dry_run: logging.info( @@ -151,56 +152,54 @@ close it. ) return self.create_cherrypick() - if self.backported: - return - if self.cherrypick_pr is not None: - # Try to merge cherrypick instantly - if self.cherrypick_pr.mergeable and self.cherrypick_pr.state != "closed": - if dry_run: - logging.info( - "DRY RUN: Would merge cherry-pick PR for #%s", self.pr.number - ) - return - self.cherrypick_pr.merge() - # The PR needs update, since PR.merge doesn't update the object - self.cherrypick_pr.update() - if self.cherrypick_pr.merged: - if dry_run: - logging.info( - "DRY RUN: Would create backport PR for #%s", self.pr.number - ) - return - self.create_backport() - return - if self.cherrypick_pr.state == "closed": + assert self.cherrypick_pr, "BUG!" 
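+        # Review sketch of the flow below (hedged summary, not part of the patch):
+        # 1. cherry-pick PR open and mergeable -> merge it, then create the backport PR;
+        # 2. cherry-pick PR closed unmerged    -> treat as deliberately discarded, mark backported;
+        # 3. otherwise it has conflicts        -> ping the assignees to resolve them in the PR.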
+
+        if self.cherrypick_pr.mergeable and self.cherrypick_pr.state != "closed":
+            if dry_run:
                 logging.info(
-                    "The cherrypick PR #%s for PR #%s is discarded",
-                    self.cherrypick_pr.number,
-                    self.pr.number,
+                    "DRY RUN: Would merge cherry-pick PR for #%s", self.pr.number
                 )
-                self._backported = True
                 return
+            self.cherrypick_pr.merge()
+            # The PR needs update, since PR.merge doesn't update the object
+            self.cherrypick_pr.update()
+            if self.cherrypick_pr.merged:
+                if dry_run:
+                    logging.info(
+                        "DRY RUN: Would create backport PR for #%s", self.pr.number
+                    )
+                    return
+                self.create_backport()
+                return
+        if self.cherrypick_pr.state == "closed":
             logging.info(
-                "Cherrypick PR #%s for PR #%s have conflicts and unable to be merged",
+                "The cherry-pick PR #%s for PR #%s is discarded",
                 self.cherrypick_pr.number,
                 self.pr.number,
             )
-            self.ping_cherry_pick_assignees(dry_run)
+            self._backported = True
+            return
+        logging.info(
+            "Cherry-pick PR #%s for PR #%s has conflicts and cannot be merged",
+            self.cherrypick_pr.number,
+            self.pr.number,
+        )
+        self.ping_cherry_pick_assignees(dry_run)

     def create_cherrypick(self):
         # First, create backport branch:
         # Checkout release branch with discarding every change
-        git_runner(f"{self.git_prefix} checkout -f {self.name}")
+        git_runner(f"{GIT_PREFIX} checkout -f {self.name}")
         # Create or reset backport branch
-        git_runner(f"{self.git_prefix} checkout -B {self.backport_branch}")
+        git_runner(f"{GIT_PREFIX} checkout -B {self.backport_branch}")
         # Merge all changes from PR's the first parent commit w/o applying anything
         # It will allow to create a merge commit like it would be a cherry-pick
         first_parent = git_runner(f"git rev-parse {self.pr.merge_commit_sha}^1")
-        git_runner(f"{self.git_prefix} merge -s ours --no-edit {first_parent}")
+        git_runner(f"{GIT_PREFIX} merge -s ours --no-edit {first_parent}")

         # Second step, create cherrypick branch
         git_runner(
-            f"{self.git_prefix} branch -f "
+            f"{GIT_PREFIX} branch -f "
             f"{self.cherrypick_branch} {self.pr.merge_commit_sha}"
         )

@@ -209,7 +208,7 @@ close it.
         # manually to the release branch already
         try:
             output = git_runner(
-                f"{self.git_prefix} merge --no-commit --no-ff {self.cherrypick_branch}"
+                f"{GIT_PREFIX} merge --no-commit --no-ff {self.cherrypick_branch}"
             )
             # 'up-to-date', 'up to date', who knows what else (╯°v°)╯ ^┻━┻
             if output.startswith("Already up") and output.endswith("date."):
@@ -219,18 +218,17 @@ close it.
                     self.name,
                     self.pr.number,
                 )
-                self._backported = True
                 return
         except CalledProcessError:
             # There are most probably conflicts, they'll be resolved in PR
-            git_runner(f"{self.git_prefix} reset --merge")
+            git_runner(f"{GIT_PREFIX} reset --merge")
         else:
             # There are changes to apply, so continue
-            git_runner(f"{self.git_prefix} reset --merge")
+            git_runner(f"{GIT_PREFIX} reset --merge")

-        # Push, create the cherrypick PR, lable and assign it
+        # Push, create the cherry-pick PR, label and assign it
         for branch in [self.cherrypick_branch, self.backport_branch]:
-            git_runner(f"{self.git_prefix} push -f {self.REMOTE} {branch}:{branch}")
+            git_runner(f"{GIT_PREFIX} push -f {self.REMOTE} {branch}:{branch}")

         self.cherrypick_pr = self.repo.create_pull(
             title=f"Cherry pick #{self.pr.number} to {self.name}: {self.pr.title}",
@@ -245,6 +243,11 @@ close it.
         )
         self.cherrypick_pr.add_to_labels(Labels.PR_CHERRYPICK)
         self.cherrypick_pr.add_to_labels(Labels.DO_NOT_TEST)
+        if Labels.PR_CRITICAL_BUGFIX in [label.name for label in self.pr.labels]:
+            self.cherrypick_pr.add_to_labels(Labels.PR_CRITICAL_BUGFIX)
+        elif Labels.PR_BUGFIX in [label.name for label in self.pr.labels]:
+            self.cherrypick_pr.add_to_labels(Labels.PR_BUGFIX)
+        self._backported = True
         self._assign_new_pr(self.cherrypick_pr)
         # update cherry-pick PR to get the state for PR.mergeable
         self.cherrypick_pr.update()
@@ -254,21 +257,19 @@ close it.
         # Checkout the backport branch from the remote and make all changes to
         # apply like they are only one cherry-pick commit on top of release
         logging.info("Creating backport for PR #%s", self.pr.number)
-        git_runner(f"{self.git_prefix} checkout -f {self.backport_branch}")
-        git_runner(
-            f"{self.git_prefix} pull --ff-only {self.REMOTE} {self.backport_branch}"
-        )
+        git_runner(f"{GIT_PREFIX} checkout -f {self.backport_branch}")
+        git_runner(f"{GIT_PREFIX} pull --ff-only {self.REMOTE} {self.backport_branch}")
         merge_base = git_runner(
-            f"{self.git_prefix} merge-base "
+            f"{GIT_PREFIX} merge-base "
             f"{self.REMOTE}/{self.name} {self.backport_branch}"
         )
-        git_runner(f"{self.git_prefix} reset --soft {merge_base}")
+        git_runner(f"{GIT_PREFIX} reset --soft {merge_base}")
         title = f"Backport #{self.pr.number} to {self.name}: {self.pr.title}"
-        git_runner(f"{self.git_prefix} commit --allow-empty -F -", input=title)
+        git_runner(f"{GIT_PREFIX} commit --allow-empty -F -", input=title)

         # Push with force, create the backport PR, label and assign it
         git_runner(
-            f"{self.git_prefix} push -f {self.REMOTE} "
+            f"{GIT_PREFIX} push -f {self.REMOTE} "
             f"{self.backport_branch}:{self.backport_branch}"
         )
         self.backport_pr = self.repo.create_pull(
@@ -280,6 +281,10 @@ close it.
             head=self.backport_branch,
         )
         self.backport_pr.add_to_labels(Labels.PR_BACKPORT)
+        if Labels.PR_CRITICAL_BUGFIX in [label.name for label in self.pr.labels]:
+            self.backport_pr.add_to_labels(Labels.PR_CRITICAL_BUGFIX)
+        elif Labels.PR_BUGFIX in [label.name for label in self.pr.labels]:
+            self.backport_pr.add_to_labels(Labels.PR_BUGFIX)
         self._assign_new_pr(self.backport_pr)

     def ping_cherry_pick_assignees(self, dry_run: bool) -> None:
@@ -335,7 +340,7 @@ close it.
@property def backported(self) -> bool: - return self._backported or self.backport_pr is not None + return self._backported def __repr__(self): return self.name @@ -348,16 +353,22 @@ class Backport: repo: str, fetch_from: Optional[str], dry_run: bool, - must_create_backport_labels: List[str], - backport_created_label: str, ): self.gh = gh self._repo_name = repo self._fetch_from = fetch_from self.dry_run = dry_run - self.must_create_backport_labels = must_create_backport_labels - self.backport_created_label = backport_created_label + self.must_create_backport_label = ( + Labels.MUST_BACKPORT + if self._repo_name == self._fetch_from + else Labels.MUST_BACKPORT_CLOUD + ) + self.backport_created_label = ( + Labels.PR_BACKPORTS_CREATED + if self._repo_name == self._fetch_from + else Labels.PR_BACKPORTS_CREATED_CLOUD + ) self._remote = "" self._remote_line = "" @@ -457,7 +468,7 @@ class Backport: query_args = { "query": f"type:pr repo:{self._fetch_from} -label:{self.backport_created_label}", "label": ",".join( - self.labels_to_backport + self.must_create_backport_labels + self.labels_to_backport + [self.must_create_backport_label] ), "merged": [since_date, tomorrow], } @@ -474,23 +485,19 @@ class Backport: self.process_pr(pr) except Exception as e: logging.error( - "During processing the PR #%s error occured: %s", pr.number, e + "During processing the PR #%s error occurred: %s", pr.number, e ) self.error = e def process_pr(self, pr: PullRequest) -> None: pr_labels = [label.name for label in pr.labels] - for label in self.must_create_backport_labels: - # We backport any vXXX-must-backport to all branches of the fetch repo (better than no backport) - if label in pr_labels or self._fetch_from: - branches = [ - ReleaseBranch(br, pr, self.repo, self.backport_created_label) - for br in self.release_branches - ] # type: List[ReleaseBranch] - break - - if not branches: + if self.must_create_backport_label in pr_labels: + branches = [ + ReleaseBranch(br, pr, self.repo, self.backport_created_label) + for br in self.release_branches + ] # type: List[ReleaseBranch] + else: branches = [ ReleaseBranch(br, pr, self.repo, self.backport_created_label) for br in [ @@ -499,20 +506,14 @@ class Backport: if label in self.labels_to_backport ] ] - if not branches: - # This is definitely some error. There must be at least one branch - # It also make the whole program exit code non-zero - self.error = Exception( - f"There are no branches to backport PR #{pr.number}, logical error" - ) - raise self.error + assert branches, "BUG!" logging.info( " PR #%s is supposed to be backported to %s", pr.number, ", ".join(map(str, branches)), ) - # All PRs for cherrypick and backport branches as heads + # All PRs for cherry-pick and backport branches as heads query_suffix = " ".join( [ f"head:{branch.backport_branch} head:{branch.cherrypick_branch}" @@ -524,29 +525,15 @@ class Backport: label=f"{Labels.PR_BACKPORT},{Labels.PR_CHERRYPICK}", ) for br in branches: - br.pop_prs(bp_cp_prs) - - if bp_cp_prs: - # This is definitely some error. All prs must be consumed by - # branches with ReleaseBranch.pop_prs. It also makes the whole - # program exit code non-zero - self.error = Exception( - "The following PRs are not filtered by release branches:\n" - "\n".join(map(str, bp_cp_prs)) - ) - raise self.error - - if all(br.backported for br in branches): - # Let's check if the PR is already backported - self.mark_pr_backported(pr) - return + bp_cp_prs = br.pop_prs(bp_cp_prs) + assert not bp_cp_prs, "BUG!" 
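+        # Worked example (illustrative, not from the patch): with release branches
+        # ["24.3", "24.4"], heads like cherrypick/24.3/<sha> and backport/24.4/<sha>
+        # are each consumed by exactly one ReleaseBranch in pop_prs(), so nothing may
+        # remain in bp_cp_prs afterwards.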
for br in branches: br.process(self.dry_run) - if all(br.backported for br in branches): - # And check it after the running - self.mark_pr_backported(pr) + for br in branches: + assert br.backported, f"BUG! backport to branch [{br}] failed" + self.mark_pr_backported(pr) def mark_pr_backported(self, pr: PullRequest) -> None: if self.dry_run: @@ -583,19 +570,6 @@ def parse_args(): ) parser.add_argument("--dry-run", action="store_true", help="do not create anything") - parser.add_argument( - "--must-create-backport-label", - default=Labels.MUST_BACKPORT, - choices=(Labels.MUST_BACKPORT, Labels.MUST_BACKPORT_CLOUD), - help="label to filter PRs to backport", - nargs="+", - ) - parser.add_argument( - "--backport-created-label", - default=Labels.PR_BACKPORTS_CREATED, - choices=(Labels.PR_BACKPORTS_CREATED, Labels.PR_BACKPORTS_CREATED_CLOUD), - help="label to mark PRs as backported", - ) parser.add_argument( "--reserve-search-days", default=0, @@ -660,10 +634,6 @@ def main(): args.repo, args.from_repo, args.dry_run, - args.must_create_backport_label - if isinstance(args.must_create_backport_label, list) - else [args.must_create_backport_label], - args.backport_created_label, ) # https://github.com/python/mypy/issues/3004 bp.gh.cache_path = temp_path / "gh_cache" diff --git a/tests/ci/ci.py b/tests/ci/ci.py index c4e06ccd79a..606af9a43fb 100644 --- a/tests/ci/ci.py +++ b/tests/ci/ci.py @@ -18,6 +18,7 @@ import docker_images_helper import upload_result_helper from build_check import get_release_or_pr from ci_config import CI_CONFIG, Build, CILabels, CIStages, JobNames, StatusNames +from ci_metadata import CiMetadata from ci_utils import GHActions, is_hex, normalize_string from clickhouse_helper import ( CiLogsCredentials, @@ -39,22 +40,23 @@ from digest_helper import DockerDigester, JobDigester from env_helper import ( CI, GITHUB_JOB_API_URL, + GITHUB_REPOSITORY, + GITHUB_RUN_ID, GITHUB_RUN_URL, REPO_COPY, REPORT_PATH, S3_BUILDS_BUCKET, TEMP_PATH, - GITHUB_RUN_ID, - GITHUB_REPOSITORY, ) from get_robot_token import get_best_robot_token from git_helper import GIT_PREFIX, Git from git_helper import Runner as GitRunner from github_helper import GitHub from pr_info import PRInfo -from report import ERROR, SUCCESS, BuildResult, JobReport, PENDING +from report import ERROR, FAILURE, PENDING, SUCCESS, BuildResult, JobReport, TestResult from s3_helper import S3Helper -from ci_metadata import CiMetadata +from stopwatch import Stopwatch +from tee_popen import TeePopen from version_helper import get_version_from_repo # pylint: disable=too-many-lines @@ -616,11 +618,11 @@ class CiCache: def download_build_reports(self, file_prefix: str = "") -> List[str]: """ - not ideal class for this method, + not an ideal class for this method, but let it be as we store build reports in CI cache directory on s3 and CiCache knows where exactly - @file_prefix allows to filter out reports by git head_ref + @file_prefix allows filtering out reports by git head_ref """ report_path = Path(REPORT_PATH) report_path.mkdir(exist_ok=True, parents=True) @@ -752,6 +754,7 @@ class CiOptions: do_not_test: bool = False no_ci_cache: bool = False + upload_all: bool = False no_merge_commit: bool = False def as_dict(self) -> Dict[str, Any]: @@ -821,6 +824,9 @@ class CiOptions: elif match == CILabels.NO_CI_CACHE: res.no_ci_cache = True print("NOTE: CI Cache will be disabled") + elif match == CILabels.UPLOAD_ALL_ARTIFACTS: + res.upload_all = True + print("NOTE: All binary artifacts will be uploaded") elif match == CILabels.DO_NOT_TEST_LABEL: 
             res.do_not_test = True
         elif match == CILabels.NO_MERGE_COMMIT:
@@ -889,9 +895,10 @@ class CiOptions:
         for job in job_with_parents:
             if job in jobs_to_do and job not in jobs_to_do_requested:
                 jobs_to_do_requested.append(job)
-        print(
-            f"WARNING: Include tags are set but no job configured - Invalid tags, probably [{self.include_keywords}]"
-        )
+        if not jobs_to_do_requested:
+            print(
+                f"WARNING: Include tags are set but no job configured - Invalid tags, probably [{self.include_keywords}]"
+            )
         if JobNames.STYLE_CHECK not in jobs_to_do_requested:
             # Style check must not be omitted
             jobs_to_do_requested.append(JobNames.STYLE_CHECK)
@@ -1192,7 +1199,7 @@ def _pre_action(s3, indata, pr_info):
         BuildResult.cleanup()
     ci_cache = CiCache(s3, indata["jobs_data"]["digests"])

-    # for release/master branches reports must be from the same branches
+    # for release/master branches reports must be from the same branch
     report_prefix = normalize_string(pr_info.head_ref) if pr_info.number == 0 else ""
     print(
         f"Use report prefix [{report_prefix}], pr_num [{pr_info.number}], head_ref [{pr_info.head_ref}]"
     )
@@ -1610,6 +1617,7 @@ def _upload_build_artifacts(
     job_report: JobReport,
     s3: S3Helper,
     s3_destination: str,
+    upload_binary: bool,
 ) -> str:
     # There are ugly artifacts for the performance test. FIXME:
     s3_performance_path = "/".join(
@@ -1623,25 +1631,29 @@ def _upload_build_artifacts(
     performance_urls = []
     assert job_report.build_dir_for_upload, "Must be set for build job"
     performance_path = Path(job_report.build_dir_for_upload) / "performance.tar.zst"
-    if performance_path.exists():
-        performance_urls.append(
-            s3.upload_build_file_to_s3(performance_path, s3_performance_path)
+    if upload_binary:
+        if performance_path.exists():
+            performance_urls.append(
+                s3.upload_build_file_to_s3(performance_path, s3_performance_path)
+            )
+            print(
+                f"Uploaded performance.tar.zst to {performance_urls[0]}, "
+                "now delete to avoid duplication"
+            )
+            performance_path.unlink()
+        build_urls = (
+            s3.upload_build_directory_to_s3(
+                Path(job_report.build_dir_for_upload),
+                s3_destination,
+                keep_dirs_in_s3_path=False,
+                upload_symlinks=False,
+            )
+            + performance_urls
         )
-        print(
-            "Uploaded performance.tar.zst to %s, now delete to avoid duplication",
-            performance_urls[0],
-        )
-        performance_path.unlink()
-    build_urls = (
-        s3.upload_build_directory_to_s3(
-            Path(job_report.build_dir_for_upload),
-            s3_destination,
-            keep_dirs_in_s3_path=False,
-            upload_symlinks=False,
-        )
-        + performance_urls
-    )
-    print("::notice ::Build URLs: {}".format("\n".join(build_urls)))
+        print("::notice ::Build URLs: {}".format("\n".join(build_urls)))
+    else:
+        build_urls = []
+        print("::notice ::No binaries will be uploaded for this job")
     log_path = Path(job_report.additional_files[0])
     log_url = ""
     if log_path.exists():
@@ -1650,7 +1662,7 @@ def _upload_build_artifacts(
         )
         print(f"::notice ::Log URL: {log_url}")

-    # generate and upload build report
+    # generate and upload a build report
     build_result = BuildResult(
         build_name,
         log_url,
@@ -1867,8 +1879,8 @@ def _run_test(job_name: str, run_command: str) -> int:
     assert (
         run_command or CI_CONFIG.get_job_config(job_name).run_command
     ), "Run command must be provided as input argument or be configured in job config"
-    if CI_CONFIG.get_job_config(job_name).timeout:
-        os.environ["KILL_TIMEOUT"] = str(CI_CONFIG.get_job_config(job_name).timeout)
+    env = os.environ.copy()
+    timeout = CI_CONFIG.get_job_config(job_name).timeout or None

     if not run_command:
         run_command = "/".join(
@@ -1879,26 +1891,27 @@ def _run_test(job_name: str,
run_command: str) -> int: print("Use run command from a job config") else: print("Use run command from the workflow") - os.environ["CHECK_NAME"] = job_name + env["CHECK_NAME"] = job_name print(f"Going to start run command [{run_command}]") - process = subprocess.run( - run_command, - stdout=sys.stdout, - stderr=sys.stderr, - text=True, - check=False, - shell=True, - ) + stopwatch = Stopwatch() + job_log = Path(TEMP_PATH) / "job_log.txt" + with TeePopen(run_command, job_log, env, timeout) as process: + retcode = process.wait() + if retcode != 0: + print(f"Run action failed for: [{job_name}] with exit code [{retcode}]") + if timeout and process.timeout_exceeded: + print(f"Timeout {timeout} exceeded, dumping the job report") + JobReport( + status=FAILURE, + description=f"Timeout {timeout} exceeded", + test_results=[TestResult.create_check_timeout_expired(timeout)], + start_time=stopwatch.start_time_str, + duration=stopwatch.duration_seconds, + additional_files=[job_log], + ).dump() - if process.returncode == 0: - print(f"Run action done for: [{job_name}]") - exit_code = 0 - else: - print( - f"Run action failed for: [{job_name}] with exit code [{process.returncode}]" - ) - exit_code = process.returncode - return exit_code + print(f"Run action done for: [{job_name}]") + return retcode def _get_ext_check_name(check_name: str) -> str: @@ -2177,6 +2190,15 @@ def main() -> int: assert ( indata ), f"--infile with config must be provided for POST action of a build type job [{args.job_name}]" + + # upload binaries only for normal builds in PRs + upload_binary = ( + not pr_info.is_pr + or args.job_name + not in CI_CONFIG.get_builds_for_report(JobNames.BUILD_CHECK_SPECIAL) + or CiOptions.create_from_run_config(indata).upload_all + ) + build_name = args.job_name s3_path_prefix = "/".join( ( @@ -2192,10 +2214,12 @@ def main() -> int: job_report=job_report, s3=s3, s3_destination=s3_path_prefix, + upload_binary=upload_binary, ) - _upload_build_profile_data( - pr_info, build_name, job_report, git_runner, ch_helper - ) + # FIXME: profile data upload does not work + # _upload_build_profile_data( + # pr_info, build_name, job_report, git_runner, ch_helper + # ) check_url = log_url else: # test job diff --git a/tests/ci/ci_config.py b/tests/ci/ci_config.py index 68fa6f1cf10..a8bd85ee908 100644 --- a/tests/ci/ci_config.py +++ b/tests/ci/ci_config.py @@ -47,6 +47,8 @@ class CILabels(metaclass=WithIter): DO_NOT_TEST_LABEL = "do_not_test" NO_MERGE_COMMIT = "no_merge_commit" NO_CI_CACHE = "no_ci_cache" + # to upload all binaries from build jobs + UPLOAD_ALL_ARTIFACTS = "upload_all" CI_SET_REDUCED = "ci_set_reduced" CI_SET_FAST = "ci_set_fast" CI_SET_ARM = "ci_set_arm" @@ -175,8 +177,8 @@ class JobNames(metaclass=WithIter): COMPATIBILITY_TEST = "Compatibility check (amd64)" COMPATIBILITY_TEST_ARM = "Compatibility check (aarch64)" - CLCIKBENCH_TEST = "ClickBench (amd64)" - CLCIKBENCH_TEST_ARM = "ClickBench (aarch64)" + CLICKBENCH_TEST = "ClickBench (amd64)" + CLICKBENCH_TEST_ARM = "ClickBench (aarch64)" LIBFUZZER_TEST = "libFuzzer tests" @@ -472,17 +474,18 @@ compatibility_test_common_params = { } stateless_test_common_params = { "digest": stateless_check_digest, - "run_command": 'functional_test_check.py "$CHECK_NAME" $KILL_TIMEOUT', + "run_command": 'functional_test_check.py "$CHECK_NAME"', "timeout": 10800, } stateful_test_common_params = { "digest": stateful_check_digest, - "run_command": 'functional_test_check.py "$CHECK_NAME" $KILL_TIMEOUT', + "run_command": 'functional_test_check.py "$CHECK_NAME"', "timeout": 
3600, } stress_test_common_params = { "digest": stress_check_digest, "run_command": "stress_check.py", + "timeout": 9000, } upgrade_test_common_params = { "digest": upgrade_check_digest, @@ -531,6 +534,7 @@ clickbench_test_params = { docker=["clickhouse/clickbench"], ), "run_command": 'clickbench.py "$CHECK_NAME"', + "timeout": 900, } install_test_params = JobConfig( digest=install_check_digest, @@ -701,10 +705,7 @@ class CIConfig: elif isinstance(config[job_name], BuildConfig): # type: ignore pass elif isinstance(config[job_name], BuildReportConfig): # type: ignore - # add all build jobs as parents for build report check - res.extend( - [job for job in JobNames if job in self.build_config] - ) + pass else: assert ( False @@ -1067,6 +1068,7 @@ CI_CONFIG = CIConfig( Build.PACKAGE_TSAN, Build.PACKAGE_MSAN, Build.PACKAGE_DEBUG, + Build.BINARY_RELEASE, ] ), JobNames.BUILD_CHECK_SPECIAL: BuildReportConfig( @@ -1084,7 +1086,6 @@ CI_CONFIG = CIConfig( Build.BINARY_AMD64_COMPAT, Build.BINARY_AMD64_MUSL, Build.PACKAGE_RELEASE_COVERAGE, - Build.BINARY_RELEASE, Build.FUZZERS, ] ), @@ -1111,6 +1112,7 @@ CI_CONFIG = CIConfig( exclude_files=[".md"], docker=["clickhouse/fasttest"], ), + timeout=2400, ), ), JobNames.STYLE_CHECK: TestConfig( @@ -1123,7 +1125,9 @@ CI_CONFIG = CIConfig( "", # we run this check by label - no digest required job_config=JobConfig( - run_by_label="pr-bugfix", run_command="bugfix_validate_check.py" + run_by_label="pr-bugfix", + run_command="bugfix_validate_check.py", + timeout=900, ), ), }, @@ -1357,10 +1361,10 @@ CI_CONFIG = CIConfig( Build.PACKAGE_RELEASE, job_config=sqllogic_test_params ), JobNames.SQLTEST: TestConfig(Build.PACKAGE_RELEASE, job_config=sql_test_params), - JobNames.CLCIKBENCH_TEST: TestConfig( + JobNames.CLICKBENCH_TEST: TestConfig( Build.PACKAGE_RELEASE, job_config=JobConfig(**clickbench_test_params) # type: ignore ), - JobNames.CLCIKBENCH_TEST_ARM: TestConfig( + JobNames.CLICKBENCH_TEST_ARM: TestConfig( Build.PACKAGE_AARCH64, job_config=JobConfig(**clickbench_test_params) # type: ignore ), JobNames.LIBFUZZER_TEST: TestConfig( @@ -1368,7 +1372,7 @@ CI_CONFIG = CIConfig( job_config=JobConfig( run_by_label=CILabels.libFuzzer, timeout=10800, - run_command='libfuzzer_test_check.py "$CHECK_NAME" 10800', + run_command='libfuzzer_test_check.py "$CHECK_NAME"', ), ), # type: ignore }, @@ -1386,6 +1390,9 @@ REQUIRED_CHECKS = [ JobNames.FAST_TEST, JobNames.STATEFUL_TEST_RELEASE, JobNames.STATELESS_TEST_RELEASE, + JobNames.STATELESS_TEST_ASAN, + JobNames.STATELESS_TEST_FLAKY_ASAN, + JobNames.STATEFUL_TEST_ASAN, JobNames.STYLE_CHECK, JobNames.UNIT_TEST_ASAN, JobNames.UNIT_TEST_MSAN, @@ -1419,6 +1426,11 @@ class CheckDescription: CHECK_DESCRIPTIONS = [ + CheckDescription( + StatusNames.SYNC, + "If it fails, ask a maintainer for help", + lambda x: x == StatusNames.SYNC, + ), CheckDescription( "AST fuzzer", "Runs randomly generated queries to catch program errors. 
" diff --git a/tests/ci/ci_utils.py b/tests/ci/ci_utils.py index 97d42f9845b..2bc0a4fef14 100644 --- a/tests/ci/ci_utils.py +++ b/tests/ci/ci_utils.py @@ -1,8 +1,7 @@ -from contextlib import contextmanager import os -import signal -from typing import Any, List, Union, Iterator +from contextlib import contextmanager from pathlib import Path +from typing import Any, Iterator, List, Union class WithIter(type): @@ -49,14 +48,3 @@ class GHActions: for line in lines: print(line) print("::endgroup::") - - -def set_job_timeout(): - def timeout_handler(_signum, _frame): - print("Timeout expired") - raise TimeoutError("Job's KILL_TIMEOUT expired") - - kill_timeout = int(os.getenv("KILL_TIMEOUT", "0")) - assert kill_timeout > 0, "kill timeout must be provided in KILL_TIMEOUT env" - signal.signal(signal.SIGALRM, timeout_handler) - signal.alarm(kill_timeout) diff --git a/tests/ci/fast_test_check.py b/tests/ci/fast_test_check.py index 383f5b340c7..ed727dd3659 100644 --- a/tests/ci/fast_test_check.py +++ b/tests/ci/fast_test_check.py @@ -1,5 +1,4 @@ #!/usr/bin/env python3 -import argparse import csv import logging import os @@ -11,15 +10,7 @@ from typing import Tuple from docker_images_helper import DockerImage, get_docker_image, pull_image from env_helper import REPO_COPY, S3_BUILDS_BUCKET, TEMP_PATH from pr_info import PRInfo -from report import ( - ERROR, - FAILURE, - SUCCESS, - JobReport, - TestResult, - TestResults, - read_test_results, -) +from report import ERROR, FAILURE, SUCCESS, JobReport, TestResults, read_test_results from stopwatch import Stopwatch from tee_popen import TeePopen @@ -80,30 +71,9 @@ def process_results(result_directory: Path) -> Tuple[str, str, TestResults]: return state, description, test_results -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser( - formatter_class=argparse.ArgumentDefaultsHelpFormatter, - description="FastTest script", - ) - - parser.add_argument( - "--timeout", - type=int, - # Fast tests in most cases done within 10 min and 40 min timout should be sufficient, - # though due to cold cache build time can be much longer - # https://pastila.nl/?146195b6/9bb99293535e3817a9ea82c3f0f7538d.link#5xtClOjkaPLEjSuZ92L2/g== - default=40, - help="Timeout in minutes", - ) - args = parser.parse_args() - args.timeout = args.timeout * 60 - return args - - def main(): logging.basicConfig(level=logging.INFO) stopwatch = Stopwatch() - args = parse_args() temp_path = Path(TEMP_PATH) temp_path.mkdir(parents=True, exist_ok=True) @@ -134,14 +104,10 @@ def main(): logs_path.mkdir(parents=True, exist_ok=True) run_log_path = logs_path / "run.log" - timeout_expired = False - with TeePopen(run_cmd, run_log_path, timeout=args.timeout) as process: + with TeePopen(run_cmd, run_log_path) as process: retcode = process.wait() - if process.timeout_exceeded: - logging.info("Timeout expired for command: %s", run_cmd) - timeout_expired = True - elif retcode == 0: + if retcode == 0: logging.info("Run successfully") else: logging.info("Run failed") @@ -175,11 +141,6 @@ def main(): else: state, description, test_results = process_results(output_path) - if timeout_expired: - test_results.append(TestResult.create_check_timeout_expired(args.timeout)) - state = FAILURE - description = test_results[-1].name - JobReport( description=description, test_results=test_results, diff --git a/tests/ci/functional_test_check.py b/tests/ci/functional_test_check.py index e898138fb3a..5bb46d7ec2f 100644 --- a/tests/ci/functional_test_check.py +++ b/tests/ci/functional_test_check.py @@ -68,7 
+68,6 @@ def get_run_command( repo_path: Path, result_path: Path, server_log_path: Path, - kill_timeout: int, additional_envs: List[str], ci_logs_args: str, image: DockerImage, @@ -86,7 +85,6 @@ def get_run_command( ) envs = [ - f"-e MAX_RUN_TIME={int(0.9 * kill_timeout)}", # a static link, don't use S3_URL or S3_DOWNLOAD '-e S3_URL="https://s3.amazonaws.com/clickhouse-datasets"', ] @@ -192,7 +190,6 @@ def process_results( def parse_args(): parser = argparse.ArgumentParser() parser.add_argument("check_name") - parser.add_argument("kill_timeout", type=int) parser.add_argument( "--validate-bugfix", action="store_true", @@ -224,12 +221,7 @@ def main(): assert ( check_name ), "Check name must be provided as an input arg or in CHECK_NAME env" - kill_timeout = args.kill_timeout or int(os.getenv("KILL_TIMEOUT", "0")) - assert ( - kill_timeout > 0 - ), "kill timeout must be provided as an input arg or in KILL_TIMEOUT env" validate_bugfix_check = args.validate_bugfix - print(f"Runnin check [{check_name}] with timeout [{kill_timeout}]") flaky_check = "flaky" in check_name.lower() @@ -288,7 +280,6 @@ def main(): repo_path, result_path, server_log_path, - kill_timeout, additional_envs, ci_logs_args, docker_image, diff --git a/tests/ci/git_helper.py b/tests/ci/git_helper.py index f15f1273bb9..8ec90dd7b2d 100644 --- a/tests/ci/git_helper.py +++ b/tests/ci/git_helper.py @@ -1,9 +1,12 @@ #!/usr/bin/env python import argparse +import atexit import logging +import os import os.path as p import re import subprocess +import tempfile from typing import Any, List, Optional logger = logging.getLogger(__name__) @@ -19,12 +22,16 @@ SHA_REGEXP = re.compile(r"\A([0-9]|[a-f]){40}\Z") CWD = p.dirname(p.realpath(__file__)) TWEAK = 1 -GIT_PREFIX = ( # All commits to remote are done as robot-clickhouse - "git -c user.email=robot-clickhouse@users.noreply.github.com " - "-c user.name=robot-clickhouse -c commit.gpgsign=false " - "-c core.sshCommand=" - "'ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'" -) +with tempfile.NamedTemporaryFile("w", delete=False) as f: + GIT_KNOWN_HOSTS_FILE = f.name + GIT_PREFIX = ( # All commits to remote are done as robot-clickhouse + "git -c user.email=robot-clickhouse@users.noreply.github.com " + "-c user.name=robot-clickhouse -c commit.gpgsign=false " + "-c core.sshCommand=" + f"'ssh -o UserKnownHostsFile={GIT_KNOWN_HOSTS_FILE} " + "-o StrictHostKeyChecking=accept-new'" + ) + atexit.register(os.remove, f.name) # Py 3.8 removeprefix and removesuffix diff --git a/tests/ci/install_check.py b/tests/ci/install_check.py index 54a18c7e26c..6c33b1f2066 100644 --- a/tests/ci/install_check.py +++ b/tests/ci/install_check.py @@ -1,25 +1,21 @@ #!/usr/bin/env python3 import argparse - import logging -import sys import subprocess +import sys from pathlib import Path from shutil import copy2 from typing import Dict - from build_download_helper import download_builds_filter - from compress_files import compress_fast -from docker_images_helper import DockerImage, pull_image, get_docker_image -from env_helper import CI, REPORT_PATH, TEMP_PATH as TEMP -from report import JobReport, TestResults, TestResult, FAILURE, FAIL, OK, SUCCESS +from docker_images_helper import DockerImage, get_docker_image, pull_image +from env_helper import REPORT_PATH +from env_helper import TEMP_PATH as TEMP +from report import FAIL, FAILURE, OK, SUCCESS, JobReport, TestResult, TestResults from stopwatch import Stopwatch from tee_popen import TeePopen -from ci_utils import set_job_timeout - RPM_IMAGE = 
"clickhouse/install-rpm-test" DEB_IMAGE = "clickhouse/install-deb-test" @@ -256,9 +252,6 @@ def main(): args = parse_args() - if CI: - set_job_timeout() - TEMP_PATH.mkdir(parents=True, exist_ok=True) LOGS_PATH.mkdir(parents=True, exist_ok=True) diff --git a/tests/ci/jepsen_check.py b/tests/ci/jepsen_check.py index 6ed411a11ef..1e61fd9fab7 100644 --- a/tests/ci/jepsen_check.py +++ b/tests/ci/jepsen_check.py @@ -10,6 +10,7 @@ from typing import Any, List import boto3 # type: ignore import requests + from build_download_helper import ( download_build_with_progress, get_build_name_for_check, @@ -201,7 +202,7 @@ def main(): docker_image = KEEPER_IMAGE_NAME if args.program == "keeper" else SERVER_IMAGE_NAME if pr_info.is_scheduled or pr_info.is_dispatched: - # get latest clcikhouse by the static link for latest master buit - get its version and provide permanent url for this version to the jepsen + # get latest clickhouse by the static link for latest master buit - get its version and provide permanent url for this version to the jepsen build_url = f"{S3_URL}/{S3_BUILDS_BUCKET}/master/amd64/clickhouse" download_build_with_progress(build_url, Path(TEMP_PATH) / "clickhouse") git_runner.run(f"chmod +x {TEMP_PATH}/clickhouse") diff --git a/tests/ci/lambda_shared_package/lambda_shared/pr.py b/tests/ci/lambda_shared_package/lambda_shared/pr.py index f80ac896c9b..e981e28a454 100644 --- a/tests/ci/lambda_shared_package/lambda_shared/pr.py +++ b/tests/ci/lambda_shared_package/lambda_shared/pr.py @@ -50,6 +50,8 @@ TRUSTED_CONTRIBUTORS = { class Labels: + PR_BUGFIX = "pr-bugfix" + PR_CRITICAL_BUGFIX = "pr-critical-bugfix" CAN_BE_TESTED = "can be tested" DO_NOT_TEST = "do not test" MUST_BACKPORT = "pr-must-backport" @@ -68,8 +70,8 @@ class Labels: RELEASE_LTS = "release-lts" SUBMODULE_CHANGED = "submodule changed" - # pr-bugfix autoport can lead to issues in releases, let's do ci fixes only - AUTO_BACKPORT = {"pr-ci"} + # automatic backport for critical bug fixes + AUTO_BACKPORT = {"pr-critical-bugfix"} # Descriptions are used in .github/PULL_REQUEST_TEMPLATE.md, keep comments there @@ -84,6 +86,7 @@ LABEL_CATEGORIES = { "Bug Fix (user-visible misbehaviour in official stable or prestable release)", "Bug Fix (user-visible misbehavior in official stable or prestable release)", ], + "pr-critical-bugfix": ["Critical Bug Fix (crash, LOGICAL_ERROR, data loss, RBAC)"], "pr-build": [ "Build/Testing/Packaging Improvement", "Build Improvement", diff --git a/tests/ci/libfuzzer_test_check.py b/tests/ci/libfuzzer_test_check.py index 4bb39010978..1f5936c3fec 100644 --- a/tests/ci/libfuzzer_test_check.py +++ b/tests/ci/libfuzzer_test_check.py @@ -46,7 +46,6 @@ def get_run_command( fuzzers_path: Path, repo_path: Path, result_path: Path, - kill_timeout: int, additional_envs: List[str], ci_logs_args: str, image: DockerImage, @@ -59,7 +58,6 @@ def get_run_command( ) envs = [ - f"-e MAX_RUN_TIME={int(0.9 * kill_timeout)}", # a static link, don't use S3_URL or S3_DOWNLOAD '-e S3_URL="https://s3.amazonaws.com/clickhouse-datasets"', ] @@ -83,7 +81,6 @@ def get_run_command( def parse_args(): parser = argparse.ArgumentParser() parser.add_argument("check_name") - parser.add_argument("kill_timeout", type=int) return parser.parse_args() @@ -99,7 +96,6 @@ def main(): args = parse_args() check_name = args.check_name - kill_timeout = args.kill_timeout pr_info = PRInfo() @@ -145,7 +141,6 @@ def main(): fuzzers_path, repo_path, result_path, - kill_timeout, additional_envs, ci_logs_args, docker_image, diff --git a/tests/ci/report.py 
index 8676c998afb..ee58efdba52 100644
--- a/tests/ci/report.py
+++ b/tests/ci/report.py
@@ -288,7 +288,7 @@ class JobReport:
     start_time: str
     duration: float
     additional_files: Union[Sequence[str], Sequence[Path]]
-    # clcikhouse version, build job only
+    # clickhouse version, build job only
     version: str = ""
    # check name to set in commit status, set if it differs from the job name
     check_name: str = ""
@@ -401,30 +401,40 @@ class BuildResult:
     @classmethod
     def load_any(cls, build_name: str, pr_number: int, head_ref: str):  # type: ignore
         """
-        loads report from suitable report file with the following priority:
-        1. report from PR with the same @pr_number
-        2. report from branch with the same @head_ref
-        3. report from the master
-        4. any other report
+        loads the build report from one of the available report files (matching the job digest)
+        with the following priority:
+        1. report for the current PR @pr_number (may happen in the PR workflow, with or without job reuse)
+        2. report for the current branch @head_ref (may happen in a release/master workflow, with or without job reuse)
+        3. report for the master branch (may happen in any workflow in case of job reuse)
+        4. any other report (job reuse from another PR, if the master report is not available yet)
         """
-        reports = []
+        pr_report = None
+        ref_report = None
+        master_report = None
+        any_report = None
         for file in Path(REPORT_PATH).iterdir():
             if f"{build_name}.json" in file.name:
-                reports.append(file)
-        if not reports:
+                any_report = file
+                if "_master_" in file.name:
+                    master_report = file
+                elif f"_{head_ref}_" in file.name:
+                    ref_report = file
+                elif pr_number and f"_{pr_number}_" in file.name:
+                    pr_report = file
+
+        if not any_report:
             return None
-        file_path = None
-        for file in reports:
-            if pr_number and f"_{pr_number}_" in file.name:
-                file_path = file
-                break
-            if f"_{head_ref}_" in file.name:
-                file_path = file
-                break
-            if "_master_" in file.name:
-                file_path = file
-                break
-        return cls.load_from_file(file_path or reports[-1])
+
+        if pr_report:
+            file_path = pr_report
+        elif ref_report:
+            file_path = ref_report
+        elif master_report:
+            file_path = master_report
+        else:
+            file_path = any_report
+
+        return cls.load_from_file(file_path)

     @classmethod
     def load_from_file(cls, file: Union[Path, str]):  # type: ignore
diff --git a/tests/ci/sqllogic_test.py b/tests/ci/sqllogic_test.py
index 6ea6fa19d91..63880f07e92 100755
--- a/tests/ci/sqllogic_test.py
+++ b/tests/ci/sqllogic_test.py
@@ -9,8 +9,8 @@ from pathlib import Path
 from typing import Tuple

 from build_download_helper import download_all_deb_packages
-from docker_images_helper import DockerImage, pull_image, get_docker_image
-from env_helper import REPORT_PATH, TEMP_PATH, REPO_COPY
+from docker_images_helper import DockerImage, get_docker_image, pull_image
+from env_helper import REPO_COPY, REPORT_PATH, TEMP_PATH
 from report import (
     ERROR,
     FAIL,
@@ -72,11 +72,6 @@ def parse_args() -> argparse.Namespace:
         required=False,
         default="",
     )
-    parser.add_argument(
-        "--kill-timeout",
-        required=False,
-        default=0,
-    )
     return parser.parse_args()

@@ -96,10 +91,6 @@ def main():
     assert (
         check_name
     ), "Check name must be provided as an input arg or in CHECK_NAME env"
-    kill_timeout = args.kill_timeout or int(os.getenv("KILL_TIMEOUT", "0"))
-    assert (
-        kill_timeout > 0
-    ), "kill timeout must be provided as an input arg or in KILL_TIMEOUT env"

     docker_image = pull_image(get_docker_image(IMAGE_NAME))

@@ -127,7 +118,7 @@ def main():
     )
     logging.info("Going to run func tests: %s", run_command)
-    with TeePopen(run_command, run_log_path, timeout=kill_timeout) as process:
+    with TeePopen(run_command, run_log_path) as process:
         retcode = process.wait()
         if retcode == 0:
             logging.info("Run successfully")
diff --git a/tests/ci/stress_check.py b/tests/ci/stress_check.py
index 027d7316e23..bf0281cae68 100644
--- a/tests/ci/stress_check.py
+++ b/tests/ci/stress_check.py
@@ -14,7 +14,7 @@ from docker_images_helper import DockerImage, get_docker_image, pull_image
 from env_helper import REPO_COPY, REPORT_PATH, TEMP_PATH
 from get_robot_token import get_parameter_from_ssm
 from pr_info import PRInfo
-from report import ERROR, JobReport, TestResult, TestResults, read_test_results
+from report import ERROR, JobReport, TestResults, read_test_results
 from stopwatch import Stopwatch
 from tee_popen import TeePopen

@@ -161,14 +161,9 @@ def run_stress_test(docker_image_name: str) -> None:
     )
     logging.info("Going to run stress test: %s", run_command)

-    timeout_expired = False
-    timeout = 60 * 150
-    with TeePopen(run_command, run_log_path, timeout=timeout) as process:
+    with TeePopen(run_command, run_log_path) as process:
         retcode = process.wait()
-        if process.timeout_exceeded:
-            logging.info("Timeout expired for command: %s", run_command)
-            timeout_expired = True
-        elif retcode == 0:
+        if retcode == 0:
             logging.info("Run successfully")
         else:
             logging.info("Run failed")
@@ -180,11 +175,6 @@ def run_stress_test(docker_image_name: str) -> None:
         result_path, server_log_path, run_log_path
     )

-    if timeout_expired:
-        test_results.append(TestResult.create_check_timeout_expired(timeout))
-        state = "failure"
-        description = test_results[-1].name
-
     JobReport(
         description=description,
         test_results=test_results,
diff --git a/tests/clickhouse-test b/tests/clickhouse-test
index 133d635f8a0..af203563d58 100755
--- a/tests/clickhouse-test
+++ b/tests/clickhouse-test
@@ -1223,12 +1223,9 @@ class TestCase:
             return FailureReason.S3_STORAGE
         elif (
             tags
-            and ("no-s3-storage-with-slow-build" in tags)
+            and "no-s3-storage-with-slow-build" in tags
             and args.s3_storage
-            and (
-                BuildFlags.THREAD in args.build_flags
-                or BuildFlags.DEBUG in args.build_flags
-            )
+            and BuildFlags.RELEASE not in args.build_flags
         ):
             return FailureReason.S3_STORAGE

@@ -2411,6 +2408,17 @@ def do_run_tests(jobs, test_suite: TestSuite, parallel):
         for _ in range(jobs):
             parallel_tests_array.append((None, batch_size, test_suite))

+        # Without random shuffling, nearly the same groups of tests run concurrently
+        # every time, so a single broken test affects its neighbours deterministically:
+        # whenever it fails, the same tests from its group fail with it. That makes
+        # real flaky tests harder to detect, because the distribution and the number
+        # of failures stay nearly the same from run to run. Shuffling randomizes the groups.
+        random.shuffle(test_suite.parallel_tests)
+
         try:
             with closing(multiprocessing.Pool(processes=jobs)) as pool:
                 pool.map_async(run_tests_array, parallel_tests_array)
diff --git a/tests/config/config.d/storage_conf.xml b/tests/config/config.d/storage_conf.xml
index 0e6cd4b0e03..7a9b579c00a 100644
--- a/tests/config/config.d/storage_conf.xml
+++ b/tests/config/config.d/storage_conf.xml
@@ -92,6 +92,13 @@
             22548578304
             100
+        <s3_no_cache>
+            <type>s3</type>
+            <endpoint>http://localhost:11111/test/special/</endpoint>
+            <access_key_id>clickhouse</access_key_id>
+            <secret_access_key>clickhouse</secret_access_key>
+            0
+        </s3_no_cache>
@@ -107,6 +114,13 @@
+        <s3_no_cache>
+            <volumes>
+                <main>
+                    <disk>s3_no_cache</disk>
+                </main>
+            </volumes>
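+            <!-- Hedged usage sketch (not part of the original file): a table opts in to
+                 this policy via SETTINGS storage_policy = 's3_no_cache' on CREATE TABLE -->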
diff --git a/tests/integration/helpers/cluster.py b/tests/integration/helpers/cluster.py index c2bea3060aa..41c162217d2 100644 --- a/tests/integration/helpers/cluster.py +++ b/tests/integration/helpers/cluster.py @@ -513,6 +513,7 @@ class ClickHouseCluster: self.minio_redirect_host = "proxy1" self.minio_redirect_ip = None self.minio_redirect_port = 8080 + self.minio_docker_id = self.get_instance_docker_id(self.minio_host) self.spark_session = None diff --git a/tests/integration/helpers/s3_mocks/broken_s3.py b/tests/integration/helpers/s3_mocks/broken_s3.py index 7d0127bc1c4..686abc76bdf 100644 --- a/tests/integration/helpers/s3_mocks/broken_s3.py +++ b/tests/integration/helpers/s3_mocks/broken_s3.py @@ -183,6 +183,9 @@ class _ServerRuntime: ) request_handler.write_error(429, data) + # make sure that Alibaba errors (QpsLimitExceeded, TotalQpsLimitExceededAction) are retriable + # we patched contrib/aws to achive it: https://github.com/ClickHouse/aws-sdk-cpp/pull/22 https://github.com/ClickHouse/aws-sdk-cpp/pull/23 + # https://www.alibabacloud.com/help/en/oss/support/http-status-code-503 class QpsLimitExceededAction: def inject_error(self, request_handler): data = ( @@ -195,6 +198,18 @@ class _ServerRuntime: ) request_handler.write_error(429, data) + class TotalQpsLimitExceededAction: + def inject_error(self, request_handler): + data = ( + '' + "" + "TotalQpsLimitExceeded" + "Please reduce your request rate." + "txfbd566d03042474888193-00608d7537" + "" + ) + request_handler.write_error(429, data) + class RedirectAction: def __init__(self, host="localhost", port=1): self.dst_host = _and_then(host, str) @@ -269,6 +284,10 @@ class _ServerRuntime: self.error_handler = _ServerRuntime.QpsLimitExceededAction( *self.action_args ) + elif self.action == "total_qps_limit_exceeded": + self.error_handler = _ServerRuntime.TotalQpsLimitExceededAction( + *self.action_args + ) else: self.error_handler = _ServerRuntime.Expected500ErrorAction() diff --git a/tests/integration/test_backup_restore_s3/configs/disk_s3_restricted_user.xml b/tests/integration/test_backup_restore_s3/configs/disk_s3_restricted_user.xml new file mode 100644 index 00000000000..323e986f966 --- /dev/null +++ b/tests/integration/test_backup_restore_s3/configs/disk_s3_restricted_user.xml @@ -0,0 +1,22 @@ + + + + + + s3 + http://minio1:9001/root/data/disks/disk_s3_restricted_user/ + miniorestricted1 + minio123 + + + + + +
+        <policies>
+            <policy_s3_restricted>
+                <volumes>
+                    <main>
+                        <disk>disk_s3_restricted_user</disk>
+                    </main>
+                </volumes>
+            </policy_s3_restricted>
+        </policies>
+    </storage_configuration>
+</clickhouse>
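For orientation, a hedged sketch (the helper name is hypothetical, not part of the patch): each bucket policy rendered by the test below grants a user access to exactly one bucket, an invariant that can be checked mechanically from the policy JSON:

    import json

    def buckets_in_policy(policy_text: str) -> set:
        # Collect the bucket part of every ARN ("arn:aws:s3:::<bucket>[/...]")
        doc = json.loads(policy_text)
        return {
            arn.split(":::", 1)[1].split("/", 1)[0]
            for statement in doc["Statement"]
            for arn in statement["Resource"]
        }

    # For the policy rendered with bucket="root" this yields {"root"};
    # with bucket="root2" it yields {"root2"}.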
diff --git a/tests/integration/test_backup_restore_s3/test.py b/tests/integration/test_backup_restore_s3/test.py index 05424887736..967ed6a221c 100644 --- a/tests/integration/test_backup_restore_s3/test.py +++ b/tests/integration/test_backup_restore_s3/test.py @@ -3,8 +3,11 @@ import pytest from helpers.cluster import ClickHouseCluster from helpers.test_tools import TSV import uuid +import os +CONFIG_DIR = os.path.join(os.path.dirname(os.path.realpath(__file__)), "configs") + cluster = ClickHouseCluster(__file__) node = cluster.add_instance( "node", @@ -20,13 +23,127 @@ node = cluster.add_instance( ], with_minio=True, with_zookeeper=True, + stay_alive=True, ) +def setup_minio_users(): + # create 2 extra users with restricted access + # miniorestricted1 - full access to bucket 'root', no access to other buckets + # miniorestricted2 - full access to bucket 'root2', no access to other buckets + # storage policy 'policy_s3_restricted' defines a policy for storing files inside bucket 'root' using 'miniorestricted1' user + for user, bucket in [("miniorestricted1", "root"), ("miniorestricted2", "root2")]: + print( + cluster.exec_in_container( + cluster.minio_docker_id, + [ + "mc", + "alias", + "set", + "root", + "http://minio1:9001", + "minio", + "minio123", + ], + ) + ) + policy = f""" +{{ + "Version": "2012-10-17", + "Statement": [ + {{ + "Effect": "Allow", + "Principal": {{ + "AWS": [ + "*" + ] + }}, + "Action": [ + "s3:GetBucketLocation", + "s3:ListBucket", + "s3:ListBucketMultipartUploads" + ], + "Resource": [ + "arn:aws:s3:::{bucket}" + ] + }}, + {{ + "Effect": "Allow", + "Principal": {{ + "AWS": [ + "*" + ] + }}, + "Action": [ + "s3:AbortMultipartUpload", + "s3:DeleteObject", + "s3:GetObject", + "s3:ListMultipartUploadParts", + "s3:PutObject" + ], + "Resource": [ + "arn:aws:s3:::{bucket}/*" + ] + }} + ] +}}""" + + cluster.exec_in_container( + cluster.minio_docker_id, + ["bash", "-c", f"cat >/tmp/{bucket}_policy.json < 0, + ) + try: cluster.start() yield cluster @@ -64,10 +65,10 @@ def test_insert(): gen_insert_values(random.randint(1, MAX_ROWS)) for _ in range(0, NUM_WORKERS) ] threads = [] + assert len(cluster.instances) == NUM_WORKERS for i in range(NUM_WORKERS): - t = threading.Thread( - target=create_insert, args=(nodes[i], insert_values_arr[i]) - ) + node = cluster.instances[f"node{i + 1}"] + t = threading.Thread(target=create_insert, args=(node, insert_values_arr[i])) threads.append(t) t.start() @@ -75,48 +76,61 @@ def test_insert(): t.join() for i in range(NUM_WORKERS): + node = cluster.instances[f"node{i + 1}"] assert ( - nodes[i].query("SELECT * FROM test ORDER BY id FORMAT Values") + node.query("SELECT * FROM test ORDER BY id FORMAT Values") == insert_values_arr[i] ) for i in range(NUM_WORKERS): - nodes[i].query("ALTER TABLE test MODIFY SETTING old_parts_lifetime = 59") + node = cluster.instances[f"node{i + 1}"] + node.query("ALTER TABLE test MODIFY SETTING old_parts_lifetime = 59") assert ( - nodes[i] - .query( + node.query( "SELECT engine_full from system.tables WHERE database = currentDatabase() AND name = 'test'" - ) - .find("old_parts_lifetime = 59") + ).find("old_parts_lifetime = 59") != -1 ) - nodes[i].query("ALTER TABLE test RESET SETTING old_parts_lifetime") + node.query("ALTER TABLE test RESET SETTING old_parts_lifetime") assert ( - nodes[i] - .query( + node.query( "SELECT engine_full from system.tables WHERE database = currentDatabase() AND name = 'test'" - ) - .find("old_parts_lifetime") + ).find("old_parts_lifetime") == -1 ) - nodes[i].query("ALTER TABLE test MODIFY 
COMMENT 'new description'")
         assert (
-            nodes[i]
-            .query(
+            node.query(
                 "SELECT comment from system.tables WHERE database = currentDatabase() AND name = 'test'"
-            )
-            .find("new description")
+            ).find("new description")
             != -1
         )

+        created = int(
+            node.query(
+                "SELECT value FROM system.events WHERE event = 'DiskPlainRewritableS3DirectoryCreated'"
+            )
+        )
+        assert created > 0
+        dirs_created.append(created)
+        assert (
+            int(
+                node.query(
+                    "SELECT value FROM system.metrics WHERE metric = 'DiskPlainRewritableS3DirectoryMapSize'"
+                )
+            )
+            == created
+        )
+

 @pytest.mark.order(1)
 def test_restart():
     insert_values_arr = []
     for i in range(NUM_WORKERS):
+        node = cluster.instances[f"node{i + 1}"]
         insert_values_arr.append(
-            nodes[i].query("SELECT * FROM test ORDER BY id FORMAT Values")
+            node.query("SELECT * FROM test ORDER BY id FORMAT Values")
         )

     def restart(node):
@@ -124,7 +138,7 @@ def test_restart():
     threads = []
     for i in range(NUM_WORKERS):
-        t = threading.Thread(target=restart, args=(nodes[i],))
+        node = cluster.instances[f"node{i + 1}"]
+        t = threading.Thread(target=restart, args=(node,))
         threads.append(t)
         t.start()

@@ -132,8 +146,9 @@ def test_restart():
         t.join()

     for i in range(NUM_WORKERS):
+        node = cluster.instances[f"node{i + 1}"]
         assert (
-            nodes[i].query("SELECT * FROM test ORDER BY id FORMAT Values")
+            node.query("SELECT * FROM test ORDER BY id FORMAT Values")
             == insert_values_arr[i]
         )

@@ -141,7 +156,16 @@ def test_drop():
 def test_drop():
     for i in range(NUM_WORKERS):
-        nodes[i].query("DROP TABLE IF EXISTS test SYNC")
+        node = cluster.instances[f"node{i + 1}"]
+        node.query("DROP TABLE IF EXISTS test SYNC")
+
+        removed = int(
+            node.query(
+                "SELECT value FROM system.events WHERE event = 'DiskPlainRewritableS3DirectoryRemoved'"
+            )
+        )
+
+        assert dirs_created[i] == removed

     it = cluster.minio_client.list_objects(
         cluster.minio_bucket, "data/", recursive=True
diff --git a/tests/performance/sparse_column_filter.xml b/tests/performance/sparse_column_filter.xml
new file mode 100644
index 00000000000..bc6a94a1cc4
--- /dev/null
+++ b/tests/performance/sparse_column_filter.xml
@@ -0,0 +1,42 @@
+<test>
+    <substitutions>
+        <substitution>
+            <name>serialization</name>
+            <values>
+                <value>sparse</value>
+            </values>
+        </substitution>
+        <substitution>
+            <name>ratio</name>
+            <values>
+                <value>10</value>
+                <value>100</value>
+                <value>1000</value>
+            </values>
+        </substitution>
+    </substitutions>
+
+    <create_query>
+        CREATE TABLE test_{serialization}_{ratio} (id UInt64, u8 UInt8, u64 UInt64, str String)
+        ENGINE = MergeTree ORDER BY id
+        SETTINGS ratio_of_defaults_for_sparse_serialization = 0.8
+    </create_query>
+
+    <fill_query>SYSTEM STOP MERGES test_{serialization}_{ratio}</fill_query>
+
+    <fill_query>
+        INSERT INTO test_{serialization}_{ratio} SELECT
+            number,
+            number % {ratio} = 0 ? rand(1) : 0,
+            number % {ratio} = 0 ? rand(2) : 0,
+            number % {ratio} = 0 ? randomPrintableASCII(64, 3) : ''
+        FROM numbers(100000000)
+    </fill_query>
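+    <!-- Illustrative note, not in the original file: the INSERT above leaves only one row
+         in every {ratio} non-default, so the share of defaults (at least 0.9) stays above
+         the ratio_of_defaults_for_sparse_serialization = 0.8 threshold and these columns
+         are written with sparse serialization. -->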
+    <query>SELECT str, COUNT(DISTINCT id) as i FROM test_{serialization}_{ratio} WHERE notEmpty(str) GROUP BY str ORDER BY i DESC LIMIT 10</query>
+    <query>SELECT str, COUNT(DISTINCT u8) as u FROM test_{serialization}_{ratio} WHERE notEmpty(str) GROUP BY str ORDER BY u DESC LIMIT 10</query>
+    <query>SELECT str, COUNT(DISTINCT u64) as u FROM test_{serialization}_{ratio} WHERE notEmpty(str) GROUP BY str ORDER BY u DESC LIMIT 10</query>
+
+    <drop_query>DROP TABLE IF EXISTS test_{serialization}_{ratio}</drop_query>
+</test>
diff --git a/tests/queries/0_stateless/00002_system_numbers.sql b/tests/queries/0_stateless/00002_system_numbers.sql
index d5934c7d387..1710a0d6a1e 100644
--- a/tests/queries/0_stateless/00002_system_numbers.sql
+++ b/tests/queries/0_stateless/00002_system_numbers.sql
@@ -7,8 +7,8 @@ SELECT * FROM system.numbers WHERE number == 7 LIMIT 1;
 SELECT number AS n FROM system.numbers WHERE number IN(8, 9) LIMIT 2;
 select number from system.numbers limit 0;
 select x from system.numbers limit 1; -- { serverError UNKNOWN_IDENTIFIER }
-SELECT x, number FROM system.numbers LIMIT 1; -- { serverError 47 }
-SELECT * FROM system.number LIMIT 1; -- { serverError 60 }
-SELECT * FROM system LIMIT 1; -- { serverError 60 }
-SELECT * FROM numbers LIMIT 1; -- { serverError 60 }
-SELECT sys.number FROM system.numbers AS sys_num LIMIT 1; -- { serverError 47 }
+SELECT x, number FROM system.numbers LIMIT 1; -- { serverError UNKNOWN_IDENTIFIER }
+SELECT * FROM system.number LIMIT 1; -- { serverError UNKNOWN_TABLE }
+SELECT * FROM system LIMIT 1; -- { serverError UNKNOWN_TABLE }
+SELECT * FROM numbers LIMIT 1; -- { serverError UNKNOWN_TABLE }
+SELECT sys.number FROM system.numbers AS sys_num LIMIT 1; -- { serverError UNKNOWN_IDENTIFIER }
diff --git a/tests/queries/0_stateless/00011_array_join_alias.sql b/tests/queries/0_stateless/00011_array_join_alias.sql
index 5eafeddb8fe..8e04d48a7a2 100644
--- a/tests/queries/0_stateless/00011_array_join_alias.sql
+++ b/tests/queries/0_stateless/00011_array_join_alias.sql
@@ -1,2 +1,2 @@
-SELECT x, a FROM (SELECT arrayJoin(['Hello', 'Goodbye']) AS x, [1, 2, 3] AS arr) ARRAY JOIN; -- { serverError 42 }
+SELECT x, a FROM (SELECT arrayJoin(['Hello', 'Goodbye']) AS x, [1, 2, 3] AS arr) ARRAY JOIN; -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
 SELECT x, a FROM (SELECT arrayJoin(['Hello', 'Goodbye']) AS x, [1, 2, 3] AS arr) ARRAY JOIN arr AS a;
diff --git a/tests/queries/0_stateless/00079_defaulted_columns.sql b/tests/queries/0_stateless/00079_defaulted_columns.sql
index 04dfb7057d2..28e6ec0568c 100644
--- a/tests/queries/0_stateless/00079_defaulted_columns.sql
+++ b/tests/queries/0_stateless/00079_defaulted_columns.sql
@@ -11,7 +11,7 @@ select * from defaulted;
 select col3, col4 from defaulted;
 drop table defaulted;

-create table defaulted (col1 Int8, col2 UInt64 default (SELECT dummy+99 from system.one)) engine=Memory; --{serverError 116}
+create table defaulted (col1 Int8, col2 UInt64 default (SELECT dummy+99 from system.one)) engine=Memory; --{serverError THERE_IS_NO_DEFAULT_VALUE}
 set allow_deprecated_syntax_for_merge_tree=1;
 create table defaulted (payload String, date materialized today(), key materialized 0 * rand()) engine=MergeTree(date, key, 8192);
diff --git a/tests/queries/0_stateless/00105_shard_collations.sql b/tests/queries/0_stateless/00105_shard_collations.sql
index 3a4151eebf8..28c54727078 100644
--- a/tests/queries/0_stateless/00105_shard_collations.sql
+++ b/tests/queries/0_stateless/00105_shard_collations.sql
@@ -45,7 +45,7 @@ SELECT number FROM numbers(2)
ORDER BY 'x' COLLATE 'el'; SELECT number FROM numbers(11) ORDER BY 'x', toString(number), 'y' COLLATE 'el'; --- Trash locales -SELECT '' as x ORDER BY x COLLATE 'qq'; --{serverError 186} -SELECT '' as x ORDER BY x COLLATE 'qwe'; --{serverError 186} -SELECT '' as x ORDER BY x COLLATE 'some_non_existing_locale'; --{serverError 186} -SELECT '' as x ORDER BY x COLLATE 'ру'; --{serverError 186} +SELECT '' as x ORDER BY x COLLATE 'qq'; --{serverError UNSUPPORTED_COLLATION_LOCALE} +SELECT '' as x ORDER BY x COLLATE 'qwe'; --{serverError UNSUPPORTED_COLLATION_LOCALE} +SELECT '' as x ORDER BY x COLLATE 'some_non_existing_locale'; --{serverError UNSUPPORTED_COLLATION_LOCALE} +SELECT '' as x ORDER BY x COLLATE 'ру'; --{serverError UNSUPPORTED_COLLATION_LOCALE} diff --git a/tests/queries/0_stateless/00118_storage_join.sql b/tests/queries/0_stateless/00118_storage_join.sql index 552e62afa9c..c0bc2817140 100644 --- a/tests/queries/0_stateless/00118_storage_join.sql +++ b/tests/queries/0_stateless/00118_storage_join.sql @@ -16,6 +16,6 @@ SELECT k, js1.s, t2.s FROM (SELECT toUInt64(number / 3) AS k, sum(number) as s F SELECT k, js1.s, t2.s FROM (SELECT number AS k, number AS s FROM system.numbers LIMIT 10) js1 ANY LEFT JOIN t2 ON js1.k == t2.k ORDER BY k; SELECT k, t2.k, js1.s, t2.s FROM (SELECT number AS k, number AS s FROM system.numbers LIMIT 10) js1 ANY LEFT JOIN t2 ON js1.k == t2.k ORDER BY k; -SELECT k, js1.s, t2.s FROM (SELECT number AS k, number AS s FROM system.numbers LIMIT 10) js1 ANY LEFT JOIN t2 ON js1.k == t2.k OR js1.s == t2.k ORDER BY k; -- { serverError 48, 264 } +SELECT k, js1.s, t2.s FROM (SELECT number AS k, number AS s FROM system.numbers LIMIT 10) js1 ANY LEFT JOIN t2 ON js1.k == t2.k OR js1.s == t2.k ORDER BY k; -- { serverError NOT_IMPLEMENTED, INCOMPATIBLE_TYPE_OF_JOIN } DROP TABLE t2; diff --git a/tests/queries/0_stateless/00153_transform.sql b/tests/queries/0_stateless/00153_transform.sql index 78ec3cd4d1c..d69a18cb002 100644 --- a/tests/queries/0_stateless/00153_transform.sql +++ b/tests/queries/0_stateless/00153_transform.sql @@ -12,7 +12,7 @@ SELECT transform(1, [2, 3], ['Bigmir)net', 'Google'], 'Остальные') AS t SELECT transform(2, [2, 3], ['Bigmir)net', 'Google'], 'Остальные') AS title; SELECT transform(3, [2, 3], ['Bigmir)net', 'Google'], 'Остальные') AS title; SELECT transform(4, [2, 3], ['Bigmir)net', 'Google'], 'Остальные') AS title; -SELECT transform('hello', 'wrong', 1); -- { serverError 43 } -SELECT transform('hello', ['wrong'], 1); -- { serverError 43 } -SELECT transform('hello', ['wrong'], [1]); -- { serverError 43 } -SELECT transform(tuple(1), ['sdf'], [1]); -- { serverError 43 } +SELECT transform('hello', 'wrong', 1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT transform('hello', ['wrong'], 1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT transform('hello', ['wrong'], [1]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT transform(tuple(1), ['sdf'], [1]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/00161_rounding_functions.sql b/tests/queries/0_stateless/00161_rounding_functions.sql index abdc1e7317b..9dc117c4f9a 100644 --- a/tests/queries/0_stateless/00161_rounding_functions.sql +++ b/tests/queries/0_stateless/00161_rounding_functions.sql @@ -47,4 +47,4 @@ SELECT roundToExp2(0.9), roundToExp2(0), roundToExp2(-0.5), roundToExp2(-0.6), r select round(2, 4) round2, round(20, 4) round20, round(200, 4) round200, round(5, 4) round5, round(50, 4) round50, round(500, 4) round500, round(toInt32(5), 4) roundInt5, 
round(toInt32(50), 4) roundInt50, round(toInt32(500), 4) roundInt500; select roundBankers(2, 4) round2, roundBankers(20, 4) round20, roundBankers(200, 4) round200, roundBankers(5, 4) round5, roundBankers(50, 4) round50, roundBankers(500, 4) round500, roundBankers(toInt32(5), 4) roundInt5, roundBankers(toInt32(50), 4) roundInt50, roundBankers(toInt32(500), 4) roundInt500; -SELECT ceil(29375422, -54212) --{serverError 69} +SELECT ceil(29375422, -54212) --{serverError ARGUMENT_OUT_OF_BOUND} diff --git a/tests/queries/0_stateless/00166_functions_of_aggregation_states.sql b/tests/queries/0_stateless/00166_functions_of_aggregation_states.sql index 85f26d4e206..62297e4076e 100644 --- a/tests/queries/0_stateless/00166_functions_of_aggregation_states.sql +++ b/tests/queries/0_stateless/00166_functions_of_aggregation_states.sql @@ -1,5 +1,5 @@ -- Disable external aggregation because the state is reset for each new block of data in 'runningAccumulate' function. SET max_bytes_before_external_group_by = 0; -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; SELECT k, finalizeAggregation(sum_state), runningAccumulate(sum_state) FROM (SELECT intDiv(number, 50000) AS k, sumState(number) AS sum_state FROM (SELECT number FROM system.numbers LIMIT 1000000) GROUP BY k ORDER BY k); diff --git a/tests/queries/0_stateless/00189_time_zones_long.sql b/tests/queries/0_stateless/00189_time_zones_long.sql index 4785bee1482..782035e816e 100644 --- a/tests/queries/0_stateless/00189_time_zones_long.sql +++ b/tests/queries/0_stateless/00189_time_zones_long.sql @@ -33,7 +33,7 @@ SELECT toMonday(toDateTime(1419800400), 'Europe/Paris'); SELECT toMonday(toDateTime(1419800400), 'Europe/London'); SELECT toMonday(toDateTime(1419800400), 'Asia/Tokyo'); SELECT toMonday(toDateTime(1419800400), 'Pacific/Pitcairn'); -SELECT toMonday(toDate(16433), 'Asia/Istanbul'); -- { serverError 43 } +SELECT toMonday(toDate(16433), 'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toMonday(toDate(16433)); /* toStartOfWeek (Sunday) */ @@ -44,7 +44,7 @@ SELECT toStartOfWeek(toDateTime(1419800400), 0, 'Europe/Paris'); SELECT toStartOfWeek(toDateTime(1419800400), 0, 'Europe/London'); SELECT toStartOfWeek(toDateTime(1419800400), 0, 'Asia/Tokyo'); SELECT toStartOfWeek(toDateTime(1419800400), 0, 'Pacific/Pitcairn'); -SELECT toStartOfWeek(toDate(16433), 0, 'Asia/Istanbul'); -- { serverError 43 } +SELECT toStartOfWeek(toDate(16433), 0, 'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toStartOfWeek(toDate(16433), 0); /* toStartOfWeek (Monday) */ @@ -55,7 +55,7 @@ SELECT toStartOfWeek(toDateTime(1419800400), 1, 'Europe/Paris'); SELECT toStartOfWeek(toDateTime(1419800400), 1, 'Europe/London'); SELECT toStartOfWeek(toDateTime(1419800400), 1, 'Asia/Tokyo'); SELECT toStartOfWeek(toDateTime(1419800400), 1, 'Pacific/Pitcairn'); -SELECT toStartOfWeek(toDate(16433), 1, 'Asia/Istanbul'); -- { serverError 43 } +SELECT toStartOfWeek(toDate(16433), 1, 'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toStartOfWeek(toDate(16433), 1); /* toLastDayOfWeek (Sunday) */ @@ -66,7 +66,7 @@ SELECT toLastDayOfWeek(toDateTime(1419800400), 0, 'Europe/Paris'); SELECT toLastDayOfWeek(toDateTime(1419800400), 0, 'Europe/London'); SELECT toLastDayOfWeek(toDateTime(1419800400), 0, 'Asia/Tokyo'); SELECT toLastDayOfWeek(toDateTime(1419800400), 0, 'Pacific/Pitcairn'); -SELECT toLastDayOfWeek(toDate(16433), 0, 'Asia/Istanbul'); -- { serverError 43 } +SELECT toLastDayOfWeek(toDate(16433), 0, 
'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toLastDayOfWeek(toDate(16433), 0); /* toLastDayOfWeek (Monday) */ @@ -77,7 +77,7 @@ SELECT toLastDayOfWeek(toDateTime(1419800400), 1, 'Europe/Paris'); SELECT toLastDayOfWeek(toDateTime(1419800400), 1, 'Europe/London'); SELECT toLastDayOfWeek(toDateTime(1419800400), 1, 'Asia/Tokyo'); SELECT toLastDayOfWeek(toDateTime(1419800400), 1, 'Pacific/Pitcairn'); -SELECT toLastDayOfWeek(toDate(16433), 1, 'Asia/Istanbul'); -- { serverError 43 } +SELECT toLastDayOfWeek(toDate(16433), 1, 'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toLastDayOfWeek(toDate(16433), 1); /* toStartOfMonth */ @@ -88,7 +88,7 @@ SELECT toStartOfMonth(toDateTime(1419800400), 'Europe/Paris'); SELECT toStartOfMonth(toDateTime(1419800400), 'Europe/London'); SELECT toStartOfMonth(toDateTime(1419800400), 'Asia/Tokyo'); SELECT toStartOfMonth(toDateTime(1419800400), 'Pacific/Pitcairn'); -SELECT toStartOfMonth(toDate(16433), 'Asia/Istanbul'); -- { serverError 43 } +SELECT toStartOfMonth(toDate(16433), 'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toStartOfMonth(toDate(16433)); /* toStartOfQuarter */ @@ -99,7 +99,7 @@ SELECT toStartOfQuarter(toDateTime(1412106600), 'Europe/Paris'); SELECT toStartOfQuarter(toDateTime(1412106600), 'Europe/London'); SELECT toStartOfQuarter(toDateTime(1412106600), 'Asia/Tokyo'); SELECT toStartOfQuarter(toDateTime(1412106600), 'Pacific/Pitcairn'); -SELECT toStartOfQuarter(toDate(16343), 'Asia/Istanbul'); -- { serverError 43 } +SELECT toStartOfQuarter(toDate(16343), 'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toStartOfQuarter(toDate(16343)); /* toStartOfYear */ @@ -110,7 +110,7 @@ SELECT toStartOfYear(toDateTime(1419800400), 'Europe/Paris'); SELECT toStartOfYear(toDateTime(1419800400), 'Europe/London'); SELECT toStartOfYear(toDateTime(1419800400), 'Asia/Tokyo'); SELECT toStartOfYear(toDateTime(1419800400), 'Pacific/Pitcairn'); -SELECT toStartOfYear(toDate(16433), 'Asia/Istanbul'); -- { serverError 43 } +SELECT toStartOfYear(toDate(16433), 'Asia/Istanbul'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toStartOfYear(toDate(16433)); /* toTime */ diff --git a/tests/queries/0_stateless/00203_full_join.sql b/tests/queries/0_stateless/00203_full_join.sql index 43ce4c6da7d..c1b71751742 100644 --- a/tests/queries/0_stateless/00203_full_join.sql +++ b/tests/queries/0_stateless/00203_full_join.sql @@ -26,7 +26,7 @@ SELECT k1, k2, k3, val_t1, val_t2 FROM t1_00203 ANY FULL JOIN t2_00203 USING (k3 SELECT k1, k2, k3, val_t1, val_t2 FROM t1_00203 ANY RIGHT JOIN t2_00203 USING (k3, k1, k2) ORDER BY k1, k2, k3; SET any_join_distinct_right_table_keys = 0; -SELECT k1, k2, k3, val_t1, val_t2 FROM t1_00203 ANY FULL JOIN t2_00203 USING (k3, k1, k2) ORDER BY k1, k2, k3; -- { serverError 48 } +SELECT k1, k2, k3, val_t1, val_t2 FROM t1_00203 ANY FULL JOIN t2_00203 USING (k3, k1, k2) ORDER BY k1, k2, k3; -- { serverError NOT_IMPLEMENTED } SELECT k1, k2, k3, val_t1, val_t2 FROM t1_00203 ANY RIGHT JOIN t2_00203 USING (k3, k1, k2) ORDER BY k1, k2, k3; DROP TABLE t1_00203; diff --git a/tests/queries/0_stateless/00205_scalar_subqueries.sql b/tests/queries/0_stateless/00205_scalar_subqueries.sql index c6cece66244..5c35de10f70 100644 --- a/tests/queries/0_stateless/00205_scalar_subqueries.sql +++ b/tests/queries/0_stateless/00205_scalar_subqueries.sql @@ -7,15 +7,15 @@ SELECT (SELECT toDate('2015-01-02'), 'Hello'); SELECT (SELECT toDate('2015-01-02'), 'Hello') AS x, x, identity((SELECT 1)), 
identity((SELECT 1) AS y); -- SELECT (SELECT uniqState('')); - SELECT ( SELECT throwIf(1 + dummy) ); -- { serverError 395 } + SELECT ( SELECT throwIf(1 + dummy) ); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } -- Scalar subquery with 0 rows must return Null SELECT (SELECT 1 WHERE 0); -- But tuple and array can't be inside nullable -SELECT (SELECT 1, 2 WHERE 0); -- { serverError 125 } -SELECT (SELECT [1] WHERE 0); -- { serverError 125 } +SELECT (SELECT 1, 2 WHERE 0); -- { serverError INCORRECT_RESULT_OF_SCALAR_SUBQUERY } +SELECT (SELECT [1] WHERE 0); -- { serverError INCORRECT_RESULT_OF_SCALAR_SUBQUERY } -- Works for not-empty case SELECT (SELECT 1, 2); SELECT (SELECT [1]); -- Several rows -SELECT (SELECT number FROM numbers(2)); -- { serverError 125 } +SELECT (SELECT number FROM numbers(2)); -- { serverError INCORRECT_RESULT_OF_SCALAR_SUBQUERY } diff --git a/tests/queries/0_stateless/00386_has_column_in_table.sql b/tests/queries/0_stateless/00386_has_column_in_table.sql index 7347293e05b..a54af18f863 100644 --- a/tests/queries/0_stateless/00386_has_column_in_table.sql +++ b/tests/queries/0_stateless/00386_has_column_in_table.sql @@ -21,11 +21,11 @@ SELECT hasColumnInTable('localhost', currentDatabase(), 'has_column_in_table', ' SELECT hasColumnInTable('system', 'one', ''); /* bad queries */ -SELECT hasColumnInTable('', '', ''); -- { serverError 60 } -SELECT hasColumnInTable('', 't', 'c'); -- { serverError 81 } -SELECT hasColumnInTable(currentDatabase(), '', 'c'); -- { serverError 60 } -SELECT hasColumnInTable('d', 't', 's'); -- { serverError 81 } -SELECT hasColumnInTable(currentDatabase(), 't', 's'); -- { serverError 60 } +SELECT hasColumnInTable('', '', ''); -- { serverError UNKNOWN_TABLE } +SELECT hasColumnInTable('', 't', 'c'); -- { serverError UNKNOWN_DATABASE } +SELECT hasColumnInTable(currentDatabase(), '', 'c'); -- { serverError UNKNOWN_TABLE } +SELECT hasColumnInTable('d', 't', 's'); -- { serverError UNKNOWN_DATABASE } +SELECT hasColumnInTable(currentDatabase(), 't', 's'); -- { serverError UNKNOWN_TABLE } DROP TABLE has_column_in_table; diff --git a/tests/queries/0_stateless/00410_aggregation_combinators_with_arenas.sql b/tests/queries/0_stateless/00410_aggregation_combinators_with_arenas.sql index 99091878d90..3eb4c2b1b4a 100644 --- a/tests/queries/0_stateless/00410_aggregation_combinators_with_arenas.sql +++ b/tests/queries/0_stateless/00410_aggregation_combinators_with_arenas.sql @@ -1,4 +1,4 @@ -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; DROP TABLE IF EXISTS arena; CREATE TABLE arena (k UInt8, d String) ENGINE = Memory; INSERT INTO arena SELECT number % 10 AS k, hex(intDiv(number, 10) % 1000) AS d FROM system.numbers LIMIT 10000000; diff --git a/tests/queries/0_stateless/00500_point_in_polygon.sql b/tests/queries/0_stateless/00500_point_in_polygon.sql index 468f4b761d4..2305208cef4 100644 --- a/tests/queries/0_stateless/00500_point_in_polygon.sql +++ b/tests/queries/0_stateless/00500_point_in_polygon.sql @@ -18,14 +18,14 @@ SELECT pointInPolygon((2.1, 2.9), [(0., 0.), (8., 7.), (7., 8.), (0., 0.)]); SELECT pointInPolygon((2.9, 2.1), [(0., 0.), (8., 7.), (7., 8.), (0., 0.)]); SELECT 'pair of lines, different polygons'; -SELECT pointInPolygon((0.1, 0.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError 36 } -SELECT pointInPolygon((1., 1.), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { 
serverError 36 } -SELECT pointInPolygon((0.7, 0.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError 36 } -SELECT pointInPolygon((0.1, 0.7), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError 36 } -SELECT pointInPolygon((1.1, 0.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError 36 } -SELECT pointInPolygon((0.1, 1.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError 36 } -SELECT pointInPolygon((5.0, 5.0), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError 36 } -SELECT pointInPolygon((7.9, 7.9), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError 36 } +SELECT pointInPolygon((0.1, 0.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } +SELECT pointInPolygon((1., 1.), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } +SELECT pointInPolygon((0.7, 0.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } +SELECT pointInPolygon((0.1, 0.7), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } +SELECT pointInPolygon((1.1, 0.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } +SELECT pointInPolygon((0.1, 1.1), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } +SELECT pointInPolygon((5.0, 5.0), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } +SELECT pointInPolygon((7.9, 7.9), [(0.5, 0.), (1.0, 0.), (8.0, 7.5), (7.5, 8.0), (0., 1.), (0., 0.5), (4.5, 5.5), (5.5, 4.5), (0.5, 0.0)]); -- { serverError BAD_ARGUMENTS } SELECT 'complex polygon'; SELECT pointInPolygon((0.05, 0.05), [(0., 1.), (0.2, 0.5), (0.6, 0.5), (0.8, 0.8), (0.8, 0.3), (0.1, 0.3), (0.1, 0.1), (0.8, 0.1), (1.0, 0.0), (8.0, 7.0), (7.0, 8.0), (0., 1.)]); diff --git a/tests/queries/0_stateless/00500_point_in_polygon_bug_3_linestring_rotation_precision.sql b/tests/queries/0_stateless/00500_point_in_polygon_bug_3_linestring_rotation_precision.sql index cc0b164e7b6..0de749bfecd 100644 --- a/tests/queries/0_stateless/00500_point_in_polygon_bug_3_linestring_rotation_precision.sql +++ b/tests/queries/0_stateless/00500_point_in_polygon_bug_3_linestring_rotation_precision.sql @@ -1,7 +1,7 @@ SELECT pointInPolygon((106.6671509, 10.7674952), [(106.667161868227, 10.7674952), (106.667165727127, 10.7675059912261), (106.667170817563, 10.7674904752629), (106.667229225265, 10.7672278502066), (106.667231193621, 10.7672115129572), (106.667229912029, 10.7671951075415), (106.667225430503, 10.767179274157), (106.667217923927, 10.7671646306786), (106.667207685234, 10.7671517485471), (106.667195113975, 10.7671411304688), (106.667180700725, 10.7671331907989), (106.66716500794, 10.7671282393715), (106.666628232995, 
10.7670156787539), (106.666612233649, 10.7670139127584), (106.666596193354, 10.7670152569112), (106.666580711053, 10.7670196610218), (106.666566364856, 10.7670269606408), (106.666553690448, 10.7670368832008), (106.666543161092, 10.767049058194), (106.666535169952, 10.7670630310067), (106.666530015418, 10.7670782798948), (106.666482284259, 10.7672828714379), (106.666480170141, 10.7672985245675), (106.666481048788, 10.7673142953614), (106.666484888609, 10.7673296167758), (106.666491551541, 10.7673439379244), (106.666500798017, 10.7673567438858), (106.666512295576, 10.7673675742178), (106.666525630821, 10.7673760395122), (106.667032331859, 10.7676338521733), (106.6671413386, 10.7676893154858), (106.667371048786, 10.7678061934666), (106.667552760053, 10.7678987010209), (106.667801848625, 10.7680278028917), (106.667817742281, 10.7680340673957), (106.667834579682, 10.7680369577679), (106.66785165264, 10.7680363524383), (106.667868243061, 10.7680322768672), (106.667878683314, 10.7680285412847), (106.667885469819, 10.7680268413536), (106.667892390269, 10.7680258148018), (106.667899378015, 10.7680254715159), (106.667906365761, 10.7680258148018), (106.667913286211, 10.7680268413536), (106.667920072716, 10.7680285412847), (106.667926659921, 10.7680308982244), (106.667932984386, 10.7680338894736), (106.667938985204, 10.7680374862253), (106.667944604583, 10.7680416538412), (106.667949788405, 10.7680463521828), (106.667954486747, 10.7680515360051), (106.667958654362, 10.7680571553826), (106.667962251113, 10.7680631561994), (106.667965242363, 10.7680694806664), (106.667967599303, 10.7680760678724), (106.667969299234, 10.7680828543774), (106.667970926246, 10.7680938227996), (106.667974657027, 10.7681089916695), (106.667981154238, 10.7681231972879), (106.667990189396, 10.7681359400994), (106.668001444773, 10.7681467719897), (106.668014524559, 10.7681553120441), (106.668198488147, 10.7682521458591), (106.669562015793, 10.7689901124345), (106.669614757162, 10.7690820717448), (106.669623023723, 10.7690939566151), (106.669633223154, 10.7691042307472), (106.669645047385, 10.7691125838155), (106.670748051536, 10.7697559307954), (106.670751419717, 10.7697577924329), (106.671035494073, 10.7699063431327), (106.671270162713, 10.7700364834325), (106.67127192876, 10.7700374352053), (106.671437929267, 10.7701243344783), (106.671665917937, 10.7702517637461), (106.67166656035, 10.7702521191025), (106.671943689514, 10.7704038245574), (106.671943806749, 10.7704038886117), (106.6722776446, 10.7705859421916), (106.672278295949, 10.7705862936499), (106.673020324076, 10.7709824352208), (106.673433726727, 10.7712057751884), (106.673694081332, 10.7713489702214), (106.673977066657, 10.7715146655761), (106.674254247937, 10.7716778144336), (106.67440928634, 10.7717698954974), (106.674658478275, 10.7719268836667), (106.674658802254, 10.7719270867325), (106.6748919449, 10.7720724734391), (106.675071660589, 10.7721853602936), (106.675350447469, 10.7723606751059), (106.675350748696, 10.7723608636368), (106.6756252856, 10.7725318758852), (106.675888735092, 10.7726957126602), (106.676114500069, 10.7728361211927), (106.676379504941, 10.7730007692002), (106.67661713771, 10.7731502653527), (106.676617572241, 10.773150536857), (106.676852995814, 10.7732966297465), (106.677284352687, 10.7735807849214), (106.677738143311, 10.7738851794554), (106.677752655777, 10.7738929549383), (106.677768414072, 10.773897724206), (106.677784802596, 10.7738993009456), (106.677801181124, 10.7738976235612), (106.677816909825, 10.7738927575805), 
(106.677831374252, 10.7738848930944), (106.677844009349, 10.7738743373313), (106.677920079221, 10.7737967983562), (106.678239245717, 10.7735243703649), (106.67839926068, 10.7733892116467), (106.678400691571, 10.7733879749217), (106.678515896101, 10.7732860955802), (106.678557979259, 10.7732504310319), (106.67855930664, 10.7732492818517), (106.679033975331, 10.7728295048433), (106.679053201911, 10.772844898411), (106.679632133733, 10.7733262832973), (106.679771732358, 10.7734524450384), (106.679773325229, 10.7734538481348), (106.680011463819, 10.7736582857586), (106.680175801881, 10.7738018862846), (106.680176891116, 10.7738028216402), (106.680320149367, 10.773923712053), (106.680672123374, 10.7742204563391), (106.68094213423, 10.7744504786771), (106.68094233625, 10.7744506502241), (106.68124725775, 10.7747087432576), (106.681247329066, 10.7747088035527), (106.681470746982, 10.7748974804345), (106.681471338135, 10.7748979749973), (106.681840030697, 10.7752035373868), (106.682304929691, 10.7756040772245), (106.682308650112, 10.7756071005185), (106.682312917236, 10.7756103687835), (106.682359764439, 10.7756490693986), (106.682640114944, 10.7758996628849), (106.682644070655, 10.7759029839554), (106.682711710544, 10.7759562859055), (106.682806505954, 10.7760368956153), (106.68280745353, 10.776037689352), (106.683169164535, 10.7763361378178), (106.68363265876, 10.7767252395911), (106.683677875719, 10.7767650291442), (106.683797775698, 10.77688614766), (106.684138558845, 10.7772306328105), (106.68414063031, 10.7772326552454), (106.684827531639, 10.777880369263), (106.685228619785, 10.7782605077038), (106.685228896163, 10.7782607684525), (106.686025996525, 10.7790093622583), (106.686026813787, 10.7790101368229), (106.68658269265, 10.7795369738106), (106.687194479537, 10.7801158277128), (106.688401155505, 10.7812670656457), (106.688401571342, 10.7812674596561), (106.689622367701, 10.7824162362891), (106.690002723257, 10.7827815572149), (106.690002908997, 10.7827817350625), (106.690359062158, 10.7831217027417), (106.690359638585, 10.7831222477508), (106.690747557266, 10.7834855403784), (106.691628272565, 10.7843952548301), (106.692179613338, 10.7849709155958), (106.692179802225, 10.7849711121697), (106.692743910048, 10.7855562574979), (106.693288875836, 10.7861225208133), (106.693601234729, 10.7864484801726), (106.69220838651, 10.7875617536129), (106.692196691453, 10.787573150248), (106.692187444486, 10.7875866094924), (106.692181000965, 10.7876016141149), (106.692177608512, 10.7876175874962), (106.692177397496, 10.7876339157883), (106.692180376026, 10.7876499715041), (106.692186429639, 10.7876651376314), (106.692195325699, 10.7876788313445), (106.692206722334, 10.7876905264015), (106.692220181578, 10.7876997733682), (106.692235186201, 10.7877062168886), (106.692251159582, 10.787709609342), (106.692267487874, 10.7877098203582), (106.69228354359, 10.7877068418281), (106.692298709717, 10.7877007882148), (106.69231240343, 10.7876918921553), (106.693776442708, 10.7865217172423), (106.693788736175, 10.7865096022178), (106.693798269005, 10.7864952137411), (106.693804631934, 10.7864791695437), (106.693807551784, 10.7864621584413), (106.693806903199, 10.7864449107613), (106.693802714026, 10.7864281669878), (106.693795164114, 10.786412645971), (106.693784577601, 10.7863990140651), (106.69340910087, 10.7860071886444), (106.69340897739, 10.7860070600637), (106.692863924954, 10.7854407067139), (106.69229983717, 10.7848555821281), (106.691748435669, 10.7842798579551), (106.691748124777, 10.7842795350934), 
(106.690865834778, 10.7833681940925), (106.690862927107, 10.7833653342196), (106.690473809086, 10.7830009183885), (106.690118035849, 10.7826613133679), (106.689737465891, 10.7822957865149), (106.689736848623, 10.7822951996834), (106.688515950726, 10.7811463275029), (106.687309357068, 10.7799951680976), (106.687309106711, 10.779994930232), (106.686697270266, 10.7794160294802), (106.686141416688, 10.7788892164565), (106.686140461741, 10.7788883114), (106.686140185762, 10.7788880510296), (106.6853430856, 10.7781394574112), (106.684942058447, 10.7777593767781), (106.684941904463, 10.7777592312084), (106.684255979358, 10.7771124377212), (106.683916204215, 10.776768971525), (106.683794256559, 10.7766457845149), (106.68379008676, 10.7766418525893), (106.683741989497, 10.7765995284558), (106.683740519326, 10.7765982647987), (106.683276011394, 10.7762083120217), (106.683275466929, 10.7762078588774), (106.68291395946, 10.77590957835), (106.682818451152, 10.775828362424), (106.682816046951, 10.7758263940715), (106.682749215964, 10.7757737295564), (106.682469581984, 10.775523776542), (106.682467121137, 10.7755216616573), (106.682417839663, 10.775480950083), (106.68241543796, 10.7754790393628), (106.682411856108, 10.7754762959601), (106.681948170223, 10.775076801292), (106.681946953215, 10.7750757728772), (106.681577943952, 10.7747699480145), (106.681354856141, 10.7745815499075), (106.681050071432, 10.7743235726569), (106.680779998801, 10.774093497693), (106.680779672798, 10.7740932214111), (106.680427578845, 10.7737963760106), (106.680284883706, 10.7736759607876), (106.680120811518, 10.7735325925854), (106.680120259999, 10.7735321149047), (106.679882649978, 10.7733281310479), (106.679742564868, 10.7732015296478), (106.67973997054, 10.7731992804165), (106.679159125009, 10.772716304271), (106.679157929246, 10.7727153285815), (106.679083371982, 10.7726556350576), (106.679069423592, 10.7726465921904), (106.679053957365, 10.7726404990091), (106.679037589221, 10.7726375981655), (106.679020970997, 10.7726380051815), (106.679004764489, 10.7726417038483), (106.678989615098, 10.7726485468719), (106.678976126125, 10.772658261739), (106.678449597495, 10.7731239014943), (106.678407514754, 10.773159565689), (106.678406188192, 10.7731607141448), (106.678291034854, 10.7732625482153), (106.678131577851, 10.7733972356454), (106.678131249559, 10.7733975143985), (106.677809116892, 10.7736724741964), (106.677803734254, 10.7736774962862), (106.67777351642, 10.773708297704), (106.677376870851, 10.7734422350384), (106.677376291861, 10.7734418501559), (106.676943701895, 10.7731568826838), (106.676941799819, 10.7731556663352), (106.676705634648, 10.7730091132449), (106.676468020922, 10.7728596290723), (106.676467624617, 10.7728593813034), (106.676202468827, 10.7726946395397), (106.675976718772, 10.7725542402878), (106.675713344944, 10.7723904505946), (106.675438984881, 10.7722195485022), (106.675160330528, 10.7720443170291), (106.674980445983, 10.7719313240966), (106.674980215342, 10.7719311797465), (106.674747119479, 10.7717858222138), (106.674497164595, 10.7716283533947), (106.674495300219, 10.7716272127471), (106.674339180867, 10.7715344896819), (106.674338897981, 10.771534322423), (106.674061493048, 10.7713710419232), (106.674061328848, 10.7713709455279), (106.673777295695, 10.7712046366425), (106.673775349509, 10.7712035319333), (106.673513740027, 10.7710596467179), (106.673513190173, 10.7710593469847), (106.673099330442, 10.7708357600807), (106.673098966779, 10.7708355647753), (106.672357083034, 10.7704395002842), 
(106.672023628724, 10.7702576558632), (106.671746880137, 10.7701061587426), (106.671518215262, 10.7699783515251), (106.671516207112, 10.7699772649622), (106.671350083838, 10.7698903014222), (106.671115399209, 10.7697601522552), (106.671113600766, 10.7697591835329), (106.670830326847, 10.7696110514048), (106.66974820551, 10.7689798847013), (106.66969475177, 10.7688866833063), (106.669685913661, 10.7688741199651), (106.669674918986, 10.7688633930448), (106.669662141606, 10.7688548673033), (106.668277363011, 10.7681053993183), (106.668276514094, 10.7681049461882), (106.668126503268, 10.7680259842551), (106.668125839186, 10.7680237950692), (106.66812072496, 10.7680095017658), (106.668117596648, 10.7680019493532), (106.66811110606, 10.7679882261576), (106.668107252546, 10.7679810167398), (106.668099448104, 10.7679679958141), (106.668094906497, 10.767961198818), (106.668085863361, 10.7679490055608), (106.668080677403, 10.7679426864524), (106.668070482664, 10.7679314382913), (106.668064702296, 10.7679256579236), (106.668053454135, 10.7679154631847), (106.668047135024, 10.7679102772246), (106.668034941766, 10.7679012340887), (106.668028144776, 10.7678966924853), (106.668015123851, 10.7678888880428), (106.668007914429, 10.7678850345264), (106.667994191233, 10.7678785439383), (106.667986638821, 10.7678754156266), (106.667972345518, 10.7678703014008), (106.667964522841, 10.7678679284177), (106.667949797082, 10.7678642398071), (106.667941779481, 10.7678626450072), (106.667926763083, 10.767860417535), (106.667918627772, 10.7678596162768), (106.667903465352, 10.7678588713949), (106.667895290678, 10.7678588713949), (106.667880128258, 10.7678596162768), (106.667871992947, 10.767860417535), (106.667856976549, 10.7678626450072), (106.667848958948, 10.7678642398071), (106.667848526162, 10.7678643482145), (106.667629153721, 10.7677506481269), (106.667628614008, 10.7677503708842), (106.66744662399, 10.7676577214203), (106.667216888626, 10.7675408306262), (106.667161868227, 10.7675128359024), (106.667012119458, 10.7674366427911), (106.666659357657, 10.7672571553777), (106.666673753979, 10.7671954479766), (106.667048293768, 10.7672739882109), (106.6670141, 10.7674274)]); SELECT pointInPolygon((106.677085876465,10.7744951248169), 
[(106.667161868227,10.7675128359024),(106.667165727127,10.7675059912261),(106.667170817563,10.7674904752629),(106.667229225265,10.7672278502066),(106.667231193621,10.7672115129572),(106.667229912029,10.7671951075415),(106.667225430503,10.767179274157),(106.667217923927,10.7671646306786),(106.667207685234,10.7671517485471),(106.667195113975,10.7671411304688),(106.667180700725,10.7671331907989),(106.66716500794,10.7671282393715),(106.666628232995,10.7670156787539),(106.666612233649,10.7670139127584),(106.666596193354,10.7670152569112),(106.666580711053,10.7670196610218),(106.666566364856,10.7670269606408),(106.666553690448,10.7670368832008),(106.666543161092,10.767049058194),(106.666535169952,10.7670630310067),(106.666530015418,10.7670782798948),(106.666482284259,10.7672828714379),(106.666480170141,10.7672985245675),(106.666481048788,10.7673142953614),(106.666484888609,10.7673296167758),(106.666491551541,10.7673439379244),(106.666500798017,10.7673567438858),(106.666512295576,10.7673675742178),(106.666525630821,10.7673760395122),(106.667032331859,10.7676338521733),(106.6671413386,10.7676893154858),(106.667371048786,10.7678061934666),(106.667552760053,10.7678987010209),(106.667801848625,10.7680278028917),(106.667817742281,10.7680340673957),(106.667834579682,10.7680369577679),(106.66785165264,10.7680363524383),(106.667868243061,10.7680322768672),(106.667878683314,10.7680285412847),(106.667885469819,10.7680268413536),(106.667892390269,10.7680258148018),(106.667899378015,10.7680254715159),(106.667906365761,10.7680258148018),(106.667913286211,10.7680268413536),(106.667920072716,10.7680285412847),(106.667926659921,10.7680308982244),(106.667932984386,10.7680338894736),(106.667938985204,10.7680374862253),(106.667944604583,10.7680416538412),(106.667949788405,10.7680463521828),(106.667954486747,10.7680515360051),(106.667958654362,10.7680571553826),(106.667962251113,10.7680631561994),(106.667965242363,10.7680694806664),(106.667967599303,10.7680760678724),(106.667969299234,10.7680828543774),(106.667970926246,10.7680938227996),(106.667974657027,10.7681089916695),(106.667981154238,10.7681231972879),(106.667990189396,10.7681359400994),(106.668001444773,10.7681467719897),(106.668014524559,10.7681553120441),(106.668198488147,10.7682521458591),(106.669562015793,10.7689901124345),(106.669614757162,10.7690820717448),(106.669623023723,10.7690939566151),(106.669633223154,10.7691042307472),(106.669645047385,10.7691125838155),(106.670748051536,10.7697559307954),(106.670751419717,10.7697577924329),(106.671035494073,10.7699063431327),(106.671270162713,10.7700364834325),(106.67127192876,10.7700374352053),(106.671437929267,10.7701243344783),(106.671665917937,10.7702517637461),(106.67166656035,10.7702521191025),(106.671943689514,10.7704038245574),(106.671943806749,10.7704038886117),(106.6722776446,10.7705859421916),(106.672278295949,10.7705862936499),(106.673020324076,10.7709824352208),(106.673433726727,10.7712057751884),(106.673694081332,10.7713489702214),(106.673977066657,10.7715146655761),(106.674254247937,10.7716778144336),(106.67440928634,10.7717698954974),(106.674658478275,10.7719268836667),(106.674658802254,10.7719270867325),(106.6748919449,10.7720724734391),(106.675071660589,10.7721853602936),(106.675350447469,10.7723606751059),(106.675350748696,10.7723608636368),(106.6756252856,10.7725318758852),(106.675888735092,10.7726957126602),(106.676114500069,10.7728361211927),(106.676379504941,10.7730007692002),(106.67661713771,10.7731502653527),(106.676617572241,10.773150536857),(106.676852995814,10.7732966297465),(106.67
7284352687,10.7735807849214),(106.677738143311,10.7738851794554),(106.677752655777,10.7738929549383),(106.677768414072,10.773897724206),(106.677784802596,10.7738993009456),(106.677801181124,10.7738976235612),(106.677816909825,10.7738927575805),(106.677831374252,10.7738848930944),(106.677844009349,10.7738743373313),(106.677920079221,10.7737967983562),(106.678239245717,10.7735243703649),(106.67839926068,10.7733892116467),(106.678400691571,10.7733879749217),(106.678515896101,10.7732860955802),(106.678557979259,10.7732504310319),(106.67855930664,10.7732492818517),(106.679033975331,10.7728295048433),(106.679053201911,10.772844898411),(106.679632133733,10.7733262832973),(106.679771732358,10.7734524450384),(106.679773325229,10.7734538481348),(106.680011463819,10.7736582857586),(106.680175801881,10.7738018862846),(106.680176891116,10.7738028216402),(106.680320149367,10.773923712053),(106.680672123374,10.7742204563391),(106.68094213423,10.7744504786771),(106.68094233625,10.7744506502241),(106.68124725775,10.7747087432576),(106.681247329066,10.7747088035527),(106.681470746982,10.7748974804345),(106.681471338135,10.7748979749973),(106.681840030697,10.7752035373868),(106.682304929691,10.7756040772245),(106.682308650112,10.7756071005185),(106.682312917236,10.7756103687835),(106.682359764439,10.7756490693986),(106.682640114944,10.7758996628849),(106.682644070655,10.7759029839554),(106.682711710544,10.7759562859055),(106.682806505954,10.7760368956153),(106.68280745353,10.776037689352),(106.683169164535,10.7763361378178),(106.68363265876,10.7767252395911),(106.683677875719,10.7767650291442),(106.683797775698,10.77688614766),(106.684138558845,10.7772306328105),(106.68414063031,10.7772326552454),(106.684827531639,10.777880369263),(106.685228619785,10.7782605077038),(106.685228896163,10.7782607684525),(106.686025996525,10.7790093622583),(106.686026813787,10.7790101368229),(106.68658269265,10.7795369738106),(106.687194479537,10.7801158277128),(106.688401155505,10.7812670656457),(106.688401571342,10.7812674596561),(106.689622367701,10.7824162362891),(106.690002723257,10.7827815572149),(106.690002908997,10.7827817350625),(106.690359062158,10.7831217027417),(106.690359638585,10.7831222477508),(106.690747557266,10.7834855403784),(106.691628272565,10.7843952548301),(106.692179613338,10.7849709155958),(106.692179802225,10.7849711121697),(106.692743910048,10.7855562574979),(106.693288875836,10.7861225208133),(106.693601234729,10.7864484801726),(106.69220838651,10.7875617536129),(106.692196691453,10.787573150248),(106.692187444486,10.7875866094924),(106.692181000965,10.7876016141149),(106.692177608512,10.7876175874962),(106.692177397496,10.7876339157883),(106.692180376026,10.7876499715041),(106.692186429639,10.7876651376314),(106.692195325699,10.7876788313445),(106.692206722334,10.7876905264015),(106.692220181578,10.7876997733682),(106.692235186201,10.7877062168886),(106.692251159582,10.787709609342),(106.692267487874,10.7877098203582),(106.69228354359,10.7877068418281),(106.692298709717,10.7877007882148),(106.69231240343,10.7876918921553),(106.693776442708,10.7865217172423),(106.693788736175,10.7865096022178),(106.693798269005,10.7864952137411),(106.693804631934,10.7864791695437),(106.693807551784,10.7864621584413),(106.693806903199,10.7864449107613),(106.693802714026,10.7864281669878),(106.693795164114,10.786412645971),(106.693784577601,10.7863990140651),(106.69340910087,10.7860071886444),(106.69340897739,10.7860070600637),(106.692863924954,10.7854407067139),(106.69229983717,10.7848555821281),(106.691748435669,10.78
42798579551),(106.691748124777,10.7842795350934),(106.690865834778,10.7833681940925),(106.690862927107,10.7833653342196),(106.690473809086,10.7830009183885),(106.690118035849,10.7826613133679),(106.689737465891,10.7822957865149),(106.689736848623,10.7822951996834),(106.688515950726,10.7811463275029),(106.687309357068,10.7799951680976),(106.687309106711,10.779994930232),(106.686697270266,10.7794160294802),(106.686141416688,10.7788892164565),(106.686140461741,10.7788883114),(106.686140185762,10.7788880510296),(106.6853430856,10.7781394574112),(106.684942058447,10.7777593767781),(106.684941904463,10.7777592312084),(106.684255979358,10.7771124377212),(106.683916204215,10.776768971525),(106.683794256559,10.7766457845149),(106.68379008676,10.7766418525893),(106.683741989497,10.7765995284558),(106.683740519326,10.7765982647987),(106.683276011394,10.7762083120217),(106.683275466929,10.7762078588774),(106.68291395946,10.77590957835),(106.682818451152,10.775828362424),(106.682816046951,10.7758263940715),(106.682749215964,10.7757737295564),(106.682469581984,10.775523776542),(106.682467121137,10.7755216616573),(106.682417839663,10.775480950083),(106.68241543796,10.7754790393628),(106.682411856108,10.7754762959601),(106.681948170223,10.775076801292),(106.681946953215,10.7750757728772),(106.681577943952,10.7747699480145),(106.681354856141,10.7745815499075),(106.681050071432,10.7743235726569),(106.680779998801,10.774093497693),(106.680779672798,10.7740932214111),(106.680427578845,10.7737963760106),(106.680284883706,10.7736759607876),(106.680120811518,10.7735325925854),(106.680120259999,10.7735321149047),(106.679882649978,10.7733281310479),(106.679742564868,10.7732015296478),(106.67973997054,10.7731992804165),(106.679159125009,10.772716304271),(106.679157929246,10.7727153285815),(106.679083371982,10.7726556350576),(106.679069423592,10.7726465921904),(106.679053957365,10.7726404990091),(106.679037589221,10.7726375981655),(106.679020970997,10.7726380051815),(106.679004764489,10.7726417038483),(106.678989615098,10.7726485468719),(106.678976126125,10.772658261739),(106.678449597495,10.7731239014943),(106.678407514754,10.773159565689),(106.678406188192,10.7731607141448),(106.678291034854,10.7732625482153),(106.678131577851,10.7733972356454),(106.678131249559,10.7733975143985),(106.677809116892,10.7736724741964),(106.677803734254,10.7736774962862),(106.67777351642,10.773708297704),(106.677376870851,10.7734422350384),(106.677376291861,10.7734418501559),(106.676943701895,10.7731568826838),(106.676941799819,10.7731556663352),(106.676705634648,10.7730091132449),(106.676468020922,10.7728596290723),(106.676467624617,10.7728593813034),(106.676202468827,10.7726946395397),(106.675976718772,10.7725542402878),(106.675713344944,10.7723904505946),(106.675438984881,10.7722195485022),(106.675160330528,10.7720443170291),(106.674980445983,10.7719313240966),(106.674980215342,10.7719311797465),(106.674747119479,10.7717858222138),(106.674497164595,10.7716283533947),(106.674495300219,10.7716272127471),(106.674339180867,10.7715344896819),(106.674338897981,10.771534322423),(106.674061493048,10.7713710419232),(106.674061328848,10.7713709455279),(106.673777295695,10.7712046366425),(106.673775349509,10.7712035319333),(106.673513740027,10.7710596467179),(106.673513190173,10.7710593469847),(106.673099330442,10.7708357600807),(106.673098966779,10.7708355647753),(106.672357083034,10.7704395002842),(106.672023628724,10.7702576558632),(106.671746880137,10.7701061587426),(106.671518215262,10.7699783515251),(106.671516207112,10.7699772649622),(1
06.671350083838,10.7698903014222),(106.671115399209,10.7697601522552),(106.671113600766,10.7697591835329),(106.670830326847,10.7696110514048),(106.66974820551,10.7689798847013),(106.66969475177,10.7688866833063),(106.669685913661,10.7688741199651),(106.669674918986,10.7688633930448),(106.669662141606,10.7688548673033),(106.668277363011,10.7681053993183),(106.668276514094,10.7681049461882),(106.668126503268,10.7680259842551),(106.668125839186,10.7680237950692),(106.66812072496,10.7680095017658),(106.668117596648,10.7680019493532),(106.66811110606,10.7679882261576),(106.668107252546,10.7679810167398),(106.668099448104,10.7679679958141),(106.668094906497,10.767961198818),(106.668085863361,10.7679490055608),(106.668080677403,10.7679426864524),(106.668070482664,10.7679314382913),(106.668064702296,10.7679256579236),(106.668053454135,10.7679154631847),(106.668047135024,10.7679102772246),(106.668034941766,10.7679012340887),(106.668028144776,10.7678966924853),(106.668015123851,10.7678888880428),(106.668007914429,10.7678850345264),(106.667994191233,10.7678785439383),(106.667986638821,10.7678754156266),(106.667972345518,10.7678703014008),(106.667964522841,10.7678679284177),(106.667949797082,10.7678642398071),(106.667941779481,10.7678626450072),(106.667926763083,10.767860417535),(106.667918627772,10.7678596162768),(106.667903465352,10.7678588713949),(106.667895290678,10.7678588713949),(106.667880128258,10.7678596162768),(106.667871992947,10.767860417535),(106.667856976549,10.7678626450072),(106.667848958948,10.7678642398071),(106.667848526162,10.7678643482145),(106.667629153721,10.7677506481269),(106.667628614008,10.7677503708842),(106.66744662399,10.7676577214203),(106.667216888626,10.7675408306262),(106.667161868227,10.7675128359024),(106.667012119458,10.7674366427911),(106.666659357657,10.7672571553777),(106.666673753979,10.7671954479766),(106.667048293768,10.7672739882109),(106.667012119458,10.7674366427911)] -); -- { serverError 36 } +); -- { serverError BAD_ARGUMENTS } SET validate_polygons = 0; diff --git a/tests/queries/0_stateless/00514_interval_operators.sql b/tests/queries/0_stateless/00514_interval_operators.sql index f9f3abbdb54..e8f03cb4fb8 100644 --- a/tests/queries/0_stateless/00514_interval_operators.sql +++ b/tests/queries/0_stateless/00514_interval_operators.sql @@ -20,8 +20,8 @@ SELECT (toDateTime64('2000-01-01 12:00:00.678', 3) - INTERVAL 12345 MILLISECOND) SELECT (toDateTime64('2000-01-01 12:00:00.67898', 5) - INTERVAL 12345 MILLISECOND) x, toTypeName(x); SELECT (toDateTime64('2000-01-01 12:00:00.67', 2) - INTERVAL 12345 MILLISECOND) x, toTypeName(x); -select toDateTime64('3000-01-01 12:00:00.12345', 0) + interval 0 nanosecond; -- { serverError 407 } +select toDateTime64('3000-01-01 12:00:00.12345', 0) + interval 0 nanosecond; -- { serverError DECIMAL_OVERFLOW } select toDateTime64('3000-01-01 12:00:00.12345', 0) + interval 0 microsecond; -- Check that the error is thrown during typechecking, not execution. 
-select materialize(toDate('2000-01-01')) + interval 1 nanosecond from numbers(0); -- { serverError 43 } +select materialize(toDate('2000-01-01')) + interval 1 nanosecond from numbers(0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/00515_gcd_lcm.sql b/tests/queries/0_stateless/00515_gcd_lcm.sql index 67fab1c9d59..829a865369e 100644 --- a/tests/queries/0_stateless/00515_gcd_lcm.sql +++ b/tests/queries/0_stateless/00515_gcd_lcm.sql @@ -24,18 +24,18 @@ select lcm(2147483647, 2147483646); select lcm(4611686011984936962, 2147483647); select lcm(-2147483648, 1); -- test gcd float -select gcd(1280.1, 1024.1); -- { serverError 43 } -select gcd(11.1, 121.1); -- { serverError 43 } -select gcd(-256.1, 64.1); -- { serverError 43 } -select gcd(1.1, 1.1); -- { serverError 43 } -select gcd(4.1, 2.1); -- { serverError 43 } -select gcd(15.1, 49.1); -- { serverError 43 } -select gcd(255.1, 254.1); -- { serverError 43 } +select gcd(1280.1, 1024.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select gcd(11.1, 121.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select gcd(-256.1, 64.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select gcd(1.1, 1.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select gcd(4.1, 2.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select gcd(15.1, 49.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select gcd(255.1, 254.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -- test lcm float -select lcm(1280.1, 1024.1); -- { serverError 43 } -select lcm(11.1, 121.1); -- { serverError 43 } -select lcm(-256.1, 64.1); -- { serverError 43 } -select lcm(1.1, 1.1); -- { serverError 43 } -select lcm(4.1, 2.1); -- { serverError 43 } -select lcm(15.1, 49.1); -- { serverError 43 } -select lcm(255.1, 254.1); -- { serverError 43 } +select lcm(1280.1, 1024.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select lcm(11.1, 121.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select lcm(-256.1, 64.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select lcm(1.1, 1.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select lcm(4.1, 2.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select lcm(15.1, 49.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select lcm(255.1, 254.1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/00555_hasAll_hasAny.sql b/tests/queries/0_stateless/00555_hasAll_hasAny.sql index c8a6c3cecbd..cf037d1ef2b 100644 --- a/tests/queries/0_stateless/00555_hasAll_hasAny.sql +++ b/tests/queries/0_stateless/00555_hasAll_hasAny.sql @@ -39,10 +39,10 @@ select hasAny(['a', 'b'], ['a', 'c']); select hasAll(['a', 'b'], ['a', 'c']); select '-'; -select hasAny([1], ['a']); -- { serverError 386 } -select hasAll([1], ['a']); -- { serverError 386 } -select hasAll([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } -select hasAny([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } +select hasAny([1], ['a']); -- { serverError NO_COMMON_TYPE } +select hasAll([1], ['a']); -- { serverError NO_COMMON_TYPE } +select hasAll([[1, 2], [3, 4]], ['a', 'c']); -- { serverError NO_COMMON_TYPE } +select hasAny([[1, 2], [3, 4]], ['a', 'c']); -- { serverError NO_COMMON_TYPE } select '-'; select hasAll([[1, 2], [3, 4]], [[1, 2], [3, 5]]); diff --git a/tests/queries/0_stateless/00555_hasSubstr.sql b/tests/queries/0_stateless/00555_hasSubstr.sql index 5f90a69c546..25af5a64865 100644 --- a/tests/queries/0_stateless/00555_hasSubstr.sql +++ b/tests/queries/0_stateless/00555_hasSubstr.sql @@ -25,8 +25,8 @@ select hasSubstr(['a', 'b'], ['a', 'c']); 
select hasSubstr(['a', 'c', 'b'], ['a', 'c']); select '-'; -select hasSubstr([1], ['a']); -- { serverError 386 } -select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); -- { serverError 386 } +select hasSubstr([1], ['a']); -- { serverError NO_COMMON_TYPE } +select hasSubstr([[1, 2], [3, 4]], ['a', 'c']); -- { serverError NO_COMMON_TYPE } select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4]]); select hasSubstr([[1, 2], [3, 4], [5, 8]], [[3, 4], [5, 8]]); select hasSubstr([[1, 2], [3, 4], [5, 8]], [[1, 2], [5, 8]]); diff --git a/tests/queries/0_stateless/00561_storage_join.sql b/tests/queries/0_stateless/00561_storage_join.sql index 6411628bbde..1603e85f75d 100644 --- a/tests/queries/0_stateless/00561_storage_join.sql +++ b/tests/queries/0_stateless/00561_storage_join.sql @@ -36,7 +36,7 @@ SEMI LEFT JOIN joinbug_join using id2; SELECT * FROM ( SELECT toUInt32(11) AS id2 ) AS js1 SEMI LEFT JOIN joinbug_join USING (id2); -- can't convert right side in case on storage join -SELECT * FROM ( SELECT toInt64(11) AS id2 ) AS js1 SEMI LEFT JOIN joinbug_join USING (id2); -- { serverError 53, 386 } +SELECT * FROM ( SELECT toInt64(11) AS id2 ) AS js1 SEMI LEFT JOIN joinbug_join USING (id2); -- { serverError TYPE_MISMATCH, NO_COMMON_TYPE } DROP TABLE joinbug; DROP TABLE joinbug_join; diff --git a/tests/queries/0_stateless/00578_merge_table_sampling.sql b/tests/queries/0_stateless/00578_merge_table_sampling.sql index 3b5cdd3db47..03f57792f71 100644 --- a/tests/queries/0_stateless/00578_merge_table_sampling.sql +++ b/tests/queries/0_stateless/00578_merge_table_sampling.sql @@ -4,7 +4,7 @@ DROP TABLE IF EXISTS numbers2; CREATE TABLE numbers1 ENGINE = Memory AS SELECT number FROM numbers(1000); CREATE TABLE numbers2 ENGINE = Memory AS SELECT number FROM numbers(1000); -SELECT * FROM merge(currentDatabase(), '^numbers\\d+$') SAMPLE 0.1; -- { serverError 141 } +SELECT * FROM merge(currentDatabase(), '^numbers\\d+$') SAMPLE 0.1; -- { serverError SAMPLING_NOT_SUPPORTED } DROP TABLE numbers1; DROP TABLE numbers2; diff --git a/tests/queries/0_stateless/00578_merge_table_shadow_virtual_column.sql b/tests/queries/0_stateless/00578_merge_table_shadow_virtual_column.sql index e729bfdf188..0cd92591ad9 100644 --- a/tests/queries/0_stateless/00578_merge_table_shadow_virtual_column.sql +++ b/tests/queries/0_stateless/00578_merge_table_shadow_virtual_column.sql @@ -4,7 +4,7 @@ DROP TABLE IF EXISTS numbers2; CREATE TABLE numbers1 ENGINE = Memory AS SELECT number as _table FROM numbers(1000); CREATE TABLE numbers2 ENGINE = Memory AS SELECT number as _table FROM numbers(1000); -SELECT count() FROM merge(currentDatabase(), '^numbers\\d+$') WHERE _table='numbers1'; -- { serverError 53 } +SELECT count() FROM merge(currentDatabase(), '^numbers\\d+$') WHERE _table='numbers1'; -- { serverError TYPE_MISMATCH } SELECT count() FROM merge(currentDatabase(), '^numbers\\d+$') WHERE _table=1; DROP TABLE numbers1; diff --git a/tests/queries/0_stateless/00640_endsWith.sql b/tests/queries/0_stateless/00640_endsWith.sql index c497f529954..719b6a3014b 100644 --- a/tests/queries/0_stateless/00640_endsWith.sql +++ b/tests/queries/0_stateless/00640_endsWith.sql @@ -13,5 +13,5 @@ SELECT COUNT() FROM endsWith_test WHERE endsWith(S1, S1); SELECT COUNT() FROM endsWith_test WHERE endsWith(S1, S2); SELECT COUNT() FROM endsWith_test WHERE endsWith(S2, S3); -SELECT endsWith([], 'str'); -- { serverError 43 } +SELECT endsWith([], 'str'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } DROP TABLE endsWith_test; diff --git 
a/tests/queries/0_stateless/00647_multiply_aggregation_state.sql b/tests/queries/0_stateless/00647_multiply_aggregation_state.sql index b0361458221..0cfdd27f930 100644 --- a/tests/queries/0_stateless/00647_multiply_aggregation_state.sql +++ b/tests/queries/0_stateless/00647_multiply_aggregation_state.sql @@ -21,6 +21,6 @@ SELECT groupArrayMerge(y * 5) FROM (SELECT groupArrayState(x) AS y FROM (SELECT SELECT groupArrayMerge(2)(y * 5) FROM (SELECT groupArrayState(2)(x) AS y FROM (SELECT 1 AS x)); SELECT groupUniqArrayMerge(y * 5) FROM (SELECT groupUniqArrayState(x) AS y FROM (SELECT 1 AS x)); -SELECT sumMerge(y * a) FROM (SELECT a, sumState(b) AS y FROM mult_aggregation GROUP BY a); -- { serverError 44} +SELECT sumMerge(y * a) FROM (SELECT a, sumState(b) AS y FROM mult_aggregation GROUP BY a); -- { serverError ILLEGAL_COLUMN} DROP TABLE IF EXISTS mult_aggregation; diff --git a/tests/queries/0_stateless/00653_running_difference.sql b/tests/queries/0_stateless/00653_running_difference.sql index d210e04a3a4..d2858a938cd 100644 --- a/tests/queries/0_stateless/00653_running_difference.sql +++ b/tests/queries/0_stateless/00653_running_difference.sql @@ -1,4 +1,4 @@ -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; select runningDifference(x) from (select arrayJoin([0, 1, 5, 10]) as x); select '-'; select runningDifference(x) from (select arrayJoin([2, Null, 3, Null, 10]) as x); diff --git a/tests/queries/0_stateless/00692_if_exception_code.sql b/tests/queries/0_stateless/00692_if_exception_code.sql index f9d06f2e3a5..bdca46be603 100644 --- a/tests/queries/0_stateless/00692_if_exception_code.sql +++ b/tests/queries/0_stateless/00692_if_exception_code.sql @@ -1,6 +1,6 @@ SET send_logs_level = 'fatal'; -SELECT if(); -- { serverError 42 } -SELECT if(1); -- { serverError 42 } -SELECT if(1, 1); -- { serverError 42 } +SELECT if(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT if(1); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT if(1, 1); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } SELECT if(1, 1, 1); diff --git a/tests/queries/0_stateless/00698_validate_array_sizes_for_nested.sql b/tests/queries/0_stateless/00698_validate_array_sizes_for_nested.sql index ec238797dca..a1fe531e6d8 100644 --- a/tests/queries/0_stateless/00698_validate_array_sizes_for_nested.sql +++ b/tests/queries/0_stateless/00698_validate_array_sizes_for_nested.sql @@ -3,7 +3,7 @@ SET send_logs_level = 'fatal'; DROP TABLE IF EXISTS mergetree_00698; CREATE TABLE mergetree_00698 (k UInt32, `n.x` Array(UInt64), `n.y` Array(UInt64)) ENGINE = MergeTree ORDER BY k; -INSERT INTO mergetree_00698 VALUES (3, [], [1, 2, 3]), (1, [111], []), (2, [], []); -- { serverError 190 } +INSERT INTO mergetree_00698 VALUES (3, [], [1, 2, 3]), (1, [111], []), (2, [], []); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } SELECT * FROM mergetree_00698; INSERT INTO mergetree_00698 VALUES (3, [4, 5, 6], [1, 2, 3]), (1, [111], [222]), (2, [], []); diff --git a/tests/queries/0_stateless/00698_validate_array_sizes_for_nested_kshvakov.sql b/tests/queries/0_stateless/00698_validate_array_sizes_for_nested_kshvakov.sql index 010d53dbcac..6533f55c82a 100644 --- a/tests/queries/0_stateless/00698_validate_array_sizes_for_nested_kshvakov.sql +++ b/tests/queries/0_stateless/00698_validate_array_sizes_for_nested_kshvakov.sql @@ -11,7 +11,7 @@ CREATE TABLE Issue_2231_Invalid_Nested_Columns_Size ( PARTITION BY tuple() ORDER BY Date; -INSERT INTO Issue_2231_Invalid_Nested_Columns_Size VALUES (today(), 
[2,2], [1]), (today(), [2,2], [1, 1]); -- { serverError 190 } +INSERT INTO Issue_2231_Invalid_Nested_Columns_Size VALUES (today(), [2,2], [1]), (today(), [2,2], [1, 1]); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } SELECT * FROM Issue_2231_Invalid_Nested_Columns_Size; DROP TABLE Issue_2231_Invalid_Nested_Columns_Size; diff --git a/tests/queries/0_stateless/00700_decimal_aggregates.sql b/tests/queries/0_stateless/00700_decimal_aggregates.sql index 6ca37e06918..a59bfb76d96 100644 --- a/tests/queries/0_stateless/00700_decimal_aggregates.sql +++ b/tests/queries/0_stateless/00700_decimal_aggregates.sql @@ -105,9 +105,9 @@ SELECT stddevPop(toFloat64(a)), stddevPop(toFloat64(b)), stddevPop(toFloat64(c)) SELECT stddevSamp(a) AS da, stddevSamp(b) AS db, stddevSamp(c) AS dc, toTypeName(da), toTypeName(db), toTypeName(dc) FROM decimal; SELECT stddevSamp(toFloat64(a)), stddevSamp(toFloat64(b)), stddevSamp(toFloat64(c)) FROM decimal; -SELECT covarPop(a, a), covarPop(b, b), covarPop(c, c) FROM decimal; -- { serverError 43 } -SELECT covarSamp(a, a), covarSamp(b, b), covarSamp(c, c) FROM decimal; -- { serverError 43 } -SELECT corr(a, a), corr(b, b), corr(c, c) FROM decimal; -- { serverError 43 } +SELECT covarPop(a, a), covarPop(b, b), covarPop(c, c) FROM decimal; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT covarSamp(a, a), covarSamp(b, b), covarSamp(c, c) FROM decimal; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT corr(a, a), corr(b, b), corr(c, c) FROM decimal; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT 1 LIMIT 0; DROP TABLE decimal; diff --git a/tests/queries/0_stateless/00700_decimal_arithm.sql b/tests/queries/0_stateless/00700_decimal_arithm.sql index d24b593dac1..8eaed345a0e 100644 --- a/tests/queries/0_stateless/00700_decimal_arithm.sql +++ b/tests/queries/0_stateless/00700_decimal_arithm.sql @@ -26,10 +26,10 @@ INSERT INTO decimal (a, b, c, d, e, f, g, h, i, j, k, l, m, n, o) VALUES (-42, - SELECT a + a, a - a, a * a, a / a, intDiv(a, a), intDivOrZero(a, a) FROM decimal WHERE a = 42; SELECT b + b, b - b, b * b, b / b, intDiv(b, b), intDivOrZero(b, b) FROM decimal WHERE b = 42; SELECT c + c, c - c, c * c, c / c, intDiv(c, c), intDivOrZero(c, c) FROM decimal WHERE c = 42; -SELECT e + e, e - e, e * e, e / e, intDiv(e, e), intDivOrZero(e, e) FROM decimal WHERE e > 0; -- { serverError 69 } -SELECT f + f, f - f, f * f, f / f, intDiv(f, f), intDivOrZero(f, f) FROM decimal WHERE f > 0; -- { serverError 69 } +SELECT e + e, e - e, e * e, e / e, intDiv(e, e), intDivOrZero(e, e) FROM decimal WHERE e > 0; -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT f + f, f - f, f * f, f / f, intDiv(f, f), intDivOrZero(f, f) FROM decimal WHERE f > 0; -- { serverError ARGUMENT_OUT_OF_BOUND } SELECT g + g, g - g, g * g, g / g, intDiv(g, g), intDivOrZero(g, g) FROM decimal WHERE g > 0; -SELECT h + h, h - h, h * h, h / h, intDiv(h, h), intDivOrZero(h, h) FROM decimal WHERE h > 0; -- { serverError 407 } +SELECT h + h, h - h, h * h, h / h, intDiv(h, h), intDivOrZero(h, h) FROM decimal WHERE h > 0; -- { serverError DECIMAL_OVERFLOW } SELECT h + h, h - h FROM decimal WHERE h > 0; SELECT i + i, i - i, i * i, i / i, intDiv(i, i), intDivOrZero(i, i) FROM decimal WHERE i > 0; SELECT i + i, i - i FROM decimal WHERE i > 0; @@ -38,7 +38,7 @@ SELECT j + j, j - j, j * j, j / j, intDiv(j, j), intDivOrZero(j, j) FROM decimal SELECT a + 21, a - 21, a - 84, a * 21, a * -21, a / 21, a / 84, intDiv(a, 21), intDivOrZero(a, 84) FROM decimal WHERE a = 42; SELECT b + 21, b - 21, b - 84, b * 21, b * -21, b / 21, b / 84, 
intDiv(b, 21), intDivOrZero(b, 84) FROM decimal WHERE b = 42; SELECT c + 21, c - 21, c - 84, c * 21, c * -21, c / 21, c / 84, intDiv(c, 21), intDivOrZero(c, 84) FROM decimal WHERE c = 42; -SELECT e + 21, e - 21, e - 84, e * 21, e * -21, e / 21, e / 84 FROM decimal WHERE e > 0; -- { serverError 407 } +SELECT e + 21, e - 21, e - 84, e * 21, e * -21, e / 21, e / 84 FROM decimal WHERE e > 0; -- { serverError DECIMAL_OVERFLOW } SELECT f + 21, f - 21, f - 84, f * 21, f * -21, f / 21, f / 84 FROM decimal WHERE f > 0; SELECT g + 21, g - 21, g - 84, g * 21, g * -21, g / 21, g / 84, intDiv(g, 21), intDivOrZero(g, 84) FROM decimal WHERE g > 0; SELECT h + 21, h - 21, h - 84, h * 21, h * -21, h / 21, h / 84, intDiv(h, 21), intDivOrZero(h, 84) FROM decimal WHERE h > 0; @@ -48,10 +48,10 @@ SELECT j + 21, j - 21, j - 84, j * 21, j * -21, j / 21, j / 84, intDiv(j, 21), i SELECT 21 + a, 21 - a, 84 - a, 21 * a, -21 * a, 21 / a, 84 / a, intDiv(21, a), intDivOrZero(84, a) FROM decimal WHERE a = 42; SELECT 21 + b, 21 - b, 84 - b, 21 * b, -21 * b, 21 / b, 84 / b, intDiv(21, b), intDivOrZero(84, b) FROM decimal WHERE b = 42; SELECT 21 + c, 21 - c, 84 - c, 21 * c, -21 * c, 21 / c, 84 / c, intDiv(21, c), intDivOrZero(84, c) FROM decimal WHERE c = 42; -SELECT 21 + e, 21 - e, 84 - e, 21 * e, -21 * e, 21 / e, 84 / e FROM decimal WHERE e > 0; -- { serverError 407 } +SELECT 21 + e, 21 - e, 84 - e, 21 * e, -21 * e, 21 / e, 84 / e FROM decimal WHERE e > 0; -- { serverError DECIMAL_OVERFLOW } SELECT 21 + f, 21 - f, 84 - f, 21 * f, -21 * f, 21 / f, 84 / f FROM decimal WHERE f > 0; SELECT 21 + g, 21 - g, 84 - g, 21 * g, -21 * g, 21 / g, 84 / g, intDiv(21, g), intDivOrZero(84, g) FROM decimal WHERE g > 0; -SELECT 21 + h, 21 - h, 84 - h, 21 * h, -21 * h, 21 / h, 84 / h FROM decimal WHERE h > 0; -- { serverError 407 } +SELECT 21 + h, 21 - h, 84 - h, 21 * h, -21 * h, 21 / h, 84 / h FROM decimal WHERE h > 0; -- { serverError DECIMAL_OVERFLOW } SELECT 21 + h, 21 - h, 84 - h, 21 * h, -21 * h FROM decimal WHERE h > 0; SELECT 21 + i, 21 - i, 84 - i, 21 * i, -21 * i, 21 / i, 84 / i, intDiv(21, i), intDivOrZero(84, i) FROM decimal WHERE i > 0; SELECT 21 + j, 21 - j, 84 - j, 21 * j, -21 * j, 21 / j, 84 / j, intDiv(21, j), intDivOrZero(84, j) FROM decimal WHERE j > 0; @@ -67,16 +67,16 @@ SELECT (i * i) != 0, (i / i) = 1 FROM decimal WHERE i > 0; SELECT e + 1 > e, e + 10 > e, 1 + e > e, 10 + e > e FROM decimal WHERE e > 0; SELECT f + 1 > f, f + 10 > f, 1 + f > f, 10 + f > f FROM decimal WHERE f > 0; -SELECT 1 / toDecimal32(0, 0); -- { serverError 153 } -SELECT 1 / toDecimal64(0, 1); -- { serverError 153 } -SELECT 1 / toDecimal128(0, 2); -- { serverError 153 } -SELECT 0 / toDecimal32(0, 3); -- { serverError 153 } -SELECT 0 / toDecimal64(0, 4); -- { serverError 153 } -SELECT 0 / toDecimal128(0, 5); -- { serverError 153 } +SELECT 1 / toDecimal32(0, 0); -- { serverError ILLEGAL_DIVISION } +SELECT 1 / toDecimal64(0, 1); -- { serverError ILLEGAL_DIVISION } +SELECT 1 / toDecimal128(0, 2); -- { serverError ILLEGAL_DIVISION } +SELECT 0 / toDecimal32(0, 3); -- { serverError ILLEGAL_DIVISION } +SELECT 0 / toDecimal64(0, 4); -- { serverError ILLEGAL_DIVISION } +SELECT 0 / toDecimal128(0, 5); -- { serverError ILLEGAL_DIVISION } -SELECT toDecimal32(0, 0) / toInt8(0); -- { serverError 153 } -SELECT toDecimal64(0, 1) / toInt32(0); -- { serverError 153 } -SELECT toDecimal128(0, 2) / toInt64(0); -- { serverError 153 } +SELECT toDecimal32(0, 0) / toInt8(0); -- { serverError ILLEGAL_DIVISION } +SELECT toDecimal64(0, 1) / toInt32(0); -- { serverError 
ILLEGAL_DIVISION } +SELECT toDecimal128(0, 2) / toInt64(0); -- { serverError ILLEGAL_DIVISION } SELECT toDecimal32(0, 4) AS x, multiIf(x = 0, NULL, intDivOrZero(1, x)), multiIf(x = 0, NULL, intDivOrZero(x, 0)); SELECT toDecimal64(0, 8) AS x, multiIf(x = 0, NULL, intDivOrZero(1, x)), multiIf(x = 0, NULL, intDivOrZero(x, 0)); diff --git a/tests/queries/0_stateless/00700_decimal_bounds.sql b/tests/queries/0_stateless/00700_decimal_bounds.sql index ddf096149ab..2fa1360eeae 100644 --- a/tests/queries/0_stateless/00700_decimal_bounds.sql +++ b/tests/queries/0_stateless/00700_decimal_bounds.sql @@ -1,8 +1,8 @@ DROP TABLE IF EXISTS decimal; -CREATE TABLE IF NOT EXISTS decimal (x DECIMAL(10, -2)) ENGINE = Memory; -- { serverError 69 } -CREATE TABLE IF NOT EXISTS decimal (x DECIMAL(10, 15)) ENGINE = Memory; -- { serverError 69 } -CREATE TABLE IF NOT EXISTS decimal (x DECIMAL(0, 0)) ENGINE = Memory; -- { serverError 69 } +CREATE TABLE IF NOT EXISTS decimal (x DECIMAL(10, -2)) ENGINE = Memory; -- { serverError ARGUMENT_OUT_OF_BOUND } +CREATE TABLE IF NOT EXISTS decimal (x DECIMAL(10, 15)) ENGINE = Memory; -- { serverError ARGUMENT_OUT_OF_BOUND } +CREATE TABLE IF NOT EXISTS decimal (x DECIMAL(0, 0)) ENGINE = Memory; -- { serverError ARGUMENT_OUT_OF_BOUND } CREATE TABLE IF NOT EXISTS decimal ( diff --git a/tests/queries/0_stateless/00700_decimal_casts.sql b/tests/queries/0_stateless/00700_decimal_casts.sql index 8c752565fee..49f486bf9e8 100644 --- a/tests/queries/0_stateless/00700_decimal_casts.sql +++ b/tests/queries/0_stateless/00700_decimal_casts.sql @@ -2,18 +2,18 @@ SELECT toDecimal32('1.1', 1), toDecimal32('1.1', 2), toDecimal32('1.1', 8); SELECT toDecimal32('1.1', 0); SELECT toDecimal32(1.1, 0), toDecimal32(1.1, 1), toDecimal32(1.1, 2), toDecimal32(1.1, 8); -SELECT '1000000000' AS x, toDecimal32(x, 0); -- { serverError 69 } -SELECT '-1000000000' AS x, toDecimal32(x, 0); -- { serverError 69 } -SELECT '1000000000000000000' AS x, toDecimal64(x, 0); -- { serverError 69 } -SELECT '-1000000000000000000' AS x, toDecimal64(x, 0); -- { serverError 69 } -SELECT '100000000000000000000000000000000000000' AS x, toDecimal128(x, 0); -- { serverError 69 } -SELECT '-100000000000000000000000000000000000000' AS x, toDecimal128(x, 0); -- { serverError 69 } -SELECT '1' AS x, toDecimal32(x, 9); -- { serverError 69 } -SELECT '-1' AS x, toDecimal32(x, 9); -- { serverError 69 } -SELECT '1' AS x, toDecimal64(x, 18); -- { serverError 69 } -SELECT '-1' AS x, toDecimal64(x, 18); -- { serverError 69 } -SELECT '1' AS x, toDecimal128(x, 38); -- { serverError 69 } -SELECT '-1' AS x, toDecimal128(x, 38); -- { serverError 69 } +SELECT '1000000000' AS x, toDecimal32(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1000000000' AS x, toDecimal32(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1000000000000000000' AS x, toDecimal64(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1000000000000000000' AS x, toDecimal64(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '100000000000000000000000000000000000000' AS x, toDecimal128(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-100000000000000000000000000000000000000' AS x, toDecimal128(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1' AS x, toDecimal32(x, 9); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1' AS x, toDecimal32(x, 9); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1' AS x, toDecimal64(x, 18); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1' AS x, toDecimal64(x, 18); -- { serverError ARGUMENT_OUT_OF_BOUND } 
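-- [editor's note, illustration only] a minimal sketch of the annotation style this
-- patch migrates to, assuming the stateless test runner resolves the symbolic name
-- from src/Common/ErrorCodes.cpp in place of the raw number, so the assertion
-- survives any renumbering of error codes:
SELECT '1000000000' AS x, toDecimal32(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND }
-- the old spelling of the same assertion was `-- { serverError 69 }`.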
+SELECT '1' AS x, toDecimal128(x, 38); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1' AS x, toDecimal128(x, 38); -- { serverError ARGUMENT_OUT_OF_BOUND } SELECT '0.1' AS x, toDecimal32(x, 0); SELECT '-0.1' AS x, toDecimal32(x, 0); @@ -28,18 +28,18 @@ SELECT '-0.0000000000000000001' AS x, toDecimal64(x, 18); SELECT '0.000000000000000000000000000000000000001' AS x, toDecimal128(x, 38); SELECT '-0.000000000000000000000000000000000000001' AS x, toDecimal128(x, 38); -SELECT '1e9' AS x, toDecimal32(x, 0); -- { serverError 69 } -SELECT '-1E9' AS x, toDecimal32(x, 0); -- { serverError 69 } -SELECT '1E18' AS x, toDecimal64(x, 0); -- { serverError 69 } -SELECT '-1e18' AS x, toDecimal64(x, 0); -- { serverError 69 } -SELECT '1e38' AS x, toDecimal128(x, 0); -- { serverError 69 } -SELECT '-1E38' AS x, toDecimal128(x, 0); -- { serverError 69 } -SELECT '1e0' AS x, toDecimal32(x, 9); -- { serverError 69 } -SELECT '-1e-0' AS x, toDecimal32(x, 9); -- { serverError 69 } -SELECT '1e0' AS x, toDecimal64(x, 18); -- { serverError 69 } -SELECT '-1e-0' AS x, toDecimal64(x, 18); -- { serverError 69 } -SELECT '1e-0' AS x, toDecimal128(x, 38); -- { serverError 69 } -SELECT '-1e0' AS x, toDecimal128(x, 38); -- { serverError 69 } +SELECT '1e9' AS x, toDecimal32(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1E9' AS x, toDecimal32(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1E18' AS x, toDecimal64(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1e18' AS x, toDecimal64(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1e38' AS x, toDecimal128(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1E38' AS x, toDecimal128(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1e0' AS x, toDecimal32(x, 9); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1e-0' AS x, toDecimal32(x, 9); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1e0' AS x, toDecimal64(x, 18); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1e-0' AS x, toDecimal64(x, 18); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '1e-0' AS x, toDecimal128(x, 38); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT '-1e0' AS x, toDecimal128(x, 38); -- { serverError ARGUMENT_OUT_OF_BOUND } SELECT '1e-1' AS x, toDecimal32(x, 0); SELECT '-1e-1' AS x, toDecimal32(x, 0); @@ -137,9 +137,9 @@ SELECT CAST('42.42', 'Decimal(9,2)') AS a, CAST(a, 'Decimal(9,7)'), CAST(a, 'Dec SELECT CAST('123456789', 'Decimal(9,0)'), CAST('123456789123456789', 'Decimal(18,0)'); SELECT CAST('12345678901234567890123456789012345678', 'Decimal(38,0)'); -SELECT CAST('123456789', 'Decimal(9,1)'); -- { serverError 69 } -SELECT CAST('123456789123456789', 'Decimal(18,1)'); -- { serverError 69 } -SELECT CAST('12345678901234567890123456789012345678', 'Decimal(38,1)'); -- { serverError 69 } +SELECT CAST('123456789', 'Decimal(9,1)'); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT CAST('123456789123456789', 'Decimal(18,1)'); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT CAST('12345678901234567890123456789012345678', 'Decimal(38,1)'); -- { serverError ARGUMENT_OUT_OF_BOUND } SELECT CAST('0.123456789', 'Decimal(9,9)'), CAST('0.123456789123456789', 'Decimal(18,18)'); SELECT CAST('0.12345678901234567890123456789012345678', 'Decimal(38,38)'); diff --git a/tests/queries/0_stateless/00700_decimal_casts_2.sql b/tests/queries/0_stateless/00700_decimal_casts_2.sql index 89c95fed271..2d3ace866f0 100644 --- a/tests/queries/0_stateless/00700_decimal_casts_2.sql +++ b/tests/queries/0_stateless/00700_decimal_casts_2.sql @@ -2,24 +2,24 @@ SELECT 
toDecimal128('1234567890', 28) AS x, toDecimal128(x, 29), toDecimal128(to SELECT toDecimal128(toDecimal128('1234567890', 28), 30); SELECT toDecimal64('1234567890', 8) AS x, toDecimal64(x, 9), toDecimal64(toDecimal64('1234567890', 8), 9); -SELECT toDecimal64(toDecimal64('1234567890', 8), 10); -- { serverError 407 } +SELECT toDecimal64(toDecimal64('1234567890', 8), 10); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal32('12345678', 1) AS x, toDecimal32(x, 2), toDecimal32(toDecimal32('12345678', 1), 2); -SELECT toDecimal32(toDecimal32('12345678', 1), 3); -- { serverError 407 } +SELECT toDecimal32(toDecimal32('12345678', 1), 3); -- { serverError DECIMAL_OVERFLOW } -SELECT toDecimal64(toDecimal64('92233720368547758.1', 1), 2); -- { serverError 407 } -SELECT toDecimal64(toDecimal64('-92233720368547758.1', 1), 2); -- { serverError 407 } +SELECT toDecimal64(toDecimal64('92233720368547758.1', 1), 2); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal64(toDecimal64('-92233720368547758.1', 1), 2); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('9223372036854775807', 6) AS x, toInt64(x), toInt64(-x); -SELECT toDecimal128('9223372036854775809', 6) AS x, toInt64(x); -- { serverError 407 } -SELECT toDecimal128('9223372036854775809', 6) AS x, toInt64(-x); -- { serverError 407 } +SELECT toDecimal128('9223372036854775809', 6) AS x, toInt64(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal128('9223372036854775809', 6) AS x, toInt64(-x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal64('922337203685477580', 0) * 10 AS x, toInt64(x), toInt64(-x); SELECT toDecimal64(toDecimal64('92233720368547758.0', 1), 2) AS x, toInt64(x), toInt64(-x); SELECT toDecimal128('2147483647', 10) AS x, toInt32(x), toInt32(-x); -SELECT toDecimal128('2147483649', 10) AS x, toInt32(x), toInt32(-x); -- { serverError 407 } +SELECT toDecimal128('2147483649', 10) AS x, toInt32(x), toInt32(-x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal64('2147483647', 2) AS x, toInt32(x), toInt32(-x); -SELECT toDecimal64('2147483649', 2) AS x, toInt32(x), toInt32(-x); -- { serverError 407 } +SELECT toDecimal64('2147483649', 2) AS x, toInt32(x), toInt32(-x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('92233720368547757.99', 2) AS x, toInt64(x), toInt64(-x); SELECT toDecimal64('2147483640.99', 2) AS x, toInt32(x), toInt32(-x); @@ -40,80 +40,80 @@ SELECT toDecimal128('-0.6', 6) AS x, toUInt8(x); SELECT toDecimal64('-0.6', 6) AS x, toUInt8(x); SELECT toDecimal32('-0.6', 6) AS x, toUInt8(x); -SELECT toDecimal128('-1', 7) AS x, toUInt64(x); -- { serverError 407 } -SELECT toDecimal128('-1', 7) AS x, toUInt32(x); -- { serverError 407 } -SELECT toDecimal128('-1', 7) AS x, toUInt16(x); -- { serverError 407 } -SELECT toDecimal128('-1', 7) AS x, toUInt8(x); -- { serverError 407 } +SELECT toDecimal128('-1', 7) AS x, toUInt64(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal128('-1', 7) AS x, toUInt32(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal128('-1', 7) AS x, toUInt16(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal128('-1', 7) AS x, toUInt8(x); -- { serverError DECIMAL_OVERFLOW } -SELECT toDecimal64('-1', 5) AS x, toUInt64(x); -- { serverError 407 } -SELECT toDecimal64('-1', 5) AS x, toUInt32(x); -- { serverError 407 } -SELECT toDecimal64('-1', 5) AS x, toUInt16(x); -- { serverError 407 } -SELECT toDecimal64('-1', 5) AS x, toUInt8(x); -- { serverError 407 } +SELECT toDecimal64('-1', 5) AS x, toUInt64(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal64('-1', 5) AS x, 
toUInt32(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal64('-1', 5) AS x, toUInt16(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal64('-1', 5) AS x, toUInt8(x); -- { serverError DECIMAL_OVERFLOW } -SELECT toDecimal32('-1', 3) AS x, toUInt64(x); -- { serverError 407 } -SELECT toDecimal32('-1', 3) AS x, toUInt32(x); -- { serverError 407 } -SELECT toDecimal32('-1', 3) AS x, toUInt16(x); -- { serverError 407 } -SELECT toDecimal32('-1', 3) AS x, toUInt8(x); -- { serverError 407 } +SELECT toDecimal32('-1', 3) AS x, toUInt64(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal32('-1', 3) AS x, toUInt32(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal32('-1', 3) AS x, toUInt16(x); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal32('-1', 3) AS x, toUInt8(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('18446744073709551615', 0) AS x, toUInt64(x); -SELECT toDecimal128('18446744073709551616', 0) AS x, toUInt64(x); -- { serverError 407 } +SELECT toDecimal128('18446744073709551616', 0) AS x, toUInt64(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('18446744073709551615', 8) AS x, toUInt64(x); -SELECT toDecimal128('18446744073709551616', 8) AS x, toUInt64(x); -- { serverError 407 } +SELECT toDecimal128('18446744073709551616', 8) AS x, toUInt64(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('4294967295', 0) AS x, toUInt32(x); -SELECT toDecimal128('4294967296', 0) AS x, toUInt32(x); -- { serverError 407 } +SELECT toDecimal128('4294967296', 0) AS x, toUInt32(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('4294967295', 10) AS x, toUInt32(x); -SELECT toDecimal128('4294967296', 10) AS x, toUInt32(x); -- { serverError 407 } +SELECT toDecimal128('4294967296', 10) AS x, toUInt32(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal64('4294967295', 0) AS x, toUInt32(x); -SELECT toDecimal64('4294967296', 0) AS x, toUInt32(x); -- { serverError 407 } +SELECT toDecimal64('4294967296', 0) AS x, toUInt32(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal64('4294967295', 4) AS x, toUInt32(x); -SELECT toDecimal64('4294967296', 4) AS x, toUInt32(x); -- { serverError 407 } +SELECT toDecimal64('4294967296', 4) AS x, toUInt32(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('65535', 0) AS x, toUInt16(x); -SELECT toDecimal128('65536', 0) AS x, toUInt16(x); -- { serverError 407 } +SELECT toDecimal128('65536', 0) AS x, toUInt16(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal128('65535', 10) AS x, toUInt16(x); -SELECT toDecimal128('65536', 10) AS x, toUInt16(x); -- { serverError 407 } +SELECT toDecimal128('65536', 10) AS x, toUInt16(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal64('65535', 0) AS x, toUInt16(x); -SELECT toDecimal64('65536', 0) AS x, toUInt16(x); -- { serverError 407 } +SELECT toDecimal64('65536', 0) AS x, toUInt16(x); -- { serverError DECIMAL_OVERFLOW } SELECT toDecimal64('65535', 4) AS x, toUInt16(x); -SELECT toDecimal64('65536', 4) AS x, toUInt16(x); -- { serverError 407 } +SELECT toDecimal64('65536', 4) AS x, toUInt16(x); -- { serverError DECIMAL_OVERFLOW } SELECT toInt64('2147483647') AS x, toDecimal32(x, 0); SELECT toInt64('-2147483647') AS x, toDecimal32(x, 0); SELECT toUInt64('2147483647') AS x, toDecimal32(x, 0); -SELECT toInt64('2147483649') AS x, toDecimal32(x, 0); -- { serverError 407 } -SELECT toInt64('-2147483649') AS x, toDecimal32(x, 0); -- { serverError 407 } -SELECT toUInt64('2147483649') AS x, toDecimal32(x, 0); -- { serverError 407 } +SELECT toInt64('2147483649') AS x, 
toDecimal32(x, 0); -- { serverError DECIMAL_OVERFLOW } +SELECT toInt64('-2147483649') AS x, toDecimal32(x, 0); -- { serverError DECIMAL_OVERFLOW } +SELECT toUInt64('2147483649') AS x, toDecimal32(x, 0); -- { serverError DECIMAL_OVERFLOW } SELECT toUInt64('9223372036854775807') AS x, toDecimal64(x, 0); -SELECT toUInt64('9223372036854775809') AS x, toDecimal64(x, 0); -- { serverError 407 } +SELECT toUInt64('9223372036854775809') AS x, toDecimal64(x, 0); -- { serverError DECIMAL_OVERFLOW } -SELECT toDecimal32(0, rowNumberInBlock()); -- { serverError 44 } -SELECT toDecimal64(0, rowNumberInBlock()); -- { serverError 44 } -SELECT toDecimal128(0, rowNumberInBlock()); -- { serverError 44 } +SELECT toDecimal32(0, rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } +SELECT toDecimal64(0, rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } +SELECT toDecimal128(0, rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } -SELECT toDecimal32(1/0, 0); -- { serverError 407 } -SELECT toDecimal64(1/0, 1); -- { serverError 407 } -SELECT toDecimal128(0/0, 2); -- { serverError 407 } -SELECT CAST(1/0, 'Decimal(9, 0)'); -- { serverError 407 } -SELECT CAST(1/0, 'Decimal(18, 1)'); -- { serverError 407 } -SELECT CAST(1/0, 'Decimal(38, 2)'); -- { serverError 407 } -SELECT CAST(0/0, 'Decimal(9, 3)'); -- { serverError 407 } -SELECT CAST(0/0, 'Decimal(18, 4)'); -- { serverError 407 } -SELECT CAST(0/0, 'Decimal(38, 5)'); -- { serverError 407 } +SELECT toDecimal32(1/0, 0); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal64(1/0, 1); -- { serverError DECIMAL_OVERFLOW } +SELECT toDecimal128(0/0, 2); -- { serverError DECIMAL_OVERFLOW } +SELECT CAST(1/0, 'Decimal(9, 0)'); -- { serverError DECIMAL_OVERFLOW } +SELECT CAST(1/0, 'Decimal(18, 1)'); -- { serverError DECIMAL_OVERFLOW } +SELECT CAST(1/0, 'Decimal(38, 2)'); -- { serverError DECIMAL_OVERFLOW } +SELECT CAST(0/0, 'Decimal(9, 3)'); -- { serverError DECIMAL_OVERFLOW } +SELECT CAST(0/0, 'Decimal(18, 4)'); -- { serverError DECIMAL_OVERFLOW } +SELECT CAST(0/0, 'Decimal(38, 5)'); -- { serverError DECIMAL_OVERFLOW } -select toDecimal32(10000.1, 6); -- { serverError 407 } -select toDecimal64(10000.1, 18); -- { serverError 407 } -select toDecimal128(1000000000000000000000.1, 18); -- { serverError 407 } +select toDecimal32(10000.1, 6); -- { serverError DECIMAL_OVERFLOW } +select toDecimal64(10000.1, 18); -- { serverError DECIMAL_OVERFLOW } +select toDecimal128(1000000000000000000000.1, 18); -- { serverError DECIMAL_OVERFLOW } -select toDecimal32(-10000.1, 6); -- { serverError 407 } -select toDecimal64(-10000.1, 18); -- { serverError 407 } -select toDecimal128(-1000000000000000000000.1, 18); -- { serverError 407 } +select toDecimal32(-10000.1, 6); -- { serverError DECIMAL_OVERFLOW } +select toDecimal64(-10000.1, 18); -- { serverError DECIMAL_OVERFLOW } +select toDecimal128(-1000000000000000000000.1, 18); -- { serverError DECIMAL_OVERFLOW } -select toDecimal32(2147483647.0 + 1.0, 0); -- { serverError 407 } -select toDecimal64(9223372036854775807.0, 0); -- { serverError 407 } -select toDecimal128(170141183460469231731687303715884105729.0, 0); -- { serverError 407 } +select toDecimal32(2147483647.0 + 1.0, 0); -- { serverError DECIMAL_OVERFLOW } +select toDecimal64(9223372036854775807.0, 0); -- { serverError DECIMAL_OVERFLOW } +select toDecimal128(170141183460469231731687303715884105729.0, 0); -- { serverError DECIMAL_OVERFLOW } -select toDecimal32(-2147483647.0 - 1.0, 0); -- { serverError 407 } -select toDecimal64(-9223372036854775807.0, 0); -- { serverError 407 } -select 
toDecimal128(-170141183460469231731687303715884105729.0, 0); -- { serverError 407 } +select toDecimal32(-2147483647.0 - 1.0, 0); -- { serverError DECIMAL_OVERFLOW } +select toDecimal64(-9223372036854775807.0, 0); -- { serverError DECIMAL_OVERFLOW } +select toDecimal128(-170141183460469231731687303715884105729.0, 0); -- { serverError DECIMAL_OVERFLOW } diff --git a/tests/queries/0_stateless/00700_decimal_compare.sql b/tests/queries/0_stateless/00700_decimal_compare.sql index 7740c75f859..beadbdade16 100644 --- a/tests/queries/0_stateless/00700_decimal_compare.sql +++ b/tests/queries/0_stateless/00700_decimal_compare.sql @@ -47,18 +47,18 @@ SELECT greatest(a, 0), greatest(b, 0), greatest(g, 0) FROM decimal ORDER BY a; SELECT (a, d, g) = (b, e, h), (a, d, g) != (b, e, h) FROM decimal ORDER BY a; SELECT (a, d, g) = (c, f, i), (a, d, g) != (c, f, i) FROM decimal ORDER BY a; -SELECT toUInt32(2147483648) AS x, a == x FROM decimal WHERE a = 42; -- { serverError 407 } +SELECT toUInt32(2147483648) AS x, a == x FROM decimal WHERE a = 42; -- { serverError DECIMAL_OVERFLOW } SELECT toUInt64(2147483648) AS x, b == x, x == ((b - 42) + x) FROM decimal WHERE a = 42; -SELECT toUInt64(9223372036854775808) AS x, b == x FROM decimal WHERE a = 42; -- { serverError 407 } +SELECT toUInt64(9223372036854775808) AS x, b == x FROM decimal WHERE a = 42; -- { serverError DECIMAL_OVERFLOW } SELECT toUInt64(9223372036854775808) AS x, c == x, x == ((c - 42) + x) FROM decimal WHERE a = 42; SELECT g = 10000, (g - g + 10000) == 10000 FROM decimal WHERE a = 42; SELECT 10000 = g, 10000 = (g - g + 10000) FROM decimal WHERE a = 42; -SELECT g = 30000 FROM decimal WHERE a = 42; -- { serverError 407 } -SELECT 30000 = g FROM decimal WHERE a = 42; -- { serverError 407 } +SELECT g = 30000 FROM decimal WHERE a = 42; -- { serverError DECIMAL_OVERFLOW } +SELECT 30000 = g FROM decimal WHERE a = 42; -- { serverError DECIMAL_OVERFLOW } SELECT h = 30000, (h - g + 30000) = 30000 FROM decimal WHERE a = 42; SELECT 30000 = h, 30000 = (h - g + 30000) FROM decimal WHERE a = 42; -SELECT h = 10000000000 FROM decimal WHERE a = 42; -- { serverError 407 } +SELECT h = 10000000000 FROM decimal WHERE a = 42; -- { serverError DECIMAL_OVERFLOW } SELECT i = 10000000000, (i - g + 10000000000) = 10000000000 FROM decimal WHERE a = 42; SELECT 10000000000 = i, 10000000000 = (i - g + 10000000000) FROM decimal WHERE a = 42; diff --git a/tests/queries/0_stateless/00700_decimal_complex_types.sql b/tests/queries/0_stateless/00700_decimal_complex_types.sql index f4b29e77be9..979b7aaa298 100644 --- a/tests/queries/0_stateless/00700_decimal_complex_types.sql +++ b/tests/queries/0_stateless/00700_decimal_complex_types.sql @@ -158,16 +158,16 @@ SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal32('32.2', 5) FROM syste SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal64('64.2', 5) FROM system.numbers LIMIT 2; SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal128('128.2', 5) FROM system.numbers LIMIT 2; -SELECT number % 2 ? toDecimal32('32.1', 5) : toDecimal32('32.2', 1) FROM system.numbers LIMIT 2; -- { serverError 48 } -SELECT number % 2 ? toDecimal32('32.1', 5) : toDecimal64('64.2', 2) FROM system.numbers LIMIT 2; -- { serverError 48 } -SELECT number % 2 ? toDecimal32('32.1', 5) : toDecimal128('128.2', 3) FROM system.numbers LIMIT 2; -- { serverError 48 } +SELECT number % 2 ? toDecimal32('32.1', 5) : toDecimal32('32.2', 1) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } +SELECT number % 2 ? 
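-- [editor's note, illustration only] the conditional below mixes Decimal branches
-- of different scales (5 vs 2); promoting them to a common type is not implemented,
-- which NOT_IMPLEMENTED states more plainly than the old code 48: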
toDecimal32('32.1', 5) : toDecimal64('64.2', 2) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } +SELECT number % 2 ? toDecimal32('32.1', 5) : toDecimal128('128.2', 3) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } -SELECT number % 2 ? toDecimal64('64.1', 5) : toDecimal32('32.2', 1) FROM system.numbers LIMIT 2; -- { serverError 48 } -SELECT number % 2 ? toDecimal64('64.1', 5) : toDecimal64('64.2', 2) FROM system.numbers LIMIT 2; -- { serverError 48 } -SELECT number % 2 ? toDecimal64('64.1', 5) : toDecimal128('128.2', 3) FROM system.numbers LIMIT 2; -- { serverError 48 } +SELECT number % 2 ? toDecimal64('64.1', 5) : toDecimal32('32.2', 1) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } +SELECT number % 2 ? toDecimal64('64.1', 5) : toDecimal64('64.2', 2) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } +SELECT number % 2 ? toDecimal64('64.1', 5) : toDecimal128('128.2', 3) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } -SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal32('32.2', 1) FROM system.numbers LIMIT 2; -- { serverError 48 } -SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal64('64.2', 2) FROM system.numbers LIMIT 2; -- { serverError 48 } -SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal128('128.2', 3) FROM system.numbers LIMIT 2; -- { serverError 48 } +SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal32('32.2', 1) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } +SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal64('64.2', 2) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } +SELECT number % 2 ? toDecimal128('128.1', 5) : toDecimal128('128.2', 3) FROM system.numbers LIMIT 2; -- { serverError NOT_IMPLEMENTED } DROP TABLE IF EXISTS decimal; diff --git a/tests/queries/0_stateless/00700_decimal_empty_aggregates.sql b/tests/queries/0_stateless/00700_decimal_empty_aggregates.sql index c77f605a4c2..4ee37b3b924 100644 --- a/tests/queries/0_stateless/00700_decimal_empty_aggregates.sql +++ b/tests/queries/0_stateless/00700_decimal_empty_aggregates.sql @@ -77,9 +77,9 @@ SELECT stddevPop(toFloat64(a)), stddevPop(toFloat64(b)), stddevPop(toFloat64(c)) SELECT stddevSamp(a) AS da, stddevSamp(b) AS db, stddevSamp(c) AS dc, toTypeName(da), toTypeName(db), toTypeName(dc) FROM decimal; SELECT stddevSamp(toFloat64(a)), stddevSamp(toFloat64(b)), stddevSamp(toFloat64(c)) FROM decimal; -SELECT covarPop(a, a), covarPop(b, b), covarPop(c, c) FROM decimal; -- { serverError 43 } -SELECT covarSamp(a, a), covarSamp(b, b), covarSamp(c, c) FROM decimal; -- { serverError 43 } -SELECT corr(a, a), corr(b, b), corr(c, c) FROM decimal; -- { serverError 43 } +SELECT covarPop(a, a), covarPop(b, b), covarPop(c, c) FROM decimal; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT covarSamp(a, a), covarSamp(b, b), covarSamp(c, c) FROM decimal; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT corr(a, a), corr(b, b), corr(c, c) FROM decimal; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT 1 LIMIT 0; DROP TABLE decimal; diff --git a/tests/queries/0_stateless/00700_decimal_math.sql b/tests/queries/0_stateless/00700_decimal_math.sql index cefbf2fd604..7f695a5b017 100644 --- a/tests/queries/0_stateless/00700_decimal_math.sql +++ b/tests/queries/0_stateless/00700_decimal_math.sql @@ -40,6 +40,6 @@ SELECT toDecimal128(pi(), 14) AS x, round(sin(x), 8), round(cos(x), 8), round(ta SELECT toDecimal128('1.0', 2) AS x, asin(x), acos(x), atan(x); -SELECT toDecimal32('4.2', 1) AS x, pow(x, 2), 
pow(x, 0.5); -- { serverError 43 } -SELECT toDecimal64('4.2', 1) AS x, pow(x, 2), pow(x, 0.5); -- { serverError 43 } -SELECT toDecimal128('4.2', 1) AS x, pow(x, 2), pow(x, 0.5); -- { serverError 43 } +SELECT toDecimal32('4.2', 1) AS x, pow(x, 2), pow(x, 0.5); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDecimal64('4.2', 1) AS x, pow(x, 2), pow(x, 0.5); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDecimal128('4.2', 1) AS x, pow(x, 2), pow(x, 0.5); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/00700_to_decimal_or_something.sql b/tests/queries/0_stateless/00700_to_decimal_or_something.sql index cbf5dc33c07..8d932d3d750 100644 --- a/tests/queries/0_stateless/00700_to_decimal_or_something.sql +++ b/tests/queries/0_stateless/00700_to_decimal_or_something.sql @@ -1,6 +1,6 @@ SELECT toDecimal32OrZero('1.1', 1), toDecimal32OrZero('1.1', 2), toDecimal32OrZero('1.1', 8); SELECT toDecimal32OrZero('1.1', 0); -SELECT toDecimal32OrZero(1.1, 0); -- { serverError 43 } +SELECT toDecimal32OrZero(1.1, 0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toDecimal128OrZero('', 0) AS x, toDecimal128OrZero('0.42', 2) AS y; SELECT toDecimal64OrZero('', 0) AS x, toDecimal64OrZero('0.42', 3) AS y; @@ -15,15 +15,15 @@ SELECT toDecimal64OrZero('100000000000000000000000000000000000000', 0); SELECT toDecimal128OrZero('-99999999999999999999999999999999999999', 0); SELECT toDecimal64OrZero('-100000000000000000000000000000000000000', 0); -SELECT toDecimal32OrZero('1', rowNumberInBlock()); -- { serverError 44 } -SELECT toDecimal64OrZero('1', rowNumberInBlock()); -- { serverError 44 } -SELECT toDecimal128OrZero('1', rowNumberInBlock()); -- { serverError 44 } +SELECT toDecimal32OrZero('1', rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } +SELECT toDecimal64OrZero('1', rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } +SELECT toDecimal128OrZero('1', rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } SELECT '----'; SELECT toDecimal32OrNull('1.1', 1), toDecimal32OrNull('1.1', 2), toDecimal32OrNull('1.1', 8); SELECT toDecimal32OrNull('1.1', 0); -SELECT toDecimal32OrNull(1.1, 0); -- { serverError 43 } +SELECT toDecimal32OrNull(1.1, 0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toDecimal128OrNull('', 0) AS x, toDecimal128OrNull('-0.42', 2) AS y; SELECT toDecimal64OrNull('', 0) AS x, toDecimal64OrNull('-0.42', 3) AS y; @@ -38,6 +38,6 @@ SELECT toDecimal64OrNull('100000000000000000000000000000000000000', 0); SELECT toDecimal128OrNull('-99999999999999999999999999999999999999', 0); SELECT toDecimal64OrNull('-100000000000000000000000000000000000000', 0); -SELECT toDecimal32OrNull('1', rowNumberInBlock()); -- { serverError 44 } -SELECT toDecimal64OrNull('1', rowNumberInBlock()); -- { serverError 44 } -SELECT toDecimal128OrNull('1', rowNumberInBlock()); -- { serverError 44 } +SELECT toDecimal32OrNull('1', rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } +SELECT toDecimal64OrNull('1', rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } +SELECT toDecimal128OrNull('1', rowNumberInBlock()); -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/00705_aggregate_states_addition.sql b/tests/queries/0_stateless/00705_aggregate_states_addition.sql index c5b11f9971d..29510fc9382 100644 --- a/tests/queries/0_stateless/00705_aggregate_states_addition.sql +++ b/tests/queries/0_stateless/00705_aggregate_states_addition.sql @@ -7,8 +7,8 @@ INSERT INTO add_aggregate VALUES(3, 1); SELECT countMerge(x + y) FROM (SELECT countState(a) as x, 
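-- [editor's note, illustration only] adding aggregation states with `+` is defined
-- only for identical state types; the hunk below asserts that mixing, for example,
-- sumState with countState raises CANNOT_ADD_DIFFERENT_AGGREGATE_STATES: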
countState(b) as y from add_aggregate); SELECT sumMerge(x + y), sumMerge(x), sumMerge(y) FROM (SELECT sumState(a) as x, sumState(b) as y from add_aggregate); -SELECT sumMerge(x) FROM (SELECT sumState(a) + countState(b) as x FROM add_aggregate); -- { serverError 421 } -SELECT sumMerge(x) FROM (SELECT sumState(a) + sumState(toInt32(b)) as x FROM add_aggregate); -- { serverError 421 } +SELECT sumMerge(x) FROM (SELECT sumState(a) + countState(b) as x FROM add_aggregate); -- { serverError CANNOT_ADD_DIFFERENT_AGGREGATE_STATES } +SELECT sumMerge(x) FROM (SELECT sumState(a) + sumState(toInt32(b)) as x FROM add_aggregate); -- { serverError CANNOT_ADD_DIFFERENT_AGGREGATE_STATES } SELECT minMerge(x) FROM (SELECT minState(a) + minState(b) as x FROM add_aggregate); @@ -17,6 +17,6 @@ SELECT uniqMerge(x + y) FROM (SELECT uniqState(a) as x, uniqState(b) as y FROM a SELECT arraySort(groupArrayMerge(x + y)) FROM (SELECT groupArrayState(a) AS x, groupArrayState(b) as y FROM add_aggregate); SELECT arraySort(groupUniqArrayMerge(x + y)) FROM (SELECT groupUniqArrayState(a) AS x, groupUniqArrayState(b) as y FROM add_aggregate); -SELECT uniqMerge(x + y) FROM (SELECT uniqState(65536, a) AS x, uniqState(b) AS y FROM add_aggregate); -- { serverError 421 } +SELECT uniqMerge(x + y) FROM (SELECT uniqState(65536, a) AS x, uniqState(b) AS y FROM add_aggregate); -- { serverError CANNOT_ADD_DIFFERENT_AGGREGATE_STATES } DROP TABLE IF EXISTS add_aggregate; diff --git a/tests/queries/0_stateless/00714_alter_uuid.sql b/tests/queries/0_stateless/00714_alter_uuid.sql index ab08e943175..40c4981dd9d 100644 --- a/tests/queries/0_stateless/00714_alter_uuid.sql +++ b/tests/queries/0_stateless/00714_alter_uuid.sql @@ -39,7 +39,7 @@ ORDER BY (created_at, id0, id1); SET send_logs_level = 'fatal'; -ALTER TABLE uuid MODIFY COLUMN id0 UUID; -- { serverError 524 } -ALTER TABLE uuid MODIFY COLUMN id1 UUID; -- { serverError 524 } +ALTER TABLE uuid MODIFY COLUMN id0 UUID; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } +ALTER TABLE uuid MODIFY COLUMN id1 UUID; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } DROP TABLE uuid; diff --git a/tests/queries/0_stateless/00716_allow_ddl.sql b/tests/queries/0_stateless/00716_allow_ddl.sql index d33d8b7eec5..8d9988736a0 100644 --- a/tests/queries/0_stateless/00716_allow_ddl.sql +++ b/tests/queries/0_stateless/00716_allow_ddl.sql @@ -1,8 +1,8 @@ SET send_logs_level = 'fatal'; SET allow_ddl = 0; -CREATE DATABASE some_db; -- { serverError 392 } -CREATE TABLE some_table(a Int32) ENGINE = Memory; -- { serverError 392} -ALTER TABLE some_table DELETE WHERE 1; -- { serverError 392} -RENAME TABLE some_table TO some_table1; -- { serverError 392} -SET allow_ddl = 1; -- { serverError 392} +CREATE DATABASE some_db; -- { serverError QUERY_IS_PROHIBITED } +CREATE TABLE some_table(a Int32) ENGINE = Memory; -- { serverError QUERY_IS_PROHIBITED} +ALTER TABLE some_table DELETE WHERE 1; -- { serverError QUERY_IS_PROHIBITED} +RENAME TABLE some_table TO some_table1; -- { serverError QUERY_IS_PROHIBITED} +SET allow_ddl = 1; -- { serverError QUERY_IS_PROHIBITED} diff --git a/tests/queries/0_stateless/00719_parallel_ddl_db.sh b/tests/queries/0_stateless/00719_parallel_ddl_db.sh index 004590c21df..b7dea25c182 100755 --- a/tests/queries/0_stateless/00719_parallel_ddl_db.sh +++ b/tests/queries/0_stateless/00719_parallel_ddl_db.sh @@ -11,7 +11,11 @@ ${CLICKHOUSE_CLIENT} --query "DROP DATABASE IF EXISTS parallel_ddl" function query() { - for _ in {1..50}; do + local it=0 + TIMELIMIT=30 + while [ $SECONDS -lt "$TIMELIMIT" ] && [ $it 
-lt 50 ]; + do + it=$((it+1)) ${CLICKHOUSE_CLIENT} --query "CREATE DATABASE IF NOT EXISTS parallel_ddl" ${CLICKHOUSE_CLIENT} --query "DROP DATABASE IF EXISTS parallel_ddl" done diff --git a/tests/queries/0_stateless/00719_parallel_ddl_table.sh b/tests/queries/0_stateless/00719_parallel_ddl_table.sh index 57a7e228341..fefe12ae656 100755 --- a/tests/queries/0_stateless/00719_parallel_ddl_table.sh +++ b/tests/queries/0_stateless/00719_parallel_ddl_table.sh @@ -10,7 +10,11 @@ ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS parallel_ddl" function query() { - for _ in {1..50}; do + local it=0 + TIMELIMIT=30 + while [ $SECONDS -lt "$TIMELIMIT" ] && [ $it -lt 50 ]; + do + it=$((it+1)) ${CLICKHOUSE_CLIENT} --query "CREATE TABLE IF NOT EXISTS parallel_ddl(a Int) ENGINE = Memory" ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS parallel_ddl" done diff --git a/tests/queries/0_stateless/00729_prewhere_array_join.sql b/tests/queries/0_stateless/00729_prewhere_array_join.sql index 5ac79c150c6..38fe4aed83b 100644 --- a/tests/queries/0_stateless/00729_prewhere_array_join.sql +++ b/tests/queries/0_stateless/00729_prewhere_array_join.sql @@ -14,9 +14,9 @@ insert into t1_00729 (id,val,nid,eDate) values (3,[],6,'2018-09-27'); insert into t1_00729 (id,val,nid,eDate) values (3,[],7,'2018-09-27'); insert into t1_00729 (id,val,nid,eDate) values (3,[],8,'2018-09-27'); -select arrayJoin(val) as nameGroup6 from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError 182 } +select arrayJoin(val) as nameGroup6 from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError ILLEGAL_PREWHERE } select arrayJoin(val) as nameGroup6, countDistinct(nid) as rowids from t1_00729 where notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -select arrayJoin(val) as nameGroup6, countDistinct(nid) as rowids from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError 182 } +select arrayJoin(val) as nameGroup6, countDistinct(nid) as rowids from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError ILLEGAL_PREWHERE } drop table t1_00729; create table t1_00729 (id UInt64, val Array(String),nid UInt64, eDate Date) ENGINE = MergeTree(eDate, (id, eDate), 8192); @@ -26,8 +26,8 @@ insert into t1_00729 (id,val,nid,eDate) values (1,['background','foreground','he insert into t1_00729 (id,val,nid,eDate) values (2,['background','foreground','heading','image'],1,'2018-09-27'); insert into t1_00729 (id,val,nid,eDate) values (2,[],2,'2018-09-27'); -select arrayJoin(val) as nameGroup6 from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError 182 } +select arrayJoin(val) as nameGroup6 from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError ILLEGAL_PREWHERE } select arrayJoin(val) as nameGroup6, countDistinct(nid) as rowids from t1_00729 where notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -select arrayJoin(val) as nameGroup6, countDistinct(nid) as rowids from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError 182 } +select arrayJoin(val) as nameGroup6, countDistinct(nid) as rowids from t1_00729 prewhere notEmpty(toString(nameGroup6)) group by nameGroup6 order by nameGroup6; -- { serverError ILLEGAL_PREWHERE } drop table t1_00729; diff --git 
a/tests/queries/0_stateless/00732_quorum_insert_lost_part_and_alive_part_zookeeper_long.sql b/tests/queries/0_stateless/00732_quorum_insert_lost_part_and_alive_part_zookeeper_long.sql index a1859220c6c..dcc7f430468 100644 --- a/tests/queries/0_stateless/00732_quorum_insert_lost_part_and_alive_part_zookeeper_long.sql +++ b/tests/queries/0_stateless/00732_quorum_insert_lost_part_and_alive_part_zookeeper_long.sql @@ -20,7 +20,7 @@ SET insert_quorum_timeout=0; SYSTEM STOP FETCHES quorum1; -INSERT INTO quorum2 VALUES (4, toDate('2018-12-16')); -- { serverError 319 } +INSERT INTO quorum2 VALUES (4, toDate('2018-12-16')); -- { serverError UNKNOWN_STATUS_OF_INSERT } SELECT x FROM quorum1 ORDER BY x; SELECT x FROM quorum2 ORDER BY x; diff --git a/tests/queries/0_stateless/00732_quorum_insert_lost_part_zookeeper_long.sql b/tests/queries/0_stateless/00732_quorum_insert_lost_part_zookeeper_long.sql index 61394447c3d..232ae1553f7 100644 --- a/tests/queries/0_stateless/00732_quorum_insert_lost_part_zookeeper_long.sql +++ b/tests/queries/0_stateless/00732_quorum_insert_lost_part_zookeeper_long.sql @@ -16,7 +16,7 @@ SET insert_quorum_timeout=0; SYSTEM STOP FETCHES quorum1; -INSERT INTO quorum2 VALUES (1, '2018-11-15'); -- { serverError 319 } +INSERT INTO quorum2 VALUES (1, '2018-11-15'); -- { serverError UNKNOWN_STATUS_OF_INSERT } SELECT count(*) FROM quorum1; SELECT count(*) FROM quorum2; diff --git a/tests/queries/0_stateless/00732_quorum_insert_select_with_old_data_and_without_quorum_zookeeper_long.sql b/tests/queries/0_stateless/00732_quorum_insert_select_with_old_data_and_without_quorum_zookeeper_long.sql index e3e5aa7949f..57117303f68 100644 --- a/tests/queries/0_stateless/00732_quorum_insert_select_with_old_data_and_without_quorum_zookeeper_long.sql +++ b/tests/queries/0_stateless/00732_quorum_insert_select_with_old_data_and_without_quorum_zookeeper_long.sql @@ -22,7 +22,7 @@ SET insert_quorum_timeout=0; SYSTEM STOP FETCHES quorum1; -INSERT INTO quorum2 VALUES (4, toDate('2020-12-16')); -- { serverError 319 } +INSERT INTO quorum2 VALUES (4, toDate('2020-12-16')); -- { serverError UNKNOWN_STATUS_OF_INSERT } SELECT x FROM quorum1 ORDER BY x; SELECT x FROM quorum2 ORDER BY x; diff --git a/tests/queries/0_stateless/00734_timeslot.sql b/tests/queries/0_stateless/00734_timeslot.sql index f9422ee8f16..25be570bd4c 100644 --- a/tests/queries/0_stateless/00734_timeslot.sql +++ b/tests/queries/0_stateless/00734_timeslot.sql @@ -2,6 +2,6 @@ SELECT timeSlot(toDateTime('2000-01-02 03:04:05', 'UTC')); SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), toUInt32(10000)); SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), toUInt32(10000), 600); SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), toUInt32(600), 30); -SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), 'wrong argument'); -- { serverError 43 } -SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), toUInt32(600), 'wrong argument'); -- { serverError 43 } -SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), toUInt32(600), 0); -- { serverError 44 } \ No newline at end of file +SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), 'wrong argument'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), toUInt32(600), 'wrong argument'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT timeSlots(toDateTime('2000-01-02 03:04:05', 'UTC'), toUInt32(600), 0); -- { serverError ILLEGAL_COLUMN } \ No newline at end of file diff --git 
a/tests/queries/0_stateless/00735_long_conditional.sql b/tests/queries/0_stateless/00735_long_conditional.sql index 662c87db48f..25f7fbaf87a 100644 --- a/tests/queries/0_stateless/00735_long_conditional.sql +++ b/tests/queries/0_stateless/00735_long_conditional.sql @@ -11,11 +11,11 @@ SELECT toInt8(0) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), t SELECT toInt8(0) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt8(0) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt8(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toInt8(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toInt8(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toInt8(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt8(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toInt8(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toInt8(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toInt8(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toInt8(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toInt8(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt8(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt8(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -27,11 +27,11 @@ SELECT toInt16(0) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), SELECT toInt16(0) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt16(0) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt16(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toInt16(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toInt16(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toInt16(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt16(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toInt16(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toInt16(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toInt16(0) AS x, toDate(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toInt16(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toInt16(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt16(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt16(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -43,11 +43,11 @@ SELECT toInt32(0) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), SELECT toInt32(0) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt32(0) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt32(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toInt32(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toInt32(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toInt32(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt32(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toInt32(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toInt32(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toInt32(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toInt32(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toInt32(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt32(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt32(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -59,11 +59,11 @@ SELECT toInt64(0) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), SELECT toInt64(0) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt64(0) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt64(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toInt64(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toInt64(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toInt64(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toInt64(0) AS x, toDate(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toInt64(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toInt64(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toInt64(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toInt64(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toInt64(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toInt64(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toInt64(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt64(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toInt64(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -78,8 +78,8 @@ SELECT toUInt8(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), SELECT toUInt8(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt8(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt8(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toUInt8(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toUInt8(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toUInt8(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toUInt8(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toUInt8(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt8(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt8(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -94,8 +94,8 @@ SELECT toUInt16(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x) SELECT toUInt16(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt16(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt16(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toUInt16(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toUInt16(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toUInt16(0) AS x, toDate(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toUInt16(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toUInt16(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt16(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt16(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -110,59 +110,59 @@ SELECT toUInt32(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x) SELECT toUInt32(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt32(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt32(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toUInt32(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toUInt32(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toUInt32(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toUInt32(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toUInt32(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt32(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt32(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toUInt64(0) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toUInt64(0) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toUInt64(0) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toUInt64(0) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toUInt64(0) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toUInt64(0) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toUInt64(0) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toUInt64(0) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toUInt64(0) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt64(0) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt64(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt64(0) AS x, toUInt64(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toUInt64(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toUInt64(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toUInt64(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toUInt64(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toUInt64(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toUInt64(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toUInt64(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toUInt64(0) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toUInt64(0) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt64(0) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toUInt64(0) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toDate(0) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } +SELECT toDate(0) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toInt64(1) AS y, ((x > y) ? 
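-- [editor's note, illustration only] Date has no comparison or common type with
-- plain integers and floats, so every mixed case in this block is asserted as
-- ILLEGAL_TYPE_OF_ARGUMENT, while the Date vs DateTime case further down succeeds: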
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toDate(0) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toDate('2000-01-01') AS x, toDateTime('2000-01-01 00:00:01', 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toDate(0) AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT toDate(0) AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } +SELECT toDate(0) AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toDate(0) AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toFloat32(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT toDateTime('2000-01-01 00:00:00', 'Asia/Istanbul') AS x, toDate('2000-01-02') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT toDateTime(0, 'Asia/Istanbul') AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT toDateTime(0, 'Asia/Istanbul') AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT toDateTime(0, 'Asia/Istanbul') AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT 'column vs value'; @@ -173,11 +173,11 @@ SELECT materialize(toInt8(0)) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toT SELECT materialize(toInt8(0)) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt8(0)) AS x, toUInt16(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt8(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toInt8(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toInt8(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toInt8(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt8(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toInt8(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toInt8(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toInt8(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toInt8(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toInt8(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt8(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt8(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -189,11 +189,11 @@ SELECT materialize(toInt16(0)) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, to SELECT materialize(toInt16(0)) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt16(0)) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt16(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toInt16(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toInt16(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toInt16(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt16(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toInt16(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toInt16(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toInt16(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toInt16(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toInt16(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt16(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt16(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -205,11 +205,11 @@ SELECT materialize(toInt32(0)) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, to SELECT materialize(toInt32(0)) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt32(0)) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt32(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toInt32(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toInt32(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toInt32(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt32(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toInt32(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toInt32(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toInt32(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toInt32(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toInt32(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt32(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt32(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -221,11 +221,11 @@ SELECT materialize(toInt64(0)) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, to SELECT materialize(toInt64(0)) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt64(0)) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt64(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toInt64(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toInt64(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toInt64(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toInt64(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toInt64(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toInt64(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toInt64(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toInt64(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toInt64(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toInt64(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toInt64(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt64(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toInt64(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -240,8 +240,8 @@ SELECT materialize(toUInt8(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, t SELECT materialize(toUInt8(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt8(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt8(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toUInt8(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toUInt8(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toUInt8(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toUInt8(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toUInt8(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt8(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt8(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -256,8 +256,8 @@ SELECT materialize(toUInt16(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, SELECT materialize(toUInt16(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt16(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt16(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toUInt16(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toUInt16(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toUInt16(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toUInt16(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toUInt16(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt16(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt16(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); @@ -272,56 +272,56 @@ SELECT materialize(toUInt32(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, SELECT materialize(toUInt32(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt32(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt32(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toUInt32(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toUInt32(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toUInt32(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toUInt32(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toUInt32(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt32(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt32(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toUInt64(0)) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toUInt64(0)) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toUInt64(0)) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toUInt64(0)) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toUInt64(0)) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toUInt64(0)) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toUInt64(0)) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toUInt64(0)) AS x, toInt64(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toUInt64(0)) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt64(0)) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt64(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt64(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toUInt64(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toUInt64(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toUInt64(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toUInt64(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toUInt64(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toUInt64(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toUInt64(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toUInt64(0)) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toUInt64(0)) AS x, toDecimal32(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt64(0)) AS x, toDecimal64(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toUInt64(0)) AS x, toDecimal128(1, 0) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toDate(0)) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toFloat32(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } +SELECT materialize(toDate(0)) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT materialize(toDate(0)) AS x, toDate(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toDate('2000-01-01')) AS x, toDateTime('2000-01-01 00:00:01', 'Asia/Istanbul') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toDate(0)) AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } -SELECT materialize(toDate(0)) AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 43 } +SELECT materialize(toDate(0)) AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT materialize(toDate(0)) AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt16(1) AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt8(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt16(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toUInt64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toFloat32(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toFloat64(1) AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } SELECT materialize(toDateTime('2000-01-01 00:00:00', 'Asia/Istanbul')) AS x, toDate('2000-01-02') AS y, ((x > y) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toDateTime(1, 'Asia/Istanbul') AS y, ((x > y) ? 
x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } -SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError 386 } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toDecimal32(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toDecimal64(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } +SELECT materialize(toDateTime(0, 'Asia/Istanbul')) AS x, toDecimal128(1, 0) AS y, ((x = 0) ? x : y) AS z, toTypeName(x), toTypeName(y), toTypeName(z); -- { serverError NO_COMMON_TYPE } diff --git a/tests/queries/0_stateless/00742_require_join_strictness.sql b/tests/queries/0_stateless/00742_require_join_strictness.sql index 5659a0f6833..a3a5315e7cd 100644 --- a/tests/queries/0_stateless/00742_require_join_strictness.sql +++ b/tests/queries/0_stateless/00742_require_join_strictness.sql @@ -1,3 +1,3 @@ SET send_logs_level = 'fatal'; SET join_default_strictness = ''; -SELECT * FROM system.one INNER JOIN (SELECT number AS k FROM system.numbers) js2 ON dummy = k; -- { serverError 417 } +SELECT * FROM system.one INNER JOIN (SELECT number AS k FROM system.numbers) js2 ON dummy = k; -- { serverError EXPECTED_ALL_OR_ANY } diff --git a/tests/queries/0_stateless/00753_alter_attach.sql b/tests/queries/0_stateless/00753_alter_attach.sql index 7f2b1214613..b22a95a83ff 100644 --- a/tests/queries/0_stateless/00753_alter_attach.sql +++ b/tests/queries/0_stateless/00753_alter_attach.sql @@ -64,7 +64,7 @@ select * from replicated_table_detach_all1 order by id; SYSTEM SYNC REPLICA replicated_table_detach_all2; select * from replicated_table_detach_all2 order by id; -ALTER TABLE replicated_table_detach_all1 FETCH PARTITION ALL FROM '/clickhouse/tables/test_00753_{database}/replicated_table_detach_all1'; -- { serverError 344 } +ALTER TABLE replicated_table_detach_all1 FETCH PARTITION ALL FROM '/clickhouse/tables/test_00753_{database}/replicated_table_detach_all1'; -- { serverError SUPPORT_IS_DISABLED } DROP TABLE replicated_table_detach_all1; DROP TABLE replicated_table_detach_all2; @@ -79,14 +79,14 @@ CREATE TABLE partition_all2 (x UInt64, p UInt8, q UInt8) ENGINE = MergeTree ORDE INSERT INTO partition_all2 VALUES (4, 1, 2), (5, 1, 3), (3, 1, 4); -- test PARTITION ALL -ALTER TABLE partition_all2 REPLACE PARTITION ALL FROM partition_all; -- { serverError 344 } -ALTER TABLE partition_all MOVE PARTITION ALL TO TABLE partition_all2; -- { serverError 344 } -ALTER TABLE partition_all2 CLEAR INDEX p IN PARTITION ALL; -- { serverError 344 } -ALTER TABLE partition_all2 CLEAR COLUMN q IN PARTITION ALL; -- { serverError 344 } -ALTER TABLE partition_all2 UPDATE q = q + 1 IN PARTITION ALL where p = 1; -- { serverError 344 } -ALTER TABLE partition_all2 FREEZE PARTITION ALL; -- { serverError 344 } -CHECK TABLE partition_all2 PARTITION ALL; -- { serverError 344 } -OPTIMIZE TABLE partition_all2 PARTITION ALL; -- { serverError 344 } +ALTER TABLE partition_all2 REPLACE PARTITION ALL FROM 
partition_all; -- { serverError SUPPORT_IS_DISABLED } +ALTER TABLE partition_all MOVE PARTITION ALL TO TABLE partition_all2; -- { serverError SUPPORT_IS_DISABLED } +ALTER TABLE partition_all2 CLEAR INDEX p IN PARTITION ALL; -- { serverError SUPPORT_IS_DISABLED } +ALTER TABLE partition_all2 CLEAR COLUMN q IN PARTITION ALL; -- { serverError SUPPORT_IS_DISABLED } +ALTER TABLE partition_all2 UPDATE q = q + 1 IN PARTITION ALL where p = 1; -- { serverError SUPPORT_IS_DISABLED } +ALTER TABLE partition_all2 FREEZE PARTITION ALL; -- { serverError SUPPORT_IS_DISABLED } +CHECK TABLE partition_all2 PARTITION ALL; -- { serverError SUPPORT_IS_DISABLED } +OPTIMIZE TABLE partition_all2 PARTITION ALL; -- { serverError SUPPORT_IS_DISABLED } DROP TABLE partition_all; DROP TABLE partition_all2; diff --git a/tests/queries/0_stateless/00754_alter_modify_order_by.sql b/tests/queries/0_stateless/00754_alter_modify_order_by.sql index 9c7eee74c8c..ece3cfdc033 100644 --- a/tests/queries/0_stateless/00754_alter_modify_order_by.sql +++ b/tests/queries/0_stateless/00754_alter_modify_order_by.sql @@ -3,30 +3,30 @@ SET optimize_on_insert = 0; DROP TABLE IF EXISTS no_order; CREATE TABLE no_order(a UInt32, b UInt32) ENGINE = MergeTree ORDER BY tuple(); -ALTER TABLE no_order MODIFY ORDER BY (a); -- { serverError 36} +ALTER TABLE no_order MODIFY ORDER BY (a); -- { serverError BAD_ARGUMENTS} DROP TABLE no_order; DROP TABLE IF EXISTS old_style; set allow_deprecated_syntax_for_merge_tree=1; CREATE TABLE old_style(d Date, x UInt32) ENGINE MergeTree(d, x, 8192); -ALTER TABLE old_style ADD COLUMN y UInt32, MODIFY ORDER BY (x, y); -- { serverError 36} +ALTER TABLE old_style ADD COLUMN y UInt32, MODIFY ORDER BY (x, y); -- { serverError BAD_ARGUMENTS} DROP TABLE old_style; DROP TABLE IF EXISTS summing; CREATE TABLE summing(x UInt32, y UInt32, val UInt32) ENGINE SummingMergeTree ORDER BY (x, y); /* Can't add an expression with existing column to ORDER BY. */ -ALTER TABLE summing MODIFY ORDER BY (x, y, -val); -- { serverError 36} +ALTER TABLE summing MODIFY ORDER BY (x, y, -val); -- { serverError BAD_ARGUMENTS} /* Can't add an expression with existing column to ORDER BY. */ -ALTER TABLE summing ADD COLUMN z UInt32 DEFAULT x + 1, MODIFY ORDER BY (x, y, -z); -- { serverError 36} +ALTER TABLE summing ADD COLUMN z UInt32 DEFAULT x + 1, MODIFY ORDER BY (x, y, -z); -- { serverError BAD_ARGUMENTS} /* Can't add nonexistent column to ORDER BY. */ -ALTER TABLE summing MODIFY ORDER BY (x, y, nonexistent); -- { serverError 47} +ALTER TABLE summing MODIFY ORDER BY (x, y, nonexistent); -- { serverError UNKNOWN_IDENTIFIER} /* Can't modyfy ORDER BY so that it is no longer a prefix of the PRIMARY KEY. 
*/ -ALTER TABLE summing MODIFY ORDER BY x; -- { serverError 36} +ALTER TABLE summing MODIFY ORDER BY x; -- { serverError BAD_ARGUMENTS} ALTER TABLE summing ADD COLUMN z UInt32 AFTER y, MODIFY ORDER BY (x, y, -z); diff --git a/tests/queries/0_stateless/00754_alter_modify_order_by_replicated_zookeeper_long.sql b/tests/queries/0_stateless/00754_alter_modify_order_by_replicated_zookeeper_long.sql index 29d0ef79b91..e29b6b996bc 100644 --- a/tests/queries/0_stateless/00754_alter_modify_order_by_replicated_zookeeper_long.sql +++ b/tests/queries/0_stateless/00754_alter_modify_order_by_replicated_zookeeper_long.sql @@ -8,7 +8,7 @@ SET send_logs_level = 'fatal'; DROP TABLE IF EXISTS old_style; set allow_deprecated_syntax_for_merge_tree=1; CREATE TABLE old_style(d Date, x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/{database}/test_00754/old_style', 'r1', d, x, 8192); -ALTER TABLE old_style ADD COLUMN y UInt32, MODIFY ORDER BY (x, y); -- { serverError 36 } +ALTER TABLE old_style ADD COLUMN y UInt32, MODIFY ORDER BY (x, y); -- { serverError BAD_ARGUMENTS } DROP TABLE old_style; DROP TABLE IF EXISTS summing_r1; @@ -17,16 +17,16 @@ CREATE TABLE summing_r1(x UInt32, y UInt32, val UInt32) ENGINE ReplicatedSumming CREATE TABLE summing_r2(x UInt32, y UInt32, val UInt32) ENGINE ReplicatedSummingMergeTree('/clickhouse/tables/{database}/test_00754/summing', 'r2') ORDER BY (x, y); /* Can't add an expression with existing column to ORDER BY. */ -ALTER TABLE summing_r1 MODIFY ORDER BY (x, y, -val); -- { serverError 36 } +ALTER TABLE summing_r1 MODIFY ORDER BY (x, y, -val); -- { serverError BAD_ARGUMENTS } /* Can't add an expression with existing column to ORDER BY. */ -ALTER TABLE summing_r1 ADD COLUMN z UInt32 DEFAULT x + 1, MODIFY ORDER BY (x, y, -z); -- { serverError 36 } +ALTER TABLE summing_r1 ADD COLUMN z UInt32 DEFAULT x + 1, MODIFY ORDER BY (x, y, -z); -- { serverError BAD_ARGUMENTS } /* Can't add nonexistent column to ORDER BY. */ -ALTER TABLE summing_r1 MODIFY ORDER BY (x, y, nonexistent); -- { serverError 47 } +ALTER TABLE summing_r1 MODIFY ORDER BY (x, y, nonexistent); -- { serverError UNKNOWN_IDENTIFIER } /* Can't modyfy ORDER BY so that it is no longer a prefix of the PRIMARY KEY. 
*/ -ALTER TABLE summing_r1 MODIFY ORDER BY x; -- { serverError 36 } +ALTER TABLE summing_r1 MODIFY ORDER BY x; -- { serverError BAD_ARGUMENTS } ALTER TABLE summing_r1 ADD COLUMN z UInt32 AFTER y, MODIFY ORDER BY (x, y, -z); @@ -46,7 +46,7 @@ SELECT '*** Check SHOW CREATE TABLE ***'; SHOW CREATE TABLE summing_r2; DETACH TABLE summing_r2; -ALTER TABLE summing_r1 ADD COLUMN t UInt32 AFTER z, MODIFY ORDER BY (x, y, t * t) SETTINGS replication_alter_partitions_sync = 2; -- { serverError 341 } +ALTER TABLE summing_r1 ADD COLUMN t UInt32 AFTER z, MODIFY ORDER BY (x, y, t * t) SETTINGS replication_alter_partitions_sync = 2; -- { serverError UNFINISHED } ATTACH TABLE summing_r2; SYSTEM SYNC REPLICA summing_r2; diff --git a/tests/queries/0_stateless/00757_enum_defaults.sql b/tests/queries/0_stateless/00757_enum_defaults.sql index 45dc9b80cb7..d69ba9ffcb2 100644 --- a/tests/queries/0_stateless/00757_enum_defaults.sql +++ b/tests/queries/0_stateless/00757_enum_defaults.sql @@ -15,7 +15,7 @@ select * from auto_assign_enum1; select CAST(x, 'Int16') from auto_assign_enum1; select * from auto_assign_enum1 where x = -999; -CREATE TABLE auto_assign_enum2 (x enum('a' = -1000, 'b', 'c' = -99)) ENGINE=MergeTree() order by x; -- { serverError 223 } +CREATE TABLE auto_assign_enum2 (x enum('a' = -1000, 'b', 'c' = -99)) ENGINE=MergeTree() order by x; -- { serverError UNEXPECTED_AST_STRUCTURE } CREATE TABLE auto_assign_enum2 (x Enum8( '00' = -128 ,'01','02','03','04','05','06','07','08','09','0A','0B','0C','0D','0E','0F', @@ -31,7 +31,7 @@ CREATE TABLE auto_assign_enum2 (x Enum8( INSERT INTO auto_assign_enum2 VALUES('7F'); select CAST(x, 'Int8') from auto_assign_enum2; -CREATE TABLE auto_assign_enum3 (x enum('a', 'b', NULL)) ENGINE=MergeTree() order by x; -- { serverError 223 } +CREATE TABLE auto_assign_enum3 (x enum('a', 'b', NULL)) ENGINE=MergeTree() order by x; -- { serverError UNEXPECTED_AST_STRUCTURE } DROP TABLE auto_assign_enum; DROP TABLE auto_assign_enum1; diff --git a/tests/queries/0_stateless/00758_array_reverse.sql b/tests/queries/0_stateless/00758_array_reverse.sql index 11192535dc1..c6a6c66cc2a 100644 --- a/tests/queries/0_stateless/00758_array_reverse.sql +++ b/tests/queries/0_stateless/00758_array_reverse.sql @@ -12,4 +12,4 @@ SELECT reverse([]); SELECT reverse([[[[]]]]); SET send_logs_level = 'fatal'; -SELECT '[RE7', ( SELECT '\0' ) AS riwwq, ( SELECT reverse([( SELECT bitTestAll(NULL) ) , ( SELECT '\0' ) AS ddfweeuy]) ) AS xuvv, '', ( SELECT * FROM file() ) AS wqgdswyc, ( SELECT * FROM file() ); -- { serverError 42 } +SELECT '[RE7', ( SELECT '\0' ) AS riwwq, ( SELECT reverse([( SELECT bitTestAll(NULL) ) , ( SELECT '\0' ) AS ddfweeuy]) ) AS xuvv, '', ( SELECT * FROM file() ) AS wqgdswyc, ( SELECT * FROM file() ); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } diff --git a/tests/queries/0_stateless/00762_date_comparsion.sql b/tests/queries/0_stateless/00762_date_comparsion.sql index cc054bc7047..16e5b235485 100644 --- a/tests/queries/0_stateless/00762_date_comparsion.sql +++ b/tests/queries/0_stateless/00762_date_comparsion.sql @@ -1,6 +1,6 @@ SET send_logs_level = 'fatal'; -select today() < 2018-11-14; -- { serverError 43 } +select today() < 2018-11-14; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select toDate('2018-01-01') < '2018-11-14'; select toDate('2018-01-01') < '2018-01-01'; @@ -10,8 +10,8 @@ select toDate('2018-01-01') < toDate('2018-01-01'); select toDate('2018-01-01') == toDate('2018-01-01'); select toDate('2018-01-01') != toDate('2018-01-01'); -select toDate('2018-01-01') < 1; -- { 
serverError 43 } -select toDate('2018-01-01') == 1; -- { serverError 43 } -select toDate('2018-01-01') != 1; -- { serverError 43 } +select toDate('2018-01-01') < 1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select toDate('2018-01-01') == 1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select toDate('2018-01-01') != 1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/00800_low_cardinality_join.sql b/tests/queries/0_stateless/00800_low_cardinality_join.sql index 9c1fd9b7ad3..ecb5194253c 100644 --- a/tests/queries/0_stateless/00800_low_cardinality_join.sql +++ b/tests/queries/0_stateless/00800_low_cardinality_join.sql @@ -11,7 +11,7 @@ select * from (select toLowCardinality(toNullable(dummy)) as val from system.one select * from (select toLowCardinality(dummy) as val from system.one) any left join (select toLowCardinality(toNullable(dummy)) as val from system.one) using val; select * from (select toLowCardinality(toNullable(dummy)) as val from system.one) any left join (select toLowCardinality(toNullable(dummy)) as val from system.one) using val; select '-'; -select * from (select dummy as val from system.one) any left join (select dummy as val from system.one) on val + 0 = val * 1; -- { serverError 403 } +select * from (select dummy as val from system.one) any left join (select dummy as val from system.one) on val + 0 = val * 1; -- { serverError INVALID_JOIN_ON_EXPRESSION } select * from (select dummy as val from system.one) any left join (select dummy as rval from system.one) on val + 0 = rval * 1; select * from (select toLowCardinality(dummy) as val from system.one) any left join (select dummy as rval from system.one) on val + 0 = rval * 1; select * from (select dummy as val from system.one) any left join (select toLowCardinality(dummy) as rval from system.one) on val + 0 = rval * 1; diff --git a/tests/queries/0_stateless/00804_test_alter_compression_codecs.sql b/tests/queries/0_stateless/00804_test_alter_compression_codecs.sql index 85e5f8b63ad..bfa6b16812c 100644 --- a/tests/queries/0_stateless/00804_test_alter_compression_codecs.sql +++ b/tests/queries/0_stateless/00804_test_alter_compression_codecs.sql @@ -50,9 +50,9 @@ CREATE TABLE alter_bad_codec ( id UInt64 CODEC(NONE) ) ENGINE = MergeTree() ORDER BY tuple(); -ALTER TABLE alter_bad_codec ADD COLUMN alter_column DateTime DEFAULT '2019-01-01 00:00:00' CODEC(gbdgkjsdh); -- { serverError 432 } +ALTER TABLE alter_bad_codec ADD COLUMN alter_column DateTime DEFAULT '2019-01-01 00:00:00' CODEC(gbdgkjsdh); -- { serverError UNKNOWN_CODEC } -ALTER TABLE alter_bad_codec ADD COLUMN alter_column DateTime DEFAULT '2019-01-01 00:00:00' CODEC(ZSTD(100)); -- { serverError 433 } +ALTER TABLE alter_bad_codec ADD COLUMN alter_column DateTime DEFAULT '2019-01-01 00:00:00' CODEC(ZSTD(100)); -- { serverError ILLEGAL_CODEC_PARAMETER } DROP TABLE IF EXISTS alter_bad_codec; diff --git a/tests/queries/0_stateless/00804_test_custom_compression_codecs.sql b/tests/queries/0_stateless/00804_test_custom_compression_codecs.sql index c080c2fc98e..b874ab05e2d 100644 --- a/tests/queries/0_stateless/00804_test_custom_compression_codecs.sql +++ b/tests/queries/0_stateless/00804_test_custom_compression_codecs.sql @@ -37,13 +37,13 @@ DROP TABLE IF EXISTS codec_multiple_direct_specification_2; DROP TABLE IF EXISTS delta_bad_params1; DROP TABLE IF EXISTS delta_bad_params2; -CREATE TABLE bad_codec(id UInt64 CODEC(adssadads)) ENGINE = MergeTree() order by tuple(); -- { serverError 432 } -CREATE TABLE too_many_params(id UInt64 
CODEC(ZSTD(2,3,4,5))) ENGINE = MergeTree() order by tuple(); -- { serverError 431 } -CREATE TABLE params_when_no_params(id UInt64 CODEC(LZ4(1))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 378 } -CREATE TABLE codec_multiple_direct_specification_1(id UInt64 CODEC(MULTIPLE(LZ4, ZSTD))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 432 } -CREATE TABLE codec_multiple_direct_specification_2(id UInt64 CODEC(multiple(LZ4, ZSTD))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 432 } -CREATE TABLE delta_bad_params1(id UInt64 CODEC(Delta(3))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 433 } -CREATE TABLE delta_bad_params2(id UInt64 CODEC(Delta(16))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 433 } +CREATE TABLE bad_codec(id UInt64 CODEC(adssadads)) ENGINE = MergeTree() order by tuple(); -- { serverError UNKNOWN_CODEC } +CREATE TABLE too_many_params(id UInt64 CODEC(ZSTD(2,3,4,5))) ENGINE = MergeTree() order by tuple(); -- { serverError ILLEGAL_SYNTAX_FOR_CODEC_TYPE } +CREATE TABLE params_when_no_params(id UInt64 CODEC(LZ4(1))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError DATA_TYPE_CANNOT_HAVE_ARGUMENTS } +CREATE TABLE codec_multiple_direct_specification_1(id UInt64 CODEC(MULTIPLE(LZ4, ZSTD))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError UNKNOWN_CODEC } +CREATE TABLE codec_multiple_direct_specification_2(id UInt64 CODEC(multiple(LZ4, ZSTD))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError UNKNOWN_CODEC } +CREATE TABLE delta_bad_params1(id UInt64 CODEC(Delta(3))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError ILLEGAL_CODEC_PARAMETER } +CREATE TABLE delta_bad_params2(id UInt64 CODEC(Delta(16))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError ILLEGAL_CODEC_PARAMETER } DROP TABLE IF EXISTS bad_codec; DROP TABLE IF EXISTS params_when_no_params; @@ -88,7 +88,7 @@ CREATE TABLE compression_codec_multiple_more_types ( id Decimal128(13) CODEC(ZSTD, LZ4, ZSTD, ZSTD, Delta(2), Delta(4), Delta(1), LZ4HC), data FixedString(12) CODEC(ZSTD, ZSTD, Delta, Delta, Delta, NONE, NONE, NONE, LZ4HC), ddd Nested (age UInt8, Name String) CODEC(LZ4, LZ4HC, NONE, NONE, NONE, ZSTD, Delta(8)) -) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 36 } +) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError BAD_ARGUMENTS } CREATE TABLE compression_codec_multiple_more_types ( id Decimal128(13) CODEC(ZSTD, LZ4, ZSTD, ZSTD, Delta(2), Delta(4), Delta(1), LZ4HC), diff --git a/tests/queries/0_stateless/00805_round_down.sql b/tests/queries/0_stateless/00805_round_down.sql index 6d59cb0af1a..28377580a86 100644 --- a/tests/queries/0_stateless/00805_round_down.sql +++ b/tests/queries/0_stateless/00805_round_down.sql @@ -5,9 +5,9 @@ SELECT number as x, roundDown(x, [6, 5, 4]) FROM system.numbers LIMIT 10; SELECT 1 as x, roundDown(x, [6, 5, 4]); SET send_logs_level = 'fatal'; -SELECT 1 as x, roundDown(x, []); -- { serverError 43 } -SELECT 1 as x, roundDown(x, emptyArrayUInt8()); -- { serverError 44 } -SELECT roundDown(number, [number]) FROM system.numbers LIMIT 10; -- { serverError 44 } +SELECT 1 as x, roundDown(x, []); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT 1 as x, roundDown(x, emptyArrayUInt8()); -- { serverError ILLEGAL_COLUMN } +SELECT roundDown(number, [number]) FROM system.numbers LIMIT 10; -- { serverError ILLEGAL_COLUMN } SELECT 1 as x, roundDown(x, [1]); SELECT 1 as x, roundDown(x, [1.5]); diff --git a/tests/queries/0_stateless/00808_array_enumerate_segfault.sql 
b/tests/queries/0_stateless/00808_array_enumerate_segfault.sql index 88f9b821685..16c94aeb954 100644 --- a/tests/queries/0_stateless/00808_array_enumerate_segfault.sql +++ b/tests/queries/0_stateless/00808_array_enumerate_segfault.sql @@ -1,4 +1,4 @@ SET send_logs_level = 'fatal'; SELECT arrayEnumerateUniq(anyHeavy([]), []); -SELECT arrayEnumerateDense([], [sequenceCount(NULL)]); -- { serverError 190 } +SELECT arrayEnumerateDense([], [sequenceCount(NULL)]); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } SELECT arrayEnumerateDense([STDDEV_SAMP(NULL, 910947.571364)], [NULL]); diff --git a/tests/queries/0_stateless/00808_not_optimize_predicate.sql b/tests/queries/0_stateless/00808_not_optimize_predicate.sql index c39f1ff2ad1..560ace5efe1 100644 --- a/tests/queries/0_stateless/00808_not_optimize_predicate.sql +++ b/tests/queries/0_stateless/00808_not_optimize_predicate.sql @@ -1,6 +1,6 @@ SET send_logs_level = 'fatal'; SET convert_query_to_cnf = 0; -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; DROP TABLE IF EXISTS test_00808; CREATE TABLE test_00808(date Date, id Int8, name String, value Int64, sign Int8) ENGINE = CollapsingMergeTree(sign) ORDER BY (id, date); @@ -18,8 +18,8 @@ SELECT * FROM (SELECT id FROM test_00808 GROUP BY id LIMIT 1 BY id) WHERE id = 1 SET force_primary_key = 1; SELECT '-------FORCE PRIMARY KEY-------'; -SELECT * FROM (SELECT * FROM test_00808 LIMIT 1) WHERE id = 1; -- { serverError 277 } -SELECT * FROM (SELECT id FROM test_00808 GROUP BY id LIMIT 1 BY id) WHERE id = 1; -- { serverError 277 } +SELECT * FROM (SELECT * FROM test_00808 LIMIT 1) WHERE id = 1; -- { serverError INDEX_NOT_USED } +SELECT * FROM (SELECT id FROM test_00808 GROUP BY id LIMIT 1 BY id) WHERE id = 1; -- { serverError INDEX_NOT_USED } SELECT '-------CHECK STATEFUL FUNCTIONS-------'; SELECT n, z, changed FROM ( diff --git a/tests/queries/0_stateless/00809_add_days_segfault.sql b/tests/queries/0_stateless/00809_add_days_segfault.sql index d2d91dd2711..3be654f71d1 100644 --- a/tests/queries/0_stateless/00809_add_days_segfault.sql +++ b/tests/queries/0_stateless/00809_add_days_segfault.sql @@ -8,5 +8,5 @@ SET send_logs_level = 'fatal'; SELECT ignore(addDays((CAST((96.338) AS DateTime)), -3)); SELECT ignore(subtractDays((CAST((-5263074.47) AS DateTime)), -737895)); -SELECT quantileDeterministic([], identity(( SELECT subtractDays((CAST((566450.398706) AS DateTime)), 54) ) )), '\0', []; -- { serverError 43 } -SELECT sequenceCount((CAST((( SELECT NULL ) AS rg, ( SELECT ( SELECT [], 'A') AS String))]]); -- { serverError 42 } -SELECT ( SELECT toDecimal128([], rowNumberInBlock()) ) , lcm('', [[(CAST(('>A') AS String))]]); -- { serverError 44 } +SELECT quantileDeterministic([], identity(( SELECT subtractDays((CAST((566450.398706) AS DateTime)), 54) ) )), '\0', []; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT sequenceCount((CAST((( SELECT NULL ) AS rg, ( SELECT ( SELECT [], 'A') AS String))]]); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT ( SELECT toDecimal128([], rowNumberInBlock()) ) , lcm('', [[(CAST(('>A') AS String))]]); -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/00818_alias_bug_4110.sql b/tests/queries/0_stateless/00818_alias_bug_4110.sql index 9f3657221e4..d057bacc908 100644 --- a/tests/queries/0_stateless/00818_alias_bug_4110.sql +++ b/tests/queries/0_stateless/00818_alias_bug_4110.sql @@ -12,10 +12,10 @@ select s.a + 2 as b, b - 1 as a from (select 10 as a) s; select s.a as a, s.a + 2 as b from (select 10 as a) s; select s.a + 1 as a, s.a + 2 as b from (select 10 as a) s; select a + 1 as a, a + 1 as b from (select 10 as a); -select a + 1 as b, b + 1 as a from (select 10 as a); -- { serverError 174 } -select 10 as a, a + 1 as a; -- { serverError 47 } -with 10 as a select a as a; -- { serverError 47 } -with 10 as a select a + 1 as a; -- { serverError 47 } +select a + 1 as b, b + 1 as a from (select 10 as a); -- { serverError 
CYCLIC_ALIASES } +select 10 as a, a + 1 as a; -- { serverError UNKNOWN_IDENTIFIER } +with 10 as a select a as a; -- { serverError UNKNOWN_IDENTIFIER } +with 10 as a select a + 1 as a; -- { serverError UNKNOWN_IDENTIFIER } SELECT 0 as t FROM (SELECT 1 as t) as inn WHERE inn.t = 1; SELECT sum(value) as value FROM (SELECT 1 as value) as data WHERE data.value > 0; diff --git a/tests/queries/0_stateless/00819_full_join_wrong_columns_in_block.sql b/tests/queries/0_stateless/00819_full_join_wrong_columns_in_block.sql index cdb9e57d17f..3c0246619da 100644 --- a/tests/queries/0_stateless/00819_full_join_wrong_columns_in_block.sql +++ b/tests/queries/0_stateless/00819_full_join_wrong_columns_in_block.sql @@ -16,4 +16,4 @@ SET any_join_distinct_right_table_keys = 0; SELECT * FROM (SELECT 1 AS a, 'x' AS b) any join (SELECT 1 as a, 'y' as b) using a; SELECT * FROM (SELECT 1 AS a, 'x' AS b) left join (SELECT 1 as a, 'y' as b) using a; SELECT * FROM (SELECT 1 AS a, 'x' AS b) any right join (SELECT 1 as a, 'y' as b) using a; -SELECT * FROM (SELECT 1 AS a, 'x' AS b) any full join (SELECT 1 as a, 'y' as b) using a; -- { serverError 48 } +SELECT * FROM (SELECT 1 AS a, 'x' AS b) any full join (SELECT 1 as a, 'y' as b) using a; -- { serverError NOT_IMPLEMENTED } diff --git a/tests/queries/0_stateless/00831_quantile_weighted_parameter_check.sql b/tests/queries/0_stateless/00831_quantile_weighted_parameter_check.sql index 1d31b80f193..e16a3157e58 100644 --- a/tests/queries/0_stateless/00831_quantile_weighted_parameter_check.sql +++ b/tests/queries/0_stateless/00831_quantile_weighted_parameter_check.sql @@ -1,2 +1,2 @@ SELECT quantileExactWeighted(0.5)(number, number) FROM numbers(10); -SELECT quantileExactWeighted(0.5)(number, 0.1) FROM numbers(10); -- { serverError 43 } +SELECT quantileExactWeighted(0.5)(number, 0.1) FROM numbers(10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/00832_storage_file_lock.sql b/tests/queries/0_stateless/00832_storage_file_lock.sql index 808d591d351..a1312e8a9b7 100644 --- a/tests/queries/0_stateless/00832_storage_file_lock.sql +++ b/tests/queries/0_stateless/00832_storage_file_lock.sql @@ -1,6 +1,6 @@ DROP TABLE IF EXISTS file; CREATE TABLE file (number UInt64) ENGINE = File(TSV); -SELECT * FROM file; -- { serverError 107 } +SELECT * FROM file; -- { serverError FILE_DOESNT_EXIST } INSERT INTO file VALUES (1); SELECT * FROM file; DROP TABLE file; diff --git a/tests/queries/0_stateless/00833_sleep_overflow.sql b/tests/queries/0_stateless/00833_sleep_overflow.sql index 155637eebd9..dc38bee8b9d 100644 --- a/tests/queries/0_stateless/00833_sleep_overflow.sql +++ b/tests/queries/0_stateless/00833_sleep_overflow.sql @@ -1 +1 @@ -SELECT sleep(4295.967296); -- { serverError 160 } +SELECT sleep(4295.967296); -- { serverError TOO_SLOW } diff --git a/tests/queries/0_stateless/00834_limit_with_constant_expressions.sql b/tests/queries/0_stateless/00834_limit_with_constant_expressions.sql index 47b403a37f9..d4a7f80ed98 100644 --- a/tests/queries/0_stateless/00834_limit_with_constant_expressions.sql +++ b/tests/queries/0_stateless/00834_limit_with_constant_expressions.sql @@ -1,15 +1,15 @@ SELECT number FROM numbers(10) LIMIT 0 + 1; SELECT number FROM numbers(10) LIMIT 1 - 1; SELECT number FROM numbers(10) LIMIT 2 - 1; -SELECT number FROM numbers(10) LIMIT 0 - 1; -- { serverError 440 } +SELECT number FROM numbers(10) LIMIT 0 - 1; -- { serverError INVALID_LIMIT_EXPRESSION } SELECT number FROM numbers(10) LIMIT 1.0; -SELECT number FROM numbers(10) LIMIT 1.5; -- 
{ serverError 440 } -SELECT number FROM numbers(10) LIMIT '1'; -- { serverError 440 } -SELECT number FROM numbers(10) LIMIT now(); -- { serverError 440 } -SELECT number FROM numbers(10) LIMIT today(); -- { serverError 440 } +SELECT number FROM numbers(10) LIMIT 1.5; -- { serverError INVALID_LIMIT_EXPRESSION } +SELECT number FROM numbers(10) LIMIT '1'; -- { serverError INVALID_LIMIT_EXPRESSION } +SELECT number FROM numbers(10) LIMIT now(); -- { serverError INVALID_LIMIT_EXPRESSION } +SELECT number FROM numbers(10) LIMIT today(); -- { serverError INVALID_LIMIT_EXPRESSION } SELECT number FROM numbers(10) LIMIT toUInt8('1'); SELECT number FROM numbers(10) LIMIT toFloat32('1'); -SELECT number FROM numbers(10) LIMIT rand(); -- { serverError 36, 440 } +SELECT number FROM numbers(10) LIMIT rand(); -- { serverError BAD_ARGUMENTS, INVALID_LIMIT_EXPRESSION } SELECT count() <= 1 FROM (SELECT number FROM numbers(10) LIMIT randConstant() % 2); @@ -18,11 +18,11 @@ SELECT number FROM numbers(10) LIMIT 0 BY number; SELECT TOP 5 * FROM numbers(10); -SELECT * FROM numbers(10) LIMIT 0.33 / 0.165 - 0.33 + 0.67; -- { serverError 440 } -SELECT * FROM numbers(10) LIMIT LENGTH('NNN') + COS(0), toDate('0000-00-02'); -- { serverError 440 } -SELECT * FROM numbers(10) LIMIT LENGTH('NNN') + COS(0), toDate('0000-00-02'); -- { serverError 440 } -SELECT * FROM numbers(10) LIMIT a + 5 - a; -- { serverError 47 } -SELECT * FROM numbers(10) LIMIT a + b; -- { serverError 47 } -SELECT * FROM numbers(10) LIMIT 'Hello'; -- { serverError 440 } +SELECT * FROM numbers(10) LIMIT 0.33 / 0.165 - 0.33 + 0.67; -- { serverError INVALID_LIMIT_EXPRESSION } +SELECT * FROM numbers(10) LIMIT LENGTH('NNN') + COS(0), toDate('0000-00-02'); -- { serverError INVALID_LIMIT_EXPRESSION } +SELECT * FROM numbers(10) LIMIT LENGTH('NNN') + COS(0), toDate('0000-00-02'); -- { serverError INVALID_LIMIT_EXPRESSION } +SELECT * FROM numbers(10) LIMIT a + 5 - a; -- { serverError UNKNOWN_IDENTIFIER } +SELECT * FROM numbers(10) LIMIT a + b; -- { serverError UNKNOWN_IDENTIFIER } +SELECT * FROM numbers(10) LIMIT 'Hello'; -- { serverError INVALID_LIMIT_EXPRESSION } SELECT number from numbers(10) order by number limit (select sum(number), count() from numbers(3)).1; diff --git a/tests/queries/0_stateless/00835_if_generic_case.sql b/tests/queries/0_stateless/00835_if_generic_case.sql index 3d7f128f4c1..051fad14603 100644 --- a/tests/queries/0_stateless/00835_if_generic_case.sql +++ b/tests/queries/0_stateless/00835_if_generic_case.sql @@ -17,4 +17,4 @@ SELECT materialize(toDateTime('2000-01-01 00:00:00', 'Asia/Istanbul')) AS x, mat SELECT rand() % 2 = 0 ? number : number FROM numbers(5); -SELECT rand() % 2 = 0 ? number : toString(number) FROM numbers(5); -- { serverError 386 } +SELECT rand() % 2 = 0 ? 
number : toString(number) FROM numbers(5); -- { serverError NO_COMMON_TYPE } diff --git a/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh b/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh index cbe37de6651..238cdcea547 100755 --- a/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh +++ b/tests/queries/0_stateless/00840_long_concurrent_select_and_drop_deadlock.sh @@ -19,15 +19,39 @@ trap cleanup EXIT $CLICKHOUSE_CLIENT -q "create view view_00840 as select count(*),database,table from system.columns group by database,table" -for _ in {1..100}; do - $CLICKHOUSE_CLIENT -nm -q " - drop table if exists view_00840; - create view view_00840 as select count(*),database,table from system.columns group by database,table; - " -done & -for _ in {1..250}; do - $CLICKHOUSE_CLIENT -q "select * from view_00840 order by table" >/dev/null 2>&1 || true -done & + +function thread_drop_create() +{ + local TIMELIMIT=$((SECONDS+$1)) + local it=0 + while [ $SECONDS -lt "$TIMELIMIT" ] && [ $it -lt 100 ]; + do + it=$((it+1)) + $CLICKHOUSE_CLIENT -nm -q " + drop table if exists view_00840; + create view view_00840 as select count(*),database,table from system.columns group by database,table; + " + done +} + +function thread_select() +{ + local TIMELIMIT=$((SECONDS+$1)) + local it=0 + while [ $SECONDS -lt "$TIMELIMIT" ] && [ $it -lt 250 ]; + do + it=$((it+1)) + $CLICKHOUSE_CLIENT -q "select * from view_00840 order by table" >/dev/null 2>&1 || true + done +} + + +export -f thread_drop_create +export -f thread_select + +TIMEOUT=60 +thread_drop_create $TIMEOUT & +thread_select $TIMEOUT & wait trap '' EXIT diff --git a/tests/queries/0_stateless/00841_temporary_table_database.sql b/tests/queries/0_stateless/00841_temporary_table_database.sql index a5927a4cd33..6f4f2ca80b9 100644 --- a/tests/queries/0_stateless/00841_temporary_table_database.sql +++ b/tests/queries/0_stateless/00841_temporary_table_database.sql @@ -2,4 +2,4 @@ CREATE TEMPORARY TABLE t1_00841 (x UInt8); INSERT INTO t1_00841 VALUES (1); SELECT * FROM t1_00841; -CREATE TEMPORARY TABLE test.t2_00841 (x UInt8); -- { serverError 442 } +CREATE TEMPORARY TABLE test.t2_00841 (x UInt8); -- { serverError BAD_DATABASE_FOR_TEMPORARY_TABLE } diff --git a/tests/queries/0_stateless/00842_array_with_constant_overflow.sql b/tests/queries/0_stateless/00842_array_with_constant_overflow.sql index ffd5fecde10..aa22f02a512 100644 --- a/tests/queries/0_stateless/00842_array_with_constant_overflow.sql +++ b/tests/queries/0_stateless/00842_array_with_constant_overflow.sql @@ -1 +1 @@ -SELECT arrayWithConstant(-231.37104, -138); -- { serverError 128 } +SELECT arrayWithConstant(-231.37104, -138); -- { serverError TOO_LARGE_ARRAY_SIZE } diff --git a/tests/queries/0_stateless/00843_optimize_predicate_and_rename_table.sql b/tests/queries/0_stateless/00843_optimize_predicate_and_rename_table.sql index e8a90fa5746..3e1e6497832 100644 --- a/tests/queries/0_stateless/00843_optimize_predicate_and_rename_table.sql +++ b/tests/queries/0_stateless/00843_optimize_predicate_and_rename_table.sql @@ -10,7 +10,7 @@ INSERT INTO test1_00843 VALUES (1); CREATE VIEW view_00843 AS SELECT * FROM test1_00843; SELECT * FROM view_00843; RENAME TABLE test1_00843 TO test2_00843; -SELECT * FROM view_00843; -- { serverError 60 } +SELECT * FROM view_00843; -- { serverError UNKNOWN_TABLE } RENAME TABLE test2_00843 TO test1_00843; SELECT * FROM view_00843; diff --git a/tests/queries/0_stateless/00877_memory_limit_for_new_delete.sql 
b/tests/queries/0_stateless/00877_memory_limit_for_new_delete.sql index 8eb9d83b730..b0480b2e1bd 100644 --- a/tests/queries/0_stateless/00877_memory_limit_for_new_delete.sql +++ b/tests/queries/0_stateless/00877_memory_limit_for_new_delete.sql @@ -8,4 +8,4 @@ SELECT sum(ignore(*)) FROM ( SELECT number, argMax(number, (number, toFixedString(toString(number), 1024))) FROM numbers(1000000) GROUP BY number -) -- { serverError 241 } +) -- { serverError MEMORY_LIMIT_EXCEEDED } diff --git a/tests/queries/0_stateless/00879_cast_to_decimal_crash.sql b/tests/queries/0_stateless/00879_cast_to_decimal_crash.sql index d07a54ecd5a..58d72802706 100644 --- a/tests/queries/0_stateless/00879_cast_to_decimal_crash.sql +++ b/tests/queries/0_stateless/00879_cast_to_decimal_crash.sql @@ -1 +1 @@ -select cast(toIntervalDay(1) as Nullable(Decimal(10, 10))); -- { serverError 70 } +select cast(toIntervalDay(1) as Nullable(Decimal(10, 10))); -- { serverError CANNOT_CONVERT_TYPE } diff --git a/tests/queries/0_stateless/00909_arrayEnumerateUniq.sql b/tests/queries/0_stateless/00909_arrayEnumerateUniq.sql index fe01b2185c2..e952eac2e6a 100644 --- a/tests/queries/0_stateless/00909_arrayEnumerateUniq.sql +++ b/tests/queries/0_stateless/00909_arrayEnumerateUniq.sql @@ -154,32 +154,32 @@ DROP TABLE arrays_test; select '---------BAD'; SELECT arrayEnumerateUniqRanked(); -- { serverError TOO_FEW_ARGUMENTS_FOR_FUNCTION } SELECT arrayEnumerateUniqRanked([]); -SELECT arrayEnumerateUniqRanked(1); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(2,[]); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(2,[],2); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(2,[],[]); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(2,[],[],3); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked([],2); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked([],2,[]); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(0,[],0); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(0,0,0); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(1,1,1); -- { serverError 36 } -SELECT arrayEnumerateDenseRanked(1, [10,20,10,30], 0); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked(1, [[7,8,9,10],[10,11,12]], 2, [[14,15,16],[17,18,19],[20],[21]], 2); -- { serverError 190 } +SELECT arrayEnumerateUniqRanked(1); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(2,[]); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(2,[],2); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(2,[],[]); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(2,[],[],3); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked([],2); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked([],2,[]); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(0,[],0); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(0,0,0); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(1,1,1); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateDenseRanked(1, [10,20,10,30], 0); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked(1, [[7,8,9,10],[10,11,12]], 2, [[14,15,16],[17,18,19],[20],[21]], 2); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } -SELECT arrayEnumerateUniqRanked(1, [1,2], 1, ['a', 'b', 'c', 'd'],1); -- { serverError 190 } -SELECT arrayEnumerateUniqRanked(1, [1,2], 1, [14,15,16,17,18,19], 1); -- { serverError 190 } -SELECT arrayEnumerateUniqRanked(1, [14,15,16,17,18,19], 1, [1,2], 1); -- { serverError 190 } -SELECT 
arrayEnumerateUniqRanked(1, [1,1,1,1,1,1], 1, [1,1], 1); -- { serverError 190 } -SELECT arrayEnumerateUniqRanked(1, [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], 1, [1,1], 1); -- { serverError 190 } +SELECT arrayEnumerateUniqRanked(1, [1,2], 1, ['a', 'b', 'c', 'd'],1); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } +SELECT arrayEnumerateUniqRanked(1, [1,2], 1, [14,15,16,17,18,19], 1); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } +SELECT arrayEnumerateUniqRanked(1, [14,15,16,17,18,19], 1, [1,2], 1); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } +SELECT arrayEnumerateUniqRanked(1, [1,1,1,1,1,1], 1, [1,1], 1); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } +SELECT arrayEnumerateUniqRanked(1, [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1], 1, [1,1], 1); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } SELECT arrayEnumerateDenseRanked([], [], []); SELECT arrayEnumerateDenseRanked([], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], [], []); SELECT arrayEnumerateDenseRanked([1,2], [1,2], [1,2]); SELECT arrayEnumerateUniqRanked([1,2], [1,2], [1,2]); -SELECT arrayEnumerateUniqRanked([1,2], 3, 4, 5); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked([1,2], 1, 2); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked([1,2], 1, 3, 4, 5); -- { serverError 36 } -SELECT arrayEnumerateUniqRanked([1,2], 1, 3, [4], 5); -- { serverError 36 } +SELECT arrayEnumerateUniqRanked([1,2], 3, 4, 5); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked([1,2], 1, 2); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked([1,2], 1, 3, 4, 5); -- { serverError BAD_ARGUMENTS } +SELECT arrayEnumerateUniqRanked([1,2], 1, 3, [4], 5); -- { serverError BAD_ARGUMENTS } SELECT arrayEnumerateDenseRanked([[[[[[[[[[42]]]]]]]]]]); SELECT arrayEnumerateUniqRanked('wat', [1,2]); -- { serverError BAD_ARGUMENTS } SELECT arrayEnumerateUniqRanked(1, [1,2], 'boom'); -- { serverError BAD_ARGUMENTS } @@ -190,7 +190,7 @@ SELECT arrayEnumerateDenseRanked(-101, ['\0']); -- { serverError BAD_ARGUMENTS } SELECT arrayEnumerateDenseRanked(1.1, [10,20,10,30]); -- { serverError BAD_ARGUMENTS } SELECT arrayEnumerateDenseRanked([10,20,10,30], 0.4); -- { serverError BAD_ARGUMENTS } SELECT arrayEnumerateDenseRanked([10,20,10,30], 1.8); -- { serverError BAD_ARGUMENTS } -SELECT arrayEnumerateUniqRanked(1, [], 1000000000); -- { 
serverError 36 } +SELECT arrayEnumerateUniqRanked(1, [], 1000000000); -- { serverError BAD_ARGUMENTS } -- skipping empty arrays diff --git a/tests/queries/0_stateless/00910_crash_when_distributed_modify_order_by.sql b/tests/queries/0_stateless/00910_crash_when_distributed_modify_order_by.sql index 00811d8ab89..67a1043586a 100644 --- a/tests/queries/0_stateless/00910_crash_when_distributed_modify_order_by.sql +++ b/tests/queries/0_stateless/00910_crash_when_distributed_modify_order_by.sql @@ -5,6 +5,6 @@ DROP TABLE IF EXISTS union2; set allow_deprecated_syntax_for_merge_tree=1; CREATE TABLE union1 ( date Date, a Int32, b Int32, c Int32, d Int32) ENGINE = MergeTree(date, (a, date), 8192); CREATE TABLE union2 ( date Date, a Int32, b Int32, c Int32, d Int32) ENGINE = Distributed(test_shard_localhost, currentDatabase(), 'union1'); -ALTER TABLE union2 MODIFY ORDER BY a; -- { serverError 48 } +ALTER TABLE union2 MODIFY ORDER BY a; -- { serverError NOT_IMPLEMENTED } DROP TABLE union1; DROP TABLE union2; diff --git a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql index c40419e4d56..4f5213a2dbb 100644 --- a/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql +++ b/tests/queries/0_stateless/00918_has_unsufficient_type_check.sql @@ -1,3 +1,3 @@ -SELECT hasAny([['Hello, world']], [[[]]]); -- { serverError 386 } +SELECT hasAny([['Hello, world']], [[[]]]); -- { serverError NO_COMMON_TYPE } SELECT hasAny([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); SELECT hasAll([['Hello, world']], [['Hello', 'world'], ['Hello, world']]); diff --git a/tests/queries/0_stateless/00921_datetime64_basic.sql b/tests/queries/0_stateless/00921_datetime64_basic.sql index 13abe3e64d0..33f07501904 100644 --- a/tests/queries/0_stateless/00921_datetime64_basic.sql +++ b/tests/queries/0_stateless/00921_datetime64_basic.sql @@ -1,20 +1,20 @@ DROP TABLE IF EXISTS A; -SELECT CAST(1 as DateTime64('abc')); -- { serverError 43 } # Invalid scale parameter type -SELECT CAST(1 as DateTime64(100)); -- { serverError 69 } # too big scale -SELECT CAST(1 as DateTime64(-1)); -- { serverError 43 } # signed scale parameter type +SELECT CAST(1 as DateTime64('abc')); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } # Invalid scale parameter type +SELECT CAST(1 as DateTime64(100)); -- { serverError ARGUMENT_OUT_OF_BOUND } # too big scale +SELECT CAST(1 as DateTime64(-1)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } # signed scale parameter type SELECT CAST(1 as DateTime64(3, 'qqq')); -- { serverError BAD_ARGUMENTS } # invalid timezone -SELECT toDateTime64('2019-09-16 19:20:11.234', 'abc'); -- { serverError 43 } # invalid scale -SELECT toDateTime64('2019-09-16 19:20:11.234', 100); -- { serverError 69 } # too big scale -SELECT toDateTime64(CAST([['CLb5Ph ']], 'String'), uniqHLL12('2Gs1V', 752)); -- { serverError 44 } # non-const string and non-const scale +SELECT toDateTime64('2019-09-16 19:20:11.234', 'abc'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } # invalid scale +SELECT toDateTime64('2019-09-16 19:20:11.234', 100); -- { serverError ARGUMENT_OUT_OF_BOUND } # too big scale +SELECT toDateTime64(CAST([['CLb5Ph ']], 'String'), uniqHLL12('2Gs1V', 752)); -- { serverError ILLEGAL_COLUMN } # non-const string and non-const scale SELECT toDateTime64('2019-09-16 19:20:11.234', 3, 'qqq'); -- { serverError BAD_ARGUMENTS } # invalid timezone -SELECT ignore(now64(gccMurmurHash())); -- { serverError 43 } # Illegal argument type -SELECT ignore(now64('abcd')); -- 
{ serverError 43 } # Illegal argument type -SELECT ignore(now64(number)) FROM system.numbers LIMIT 10; -- { serverError 43 } # Illegal argument type +SELECT ignore(now64(gccMurmurHash())); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } # Illegal argument type +SELECT ignore(now64('abcd')); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } # Illegal argument type +SELECT ignore(now64(number)) FROM system.numbers LIMIT 10; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } # Illegal argument type SELECT ignore(now64(3, 'invalid timezone')); -- { serverError BAD_ARGUMENTS } -SELECT ignore(now64(3, 1111)); -- { serverError 44 } # invalid timezone parameter type +SELECT ignore(now64(3, 1111)); -- { serverError ILLEGAL_COLUMN } # invalid timezone parameter type WITH 'UTC' as timezone SELECT timezone, timeZoneOf(now64(3, timezone)) == timezone; WITH 'Europe/Minsk' as timezone SELECT timezone, timeZoneOf(now64(3, timezone)) == timezone; diff --git a/tests/queries/0_stateless/00926_geo_to_h3.sql b/tests/queries/0_stateless/00926_geo_to_h3.sql index a86548d3555..0a41f06b14e 100644 --- a/tests/queries/0_stateless/00926_geo_to_h3.sql +++ b/tests/queries/0_stateless/00926_geo_to_h3.sql @@ -11,7 +11,7 @@ INSERT INTO table1 VALUES(55.72076201, 37.59813500, 15); INSERT INTO table1 VALUES(55.72076200, 37.59813500, 14); select geoToH3(37.63098076, 55.77922738, 15); -select geoToH3(37.63098076, 55.77922738, 24); -- { serverError 69 } +select geoToH3(37.63098076, 55.77922738, 24); -- { serverError ARGUMENT_OUT_OF_BOUND } select geoToH3(lon, lat, resolution) from table1 order by lat, lon, resolution; select geoToH3(lon, lat, resolution) AS k from table1 order by lat, lon, k; select lat, lon, geoToH3(lon, lat, resolution) AS k from table1 order by lat, lon, k; diff --git a/tests/queries/0_stateless/00927_disable_hyperscan.sql b/tests/queries/0_stateless/00927_disable_hyperscan.sql index d6f47d739fb..c07848a4fcc 100644 --- a/tests/queries/0_stateless/00927_disable_hyperscan.sql +++ b/tests/queries/0_stateless/00927_disable_hyperscan.sql @@ -7,10 +7,10 @@ SELECT multiMatchAny(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc SET allow_hyperscan = 0; -SELECT multiMatchAny(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), ['hel+o', 'w(or)*ld']); -- { serverError 446 } -SELECT multiMatchAny(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), materialize(['hel+o', 'w(or)*ld'])); -- { serverError 446 } +SELECT multiMatchAny(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), ['hel+o', 'w(or)*ld']); -- { serverError FUNCTION_NOT_ALLOWED } +SELECT multiMatchAny(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), materialize(['hel+o', 'w(or)*ld'])); -- { serverError FUNCTION_NOT_ALLOWED } -SELECT multiMatchAllIndices(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), ['hel+o', 'w(or)*ld']); -- { serverError 446 } -SELECT multiMatchAllIndices(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), materialize(['hel+o', 'w(or)*ld'])); -- { serverError 446 } +SELECT multiMatchAllIndices(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), ['hel+o', 'w(or)*ld']); -- { serverError FUNCTION_NOT_ALLOWED } +SELECT multiMatchAllIndices(arrayJoin(['hello', 'world', 'hellllllllo', 'wororld', 'abc']), materialize(['hel+o', 'w(or)*ld'])); -- { serverError FUNCTION_NOT_ALLOWED } SELECT multiSearchAny(arrayJoin(['hello', 'world', 'hello, world', 'abc']), ['hello', 'world']); diff --git a/tests/queries/0_stateless/00929_multi_match_edit_distance.sql 
b/tests/queries/0_stateless/00929_multi_match_edit_distance.sql index c86accd260b..a74a9d71621 100644 --- a/tests/queries/0_stateless/00929_multi_match_edit_distance.sql +++ b/tests/queries/0_stateless/00929_multi_match_edit_distance.sql @@ -8,8 +8,8 @@ SELECT '- const pattern'; select multiFuzzyMatchAny('abc', 0, ['a1c']) from system.numbers limit 3; select multiFuzzyMatchAny('abc', 1, ['a1c']) from system.numbers limit 3; select multiFuzzyMatchAny('abc', 2, ['a1c']) from system.numbers limit 3; -select multiFuzzyMatchAny('abc', 3, ['a1c']) from system.numbers limit 3; -- { serverError 36 } -select multiFuzzyMatchAny('abc', 4, ['a1c']) from system.numbers limit 3; -- { serverError 36 } +select multiFuzzyMatchAny('abc', 3, ['a1c']) from system.numbers limit 3; -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAny('abc', 4, ['a1c']) from system.numbers limit 3; -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAny('leftabcright', 1, ['a1c']) from system.numbers limit 3; @@ -19,9 +19,9 @@ select multiFuzzyMatchAny('halo some wrld', 2, ['^hello.*world$']); select multiFuzzyMatchAny('halo some wrld', 2, ['^hello.*world$', '^halo.*world$']); select multiFuzzyMatchAny('halo some wrld', 2, ['^halo.*world$', '^hello.*world$']); select multiFuzzyMatchAny('halo some wrld', 3, ['^hello.*world$']); -select multiFuzzyMatchAny('hello some world', 10, ['^hello.*world$']); -- { serverError 36 } -select multiFuzzyMatchAny('hello some world', -1, ['^hello.*world$']); -- { serverError 43 } -select multiFuzzyMatchAny('hello some world', 10000000000, ['^hello.*world$']); -- { serverError 44 } +select multiFuzzyMatchAny('hello some world', 10, ['^hello.*world$']); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAny('hello some world', -1, ['^hello.*world$']); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select multiFuzzyMatchAny('hello some world', 10000000000, ['^hello.*world$']); -- { serverError ILLEGAL_COLUMN } select multiFuzzyMatchAny('http://hyperscan_is_nice.de/st', 2, ['http://hyperscan_is_nice.de/(st\\d\\d$|st\\d\\d\\.|st1[0-4]\\d|st150|st\\d$|gl|rz|ch)']); select multiFuzzyMatchAny('string', 0, ['zorro$', '^tring', 'in$', 'how.*', 'it{2}', 'works']); select multiFuzzyMatchAny('string', 1, ['zorro$', '^tring', 'ip$', 'how.*', 'it{2}', 'works']); @@ -37,8 +37,8 @@ SELECT '- non-const pattern'; select multiFuzzyMatchAny(materialize('abc'), 0, materialize(['a1c'])) from system.numbers limit 3; select multiFuzzyMatchAny(materialize('abc'), 1, materialize(['a1c'])) from system.numbers limit 3; select multiFuzzyMatchAny(materialize('abc'), 2, materialize(['a1c'])) from system.numbers limit 3; -select multiFuzzyMatchAny(materialize('abc'), 3, materialize(['a1c'])) from system.numbers limit 3; -- { serverError 36} -select multiFuzzyMatchAny(materialize('abc'), 4, materialize(['a1c'])) from system.numbers limit 3; -- { serverError 36} +select multiFuzzyMatchAny(materialize('abc'), 3, materialize(['a1c'])) from system.numbers limit 3; -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAny(materialize('abc'), 4, materialize(['a1c'])) from system.numbers limit 3; -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAny(materialize('leftabcright'), 1, materialize(['a1c'])); @@ -48,9 +48,9 @@ select multiFuzzyMatchAny(materialize('halo some wrld'), 2, materialize(['^hello select multiFuzzyMatchAny(materialize('halo some wrld'), 2, materialize(['^hello.*world$', '^halo.*world$'])); select multiFuzzyMatchAny(materialize('halo some wrld'), 2, materialize(['^halo.*world$', '^hello.*world$']));
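-- Editor's note (illustrative, not part of the patch): the trailing `-- { serverError NAME }` markers
-- seen throughout these tests are assertions consumed by the clickhouse-test harness; a query so
-- annotated must fail with the named error, which is why this PR swaps bare numeric codes for their
-- symbolic names. As a minimal sketch of how the name/code mapping can be inspected at runtime
-- (assuming the error has fired at least once since server startup, since system.errors only
-- records errors that were actually raised):
--   SELECT name, code FROM system.errors WHERE name IN ('BAD_ARGUMENTS', 'ILLEGAL_TYPE_OF_ARGUMENT');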
select multiFuzzyMatchAny(materialize('halo some wrld'), 3, materialize(['^hello.*world$'])); -select multiFuzzyMatchAny(materialize('hello some world'), 10, materialize(['^hello.*world$'])); -- { serverError 36 } -select multiFuzzyMatchAny(materialize('hello some world'), -1, materialize(['^hello.*world$'])); -- { serverError 43 } -select multiFuzzyMatchAny(materialize('hello some world'), 10000000000, materialize(['^hello.*world$'])); -- { serverError 44 } +select multiFuzzyMatchAny(materialize('hello some world'), 10, materialize(['^hello.*world$'])); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAny(materialize('hello some world'), -1, materialize(['^hello.*world$'])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select multiFuzzyMatchAny(materialize('hello some world'), 10000000000, materialize(['^hello.*world$'])); -- { serverError ILLEGAL_COLUMN } select multiFuzzyMatchAny(materialize('http://hyperscan_is_nice.de/st'), 2, materialize(['http://hyperscan_is_nice.de/(st\\d\\d$|st\\d\\d\\.|st1[0-4]\\d|st150|st\\d$|gl|rz|ch)'])); select multiFuzzyMatchAny(materialize('string'), 0, materialize(['zorro$', '^tring', 'in$', 'how.*', 'it{2}', 'works'])); select multiFuzzyMatchAny(materialize('string'), 1, materialize(['zorro$', '^tring', 'ip$', 'how.*', 'it{2}', 'works'])); diff --git a/tests/queries/0_stateless/00930_max_partitions_per_insert_block.sql b/tests/queries/0_stateless/00930_max_partitions_per_insert_block.sql index 93751e609e6..3d45a3e02c0 100644 --- a/tests/queries/0_stateless/00930_max_partitions_per_insert_block.sql +++ b/tests/queries/0_stateless/00930_max_partitions_per_insert_block.sql @@ -9,6 +9,6 @@ SELECT count() FROM system.parts WHERE database = currentDatabase() AND table = SET max_partitions_per_insert_block = 1; INSERT INTO partitions SELECT * FROM system.numbers LIMIT 1; -INSERT INTO partitions SELECT * FROM system.numbers LIMIT 2; -- { serverError 252 } +INSERT INTO partitions SELECT * FROM system.numbers LIMIT 2; -- { serverError TOO_MANY_PARTS } DROP TABLE partitions; diff --git a/tests/queries/0_stateless/00933_alter_ttl.sql b/tests/queries/0_stateless/00933_alter_ttl.sql index b0e697d024b..9ec1347568c 100644 --- a/tests/queries/0_stateless/00933_alter_ttl.sql +++ b/tests/queries/0_stateless/00933_alter_ttl.sql @@ -13,14 +13,14 @@ optimize table ttl partition 10 final; select * from ttl order by d, a; -alter table ttl modify ttl a; -- { serverError 450 } +alter table ttl modify ttl a; -- { serverError BAD_TTL_EXPRESSION } drop table if exists ttl; create table ttl (d Date, a Int) engine = MergeTree order by tuple() partition by toDayOfMonth(d) settings remove_empty_parts = 0; alter table ttl modify column a Int ttl d + interval 1 day; desc table ttl; -alter table ttl modify column d Int ttl d + interval 1 day; -- { serverError 43 } -alter table ttl modify column d DateTime ttl d + interval 1 day; -- { serverError 524 } +alter table ttl modify column d Int ttl d + interval 1 day; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +alter table ttl modify column d DateTime ttl d + interval 1 day; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } drop table if exists ttl; diff --git a/tests/queries/0_stateless/00933_ttl_simple.sql b/tests/queries/0_stateless/00933_ttl_simple.sql index c1df338a0ff..c3744396873 100644 --- a/tests/queries/0_stateless/00933_ttl_simple.sql +++ b/tests/queries/0_stateless/00933_ttl_simple.sql @@ -102,12 +102,12 @@ set send_logs_level = 'fatal'; drop table if exists ttl_00933_1; -create table ttl_00933_1 (d DateTime ttl d) engine = 
MergeTree order by tuple() partition by toSecond(d); -- { serverError 44} -create table ttl_00933_1 (d DateTime, a Int ttl d) engine = MergeTree order by a partition by toSecond(d); -- { serverError 44} -create table ttl_00933_1 (d DateTime, a Int ttl 2 + 2) engine = MergeTree order by tuple() partition by toSecond(d); -- { serverError 450 } -create table ttl_00933_1 (d DateTime, a Int ttl d - d) engine = MergeTree order by tuple() partition by toSecond(d); -- { serverError 450 } +create table ttl_00933_1 (d DateTime ttl d) engine = MergeTree order by tuple() partition by toSecond(d); -- { serverError ILLEGAL_COLUMN } +create table ttl_00933_1 (d DateTime, a Int ttl d) engine = MergeTree order by a partition by toSecond(d); -- { serverError ILLEGAL_COLUMN } +create table ttl_00933_1 (d DateTime, a Int ttl 2 + 2) engine = MergeTree order by tuple() partition by toSecond(d); -- { serverError BAD_TTL_EXPRESSION } +create table ttl_00933_1 (d DateTime, a Int ttl d - d) engine = MergeTree order by tuple() partition by toSecond(d); -- { serverError BAD_TTL_EXPRESSION } -create table ttl_00933_1 (d DateTime, a Int ttl d + interval 1 day) engine = Log; -- { serverError 36 } -create table ttl_00933_1 (d DateTime, a Int) engine = Log ttl d + interval 1 day; -- { serverError 36 } +create table ttl_00933_1 (d DateTime, a Int ttl d + interval 1 day) engine = Log; -- { serverError BAD_ARGUMENTS } +create table ttl_00933_1 (d DateTime, a Int) engine = Log ttl d + interval 1 day; -- { serverError BAD_ARGUMENTS } drop table if exists ttl_00933_1; diff --git a/tests/queries/0_stateless/00936_function_result_with_operator_in.sql b/tests/queries/0_stateless/00936_function_result_with_operator_in.sql index 0b253021f39..85979600689 100644 --- a/tests/queries/0_stateless/00936_function_result_with_operator_in.sql +++ b/tests/queries/0_stateless/00936_function_result_with_operator_in.sql @@ -22,13 +22,13 @@ SELECT 'a' IN splitByChar('c', 'abcdef'); SELECT 'errors:'; -- non-constant expressions in the right side of IN -SELECT count() FROM samples WHERE 1 IN range(samples.value); -- { serverError 1, 47 } -SELECT count() FROM samples WHERE 1 IN range(rand() % 1000); -- { serverError 1, 36 } +SELECT count() FROM samples WHERE 1 IN range(samples.value); -- { serverError UNSUPPORTED_METHOD, UNKNOWN_IDENTIFIER } +SELECT count() FROM samples WHERE 1 IN range(rand() % 1000); -- { serverError UNSUPPORTED_METHOD, BAD_ARGUMENTS } -- index is not used -SELECT count() FROM samples WHERE value IN range(3); -- { serverError 277 } +SELECT count() FROM samples WHERE value IN range(3); -- { serverError INDEX_NOT_USED } -- wrong type -SELECT 123 IN splitByChar('c', 'abcdef'); -- { serverError 53 } +SELECT 123 IN splitByChar('c', 'abcdef'); -- { serverError TYPE_MISMATCH } DROP TABLE samples; diff --git a/tests/queries/0_stateless/00938_ipv6_cidr_range.sql b/tests/queries/0_stateless/00938_ipv6_cidr_range.sql index 1ceefa8cfb3..7953505651c 100644 --- a/tests/queries/0_stateless/00938_ipv6_cidr_range.sql +++ b/tests/queries/0_stateless/00938_ipv6_cidr_range.sql @@ -1,8 +1,8 @@ SELECT 'check invalid params'; -SELECT IPv6CIDRToRange(1, 1); -- { serverError 43 } -SELECT IPv6CIDRToRange('1234', 1); -- { serverError 43 } -SELECT IPv6CIDRToRange(toFixedString('1234', 10), 1); -- { serverError 43 } -SELECT IPv6CIDRToRange(toFixedString('1234', 16), toUInt16(1)); -- { serverError 43 } +SELECT IPv6CIDRToRange(1, 1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT IPv6CIDRToRange('1234', 1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT
IPv6CIDRToRange(toFixedString('1234', 10), 1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT IPv6CIDRToRange(toFixedString('1234', 16), toUInt16(1)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT 'tests'; diff --git a/tests/queries/0_stateless/00940_max_parts_in_total.sql b/tests/queries/0_stateless/00940_max_parts_in_total.sql index b6d0e6f53ae..e2633c2f0f8 100644 --- a/tests/queries/0_stateless/00940_max_parts_in_total.sql +++ b/tests/queries/0_stateless/00940_max_parts_in_total.sql @@ -3,6 +3,6 @@ create table max_parts_in_total (x UInt64) ENGINE = MergeTree PARTITION BY x ORD INSERT INTO max_parts_in_total SELECT number FROM numbers(10); SELECT 1; -INSERT INTO max_parts_in_total SELECT 123; -- { serverError 252 } +INSERT INTO max_parts_in_total SELECT 123; -- { serverError TOO_MANY_PARTS } drop table max_parts_in_total; diff --git a/tests/queries/0_stateless/00949_format.sql b/tests/queries/0_stateless/00949_format.sql index 0683b2b6952..8fd44cc8eba 100644 --- a/tests/queries/0_stateless/00949_format.sql +++ b/tests/queries/0_stateless/00949_format.sql @@ -16,25 +16,25 @@ select 100 = length(format(concat((select arrayStringConcat(arrayMap(x ->'}', ra select format('', 'first'); select concat('third', 'first', 'second')=format('{2}{0}{1}', 'first', 'second', 'third'); -select format('{', ''); -- { serverError 36 } -select format('{{}', ''); -- { serverError 36 } -select format('{ {}', ''); -- { serverError 36 } -select format('}', ''); -- { serverError 36 } +select format('{', ''); -- { serverError BAD_ARGUMENTS } +select format('{{}', ''); -- { serverError BAD_ARGUMENTS } +select format('{ {}', ''); -- { serverError BAD_ARGUMENTS } +select format('}', ''); -- { serverError BAD_ARGUMENTS } select format('{{', ''); -select format('{}}', ''); -- { serverError 36 } +select format('{}}', ''); -- { serverError BAD_ARGUMENTS } select format('}}', ''); -select format('{2 }', ''); -- { serverError 36 } -select format('{}{}{}{}{}{} }{}', '', '', '', '', '', '', ''); -- { serverError 36 } -select format('{sometext}', ''); -- { serverError 36 } -select format('{\0sometext}', ''); -- { serverError 36 } -select format('{1023}', ''); -- { serverError 36 } -select format('{10000000000000000000000000000000000000000000000000}', ''); -- { serverError 36 } -select format('{} {0}', '', ''); -- { serverError 36 } -select format('{0} {}', '', ''); -- { serverError 36 } -select format('Hello {} World {} {}{}', 'first', 'second', 'third') from system.numbers limit 2; -- { serverError 36 } -select format('Hello {0} World {1} {2}{3}', 'first', 'second', 'third') from system.numbers limit 2; -- { serverError 36 } +select format('{2 }', ''); -- { serverError BAD_ARGUMENTS } +select format('{}{}{}{}{}{} }{}', '', '', '', '', '', '', ''); -- { serverError BAD_ARGUMENTS } +select format('{sometext}', ''); -- { serverError BAD_ARGUMENTS } +select format('{\0sometext}', ''); -- { serverError BAD_ARGUMENTS } +select format('{1023}', ''); -- { serverError BAD_ARGUMENTS } +select format('{10000000000000000000000000000000000000000000000000}', ''); -- { serverError BAD_ARGUMENTS } +select format('{} {0}', '', ''); -- { serverError BAD_ARGUMENTS } +select format('{0} {}', '', ''); -- { serverError BAD_ARGUMENTS } +select format('Hello {} World {} {}{}', 'first', 'second', 'third') from system.numbers limit 2; -- { serverError BAD_ARGUMENTS } +select format('Hello {0} World {1} {2}{3}', 'first', 'second', 'third') from system.numbers limit 2; -- { serverError BAD_ARGUMENTS } -select 50 = length(format((select 
arrayStringConcat(arrayMap(x ->'{', range(101)))), '')); -- { serverError 36 } +select 50 = length(format((select arrayStringConcat(arrayMap(x ->'{', range(101)))), '')); -- { serverError BAD_ARGUMENTS } select format('{}{}{}', materialize(toFixedString('a', 1)), materialize(toFixedString('b', 1)), materialize(toFixedString('c', 1))) == 'abc'; select format('{}{}{}', materialize(toFixedString('a', 1)), materialize('b'), materialize(toFixedString('c', 1))) == 'abc'; diff --git a/tests/queries/0_stateless/00957_neighbor.sql b/tests/queries/0_stateless/00957_neighbor.sql index 8c40f0aab47..ee71b962609 100644 --- a/tests/queries/0_stateless/00957_neighbor.sql +++ b/tests/queries/0_stateless/00957_neighbor.sql @@ -1,16 +1,16 @@ -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; -- no arguments -select neighbor(); -- { serverError 42 } +select neighbor(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -- single argument -select neighbor(1); -- { serverError 42 } +select neighbor(1); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -- greater than 3 arguments -select neighbor(1,2,3,4); -- { serverError 42 } +select neighbor(1,2,3,4); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -- bad default value -select neighbor(dummy, 1, 'hello'); -- { serverError 386 } +select neighbor(dummy, 1, 'hello'); -- { serverError NO_COMMON_TYPE } -- types without common supertype (UInt64 and Int8) -select number, neighbor(number, 1, -10) from numbers(3); -- { serverError 386 } +select number, neighbor(number, 1, -10) from numbers(3); -- { serverError NO_COMMON_TYPE } -- nullable offset is not allowed -select number, if(number > 1, number, null) as offset, neighbor(number, offset) from numbers(3); -- { serverError 43 } +select number, if(number > 1, number, null) as offset, neighbor(number, offset) from numbers(3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select 'Zero offset'; select number, neighbor(number, 0) from numbers(3); select 'Nullable values'; diff --git a/tests/queries/0_stateless/00965_shard_unresolvable_addresses.sql b/tests/queries/0_stateless/00965_shard_unresolvable_addresses.sql index 5b763d2d853..79600c6f67e 100644 --- a/tests/queries/0_stateless/00965_shard_unresolvable_addresses.sql +++ b/tests/queries/0_stateless/00965_shard_unresolvable_addresses.sql @@ -2,7 +2,7 @@ SET prefer_localhost_replica = 1; -SELECT count() FROM remote('127.0.0.1,localhos', system.one); -- { serverError 279 } +SELECT count() FROM remote('127.0.0.1,localhos', system.one); -- { serverError ALL_CONNECTION_TRIES_FAILED } SELECT count() FROM remote('127.0.0.1|localhos', system.one); -- Clear cache to avoid future errors in the logs diff --git a/tests/queries/0_stateless/00969_columns_clause.sql b/tests/queries/0_stateless/00969_columns_clause.sql index fc5b72d3913..e6ae59a2f30 100644 --- a/tests/queries/0_stateless/00969_columns_clause.sql +++ b/tests/queries/0_stateless/00969_columns_clause.sql @@ -11,7 +11,7 @@ SELECT number, COLUMNS('ber') FROM numbers(2); -- It works for unanchored regula SELECT number, COLUMNS('x') FROM numbers(2); SELECT COLUMNS('') FROM numbers(2); -SELECT COLUMNS('x') FROM numbers(10) WHERE number > 5; -- { serverError 51 } +SELECT COLUMNS('x') FROM numbers(10) WHERE number > 5; -- { serverError EMPTY_LIST_OF_COLUMNS_QUERIED } SELECT * FROM numbers(2) WHERE NOT ignore(); SELECT * FROM numbers(2) WHERE NOT ignore(*); @@ -19,9 +19,9 @@ SELECT * FROM numbers(2) WHERE NOT ignore(COLUMNS('.+')); SELECT * FROM numbers(2) WHERE NOT 
ignore(COLUMNS('x')); SELECT COLUMNS('n') + COLUMNS('u') FROM system.numbers LIMIT 2; -SELECT COLUMNS('n') + COLUMNS('u') FROM (SELECT 1 AS a, 2 AS b); -- { serverError 42 } +SELECT COLUMNS('n') + COLUMNS('u') FROM (SELECT 1 AS a, 2 AS b); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } SELECT COLUMNS('a') + COLUMNS('b') FROM (SELECT 1 AS a, 2 AS b); SELECT COLUMNS('a') + COLUMNS('a') FROM (SELECT 1 AS a, 2 AS b); SELECT COLUMNS('b') + COLUMNS('b') FROM (SELECT 1 AS a, 2 AS b); -SELECT COLUMNS('a|b') + COLUMNS('b') FROM (SELECT 1 AS a, 2 AS b); -- { serverError 42 } +SELECT COLUMNS('a|b') + COLUMNS('b') FROM (SELECT 1 AS a, 2 AS b); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } SELECT plus(COLUMNS('^(a|b)$')) FROM (SELECT 1 AS a, 2 AS b); diff --git a/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql b/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql index 5abb1af620a..a5b14893413 100644 --- a/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql +++ b/tests/queries/0_stateless/00971_merge_tree_uniform_read_distribution_and_max_rows_to_read.sql @@ -11,7 +11,7 @@ SELECT count() FROM merge_tree; SET max_rows_to_read = 900000; -- constant ignore will be pruned by part pruner. ignore(*) is used. -SELECT count() FROM merge_tree WHERE not ignore(*); -- { serverError 158 } -SELECT count() FROM merge_tree WHERE not ignore(*); -- { serverError 158 } +SELECT count() FROM merge_tree WHERE not ignore(*); -- { serverError TOO_MANY_ROWS } +SELECT count() FROM merge_tree WHERE not ignore(*); -- { serverError TOO_MANY_ROWS } DROP TABLE merge_tree; diff --git a/tests/queries/0_stateless/00972_geohashesInBox.sql b/tests/queries/0_stateless/00972_geohashesInBox.sql index d52a03b055e..636474e7aa1 100644 --- a/tests/queries/0_stateless/00972_geohashesInBox.sql +++ b/tests/queries/0_stateless/00972_geohashesInBox.sql @@ -65,8 +65,8 @@ SELECT 'input values are clamped to -90..90, -180..180 range'; SELECT length(geohashesInBox(-inf, -inf, inf, inf, 3)); SELECT 'errors'; -SELECT geohashesInBox(); -- { serverError 42 } -- not enough arguments -SELECT geohashesInBox(1, 2, 3, 4, 5); -- { serverError 43 } -- wrong types of arguments -SELECT geohashesInBox(toFloat32(1.0), 2.0, 3.0, 4.0, 5); -- { serverError 43 } -- all lats and longs should be of the same type -SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 12); -- { serverError 128 } -- to many elements in array +SELECT geohashesInBox(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -- not enough arguments +SELECT geohashesInBox(1, 2, 3, 4, 5); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -- wrong types of arguments +SELECT geohashesInBox(toFloat32(1.0), 2.0, 3.0, 4.0, 5); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -- all lats and longs should be of the same type +SELECT geohashesInBox(24.48, 40.56, 24.785, 40.81, 12); -- { serverError TOO_LARGE_ARRAY_SIZE } -- too many elements in array diff --git a/tests/queries/0_stateless/00974_low_cardinality_cast.sql b/tests/queries/0_stateless/00974_low_cardinality_cast.sql index b52c00513d3..04a6785f896 100644 --- a/tests/queries/0_stateless/00974_low_cardinality_cast.sql +++ b/tests/queries/0_stateless/00974_low_cardinality_cast.sql @@ -3,6 +3,6 @@ SET cast_keep_nullable = 0; SELECT CAST('Hello' AS LowCardinality(Nullable(String))); SELECT CAST(Null AS LowCardinality(Nullable(String))); SELECT CAST(CAST('Hello' AS LowCardinality(Nullable(String))) AS String); -SELECT
CAST(CAST(Null AS LowCardinality(Nullable(String))) AS String); -- { serverError 349 } +SELECT CAST(CAST(Null AS LowCardinality(Nullable(String))) AS String); -- { serverError CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN } SELECT CAST(CAST('Hello' AS Nullable(String)) AS String); -SELECT CAST(CAST(Null AS Nullable(String)) AS String); -- { serverError 349 } +SELECT CAST(CAST(Null AS Nullable(String)) AS String); -- { serverError CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN } diff --git a/tests/queries/0_stateless/00975_values_list.sql b/tests/queries/0_stateless/00975_values_list.sql index 35afc99e93e..c1e3a2fbfbd 100644 --- a/tests/queries/0_stateless/00975_values_list.sql +++ b/tests/queries/0_stateless/00975_values_list.sql @@ -12,8 +12,8 @@ SELECT * FROM VALUES('n UInt64, s String, ss String', (1 + 22, '23', toString(23 SELECT * FROM VALUES('a Decimal(4, 4), b String, c String', (divide(toDecimal32(5, 3), 3), 'a', 'b')); -SELECT * FROM VALUES('x Float64', toUInt64(-1)); -- { serverError 69 } -SELECT * FROM VALUES('x Float64', NULL); -- { serverError 53 } +SELECT * FROM VALUES('x Float64', toUInt64(-1)); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT * FROM VALUES('x Float64', NULL); -- { serverError TYPE_MISMATCH } SELECT * FROM VALUES('x Nullable(Float64)', NULL); DROP TABLE values_list; diff --git a/tests/queries/0_stateless/00976_asof_join_on.sql b/tests/queries/0_stateless/00976_asof_join_on.sql index afa125a9271..bd492897be9 100644 --- a/tests/queries/0_stateless/00976_asof_join_on.sql +++ b/tests/queries/0_stateless/00976_asof_join_on.sql @@ -18,10 +18,10 @@ SELECT '-'; SELECT A.a, A.t, B.b, B.t FROM A ASOF JOIN B ON A.a == B.b AND A.t > B.t ORDER BY (A.a, A.t); SELECT '-'; SELECT A.a, A.t, B.b, B.t FROM A ASOF JOIN B ON A.a == B.b AND A.t < B.t ORDER BY (A.a, A.t); -SELECT count() FROM A ASOF JOIN B ON A.a == B.b AND A.t == B.t; -- { serverError 403 } -SELECT count() FROM A ASOF JOIN B ON A.a == B.b AND A.t != B.t; -- { serverError 403 } +SELECT count() FROM A ASOF JOIN B ON A.a == B.b AND A.t == B.t; -- { serverError INVALID_JOIN_ON_EXPRESSION } +SELECT count() FROM A ASOF JOIN B ON A.a == B.b AND A.t != B.t; -- { serverError INVALID_JOIN_ON_EXPRESSION } -SELECT A.a, A.t, B.b, B.t FROM A ASOF JOIN B ON A.a == B.b AND A.t < B.t OR A.a == B.b + 1 ORDER BY (A.a, A.t); -- { serverError 48 } +SELECT A.a, A.t, B.b, B.t FROM A ASOF JOIN B ON A.a == B.b AND A.t < B.t OR A.a == B.b + 1 ORDER BY (A.a, A.t); -- { serverError NOT_IMPLEMENTED } SELECT A.a, A.t, B.b, B.t FROM A ASOF INNER JOIN (SELECT * FROM B UNION ALL SELECT 1, 3) AS B ON B.t <= A.t AND A.a == B.b diff --git a/tests/queries/0_stateless/00976_max_execution_speed.sql b/tests/queries/0_stateless/00976_max_execution_speed.sql index 06386d77413..52c3f05ff43 100644 --- a/tests/queries/0_stateless/00976_max_execution_speed.sql +++ b/tests/queries/0_stateless/00976_max_execution_speed.sql @@ -1,2 +1,2 @@ SET max_execution_speed = 1, max_execution_time = 3; -SELECT count() FROM system.numbers; -- { serverError 159 } +SELECT count() FROM system.numbers; -- { serverError TIMEOUT_EXCEEDED } diff --git a/tests/queries/0_stateless/00977_int_div.sql b/tests/queries/0_stateless/00977_int_div.sql index 4184475e3a0..04fafbfcd8b 100644 --- a/tests/queries/0_stateless/00977_int_div.sql +++ b/tests/queries/0_stateless/00977_int_div.sql @@ -28,4 +28,4 @@ SELECT -1 DIV number FROM numbers(1, 10); SELECT toInt32(number) DIV -1 FROM numbers(1, 10); SELECT toInt64(number) DIV -1 FROM numbers(1, 10); SELECT number DIV -number FROM numbers(1, 10); -SELECT -1 DIV 
0; -- { serverError 153 } +SELECT -1 DIV 0; -- { serverError ILLEGAL_DIVISION } diff --git a/tests/queries/0_stateless/00979_yandex_consistent_hash_fpe.sql b/tests/queries/0_stateless/00979_yandex_consistent_hash_fpe.sql index 3da52f2cb96..60b25111bec 100644 --- a/tests/queries/0_stateless/00979_yandex_consistent_hash_fpe.sql +++ b/tests/queries/0_stateless/00979_yandex_consistent_hash_fpe.sql @@ -1 +1 @@ -SELECT kostikConsistentHash(-1, 40000); -- { serverError 36 } +SELECT kostikConsistentHash(-1, 40000); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/00980_merge_alter_settings.sql b/tests/queries/0_stateless/00980_merge_alter_settings.sql index 174d1fcd508..dc47597991a 100644 --- a/tests/queries/0_stateless/00980_merge_alter_settings.sql +++ b/tests/queries/0_stateless/00980_merge_alter_settings.sql @@ -8,7 +8,7 @@ CREATE TABLE log_for_alter ( Data String ) ENGINE = Log(); -ALTER TABLE log_for_alter MODIFY SETTING aaa=123; -- { serverError 36 } +ALTER TABLE log_for_alter MODIFY SETTING aaa=123; -- { serverError BAD_ARGUMENTS } DROP TABLE IF EXISTS log_for_alter; @@ -19,7 +19,7 @@ CREATE TABLE table_for_alter ( Data String ) ENGINE = MergeTree() ORDER BY id SETTINGS index_granularity=4096, index_granularity_bytes = '10Mi'; -ALTER TABLE table_for_alter MODIFY SETTING index_granularity=555; -- { serverError 472 } +ALTER TABLE table_for_alter MODIFY SETTING index_granularity=555; -- { serverError READONLY_SETTING } SHOW CREATE TABLE table_for_alter; @@ -28,15 +28,15 @@ ALTER TABLE table_for_alter MODIFY SETTING parts_to_throw_insert = 1, parts_to_ SHOW CREATE TABLE table_for_alter; INSERT INTO table_for_alter VALUES (1, '1'); -INSERT INTO table_for_alter VALUES (2, '2'); -- { serverError 252 } +INSERT INTO table_for_alter VALUES (2, '2'); -- { serverError TOO_MANY_PARTS } DETACH TABLE table_for_alter; ATTACH TABLE table_for_alter; -INSERT INTO table_for_alter VALUES (2, '2'); -- { serverError 252 } +INSERT INTO table_for_alter VALUES (2, '2'); -- { serverError TOO_MANY_PARTS } -ALTER TABLE table_for_alter MODIFY SETTING xxx_yyy=124; -- { serverError 115 } +ALTER TABLE table_for_alter MODIFY SETTING xxx_yyy=124; -- { serverError UNKNOWN_SETTING } ALTER TABLE table_for_alter MODIFY SETTING parts_to_throw_insert = 100, parts_to_delay_insert = 100; @@ -64,7 +64,7 @@ CREATE TABLE table_for_reset_setting ( Data String ) ENGINE = MergeTree() ORDER BY id SETTINGS index_granularity=4096, index_granularity_bytes = '10Mi'; -ALTER TABLE table_for_reset_setting MODIFY SETTING index_granularity=555; -- { serverError 472 } +ALTER TABLE table_for_reset_setting MODIFY SETTING index_granularity=555; -- { serverError READONLY_SETTING } SHOW CREATE TABLE table_for_reset_setting; @@ -75,7 +75,7 @@ ALTER TABLE table_for_reset_setting MODIFY SETTING parts_to_throw_insert = 1, p SHOW CREATE TABLE table_for_reset_setting; -INSERT INTO table_for_reset_setting VALUES (1, '1'); -- { serverError 252 } +INSERT INTO table_for_reset_setting VALUES (1, '1'); -- { serverError TOO_MANY_PARTS } ALTER TABLE table_for_reset_setting RESET SETTING parts_to_delay_insert, parts_to_throw_insert; @@ -89,10 +89,10 @@ ATTACH TABLE table_for_reset_setting; SHOW CREATE TABLE table_for_reset_setting; -ALTER TABLE table_for_reset_setting RESET SETTING index_granularity; -- { serverError 472 } +ALTER TABLE table_for_reset_setting RESET SETTING index_granularity; -- { serverError READONLY_SETTING } -- don't execute alter with incorrect setting -ALTER TABLE table_for_reset_setting RESET SETTING merge_with_ttl_timeout, 
unknown_setting; -- { serverError 36 } +ALTER TABLE table_for_reset_setting RESET SETTING merge_with_ttl_timeout, unknown_setting; -- { serverError BAD_ARGUMENTS } ALTER TABLE table_for_reset_setting MODIFY SETTING merge_with_ttl_timeout = 300, max_concurrent_queries = 1; diff --git a/tests/queries/0_stateless/00980_zookeeper_merge_tree_alter_settings.sql b/tests/queries/0_stateless/00980_zookeeper_merge_tree_alter_settings.sql index b049e20cb6d..a717361f033 100644 --- a/tests/queries/0_stateless/00980_zookeeper_merge_tree_alter_settings.sql +++ b/tests/queries/0_stateless/00980_zookeeper_merge_tree_alter_settings.sql @@ -18,7 +18,7 @@ CREATE TABLE replicated_table_for_alter2 ( SHOW CREATE TABLE replicated_table_for_alter1; -ALTER TABLE replicated_table_for_alter1 MODIFY SETTING index_granularity = 4096; -- { serverError 472 } +ALTER TABLE replicated_table_for_alter1 MODIFY SETTING index_granularity = 4096; -- { serverError READONLY_SETTING } SHOW CREATE TABLE replicated_table_for_alter1; @@ -45,7 +45,7 @@ SELECT COUNT() FROM replicated_table_for_alter1; SELECT COUNT() FROM replicated_table_for_alter2; ALTER TABLE replicated_table_for_alter2 MODIFY SETTING parts_to_throw_insert = 1, parts_to_delay_insert = 1; -INSERT INTO replicated_table_for_alter2 VALUES (3, '1'), (4, '2'); -- { serverError 252 } +INSERT INTO replicated_table_for_alter2 VALUES (3, '1'), (4, '2'); -- { serverError TOO_MANY_PARTS } INSERT INTO replicated_table_for_alter1 VALUES (5, '5'), (6, '6'); @@ -89,7 +89,7 @@ CREATE TABLE replicated_table_for_reset_setting2 ( SHOW CREATE TABLE replicated_table_for_reset_setting1; SHOW CREATE TABLE replicated_table_for_reset_setting2; -ALTER TABLE replicated_table_for_reset_setting1 MODIFY SETTING index_granularity = 4096; -- { serverError 472 } +ALTER TABLE replicated_table_for_reset_setting1 MODIFY SETTING index_granularity = 4096; -- { serverError READONLY_SETTING } SHOW CREATE TABLE replicated_table_for_reset_setting1; @@ -109,7 +109,7 @@ SHOW CREATE TABLE replicated_table_for_reset_setting1; SHOW CREATE TABLE replicated_table_for_reset_setting2; -- don't execute alter with incorrect setting -ALTER TABLE replicated_table_for_reset_setting1 RESET SETTING check_delay_period, unknown_setting; -- { serverError 36 } +ALTER TABLE replicated_table_for_reset_setting1 RESET SETTING check_delay_period, unknown_setting; -- { serverError BAD_ARGUMENTS } ALTER TABLE replicated_table_for_reset_setting1 RESET SETTING merge_with_ttl_timeout; ALTER TABLE replicated_table_for_reset_setting2 RESET SETTING merge_with_ttl_timeout; diff --git a/tests/queries/0_stateless/00983_summing_merge_tree_not_an_identifier.sql b/tests/queries/0_stateless/00983_summing_merge_tree_not_an_identifier.sql index 7e138df20f5..091fce9de68 100644 --- a/tests/queries/0_stateless/00983_summing_merge_tree_not_an_identifier.sql +++ b/tests/queries/0_stateless/00983_summing_merge_tree_not_an_identifier.sql @@ -10,4 +10,4 @@ ENGINE = SummingMergeTree([price, spend]) PARTITION BY toYYYYMM(date) ORDER BY id SAMPLE BY id -SETTINGS index_granularity = 8192; -- { serverError 223 } +SETTINGS index_granularity = 8192; -- { serverError UNEXPECTED_AST_STRUCTURE } diff --git a/tests/queries/0_stateless/00986_materialized_view_stack_overflow.sql b/tests/queries/0_stateless/00986_materialized_view_stack_overflow.sql index a39688d81a7..bb95ee6abb6 100644 --- a/tests/queries/0_stateless/00986_materialized_view_stack_overflow.sql +++ b/tests/queries/0_stateless/00986_materialized_view_stack_overflow.sql @@ -9,7 +9,7 @@ CREATE TABLE test2 (a 
UInt8) ENGINE MergeTree ORDER BY a; CREATE MATERIALIZED VIEW mv1 TO test1 AS SELECT a FROM test2; CREATE MATERIALIZED VIEW mv2 TO test2 AS SELECT a FROM test1; -insert into test1 values (1); -- { serverError 306 } +insert into test1 values (1); -- { serverError TOO_DEEP_RECURSION } DROP TABLE test1; DROP TABLE test2; diff --git a/tests/queries/0_stateless/00987_distributed_stack_overflow.sql b/tests/queries/0_stateless/00987_distributed_stack_overflow.sql index 2749827c880..5a22ac56413 100644 --- a/tests/queries/0_stateless/00987_distributed_stack_overflow.sql +++ b/tests/queries/0_stateless/00987_distributed_stack_overflow.sql @@ -4,15 +4,15 @@ DROP TABLE IF EXISTS distr0; DROP TABLE IF EXISTS distr1; DROP TABLE IF EXISTS distr2; -CREATE TABLE distr (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr); -- { serverError 269 } +CREATE TABLE distr (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr); -- { serverError INFINITE_LOOP } -CREATE TABLE distr0 (x UInt8) ENGINE = Distributed(test_shard_localhost, '', distr0); -- { serverError 269 } +CREATE TABLE distr0 (x UInt8) ENGINE = Distributed(test_shard_localhost, '', distr0); -- { serverError INFINITE_LOOP } CREATE TABLE distr1 (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr2); CREATE TABLE distr2 (x UInt8) ENGINE = Distributed(test_shard_localhost, currentDatabase(), distr1); -SELECT * FROM distr1; -- { serverError 581 } -SELECT * FROM distr2; -- { serverError 581 } +SELECT * FROM distr1; -- { serverError TOO_LARGE_DISTRIBUTED_DEPTH } +SELECT * FROM distr2; -- { serverError TOO_LARGE_DISTRIBUTED_DEPTH } DROP TABLE distr1; DROP TABLE distr2; diff --git a/tests/queries/0_stateless/00988_constraints_replication_zookeeper_long.sql b/tests/queries/0_stateless/00988_constraints_replication_zookeeper_long.sql index 0bcde52d3d6..6eddf128b92 100644 --- a/tests/queries/0_stateless/00988_constraints_replication_zookeeper_long.sql +++ b/tests/queries/0_stateless/00988_constraints_replication_zookeeper_long.sql @@ -23,7 +23,7 @@ INSERT INTO replicated_constraints2 VALUES (3, 4); SYSTEM SYNC REPLICA replicated_constraints1; SYSTEM SYNC REPLICA replicated_constraints2; -INSERT INTO replicated_constraints1 VALUES (10, 10); -- { serverError 469 } +INSERT INTO replicated_constraints1 VALUES (10, 10); -- { serverError VIOLATED_CONSTRAINT } ALTER TABLE replicated_constraints1 DROP CONSTRAINT a_constraint; @@ -43,8 +43,8 @@ ALTER TABLE replicated_constraints2 ADD CONSTRAINT a_constraint CHECK a < 10; SYSTEM SYNC REPLICA replicated_constraints1; SYSTEM SYNC REPLICA replicated_constraints2; -INSERT INTO replicated_constraints1 VALUES (10, 11); -- { serverError 469 } -INSERT INTO replicated_constraints2 VALUES (9, 10); -- { serverError 469 } +INSERT INTO replicated_constraints1 VALUES (10, 11); -- { serverError VIOLATED_CONSTRAINT } +INSERT INTO replicated_constraints2 VALUES (9, 10); -- { serverError VIOLATED_CONSTRAINT } DROP TABLE replicated_constraints1; DROP TABLE replicated_constraints2; diff --git a/tests/queries/0_stateless/00988_expansion_aliases_limit.sql b/tests/queries/0_stateless/00988_expansion_aliases_limit.sql index 3c2442b15b5..77f2ba2dbd1 100644 --- a/tests/queries/0_stateless/00988_expansion_aliases_limit.sql +++ b/tests/queries/0_stateless/00988_expansion_aliases_limit.sql @@ -1 +1 @@ -SELECT 1 AS a, a + a AS b, b + b AS c, c + c AS d, d + d AS e, e + e AS f, f + f AS g, g + g AS h, h + h AS i, i + i AS j, j + j AS k, k + k AS l, l + l AS m, m + m AS n, n + n AS 
o, o + o AS p, p + p AS q, q + q AS r, r + r AS s, s + s AS t, t + t AS u, u + u AS v, v + v AS w, w + w AS x, x + x AS y, y + y AS z; -- { serverError 36, 168 } +SELECT 1 AS a, a + a AS b, b + b AS c, c + c AS d, d + d AS e, e + e AS f, f + f AS g, g + g AS h, h + h AS i, i + i AS j, j + j AS k, k + k AS l, l + l AS m, m + m AS n, n + n AS o, o + o AS p, p + p AS q, q + q AS r, r + r AS s, s + s AS t, t + t AS u, u + u AS v, v + v AS w, w + w AS x, x + x AS y, y + y AS z; -- { serverError BAD_ARGUMENTS, 168 } diff --git a/tests/queries/0_stateless/00990_hasToken_and_tokenbf.sql b/tests/queries/0_stateless/00990_hasToken_and_tokenbf.sql index 9552acd3c93..403b61b75c9 100644 --- a/tests/queries/0_stateless/00990_hasToken_and_tokenbf.sql +++ b/tests/queries/0_stateless/00990_hasToken_and_tokenbf.sql @@ -53,10 +53,10 @@ select max(id) from bloom_filter2 where hasTokenCaseInsensitive(s, 'ABC'); -- SELECT max(id) FROM bloom_filter WHERE NOT hasToken(s, 'yyy'); -- accessing to many rows -SELECT max(id) FROM bloom_filter WHERE hasToken(s, 'yyy'); -- { serverError 158 } +SELECT max(id) FROM bloom_filter WHERE hasToken(s, 'yyy'); -- { serverError TOO_MANY_ROWS } -- this syntax is not supported by tokenbf -SELECT max(id) FROM bloom_filter WHERE hasToken(s, 'zzz') == 1; -- { serverError 158 } +SELECT max(id) FROM bloom_filter WHERE hasToken(s, 'zzz') == 1; -- { serverError TOO_MANY_ROWS } DROP TABLE bloom_filter; diff --git a/tests/queries/0_stateless/00990_request_splitting.sql b/tests/queries/0_stateless/00990_request_splitting.sql index cc5b0ed2998..6a1e3902de4 100644 --- a/tests/queries/0_stateless/00990_request_splitting.sql +++ b/tests/queries/0_stateless/00990_request_splitting.sql @@ -1 +1 @@ -SELECT * FROM url('http://127.0.0.1:1337/? HTTP/1.1\r\nTest: test', CSV, 'column1 String'); -- { serverError 1000 } +SELECT * FROM url('http://127.0.0.1:1337/? 
diff --git a/tests/queries/0_stateless/00990_request_splitting.sql b/tests/queries/0_stateless/00990_request_splitting.sql
index cc5b0ed2998..6a1e3902de4 100644
--- a/tests/queries/0_stateless/00990_request_splitting.sql
+++ b/tests/queries/0_stateless/00990_request_splitting.sql
@@ -1 +1 @@
-SELECT * FROM url('http://127.0.0.1:1337/? HTTP/1.1\r\nTest: test', CSV, 'column1 String'); -- { serverError 1000 }
+SELECT * FROM url('http://127.0.0.1:1337/? HTTP/1.1\r\nTest: test', CSV, 'column1 String'); -- { serverError POCO_EXCEPTION }
diff --git a/tests/queries/0_stateless/00995_order_by_with_fill.reference b/tests/queries/0_stateless/00995_order_by_with_fill.reference
index 4863c83c544..eadbe1b006e 100644
--- a/tests/queries/0_stateless/00995_order_by_with_fill.reference
+++ b/tests/queries/0_stateless/00995_order_by_with_fill.reference
@@ -516,8 +516,8 @@ SELECT * FROM fill ORDER BY a WITH FILL, b WITH fill TO 6 STEP 2;
 8 0
 8 2
 8 4
-SELECT * FROM fill ORDER BY a WITH FILL STEP -1; -- { serverError 475 }
-SELECT * FROM fill ORDER BY a WITH FILL FROM 10 TO 1; -- { serverError 475 }
-SELECT * FROM fill ORDER BY a DESC WITH FILL FROM 1 TO 10; -- { serverError 475 }
-SELECT * FROM fill ORDER BY a WITH FILL FROM -10 to 10; -- { serverError 475 }
+SELECT * FROM fill ORDER BY a WITH FILL STEP -1; -- { serverError INVALID_WITH_FILL_EXPRESSION }
+SELECT * FROM fill ORDER BY a WITH FILL FROM 10 TO 1; -- { serverError INVALID_WITH_FILL_EXPRESSION }
+SELECT * FROM fill ORDER BY a DESC WITH FILL FROM 1 TO 10; -- { serverError INVALID_WITH_FILL_EXPRESSION }
+SELECT * FROM fill ORDER BY a WITH FILL FROM -10 to 10; -- { serverError INVALID_WITH_FILL_EXPRESSION }
 DROP TABLE fill;
diff --git a/tests/queries/0_stateless/00995_order_by_with_fill.sql b/tests/queries/0_stateless/00995_order_by_with_fill.sql
index fe7a6e5d4ce..7140ca748ec 100644
--- a/tests/queries/0_stateless/00995_order_by_with_fill.sql
+++ b/tests/queries/0_stateless/00995_order_by_with_fill.sql
@@ -31,9 +31,9 @@ SELECT * FROM fill ORDER BY a WITH FILL, b WITH fill;
 SELECT * FROM fill ORDER BY a WITH FILL, b WITH fill TO 6 STEP 2;
-SELECT * FROM fill ORDER BY a WITH FILL STEP -1; -- { serverError 475 }
-SELECT * FROM fill ORDER BY a WITH FILL FROM 10 TO 1; -- { serverError 475 }
-SELECT * FROM fill ORDER BY a DESC WITH FILL FROM 1 TO 10; -- { serverError 475 }
-SELECT * FROM fill ORDER BY a WITH FILL FROM -10 to 10; -- { serverError 475 }
+SELECT * FROM fill ORDER BY a WITH FILL STEP -1; -- { serverError INVALID_WITH_FILL_EXPRESSION }
+SELECT * FROM fill ORDER BY a WITH FILL FROM 10 TO 1; -- { serverError INVALID_WITH_FILL_EXPRESSION }
+SELECT * FROM fill ORDER BY a DESC WITH FILL FROM 1 TO 10; -- { serverError INVALID_WITH_FILL_EXPRESSION }
+SELECT * FROM fill ORDER BY a WITH FILL FROM -10 to 10; -- { serverError INVALID_WITH_FILL_EXPRESSION }
 DROP TABLE fill;
diff --git a/tests/queries/0_stateless/00996_neighbor.sql b/tests/queries/0_stateless/00996_neighbor.sql
index 50b07242eac..f9cbf69a836 100644
--- a/tests/queries/0_stateless/00996_neighbor.sql
+++ b/tests/queries/0_stateless/00996_neighbor.sql
@@ -1,4 +1,4 @@
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 SELECT number, neighbor(toString(number), 0) FROM numbers(10);
 SELECT number, neighbor(toString(number), 5) FROM numbers(10);
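The `allow_deprecated_error_prone_window_functions` renames here and in later hunks track the deprecation of `neighbor`, `runningAccumulate` and their siblings in favor of proper window functions. A hedged sketch of an equivalent rewrite for the query above, assuming `leadInFrame` with an explicit ROWS frame (illustrative, not part of the patch):

-- neighbor(toString(number), 5) reads the value 5 rows ahead; leadInFrame over a full frame does the same
SELECT number,
       leadInFrame(toString(number), 5) OVER (ORDER BY number ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS n5
FROM numbers(10);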
diff --git a/tests/queries/0_stateless/00998_constraints_all_tables.sql b/tests/queries/0_stateless/00998_constraints_all_tables.sql
index bb0d6933a01..0985e9a4e06 100644
--- a/tests/queries/0_stateless/00998_constraints_all_tables.sql
+++ b/tests/queries/0_stateless/00998_constraints_all_tables.sql
@@ -1,41 +1,41 @@
 DROP TABLE IF EXISTS constrained;
 CREATE TABLE constrained (URL String, CONSTRAINT is_censor CHECK domainWithoutWWW(URL) = 'censor.net', CONSTRAINT is_utf8 CHECK isValidUTF8(URL)) ENGINE = Null;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError 469 }
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError VIOLATED_CONSTRAINT }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError VIOLATED_CONSTRAINT }
 INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), (toValidUTF8('https://censor.net/te\xFFst'));
 DROP TABLE constrained;
 CREATE TABLE constrained (URL String, CONSTRAINT is_censor CHECK domainWithoutWWW(URL) = 'censor.net', CONSTRAINT is_utf8 CHECK isValidUTF8(URL)) ENGINE = Memory;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
 INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), (toValidUTF8('https://censor.net/te\xFFst'));
 SELECT count() FROM constrained;
 DROP TABLE constrained;
 CREATE TABLE constrained (URL String, CONSTRAINT is_censor CHECK domainWithoutWWW(URL) = 'censor.net', CONSTRAINT is_utf8 CHECK isValidUTF8(URL)) ENGINE = StripeLog;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
 INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), (toValidUTF8('https://censor.net/te\xFFst'));
 SELECT count() FROM constrained;
 DROP TABLE constrained;
 CREATE TABLE constrained (URL String, CONSTRAINT is_censor CHECK domainWithoutWWW(URL) = 'censor.net', CONSTRAINT is_utf8 CHECK isValidUTF8(URL)) ENGINE = TinyLog;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
 INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), (toValidUTF8('https://censor.net/te\xFFst'));
 SELECT count() FROM constrained;
 DROP TABLE constrained;
 CREATE TABLE constrained (URL String, CONSTRAINT is_censor CHECK domainWithoutWWW(URL) = 'censor.net', CONSTRAINT is_utf8 CHECK isValidUTF8(URL)) ENGINE = Log;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), ('https://censor.net/te\xFFst'); -- { serverError VIOLATED_CONSTRAINT }
 SELECT count() FROM constrained;
 INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('ftp://censor.net/Hello'), (toValidUTF8('https://censor.net/te\xFFst'));
 SELECT count() FROM constrained;
@@ -47,7 +47,7 @@ CREATE TABLE constrained (URL String, CONSTRAINT is_censor CHECK domainWithoutWW
 CREATE TABLE constrained2 AS constrained;
 SHOW CREATE TABLE constrained;
 SHOW CREATE TABLE constrained2;
-INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError 469 }
-INSERT INTO constrained2 VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError 469 }
+INSERT INTO constrained VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError VIOLATED_CONSTRAINT }
+INSERT INTO constrained2 VALUES ('https://www.censor.net/?q=upyachka'), ('Hello'), ('test'); -- { serverError VIOLATED_CONSTRAINT }
 DROP TABLE constrained;
 DROP TABLE constrained2;
diff --git a/tests/queries/0_stateless/01000_subquery_requires_alias.sql b/tests/queries/0_stateless/01000_subquery_requires_alias.sql
index 27320fab933..3cd522a8389 100644
--- a/tests/queries/0_stateless/01000_subquery_requires_alias.sql
+++ b/tests/queries/0_stateless/01000_subquery_requires_alias.sql
@@ -7,11 +7,11 @@ USING (B);
 SELECT * FROM (SELECT 1 as A, 2 as B) X
 ALL LEFT JOIN (SELECT 3 as A, 2 as B)
-USING (B); -- { serverError 206 }
+USING (B); -- { serverError ALIAS_REQUIRED }
 SELECT * FROM (SELECT 1 as A, 2 as B)
 ALL LEFT JOIN (SELECT 3 as A, 2 as B) Y
-USING (B); -- { serverError 206 }
+USING (B); -- { serverError ALIAS_REQUIRED }
 set joined_subquery_requires_alias = 0;
diff --git a/tests/queries/0_stateless/01010_partial_merge_join_negative.sql b/tests/queries/0_stateless/01010_partial_merge_join_negative.sql
index 3ae0eee869c..757e1e92936 100644
--- a/tests/queries/0_stateless/01010_partial_merge_join_negative.sql
+++ b/tests/queries/0_stateless/01010_partial_merge_join_negative.sql
@@ -22,34 +22,34 @@ SELECT 'any';
 SELECT * FROM t0 ANY LEFT JOIN t1 ON t1.x = t0.x;
 SELECT * FROM t0 ANY INNER JOIN t1 ON t1.x = t0.x;
-SELECT * FROM t0 ANY RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError 48 }
-SELECT * FROM t0 ANY FULL JOIN t1 ON t1.x = t0.x; -- { serverError 48 }
+SELECT * FROM t0 ANY RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED }
+SELECT * FROM t0 ANY FULL JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED }
 SELECT * FROM t0 ANY LEFT JOIN t1 USING (x);
 SELECT * FROM t0 ANY INNER JOIN t1 USING (x);
-SELECT * FROM t0 ANY RIGHT JOIN t1 USING (x); -- { serverError 48 }
-SELECT * FROM t0 ANY FULL JOIN t1 USING (x); -- { serverError 48 }
+SELECT * FROM t0 ANY RIGHT JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED }
+SELECT * FROM t0 ANY FULL JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED }
 SELECT 'semi';
 SELECT * FROM t0 SEMI LEFT JOIN t1 ON t1.x = t0.x;
-SELECT * FROM t0 SEMI RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError 48 }
+SELECT * FROM t0 SEMI RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED }
 SELECT * FROM t0 SEMI LEFT JOIN t1 USING (x);
-SELECT * FROM t0 SEMI RIGHT JOIN t1 USING (x); -- { serverError 48 }
+SELECT * FROM t0 SEMI RIGHT JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED }
 SELECT 'anti';
-SELECT * FROM t0 ANTI LEFT JOIN t1 ON t1.x = t0.x; -- { serverError 48 }
-SELECT * FROM t0 ANTI RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError 48 }
+SELECT * FROM t0 ANTI LEFT JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED }
+SELECT * FROM t0 ANTI RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED }
-SELECT * FROM t0 ANTI LEFT JOIN t1 USING (x); -- { serverError 48 }
-SELECT * FROM t0 ANTI RIGHT JOIN t1 USING (x); -- { serverError 48 }
+SELECT * FROM t0 ANTI LEFT JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED }
+SELECT * FROM t0 ANTI RIGHT JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED }
 SELECT 'asof';
-SELECT * FROM t0 ASOF LEFT JOIN t1 ON t1.x = t0.x AND t0.y > t1.y; -- { serverError 48 }
-SELECT * FROM t0 ASOF LEFT JOIN t1 USING (x, y); -- { serverError 48 }
+SELECT * FROM t0 ASOF LEFT JOIN t1 ON t1.x = t0.x AND t0.y > t1.y; -- { serverError NOT_IMPLEMENTED }
+SELECT * FROM t0 ASOF LEFT JOIN t1 USING (x, y); -- { serverError NOT_IMPLEMENTED }
 DROP TABLE t0;
 DROP TABLE t1;
diff --git a/tests/queries/0_stateless/01010_pmj_on_disk.sql b/tests/queries/0_stateless/01010_pmj_on_disk.sql
index 4925f78f82f..96a4bf512cc 100644
--- a/tests/queries/0_stateless/01010_pmj_on_disk.sql
+++ b/tests/queries/0_stateless/01010_pmj_on_disk.sql
@@ -16,7 +16,7 @@ ANY LEFT JOIN (
     FROM numbers(4000)
 ) js2
 USING n
-ORDER BY n; -- { serverError 191 }
+ORDER BY n; -- { serverError SET_SIZE_LIMIT_EXCEEDED }
 SET join_algorithm = 'partial_merge';
diff --git a/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql b/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql
index f9f30b44700..a090be85221 100644
--- a/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql
+++ b/tests/queries/0_stateless/01010_pmj_right_table_memory_limits.sql
@@ -11,7 +11,7 @@ ANY LEFT JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
 ) js2
-USING n; -- { serverError 241 }
+USING n; -- { serverError MEMORY_LIMIT_EXCEEDED }
 SET join_algorithm = 'partial_merge';
 SET default_max_bytes_in_join = 0;
@@ -24,7 +24,7 @@ ANY LEFT JOIN (
     SELECT number * 2 AS n, number AS j
     FROM numbers(1000000)
 ) js2
-USING n; -- { serverError 12 }
+USING n; -- { serverError PARAMETER_OUT_OF_BOUND }
 SELECT n, j FROM (
@@ -35,7 +35,7 @@ ANY LEFT JOIN (
     FROM numbers(1000000)
 ) js2
 USING n
-SETTINGS max_bytes_in_join = 30000000; -- { serverError 241 }
+SETTINGS max_bytes_in_join = 30000000; -- { serverError MEMORY_LIMIT_EXCEEDED }
 SELECT n, j FROM (
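The two pmj tests above share one pattern: a join first trips a memory or set-size limit under the default hash algorithm, then succeeds once `join_algorithm = 'partial_merge'` lets the right-hand side be processed as sorted blocks rather than a single in-memory hash table. A rough sketch of the knobs involved; the limit value is illustrative, borrowed from the test:

SET join_algorithm = 'partial_merge';
SET max_bytes_in_join = 30000000; -- cap the in-memory footprint of the join
SELECT count()
FROM numbers(1000000) AS l
ANY LEFT JOIN (SELECT number * 2 AS n FROM numbers(1000000)) AS r ON l.number = r.n;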
diff --git a/tests/queries/0_stateless/01011_test_create_as_skip_indices.sql b/tests/queries/0_stateless/01011_test_create_as_skip_indices.sql
index 4ddad963ffd..ed2a50f8106 100644
--- a/tests/queries/0_stateless/01011_test_create_as_skip_indices.sql
+++ b/tests/queries/0_stateless/01011_test_create_as_skip_indices.sql
@@ -1,6 +1,6 @@
 CREATE TABLE foo (key int, INDEX i1 key TYPE minmax GRANULARITY 1) Engine=MergeTree() ORDER BY key;
 CREATE TABLE as_foo AS foo;
-CREATE TABLE dist (key int, INDEX i1 key TYPE minmax GRANULARITY 1) Engine=Distributed(test_shard_localhost, currentDatabase(), 'foo'); -- { serverError 36 }
+CREATE TABLE dist (key int, INDEX i1 key TYPE minmax GRANULARITY 1) Engine=Distributed(test_shard_localhost, currentDatabase(), 'foo'); -- { serverError BAD_ARGUMENTS }
 CREATE TABLE dist_as_foo Engine=Distributed(test_shard_localhost, currentDatabase(), 'foo') AS foo;
 DROP TABLE foo;
diff --git a/tests/queries/0_stateless/01012_reset_running_accumulate.sql b/tests/queries/0_stateless/01012_reset_running_accumulate.sql
index eed653cc629..09bd29de185 100644
--- a/tests/queries/0_stateless/01012_reset_running_accumulate.sql
+++ b/tests/queries/0_stateless/01012_reset_running_accumulate.sql
@@ -1,6 +1,6 @@
 -- Disable external aggregation because the state is reset for each new block of data in 'runningAccumulate' function.
 SET max_bytes_before_external_group_by = 0;
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 SELECT grouping,
        item,
diff --git a/tests/queries/0_stateless/01013_totals_without_aggregation.sql b/tests/queries/0_stateless/01013_totals_without_aggregation.sql
index 291f95c6bd6..ab656cd92b5 100644
--- a/tests/queries/0_stateless/01013_totals_without_aggregation.sql
+++ b/tests/queries/0_stateless/01013_totals_without_aggregation.sql
@@ -3,6 +3,6 @@ SET allow_experimental_analyzer = 1;
 SELECT 11 AS n GROUP BY n WITH TOTALS;
 SELECT 12 AS n GROUP BY n WITH ROLLUP;
 SELECT 13 AS n GROUP BY n WITH CUBE;
-SELECT 1 AS n WITH TOTALS; -- { serverError 48 }
-SELECT 1 AS n WITH ROLLUP; -- { serverError 48 }
-SELECT 1 AS n WITH CUBE; -- { serverError 48 }
+SELECT 1 AS n WITH TOTALS; -- { serverError NOT_IMPLEMENTED }
+SELECT 1 AS n WITH ROLLUP; -- { serverError NOT_IMPLEMENTED }
+SELECT 1 AS n WITH CUBE; -- { serverError NOT_IMPLEMENTED }
diff --git a/tests/queries/0_stateless/01014_function_repeat_corner_cases.sql b/tests/queries/0_stateless/01014_function_repeat_corner_cases.sql
index 53e55a63702..cedd55bce84 100644
--- a/tests/queries/0_stateless/01014_function_repeat_corner_cases.sql
+++ b/tests/queries/0_stateless/01014_function_repeat_corner_cases.sql
@@ -1,6 +1,6 @@
 SELECT length(repeat('x', 1000000));
 SELECT length(repeat('', 1000000));
-SELECT length(repeat('x', 1000001)); -- { serverError 131 }
+SELECT length(repeat('x', 1000001)); -- { serverError TOO_LARGE_STRING_SIZE }
 SET max_memory_usage = 100000000;
-SELECT length(repeat(repeat('Hello, world!', 1000000), 10)); -- { serverError 241 }
+SELECT length(repeat(repeat('Hello, world!', 1000000), 10)); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SELECT repeat(toString(number), number) FROM system.numbers LIMIT 11;
diff --git a/tests/queries/0_stateless/01016_simhash_minhash.sql b/tests/queries/0_stateless/01016_simhash_minhash.sql
index 5494416a905..79abb018d56 100644
--- a/tests/queries/0_stateless/01016_simhash_minhash.sql
+++ b/tests/queries/0_stateless/01016_simhash_minhash.sql
@@ -111,8 +111,8 @@ SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinH
 SELECT 'wordShingleMinHashCaseInsensitiveUTF8';
 SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinHashCaseInsensitiveUTF8(s, 2, 3) as h FROM defaults GROUP BY h ORDER BY h;
-SELECT wordShingleSimHash('foobar', 9223372036854775807); -- { serverError 69 }
-SELECT wordShingleSimHash('foobar', 1001); -- { serverError 69 }
-SELECT wordShingleSimHash('foobar', 0); -- { serverError 69 }
+SELECT wordShingleSimHash('foobar', 9223372036854775807); -- { serverError ARGUMENT_OUT_OF_BOUND }
+SELECT wordShingleSimHash('foobar', 1001); -- { serverError ARGUMENT_OUT_OF_BOUND }
+SELECT wordShingleSimHash('foobar', 0); -- { serverError ARGUMENT_OUT_OF_BOUND }
 DROP TABLE defaults;
diff --git a/tests/queries/0_stateless/01016_simhash_minhash_ppc.sql b/tests/queries/0_stateless/01016_simhash_minhash_ppc.sql
index 9d5d1297dfe..d7f3eeccf13 100644
--- a/tests/queries/0_stateless/01016_simhash_minhash_ppc.sql
+++ b/tests/queries/0_stateless/01016_simhash_minhash_ppc.sql
@@ -111,8 +111,8 @@ SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinH
 SELECT 'wordShingleMinHashCaseInsensitiveUTF8';
 SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinHashCaseInsensitiveUTF8(s, 2, 3) as h FROM defaults GROUP BY h ORDER BY h;
-SELECT wordShingleSimHash('foobar', 9223372036854775807); -- { serverError 69 }
-SELECT wordShingleSimHash('foobar', 1001); -- { serverError 69 }
-SELECT wordShingleSimHash('foobar', 0); -- { serverError 69 }
+SELECT wordShingleSimHash('foobar', 9223372036854775807); -- { serverError ARGUMENT_OUT_OF_BOUND }
+SELECT wordShingleSimHash('foobar', 1001); -- { serverError ARGUMENT_OUT_OF_BOUND }
+SELECT wordShingleSimHash('foobar', 0); -- { serverError ARGUMENT_OUT_OF_BOUND }
 DROP TABLE defaults;
diff --git a/tests/queries/0_stateless/01017_in_unconvertible_complex_type.sql b/tests/queries/0_stateless/01017_in_unconvertible_complex_type.sql
index d675c195726..48eb8ce5c4b 100644
--- a/tests/queries/0_stateless/01017_in_unconvertible_complex_type.sql
+++ b/tests/queries/0_stateless/01017_in_unconvertible_complex_type.sql
@@ -9,5 +9,5 @@ select [toUInt8(0)] in [-1];
 select [toUInt8(255)] in [-1];
 -- When left and right element types are not compatible, we should get an error.
-select (toUInt8(1)) in ('a'); -- { serverError 53 }
-select [toUInt8(1)] in ['a']; -- { serverError 53 }
+select (toUInt8(1)) in ('a'); -- { serverError TYPE_MISMATCH }
+select [toUInt8(1)] in ['a']; -- { serverError TYPE_MISMATCH }
diff --git a/tests/queries/0_stateless/01017_uniqCombined_memory_usage.sql b/tests/queries/0_stateless/01017_uniqCombined_memory_usage.sql
index 68472a93c9c..92ef928bc2f 100644
--- a/tests/queries/0_stateless/01017_uniqCombined_memory_usage.sql
+++ b/tests/queries/0_stateless/01017_uniqCombined_memory_usage.sql
@@ -13,14 +13,14 @@ SET min_untracked_memory = 4194304; -- 4MiB
 -- HashTable for UInt32 (used until (1<<13) elements), hence 8192 elements
 SELECT 'UInt32';
 SET max_memory_usage = 4000000;
-SELECT sum(u) FROM (SELECT intDiv(number, 8192) AS k, uniqCombined(number % 8192) u FROM numbers(8192 * 100) GROUP BY k); -- { serverError 241 }
+SELECT sum(u) FROM (SELECT intDiv(number, 8192) AS k, uniqCombined(number % 8192) u FROM numbers(8192 * 100) GROUP BY k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SET max_memory_usage = 9830400;
 SELECT sum(u) FROM (SELECT intDiv(number, 8192) AS k, uniqCombined(number % 8192) u FROM numbers(8192 * 100) GROUP BY k);
 -- HashTable for UInt64 (used until (1<<12) elements), hence 4096 elements
 SELECT 'UInt64';
 SET max_memory_usage = 4000000;
-SELECT sum(u) FROM (SELECT intDiv(number, 4096) AS k, uniqCombined(reinterpretAsString(number % 4096)) u FROM numbers(4096 * 100) GROUP BY k); -- { serverError 241 }
+SELECT sum(u) FROM (SELECT intDiv(number, 4096) AS k, uniqCombined(reinterpretAsString(number % 4096)) u FROM numbers(4096 * 100) GROUP BY k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SET max_memory_usage = 9830400;
@@ -31,14 +31,14 @@ SELECT 'K=16';
 -- HashTable for UInt32 (used until (1<<12) elements), hence 4096 elements
 SELECT 'UInt32';
 SET max_memory_usage = 2000000;
-SELECT sum(u) FROM (SELECT intDiv(number, 4096) AS k, uniqCombined(16)(number % 4096) u FROM numbers(4096 * 100) GROUP BY k); -- { serverError 241 }
+SELECT sum(u) FROM (SELECT intDiv(number, 4096) AS k, uniqCombined(16)(number % 4096) u FROM numbers(4096 * 100) GROUP BY k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SET max_memory_usage = 4915200;
 SELECT sum(u) FROM (SELECT intDiv(number, 4096) AS k, uniqCombined(16)(number % 4096) u FROM numbers(4096 * 100) GROUP BY k);
 -- HashTable for UInt64 (used until (1<<11) elements), hence 2048 elements
 SELECT 'UInt64';
 SET max_memory_usage = 2000000;
-SELECT sum(u) FROM (SELECT intDiv(number, 2048) AS k, uniqCombined(16)(reinterpretAsString(number % 2048)) u FROM numbers(2048 * 100) GROUP BY k); -- { serverError 241 }
+SELECT sum(u) FROM (SELECT intDiv(number, 2048) AS k, uniqCombined(16)(reinterpretAsString(number % 2048)) u FROM numbers(2048 * 100) GROUP BY k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SET max_memory_usage = 4915200;
 SELECT sum(u) FROM (SELECT intDiv(number, 2048) AS k, uniqCombined(16)(reinterpretAsString(number % 2048)) u FROM numbers(2048 * 100) GROUP BY k);
@@ -47,13 +47,13 @@ SELECT 'K=18';
 -- HashTable for UInt32 (used until (1<<14) elements), hence 16384 elements
 SELECT 'UInt32';
 SET max_memory_usage = 8000000;
-SELECT sum(u) FROM (SELECT intDiv(number, 16384) AS k, uniqCombined(18)(number % 16384) u FROM numbers(16384 * 100) GROUP BY k); -- { serverError 241 }
+SELECT sum(u) FROM (SELECT intDiv(number, 16384) AS k, uniqCombined(18)(number % 16384) u FROM numbers(16384 * 100) GROUP BY k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SET max_memory_usage = 19660800;
 SELECT sum(u) FROM (SELECT intDiv(number, 16384) AS k, uniqCombined(18)(number % 16384) u FROM numbers(16384 * 100) GROUP BY k);
 -- HashTable for UInt64 (used until (1<<13) elements), hence 8192 elements
 SELECT 'UInt64';
 SET max_memory_usage = 8000000;
-SELECT sum(u) FROM (SELECT intDiv(number, 8192) AS k, uniqCombined(18)(reinterpretAsString(number % 8192)) u FROM numbers(8192 * 100) GROUP BY k); -- { serverError 241 }
+SELECT sum(u) FROM (SELECT intDiv(number, 8192) AS k, uniqCombined(18)(reinterpretAsString(number % 8192)) u FROM numbers(8192 * 100) GROUP BY k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SET max_memory_usage = 19660800;
 SELECT sum(u) FROM (SELECT intDiv(number, 8192) AS k, uniqCombined(18)(reinterpretAsString(number % 8192)) u FROM numbers(8192 * 100) GROUP BY k);
diff --git a/tests/queries/0_stateless/01018_Distributed__shard_num.reference b/tests/queries/0_stateless/01018_Distributed__shard_num.reference
index 232f12ed101..de223d4f464 100644
--- a/tests/queries/0_stateless/01018_Distributed__shard_num.reference
+++ b/tests/queries/0_stateless/01018_Distributed__shard_num.reference
@@ -74,7 +74,7 @@ SELECT _shard_num, key, b.host_name, b.host_address IN ('::1', '127.0.0.1'), b.p
 FROM dist_1 a
 JOIN system.clusters b
 ON _shard_num = b.shard_num
-WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError 403 }
+WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError INVALID_JOIN_ON_EXPRESSION }
 SELECT 'Rewrite with alias';
 Rewrite with alias
 SELECT a._shard_num, key FROM dist_1 a;
@@ -85,7 +85,7 @@ SELECT a._shard_num, a.key, b.host_name, b.host_address IN ('::1', '127.0.0.1'),
 FROM dist_1 a
 JOIN system.clusters b
 ON a._shard_num = b.shard_num
-WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError 47, 403 }
+WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError UNKNOWN_IDENTIFIER, 403 }
 SELECT 'dist_3';
 dist_3
 SELECT * FROM dist_3;
diff --git a/tests/queries/0_stateless/01018_Distributed__shard_num.sql b/tests/queries/0_stateless/01018_Distributed__shard_num.sql
index 2d6386a2487..3b793da6dfb 100644
--- a/tests/queries/0_stateless/01018_Distributed__shard_num.sql
+++ b/tests/queries/0_stateless/01018_Distributed__shard_num.sql
@@ -73,7 +73,7 @@ SELECT _shard_num, key, b.host_name, b.host_address IN ('::1', '127.0.0.1'), b.p
 FROM dist_1 a
 JOIN system.clusters b
 ON _shard_num = b.shard_num
-WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError 403 }
+WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError INVALID_JOIN_ON_EXPRESSION }
 SELECT 'Rewrite with alias';
 SELECT a._shard_num, key FROM dist_1 a;
@@ -82,7 +82,7 @@ SELECT a._shard_num, a.key, b.host_name, b.host_address IN ('::1', '127.0.0.1'),
 FROM dist_1 a
 JOIN system.clusters b
 ON a._shard_num = b.shard_num
-WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError 47, 403 }
+WHERE b.cluster = 'test_cluster_two_shards_localhost'; -- { serverError UNKNOWN_IDENTIFIER, 403 }
 SELECT 'dist_3';
 SELECT * FROM dist_3;
diff --git a/tests/queries/0_stateless/01018_ambiguous_column.sql b/tests/queries/0_stateless/01018_ambiguous_column.sql
index a94e1cd4601..e9e754ed7a8 100644
--- a/tests/queries/0_stateless/01018_ambiguous_column.sql
+++ b/tests/queries/0_stateless/01018_ambiguous_column.sql
@@ -12,9 +12,9 @@ USE system;
 SELECT dummy FROM one AS A JOIN one ON A.dummy = one.dummy;
 SELECT dummy FROM one JOIN one AS A ON A.dummy = one.dummy;
 SELECT dummy FROM one l JOIN one r ON dummy = r.dummy;
-SELECT dummy FROM one l JOIN one r ON l.dummy = dummy; -- { serverError 403 }
+SELECT dummy FROM one l JOIN one r ON l.dummy = dummy; -- { serverError INVALID_JOIN_ON_EXPRESSION }
 SELECT dummy FROM one l JOIN one r ON one.dummy = r.dummy;
-SELECT dummy FROM one l JOIN one r ON l.dummy = one.dummy; -- { serverError 403 }
+SELECT dummy FROM one l JOIN one r ON l.dummy = one.dummy; -- { serverError INVALID_JOIN_ON_EXPRESSION }
 SELECT * from one JOIN one A ON one.dummy = A.dummy
diff --git a/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql b/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql
index 28b68504766..c74ea9b7c12 100644
--- a/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql
+++ b/tests/queries/0_stateless/01018_ddl_dictionaries_create.sql
@@ -115,7 +115,7 @@ CREATE DICTIONARY lazy_db.dict3
 PRIMARY KEY key_column, second_column
 SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'table_for_dict' PASSWORD '' DB 'database_for_dict_01018'))
 LIFETIME(MIN 1 MAX 10)
-LAYOUT(COMPLEX_KEY_HASHED()); --{serverError 1}
+LAYOUT(COMPLEX_KEY_HASHED()); --{serverError UNSUPPORTED_METHOD}
 DROP DATABASE IF EXISTS lazy_db;
diff --git a/tests/queries/0_stateless/01018_ddl_dictionaries_select.sql b/tests/queries/0_stateless/01018_ddl_dictionaries_select.sql
index 523b057d4e1..4b9b15bee8f 100644
--- a/tests/queries/0_stateless/01018_ddl_dictionaries_select.sql
+++ b/tests/queries/0_stateless/01018_ddl_dictionaries_select.sql
@@ -38,7 +38,7 @@ SELECT count(distinct(dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'se
 DETACH DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict1;
-SELECT dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'second_column', toUInt64(11)); -- {serverError 36}
+SELECT dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'second_column', toUInt64(11)); -- {serverError BAD_ARGUMENTS}
 ATTACH DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict1;
@@ -46,7 +46,7 @@ SELECT dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'second_column', t
 DROP DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict1;
-SELECT dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'second_column', toUInt64(11)); -- {serverError 36}
+SELECT dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'second_column', toUInt64(11)); -- {serverError BAD_ARGUMENTS}
 -- SOURCE(CLICKHOUSE(...)) uses default params if not specified
 DROP DICTIONARY IF EXISTS {CLICKHOUSE_DATABASE:Identifier}.dict1;
@@ -86,7 +86,7 @@ SELECT dictGetFloat64({CLICKHOUSE_DATABASE:String} || '.dict1', 'fourth_column',
 DETACH DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict1;
-SELECT dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'second_column', tuple(toUInt64(11), '121')); -- {serverError 36}
+SELECT dictGetUInt8({CLICKHOUSE_DATABASE:String} || '.dict1', 'second_column', tuple(toUInt64(11), '121')); -- {serverError BAD_ARGUMENTS}
 ATTACH DICTIONARY {CLICKHOUSE_DATABASE:Identifier}.dict1;
@@ -128,10 +128,10 @@ SELECT dictGetString({CLICKHOUSE_DATABASE:String} || '.dict3', 'some_column', to
 USE {CLICKHOUSE_DATABASE:Identifier};
 SELECT dictGetString(dict3, 'some_column', toUInt64(12));
 SELECT dictGetString({CLICKHOUSE_DATABASE:Identifier}.dict3, 'some_column', toUInt64(12));
-SELECT dictGetString(default.dict3, 'some_column', toUInt64(12)); -- {serverError 36}
+SELECT dictGetString(default.dict3, 'some_column', toUInt64(12)); -- {serverError BAD_ARGUMENTS}
 SELECT dictGet(dict3, 'some_column', toUInt64(12));
 SELECT dictGet({CLICKHOUSE_DATABASE:Identifier}.dict3, 'some_column', toUInt64(12));
-SELECT dictGet(default.dict3, 'some_column', toUInt64(12)); -- {serverError 36}
+SELECT dictGet(default.dict3, 'some_column', toUInt64(12)); -- {serverError BAD_ARGUMENTS}
 USE default;
 -- alias should be handled correctly
@@ -139,6 +139,6 @@ SELECT {CLICKHOUSE_DATABASE:String} || '.dict3' as n, dictGet(n, 'some_column',
 DROP TABLE {CLICKHOUSE_DATABASE:Identifier}.table_for_dict;
-SYSTEM RELOAD DICTIONARIES; -- {serverError 60}
+SYSTEM RELOAD DICTIONARIES; -- {serverError UNKNOWN_TABLE}
 SELECT dictGetString({CLICKHOUSE_DATABASE:String} || '.dict3', 'some_column', toUInt64(12));
diff --git a/tests/queries/0_stateless/01018_dictionaries_from_dictionaries.sql b/tests/queries/0_stateless/01018_dictionaries_from_dictionaries.sql
index e72e113f859..010aff24c20 100644
--- a/tests/queries/0_stateless/01018_dictionaries_from_dictionaries.sql
+++ b/tests/queries/0_stateless/01018_dictionaries_from_dictionaries.sql
@@ -88,13 +88,13 @@ SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'non_exis
 LIFETIME(MIN 1 MAX 10)
 LAYOUT(HASHED());
-SELECT count(*) FROM database_for_dict.dict4; -- {serverError 60}
+SELECT count(*) FROM database_for_dict.dict4; -- {serverError UNKNOWN_TABLE}
 SELECT name from system.tables WHERE database = 'database_for_dict' ORDER BY name;
 SELECT name from system.dictionaries WHERE database = 'database_for_dict' ORDER BY name;
 DROP DATABASE IF EXISTS database_for_dict;
-SELECT count(*) from database_for_dict.dict3; --{serverError 81}
-SELECT count(*) from database_for_dict.dict2; --{serverError 81}
-SELECT count(*) from database_for_dict.dict1; --{serverError 81}
+SELECT count(*) from database_for_dict.dict3; --{serverError UNKNOWN_DATABASE}
+SELECT count(*) from database_for_dict.dict2; --{serverError UNKNOWN_DATABASE}
+SELECT count(*) from database_for_dict.dict1; --{serverError UNKNOWN_DATABASE}
diff --git a/tests/queries/0_stateless/01018_ip_dictionary_long.sql b/tests/queries/0_stateless/01018_ip_dictionary_long.sql
index 43025038f87..cb8ef223c6f 100644
--- a/tests/queries/0_stateless/01018_ip_dictionary_long.sql
+++ b/tests/queries/0_stateless/01018_ip_dictionary_long.sql
@@ -42,7 +42,7 @@ SETTINGS(dictionary_use_async_executor=1, max_threads=8)
 ;
 -- fuzzer
-SELECT '127.0.0.0/24' = dictGetString({CLICKHOUSE_DATABASE:String} || '.dict_ipv4_trie', 'prefixprefixprefixprefix', tuple(IPv4StringToNumOrDefault('127.0.0.0127.0.0.0'))); -- { serverError 36 }
+SELECT '127.0.0.0/24' = dictGetString({CLICKHOUSE_DATABASE:String} || '.dict_ipv4_trie', 'prefixprefixprefixprefix', tuple(IPv4StringToNumOrDefault('127.0.0.0127.0.0.0'))); -- { serverError BAD_ARGUMENTS }
 SELECT 0 == dictGetUInt32({CLICKHOUSE_DATABASE:String} || '.dict_ipv4_trie', 'asn', tuple(IPv4StringToNum('0.0.0.0')));
 SELECT 1 == dictGetUInt32({CLICKHOUSE_DATABASE:String} || '.dict_ipv4_trie', 'asn', tuple(IPv4StringToNum('128.0.0.0')));
diff --git a/tests/queries/0_stateless/01019_Buffer_and_max_memory_usage.sql b/tests/queries/0_stateless/01019_Buffer_and_max_memory_usage.sql
index 777effe9e81..78700cb9819 100644
--- a/tests/queries/0_stateless/01019_Buffer_and_max_memory_usage.sql
+++ b/tests/queries/0_stateless/01019_Buffer_and_max_memory_usage.sql
@@ -30,7 +30,7 @@ SET max_insert_threads=1;
 -- Check that max_memory_usage is ignored only on flush and not on squash
 SET min_insert_block_size_bytes=9e6;
 SET min_insert_block_size_rows=0;
-INSERT INTO buffer_ SELECT toUInt64(number) FROM system.numbers LIMIT toUInt64(10e6+1); -- { serverError 241 }
+INSERT INTO buffer_ SELECT toUInt64(number) FROM system.numbers LIMIT toUInt64(10e6+1); -- { serverError MEMORY_LIMIT_EXCEEDED }
 OPTIMIZE TABLE buffer_; -- flush just in case
diff --git a/tests/queries/0_stateless/01019_materialized_view_select_extra_columns.sql b/tests/queries/0_stateless/01019_materialized_view_select_extra_columns.sql
index 4b7ea127190..b1c78b4c72d 100644
--- a/tests/queries/0_stateless/01019_materialized_view_select_extra_columns.sql
+++ b/tests/queries/0_stateless/01019_materialized_view_select_extra_columns.sql
@@ -28,7 +28,7 @@ FROM mv_extra_columns_src;
 INSERT INTO mv_extra_columns_src VALUES (0, 0), (1, 1), (2, 2);
 SELECT * FROM mv_extra_columns_dst ORDER by v;
-SELECT * FROM mv_extra_columns_view; -- { serverError 10 }
+SELECT * FROM mv_extra_columns_view; -- { serverError NOT_FOUND_COLUMN_IN_BLOCK }
 DROP TABLE mv_extra_columns_view;
 DROP TABLE mv_extra_columns_src;
diff --git a/tests/queries/0_stateless/01024__getScalar.sql b/tests/queries/0_stateless/01024__getScalar.sql
index 0f66411a32f..1a47ed67e7e 100644
--- a/tests/queries/0_stateless/01024__getScalar.sql
+++ b/tests/queries/0_stateless/01024__getScalar.sql
@@ -1 +1 @@
-CREATE TABLE foo (key String, macro String MATERIALIZED __getScalar(key)) Engine=Null(); -- { serverError 43 }
+CREATE TABLE foo (key String, macro String MATERIALIZED __getScalar(key)) Engine=Null(); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
diff --git a/tests/queries/0_stateless/01030_storage_hdfs_syntax.sql b/tests/queries/0_stateless/01030_storage_hdfs_syntax.sql
index b679a0ccf9c..0dc025ce59a 100644
--- a/tests/queries/0_stateless/01030_storage_hdfs_syntax.sql
+++ b/tests/queries/0_stateless/01030_storage_hdfs_syntax.sql
@@ -3,8 +3,8 @@ drop table if exists test_table_hdfs_syntax
 ;
 create table test_table_hdfs_syntax (id UInt32) ENGINE = HDFS('')
-; -- { serverError 36 }
+; -- { serverError BAD_ARGUMENTS }
 create table test_table_hdfs_syntax (id UInt32) ENGINE = HDFS('','','', '')
-; -- { serverError 42 }
+; -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
 drop table if exists test_table_hdfs_syntax
 ;
diff --git a/tests/queries/0_stateless/01030_storage_url_syntax.sql b/tests/queries/0_stateless/01030_storage_url_syntax.sql
index eda108aca2f..0eb89af8462 100644
--- a/tests/queries/0_stateless/01030_storage_url_syntax.sql
+++ b/tests/queries/0_stateless/01030_storage_url_syntax.sql
@@ -3,7 +3,7 @@ drop table if exists test_table_url_syntax
 create table test_table_url_syntax (id UInt32) ENGINE = URL('')
 ; -- { serverError UNSUPPORTED_URI_SCHEME }
 create table test_table_url_syntax (id UInt32) ENGINE = URL('','','','')
-; -- { serverError 42 }
+; -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
 drop table if exists test_table_url_syntax
 ;
@@ -17,7 +17,7 @@ create table test_table_url(id UInt32) ENGINE = URL('http://localhost/endpoint.j
 drop table test_table_url;
 create table test_table_url(id UInt32) ENGINE = URL('http://localhost/endpoint', 'ErrorFormat')
-; -- { serverError 73 }
+; -- { serverError UNKNOWN_FORMAT }
 create table test_table_url(id UInt32) ENGINE = URL('http://localhost/endpoint', 'JSONEachRow', 'gzip');
 drop table test_table_url;
@@ -62,5 +62,5 @@ create table test_table_url(id UInt32) ENGINE = URL('http://localhost/endpoint',
 drop table test_table_url;
 create table test_table_url(id UInt32) ENGINE = URL('http://localhost/endpoint', 'JSONEachRow', 'zip')
-; -- { serverError 48 }
+; -- { serverError NOT_IMPLEMENTED }
diff --git a/tests/queries/0_stateless/01032_duplicate_column_insert_query.sql b/tests/queries/0_stateless/01032_duplicate_column_insert_query.sql
index ac1a2439c4b..2fcc846e538 100644
--- a/tests/queries/0_stateless/01032_duplicate_column_insert_query.sql
+++ b/tests/queries/0_stateless/01032_duplicate_column_insert_query.sql
@@ -12,6 +12,6 @@ INSERT INTO sometable (date, time, value) VALUES ('2019-11-08', 1573185600, 100)
 SELECT COUNT() from sometable;
-INSERT INTO sometable (date, time, value, time) VALUES ('2019-11-08', 1573185600, 100, 1573185600); -- {serverError 15}
+INSERT INTO sometable (date, time, value, time) VALUES ('2019-11-08', 1573185600, 100, 1573185600); -- {serverError DUPLICATE_COLUMN}
 DROP TABLE IF EXISTS sometable;
diff --git a/tests/queries/0_stateless/01033_storage_odbc_parsing_exception_check.sql b/tests/queries/0_stateless/01033_storage_odbc_parsing_exception_check.sql
index 16f7a8b77de..5df291fb762 100644
--- a/tests/queries/0_stateless/01033_storage_odbc_parsing_exception_check.sql
+++ b/tests/queries/0_stateless/01033_storage_odbc_parsing_exception_check.sql
@@ -2,7 +2,7 @@ DROP TABLE IF EXISTS BannerDict;
-CREATE TABLE BannerDict (`BannerID` UInt64, `CompaignID` UInt64) ENGINE = ODBC('DSN=pgconn;Database=postgres', bannerdict); -- {serverError 42}
+CREATE TABLE BannerDict (`BannerID` UInt64, `CompaignID` UInt64) ENGINE = ODBC('DSN=pgconn;Database=postgres', bannerdict); -- {serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH}
 CREATE TABLE BannerDict (`BannerID` UInt64, `CompaignID` UInt64) ENGINE = ODBC('DSN=pgconn;Database=postgres', somedb, bannerdict);
diff --git a/tests/queries/0_stateless/01034_unknown_qualified_column_in_join.sql b/tests/queries/0_stateless/01034_unknown_qualified_column_in_join.sql
index 35f6d07d9b6..de5be4d3486 100644
--- a/tests/queries/0_stateless/01034_unknown_qualified_column_in_join.sql
+++ b/tests/queries/0_stateless/01034_unknown_qualified_column_in_join.sql
@@ -1,3 +1,3 @@
-SELECT l.c FROM (SELECT 1 AS a, 2 AS b) AS l join (SELECT 2 AS b, 3 AS c) AS r USING b; -- { serverError 47 }
-SELECT r.a FROM (SELECT 1 AS a, 2 AS b) AS l join (SELECT 2 AS b, 3 AS c) AS r USING b; -- { serverError 47 }
+SELECT l.c FROM (SELECT 1 AS a, 2 AS b) AS l join (SELECT 2 AS b, 3 AS c) AS r USING b; -- { serverError UNKNOWN_IDENTIFIER }
+SELECT r.a FROM (SELECT 1 AS a, 2 AS b) AS l join (SELECT 2 AS b, 3 AS c) AS r USING b; -- { serverError UNKNOWN_IDENTIFIER }
 SELECT l.a, r.c FROM (SELECT 1 AS a, 2 AS b) AS l join (SELECT 2 AS b, 3 AS c) AS r USING b;
diff --git a/tests/queries/0_stateless/01036_union_different_columns.sql b/tests/queries/0_stateless/01036_union_different_columns.sql
index f4936b948cb..396b7ac4c0f 100644
--- a/tests/queries/0_stateless/01036_union_different_columns.sql
+++ b/tests/queries/0_stateless/01036_union_different_columns.sql
@@ -1 +1 @@
-select 1 as c1, 2 as c2, 3 as c3 union all (select 1 as c1, 2 as c2, 3 as c3 union all select 1 as c1, 2 as c2) -- { serverError 258 }
+select 1 as c1, 2 as c2, 3 as c3 union all (select 1 as c1, 2 as c2, 3 as c3 union all select 1 as c1, 2 as c2) -- { serverError UNION_ALL_RESULT_STRUCTURES_MISMATCH }
diff --git a/tests/queries/0_stateless/01039_mergetree_exec_time.sql b/tests/queries/0_stateless/01039_mergetree_exec_time.sql
index bb114c41ec8..3d522af66ad 100644
--- a/tests/queries/0_stateless/01039_mergetree_exec_time.sql
+++ b/tests/queries/0_stateless/01039_mergetree_exec_time.sql
@@ -1,5 +1,5 @@
 DROP TABLE IF EXISTS tab;
 create table tab (A Int64) Engine=MergeTree order by tuple() SETTINGS min_bytes_for_wide_part = 0, min_rows_for_wide_part = 0;
 insert into tab select cityHash64(number) from numbers(1000);
-select sum(sleep(0.1)) from tab settings max_block_size = 1, max_execution_time=1; -- { serverError 159 }
+select sum(sleep(0.1)) from tab settings max_block_size = 1, max_execution_time=1; -- { serverError TIMEOUT_EXCEEDED }
 DROP TABLE IF EXISTS tab;
diff --git a/tests/queries/0_stateless/01042_h3_k_ring.sql b/tests/queries/0_stateless/01042_h3_k_ring.sql
index 8931efc44c2..da4955683ac 100644
--- a/tests/queries/0_stateless/01042_h3_k_ring.sql
+++ b/tests/queries/0_stateless/01042_h3_k_ring.sql
@@ -2,12 +2,12 @@ SELECT arraySort(h3kRing(581276613233082367, toUInt16(1)));
 SELECT h3kRing(581276613233082367, toUInt16(0));
-SELECT h3kRing(581276613233082367, -1); -- { serverError 43 }
-SELECT h3kRing(581276613233082367, toUInt16(-1)); -- { serverError 12 }
+SELECT h3kRing(581276613233082367, -1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT h3kRing(581276613233082367, toUInt16(-1)); -- { serverError PARAMETER_OUT_OF_BOUND }
 SELECT arraySort(h3kRing(581276613233082367, 1));
 SELECT h3kRing(581276613233082367, 0);
-SELECT h3kRing(581276613233082367, -1); -- { serverError 43 }
+SELECT h3kRing(581276613233082367, -1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 DROP TABLE IF EXISTS h3_indexes;
diff --git a/tests/queries/0_stateless/01045_array_zip.sql b/tests/queries/0_stateless/01045_array_zip.sql
index a2d54c8ae3f..0bf77747123 100644
--- a/tests/queries/0_stateless/01045_array_zip.sql
+++ b/tests/queries/0_stateless/01045_array_zip.sql
@@ -4,6 +4,6 @@ SELECT arrayZip(['a', 'b', 'c'], ['d', 'e', 'f'], ['g', 'h', 'i']);
 SELECT arrayZip(); -- { serverError TOO_FEW_ARGUMENTS_FOR_FUNCTION }
-SELECT arrayZip('a', 'b', 'c'); -- { serverError 43 }
+SELECT arrayZip('a', 'b', 'c'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
-SELECT arrayZip(['a', 'b', 'c'], ['d', 'e', 'f', 'd']); -- { serverError 190 }
+SELECT arrayZip(['a', 'b', 'c'], ['d', 'e', 'f', 'd']); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH }
diff --git a/tests/queries/0_stateless/01045_dictionaries_restrictions.sql b/tests/queries/0_stateless/01045_dictionaries_restrictions.sql
index b4dbd741767..702e0507176 100644
--- a/tests/queries/0_stateless/01045_dictionaries_restrictions.sql
+++ b/tests/queries/0_stateless/01045_dictionaries_restrictions.sql
@@ -9,7 +9,7 @@ LIFETIME(MIN 0 MAX 1)
 LAYOUT(CACHE(SIZE_IN_CELLS 10));
 -- because of lazy load we can check only in dictGet query
-select dictGetString({CLICKHOUSE_DATABASE:String} || '.restricted_dict', 'value', toUInt64(1)); -- {serverError 482}
+select dictGetString({CLICKHOUSE_DATABASE:String} || '.restricted_dict', 'value', toUInt64(1)); -- {serverError DICTIONARY_ACCESS_DENIED}
 select 'Ok.';
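For contrast with the arrayZip failures above: on equal-length arrays the function simply zips element-wise into an array of tuples, for example:

SELECT arrayZip(['a', 'b', 'c'], [1, 2, 3]); -- [('a',1),('b',2),('c',3)]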
diff --git a/tests/queries/0_stateless/01048_exists_query.sql b/tests/queries/0_stateless/01048_exists_query.sql
index 07d3a026c68..8d0077f2ceb 100644
--- a/tests/queries/0_stateless/01048_exists_query.sql
+++ b/tests/queries/0_stateless/01048_exists_query.sql
@@ -36,7 +36,7 @@ EXISTS TABLE db_01048.t_01048; -- Dictionaries are tables as well. But not all t
 EXISTS DICTIONARY db_01048.t_01048;
 -- But dictionary-tables cannot be dropped as usual tables.
-DROP TABLE db_01048.t_01048; -- { serverError 520 }
+DROP TABLE db_01048.t_01048; -- { serverError CANNOT_DETACH_DICTIONARY_AS_TABLE }
 DROP DICTIONARY db_01048.t_01048;
 EXISTS db_01048.t_01048;
 EXISTS TABLE db_01048.t_01048;
diff --git a/tests/queries/0_stateless/01050_engine_join_crash.sql b/tests/queries/0_stateless/01050_engine_join_crash.sql
index 7a71550bb1d..db35497df14 100644
--- a/tests/queries/0_stateless/01050_engine_join_crash.sql
+++ b/tests/queries/0_stateless/01050_engine_join_crash.sql
@@ -7,7 +7,7 @@ CREATE TABLE testJoinTable (number UInt64, data String) ENGINE = Join(ANY, INNER
 INSERT INTO testJoinTable VALUES (1, '1'), (2, '2'), (3, '3');
-SELECT * FROM (SELECT * FROM numbers(10)) js1 INNER JOIN testJoinTable USING number; -- { serverError 264 }
+SELECT * FROM (SELECT * FROM numbers(10)) js1 INNER JOIN testJoinTable USING number; -- { serverError INCOMPATIBLE_TYPE_OF_JOIN }
 SELECT * FROM (SELECT * FROM numbers(10)) js1 INNER JOIN (SELECT * FROM testJoinTable) js2 USING number ORDER BY number;
 SELECT * FROM (SELECT * FROM numbers(10)) js1 ANY INNER JOIN testJoinTable USING number ORDER BY number;
 SELECT * FROM testJoinTable ORDER BY number;
diff --git a/tests/queries/0_stateless/01051_aggregate_function_crash.sql b/tests/queries/0_stateless/01051_aggregate_function_crash.sql
index c50c275d834..a55ead8a2d7 100644
--- a/tests/queries/0_stateless/01051_aggregate_function_crash.sql
+++ b/tests/queries/0_stateless/01051_aggregate_function_crash.sql
@@ -1,4 +1,4 @@
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 SELECT runningAccumulate(string_state)
 FROM (
diff --git a/tests/queries/0_stateless/01051_all_join_engine.sql b/tests/queries/0_stateless/01051_all_join_engine.sql
index f894ea84962..2a8da8b2000 100644
--- a/tests/queries/0_stateless/01051_all_join_engine.sql
+++ b/tests/queries/0_stateless/01051_all_join_engine.sql
@@ -35,8 +35,8 @@ SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s;
 SET join_use_nulls = 1;
-SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }
-SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }
+SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s; -- { serverError INCOMPATIBLE_TYPE_OF_JOIN }
+SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s; -- { serverError INCOMPATIBLE_TYPE_OF_JOIN }
 SELECT 'inner (join_use_nulls mix)';
 SELECT * FROM t1 INNER JOIN inner_join j USING(x) ORDER BY x, str, s;
@@ -73,8 +73,8 @@ SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s;
 SET join_use_nulls = 0;
-SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }
-SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s; -- { serverError 264 }
+SELECT * FROM t1 LEFT JOIN left_join j USING(x) ORDER BY x, str, s; -- { serverError INCOMPATIBLE_TYPE_OF_JOIN }
+SELECT * FROM t1 FULL JOIN full_join j USING(x) ORDER BY x, str, s; -- { serverError INCOMPATIBLE_TYPE_OF_JOIN }
 SELECT 'inner (join_use_nulls mix2)';
 SELECT * FROM t1 INNER JOIN inner_join j USING(x) ORDER BY x, str, s;
diff --git a/tests/queries/0_stateless/01052_array_reduce_exception.sql b/tests/queries/0_stateless/01052_array_reduce_exception.sql
index 2bdfc2136a2..55dfe8c1541 100644
--- a/tests/queries/0_stateless/01052_array_reduce_exception.sql
+++ b/tests/queries/0_stateless/01052_array_reduce_exception.sql
@@ -1 +1 @@
-SELECT arrayReduce('aggThrow(0.0001)', range(number % 10)) FROM system.numbers FORMAT Null; -- { serverError 503 }
+SELECT arrayReduce('aggThrow(0.0001)', range(number % 10)) FROM system.numbers FORMAT Null; -- { serverError AGGREGATE_FUNCTION_THROW }
diff --git a/tests/queries/0_stateless/01055_compact_parts_1.sql b/tests/queries/0_stateless/01055_compact_parts_1.sql
index 9acd2578025..ff5ab722e0f 100644
--- a/tests/queries/0_stateless/01055_compact_parts_1.sql
+++ b/tests/queries/0_stateless/01055_compact_parts_1.sql
@@ -5,13 +5,13 @@ drop table if exists mt_compact_2;
 create table mt_compact (a Int, s String) engine = MergeTree order by a partition by a settings index_granularity_bytes = 0;
-alter table mt_compact modify setting min_rows_for_wide_part = 1000; -- { serverError 48 }
+alter table mt_compact modify setting min_rows_for_wide_part = 1000; -- { serverError NOT_IMPLEMENTED }
 show create table mt_compact;
 create table mt_compact_2 (a Int, s String) engine = MergeTree order by a partition by a settings min_rows_for_wide_part = 1000;
 insert into mt_compact_2 values (1, 'a');
-alter table mt_compact attach partition 1 from mt_compact_2; -- { serverError 36 }
+alter table mt_compact attach partition 1 from mt_compact_2; -- { serverError BAD_ARGUMENTS }
 drop table mt_compact;
 drop table mt_compact_2;
diff --git a/tests/queries/0_stateless/01056_create_table_as.sql b/tests/queries/0_stateless/01056_create_table_as.sql
index aa2dffb6e2d..dbcab489f82 100644
--- a/tests/queries/0_stateless/01056_create_table_as.sql
+++ b/tests/queries/0_stateless/01056_create_table_as.sql
@@ -17,7 +17,7 @@ DROP TABLE t3;
 -- view
 CREATE VIEW v AS SELECT * FROM t1;
-CREATE TABLE t3 AS v; -- { serverError 80 }
+CREATE TABLE t3 AS v; -- { serverError INCORRECT_QUERY }
 DROP TABLE v;
 -- dictionary
@@ -36,7 +36,7 @@ SOURCE(CLICKHOUSE(
     TABLE 'dict_data' DB concat(currentDatabase(), '_1') USER 'default' PASSWORD ''))
 LIFETIME(MIN 0 MAX 0) LAYOUT(SPARSE_HASHED());
-CREATE TABLE t3 AS dict; -- { serverError 80 }
+CREATE TABLE t3 AS dict; -- { serverError INCORRECT_QUERY }
 DROP TABLE IF EXISTS t1;
 DROP TABLE IF EXISTS t3;
diff --git a/tests/queries/0_stateless/01056_predicate_optimizer_bugs.sql b/tests/queries/0_stateless/01056_predicate_optimizer_bugs.sql
index 6ea42ec32b0..07f94c03e10 100644
--- a/tests/queries/0_stateless/01056_predicate_optimizer_bugs.sql
+++ b/tests/queries/0_stateless/01056_predicate_optimizer_bugs.sql
@@ -1,7 +1,7 @@
 SET enable_optimize_predicate_expression = 1;
 SET joined_subquery_requires_alias = 0;
 SET convert_query_to_cnf = 0;
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 -- https://github.com/ClickHouse/ClickHouse/issues/3885
 -- https://github.com/ClickHouse/ClickHouse/issues/5485
diff --git a/tests/queries/0_stateless/01062_alter_on_mutataion_zookeeper_long.sql b/tests/queries/0_stateless/01062_alter_on_mutataion_zookeeper_long.sql
index 3777ebb1af3..b949b5eb860 100644
--- a/tests/queries/0_stateless/01062_alter_on_mutataion_zookeeper_long.sql
+++ b/tests/queries/0_stateless/01062_alter_on_mutataion_zookeeper_long.sql
@@ -44,7 +44,7 @@ SELECT sum(value1) from test_alter_on_mutation;
 ALTER TABLE test_alter_on_mutation DROP COLUMN value;
-SELECT sum(value) from test_alter_on_mutation; -- {serverError 47}
+SELECT sum(value) from test_alter_on_mutation; -- {serverError UNKNOWN_IDENTIFIER}
 ALTER TABLE test_alter_on_mutation ADD COLUMN value String DEFAULT '10';
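One detail worth spelling out from the 01056_create_table_as.sql hunk above: `CREATE TABLE ... AS other` clones a table definition, so views and dictionaries are rejected with INCORRECT_QUERY, and materializing a view's result needs the `AS SELECT` form instead. A hedged sketch:

CREATE VIEW v AS SELECT 1 AS x;
CREATE TABLE t3 AS v; -- { serverError INCORRECT_QUERY }
CREATE TABLE t3 ENGINE = Memory AS SELECT * FROM v; -- copies the data, not the definition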
diff --git a/tests/queries/0_stateless/01062_pm_all_join_with_block_continuation.sql b/tests/queries/0_stateless/01062_pm_all_join_with_block_continuation.sql
index 6a42c725e4a..cb1e4da4696 100644
--- a/tests/queries/0_stateless/01062_pm_all_join_with_block_continuation.sql
+++ b/tests/queries/0_stateless/01062_pm_all_join_with_block_continuation.sql
@@ -24,12 +24,12 @@ SET max_joined_block_size_rows = 0;
 SELECT count(1) FROM (
     SELECT materialize(1) as k, n FROM numbers(10) nums
     JOIN (SELECT materialize(1) AS k, number n FROM numbers(1000000)) j
-    USING k); -- { serverError 241 }
+    USING k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SELECT count(1) FROM (
     SELECT materialize(1) as k, n FROM numbers(1000) nums
     JOIN (SELECT materialize(1) AS k, number n FROM numbers(10000)) j
-    USING k); -- { serverError 241 }
+    USING k); -- { serverError MEMORY_LIMIT_EXCEEDED }
 SELECT 'max_joined_block_size_rows = 2000';
 SET max_joined_block_size_rows = 2000;
diff --git a/tests/queries/0_stateless/01065_if_not_finite.sql b/tests/queries/0_stateless/01065_if_not_finite.sql
index c0f0721b2dc..8d44644e35c 100644
--- a/tests/queries/0_stateless/01065_if_not_finite.sql
+++ b/tests/queries/0_stateless/01065_if_not_finite.sql
@@ -6,6 +6,6 @@ SELECT ifNotFinite(nan, 2);
 SELECT ifNotFinite(-1 / 0, 2);
 SELECT ifNotFinite(log(0), NULL);
 SELECT ifNotFinite(sqrt(-1), -42);
-SELECT ifNotFinite(12345678901234567890, -12345678901234567890); -- { serverError 386 }
+SELECT ifNotFinite(12345678901234567890, -12345678901234567890); -- { serverError NO_COMMON_TYPE }
 SELECT ifNotFinite(NULL, 1);
diff --git a/tests/queries/0_stateless/01069_database_memory.sql b/tests/queries/0_stateless/01069_database_memory.sql
index 76a98bf544c..5aab9175c58 100644
--- a/tests/queries/0_stateless/01069_database_memory.sql
+++ b/tests/queries/0_stateless/01069_database_memory.sql
@@ -14,10 +14,10 @@ SELECT * FROM memory_01069.mt ORDER BY n;
 SELECT * FROM memory_01069.file ORDER BY n;
 DROP TABLE memory_01069.mt;
-SELECT * FROM memory_01069.mt ORDER BY n; -- { serverError 60 }
+SELECT * FROM memory_01069.mt ORDER BY n; -- { serverError UNKNOWN_TABLE }
 SELECT * FROM memory_01069.file ORDER BY n;
-SHOW CREATE TABLE memory_01069.mt; -- { serverError 60 }
+SHOW CREATE TABLE memory_01069.mt; -- { serverError UNKNOWN_TABLE }
 SHOW CREATE TABLE memory_01069.file;
 DROP DATABASE memory_01069;
diff --git a/tests/queries/0_stateless/01070_exception_code_in_query_log_table.sql b/tests/queries/0_stateless/01070_exception_code_in_query_log_table.sql
index eae7f653d3e..8660750e766 100644
--- a/tests/queries/0_stateless/01070_exception_code_in_query_log_table.sql
+++ b/tests/queries/0_stateless/01070_exception_code_in_query_log_table.sql
@@ -1,5 +1,5 @@
 DROP TABLE IF EXISTS test_table_for_01070_exception_code_in_query_log_table;
-SELECT * FROM test_table_for_01070_exception_code_in_query_log_table; -- { serverError 60 }
+SELECT * FROM test_table_for_01070_exception_code_in_query_log_table; -- { serverError UNKNOWN_TABLE }
 CREATE TABLE test_table_for_01070_exception_code_in_query_log_table (value UInt64) ENGINE=Memory();
 SELECT * FROM test_table_for_01070_exception_code_in_query_log_table;
 SYSTEM FLUSH LOGS;
diff --git a/tests/queries/0_stateless/01070_h3_to_children.sql b/tests/queries/0_stateless/01070_h3_to_children.sql
index fc394b03edf..ac40d14e14d 100644
--- a/tests/queries/0_stateless/01070_h3_to_children.sql
+++ b/tests/queries/0_stateless/01070_h3_to_children.sql
@@ -1,6 +1,6 @@
 -- Tags: no-fasttest
-SELECT h3ToChildren(599405990164561919, 16); -- { serverError 69 }
+SELECT h3ToChildren(599405990164561919, 16); -- { serverError ARGUMENT_OUT_OF_BOUND }
 DROP TABLE IF EXISTS h3_indexes;
diff --git a/tests/queries/0_stateless/01070_materialize_ttl.sql b/tests/queries/0_stateless/01070_materialize_ttl.sql
index b322b67882c..a633ce06918 100644
--- a/tests/queries/0_stateless/01070_materialize_ttl.sql
+++ b/tests/queries/0_stateless/01070_materialize_ttl.sql
@@ -12,7 +12,7 @@ insert into ttl values (toDateTime('2100-10-10 00:00:00'), 4);
 set materialize_ttl_after_modify = 0;
-alter table ttl materialize ttl; -- { serverError 80 }
+alter table ttl materialize ttl; -- { serverError INCORRECT_QUERY }
 alter table ttl modify ttl d + interval 1 day;
 -- TTL should not be applied
diff --git a/tests/queries/0_stateless/01070_to_decimal_or_null_exception.sql b/tests/queries/0_stateless/01070_to_decimal_or_null_exception.sql
index 9283cc76cd7..7430d7eac2f 100644
--- a/tests/queries/0_stateless/01070_to_decimal_or_null_exception.sql
+++ b/tests/queries/0_stateless/01070_to_decimal_or_null_exception.sql
@@ -1,6 +1,6 @@
-SELECT toDecimal32('e', 1); -- { serverError 72 }
-SELECT toDecimal64('e', 2); -- { serverError 72 }
-SELECT toDecimal128('e', 3); -- { serverError 72 }
+SELECT toDecimal32('e', 1); -- { serverError CANNOT_PARSE_NUMBER }
+SELECT toDecimal64('e', 2); -- { serverError CANNOT_PARSE_NUMBER }
+SELECT toDecimal128('e', 3); -- { serverError CANNOT_PARSE_NUMBER }
 SELECT toDecimal32OrNull('e', 1) x, isNull(x);
 SELECT toDecimal64OrNull('e', 2) x, isNull(x);
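The toDecimal hunk above shows the usual pairing: each throwing conversion has OrNull/OrZero variants that absorb the parse error instead of raising CANNOT_PARSE_NUMBER. A small illustrative trio:

SELECT toDecimal32('e', 1);       -- { serverError CANNOT_PARSE_NUMBER }
SELECT toDecimal32OrNull('e', 1); -- NULL
SELECT toDecimal32OrZero('e', 1); -- 0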
rand()) create table dist_01071 as data_01071 Engine=Distributed(test_cluster_two_shards, currentDatabase(), data_01071, key + rand()); set force_optimize_skip_unused_shards=1; -select * from dist_01071 where key = 0; -- { serverError 507 } +select * from dist_01071 where key = 0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } drop table if exists data_01071; drop table if exists dist_01071; @@ -42,7 +42,7 @@ set force_optimize_skip_unused_shards=2; create table data2_01071 (key Int, sub_key Int) Engine=Null(); create table dist2_layer_01071 as data2_01071 Engine=Distributed(test_cluster_two_shards, currentDatabase(), data2_01071, sub_key%2); create table dist2_01071 as data2_01071 Engine=Distributed(test_cluster_two_shards, currentDatabase(), dist2_layer_01071, key%2); -select * from dist2_01071 where key = 1; -- { serverError 507 } +select * from dist2_01071 where key = 1; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } set force_optimize_skip_unused_shards_nesting=1; select * from dist2_01071 where key = 1; drop table if exists data2_01071; diff --git a/tests/queries/0_stateless/01071_prohibition_secondary_index_with_old_format_merge_tree.sql b/tests/queries/0_stateless/01071_prohibition_secondary_index_with_old_format_merge_tree.sql index f92b6779587..e8c40f77b63 100644 --- a/tests/queries/0_stateless/01071_prohibition_secondary_index_with_old_format_merge_tree.sql +++ b/tests/queries/0_stateless/01071_prohibition_secondary_index_with_old_format_merge_tree.sql @@ -1,7 +1,7 @@ set allow_deprecated_syntax_for_merge_tree=1; CREATE TABLE old_syntax_01071_test (date Date, id UInt8) ENGINE = MergeTree(date, id, 8192); -ALTER TABLE old_syntax_01071_test ADD INDEX id_minmax id TYPE minmax GRANULARITY 1; -- { serverError 36 } +ALTER TABLE old_syntax_01071_test ADD INDEX id_minmax id TYPE minmax GRANULARITY 1; -- { serverError BAD_ARGUMENTS } CREATE TABLE new_syntax_01071_test (date Date, id UInt8) ENGINE = MergeTree() ORDER BY id; ALTER TABLE new_syntax_01071_test ADD INDEX id_minmax id TYPE minmax GRANULARITY 1; DETACH TABLE new_syntax_01071_test; diff --git a/tests/queries/0_stateless/01072_drop_temporary_table_with_same_name.sql b/tests/queries/0_stateless/01072_drop_temporary_table_with_same_name.sql index fdb809d540f..d8d79683722 100644 --- a/tests/queries/0_stateless/01072_drop_temporary_table_with_same_name.sql +++ b/tests/queries/0_stateless/01072_drop_temporary_table_with_same_name.sql @@ -4,12 +4,12 @@ DROP TABLE IF EXISTS table_to_drop; CREATE TABLE table_to_drop(x Int8) ENGINE=Log; CREATE TEMPORARY TABLE table_to_drop(x Int8); DROP TEMPORARY TABLE table_to_drop; -DROP TEMPORARY TABLE table_to_drop; -- { serverError 60 } +DROP TEMPORARY TABLE table_to_drop; -- { serverError UNKNOWN_TABLE } DROP TABLE table_to_drop; -DROP TABLE table_to_drop; -- { serverError 60 } +DROP TABLE table_to_drop; -- { serverError UNKNOWN_TABLE } CREATE TABLE table_to_drop(x Int8) ENGINE=Log; CREATE TEMPORARY TABLE table_to_drop(x Int8); DROP TABLE table_to_drop; DROP TABLE table_to_drop; -DROP TABLE table_to_drop; -- { serverError 60 } +DROP TABLE table_to_drop; -- { serverError UNKNOWN_TABLE } diff --git a/tests/queries/0_stateless/01072_optimize_skip_unused_shards_const_expr_eval.sql b/tests/queries/0_stateless/01072_optimize_skip_unused_shards_const_expr_eval.sql index 24eaaacb8bd..77d09f5b103 100644 --- a/tests/queries/0_stateless/01072_optimize_skip_unused_shards_const_expr_eval.sql +++ b/tests/queries/0_stateless/01072_optimize_skip_unused_shards_const_expr_eval.sql @@ -16,16 +16,16 @@ select * from 
dist_01072 where key=toInt32OrZero(toString(xxHash64(0))); select * from dist_01072 where key=toInt32(xxHash32(0)); select * from dist_01072 where key=toInt32(toInt32(xxHash32(0))); select * from dist_01072 where key=toInt32(toInt32(toInt32(xxHash32(0)))); -select * from dist_01072 where key=value; -- { serverError 507 } -select * from dist_01072 where key=toInt32(value); -- { serverError 507 } +select * from dist_01072 where key=value; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } +select * from dist_01072 where key=toInt32(value); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } select * from dist_01072 where key=value settings force_optimize_skip_unused_shards=0; select * from dist_01072 where key=toInt32(value) settings force_optimize_skip_unused_shards=0; drop table dist_01072; create table dist_01072 (key Int, value Nullable(Int), str String) Engine=Distributed(test_cluster_two_shards, currentDatabase(), data_01072, key%2); select * from dist_01072 where key=toInt32(xxHash32(0)); -select * from dist_01072 where key=value; -- { serverError 507 } -select * from dist_01072 where key=toInt32(value); -- { serverError 507 } +select * from dist_01072 where key=value; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } +select * from dist_01072 where key=toInt32(value); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } select * from dist_01072 where key=value settings force_optimize_skip_unused_shards=0; select * from dist_01072 where key=toInt32(value) settings force_optimize_skip_unused_shards=0; @@ -34,16 +34,16 @@ set allow_suspicious_low_cardinality_types=1; drop table dist_01072; create table dist_01072 (key Int, value LowCardinality(Int), str String) Engine=Distributed(test_cluster_two_shards, currentDatabase(), data_01072, key%2); select * from dist_01072 where key=toInt32(xxHash32(0)); -select * from dist_01072 where key=value; -- { serverError 507 } -select * from dist_01072 where key=toInt32(value); -- { serverError 507 } +select * from dist_01072 where key=value; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } +select * from dist_01072 where key=toInt32(value); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } select * from dist_01072 where key=value settings force_optimize_skip_unused_shards=0; select * from dist_01072 where key=toInt32(value) settings force_optimize_skip_unused_shards=0; drop table dist_01072; create table dist_01072 (key Int, value LowCardinality(Nullable(Int)), str String) Engine=Distributed(test_cluster_two_shards, currentDatabase(), data_01072, key%2); select * from dist_01072 where key=toInt32(xxHash32(0)); -select * from dist_01072 where key=value; -- { serverError 507 } -select * from dist_01072 where key=toInt32(value); -- { serverError 507 } +select * from dist_01072 where key=value; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } +select * from dist_01072 where key=toInt32(value); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } select * from dist_01072 where key=value settings force_optimize_skip_unused_shards=0; select * from dist_01072 where key=toInt32(value) settings force_optimize_skip_unused_shards=0; diff --git a/tests/queries/0_stateless/01073_attach_if_not_exists.sql b/tests/queries/0_stateless/01073_attach_if_not_exists.sql index a99d5fb5041..1b507bf4794 100644 --- a/tests/queries/0_stateless/01073_attach_if_not_exists.sql +++ b/tests/queries/0_stateless/01073_attach_if_not_exists.sql @@ -1,6 +1,6 @@ CREATE TABLE aine (a Int) ENGINE = Log; -ATTACH TABLE aine; -- { serverError 57 } +ATTACH TABLE aine; -- { serverError TABLE_ALREADY_EXISTS } ATTACH 
TABLE IF NOT EXISTS aine; DETACH TABLE aine; ATTACH TABLE IF NOT EXISTS aine; diff --git a/tests/queries/0_stateless/01073_bad_alter_partition.sql b/tests/queries/0_stateless/01073_bad_alter_partition.sql index 2e3cd47d6a0..e179a64f359 100644 --- a/tests/queries/0_stateless/01073_bad_alter_partition.sql +++ b/tests/queries/0_stateless/01073_bad_alter_partition.sql @@ -16,7 +16,7 @@ SELECT 4, * FROM merge_tree ORDER BY d; ALTER TABLE merge_tree DROP PARTITION '2020-01-05'; SELECT 5, * FROM merge_tree ORDER BY d; -ALTER TABLE merge_tree DROP PARTITION '202001-06'; -- { serverError 38 } +ALTER TABLE merge_tree DROP PARTITION '202001-06'; -- { serverError CANNOT_PARSE_DATE } SELECT 6, * FROM merge_tree ORDER BY d; DROP TABLE merge_tree; diff --git a/tests/queries/0_stateless/01074_h3_range_check.sql b/tests/queries/0_stateless/01074_h3_range_check.sql index 4c655f44a8b..3e3f5a33295 100644 --- a/tests/queries/0_stateless/01074_h3_range_check.sql +++ b/tests/queries/0_stateless/01074_h3_range_check.sql @@ -1,5 +1,5 @@ -- Tags: no-fasttest -SELECT h3EdgeLengthM(100); -- { serverError 69 } -SELECT h3HexAreaM2(100); -- { serverError 69 } -SELECT h3HexAreaKm2(100); -- { serverError 69 } +SELECT h3EdgeLengthM(100); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT h3HexAreaM2(100); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT h3HexAreaKm2(100); -- { serverError ARGUMENT_OUT_OF_BOUND } diff --git a/tests/queries/0_stateless/01076_predicate_optimizer_with_view.sql b/tests/queries/0_stateless/01076_predicate_optimizer_with_view.sql index cfa25179d05..6b035e28011 100644 --- a/tests/queries/0_stateless/01076_predicate_optimizer_with_view.sql +++ b/tests/queries/0_stateless/01076_predicate_optimizer_with_view.sql @@ -13,7 +13,7 @@ EXPLAIN SYNTAX SELECT * FROM test_view WHERE id = 2; EXPLAIN SYNTAX SELECT id FROM test_view WHERE id = 1; EXPLAIN SYNTAX SELECT s.id FROM test_view AS s WHERE s.id = 1; -SELECT * FROM (SELECT toUInt64(b), sum(id) AS b FROM test) WHERE `toUInt64(sum(id))` = 3; -- { serverError 47 } +SELECT * FROM (SELECT toUInt64(b), sum(id) AS b FROM test) WHERE `toUInt64(sum(id))` = 3; -- { serverError UNKNOWN_IDENTIFIER } DROP TABLE IF EXISTS test; DROP TABLE IF EXISTS test_view; diff --git a/tests/queries/0_stateless/01079_alter_default_zookeeper_long.sql b/tests/queries/0_stateless/01079_alter_default_zookeeper_long.sql index 9239d2e9984..36f5dbb8b80 100644 --- a/tests/queries/0_stateless/01079_alter_default_zookeeper_long.sql +++ b/tests/queries/0_stateless/01079_alter_default_zookeeper_long.sql @@ -13,7 +13,7 @@ ORDER BY key; INSERT INTO alter_default select toDate('2020-01-05'), number from system.numbers limit 100; -- Cannot add column without type -ALTER TABLE alter_default ADD COLUMN value DEFAULT '10'; --{serverError 36} +ALTER TABLE alter_default ADD COLUMN value DEFAULT '10'; --{serverError BAD_ARGUMENTS} ALTER TABLE alter_default ADD COLUMN value String DEFAULT '10'; @@ -45,7 +45,7 @@ ALTER TABLE alter_default MODIFY COLUMN value UInt8 DEFAULT 10; SHOW CREATE TABLE alter_default; -ALTER TABLE alter_default ADD COLUMN bad_column UInt8 DEFAULT 'q'; --{serverError 6} +ALTER TABLE alter_default ADD COLUMN bad_column UInt8 DEFAULT 'q'; --{serverError CANNOT_PARSE_TEXT} ALTER TABLE alter_default ADD COLUMN better_column UInt8 DEFAULT '1'; @@ -53,7 +53,7 @@ SHOW CREATE TABLE alter_default; ALTER TABLE alter_default ADD COLUMN other_date String DEFAULT '0'; -ALTER TABLE alter_default MODIFY COLUMN other_date DateTime; --{serverError 41} +ALTER TABLE alter_default MODIFY COLUMN 
other_date DateTime; --{serverError CANNOT_PARSE_DATETIME} ALTER TABLE alter_default MODIFY COLUMN other_date DEFAULT 1; diff --git a/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql b/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql index 59ab5595577..6e502611479 100644 --- a/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql +++ b/tests/queries/0_stateless/01081_PartialSortingTransform_full_column.sql @@ -2,7 +2,7 @@ drop table if exists test_01081; create table test_01081 (key Int) engine=MergeTree() order by key; insert into test_01081 select * from system.numbers limit 10; -select 1 from remote('127.{1,2}', currentDatabase(), test_01081) lhs join system.one as rhs on rhs.dummy = 1 order by 1; -- { serverError 403 } +select 1 from remote('127.{1,2}', currentDatabase(), test_01081) lhs join system.one as rhs on rhs.dummy = 1 order by 1; -- { serverError INVALID_JOIN_ON_EXPRESSION } -- With multiple blocks triggers: -- @@ -11,6 +11,6 @@ select 1 from remote('127.{1,2}', currentDatabase(), test_01081) lhs join system -- _dummy Int Int32(size = 0), 1 UInt8 Const(size = 0, UInt8(size = 1)). insert into test_01081 select * from system.numbers limit 10; -select 1 from remote('127.{1,2}', currentDatabase(), test_01081) lhs join system.one as rhs on rhs.dummy = 1 order by 1; -- { serverError 403 } +select 1 from remote('127.{1,2}', currentDatabase(), test_01081) lhs join system.one as rhs on rhs.dummy = 1 order by 1; -- { serverError INVALID_JOIN_ON_EXPRESSION } drop table if exists test_01081; diff --git a/tests/queries/0_stateless/01083_expressions_in_engine_arguments.sql b/tests/queries/0_stateless/01083_expressions_in_engine_arguments.sql index b162fdb21fd..6268765aa27 100644 --- a/tests/queries/0_stateless/01083_expressions_in_engine_arguments.sql +++ b/tests/queries/0_stateless/01083_expressions_in_engine_arguments.sql @@ -36,7 +36,7 @@ CREATE TABLE url (n UInt64, col String) ENGINE=URL CREATE VIEW view AS SELECT toInt64(n) as n FROM (SELECT toString(n) as n from merge WHERE _table != 'qwerty' ORDER BY _table) UNION ALL SELECT * FROM file; -- The following line is needed just to disable checking stderr for emptiness -SELECT nonexistentsomething; -- { serverError 47 } +SELECT nonexistentsomething; -- { serverError UNKNOWN_IDENTIFIER } CREATE DICTIONARY dict (n UInt64, col String DEFAULT '42') PRIMARY KEY n SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9440 SECURE 1 USER 'default' TABLE 'url')) LIFETIME(1) LAYOUT(CACHE(SIZE_IN_CELLS 1)); diff --git a/tests/queries/0_stateless/01089_alter_settings_old_format.sql b/tests/queries/0_stateless/01089_alter_settings_old_format.sql index 7e7674f4d43..daeed522fac 100644 --- a/tests/queries/0_stateless/01089_alter_settings_old_format.sql +++ b/tests/queries/0_stateless/01089_alter_settings_old_format.sql @@ -9,7 +9,7 @@ CREATE TABLE old_format_mt ( ) ENGINE = MergeTree(event_date, (key, value1), 8192); -ALTER TABLE old_format_mt MODIFY SETTING enable_mixed_granularity_parts = 1; --{serverError 36} +ALTER TABLE old_format_mt MODIFY SETTING enable_mixed_granularity_parts = 1; --{serverError BAD_ARGUMENTS} SELECT 1; diff --git a/tests/queries/0_stateless/01093_cyclic_defaults_filimonov.sql b/tests/queries/0_stateless/01093_cyclic_defaults_filimonov.sql index f5f88db9d66..06010c983fc 100644 --- a/tests/queries/0_stateless/01093_cyclic_defaults_filimonov.sql +++ b/tests/queries/0_stateless/01093_cyclic_defaults_filimonov.sql @@ -6,7 +6,7 @@ CREATE TABLE test `a3` UInt64 DEFAULT a2 + 1, `a4` 
UInt64 ALIAS a3 + 1 ) -ENGINE = Log; -- { serverError 174 } +ENGINE = Log; -- { serverError CYCLIC_ALIASES } CREATE TABLE pythagoras ( @@ -14,6 +14,6 @@ CREATE TABLE pythagoras `b` Float64 DEFAULT sqrt((c * c) - (a * a)), `c` Float64 DEFAULT sqrt((a * a) + (b * b)) ) -ENGINE = Log; -- { serverError 174 } +ENGINE = Log; -- { serverError CYCLIC_ALIASES } -- TODO: It works but should not: CREATE TABLE test (a DEFAULT b, b DEFAULT a) ENGINE = Memory diff --git a/tests/queries/0_stateless/01095_tpch_like_smoke.sql b/tests/queries/0_stateless/01095_tpch_like_smoke.sql index 10ea601abad..ed1be5591e1 100644 --- a/tests/queries/0_stateless/01095_tpch_like_smoke.sql +++ b/tests/queries/0_stateless/01095_tpch_like_smoke.sql @@ -180,7 +180,7 @@ order by n_name, s_name, p_partkey -limit 100; -- { serverError 1, 47 } +limit 100; -- { serverError UNSUPPORTED_METHOD, UNKNOWN_IDENTIFIER } select 3; select @@ -546,7 +546,7 @@ where revenue0 ) order by - s_suppkey; -- { serverError 47 } + s_suppkey; -- { serverError UNKNOWN_IDENTIFIER } drop view revenue0; select 16; @@ -598,7 +598,7 @@ where lineitem where l_partkey = p_partkey - ); -- { serverError 1, 47 } + ); -- { serverError UNSUPPORTED_METHOD, UNKNOWN_IDENTIFIER } select 18; select @@ -709,7 +709,7 @@ where and s_nationkey = n_nationkey and n_name = 'CANADA' order by - s_name; -- { serverError 1, 47 } + s_name; -- { serverError UNSUPPORTED_METHOD, UNKNOWN_IDENTIFIER } select 21, 'fail: exists, not exists'; -- TODO -- select diff --git a/tests/queries/0_stateless/01097_cyclic_defaults.sql b/tests/queries/0_stateless/01097_cyclic_defaults.sql index 1d63038911f..570c93c5109 100644 --- a/tests/queries/0_stateless/01097_cyclic_defaults.sql +++ b/tests/queries/0_stateless/01097_cyclic_defaults.sql @@ -1,20 +1,20 @@ DROP TABLE IF EXISTS table_with_cyclic_defaults; -CREATE TABLE table_with_cyclic_defaults (a DEFAULT b, b DEFAULT a) ENGINE = Memory; --{serverError 174} +CREATE TABLE table_with_cyclic_defaults (a DEFAULT b, b DEFAULT a) ENGINE = Memory; --{serverError CYCLIC_ALIASES} -CREATE TABLE table_with_cyclic_defaults (a DEFAULT b + 1, b DEFAULT a * a) ENGINE = Memory; --{serverError 174} +CREATE TABLE table_with_cyclic_defaults (a DEFAULT b + 1, b DEFAULT a * a) ENGINE = Memory; --{serverError CYCLIC_ALIASES} -CREATE TABLE table_with_cyclic_defaults (a DEFAULT b, b DEFAULT toString(c), c DEFAULT concat(a, '1')) ENGINE = Memory; --{serverError 174} +CREATE TABLE table_with_cyclic_defaults (a DEFAULT b, b DEFAULT toString(c), c DEFAULT concat(a, '1')) ENGINE = Memory; --{serverError CYCLIC_ALIASES} -CREATE TABLE table_with_cyclic_defaults (a DEFAULT b, b DEFAULT c, c DEFAULT a * b) ENGINE = Memory; --{serverError 174} +CREATE TABLE table_with_cyclic_defaults (a DEFAULT b, b DEFAULT c, c DEFAULT a * b) ENGINE = Memory; --{serverError CYCLIC_ALIASES} -CREATE TABLE table_with_cyclic_defaults (a String DEFAULT b, b String DEFAULT a) ENGINE = Memory; --{serverError 174} +CREATE TABLE table_with_cyclic_defaults (a String DEFAULT b, b String DEFAULT a) ENGINE = Memory; --{serverError CYCLIC_ALIASES} CREATE TABLE table_with_cyclic_defaults (a String) ENGINE = Memory; -ALTER TABLE table_with_cyclic_defaults ADD COLUMN c String DEFAULT b, ADD COLUMN b String DEFAULT c; --{serverError 174} +ALTER TABLE table_with_cyclic_defaults ADD COLUMN c String DEFAULT b, ADD COLUMN b String DEFAULT c; --{serverError CYCLIC_ALIASES} -ALTER TABLE table_with_cyclic_defaults ADD COLUMN b String DEFAULT a, MODIFY 
COLUMN a DEFAULT b; --{serverError CYCLIC_ALIASES} SELECT 1; diff --git a/tests/queries/0_stateless/01099_operators_date_and_timestamp.sql b/tests/queries/0_stateless/01099_operators_date_and_timestamp.sql index feffd08562a..6140bad46fd 100644 --- a/tests/queries/0_stateless/01099_operators_date_and_timestamp.sql +++ b/tests/queries/0_stateless/01099_operators_date_and_timestamp.sql @@ -16,21 +16,21 @@ select timestamp '2001-09-28 23:00:00' - interval 23 hour; SET session_timezone = 'Europe/Amsterdam'; select (date '2001-09-29' + interval 12345 second) x, toTypeName(x); -select (date '2001-09-29' + interval 12345 millisecond) x, toTypeName(x); -- { serverError 43 } -select (date '2001-09-29' + interval 12345 microsecond) x, toTypeName(x); -- { serverError 43 } -select (date '2001-09-29' + interval 12345 nanosecond) x, toTypeName(x); -- { serverError 43 } +select (date '2001-09-29' + interval 12345 millisecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (date '2001-09-29' + interval 12345 microsecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (date '2001-09-29' + interval 12345 nanosecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select (date '2001-09-29' - interval 12345 second) x, toTypeName(x); -select (date '2001-09-29' - interval 12345 millisecond) x, toTypeName(x); -- { serverError 43 } -select (date '2001-09-29' - interval 12345 microsecond) x, toTypeName(x); -- { serverError 43 } -select (date '2001-09-29' - interval 12345 nanosecond) x, toTypeName(x); -- { serverError 43 } +select (date '2001-09-29' - interval 12345 millisecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (date '2001-09-29' - interval 12345 microsecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (date '2001-09-29' - interval 12345 nanosecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select (toDate32('2001-09-29') + interval 12345 second) x, toTypeName(x); -select (toDate32('2001-09-29') + interval 12345 millisecond) x, toTypeName(x); -- { serverError 43 } -select (toDate32('2001-09-29') + interval 12345 microsecond) x, toTypeName(x); -- { serverError 43 } -select (toDate32('2001-09-29') + interval 12345 nanosecond) x, toTypeName(x); -- { serverError 43 } +select (toDate32('2001-09-29') + interval 12345 millisecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (toDate32('2001-09-29') + interval 12345 microsecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (toDate32('2001-09-29') + interval 12345 nanosecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select (toDate32('2001-09-29') - interval 12345 second) x, toTypeName(x); -select (toDate32('2001-09-29') - interval 12345 millisecond) x, toTypeName(x); -- { serverError 43 } -select (toDate32('2001-09-29') - interval 12345 microsecond) x, toTypeName(x); -- { serverError 43 } -select (toDate32('2001-09-29') - interval 12345 nanosecond) x, toTypeName(x); -- { serverError 43 } +select (toDate32('2001-09-29') - interval 12345 millisecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (toDate32('2001-09-29') - interval 12345 microsecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select (toDate32('2001-09-29') - interval 12345 nanosecond) x, toTypeName(x); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select (timestamp '2001-12-29 03:00:00' - timestamp '2001-12-27 12:00:00') x, toTypeName(x); diff --git 
a/tests/queries/0_stateless/01101_literal_column_clash.sql b/tests/queries/0_stateless/01101_literal_column_clash.sql index b9645e3609e..a19c0b13874 100644 --- a/tests/queries/0_stateless/01101_literal_column_clash.sql +++ b/tests/queries/0_stateless/01101_literal_column_clash.sql @@ -7,7 +7,7 @@ join (select '1' as sid) as t2 on t2.sid = cast(t1.iid as String); select cast(7 as String), * from (select 3 "'String'"); select cast(7 as String), * from (select number "'String'" FROM numbers(2)); SELECT concat('xyz', 'abc'), * FROM (SELECT 2 AS "'xyz'"); -with 3 as "1" select 1, "1"; -- { serverError 352 } +with 3 as "1" select 1, "1"; -- { serverError AMBIGUOUS_COLUMN_NAME } -- https://github.com/ClickHouse/ClickHouse/issues/9953 select 1, * from (select 2 x) a left join (select 1, 3 y) b on y = x; @@ -17,9 +17,9 @@ select null, isConstant(null), * from (select 2 x, null) a right join (select 3 -- other cases with joins and constants -select cast(1, 'UInt8') from (select arrayJoin([1, 2]) as a) t1 left join (select 1 as b) t2 on b = ignore('UInt8'); -- { serverError 403 } +select cast(1, 'UInt8') from (select arrayJoin([1, 2]) as a) t1 left join (select 1 as b) t2 on b = ignore('UInt8'); -- { serverError INVALID_JOIN_ON_EXPRESSION } -select isConstant('UInt8'), toFixedString('hello', toUInt8(substring('UInt8', 5, 1))) from (select arrayJoin([1, 2]) as a) t1 left join (select 1 as b) t2 on b = ignore('UInt8'); -- { serverError 403 } +select isConstant('UInt8'), toFixedString('hello', toUInt8(substring('UInt8', 5, 1))) from (select arrayJoin([1, 2]) as a) t1 left join (select 1 as b) t2 on b = ignore('UInt8'); -- { serverError INVALID_JOIN_ON_EXPRESSION } -- https://github.com/ClickHouse/ClickHouse/issues/20624 select 2 as `toString(x)`, x from (select 1 as x); diff --git a/tests/queries/0_stateless/01109_exchange_tables.sql b/tests/queries/0_stateless/01109_exchange_tables.sql index b10377436f9..28f4a16bb49 100644 --- a/tests/queries/0_stateless/01109_exchange_tables.sql +++ b/tests/queries/0_stateless/01109_exchange_tables.sql @@ -11,9 +11,9 @@ CREATE TABLE t0 ENGINE=MergeTree() ORDER BY tuple() AS SELECT rowNumberInAllBloc CREATE TABLE t1 ENGINE=Log() AS SELECT * FROM system.tables AS t JOIN system.databases AS d ON t.database=d.name; CREATE TABLE t2 ENGINE=MergeTree() ORDER BY tuple() AS SELECT rowNumberInAllBlocks() + (SELECT count() FROM t0), * FROM (SELECT arrayJoin(['hello', 'world'])); -EXCHANGE TABLES t1 AND t3; -- { serverError 60 } -EXCHANGE TABLES t4 AND t2; -- { serverError 60 } -RENAME TABLE t0 TO t1; -- { serverError 57 } +EXCHANGE TABLES t1 AND t3; -- { serverError UNKNOWN_TABLE } +EXCHANGE TABLES t4 AND t2; -- { serverError UNKNOWN_TABLE } +RENAME TABLE t0 TO t1; -- { serverError TABLE_ALREADY_EXISTS } DROP TABLE t1; RENAME TABLE t0 TO t1; SELECT * FROM t1; @@ -41,9 +41,9 @@ CREATE TABLE test_01109_other_atomic.t3 ENGINE=MergeTree() ORDER BY tuple() CREATE TABLE test_01109_ordinary.t4 AS t1; -EXCHANGE TABLES test_01109_other_atomic.t3 AND test_01109_ordinary.t4; -- { serverError 48 } -EXCHANGE TABLES test_01109_ordinary.t4 AND test_01109_other_atomic.t3; -- { serverError 48 } -EXCHANGE TABLES test_01109_ordinary.t4 AND test_01109_ordinary.t4; -- { serverError 48 } +EXCHANGE TABLES test_01109_other_atomic.t3 AND test_01109_ordinary.t4; -- { serverError NOT_IMPLEMENTED } +EXCHANGE TABLES test_01109_ordinary.t4 AND test_01109_other_atomic.t3; -- { serverError NOT_IMPLEMENTED } +EXCHANGE TABLES test_01109_ordinary.t4 AND test_01109_ordinary.t4; -- { serverError NOT_IMPLEMENTED } 
EXCHANGE TABLES t1 AND test_01109_other_atomic.t3; EXCHANGE TABLES t2 AND t2; @@ -56,7 +56,7 @@ DROP DATABASE IF EXISTS test_01109_rename_exists; CREATE DATABASE test_01109_rename_exists ENGINE=Atomic; USE test_01109_rename_exists; CREATE TABLE t0 ENGINE=Log() AS SELECT * FROM system.numbers limit 2; -RENAME TABLE t0_tmp TO t1; -- { serverError 60 } +RENAME TABLE t0_tmp TO t1; -- { serverError UNKNOWN_TABLE } RENAME TABLE if exists t0_tmp TO t1; RENAME TABLE if exists t0 TO t1; SELECT * FROM t1; diff --git a/tests/queries/0_stateless/01109_inflating_cross_join.sql b/tests/queries/0_stateless/01109_inflating_cross_join.sql index 315f5c43c1e..bf7ef7c8fc3 100644 --- a/tests/queries/0_stateless/01109_inflating_cross_join.sql +++ b/tests/queries/0_stateless/01109_inflating_cross_join.sql @@ -1,7 +1,7 @@ SET max_memory_usage = 16000000; SET max_joined_block_size_rows = 10000000; -SELECT count(*) FROM numbers(10000) n1 CROSS JOIN numbers(1000) n2; -- { serverError 241 } +SELECT count(*) FROM numbers(10000) n1 CROSS JOIN numbers(1000) n2; -- { serverError MEMORY_LIMIT_EXCEEDED } SET max_joined_block_size_rows = 1000; SELECT count(*) FROM numbers(10000) n1 CROSS JOIN numbers(1000) n2; diff --git a/tests/queries/0_stateless/01114_materialize_clear_index_compact_parts.sql b/tests/queries/0_stateless/01114_materialize_clear_index_compact_parts.sql index b2ebe7e2cc2..06c8852d4a7 100644 --- a/tests/queries/0_stateless/01114_materialize_clear_index_compact_parts.sql +++ b/tests/queries/0_stateless/01114_materialize_clear_index_compact_parts.sql @@ -28,7 +28,7 @@ SELECT count() FROM minmax_compact WHERE i64 = 2; ALTER TABLE minmax_compact CLEAR INDEX idx IN PARTITION 1; ALTER TABLE minmax_compact CLEAR INDEX idx IN PARTITION 2; -SELECT count() FROM minmax_compact WHERE i64 = 2; -- { serverError 158 } +SELECT count() FROM minmax_compact WHERE i64 = 2; -- { serverError TOO_MANY_ROWS } set max_rows_to_read = 10; SELECT count() FROM minmax_compact WHERE i64 = 2; diff --git a/tests/queries/0_stateless/01114_mysql_database_engine_segfault.sql b/tests/queries/0_stateless/01114_mysql_database_engine_segfault.sql index 027acd536b3..3379acf4d7b 100644 --- a/tests/queries/0_stateless/01114_mysql_database_engine_segfault.sql +++ b/tests/queries/0_stateless/01114_mysql_database_engine_segfault.sql @@ -1,4 +1,4 @@ -- Tags: no-parallel, no-fasttest DROP DATABASE IF EXISTS conv_main; -CREATE DATABASE conv_main ENGINE = MySQL('127.0.0.1:3456', conv_main, 'metrika', 'password'); -- { serverError 501 } +CREATE DATABASE conv_main ENGINE = MySQL('127.0.0.1:3456', conv_main, 'metrika', 'password'); -- { serverError CANNOT_CREATE_DATABASE } diff --git a/tests/queries/0_stateless/01118_is_constant.sql b/tests/queries/0_stateless/01118_is_constant.sql index 5cbff986dd2..9e412159091 100644 --- a/tests/queries/0_stateless/01118_is_constant.sql +++ b/tests/queries/0_stateless/01118_is_constant.sql @@ -6,5 +6,5 @@ SELECT isConstant(x) FROM (SELECT 1 x); SELECT '---'; SELECT isConstant(x) FROM (SELECT 1 x UNION ALL SELECT 2); SELECT '---'; -select isConstant(); -- { serverError 42 } -select isConstant(1, 2); -- { serverError 42 } +select isConstant(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select isConstant(1, 2); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } diff --git a/tests/queries/0_stateless/01122_totals_rollup_having_block_header.sql b/tests/queries/0_stateless/01122_totals_rollup_having_block_header.sql index 6fb877c350a..7f0c29e9401 100644 --- 
a/tests/queries/0_stateless/01122_totals_rollup_having_block_header.sql +++ b/tests/queries/0_stateless/01122_totals_rollup_having_block_header.sql @@ -8,7 +8,7 @@ INSERT INTO rollup_having VALUES (NULL, NULL); INSERT INTO rollup_having VALUES ('a', NULL); INSERT INTO rollup_having VALUES ('a', 'b'); -SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP WITH TOTALS HAVING a IS NOT NULL; -- { serverError 48 } -SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP WITH TOTALS HAVING a IS NOT NULL and b IS NOT NULL; -- { serverError 48 } +SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP WITH TOTALS HAVING a IS NOT NULL; -- { serverError NOT_IMPLEMENTED } +SELECT a, b, count(*) FROM rollup_having GROUP BY a, b WITH ROLLUP WITH TOTALS HAVING a IS NOT NULL and b IS NOT NULL; -- { serverError NOT_IMPLEMENTED } DROP TABLE rollup_having; diff --git a/tests/queries/0_stateless/01131_max_rows_to_sort.sql b/tests/queries/0_stateless/01131_max_rows_to_sort.sql index d18f35e091e..0d6ff643acd 100644 --- a/tests/queries/0_stateless/01131_max_rows_to_sort.sql +++ b/tests/queries/0_stateless/01131_max_rows_to_sort.sql @@ -1,5 +1,5 @@ SET max_rows_to_sort = 100; -SELECT * FROM system.numbers ORDER BY number; -- { serverError 396 } +SELECT * FROM system.numbers ORDER BY number; -- { serverError TOO_MANY_ROWS_OR_BYTES } SET sort_overflow_mode = 'break'; SET max_block_size = 1000; diff --git a/tests/queries/0_stateless/01132_max_rows_to_read.sql b/tests/queries/0_stateless/01132_max_rows_to_read.sql index 8127befa83c..7d2030a767b 100644 --- a/tests/queries/0_stateless/01132_max_rows_to_read.sql +++ b/tests/queries/0_stateless/01132_max_rows_to_read.sql @@ -2,13 +2,13 @@ SET max_block_size = 10; SET max_rows_to_read = 20; SET read_overflow_mode = 'throw'; -SELECT count() FROM numbers(30); -- { serverError 158 } +SELECT count() FROM numbers(30); -- { serverError TOO_MANY_ROWS } SELECT count() FROM numbers(19); SELECT count() FROM numbers(20); -SELECT count() FROM numbers(21); -- { serverError 158 } +SELECT count() FROM numbers(21); -- { serverError TOO_MANY_ROWS } -- check early exception if the estimated number of rows is high -SELECT * FROM numbers(30); -- { serverError 158 } +SELECT * FROM numbers(30); -- { serverError TOO_MANY_ROWS } SET read_overflow_mode = 'break'; diff --git a/tests/queries/0_stateless/01134_max_rows_to_group_by.sql b/tests/queries/0_stateless/01134_max_rows_to_group_by.sql index f9ea37cb65a..ea4b87cda5b 100644 --- a/tests/queries/0_stateless/01134_max_rows_to_group_by.sql +++ b/tests/queries/0_stateless/01134_max_rows_to_group_by.sql @@ -5,7 +5,7 @@ SET group_by_overflow_mode = 'throw'; -- Settings 'max_rows_to_group_by' and 'max_bytes_before_external_group_by' are mutually exclusive. 
SET max_bytes_before_external_group_by = 0; -SELECT 'test1', number FROM system.numbers GROUP BY number; -- { serverError 158 } +SELECT 'test1', number FROM system.numbers GROUP BY number; -- { serverError TOO_MANY_ROWS } SET group_by_overflow_mode = 'break'; SELECT 'test2', number FROM system.numbers GROUP BY number ORDER BY number; @@ -14,7 +14,7 @@ SET max_rows_to_read = 500; SELECT 'test3', number FROM system.numbers GROUP BY number ORDER BY number; SET group_by_overflow_mode = 'any'; -SELECT 'test4', number FROM numbers(1000) GROUP BY number ORDER BY number; -- { serverError 158 } +SELECT 'test4', number FROM numbers(1000) GROUP BY number ORDER BY number; -- { serverError TOO_MANY_ROWS } SET max_rows_to_read = 1000; SELECT 'test5', number FROM numbers(1000) GROUP BY number ORDER BY number; diff --git a/tests/queries/0_stateless/01134_set_overflow_mode.sql b/tests/queries/0_stateless/01134_set_overflow_mode.sql index 791bc6d7f9e..c3cf5ffeda6 100644 --- a/tests/queries/0_stateless/01134_set_overflow_mode.sql +++ b/tests/queries/0_stateless/01134_set_overflow_mode.sql @@ -2,10 +2,10 @@ SET max_block_size = 10; SET max_rows_in_set = 20; SET set_overflow_mode = 'throw'; -SELECT arrayJoin([5, 25]) IN (SELECT DISTINCT toUInt8(intDiv(number, 10)) FROM numbers(300)); -- { serverError 191 } +SELECT arrayJoin([5, 25]) IN (SELECT DISTINCT toUInt8(intDiv(number, 10)) FROM numbers(300)); -- { serverError SET_SIZE_LIMIT_EXCEEDED } SELECT arrayJoin([5, 25]) IN (SELECT DISTINCT toUInt8(intDiv(number, 10)) FROM numbers(190)); SELECT arrayJoin([5, 25]) IN (SELECT DISTINCT toUInt8(intDiv(number, 10)) FROM numbers(200)); -SELECT arrayJoin([5, 25]) IN (SELECT DISTINCT toUInt8(intDiv(number, 10)) FROM numbers(210)); -- { serverError 191 } +SELECT arrayJoin([5, 25]) IN (SELECT DISTINCT toUInt8(intDiv(number, 10)) FROM numbers(210)); -- { serverError SET_SIZE_LIMIT_EXCEEDED } SET set_overflow_mode = 'break'; diff --git a/tests/queries/0_stateless/01139_asof_join_types.sql b/tests/queries/0_stateless/01139_asof_join_types.sql index 4cfde5d3210..1a2308318f5 100644 --- a/tests/queries/0_stateless/01139_asof_join_types.sql +++ b/tests/queries/0_stateless/01139_asof_join_types.sql @@ -15,4 +15,4 @@ select * from (select 0 as k, toDecimal128(1, 0) as v) t1 asof join (select 0 as select * from (select 0 as k, toDate(0) as v) t1 asof join (select 0 as k, toDate(0) as v) t2 using(k, v); select * from (select 0 as k, toDateTime(0, 'UTC') as v) t1 asof join (select 0 as k, toDateTime(0, 'UTC') as v) t2 using(k, v); -select * from (select 0 as k, 'x' as v) t1 asof join (select 0 as k, 'x' as v) t2 using(k, v); -- { serverError 169 } +select * from (select 0 as k, 'x' as v) t1 asof join (select 0 as k, 'x' as v) t2 using(k, v); -- { serverError BAD_TYPE_OF_FIELD } diff --git a/tests/queries/0_stateless/01141_join_get_negative.sql b/tests/queries/0_stateless/01141_join_get_negative.sql index e165d34e460..86c00ee436b 100644 --- a/tests/queries/0_stateless/01141_join_get_negative.sql +++ b/tests/queries/0_stateless/01141_join_get_negative.sql @@ -4,8 +4,8 @@ DROP TABLE IF EXISTS t2; CREATE TABLE t1 (`s` String, `x` Array(UInt8), `k` UInt64) ENGINE = Join(ANY, LEFT, k); CREATE TABLE t2 (`s` String, `x` Array(UInt8), `k` UInt64) ENGINE = Join(ANY, INNER, k); -SELECT joinGet('t1', '', number) FROM numbers(2); -- { serverError 16 } -SELECT joinGet('t2', 's', number) FROM numbers(2); -- { serverError 264 } +SELECT joinGet('t1', '', number) FROM numbers(2); -- { serverError NO_SUCH_COLUMN_IN_TABLE } +SELECT joinGet('t2', 's', 
number) FROM numbers(2); -- { serverError INCOMPATIBLE_TYPE_OF_JOIN } DROP TABLE t1; DROP TABLE t2; diff --git a/tests/queries/0_stateless/01147_partial_merge_full_join.sql b/tests/queries/0_stateless/01147_partial_merge_full_join.sql index b32ad82a41e..0d5eb133378 100644 --- a/tests/queries/0_stateless/01147_partial_merge_full_join.sql +++ b/tests/queries/0_stateless/01147_partial_merge_full_join.sql @@ -11,29 +11,29 @@ INSERT INTO t1 (x, y) VALUES (0, 0); SET join_algorithm = 'partial_merge'; SELECT 't join none using'; -SELECT * FROM t1 ANY RIGHT JOIN t0 USING (x) ORDER BY x; -- { serverError 48 } -SELECT * FROM t1 ANY FULL JOIN t0 USING (x) ORDER BY x; -- { serverError 48 } +SELECT * FROM t1 ANY RIGHT JOIN t0 USING (x) ORDER BY x; -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t1 ANY FULL JOIN t0 USING (x) ORDER BY x; -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t1 RIGHT JOIN t0 USING (x) ORDER BY x; SELECT '-'; SELECT * FROM t1 FULL JOIN t0 USING (x) ORDER BY x; SELECT 't join none on'; -SELECT * FROM t1 ANY RIGHT JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError 48 } -SELECT * FROM t1 ANY FULL JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError 48 } +SELECT * FROM t1 ANY RIGHT JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t1 ANY FULL JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t1 RIGHT JOIN t0 ON t1.x = t0.x ORDER BY x; SELECT '-'; SELECT * FROM t1 FULL JOIN t0 ON t1.x = t0.x ORDER BY x; SELECT 'none join t using'; -SELECT * FROM t0 ANY RIGHT JOIN t1 USING (x); -- { serverError 48 } -SELECT * FROM t0 ANY FULL JOIN t1 USING (x); -- { serverError 48 } +SELECT * FROM t0 ANY RIGHT JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t0 ANY FULL JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t0 RIGHT JOIN t1 USING (x); SELECT '-'; SELECT * FROM t0 FULL JOIN t1 USING (x); SELECT 'none join t on'; -SELECT * FROM t0 ANY RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError 48 } -SELECT * FROM t0 ANY FULL JOIN t1 ON t1.x = t0.x; -- { serverError 48 } +SELECT * FROM t0 ANY RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t0 ANY FULL JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t0 RIGHT JOIN t1 ON t1.x = t0.x; SELECT '-'; @@ -43,29 +43,29 @@ SELECT '/none'; SET join_use_nulls = 1; SELECT 't join none using'; -SELECT * FROM t1 ANY RIGHT JOIN t0 USING (x) ORDER BY x; -- { serverError 48 } -SELECT * FROM t1 ANY FULL JOIN t0 USING (x) ORDER BY x; -- { serverError 48 } +SELECT * FROM t1 ANY RIGHT JOIN t0 USING (x) ORDER BY x; -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t1 ANY FULL JOIN t0 USING (x) ORDER BY x; -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t1 RIGHT JOIN t0 USING (x) ORDER BY x; SELECT '-'; SELECT * FROM t1 FULL JOIN t0 USING (x) ORDER BY x; SELECT 't join none on'; -SELECT * FROM t1 ANY RIGHT JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError 48 } -SELECT * FROM t1 ANY FULL JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError 48 } +SELECT * FROM t1 ANY RIGHT JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t1 ANY FULL JOIN t0 ON t1.x = t0.x ORDER BY x; -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t1 RIGHT JOIN t0 ON t1.x = t0.x ORDER BY x; SELECT '-'; SELECT * FROM t1 FULL JOIN t0 ON t1.x = t0.x ORDER BY x; SELECT 'none join t using'; -SELECT * FROM t0 ANY RIGHT JOIN t1 USING (x); -- { 
serverError 48 } -SELECT * FROM t0 ANY FULL JOIN t1 USING (x); -- { serverError 48 } +SELECT * FROM t0 ANY RIGHT JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t0 ANY FULL JOIN t1 USING (x); -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t0 RIGHT JOIN t1 USING (x); SELECT '-'; SELECT * FROM t0 FULL JOIN t1 USING (x); SELECT 'none join t on'; -SELECT * FROM t0 ANY RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError 48 } -SELECT * FROM t0 ANY FULL JOIN t1 ON t1.x = t0.x; -- { serverError 48 } +SELECT * FROM t0 ANY RIGHT JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED } +SELECT * FROM t0 ANY FULL JOIN t1 ON t1.x = t0.x; -- { serverError NOT_IMPLEMENTED } SELECT '-'; SELECT * FROM t0 RIGHT JOIN t1 ON t1.x = t0.x; SELECT '-'; diff --git a/tests/queries/0_stateless/01148_zookeeper_path_macros_unfolding.sql b/tests/queries/0_stateless/01148_zookeeper_path_macros_unfolding.sql index de244e64999..a585ef1c324 100644 --- a/tests/queries/0_stateless/01148_zookeeper_path_macros_unfolding.sql +++ b/tests/queries/0_stateless/01148_zookeeper_path_macros_unfolding.sql @@ -14,10 +14,10 @@ DETACH TABLE rmt1; ATTACH TABLE rmt1; SHOW CREATE TABLE rmt1; -CREATE TABLE rmt (n UInt64, s String) ENGINE = ReplicatedMergeTree('{default_path_test}{uuid}', '{default_name_test}') ORDER BY n; -- { serverError 36 } +CREATE TABLE rmt (n UInt64, s String) ENGINE = ReplicatedMergeTree('{default_path_test}{uuid}', '{default_name_test}') ORDER BY n; -- { serverError BAD_ARGUMENTS } CREATE TABLE rmt (n UInt64, s String) ENGINE = ReplicatedMergeTree('{default_path_test}test_01148', '{default_name_test}') ORDER BY n; SHOW CREATE TABLE rmt; -RENAME TABLE rmt TO rmt2; -- { serverError 48 } +RENAME TABLE rmt TO rmt2; -- { serverError NOT_IMPLEMENTED } DETACH TABLE rmt; ATTACH TABLE rmt; SHOW CREATE TABLE rmt; @@ -26,7 +26,7 @@ SET distributed_ddl_output_mode='none'; DROP DATABASE IF EXISTS test_01148_atomic; CREATE DATABASE test_01148_atomic ENGINE=Atomic; CREATE TABLE test_01148_atomic.rmt2 ON CLUSTER test_shard_localhost (n int, PRIMARY KEY n) ENGINE=ReplicatedMergeTree; -CREATE TABLE test_01148_atomic.rmt3 AS test_01148_atomic.rmt2; -- { serverError 36 } +CREATE TABLE test_01148_atomic.rmt3 AS test_01148_atomic.rmt2; -- { serverError BAD_ARGUMENTS } CREATE TABLE test_01148_atomic.rmt4 ON CLUSTER test_shard_localhost AS test_01148_atomic.rmt2; SHOW CREATE TABLE test_01148_atomic.rmt2; RENAME TABLE test_01148_atomic.rmt4 to test_01148_atomic.rmt3; @@ -36,7 +36,7 @@ DROP DATABASE IF EXISTS test_01148_ordinary; set allow_deprecated_database_ordinary=1; -- Creation of a database with Ordinary engine emits a warning. 
CREATE DATABASE test_01148_ordinary ENGINE=Ordinary; -RENAME TABLE test_01148_atomic.rmt3 to test_01148_ordinary.rmt3; -- { serverError 48 } +RENAME TABLE test_01148_atomic.rmt3 to test_01148_ordinary.rmt3; -- { serverError NOT_IMPLEMENTED } DROP DATABASE test_01148_ordinary; DROP DATABASE test_01148_atomic; diff --git a/tests/queries/0_stateless/01152_cross_replication.sql b/tests/queries/0_stateless/01152_cross_replication.sql index 5d013400539..40d48092244 100644 --- a/tests/queries/0_stateless/01152_cross_replication.sql +++ b/tests/queries/0_stateless/01152_cross_replication.sql @@ -8,9 +8,9 @@ DROP TABLE IF EXISTS demo_loan_01568_dist; CREATE DATABASE shard_0; CREATE DATABASE shard_1; -CREATE TABLE demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); -- { serverError 48 } +CREATE TABLE demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); -- { serverError NOT_IMPLEMENTED } SET distributed_ddl_entry_format_version = 2; -CREATE TABLE demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); -- { serverError 371 } +CREATE TABLE demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); -- { serverError INCONSISTENT_CLUSTER_DEFINITION } SET distributed_ddl_output_mode='throw'; CREATE TABLE shard_0.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); CREATE TABLE shard_1.demo_loan_01568 ON CLUSTER test_cluster_two_shards_different_databases ( `id` Int64 COMMENT 'id', `date_stat` Date COMMENT 'date of stat', `customer_no` String COMMENT 'customer no', `loan_principal` Float64 COMMENT 'loan principal' ) ENGINE=ReplacingMergeTree() ORDER BY id PARTITION BY toYYYYMM(date_stat); diff --git a/tests/queries/0_stateless/01153_attach_mv_uuid.sql b/tests/queries/0_stateless/01153_attach_mv_uuid.sql index 4f043e11221..00cce8a1de4 100644 --- a/tests/queries/0_stateless/01153_attach_mv_uuid.sql +++ b/tests/queries/0_stateless/01153_attach_mv_uuid.sql @@ -39,6 +39,6 @@ INSERT INTO src VALUES (3), (4); SELECT * FROM mv ORDER BY n; DROP TABLE mv SYNC; -ATTACH MATERIALIZED VIEW mv UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' TO INNER UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; -- { serverError 36 } +ATTACH MATERIALIZED VIEW mv UUID 
'3bd68e3c-2693-4352-ad66-a66eba9e345e' TO INNER UUID '3bd68e3c-2693-4352-ad66-a66eba9e345e' (n Int32, n2 Int64) ENGINE = MergeTree PARTITION BY n % 10 ORDER BY n AS SELECT n, n * n AS n2 FROM src; -- { serverError BAD_ARGUMENTS } DROP TABLE src; diff --git a/tests/queries/0_stateless/01155_rename_move_materialized_view.sql b/tests/queries/0_stateless/01155_rename_move_materialized_view.sql index 80ed707b695..9d85acafb05 100644 --- a/tests/queries/0_stateless/01155_rename_move_materialized_view.sql +++ b/tests/queries/0_stateless/01155_rename_move_materialized_view.sql @@ -53,8 +53,8 @@ DROP DATABASE test_01155_ordinary; USE default; INSERT INTO test_01155_atomic.src(s) VALUES ('after moving tables'); -SELECT materialize(2), substr(_table, 1, 10), s FROM merge('test_01155_atomic', '') ORDER BY _table, s; -- { serverError 81 } -SELECT dictGet('test_01155_ordinary.dict', 'x', 'after moving tables'); -- { serverError 36 } +SELECT materialize(2), substr(_table, 1, 10), s FROM merge('test_01155_atomic', '') ORDER BY _table, s; -- { serverError UNKNOWN_DATABASE } +SELECT dictGet('test_01155_ordinary.dict', 'x', 'after moving tables'); -- { serverError BAD_ARGUMENTS } RENAME DATABASE test_01155_atomic TO test_01155_ordinary; USE test_01155_ordinary; diff --git a/tests/queries/0_stateless/01157_replace_table.sql b/tests/queries/0_stateless/01157_replace_table.sql index 20cae67d8e7..3d07c69acec 100644 --- a/tests/queries/0_stateless/01157_replace_table.sql +++ b/tests/queries/0_stateless/01157_replace_table.sql @@ -29,7 +29,7 @@ select * from t order by n; select 'exception on create and fill'; -- table is not created if select fails -create or replace table join engine=Join(ANY, INNER, n) as select * from t where throwIf(n); -- { serverError 395 } +create or replace table join engine=Join(ANY, INNER, n) as select * from t where throwIf(n); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } select count() from system.tables where database=currentDatabase() and name='join'; -- table is created and filled @@ -38,7 +38,7 @@ select * from numbers(10) as t any join join on t.number=join.n order by n; -- table is not replaced if select fails insert into t(n) values (4); -replace table join engine=Join(ANY, INNER, n) as select * from t where throwIf(n); -- { serverError 395 } +replace table join engine=Join(ANY, INNER, n) as select * from t where throwIf(n); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } select * from numbers(10) as t any join join on t.number=join.n order by n; -- table is replaced diff --git a/tests/queries/0_stateless/01165_lost_part_empty_partition.sql b/tests/queries/0_stateless/01165_lost_part_empty_partition.sql index a1db1c27bee..84bee466365 100644 --- a/tests/queries/0_stateless/01165_lost_part_empty_partition.sql +++ b/tests/queries/0_stateless/01165_lost_part_empty_partition.sql @@ -5,7 +5,7 @@ create table rmt2 (d DateTime, n int) engine=ReplicatedMergeTree('/test/01165/{d system stop replicated sends rmt1; insert into rmt1 values (now(), arrayJoin([1, 2])); -- { clientError 36 } -insert into rmt1(n) select * from system.numbers limit arrayJoin([1, 2]); -- { serverError 36, 440 } +insert into rmt1(n) select * from system.numbers limit arrayJoin([1, 2]); -- { serverError BAD_ARGUMENTS, INVALID_LIMIT_EXPRESSION } insert into rmt1 values (now(), rand()); drop table rmt1; diff --git a/tests/queries/0_stateless/01173_transaction_control_queries.sql b/tests/queries/0_stateless/01173_transaction_control_queries.sql index 03c98f50cc4..9d3f56f8f6b 100644 --- 
a/tests/queries/0_stateless/01173_transaction_control_queries.sql +++ b/tests/queries/0_stateless/01173_transaction_control_queries.sql @@ -31,7 +31,7 @@ insert into mt1 values (3); insert into mt2 values (30); select 'on exception before start', arraySort(groupArray(n)) from (select n from mt1 union all select * from mt2); -- rollback on exception before start -select functionThatDoesNotExist(); -- { serverError 46 } +select functionThatDoesNotExist(); -- { serverError UNKNOWN_FUNCTION } -- cannot commit after exception commit; -- { serverError INVALID_TRANSACTION } -- after 46 begin transaction; -- { serverError INVALID_TRANSACTION } @@ -42,7 +42,7 @@ insert into mt1 values (4); insert into mt2 values (40); select 'on exception while processing', arraySort(groupArray(n)) from (select n from mt1 union all select * from mt2); -- rollback on exception while processing -select throwIf(100 < number) from numbers(1000); -- { serverError 395 } +select throwIf(100 < number) from numbers(1000); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } -- cannot commit after exception commit; -- { serverError INVALID_TRANSACTION } -- after 395 insert into mt1 values (5); -- { serverError INVALID_TRANSACTION } @@ -82,19 +82,19 @@ set transaction snapshot 5; -- { serverError INVALID_TRANSACTION } rollback; begin transaction; -create table m (n int) engine=Memory; -- { serverError 48 } +create table m (n int) engine=Memory; -- { serverError NOT_IMPLEMENTED } commit; -- { serverError INVALID_TRANSACTION } -- after 48 rollback; create table m (n int) engine=Memory; begin transaction; -insert into m values (1); -- { serverError 48 } +insert into m values (1); -- { serverError NOT_IMPLEMENTED } select * from m; -- { serverError INVALID_TRANSACTION } commit; -- { serverError INVALID_TRANSACTION } -- after 48 rollback; begin transaction; -select * from m; -- { serverError 48 } +select * from m; -- { serverError NOT_IMPLEMENTED } commit; -- { serverError INVALID_TRANSACTION } -- after 48 rollback; diff --git a/tests/queries/0_stateless/01178_int_field_to_decimal.sql b/tests/queries/0_stateless/01178_int_field_to_decimal.sql index bbd72e57d70..633e8b658f6 100644 --- a/tests/queries/0_stateless/01178_int_field_to_decimal.sql +++ b/tests/queries/0_stateless/01178_int_field_to_decimal.sql @@ -1,10 +1,10 @@ -select d from values('d Decimal(8, 8)', 0, 1) where d not in (-1, 0); -- { serverError 69 } -select d from values('d Decimal(8, 8)', 0, 2) where d not in (1, 0); -- { serverError 69 } -select d from values('d Decimal(9, 8)', 0, 3) where d not in (-9223372036854775808, 0); -- { serverError 69 } -select d from values('d Decimal(9, 8)', 0, 4) where d not in (18446744073709551615, 0); -- { serverError 69 } -select d from values('d Decimal(18, 8)', 0, 5) where d not in (-9223372036854775808, 0); -- { serverError 69 } -select d from values('d Decimal(18, 8)', 0, 6) where d not in (18446744073709551615, 0); -- { serverError 69 } -select d from values('d Decimal(26, 8)', 0, 7) where d not in (-9223372036854775808, 0); -- { serverError 69 } -select d from values('d Decimal(27, 8)', 0, 8) where d not in (18446744073709551615, 0); -- { serverError 69 } +select d from values('d Decimal(8, 8)', 0, 1) where d not in (-1, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +select d from values('d Decimal(8, 8)', 0, 2) where d not in (1, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +select d from values('d Decimal(9, 8)', 0, 3) where d not in (-9223372036854775808, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +select d from 
values('d Decimal(9, 8)', 0, 4) where d not in (18446744073709551615, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +select d from values('d Decimal(18, 8)', 0, 5) where d not in (-9223372036854775808, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +select d from values('d Decimal(18, 8)', 0, 6) where d not in (18446744073709551615, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +select d from values('d Decimal(26, 8)', 0, 7) where d not in (-9223372036854775808, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +select d from values('d Decimal(27, 8)', 0, 8) where d not in (18446744073709551615, 0); -- { serverError ARGUMENT_OUT_OF_BOUND } select d from values('d Decimal(27, 8)', 0, 9) where d not in (-9223372036854775808, 0); select d from values('d Decimal(28, 8)', 0, 10) where d not in (18446744073709551615, 0); diff --git a/tests/queries/0_stateless/01182_materialized_view_different_structure.sql b/tests/queries/0_stateless/01182_materialized_view_different_structure.sql index 751bcc9e48e..485f9985974 100644 --- a/tests/queries/0_stateless/01182_materialized_view_different_structure.sql +++ b/tests/queries/0_stateless/01182_materialized_view_different_structure.sql @@ -24,13 +24,13 @@ SET allow_experimental_bigint_types=1; CREATE TABLE dist (n Int128) ENGINE=Distributed(test_cluster_two_shards, currentDatabase(), mv); INSERT INTO src SELECT number, toString(number) FROM numbers(1000); -INSERT INTO mv SELECT toString(number + 1000) FROM numbers(1000); -- { serverError 53 } -INSERT INTO mv SELECT arrayJoin(['42', 'test']); -- { serverError 53 } +INSERT INTO mv SELECT toString(number + 1000) FROM numbers(1000); -- { serverError TYPE_MISMATCH } +INSERT INTO mv SELECT arrayJoin(['42', 'test']); -- { serverError TYPE_MISMATCH } SELECT count(), sum(n), sum(toInt64(s)), max(n), min(n) FROM src; SELECT count(), sum(n), sum(toInt64(s)), max(n), min(n) FROM dst; SELECT count(), sum(toInt64(n)), max(n), min(n) FROM mv; -SELECT count(), sum(toInt64(n)), max(n), min(n) FROM dist; -- { serverError 70 } +SELECT count(), sum(toInt64(n)), max(n), min(n) FROM dist; -- { serverError CANNOT_CONVERT_TYPE } SELECT count(), sum(toInt64(n)), max(toUInt32(n)), min(toInt128(n)) FROM dist; DROP TABLE test_table; diff --git a/tests/queries/0_stateless/01185_create_or_replace_table.sql b/tests/queries/0_stateless/01185_create_or_replace_table.sql index e8845260726..11759d0bb0c 100644 --- a/tests/queries/0_stateless/01185_create_or_replace_table.sql +++ b/tests/queries/0_stateless/01185_create_or_replace_table.sql @@ -2,7 +2,7 @@ drop table if exists t1; -replace table t1 (n UInt64, s String) engine=MergeTree order by n; -- { serverError 60 } +replace table t1 (n UInt64, s String) engine=MergeTree order by n; -- { serverError UNKNOWN_TABLE } show tables; create or replace table t1 (n UInt64, s String) engine=MergeTree order by n; show tables; diff --git a/tests/queries/0_stateless/01188_attach_table_from_path.sql b/tests/queries/0_stateless/01188_attach_table_from_path.sql index 9bf401c8ea4..39ec643f623 100644 --- a/tests/queries/0_stateless/01188_attach_table_from_path.sql +++ b/tests/queries/0_stateless/01188_attach_table_from_path.sql @@ -4,9 +4,9 @@ drop table if exists test; drop table if exists file; drop table if exists mt; -attach table test from 'some/path' (n UInt8) engine=Memory; -- { serverError 48 } -attach table test from '/etc/passwd' (s String) engine=File(TSVRaw); -- { serverError 481 } -attach table test from '../../../../../../../../../etc/passwd' (s String) engine=File(TSVRaw); -- { serverError 481 } 
+attach table test from 'some/path' (n UInt8) engine=Memory; -- { serverError NOT_IMPLEMENTED } +attach table test from '/etc/passwd' (s String) engine=File(TSVRaw); -- { serverError PATH_ACCESS_DENIED } +attach table test from '../../../../../../../../../etc/passwd' (s String) engine=File(TSVRaw); -- { serverError PATH_ACCESS_DENIED } attach table test from 42 (s String) engine=File(TSVRaw); -- { clientError 62 } insert into table function file('01188_attach/file/data.TSV', 'TSV', 's String, n UInt8') values ('file', 42); diff --git a/tests/queries/0_stateless/01191_rename_dictionary.sql b/tests/queries/0_stateless/01191_rename_dictionary.sql index e9fed1dd6b2..6666c3308ca 100644 --- a/tests/queries/0_stateless/01191_rename_dictionary.sql +++ b/tests/queries/0_stateless/01191_rename_dictionary.sql @@ -16,15 +16,15 @@ INSERT INTO test_01191._ VALUES (42, 'test'); SELECT name, status FROM system.dictionaries WHERE database='test_01191'; SELECT name, engine FROM system.tables WHERE database='test_01191' ORDER BY name; -RENAME DICTIONARY test_01191.table TO test_01191.table1; -- {serverError 60} -EXCHANGE DICTIONARIES test_01191._ AND test_01191.dict; -- {serverError 80} +RENAME DICTIONARY test_01191.table TO test_01191.table1; -- {serverError UNKNOWN_TABLE} +EXCHANGE DICTIONARIES test_01191._ AND test_01191.dict; -- {serverError INCORRECT_QUERY} EXCHANGE TABLES test_01191.t AND test_01191.dict; SELECT name, status FROM system.dictionaries WHERE database='test_01191'; SELECT name, engine FROM system.tables WHERE database='test_01191' ORDER BY name; SELECT dictGet(test_01191.t, 's', toUInt64(42)); EXCHANGE TABLES test_01191.dict AND test_01191.t; -RENAME DICTIONARY test_01191.t TO test_01191.dict1; -- {serverError 80} -DROP DICTIONARY test_01191.t; -- {serverError 80} +RENAME DICTIONARY test_01191.t TO test_01191.dict1; -- {serverError INCORRECT_QUERY} +DROP DICTIONARY test_01191.t; -- {serverError INCORRECT_QUERY} DROP TABLE test_01191.t; CREATE DATABASE dummy_db ENGINE=Atomic; diff --git a/tests/queries/0_stateless/01202_array_auc_special.sql b/tests/queries/0_stateless/01202_array_auc_special.sql index a23b29ad9e1..e379050a982 100644 --- a/tests/queries/0_stateless/01202_array_auc_special.sql +++ b/tests/queries/0_stateless/01202_array_auc_special.sql @@ -1,9 +1,9 @@ -SELECT arrayAUC([], []); -- { serverError 43 } +SELECT arrayAUC([], []); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT arrayAUC([1], [1]); -SELECT arrayAUC([1], []); -- { serverError 43 } -SELECT arrayAUC([], [1]); -- { serverError 43 } -SELECT arrayAUC([1, 2], [3]); -- { serverError 36 } -SELECT arrayAUC([1], [2, 3]); -- { serverError 36 } +SELECT arrayAUC([1], []); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT arrayAUC([], [1]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT arrayAUC([1, 2], [3]); -- { serverError BAD_ARGUMENTS } +SELECT arrayAUC([1], [2, 3]); -- { serverError BAD_ARGUMENTS } SELECT arrayAUC([1, 1], [1, 1]); SELECT arrayAUC([1, 1], [0, 0]); SELECT arrayAUC([1, 1], [0, 1]); diff --git a/tests/queries/0_stateless/01211_optimize_skip_unused_shards_type_mismatch.sql b/tests/queries/0_stateless/01211_optimize_skip_unused_shards_type_mismatch.sql index fb75c3249cc..0c9aa078e91 100644 --- a/tests/queries/0_stateless/01211_optimize_skip_unused_shards_type_mismatch.sql +++ b/tests/queries/0_stateless/01211_optimize_skip_unused_shards_type_mismatch.sql @@ -9,7 +9,7 @@ create table data_02000 (key Int) Engine=Null(); create table dist_02000 as data_02000 Engine=Distributed(test_cluster_two_shards, 
currentDatabase(), data_02000, key); select * from data_02000 where key = 0xdeadbeafdeadbeaf; -select * from dist_02000 where key = 0xdeadbeafdeadbeaf settings force_optimize_skip_unused_shards=2; -- { serverError 507, CANNOT_CONVERT_TYPE } +select * from dist_02000 where key = 0xdeadbeafdeadbeaf settings force_optimize_skip_unused_shards=2; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS, CANNOT_CONVERT_TYPE } select * from dist_02000 where key = 0xdeadbeafdeadbeaf; drop table data_02000; diff --git a/tests/queries/0_stateless/01213_alter_rename_column.sql b/tests/queries/0_stateless/01213_alter_rename_column.sql index 1732ea88274..03dcf4d9840 100644 --- a/tests/queries/0_stateless/01213_alter_rename_column.sql +++ b/tests/queries/0_stateless/01213_alter_rename_column.sql @@ -22,9 +22,9 @@ SELECT renamed_value1 FROM table_for_rename WHERE key = 1; SELECT * FROM table_for_rename WHERE key = 1 FORMAT TSVWithNames; -ALTER TABLE table_for_rename RENAME COLUMN value3 to value2; --{serverError 15} -ALTER TABLE table_for_rename RENAME COLUMN value3 TO r1, RENAME COLUMN value3 TO r2; --{serverError 36} -ALTER TABLE table_for_rename RENAME COLUMN value3 TO r1, RENAME COLUMN r1 TO value1; --{serverError 48} +ALTER TABLE table_for_rename RENAME COLUMN value3 to value2; --{serverError DUPLICATE_COLUMN} +ALTER TABLE table_for_rename RENAME COLUMN value3 TO r1, RENAME COLUMN value3 TO r2; --{serverError BAD_ARGUMENTS} +ALTER TABLE table_for_rename RENAME COLUMN value3 TO r1, RENAME COLUMN r1 TO value1; --{serverError NOT_IMPLEMENTED} ALTER TABLE table_for_rename RENAME COLUMN value2 TO renamed_value2, RENAME COLUMN value3 TO renamed_value3; @@ -32,7 +32,7 @@ SELECT renamed_value2, renamed_value3 FROM table_for_rename WHERE key = 7; SELECT * FROM table_for_rename WHERE key = 7 FORMAT TSVWithNames; -ALTER TABLE table_for_rename RENAME COLUMN value100 to renamed_value100; --{serverError 10} +ALTER TABLE table_for_rename RENAME COLUMN value100 to renamed_value100; --{serverError NOT_FOUND_COLUMN_IN_BLOCK} ALTER TABLE table_for_rename RENAME COLUMN IF EXISTS value100 to renamed_value100; DROP TABLE IF EXISTS table_for_rename; diff --git a/tests/queries/0_stateless/01213_alter_rename_nested.sql b/tests/queries/0_stateless/01213_alter_rename_nested.sql index 1b00cd19e21..cc607e0b4f3 100644 --- a/tests/queries/0_stateless/01213_alter_rename_nested.sql +++ b/tests/queries/0_stateless/01213_alter_rename_nested.sql @@ -25,10 +25,10 @@ SHOW CREATE TABLE table_for_rename_nested; SELECT key, n.renamed_x FROM table_for_rename_nested WHERE key = 7; SELECT key, n.renamed_y FROM table_for_rename_nested WHERE key = 7; -ALTER TABLE table_for_rename_nested RENAME COLUMN n.renamed_x TO not_nested_x; --{serverError 36} +ALTER TABLE table_for_rename_nested RENAME COLUMN n.renamed_x TO not_nested_x; --{serverError BAD_ARGUMENTS} -- Currently not implemented -ALTER TABLE table_for_rename_nested RENAME COLUMN n TO renamed_n; --{serverError 48} +ALTER TABLE table_for_rename_nested RENAME COLUMN n TO renamed_n; --{serverError NOT_IMPLEMENTED} ALTER TABLE table_for_rename_nested RENAME COLUMN value1 TO renamed_value1; diff --git a/tests/queries/0_stateless/01213_alter_rename_primary_key_zookeeper_long.sql b/tests/queries/0_stateless/01213_alter_rename_primary_key_zookeeper_long.sql index ecb6018a385..373e754668d 100644 --- a/tests/queries/0_stateless/01213_alter_rename_primary_key_zookeeper_long.sql +++ b/tests/queries/0_stateless/01213_alter_rename_primary_key_zookeeper_long.sql @@ -19,11 +19,11 @@ INSERT INTO table_for_rename_pk 
SELECT toDate('2019-10-01') + number % 3, number SELECT key1, value1 FROM table_for_rename_pk WHERE key1 = 1 AND key2 = 1 AND key3 = 1; -ALTER TABLE table_for_rename_pk RENAME COLUMN key1 TO renamed_key1; --{serverError 524} +ALTER TABLE table_for_rename_pk RENAME COLUMN key1 TO renamed_key1; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} -ALTER TABLE table_for_rename_pk RENAME COLUMN key3 TO renamed_key3; --{serverError 524} +ALTER TABLE table_for_rename_pk RENAME COLUMN key3 TO renamed_key3; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} -ALTER TABLE table_for_rename_pk RENAME COLUMN key2 TO renamed_key2; --{serverError 524} +ALTER TABLE table_for_rename_pk RENAME COLUMN key2 TO renamed_key2; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} DROP TABLE IF EXISTS table_for_rename_pk; @@ -46,10 +46,10 @@ PRIMARY KEY (key1, key2); INSERT INTO table_for_rename_with_primary_key SELECT toDate('2019-10-01') + number % 3, number, number, number, toString(number), toString(number) from numbers(9); -ALTER TABLE table_for_rename_with_primary_key RENAME COLUMN key1 TO renamed_key1; --{serverError 524} +ALTER TABLE table_for_rename_with_primary_key RENAME COLUMN key1 TO renamed_key1; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} -ALTER TABLE table_for_rename_with_primary_key RENAME COLUMN key2 TO renamed_key2; --{serverError 524} +ALTER TABLE table_for_rename_with_primary_key RENAME COLUMN key2 TO renamed_key2; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} -ALTER TABLE table_for_rename_with_primary_key RENAME COLUMN key3 TO renamed_key3; --{serverError 524} +ALTER TABLE table_for_rename_with_primary_key RENAME COLUMN key3 TO renamed_key3; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} DROP TABLE IF EXISTS table_for_rename_with_primary_key; diff --git a/tests/queries/0_stateless/01213_alter_table_rename_nested.sql b/tests/queries/0_stateless/01213_alter_table_rename_nested.sql index e08e3c0c3b1..5efc065819b 100644 --- a/tests/queries/0_stateless/01213_alter_table_rename_nested.sql +++ b/tests/queries/0_stateless/01213_alter_table_rename_nested.sql @@ -25,14 +25,14 @@ SHOW CREATE TABLE table_for_rename_nested; SELECT key, n.renamed_x FROM table_for_rename_nested WHERE key = 7; SELECT key, n.renamed_y FROM table_for_rename_nested WHERE key = 7; -ALTER TABLE table_for_rename_nested RENAME COLUMN n.renamed_x TO not_nested_x; --{serverError 36} +ALTER TABLE table_for_rename_nested RENAME COLUMN n.renamed_x TO not_nested_x; --{serverError BAD_ARGUMENTS} -ALTER TABLE table_for_rename_nested RENAME COLUMN n.renamed_x TO q.renamed_x; --{serverError 36} +ALTER TABLE table_for_rename_nested RENAME COLUMN n.renamed_x TO q.renamed_x; --{serverError BAD_ARGUMENTS} -ALTER TABLE table_for_rename_nested RENAME COLUMN value1 TO q.renamed_x; --{serverError 36} +ALTER TABLE table_for_rename_nested RENAME COLUMN value1 TO q.renamed_x; --{serverError BAD_ARGUMENTS} -- Currently not implemented -ALTER TABLE table_for_rename_nested RENAME COLUMN n TO renamed_n; --{serverError 48} +ALTER TABLE table_for_rename_nested RENAME COLUMN n TO renamed_n; --{serverError NOT_IMPLEMENTED} DROP TABLE IF EXISTS table_for_rename_nested; diff --git a/tests/queries/0_stateless/01213_point_in_Myanmar.sql b/tests/queries/0_stateless/01213_point_in_Myanmar.sql index c941ee05208..fe0a3c37eb5 100644 --- a/tests/queries/0_stateless/01213_point_in_Myanmar.sql +++ b/tests/queries/0_stateless/01213_point_in_Myanmar.sql @@ -1,2 +1,2 @@ SELECT pointInPolygon((97.66905, 16.5026053), [(97.66905, 16.5026053), (97.667878, 16.4979175), (97.661433, 16.4917645), 
(97.656745, 16.4859047), (97.656745, 16.4818029), (97.658796, 16.4785801), (97.665535, 16.4753572), (97.670808, 16.4730135), (97.676082, 16.4697907), (97.680477, 16.4677398), (97.68575, 16.4686189), (97.689559, 16.4727207), (97.69454, 16.4744788), (97.698055, 16.4747718), (97.702157, 16.4724279), (97.703036, 16.4683261), (97.703036, 16.4633453), (97.702451, 16.4594354), (97.699533, 16.4539205), (97.699106, 16.4521467), (97.699896, 16.4500714), (97.701852, 16.4474887), (97.701272, 16.4460233), (97.699896, 16.4439216), (97.699857, 16.4425297), (97.700705, 16.4417585), (97.699266, 16.4404319), (97.696817, 16.439585), (97.69468, 16.4391501), (97.690854, 16.439294), (97.686571, 16.4407665), (97.683728, 16.4428458), (97.680647, 16.444719), (97.678369, 16.445322), (97.675195, 16.4448526), (97.672627, 16.4435941), (97.670568, 16.4419727), (97.667276, 16.4410039), (97.666215, 16.439402), (97.66599, 16.43656), (97.664579, 16.435632), (97.66195, 16.4344612), (97.659174, 16.4324549), (97.658693, 16.4290256), (97.659289, 16.4246502), (97.660882, 16.422609), (97.663533, 16.4225057), (97.666402, 16.4210711), (97.67148, 16.4170395), (97.673433, 16.4146478), (97.674184, 16.4124121), (97.6742, 16.4085257), (97.674894, 16.4055148), (97.675906, 16.4019452), (97.675287, 16.3996593), (97.675062, 16.3963334), (97.675798, 16.3936434), (97.675676, 16.3909321), (97.67508, 16.386655), (97.679839, 16.386241), (97.689403, 16.3726191), (97.692011, 16.372909), (97.696359, 16.3679819), (97.699866, 16.360968), (97.697233, 16.3609438), (97.693077, 16.3596272), (97.686631, 16.3584552), (97.68165, 16.3558182), (97.674619, 16.3496653), (97.667588, 16.3482003), (97.664072, 16.3502511), (97.659384, 16.3540599), (97.652353, 16.3578686), (97.649716, 16.3625565), (97.650595, 16.3672443), (97.65206, 16.3701742), (97.65206, 16.3733971), (97.651181, 16.3760339), (97.646493, 16.3763268), (97.6462, 16.3801357), (97.646786, 16.3851165), (97.643563, 16.3883393), (97.638583, 16.3889252), (97.636239, 16.392148), (97.630379, 16.3933199), (97.629132, 16.3964903), (97.624347, 16.4056104), (97.615377, 16.4165245), (97.614779, 16.4229534), (97.611938, 16.4335685), (97.613882, 16.4410439), (97.619713, 16.4461272), (97.62375, 16.4542007), (97.62345, 16.4640683), (97.618965, 16.4793181), (97.617321, 16.4884382), (97.617747, 16.4985751), (97.623301, 16.5026416), (97.629303, 16.5016624), (97.63272, 16.4986048), (97.640862, 16.498226), (97.647134, 16.5006382), (97.650873, 16.5051263), (97.654987, 16.5089598), (97.65639, 16.5118583), (97.658166, 16.5160658), (97.660395, 16.5197566), (97.66612, 16.5140318), (97.668757, 16.507879), (97.66905, 16.5026053)]); -SELECT pointInPolygon((97.641933, 16.5076538), [(97.66905, 16.5026053), (97.667878, 16.4979175), (97.661433, 16.4917645), (97.656745, 16.4859047), (97.656745, 16.4818029), (97.658796, 16.4785801), (97.665535, 16.4753572), (97.670808, 16.4730135), (97.676082, 16.4697907), (97.680477, 16.4677398), (97.68575, 16.4686189), (97.689559, 16.4727207), (97.69454, 16.4744788), (97.698055, 16.4747718), (97.702157, 16.4724279), (97.703036, 16.4683261), (97.703036, 16.4633453), (97.702451, 16.4594354), (97.699533, 16.4539205), (97.699106, 16.4521467), (97.699896, 16.4500714), (97.701852, 16.4474887), (97.701272, 16.4460233), (97.699896, 16.4439216), (97.699857, 16.4425297), (97.700705, 16.4417585), (97.699266, 16.4404319), (97.696817, 16.439585), (97.69468, 16.4391501), (97.690854, 16.439294), (97.686571, 16.4407665), (97.683728, 16.4428458), (97.680647, 16.444719), (97.678369, 16.445322), (97.675195, 
16.4448526), (97.672627, 16.4435941), (97.670568, 16.4419727), (97.667276, 16.4410039), (97.666215, 16.439402), (97.66599, 16.43656), (97.664579, 16.435632), (97.66195, 16.4344612), (97.659174, 16.4324549), (97.658693, 16.4290256), (97.659289, 16.4246502), (97.660882, 16.422609), (97.663533, 16.4225057), (97.666402, 16.4210711), (97.67148, 16.4170395), (97.673433, 16.4146478), (97.674184, 16.4124121), (97.6742, 16.4085257), (97.674894, 16.4055148), (97.675906, 16.4019452), (97.675287, 16.3996593), (97.675062, 16.3963334), (97.675798, 16.3936434), (97.675676, 16.3909321), (97.67508, 16.386655), (97.679839, 16.386241), (97.689403, 16.3726191), (97.692011, 16.372909), (97.696359, 16.3679819), (97.699866, 16.360968), (97.697233, 16.3609438), (97.693077, 16.3596272), (97.686631, 16.3584552), (97.68165, 16.3558182), (97.674619, 16.3496653), (97.667588, 16.3482003), (97.664072, 16.3502511), (97.659384, 16.3540599), (97.652353, 16.3578686), (97.649716, 16.3625565), (97.650595, 16.3672443), (97.65206, 16.3701742), (97.65206, 16.3733971), (97.651181, 16.3760339), (97.646493, 16.3763268), (97.6462, 16.3801357), (97.646786, 16.3851165), (97.643563, 16.3883393), (97.638583, 16.3889252), (97.636239, 16.392148), (97.630379, 16.3933199), (97.629132, 16.3964903), (97.624347, 16.4056104), (97.615377, 16.4165245), (97.614779, 16.4229534), (97.611938, 16.4335685), (97.613882, 16.4410439), (97.619713, 16.4461272), (97.62375, 16.4542007), (97.62345, 16.4640683), (97.618965, 16.4793181), (97.617321, 16.4884382), (97.617747, 16.4985751), (97.623301, 16.5026416), (97.629303, 16.5016624), (97.63272, 16.4986048), (97.640862, 16.498226), (97.647134, 16.5006382), (97.650873, 16.5051263), (97.654987, 16.5089598), (97.65639, 16.5118583), (97.658166, 16.5160658), (97.660395, 16.5197566), (97.66612, 16.5140318), (97.668757, 16.507879), (97.66905, 16.5026053)], [(97.666491, 16.5599384), (97.665077, 16.5589283), (97.662417, 16.5607013), (97.659315, 16.5700096), (97.655104, 16.5821991), (97.654882, 16.5855235), (97.654593, 16.5931971), (97.659381, 16.5957754), (97.669927, 16.5995844), (97.683111, 16.6022215), (97.695123, 16.6028077), (97.704206, 16.5984131), (97.704499, 16.5825917), (97.70007, 16.5731793), (97.698976, 16.572997), (97.697211, 16.5717833), (97.692114, 16.5691237), (97.684358, 16.5691235), (97.675936, 16.567572), (97.66818, 16.5611446), (97.666491, 16.5599384)], [(97.653232, 16.574263), (97.652445, 16.5679244), (97.655949, 16.5683449), (97.659594, 16.5627383), (97.659734, 16.5585335), (97.662257, 16.5550293), (97.660855, 16.5512449), (97.658613, 16.5490023), (97.659173, 16.544517), (97.654407, 16.5408727), (97.641933, 16.5363874), (97.63086, 16.5303604), (97.628057, 16.5312014), (97.625954, 16.5415736), (97.63072, 16.5613367), (97.638569, 16.5820811), (97.645017, 16.5892294), (97.649743, 16.5887155), (97.653232, 16.574263)], [(97.625696, 16.5488739), (97.623579, 16.5396268), (97.620589, 16.5423678), (97.616353, 16.5530826), (97.611619, 16.5637974), (97.61112, 16.5725187), (97.613339, 16.5792777), (97.635042, 16.5874696), (97.64152, 16.5981844), (97.643015, 16.605909), (97.645756, 16.6066565), (97.650989, 16.6034172), (97.644012, 16.5984335), (97.64219, 16.5877556), (97.636038, 16.5804926), (97.63252, 16.570307), (97.628314, 16.5603089), (97.625696, 16.5488739)], [(97.607902, 16.3798949), (97.604911, 16.3719709), (97.602519, 16.3749612), (97.601323, 16.3955933), (97.604014, 16.406059), (97.604762, 16.4084511), (97.607896, 16.4081673), (97.609397, 16.397537), (97.609397, 16.3882674), (97.607902, 16.3798949)], 
[(97.64902, 16.5107163), (97.645437, 16.5073734), (97.641933, 16.5076538), (97.641933, 16.5108776), (97.645717, 16.5160636), (97.651112, 16.5211243), (97.655721, 16.5238328), (97.656392, 16.5184349), (97.654359, 16.515696), (97.64902, 16.5107163)]); -- { serverError 36 } +SELECT pointInPolygon((97.641933, 16.5076538), [(97.66905, 16.5026053), (97.667878, 16.4979175), (97.661433, 16.4917645), (97.656745, 16.4859047), (97.656745, 16.4818029), (97.658796, 16.4785801), (97.665535, 16.4753572), (97.670808, 16.4730135), (97.676082, 16.4697907), (97.680477, 16.4677398), (97.68575, 16.4686189), (97.689559, 16.4727207), (97.69454, 16.4744788), (97.698055, 16.4747718), (97.702157, 16.4724279), (97.703036, 16.4683261), (97.703036, 16.4633453), (97.702451, 16.4594354), (97.699533, 16.4539205), (97.699106, 16.4521467), (97.699896, 16.4500714), (97.701852, 16.4474887), (97.701272, 16.4460233), (97.699896, 16.4439216), (97.699857, 16.4425297), (97.700705, 16.4417585), (97.699266, 16.4404319), (97.696817, 16.439585), (97.69468, 16.4391501), (97.690854, 16.439294), (97.686571, 16.4407665), (97.683728, 16.4428458), (97.680647, 16.444719), (97.678369, 16.445322), (97.675195, 16.4448526), (97.672627, 16.4435941), (97.670568, 16.4419727), (97.667276, 16.4410039), (97.666215, 16.439402), (97.66599, 16.43656), (97.664579, 16.435632), (97.66195, 16.4344612), (97.659174, 16.4324549), (97.658693, 16.4290256), (97.659289, 16.4246502), (97.660882, 16.422609), (97.663533, 16.4225057), (97.666402, 16.4210711), (97.67148, 16.4170395), (97.673433, 16.4146478), (97.674184, 16.4124121), (97.6742, 16.4085257), (97.674894, 16.4055148), (97.675906, 16.4019452), (97.675287, 16.3996593), (97.675062, 16.3963334), (97.675798, 16.3936434), (97.675676, 16.3909321), (97.67508, 16.386655), (97.679839, 16.386241), (97.689403, 16.3726191), (97.692011, 16.372909), (97.696359, 16.3679819), (97.699866, 16.360968), (97.697233, 16.3609438), (97.693077, 16.3596272), (97.686631, 16.3584552), (97.68165, 16.3558182), (97.674619, 16.3496653), (97.667588, 16.3482003), (97.664072, 16.3502511), (97.659384, 16.3540599), (97.652353, 16.3578686), (97.649716, 16.3625565), (97.650595, 16.3672443), (97.65206, 16.3701742), (97.65206, 16.3733971), (97.651181, 16.3760339), (97.646493, 16.3763268), (97.6462, 16.3801357), (97.646786, 16.3851165), (97.643563, 16.3883393), (97.638583, 16.3889252), (97.636239, 16.392148), (97.630379, 16.3933199), (97.629132, 16.3964903), (97.624347, 16.4056104), (97.615377, 16.4165245), (97.614779, 16.4229534), (97.611938, 16.4335685), (97.613882, 16.4410439), (97.619713, 16.4461272), (97.62375, 16.4542007), (97.62345, 16.4640683), (97.618965, 16.4793181), (97.617321, 16.4884382), (97.617747, 16.4985751), (97.623301, 16.5026416), (97.629303, 16.5016624), (97.63272, 16.4986048), (97.640862, 16.498226), (97.647134, 16.5006382), (97.650873, 16.5051263), (97.654987, 16.5089598), (97.65639, 16.5118583), (97.658166, 16.5160658), (97.660395, 16.5197566), (97.66612, 16.5140318), (97.668757, 16.507879), (97.66905, 16.5026053)], [(97.666491, 16.5599384), (97.665077, 16.5589283), (97.662417, 16.5607013), (97.659315, 16.5700096), (97.655104, 16.5821991), (97.654882, 16.5855235), (97.654593, 16.5931971), (97.659381, 16.5957754), (97.669927, 16.5995844), (97.683111, 16.6022215), (97.695123, 16.6028077), (97.704206, 16.5984131), (97.704499, 16.5825917), (97.70007, 16.5731793), (97.698976, 16.572997), (97.697211, 16.5717833), (97.692114, 16.5691237), (97.684358, 16.5691235), (97.675936, 16.567572), (97.66818, 16.5611446), (97.666491, 
16.5599384)], [(97.653232, 16.574263), (97.652445, 16.5679244), (97.655949, 16.5683449), (97.659594, 16.5627383), (97.659734, 16.5585335), (97.662257, 16.5550293), (97.660855, 16.5512449), (97.658613, 16.5490023), (97.659173, 16.544517), (97.654407, 16.5408727), (97.641933, 16.5363874), (97.63086, 16.5303604), (97.628057, 16.5312014), (97.625954, 16.5415736), (97.63072, 16.5613367), (97.638569, 16.5820811), (97.645017, 16.5892294), (97.649743, 16.5887155), (97.653232, 16.574263)], [(97.625696, 16.5488739), (97.623579, 16.5396268), (97.620589, 16.5423678), (97.616353, 16.5530826), (97.611619, 16.5637974), (97.61112, 16.5725187), (97.613339, 16.5792777), (97.635042, 16.5874696), (97.64152, 16.5981844), (97.643015, 16.605909), (97.645756, 16.6066565), (97.650989, 16.6034172), (97.644012, 16.5984335), (97.64219, 16.5877556), (97.636038, 16.5804926), (97.63252, 16.570307), (97.628314, 16.5603089), (97.625696, 16.5488739)], [(97.607902, 16.3798949), (97.604911, 16.3719709), (97.602519, 16.3749612), (97.601323, 16.3955933), (97.604014, 16.406059), (97.604762, 16.4084511), (97.607896, 16.4081673), (97.609397, 16.397537), (97.609397, 16.3882674), (97.607902, 16.3798949)], [(97.64902, 16.5107163), (97.645437, 16.5073734), (97.641933, 16.5076538), (97.641933, 16.5108776), (97.645717, 16.5160636), (97.651112, 16.5211243), (97.655721, 16.5238328), (97.656392, 16.5184349), (97.654359, 16.515696), (97.64902, 16.5107163)]); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01225_drop_dictionary_as_table.sql b/tests/queries/0_stateless/01225_drop_dictionary_as_table.sql index be2f7b2a9bf..3e497f9e3a4 100644 --- a/tests/queries/0_stateless/01225_drop_dictionary_as_table.sql +++ b/tests/queries/0_stateless/01225_drop_dictionary_as_table.sql @@ -16,7 +16,7 @@ LAYOUT(FLAT()); SYSTEM RELOAD DICTIONARY dict_db_01225.dict; -DROP TABLE dict_db_01225.dict; -- { serverError 520 } +DROP TABLE dict_db_01225.dict; -- { serverError CANNOT_DETACH_DICTIONARY_AS_TABLE } DROP DICTIONARY dict_db_01225.dict; DROP DATABASE dict_db_01225; diff --git a/tests/queries/0_stateless/01225_show_create_table_from_dictionary.sql b/tests/queries/0_stateless/01225_show_create_table_from_dictionary.sql index 28a5a0d9d55..27159528e2f 100644 --- a/tests/queries/0_stateless/01225_show_create_table_from_dictionary.sql +++ b/tests/queries/0_stateless/01225_show_create_table_from_dictionary.sql @@ -21,7 +21,7 @@ LIFETIME(MIN 0 MAX 0) LAYOUT(FLAT()); SHOW CREATE TABLE dict_db_01225_dictionary.`dict_db_01225.dict` FORMAT TSVRaw; -SHOW CREATE TABLE dict_db_01225_dictionary.`dict_db_01225.no_such_dict`; -- { serverError 487 } +SHOW CREATE TABLE dict_db_01225_dictionary.`dict_db_01225.no_such_dict`; -- { serverError CANNOT_GET_CREATE_DICTIONARY_QUERY } DROP DATABASE dict_db_01225; DROP DATABASE dict_db_01225_dictionary; diff --git a/tests/queries/0_stateless/01231_log_queries_min_type.sql b/tests/queries/0_stateless/01231_log_queries_min_type.sql index 0ed5e3e605c..8d1415e063c 100644 --- a/tests/queries/0_stateless/01231_log_queries_min_type.sql +++ b/tests/queries/0_stateless/01231_log_queries_min_type.sql @@ -15,7 +15,7 @@ select count() from system.query_log where current_database = currentDatabase() set max_rows_to_read='100K'; set log_queries_min_type='EXCEPTION_WHILE_PROCESSING'; -select '01231_log_queries_min_type/EXCEPTION_WHILE_PROCESSING', max(number) from system.numbers limit 1e6; -- { serverError 158 } +select '01231_log_queries_min_type/EXCEPTION_WHILE_PROCESSING', max(number) from system.numbers limit 1e6; -- { 
serverError TOO_MANY_ROWS } set max_rows_to_read=0; system flush logs; select count() from system.query_log where current_database = currentDatabase() @@ -23,7 +23,7 @@ select count() from system.query_log where current_database = currentDatabase() and event_date >= yesterday() and type = 'ExceptionWhileProcessing'; set max_rows_to_read='100K'; -select '01231_log_queries_min_type w/ Settings/EXCEPTION_WHILE_PROCESSING', max(number) from system.numbers limit 1e6; -- { serverError 158 } +select '01231_log_queries_min_type w/ Settings/EXCEPTION_WHILE_PROCESSING', max(number) from system.numbers limit 1e6; -- { serverError TOO_MANY_ROWS } system flush logs; set max_rows_to_read=0; select count() from system.query_log where diff --git a/tests/queries/0_stateless/01246_extractAllGroupsHorizontal.sql b/tests/queries/0_stateless/01246_extractAllGroupsHorizontal.sql index d28402056d3..baa39ca302f 100644 --- a/tests/queries/0_stateless/01246_extractAllGroupsHorizontal.sql +++ b/tests/queries/0_stateless/01246_extractAllGroupsHorizontal.sql @@ -1,13 +1,13 @@ -- error cases -SELECT extractAllGroupsHorizontal(); --{serverError 42} not enough arguments -SELECT extractAllGroupsHorizontal('hello'); --{serverError 42} not enough arguments -SELECT extractAllGroupsHorizontal('hello', 123); --{serverError 43} invalid argument type -SELECT extractAllGroupsHorizontal(123, 'world'); --{serverError 43} invalid argument type -SELECT extractAllGroupsHorizontal('hello world', '((('); --{serverError 427} invalid re -SELECT extractAllGroupsHorizontal('hello world', materialize('\\w+')); --{serverError 44} non-cons needle -SELECT extractAllGroupsHorizontal('hello world', '\\w+'); -- { serverError 36 } 0 groups -SELECT extractAllGroupsHorizontal('hello world', '(\\w+)') SETTINGS regexp_max_matches_per_row = 0; -- { serverError 128 } to many groups matched per row -SELECT extractAllGroupsHorizontal('hello world', '(\\w+)') SETTINGS regexp_max_matches_per_row = 1; -- { serverError 128 } to many groups matched per row +SELECT extractAllGroupsHorizontal(); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT extractAllGroupsHorizontal('hello'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT extractAllGroupsHorizontal('hello', 123); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} invalid argument type +SELECT extractAllGroupsHorizontal(123, 'world'); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} invalid argument type +SELECT extractAllGroupsHorizontal('hello world', '((('); --{serverError CANNOT_COMPILE_REGEXP} invalid re +SELECT extractAllGroupsHorizontal('hello world', materialize('\\w+')); --{serverError ILLEGAL_COLUMN} non-const needle +SELECT extractAllGroupsHorizontal('hello world', '\\w+'); -- { serverError BAD_ARGUMENTS } 0 groups +SELECT extractAllGroupsHorizontal('hello world', '(\\w+)') SETTINGS regexp_max_matches_per_row = 0; -- { serverError TOO_LARGE_ARRAY_SIZE } too many groups matched per row +SELECT extractAllGroupsHorizontal('hello world', '(\\w+)') SETTINGS regexp_max_matches_per_row = 1; -- { serverError TOO_LARGE_ARRAY_SIZE } too many groups matched per row SELECT extractAllGroupsHorizontal('hello world', '(\\w+)') SETTINGS regexp_max_matches_per_row = 1000000 FORMAT Null; -- users now can set limit bigger than previous 1000 matches per row diff --git a/tests/queries/0_stateless/01246_extractAllGroupsVertical.sql b/tests/queries/0_stateless/01246_extractAllGroupsVertical.sql index 65ddbfe411b..7499802755d 100644 ---
a/tests/queries/0_stateless/01246_extractAllGroupsVertical.sql +++ b/tests/queries/0_stateless/01246_extractAllGroupsVertical.sql @@ -1,11 +1,11 @@ -- error cases -SELECT extractAllGroupsVertical(); --{serverError 42} not enough arguments -SELECT extractAllGroupsVertical('hello'); --{serverError 42} not enough arguments -SELECT extractAllGroupsVertical('hello', 123); --{serverError 43} invalid argument type -SELECT extractAllGroupsVertical(123, 'world'); --{serverError 43} invalid argument type -SELECT extractAllGroupsVertical('hello world', '((('); --{serverError 427} invalid re -SELECT extractAllGroupsVertical('hello world', materialize('\\w+')); --{serverError 44} non-const needle -SELECT extractAllGroupsVertical('hello world', '\\w+'); -- { serverError 36 } 0 groups +SELECT extractAllGroupsVertical(); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT extractAllGroupsVertical('hello'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT extractAllGroupsVertical('hello', 123); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} invalid argument type +SELECT extractAllGroupsVertical(123, 'world'); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} invalid argument type +SELECT extractAllGroupsVertical('hello world', '((('); --{serverError CANNOT_COMPILE_REGEXP} invalid re +SELECT extractAllGroupsVertical('hello world', materialize('\\w+')); --{serverError ILLEGAL_COLUMN} non-const needle +SELECT extractAllGroupsVertical('hello world', '\\w+'); -- { serverError BAD_ARGUMENTS } 0 groups SELECT '1 group, multiple matches, String and FixedString'; SELECT extractAllGroupsVertical('hello world', '(\\w+)'); diff --git a/tests/queries/0_stateless/01246_least_greatest_generic.sql b/tests/queries/0_stateless/01246_least_greatest_generic.sql index 58a9f8df9b8..2744531eac5 100644 --- a/tests/queries/0_stateless/01246_least_greatest_generic.sql +++ b/tests/queries/0_stateless/01246_least_greatest_generic.sql @@ -17,7 +17,7 @@ SELECT least(toNullable(123), 456); SELECT LEAST(-1, 18446744073709551615) x, toTypeName(x); -- This can be improved -SELECT LEAST(-1., 18446744073709551615); -- { serverError 43 } +SELECT LEAST(-1., 18446744073709551615); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT LEAST(-1., 18446744073709551615.); SELECT greatest(-1, 1, 4294967295); @@ -33,4 +33,4 @@ SELECT greatest([], [NULL]); SELECT LEAST([NULL], [0]); SELECT GREATEST([NULL], [0]); -SELECT Greatest(); -- { serverError 42 } +SELECT Greatest(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } diff --git a/tests/queries/0_stateless/01247_some_msan_crashs_from_22517.sql b/tests/queries/0_stateless/01247_some_msan_crashs_from_22517.sql index 8bcbbde63d6..973ec67ba98 100644 --- a/tests/queries/0_stateless/01247_some_msan_crashs_from_22517.sql +++ b/tests/queries/0_stateless/01247_some_msan_crashs_from_22517.sql @@ -1,3 +1,3 @@ SELECT a FROM (SELECT ignore((SELECT 1)) AS a, a AS b); -SELECT x FROM (SELECT dummy AS x, plus(ignore(ignore(ignore(ignore('-922337203.6854775808', ignore(NULL)), ArrLen = 256, ignore(100, Arr.C3, ignore(NULL), (SELECT 10.000100135803223, count(*) FROM system.time_zones) > NULL)))), dummy, 65535) AS dummy ORDER BY ignore(-2) ASC, identity(x) DESC NULLS FIRST) FORMAT Null; -- { serverError 47 } +SELECT x FROM (SELECT dummy AS x, plus(ignore(ignore(ignore(ignore('-922337203.6854775808', ignore(NULL)), ArrLen = 256, ignore(100, Arr.C3, ignore(NULL), (SELECT 10.000100135803223, count(*) FROM system.time_zones) > NULL)))), dummy, 65535) AS dummy ORDER BY ignore(-2) 
ASC, identity(x) DESC NULLS FIRST) FORMAT Null; -- { serverError UNKNOWN_IDENTIFIER } diff --git a/tests/queries/0_stateless/01249_bad_arguments_for_bloom_filter.sql b/tests/queries/0_stateless/01249_bad_arguments_for_bloom_filter.sql index 0c9cfafa496..afb387d6701 100644 --- a/tests/queries/0_stateless/01249_bad_arguments_for_bloom_filter.sql +++ b/tests/queries/0_stateless/01249_bad_arguments_for_bloom_filter.sql @@ -8,9 +8,9 @@ set allow_deprecated_database_ordinary=1; CREATE DATABASE test_01249 ENGINE=Ordinary; -- Full ATTACH requires UUID with Atomic USE test_01249; -CREATE TABLE bloom_filter_idx_good(`u64` UInt64, `i32` Int32, `f64` Float64, `d` Decimal(10, 2), `s` String, `e` Enum8('a' = 1, 'b' = 2, 'c' = 3), `dt` Date, INDEX bloom_filter_a i32 TYPE bloom_filter(0, 1) GRANULARITY 1) ENGINE = MergeTree() ORDER BY u64 SETTINGS index_granularity = 8192; -- { serverError 42 } -CREATE TABLE bloom_filter_idx_good(`u64` UInt64, `i32` Int32, `f64` Float64, `d` Decimal(10, 2), `s` String, `e` Enum8('a' = 1, 'b' = 2, 'c' = 3), `dt` Date, INDEX bloom_filter_a i32 TYPE bloom_filter(-0.1) GRANULARITY 1) ENGINE = MergeTree() ORDER BY u64 SETTINGS index_granularity = 8192; -- { serverError 36 } -CREATE TABLE bloom_filter_idx_good(`u64` UInt64, `i32` Int32, `f64` Float64, `d` Decimal(10, 2), `s` String, `e` Enum8('a' = 1, 'b' = 2, 'c' = 3), `dt` Date, INDEX bloom_filter_a i32 TYPE bloom_filter(1.01) GRANULARITY 1) ENGINE = MergeTree() ORDER BY u64 SETTINGS index_granularity = 8192; -- { serverError 36 } +CREATE TABLE bloom_filter_idx_good(`u64` UInt64, `i32` Int32, `f64` Float64, `d` Decimal(10, 2), `s` String, `e` Enum8('a' = 1, 'b' = 2, 'c' = 3), `dt` Date, INDEX bloom_filter_a i32 TYPE bloom_filter(0, 1) GRANULARITY 1) ENGINE = MergeTree() ORDER BY u64 SETTINGS index_granularity = 8192; -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +CREATE TABLE bloom_filter_idx_good(`u64` UInt64, `i32` Int32, `f64` Float64, `d` Decimal(10, 2), `s` String, `e` Enum8('a' = 1, 'b' = 2, 'c' = 3), `dt` Date, INDEX bloom_filter_a i32 TYPE bloom_filter(-0.1) GRANULARITY 1) ENGINE = MergeTree() ORDER BY u64 SETTINGS index_granularity = 8192; -- { serverError BAD_ARGUMENTS } +CREATE TABLE bloom_filter_idx_good(`u64` UInt64, `i32` Int32, `f64` Float64, `d` Decimal(10, 2), `s` String, `e` Enum8('a' = 1, 'b' = 2, 'c' = 3), `dt` Date, INDEX bloom_filter_a i32 TYPE bloom_filter(1.01) GRANULARITY 1) ENGINE = MergeTree() ORDER BY u64 SETTINGS index_granularity = 8192; -- { serverError BAD_ARGUMENTS } DROP TABLE IF EXISTS bloom_filter_idx_good; ATTACH TABLE bloom_filter_idx_good(`u64` UInt64, `i32` Int32, `f64` Float64, `d` Decimal(10, 2), `s` String, `e` Enum8('a' = 1, 'b' = 2, 'c' = 3), `dt` Date, INDEX bloom_filter_a i32 TYPE bloom_filter(0., 1.) 
GRANULARITY 1) ENGINE = MergeTree() ORDER BY u64 SETTINGS index_granularity = 8192; diff --git a/tests/queries/0_stateless/01256_misspell_layout_name_podshumok.sql b/tests/queries/0_stateless/01256_misspell_layout_name_podshumok.sql index a41402a12e4..28945e3b1ba 100644 --- a/tests/queries/0_stateless/01256_misspell_layout_name_podshumok.sql +++ b/tests/queries/0_stateless/01256_misspell_layout_name_podshumok.sql @@ -6,4 +6,4 @@ CREATE DICTIONARY testip PRIMARY KEY network SOURCE(FILE(PATH '/tmp/test.csv' FORMAT CSVWithNames)) LIFETIME(MIN 0 MAX 300) -LAYOUT(IPTRIE()); -- { serverError 137 } +LAYOUT(IPTRIE()); -- { serverError UNKNOWN_ELEMENT_IN_CONFIG } diff --git a/tests/queries/0_stateless/01256_negative_generate_random.sql b/tests/queries/0_stateless/01256_negative_generate_random.sql index 7e05a394b8d..cbfae490af2 100644 --- a/tests/queries/0_stateless/01256_negative_generate_random.sql +++ b/tests/queries/0_stateless/01256_negative_generate_random.sql @@ -1,4 +1,4 @@ -SELECT * FROM generateRandom('i8', 1, 10, 10); -- { serverError 62 } -SELECT * FROM generateRandom; -- { serverError 60 } -SELECT * FROM generateRandom('i8 UInt8', 1, 10, 10, 10, 10); -- { serverError 42 } -SELECT * FROM generateRandom('', 1, 10, 10); -- { serverError 62 } +SELECT * FROM generateRandom('i8', 1, 10, 10); -- { serverError SYNTAX_ERROR } +SELECT * FROM generateRandom; -- { serverError UNKNOWN_TABLE } +SELECT * FROM generateRandom('i8 UInt8', 1, 10, 10, 10, 10); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT * FROM generateRandom('', 1, 10, 10); -- { serverError SYNTAX_ERROR } diff --git a/tests/queries/0_stateless/01257_dictionary_mismatch_types.sql b/tests/queries/0_stateless/01257_dictionary_mismatch_types.sql index dfdfdf46db2..a4bb7bf2525 100644 --- a/tests/queries/0_stateless/01257_dictionary_mismatch_types.sql +++ b/tests/queries/0_stateless/01257_dictionary_mismatch_types.sql @@ -66,7 +66,7 @@ SELECT dictGet('test_dict_db.table1_dict', 'col8', (col1, col2, col3, col4, col5)), dictGet('test_dict_db.table1_dict', 'col9', (col1, col2, col3, col4, col5)) FROM test_dict_db.table1 -WHERE dictHas('test_dict_db.table1_dict', (col1, col2, col3, col4, col5)); -- { serverError 349 } +WHERE dictHas('test_dict_db.table1_dict', (col1, col2, col3, col4, col5)); -- { serverError CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN } DROP TABLE test_dict_db.table1; CREATE TABLE test_dict_db.table1 diff --git a/tests/queries/0_stateless/01258_wrong_cast_filimonov.sql b/tests/queries/0_stateless/01258_wrong_cast_filimonov.sql index 5bcc159b384..4817a12cd0e 100644 --- a/tests/queries/0_stateless/01258_wrong_cast_filimonov.sql +++ b/tests/queries/0_stateless/01258_wrong_cast_filimonov.sql @@ -1 +1 @@ -create table x( id UInt64, t AggregateFunction(argMax, Enum8('' = -1, 'Male' = 1, 'Female' = 2), UInt64) DEFAULT arrayReduce('argMaxState', ['cast(-1, \'Enum8(\'\' = -1, \'Male\' = 1, \'Female\' = 2)'], [toUInt64(0)]) ) Engine=MergeTree ORDER BY id; -- { serverError 70 } +create table x( id UInt64, t AggregateFunction(argMax, Enum8('' = -1, 'Male' = 1, 'Female' = 2), UInt64) DEFAULT arrayReduce('argMaxState', ['cast(-1, \'Enum8(\'\' = -1, \'Male\' = 1, \'Female\' = 2)'], [toUInt64(0)]) ) Engine=MergeTree ORDER BY id; -- { serverError CANNOT_CONVERT_TYPE } diff --git a/tests/queries/0_stateless/01259_datetime64_ubsan.sql b/tests/queries/0_stateless/01259_datetime64_ubsan.sql index 4bc7a71dac3..be8e5dd7214 100644 --- a/tests/queries/0_stateless/01259_datetime64_ubsan.sql +++ 
b/tests/queries/0_stateless/01259_datetime64_ubsan.sql @@ -1,2 +1,2 @@ -select now64(10); -- { serverError 69 } +select now64(10); -- { serverError ARGUMENT_OUT_OF_BOUND } select length(toString(now64(9))); diff --git a/tests/queries/0_stateless/01259_dictionary_custom_settings_ddl.sql b/tests/queries/0_stateless/01259_dictionary_custom_settings_ddl.sql index 224aac43a1f..432256d33c2 100644 --- a/tests/queries/0_stateless/01259_dictionary_custom_settings_ddl.sql +++ b/tests/queries/0_stateless/01259_dictionary_custom_settings_ddl.sql @@ -36,7 +36,7 @@ LAYOUT(FLAT()) SETTINGS(max_result_bytes=1); SELECT 'INITIALIZING DICTIONARY'; -SELECT dictGetUInt64('ordinary_db.dict1', 'second_column', toUInt64(100500)); -- { serverError 396 } +SELECT dictGetUInt64('ordinary_db.dict1', 'second_column', toUInt64(100500)); -- { serverError TOO_MANY_ROWS_OR_BYTES } SELECT 'END'; diff --git a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql index 0eeb97e2b2d..343de3d0a12 100644 --- a/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql +++ b/tests/queries/0_stateless/01263_type_conversion_nvartolomei.sql @@ -43,7 +43,7 @@ SELECT * FROM d; SELECT '---'; INSERT INTO m VALUES ('b'); -SELECT toString(v) FROM (SELECT v FROM d ORDER BY v) FORMAT Null; -- { serverError 36 } +SELECT toString(v) FROM (SELECT v FROM d ORDER BY v) FORMAT Null; -- { serverError BAD_ARGUMENTS } DROP TABLE m; diff --git a/tests/queries/0_stateless/01268_DateTime64_in_WHERE.sql b/tests/queries/0_stateless/01268_DateTime64_in_WHERE.sql index 3e859717873..113d4226ca8 100644 --- a/tests/queries/0_stateless/01268_DateTime64_in_WHERE.sql +++ b/tests/queries/0_stateless/01268_DateTime64_in_WHERE.sql @@ -1,11 +1,11 @@ -- Error cases: -- non-const string column -WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT DT64 = materialize(S); -- {serverError 43} -WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT materialize(S) = toDateTime64(S, 3); -- {serverError 43} -WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT * WHERE DT64 = materialize(S); -- {serverError 43} -WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT * WHERE materialize(S) = DT64; -- {serverError 43} +WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT DT64 = materialize(S); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT materialize(S) = toDateTime64(S, 3); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT * WHERE DT64 = materialize(S); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +WITH '2020-02-05 14:34:12.333' as S, toDateTime64(S, 3) as DT64 SELECT * WHERE materialize(S) = DT64; -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} -SELECT * WHERE toDateTime64(123.345, 3) == 'ABCD'; -- {serverError 41} -- invalid DateTime64 string +SELECT * WHERE toDateTime64(123.345, 3) == 'ABCD'; -- {serverError CANNOT_PARSE_DATETIME} -- invalid DateTime64 string SELECT * WHERE toDateTime64(123.345, 3) == '2020-02-05 14:34:12.33333333333333333333333333333333333333333333333333333333'; SELECT 'in SELECT'; diff --git a/tests/queries/0_stateless/01268_dictionary_direct_layout.sql b/tests/queries/0_stateless/01268_dictionary_direct_layout.sql index 45b5c580561..66313528a6f 100644 --- a/tests/queries/0_stateless/01268_dictionary_direct_layout.sql +++ 
b/tests/queries/0_stateless/01268_dictionary_direct_layout.sql @@ -116,7 +116,7 @@ SELECT dictGetStringOrDefault('db_01268.dict2', 'region_name', toUInt64(8), 'NON SELECT dictGetStringOrDefault('db_01268.dict2', 'region_name', toUInt64(9), 'NONE'); SELECT dictGetStringOrDefault('db_01268.dict2', 'region_name', toUInt64(10), 'NONE'); -SELECT dictGetUInt64('db_01268.dict1', 'second_column', toUInt64(100500)); -- { serverError 396 } +SELECT dictGetUInt64('db_01268.dict1', 'second_column', toUInt64(100500)); -- { serverError TOO_MANY_ROWS_OR_BYTES } SELECT 'END'; diff --git a/tests/queries/0_stateless/01269_create_with_null.sql b/tests/queries/0_stateless/01269_create_with_null.sql index ac57f613dfd..30b7fc224f8 100644 --- a/tests/queries/0_stateless/01269_create_with_null.sql +++ b/tests/queries/0_stateless/01269_create_with_null.sql @@ -24,14 +24,14 @@ CREATE TABLE data_null_error ( a Nullable(INT) NULL, b INT NOT NULL, c Nullable(INT) -) engine=Memory(); --{serverError 377} +) engine=Memory(); --{serverError ILLEGAL_SYNTAX_FOR_DATA_TYPE} CREATE TABLE data_null_error ( a INT NULL, b Nullable(INT) NOT NULL, c Nullable(INT) -) engine=Memory(); --{serverError 377} +) engine=Memory(); --{serverError ILLEGAL_SYNTAX_FOR_DATA_TYPE} SET data_type_default_nullable='true'; @@ -53,7 +53,7 @@ DETACH TABLE set_null; ATTACH TABLE set_null; SHOW CREATE TABLE set_null; -CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8)) ENGINE=Memory; -- { serverError 43 } +CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8)) ENGINE=Memory; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } CREATE TABLE cannot_be_nullable (n Int8, a Array(UInt8) NOT NULL) ENGINE=Memory; SHOW CREATE TABLE cannot_be_nullable; DETACH TABLE cannot_be_nullable; diff --git a/tests/queries/0_stateless/01269_toStartOfSecond.sql b/tests/queries/0_stateless/01269_toStartOfSecond.sql index 641da4a15a9..6ebfde0aab9 100644 --- a/tests/queries/0_stateless/01269_toStartOfSecond.sql +++ b/tests/queries/0_stateless/01269_toStartOfSecond.sql @@ -1,8 +1,8 @@ -- Error cases -SELECT toStartOfSecond('123'); -- {serverError 43} -SELECT toStartOfSecond(now()); -- {serverError 43} -SELECT toStartOfSecond(); -- {serverError 42} -SELECT toStartOfSecond(now64(), 123); -- {serverError 43} +SELECT toStartOfSecond('123'); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT toStartOfSecond(now()); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT toStartOfSecond(); -- {serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} +SELECT toStartOfSecond(now64(), 123); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} WITH toDateTime64('2019-09-16 19:20:11', 3, 'Asia/Istanbul') AS dt64 SELECT toStartOfSecond(dt64, 'UTC') AS res, toTypeName(res); WITH toDateTime64('2019-09-16 19:20:11', 0, 'UTC') AS dt64 SELECT toStartOfSecond(dt64) AS res, toTypeName(res); diff --git a/tests/queries/0_stateless/01273_extractGroups.sql b/tests/queries/0_stateless/01273_extractGroups.sql index 9dfca7e0adf..f060b1d42de 100644 --- a/tests/queries/0_stateless/01273_extractGroups.sql +++ b/tests/queries/0_stateless/01273_extractGroups.sql @@ -1,13 +1,13 @@ -- error cases -SELECT extractGroups(); --{serverError 42} not enough arguments -SELECT extractGroups('hello'); --{serverError 42} not enough arguments -SELECT extractGroups('hello', 123); --{serverError 43} invalid argument type -SELECT extractGroups(123, 'world'); --{serverError 43} invalid argument type -SELECT extractGroups('hello world', '((('); --{serverError 427} invalid re -SELECT extractGroups('hello world', materialize('\\w+')); --{serverError 44} 
non-const needle +SELECT extractGroups(); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT extractGroups('hello'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT extractGroups('hello', 123); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} invalid argument type +SELECT extractGroups(123, 'world'); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} invalid argument type +SELECT extractGroups('hello world', '((('); --{serverError CANNOT_COMPILE_REGEXP} invalid re +SELECT extractGroups('hello world', materialize('\\w+')); --{serverError ILLEGAL_COLUMN} non-const needle SELECT '0 groups, zero matches'; -SELECT extractGroups('hello world', '\\w+'); -- { serverError 36 } +SELECT extractGroups('hello world', '\\w+'); -- { serverError BAD_ARGUMENTS } SELECT '1 group, multiple matches, String and FixedString'; SELECT extractGroups('hello world', '(\\w+) (\\w+)'); diff --git a/tests/queries/0_stateless/01273_h3EdgeAngle_range_check.sql b/tests/queries/0_stateless/01273_h3EdgeAngle_range_check.sql index f17ffa9b040..2c5e27f6ceb 100644 --- a/tests/queries/0_stateless/01273_h3EdgeAngle_range_check.sql +++ b/tests/queries/0_stateless/01273_h3EdgeAngle_range_check.sql @@ -1,3 +1,3 @@ -- Tags: no-fasttest -SELECT h3EdgeAngle(100); -- { serverError 69 } +SELECT h3EdgeAngle(100); -- { serverError ARGUMENT_OUT_OF_BOUND } diff --git a/tests/queries/0_stateless/01275_extract_groups_check.sql b/tests/queries/0_stateless/01275_extract_groups_check.sql index f8bc5943a78..b1fe1a136d9 100644 --- a/tests/queries/0_stateless/01275_extract_groups_check.sql +++ b/tests/queries/0_stateless/01275_extract_groups_check.sql @@ -1,14 +1,14 @@ -SELECT extractGroups('hello', ''); -- { serverError 36 } -SELECT extractAllGroups('hello', ''); -- { serverError 36 } +SELECT extractGroups('hello', ''); -- { serverError BAD_ARGUMENTS } +SELECT extractAllGroups('hello', ''); -- { serverError BAD_ARGUMENTS } -SELECT extractGroups('hello', ' '); -- { serverError 36 } -SELECT extractAllGroups('hello', ' '); -- { serverError 36 } +SELECT extractGroups('hello', ' '); -- { serverError BAD_ARGUMENTS } +SELECT extractAllGroups('hello', ' '); -- { serverError BAD_ARGUMENTS } -SELECT extractGroups('hello', '\0'); -- { serverError 36 } -SELECT extractAllGroups('hello', '\0'); -- { serverError 36 } +SELECT extractGroups('hello', '\0'); -- { serverError BAD_ARGUMENTS } +SELECT extractAllGroups('hello', '\0'); -- { serverError BAD_ARGUMENTS } -SELECT extractGroups('hello', 'world'); -- { serverError 36 } -SELECT extractAllGroups('hello', 'world'); -- { serverError 36 } +SELECT extractGroups('hello', 'world'); -- { serverError BAD_ARGUMENTS } +SELECT extractAllGroups('hello', 'world'); -- { serverError BAD_ARGUMENTS } -SELECT extractGroups('hello', 'hello|world'); -- { serverError 36 } -SELECT extractAllGroups('hello', 'hello|world'); -- { serverError 36 } +SELECT extractGroups('hello', 'hello|world'); -- { serverError BAD_ARGUMENTS } +SELECT extractAllGroups('hello', 'hello|world'); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01277_alter_rename_column_constraint.sql b/tests/queries/0_stateless/01277_alter_rename_column_constraint.sql index b9d5030239d..76c1a3589c4 100644 --- a/tests/queries/0_stateless/01277_alter_rename_column_constraint.sql +++ b/tests/queries/0_stateless/01277_alter_rename_column_constraint.sql @@ -15,7 +15,7 @@ PARTITION BY date ORDER BY key; INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), 
toString(number + 2) from numbers(9); -INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(9); --{serverError 469} +INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(9); --{serverError VIOLATED_CONSTRAINT} SELECT * FROM table_for_rename ORDER BY key; @@ -26,7 +26,7 @@ SELECT * FROM table_for_rename ORDER BY key; SELECT '-- insert after rename --'; INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number + 2) from numbers(10, 10); -INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(10, 10); --{serverError 469} +INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(10, 10); --{serverError VIOLATED_CONSTRAINT} SELECT * FROM table_for_rename ORDER BY key; SELECT '-- rename columns back --'; @@ -37,7 +37,7 @@ SELECT * FROM table_for_rename ORDER BY key; SELECT '-- insert after rename column --'; INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number + 2) from numbers(20,10); -INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number), toString(number + 2) from numbers(20, 10); --{serverError 469} +INSERT INTO table_for_rename SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number), toString(number + 2) from numbers(20, 10); --{serverError VIOLATED_CONSTRAINT} SELECT * FROM table_for_rename ORDER BY key; DROP TABLE IF EXISTS table_for_rename; diff --git a/tests/queries/0_stateless/01277_alter_rename_column_constraint_zookeeper_long.sql b/tests/queries/0_stateless/01277_alter_rename_column_constraint_zookeeper_long.sql index 8d8f590540a..ce8d87f9a80 100644 --- a/tests/queries/0_stateless/01277_alter_rename_column_constraint_zookeeper_long.sql +++ b/tests/queries/0_stateless/01277_alter_rename_column_constraint_zookeeper_long.sql @@ -17,7 +17,7 @@ PARTITION BY date ORDER BY key; INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number + 2) from numbers(9); -INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(9); ; --{serverError 469} +INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(9); ; --{serverError VIOLATED_CONSTRAINT} SELECT * FROM table_for_rename1 ORDER BY key; @@ -28,7 +28,7 @@ SELECT * FROM table_for_rename1 ORDER BY key; SELECT '-- insert after rename --'; INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number + 2) from numbers(10, 10); -INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(10, 10); ; --{serverError 469} +INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number) from numbers(10, 10); ; --{serverError VIOLATED_CONSTRAINT} SELECT 
* FROM table_for_rename1 ORDER BY key; SELECT '-- rename columns back --'; @@ -39,7 +39,7 @@ SELECT * FROM table_for_rename1 ORDER BY key; SELECT '-- insert after rename column --'; INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number + 1), toString(number + 2) from numbers(20,10); -INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number), toString(number + 2) from numbers(20, 10); ; --{serverError 469} +INSERT INTO table_for_rename1 SELECT toDate('2019-10-01') + number % 3, number, toString(number), toString(number), toString(number + 2) from numbers(20, 10); ; --{serverError VIOLATED_CONSTRAINT} SELECT * FROM table_for_rename1 ORDER BY key; DROP TABLE IF EXISTS table_for_rename1; diff --git a/tests/queries/0_stateless/01277_convert_field_to_type_logical_error.sql b/tests/queries/0_stateless/01277_convert_field_to_type_logical_error.sql index 4712c124237..f4443135b53 100644 --- a/tests/queries/0_stateless/01277_convert_field_to_type_logical_error.sql +++ b/tests/queries/0_stateless/01277_convert_field_to_type_logical_error.sql @@ -1 +1 @@ -SELECT -2487, globalNullIn(toIntervalMinute(-88074), 'qEkek..'), [-27.537293]; -- { serverError 53 } +SELECT -2487, globalNullIn(toIntervalMinute(-88074), 'qEkek..'), [-27.537293]; -- { serverError TYPE_MISMATCH } diff --git a/tests/queries/0_stateless/01277_fromUnixTimestamp64.sql b/tests/queries/0_stateless/01277_fromUnixTimestamp64.sql index 846ffa094a5..30e013def70 100644 --- a/tests/queries/0_stateless/01277_fromUnixTimestamp64.sql +++ b/tests/queries/0_stateless/01277_fromUnixTimestamp64.sql @@ -1,15 +1,15 @@ -- -- Error cases -SELECT fromUnixTimestamp64Milli(); -- {serverError 42} -SELECT fromUnixTimestamp64Micro(); -- {serverError 42} -SELECT fromUnixTimestamp64Nano(); -- {serverError 42} +SELECT fromUnixTimestamp64Milli(); -- {serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} +SELECT fromUnixTimestamp64Micro(); -- {serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} +SELECT fromUnixTimestamp64Nano(); -- {serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} -SELECT fromUnixTimestamp64Milli('abc'); -- {serverError 43} -SELECT fromUnixTimestamp64Micro('abc'); -- {serverError 43} -SELECT fromUnixTimestamp64Nano('abc'); -- {serverError 43} +SELECT fromUnixTimestamp64Milli('abc'); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT fromUnixTimestamp64Micro('abc'); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT fromUnixTimestamp64Nano('abc'); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} -SELECT fromUnixTimestamp64Milli('abc', 123); -- {serverError 43} -SELECT fromUnixTimestamp64Micro('abc', 123); -- {serverError 43} -SELECT fromUnixTimestamp64Nano('abc', 123); -- {serverError 43} +SELECT fromUnixTimestamp64Milli('abc', 123); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT fromUnixTimestamp64Micro('abc', 123); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT fromUnixTimestamp64Nano('abc', 123); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} SELECT 'const column'; WITH diff --git a/tests/queries/0_stateless/01277_random_fixed_string.sql b/tests/queries/0_stateless/01277_random_fixed_string.sql index 99782c1ac34..d21ba5142d6 100644 --- a/tests/queries/0_stateless/01277_random_fixed_string.sql +++ b/tests/queries/0_stateless/01277_random_fixed_string.sql @@ -1,5 +1,5 @@ -SELECT randomFixedString('string'); -- { serverError 43 } -SELECT randomFixedString(0); -- { serverError 69 } -SELECT randomFixedString(rand() % 10); -- { serverError 44 } +SELECT 
randomFixedString('string'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT randomFixedString(0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT randomFixedString(rand() % 10); -- { serverError ILLEGAL_COLUMN } SELECT toTypeName(randomFixedString(10)); SELECT DISTINCT c > 30000 FROM (SELECT arrayJoin(arrayMap(x -> reinterpretAsUInt8(substring(randomFixedString(100), x + 1, 1)), range(100))) AS byte, count() AS c FROM numbers(100000) GROUP BY byte ORDER BY byte); diff --git a/tests/queries/0_stateless/01278_random_string_utf8.sql b/tests/queries/0_stateless/01278_random_string_utf8.sql index 76349d9d814..da2dc48c3e1 100644 --- a/tests/queries/0_stateless/01278_random_string_utf8.sql +++ b/tests/queries/0_stateless/01278_random_string_utf8.sql @@ -1,4 +1,4 @@ -SELECT randomStringUTF8('string'); -- { serverError 43 } +SELECT randomStringUTF8('string'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT lengthUTF8(randomStringUTF8(100)); SELECT toTypeName(randomStringUTF8(10)); SELECT isValidUTF8(randomStringUTF8(100000)); diff --git a/tests/queries/0_stateless/01280_ttl_where_group_by_negative.sql b/tests/queries/0_stateless/01280_ttl_where_group_by_negative.sql index 978b2bfcc10..83c7465e793 100644 --- a/tests/queries/0_stateless/01280_ttl_where_group_by_negative.sql +++ b/tests/queries/0_stateless/01280_ttl_where_group_by_negative.sql @@ -1,6 +1,6 @@ -- Tags: no-parallel -create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by x set y = max(y); -- { serverError 450} -create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by b set y = max(y); -- { serverError 450} -create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by a, b, x set y = max(y); -- { serverError 450} -create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by a, b set y = max(y), y = max(y); -- { serverError 450} +create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by x set y = max(y); -- { serverError BAD_TTL_EXPRESSION} +create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by b set y = max(y); -- { serverError BAD_TTL_EXPRESSION} +create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by a, b, x set y = max(y); -- { serverError BAD_TTL_EXPRESSION} +create table ttl_01280_error (a Int, b Int, x Int64, y Int64, d DateTime) engine = MergeTree order by (a, b) ttl d + interval 1 second group by a, b set y = max(y), y = max(y); -- { serverError BAD_TTL_EXPRESSION} diff --git a/tests/queries/0_stateless/01281_alter_rename_and_other_renames.sql b/tests/queries/0_stateless/01281_alter_rename_and_other_renames.sql index b0ccd7751ab..43c477fb69d 100644 --- a/tests/queries/0_stateless/01281_alter_rename_and_other_renames.sql +++ b/tests/queries/0_stateless/01281_alter_rename_and_other_renames.sql @@ -4,8 +4,8 @@ CREATE TABLE rename_table_multiple (key Int32, value1 String, value2 Int32) ENGI INSERT INTO rename_table_multiple VALUES (1, 2, 3); -ALTER TABLE rename_table_multiple RENAME COLUMN value1 TO value1_string, MODIFY COLUMN 
value1_string String; --{serverError 48} -ALTER TABLE rename_table_multiple MODIFY COLUMN value1 String, RENAME COLUMN value1 to value1_string; --{serverError 48} +ALTER TABLE rename_table_multiple RENAME COLUMN value1 TO value1_string, MODIFY COLUMN value1_string String; --{serverError NOT_IMPLEMENTED} +ALTER TABLE rename_table_multiple MODIFY COLUMN value1 String, RENAME COLUMN value1 to value1_string; --{serverError NOT_IMPLEMENTED} ALTER TABLE rename_table_multiple RENAME COLUMN value1 TO value1_string; ALTER TABLE rename_table_multiple MODIFY COLUMN value1_string String; @@ -38,8 +38,8 @@ CREATE TABLE rename_table_multiple_compact (key Int32, value1 String, value2 Int INSERT INTO rename_table_multiple_compact VALUES (1, 2, 3); -ALTER TABLE rename_table_multiple_compact RENAME COLUMN value1 TO value1_string, MODIFY COLUMN value1_string String; --{serverError 48} -ALTER TABLE rename_table_multiple_compact MODIFY COLUMN value1 String, RENAME COLUMN value1 to value1_string; --{serverError 48} +ALTER TABLE rename_table_multiple_compact RENAME COLUMN value1 TO value1_string, MODIFY COLUMN value1_string String; --{serverError NOT_IMPLEMENTED} +ALTER TABLE rename_table_multiple_compact MODIFY COLUMN value1 String, RENAME COLUMN value1 to value1_string; --{serverError NOT_IMPLEMENTED} ALTER TABLE rename_table_multiple_compact RENAME COLUMN value1 TO value1_string; ALTER TABLE rename_table_multiple_compact MODIFY COLUMN value1_string String; diff --git a/tests/queries/0_stateless/01281_parseDateTime64BestEffort.sql b/tests/queries/0_stateless/01281_parseDateTime64BestEffort.sql index 808eaf291d5..37c1a54fe0b 100644 --- a/tests/queries/0_stateless/01281_parseDateTime64BestEffort.sql +++ b/tests/queries/0_stateless/01281_parseDateTime64BestEffort.sql @@ -1,16 +1,16 @@ -- Error cases -SELECT parseDateTime64BestEffort(); -- {serverError 42} -SELECT parseDateTime64BestEffort(123); -- {serverError 43} -SELECT parseDateTime64BestEffort('foo'); -- {serverError 41} +SELECT parseDateTime64BestEffort(); -- {serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} +SELECT parseDateTime64BestEffort(123); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT parseDateTime64BestEffort('foo'); -- {serverError CANNOT_PARSE_DATETIME} -SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', 'bar'); -- {serverError 43} -- invalid scale parameter -SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', 3, 4); -- {serverError 43} -- invalid timezone parameter +SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', 'bar'); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} -- invalid scale parameter +SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', 3, 4); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT} -- invalid timezone parameter SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', 3, 'baz'); -- {serverError BAD_ARGUMENTS} -- unknown timezone -SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', materialize(3), 4); -- {serverError 43, 44} -- non-const precision -SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', 3, materialize('UTC')); -- {serverError 44} -- non-const timezone +SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', materialize(3), 4); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT, ILLEGAL_COLUMN} -- non-const precision +SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184Z', 3, materialize('UTC')); -- {serverError ILLEGAL_COLUMN} -- non-const timezone -SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184012345678910111213141516171819Z', 3, 
'UTC'); -- {serverError 6} +SELECT parseDateTime64BestEffort('2020-05-14T03:37:03.253184012345678910111213141516171819Z', 3, 'UTC'); -- {serverError CANNOT_PARSE_TEXT} SELECT 'orNull'; SELECT parseDateTime64BestEffortOrNull('2020-05-14T03:37:03.253184Z', 3, 'UTC'); diff --git a/tests/queries/0_stateless/01281_unsucceeded_insert_select_queries_counter.sql b/tests/queries/0_stateless/01281_unsucceeded_insert_select_queries_counter.sql index 3b122fc0228..faf40b59d17 100644 --- a/tests/queries/0_stateless/01281_unsucceeded_insert_select_queries_counter.sql +++ b/tests/queries/0_stateless/01281_unsucceeded_insert_select_queries_counter.sql @@ -14,7 +14,7 @@ WHERE event in ('FailedQuery', 'FailedInsertQuery', 'FailedSelectQuery'); CREATE TABLE to_insert (value UInt64) ENGINE = Memory(); -- Failed insert before execution -INSERT INTO table_that_do_not_exists VALUES (42); -- { serverError 60 } +INSERT INTO table_that_do_not_exists VALUES (42); -- { serverError UNKNOWN_TABLE } SELECT current_value - previous_value FROM ( @@ -27,7 +27,7 @@ on previous.event = current.event; -- Failed insert in execution -INSERT INTO to_insert SELECT throwIf(1); -- { serverError 395 } +INSERT INTO to_insert SELECT throwIf(1); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } SELECT current_value - previous_value FROM ( @@ -40,7 +40,7 @@ on previous.event = current.event; -- Failed select before execution -SELECT * FROM table_that_do_not_exists; -- { serverError 60 } +SELECT * FROM table_that_do_not_exists; -- { serverError UNKNOWN_TABLE } SELECT current_value - previous_value FROM ( @@ -52,7 +52,7 @@ ALL LEFT JOIN ( on previous.event = current.event; -- Failed select in execution -SELECT throwIf(1); -- { serverError 395 } +SELECT throwIf(1); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } SELECT current_value - previous_value FROM ( diff --git a/tests/queries/0_stateless/01284_escape_sequences_php_mysql_style.sql b/tests/queries/0_stateless/01284_escape_sequences_php_mysql_style.sql index e9f2346233f..5d24e2009c9 100644 --- a/tests/queries/0_stateless/01284_escape_sequences_php_mysql_style.sql +++ b/tests/queries/0_stateless/01284_escape_sequences_php_mysql_style.sql @@ -4,5 +4,5 @@ SELECT 'a\_\c\l\i\c\k\h\o\u\s\e', 'a\\_\\c\\l\\i\\c\\k\\h\\o\\u\\s\\e'; select 'aXb' like 'a_b', 'aXb' like 'a\_b', 'a_b' like 'a\_b', 'a_b' like 'a\\_b'; SELECT match('Hello', '\w+'), match('Hello', '\\w+'), match('Hello', '\\\w+'), match('Hello', '\w\+'), match('Hello', 'w+'); -SELECT match('Hello', '\He\l\l\o'); -- { serverError 427 } -SELECT match('Hello', '\H\e\l\l\o'); -- { serverError 427 } +SELECT match('Hello', '\He\l\l\o'); -- { serverError CANNOT_COMPILE_REGEXP } +SELECT match('Hello', '\H\e\l\l\o'); -- { serverError CANNOT_COMPILE_REGEXP } diff --git a/tests/queries/0_stateless/01284_fuzz_bits.sql b/tests/queries/0_stateless/01284_fuzz_bits.sql index 24da23787cb..95a07c7bd44 100644 --- a/tests/queries/0_stateless/01284_fuzz_bits.sql +++ b/tests/queries/0_stateless/01284_fuzz_bits.sql @@ -1,5 +1,5 @@ -SELECT fuzzBits(toString('string'), 1); -- { serverError 43 } -SELECT fuzzBits('string', -1.0); -- { serverError 69 } +SELECT fuzzBits(toString('string'), 1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT fuzzBits('string', -1.0); -- { serverError ARGUMENT_OUT_OF_BOUND } SELECT fuzzBits('', 0.3); SELECT length(fuzzBits(randomString(100), 0.5)); SELECT toTypeName(fuzzBits(randomString(100), 0.5)); diff --git a/tests/queries/0_stateless/01286_constraints_on_default.sql 
b/tests/queries/0_stateless/01286_constraints_on_default.sql index d150bac15b5..d6e0324aca3 100644 --- a/tests/queries/0_stateless/01286_constraints_on_default.sql +++ b/tests/queries/0_stateless/01286_constraints_on_default.sql @@ -6,8 +6,8 @@ CREATE TABLE default_constraints CONSTRAINT c CHECK y < 5 ) ENGINE = Memory; -INSERT INTO default_constraints (x) SELECT number FROM system.numbers LIMIT 5; -- { serverError 469 } -INSERT INTO default_constraints (x) VALUES (0),(1),(2),(3),(4); -- { serverError 469 } +INSERT INTO default_constraints (x) SELECT number FROM system.numbers LIMIT 5; -- { serverError VIOLATED_CONSTRAINT } +INSERT INTO default_constraints (x) VALUES (0),(1),(2),(3),(4); -- { serverError VIOLATED_CONSTRAINT } SELECT y, throwIf(NOT y < 5) FROM default_constraints; SELECT count() FROM default_constraints; @@ -22,8 +22,8 @@ CREATE TEMPORARY TABLE default_constraints CONSTRAINT c CHECK y < 5 ); -INSERT INTO default_constraints (x) SELECT number FROM system.numbers LIMIT 5; -- { serverError 469 } -INSERT INTO default_constraints (x) VALUES (0),(1),(2),(3),(4); -- { serverError 469 } +INSERT INTO default_constraints (x) SELECT number FROM system.numbers LIMIT 5; -- { serverError VIOLATED_CONSTRAINT } +INSERT INTO default_constraints (x) VALUES (0),(1),(2),(3),(4); -- { serverError VIOLATED_CONSTRAINT } SELECT y, throwIf(NOT y < 5) FROM default_constraints; SELECT count() FROM default_constraints; diff --git a/tests/queries/0_stateless/01287_max_execution_speed.sql b/tests/queries/0_stateless/01287_max_execution_speed.sql index 6c749294975..35bc4e02d38 100644 --- a/tests/queries/0_stateless/01287_max_execution_speed.sql +++ b/tests/queries/0_stateless/01287_max_execution_speed.sql @@ -1,12 +1,12 @@ -- Tags: no-fasttest SET min_execution_speed = 100000000000, timeout_before_checking_execution_speed = 0; -SELECT count() FROM system.numbers; -- { serverError 160 } +SELECT count() FROM system.numbers; -- { serverError TOO_SLOW } SET min_execution_speed = 0; SELECT 'Ok (1)'; SET min_execution_speed_bytes = 800000000000, timeout_before_checking_execution_speed = 0; -SELECT count() FROM system.numbers; -- { serverError 160 } +SELECT count() FROM system.numbers; -- { serverError TOO_SLOW } SET min_execution_speed_bytes = 0; SELECT 'Ok (2)'; diff --git a/tests/queries/0_stateless/01291_unsupported_conversion_from_decimal.sql b/tests/queries/0_stateless/01291_unsupported_conversion_from_decimal.sql index 256c6424901..e5948465d53 100644 --- a/tests/queries/0_stateless/01291_unsupported_conversion_from_decimal.sql +++ b/tests/queries/0_stateless/01291_unsupported_conversion_from_decimal.sql @@ -1,5 +1,5 @@ -SELECT toIntervalSecond(now64()); -- { serverError 70 } -SELECT CAST(now64() AS IntervalSecond); -- { serverError 70 } +SELECT toIntervalSecond(now64()); -- { serverError CANNOT_CONVERT_TYPE } +SELECT CAST(now64() AS IntervalSecond); -- { serverError CANNOT_CONVERT_TYPE } -SELECT toIntervalSecond(now64()); -- { serverError 70 } -SELECT CAST(now64() AS IntervalSecond); -- { serverError 70 } +SELECT toIntervalSecond(now64()); -- { serverError CANNOT_CONVERT_TYPE } +SELECT CAST(now64() AS IntervalSecond); -- { serverError CANNOT_CONVERT_TYPE } diff --git a/tests/queries/0_stateless/01292_create_user.sql b/tests/queries/0_stateless/01292_create_user.sql index a283ce687e6..46808aec1ef 100644 --- a/tests/queries/0_stateless/01292_create_user.sql +++ b/tests/queries/0_stateless/01292_create_user.sql @@ -19,7 +19,7 @@ SHOW CREATE USER u3_01292; SELECT '-- rename'; ALTER USER u2_01292 RENAME TO 
'u2_01292_renamed'; -SHOW CREATE USER u2_01292; -- { serverError 192 } -- User not found +SHOW CREATE USER u2_01292; -- { serverError UNKNOWN_USER } -- User not found SHOW CREATE USER u2_01292_renamed; DROP USER u1_01292, u2_01292_renamed, u3_01292; diff --git a/tests/queries/0_stateless/01293_create_role.sql b/tests/queries/0_stateless/01293_create_role.sql index fd75d62964d..4b656ffb10f 100644 --- a/tests/queries/0_stateless/01293_create_role.sql +++ b/tests/queries/0_stateless/01293_create_role.sql @@ -14,7 +14,7 @@ SHOW CREATE ROLE r2_01293; SELECT '-- rename'; ALTER ROLE r2_01293 RENAME TO 'r2_01293_renamed'; -SHOW CREATE ROLE r2_01293; -- { serverError 511 } -- Role not found +SHOW CREATE ROLE r2_01293; -- { serverError UNKNOWN_ROLE } -- Role not found SHOW CREATE ROLE r2_01293_renamed; DROP ROLE r1_01293, r2_01293_renamed; diff --git a/tests/queries/0_stateless/01293_optimize_final_force.reference b/tests/queries/0_stateless/01293_optimize_final_force.reference index b0b9422adf0..e69de29bb2d 100644 --- a/tests/queries/0_stateless/01293_optimize_final_force.reference +++ b/tests/queries/0_stateless/01293_optimize_final_force.reference @@ -1,100 +0,0 @@ -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 -55 0 diff --git a/tests/queries/0_stateless/01293_optimize_final_force.sh b/tests/queries/0_stateless/01293_optimize_final_force.sh index 9b9ed6272a1..d3d3d3e1ac5 100755 --- a/tests/queries/0_stateless/01293_optimize_final_force.sh +++ b/tests/queries/0_stateless/01293_optimize_final_force.sh @@ -6,23 +6,33 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . "$CURDIR"/../shell_config.sh -for _ in {1..100}; do $CLICKHOUSE_CLIENT --multiquery --query " -DROP TABLE IF EXISTS mt; -CREATE TABLE mt (x UInt8, k UInt8 DEFAULT 0) ENGINE = SummingMergeTree ORDER BY k; +it=0 +TIMELIMIT=31 +while [ $SECONDS -lt "$TIMELIMIT" ] && [ $it -lt 100 ]; +do + it=$((it+1)) + $CLICKHOUSE_CLIENT --multiquery --query " + DROP TABLE IF EXISTS mt; + CREATE TABLE mt (x UInt8, k UInt8 DEFAULT 0) ENGINE = SummingMergeTree ORDER BY k; -INSERT INTO mt (x) VALUES (1); -INSERT INTO mt (x) VALUES (2); -INSERT INTO mt (x) VALUES (3); -INSERT INTO mt (x) VALUES (4); -INSERT INTO mt (x) VALUES (5); -INSERT INTO mt (x) VALUES (6); -INSERT INTO mt (x) VALUES (7); -INSERT INTO mt (x) VALUES (8); -INSERT INTO mt (x) VALUES (9); -INSERT INTO mt (x) VALUES (10); + INSERT INTO mt (x) VALUES (1); + INSERT INTO mt (x) VALUES (2); + INSERT INTO mt (x) VALUES (3); + INSERT INTO mt (x) VALUES (4); + INSERT INTO mt (x) VALUES (5); + INSERT INTO mt (x) VALUES (6); + INSERT INTO mt (x) VALUES (7); + INSERT INTO mt (x) VALUES (8); + INSERT INTO mt (x) VALUES (9); + INSERT INTO mt (x) VALUES (10); -OPTIMIZE TABLE mt FINAL; -SELECT * FROM mt; + OPTIMIZE TABLE mt FINAL; + "; -DROP TABLE mt; -"; done + RES=$($CLICKHOUSE_CLIENT --query "SELECT * FROM mt;") + if [ "$RES" != "55 0" ]; then + echo "FAIL. 
Got: $RES" + fi + + $CLICKHOUSE_CLIENT --query "DROP TABLE mt;" +done diff --git a/tests/queries/0_stateless/01294_create_settings_profile.sql b/tests/queries/0_stateless/01294_create_settings_profile.sql index f71eefa6975..2a7fadad88b 100644 --- a/tests/queries/0_stateless/01294_create_settings_profile.sql +++ b/tests/queries/0_stateless/01294_create_settings_profile.sql @@ -17,7 +17,7 @@ SHOW CREATE SETTINGS PROFILE s3_01294; SELECT '-- rename'; ALTER SETTINGS PROFILE s2_01294 RENAME TO 's2_01294_renamed'; -SHOW CREATE SETTINGS PROFILE s2_01294; -- { serverError 180 } -- Profile not found +SHOW CREATE SETTINGS PROFILE s2_01294; -- { serverError THERE_IS_NO_PROFILE } -- Profile not found SHOW CREATE SETTINGS PROFILE s2_01294_renamed; DROP SETTINGS PROFILE s1_01294, s2_01294_renamed, s3_01294; diff --git a/tests/queries/0_stateless/01295_create_row_policy.sql b/tests/queries/0_stateless/01295_create_row_policy.sql index 5ccd815c89a..e09c2c1745c 100644 --- a/tests/queries/0_stateless/01295_create_row_policy.sql +++ b/tests/queries/0_stateless/01295_create_row_policy.sql @@ -19,7 +19,7 @@ SHOW CREATE ROW POLICY p3_01295 ON db.table; SELECT '-- rename'; ALTER ROW POLICY p2_01295 ON db.table RENAME TO 'p2_01295_renamed'; -SHOW CREATE ROW POLICY p2_01295 ON db.table; -- { serverError 523 } -- Policy not found +SHOW CREATE ROW POLICY p2_01295 ON db.table; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found SHOW CREATE ROW POLICY p2_01295_renamed ON db.table; DROP ROW POLICY p1_01295, p2_01295_renamed, p3_01295 ON db.table; diff --git a/tests/queries/0_stateless/01296_codecs_bad_arguments.sql b/tests/queries/0_stateless/01296_codecs_bad_arguments.sql index d7eb53300ec..a1d22123b16 100644 --- a/tests/queries/0_stateless/01296_codecs_bad_arguments.sql +++ b/tests/queries/0_stateless/01296_codecs_bad_arguments.sql @@ -2,11 +2,11 @@ DROP TABLE IF EXISTS delta_table; DROP TABLE IF EXISTS zstd_table; DROP TABLE IF EXISTS lz4_table; -CREATE TABLE delta_table (`id` UInt64 CODEC(Delta(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError 433} -CREATE TABLE zstd_table (`id` UInt64 CODEC(ZSTD(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError 433} -CREATE TABLE lz4_table (`id` UInt64 CODEC(LZ4HC(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError 433} +CREATE TABLE delta_table (`id` UInt64 CODEC(Delta(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError ILLEGAL_CODEC_PARAMETER} +CREATE TABLE zstd_table (`id` UInt64 CODEC(ZSTD(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError ILLEGAL_CODEC_PARAMETER} +CREATE TABLE lz4_table (`id` UInt64 CODEC(LZ4HC(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError ILLEGAL_CODEC_PARAMETER} -CREATE TABLE lz4_table (`id` UInt64 CODEC(LZ4(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError 378} +CREATE TABLE lz4_table (`id` UInt64 CODEC(LZ4(tuple()))) ENGINE = MergeTree() ORDER BY tuple(); --{serverError DATA_TYPE_CANNOT_HAVE_ARGUMENTS} SELECT 1; diff --git a/tests/queries/0_stateless/01296_create_row_policy_in_current_database.sql b/tests/queries/0_stateless/01296_create_row_policy_in_current_database.sql index c1e8068075b..a05a9245515 100644 --- a/tests/queries/0_stateless/01296_create_row_policy_in_current_database.sql +++ b/tests/queries/0_stateless/01296_create_row_policy_in_current_database.sql @@ -16,7 +16,7 @@ ALTER POLICY p1_01296 ON table USING 1; SHOW CREATE POLICY p1_01296 ON db_01296.table; SHOW CREATE POLICY p1_01296 ON table; DROP POLICY p1_01296 ON table; -DROP 
POLICY p1_01296 ON db_01296.table; -- { serverError 523 } -- Policy not found +DROP POLICY p1_01296 ON db_01296.table; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found SELECT '-- multiple policies'; CREATE ROW POLICY p1_01296, p2_01296 ON table USING 1; @@ -41,12 +41,12 @@ SHOW CREATE POLICY p2_01296 ON table; DROP POLICY p1_01296, p2_01296 ON table; DROP POLICY p3_01296 ON table, table2; DROP POLICY p4_01296 ON table, p5_01296 ON table2; -DROP POLICY p1_01296 ON db_01296.table; -- { serverError 523 } -- Policy not found -DROP POLICY p2_01296 ON db_01296.table; -- { serverError 523 } -- Policy not found -DROP POLICY p3_01296 ON db_01296.table; -- { serverError 523 } -- Policy not found -DROP POLICY p3_01296 ON db_01296.table2; -- { serverError 523 } -- Policy not found -DROP POLICY p4_01296 ON db_01296.table; -- { serverError 523 } -- Policy not found -DROP POLICY p5_01296 ON db_01296.table2; -- { serverError 523 } -- Policy not found +DROP POLICY p1_01296 ON db_01296.table; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found +DROP POLICY p2_01296 ON db_01296.table; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found +DROP POLICY p3_01296 ON db_01296.table; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found +DROP POLICY p3_01296 ON db_01296.table2; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found +DROP POLICY p4_01296 ON db_01296.table; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found +DROP POLICY p5_01296 ON db_01296.table2; -- { serverError UNKNOWN_ROW_POLICY } -- Policy not found USE default; DROP DATABASE db_01296; diff --git a/tests/queries/0_stateless/01297_alter_distributed.sql b/tests/queries/0_stateless/01297_alter_distributed.sql index c79d98b7b3b..a68e137bfc8 100644 --- a/tests/queries/0_stateless/01297_alter_distributed.sql +++ b/tests/queries/0_stateless/01297_alter_distributed.sql @@ -25,7 +25,7 @@ show create table merge_distributed; --error: should fail, because there is no `dummy1` column alter table merge_distributed add column dummy1 String after CounterID; -select CounterID, dummy1 from merge_distributed where dummy1 <> '' limit 10; -- { serverError 47 } +select CounterID, dummy1 from merge_distributed where dummy1 <> '' limit 10; -- { serverError UNKNOWN_IDENTIFIER } drop table merge_distributed; drop table merge_distributed1; diff --git a/tests/queries/0_stateless/01297_create_quota.sql b/tests/queries/0_stateless/01297_create_quota.sql index a0ecb6bd2d0..febdc7be6f5 100644 --- a/tests/queries/0_stateless/01297_create_quota.sql +++ b/tests/queries/0_stateless/01297_create_quota.sql @@ -21,7 +21,7 @@ SHOW CREATE QUOTA q4_01297; SELECT '-- rename'; ALTER QUOTA q2_01297 RENAME TO 'q2_01297_renamed'; -SHOW CREATE QUOTA q2_01297; -- { serverError 199 } -- Policy not found +SHOW CREATE QUOTA q2_01297; -- { serverError UNKNOWN_QUOTA } -- Quota not found SHOW CREATE QUOTA q2_01297_renamed; DROP QUOTA q1_01297, q2_01297_renamed, q3_01297, q4_01297; diff --git a/tests/queries/0_stateless/01305_polygons_union.sql b/tests/queries/0_stateless/01305_polygons_union.sql index 23ea0d050c3..50c96c325cd 100644 --- a/tests/queries/0_stateless/01305_polygons_union.sql +++ b/tests/queries/0_stateless/01305_polygons_union.sql @@ -1,6 +1,6 @@ select polygonsUnionCartesian([[[(0., 0.),(0., 3.),(1., 2.9),(2., 2.6),(2.6, 2.),(2.9, 1),(3., 0.),(0., 0.)]]], [[[(1., 1.),(1., 4.),(4., 4.),(4., 1.),(1., 1.)]]]); -SELECT arrayMap(a -> arrayMap(b -> arrayMap(c -> (round(c.1, 6), round(c.2, 6)), b), a), polygonsUnionCartesian([[[(2., 100.0000991821289),
(0., 3.), (1., 2.9), (2., 2.6), (2.6, 2.), (2.9, 1), (3., 0.), (100.0000991821289, 2.)]]], [[[(1., 1.), (1000.0001220703125, nan), (4., 4.), (4., 1.), (1., 1.)]]])); -- { serverError 43 } +SELECT arrayMap(a -> arrayMap(b -> arrayMap(c -> (round(c.1, 6), round(c.2, 6)), b), a), polygonsUnionCartesian([[[(2., 100.0000991821289), (0., 3.), (1., 2.9), (2., 2.6), (2.6, 2.), (2.9, 1), (3., 0.), (100.0000991821289, 2.)]]], [[[(1., 1.), (1000.0001220703125, nan), (4., 4.), (4., 1.), (1., 1.)]]])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select arrayMap(a -> arrayMap(b -> arrayMap(c -> (round(c.1, 6), round(c.2, 6)), b), a), polygonsUnionSpherical([[[(4.3613577, 50.8651821), (4.349556, 50.8535879), (4.3602419, 50.8435626), (4.3830299, 50.8428851), (4.3904543, 50.8564867), (4.3613148, 50.8651279)]]], [[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]])); diff --git a/tests/queries/0_stateless/01308_polygon_area.sql b/tests/queries/0_stateless/01308_polygon_area.sql index 494d0de4570..26f026ae927 100644 --- a/tests/queries/0_stateless/01308_polygon_area.sql +++ b/tests/queries/0_stateless/01308_polygon_area.sql @@ -1,3 +1,3 @@ select polygonAreaCartesian([[[(0., 0.), (0., 5.), (5., 5.), (5., 0.)]]]); select round(polygonAreaSpherical([[[(4.346693, 50.858306), (4.367945, 50.852455), (4.366227, 50.840809), (4.344961, 50.833264), (4.338074, 50.848677), (4.346693, 50.858306)]]]), 14); -SELECT polygonAreaCartesian([]); -- { serverError 36 } +SELECT polygonAreaCartesian([]); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01310_enum_comparison.sql b/tests/queries/0_stateless/01310_enum_comparison.sql index ed63911e698..50752392bb3 100644 --- a/tests/queries/0_stateless/01310_enum_comparison.sql +++ b/tests/queries/0_stateless/01310_enum_comparison.sql @@ -3,4 +3,4 @@ INSERT INTO enum VALUES ('hello'); SELECT count() FROM enum WHERE x = 'hello'; SELECT count() FROM enum WHERE x = 'world'; -SELECT count() FROM enum WHERE x = 'xyz'; -- { serverError 691 } +SELECT count() FROM enum WHERE x = 'xyz'; -- { serverError UNKNOWN_ELEMENT_OF_ENUM } diff --git a/tests/queries/0_stateless/01311_comparison_with_constant_string.sql b/tests/queries/0_stateless/01311_comparison_with_constant_string.sql index d6641a50c45..5760c09d6e2 100644 --- a/tests/queries/0_stateless/01311_comparison_with_constant_string.sql +++ b/tests/queries/0_stateless/01311_comparison_with_constant_string.sql @@ -11,7 +11,7 @@ SELECT 1 IN (1.23, '2', 2); SELECT '---'; -- it should work but it doesn't. 
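-- Note added for clarity, not part of the original test's assertions: the constant string is parsed using the numeric operand's type, so '1.0' cannot be read as an integer and the comparison below throws instead of returning 1.
-- A hedged illustration of the working alternative, using an explicit conversion: SELECT 1 = CAST('1.0' AS Float64); -- returns 1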
-SELECT 1 = '1.0'; -- { serverError 53 } +SELECT 1 = '1.0'; -- { serverError TYPE_MISMATCH } SELECT '---'; SELECT 1 = '257'; @@ -30,4 +30,4 @@ SELECT '---'; SELECT toDateTime('2020-06-13 01:02:03') = '2020-06-13T01:02:03'; SELECT '---'; -SELECT 0 = ''; -- { serverError 32 } +SELECT 0 = ''; -- { serverError ATTEMPT_TO_READ_AFTER_EOF } diff --git a/tests/queries/0_stateless/01313_parse_date_time_best_effort_null_zero.sql b/tests/queries/0_stateless/01313_parse_date_time_best_effort_null_zero.sql index ed56aec3fb0..2ffa05155ab 100644 --- a/tests/queries/0_stateless/01313_parse_date_time_best_effort_null_zero.sql +++ b/tests/queries/0_stateless/01313_parse_date_time_best_effort_null_zero.sql @@ -1,12 +1,12 @@ -SELECT parseDateTimeBestEffort(''); -- { serverError 41 } +SELECT parseDateTimeBestEffort(''); -- { serverError CANNOT_PARSE_DATETIME } SELECT parseDateTimeBestEffortOrNull(''); SELECT parseDateTimeBestEffortOrZero('', 'UTC'); -SELECT parseDateTime64BestEffort(''); -- { serverError 41 } +SELECT parseDateTime64BestEffort(''); -- { serverError CANNOT_PARSE_DATETIME } SELECT parseDateTime64BestEffortOrNull(''); SELECT parseDateTime64BestEffortOrZero('', 0, 'UTC'); SET date_time_input_format = 'best_effort'; -SELECT toDateTime(''); -- { serverError 41 } +SELECT toDateTime(''); -- { serverError CANNOT_PARSE_DATETIME } SELECT toDateTimeOrNull(''); SELECT toDateTimeOrZero('', 'UTC'); diff --git a/tests/queries/0_stateless/01318_alter_add_column_exists.sql b/tests/queries/0_stateless/01318_alter_add_column_exists.sql index e270c578600..5bfa07cd416 100644 --- a/tests/queries/0_stateless/01318_alter_add_column_exists.sql +++ b/tests/queries/0_stateless/01318_alter_add_column_exists.sql @@ -22,6 +22,6 @@ ALTER TABLE add_table ADD COLUMN IF NOT EXISTS value1 UInt64, ADD COLUMN IF NOT SHOW CREATE TABLE add_table; -ALTER TABLE add_table ADD COLUMN value3 UInt64, ADD COLUMN IF NOT EXISTS value3 UInt32; --{serverError 44} +ALTER TABLE add_table ADD COLUMN value3 UInt64, ADD COLUMN IF NOT EXISTS value3 UInt32; --{serverError ILLEGAL_COLUMN} DROP TABLE IF EXISTS add_table; diff --git a/tests/queries/0_stateless/01318_decrypt.sql b/tests/queries/0_stateless/01318_decrypt.sql index 8cd1414d11b..a41da46d3a0 100644 --- a/tests/queries/0_stateless/01318_decrypt.sql +++ b/tests/queries/0_stateless/01318_decrypt.sql @@ -13,47 +13,47 @@ ----------------------------------------------------------------------------------------- -- error cases ----------------------------------------------------------------------------------------- -SELECT aes_decrypt_mysql(); --{serverError 42} not enough arguments -SELECT aes_decrypt_mysql('aes-128-ecb'); --{serverError 42} not enough arguments -SELECT aes_decrypt_mysql('aes-128-ecb', 'text'); --{serverError 42} not enough arguments +SELECT aes_decrypt_mysql(); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT aes_decrypt_mysql('aes-128-ecb'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT aes_decrypt_mysql('aes-128-ecb', 'text'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments -- Mode -SELECT aes_decrypt_mysql(789, 'text', 'key'); --{serverError 43} bad mode type -SELECT aes_decrypt_mysql('blah blah blah', 'text', 'key'); -- {serverError 36} garbage mode value -SELECT aes_decrypt_mysql('des-ede3-ecb', 'text', 'key'); -- {serverError 36} bad mode value of valid cipher name -SELECT aes_decrypt_mysql('aes-128-gcm', 'text', 'key'); -- {serverError 36} mode is not supported by _mysql-functions +SELECT 
aes_decrypt_mysql(789, 'text', 'key'); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad mode type +SELECT aes_decrypt_mysql('blah blah blah', 'text', 'key'); -- {serverError BAD_ARGUMENTS} garbage mode value +SELECT aes_decrypt_mysql('des-ede3-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} bad mode value of valid cipher name +SELECT aes_decrypt_mysql('aes-128-gcm', 'text', 'key'); -- {serverError BAD_ARGUMENTS} mode is not supported by _mysql-functions -SELECT decrypt(789, 'text', 'key'); --{serverError 43} bad mode type -SELECT decrypt('blah blah blah', 'text', 'key'); -- {serverError 36} garbage mode value -SELECT decrypt('des-ede3-ecb', 'text', 'key'); -- {serverError 36} bad mode value of valid cipher name +SELECT decrypt(789, 'text', 'key'); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad mode type +SELECT decrypt('blah blah blah', 'text', 'key'); -- {serverError BAD_ARGUMENTS} garbage mode value +SELECT decrypt('des-ede3-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} bad mode value of valid cipher name -- Key -SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 456); --{serverError 43} bad key type -SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key'); -- {serverError 36} key is too short +SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 456); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad key type +SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} key is too short -SELECT decrypt('aes-128-ecb', 'text'); --{serverError 42} key is missing -SELECT decrypt('aes-128-ecb', 'text', 456); --{serverError 43} bad key type -SELECT decrypt('aes-128-ecb', 'text', 'key'); -- {serverError 36} key is too short -SELECT decrypt('aes-128-ecb', 'text', 'keykeykeykeykeykeykeykeykeykeykeykey'); -- {serverError 36} key is to long +SELECT decrypt('aes-128-ecb', 'text'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} key is missing +SELECT decrypt('aes-128-ecb', 'text', 456); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad key type +SELECT decrypt('aes-128-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} key is too short +SELECT decrypt('aes-128-ecb', 'text', 'keykeykeykeykeykeykeykeykeykeykeykey'); -- {serverError BAD_ARGUMENTS} key is too long -- IV -SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key', 1011); --{serverError 43} bad IV type 6 -SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key', 'iv'); --{serverError 36} IV is too short 4 +SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key', 1011); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad IV type 6 +SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key', 'iv'); --{serverError BAD_ARGUMENTS} IV is too short 4 -SELECT decrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 1011); --{serverError 43} bad IV type 1 -SELECT decrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iviviviviviviviviviviviviviviviviviviviviv'); --{serverError 36} IV is too long 3 -SELECT decrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iv'); --{serverError 36} IV is too short 2 +SELECT decrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 1011); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad IV type 1 +SELECT decrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iviviviviviviviviviviviviviviviviviviviviv'); --{serverError BAD_ARGUMENTS} IV is too long 3 +SELECT decrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iv'); --{serverError BAD_ARGUMENTS} IV is too short 2 --AAD -SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key', 'IV', 1213); --{serverError 42} too many arguments +SELECT aes_decrypt_mysql('aes-128-ecb', 'text', 'key', 'IV', 1213);
--{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} too many arguments -SELECT decrypt('aes-128-ecb', 'text', 'key', 'IV', 1213); --{serverError 43} bad AAD type -SELECT decrypt('aes-128-gcm', 'text', 'key', 'IV', 1213); --{serverError 43} bad AAD type +SELECT decrypt('aes-128-ecb', 'text', 'key', 'IV', 1213); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad AAD type +SELECT decrypt('aes-128-gcm', 'text', 'key', 'IV', 1213); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad AAD type -- Invalid ciphertext should cause an error or produce garbage -SELECT ignore(decrypt('aes-128-ecb', 'hello there', '1111111111111111')); -- {serverError 454} 1 -SELECT ignore(decrypt('aes-128-cbc', 'hello there', '1111111111111111')); -- {serverError 454} 2 +SELECT ignore(decrypt('aes-128-ecb', 'hello there', '1111111111111111')); -- {serverError OPENSSL_ERROR} 1 +SELECT ignore(decrypt('aes-128-cbc', 'hello there', '1111111111111111')); -- {serverError OPENSSL_ERROR} 2 SELECT ignore(decrypt('aes-128-ofb', 'hello there', '1111111111111111')); -- GIGO SELECT ignore(decrypt('aes-128-ctr', 'hello there', '1111111111111111')); -- GIGO SELECT decrypt('aes-128-ctr', '', '1111111111111111') == ''; @@ -139,7 +139,7 @@ CREATE TABLE decrypt_null ( INSERT INTO decrypt_null VALUES ('2022-08-02 00:00:00', 1, encrypt('aes-256-gcm', 'value1', 'keykeykeykeykeykeykeykeykeykey01', 'iv1'), 'iv1'), ('2022-09-02 00:00:00', 2, encrypt('aes-256-gcm', 'value2', 'keykeykeykeykeykeykeykeykeykey02', 'iv2'), 'iv2'), ('2022-09-02 00:00:01', 3, encrypt('aes-256-gcm', 'value3', 'keykeykeykeykeykeykeykeykeykey03', 'iv3'), 'iv3'); -SELECT dt, user_id FROM decrypt_null WHERE (user_id > 0) AND (decrypt('aes-256-gcm', encrypted, 'keykeykeykeykeykeykeykeykeykey02', iv) = 'value2'); --{serverError 454} +SELECT dt, user_id FROM decrypt_null WHERE (user_id > 0) AND (decrypt('aes-256-gcm', encrypted, 'keykeykeykeykeykeykeykeykeykey02', iv) = 'value2'); --{serverError OPENSSL_ERROR} SELECT dt, user_id FROM decrypt_null WHERE (user_id > 0) AND (tryDecrypt('aes-256-gcm', encrypted, 'keykeykeykeykeykeykeykeykeykey02', iv) = 'value2'); SELECT dt, user_id, (tryDecrypt('aes-256-gcm', encrypted, 'keykeykeykeykeykeykeykeykeykey02', iv)) as value FROM decrypt_null ORDER BY user_id; diff --git a/tests/queries/0_stateless/01318_encrypt.sql b/tests/queries/0_stateless/01318_encrypt.sql index 2bcbfc187b6..548d36756aa 100644 --- a/tests/queries/0_stateless/01318_encrypt.sql +++ b/tests/queries/0_stateless/01318_encrypt.sql @@ -13,43 +13,43 @@ ----------------------------------------------------------------------------------------- -- error cases ----------------------------------------------------------------------------------------- -SELECT aes_encrypt_mysql(); --{serverError 42} not enough arguments -SELECT aes_encrypt_mysql('aes-128-ecb'); --{serverError 42} not enough arguments -SELECT aes_encrypt_mysql('aes-128-ecb', 'text'); --{serverError 42} not enough arguments +SELECT aes_encrypt_mysql(); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT aes_encrypt_mysql('aes-128-ecb'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments +SELECT aes_encrypt_mysql('aes-128-ecb', 'text'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} not enough arguments -- Mode -SELECT aes_encrypt_mysql(789, 'text', 'key'); --{serverError 43} bad mode type -SELECT aes_encrypt_mysql('blah blah blah', 'text', 'key'); -- {serverError 36} garbage mode value -SELECT aes_encrypt_mysql('des-ede3-ecb', 'text', 'key'); -- {serverError 36} bad mode value of valid 
cipher name -SELECT aes_encrypt_mysql('aes-128-gcm', 'text', 'key'); -- {serverError 36} mode is not supported by _mysql-functions +SELECT aes_encrypt_mysql(789, 'text', 'key'); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad mode type +SELECT aes_encrypt_mysql('blah blah blah', 'text', 'key'); -- {serverError BAD_ARGUMENTS} garbage mode value +SELECT aes_encrypt_mysql('des-ede3-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} bad mode value of valid cipher name +SELECT aes_encrypt_mysql('aes-128-gcm', 'text', 'key'); -- {serverError BAD_ARGUMENTS} mode is not supported by _mysql-functions -SELECT encrypt(789, 'text', 'key'); --{serverError 43} bad mode type -SELECT encrypt('blah blah blah', 'text', 'key'); -- {serverError 36} garbage mode value -SELECT encrypt('des-ede3-ecb', 'text', 'key'); -- {serverError 36} bad mode value of valid cipher name +SELECT encrypt(789, 'text', 'key'); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad mode type +SELECT encrypt('blah blah blah', 'text', 'key'); -- {serverError BAD_ARGUMENTS} garbage mode value +SELECT encrypt('des-ede3-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} bad mode value of valid cipher name -- Key -SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 456); --{serverError 43} bad key type -SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 'key'); -- {serverError 36} key is too short +SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 456); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad key type +SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} key is too short -SELECT encrypt('aes-128-ecb', 'text'); --{serverError 42} key is missing -SELECT encrypt('aes-128-ecb', 'text', 456); --{serverError 43} bad key type -SELECT encrypt('aes-128-ecb', 'text', 'key'); -- {serverError 36} key is too short -SELECT encrypt('aes-128-ecb', 'text', 'keykeykeykeykeykeykeykeykeykeykeykey'); -- {serverError 36} key is to long +SELECT encrypt('aes-128-ecb', 'text'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} key is missing +SELECT encrypt('aes-128-ecb', 'text', 456); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad key type +SELECT encrypt('aes-128-ecb', 'text', 'key'); -- {serverError BAD_ARGUMENTS} key is too short +SELECT encrypt('aes-128-ecb', 'text', 'keykeykeykeykeykeykeykeykeykeykeykey'); -- {serverError BAD_ARGUMENTS} key is too long -- IV -SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 'key', 1011); --{serverError 43} bad IV type 6 -SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 'key', 'iv'); --{serverError 36} IV is too short 4 +SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 'key', 1011); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad IV type 6 +SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 'key', 'iv'); --{serverError BAD_ARGUMENTS} IV is too short 4 -SELECT encrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 1011); --{serverError 43} bad IV type 1 -SELECT encrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iviviviviviviviviviviviviviviviviviviviviv'); --{serverError 36} IV is too long 3 -SELECT encrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iv'); --{serverError 36} IV is too short 2 +SELECT encrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 1011); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad IV type 1 +SELECT encrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iviviviviviviviviviviviviviviviviviviviviv'); --{serverError BAD_ARGUMENTS} IV is too long 3 +SELECT encrypt('aes-128-cbc', 'text', 'keykeykeykeykeyk', 'iv'); --{serverError BAD_ARGUMENTS} IV is too short 2 --AAD -SELECT aes_encrypt_mysql('aes-128-ecb',
'text', 'key', 'IV', 1213); --{serverError 42} too many arguments +SELECT aes_encrypt_mysql('aes-128-ecb', 'text', 'key', 'IV', 1213); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} too many arguments -SELECT encrypt('aes-128-ecb', 'text', 'key', 'IV', 1213); --{serverError 43} bad AAD type -SELECT encrypt('aes-128-gcm', 'text', 'key', 'IV', 1213); --{serverError 43} bad AAD type +SELECT encrypt('aes-128-ecb', 'text', 'key', 'IV', 1213); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad AAD type +SELECT encrypt('aes-128-gcm', 'text', 'key', 'IV', 1213); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} bad AAD type ----------------------------------------------------------------------------------------- -- Validate against predefined ciphertext,plaintext,key and IV for MySQL compatibility mode diff --git a/tests/queries/0_stateless/01318_map_add_map_subtract.sql b/tests/queries/0_stateless/01318_map_add_map_subtract.sql index 6ead7a2db46..b934e506f59 100644 --- a/tests/queries/0_stateless/01318_map_add_map_subtract.sql +++ b/tests/queries/0_stateless/01318_map_add_map_subtract.sql @@ -2,11 +2,11 @@ drop table if exists map_test; create table map_test engine=TinyLog() as (select ([1, number], [toInt32(2),2]) as map from numbers(1, 10)); -- mapAdd -select mapAdd([1], [1]); -- { serverError 43 } -select mapAdd(([1], [1])); -- { serverError 42 } -select mapAdd(([1], [1]), map) from map_test; -- { serverError 43 } -select mapAdd(([toUInt64(1)], [1]), map) from map_test; -- { serverError 43 } -select mapAdd(([toUInt64(1), 2], [toInt32(1)]), map) from map_test; -- {serverError 42 } +select mapAdd([1], [1]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select mapAdd(([1], [1])); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select mapAdd(([1], [1]), map) from map_test; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select mapAdd(([toUInt64(1)], [1]), map) from map_test; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select mapAdd(([toUInt64(1), 2], [toInt32(1)]), map) from map_test; -- {serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } select mapAdd(([toUInt64(1)], [toInt32(1)]), map) from map_test; select mapAdd(cast(map, 'Tuple(Array(UInt8), Array(UInt8))'), ([1], [1]), ([2],[2]) ) from map_test; @@ -27,8 +27,8 @@ select mapAdd(([toInt64(1), 2], [toInt64(1), 1]), ([toInt64(1), 2], [toInt64(1), select mapAdd(([1, 2], [toFloat32(1.1), 1]), ([1, 2], [2.2, 1])) as res, toTypeName(res); select mapAdd(([1, 2], [toFloat64(1.1), 1]), ([1, 2], [2.2, 1])) as res, toTypeName(res); -select mapAdd(([toFloat32(1), 2], [toFloat64(1.1), 1]), ([toFloat32(1), 2], [2.2, 1])) as res, toTypeName(res); -- { serverError 43 } -select mapAdd(([1, 2], [toFloat64(1.1), 1]), ([1, 2], [1, 1])) as res, toTypeName(res); -- { serverError 43 } +select mapAdd(([toFloat32(1), 2], [toFloat64(1.1), 1]), ([toFloat32(1), 2], [2.2, 1])) as res, toTypeName(res); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select mapAdd(([1, 2], [toFloat64(1.1), 1]), ([1, 2], [1, 1])) as res, toTypeName(res); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select mapAdd((['a', 'b'], [1, 1]), ([key], [1])) from values('key String', ('b'), ('c'), ('d')); select mapAdd((cast(['a', 'b'], 'Array(FixedString(1))'), [1, 1]), ([key], [1])) as res, toTypeName(res) from values('key FixedString(1)', ('b'), ('c'), ('d')); select mapAdd((cast(['a', 'b'], 'Array(LowCardinality(String))'), [1, 1]), ([key], [1])) from values('key String', ('b'), ('c'), ('d')); diff --git a/tests/queries/0_stateless/01318_map_add_map_subtract_on_map_type.sql 
b/tests/queries/0_stateless/01318_map_add_map_subtract_on_map_type.sql index 9f0f1cb0489..5a03232b469 100644 --- a/tests/queries/0_stateless/01318_map_add_map_subtract_on_map_type.sql +++ b/tests/queries/0_stateless/01318_map_add_map_subtract_on_map_type.sql @@ -3,8 +3,8 @@ set allow_experimental_map_type = 1; create table mapop_test engine=TinyLog() as (select map(1, toInt32(2), number, 2) as m from numbers(1, 10)); -- mapAdd -select mapAdd(map(1, 1)); -- { serverError 42 } -select mapAdd(map(1, 1), m) from mapop_test; -- { serverError 43 } +select mapAdd(map(1, 1)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select mapAdd(map(1, 1), m) from mapop_test; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select mapAdd(map(toUInt64(1), toInt32(1)), m) from mapop_test; select mapAdd(cast(m, 'Map(UInt8, UInt8)'), map(1, 1), map(2,2)) from mapop_test; @@ -29,7 +29,7 @@ select mapAdd(map(toInt256(1), toInt256(1), 2, 1), map(toInt256(1), toInt256(1), select mapAdd(map(1, toFloat32(1.1), 2, 1), map(1, 2.2, 2, 1)) as res, toTypeName(res); select mapAdd(map(1, toFloat64(1.1), 2, 1), map(1, 2.2, 2, 1)) as res, toTypeName(res); -select mapAdd(map(1, toFloat64(1.1), 2, 1), map(1, 1, 2, 1)) as res, toTypeName(res); -- { serverError 43 } +select mapAdd(map(1, toFloat64(1.1), 2, 1), map(1, 1, 2, 1)) as res, toTypeName(res); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select mapAdd(map('a', 1, 'b', 1), map(key, 1)) from values('key String', ('b'), ('c'), ('d')); select mapAdd(map(cast('a', 'FixedString(1)'), 1, 'b', 1), map(key, 1)) as res, toTypeName(res) from values('key String', ('b'), ('c'), ('d')); select mapAdd(map(cast('a', 'LowCardinality(String)'), 1, 'b', 1), map(key, 1)) from values('key String', ('b'), ('c'), ('d')); diff --git a/tests/queries/0_stateless/01318_map_populate_series.sql b/tests/queries/0_stateless/01318_map_populate_series.sql index f7fa8c81e8c..351ea87dc53 100644 --- a/tests/queries/0_stateless/01318_map_populate_series.sql +++ b/tests/queries/0_stateless/01318_map_populate_series.sql @@ -31,6 +31,6 @@ select mapPopulateSeries([toInt64(-10), 2], [toInt64(1), 1], toInt64(-5)) as res -- empty select mapPopulateSeries(cast([], 'Array(UInt8)'), cast([], 'Array(UInt8)'), 5); -select mapPopulateSeries(['1', '2'], [1, 1]) as res, toTypeName(res); -- { serverError 43 } -select mapPopulateSeries([1, 2, 3], [1, 1]) as res, toTypeName(res); -- { serverError 36 } -select mapPopulateSeries([1, 2], [1, 1, 1]) as res, toTypeName(res); -- { serverError 36 } +select mapPopulateSeries(['1', '2'], [1, 1]) as res, toTypeName(res); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select mapPopulateSeries([1, 2, 3], [1, 1]) as res, toTypeName(res); -- { serverError BAD_ARGUMENTS } +select mapPopulateSeries([1, 2], [1, 1, 1]) as res, toTypeName(res); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01319_optimize_skip_unused_shards_nesting.sql b/tests/queries/0_stateless/01319_optimize_skip_unused_shards_nesting.sql index f056b87cc8a..09e535e6a86 100644 --- a/tests/queries/0_stateless/01319_optimize_skip_unused_shards_nesting.sql +++ b/tests/queries/0_stateless/01319_optimize_skip_unused_shards_nesting.sql @@ -16,7 +16,7 @@ set force_optimize_skip_unused_shards=1; set force_optimize_skip_unused_shards_nesting=2; set optimize_skip_unused_shards_nesting=2; -select * from dist_01319 where key = 1; -- { serverError 507 } +select * from dist_01319 where key = 1; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } set force_optimize_skip_unused_shards_nesting=1; select * from dist_01319 
where key = 1; set force_optimize_skip_unused_shards_nesting=2; diff --git a/tests/queries/0_stateless/01320_optimize_skip_unused_shards_no_non_deterministic.sql b/tests/queries/0_stateless/01320_optimize_skip_unused_shards_no_non_deterministic.sql index 1cb36e1441d..778192d36be 100644 --- a/tests/queries/0_stateless/01320_optimize_skip_unused_shards_no_non_deterministic.sql +++ b/tests/queries/0_stateless/01320_optimize_skip_unused_shards_no_non_deterministic.sql @@ -9,7 +9,7 @@ create table dist_01320 as data_01320 Engine=Distributed(test_cluster_two_shards set optimize_skip_unused_shards=1; set force_optimize_skip_unused_shards=1; -select * from dist_01320 where key = 0; -- { serverError 507 } +select * from dist_01320 where key = 0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } drop table data_01320; drop table dist_01320; diff --git a/tests/queries/0_stateless/01323_bad_arg_in_arithmetic_operations.sql b/tests/queries/0_stateless/01323_bad_arg_in_arithmetic_operations.sql index 1c4bfc8f091..f362979b102 100644 --- a/tests/queries/0_stateless/01323_bad_arg_in_arithmetic_operations.sql +++ b/tests/queries/0_stateless/01323_bad_arg_in_arithmetic_operations.sql @@ -1,15 +1,15 @@ SET optimize_arithmetic_operations_in_aggregate_functions = 1; -SELECT max(multiply(1)); -- { serverError 42 } -SELECT min(multiply(2));-- { serverError 42 } -SELECT sum(multiply(3)); -- { serverError 42 } +SELECT max(multiply(1)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT min(multiply(2));-- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT sum(multiply(3)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -SELECT max(plus(1)); -- { serverError 42 } -SELECT min(plus(2)); -- { serverError 42 } -SELECT sum(plus(3)); -- { serverError 42 } +SELECT max(plus(1)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT min(plus(2)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT sum(plus(3)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -SELECT max(multiply()); -- { serverError 42 } -SELECT min(multiply(1, 2 ,3)); -- { serverError 42 } -SELECT sum(plus() + multiply()); -- { serverError 42 } +SELECT max(multiply()); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT min(multiply(1, 2 ,3)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT sum(plus() + multiply()); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -SELECT sum(plus(multiply(42, 3), multiply(42))); -- { serverError 42 } +SELECT sum(plus(multiply(42, 3), multiply(42))); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } diff --git a/tests/queries/0_stateless/01327_decimal_cut_extra_digits_after_point.sql b/tests/queries/0_stateless/01327_decimal_cut_extra_digits_after_point.sql index 4d456296d8f..df171b183f9 100644 --- a/tests/queries/0_stateless/01327_decimal_cut_extra_digits_after_point.sql +++ b/tests/queries/0_stateless/01327_decimal_cut_extra_digits_after_point.sql @@ -11,7 +11,7 @@ SELECT CAST('123456789123.1' AS Decimal(10, 5)); SELECT CAST('1234567891234.1' AS Decimal(10, 5)); SELECT CAST('1234567891234.12345111' AS Decimal(10, 5)); -- But it's just Decimal64, so there is the limit. -SELECT CAST('12345678912345.1' AS Decimal(10, 5)); -- { serverError 69 } +SELECT CAST('12345678912345.1' AS Decimal(10, 5)); -- { serverError ARGUMENT_OUT_OF_BOUND } -- The rounding may work in unexpected way: this is just integer rounding. 
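-- Note added for clarity, not part of the original test's assertions: Decimal(10, 5) is backed by Decimal64, which holds at most 18 significant decimal digits, so '1234567891234.12345111' (13 integer digits + 5 of scale = 18) still fits, while '12345678912345.1' (14 + 5 = 19) overflows and is rejected with ARGUMENT_OUT_OF_BOUND.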
-- We can improve it but here is the current behaviour: diff --git a/tests/queries/0_stateless/01329_compare_tuple_string_constant.sql b/tests/queries/0_stateless/01329_compare_tuple_string_constant.sql index 5878d8c4176..c56ffdd2019 100644 --- a/tests/queries/0_stateless/01329_compare_tuple_string_constant.sql +++ b/tests/queries/0_stateless/01329_compare_tuple_string_constant.sql @@ -1,4 +1,4 @@ -SELECT tuple(1) < ''; -- { serverError 27 } -SELECT tuple(1) < materialize(''); -- { serverError 386 } +SELECT tuple(1) < ''; -- { serverError CANNOT_PARSE_INPUT_ASSERTION_FAILED } +SELECT tuple(1) < materialize(''); -- { serverError NO_COMMON_TYPE } SELECT (1, 2) < '(1,3)'; SELECT (1, 2) < '(1, 1)'; diff --git a/tests/queries/0_stateless/01330_array_join_in_higher_order_function.sql b/tests/queries/0_stateless/01330_array_join_in_higher_order_function.sql index 456b24a03d0..7ac8945d02c 100644 --- a/tests/queries/0_stateless/01330_array_join_in_higher_order_function.sql +++ b/tests/queries/0_stateless/01330_array_join_in_higher_order_function.sql @@ -1 +1 @@ -SELECT arrayMap(x -> arrayJoin([x, 1]), [1, 2]); -- { serverError 36 } +SELECT arrayMap(x -> arrayJoin([x, 1]), [1, 2]); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01332_join_type_syntax_position.sql b/tests/queries/0_stateless/01332_join_type_syntax_position.sql index bb87c7eb425..b2ec0da1b8a 100644 --- a/tests/queries/0_stateless/01332_join_type_syntax_position.sql +++ b/tests/queries/0_stateless/01332_join_type_syntax_position.sql @@ -10,8 +10,8 @@ select * from numbers(1) t1 right semi join numbers(1) t2 using number; select * from numbers(1) t1 left anti join numbers(1) t2 using number; select * from numbers(1) t1 right anti join numbers(1) t2 using number; -select * from numbers(1) t1 asof join numbers(1) t2 using number; -- { serverError 62 } -select * from numbers(1) t1 left asof join numbers(1) t2 using number; -- { serverError 62 } +select * from numbers(1) t1 asof join numbers(1) t2 using number; -- { serverError SYNTAX_ERROR } +select * from numbers(1) t1 left asof join numbers(1) t2 using number; -- { serverError SYNTAX_ERROR } -- legacy @@ -27,5 +27,5 @@ select * from numbers(1) t1 semi right join numbers(1) t2 using number; select * from numbers(1) t1 anti left join numbers(1) t2 using number; select * from numbers(1) t1 anti right join numbers(1) t2 using number; -select * from numbers(1) t1 asof join numbers(1) t2 using number; -- { serverError 62 } -select * from numbers(1) t1 asof left join numbers(1) t2 using number; -- { serverError 62 } +select * from numbers(1) t1 asof join numbers(1) t2 using number; -- { serverError SYNTAX_ERROR } +select * from numbers(1) t1 asof left join numbers(1) t2 using number; -- { serverError SYNTAX_ERROR } diff --git a/tests/queries/0_stateless/01333_select_abc_asterisk.sql b/tests/queries/0_stateless/01333_select_abc_asterisk.sql index e59829131d6..78bf2eaff23 100644 --- a/tests/queries/0_stateless/01333_select_abc_asterisk.sql +++ b/tests/queries/0_stateless/01333_select_abc_asterisk.sql @@ -1,6 +1,6 @@ select *; --error: should be failed for abc.*; -select abc.*; --{serverError 47} -select *, abc.*; --{serverError 47} -select abc.*, *; --{serverError 47} +select abc.*; --{serverError UNKNOWN_IDENTIFIER} +select *, abc.*; --{serverError UNKNOWN_IDENTIFIER} +select abc.*, *; --{serverError UNKNOWN_IDENTIFIER} diff --git a/tests/queries/0_stateless/01340_datetime64_fpe.sql b/tests/queries/0_stateless/01340_datetime64_fpe.sql index 3e76e3164b1..3ceb465cb32 
100644 --- a/tests/queries/0_stateless/01340_datetime64_fpe.sql +++ b/tests/queries/0_stateless/01340_datetime64_fpe.sql @@ -1,7 +1,7 @@ -WITH toDateTime64('2019-09-16 19:20:12.3456789102019-09-16 19:20:12.345678910', 0) AS dt64 SELECT dt64; -- { serverError 6 } +WITH toDateTime64('2019-09-16 19:20:12.3456789102019-09-16 19:20:12.345678910', 0) AS dt64 SELECT dt64; -- { serverError CANNOT_PARSE_TEXT } SELECT toDateTime64('2011-11-11 11:11:11.1234567890123456789', 0); -SELECT toDateTime64('2011-11-11 11:11:11.-12345678901234567890', 0); -- { serverError 6 } +SELECT toDateTime64('2011-11-11 11:11:11.-12345678901234567890', 0); -- { serverError CANNOT_PARSE_TEXT } SELECT toDateTime64('2011-11-11 11:11:11.1', 0); @@ -26,46 +26,46 @@ SELECT toDateTime64('2011-11-11 11:11:11.1111111111111111111', 0); SELECT toDateTime64('2011-11-11 11:11:11.11111111111111111111', 0); SELECT toDateTime64('2011-11-11 11:11:11.111111111111111111111', 0); -SELECT toDateTime64('2011-11-11 11:11:11.-1', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-11', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-1111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-11111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-1111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-11111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-1111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-11111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-1111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-11111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-111111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-1111111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-11111111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-111111111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-1111111111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-11111111111111111111', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.-111111111111111111111', 0); -- { serverError 6 } +SELECT toDateTime64('2011-11-11 11:11:11.-1', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-11', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-1111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-11111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-1111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-11111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-1111111111', 0); 
-- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-11111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-1111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-11111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-111111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-1111111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-11111111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-111111111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-1111111111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-11111111111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.-111111111111111111111', 0); -- { serverError CANNOT_PARSE_TEXT } -SELECT toDateTime64('2011-11-11 11:11:11.+1', 0); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.++11', 10); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+111', 3); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.+++1111', 5); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.+11111', 7); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.+++++111111', 2); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.+1111111', 1); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.++++++11111111', 8); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.+111111111', 9); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.+++++++1111111111', 6); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.+11111111111', 4); -- { serverError 6 } -SELECT toDateTime64('2011-11-11 11:11:11.++++++++111111111111', 11); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+1111111111111', 15); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+++++++++11111111111111', 13); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+111111111111111', 12); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.++++++++++1111111111111111', 16); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+11111111111111111', 14); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+++++++++++111111111111111111', 15); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+1111111111111111111', 17); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.++++++++++++11111111111111111111', 19); -- { serverError 69 } -SELECT toDateTime64('2011-11-11 11:11:11.+111111111111111111111', 18); -- { serverError 69 } +SELECT toDateTime64('2011-11-11 11:11:11.+1', 0); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.++11', 10); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+111', 3); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.+++1111', 5); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.+11111', 7); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.+++++111111', 2); -- { serverError CANNOT_PARSE_TEXT } +SELECT 
toDateTime64('2011-11-11 11:11:11.+1111111', 1); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.++++++11111111', 8); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.+111111111', 9); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.+++++++1111111111', 6); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.+11111111111', 4); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDateTime64('2011-11-11 11:11:11.++++++++111111111111', 11); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+1111111111111', 15); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+++++++++11111111111111', 13); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+111111111111111', 12); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.++++++++++1111111111111111', 16); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+11111111111111111', 14); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+++++++++++111111111111111111', 15); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+1111111111111111111', 17); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.++++++++++++11111111111111111111', 19); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT toDateTime64('2011-11-11 11:11:11.+111111111111111111111', 18); -- { serverError ARGUMENT_OUT_OF_BOUND } diff --git a/tests/queries/0_stateless/01344_alter_enum_partition_key.sql b/tests/queries/0_stateless/01344_alter_enum_partition_key.sql index ce9d544f311..05dce6bec54 100644 --- a/tests/queries/0_stateless/01344_alter_enum_partition_key.sql +++ b/tests/queries/0_stateless/01344_alter_enum_partition_key.sql @@ -11,9 +11,9 @@ OPTIMIZE TABLE test FINAL; SELECT * FROM test ORDER BY x; SELECT name, partition, partition_id FROM system.parts WHERE database = currentDatabase() AND table = 'test' AND active ORDER BY partition; -ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2); -- { serverError 524 } +ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2, 'test' = 3); -ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2, 'goodbye' = 4); -- { serverError 524 } +ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2, 'goodbye' = 4); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } ALTER TABLE test MODIFY COLUMN x Int8; INSERT INTO test VALUES (111, 'abc'); @@ -21,16 +21,16 @@ OPTIMIZE TABLE test FINAL; SELECT * FROM test ORDER BY x; SELECT name, partition, partition_id FROM system.parts WHERE database = currentDatabase() AND table = 'test' AND active ORDER BY partition; -ALTER TABLE test MODIFY COLUMN x Enum8('' = 1); -- { serverError 524 } -ALTER TABLE test MODIFY COLUMN x Enum16('' = 1); -- { serverError 524 } +ALTER TABLE test MODIFY COLUMN x Enum8('' = 1); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } +ALTER TABLE test MODIFY COLUMN x Enum16('' = 1); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } -ALTER TABLE test MODIFY COLUMN x UInt64; -- { serverError 524 } -ALTER TABLE test MODIFY COLUMN x String; -- { serverError 524 } -ALTER TABLE test MODIFY COLUMN x Nullable(Int64); -- { serverError 524 } +ALTER TABLE test MODIFY COLUMN x UInt64; -- { serverError 
+ALTER TABLE test MODIFY COLUMN x UInt64; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test MODIFY COLUMN x String; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test MODIFY COLUMN x Nullable(Int64); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
-ALTER TABLE test RENAME COLUMN x TO z; -- { serverError 524 }
-ALTER TABLE test RENAME COLUMN y TO z; -- { serverError 524 }
-ALTER TABLE test DROP COLUMN x; -- { serverError 47 }
-ALTER TABLE test DROP COLUMN y; -- { serverError 47 }
+ALTER TABLE test RENAME COLUMN x TO z; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test RENAME COLUMN y TO z; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test DROP COLUMN x; -- { serverError UNKNOWN_IDENTIFIER }
+ALTER TABLE test DROP COLUMN y; -- { serverError UNKNOWN_IDENTIFIER }
 DROP TABLE test;
diff --git a/tests/queries/0_stateless/01346_alter_enum_partition_key_replicated_zookeeper_long.sql b/tests/queries/0_stateless/01346_alter_enum_partition_key_replicated_zookeeper_long.sql
index d40bcc15e55..cb253b59d79 100644
--- a/tests/queries/0_stateless/01346_alter_enum_partition_key_replicated_zookeeper_long.sql
+++ b/tests/queries/0_stateless/01346_alter_enum_partition_key_replicated_zookeeper_long.sql
@@ -25,10 +25,10 @@ SELECT * FROM test2 ORDER BY x;
 SELECT min_block_number, max_block_number, partition, partition_id FROM system.parts WHERE database = currentDatabase() AND table = 'test' AND active ORDER BY partition;
 SELECT min_block_number, max_block_number, partition, partition_id FROM system.parts WHERE database = currentDatabase() AND table = 'test2' AND active ORDER BY partition;
-ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2); -- { serverError 524 }
+ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2, 'test' = 3);
-ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2, 'goodbye' = 4); -- { serverError 524 }
+ALTER TABLE test MODIFY COLUMN x Enum('hello' = 1, 'world' = 2, 'goodbye' = 4); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 ALTER TABLE test MODIFY COLUMN x Int8;
 INSERT INTO test VALUES (111, 'abc');
@@ -39,17 +39,17 @@ SELECT * FROM test2 ORDER BY x;
 SELECT min_block_number, max_block_number, partition, partition_id FROM system.parts WHERE database = currentDatabase() AND table = 'test' AND active ORDER BY partition;
 SELECT min_block_number, max_block_number, partition, partition_id FROM system.parts WHERE database = currentDatabase() AND table = 'test2' AND active ORDER BY partition;
-ALTER TABLE test MODIFY COLUMN x Enum8('' = 1); -- { serverError 524 }
-ALTER TABLE test MODIFY COLUMN x Enum16('' = 1); -- { serverError 524 }
+ALTER TABLE test MODIFY COLUMN x Enum8('' = 1); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test MODIFY COLUMN x Enum16('' = 1); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
-ALTER TABLE test MODIFY COLUMN x UInt64; -- { serverError 524 }
-ALTER TABLE test MODIFY COLUMN x String; -- { serverError 524 }
-ALTER TABLE test MODIFY COLUMN x Nullable(Int64); -- { serverError 524 }
+ALTER TABLE test MODIFY COLUMN x UInt64; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test MODIFY COLUMN x String; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test MODIFY COLUMN x Nullable(Int64); -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
-ALTER TABLE test RENAME COLUMN x TO z; -- { serverError 524 }
-ALTER TABLE test RENAME COLUMN y TO z; -- { serverError 524 }
-ALTER TABLE test DROP COLUMN x; -- { serverError 47 }
-ALTER TABLE test DROP COLUMN y; -- { serverError 47 }
+ALTER TABLE test RENAME COLUMN x TO z; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test RENAME COLUMN y TO z; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
+ALTER TABLE test DROP COLUMN x; -- { serverError UNKNOWN_IDENTIFIER }
+ALTER TABLE test DROP COLUMN y; -- { serverError UNKNOWN_IDENTIFIER }
 DROP TABLE test SYNC;
 DROP TABLE test2 SYNC;
diff --git a/tests/queries/0_stateless/01350_intdiv_nontrivial_fpe.sql b/tests/queries/0_stateless/01350_intdiv_nontrivial_fpe.sql
index 29dfb2c3fda..9c5523e6f5d 100644
--- a/tests/queries/0_stateless/01350_intdiv_nontrivial_fpe.sql
+++ b/tests/queries/0_stateless/01350_intdiv_nontrivial_fpe.sql
@@ -1,5 +1,5 @@
 select intDiv(-9223372036854775808, 255);
 select intDiv(-9223372036854775808, 65535);
 select intDiv(-9223372036854775808, 4294967295);
-select intDiv(-9223372036854775808, 18446744073709551615); -- { serverError 153 }
-select intDiv(-9223372036854775808, -1); -- { serverError 153 }
+select intDiv(-9223372036854775808, 18446744073709551615); -- { serverError ILLEGAL_DIVISION }
+select intDiv(-9223372036854775808, -1); -- { serverError ILLEGAL_DIVISION }
diff --git a/tests/queries/0_stateless/01352_generate_random_overflow.sql b/tests/queries/0_stateless/01352_generate_random_overflow.sql
index d49f8cb2687..69180b6b13d 100644
--- a/tests/queries/0_stateless/01352_generate_random_overflow.sql
+++ b/tests/queries/0_stateless/01352_generate_random_overflow.sql
@@ -1 +1 @@
-SELECT i FROM generateRandom('i Array(Nullable(Enum8(\'hello\' = 1, \'world\' = 5)))', 1025, 65535, 9223372036854775807) LIMIT 10; -- { serverError 128 }
+SELECT i FROM generateRandom('i Array(Nullable(Enum8(\'hello\' = 1, \'world\' = 5)))', 1025, 65535, 9223372036854775807) LIMIT 10; -- { serverError TOO_LARGE_ARRAY_SIZE }
diff --git a/tests/queries/0_stateless/01353_neighbor_overflow.sql b/tests/queries/0_stateless/01353_neighbor_overflow.sql
index ac168cb3305..c55f5401dae 100644
--- a/tests/queries/0_stateless/01353_neighbor_overflow.sql
+++ b/tests/queries/0_stateless/01353_neighbor_overflow.sql
@@ -1,3 +1,3 @@
-SET allow_deprecated_functions = 1;
-SELECT neighbor(toString(number), -9223372036854775808) FROM numbers(100); -- { serverError 69 }
-WITH neighbor(toString(number), toInt64(rand64())) AS x SELECT * FROM system.numbers WHERE NOT ignore(x); -- { serverError 69 }
+SET allow_deprecated_error_prone_window_functions = 1;
+SELECT neighbor(toString(number), -9223372036854775808) FROM numbers(100); -- { serverError ARGUMENT_OUT_OF_BOUND }
+WITH neighbor(toString(number), toInt64(rand64())) AS x SELECT * FROM system.numbers WHERE NOT ignore(x); -- { serverError ARGUMENT_OUT_OF_BOUND }
diff --git a/tests/queries/0_stateless/01356_wrong_filter-type_bug.sql b/tests/queries/0_stateless/01356_wrong_filter-type_bug.sql
index b3f48967ba2..a9b79da0f23 100644
--- a/tests/queries/0_stateless/01356_wrong_filter-type_bug.sql
+++ b/tests/queries/0_stateless/01356_wrong_filter-type_bug.sql
@@ -3,7 +3,7 @@ drop table if exists t0;
 CREATE TABLE t0 (`c0` String, `c1` Int32 CODEC(NONE), `c2` Int32) ENGINE = MergeTree() ORDER BY tuple();
 insert into t0 values ('a', 1, 2);
-SELECT t0.c2, t0.c1, t0.c0 FROM t0 PREWHERE t0.c0 ORDER BY ((t0.c2)>=(t0.c1)), (((- (((t0.c0)>(t0.c0))))) IS NULL) FORMAT TabSeparatedWithNamesAndTypes; -- {serverError 59}
-SELECT t0.c2, t0.c1, t0.c0 FROM t0 WHERE t0.c0 ORDER BY ((t0.c2)>=(t0.c1)), (((- (((t0.c0)>(t0.c0))))) IS NULL) FORMAT TabSeparatedWithNamesAndTypes settings optimize_move_to_prewhere=0; -- {serverError 59}
+SELECT t0.c2, t0.c1, t0.c0 FROM t0 PREWHERE t0.c0 ORDER BY ((t0.c2)>=(t0.c1)), (((- (((t0.c0)>(t0.c0))))) IS NULL) FORMAT TabSeparatedWithNamesAndTypes; -- {serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER}
+SELECT t0.c2, t0.c1, t0.c0 FROM t0 WHERE t0.c0 ORDER BY ((t0.c2)>=(t0.c1)), (((- (((t0.c0)>(t0.c0))))) IS NULL) FORMAT TabSeparatedWithNamesAndTypes settings optimize_move_to_prewhere=0; -- {serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER}
 drop table if exists t0;
diff --git a/tests/queries/0_stateless/01358_constexpr_constraint.sql b/tests/queries/0_stateless/01358_constexpr_constraint.sql
index 4560ac47c42..280fd6bdc0c 100644
--- a/tests/queries/0_stateless/01358_constexpr_constraint.sql
+++ b/tests/queries/0_stateless/01358_constexpr_constraint.sql
@@ -9,4 +9,4 @@ insert into constrained values ('a');
 DROP TEMPORARY TABLE constrained;
 CREATE TEMPORARY TABLE constrained (x UInt8, CONSTRAINT bogus CHECK 0);
-INSERT INTO constrained VALUES (1); -- { serverError 469 }
+INSERT INTO constrained VALUES (1); -- { serverError VIOLATED_CONSTRAINT }
diff --git a/tests/queries/0_stateless/01372_remote_table_function_empty_table.sql b/tests/queries/0_stateless/01372_remote_table_function_empty_table.sql
index 55c9d3f63d3..b2ae15e6ec2 100644
--- a/tests/queries/0_stateless/01372_remote_table_function_empty_table.sql
+++ b/tests/queries/0_stateless/01372_remote_table_function_empty_table.sql
@@ -1,4 +1,4 @@
-SELECT * FROM remote('127..2', 'a.'); -- { serverError 62 }
+SELECT * FROM remote('127..2', 'a.'); -- { serverError SYNTAX_ERROR }
 -- Clear cache to avoid future errors in the logs
 SYSTEM DROP DNS CACHE
diff --git a/tests/queries/0_stateless/01373_summing_merge_tree_explicit_columns_definition.sql b/tests/queries/0_stateless/01373_summing_merge_tree_explicit_columns_definition.sql
index cc456b3a257..7b34f8c9d42 100644
--- a/tests/queries/0_stateless/01373_summing_merge_tree_explicit_columns_definition.sql
+++ b/tests/queries/0_stateless/01373_summing_merge_tree_explicit_columns_definition.sql
@@ -2,6 +2,6 @@ DROP TABLE IF EXISTS tt_error_1373;
 CREATE TABLE tt_error_1373
 ( a Int64, d Int64, val Int64 )
-ENGINE = SummingMergeTree((a, val)) PARTITION BY (a) ORDER BY (d); -- { serverError 36 }
+ENGINE = SummingMergeTree((a, val)) PARTITION BY (a) ORDER BY (d); -- { serverError BAD_ARGUMENTS }
 DROP TABLE IF EXISTS tt_error_1373;
\ No newline at end of file
diff --git a/tests/queries/0_stateless/01375_GROUP_BY_injective_elimination_dictGet_BAD_ARGUMENTS.sql b/tests/queries/0_stateless/01375_GROUP_BY_injective_elimination_dictGet_BAD_ARGUMENTS.sql
index 8ff9cd2b9f2..df228d4e8a6 100644
--- a/tests/queries/0_stateless/01375_GROUP_BY_injective_elimination_dictGet_BAD_ARGUMENTS.sql
+++ b/tests/queries/0_stateless/01375_GROUP_BY_injective_elimination_dictGet_BAD_ARGUMENTS.sql
@@ -1 +1 @@
-SELECT dictGetString(concat('default', '.countryId'), 'country', toUInt64(number)) AS country FROM numbers(2) GROUP BY country; -- { serverError 36 }
+SELECT dictGetString(concat('default', '.countryId'), 'country', toUInt64(number)) AS country FROM numbers(2) GROUP BY country; -- { serverError BAD_ARGUMENTS }
diff --git a/tests/queries/0_stateless/01376_GROUP_BY_injective_elimination_dictGet.sql b/tests/queries/0_stateless/01376_GROUP_BY_injective_elimination_dictGet.sql
index 5a070b443aa..a868b38b4d7 100644
--- a/tests/queries/0_stateless/01376_GROUP_BY_injective_elimination_dictGet.sql
+++ b/tests/queries/0_stateless/01376_GROUP_BY_injective_elimination_dictGet.sql
@@ -1,7 +1,7 @@
 -- Tags: no-parallel
 -- https://github.com/ClickHouse/ClickHouse/issues/11469
-SELECT dictGet('default.countryId', 'country', toUInt64(number)) AS country FROM numbers(2) GROUP BY country; -- { serverError 36 }
+SELECT dictGet('default.countryId', 'country', toUInt64(number)) AS country FROM numbers(2) GROUP BY country; -- { serverError BAD_ARGUMENTS }
 -- with real dictionary
diff --git a/tests/queries/0_stateless/01380_coded_delta_exception_code.sql b/tests/queries/0_stateless/01380_coded_delta_exception_code.sql
index f4b88a93904..5312a23c11f 100644
--- a/tests/queries/0_stateless/01380_coded_delta_exception_code.sql
+++ b/tests/queries/0_stateless/01380_coded_delta_exception_code.sql
@@ -1,6 +1,6 @@
-CREATE TABLE delta_codec_synthetic (`id` Decimal(38, 10) CODEC(Delta, ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 36 }
-CREATE TABLE delta_codec_synthetic (`id` Decimal(38, 10) CODEC(DoubleDelta, ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 36 }
-CREATE TABLE delta_codec_synthetic (`id` Decimal(38, 10) CODEC(Gorilla, ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 36 }
+CREATE TABLE delta_codec_synthetic (`id` Decimal(38, 10) CODEC(Delta, ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError BAD_ARGUMENTS }
+CREATE TABLE delta_codec_synthetic (`id` Decimal(38, 10) CODEC(DoubleDelta, ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError BAD_ARGUMENTS }
+CREATE TABLE delta_codec_synthetic (`id` Decimal(38, 10) CODEC(Gorilla, ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError BAD_ARGUMENTS }
 CREATE TABLE delta_codec_synthetic (`id` UInt64 CODEC(DoubleDelta(3), ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError ILLEGAL_CODEC_PARAMETER }
 CREATE TABLE delta_codec_synthetic (`id` UInt64 CODEC(Gorilla('hello, world'), ZSTD(22))) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError ILLEGAL_CODEC_PARAMETER }
diff --git a/tests/queries/0_stateless/01384_bloom_filter_bad_arguments.sql b/tests/queries/0_stateless/01384_bloom_filter_bad_arguments.sql
index 5b18f28883a..42379418e56 100644
--- a/tests/queries/0_stateless/01384_bloom_filter_bad_arguments.sql
+++ b/tests/queries/0_stateless/01384_bloom_filter_bad_arguments.sql
@@ -1,10 +1,10 @@
 DROP TABLE IF EXISTS test;
-create table test (a String, index a a type tokenbf_v1(0, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError 36 }
-create table test (a String, index a a type tokenbf_v1(2, 0, 0) granularity 1) engine MergeTree order by a; -- { serverError 36 }
-create table test (a String, index a a type tokenbf_v1(0, 1, 1) granularity 1) engine MergeTree order by a; -- { serverError 36 }
-create table test (a String, index a a type tokenbf_v1(1, 0, 1) granularity 1) engine MergeTree order by a; -- { serverError 36 }
+create table test (a String, index a a type tokenbf_v1(0, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError BAD_ARGUMENTS }
+create table test (a String, index a a type tokenbf_v1(2, 0, 0) granularity 1) engine MergeTree order by a; -- { serverError BAD_ARGUMENTS }
+create table test (a String, index a a type tokenbf_v1(0, 1, 1) granularity 1) engine MergeTree order by a; -- { serverError BAD_ARGUMENTS }
+create table test (a String, index a a type tokenbf_v1(1, 0, 1) granularity 1) engine MergeTree order by a; -- { serverError BAD_ARGUMENTS }
-create table test (a String, index a a type tokenbf_v1(0.1, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError 36 }
-create table test (a String, index a a type tokenbf_v1(-1, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError 36 }
-create table test (a String, index a a type tokenbf_v1(0xFFFFFFFF, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError 36 }
+create table test (a String, index a a type tokenbf_v1(0.1, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError BAD_ARGUMENTS }
+create table test (a String, index a a type tokenbf_v1(-1, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError BAD_ARGUMENTS }
+create table test (a String, index a a type tokenbf_v1(0xFFFFFFFF, 2, 0) granularity 1) engine MergeTree order by a; -- { serverError BAD_ARGUMENTS }
diff --git a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql
index b45b9c84b18..819b664bb4d 100644
--- a/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql
+++ b/tests/queries/0_stateless/01386_negative_float_constant_key_condition.sql
@@ -14,7 +14,7 @@ SETTINGS index_granularity = 8192;
 INSERT INTO t0 VALUES (0, 0);
 SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1524532316));
-SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1.0)); -- { serverError 70 }
+SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND -1.0)); -- { serverError CANNOT_CONVERT_TYPE }
 SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND inf));
 SELECT t0.c1 FROM t0 WHERE NOT (t0.c1 OR (t0.c0 AND nan));
diff --git a/tests/queries/0_stateless/01387_clear_column_default_depends.sql b/tests/queries/0_stateless/01387_clear_column_default_depends.sql
index 733daafa91c..30208b8d3d8 100644
--- a/tests/queries/0_stateless/01387_clear_column_default_depends.sql
+++ b/tests/queries/0_stateless/01387_clear_column_default_depends.sql
@@ -7,7 +7,7 @@ INSERT INTO test (x) VALUES (1), (2), (3);
 SELECT * FROM test ORDER BY x, y;
 ALTER TABLE test CLEAR COLUMN x;
 SELECT * FROM test ORDER BY x, y;
-ALTER TABLE test DROP COLUMN x; -- { serverError 44 }
+ALTER TABLE test DROP COLUMN x; -- { serverError ILLEGAL_COLUMN }
 DROP TABLE test;
 DROP TABLE IF EXISTS test;
@@ -16,7 +16,7 @@ INSERT INTO test (x) VALUES (1), (2), (3);
 SELECT x, y FROM test ORDER BY x, y;
 ALTER TABLE test CLEAR COLUMN x;
 SELECT x, y FROM test ORDER BY x, y;
-ALTER TABLE test DROP COLUMN x; -- { serverError 44 }
+ALTER TABLE test DROP COLUMN x; -- { serverError ILLEGAL_COLUMN }
 DROP TABLE test;
 DROP TABLE IF EXISTS test;
@@ -25,7 +25,7 @@ INSERT INTO test (x) VALUES (1), (2), (3);
 SELECT x, y FROM test ORDER BY x, y;
 ALTER TABLE test CLEAR COLUMN x;
 SELECT x, y FROM test ORDER BY x, y;
-ALTER TABLE test DROP COLUMN x; -- { serverError 44 }
+ALTER TABLE test DROP COLUMN x; -- { serverError ILLEGAL_COLUMN }
 DROP TABLE test;
diff --git a/tests/queries/0_stateless/01388_clear_all_columns.sql b/tests/queries/0_stateless/01388_clear_all_columns.sql
index c1f59efba83..cc395aa7fb4 100644
--- a/tests/queries/0_stateless/01388_clear_all_columns.sql
+++ b/tests/queries/0_stateless/01388_clear_all_columns.sql
@@ -3,7 +3,7 @@ DROP TABLE IF EXISTS test;
 CREATE TABLE test (x UInt8) ENGINE = MergeTree ORDER BY tuple();
 INSERT INTO test (x) VALUES (1), (2), (3);
-ALTER TABLE test CLEAR COLUMN x; --{serverError 36}
+ALTER TABLE test CLEAR COLUMN x; --{serverError BAD_ARGUMENTS}
 DROP TABLE test;
 DROP TABLE IF EXISTS test;
@@ -13,16 +13,16 @@ INSERT INTO test (x, y) VALUES (1, 1), (2, 2), (3, 3);
 ALTER TABLE test CLEAR COLUMN x;
-ALTER TABLE test CLEAR COLUMN x IN PARTITION ''; --{serverError 248}
-ALTER TABLE test CLEAR COLUMN x IN PARTITION 'asdasd'; --{serverError 248}
-ALTER TABLE test CLEAR COLUMN x IN PARTITION '123'; --{serverError 248}
+ALTER TABLE test CLEAR COLUMN x IN PARTITION ''; --{serverError INVALID_PARTITION_VALUE}
+ALTER TABLE test CLEAR COLUMN x IN PARTITION 'asdasd'; --{serverError INVALID_PARTITION_VALUE}
+ALTER TABLE test CLEAR COLUMN x IN PARTITION '123'; --{serverError INVALID_PARTITION_VALUE}
-ALTER TABLE test CLEAR COLUMN y; --{serverError 36}
+ALTER TABLE test CLEAR COLUMN y; --{serverError BAD_ARGUMENTS}
 ALTER TABLE test ADD COLUMN z String DEFAULT 'Hello';
 -- y is only real column in table
-ALTER TABLE test CLEAR COLUMN y; --{serverError 36}
+ALTER TABLE test CLEAR COLUMN y; --{serverError BAD_ARGUMENTS}
 ALTER TABLE test CLEAR COLUMN x;
 ALTER TABLE test CLEAR COLUMN z;
diff --git a/tests/queries/0_stateless/01397_in_bad_arguments.sql b/tests/queries/0_stateless/01397_in_bad_arguments.sql
index 4854abad091..a861ffa8f8b 100644
--- a/tests/queries/0_stateless/01397_in_bad_arguments.sql
+++ b/tests/queries/0_stateless/01397_in_bad_arguments.sql
@@ -1,4 +1,4 @@
-select in((1, 1, 1, 1)); -- { serverError 42 }
-select in(1); -- { serverError 42 }
-select in(); -- { serverError 42 }
-select in(1, 2, 3); -- { serverError 42 }
+select in((1, 1, 1, 1)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
+select in(1); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
+select in(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
+select in(1, 2, 3); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
diff --git a/tests/queries/0_stateless/01402_cast_nullable_string_to_enum.sql b/tests/queries/0_stateless/01402_cast_nullable_string_to_enum.sql
index 1d445412381..cf4d57e4300 100644
--- a/tests/queries/0_stateless/01402_cast_nullable_string_to_enum.sql
+++ b/tests/queries/0_stateless/01402_cast_nullable_string_to_enum.sql
@@ -5,9 +5,9 @@ SELECT CAST(CAST(NULL AS Nullable(String)) AS Nullable(Enum8('Hello' = 1)));
 SELECT CAST(CAST(NULL AS Nullable(FixedString(1))) AS Nullable(Enum8('Hello' = 1)));
 -- empty string still not acceptable
-SELECT CAST(CAST('' AS Nullable(String)) AS Nullable(Enum8('Hello' = 1))); -- { serverError 691 }
-SELECT CAST(CAST('' AS Nullable(FixedString(1))) AS Nullable(Enum8('Hello' = 1))); -- { serverError 691 }
+SELECT CAST(CAST('' AS Nullable(String)) AS Nullable(Enum8('Hello' = 1))); -- { serverError UNKNOWN_ELEMENT_OF_ENUM }
+SELECT CAST(CAST('' AS Nullable(FixedString(1))) AS Nullable(Enum8('Hello' = 1))); -- { serverError UNKNOWN_ELEMENT_OF_ENUM }
 -- non-Nullable Enum() still not acceptable
-SELECT CAST(CAST(NULL AS Nullable(String)) AS Enum8('Hello' = 1)); -- { serverError 349 }
-SELECT CAST(CAST(NULL AS Nullable(FixedString(1))) AS Enum8('Hello' = 1)); -- { serverError 349 }
+SELECT CAST(CAST(NULL AS Nullable(String)) AS Enum8('Hello' = 1)); -- { serverError CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN }
+SELECT CAST(CAST(NULL AS Nullable(FixedString(1))) AS Enum8('Hello' = 1)); -- { serverError CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN }
diff --git a/tests/queries/0_stateless/01404_roundUpToPowerOfTwoOrZero_safety.sql b/tests/queries/0_stateless/01404_roundUpToPowerOfTwoOrZero_safety.sql
index d61a35c9999..a60be1fa347 100644
--- a/tests/queries/0_stateless/01404_roundUpToPowerOfTwoOrZero_safety.sql
+++ b/tests/queries/0_stateless/01404_roundUpToPowerOfTwoOrZero_safety.sql
@@ -1,4 +1,4 @@
 -- repeat() with this length and this number of rows will allocation huge enough region (MSB set),
 -- which will cause roundUpToPowerOfTwoOrZero() returns 0 for such allocation (before the fix),
 -- and later repeat() will try to use this memory and will got SIGSEGV.
-SELECT repeat('0.0001048576', number * (number * (number * 255))) FROM numbers(65535); -- { serverError 131 }
+SELECT repeat('0.0001048576', number * (number * (number * 255))) FROM numbers(65535); -- { serverError TOO_LARGE_STRING_SIZE }
diff --git a/tests/queries/0_stateless/01407_lambda_arrayJoin.sql b/tests/queries/0_stateless/01407_lambda_arrayJoin.sql
index e1b8c1d5a76..050bacb7827 100644
--- a/tests/queries/0_stateless/01407_lambda_arrayJoin.sql
+++ b/tests/queries/0_stateless/01407_lambda_arrayJoin.sql
@@ -1,5 +1,5 @@
 SELECT arrayFilter((a) -> ((a, arrayJoin([])) IN (Null, [Null])), []);
 SELECT arrayFilter((a) -> ((a, arrayJoin([[]])) IN (Null, [Null])), []);
-SELECT * FROM system.one ARRAY JOIN arrayFilter((a) -> ((a, arrayJoin([])) IN (NULL)), []) AS arr_x; -- { serverError 43 }
+SELECT * FROM system.one ARRAY JOIN arrayFilter((a) -> ((a, arrayJoin([])) IN (NULL)), []) AS arr_x; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 SELECT * FROM numbers(1) LEFT ARRAY JOIN arrayFilter((x_0, x_1) -> (arrayJoin([]) IN (NULL)), [], []) AS arr_x;
diff --git a/tests/queries/0_stateless/01408_range_overflow.sql b/tests/queries/0_stateless/01408_range_overflow.sql
index d26507f8358..7c1b1d7a99e 100644
--- a/tests/queries/0_stateless/01408_range_overflow.sql
+++ b/tests/queries/0_stateless/01408_range_overflow.sql
@@ -1,7 +1,7 @@
 -- executeGeneric()
 SELECT range(1025, 1048576 + 9223372036854775807, 9223372036854775807);
 SELECT range(1025, 1048576 + (9223372036854775807 AS i), i);
-SELECT range(1025, 18446744073709551615, 1); -- { serverError 69 }
+SELECT range(1025, 18446744073709551615, 1); -- { serverError ARGUMENT_OUT_OF_BOUND }
 -- executeConstStep()
 SELECT range(number, 1048576 + 9223372036854775807, 9223372036854775807) FROM system.numbers LIMIT 1 OFFSET 1025;
diff --git a/tests/queries/0_stateless/01410_nullable_key_and_index.sql b/tests/queries/0_stateless/01410_nullable_key_and_index.sql
index 7c28a7a6e70..45c823480b8 100644
--- a/tests/queries/0_stateless/01410_nullable_key_and_index.sql
+++ b/tests/queries/0_stateless/01410_nullable_key_and_index.sql
@@ -68,10 +68,10 @@ SELECT * FROM xxxx_null WHERE ts > '2021-10-11 00:00:00';
 DROP TABLE xxxx_null;
 -- nullable keys are forbidden when `allow_nullable_key = 0`
-CREATE TABLE invalid_null (id Nullable(String)) ENGINE = MergeTree ORDER BY id; -- { serverError 44 }
-CREATE TABLE invalid_lc_null (id LowCardinality(Nullable(String))) ENGINE = MergeTree ORDER BY id; -- { serverError 44 }
-CREATE TABLE invalid_array_null (id Array(Nullable(String))) ENGINE = MergeTree ORDER BY id; -- { serverError 44 }
-CREATE TABLE invalid_tuple_null (id Tuple(Nullable(String), UInt8)) ENGINE = MergeTree ORDER BY id; -- { serverError 44 }
-CREATE TABLE invalid_map_null (id Map(UInt8, Nullable(String))) ENGINE = MergeTree ORDER BY id; -- { serverError 44 }
+CREATE TABLE invalid_null (id Nullable(String)) ENGINE = MergeTree ORDER BY id; -- { serverError ILLEGAL_COLUMN }
+CREATE TABLE invalid_lc_null (id LowCardinality(Nullable(String))) ENGINE = MergeTree ORDER BY id; -- { serverError ILLEGAL_COLUMN }
+CREATE TABLE invalid_array_null (id Array(Nullable(String))) ENGINE = MergeTree ORDER BY id; -- { serverError ILLEGAL_COLUMN }
+CREATE TABLE invalid_tuple_null (id Tuple(Nullable(String), UInt8)) ENGINE = MergeTree ORDER BY id; -- { serverError ILLEGAL_COLUMN }
+CREATE TABLE invalid_map_null (id Map(UInt8, Nullable(String))) ENGINE = MergeTree ORDER BY id; -- { serverError ILLEGAL_COLUMN }
 CREATE TABLE invalid_simple_agg_state_null (id SimpleAggregateFunction(sum, Nullable(UInt64))) ENGINE = MergeTree ORDER BY id; -- { serverError DATA_TYPE_CANNOT_BE_USED_IN_KEY }
 -- AggregateFunctions are not comparable and cannot be used in key expressions. No need to test it.
diff --git a/tests/queries/0_stateless/01412_group_array_moving_shard.sql b/tests/queries/0_stateless/01412_group_array_moving_shard.sql
index 25e29409f6d..642619dc1af 100644
--- a/tests/queries/0_stateless/01412_group_array_moving_shard.sql
+++ b/tests/queries/0_stateless/01412_group_array_moving_shard.sql
@@ -14,15 +14,15 @@ SELECT groupArrayMovingAvg(256)(toDecimal128(-1, 1)) FROM numbers(300);
 SELECT groupArrayMovingSum(10)(number) FROM numbers(100);
 SELECT groupArrayMovingSum(10)(1) FROM numbers(100);
-SELECT groupArrayMovingSum(0)(1) FROM numbers(100); -- { serverError 36 }
-SELECT groupArrayMovingSum(0.)(1) FROM numbers(100); -- { serverError 36 }
-SELECT groupArrayMovingSum(0.1)(1) FROM numbers(100); -- { serverError 36 }
-SELECT groupArrayMovingSum(0.1)(1) FROM remote('127.0.0.{1,2}', numbers(100)); -- { serverError 36 }
+SELECT groupArrayMovingSum(0)(1) FROM numbers(100); -- { serverError BAD_ARGUMENTS }
+SELECT groupArrayMovingSum(0.)(1) FROM numbers(100); -- { serverError BAD_ARGUMENTS }
+SELECT groupArrayMovingSum(0.1)(1) FROM numbers(100); -- { serverError BAD_ARGUMENTS }
+SELECT groupArrayMovingSum(0.1)(1) FROM remote('127.0.0.{1,2}', numbers(100)); -- { serverError BAD_ARGUMENTS }
 SELECT groupArrayMovingSum(256)(1) FROM remote('127.0.0.{1,2}', numbers(100));
 SELECT groupArrayMovingSum(256)(1) FROM remote('127.0.0.{1,2}', numbers(1000));
 SELECT toTypeName(groupArrayMovingSum(256)(-1)) FROM remote('127.0.0.{1,2}', numbers(1000));
 SELECT groupArrayMovingSum(256)(toDecimal32(1, 9)) FROM numbers(300);
-SELECT groupArrayMovingSum(256)(toDecimal32(1000000000, 1)) FROM numbers(300); -- { serverError 407 }
+SELECT groupArrayMovingSum(256)(toDecimal32(1000000000, 1)) FROM numbers(300); -- { serverError DECIMAL_OVERFLOW }
 SELECT groupArrayMovingSum(256)(toDecimal32(100000000, 1)) FROM numbers(300);
 SELECT groupArrayMovingSum(256)(toDecimal32(1, 1)) FROM numbers(300);
diff --git a/tests/queries/0_stateless/01413_allow_non_metadata_alters.sql b/tests/queries/0_stateless/01413_allow_non_metadata_alters.sql
index 6e876af0e33..86b11353308 100644
--- a/tests/queries/0_stateless/01413_allow_non_metadata_alters.sql
+++ b/tests/queries/0_stateless/01413_allow_non_metadata_alters.sql
@@ -14,19 +14,19 @@ ORDER BY tuple();
 SET allow_non_metadata_alters = 0;
-ALTER TABLE non_metadata_alters MODIFY COLUMN value3 UInt64; --{serverError 524}
+ALTER TABLE non_metadata_alters MODIFY COLUMN value3 UInt64; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
-ALTER TABLE non_metadata_alters MODIFY COLUMN value1 UInt32; --{serverError 524}
+ALTER TABLE non_metadata_alters MODIFY COLUMN value1 UInt32; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
-ALTER TABLE non_metadata_alters MODIFY COLUMN value4 Date; --{serverError 524}
+ALTER TABLE non_metadata_alters MODIFY COLUMN value4 Date; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
-ALTER TABLE non_metadata_alters DROP COLUMN value4; --{serverError 524}
+ALTER TABLE non_metadata_alters DROP COLUMN value4; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
-ALTER TABLE non_metadata_alters MODIFY COLUMN value2 Enum8('x' = 5, 'y' = 6); --{serverError 524}
+ALTER TABLE non_metadata_alters MODIFY COLUMN value2 Enum8('x' = 5, 'y' = 6); --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
-ALTER TABLE non_metadata_alters RENAME COLUMN value4 TO renamed_value4; --{serverError 524}
+ALTER TABLE non_metadata_alters RENAME COLUMN value4 TO renamed_value4; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
-ALTER TABLE non_metadata_alters MODIFY COLUMN value3 UInt16 TTL value5 + INTERVAL 5 DAY; --{serverError 524}
+ALTER TABLE non_metadata_alters MODIFY COLUMN value3 UInt16 TTL value5 + INTERVAL 5 DAY; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
 SET materialize_ttl_after_modify = 0;
diff --git a/tests/queries/0_stateless/01414_mutations_and_errors.sql b/tests/queries/0_stateless/01414_mutations_and_errors.sql
index af7eeb8b9ee..b9eabb43e87 100644
--- a/tests/queries/0_stateless/01414_mutations_and_errors.sql
+++ b/tests/queries/0_stateless/01414_mutations_and_errors.sql
@@ -16,9 +16,9 @@ INSERT INTO mutation_table SELECT toDate('2019-10-02'), number, 'Hello' FROM num
 SELECT distinct(value) FROM mutation_table ORDER BY value;
-ALTER TABLE mutation_table MODIFY COLUMN value UInt64 SETTINGS mutations_sync = 2; --{serverError 341}
+ALTER TABLE mutation_table MODIFY COLUMN value UInt64 SETTINGS mutations_sync = 2; --{serverError UNFINISHED}
-SELECT distinct(value) FROM mutation_table ORDER BY value; --{serverError 6}
+SELECT distinct(value) FROM mutation_table ORDER BY value; --{serverError CANNOT_PARSE_TEXT}
 KILL MUTATION where table = 'mutation_table' and database = currentDatabase();
diff --git a/tests/queries/0_stateless/01416_clear_column_pk.sql b/tests/queries/0_stateless/01416_clear_column_pk.sql
index a549d759130..794fb702b21 100644
--- a/tests/queries/0_stateless/01416_clear_column_pk.sql
+++ b/tests/queries/0_stateless/01416_clear_column_pk.sql
@@ -11,11 +11,11 @@ ORDER by (key1, key2);
 INSERT INTO table_with_pk_clear SELECT number, number * number, toString(number), toString(number * number) FROM numbers(1000);
-ALTER TABLE table_with_pk_clear CLEAR COLUMN key1 IN PARTITION tuple(); --{serverError 524}
+ALTER TABLE table_with_pk_clear CLEAR COLUMN key1 IN PARTITION tuple(); --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
 SELECT count(distinct key1) FROM table_with_pk_clear;
-ALTER TABLE table_with_pk_clear CLEAR COLUMN key2 IN PARTITION tuple(); --{serverError 524}
+ALTER TABLE table_with_pk_clear CLEAR COLUMN key2 IN PARTITION tuple(); --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
 SELECT count(distinct key2) FROM table_with_pk_clear;
diff --git a/tests/queries/0_stateless/01418_custom_settings.sql b/tests/queries/0_stateless/01418_custom_settings.sql
index be18f553589..f121dba0597 100644
--- a/tests/queries/0_stateless/01418_custom_settings.sql
+++ b/tests/queries/0_stateless/01418_custom_settings.sql
@@ -23,12 +23,12 @@ SELECT getSetting('custom_d') as v, toTypeName(v);
 SELECT name, value FROM system.settings WHERE name LIKE 'custom_%' ORDER BY name;
 SELECT '--- undefined setting ---';
-SELECT getSetting('custom_e') as v, toTypeName(v); -- { serverError 115 } -- Setting not found.
+SELECT getSetting('custom_e') as v, toTypeName(v); -- { serverError UNKNOWN_SETTING } -- Setting not found.
 SET custom_e = 404;
 SELECT getSetting('custom_e') as v, toTypeName(v);
 SELECT '--- wrong prefix ---';
-SET invalid_custom = 8; -- { serverError 115 } -- Setting is neither a builtin nor started with one of the registered prefixes for user-defined settings.
+SET invalid_custom = 8; -- { serverError UNKNOWN_SETTING } -- Setting is neither a builtin nor started with one of the registered prefixes for user-defined settings.
 SELECT '--- using query context ---';
 SELECT getSetting('custom_e') as v, toTypeName(v) SETTINGS custom_e = -0.333;
@@ -38,7 +38,7 @@ SELECT name, value FROM system.settings WHERE name = 'custom_e';
 SELECT getSetting('custom_f') as v, toTypeName(v) SETTINGS custom_f = 'word';
 SELECT name, value FROM system.settings WHERE name = 'custom_f' SETTINGS custom_f = 'word';
-SELECT getSetting('custom_f') as v, toTypeName(v); -- { serverError 115 } -- Setting not found.
+SELECT getSetting('custom_f') as v, toTypeName(v); -- { serverError UNKNOWN_SETTING } -- Setting not found.
 SELECT COUNT() FROM system.settings WHERE name = 'custom_f';
 SELECT '--- compound identifier ---';
diff --git a/tests/queries/0_stateless/01419_merge_tree_settings_sanity_check.sql b/tests/queries/0_stateless/01419_merge_tree_settings_sanity_check.sql
index 5655a8af3d6..915c9dae5fb 100644
--- a/tests/queries/0_stateless/01419_merge_tree_settings_sanity_check.sql
+++ b/tests/queries/0_stateless/01419_merge_tree_settings_sanity_check.sql
@@ -9,7 +9,7 @@ CREATE TABLE mytable_local
 ENGINE = MergeTree()
 PARTITION BY toYYYYMM(eventday)
 ORDER BY (eventday, user_id)
-SETTINGS number_of_free_entries_in_pool_to_execute_mutation = 100; -- { serverError 36 }
+SETTINGS number_of_free_entries_in_pool_to_execute_mutation = 100; -- { serverError BAD_ARGUMENTS }
 CREATE TABLE mytable_local
 (
@@ -20,7 +20,7 @@ CREATE TABLE mytable_local
 ENGINE = MergeTree()
 PARTITION BY toYYYYMM(eventday)
 ORDER BY (eventday, user_id)
-SETTINGS number_of_free_entries_in_pool_to_lower_max_size_of_merge = 100; -- { serverError 36 }
+SETTINGS number_of_free_entries_in_pool_to_lower_max_size_of_merge = 100; -- { serverError BAD_ARGUMENTS }
 CREATE TABLE mytable_local
 (
@@ -31,7 +31,7 @@ CREATE TABLE mytable_local
 ENGINE = MergeTree()
 PARTITION BY toYYYYMM(eventday)
 ORDER BY (eventday, user_id)
-SETTINGS number_of_free_entries_in_pool_to_execute_optimize_entire_partition = 100; -- { serverError 36 }
+SETTINGS number_of_free_entries_in_pool_to_execute_optimize_entire_partition = 100; -- { serverError BAD_ARGUMENTS }
 CREATE TABLE mytable_local
 (
@@ -43,6 +43,6 @@ ENGINE = MergeTree()
 PARTITION BY toYYYYMM(eventday)
 ORDER BY (eventday, user_id);
-ALTER TABLE mytable_local MODIFY SETTING number_of_free_entries_in_pool_to_execute_mutation = 100; -- { serverError 36 }
+ALTER TABLE mytable_local MODIFY SETTING number_of_free_entries_in_pool_to_execute_mutation = 100; -- { serverError BAD_ARGUMENTS }
 DROP TABLE mytable_local;
diff --git a/tests/queries/0_stateless/01421_assert_in_in.sql b/tests/queries/0_stateless/01421_assert_in_in.sql
index 73f712b4015..22fd2f07254 100644
--- a/tests/queries/0_stateless/01421_assert_in_in.sql
+++ b/tests/queries/0_stateless/01421_assert_in_in.sql
@@ -1 +1 @@
-SELECT (1, 2) IN ((1, (2, 3)), (1 + 1, 1)); -- { serverError 53 }
+SELECT (1, 2) IN ((1, (2, 3)), (1 + 1, 1)); -- { serverError TYPE_MISMATCH }
diff --git a/tests/queries/0_stateless/01422_map_skip_null.sql b/tests/queries/0_stateless/01422_map_skip_null.sql
index 683757a473b..bc632cb03e6 100644
--- a/tests/queries/0_stateless/01422_map_skip_null.sql
+++ b/tests/queries/0_stateless/01422_map_skip_null.sql
@@ -1,7 +1,7 @@
-select minMap(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError 43 }
-select maxMap(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError 43 }
-select sumMap(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError 43 }
-select sumMapWithOverflow(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError 43 }
+select minMap(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+select maxMap(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+select sumMap(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+select sumMapWithOverflow(arrayJoin([([1], [null]), ([1], [null])])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 select minMap(arrayJoin([([1, 2], [null, 11]), ([1, 2], [null, 22])]));
 select maxMap(arrayJoin([([1, 2], [null, 11]), ([1, 2], [null, 22])]));
diff --git a/tests/queries/0_stateless/01424_parse_date_time_bad_date.sql b/tests/queries/0_stateless/01424_parse_date_time_bad_date.sql
index 4606b773e60..897a208307c 100644
--- a/tests/queries/0_stateless/01424_parse_date_time_bad_date.sql
+++ b/tests/queries/0_stateless/01424_parse_date_time_bad_date.sql
@@ -1,2 +1,2 @@
-select parseDateTime64BestEffort('2.55'); -- { serverError 41 }
+select parseDateTime64BestEffort('2.55'); -- { serverError CANNOT_PARSE_DATETIME }
 select parseDateTime64BestEffortOrNull('2.55');
diff --git a/tests/queries/0_stateless/01425_decimal_parse_big_negative_exponent.sql b/tests/queries/0_stateless/01425_decimal_parse_big_negative_exponent.sql
index 1387206b882..085d015be3f 100644
--- a/tests/queries/0_stateless/01425_decimal_parse_big_negative_exponent.sql
+++ b/tests/queries/0_stateless/01425_decimal_parse_big_negative_exponent.sql
@@ -1,10 +1,10 @@
-SELECT '-1E9-1E9-1E9-1E9' AS x, toDecimal32(x, 0); -- { serverError 69 }
-SELECT '-1E9' AS x, toDecimal32(x, 0); -- { serverError 69 }
+SELECT '-1E9-1E9-1E9-1E9' AS x, toDecimal32(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND }
+SELECT '-1E9' AS x, toDecimal32(x, 0); -- { serverError ARGUMENT_OUT_OF_BOUND }
 SELECT '1E-9' AS x, toDecimal32(x, 0);
 SELECT '1E-8' AS x, toDecimal32(x, 0);
 SELECT '1E-7' AS x, toDecimal32(x, 0);
 SELECT '1e-7' AS x, toDecimal32(x, 0);
 SELECT '1E-9' AS x, toDecimal32(x, 9);
-SELECT '1E-9' AS x, toDecimal32(x, 10); -- { serverError 69 }
-SELECT '1E-10' AS x, toDecimal32(x, 10); -- { serverError 69 }
+SELECT '1E-9' AS x, toDecimal32(x, 10); -- { serverError ARGUMENT_OUT_OF_BOUND }
+SELECT '1E-10' AS x, toDecimal32(x, 10); -- { serverError ARGUMENT_OUT_OF_BOUND }
 SELECT '1E-10' AS x, toDecimal32(x, 9);
diff --git a/tests/queries/0_stateless/01428_h3_range_check.sql b/tests/queries/0_stateless/01428_h3_range_check.sql
index 4a16aa2dc38..c606638989f 100644
--- a/tests/queries/0_stateless/01428_h3_range_check.sql
+++ b/tests/queries/0_stateless/01428_h3_range_check.sql
@@ -1,4 +1,4 @@
 -- Tags: no-fasttest
-SELECT h3ToChildren(599405990164561919, 100); -- { serverError 69 }
-SELECT h3ToParent(599405990164561919, 100); -- { serverError 69 }
+SELECT h3ToChildren(599405990164561919, 100); -- { serverError ARGUMENT_OUT_OF_BOUND }
+SELECT h3ToParent(599405990164561919, 100); -- { serverError ARGUMENT_OUT_OF_BOUND }
diff --git a/tests/queries/0_stateless/01429_join_on_error_messages.sql b/tests/queries/0_stateless/01429_join_on_error_messages.sql
index b22d5259136..c123bdd6b38 100644
--- a/tests/queries/0_stateless/01429_join_on_error_messages.sql
+++ b/tests/queries/0_stateless/01429_join_on_error_messages.sql
@@ -1,23 +1,23 @@
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON (arrayJoin([1]) = B.b); -- { serverError 403 }
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON (A.a = arrayJoin([1])); -- { serverError 403 }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON (arrayJoin([1]) = B.b); -- { serverError INVALID_JOIN_ON_EXPRESSION }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON (A.a = arrayJoin([1])); -- { serverError INVALID_JOIN_ON_EXPRESSION }
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON equals(a); -- { serverError 42, 62 }
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON less(a); -- { serverError 42, 62 }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON equals(a); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH, SYNTAX_ERROR }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON less(a); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH, SYNTAX_ERROR }
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a > b; -- { serverError 403 }
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a < b; -- { serverError 403 }
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a >= b; -- { serverError 403 }
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a <= b; -- { serverError 403 }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a > b; -- { serverError INVALID_JOIN_ON_EXPRESSION }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a < b; -- { serverError INVALID_JOIN_ON_EXPRESSION }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a >= b; -- { serverError INVALID_JOIN_ON_EXPRESSION }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b) B ON a = b AND a <= b; -- { serverError INVALID_JOIN_ON_EXPRESSION }
 SET join_algorithm = 'partial_merge';
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b, 1 c) B ON a = b OR a = c; -- { serverError 48 }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b, 1 c) B ON a = b OR a = c; -- { serverError NOT_IMPLEMENTED }
 -- works for a = b OR a = b because of equivalent disjunct optimization
 SET join_algorithm = 'grace_hash';
-SELECT 1 FROM (select 1 a) A JOIN (select 1 b, 1 c) B ON a = b OR a = c; -- { serverError 48 }
+SELECT 1 FROM (select 1 a) A JOIN (select 1 b, 1 c) B ON a = b OR a = c; -- { serverError NOT_IMPLEMENTED }
 -- works for a = b OR a = b because of equivalent disjunct optimization
 SET join_algorithm = 'hash';
 -- conditions for different table joined via OR
-SELECT * FROM (SELECT 1 AS a, 1 AS b, 1 AS c) AS t1 INNER JOIN (SELECT 1 AS a, 1 AS b, 1 AS c) AS t2 ON t1.a = t2.a AND (t1.b > 0 OR t2.b > 0); -- { serverError 403 }
+SELECT * FROM (SELECT 1 AS a, 1 AS b, 1 AS c) AS t1 INNER JOIN (SELECT 1 AS a, 1 AS b, 1 AS c) AS t2 ON t1.a = t2.a AND (t1.b > 0 OR t2.b > 0); -- { serverError INVALID_JOIN_ON_EXPRESSION }
diff --git a/tests/queries/0_stateless/01430_modify_sample_by_zookeeper_long.sql b/tests/queries/0_stateless/01430_modify_sample_by_zookeeper_long.sql
index 752bc6b377f..b0e51f5df78 100644
--- a/tests/queries/0_stateless/01430_modify_sample_by_zookeeper_long.sql
+++ b/tests/queries/0_stateless/01430_modify_sample_by_zookeeper_long.sql
@@ -8,7 +8,7 @@ SET max_block_size = 10;
 CREATE TABLE modify_sample (d Date DEFAULT '2000-01-01', x UInt8) ENGINE = MergeTree PARTITION BY d ORDER BY x;
 INSERT INTO modify_sample (x) SELECT toUInt8(number) AS x FROM system.numbers LIMIT 256;
-SELECT count(), min(x), max(x), sum(x), uniqExact(x) FROM modify_sample SAMPLE 0.1; -- { serverError 141 }
+SELECT count(), min(x), max(x), sum(x), uniqExact(x) FROM modify_sample SAMPLE 0.1; -- { serverError SAMPLING_NOT_SUPPORTED }
 ALTER TABLE modify_sample MODIFY SAMPLE BY x;
 SELECT count(), min(x), max(x), sum(x), uniqExact(x) FROM modify_sample SAMPLE 0.1;
@@ -17,7 +17,7 @@ CREATE TABLE modify_sample_replicated (d Date DEFAULT '2000-01-01', x UInt8, y U
 INSERT INTO modify_sample_replicated (x, y) SELECT toUInt8(number) AS x, toUInt64(number) as y FROM system.numbers LIMIT 256;
-SELECT count(), min(x), max(x), sum(x), uniqExact(x) FROM modify_sample_replicated SAMPLE 0.1; -- { serverError 141 }
+SELECT count(), min(x), max(x), sum(x), uniqExact(x) FROM modify_sample_replicated SAMPLE 0.1; -- { serverError SAMPLING_NOT_SUPPORTED }
 ALTER TABLE modify_sample_replicated MODIFY SAMPLE BY x;
 SELECT count(), min(x), max(x), sum(x), uniqExact(x) FROM modify_sample_replicated SAMPLE 0.1;
@@ -27,7 +27,7 @@ ATTACH TABLE modify_sample_replicated;
 SELECT count(), min(x), max(x), sum(x), uniqExact(x) FROM modify_sample_replicated SAMPLE 0.1;
-ALTER TABLE modify_sample_replicated MODIFY SAMPLE BY d; -- { serverError 36 }
+ALTER TABLE modify_sample_replicated MODIFY SAMPLE BY d; -- { serverError BAD_ARGUMENTS }
 ALTER TABLE modify_sample_replicated MODIFY SAMPLE BY y;
 SELECT count(), min(y), max(y), sum(y), uniqExact(y) FROM modify_sample_replicated SAMPLE 0.1;
@@ -40,7 +40,7 @@ SELECT count(), min(y), max(y), sum(y), uniqExact(y) FROM modify_sample_replicat
 set allow_deprecated_syntax_for_merge_tree=1;
 CREATE TABLE modify_sample_old (d Date DEFAULT '2000-01-01', x UInt8, y UInt64) ENGINE = MergeTree(d, (x, y), 8192);
-ALTER TABLE modify_sample_old MODIFY SAMPLE BY x; -- { serverError 36 }
+ALTER TABLE modify_sample_old MODIFY SAMPLE BY x; -- { serverError BAD_ARGUMENTS }
 DROP TABLE modify_sample;
diff --git a/tests/queries/0_stateless/01435_lcm_overflow.sql b/tests/queries/0_stateless/01435_lcm_overflow.sql
index b069c0642bc..0b02a0c6b29 100644
--- a/tests/queries/0_stateless/01435_lcm_overflow.sql
+++ b/tests/queries/0_stateless/01435_lcm_overflow.sql
@@ -6,5 +6,5 @@ SELECT lcm(-15, -10);
 -- Implementation specific result on overflow:
 SELECT ignore(lcm(256, 9223372036854775807));
 SELECT ignore(lcm(256, -9223372036854775807));
-SELECT ignore(lcm(-256, 9223372036854775807)); -- { serverError 407 }
+SELECT ignore(lcm(-256, 9223372036854775807)); -- { serverError DECIMAL_OVERFLOW }
 SELECT ignore(lcm(-256, -9223372036854775807));
diff --git a/tests/queries/0_stateless/01440_big_int_least_greatest.sql b/tests/queries/0_stateless/01440_big_int_least_greatest.sql
index 5e77d6c07f6..e7153842c00 100644
--- a/tests/queries/0_stateless/01440_big_int_least_greatest.sql
+++ b/tests/queries/0_stateless/01440_big_int_least_greatest.sql
@@ -30,5 +30,5 @@ SELECT least(toUInt64('18446744073709551615'), toUInt256(0)) x, least(toUInt64(
 greatest(toUInt64('18446744073709551615'), toUInt256(0)) y, greatest(toUInt64('18446744073709551615'), toUInt256('18446744073709551616')) y2,
 toTypeName(x), toTypeName(y);
-SELECT least(toUInt32(0), toInt256(0)), greatest(toInt32(0), toUInt256(0)); -- { serverError 43 }
-SELECT least(toInt32(0), toUInt256(0)), greatest(toInt32(0), toUInt256(0)); -- { serverError 43 }
+SELECT least(toUInt32(0), toInt256(0)), greatest(toInt32(0), toUInt256(0)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT least(toInt32(0), toUInt256(0)), greatest(toInt32(0), toUInt256(0)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
diff --git a/tests/queries/0_stateless/01442_date_time_with_params.sql b/tests/queries/0_stateless/01442_date_time_with_params.sql
index 6b32ee4919d..4dc1f791465 100644
--- a/tests/queries/0_stateless/01442_date_time_with_params.sql
+++ b/tests/queries/0_stateless/01442_date_time_with_params.sql
@@ -13,7 +13,7 @@ SELECT CAST('2020-01-01 00:00:00', 'DateTime') AS a, toTypeName(a), CAST('2020-0
 SELECT toDateTime32('2020-01-01 00:00:00') AS a, toTypeName(a);
 SELECT 'parseDateTimeBestEffort';
-SELECT parseDateTimeBestEffort('', 3) AS a, toTypeName(a); -- {serverError 41}
+SELECT parseDateTimeBestEffort('', 3) AS a, toTypeName(a); -- {serverError CANNOT_PARSE_DATETIME}
 SELECT parseDateTimeBestEffort('2020-05-14T03:37:03', 3, 'UTC') AS a, toTypeName(a);
 SELECT parseDateTimeBestEffort('2020-05-14 03:37:03', 3, 'UTC') AS a, toTypeName(a);
 SELECT parseDateTimeBestEffort('2020-05-14 11:37:03 AM', 3, 'UTC') AS a, toTypeName(a);
@@ -67,7 +67,7 @@ SELECT parseDateTimeBestEffortOrZero('Dec 15, 2021', 'UTC') AS a, toTypeName(a);
 SELECT parseDateTimeBestEffortOrZero('Dec 15, 2021', 3, 'UTC') AS a, toTypeName(a);
 SELECT 'parseDateTime32BestEffort';
-SELECT parseDateTime32BestEffort('') AS a, toTypeName(a); -- {serverError 41}
+SELECT parseDateTime32BestEffort('') AS a, toTypeName(a); -- {serverError CANNOT_PARSE_DATETIME}
 SELECT parseDateTime32BestEffort('2020-05-14T03:37:03', 'UTC') AS a, toTypeName(a);
 SELECT parseDateTime32BestEffort('2020-05-14 03:37:03', 'UTC') AS a, toTypeName(a);
 SELECT parseDateTime32BestEffort('2020-05-14 11:37:03 AM', 'UTC') AS a, toTypeName(a);
diff --git a/tests/queries/0_stateless/01442_h3kring_range_check.sql b/tests/queries/0_stateless/01442_h3kring_range_check.sql
index ab8f69f345e..644cb256361 100644
--- a/tests/queries/0_stateless/01442_h3kring_range_check.sql
+++ b/tests/queries/0_stateless/01442_h3kring_range_check.sql
@@ -1,6 +1,6 @@
 -- Tags: no-fasttest
-SELECT h3kRing(581276613233082367, 65535); -- { serverError 12 }
-SELECT h3kRing(581276613233082367, -1); -- { serverError 43 }
+SELECT h3kRing(581276613233082367, 65535); -- { serverError PARAMETER_OUT_OF_BOUND }
+SELECT h3kRing(581276613233082367, -1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 SELECT length(h3kRing(111111111111, 1000));
-SELECT h3kRing(581276613233082367, nan); -- { serverError 43 }
+SELECT h3kRing(581276613233082367, nan); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
diff --git a/tests/queries/0_stateless/01451_detach_drop_part.sql b/tests/queries/0_stateless/01451_detach_drop_part.sql
index 4cbcd28af08..87188cebfa3 100644
--- a/tests/queries/0_stateless/01451_detach_drop_part.sql
+++ b/tests/queries/0_stateless/01451_detach_drop_part.sql
@@ -9,7 +9,7 @@ INSERT INTO mt_01451 VALUES (2);
 SELECT v FROM mt_01451 ORDER BY v;
-ALTER TABLE mt_01451 DETACH PART 'all_100_100_0'; -- { serverError 232 }
+ALTER TABLE mt_01451 DETACH PART 'all_100_100_0'; -- { serverError NO_SUCH_DATA_PART }
 ALTER TABLE mt_01451 DETACH PART 'all_2_2_0';
@@ -27,7 +27,7 @@ SELECT '-- drop part --';
 ALTER TABLE mt_01451 DROP PART 'all_4_4_0';
-ALTER TABLE mt_01451 ATTACH PART 'all_4_4_0'; -- { serverError 233 }
+ALTER TABLE mt_01451 ATTACH PART 'all_4_4_0'; -- { serverError BAD_DATA_PART_NAME }
 SELECT v FROM mt_01451 ORDER BY v;
diff --git a/tests/queries/0_stateless/01451_replicated_detach_drop_and_quorum_long.sql b/tests/queries/0_stateless/01451_replicated_detach_drop_and_quorum_long.sql
index 21b65995482..eaf6e442ce4 100644
--- a/tests/queries/0_stateless/01451_replicated_detach_drop_and_quorum_long.sql
+++ b/tests/queries/0_stateless/01451_replicated_detach_drop_and_quorum_long.sql
@@ -30,7 +30,7 @@ INSERT INTO replica2 SETTINGS insert_keeper_fault_injection_probability=0 VALUES
 SYSTEM SYNC REPLICA replica2;
-ALTER TABLE replica1 DETACH PART 'all_2_2_0'; --{serverError 48}
+ALTER TABLE replica1 DETACH PART 'all_2_2_0'; --{serverError NOT_IMPLEMENTED}
 SELECT name FROM system.parts WHERE table = 'replica1' and database = currentDatabase() and active = 1 ORDER BY name;
diff --git a/tests/queries/0_stateless/01451_replicated_detach_drop_part_long.sql b/tests/queries/0_stateless/01451_replicated_detach_drop_part_long.sql
index 25b2923ddd9..36f36dd3fc0 100644
--- a/tests/queries/0_stateless/01451_replicated_detach_drop_part_long.sql
+++ b/tests/queries/0_stateless/01451_replicated_detach_drop_part_long.sql
@@ -13,7 +13,7 @@ INSERT INTO replica1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES
 INSERT INTO replica1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (1);
 INSERT INTO replica1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (2);
-ALTER TABLE replica1 DETACH PART 'all_100_100_0'; -- { serverError 232 }
+ALTER TABLE replica1 DETACH PART 'all_100_100_0'; -- { serverError NO_SUCH_DATA_PART }
 SELECT v FROM replica1 ORDER BY v;
@@ -35,7 +35,7 @@ SELECT '-- drop part --';
 ALTER TABLE replica1 DROP PART 'all_3_3_0';
-ALTER TABLE replica1 ATTACH PART 'all_3_3_0'; -- { serverError 233 }
+ALTER TABLE replica1 ATTACH PART 'all_3_3_0'; -- { serverError BAD_DATA_PART_NAME }
 SELECT v FROM replica1 ORDER BY v;
diff --git a/tests/queries/0_stateless/01455_default_compression.sql b/tests/queries/0_stateless/01455_default_compression.sql
index 5d035197702..099e419bd07 100644
--- a/tests/queries/0_stateless/01455_default_compression.sql
+++ b/tests/queries/0_stateless/01455_default_compression.sql
@@ -24,6 +24,6 @@ DESCRIBE TABLE compress_table;
 SHOW CREATE TABLE compress_table;
-ALTER TABLE compress_table MODIFY COLUMN value2 CODEC(Default(5)); --{serverError 36}
+ALTER TABLE compress_table MODIFY COLUMN value2 CODEC(Default(5)); --{serverError BAD_ARGUMENTS}
 DROP TABLE IF EXISTS compress_table;
diff --git a/tests/queries/0_stateless/01455_optimize_trivial_insert_select.sql b/tests/queries/0_stateless/01455_optimize_trivial_insert_select.sql
index 466c9aa3707..09a93d94dc3 100644
--- a/tests/queries/0_stateless/01455_optimize_trivial_insert_select.sql
+++ b/tests/queries/0_stateless/01455_optimize_trivial_insert_select.sql
@@ -1,5 +1,5 @@
 SET max_insert_threads = 1, max_threads = 100, min_insert_block_size_rows = 1048576, max_block_size = 65536;
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 DROP TABLE IF EXISTS t;
 CREATE TABLE t (x UInt64) ENGINE = StripeLog;
 -- For trivial INSERT SELECT, max_threads is lowered to max_insert_threads and max_block_size is changed to min_insert_block_size_rows.
diff --git a/tests/queries/0_stateless/01455_shard_leaf_max_rows_bytes_to_read.sql b/tests/queries/0_stateless/01455_shard_leaf_max_rows_bytes_to_read.sql
index 620daeb9f35..ea07b545af6 100644
--- a/tests/queries/0_stateless/01455_shard_leaf_max_rows_bytes_to_read.sql
+++ b/tests/queries/0_stateless/01455_shard_leaf_max_rows_bytes_to_read.sql
@@ -7,13 +7,13 @@
 -- read, and leaf limit will fail.
 SET prefer_localhost_replica=0;
-SELECT count() FROM (SELECT * FROM remote('127.0.0.1', system.numbers) LIMIT 100) SETTINGS max_rows_to_read_leaf=1; -- { serverError 158 }
-SELECT count() FROM (SELECT * FROM remote('127.0.0.1', system.numbers) LIMIT 100) SETTINGS max_bytes_to_read_leaf=1; -- { serverError 307 }
+SELECT count() FROM (SELECT * FROM remote('127.0.0.1', system.numbers) LIMIT 100) SETTINGS max_rows_to_read_leaf=1; -- { serverError TOO_MANY_ROWS }
+SELECT count() FROM (SELECT * FROM remote('127.0.0.1', system.numbers) LIMIT 100) SETTINGS max_bytes_to_read_leaf=1; -- { serverError TOO_MANY_BYTES }
 SELECT count() FROM (SELECT * FROM remote('127.0.0.1', system.numbers) LIMIT 100) SETTINGS max_rows_to_read_leaf=100;
 SELECT count() FROM (SELECT * FROM remote('127.0.0.1', system.numbers) LIMIT 100) SETTINGS max_bytes_to_read_leaf=1000;
-SELECT count() FROM (SELECT * FROM remote('127.0.0.2', system.numbers) LIMIT 100) SETTINGS max_rows_to_read_leaf=1; -- { serverError 158 }
-SELECT count() FROM (SELECT * FROM remote('127.0.0.2', system.numbers) LIMIT 100) SETTINGS max_bytes_to_read_leaf=1; -- { serverError 307 }
+SELECT count() FROM (SELECT * FROM remote('127.0.0.2', system.numbers) LIMIT 100) SETTINGS max_rows_to_read_leaf=1; -- { serverError TOO_MANY_ROWS }
+SELECT count() FROM (SELECT * FROM remote('127.0.0.2', system.numbers) LIMIT 100) SETTINGS max_bytes_to_read_leaf=1; -- { serverError TOO_MANY_BYTES }
 SELECT count() FROM (SELECT * FROM remote('127.0.0.2', system.numbers) LIMIT 100) SETTINGS max_rows_to_read_leaf=100;
 SELECT count() FROM (SELECT * FROM remote('127.0.0.2', system.numbers) LIMIT 100) SETTINGS max_bytes_to_read_leaf=1000;
@@ -26,13 +26,13 @@ CREATE TABLE test_distributed AS test_local ENGINE = Distributed(test_cluster_tw
 INSERT INTO test_local SELECT '2000-08-01', number as value from numbers(50000);
-SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_rows_to_read_leaf = 40000; -- { serverError 158 }
-SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_bytes_to_read_leaf = 40000; -- { serverError 307 }
+SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_rows_to_read_leaf = 40000; -- { serverError TOO_MANY_ROWS }
+SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_bytes_to_read_leaf = 40000; -- { serverError TOO_MANY_BYTES }
-SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_rows_to_read = 60000; -- { serverError 158 }
+SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_rows_to_read = 60000; -- { serverError TOO_MANY_ROWS }
 SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_rows_to_read_leaf = 60000;
-SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_bytes_to_read = 100000; -- { serverError 307 }
+SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_bytes_to_read = 100000; -- { serverError TOO_MANY_BYTES }
 SELECT count() FROM (SELECT * FROM test_distributed) SETTINGS max_bytes_to_read_leaf = 100000;
 DROP TABLE IF EXISTS test_local;
diff --git a/tests/queries/0_stateless/01457_min_index_granularity_bytes_setting.sql b/tests/queries/0_stateless/01457_min_index_granularity_bytes_setting.sql
index 1846fa6640b..4f5fcccd1cf 100644
--- a/tests/queries/0_stateless/01457_min_index_granularity_bytes_setting.sql
+++ b/tests/queries/0_stateless/01457_min_index_granularity_bytes_setting.sql
@@ -5,7 +5,7 @@ CREATE TABLE invalid_min_index_granularity_bytes_setting
 id UInt64,
 value String
 ) ENGINE MergeTree()
-ORDER BY id SETTINGS index_granularity_bytes = 1, min_index_granularity_bytes = 1024; -- { serverError 36 }
+ORDER BY id SETTINGS index_granularity_bytes = 1, min_index_granularity_bytes = 1024; -- { serverError BAD_ARGUMENTS }
 DROP TABLE IF EXISTS valid_min_index_granularity_bytes_setting;
diff --git a/tests/queries/0_stateless/01459_default_value_of_argument_type_nullptr_dereference.sql b/tests/queries/0_stateless/01459_default_value_of_argument_type_nullptr_dereference.sql
index 50b95ae7177..ab5c3e4a314 100644
--- a/tests/queries/0_stateless/01459_default_value_of_argument_type_nullptr_dereference.sql
+++ b/tests/queries/0_stateless/01459_default_value_of_argument_type_nullptr_dereference.sql
@@ -1 +1 @@
-SELECT defaultValueOfTypeName(FQDN()); -- { serverError 44 }
+SELECT defaultValueOfTypeName(FQDN()); -- { serverError ILLEGAL_COLUMN }
diff --git a/tests/queries/0_stateless/01461_alter_table_function.sql b/tests/queries/0_stateless/01461_alter_table_function.sql
index 11f643f1e3e..95f488c37b5 100644
--- a/tests/queries/0_stateless/01461_alter_table_function.sql
+++ b/tests/queries/0_stateless/01461_alter_table_function.sql
@@ -14,7 +14,7 @@ CREATE TABLE table_from_numbers AS numbers(1000);
 SHOW CREATE TABLE table_from_numbers;
-ALTER TABLE table_from_numbers ADD COLUMN col UInt8; --{serverError 48}
+ALTER TABLE table_from_numbers ADD COLUMN col UInt8; --{serverError NOT_IMPLEMENTED}
 SHOW CREATE TABLE table_from_numbers;
diff --git a/tests/queries/0_stateless/01462_test_codec_on_alias.sql b/tests/queries/0_stateless/01462_test_codec_on_alias.sql
index 06a82c61b9e..b09dfa50b48 100644
--- a/tests/queries/0_stateless/01462_test_codec_on_alias.sql
+++ b/tests/queries/0_stateless/01462_test_codec_on_alias.sql
@@ -5,7 +5,7 @@ select 'create table compression_codec_on_alias with CODEC on ALIAS type';
 CREATE TABLE compression_codec_on_alias (
 `c0` ALIAS c1 CODEC(ZSTD),
 c1 UInt64
-) ENGINE = MergeTree() PARTITION BY c0 ORDER BY c1; -- { serverError 36 }
+) ENGINE = MergeTree() PARTITION BY c0 ORDER BY c1; -- { serverError BAD_ARGUMENTS }
 select 'create table compression_codec_on_alias with proper CODEC';
@@ -16,7 +16,7 @@ CREATE TABLE compression_codec_on_alias (
 select 'alter table compression_codec_on_alias add column (ALIAS type) with CODEC';
-ALTER TABLE compression_codec_on_alias ADD COLUMN `c3` ALIAS c2 CODEC(ZSTD) AFTER c2; -- { serverError 36 }
+ALTER TABLE compression_codec_on_alias ADD COLUMN `c3` ALIAS c2 CODEC(ZSTD) AFTER c2; -- { serverError BAD_ARGUMENTS }
 select 'alter table compression_codec_on_alias add column (NOT ALIAS type) with CODEC';
diff --git a/tests/queries/0_stateless/01463_resample_overflow.sql b/tests/queries/0_stateless/01463_resample_overflow.sql
index 298f852ed14..872f4662851 100644
--- a/tests/queries/0_stateless/01463_resample_overflow.sql
+++ b/tests/queries/0_stateless/01463_resample_overflow.sql
@@ -1 +1 @@
-select groupArrayResample(-9223372036854775808, 9223372036854775807, 9223372036854775807)(number, toInt64(number)) FROM numbers(7); -- { serverError 69 }
+select groupArrayResample(-9223372036854775808, 9223372036854775807, 9223372036854775807)(number, toInt64(number)) FROM numbers(7); -- { serverError ARGUMENT_OUT_OF_BOUND }
diff --git a/tests/queries/0_stateless/01470_columns_transformers.sql b/tests/queries/0_stateless/01470_columns_transformers.sql
index 1490dabdcec..021582dc05a 100644
--- a/tests/queries/0_stateless/01470_columns_transformers.sql
+++ b/tests/queries/0_stateless/01470_columns_transformers.sql
diff --git a/tests/queries/0_stateless/01463_resample_overflow.sql b/tests/queries/0_stateless/01463_resample_overflow.sql
index 298f852ed14..872f4662851 100644
--- a/tests/queries/0_stateless/01463_resample_overflow.sql
+++ b/tests/queries/0_stateless/01463_resample_overflow.sql
@@ -1 +1 @@
-select groupArrayResample(-9223372036854775808, 9223372036854775807, 9223372036854775807)(number, toInt64(number)) FROM numbers(7); -- { serverError 69 }
+select groupArrayResample(-9223372036854775808, 9223372036854775807, 9223372036854775807)(number, toInt64(number)) FROM numbers(7); -- { serverError ARGUMENT_OUT_OF_BOUND }
diff --git a/tests/queries/0_stateless/01470_columns_transformers.sql b/tests/queries/0_stateless/01470_columns_transformers.sql
index 1490dabdcec..021582dc05a 100644
--- a/tests/queries/0_stateless/01470_columns_transformers.sql
+++ b/tests/queries/0_stateless/01470_columns_transformers.sql
@@ -16,13 +16,13 @@ SELECT a.* APPLY(toDate) EXCEPT(i, j) APPLY(any) from columns_transformers a;
 SELECT * EXCEPT STRICT i from columns_transformers;
 SELECT * EXCEPT STRICT (i, j) from columns_transformers;
-SELECT * EXCEPT STRICT i, j1 from columns_transformers; -- { serverError 47 }
+SELECT * EXCEPT STRICT i, j1 from columns_transformers; -- { serverError UNKNOWN_IDENTIFIER }
 SELECT * EXCEPT STRICT(i, j1) from columns_transformers; -- { serverError NO_SUCH_COLUMN_IN_TABLE, BAD_ARGUMENTS }
 SELECT * REPLACE STRICT i + 1 AS i from columns_transformers;
 SELECT * REPLACE STRICT(i + 1 AS col) from columns_transformers; -- { serverError NO_SUCH_COLUMN_IN_TABLE, BAD_ARGUMENTS }
 SELECT * REPLACE(i + 1 AS i) APPLY(sum) from columns_transformers;
 SELECT columns_transformers.* REPLACE(j + 2 AS j, i + 1 AS i) APPLY(avg) from columns_transformers;
-SELECT columns_transformers.* REPLACE(j + 1 AS j, j + 2 AS j) APPLY(avg) from columns_transformers; -- { serverError 43 }
+SELECT columns_transformers.* REPLACE(j + 1 AS j, j + 2 AS j) APPLY(avg) from columns_transformers; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 -- REPLACE after APPLY will not match anything
 SELECT a.* APPLY(toDate) REPLACE(i + 1 AS i) APPLY(any) from columns_transformers a;
 SELECT a.* APPLY(toDate) REPLACE STRICT(i + 1 AS i) APPLY(any) from columns_transformers a; -- { serverError NO_SUCH_COLUMN_IN_TABLE, BAD_ARGUMENTS }
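Column transformers compose left to right, and STRICT only decides whether a name that matches nothing is an error or a silent no-op. A brief illustrative sketch, assuming a hypothetical table t with columns i and j:

SELECT * EXCEPT (j) APPLY (max) FROM t;           -- max(i)
SELECT * REPLACE (i + 1 AS i) APPLY (sum) FROM t; -- sum(i + 1), sum(j)
SELECT * EXCEPT STRICT (k) FROM t;                -- error: there is no column k to exclude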
diff --git a/tests/queries/0_stateless/01470_test_insert_select_asterisk.sql b/tests/queries/0_stateless/01470_test_insert_select_asterisk.sql
index 607b8a25f82..815ebd761d3 100644
--- a/tests/queries/0_stateless/01470_test_insert_select_asterisk.sql
+++ b/tests/queries/0_stateless/01470_test_insert_select_asterisk.sql
@@ -10,7 +10,7 @@ INSERT INTO insert_select_src VALUES (1, 2), (3, 4);
 INSERT INTO insert_select_dst(* EXCEPT (middle_a, middle_b)) SELECT * FROM insert_select_src;
 INSERT INTO insert_select_dst(insert_select_dst.* EXCEPT (middle_a, middle_b)) SELECT * FROM insert_select_src;
 INSERT INTO insert_select_dst(COLUMNS('.*') EXCEPT (middle_a, middle_b)) SELECT * FROM insert_select_src;
-INSERT INTO insert_select_dst(insert_select_src.* EXCEPT (middle_a, middle_b)) SELECT * FROM insert_select_src; -- { serverError 47 }
+INSERT INTO insert_select_dst(insert_select_src.* EXCEPT (middle_a, middle_b)) SELECT * FROM insert_select_src; -- { serverError UNKNOWN_IDENTIFIER }

 SELECT * FROM insert_select_dst;
diff --git a/tests/queries/0_stateless/01471_top_k_range_check.sql b/tests/queries/0_stateless/01471_top_k_range_check.sql
index 1e7ac04bbc5..ea4990c3281 100644
--- a/tests/queries/0_stateless/01471_top_k_range_check.sql
+++ b/tests/queries/0_stateless/01471_top_k_range_check.sql
@@ -1 +1 @@
-SELECT length(topKWeighted(2, -9223372036854775808)(number, 1025)) FROM system.numbers; -- { serverError 69 }
+SELECT length(topKWeighted(2, -9223372036854775808)(number, 1025)) FROM system.numbers; -- { serverError ARGUMENT_OUT_OF_BOUND }
diff --git a/tests/queries/0_stateless/01472_toBoundsOfInterval_disallow_empty_tz_field.sql b/tests/queries/0_stateless/01472_toBoundsOfInterval_disallow_empty_tz_field.sql
index 47e81653b25..78f7c5b21b3 100644
--- a/tests/queries/0_stateless/01472_toBoundsOfInterval_disallow_empty_tz_field.sql
+++ b/tests/queries/0_stateless/01472_toBoundsOfInterval_disallow_empty_tz_field.sql
@@ -1,40 +1,40 @@
-SELECT toStartOfDay(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfDay(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfDay(toDateTime('2017-12-31 03:45:00', 'UTC'), 'UTC'); -- success
-SELECT toMonday(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toMonday(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toMonday(toDateTime('2017-12-31 00:00:00', 'UTC'), 'UTC'); -- success
-SELECT toStartOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 0, ''); -- {serverError 43}
+SELECT toStartOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 0, ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 0, 'UTC'); -- success
-SELECT toStartOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 1, ''); -- {serverError 43}
+SELECT toStartOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 1, ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 1, 'UTC'); -- success
-SELECT toLastDayOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 0, ''); -- {serverError 43}
+SELECT toLastDayOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 0, ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toLastDayOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 0, 'UTC'); -- success
-SELECT toLastDayOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 1, ''); -- {serverError 43}
+SELECT toLastDayOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 1, ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toLastDayOfWeek(toDateTime('2017-12-31 00:00:00', 'UTC'), 1, 'UTC'); -- success
-SELECT toStartOfMonth(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfMonth(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfMonth(toDateTime('2017-12-31 00:00:00', 'UTC'), 'UTC'); -- success
-SELECT toStartOfQuarter(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfQuarter(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfQuarter(toDateTime('2017-12-31 00:00:00', 'UTC'), 'UTC'); -- success
-SELECT toStartOfYear(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfYear(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfYear(toDateTime('2017-12-31 00:00:00', 'UTC'), 'UTC'); -- success
-SELECT toStartOfTenMinutes(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfTenMinutes(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfTenMinutes(toDateTime('2017-12-31 05:12:30', 'UTC'), 'UTC'); -- success
-SELECT toStartOfFifteenMinutes(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfFifteenMinutes(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfFifteenMinutes(toDateTime('2017-12-31 01:17:00', 'UTC'), 'UTC'); -- success
-SELECT toStartOfHour(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfHour(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfHour(toDateTime('2017-12-31 01:59:00', 'UTC'), 'UTC'); -- success
-SELECT toStartOfMinute(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError 43}
+SELECT toStartOfMinute(toDateTime('2017-12-31 00:00:00', 'UTC'), ''); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
 SELECT toStartOfMinute(toDateTime('2017-12-31 00:01:30', 'UTC'), 'UTC'); -- success
 -- special case - allow empty time_zone when using functions like today(), yesterday() etc.
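The pattern in this file is uniform: the trailing time_zone argument of these rounding functions must be a non-empty constant string, and an empty literal is rejected as an illegal argument type rather than silently falling back to the server time zone. Condensed to one pair, taken straight from the test above:

SELECT toStartOfHour(toDateTime('2017-12-31 01:59:00', 'UTC'), 'UTC'); -- explicit zone, succeeds
SELECT toStartOfHour(toDateTime('2017-12-31 01:59:00', 'UTC'), '');    -- empty zone, ILLEGAL_TYPE_OF_ARGUMENT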
diff --git a/tests/queries/0_stateless/01474_bad_global_join.sql b/tests/queries/0_stateless/01474_bad_global_join.sql
index 622e14e6f22..a6f28f7d330 100644
--- a/tests/queries/0_stateless/01474_bad_global_join.sql
+++ b/tests/queries/0_stateless/01474_bad_global_join.sql
@@ -10,7 +10,7 @@ INSERT INTO local_table SELECT number AS id, toString(number) AS val FROM number
 CREATE TABLE dist_table AS local_table ENGINE = Distributed('test_cluster_two_shards_localhost', currentDatabase(), local_table);

-SELECT uniq(d.val) FROM dist_table AS d GLOBAL LEFT JOIN numbers(100) AS t USING id; -- { serverError 47, 284 }
+SELECT uniq(d.val) FROM dist_table AS d GLOBAL LEFT JOIN numbers(100) AS t USING id; -- { serverError UNKNOWN_IDENTIFIER, 284 }
 SELECT uniq(d.val) FROM dist_table AS d GLOBAL LEFT JOIN local_table AS t USING id;

 DROP TABLE local_table;
diff --git a/tests/queries/0_stateless/01478_not_equi-join_on.sql b/tests/queries/0_stateless/01478_not_equi-join_on.sql
index b52af5fcb46..fb5db88b0a6 100644
--- a/tests/queries/0_stateless/01478_not_equi-join_on.sql
+++ b/tests/queries/0_stateless/01478_not_equi-join_on.sql
@@ -1,7 +1,7 @@
 SELECT * FROM (SELECT NULL AS a, 1 AS b) AS foo RIGHT JOIN (SELECT 1024 AS b) AS bar
-ON 1 = foo.b; -- { serverError 403 }
+ON 1 = foo.b; -- { serverError INVALID_JOIN_ON_EXPRESSION }

 SELECT * FROM (SELECT NULL AS a, 1 AS b) AS foo RIGHT JOIN (SELECT 1024 AS b) AS bar
-ON 1 = bar.b; -- { serverError 403 }
+ON 1 = bar.b; -- { serverError INVALID_JOIN_ON_EXPRESSION }
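Both rejected queries above fail for the same reason: the ON clause must contain an equality that pairs a column from the left table with a column from the right one, and a constant on one side gives the join nothing to key on. The accepted shape, sketched:

SELECT * FROM (SELECT 1 AS b) AS foo
RIGHT JOIN (SELECT 1024 AS b) AS bar
ON foo.b = bar.b; -- column-to-column equality is required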
diff --git a/tests/queries/0_stateless/01493_alter_remove_no_property_zookeeper_long.sql b/tests/queries/0_stateless/01493_alter_remove_no_property_zookeeper_long.sql
index e9859d3fe96..a00e10b616c 100644
--- a/tests/queries/0_stateless/01493_alter_remove_no_property_zookeeper_long.sql
+++ b/tests/queries/0_stateless/01493_alter_remove_no_property_zookeeper_long.sql
@@ -12,14 +12,14 @@ ORDER BY tuple();
 SHOW CREATE TABLE no_prop_table;

 -- just nothing happened
-ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE DEFAULT; --{serverError 36}
-ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE MATERIALIZED; --{serverError 36}
-ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE ALIAS; --{serverError 36}
-ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE CODEC; --{serverError 36}
-ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE COMMENT; --{serverError 36}
-ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE TTL; --{serverError 36}
+ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE DEFAULT; --{serverError BAD_ARGUMENTS}
+ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE MATERIALIZED; --{serverError BAD_ARGUMENTS}
+ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE ALIAS; --{serverError BAD_ARGUMENTS}
+ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE CODEC; --{serverError BAD_ARGUMENTS}
+ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE COMMENT; --{serverError BAD_ARGUMENTS}
+ALTER TABLE no_prop_table MODIFY COLUMN some_column REMOVE TTL; --{serverError BAD_ARGUMENTS}

-ALTER TABLE no_prop_table REMOVE TTL; --{serverError 36}
+ALTER TABLE no_prop_table REMOVE TTL; --{serverError BAD_ARGUMENTS}

 SHOW CREATE TABLE no_prop_table;
@@ -36,18 +36,18 @@ ORDER BY tuple();
 SHOW CREATE TABLE r_no_prop_table;

-ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE DEFAULT; --{serverError 36}
-ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE MATERIALIZED; --{serverError 36}
-ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE ALIAS; --{serverError 36}
-ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE CODEC; --{serverError 36}
-ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE COMMENT; --{serverError 36}
-ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE TTL; --{serverError 36}
+ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE DEFAULT; --{serverError BAD_ARGUMENTS}
+ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE MATERIALIZED; --{serverError BAD_ARGUMENTS}
+ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE ALIAS; --{serverError BAD_ARGUMENTS}
+ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE CODEC; --{serverError BAD_ARGUMENTS}
+ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE COMMENT; --{serverError BAD_ARGUMENTS}
+ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE TTL; --{serverError BAD_ARGUMENTS}

-ALTER TABLE r_no_prop_table REMOVE TTL; --{serverError 36}
+ALTER TABLE r_no_prop_table REMOVE TTL; --{serverError BAD_ARGUMENTS}

 SHOW CREATE TABLE r_no_prop_table;

-ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE ttl; --{serverError 36}
-ALTER TABLE r_no_prop_table remove TTL; --{serverError 36}
+ALTER TABLE r_no_prop_table MODIFY COLUMN some_column REMOVE ttl; --{serverError BAD_ARGUMENTS}
+ALTER TABLE r_no_prop_table remove TTL; --{serverError BAD_ARGUMENTS}

 DROP TABLE IF EXISTS r_no_prop_table;
diff --git a/tests/queries/0_stateless/01493_alter_remove_wrong_default.sql b/tests/queries/0_stateless/01493_alter_remove_wrong_default.sql
index 2099604ec13..3cd8e983957 100644
--- a/tests/queries/0_stateless/01493_alter_remove_wrong_default.sql
+++ b/tests/queries/0_stateless/01493_alter_remove_wrong_default.sql
@@ -8,14 +8,14 @@ CREATE TABLE default_table (
 ENGINE = MergeTree()
 ORDER BY tuple();

-ALTER TABLE default_table MODIFY COLUMN key REMOVE MATERIALIZED; --{serverError 36}
-ALTER TABLE default_table MODIFY COLUMN key REMOVE ALIAS; --{serverError 36}
+ALTER TABLE default_table MODIFY COLUMN key REMOVE MATERIALIZED; --{serverError BAD_ARGUMENTS}
+ALTER TABLE default_table MODIFY COLUMN key REMOVE ALIAS; --{serverError BAD_ARGUMENTS}

-ALTER TABLE default_table MODIFY COLUMN value1 REMOVE DEFAULT; --{serverError 36}
-ALTER TABLE default_table MODIFY COLUMN value1 REMOVE ALIAS; --{serverError 36}
+ALTER TABLE default_table MODIFY COLUMN value1 REMOVE DEFAULT; --{serverError BAD_ARGUMENTS}
+ALTER TABLE default_table MODIFY COLUMN value1 REMOVE ALIAS; --{serverError BAD_ARGUMENTS}

-ALTER TABLE default_table MODIFY COLUMN value2 REMOVE DEFAULT; --{serverError 36}
-ALTER TABLE default_table MODIFY COLUMN value2 REMOVE MATERIALIZED; --{serverError 36}
+ALTER TABLE default_table MODIFY COLUMN value2 REMOVE DEFAULT; --{serverError BAD_ARGUMENTS}
+ALTER TABLE default_table MODIFY COLUMN value2 REMOVE MATERIALIZED; --{serverError BAD_ARGUMENTS}

 SHOW CREATE TABLE default_table;
diff --git a/tests/queries/0_stateless/01504_compression_multiple_streams.sql b/tests/queries/0_stateless/01504_compression_multiple_streams.sql
index 7cdf1b52651..c456d4c4064 100644
--- a/tests/queries/0_stateless/01504_compression_multiple_streams.sql
+++ b/tests/queries/0_stateless/01504_compression_multiple_streams.sql
@@ -89,13 +89,13 @@ CREATE TABLE columns_with_multiple_streams_bad_case (
     field0 Nullable(String) CODEC(Delta, LZ4)
 ) ENGINE = MergeTree
-ORDER BY tuple(); --{serverError 36}
+ORDER BY tuple(); --{serverError BAD_ARGUMENTS}

 CREATE TABLE columns_with_multiple_streams_bad_case (
     field0 Tuple(Array(UInt64), String) CODEC(T64, LZ4)
 ) ENGINE = MergeTree
-ORDER BY tuple(); --{serverError 431}
+ORDER BY tuple(); --{serverError ILLEGAL_SYNTAX_FOR_CODEC_TYPE}

 SET allow_suspicious_codecs = 1;
diff --git a/tests/queries/0_stateless/01504_rocksdb.sql b/tests/queries/0_stateless/01504_rocksdb.sql
index f79f31139fe..0f2737c8c5c 100644
--- a/tests/queries/0_stateless/01504_rocksdb.sql
+++ b/tests/queries/0_stateless/01504_rocksdb.sql
@@ -4,9 +4,9 @@ DROP TABLE IF EXISTS 01504_test;

-CREATE TABLE 01504_test (key String, value UInt32) Engine=EmbeddedRocksDB; -- { serverError 36 }
-CREATE TABLE 01504_test (key String, value UInt32) Engine=EmbeddedRocksDB PRIMARY KEY(key2); -- { serverError 47 }
-CREATE TABLE 01504_test (key String, value UInt32) Engine=EmbeddedRocksDB PRIMARY KEY(key, value); -- { serverError 36 }
+CREATE TABLE 01504_test (key String, value UInt32) Engine=EmbeddedRocksDB; -- { serverError BAD_ARGUMENTS }
+CREATE TABLE 01504_test (key String, value UInt32) Engine=EmbeddedRocksDB PRIMARY KEY(key2); -- { serverError UNKNOWN_IDENTIFIER }
+CREATE TABLE 01504_test (key String, value UInt32) Engine=EmbeddedRocksDB PRIMARY KEY(key, value); -- { serverError BAD_ARGUMENTS }
 CREATE TABLE 01504_test (key Tuple(String, UInt32), value UInt64) Engine=EmbeddedRocksDB PRIMARY KEY(key);

 DROP TABLE IF EXISTS 01504_test;
@@ -40,8 +40,8 @@ SET max_rows_to_read = 2;
 SELECT dummy == (1,1.2) FROM 01504_test WHERE k IN (1, 3) OR k IN (1) OR k IN (3, 1) OR k IN [1] OR k IN [1, 3] ;
 SELECT k == 4 FROM 01504_test WHERE k = 4 OR k IN [4] OR k in (4, 10000001, 10000002) AND value > 0;
 SELECT k == 4 FROM 01504_test WHERE k IN (SELECT toUInt32(number) FROM keys WHERE number = 4);
-SELECT k, value FROM 01504_test WHERE k = 0 OR value > 0; -- { serverError 158 }
-SELECT k, value FROM 01504_test WHERE k = 0 AND k IN (1, 3) OR k > 8; -- { serverError 158 }
+SELECT k, value FROM 01504_test WHERE k = 0 OR value > 0; -- { serverError TOO_MANY_ROWS }
+SELECT k, value FROM 01504_test WHERE k = 0 AND k IN (1, 3) OR k > 8; -- { serverError TOO_MANY_ROWS }
 TRUNCATE TABLE 01504_test;
 SELECT 0 == COUNT(1) FROM 01504_test;
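The read-limit hints in this file encode the engine's access paths: EmbeddedRocksDB can answer equality and IN predicates on its single primary key column with point lookups, while any predicate that touches the value column degenerates into a full scan, which is exactly what the tight max_rows_to_read setting catches. A rough sketch of the same idea (table and column names are illustrative):

CREATE TABLE kv (k UInt32, v UInt64) ENGINE = EmbeddedRocksDB PRIMARY KEY (k);
INSERT INTO kv SELECT number, number * 10 FROM numbers(1000);
SET max_rows_to_read = 2;
SELECT v FROM kv WHERE k IN (1, 3); -- point lookups, stays under the limit
SELECT v FROM kv WHERE v > 0;       -- full scan, would trip TOO_MANY_ROWS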
diff --git a/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql b/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql
index aaf88f95f0c..496fe26ad07 100644
--- a/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql
+++ b/tests/queries/0_stateless/01505_trivial_count_with_partition_predicate.sql
@@ -12,12 +12,12 @@ select count() from test1 settings max_parallel_replicas = 3;
 -- optimized (toYear is monotonic and we provide the partition expr as is)
 select count() from test1 where toYear(toDate(p)) = 1999;
 -- non-optimized (toDate(DateTime) is always monotonic, but we cannot relax the predicates to do a trivial count())
-select count() from test1 where p > toDateTime('2020-09-01 10:00:00'); -- { serverError 158 }
+select count() from test1 where p > toDateTime('2020-09-01 10:00:00'); -- { serverError TOO_MANY_ROWS }
 -- optimized (partition expr wrapped with non-monotonic functions)
 select count() FROM test1 where toDate(p) = '2020-09-01' and sipHash64(toString(toDate(p))) % 2 = 1;
 select count() FROM test1 where toDate(p) = '2020-09-01' and sipHash64(toString(toDate(p))) % 2 = 0;
 -- non-optimized (some predicate depends on non-partition_expr columns)
-select count() FROM test1 where toDate(p) = '2020-09-01' and k = 2; -- { serverError 158 }
+select count() FROM test1 where toDate(p) = '2020-09-01' and k = 2; -- { serverError TOO_MANY_ROWS }
 -- optimized
 select count() from test1 where toDate(p) > '2020-09-01';
 -- non-optimized
@@ -36,10 +36,10 @@ select count() from test_tuple where i > 2;
 -- optimized
 select count() from test_tuple where i < 1;
 -- non-optimized
-select count() from test_tuple array join [p,p] as c where toDate(p) = '2020-09-01'; -- { serverError 158 }
+select count() from test_tuple array join [p,p] as c where toDate(p) = '2020-09-01'; -- { serverError TOO_MANY_ROWS }
 select count() from test_tuple array join [1,2] as c where toDate(p) = '2020-09-01' settings max_rows_to_read = 4;
 -- non-optimized
-select count() from test_tuple array join [1,2,3] as c where toDate(p) = '2020-09-01'; -- { serverError 158 }
+select count() from test_tuple array join [1,2,3] as c where toDate(p) = '2020-09-01'; -- { serverError TOO_MANY_ROWS }
 select count() from test_tuple array join [1,2,3] as c where toDate(p) = '2020-09-01' settings max_rows_to_read = 6;

 create table test_two_args(i int, j int, k int) engine MergeTree partition by i + j order by k settings index_granularity = 1;
@@ -49,7 +49,7 @@ insert into test_two_args values (1, 2, 3), (2, 1, 3), (0, 3, 4);
 -- optimized
 select count() from test_two_args where i + j = 3;
 -- non-optimized
-select count() from test_two_args where i = 1; -- { serverError 158 }
+select count() from test_two_args where i = 1; -- { serverError TOO_MANY_ROWS }

 drop table test1;
 drop table test_tuple;
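The dividing line in this file: count() can be answered from part-level metadata only when the WHERE clause is provably decided by the partition expression for every part; any dependence on a non-partition column, or a relaxation the planner cannot prove, forces a real read, which a tight max_rows_to_read then catches. A hedged sketch of the two outcomes (names and values are illustrative):

create table t (p DateTime, k Int) engine MergeTree partition by toDate(p) order by k;
insert into t values ('2020-09-01 10:00:00', 1), ('2020-10-01 10:00:00', 2);
set max_rows_to_read = 1;
select count() from t where toDate(p) = '2020-09-01'; -- answered from partition metadata, reads no rows
select count() from t where k = 1;                    -- must scan rows, expected to exceed the limit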
diff --git a/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas_long.sql b/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas_long.sql
index 24b368090e7..95aa46c833c 100644
--- a/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas_long.sql
+++ b/tests/queries/0_stateless/01509_parallel_quorum_insert_no_replicas_long.sql
@@ -19,11 +19,11 @@ ORDER BY tuple();
 SET insert_quorum_parallel=1;
 SET insert_quorum=3;
-INSERT INTO r1 VALUES(1, '1'); --{serverError 285}
--- retry should still fail despite the insert_deduplicate enabled
-INSERT INTO r1 VALUES(1, '1'); --{serverError 285}
-INSERT INTO r1 VALUES(1, '1'); --{serverError 285}
+INSERT INTO r1 VALUES(1, '1'); --{serverError TOO_FEW_LIVE_REPLICAS}
+-- retry should still fail despite insert_deduplicate being enabled
+INSERT INTO r1 VALUES(1, '1'); --{serverError TOO_FEW_LIVE_REPLICAS}
+INSERT INTO r1 VALUES(1, '1'); --{serverError TOO_FEW_LIVE_REPLICAS}

 SELECT 'insert to two replicas works';
 SET insert_quorum=2, insert_quorum_parallel=1;
@@ -35,11 +35,11 @@ SELECT COUNT() FROM r2;
 DETACH TABLE r2;
-INSERT INTO r1 VALUES(2, '2'); --{serverError 285}
--- retry should fail despite the insert_deduplicate enabled
-INSERT INTO r1 VALUES(2, '2'); --{serverError 285}
-INSERT INTO r1 VALUES(2, '2'); --{serverError 285}
+INSERT INTO r1 VALUES(2, '2'); --{serverError TOO_FEW_LIVE_REPLICAS}
+-- retry should fail despite insert_deduplicate being enabled
+INSERT INTO r1 VALUES(2, '2'); --{serverError TOO_FEW_LIVE_REPLICAS}
+INSERT INTO r1 VALUES(2, '2'); --{serverError TOO_FEW_LIVE_REPLICAS}

 SET insert_quorum=1, insert_quorum_parallel=1;
 SELECT 'insert to single replica works';
@@ -67,7 +67,7 @@ INSERT INTO r2 VALUES(3, '3');
 INSERT INTO r1 VALUES(3, '3');
 -- will start failing if we increase quorum
 SET insert_quorum=3, insert_quorum_parallel=1;
-INSERT INTO r1 VALUES(3, '3'); --{serverError 285}
+INSERT INTO r1 VALUES(3, '3'); --{serverError TOO_FEW_LIVE_REPLICAS}
 -- works ok again when quorum=2
 SET insert_quorum=2, insert_quorum_parallel=1;
 INSERT INTO r2 VALUES(3, '3');
@@ -79,11 +79,11 @@ SYSTEM STOP FETCHES r2;
 SET insert_quorum_timeout=0;
-INSERT INTO r1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (4, '4'); -- { serverError 319 }
+INSERT INTO r1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (4, '4'); -- { serverError UNKNOWN_STATUS_OF_INSERT }

 -- retry should fail despite insert_deduplicate being enabled
-INSERT INTO r1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (4, '4'); -- { serverError 319 }
-INSERT INTO r1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (4, '4'); -- { serverError 319 }
+INSERT INTO r1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (4, '4'); -- { serverError UNKNOWN_STATUS_OF_INSERT }
+INSERT INTO r1 SETTINGS insert_keeper_fault_injection_probability=0 VALUES (4, '4'); -- { serverError UNKNOWN_STATUS_OF_INSERT }

 SELECT * FROM r2 WHERE key=4;
 SYSTEM START FETCHES r2;
diff --git a/tests/queries/0_stateless/01511_alter_version_versioned_collapsing_merge_tree.sql b/tests/queries/0_stateless/01511_alter_version_versioned_collapsing_merge_tree.sql
index 8f0b2d12ab0..87995abb96a 100644
--- a/tests/queries/0_stateless/01511_alter_version_versioned_collapsing_merge_tree.sql
+++ b/tests/queries/0_stateless/01511_alter_version_versioned_collapsing_merge_tree.sql
@@ -36,11 +36,11 @@ INSERT INTO TABLE table_with_version VALUES(3, '3', 65555, -1);
 SELECT * FROM table_with_version FINAL ORDER BY key;

-ALTER TABLE table_with_version MODIFY COLUMN version String; --{serverError 524}
-ALTER TABLE table_with_version MODIFY COLUMN version Int64; --{serverError 524}
-ALTER TABLE table_with_version MODIFY COLUMN version UInt16; --{serverError 524}
-ALTER TABLE table_with_version MODIFY COLUMN version Float64; --{serverError 524}
-ALTER TABLE table_with_version MODIFY COLUMN version Date; --{serverError 524}
-ALTER TABLE table_with_version MODIFY COLUMN version DateTime; --{serverError 524}
+ALTER TABLE table_with_version MODIFY COLUMN version String; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
+ALTER TABLE table_with_version MODIFY COLUMN version Int64; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
+ALTER TABLE table_with_version MODIFY COLUMN version UInt16; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
+ALTER TABLE table_with_version MODIFY COLUMN version Float64; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
+ALTER TABLE table_with_version MODIFY COLUMN version Date; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
+ALTER TABLE table_with_version MODIFY COLUMN version DateTime; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}

 DROP TABLE IF EXISTS table_with_version;
diff --git a/tests/queries/0_stateless/01512_create_replicate_merge_tree_one_arg.sql b/tests/queries/0_stateless/01512_create_replicate_merge_tree_one_arg.sql
index fbdf68e6063..77da5a18249 100644
--- a/tests/queries/0_stateless/01512_create_replicate_merge_tree_one_arg.sql
+++ b/tests/queries/0_stateless/01512_create_replicate_merge_tree_one_arg.sql
@@ -1,5 +1,5 @@
 -- Tags: replica

 CREATE TABLE mt (v UInt8) ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/test_01497/mt')
-    ORDER BY tuple() -- { serverError 36 }
+    ORDER BY tuple() -- { serverError BAD_ARGUMENTS }
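The one-argument form above fails because ReplicatedMergeTree expects both a ZooKeeper path and a replica name (or no arguments at all, when defaults come from the server configuration). The accepted two-argument shape, sketched with the same path convention; the replica name 'r1' is illustrative:

CREATE TABLE mt (v UInt8)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/test_01497/mt', 'r1')
ORDER BY tuple();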
diff --git a/tests/queries/0_stateless/01513_optimize_aggregation_in_order_memory_long.sql b/tests/queries/0_stateless/01513_optimize_aggregation_in_order_memory_long.sql
index b107af07194..d9430018469 100644
--- a/tests/queries/0_stateless/01513_optimize_aggregation_in_order_memory_long.sql
+++ b/tests/queries/0_stateless/01513_optimize_aggregation_in_order_memory_long.sql
@@ -14,9 +14,9 @@ set max_threads=1;
 set max_block_size=500;
 set max_bytes_before_external_group_by=0;

-select key, groupArray(repeat('a', 200)), count() from data_01513 group by key format Null settings optimize_aggregation_in_order=0; -- { serverError 241 }
+select key, groupArray(repeat('a', 200)), count() from data_01513 group by key format Null settings optimize_aggregation_in_order=0; -- { serverError MEMORY_LIMIT_EXCEEDED }
 select key, groupArray(repeat('a', 200)), count() from data_01513 group by key format Null settings optimize_aggregation_in_order=1;
 -- for WITH TOTALS previous groups should be kept.
-select key, groupArray(repeat('a', 200)), count() from data_01513 group by key with totals format Null settings optimize_aggregation_in_order=1; -- { serverError 241 }
+select key, groupArray(repeat('a', 200)), count() from data_01513 group by key with totals format Null settings optimize_aggregation_in_order=1; -- { serverError MEMORY_LIMIT_EXCEEDED }

 drop table data_01513;
diff --git a/tests/queries/0_stateless/01515_force_data_skipping_indices.sql b/tests/queries/0_stateless/01515_force_data_skipping_indices.sql
index 40b66b0ff7b..d504e1c7d22 100644
--- a/tests/queries/0_stateless/01515_force_data_skipping_indices.sql
+++ b/tests/queries/0_stateless/01515_force_data_skipping_indices.sql
@@ -13,23 +13,23 @@ ORDER BY key;
 INSERT INTO data_01515 VALUES (1, 2, 3);

 SELECT * FROM data_01515;
-SELECT * FROM data_01515 SETTINGS force_data_skipping_indices=''; -- { serverError 6 }
-SELECT * FROM data_01515 SETTINGS force_data_skipping_indices='d1_idx'; -- { serverError 277 }
-SELECT * FROM data_01515 SETTINGS force_data_skipping_indices='d1_null_idx'; -- { serverError 277 }
+SELECT * FROM data_01515 SETTINGS force_data_skipping_indices=''; -- { serverError CANNOT_PARSE_TEXT }
+SELECT * FROM data_01515 SETTINGS force_data_skipping_indices='d1_idx'; -- { serverError INDEX_NOT_USED }
+SELECT * FROM data_01515 SETTINGS force_data_skipping_indices='d1_null_idx'; -- { serverError INDEX_NOT_USED }
 SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_idx';
 SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='`d1_idx`';
 SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' d1_idx ';
 SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' d1_idx ';
-SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_idx,d1_null_idx'; -- { serverError 277 }
-SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_null_idx,d1_idx'; -- { serverError 277 }
-SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_null_idx,d1_idx,,'; -- { serverError 277 }
-SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' d1_null_idx,d1_idx'; -- { serverError 277 }
-SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' `d1_null_idx`,d1_idx'; -- { serverError 277 }
-SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_null_idx'; -- { serverError 277 }
-SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' d1_null_idx '; -- { serverError 277 }
+SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_idx,d1_null_idx'; -- { serverError INDEX_NOT_USED }
+SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_null_idx,d1_idx'; -- { serverError INDEX_NOT_USED }
+SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_null_idx,d1_idx,,'; -- { serverError INDEX_NOT_USED }
+SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' d1_null_idx,d1_idx'; -- { serverError INDEX_NOT_USED }
+SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' `d1_null_idx`,d1_idx'; -- { serverError INDEX_NOT_USED }
+SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_null_idx'; -- { serverError INDEX_NOT_USED }
+SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices=' d1_null_idx '; -- { serverError INDEX_NOT_USED }

-SELECT * FROM data_01515 WHERE d1_null = 0 SETTINGS force_data_skipping_indices='d1_null_idx'; -- { serverError 277 }
+SELECT * FROM data_01515 WHERE d1_null = 0 SETTINGS force_data_skipping_indices='d1_null_idx'; -- { serverError INDEX_NOT_USED }
 SELECT * FROM data_01515 WHERE assumeNotNull(d1_null) = 0 SETTINGS force_data_skipping_indices='d1_null_idx';

 DROP TABLE data_01515;
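force_data_skipping_indices inverts the usual optimization contract: rather than hoping a skip index helps, the query fails with INDEX_NOT_USED unless every index named in the comma-separated (optionally backquoted) list was actually consulted. A minimal usage sketch, mirroring the test above with illustrative names:

CREATE TABLE data (key Int, d1 Int, INDEX d1_idx d1 TYPE minmax GRANULARITY 1)
ENGINE = MergeTree ORDER BY key;
INSERT INTO data VALUES (1, 2);
SELECT * FROM data WHERE d1 = 0 SETTINGS force_data_skipping_indices='d1_idx'; -- index consulted, passes
SELECT * FROM data SETTINGS force_data_skipping_indices='d1_idx';              -- no predicate on d1, INDEX_NOT_USED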
diff --git a/tests/queries/0_stateless/01516_create_table_primary_key.sql b/tests/queries/0_stateless/01516_create_table_primary_key.sql
index 1e5a0b9cddf..1f5f80a8a7b 100644
--- a/tests/queries/0_stateless/01516_create_table_primary_key.sql
+++ b/tests/queries/0_stateless/01516_create_table_primary_key.sql
@@ -38,7 +38,7 @@ ATTACH TABLE primary_key_test(v1 Int32, v2 Int32) ENGINE=ReplacingMergeTree ORDE
 SELECT * FROM primary_key_test FINAL;
 DROP TABLE primary_key_test;

-CREATE TABLE primary_key_test(v1 Int64, v2 Int32, v3 String, PRIMARY KEY(v1, gcd(v1, v2))) ENGINE=ReplacingMergeTree ORDER BY v1; -- { serverError 36 }
+CREATE TABLE primary_key_test(v1 Int64, v2 Int32, v3 String, PRIMARY KEY(v1, gcd(v1, v2))) ENGINE=ReplacingMergeTree ORDER BY v1; -- { serverError BAD_ARGUMENTS }

 CREATE TABLE primary_key_test(v1 Int64, v2 Int32, v3 String, PRIMARY KEY(v1, gcd(v1, v2))) ENGINE=ReplacingMergeTree ORDER BY (v1, gcd(v1, v2));
diff --git a/tests/queries/0_stateless/01521_format_readable_time_delta2.sql b/tests/queries/0_stateless/01521_format_readable_time_delta2.sql
index cb432183fed..b27dfc6a64b 100644
--- a/tests/queries/0_stateless/01521_format_readable_time_delta2.sql
+++ b/tests/queries/0_stateless/01521_format_readable_time_delta2.sql
@@ -5,7 +5,7 @@ SELECT formatReadableTimeDelta(-(1 + 60 + 3600 + 86400 + 30.5 * 86400 + 365 * 86
 SELECT formatReadableTimeDelta(-(1 + 60 + 3600 + 86400 + 30.5 * 86400 + 365 * 86400), 'hours');
 SELECT formatReadableTimeDelta(-(1 + 60 + 3600 + 86400 + 30.5 * 86400 + 365 * 86400), 'minutes');
 SELECT formatReadableTimeDelta(-(1 + 60 + 3600 + 86400 + 30.5 * 86400 + 365 * 86400), 'seconds');
-SELECT formatReadableTimeDelta(-(1 + 60 + 3600 + 86400 + 30.5 * 86400 + 365 * 86400), 'second'); -- { serverError 36 }
+SELECT formatReadableTimeDelta(-(1 + 60 + 3600 + 86400 + 30.5 * 86400 + 365 * 86400), 'second'); -- { serverError BAD_ARGUMENTS }
 SELECT formatReadableTimeDelta(-(60 + 3600 + 86400 + 30.5 * 86400 + 365 * 86400));
 SELECT formatReadableTimeDelta(-(1 + 3600 + 86400 + 30.5 * 86400 + 365 * 86400));
diff --git a/tests/queries/0_stateless/01522_validate_alter_default.sql b/tests/queries/0_stateless/01522_validate_alter_default.sql
index dbddffe369e..c4db2b91a02 100644
--- a/tests/queries/0_stateless/01522_validate_alter_default.sql
+++ b/tests/queries/0_stateless/01522_validate_alter_default.sql
@@ -9,8 +9,8 @@ Engine = MergeTree()
 PARTITION BY toYYYYMM(EventDate)
 ORDER BY Id;

-ALTER TABLE table2 MODIFY COLUMN `Value` DEFAULT 'some_string'; --{serverError 6}
+ALTER TABLE table2 MODIFY COLUMN `Value` DEFAULT 'some_string'; --{serverError CANNOT_PARSE_TEXT}

-ALTER TABLE table2 ADD COLUMN `Value2` DEFAULT 'some_string'; --{serverError 36}
+ALTER TABLE table2 ADD COLUMN `Value2` DEFAULT 'some_string'; --{serverError BAD_ARGUMENTS}

 DROP TABLE IF EXISTS table2;
diff --git a/tests/queries/0_stateless/01527_bad_aggregation_in_lambda.sql b/tests/queries/0_stateless/01527_bad_aggregation_in_lambda.sql
index 3be73ba56e7..c1b86fdfc9a 100644
--- a/tests/queries/0_stateless/01527_bad_aggregation_in_lambda.sql
+++ b/tests/queries/0_stateless/01527_bad_aggregation_in_lambda.sql
@@ -1 +1 @@
-SELECT arrayMap(x -> x * sum(x), range(10)); -- { serverError 10, 47 }
+SELECT arrayMap(x -> x * sum(x), range(10)); -- { serverError NOT_FOUND_COLUMN_IN_BLOCK, UNKNOWN_IDENTIFIER }
diff --git a/tests/queries/0_stateless/01527_materialized_view_stack_overflow.sql b/tests/queries/0_stateless/01527_materialized_view_stack_overflow.sql
index 4a67ef4b2d8..9fa3a865a8f 100644
--- a/tests/queries/0_stateless/01527_materialized_view_stack_overflow.sql
+++ b/tests/queries/0_stateless/01527_materialized_view_stack_overflow.sql
@@ -3,8 +3,8 @@ DROP TABLE IF EXISTS v;

 CREATE TABLE t (c String) ENGINE = Memory;

-CREATE MATERIALIZED VIEW v to v AS SELECT c FROM t; -- { serverError 36 }
-CREATE MATERIALIZED VIEW v to t AS SELECT * FROM v; -- { serverError 60 }
+CREATE MATERIALIZED VIEW v to v AS SELECT c FROM t; -- { serverError BAD_ARGUMENTS }
+CREATE MATERIALIZED VIEW v to t AS SELECT * FROM v; -- { serverError UNKNOWN_TABLE }

 DROP TABLE IF EXISTS t1;
 DROP TABLE IF EXISTS t2;
@@ -17,8 +17,8 @@ CREATE TABLE t2 (c String) ENGINE = Memory;
 CREATE MATERIALIZED VIEW v1 to t1 AS SELECT * FROM t2;
 CREATE MATERIALIZED VIEW v2 to t2 AS SELECT * FROM t1;

-INSERT INTO t1 VALUES ('Hello'); -- { serverError 306 }
-INSERT INTO t2 VALUES ('World'); -- { serverError 306 }
+INSERT INTO t1 VALUES ('Hello'); -- { serverError TOO_DEEP_RECURSION }
+INSERT INTO t2 VALUES ('World'); -- { serverError TOO_DEEP_RECURSION }

 DROP TABLE IF EXISTS t;
 DROP TABLE IF EXISTS v;
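Both halves of this test protect the same invariant: the TO chain of materialized views must stay acyclic. A direct self-reference is caught at CREATE time, while a two-view cycle only surfaces once an insert starts bouncing between the targets, which is what the TOO_DEEP_RECURSION hints assert. The cycle, restated compactly:

CREATE TABLE t1 (c String) ENGINE = Memory;
CREATE TABLE t2 (c String) ENGINE = Memory;
CREATE MATERIALIZED VIEW v1 TO t1 AS SELECT * FROM t2;
CREATE MATERIALIZED VIEW v2 TO t2 AS SELECT * FROM t1;
INSERT INTO t1 VALUES ('Hello'); -- t1 feeds v2 into t2, which feeds v1 into t1, until the recursion guard fires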
diff --git a/tests/queries/0_stateless/01528_allow_nondeterministic_optimize_skip_unused_shards.sql b/tests/queries/0_stateless/01528_allow_nondeterministic_optimize_skip_unused_shards.sql
index ac04178e585..534c6b44ac3 100644
--- a/tests/queries/0_stateless/01528_allow_nondeterministic_optimize_skip_unused_shards.sql
+++ b/tests/queries/0_stateless/01528_allow_nondeterministic_optimize_skip_unused_shards.sql
@@ -5,7 +5,7 @@ create table dist_01528 as system.one engine=Distributed('test_cluster_two_shard
 set optimize_skip_unused_shards=1;
 set force_optimize_skip_unused_shards=1;

-select * from dist_01528 where dummy = 2; -- { serverError 507 }
+select * from dist_01528 where dummy = 2; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 select * from dist_01528 where dummy = 2 settings allow_nondeterministic_optimize_skip_unused_shards=1;

 drop table dist_01528;
diff --git a/tests/queries/0_stateless/01528_to_uuid_or_null_or_zero.sql b/tests/queries/0_stateless/01528_to_uuid_or_null_or_zero.sql
index ae6a1b2db04..1be3002fd1b 100644
--- a/tests/queries/0_stateless/01528_to_uuid_or_null_or_zero.sql
+++ b/tests/queries/0_stateless/01528_to_uuid_or_null_or_zero.sql
@@ -1,7 +1,7 @@
 DROP TABLE IF EXISTS to_uuid_test;

 SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0');
-SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0T'); --{serverError 6}
+SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0T'); --{serverError CANNOT_PARSE_TEXT}
 SELECT toUUIDOrNull('61f0c404-5cb3-11e7-907b-a6006ad3dba0T');
 SELECT toUUIDOrZero('59f0c404-5cb3-11e7-907b-a6006ad3dba0T');

@@ -11,7 +11,7 @@ INSERT INTO to_uuid_test VALUES ('61f0c404-5cb3-11e7-907b-a6006ad3dba0');
 SELECT toUUID(value) FROM to_uuid_test;
 INSERT INTO to_uuid_test VALUES ('61f0c404-5cb3-11e7-907b-a6006ad3dba0T');
-SELECT toUUID(value) FROM to_uuid_test; -- {serverError 6}
+SELECT toUUID(value) FROM to_uuid_test; -- {serverError CANNOT_PARSE_TEXT}
 SELECT toUUIDOrNull(value) FROM to_uuid_test;
 SELECT toUUIDOrZero(value) FROM to_uuid_test;
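The three conversion flavors exercised here differ only in how a parse failure is reported: the plain form throws, the OrNull form returns NULL, and the OrZero form returns the type's zero value, which for UUID is all zeros. The same convention holds for most toT/toTOrNull/toTOrZero families. Side by side:

SELECT toUUID('61f0c404-5cb3-11e7-907b-a6006ad3dba0T');       -- throws CANNOT_PARSE_TEXT
SELECT toUUIDOrNull('61f0c404-5cb3-11e7-907b-a6006ad3dba0T'); -- NULL
SELECT toUUIDOrZero('61f0c404-5cb3-11e7-907b-a6006ad3dba0T'); -- 00000000-0000-0000-0000-000000000000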
diff --git a/tests/queries/0_stateless/01530_drop_database_atomic_sync.sql b/tests/queries/0_stateless/01530_drop_database_atomic_sync.sql
index 13b4a4e331b..6cc0eac4315 100644
--- a/tests/queries/0_stateless/01530_drop_database_atomic_sync.sql
+++ b/tests/queries/0_stateless/01530_drop_database_atomic_sync.sql
@@ -30,7 +30,7 @@ create table db_01530_atomic.data (key Int) Engine=ReplicatedMergeTree('/clickho
 drop database db_01530_atomic;

 create database db_01530_atomic Engine=Atomic;
-create table db_01530_atomic.data (key Int) Engine=ReplicatedMergeTree('/clickhouse/tables/{database}/db_01530_atomic/data', 'test') order by key; -- { serverError 253 }
+create table db_01530_atomic.data (key Int) Engine=ReplicatedMergeTree('/clickhouse/tables/{database}/db_01530_atomic/data', 'test') order by key; -- { serverError REPLICA_ALREADY_EXISTS }

 set database_atomic_wait_for_drop_and_detach_synchronously=1;
diff --git a/tests/queries/0_stateless/01535_decimal_round_scale_overflow_check.sql b/tests/queries/0_stateless/01535_decimal_round_scale_overflow_check.sql
index 18509221203..d81a23f641e 100644
--- a/tests/queries/0_stateless/01535_decimal_round_scale_overflow_check.sql
+++ b/tests/queries/0_stateless/01535_decimal_round_scale_overflow_check.sql
@@ -1 +1 @@
-SELECT round(toDecimal32(1, 0), -9223372036854775806); -- { serverError 69 }
+SELECT round(toDecimal32(1, 0), -9223372036854775806); -- { serverError ARGUMENT_OUT_OF_BOUND }
diff --git a/tests/queries/0_stateless/01536_fuzz_cast.sql b/tests/queries/0_stateless/01536_fuzz_cast.sql
index fb1303549b6..7fcdf999339 100644
--- a/tests/queries/0_stateless/01536_fuzz_cast.sql
+++ b/tests/queries/0_stateless/01536_fuzz_cast.sql
@@ -1,2 +1,2 @@
 SET cast_keep_nullable = 0;
-SELECT CAST(arrayJoin([NULL, '', '', NULL, '', NULL, '01.02.2017 03:04\005GMT', '', NULL, '01/02/2017 03:04:05 MSK01/02/\0017 03:04:05 MSK', '', NULL, '03/04/201903/04/201903/04/\001903/04/2019']), 'Enum8(\'a\' = 1, \'b\' = 2)') AS x; -- { serverError 349 }
+SELECT CAST(arrayJoin([NULL, '', '', NULL, '', NULL, '01.02.2017 03:04\005GMT', '', NULL, '01/02/2017 03:04:05 MSK01/02/\0017 03:04:05 MSK', '', NULL, '03/04/201903/04/201903/04/\001903/04/2019']), 'Enum8(\'a\' = 1, \'b\' = 2)') AS x; -- { serverError CANNOT_INSERT_NULL_IN_ORDINARY_COLUMN }
diff --git a/tests/queries/0_stateless/01538_fuzz_aggregate.sql b/tests/queries/0_stateless/01538_fuzz_aggregate.sql
index 13dadabda63..bfd027af946 100644
--- a/tests/queries/0_stateless/01538_fuzz_aggregate.sql
+++ b/tests/queries/0_stateless/01538_fuzz_aggregate.sql
@@ -7,4 +7,4 @@ FROM
 (
     FROM system.numbers_mt
     GROUP BY k
 )
-ARRAY JOIN ns; -- { serverError 47 }
+ARRAY JOIN ns; -- { serverError UNKNOWN_IDENTIFIER }
diff --git a/tests/queries/0_stateless/01543_toModifiedJulianDay.sql b/tests/queries/0_stateless/01543_toModifiedJulianDay.sql
index 4cc0813c8cc..47303e0a851 100644
--- a/tests/queries/0_stateless/01543_toModifiedJulianDay.sql
+++ b/tests/queries/0_stateless/01543_toModifiedJulianDay.sql
@@ -5,9 +5,9 @@ SELECT toModifiedJulianDay('1858-11-16');
 SELECT toModifiedJulianDay('1858-11-17');
 SELECT toModifiedJulianDay('2020-11-01');
 SELECT toModifiedJulianDay(NULL);
-SELECT toModifiedJulianDay('unparsable'); -- { serverError 27 }
-SELECT toModifiedJulianDay('1999-02-29'); -- { serverError 38 }
-SELECT toModifiedJulianDay('1999-13-32'); -- { serverError 38 }
+SELECT toModifiedJulianDay('unparsable'); -- { serverError CANNOT_PARSE_INPUT_ASSERTION_FAILED }
+SELECT toModifiedJulianDay('1999-02-29'); -- { serverError CANNOT_PARSE_DATE }
+SELECT toModifiedJulianDay('1999-13-32'); -- { serverError CANNOT_PARSE_DATE }
 SELECT 'or null';
 SELECT toModifiedJulianDayOrNull('2020-11-01');
diff --git a/tests/queries/0_stateless/01544_fromModifiedJulianDay.sql b/tests/queries/0_stateless/01544_fromModifiedJulianDay.sql
index 5e682a942d5..d405aa16f3f 100644
--- a/tests/queries/0_stateless/01544_fromModifiedJulianDay.sql
+++ b/tests/queries/0_stateless/01544_fromModifiedJulianDay.sql
@@ -6,8 +6,8 @@ SELECT fromModifiedJulianDay(0);
 SELECT fromModifiedJulianDay(59154);
 SELECT fromModifiedJulianDay(NULL);
 SELECT fromModifiedJulianDay(CAST(NULL, 'Nullable(Int64)'));
-SELECT fromModifiedJulianDay(-678942); -- { serverError 490 }
-SELECT fromModifiedJulianDay(2973484); -- { serverError 490 }
+SELECT fromModifiedJulianDay(-678942); -- { serverError CANNOT_FORMAT_DATETIME }
+SELECT fromModifiedJulianDay(2973484); -- { serverError CANNOT_FORMAT_DATETIME }
 SELECT 'or null';
 SELECT fromModifiedJulianDayOrNull(59154);
diff --git a/tests/queries/0_stateless/01548_uncomparable_columns_in_keys.sql b/tests/queries/0_stateless/01548_uncomparable_columns_in_keys.sql
index ff51085f58c..6b8d1c0102e 100644
--- a/tests/queries/0_stateless/01548_uncomparable_columns_in_keys.sql
+++ b/tests/queries/0_stateless/01548_uncomparable_columns_in_keys.sql
@@ -1,9 +1,9 @@
 DROP TABLE IF EXISTS uncomparable_keys;

-CREATE TABLE foo (id UInt64, key AggregateFunction(max, UInt64)) ENGINE MergeTree ORDER BY key; --{serverError 549}
+CREATE TABLE foo (id UInt64, key AggregateFunction(max, UInt64)) ENGINE MergeTree ORDER BY key; --{serverError DATA_TYPE_CANNOT_BE_USED_IN_KEY}

-CREATE TABLE foo (id UInt64, key AggregateFunction(max, UInt64)) ENGINE MergeTree PARTITION BY key; --{serverError 549}
+CREATE TABLE foo (id UInt64, key AggregateFunction(max, UInt64)) ENGINE MergeTree PARTITION BY key; --{serverError DATA_TYPE_CANNOT_BE_USED_IN_KEY}

-CREATE TABLE foo (id UInt64, key AggregateFunction(max, UInt64)) ENGINE MergeTree ORDER BY (key) SAMPLE BY key; --{serverError 549}
+CREATE TABLE foo (id UInt64, key AggregateFunction(max, UInt64)) ENGINE MergeTree ORDER BY (key) SAMPLE BY key; --{serverError DATA_TYPE_CANNOT_BE_USED_IN_KEY}

 DROP TABLE IF EXISTS uncomparable_keys;
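All three rejections above come from one rule: key expressions (ORDER BY, PARTITION BY and SAMPLE BY alike) need a comparable type, and AggregateFunction states are opaque blobs with no ordering. Keying on an ordinary column while keeping the state as payload works, sketched with illustrative names:

CREATE TABLE agg_states
(
    id UInt64,
    st AggregateFunction(max, UInt64) -- payload only, never part of a key
)
ENGINE = AggregatingMergeTree ORDER BY id;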
diff --git a/tests/queries/0_stateless/01548_with_totals_having.sql b/tests/queries/0_stateless/01548_with_totals_having.sql
index 2562ea3f3e5..a4ee7468e31 100644
--- a/tests/queries/0_stateless/01548_with_totals_having.sql
+++ b/tests/queries/0_stateless/01548_with_totals_having.sql
@@ -1,2 +1,2 @@
-SELECT * FROM numbers(4) GROUP BY number WITH TOTALS HAVING sum(number) <= arrayJoin([]); -- { serverError 44, 59 }
-SELECT * FROM numbers(4) GROUP BY number WITH TOTALS HAVING sum(number) <= arrayJoin([3, 2, 1, 0]) ORDER BY number; -- { serverError 44 }
+SELECT * FROM numbers(4) GROUP BY number WITH TOTALS HAVING sum(number) <= arrayJoin([]); -- { serverError ILLEGAL_COLUMN, 59 }
+SELECT * FROM numbers(4) GROUP BY number WITH TOTALS HAVING sum(number) <= arrayJoin([3, 2, 1, 0]) ORDER BY number; -- { serverError ILLEGAL_COLUMN }
diff --git a/tests/queries/0_stateless/01550_create_map_type.sql b/tests/queries/0_stateless/01550_create_map_type.sql
index 92362f5596b..592e89e3855 100644
--- a/tests/queries/0_stateless/01550_create_map_type.sql
+++ b/tests/queries/0_stateless/01550_create_map_type.sql
@@ -77,4 +77,4 @@ SELECT sum(m['1']), sum(m['7']), sum(m['100']) FROM table_map;
 DROP TABLE IF EXISTS table_map;

-SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10] -- { serverError 53}
+SELECT CAST(([2, 1, 1023], ['', '']), 'Map(UInt8, String)') AS map, map[10] -- { serverError TYPE_MISMATCH}
diff --git a/tests/queries/0_stateless/01553_settings_early_apply.sql b/tests/queries/0_stateless/01553_settings_early_apply.sql
index e217f20a926..4c168bdb3a5 100644
--- a/tests/queries/0_stateless/01553_settings_early_apply.sql
+++ b/tests/queries/0_stateless/01553_settings_early_apply.sql
@@ -1,14 +1,14 @@
 set output_format_write_statistics=0;

-select * from numbers(100) settings max_result_rows = 1; -- { serverError 396 }
-select * from numbers(100) FORMAT JSON settings max_result_rows = 1; -- { serverError 396 }
-select * from numbers(100) FORMAT TSVWithNamesAndTypes settings max_result_rows = 1; -- { serverError 396 }
-select * from numbers(100) FORMAT CSVWithNamesAndTypes settings max_result_rows = 1; -- { serverError 396 }
-select * from numbers(100) FORMAT JSONCompactEachRowWithNamesAndTypes settings max_result_rows = 1; -- { serverError 396 }
-select * from numbers(100) FORMAT XML settings max_result_rows = 1; -- { serverError 396 }
+select * from numbers(100) settings max_result_rows = 1; -- { serverError TOO_MANY_ROWS_OR_BYTES }
+select * from numbers(100) FORMAT JSON settings max_result_rows = 1; -- { serverError TOO_MANY_ROWS_OR_BYTES }
+select * from numbers(100) FORMAT TSVWithNamesAndTypes settings max_result_rows = 1; -- { serverError TOO_MANY_ROWS_OR_BYTES }
+select * from numbers(100) FORMAT CSVWithNamesAndTypes settings max_result_rows = 1; -- { serverError TOO_MANY_ROWS_OR_BYTES }
+select * from numbers(100) FORMAT JSONCompactEachRowWithNamesAndTypes settings max_result_rows = 1; -- { serverError TOO_MANY_ROWS_OR_BYTES }
+select * from numbers(100) FORMAT XML settings max_result_rows = 1; -- { serverError TOO_MANY_ROWS_OR_BYTES }

 SET max_result_rows = 1;
-select * from numbers(10); -- { serverError 396 }
+select * from numbers(10); -- { serverError TOO_MANY_ROWS_OR_BYTES }
 select * from numbers(10) SETTINGS result_overflow_mode = 'break', max_block_size = 1 FORMAT PrettySpaceNoEscapes;
 select * from numbers(10) settings max_result_rows = 10;
 select * from numbers(10) FORMAT JSONCompact settings max_result_rows = 10, output_format_write_statistics = 0;
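max_result_rows only throws because result_overflow_mode defaults to 'throw'; with 'break' the server truncates the result at a block boundary instead, which is why the test above pairs it with max_block_size = 1 to make the cut exact. The two behaviors side by side, as a sketch:

select * from numbers(100) settings max_result_rows = 10;                                              -- TOO_MANY_ROWS_OR_BYTES
select * from numbers(100) settings max_result_rows = 10, result_overflow_mode = 'break', max_block_size = 1; -- truncated result, no error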
diff --git a/tests/queries/0_stateless/01555_system_distribution_queue_mask.sql b/tests/queries/0_stateless/01555_system_distribution_queue_mask.sql
index 3a90765226a..3c14eccb9ee 100644
--- a/tests/queries/0_stateless/01555_system_distribution_queue_mask.sql
+++ b/tests/queries/0_stateless/01555_system_distribution_queue_mask.sql
@@ -17,7 +17,7 @@ system stop distributed sends dist_01555;
 insert into dist_01555 values (1)(2);
 -- since test_cluster_with_incorrect_pw contains incorrect password ignore error
-system flush distributed dist_01555; -- { serverError 516 }
+system flush distributed dist_01555; -- { serverError AUTHENTICATION_FAILED }
 select length(splitByChar('*', data_path)), replaceRegexpOne(data_path, '^.*/([^/]*)/' , '\\1'), extract(last_exception, 'AUTHENTICATION_FAILED'), dateDiff('s', last_exception_time, now()) < 3600 from system.distribution_queue where database = currentDatabase() and table = 'dist_01555' format CSV;

 drop table dist_01555;
@@ -30,7 +30,7 @@ create table dist_01555 (key Int) Engine=Distributed(test_cluster_with_incorrect
 insert into dist_01555 values (1)(2);
 -- since test_cluster_with_incorrect_pw contains incorrect password ignore error
-system flush distributed dist_01555; -- { serverError 516 }
+system flush distributed dist_01555; -- { serverError AUTHENTICATION_FAILED }
 select length(splitByChar('*', data_path)), replaceRegexpOne(data_path, '^.*/([^/]*)/' , '\\1'), extract(last_exception, 'AUTHENTICATION_FAILED'), dateDiff('s', last_exception_time, now()) < 3600 from system.distribution_queue where database = currentDatabase() and table = 'dist_01555' format CSV;

 drop table dist_01555;
diff --git a/tests/queries/0_stateless/01560_crash_in_agg_empty_arglist.sql b/tests/queries/0_stateless/01560_crash_in_agg_empty_arglist.sql
index a66ae5a7024..8cf8cc1add6 100644
--- a/tests/queries/0_stateless/01560_crash_in_agg_empty_arglist.sql
+++ b/tests/queries/0_stateless/01560_crash_in_agg_empty_arglist.sql
@@ -2,4 +2,4 @@ SELECT 1;

 SYSTEM FLUSH LOGS;

-SELECT any() as t, substring(query, 1, 70) AS query, avg(memory_usage) usage, count() count FROM system.query_log WHERE current_database = currentDatabase() AND event_date >= toDate(1604295323) AND event_time >= toDateTime(1604295323) AND type in (1,2,3,4) and initial_user in ('') and('all' = 'all' or(positionCaseInsensitive(query, 'all') = 1)) GROUP BY query ORDER BY usage desc LIMIT 5; -- { serverError 42 }
+SELECT any() as t, substring(query, 1, 70) AS query, avg(memory_usage) usage, count() count FROM system.query_log WHERE current_database = currentDatabase() AND event_date >= toDate(1604295323) AND event_time >= toDateTime(1604295323) AND type in (1,2,3,4) and initial_user in ('') and('all' = 'all' or(positionCaseInsensitive(query, 'all') = 1)) GROUP BY query ORDER BY usage desc LIMIT 5; -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
diff --git a/tests/queries/0_stateless/01560_mann_whitney.sql b/tests/queries/0_stateless/01560_mann_whitney.sql
index e3a9b4ecd03..610f90958cf 100644
--- a/tests/queries/0_stateless/01560_mann_whitney.sql
+++ b/tests/queries/0_stateless/01560_mann_whitney.sql
@@ -6,5 +6,5 @@ SELECT '223.0', '0.5426959774289482';
 WITH mannWhitneyUTest(left, right) AS pair SELECT roundBankers(pair.1, 16) as t_stat, roundBankers(pair.2, 16) as p_value from mann_whitney_test;
 WITH mannWhitneyUTest('two-sided', 1)(left, right) as pair SELECT roundBankers(pair.1, 16) as t_stat, roundBankers(pair.2, 16) as p_value from mann_whitney_test;
 WITH mannWhitneyUTest('two-sided')(left, right) as pair SELECT roundBankers(pair.1, 16) as t_stat, roundBankers(pair.2, 16) as p_value from mann_whitney_test;
-WITH mannWhitneyUTest('two-sided')(1, right) AS pair SELECT roundBankers(pair.1, 16) AS t_stat, roundBankers(pair.2, 16) AS p_value FROM mann_whitney_test; --{serverError 36}
+WITH mannWhitneyUTest('two-sided')(1, right) AS pair SELECT roundBankers(pair.1, 16) AS t_stat, roundBankers(pair.2, 16) AS p_value FROM mann_whitney_test; --{serverError BAD_ARGUMENTS}
 DROP TABLE IF EXISTS mann_whitney_test;
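Every edit in this patch leans on the clickhouse-test hint mechanism: a trailing `-- { serverError X }` or `-- { clientError X }` comment makes the harness expect that exact failure, and a comma-separated list accepts any of the listed codes. Symbolic names resolve to the same codes as the numbers they replace, so the two spellings below are interchangeable, which is also why a few mixed hints in this patch can keep a numeric entry:

select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER }
select nonexistent column; -- { serverError 47 }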
diff --git a/tests/queries/0_stateless/01564_test_hint_woes.reference b/tests/queries/0_stateless/01564_test_hint_woes.reference
index 9ce4572eab4..d1c938deb58 100644
--- a/tests/queries/0_stateless/01564_test_hint_woes.reference
+++ b/tests/queries/0_stateless/01564_test_hint_woes.reference
@@ -8,17 +8,17 @@ insert into values_01564 values ('f'); -- { clientError 6 }
 select 1;
 1
 insert into values_01564 values ('f'); -- { clientError 6 }
-select nonexistent column; -- { serverError 47 }
+select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER }
 select 1;
 1
-select nonexistent column; -- { serverError 47 }
+select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER }
 -- server error hint after broken insert values (violated constraint)
-insert into values_01564 values (11); -- { serverError 469 }
-insert into values_01564 values (11); -- { serverError 469 }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }
 select 1;
 1
-insert into values_01564 values (11); -- { serverError 469 }
-select nonexistent column; -- { serverError 47 }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }
+select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER }
 -- query after values on the same line
 insert into values_01564 values (1); select 1;
 select 1;
@@ -28,6 +28,6 @@ CREATE TABLE t0 (c0 String, c1 Int32) ENGINE = Memory() ;
 INSERT INTO t0(c0, c1) VALUES ("1",1) ; -- { clientError 47 }
 INSERT INTO t0(c0, c1) VALUES ('1', 1) ;
 -- the return code must be zero after the final query has failed with expected error
-insert into values_01564 values (11); -- { serverError 469 }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }
 drop table t0;
 drop table values_01564;
diff --git a/tests/queries/0_stateless/01564_test_hint_woes.sql b/tests/queries/0_stateless/01564_test_hint_woes.sql
index fee85130b03..dd2c1accd4a 100644
--- a/tests/queries/0_stateless/01564_test_hint_woes.sql
+++ b/tests/queries/0_stateless/01564_test_hint_woes.sql
@@ -10,7 +10,7 @@ insert into values_01564 values ('f'); -- { clientError 6 }
 select 1;
 insert into values_01564 values ('f'); -- { clientError 6 }
-select nonexistent column; -- { serverError 47 }
+select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER }

 -- syntax error hint after broken insert values
 insert into values_01564 this is bad syntax values ('f'); -- { clientError 62 }
@@ -19,22 +19,22 @@ insert into values_01564 this is bad syntax values ('f'); -- { clientError 62 }
 select 1;
 insert into values_01564 this is bad syntax values ('f'); -- { clientError 62 }
-select nonexistent column; -- { serverError 47 }
+select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER }

 -- server error hint after broken insert values (violated constraint)
-insert into values_01564 values (11); -- { serverError 469 }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }

-insert into values_01564 values (11); -- { serverError 469 }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }
 select 1;

-insert into values_01564 values (11); -- { serverError 469 }
-select nonexistent column; -- { serverError 47 }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }
+select nonexistent column; -- { serverError UNKNOWN_IDENTIFIER }

 -- query after values on the same line
 insert into values_01564 values (1); select 1;

 -- even this works (not sure why we need it lol)
--- insert into values_01564 values (11) /*{ serverError 469 }*/; select 1;
+-- insert into values_01564 values (11) /*{ serverError VIOLATED_CONSTRAINT }*/; select 1;

 -- syntax error, where the last token we can parse is long before the semicolon.
 select this is too many words for an alias; -- { clientError 62 }
@@ -48,7 +48,7 @@ INSERT INTO t0(c0, c1) VALUES ("1",1) ; -- { clientError 47 }
 INSERT INTO t0(c0, c1) VALUES ('1', 1) ;

 -- the return code must be zero after the final query has failed with expected error
-insert into values_01564 values (11); -- { serverError 469 }
+insert into values_01564 values (11); -- { serverError VIOLATED_CONSTRAINT }
 drop table t0;
 drop table values_01564;
diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.reference b/tests/queries/0_stateless/01568_window_functions_distributed.reference
index 29ff2e7133c..3d3b8d55b43 100644
--- a/tests/queries/0_stateless/01568_window_functions_distributed.reference
+++ b/tests/queries/0_stateless/01568_window_functions_distributed.reference
@@ -86,7 +86,7 @@ select groupArray(groupArray(number)) over (rows unbounded preceding) as x from
 [[0,3,6]]
 [[0,3,6],[1,4,7]]
 [[0,3,6],[1,4,7],[2,5,8]]
-select groupArray(groupArray(number)) over (rows unbounded preceding) as x from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) order by x settings distributed_group_by_no_merge=2; -- { serverError 48 }
+select groupArray(groupArray(number)) over (rows unbounded preceding) as x from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) order by x settings distributed_group_by_no_merge=2; -- { serverError NOT_IMPLEMENTED }
 -- proper ORDER BY w/window functions
 select p, o, count() over (partition by p)
 from remote('127.0.0.{1,2}', '', t_01568)
diff --git a/tests/queries/0_stateless/01568_window_functions_distributed.sql b/tests/queries/0_stateless/01568_window_functions_distributed.sql
index ecce7b412ba..45bb1ad3940 100644
--- a/tests/queries/0_stateless/01568_window_functions_distributed.sql
+++ b/tests/queries/0_stateless/01568_window_functions_distributed.sql
@@ -26,7 +26,7 @@ select distinct sum(number) over w as x, max(number) over w as y from remote('12
 -- window functions + aggregation w/shards
 select groupArray(groupArray(number)) over (rows unbounded preceding) as x from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) order by x;
 select groupArray(groupArray(number)) over (rows unbounded preceding) as x from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) order by x settings distributed_group_by_no_merge=1;
-select groupArray(groupArray(number)) over (rows unbounded preceding) as x from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) order by x settings distributed_group_by_no_merge=2; -- { serverError 48 }
+select groupArray(groupArray(number)) over (rows unbounded preceding) as x from remote('127.0.0.{1,2}', '', t_01568) group by mod(number, 3) order by x settings distributed_group_by_no_merge=2; -- { serverError NOT_IMPLEMENTED }

 -- proper ORDER BY w/window functions
 select p, o, count() over (partition by p)
diff --git a/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.reference b/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.reference
index 351c70637c0..6cdbd9c5cdb 100644
--- a/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.reference
+++ b/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.reference
@@ -28,4 +28,4 @@ SimpleAggregateFunction(groupArrayArray, Array(UInt64)) [0]
 with groupUniqArrayArraySimpleState([number]) as c select toTypeName(c), c from numbers(1);
 SimpleAggregateFunction(groupUniqArrayArray, Array(UInt64)) [0]
 -- non-SimpleAggregateFunction
-with countSimpleState(number) as c select toTypeName(c), c from numbers(1); -- { serverError 36 }
+with countSimpleState(number) as c select toTypeName(c), c from numbers(1); -- { serverError BAD_ARGUMENTS }
diff --git a/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.sql b/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.sql
index 94f0589670f..7417b8643db 100644
--- a/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.sql
+++ b/tests/queries/0_stateless/01570_aggregator_combinator_simple_state.sql
@@ -15,4 +15,4 @@ with groupArrayArraySimpleState([number]) as c select toTypeName(c), c from numb
 with groupUniqArrayArraySimpleState([number]) as c select toTypeName(c), c from numbers(1);

 -- non-SimpleAggregateFunction
-with countSimpleState(number) as c select toTypeName(c), c from numbers(1); -- { serverError 36 }
+with countSimpleState(number) as c select toTypeName(c), c from numbers(1); -- { serverError BAD_ARGUMENTS }
diff --git a/tests/queries/0_stateless/01571_window_functions.reference b/tests/queries/0_stateless/01571_window_functions.reference
index 62741848958..2cc5b9737f3 100644
--- a/tests/queries/0_stateless/01571_window_functions.reference
+++ b/tests/queries/0_stateless/01571_window_functions.reference
@@ -46,7 +46,7 @@ select count() over (rows between 1 + 1 preceding and 1 + 1 following) from numb
 4
 3
 -- signed and unsigned in offset do not cause logical error
-select count() over (rows between 2 following and 1 + -1 following) FROM numbers(10); -- { serverError 36 }
+select count() over (rows between 2 following and 1 + -1 following) FROM numbers(10); -- { serverError BAD_ARGUMENTS }
 -- default arguments of lagInFrame can be a subtype of the argument
 select number,
     lagInFrame(toNullable(number), 2, null) over w,
diff --git a/tests/queries/0_stateless/01571_window_functions.sql b/tests/queries/0_stateless/01571_window_functions.sql
index 4cad5c5c40b..dfe9c4376e5 100644
--- a/tests/queries/0_stateless/01571_window_functions.sql
+++ b/tests/queries/0_stateless/01571_window_functions.sql
@@ -30,7 +30,7 @@ drop table order_by_const;
 select count() over (rows between 1 + 1 preceding and 1 + 1 following) from numbers(10);

 -- signed and unsigned in offset do not cause logical error
-select count() over (rows between 2 following and 1 + -1 following) FROM numbers(10); -- { serverError 36 }
+select count() over (rows between 2 following and 1 + -1 following) FROM numbers(10); -- { serverError BAD_ARGUMENTS }

 -- default arguments of lagInFrame can be a subtype of the argument
 select number,
diff --git a/tests/queries/0_stateless/01575_disable_detach_table_of_dictionary.sql b/tests/queries/0_stateless/01575_disable_detach_table_of_dictionary.sql
index 2cf9ce661b6..60bf817fca1 100644
--- a/tests/queries/0_stateless/01575_disable_detach_table_of_dictionary.sql
+++ b/tests/queries/0_stateless/01575_disable_detach_table_of_dictionary.sql
@@ -13,11 +13,11 @@ SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'table_fo
 LIFETIME(MIN 1 MAX 10)
 LAYOUT(FLAT());

-DETACH TABLE database_for_dict.dict1; -- { serverError 520 }
+DETACH TABLE database_for_dict.dict1; -- { serverError CANNOT_DETACH_DICTIONARY_AS_TABLE }

 DETACH DICTIONARY database_for_dict.dict1;

-ATTACH TABLE database_for_dict.dict1; -- { serverError 80 }
+ATTACH TABLE database_for_dict.dict1; -- { serverError INCORRECT_QUERY }

 ATTACH DICTIONARY database_for_dict.dict1;
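The pairing above is deliberate: an object created as a dictionary can only be detached and attached with the DICTIONARY verbs, and the TABLE verbs fail in both directions, so a dictionary can never be re-attached as a plain table. Restated compactly:

DETACH TABLE database_for_dict.dict1;      -- CANNOT_DETACH_DICTIONARY_AS_TABLE
DETACH DICTIONARY database_for_dict.dict1; -- ok
ATTACH TABLE database_for_dict.dict1;      -- INCORRECT_QUERY
ATTACH DICTIONARY database_for_dict.dict1; -- ok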
b/tests/queries/0_stateless/01579_date_datetime_index_comparison.sql index 60de837f8fc..c1ba86b016d 100644 --- a/tests/queries/0_stateless/01579_date_datetime_index_comparison.sql +++ b/tests/queries/0_stateless/01579_date_datetime_index_comparison.sql @@ -10,7 +10,7 @@ drop table if exists test_index; select toTypeName([-1, toUInt32(1)]); -- We don't promote to wide integers -select toTypeName([-1, toUInt64(1)]); -- { serverError 386 } +select toTypeName([-1, toUInt64(1)]); -- { serverError NO_COMMON_TYPE } select toTypeName([-1, toInt128(1)]); select toTypeName([toInt64(-1), toInt128(1)]); select toTypeName([toUInt64(1), toUInt256(1)]); diff --git a/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql b/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql index 0f10052667c..594a2f71162 100644 --- a/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql +++ b/tests/queries/0_stateless/01581_deduplicate_by_columns_local.sql @@ -25,12 +25,12 @@ PARTITION BY (partition_key + 1) -- ensure that column in expression is properly ORDER BY (pk, toString(sk * 10)); -- silly order key to ensure that key column is checked even when it is a part of expression. See [1] below. -- ERROR cases -OPTIMIZE TABLE full_duplicates DEDUPLICATE BY pk, sk, val, mat, alias; -- { serverError 16 } -- alias column is present -OPTIMIZE TABLE full_duplicates DEDUPLICATE BY sk, val; -- { serverError 8 } -- primary key column is missing -OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(pk, sk, val, mat, alias, partition_key); -- { serverError 51 } -- list is empty -OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(pk); -- { serverError 8 } -- primary key column is missing [1] -OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(sk); -- { serverError 8 } -- sorting key column is missing [1] -OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(partition_key); -- { serverError 8 } -- partitioning column is missing [1] +OPTIMIZE TABLE full_duplicates DEDUPLICATE BY pk, sk, val, mat, alias; -- { serverError NO_SUCH_COLUMN_IN_TABLE } -- alias column is present +OPTIMIZE TABLE full_duplicates DEDUPLICATE BY sk, val; -- { serverError THERE_IS_NO_COLUMN } -- primary key column is missing +OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(pk, sk, val, mat, alias, partition_key); -- { serverError EMPTY_LIST_OF_COLUMNS_QUERIED } -- list is empty +OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(pk); -- { serverError THERE_IS_NO_COLUMN } -- primary key column is missing [1] +OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(sk); -- { serverError THERE_IS_NO_COLUMN } -- sorting key column is missing [1] +OPTIMIZE TABLE full_duplicates DEDUPLICATE BY * EXCEPT(partition_key); -- { serverError THERE_IS_NO_COLUMN } -- partitioning column is missing [1] OPTIMIZE TABLE full_duplicates DEDUPLICATE BY; -- { clientError 62 } -- empty list is a syntax error OPTIMIZE TABLE partial_duplicates DEDUPLICATE BY pk,sk,val,mat EXCEPT mat; -- { clientError 62 } -- invalid syntax diff --git a/tests/queries/0_stateless/01581_to_int_inf_nan.sql b/tests/queries/0_stateless/01581_to_int_inf_nan.sql index 4959b4d61e9..04679f239d5 100644 --- a/tests/queries/0_stateless/01581_to_int_inf_nan.sql +++ b/tests/queries/0_stateless/01581_to_int_inf_nan.sql @@ -1,10 +1,10 @@ -SELECT toInt64(inf); -- { serverError 70 } -SELECT toInt128(inf); -- { serverError 70 } -SELECT toInt256(inf); -- { serverError 70 } -SELECT toInt64(nan); -- { serverError 70 } -SELECT toInt128(nan); -- { serverError 70 } -SELECT 
toInt256(nan); -- { serverError 70 } -SELECT toUInt64(inf); -- { serverError 70 } -SELECT toUInt256(inf); -- { serverError 70 } -SELECT toUInt64(nan); -- { serverError 70 } -SELECT toUInt256(nan); -- { serverError 70 } +SELECT toInt64(inf); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toInt128(inf); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toInt256(inf); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toInt64(nan); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toInt128(nan); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toInt256(nan); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toUInt64(inf); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toUInt256(inf); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toUInt64(nan); -- { serverError CANNOT_CONVERT_TYPE } +SELECT toUInt256(nan); -- { serverError CANNOT_CONVERT_TYPE } diff --git a/tests/queries/0_stateless/01586_columns_pruning.sql b/tests/queries/0_stateless/01586_columns_pruning.sql index 598e64b9fe4..8ed7beb07e5 100644 --- a/tests/queries/0_stateless/01586_columns_pruning.sql +++ b/tests/queries/0_stateless/01586_columns_pruning.sql @@ -3,4 +3,4 @@ SET max_memory_usage = 10000000000; -- Unneeded column is removed from subquery. SELECT count() FROM (SELECT number, groupArray(repeat(toString(number), 1000000)) FROM numbers(1000000) GROUP BY number); -- Unneeded column cannot be removed from subquery and the query is out of memory -SELECT count() FROM (SELECT number, groupArray(repeat(toString(number), 1000000)) AS agg FROM numbers(1000000) GROUP BY number HAVING notEmpty(agg)); -- { serverError 241 } +SELECT count() FROM (SELECT number, groupArray(repeat(toString(number), 1000000)) AS agg FROM numbers(1000000) GROUP BY number HAVING notEmpty(agg)); -- { serverError MEMORY_LIMIT_EXCEEDED } diff --git a/tests/queries/0_stateless/01591_window_functions.reference b/tests/queries/0_stateless/01591_window_functions.reference index 156f36f7dba..4433bbbd620 100644 --- a/tests/queries/0_stateless/01591_window_functions.reference +++ b/tests/queries/0_stateless/01591_window_functions.reference @@ -25,7 +25,7 @@ select number, max(number) over (partition by intDiv(number, 3) order by number 6 8 9 9 -- not a window function -select number, abs(number) over (partition by toString(intDiv(number, 3)) rows unbounded preceding) from numbers(10); -- { serverError 63 } +select number, abs(number) over (partition by toString(intDiv(number, 3)) rows unbounded preceding) from numbers(10); -- { serverError UNKNOWN_AGGREGATE_FUNCTION } -- no partition by select number, avg(number) over (order by number rows unbounded preceding) from numbers(10); 0 0 @@ -1060,7 +1060,7 @@ settings max_block_size = 3; -- careful with auto-application of Null combinator select lagInFrame(toNullable(1)) over (); \N -select lagInFrameOrNull(1) over (); -- { serverError 36 } +select lagInFrameOrNull(1) over (); -- { serverError BAD_ARGUMENTS } -- this is the same as `select max(Null::Nullable(Nothing))` select intDiv(1, NULL) x, toTypeName(x), max(x) over (); \N Nullable(Nothing) \N @@ -1207,10 +1207,10 @@ from numbers(7) 6 6 2 2 6 6 1 1 -- negative offsets should not be allowed -select count() over (order by toInt64(number) range between -1 preceding and unbounded following) from numbers(1); -- { serverError 36 } -select count() over (order by toInt64(number) range between -1 following and unbounded following) from numbers(1); -- { serverError 36 } -select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { 
serverError 36 } -select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between -1 preceding and unbounded following) from numbers(1); -- { serverError BAD_ARGUMENTS } +select count() over (order by toInt64(number) range between -1 following and unbounded following) from numbers(1); -- { serverError BAD_ARGUMENTS } +select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { serverError BAD_ARGUMENTS } +select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError BAD_ARGUMENTS } -- a test with aggregate function that allocates memory in arena select sum(a[length(a)]) from ( @@ -1243,7 +1243,7 @@ from 3 -- -INT_MIN row offset that can lead to problems with negation, found when fuzzing -- under UBSan. Should be limited to at most INT_MAX. -select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } +select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError BAD_ARGUMENTS } -- Somehow in this case WindowTransform gets empty input chunks not marked as -- input end, and then two (!) empty input chunks marked as input end. Whatever. select count() over () from (select 1 a) l inner join (select 2 a) r using a; @@ -1267,13 +1267,13 @@ order by p, o, number 5 4 8 5 -- can't redefine PARTITION BY -select count() over (w partition by number) from numbers(1) window w as (partition by intDiv(number, 5)); -- { serverError 36 } +select count() over (w partition by number) from numbers(1) window w as (partition by intDiv(number, 5)); -- { serverError BAD_ARGUMENTS } -- can't redefine existing ORDER BY -select count() over (w order by number) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3)); -- { serverError 36 } +select count() over (w order by number) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3)); -- { serverError BAD_ARGUMENTS } -- parent window can't have frame -select count() over (w range unbounded preceding) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3) rows unbounded preceding); -- { serverError 36 } +select count() over (w range unbounded preceding) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3) rows unbounded preceding); -- { serverError BAD_ARGUMENTS } -- looks weird but probably should work -- this is a window that inherits and changes nothing select count() over (w) from numbers(1) window w as (); 1 -- nonexistent parent window -select count() over (w2 rows unbounded preceding); -- { serverError 36 } +select count() over (w2 rows unbounded preceding); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01591_window_functions.sql b/tests/queries/0_stateless/01591_window_functions.sql index 952a66616a9..b821ba13721 100644 --- a/tests/queries/0_stateless/01591_window_functions.sql +++ b/tests/queries/0_stateless/01591_window_functions.sql @@ -15,7 +15,7 @@ select number, count() over (partition by intDiv(number, 3) order by number rows select number, max(number) over (partition by intDiv(number, 3) order by number desc rows unbounded preceding) from numbers(10) settings max_block_size = 2; -- not a window function -select number, abs(number) over 
(partition by toString(intDiv(number, 3)) rows unbounded preceding) from numbers(10); -- { serverError 63 } +select number, abs(number) over (partition by toString(intDiv(number, 3)) rows unbounded preceding) from numbers(10); -- { serverError UNKNOWN_AGGREGATE_FUNCTION } -- no partition by select number, avg(number) over (order by number rows unbounded preceding) from numbers(10); @@ -383,7 +383,7 @@ settings max_block_size = 3; -- careful with auto-application of Null combinator select lagInFrame(toNullable(1)) over (); -select lagInFrameOrNull(1) over (); -- { serverError 36 } +select lagInFrameOrNull(1) over (); -- { serverError BAD_ARGUMENTS } -- this is the same as `select max(Null::Nullable(Nothing))` select intDiv(1, NULL) x, toTypeName(x), max(x) over (); -- to make lagInFrame return null for out-of-frame rows, cast the argument to @@ -486,10 +486,10 @@ from numbers(7) ; -- negative offsets should not be allowed -select count() over (order by toInt64(number) range between -1 preceding and unbounded following) from numbers(1); -- { serverError 36 } -select count() over (order by toInt64(number) range between -1 following and unbounded following) from numbers(1); -- { serverError 36 } -select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { serverError 36 } -select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError 36 } +select count() over (order by toInt64(number) range between -1 preceding and unbounded following) from numbers(1); -- { serverError BAD_ARGUMENTS } +select count() over (order by toInt64(number) range between -1 following and unbounded following) from numbers(1); -- { serverError BAD_ARGUMENTS } +select count() over (order by toInt64(number) range between unbounded preceding and -1 preceding) from numbers(1); -- { serverError BAD_ARGUMENTS } +select count() over (order by toInt64(number) range between unbounded preceding and -1 following) from numbers(1); -- { serverError BAD_ARGUMENTS } -- a test with aggregate function that allocates memory in arena select sum(a[length(a)]) @@ -521,7 +521,7 @@ from -- -INT_MIN row offset that can lead to problems with negation, found when fuzzing -- under UBSan. Should be limited to at most INT_MAX. -select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError 36 } +select count() over (rows between 2147483648 preceding and 2147493648 following) from numbers(2); -- { serverError BAD_ARGUMENTS } -- Somehow in this case WindowTransform gets empty input chunks not marked as -- input end, and then two (!) empty input chunks marked as input end. Whatever. 
@@ -538,16 +538,16 @@ order by p, o, number ; -- can't redefine PARTITION BY -select count() over (w partition by number) from numbers(1) window w as (partition by intDiv(number, 5)); -- { serverError 36 } +select count() over (w partition by number) from numbers(1) window w as (partition by intDiv(number, 5)); -- { serverError BAD_ARGUMENTS } -- can't redefine existing ORDER BY -select count() over (w order by number) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3)); -- { serverError 36 } +select count() over (w order by number) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3)); -- { serverError BAD_ARGUMENTS } -- parent window can't have frame -select count() over (w range unbounded preceding) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3) rows unbounded preceding); -- { serverError 36 } +select count() over (w range unbounded preceding) from numbers(1) window w as (partition by intDiv(number, 5) order by mod(number, 3) rows unbounded preceding); -- { serverError BAD_ARGUMENTS } -- looks weird but probably should work -- this is a window that inherits and changes nothing select count() over (w) from numbers(1) window w as (); -- nonexistent parent window -select count() over (w2 rows unbounded preceding); -- { serverError 36 } +select count() over (w2 rows unbounded preceding); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01598_memory_limit_zeros.sql b/tests/queries/0_stateless/01598_memory_limit_zeros.sql index cc2a75e023e..45e34c7c8df 100644 --- a/tests/queries/0_stateless/01598_memory_limit_zeros.sql +++ b/tests/queries/0_stateless/01598_memory_limit_zeros.sql @@ -1,4 +1,4 @@ -- Tags: no-parallel, no-fasttest, no-random-settings SET max_memory_usage = 1, max_untracked_memory = 1000000, max_threads=40; -select 'test', count(*) from zeros_mt(1000000) where not ignore(zero); -- { serverError 241 } +select 'test', count(*) from zeros_mt(1000000) where not ignore(zero); -- { serverError MEMORY_LIMIT_EXCEEDED } diff --git a/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.sql b/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.sql index b3739af93f8..3e4bf124a27 100644 --- a/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.sql +++ b/tests/queries/0_stateless/01600_remerge_sort_lowered_memory_bytes_ratio.sql @@ -12,8 +12,8 @@ set max_block_size=40960; -- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 819200 rows) to save memory consumption -- MergeSortingTransform: Memory usage is lowered from 186.25 MiB to 95.00 MiB -- MergeSortingTransform: Re-merging is not useful (memory usage was not lowered by remerge_sort_lowered_memory_bytes_ratio=2.0) -select number k, repeat(toString(number), 11) v1, repeat(toString(number), 12) v2 from numbers(3e6) order by v1, v2 limit 400e3 format Null; -- { serverError 241 } -select number k, repeat(toString(number), 11) v1, repeat(toString(number), 12) v2 from numbers(3e6) order by v1, v2 limit 400e3 settings remerge_sort_lowered_memory_bytes_ratio=2. 
format Null; -- { serverError 241 } +select number k, repeat(toString(number), 11) v1, repeat(toString(number), 12) v2 from numbers(3e6) order by v1, v2 limit 400e3 format Null; -- { serverError MEMORY_LIMIT_EXCEEDED } +select number k, repeat(toString(number), 11) v1, repeat(toString(number), 12) v2 from numbers(3e6) order by v1, v2 limit 400e3 settings remerge_sort_lowered_memory_bytes_ratio=2. format Null; -- { serverError MEMORY_LIMIT_EXCEEDED } -- remerge_sort_lowered_memory_bytes_ratio 1.9 is good (need at least 1.91/0.98=1.94) -- MergeSortingTransform: Re-merging intermediate ORDER BY data (20 blocks with 819200 rows) to save memory consumption diff --git a/tests/queries/0_stateless/01600_select_in_different_types.sql b/tests/queries/0_stateless/01600_select_in_different_types.sql index 25d37c122e0..a9eb6ed2acf 100644 --- a/tests/queries/0_stateless/01600_select_in_different_types.sql +++ b/tests/queries/0_stateless/01600_select_in_different_types.sql @@ -30,6 +30,6 @@ SELECT '1' IN (SELECT 1); SELECT 1 IN (SELECT 1) SETTINGS transform_null_in = 1; SELECT 1 IN (SELECT 'a') SETTINGS transform_null_in = 1; -SELECT 'a' IN (SELECT 1) SETTINGS transform_null_in = 1; -- { serverError 6 } +SELECT 'a' IN (SELECT 1) SETTINGS transform_null_in = 1; -- { serverError CANNOT_PARSE_TEXT } SELECT 1 IN (SELECT -1) SETTINGS transform_null_in = 1; -SELECT -1 IN (SELECT 1) SETTINGS transform_null_in = 1; -- { serverError 70 } +SELECT -1 IN (SELECT 1) SETTINGS transform_null_in = 1; -- { serverError CANNOT_CONVERT_TYPE } diff --git a/tests/queries/0_stateless/01601_detach_permanently.sql b/tests/queries/0_stateless/01601_detach_permanently.sql index 6ab3a7f9b21..9f2ecaeadfe 100644 --- a/tests/queries/0_stateless/01601_detach_permanently.sql +++ b/tests/queries/0_stateless/01601_detach_permanently.sql @@ -14,26 +14,26 @@ INSERT INTO test1601_detach_permanently_atomic.test_name_reuse SELECT * FROM num DETACH table test1601_detach_permanently_atomic.test_name_reuse PERMANENTLY; SELECT 'can not create table with same name as detached permanently'; -create table test1601_detach_permanently_atomic.test_name_reuse (number UInt64) engine=MergeTree order by tuple(); -- { serverError 57 } +create table test1601_detach_permanently_atomic.test_name_reuse (number UInt64) engine=MergeTree order by tuple(); -- { serverError TABLE_ALREADY_EXISTS } SELECT 'can not detach twice'; -DETACH table test1601_detach_permanently_atomic.test_name_reuse PERMANENTLY; -- { serverError 60 } -DETACH table test1601_detach_permanently_atomic.test_name_reuse; -- { serverError 60 } +DETACH table test1601_detach_permanently_atomic.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } +DETACH table test1601_detach_permanently_atomic.test_name_reuse; -- { serverError UNKNOWN_TABLE } SELECT 'can not drop detached'; -drop table test1601_detach_permanently_atomic.test_name_reuse; -- { serverError 60 } +drop table test1601_detach_permanently_atomic.test_name_reuse; -- { serverError UNKNOWN_TABLE } create table test1601_detach_permanently_atomic.test_name_rename_attempt (number UInt64) engine=MergeTree order by tuple(); SELECT 'can not replace with the other table'; -RENAME TABLE test1601_detach_permanently_atomic.test_name_rename_attempt TO test1601_detach_permanently_atomic.test_name_reuse; -- { serverError 57 } -EXCHANGE TABLES test1601_detach_permanently_atomic.test_name_rename_attempt AND test1601_detach_permanently_atomic.test_name_reuse; -- { serverError 60 } +RENAME TABLE 
test1601_detach_permanently_atomic.test_name_rename_attempt TO test1601_detach_permanently_atomic.test_name_reuse; -- { serverError TABLE_ALREADY_EXISTS } +EXCHANGE TABLES test1601_detach_permanently_atomic.test_name_rename_attempt AND test1601_detach_permanently_atomic.test_name_reuse; -- { serverError UNKNOWN_TABLE } SELECT 'can still show the create statement'; SHOW CREATE TABLE test1601_detach_permanently_atomic.test_name_reuse FORMAT Vertical; SELECT 'can not attach with bad uuid'; -ATTACH TABLE test1601_detach_permanently_atomic.test_name_reuse UUID '00000000-0000-0000-0000-000000000001' (`number` UInt64 ) ENGINE = MergeTree ORDER BY tuple() SETTINGS index_granularity = 8192 ; -- { serverError 57 } +ATTACH TABLE test1601_detach_permanently_atomic.test_name_reuse UUID '00000000-0000-0000-0000-000000000001' (`number` UInt64 ) ENGINE = MergeTree ORDER BY tuple() SETTINGS index_granularity = 8192 ; -- { serverError TABLE_ALREADY_EXISTS } SELECT 'can attach with short syntax'; ATTACH TABLE test1601_detach_permanently_atomic.test_name_reuse; @@ -43,7 +43,7 @@ SELECT count() FROM test1601_detach_permanently_atomic.test_name_reuse; DETACH table test1601_detach_permanently_atomic.test_name_reuse; SELECT 'can not detach permanently the table which is already detached (temporary)'; -DETACH table test1601_detach_permanently_atomic.test_name_reuse PERMANENTLY; -- { serverError 60 } +DETACH table test1601_detach_permanently_atomic.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } DETACH DATABASE test1601_detach_permanently_atomic; ATTACH DATABASE test1601_detach_permanently_atomic; @@ -59,7 +59,7 @@ ATTACH DATABASE test1601_detach_permanently_atomic; SELECT 'After database reattachement the table is still absent (it was detached permamently)'; SELECT 'And we can not detach it permanently'; -DETACH table test1601_detach_permanently_atomic.test_name_reuse PERMANENTLY; -- { serverError 60 } +DETACH table test1601_detach_permanently_atomic.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } SELECT 'But we can attach it back'; ATTACH TABLE test1601_detach_permanently_atomic.test_name_reuse; @@ -85,19 +85,19 @@ INSERT INTO test1601_detach_permanently_ordinary.test_name_reuse SELECT * FROM n DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; SELECT 'can not create table with same name as detached permanently'; -create table test1601_detach_permanently_ordinary.test_name_reuse (number UInt64) engine=MergeTree order by tuple(); -- { serverError 57 } +create table test1601_detach_permanently_ordinary.test_name_reuse (number UInt64) engine=MergeTree order by tuple(); -- { serverError TABLE_ALREADY_EXISTS } SELECT 'can not detach twice'; -DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; -- { serverError 60 } -DETACH table test1601_detach_permanently_ordinary.test_name_reuse; -- { serverError 60 } +DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } +DETACH table test1601_detach_permanently_ordinary.test_name_reuse; -- { serverError UNKNOWN_TABLE } SELECT 'can not drop detached'; -drop table test1601_detach_permanently_ordinary.test_name_reuse; -- { serverError 60 } +drop table test1601_detach_permanently_ordinary.test_name_reuse; -- { serverError UNKNOWN_TABLE } create table test1601_detach_permanently_ordinary.test_name_rename_attempt (number UInt64) engine=MergeTree order by tuple(); SELECT 'can not replace with the other table'; -RENAME TABLE 
test1601_detach_permanently_ordinary.test_name_rename_attempt TO test1601_detach_permanently_ordinary.test_name_reuse; -- { serverError 57 } +RENAME TABLE test1601_detach_permanently_ordinary.test_name_rename_attempt TO test1601_detach_permanently_ordinary.test_name_reuse; -- { serverError TABLE_ALREADY_EXISTS } SELECT 'can still show the create statement'; SHOW CREATE TABLE test1601_detach_permanently_ordinary.test_name_reuse FORMAT Vertical; @@ -112,7 +112,7 @@ ATTACH TABLE test1601_detach_permanently_ordinary.test_name_reuse; DETACH table test1601_detach_permanently_ordinary.test_name_reuse; SELECT 'can not detach permanently the table which is already detached (temporary)'; -DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; -- { serverError 60 } +DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } DETACH DATABASE test1601_detach_permanently_ordinary; ATTACH DATABASE test1601_detach_permanently_ordinary; @@ -126,7 +126,7 @@ ATTACH DATABASE test1601_detach_permanently_ordinary; SELECT 'After database reattachement the table is still absent (it was detached permamently)'; SELECT 'And we can not detach it permanently'; -DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; -- { serverError 60 } +DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } SELECT 'But we can attach it back'; ATTACH TABLE test1601_detach_permanently_ordinary.test_name_reuse; @@ -135,7 +135,7 @@ SELECT 'And detach permanently again to check how database drop will behave'; DETACH table test1601_detach_permanently_ordinary.test_name_reuse PERMANENTLY; SELECT 'DROP database - Directory not empty error, but database detached'; -DROP DATABASE test1601_detach_permanently_ordinary; -- { serverError 219 } +DROP DATABASE test1601_detach_permanently_ordinary; -- { serverError DATABASE_NOT_EMPTY } ATTACH DATABASE test1601_detach_permanently_ordinary; @@ -159,19 +159,19 @@ INSERT INTO test1601_detach_permanently_lazy.test_name_reuse SELECT * FROM numbe DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; SELECT 'can not create table with same name as detached permanently'; -create table test1601_detach_permanently_lazy.test_name_reuse (number UInt64) engine=Log; -- { serverError 57 } +create table test1601_detach_permanently_lazy.test_name_reuse (number UInt64) engine=Log; -- { serverError TABLE_ALREADY_EXISTS } SELECT 'can not detach twice'; -DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; -- { serverError 60 } -DETACH table test1601_detach_permanently_lazy.test_name_reuse; -- { serverError 60 } +DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } +DETACH table test1601_detach_permanently_lazy.test_name_reuse; -- { serverError UNKNOWN_TABLE } SELECT 'can not drop detached'; -drop table test1601_detach_permanently_lazy.test_name_reuse; -- { serverError 60 } +drop table test1601_detach_permanently_lazy.test_name_reuse; -- { serverError UNKNOWN_TABLE } create table test1601_detach_permanently_lazy.test_name_rename_attempt (number UInt64) engine=Log; SELECT 'can not replace with the other table'; -RENAME TABLE test1601_detach_permanently_lazy.test_name_rename_attempt TO test1601_detach_permanently_lazy.test_name_reuse; -- { serverError 57 } +RENAME TABLE test1601_detach_permanently_lazy.test_name_rename_attempt TO 
test1601_detach_permanently_lazy.test_name_reuse; -- { serverError TABLE_ALREADY_EXISTS } SELECT 'can still show the create statement'; SHOW CREATE TABLE test1601_detach_permanently_lazy.test_name_reuse FORMAT Vertical; @@ -186,7 +186,7 @@ ATTACH TABLE test1601_detach_permanently_lazy.test_name_reuse; DETACH table test1601_detach_permanently_lazy.test_name_reuse; SELECT 'can not detach permanently the table which is already detached (temporary)'; -DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; -- { serverError 60 } +DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } DETACH DATABASE test1601_detach_permanently_lazy; ATTACH DATABASE test1601_detach_permanently_lazy; @@ -200,7 +200,7 @@ ATTACH DATABASE test1601_detach_permanently_lazy; SELECT 'After database reattachement the table is still absent (it was detached permamently)'; SELECT 'And we can not detach it permanently'; -DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; -- { serverError 60 } +DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; -- { serverError UNKNOWN_TABLE } SELECT 'But we can attach it back'; ATTACH TABLE test1601_detach_permanently_lazy.test_name_reuse; @@ -209,7 +209,7 @@ SELECT 'And detach permanently again to check how database drop will behave'; DETACH table test1601_detach_permanently_lazy.test_name_reuse PERMANENTLY; SELECT 'DROP database - Directory not empty error, but database deteched'; -DROP DATABASE test1601_detach_permanently_lazy; -- { serverError 219 } +DROP DATABASE test1601_detach_permanently_lazy; -- { serverError DATABASE_NOT_EMPTY } ATTACH DATABASE test1601_detach_permanently_lazy; diff --git a/tests/queries/0_stateless/01602_insert_into_table_function_cluster.sql b/tests/queries/0_stateless/01602_insert_into_table_function_cluster.sql index 006cef24080..7c3e5608e80 100644 --- a/tests/queries/0_stateless/01602_insert_into_table_function_cluster.sql +++ b/tests/queries/0_stateless/01602_insert_into_table_function_cluster.sql @@ -9,10 +9,10 @@ INSERT INTO FUNCTION cluster('test_shard_localhost', currentDatabase(), x) SELEC INSERT INTO FUNCTION cluster('test_shard_localhost', currentDatabase(), x, rand()) SELECT * FROM numbers(10); -- More than one shard, sharding key is necessary -INSERT INTO FUNCTION cluster('test_cluster_two_shards_localhost', currentDatabase(), x) SELECT * FROM numbers(10); --{ serverError 55 } +INSERT INTO FUNCTION cluster('test_cluster_two_shards_localhost', currentDatabase(), x) SELECT * FROM numbers(10); --{ serverError STORAGE_REQUIRES_PARAMETER } INSERT INTO FUNCTION cluster('test_cluster_two_shards_localhost', currentDatabase(), x, rand()) SELECT * FROM numbers(10); -INSERT INTO FUNCTION remote('127.0.0.{1,2}', currentDatabase(), y, 'default') SELECT * FROM numbers(10); -- { serverError 55 } +INSERT INTO FUNCTION remote('127.0.0.{1,2}', currentDatabase(), y, 'default') SELECT * FROM numbers(10); -- { serverError STORAGE_REQUIRES_PARAMETER } INSERT INTO FUNCTION remote('127.0.0.{1,2}', currentDatabase(), y, 'default', rand()) SELECT * FROM numbers(10); SELECT * FROM x ORDER BY number; diff --git a/tests/queries/0_stateless/01602_modified_julian_day_msan.sql b/tests/queries/0_stateless/01602_modified_julian_day_msan.sql index d18665f0bcf..829229824da 100644 --- a/tests/queries/0_stateless/01602_modified_julian_day_msan.sql +++ b/tests/queries/0_stateless/01602_modified_julian_day_msan.sql @@ -1,4 +1,4 @@ -SELECT tryBase64Decode(( SELECT 
countSubstrings(toModifiedJulianDayOrNull('\0'), '') ) AS n, ( SELECT regionIn('l. ') ) AS srocpnuv); -- { serverError 43 } -SELECT countSubstrings(toModifiedJulianDayOrNull('\0'), ''); -- { serverError 43 } -SELECT countSubstrings(toInt32OrNull('123qwe123'), ''); -- { serverError 43 } +SELECT tryBase64Decode(( SELECT countSubstrings(toModifiedJulianDayOrNull('\0'), '') ) AS n, ( SELECT regionIn('l. ') ) AS srocpnuv); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT countSubstrings(toModifiedJulianDayOrNull('\0'), ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT countSubstrings(toInt32OrNull('123qwe123'), ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT 'Ok.'; diff --git a/tests/queries/0_stateless/01602_runningConcurrency.sql b/tests/queries/0_stateless/01602_runningConcurrency.sql index 55b3aae867a..0a08f116d86 100644 --- a/tests/queries/0_stateless/01602_runningConcurrency.sql +++ b/tests/queries/0_stateless/01602_runningConcurrency.sql @@ -35,17 +35,17 @@ DROP TABLE runningConcurrency_test; SELECT 'Erroneous cases'; -- Constant columns are currently not supported. -SELECT runningConcurrency(toDate(arrayJoin([1, 2])), toDate('2000-01-01')); -- { serverError 44 } +SELECT runningConcurrency(toDate(arrayJoin([1, 2])), toDate('2000-01-01')); -- { serverError ILLEGAL_COLUMN } -- Unsupported data types -SELECT runningConcurrency('strings are', 'not supported'); -- { serverError 43 } -SELECT runningConcurrency(NULL, NULL); -- { serverError 43 } -SELECT runningConcurrency(CAST(NULL, 'Nullable(DateTime)'), CAST(NULL, 'Nullable(DateTime)')); -- { serverError 43 } +SELECT runningConcurrency('strings are', 'not supported'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT runningConcurrency(NULL, NULL); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT runningConcurrency(CAST(NULL, 'Nullable(DateTime)'), CAST(NULL, 'Nullable(DateTime)')); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -- Mismatching data types -SELECT runningConcurrency(toDate('2000-01-01'), toDateTime('2000-01-01 00:00:00')); -- { serverError 43 } +SELECT runningConcurrency(toDate('2000-01-01'), toDateTime('2000-01-01 00:00:00')); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -- begin > end -SELECT runningConcurrency(toDate('2000-01-02'), toDate('2000-01-01')); -- { serverError 117 } +SELECT runningConcurrency(toDate('2000-01-02'), toDate('2000-01-01')); -- { serverError INCORRECT_DATA } diff --git a/tests/queries/0_stateless/01602_show_create_view.sql b/tests/queries/0_stateless/01602_show_create_view.sql index 1d4dd54b1c1..0aaabc2fa49 100644 --- a/tests/queries/0_stateless/01602_show_create_view.sql +++ b/tests/queries/0_stateless/01602_show_create_view.sql @@ -22,11 +22,11 @@ SHOW CREATE VIEW test_1602.v; SHOW CREATE VIEW test_1602.vv; -SHOW CREATE VIEW test_1602.not_exist_view; -- { serverError 390 } +SHOW CREATE VIEW test_1602.not_exist_view; -- { serverError CANNOT_GET_CREATE_TABLE_QUERY } -SHOW CREATE VIEW test_1602.tbl; -- { serverError 36 } +SHOW CREATE VIEW test_1602.tbl; -- { serverError BAD_ARGUMENTS } -SHOW CREATE TEMPORARY VIEW; -- { serverError 60 } +SHOW CREATE TEMPORARY VIEW; -- { serverError UNKNOWN_TABLE } SHOW CREATE VIEW; -- { clientError 62 } diff --git a/tests/queries/0_stateless/01603_insert_select_too_many_parts.sql b/tests/queries/0_stateless/01603_insert_select_too_many_parts.sql index d0832cdcc8e..276e3e015ee 100644 --- a/tests/queries/0_stateless/01603_insert_select_too_many_parts.sql +++ b/tests/queries/0_stateless/01603_insert_select_too_many_parts.sql @@ -12,6 +12,6 @@ INSERT INTO 
too_many_parts SELECT * FROM numbers(10) SETTINGS max_insert_threads SELECT count() FROM too_many_parts; -- exception is thrown if threshold is exceeded on new INSERT. -INSERT INTO too_many_parts SELECT * FROM numbers(10); -- { serverError 252 } +INSERT INTO too_many_parts SELECT * FROM numbers(10); -- { serverError TOO_MANY_PARTS } DROP TABLE too_many_parts; diff --git a/tests/queries/0_stateless/01603_rename_overwrite_bug.sql b/tests/queries/0_stateless/01603_rename_overwrite_bug.sql index cc283ab4292..9791daf010a 100644 --- a/tests/queries/0_stateless/01603_rename_overwrite_bug.sql +++ b/tests/queries/0_stateless/01603_rename_overwrite_bug.sql @@ -9,7 +9,7 @@ create database test_1603_rename_bug_ordinary engine=Ordinary; create table test_1603_rename_bug_ordinary.foo engine=Memory as select * from numbers(100); create table test_1603_rename_bug_ordinary.bar engine=Log as select * from numbers(200); detach table test_1603_rename_bug_ordinary.foo; -rename table test_1603_rename_bug_ordinary.bar to test_1603_rename_bug_ordinary.foo; -- { serverError 57 } +rename table test_1603_rename_bug_ordinary.bar to test_1603_rename_bug_ordinary.foo; -- { serverError TABLE_ALREADY_EXISTS } attach table test_1603_rename_bug_ordinary.foo; SELECT count() from test_1603_rename_bug_ordinary.foo; SELECT count() from test_1603_rename_bug_ordinary.bar; @@ -21,7 +21,7 @@ create database test_1603_rename_bug_atomic engine=Atomic; create table test_1603_rename_bug_atomic.foo engine=Memory as select * from numbers(100); create table test_1603_rename_bug_atomic.bar engine=Log as select * from numbers(200); detach table test_1603_rename_bug_atomic.foo; -rename table test_1603_rename_bug_atomic.bar to test_1603_rename_bug_atomic.foo; -- { serverError 57 } +rename table test_1603_rename_bug_atomic.bar to test_1603_rename_bug_atomic.foo; -- { serverError TABLE_ALREADY_EXISTS } attach table test_1603_rename_bug_atomic.foo; SELECT count() from test_1603_rename_bug_atomic.foo; SELECT count() from test_1603_rename_bug_atomic.bar; diff --git a/tests/queries/0_stateless/01611_string_to_low_cardinality_key_alter.sql b/tests/queries/0_stateless/01611_string_to_low_cardinality_key_alter.sql index 6478d33dfcc..3b03c82e2b4 100644 --- a/tests/queries/0_stateless/01611_string_to_low_cardinality_key_alter.sql +++ b/tests/queries/0_stateless/01611_string_to_low_cardinality_key_alter.sql @@ -21,7 +21,7 @@ ATTACH TABLE table_with_lc_key; SELECT * FROM table_with_lc_key WHERE enum_key > 0 and lc_key like 'h%'; ALTER TABLE table_with_lc_key MODIFY COLUMN enum_key Enum('x' = 2, 'y' = 1, 'z' = 3); -ALTER TABLE table_with_lc_key MODIFY COLUMN enum_key Enum16('x' = 2, 'y' = 1, 'z' = 3); --{serverError 524} +ALTER TABLE table_with_lc_key MODIFY COLUMN enum_key Enum16('x' = 2, 'y' = 1, 'z' = 3); --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} SHOW CREATE TABLE table_with_lc_key; DETACH TABLE table_with_lc_key; @@ -62,6 +62,6 @@ ATTACH TABLE table_with_string_key; SELECT * FROM table_with_string_key WHERE int_key > 0 and str_key like 'h%'; -ALTER TABLE table_with_string_key MODIFY COLUMN int_key Enum8('y' = 1, 'x' = 2); --{serverError 524} +ALTER TABLE table_with_string_key MODIFY COLUMN int_key Enum8('y' = 1, 'x' = 2); --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN} DROP TABLE IF EXISTS table_with_string_key; diff --git a/tests/queries/0_stateless/01621_bar_nan_arguments.sql b/tests/queries/0_stateless/01621_bar_nan_arguments.sql index c28cb53d7ce..3862b0cd5bf 100644 --- a/tests/queries/0_stateless/01621_bar_nan_arguments.sql +++ 
b/tests/queries/0_stateless/01621_bar_nan_arguments.sql @@ -1,2 +1,2 @@ -SELECT bar((greatCircleAngle(65537, 2, 1, 1) - 1) * 65535, 1048576, 1048577, nan); -- { serverError 43 } -select bar(1,1,1,nan); -- { serverError 43 } +SELECT bar((greatCircleAngle(65537, 2, 1, 1) - 1) * 65535, 1048576, 1048577, nan); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select bar(1,1,1,nan); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01621_summap_check_types.sql b/tests/queries/0_stateless/01621_summap_check_types.sql index a950f3ea094..b46ae2d304e 100644 --- a/tests/queries/0_stateless/01621_summap_check_types.sql +++ b/tests/queries/0_stateless/01621_summap_check_types.sql @@ -2,4 +2,4 @@ select initializeAggregation('sumMap', [1, 2], [1, 2], [1, null]); CREATE TEMPORARY TABLE sum_map_overflow (events Array(UInt8), counts Array(UInt8)); INSERT INTO sum_map_overflow VALUES ([1], [255]), ([1], [2]); -SELECT [NULL], sumMapWithOverflow(events, [NULL], [[(NULL)]], counts) FROM sum_map_overflow; -- { serverError 43 } +SELECT [NULL], sumMapWithOverflow(events, [NULL], [[(NULL)]], counts) FROM sum_map_overflow; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01630_disallow_floating_point_as_partition_key.sql b/tests/queries/0_stateless/01630_disallow_floating_point_as_partition_key.sql index 0c33bce6068..96ba60a33a4 100644 --- a/tests/queries/0_stateless/01630_disallow_floating_point_as_partition_key.sql +++ b/tests/queries/0_stateless/01630_disallow_floating_point_as_partition_key.sql @@ -1,7 +1,7 @@ DROP TABLE IF EXISTS test; -CREATE TABLE test (a Float32, b int) Engine = MergeTree() ORDER BY tuple() PARTITION BY a; -- { serverError 36 } +CREATE TABLE test (a Float32, b int) Engine = MergeTree() ORDER BY tuple() PARTITION BY a; -- { serverError BAD_ARGUMENTS } CREATE TABLE test (a Float32, b int) Engine = MergeTree() ORDER BY tuple() PARTITION BY a settings allow_floating_point_partition_key=true; DROP TABLE IF EXISTS test; -CREATE TABLE test (a Float32, b int, c String, d Float64) Engine = MergeTree() ORDER BY tuple() PARTITION BY (b, c, d) settings allow_floating_point_partition_key=false; -- { serverError 36 } +CREATE TABLE test (a Float32, b int, c String, d Float64) Engine = MergeTree() ORDER BY tuple() PARTITION BY (b, c, d) settings allow_floating_point_partition_key=false; -- { serverError BAD_ARGUMENTS } CREATE TABLE test (a Float32, b int, c String, d Float64) Engine = MergeTree() ORDER BY tuple() PARTITION BY (b, c, d) settings allow_floating_point_partition_key=true; DROP TABLE IF EXISTS test; diff --git a/tests/queries/0_stateless/01632_group_array_msan.sql b/tests/queries/0_stateless/01632_group_array_msan.sql index f67ff896c3f..033d3754a2d 100644 --- a/tests/queries/0_stateless/01632_group_array_msan.sql +++ b/tests/queries/0_stateless/01632_group_array_msan.sql @@ -1,4 +1,4 @@ -SELECT groupArrayMerge(1048577)(y * 1048576) FROM (SELECT groupArrayState(9223372036854775807)(x) AS y FROM (SELECT 1048576 AS x)) FORMAT Null; -- { serverError 43 } +SELECT groupArrayMerge(1048577)(y * 1048576) FROM (SELECT groupArrayState(9223372036854775807)(x) AS y FROM (SELECT 1048576 AS x)) FORMAT Null; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT groupArrayMerge(1048577)(y * 1048576) FROM (SELECT groupArrayState(1048577)(x) AS y FROM (SELECT 1048576 AS x)) FORMAT Null; SELECT groupArrayMerge(9223372036854775807)(y * 1048576) FROM (SELECT groupArrayState(9223372036854775807)(x) AS y FROM (SELECT 1048576 AS x)) FORMAT Null; -SELECT 
quantileResampleMerge(0.5, 257, 65536, 1)(tuple(*).1) FROM (SELECT quantileResampleState(0.10, 1, 2, 42)(number, number) FROM numbers(100)); -- { serverError 43 } +SELECT quantileResampleMerge(0.5, 257, 65536, 1)(tuple(*).1) FROM (SELECT quantileResampleState(0.10, 1, 2, 42)(number, number) FROM numbers(100)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01632_max_partitions_to_read.sql b/tests/queries/0_stateless/01632_max_partitions_to_read.sql index b91405569bc..c8b2347b19d 100644 --- a/tests/queries/0_stateless/01632_max_partitions_to_read.sql +++ b/tests/queries/0_stateless/01632_max_partitions_to_read.sql @@ -4,7 +4,7 @@ create table p(d Date, i int, j int) engine MergeTree partition by d order by i insert into p values ('2021-01-01', 1, 2), ('2021-01-02', 4, 5); -select * from p order by i; -- { serverError 565 } +select * from p order by i; -- { serverError TOO_MANY_PARTITIONS } select * from p order by i settings max_partitions_to_read = 2; diff --git a/tests/queries/0_stateless/01634_sum_map_nulls.sql b/tests/queries/0_stateless/01634_sum_map_nulls.sql index a0b892f9803..149b946ec1c 100644 --- a/tests/queries/0_stateless/01634_sum_map_nulls.sql +++ b/tests/queries/0_stateless/01634_sum_map_nulls.sql @@ -1,5 +1,5 @@ SELECT initializeAggregation('sumMap', [1, 2, 1], [1, 1, 1], [-1, null, 10]); SELECT initializeAggregation('sumMap', [1, 2, 1], [1, 1, 1], [-1, null, null]); -SELECT initializeAggregation('sumMap', [1, 2, 1], [1, 1, 1], [null, null, null]); -- { serverError 43 } +SELECT initializeAggregation('sumMap', [1, 2, 1], [1, 1, 1], [null, null, null]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT initializeAggregation('sumMap', [1, 2, 1], [1, 1, 1], [-1, 10, 10]); SELECT initializeAggregation('sumMap', [1, 2, 1], [1, 1, 1], [-1, 10, null]); diff --git a/tests/queries/0_stateless/01634_uuid_fuzz.sql b/tests/queries/0_stateless/01634_uuid_fuzz.sql index 62ca209f6f3..2ffde0fd4f4 100644 --- a/tests/queries/0_stateless/01634_uuid_fuzz.sql +++ b/tests/queries/0_stateless/01634_uuid_fuzz.sql @@ -1 +1 @@ -SELECT toUUID(-1.1); -- { serverError 48 } +SELECT toUUID(-1.1); -- { serverError NOT_IMPLEMENTED } diff --git a/tests/queries/0_stateless/01635_sum_map_fuzz.sql b/tests/queries/0_stateless/01635_sum_map_fuzz.sql index 0749e6e6be6..853eb66cb1d 100644 --- a/tests/queries/0_stateless/01635_sum_map_fuzz.sql +++ b/tests/queries/0_stateless/01635_sum_map_fuzz.sql @@ -2,5 +2,5 @@ SELECT finalizeAggregation(*) FROM (select initializeAggregation('sumMapState', DROP TABLE IF EXISTS sum_map_overflow; CREATE TABLE sum_map_overflow(events Array(UInt8), counts Array(UInt8)) ENGINE = Log; -SELECT [NULL], sumMapWithOverflow(events, [NULL], [[(NULL)]], counts) FROM sum_map_overflow; -- { serverError 43 } +SELECT [NULL], sumMapWithOverflow(events, [NULL], [[(NULL)]], counts) FROM sum_map_overflow; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } DROP TABLE sum_map_overflow; diff --git a/tests/queries/0_stateless/01651_bugs_from_15889.sql b/tests/queries/0_stateless/01651_bugs_from_15889.sql index a991a68b5dc..dd31f2941ef 100644 --- a/tests/queries/0_stateless/01651_bugs_from_15889.sql +++ b/tests/queries/0_stateless/01651_bugs_from_15889.sql @@ -54,7 +54,7 @@ WHERE (query_id = WHERE current_database = currentDatabase() AND (query LIKE '%test cpu time query profiler%') AND (query NOT LIKE '%system%') ORDER BY event_time DESC LIMIT 1 -)) AND (symbol LIKE '%Source%'); -- { serverError 125 } +)) AND (symbol LIKE '%Source%'); -- { serverError 
INCORRECT_RESULT_OF_SCALAR_SUBQUERY } WITH addressToSymbol(arrayJoin(trace)) AS symbol @@ -69,7 +69,7 @@ WHERE greaterOrEquals(event_date, ignore(ignore(ignore(NULL, '')), 256), yesterd WHERE current_database = currentDatabase() AND (event_date >= yesterday()) AND (query LIKE '%test memory profiler%') ORDER BY event_time DESC LIMIT 1 -)); -- { serverError 125, 42 } +)); -- { serverError INCORRECT_RESULT_OF_SCALAR_SUBQUERY, NUMBER_OF_ARGUMENTS_DOESNT_MATCH } DROP TABLE IF EXISTS trace_log; @@ -92,7 +92,7 @@ WITH ( ORDER BY query_start_time DESC LIMIT 1 ) AS t) -SELECT if(dateDiff('second', toDateTime(time_with_microseconds), toDateTime(t)) = -9223372036854775808, 'ok', ''); -- { serverError 43 } +SELECT if(dateDiff('second', toDateTime(time_with_microseconds), toDateTime(t)) = -9223372036854775808, 'ok', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } WITH ( ( diff --git a/tests/queries/0_stateless/01652_ttl_old_syntax.sql b/tests/queries/0_stateless/01652_ttl_old_syntax.sql index 7b11247d968..765ce6cf1f8 100644 --- a/tests/queries/0_stateless/01652_ttl_old_syntax.sql +++ b/tests/queries/0_stateless/01652_ttl_old_syntax.sql @@ -2,6 +2,6 @@ DROP TABLE IF EXISTS ttl_old_syntax; set allow_deprecated_syntax_for_merge_tree=1; CREATE TABLE ttl_old_syntax (d Date, i Int) ENGINE = MergeTree(d, i, 8291); -ALTER TABLE ttl_old_syntax MODIFY TTL toDate('2020-01-01'); -- { serverError 36 } +ALTER TABLE ttl_old_syntax MODIFY TTL toDate('2020-01-01'); -- { serverError BAD_ARGUMENTS } DROP TABLE ttl_old_syntax; diff --git a/tests/queries/0_stateless/01653_tuple_hamming_distance_2.sql b/tests/queries/0_stateless/01653_tuple_hamming_distance_2.sql index 81afb1e1201..45a9afd3afa 100644 --- a/tests/queries/0_stateless/01653_tuple_hamming_distance_2.sql +++ b/tests/queries/0_stateless/01653_tuple_hamming_distance_2.sql @@ -18,6 +18,6 @@ SELECT tupleHammingDistance(('abc', (1, 2)), ('def', (1, 2))); SELECT tupleHammingDistance(('abc', (1, 2)), ('def', (1, 3))); -SELECT tupleHammingDistance(tuple(1), tuple(1, 1)); --{serverError 43} -SELECT tupleHammingDistance(tuple(1), tuple('a')); --{serverError 386} -SELECT tupleHammingDistance((1, 3), (3, 'a')); --{serverError 386} +SELECT tupleHammingDistance(tuple(1), tuple(1, 1)); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} +SELECT tupleHammingDistance(tuple(1), tuple('a')); --{serverError NO_COMMON_TYPE} +SELECT tupleHammingDistance((1, 3), (3, 'a')); --{serverError NO_COMMON_TYPE} diff --git a/tests/queries/0_stateless/01655_sleep_infinite_float.sql b/tests/queries/0_stateless/01655_sleep_infinite_float.sql index a469ba9674a..10196a5a00c 100644 --- a/tests/queries/0_stateless/01655_sleep_infinite_float.sql +++ b/tests/queries/0_stateless/01655_sleep_infinite_float.sql @@ -1,2 +1,2 @@ -SELECT sleep(nan); -- { serverError 36 } -SELECT sleep(inf); -- { serverError 36 } +SELECT sleep(nan); -- { serverError BAD_ARGUMENTS } +SELECT sleep(inf); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01656_sequence_next_node_long.sql b/tests/queries/0_stateless/01656_sequence_next_node_long.sql index a53074701b3..578bc80433b 100644 --- a/tests/queries/0_stateless/01656_sequence_next_node_long.sql +++ b/tests/queries/0_stateless/01656_sequence_next_node_long.sql @@ -230,6 +230,6 @@ SELECT '(backward, first_match, 1, B)', id, sequenceNextNode('backward', 'first_ SELECT '(backward, first_match, 1, B->C)', id, sequenceNextNode('backward', 'first_match')(dt, action, referrer = '2', action = 'B', action = 'A') AS next_node FROM test_base_condition GROUP BY id ORDER BY id; SET
allow_experimental_funnel_functions = 0; -SELECT '(backward, first_match, 1, B->C)', id, sequenceNextNode('backward', 'first_match')(dt, action, referrer = '2', action = 'B', action = 'A') AS next_node FROM test_base_condition GROUP BY id ORDER BY id; -- { serverError 63 } +SELECT '(backward, first_match, 1, B->C)', id, sequenceNextNode('backward', 'first_match')(dt, action, referrer = '2', action = 'B', action = 'A') AS next_node FROM test_base_condition GROUP BY id ORDER BY id; -- { serverError UNKNOWN_AGGREGATE_FUNCTION } DROP TABLE IF EXISTS test_base_condition; diff --git a/tests/queries/0_stateless/01656_test_query_log_factories_info.sql b/tests/queries/0_stateless/01656_test_query_log_factories_info.sql index 8a6b604b053..70902e2d6bc 100644 --- a/tests/queries/0_stateless/01656_test_query_log_factories_info.sql +++ b/tests/queries/0_stateless/01656_test_query_log_factories_info.sql @@ -20,7 +20,7 @@ FROM numbers(100); SELECT repeat('aa', number) FROM numbers(10e3) SETTINGS max_memory_usage=4e6, max_block_size=100 -FORMAT Null; -- { serverError 241 } +FORMAT Null; -- { serverError MEMORY_LIMIT_EXCEEDED } SELECT ''; diff --git a/tests/queries/0_stateless/01658_values_ubsan.sql b/tests/queries/0_stateless/01658_values_ubsan.sql index 10d17c2f00a..2836b8b52db 100644 --- a/tests/queries/0_stateless/01658_values_ubsan.sql +++ b/tests/queries/0_stateless/01658_values_ubsan.sql @@ -1 +1 @@ -SELECT * FROM VALUES('x UInt8, y UInt16', 1 + 2, 'Hello'); -- { serverError 36 } +SELECT * FROM VALUES('x UInt8, y UInt16', 1 + 2, 'Hello'); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01659_h3_buffer_overflow.sql b/tests/queries/0_stateless/01659_h3_buffer_overflow.sql index 6132370e61c..9dba9bb789c 100644 --- a/tests/queries/0_stateless/01659_h3_buffer_overflow.sql +++ b/tests/queries/0_stateless/01659_h3_buffer_overflow.sql @@ -10,6 +10,6 @@ SELECT h3GetBaseCell(0xFFFFFFFFFFFFFF) FORMAT Null; SELECT h3GetResolution(0xFFFFFFFFFFFFFF) FORMAT Null; SELECT h3kRing(0xFFFFFFFFFFFFFF, toUInt16(10)) FORMAT Null; SELECT h3ToGeo(0xFFFFFFFFFFFFFF) FORMAT Null; -SELECT h3HexRing(0xFFFFFFFFFFFFFF, toUInt16(10)) FORMAT Null; -- { serverError 117 } -SELECT h3HexRing(0xFFFFFFFFFFFFFF, toUInt16(10000)) FORMAT Null; -- { serverError 117 } +SELECT h3HexRing(0xFFFFFFFFFFFFFF, toUInt16(10)) FORMAT Null; -- { serverError INCORRECT_DATA } +SELECT h3HexRing(0xFFFFFFFFFFFFFF, toUInt16(10000)) FORMAT Null; -- { serverError INCORRECT_DATA } SELECT length(h3HexRing(581276613233082367, toUInt16(1))) FORMAT Null; diff --git a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql index a056d77896c..afb6866a0df 100644 --- a/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql +++ b/tests/queries/0_stateless/01661_extract_all_groups_throw_fast.sql @@ -1,2 +1,2 @@ -SELECT repeat('abcdefghijklmnopqrstuvwxyz', number * 100) AS haystack, extractAllGroupsHorizontal(haystack, '(\\w)') AS matches FROM numbers(1023); -- { serverError 128 } +SELECT repeat('abcdefghijklmnopqrstuvwxyz', number * 100) AS haystack, extractAllGroupsHorizontal(haystack, '(\\w)') AS matches FROM numbers(1023); -- { serverError TOO_LARGE_ARRAY_SIZE } SELECT count(extractAllGroupsHorizontal(materialize('a'), '(a)')) FROM numbers(1000000) FORMAT Null; -- shouldn't fail diff --git a/tests/queries/0_stateless/01665_running_difference_ubsan.sql b/tests/queries/0_stateless/01665_running_difference_ubsan.sql index 504cb0269f8..19947b6ad84 100644 --- 
a/tests/queries/0_stateless/01665_running_difference_ubsan.sql
+++ b/tests/queries/0_stateless/01665_running_difference_ubsan.sql
@@ -1,2 +1,2 @@
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 SELECT k, d, i FROM (SELECT t.1 AS k, t.2 AS v, runningDifference(v) AS d, runningDifference(cityHash64(t.1)) AS i FROM (SELECT arrayJoin([(NULL, 65535), ('a', 7), ('a', 3), ('b', 11), ('b', 2), ('', -9223372036854775808)]) AS t)) WHERE i = 9223372036854775807;
diff --git a/tests/queries/0_stateless/01666_gcd_ubsan.reference b/tests/queries/0_stateless/01666_gcd_ubsan.reference
index 37b1968542e..6c70991b7ce 100644
--- a/tests/queries/0_stateless/01666_gcd_ubsan.reference
+++ b/tests/queries/0_stateless/01666_gcd_ubsan.reference
@@ -1,10 +1,10 @@
 -- { echo }
-SELECT gcd(9223372036854775807, -9223372036854775808); -- { serverError 407 }
-SELECT gcd(9223372036854775808, -9223372036854775807); -- { serverError 407 }
-SELECT gcd(-9223372036854775808, 9223372036854775807); -- { serverError 407 }
-SELECT gcd(-9223372036854775807, 9223372036854775808); -- { serverError 407 }
-SELECT gcd(9223372036854775808, -1); -- { serverError 407 }
-SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError 43 }
+SELECT gcd(9223372036854775807, -9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(9223372036854775808, -9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(-9223372036854775808, 9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(-9223372036854775807, 9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(9223372036854775808, -1); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 SELECT lcm(toInt128(-170141183460469231731687303715884105728), toInt128(-170141183460469231731687303715884105728));
 170141183460469231722463931679029329921
 SELECT lcm(toInt128(-170141183460469231731687303715884105720), toInt128(-170141183460469231731687303715884105720));
diff --git a/tests/queries/0_stateless/01666_gcd_ubsan.sql b/tests/queries/0_stateless/01666_gcd_ubsan.sql
index da41022ddeb..bd7023caa5b 100644
--- a/tests/queries/0_stateless/01666_gcd_ubsan.sql
+++ b/tests/queries/0_stateless/01666_gcd_ubsan.sql
@@ -1,10 +1,10 @@
 -- { echo }
-SELECT gcd(9223372036854775807, -9223372036854775808); -- { serverError 407 }
-SELECT gcd(9223372036854775808, -9223372036854775807); -- { serverError 407 }
-SELECT gcd(-9223372036854775808, 9223372036854775807); -- { serverError 407 }
-SELECT gcd(-9223372036854775807, 9223372036854775808); -- { serverError 407 }
-SELECT gcd(9223372036854775808, -1); -- { serverError 407 }
-SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError 43 }
+SELECT gcd(9223372036854775807, -9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(9223372036854775808, -9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(-9223372036854775808, 9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(-9223372036854775807, 9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT gcd(9223372036854775808, -1); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 SELECT lcm(toInt128(-170141183460469231731687303715884105728), toInt128(-170141183460469231731687303715884105728));
 SELECT lcm(toInt128(-170141183460469231731687303715884105720), toInt128(-170141183460469231731687303715884105720));
 SELECT lcm(toInt128('-170141183460469231731687303715884105720'), toInt128('-170141183460469231731687303715884105720'));
diff --git a/tests/queries/0_stateless/01666_lcm_ubsan.reference b/tests/queries/0_stateless/01666_lcm_ubsan.reference
index bd1972e8a6d..a0829909e86 100644
--- a/tests/queries/0_stateless/01666_lcm_ubsan.reference
+++ b/tests/queries/0_stateless/01666_lcm_ubsan.reference
@@ -1,10 +1,10 @@
 -- { echo }
-SELECT lcm(9223372036854775807, -9223372036854775808); -- { serverError 407 }
-SELECT lcm(9223372036854775808, -9223372036854775807); -- { serverError 407 }
-SELECT lcm(-9223372036854775808, 9223372036854775807); -- { serverError 407 }
-SELECT lcm(-9223372036854775807, 9223372036854775808); -- { serverError 407 }
-SELECT lcm(9223372036854775808, -1); -- { serverError 407 }
-SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError 43 }
+SELECT lcm(9223372036854775807, -9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(9223372036854775808, -9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-9223372036854775808, 9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-9223372036854775807, 9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(9223372036854775808, -1); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 SELECT lcm(toInt128(-170141183460469231731687303715884105728), toInt128(-170141183460469231731687303715884105728));
 170141183460469231722463931679029329921
 SELECT lcm(toInt128(-170141183460469231731687303715884105720), toInt128(-170141183460469231731687303715884105720));
diff --git a/tests/queries/0_stateless/01666_lcm_ubsan.sql b/tests/queries/0_stateless/01666_lcm_ubsan.sql
index 8ebdf148a65..cd3eba3621f 100644
--- a/tests/queries/0_stateless/01666_lcm_ubsan.sql
+++ b/tests/queries/0_stateless/01666_lcm_ubsan.sql
@@ -1,10 +1,10 @@
 -- { echo }
-SELECT lcm(9223372036854775807, -9223372036854775808); -- { serverError 407 }
-SELECT lcm(9223372036854775808, -9223372036854775807); -- { serverError 407 }
-SELECT lcm(-9223372036854775808, 9223372036854775807); -- { serverError 407 }
-SELECT lcm(-9223372036854775807, 9223372036854775808); -- { serverError 407 }
-SELECT lcm(9223372036854775808, -1); -- { serverError 407 }
-SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError 43 }
+SELECT lcm(9223372036854775807, -9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(9223372036854775808, -9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-9223372036854775808, 9223372036854775807); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-9223372036854775807, 9223372036854775808); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(9223372036854775808, -1); -- { serverError DECIMAL_OVERFLOW }
+SELECT lcm(-170141183460469231731687303715884105728, -170141183460469231731687303715884105728); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 SELECT lcm(toInt128(-170141183460469231731687303715884105728), toInt128(-170141183460469231731687303715884105728));
 SELECT lcm(toInt128(-170141183460469231731687303715884105720), toInt128(-170141183460469231731687303715884105720));
 SELECT lcm(toInt128('-170141183460469231731687303715884105720'), toInt128('-170141183460469231731687303715884105720'));
diff --git a/tests/queries/0_stateless/01667_aes_args_check.sql b/tests/queries/0_stateless/01667_aes_args_check.sql
index fc271e8aca1..71273558dab 100644
--- a/tests/queries/0_stateless/01667_aes_args_check.sql
+++ b/tests/queries/0_stateless/01667_aes_args_check.sql
@@ -1,4 +1,4 @@
 -- Tags: no-fasttest
 -- Tag no-fasttest: Depends on OpenSSL
 
-SELECT encrypt('aes-128-ecb', [1, -1, 0, NULL], 'text'); -- { serverError 43 }
+SELECT encrypt('aes-128-ecb', [1, -1, 0, NULL], 'text'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
diff --git a/tests/queries/0_stateless/01670_distributed_bytes_to_throw_insert.sql b/tests/queries/0_stateless/01670_distributed_bytes_to_throw_insert.sql
index 0e85b61070b..7e32f6ab4c2 100644
--- a/tests/queries/0_stateless/01670_distributed_bytes_to_throw_insert.sql
+++ b/tests/queries/0_stateless/01670_distributed_bytes_to_throw_insert.sql
@@ -10,7 +10,7 @@ system stop distributed sends dist_01670;
 insert into dist_01670 select * from numbers(1) settings prefer_localhost_replica=0;
 -- second will fail, because of bytes_to_throw_insert=1
 -- (previous block definitelly takes more, since it has header)
-insert into dist_01670 select * from numbers(1) settings prefer_localhost_replica=0; -- { serverError 574 }
+insert into dist_01670 select * from numbers(1) settings prefer_localhost_replica=0; -- { serverError DISTRIBUTED_TOO_MANY_PENDING_BYTES }
 system flush distributed dist_01670;
 drop table dist_01670;
 drop table data_01670;
diff --git a/tests/queries/0_stateless/01670_neighbor_lc_bug.sql b/tests/queries/0_stateless/01670_neighbor_lc_bug.sql
index b665c0b48fd..599a1f49063 100644
--- a/tests/queries/0_stateless/01670_neighbor_lc_bug.sql
+++ b/tests/queries/0_stateless/01670_neighbor_lc_bug.sql
@@ -1,4 +1,4 @@
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 SET output_format_pretty_row_numbers = 0;
 
 SELECT
diff --git a/tests/queries/0_stateless/01674_where_prewhere_array_crash.sql b/tests/queries/0_stateless/01674_where_prewhere_array_crash.sql
index 478e0039177..2611eedff6e 100644
--- a/tests/queries/0_stateless/01674_where_prewhere_array_crash.sql
+++ b/tests/queries/0_stateless/01674_where_prewhere_array_crash.sql
@@ -1,5 +1,5 @@
 drop table if exists tab;
 create table tab (x UInt64, `arr.a` Array(UInt64), `arr.b` Array(UInt64)) engine = MergeTree order by x;
-select x from tab array join arr prewhere x != 0 where arr; -- { serverError 47, 59 }
-select x from tab array join arr prewhere arr where x != 0; -- { serverError 47, 59 }
+select x from tab array join arr prewhere x != 0 where arr; -- { serverError UNKNOWN_IDENTIFIER, 59 }
+select x from tab array join arr prewhere arr where x != 0; -- { serverError UNKNOWN_IDENTIFIER, 59 }
 drop table if exists tab;
diff --git a/tests/queries/0_stateless/01676_reinterpret_as.sql b/tests/queries/0_stateless/01676_reinterpret_as.sql
index cc52859724d..aa9f901c99d 100644
--- a/tests/queries/0_stateless/01676_reinterpret_as.sql
+++ b/tests/queries/0_stateless/01676_reinterpret_as.sql
@@ -39,4 +39,4 @@ SELECT reinterpret(toDecimal128(5, 2), 'Decimal128(2)'), reinterpret('1', 'Decim
 SELECT reinterpret(toDecimal256(5, 2), 'Decimal256(2)'), reinterpret('1', 'Decimal256(2)');
 SELECT reinterpret(toDateTime64(0, 0), 'Decimal64(2)');
 SELECT 'ReinterpretErrors';
-SELECT reinterpret('123', 'FixedString(1)'); -- {serverError 43}
+SELECT reinterpret('123', 'FixedString(1)'); -- {serverError ILLEGAL_TYPE_OF_ARGUMENT}
diff --git a/tests/queries/0_stateless/01677_bit_float.sql b/tests/queries/0_stateless/01677_bit_float.sql
index 3692d8ac6a5..d0ad8f2d9ba 100644
--- a/tests/queries/0_stateless/01677_bit_float.sql
+++ b/tests/queries/0_stateless/01677_bit_float.sql
@@ -1,9 +1,9 @@
-SELECT bitAnd(0, inf); -- { serverError 43 }
-SELECT bitXor(0, inf); -- { serverError 43 }
-SELECT bitOr(0, inf); -- { serverError 43 }
-SELECT bitTest(inf, 0); -- { serverError 43 }
-SELECT bitTest(0, inf); -- { serverError 43 }
-SELECT bitRotateLeft(inf, 0); -- { serverError 43 }
-SELECT bitRotateRight(inf, 0); -- { serverError 43 }
-SELECT bitShiftLeft(inf, 0); -- { serverError 43 }
-SELECT bitShiftRight(inf, 0); -- { serverError 43 }
+SELECT bitAnd(0, inf); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitXor(0, inf); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitOr(0, inf); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitTest(inf, 0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitTest(0, inf); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitRotateLeft(inf, 0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitRotateRight(inf, 0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitShiftLeft(inf, 0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+SELECT bitShiftRight(inf, 0); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
diff --git a/tests/queries/0_stateless/01680_date_time_add_ubsan.sql b/tests/queries/0_stateless/01680_date_time_add_ubsan.sql
index d2c443bddf9..6570621e4e9 100644
--- a/tests/queries/0_stateless/01680_date_time_add_ubsan.sql
+++ b/tests/queries/0_stateless/01680_date_time_add_ubsan.sql
@@ -1,3 +1,3 @@
-SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + (number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT Null; -- { serverError 407 }
+SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + (number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT Null; -- { serverError DECIMAL_OVERFLOW }
 SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + toInt64(number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT Null;
 SELECT round(round(round(round(round(100)), round(round(round(round(NULL), round(65535)), toTypeName(now() + 9223372036854775807) LIKE 'DateTime%DateTime%DateTime%DateTime%', round(-2)), 255), round(NULL))));
diff --git a/tests/queries/0_stateless/01681_bloom_filter_nullable_column.sql b/tests/queries/0_stateless/01681_bloom_filter_nullable_column.sql
index 50663654b10..3b9af3d278f 100644
--- a/tests/queries/0_stateless/01681_bloom_filter_nullable_column.sql
+++ b/tests/queries/0_stateless/01681_bloom_filter_nullable_column.sql
@@ -21,10 +21,10 @@ SELECT * FROM bloom_filter_nullable_index WHERE str IN
 SELECT 'NullableTuple with transform_null_in=1';
 SELECT * FROM bloom_filter_nullable_index WHERE str IN
-    (SELECT '1048576', str FROM bloom_filter_nullable_index) SETTINGS transform_null_in = 1; -- { serverError 20 }
+    (SELECT '1048576', str FROM bloom_filter_nullable_index) SETTINGS transform_null_in = 1; -- { serverError NUMBER_OF_COLUMNS_DOESNT_MATCH }
 
 SELECT * FROM bloom_filter_nullable_index WHERE str IN
-    (SELECT '1048576', str FROM bloom_filter_nullable_index) SETTINGS transform_null_in = 1; -- { serverError 20 }
+    (SELECT '1048576', str FROM bloom_filter_nullable_index) SETTINGS transform_null_in = 1; -- { serverError NUMBER_OF_COLUMNS_DOESNT_MATCH }
 
 SELECT 'NullableColumnFromCast with transform_null_in=0';
diff --git a/tests/queries/0_stateless/01682_gather_utils_ubsan.sql b/tests/queries/0_stateless/01682_gather_utils_ubsan.sql
index 2388586e8fe..d1a0e5dcc81 100644
--- a/tests/queries/0_stateless/01682_gather_utils_ubsan.sql
+++ b/tests/queries/0_stateless/01682_gather_utils_ubsan.sql
@@ -1 +1 @@
-SELECT arrayResize([1, 2, 3], -9223372036854775808); -- { serverError 128 }
+SELECT arrayResize([1, 2, 3], -9223372036854775808); -- { serverError TOO_LARGE_ARRAY_SIZE }
diff --git a/tests/queries/0_stateless/01683_intdiv_ubsan.sql b/tests/queries/0_stateless/01683_intdiv_ubsan.sql
index adac2505745..11a6645e69f 100644
--- a/tests/queries/0_stateless/01683_intdiv_ubsan.sql
+++ b/tests/queries/0_stateless/01683_intdiv_ubsan.sql
@@ -1 +1 @@
-SELECT DISTINCT intDiv(number, nan) FROM numbers(10); -- { serverError 153 }
+SELECT DISTINCT intDiv(number, nan) FROM numbers(10); -- { serverError ILLEGAL_DIVISION }
diff --git a/tests/queries/0_stateless/01684_insert_specify_shard_id.sql b/tests/queries/0_stateless/01684_insert_specify_shard_id.sql
index 830f987e811..dc33f8c5c9b 100644
--- a/tests/queries/0_stateless/01684_insert_specify_shard_id.sql
+++ b/tests/queries/0_stateless/01684_insert_specify_shard_id.sql
@@ -26,12 +26,12 @@ SELECT * FROM x_dist ORDER by number;
 SELECT * FROM y_dist ORDER by number;
 
 -- no sharding key
-INSERT INTO x_dist SELECT * FROM numbers(10); -- { serverError 55 }
-INSERT INTO y_dist SELECT * FROM numbers(10); -- { serverError 55 }
+INSERT INTO x_dist SELECT * FROM numbers(10); -- { serverError STORAGE_REQUIRES_PARAMETER }
+INSERT INTO y_dist SELECT * FROM numbers(10); -- { serverError STORAGE_REQUIRES_PARAMETER }
 
 -- invalid shard id
-INSERT INTO x_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError 577 }
-INSERT INTO y_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError 577 }
+INSERT INTO x_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError INVALID_SHARD_ID }
+INSERT INTO y_dist SELECT * FROM numbers(10) settings insert_shard_id = 3; -- { serverError INVALID_SHARD_ID }
 
 DROP TABLE x;
 DROP TABLE x_dist;
diff --git a/tests/queries/0_stateless/01686_rocksdb.sql b/tests/queries/0_stateless/01686_rocksdb.sql
index 3ff218bf398..b907f8cf985 100644
--- a/tests/queries/0_stateless/01686_rocksdb.sql
+++ b/tests/queries/0_stateless/01686_rocksdb.sql
@@ -22,7 +22,7 @@ SELECT * FROM 01686_test WHERE key = NULL OR key = 0;
 SELECT '--';
 SELECT * FROM 01686_test WHERE key IN (123, 456, -123) ORDER BY key;
 SELECT '--';
-SELECT * FROM 01686_test WHERE key = 'Hello'; -- { serverError 53 }
+SELECT * FROM 01686_test WHERE key = 'Hello'; -- { serverError TYPE_MISMATCH }
 
 DETACH TABLE 01686_test SYNC;
 ATTACH TABLE 01686_test;
diff --git a/tests/queries/0_stateless/01698_map_populate_overflow.sql b/tests/queries/0_stateless/01698_map_populate_overflow.sql
index 90c47ff3949..e1f09d4ed79 100644
--- a/tests/queries/0_stateless/01698_map_populate_overflow.sql
+++ b/tests/queries/0_stateless/01698_map_populate_overflow.sql
@@ -1,2 +1,2 @@
 SELECT mapPopulateSeries([0xFFFFFFFFFFFFFFFF], [0], 0xFFFFFFFFFFFFFFFF);
-SELECT mapPopulateSeries([toUInt64(1)], [1], 0xFFFFFFFFFFFFFFFF); -- { serverError 128 }
+SELECT mapPopulateSeries([toUInt64(1)], [1], 0xFFFFFFFFFFFFFFFF); -- { serverError TOO_LARGE_ARRAY_SIZE }
diff --git a/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql b/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql
index 72317df5439..645c304ffee 100644
--- a/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql
+++ b/tests/queries/0_stateless/01700_point_in_polygon_ubsan.sql
@@ -1 +1 @@
-SELECT pointInPolygon((0, 0), [[(0, 0), (10, 10), (256, -9223372036854775808)]]) FORMAT Null ;-- { serverError 36 }
+SELECT pointInPolygon((0, 0), [[(0, 0), (10, 10), (256, -9223372036854775808)]]) FORMAT Null ;-- { serverError BAD_ARGUMENTS }
diff --git a/tests/queries/0_stateless/01701_if_tuple_segfault.sql b/tests/queries/0_stateless/01701_if_tuple_segfault.sql
index 93b28c578a9..6266f171ee3 100644
--- a/tests/queries/0_stateless/01701_if_tuple_segfault.sql
+++ b/tests/queries/0_stateless/01701_if_tuple_segfault.sql
@@ -18,7 +18,7 @@ SELECT * FROM agg_table;
 
 SELECT if(xxx = 'x', ([2], 3), ([3], 4)) FROM agg_table;
 
-SELECT if(xxx = 'x', ([2], 3), ([3], 4, 'q', 'w', 7)) FROM agg_table; --{ serverError 386 }
+SELECT if(xxx = 'x', ([2], 3), ([3], 4, 'q', 'w', 7)) FROM agg_table; --{ serverError NO_COMMON_TYPE }
 
 ALTER TABLE agg_table UPDATE two_values = (two_values.1, two_values.2) WHERE time BETWEEN toDateTime('2020-08-01 00:00:00') AND toDateTime('2020-12-01 00:00:00') SETTINGS mutations_sync = 2;
diff --git a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh
index 0fe04fb95fd..9284348dd62 100755
--- a/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh
+++ b/tests/queries/0_stateless/01701_parallel_parsing_infinite_segmentation.sh
@@ -1,4 +1,5 @@
 #!/usr/bin/env bash
+# Tags: no-debug, no-asan, no-tsan, no-msan
 
 CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
 # shellcheck source=../shell_config.sh
diff --git a/tests/queries/0_stateless/01702_system_numbers_scientific_notation.sql b/tests/queries/0_stateless/01702_system_numbers_scientific_notation.sql
index 6e037ee4a2e..c87b33272d9 100644
--- a/tests/queries/0_stateless/01702_system_numbers_scientific_notation.sql
+++ b/tests/queries/0_stateless/01702_system_numbers_scientific_notation.sql
@@ -1,5 +1,5 @@
 select * from numbers(1e2) format Null;
 select * from numbers_mt(1e2) format Null;
-select * from numbers_mt('100') format Null; -- { serverError 43 }
-select * from numbers_mt(inf) format Null; -- { serverError 43 }
-select * from numbers_mt(nan) format Null; -- { serverError 43 }
+select * from numbers_mt('100') format Null; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+select * from numbers_mt(inf) format Null; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
+select * from numbers_mt(nan) format Null; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
diff --git a/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql
index 2bb92aec713..75b4d7c6aa0 100644
--- a/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql
+++ b/tests/queries/0_stateless/01709_inactive_parts_to_throw_insert.sql
@@ -7,6 +7,6 @@ insert into data_01709 values (2);
 
 optimize table data_01709 final;
 
-insert into data_01709 values (3); -- { serverError 252 }
+insert into data_01709 values (3); -- { serverError TOO_MANY_PARTS }
 
 drop table data_01709;
diff --git a/tests/queries/0_stateless/01710_force_use_projection.sql b/tests/queries/0_stateless/01710_force_use_projection.sql
index af6ca69c540..53ea2aa30cb 100644
--- a/tests/queries/0_stateless/01710_force_use_projection.sql
+++ b/tests/queries/0_stateless/01710_force_use_projection.sql
@@ -12,6 +12,6 @@ insert into tp values (1, 2, 3);
 
 select sum(eventcnt) eventcnt, d1 from tp group by d1;
 
-select avg(eventcnt) eventcnt, d1 from tp group by d1; -- { serverError 584 }
+select avg(eventcnt) eventcnt, d1 from tp group by d1; -- { serverError PROJECTION_NOT_USED }
 
 drop table tp;
diff --git a/tests/queries/0_stateless/01710_minmax_count_projection.sql b/tests/queries/0_stateless/01710_minmax_count_projection.sql
index bc8327e3631..d0177da84d2 100644
--- a/tests/queries/0_stateless/01710_minmax_count_projection.sql
+++ b/tests/queries/0_stateless/01710_minmax_count_projection.sql
@@ -76,6 +76,6 @@ insert into test select number, number from numbers(1e3);
 select count(if(d=4, d, 1)) from test settings force_optimize_projection = 1;
 select count(d/3) from test settings force_optimize_projection = 1;
-select count(if(d=4, Null, 1)) from test settings force_optimize_projection = 1; -- { serverError 584 }
+select count(if(d=4, Null, 1)) from test settings force_optimize_projection = 1; -- { serverError PROJECTION_NOT_USED }
 
 drop table test;
diff --git a/tests/queries/0_stateless/01710_projection_drop_if_exists.sql b/tests/queries/0_stateless/01710_projection_drop_if_exists.sql
index f21092e5491..4c19ba18c1e 100644
--- a/tests/queries/0_stateless/01710_projection_drop_if_exists.sql
+++ b/tests/queries/0_stateless/01710_projection_drop_if_exists.sql
@@ -2,10 +2,10 @@ drop table if exists tp;
 
 create table tp (x Int32, y Int32, projection p (select x, y order by x)) engine = MergeTree order by y;
 
-alter table tp drop projection pp; -- { serverError 582 }
+alter table tp drop projection pp; -- { serverError NO_SUCH_PROJECTION_IN_TABLE }
 alter table tp drop projection if exists pp;
 alter table tp drop projection if exists p;
 
-alter table tp drop projection p; -- { serverError 582 }
+alter table tp drop projection p; -- { serverError NO_SUCH_PROJECTION_IN_TABLE }
 alter table tp drop projection if exists p;
 
 drop table tp;
diff --git a/tests/queries/0_stateless/01710_projection_group_by_order_by.sql b/tests/queries/0_stateless/01710_projection_group_by_order_by.sql
index 780162e0284..e97e2ff702f 100644
--- a/tests/queries/0_stateless/01710_projection_group_by_order_by.sql
+++ b/tests/queries/0_stateless/01710_projection_group_by_order_by.sql
@@ -5,6 +5,6 @@ DROP TABLE IF EXISTS t;
 
 drop table if exists tp;
 
-create table tp (type Int32, eventcnt UInt64, projection p (select sum(eventcnt), type group by type order by sum(eventcnt))) engine = MergeTree order by type; -- { serverError 583 }
+create table tp (type Int32, eventcnt UInt64, projection p (select sum(eventcnt), type group by type order by sum(eventcnt))) engine = MergeTree order by type; -- { serverError ILLEGAL_PROJECTION }
 
 drop table if exists tp;
diff --git a/tests/queries/0_stateless/01710_projection_with_mixed_pipeline.sql b/tests/queries/0_stateless/01710_projection_with_mixed_pipeline.sql
index 877fca4590d..2bf2cc48707 100644
--- a/tests/queries/0_stateless/01710_projection_with_mixed_pipeline.sql
+++ b/tests/queries/0_stateless/01710_projection_with_mixed_pipeline.sql
@@ -4,6 +4,6 @@ create table t (x UInt32) engine = MergeTree order by tuple() settings index_gra
 insert into t select number from numbers(100);
 alter table t add projection p (select uniqHLL12(x));
 insert into t select number + 100 from numbers(100);
-select uniqHLL12(x) from t settings optimize_use_projections = 1, max_bytes_to_read=400, max_block_size=8; -- { serverError 307 }
+select uniqHLL12(x) from t settings optimize_use_projections = 1, max_bytes_to_read=400, max_block_size=8; -- { serverError TOO_MANY_BYTES }
 drop table if exists t;
diff --git a/tests/queries/0_stateless/01710_projections.sql b/tests/queries/0_stateless/01710_projections.sql
index 7c45792847e..2dde07fc547 100644
--- a/tests/queries/0_stateless/01710_projections.sql
+++ b/tests/queries/0_stateless/01710_projections.sql
@@ -6,8 +6,8 @@ insert into projection_test with rowNumberInAllBlocks() as id select 1, toDateTi
 
 set optimize_use_projections = 1, force_optimize_projection = 1;
 
-select * from projection_test; -- { serverError 584 }
-select toStartOfMinute(datetime) dt_m, countIf(first_time = 0) from projection_test join (select 1) x on 1 where domain = '1' group by dt_m order by dt_m; -- { serverError 584 }
+select * from projection_test; -- { serverError PROJECTION_NOT_USED }
+select toStartOfMinute(datetime) dt_m, countIf(first_time = 0) from projection_test join (select 1) x on 1 where domain = '1' group by dt_m order by dt_m; -- { serverError PROJECTION_NOT_USED }
 
 select toStartOfMinute(datetime) dt_m, countIf(first_time = 0) / count(), avg((kbytes * 8) / duration) from projection_test where domain = '1' group by dt_m order by dt_m;
diff --git a/tests/queries/0_stateless/01713_table_ttl_old_syntax_zookeeper.sql b/tests/queries/0_stateless/01713_table_ttl_old_syntax_zookeeper.sql
index 11346a812f2..5509d527dcd 100644
--- a/tests/queries/0_stateless/01713_table_ttl_old_syntax_zookeeper.sql
+++ b/tests/queries/0_stateless/01713_table_ttl_old_syntax_zookeeper.sql
@@ -9,7 +9,7 @@ CREATE TABLE ttl_table
     value UInt64
 )
 ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/test_01713_table_ttl', '1', date, date, 8192)
-TTL date + INTERVAL 2 MONTH; --{ serverError 36 }
+TTL date + INTERVAL 2 MONTH; --{ serverError BAD_ARGUMENTS }
 
 CREATE TABLE ttl_table
 (
@@ -17,7 +17,7 @@ CREATE TABLE ttl_table
     value UInt64
 )
 ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/test_01713_table_ttl', '1', date, date, 8192)
-PARTITION BY date; --{ serverError 42 }
+PARTITION BY date; --{ serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
 
 CREATE TABLE ttl_table
 (
@@ -25,7 +25,7 @@ CREATE TABLE ttl_table
     value UInt64
 )
 ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/test_01713_table_ttl', '1', date, date, 8192)
-ORDER BY value; --{ serverError 42 }
+ORDER BY value; --{ serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
 
 SELECT 1;
diff --git a/tests/queries/0_stateless/01714_alter_drop_version.sql b/tests/queries/0_stateless/01714_alter_drop_version.sql
index e3d5db33859..91670fff274 100644
--- a/tests/queries/0_stateless/01714_alter_drop_version.sql
+++ b/tests/queries/0_stateless/01714_alter_drop_version.sql
@@ -11,8 +11,8 @@ ORDER BY key;
 
 INSERT INTO alter_drop_version VALUES (1, '1', 1);
 
-ALTER TABLE alter_drop_version DROP COLUMN ver; --{serverError 524}
-ALTER TABLE alter_drop_version RENAME COLUMN ver TO rev; --{serverError 524}
+ALTER TABLE alter_drop_version DROP COLUMN ver; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
+ALTER TABLE alter_drop_version RENAME COLUMN ver TO rev; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
 
 DETACH TABLE alter_drop_version;
diff --git a/tests/queries/0_stateless/01716_drop_rename_sign_column.sql b/tests/queries/0_stateless/01716_drop_rename_sign_column.sql
index c9119ee2b46..bdaa5a5844d 100644
--- a/tests/queries/0_stateless/01716_drop_rename_sign_column.sql
+++ b/tests/queries/0_stateless/01716_drop_rename_sign_column.sql
@@ -8,7 +8,7 @@ CREATE TABLE signed_table (
 
 INSERT INTO signed_table(k, v, s) VALUES (1, 'a', 1);
 
-ALTER TABLE signed_table DROP COLUMN s; --{serverError 524}
-ALTER TABLE signed_table RENAME COLUMN s TO s1; --{serverError 524}
+ALTER TABLE signed_table DROP COLUMN s; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
+ALTER TABLE signed_table RENAME COLUMN s TO s1; --{serverError ALTER_OF_COLUMN_IS_FORBIDDEN}
 
 DROP TABLE IF EXISTS signed_table;
diff --git a/tests/queries/0_stateless/01717_global_with_subquery_fix.sql b/tests/queries/0_stateless/01717_global_with_subquery_fix.sql
index eaebbe1163e..b835b22f499 100644
--- a/tests/queries/0_stateless/01717_global_with_subquery_fix.sql
+++ b/tests/queries/0_stateless/01717_global_with_subquery_fix.sql
@@ -1,3 +1,3 @@
 -- Tags: global
 
-WITH (SELECT count(distinct colU) from tabA) AS withA, (SELECT count(distinct colU) from tabA) AS withB SELECT withA / withB AS ratio FROM (SELECT date AS period, colX FROM (SELECT date, if(colA IN (SELECT colB FROM tabC), 0, colA) AS colX FROM tabB) AS tempB GROUP BY period, colX) AS main; -- {serverError 60}
+WITH (SELECT count(distinct colU) from tabA) AS withA, (SELECT count(distinct colU) from tabA) AS withB SELECT withA / withB AS ratio FROM (SELECT date AS period, colX FROM (SELECT date, if(colA IN (SELECT colB FROM tabC), 0, colA) AS colX FROM tabB) AS tempB GROUP BY period, colX) AS main; -- {serverError UNKNOWN_TABLE}
diff --git a/tests/queries/0_stateless/01717_int_div_float_too_large_ubsan.sql b/tests/queries/0_stateless/01717_int_div_float_too_large_ubsan.sql
index dc1e5b37050..04d18db5f72 100644
--- a/tests/queries/0_stateless/01717_int_div_float_too_large_ubsan.sql
+++ b/tests/queries/0_stateless/01717_int_div_float_too_large_ubsan.sql
@@ -1,2 +1,2 @@
-SELECT intDiv(18446744073709551615, 0.9998999834060669); -- { serverError 153 }
-SELECT intDiv(18446744073709551615, 1.); -- { serverError 153 }
+SELECT intDiv(18446744073709551615, 0.9998999834060669); -- { serverError ILLEGAL_DIVISION }
+SELECT intDiv(18446744073709551615, 1.); -- { serverError ILLEGAL_DIVISION }
diff --git a/tests/queries/0_stateless/01720_constraints_complex_types.sql b/tests/queries/0_stateless/01720_constraints_complex_types.sql
index 273f509b6eb..fd40699e5fd 100644
--- a/tests/queries/0_stateless/01720_constraints_complex_types.sql
+++ b/tests/queries/0_stateless/01720_constraints_complex_types.sql
@@ -8,7 +8,7 @@ CREATE TABLE constraint_on_nullable_type
 )
 ENGINE = TinyLog();
 
-INSERT INTO constraint_on_nullable_type VALUES (0); -- {serverError 469}
+INSERT INTO constraint_on_nullable_type VALUES (0); -- {serverError VIOLATED_CONSTRAINT}
 INSERT INTO constraint_on_nullable_type VALUES (1);
 
 SELECT * FROM constraint_on_nullable_type;
@@ -23,7 +23,7 @@ CREATE TABLE constraint_on_low_cardinality_type
 )
 ENGINE = TinyLog;
 
-INSERT INTO constraint_on_low_cardinality_type VALUES (0); -- {serverError 469}
+INSERT INTO constraint_on_low_cardinality_type VALUES (0); -- {serverError VIOLATED_CONSTRAINT}
 INSERT INTO constraint_on_low_cardinality_type VALUES (2);
 
 SELECT * FROM constraint_on_low_cardinality_type;
@@ -39,7 +39,7 @@ CREATE TABLE constraint_on_low_cardinality_nullable_type
 )
 ENGINE = TinyLog;
 
-INSERT INTO constraint_on_low_cardinality_nullable_type VALUES (0); -- {serverError 469}
+INSERT INTO constraint_on_low_cardinality_nullable_type VALUES (0); -- {serverError VIOLATED_CONSTRAINT}
 INSERT INTO constraint_on_low_cardinality_nullable_type VALUES (3);
 
 SELECT * FROM constraint_on_low_cardinality_nullable_type;
diff --git a/tests/queries/0_stateless/01720_engine_file_empty_if_not_exists.sql b/tests/queries/0_stateless/01720_engine_file_empty_if_not_exists.sql
index d665dbc722f..d031c71f153 100644
--- a/tests/queries/0_stateless/01720_engine_file_empty_if_not_exists.sql
+++ b/tests/queries/0_stateless/01720_engine_file_empty_if_not_exists.sql
@@ -2,11 +2,11 @@ DROP TABLE IF EXISTS file_engine_table;
 
 CREATE TABLE file_engine_table (id UInt32) ENGINE=File(TSV);
 
-SELECT * FROM file_engine_table; --{ serverError 107 }
+SELECT * FROM file_engine_table; --{ serverError FILE_DOESNT_EXIST }
 
 SET engine_file_empty_if_not_exists=0;
 
-SELECT * FROM file_engine_table; --{ serverError 107 }
+SELECT * FROM file_engine_table; --{ serverError FILE_DOESNT_EXIST }
 
 SET engine_file_empty_if_not_exists=1;
diff --git a/tests/queries/0_stateless/01720_type_map_and_casts.sql b/tests/queries/0_stateless/01720_type_map_and_casts.sql
index d090d0e5b66..72d00da01f2 100644
--- a/tests/queries/0_stateless/01720_type_map_and_casts.sql
+++ b/tests/queries/0_stateless/01720_type_map_and_casts.sql
@@ -57,7 +57,7 @@ SELECT m[toUUID('00001192-0000-4000-8000-000000000001')]
 FROM table_map_with_key_integer;
 
-SELECT m[257], m[1] FROM table_map_with_key_integer; -- { serverError 43 }
+SELECT m[257], m[1] FROM table_map_with_key_integer; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
 
 DROP TABLE IF EXISTS table_map_with_key_integer;
 
@@ -85,4 +85,4 @@ DROP TABLE IF EXISTS table_map_with_key_integer;
 CREATE TABLE table_map_with_key_integer (m Map(Array(UInt32), String)) ENGINE = MergeTree() ORDER BY tuple();
 DROP TABLE IF EXISTS table_map_with_key_integer;
 
-CREATE TABLE table_map_with_key_integer (m Map(Nullable(String), String)) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError 36}
+CREATE TABLE table_map_with_key_integer (m Map(Nullable(String), String)) ENGINE = MergeTree() ORDER BY tuple(); -- { serverError BAD_ARGUMENTS}
diff --git a/tests/queries/0_stateless/01721_constraints_constant_expressions.sql b/tests/queries/0_stateless/01721_constraints_constant_expressions.sql
index d70c0cd4dc0..150aef3b569 100644
--- a/tests/queries/0_stateless/01721_constraints_constant_expressions.sql
+++ b/tests/queries/0_stateless/01721_constraints_constant_expressions.sql
@@ -20,7 +20,7 @@ CREATE TABLE constraint_constant_number_expression_non_uint8
     CONSTRAINT `c0` CHECK toUInt64(1)
 ) ENGINE = TinyLog();
 
-INSERT INTO constraint_constant_number_expression_non_uint8 VALUES (2); -- {serverError 1}
+INSERT INTO constraint_constant_number_expression_non_uint8 VALUES (2); -- {serverError UNSUPPORTED_METHOD}
 
 SELECT * FROM constraint_constant_number_expression_non_uint8;
 
@@ -33,7 +33,7 @@ CREATE TABLE constraint_constant_nullable_expression_that_contains_null
     CONSTRAINT `c0` CHECK nullIf(1 % 2, 1)
 ) ENGINE = TinyLog();
 
-INSERT INTO constraint_constant_nullable_expression_that_contains_null VALUES (3); -- {serverError 469}
+INSERT INTO constraint_constant_nullable_expression_that_contains_null VALUES (3); -- {serverError VIOLATED_CONSTRAINT}
 
 SELECT * FROM constraint_constant_nullable_expression_that_contains_null;
diff --git a/tests/queries/0_stateless/01730_distributed_group_by_no_merge_order_by_long.sql b/tests/queries/0_stateless/01730_distributed_group_by_no_merge_order_by_long.sql
index 74bafe6e4cd..6625ad916e8 100644
--- a/tests/queries/0_stateless/01730_distributed_group_by_no_merge_order_by_long.sql
+++ b/tests/queries/0_stateless/01730_distributed_group_by_no_merge_order_by_long.sql
@@ -4,7 +4,7 @@ drop table if exists data_01730;
 
 -- does not use 127.1 due to prefer_localhost_replica
-select * from remote('127.{2..11}', view(select * from numbers(1e6))) group by number order by number limit 20 settings distributed_group_by_no_merge=0, max_memory_usage='100Mi'; -- { serverError 241 }
+select * from remote('127.{2..11}', view(select * from numbers(1e6))) group by number order by number limit 20 settings distributed_group_by_no_merge=0, max_memory_usage='100Mi'; -- { serverError MEMORY_LIMIT_EXCEEDED }
 
 -- no memory limit error, because with distributed_group_by_no_merge=2 remote servers will do ORDER BY and will cut to the LIMIT
 select * from remote('127.{2..11}', view(select * from numbers(1e6))) group by number order by number limit 20 settings distributed_group_by_no_merge=2, max_memory_usage='100Mi';
@@ -12,7 +12,7 @@ select * from remote('127.{2..11}', view(select * from numbers(1e6))) group by n
 -- and the query with GROUP BY on remote servers will first do GROUP BY and then send the block,
 -- so the initiator will first receive all blocks from remotes and only after start merging,
 -- and will hit the memory limit.
-select * from remote('127.{2..11}', view(select * from numbers(1e6))) group by number order by number limit 1e6 settings distributed_group_by_no_merge=2, max_memory_usage='20Mi', max_block_size=4294967296; -- { serverError 241 }
+select * from remote('127.{2..11}', view(select * from numbers(1e6))) group by number order by number limit 1e6 settings distributed_group_by_no_merge=2, max_memory_usage='20Mi', max_block_size=4294967296; -- { serverError MEMORY_LIMIT_EXCEEDED }
 
 -- with optimize_aggregation_in_order=1 remote servers will produce blocks more frequently,
 -- since they don't need to wait until the aggregation will be finished,
diff --git a/tests/queries/0_stateless/01732_alters_bad_conversions.sql b/tests/queries/0_stateless/01732_alters_bad_conversions.sql
index 27da5242368..fe8eb0c1438 100644
--- a/tests/queries/0_stateless/01732_alters_bad_conversions.sql
+++ b/tests/queries/0_stateless/01732_alters_bad_conversions.sql
@@ -3,13 +3,13 @@ DROP TABLE IF EXISTS bad_conversions_2;
 
 CREATE TABLE bad_conversions (a UInt32) ENGINE = MergeTree ORDER BY tuple();
 INSERT INTO bad_conversions VALUES (1);
-ALTER TABLE bad_conversions MODIFY COLUMN a Array(String); -- { serverError 53 }
+ALTER TABLE bad_conversions MODIFY COLUMN a Array(String); -- { serverError TYPE_MISMATCH }
 SHOW CREATE TABLE bad_conversions;
 SELECT count() FROM system.mutations WHERE table = 'bad_conversions' AND database = currentDatabase();
 
 CREATE TABLE bad_conversions_2 (e Enum('foo' = 1, 'bar' = 2)) ENGINE = MergeTree ORDER BY tuple();
 INSERT INTO bad_conversions_2 VALUES (1);
-ALTER TABLE bad_conversions_2 MODIFY COLUMN e Enum('bar' = 1, 'foo' = 2); -- { serverError 70 }
+ALTER TABLE bad_conversions_2 MODIFY COLUMN e Enum('bar' = 1, 'foo' = 2); -- { serverError CANNOT_CONVERT_TYPE }
 SHOW CREATE TABLE bad_conversions_2;
 SELECT count() FROM system.mutations WHERE table = 'bad_conversions_2' AND database = currentDatabase();
diff --git a/tests/queries/0_stateless/01732_bigint_ubsan.sql b/tests/queries/0_stateless/01732_bigint_ubsan.sql
index 238a5d99d30..42d9fee450f 100644
--- a/tests/queries/0_stateless/01732_bigint_ubsan.sql
+++ b/tests/queries/0_stateless/01732_bigint_ubsan.sql
@@ -7,5 +7,5 @@ INSERT INTO decimal VALUES (0);
 INSERT INTO decimal VALUES (0.42);
 INSERT INTO decimal VALUES (-0.42);
 
-SELECT f + 1048575, f - 21, f - 84, f * 21, f * -21, f / 21, f / 84 FROM decimal WHERE f > 0; -- { serverError 407 }
-SELECT f + -2, f - 21, f - 84, f * 21, f * -21, f / 9223372036854775807, f / 84 FROM decimal WHERE f > 0; -- { serverError 407 }
+SELECT f + 1048575, f - 21, f - 84, f * 21, f * -21, f / 21, f / 84 FROM decimal WHERE f > 0; -- { serverError DECIMAL_OVERFLOW }
+SELECT f + -2, f - 21, f - 84, f * 21, f * -21, f / 9223372036854775807, f / 84 FROM decimal WHERE f > 0; -- { serverError DECIMAL_OVERFLOW }
diff --git a/tests/queries/0_stateless/01732_union_and_union_all.sql b/tests/queries/0_stateless/01732_union_and_union_all.sql
index 2de6daa5bb9..e1108d046da 100644
--- a/tests/queries/0_stateless/01732_union_and_union_all.sql
+++ b/tests/queries/0_stateless/01732_union_and_union_all.sql
@@ -1 +1 @@
-select 1 UNION select 1 UNION ALL select 1; -- { serverError 558 }
+select 1 UNION select 1 UNION ALL select 1; -- { serverError EXPECTED_ALL_OR_DISTINCT }
diff --git a/tests/queries/0_stateless/01734_datetime64_from_float.sql b/tests/queries/0_stateless/01734_datetime64_from_float.sql
index c4290a0cadb..9e623793299 100644
--- a/tests/queries/0_stateless/01734_datetime64_from_float.sql
+++ b/tests/queries/0_stateless/01734_datetime64_from_float.sql
@@ -17,6 +17,6 @@ SELECT toDateTime64(-999999999999, 9, 'UTC');
 SELECT toDateTime64(9200000000.0, 9, 'UTC'); -- value < 2262-04-11
 SELECT toDateTime64(9200000000, 9, 'UTC');
 
-SELECT toDateTime64(9300000000.0, 9, 'UTC'); -- { serverError 407 } # 2262-04-11 < value
-SELECT toDateTime64(9300000000, 9, 'UTC'); -- { serverError 407 }
+SELECT toDateTime64(9300000000.0, 9, 'UTC'); -- { serverError DECIMAL_OVERFLOW } # 2262-04-11 < value
+SELECT toDateTime64(9300000000, 9, 'UTC'); -- { serverError DECIMAL_OVERFLOW }
diff --git a/tests/queries/0_stateless/01745_alter_delete_view.sql b/tests/queries/0_stateless/01745_alter_delete_view.sql
index c242f1be63e..e4715b16b59 100644
--- a/tests/queries/0_stateless/01745_alter_delete_view.sql
+++ b/tests/queries/0_stateless/01745_alter_delete_view.sql
@@ -20,7 +20,7 @@ INSERT INTO test_table (f1, f2, pk) VALUES (1,1,1), (1,1,2), (2,1,1), (2,1,2);
 
 SELECT * FROM test_view ORDER BY f1, f2;
 
-ALTER TABLE test_view DELETE WHERE pk = 2; --{serverError 48}
+ALTER TABLE test_view DELETE WHERE pk = 2; --{serverError NOT_IMPLEMENTED}
 
 SELECT * FROM test_view ORDER BY f1, f2;
diff --git a/tests/queries/0_stateless/01746_forbid_drop_column_referenced_by_mv.sql b/tests/queries/0_stateless/01746_forbid_drop_column_referenced_by_mv.sql
index f084cae7780..bac41762540 100644
--- a/tests/queries/0_stateless/01746_forbid_drop_column_referenced_by_mv.sql
+++ b/tests/queries/0_stateless/01746_forbid_drop_column_referenced_by_mv.sql
@@ -19,10 +19,10 @@ SELECT
 FROM `01746_merge_tree`;
 
 ALTER TABLE `01746_merge_tree`
-    DROP COLUMN n3; -- { serverError 524 }
+    DROP COLUMN n3; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 ALTER TABLE `01746_merge_tree`
-    DROP COLUMN n2; -- { serverError 524 }
+    DROP COLUMN n2; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 -- ok
 ALTER TABLE `01746_merge_tree`
@@ -50,10 +50,10 @@ SELECT
 FROM `01746_null`;
 
 ALTER TABLE `01746_null`
-    DROP COLUMN n1; -- { serverError 524 }
+    DROP COLUMN n1; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 ALTER TABLE `01746_null`
-    DROP COLUMN n2; -- { serverError 524 }
+    DROP COLUMN n2; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 -- ok
 ALTER TABLE `01746_null`
@@ -86,10 +86,10 @@ SELECT
 FROM `01746_dist`;
 
 ALTER TABLE `01746_dist`
-    DROP COLUMN n1; -- { serverError 524 }
+    DROP COLUMN n1; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 ALTER TABLE `01746_dist`
-    DROP COLUMN n2; -- { serverError 524 }
+    DROP COLUMN n2; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 -- ok
 ALTER TABLE `01746_dist`
@@ -122,10 +122,10 @@ SELECT
 FROM `01746_merge`;
 
 ALTER TABLE `01746_merge`
-    DROP COLUMN n1; -- { serverError 524 }
+    DROP COLUMN n1; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 ALTER TABLE `01746_merge`
-    DROP COLUMN n2; -- { serverError 524 }
+    DROP COLUMN n2; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 -- ok
 ALTER TABLE `01746_merge`
@@ -158,10 +158,10 @@ SELECT
 FROM `01746_buffer`;
 
 ALTER TABLE `01746_buffer`
-    DROP COLUMN n1; -- { serverError 524 }
+    DROP COLUMN n1; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 ALTER TABLE `01746_buffer`
-    DROP COLUMN n2; -- { serverError 524 }
+    DROP COLUMN n2; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN }
 
 -- ok
 ALTER TABLE `01746_buffer`
diff --git a/tests/queries/0_stateless/01752_distributed_query_sigsegv.sql b/tests/queries/0_stateless/01752_distributed_query_sigsegv.sql
index c6af23bd1db..2fe3e29a773 100644
--- a/tests/queries/0_stateless/01752_distributed_query_sigsegv.sql
+++ b/tests/queries/0_stateless/01752_distributed_query_sigsegv.sql
@@ -1,10 +1,10 @@
 -- Tags: distributed
 
 -- this is enough to trigger the regression
-SELECT throwIf(dummy = 0) FROM remote('127.1', system.one); -- { serverError 395 }
+SELECT throwIf(dummy = 0) FROM remote('127.1', system.one); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO }
 
 -- these are just in case
-SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one); -- { serverError 395 }
-SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one) SETTINGS prefer_localhost_replica=0; -- { serverError 395 }
-SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one) SETTINGS prefer_localhost_replica=0, distributed_group_by_no_merge=1; -- { serverError 395 }
-SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one) SETTINGS prefer_localhost_replica=0, distributed_group_by_no_merge=2; -- { serverError 395 }
+SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO }
+SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one) SETTINGS prefer_localhost_replica=0; -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO }
+SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one) SETTINGS prefer_localhost_replica=0, distributed_group_by_no_merge=1; -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO }
+SELECT throwIf(dummy = 0) FROM remote('127.{1,2}', system.one) SETTINGS prefer_localhost_replica=0, distributed_group_by_no_merge=2; -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO }
diff --git a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference
index 70ee806f79d..28dbb9215a8 100644
--- a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference
+++ b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.reference
@@ -101,11 +101,11 @@ select * from dist_01756 where dummy in ('0');
 select 'errors';
 errors
 -- optimize_skip_unused_shards does not support non-constants
-select * from dist_01756 where dummy in (select * from system.one); -- { serverError 507 }
+select * from dist_01756 where dummy in (select * from system.one); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 -- this is a constant for analyzer
-select * from dist_01756 where dummy in (toUInt8(0)) settings allow_experimental_analyzer=0; -- { serverError 507 }
+select * from dist_01756 where dummy in (toUInt8(0)) settings allow_experimental_analyzer=0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 -- NOT IN does not supported
-select * from dist_01756 where dummy not in (0, 2); -- { serverError 507 }
+select * from dist_01756 where dummy not in (0, 2); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 --
 -- others
 --
@@ -140,9 +140,9 @@ select * from dist_01756_str where key in ('0', '2');
 select * from dist_01756_str where key in (0, 2);
 0
 -- analyzer does support this
-select * from dist_01756_str where key in ('0', Null) settings allow_experimental_analyzer=0; -- { serverError 507 }
--- select * from dist_01756_str where key in (0, 2); -- { serverError 53 }
--- select * from dist_01756_str where key in (0, Null); -- { serverError 53 }
+select * from dist_01756_str where key in ('0', Null) settings allow_experimental_analyzer=0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
+-- select * from dist_01756_str where key in (0, 2); -- { serverError TYPE_MISMATCH }
+-- select * from dist_01756_str where key in (0, Null); -- { serverError TYPE_MISMATCH }
 -- different type #2
 select 'different types -- conversion';
@@ -150,14 +150,14 @@ different types -- conversion
 create table dist_01756_column as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy);
 select * from dist_01756_column where dummy in (0, '255');
 0
-select * from dist_01756_column where dummy in (0, '255foo'); -- { serverError 53 }
+select * from dist_01756_column where dummy in (0, '255foo'); -- { serverError TYPE_MISMATCH }
 -- intHash64 does not accept string, but implicit conversion should be done
 select * from dist_01756 where dummy in ('0', '2');
 0
 -- optimize_skip_unused_shards_limit
 select 'optimize_skip_unused_shards_limit';
 optimize_skip_unused_shards_limit
-select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1, force_optimize_skip_unused_shards=0;
 0
 0
diff --git a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql
index a90f109620b..9a1a00cc0a1 100644
--- a/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql
+++ b/tests/queries/0_stateless/01756_optimize_skip_unused_shards_rewrite_in.sql
@@ -111,11 +111,11 @@ select * from dist_01756 where dummy in ('0');
 select 'errors';
 
 -- optimize_skip_unused_shards does not support non-constants
-select * from dist_01756 where dummy in (select * from system.one); -- { serverError 507 }
+select * from dist_01756 where dummy in (select * from system.one); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 -- this is a constant for analyzer
-select * from dist_01756 where dummy in (toUInt8(0)) settings allow_experimental_analyzer=0; -- { serverError 507 }
+select * from dist_01756 where dummy in (toUInt8(0)) settings allow_experimental_analyzer=0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 -- NOT IN does not supported
-select * from dist_01756 where dummy not in (0, 2); -- { serverError 507 }
+select * from dist_01756 where dummy not in (0, 2); -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 
 --
 -- others
@@ -146,21 +146,21 @@ create table dist_01756_str as data_01756_str engine=Distributed(test_cluster_tw
 select * from dist_01756_str where key in ('0', '2');
 select * from dist_01756_str where key in (0, 2);
 -- analyzer does support this
-select * from dist_01756_str where key in ('0', Null) settings allow_experimental_analyzer=0; -- { serverError 507 }
--- select * from dist_01756_str where key in (0, 2); -- { serverError 53 }
--- select * from dist_01756_str where key in (0, Null); -- { serverError 53 }
+select * from dist_01756_str where key in ('0', Null) settings allow_experimental_analyzer=0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
+-- select * from dist_01756_str where key in (0, 2); -- { serverError TYPE_MISMATCH }
+-- select * from dist_01756_str where key in (0, Null); -- { serverError TYPE_MISMATCH }
 
 -- different type #2
 select 'different types -- conversion';
 create table dist_01756_column as system.one engine=Distributed(test_cluster_two_shards, system, one, dummy);
 select * from dist_01756_column where dummy in (0, '255');
-select * from dist_01756_column where dummy in (0, '255foo'); -- { serverError 53 }
+select * from dist_01756_column where dummy in (0, '255foo'); -- { serverError TYPE_MISMATCH }
 -- intHash64 does not accept string, but implicit conversion should be done
 select * from dist_01756 where dummy in ('0', '2');
 
 -- optimize_skip_unused_shards_limit
 select 'optimize_skip_unused_shards_limit';
-select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 select * from dist_01756 where dummy in (0, 2) settings optimize_skip_unused_shards_limit=1, force_optimize_skip_unused_shards=0;
 
 -- { echoOff }
diff --git a/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.sql b/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.sql
index 3f97b912105..3853ccb4080 100644
--- a/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.sql
+++ b/tests/queries/0_stateless/01757_optimize_skip_unused_shards_limit.sql
@@ -11,27 +11,27 @@ select * from dist_01757 where dummy in (0,) format Null;
 select * from dist_01757 where dummy in (0, 1) format Null settings optimize_skip_unused_shards_limit=2;
 
 -- in negative
-select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=1; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 
 -- or negative
-select * from dist_01757 where dummy = 0 or dummy = 1 settings optimize_skip_unused_shards_limit=1; -- { serverError 507 }
+select * from dist_01757 where dummy = 0 or dummy = 1 settings optimize_skip_unused_shards_limit=1; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS }
 
 -- or
 select * from dist_01757 where dummy = 0 or dummy = 1 format Null settings optimize_skip_unused_shards_limit=2;
 
 -- and negative
 -- disabled for analyzer cause new implementation consider `dummy = 0 and dummy = 1` as constant False.
-select * from dist_01757 where dummy = 0 and dummy = 1 settings optimize_skip_unused_shards_limit=1, allow_experimental_analyzer=0; -- { serverError 507 } -select * from dist_01757 where dummy = 0 and dummy = 2 and dummy = 3 settings optimize_skip_unused_shards_limit=1, allow_experimental_analyzer=0; -- { serverError 507 } -select * from dist_01757 where dummy = 0 and dummy = 2 and dummy = 3 settings optimize_skip_unused_shards_limit=2, allow_experimental_analyzer=0; -- { serverError 507 } +select * from dist_01757 where dummy = 0 and dummy = 1 settings optimize_skip_unused_shards_limit=1, allow_experimental_analyzer=0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } +select * from dist_01757 where dummy = 0 and dummy = 2 and dummy = 3 settings optimize_skip_unused_shards_limit=1, allow_experimental_analyzer=0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } +select * from dist_01757 where dummy = 0 and dummy = 2 and dummy = 3 settings optimize_skip_unused_shards_limit=2, allow_experimental_analyzer=0; -- { serverError UNABLE_TO_SKIP_UNUSED_SHARDS } -- and select * from dist_01757 where dummy = 0 and dummy = 1 settings optimize_skip_unused_shards_limit=2; select * from dist_01757 where dummy = 0 and dummy = 1 and dummy = 3 settings optimize_skip_unused_shards_limit=3; -- ARGUMENT_OUT_OF_BOUND error -select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=0; -- { serverError 69 } -select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=9223372036854775808; -- { serverError 69 } +select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=0; -- { serverError ARGUMENT_OUT_OF_BOUND } +select * from dist_01757 where dummy in (0, 1) settings optimize_skip_unused_shards_limit=9223372036854775808; -- { serverError ARGUMENT_OUT_OF_BOUND } drop table dist_01757; diff --git a/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.sql b/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.sql index 1a1e65a4e1a..b85090c51c6 100644 --- a/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.sql +++ b/tests/queries/0_stateless/01759_dictionary_unique_attribute_names.sql @@ -16,7 +16,7 @@ INSERT INTO 01759_db.dictionary_source_table VALUES (0, 2, 3), (1, 5, 6), (2, 8, CREATE DICTIONARY 01759_db.test_dictionary(key UInt64, value1 UInt64, value1 UInt64) PRIMARY KEY key SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'dictionary_source_table' DB '01759_db')) -LAYOUT(COMPLEX_KEY_DIRECT()); -- {serverError 36} +LAYOUT(COMPLEX_KEY_DIRECT()); -- {serverError BAD_ARGUMENTS} CREATE DICTIONARY 01759_db.test_dictionary(key UInt64, value1 UInt64, value2 UInt64) PRIMARY KEY key diff --git a/tests/queries/0_stateless/01760_modulo_negative.sql b/tests/queries/0_stateless/01760_modulo_negative.sql index dbea06cc100..3e5f9626761 100644 --- a/tests/queries/0_stateless/01760_modulo_negative.sql +++ b/tests/queries/0_stateless/01760_modulo_negative.sql @@ -1 +1 @@ -SELECT -number % -9223372036854775808 FROM system.numbers; -- { serverError 153 } +SELECT -number % -9223372036854775808 FROM system.numbers; -- { serverError ILLEGAL_DIVISION } diff --git a/tests/queries/0_stateless/01760_polygon_dictionaries.sql b/tests/queries/0_stateless/01760_polygon_dictionaries.sql index e74b3ce03b9..f3be66eb858 100644 --- a/tests/queries/0_stateless/01760_polygon_dictionaries.sql +++ b/tests/queries/0_stateless/01760_polygon_dictionaries.sql @@ -61,10 +61,10 @@ FROM 01760_db.points 
ORDER BY x, y; SELECT 'check NaN or infinite point input'; -SELECT tuple(nan, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} -SELECT tuple(nan, nan) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} -SELECT tuple(inf, nan) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} -SELECT tuple(inf, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError 36} +SELECT tuple(nan, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError BAD_ARGUMENTS} +SELECT tuple(nan, nan) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError BAD_ARGUMENTS} +SELECT tuple(inf, nan) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError BAD_ARGUMENTS} +SELECT tuple(inf, inf) as key, dictGet('01760_db.dict_array', 'name', key); --{serverError BAD_ARGUMENTS} DROP DICTIONARY 01760_db.dict_array; DROP TABLE 01760_db.points; diff --git a/tests/queries/0_stateless/01763_max_distributed_depth.sql b/tests/queries/0_stateless/01763_max_distributed_depth.sql index f50d15e7121..08dc533876d 100644 --- a/tests/queries/0_stateless/01763_max_distributed_depth.sql +++ b/tests/queries/0_stateless/01763_max_distributed_depth.sql @@ -19,17 +19,17 @@ DROP TABLE IF EXISTS tt7; CREATE TABLE tt7 as tt6 ENGINE = Distributed('test_shard_localhost', '', 'tt6', rand()); -INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError 581 } +INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError TOO_LARGE_DISTRIBUTED_DEPTH } -SELECT * FROM tt6; -- { serverError 581 } +SELECT * FROM tt6; -- { serverError TOO_LARGE_DISTRIBUTED_DEPTH } SET max_distributed_depth = 0; -- stack overflow -INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError 306} +INSERT INTO tt6 VALUES (1, 1, 1, 1, 'ok'); -- { serverError TOO_DEEP_RECURSION} -- stack overflow -SELECT * FROM tt6; -- { serverError 306 } +SELECT * FROM tt6; -- { serverError TOO_DEEP_RECURSION } DROP TABLE tt6; DROP TABLE tt7; diff --git a/tests/queries/0_stateless/01764_prefer_column_name_to_alias.sql b/tests/queries/0_stateless/01764_prefer_column_name_to_alias.sql index 194d2b90854..781ba609554 100644 --- a/tests/queries/0_stateless/01764_prefer_column_name_to_alias.sql +++ b/tests/queries/0_stateless/01764_prefer_column_name_to_alias.sql @@ -1,6 +1,6 @@ -SELECT avg(number) AS number, max(number) FROM numbers(10); -- { serverError 184 } -SELECT sum(x) AS x, max(x) FROM (SELECT 1 AS x UNION ALL SELECT 2 AS x) t; -- { serverError 184 } -select sum(C1) as C1, count(C1) as C2 from (select number as C1 from numbers(3)) as ITBL; -- { serverError 184 } +SELECT avg(number) AS number, max(number) FROM numbers(10); -- { serverError ILLEGAL_AGGREGATION } +SELECT sum(x) AS x, max(x) FROM (SELECT 1 AS x UNION ALL SELECT 2 AS x) t; -- { serverError ILLEGAL_AGGREGATION } +select sum(C1) as C1, count(C1) as C2 from (select number as C1 from numbers(3)) as ITBL; -- { serverError ILLEGAL_AGGREGATION } set prefer_column_name_to_alias = 1; SELECT avg(number) AS number, max(number) FROM numbers(10); diff --git a/tests/queries/0_stateless/01768_extended_range.sql b/tests/queries/0_stateless/01768_extended_range.sql index 4acaccd1399..fe506e97b05 100644 --- a/tests/queries/0_stateless/01768_extended_range.sql +++ b/tests/queries/0_stateless/01768_extended_range.sql @@ -1,4 +1,4 @@ SELECT toYear(toDateTime64('1968-12-12 11:22:33', 0, 'UTC')); SELECT toInt16(toRelativeWeekNum(toDateTime64('1960-11-30 18:00:11.999', 3, 'UTC'))); SELECT toStartOfQuarter(toDateTime64('1990-01-04 12:14:12', 0, 
'UTC')); -SELECT toUnixTimestamp(toDateTime64('1900-12-12 11:22:33', 0, 'UTC')); -- { serverError 407 } +SELECT toUnixTimestamp(toDateTime64('1900-12-12 11:22:33', 0, 'UTC')); -- { serverError DECIMAL_OVERFLOW } diff --git a/tests/queries/0_stateless/01773_datetime64_add_ubsan.sql b/tests/queries/0_stateless/01773_datetime64_add_ubsan.sql index 70dcd6a133f..f0a352a79de 100644 --- a/tests/queries/0_stateless/01773_datetime64_add_ubsan.sql +++ b/tests/queries/0_stateless/01773_datetime64_add_ubsan.sql @@ -1,2 +1,2 @@ -- The result is unspecified but UBSan should not argue. -SELECT ignore(addHours(now64(3), inf)) FROM numbers(2); -- { serverError 407 } +SELECT ignore(addHours(now64(3), inf)) FROM numbers(2); -- { serverError DECIMAL_OVERFLOW } diff --git a/tests/queries/0_stateless/01774_bar_with_illegal_value.sql b/tests/queries/0_stateless/01774_bar_with_illegal_value.sql index 60c7f303c13..44ed0521d08 100644 --- a/tests/queries/0_stateless/01774_bar_with_illegal_value.sql +++ b/tests/queries/0_stateless/01774_bar_with_illegal_value.sql @@ -1 +1 @@ -SELECT greatCircleAngle(1048575, 257, -9223372036854775808, 1048576) - NULL, bar(7, -inf, 1024); -- { serverError 36 } +SELECT greatCircleAngle(1048575, 257, -9223372036854775808, 1048576) - NULL, bar(7, -inf, 1024); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01774_ip_address_in_range.sql b/tests/queries/0_stateless/01774_ip_address_in_range.sql index 29c2bcb220d..1480b63402a 100644 --- a/tests/queries/0_stateless/01774_ip_address_in_range.sql +++ b/tests/queries/0_stateless/01774_ip_address_in_range.sql @@ -51,14 +51,14 @@ SELECT isIPAddressInRange('::127.0.0.1', '127.0.0.1/32'); SELECT '# Unparsable arguments'; -SELECT isIPAddressInRange('unparsable', '127.0.0.0/8'); -- { serverError 6 } -SELECT isIPAddressInRange('127.0.0.1', 'unparsable'); -- { serverError 6 } +SELECT isIPAddressInRange('unparsable', '127.0.0.0/8'); -- { serverError CANNOT_PARSE_TEXT } +SELECT isIPAddressInRange('127.0.0.1', 'unparsable'); -- { serverError CANNOT_PARSE_TEXT } SELECT '# Wrong argument types'; -SELECT isIPAddressInRange(100, '127.0.0.0/8'); -- { serverError 43 } -SELECT isIPAddressInRange(NULL, '127.0.0.0/8'); -- { serverError 43 } -SELECT isIPAddressInRange(CAST(NULL, 'Nullable(String)'), '127.0.0.0/8'); -- { serverError 43 } -SELECT isIPAddressInRange('127.0.0.1', 100); -- { serverError 43 } -SELECT isIPAddressInRange(100, NULL); -- { serverError 43 } -WITH arrayJoin([NULL, NULL, NULL, NULL]) AS prefix SELECT isIPAddressInRange([NULL, NULL, 0, 255, 0], prefix); -- { serverError 43 } +SELECT isIPAddressInRange(100, '127.0.0.0/8'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT isIPAddressInRange(NULL, '127.0.0.0/8'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT isIPAddressInRange(CAST(NULL, 'Nullable(String)'), '127.0.0.0/8'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT isIPAddressInRange('127.0.0.1', 100); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT isIPAddressInRange(100, NULL); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +WITH arrayJoin([NULL, NULL, NULL, NULL]) AS prefix SELECT isIPAddressInRange([NULL, NULL, 0, 255, 0], prefix); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01776_decrypt_aead_size_check.sql b/tests/queries/0_stateless/01776_decrypt_aead_size_check.sql index f03b1e0350a..75834e25a10 100644 --- a/tests/queries/0_stateless/01776_decrypt_aead_size_check.sql +++ b/tests/queries/0_stateless/01776_decrypt_aead_size_check.sql @@ -1,4 +1,4 @@ -- Tags: 
no-fasttest -- Tag no-fasttest: Depends on OpenSSL -SELECT decrypt('aes-128-gcm', 'text', 'key', 'IV'); -- { serverError 36 } +SELECT decrypt('aes-128-gcm', 'text', 'key', 'IV'); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01777_map_populate_series_ubsan.sql b/tests/queries/0_stateless/01777_map_populate_series_ubsan.sql index 5a8c182425a..241b863d17d 100644 --- a/tests/queries/0_stateless/01777_map_populate_series_ubsan.sql +++ b/tests/queries/0_stateless/01777_map_populate_series_ubsan.sql @@ -1,2 +1,2 @@ -- Should correctly throw exception about overflow: -SELECT mapPopulateSeries([-9223372036854775808, toUInt32(2)], [toUInt32(1023), -1]); -- { serverError 128 } +SELECT mapPopulateSeries([-9223372036854775808, toUInt32(2)], [toUInt32(1023), -1]); -- { serverError TOO_LARGE_ARRAY_SIZE } diff --git a/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql index 2ea6119cef8..1eee4090112 100644 --- a/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql +++ b/tests/queries/0_stateless/01780_clickhouse_dictionary_source_loop.sql @@ -13,7 +13,7 @@ PRIMARY KEY id SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 TABLE 'dict1')) LAYOUT(DIRECT()); -SELECT * FROM dict1; --{serverError 36} +SELECT * FROM dict1; --{serverError BAD_ARGUMENTS} DROP DICTIONARY dict1; @@ -27,7 +27,7 @@ PRIMARY KEY id SOURCE(CLICKHOUSE(HOST 'localhost' PORT 9000 DATABASE '01780_db' TABLE 'dict2')) LAYOUT(DIRECT()); -SELECT * FROM 01780_db.dict2; --{serverError 36} +SELECT * FROM 01780_db.dict2; --{serverError BAD_ARGUMENTS} DROP DICTIONARY 01780_db.dict2; DROP TABLE IF EXISTS 01780_db.dict3_source; diff --git a/tests/queries/0_stateless/01780_range_msan.sql b/tests/queries/0_stateless/01780_range_msan.sql index dd0a35c3eea..7cfdddbfa04 100644 --- a/tests/queries/0_stateless/01780_range_msan.sql +++ b/tests/queries/0_stateless/01780_range_msan.sql @@ -1 +1 @@ -SELECT range(toUInt256(1), 1); -- { serverError 44 } +SELECT range(toUInt256(1), 1); -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/01782_field_oom.sql b/tests/queries/0_stateless/01782_field_oom.sql index 2609c589d94..acbbac7f524 100644 --- a/tests/queries/0_stateless/01782_field_oom.sql +++ b/tests/queries/0_stateless/01782_field_oom.sql @@ -1,2 +1,2 @@ SET max_memory_usage = '500M'; -SELECT sumMap([number], [number]) FROM system.numbers_mt; -- { serverError 241 } +SELECT sumMap([number], [number]) FROM system.numbers_mt; -- { serverError MEMORY_LIMIT_EXCEEDED } diff --git a/tests/queries/0_stateless/01784_parallel_formatting_memory.sql b/tests/queries/0_stateless/01784_parallel_formatting_memory.sql index 35dc063f895..00b3b2d88d9 100644 --- a/tests/queries/0_stateless/01784_parallel_formatting_memory.sql +++ b/tests/queries/0_stateless/01784_parallel_formatting_memory.sql @@ -1,2 +1,2 @@ SET max_memory_usage = '1G'; -SELECT range(65535) FROM system.one ARRAY JOIN range(65536) AS number; -- { serverError 241 } +SELECT range(65535) FROM system.one ARRAY JOIN range(65536) AS number; -- { serverError MEMORY_LIMIT_EXCEEDED } diff --git a/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql index efd8ea2a565..2edf99299cd 100644 --- a/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql +++ b/tests/queries/0_stateless/01788_update_nested_type_subcolumn_check.sql @@ -29,7 +29,7 @@ alter table test_wide_nested 
update `info.id` = [100,200], `info.age`=[68,72] wh alter table test_wide_nested update `info.id` = `info.age` where id = 3; select * from test_wide_nested; -alter table test_wide_nested update `info.id` = [100,200], `info.age` = [10,20,30], `info.name` = ['a','b','c'] where id = 0; -- { serverError 341 } +alter table test_wide_nested update `info.id` = [100,200], `info.age` = [10,20,30], `info.name` = ['a','b','c'] where id = 0; -- { serverError UNFINISHED } kill mutation where table = 'test_wide_nested' and database = currentDatabase() format Null; @@ -54,7 +54,7 @@ ALTER TABLE test_wide_nested ADD COLUMN `info2.name` Array(String); ALTER table test_wide_nested update `info2.id` = `info.id`, `info2.name` = `info.name` where 1; select * from test_wide_nested; -alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = [10,20,30] where id = 1; -- { serverError 341 } +alter table test_wide_nested update `info.id` = [100,200,300], `info.age` = [10,20,30] where id = 1; -- { serverError UNFINISHED } kill mutation where table = 'test_wide_nested' and database = currentDatabase() format Null; diff --git a/tests/queries/0_stateless/01801_approx_total_rows_mergetree_reverse.sql b/tests/queries/0_stateless/01801_approx_total_rows_mergetree_reverse.sql index 8809bf67a4c..6fcc7c92e27 100644 --- a/tests/queries/0_stateless/01801_approx_total_rows_mergetree_reverse.sql +++ b/tests/queries/0_stateless/01801_approx_total_rows_mergetree_reverse.sql @@ -1,8 +1,8 @@ drop table if exists data_01801; create table data_01801 (key Int) engine=MergeTree() order by key settings index_granularity=10 as select number/10 from numbers(100); -select * from data_01801 where key = 0 order by key settings max_rows_to_read=9 format Null; -- { serverError 158 } -select * from data_01801 where key = 0 order by key desc settings max_rows_to_read=9 format Null; -- { serverError 158 } +select * from data_01801 where key = 0 order by key settings max_rows_to_read=9 format Null; -- { serverError TOO_MANY_ROWS } +select * from data_01801 where key = 0 order by key desc settings max_rows_to_read=9 format Null; -- { serverError TOO_MANY_ROWS } select * from data_01801 where key = 0 order by key settings max_rows_to_read=10 format Null; select * from data_01801 where key = 0 order by key desc settings max_rows_to_read=10 format Null; diff --git a/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql index 4b8bf0844a3..85e778eb8d3 100644 --- a/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql +++ b/tests/queries/0_stateless/01802_rank_corr_mann_whitney_over_window.sql @@ -13,8 +13,8 @@ SETTINGS index_granularity = 8192; INSERT INTO 01802_empsalary VALUES ('sales', 1, 5000, '2006-10-01'), ('develop', 8, 6000, '2006-10-01'), ('personnel', 2, 3900, '2006-12-23'), ('develop', 10, 5200, '2007-08-01'), ('sales', 3, 4800, '2007-08-01'), ('sales', 4, 4801, '2007-08-08'), ('develop', 11, 5200, '2007-08-15'), ('personnel', 5, 3500, '2007-12-10'), ('develop', 7, 4200, '2008-01-01'), ('develop', 9, 4500, '2008-01-01'); -SELECT mannWhitneyUTest(salary, salary) OVER (ORDER BY salary ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; -- {serverError 36} +SELECT mannWhitneyUTest(salary, salary) OVER (ORDER BY salary ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; -- {serverError BAD_ARGUMENTS} -SELECT rankCorr(salary, 0.5) OVER (ORDER BY salary ASC ROWS BETWEEN 
CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; -- {serverError 36} +SELECT rankCorr(salary, 0.5) OVER (ORDER BY salary ASC ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS func FROM 01802_empsalary; -- {serverError BAD_ARGUMENTS} DROP TABLE IF EXISTS 01802_empsalary; diff --git a/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql index fcbe585b70a..d2bcdb12103 100644 --- a/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql +++ b/tests/queries/0_stateless/01804_uniq_up_to_ubsan.sql @@ -1,2 +1,2 @@ -SELECT uniqUpTo(1e100)(number) FROM numbers(5); -- { serverError 70 } -SELECT uniqUpTo(-1e100)(number) FROM numbers(5); -- { serverError 70 } +SELECT uniqUpTo(1e100)(number) FROM numbers(5); -- { serverError CANNOT_CONVERT_TYPE } +SELECT uniqUpTo(-1e100)(number) FROM numbers(5); -- { serverError CANNOT_CONVERT_TYPE } diff --git a/tests/queries/0_stateless/01817_storage_buffer_parameters.sql b/tests/queries/0_stateless/01817_storage_buffer_parameters.sql index 84727bc5d6b..b973def845d 100644 --- a/tests/queries/0_stateless/01817_storage_buffer_parameters.sql +++ b/tests/queries/0_stateless/01817_storage_buffer_parameters.sql @@ -28,7 +28,7 @@ create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, /* min_time= */ 1, /* max_time= */ 86400, /* min_rows= */ 1e9, /* max_rows= */ 1e6, /* min_bytes= */ 0 /* max_bytes= 4e6 */ -); -- { serverError 42 } +); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -- too many args create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, /* num_layers= */ 1, @@ -37,6 +37,6 @@ create table buffer_01817 (key Int) Engine=Buffer(currentDatabase(), data_01817, /* min_bytes= */ 0, /* max_bytes= */ 4e6, /* flush_time= */ 86400, /* flush_rows= */ 10, /* flush_bytes= */0, 0 -); -- { serverError 42 } +); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } drop table data_01817; diff --git a/tests/queries/0_stateless/01821_join_table_mutation.sql b/tests/queries/0_stateless/01821_join_table_mutation.sql index 78903ebd6ec..c9d82d07f8c 100644 --- a/tests/queries/0_stateless/01821_join_table_mutation.sql +++ b/tests/queries/0_stateless/01821_join_table_mutation.sql @@ -20,7 +20,7 @@ SELECT name FROM join_table_mutation WHERE id = 10; ALTER TABLE join_table_mutation DELETE WHERE id % 2 = 0; -ALTER TABLE join_table_mutation UPDATE name = 'some' WHERE 1; -- {serverError 48} +ALTER TABLE join_table_mutation UPDATE name = 'some' WHERE 1; -- {serverError NOT_IMPLEMENTED} SELECT count() FROM join_table_mutation; diff --git a/tests/queries/0_stateless/01825_type_json_1.sql b/tests/queries/0_stateless/01825_type_json_1.sql index e74faf2d4c7..6876349677e 100644 --- a/tests/queries/0_stateless/01825_type_json_1.sql +++ b/tests/queries/0_stateless/01825_type_json_1.sql @@ -82,4 +82,4 @@ ORDER BY name; DROP TABLE IF EXISTS t_json; -CREATE TABLE t_json(id UInt64, data Object('JSON')) ENGINE = Log; -- { serverError 44 } +CREATE TABLE t_json(id UInt64, data Object('JSON')) ENGINE = Log; -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/01825_type_json_from_map.sql b/tests/queries/0_stateless/01825_type_json_from_map.sql index 51e60843a1a..7cad50b363b 100644 --- a/tests/queries/0_stateless/01825_type_json_from_map.sql +++ b/tests/queries/0_stateless/01825_type_json_from_map.sql @@ -36,7 +36,7 @@ SELECT sum(obj.col1), sum(obj.col4), sum(obj.col7), sum(obj.col8 = 0) FROM t_jso INSERT INTO t_json SELECT number, (range(number % 10), range(number 
% 10))::Map(UInt64, UInt64) -FROM numbers(1000000); -- { serverError 53 } +FROM numbers(1000000); -- { serverError TYPE_MISMATCH } DROP TABLE IF EXISTS t_json; DROP TABLE IF EXISTS t_map; diff --git a/tests/queries/0_stateless/01847_bad_like.sql b/tests/queries/0_stateless/01847_bad_like.sql index 8eb6fd3941f..79f7cb58a9c 100644 --- a/tests/queries/0_stateless/01847_bad_like.sql +++ b/tests/queries/0_stateless/01847_bad_like.sql @@ -22,7 +22,7 @@ SELECT '\\' LIKE '%\\\\%'; SELECT '\\' LIKE '\\\\%'; SELECT '\\' LIKE '%\\\\'; SELECT '\\' LIKE '\\\\'; -SELECT '\\' LIKE '\\'; -- { serverError 25 } +SELECT '\\' LIKE '\\'; -- { serverError CANNOT_PARSE_ESCAPE_SEQUENCE } SELECT '\\xyz\\' LIKE '\\\\%\\\\'; SELECT '\\xyz\\' LIKE '\\\\___\\\\'; diff --git a/tests/queries/0_stateless/01849_geoToS2.sql b/tests/queries/0_stateless/01849_geoToS2.sql index e997fec14e5..8e268753b3d 100644 --- a/tests/queries/0_stateless/01849_geoToS2.sql +++ b/tests/queries/0_stateless/01849_geoToS2.sql @@ -42,11 +42,11 @@ SELECT first, second, result FROM ( ORDER BY s2_index ); -SELECT s2ToGeo(toUInt64(-1)); -- { serverError 36 } -SELECT s2ToGeo(nan); -- { serverError 43 } +SELECT s2ToGeo(toUInt64(-1)); -- { serverError BAD_ARGUMENTS } +SELECT s2ToGeo(nan); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT geoToS2(toFloat64(toUInt64(-1)), toFloat64(toUInt64(-1))); -- { serverError BAD_ARGUMENTS } -SELECT geoToS2(nan, nan); -- { serverError 43 } -SELECT geoToS2(-inf, 1.1754943508222875e-38); -- { serverError 43 } +SELECT geoToS2(nan, nan); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT geoToS2(-inf, 1.1754943508222875e-38); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.sql b/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.sql index 91f68314ca6..3fc0f8a3ee6 100644 --- a/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.sql +++ b/tests/queries/0_stateless/01850_dist_INSERT_preserve_error.sql @@ -11,7 +11,7 @@ create table dist_01850 (key Int) engine=Distributed('test_cluster_two_replicas_ set distributed_foreground_insert=1; set prefer_localhost_replica=0; -insert into dist_01850 values (1); -- { serverError 60 } +insert into dist_01850 values (1); -- { serverError UNKNOWN_TABLE } drop table if exists dist_01850; drop table shard_0.data_01850; diff --git a/tests/queries/0_stateless/01851_array_difference_decimal_overflow_ubsan.sql b/tests/queries/0_stateless/01851_array_difference_decimal_overflow_ubsan.sql index ebf2efda4f1..4e7b7301e00 100644 --- a/tests/queries/0_stateless/01851_array_difference_decimal_overflow_ubsan.sql +++ b/tests/queries/0_stateless/01851_array_difference_decimal_overflow_ubsan.sql @@ -1 +1 @@ -SELECT arrayDifference([toDecimal32(100.0000991821289, 0), -2147483647]) AS x; --{serverError 407} +SELECT arrayDifference([toDecimal32(100.0000991821289, 0), -2147483647]) AS x; --{serverError DECIMAL_OVERFLOW} diff --git a/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql b/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql index a0239ff482c..da053c68f0e 100644 --- a/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql +++ b/tests/queries/0_stateless/01851_clear_column_referenced_by_mv.sql @@ -18,10 +18,10 @@ SELECT FROM `01851_merge_tree`; ALTER TABLE `01851_merge_tree` - DROP COLUMN n3; -- { serverError 524 } + DROP COLUMN n3; -- { serverError ALTER_OF_COLUMN_IS_FORBIDDEN } ALTER TABLE `01851_merge_tree` - DROP COLUMN n2; -- { serverError 524 } + DROP COLUMN n2; -- { 
serverError ALTER_OF_COLUMN_IS_FORBIDDEN } -- ok ALTER TABLE `01851_merge_tree` diff --git a/tests/queries/0_stateless/01852_map_combinator.sql b/tests/queries/0_stateless/01852_map_combinator.sql index a23a507bc27..546b040db53 100644 --- a/tests/queries/0_stateless/01852_map_combinator.sql +++ b/tests/queries/0_stateless/01852_map_combinator.sql @@ -34,16 +34,16 @@ select minMap(val) from values ('val Map(Int256, Int256)', (map(1, 1)), (map(1, select minMap(val) from values ('val Map(UInt128, UInt128)', (map(1, 1)), (map(1, 2))); select minMap(val) from values ('val Map(UInt256, UInt256)', (map(1, 1)), (map(1, 2))); -select sumMap(map(1,2), 1, 2); -- { serverError 42 } -select sumMap(map(1,2), map(1,3)); -- { serverError 42 } +select sumMap(map(1,2), 1, 2); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select sumMap(map(1,2), map(1,3)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -- array and tuple arguments -select avgMap([1,1,1], [2,2,2]); -- { serverError 43 } -select minMap((1,1)); -- { serverError 43 } -select minMap(([1,1,1],1)); -- { serverError 43 } -select minMap([1,1,1],1); -- { serverError 43 } -select minMap([1,1,1]); -- { serverError 43 } -select minMap(([1,1,1])); -- { serverError 43 } +select avgMap([1,1,1], [2,2,2]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select minMap((1,1)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select minMap(([1,1,1],1)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select minMap([1,1,1],1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select minMap([1,1,1]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select minMap(([1,1,1])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } DROP TABLE IF EXISTS sum_map_decimal; diff --git a/tests/queries/0_stateless/01853_s2_cells_intersect.sql b/tests/queries/0_stateless/01853_s2_cells_intersect.sql index c426a86f631..5ab7e7aa953 100644 --- a/tests/queries/0_stateless/01853_s2_cells_intersect.sql +++ b/tests/queries/0_stateless/01853_s2_cells_intersect.sql @@ -5,4 +5,4 @@ select s2CellsIntersect(9926595209846587392, 9926594385212866560); select s2CellsIntersect(9926595209846587392, 9937259648002293760); -SELECT s2CellsIntersect(9926595209846587392, 9223372036854775806); -- { serverError 36 } +SELECT s2CellsIntersect(9926595209846587392, 9223372036854775806); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01854_dictionary_range_hashed_min_max_attr.sql b/tests/queries/0_stateless/01854_dictionary_range_hashed_min_max_attr.sql index 34ce7ea04b5..0029971f050 100644 --- a/tests/queries/0_stateless/01854_dictionary_range_hashed_min_max_attr.sql +++ b/tests/queries/0_stateless/01854_dictionary_range_hashed_min_max_attr.sql @@ -10,4 +10,4 @@ PRIMARY KEY id SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() TABLE 'does_not_exists')) LIFETIME(MIN 0 MAX 1000) LAYOUT(RANGE_HASHED()) -RANGE(MIN first MAX last) -- { serverError 489 } +RANGE(MIN first MAX last) -- { serverError INCORRECT_DICTIONARY_DEFINITION } diff --git a/tests/queries/0_stateless/01854_s2_cap_contains.sql b/tests/queries/0_stateless/01854_s2_cap_contains.sql index 4ee4158fdbb..43a9d8705fb 100644 --- a/tests/queries/0_stateless/01854_s2_cap_contains.sql +++ b/tests/queries/0_stateless/01854_s2_cap_contains.sql @@ -5,10 +5,10 @@ select s2CapContains(1157339245694594829, 1.0, 1157347770437378819); select s2CapContains(1157339245694594829, 1.0, 1152921504606846977); select s2CapContains(1157339245694594829, 3.14, 1157339245694594829); -select s2CapContains(nan, 3.14, 1157339245694594829); -- { serverError 43 } -select 
s2CapContains(1157339245694594829, nan, 1157339245694594829); -- { serverError 43 } -select s2CapContains(1157339245694594829, 3.14, nan); -- { serverError 43 } +select s2CapContains(nan, 3.14, 1157339245694594829); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select s2CapContains(1157339245694594829, nan, 1157339245694594829); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select s2CapContains(1157339245694594829, 3.14, nan); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -select s2CapContains(toUInt64(-1), -1.0, toUInt64(-1)); -- { serverError 36 } -select s2CapContains(toUInt64(-1), 9999.9999, toUInt64(-1)); -- { serverError 36 } +select s2CapContains(toUInt64(-1), -1.0, toUInt64(-1)); -- { serverError BAD_ARGUMENTS } +select s2CapContains(toUInt64(-1), 9999.9999, toUInt64(-1)); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01854_s2_cap_union.sql b/tests/queries/0_stateless/01854_s2_cap_union.sql index 9f8510fb833..8f8e2090192 100644 --- a/tests/queries/0_stateless/01854_s2_cap_union.sql +++ b/tests/queries/0_stateless/01854_s2_cap_union.sql @@ -6,7 +6,7 @@ select s2CapUnion(1157339245694594829, -1.0, 1152921504606846977, -1.0); select s2CapUnion(1157339245694594829, toFloat64(toUInt64(-1)), 1157339245694594829, toFloat64(toUInt64(-1))); -select s2CapUnion(nan, 3.14, 1157339245694594829, 3.14); -- { serverError 43 } -select s2CapUnion(1157339245694594829, nan, 1157339245694594829, 3.14); -- { serverError 43 } -select s2CapUnion(1157339245694594829, 3.14, nan, 3.14); -- { serverError 43 } -select s2CapUnion(1157339245694594829, 3.14, 1157339245694594829, nan); -- { serverError 43 } +select s2CapUnion(nan, 3.14, 1157339245694594829, 3.14); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select s2CapUnion(1157339245694594829, nan, 1157339245694594829, 3.14); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select s2CapUnion(1157339245694594829, 3.14, nan, 3.14); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select s2CapUnion(1157339245694594829, 3.14, 1157339245694594829, nan); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01856_create_function.sql b/tests/queries/0_stateless/01856_create_function.sql index cdc4baad1af..8efb346d534 100644 --- a/tests/queries/0_stateless/01856_create_function.sql +++ b/tests/queries/0_stateless/01856_create_function.sql @@ -4,11 +4,11 @@ CREATE FUNCTION 01856_test_function_0 AS (a, b, c) -> a * b * c; SELECT 01856_test_function_0(2, 3, 4); SELECT isConstant(01856_test_function_0(1, 2, 3)); DROP FUNCTION 01856_test_function_0; -CREATE FUNCTION 01856_test_function_1 AS (a, b) -> 01856_test_function_1(a, b) + 01856_test_function_1(a, b); --{serverError 611} -CREATE FUNCTION cast AS a -> a + 1; --{serverError 609} -CREATE FUNCTION sum AS (a, b) -> a + b; --{serverError 609} +CREATE FUNCTION 01856_test_function_1 AS (a, b) -> 01856_test_function_1(a, b) + 01856_test_function_1(a, b); --{serverError CANNOT_CREATE_RECURSIVE_FUNCTION} +CREATE FUNCTION cast AS a -> a + 1; --{serverError FUNCTION_ALREADY_EXISTS} +CREATE FUNCTION sum AS (a, b) -> a + b; --{serverError FUNCTION_ALREADY_EXISTS} CREATE FUNCTION 01856_test_function_2 AS (a, b) -> a + b; -CREATE FUNCTION 01856_test_function_2 AS (a) -> a || '!!!'; --{serverError 609} +CREATE FUNCTION 01856_test_function_2 AS (a) -> a || '!!!'; --{serverError FUNCTION_ALREADY_EXISTS} DROP FUNCTION 01856_test_function_2; -DROP FUNCTION unknown_function; -- {serverError 46} -DROP FUNCTION CAST; -- {serverError 610} +DROP FUNCTION unknown_function; -- {serverError 
UNKNOWN_FUNCTION} +DROP FUNCTION CAST; -- {serverError CANNOT_DROP_FUNCTION} diff --git a/tests/queries/0_stateless/01866_view_persist_settings.sql b/tests/queries/0_stateless/01866_view_persist_settings.sql index c58b802494d..1c300b8e220 100644 --- a/tests/queries/0_stateless/01866_view_persist_settings.sql +++ b/tests/queries/0_stateless/01866_view_persist_settings.sql @@ -34,7 +34,7 @@ SET join_use_nulls = 1; SELECT 'join_use_nulls = 1'; SELECT '-'; -SELECT * FROM view_no_nulls; -- { serverError 80 } +SELECT * FROM view_no_nulls; -- { serverError INCORRECT_QUERY } SELECT '-'; SELECT * FROM view_no_nulls_set; SELECT '-'; @@ -70,7 +70,7 @@ SET join_use_nulls = 1; SELECT 'join_use_nulls = 1'; SELECT '-'; -SELECT * FROM view_no_nulls; -- { serverError 80 } +SELECT * FROM view_no_nulls; -- { serverError INCORRECT_QUERY } SELECT '-'; SELECT * FROM view_no_nulls_set; SELECT '-'; diff --git a/tests/queries/0_stateless/01880_materialized_view_to_table_type_check.sql b/tests/queries/0_stateless/01880_materialized_view_to_table_type_check.sql index 2da9884ba8e..768fda9cd18 100644 --- a/tests/queries/0_stateless/01880_materialized_view_to_table_type_check.sql +++ b/tests/queries/0_stateless/01880_materialized_view_to_table_type_check.sql @@ -6,7 +6,7 @@ CREATE TABLE test_input(id Int32) ENGINE=MergeTree() order by id; CREATE TABLE test(`id` Int32, `pv` AggregateFunction(sum, Int32)) ENGINE = AggregatingMergeTree() ORDER BY id; -CREATE MATERIALIZED VIEW test_mv to test(`id` Int32, `pv` AggregateFunction(sum, Int32)) as SELECT id, sumState(1) as pv from test_input group by id; -- { serverError 70 } +CREATE MATERIALIZED VIEW test_mv to test(`id` Int32, `pv` AggregateFunction(sum, Int32)) as SELECT id, sumState(1) as pv from test_input group by id; -- { serverError CANNOT_CONVERT_TYPE } INSERT INTO test_input SELECT toInt32(number % 1000) AS id FROM numbers(10); select '----------test--------:'; diff --git a/tests/queries/0_stateless/01880_remote_ipv6.sql b/tests/queries/0_stateless/01880_remote_ipv6.sql index 7f15449e556..0ec217898c2 100644 --- a/tests/queries/0_stateless/01880_remote_ipv6.sql +++ b/tests/queries/0_stateless/01880_remote_ipv6.sql @@ -3,21 +3,21 @@ SET connections_with_failover_max_tries=0; SELECT * FROM remote('[::1]', system.one) FORMAT Null; SELECT * FROM remote('[::1]:9000', system.one) FORMAT Null; -SELECT * FROM remote('[::1', system.one) FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('::1]', system.one) FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('::1', system.one) FORMAT Null; -- { serverError 36 } +SELECT * FROM remote('[::1', system.one) FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('::1]', system.one) FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('::1', system.one) FORMAT Null; -- { serverError BAD_ARGUMENTS } -SELECT * FROM remote('[::1][::1]', system.one) FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('[::1][::1', system.one) FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('[::1]::1]', system.one) FORMAT Null; -- { serverError 36 } +SELECT * FROM remote('[::1][::1]', system.one) FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('[::1][::1', system.one) FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('[::1]::1]', system.one) FORMAT Null; -- { serverError BAD_ARGUMENTS } SELECT * FROM remote('[::1]') FORMAT Null; SELECT * FROM remote('[::1]:9000') FORMAT Null; -SELECT * FROM remote('[::1') FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('::1]') 
FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('::1') FORMAT Null; -- { serverError 36 } +SELECT * FROM remote('[::1') FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('::1]') FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('::1') FORMAT Null; -- { serverError BAD_ARGUMENTS } -SELECT * FROM remote('[::1][::1]') FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('[::1][::1') FORMAT Null; -- { serverError 36 } -SELECT * FROM remote('[::1]::1]') FORMAT Null; -- { serverError 36 } +SELECT * FROM remote('[::1][::1]') FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('[::1][::1') FORMAT Null; -- { serverError BAD_ARGUMENTS } +SELECT * FROM remote('[::1]::1]') FORMAT Null; -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01882_scalar_subquery_exception.sql b/tests/queries/0_stateless/01882_scalar_subquery_exception.sql index 0fb50846502..b9f6f70f953 100644 --- a/tests/queries/0_stateless/01882_scalar_subquery_exception.sql +++ b/tests/queries/0_stateless/01882_scalar_subquery_exception.sql @@ -13,7 +13,7 @@ select count() / (select count() from nums_in_mem_dist where rand() > 0) -from system.one; -- { serverError 158 } +from system.one; -- { serverError TOO_MANY_ROWS } drop table nums_in_mem; drop table nums_in_mem_dist; diff --git a/tests/queries/0_stateless/01883_subcolumns_distributed.sql b/tests/queries/0_stateless/01883_subcolumns_distributed.sql index 7aedc7c8eab..05bab51018f 100644 --- a/tests/queries/0_stateless/01883_subcolumns_distributed.sql +++ b/tests/queries/0_stateless/01883_subcolumns_distributed.sql @@ -17,7 +17,7 @@ DROP TABLE t_subcolumns_local; -- StripeLog doesn't support subcolumns. CREATE TABLE t_subcolumns_local (arr Array(UInt32), n Nullable(String), t Tuple(s1 String, s2 String)) ENGINE = StripeLog; -SELECT arr.size0, n.null, t.s1, t.s2 FROM t_subcolumns_dist; -- { serverError 47 } +SELECT arr.size0, n.null, t.s1, t.s2 FROM t_subcolumns_dist; -- { serverError UNKNOWN_IDENTIFIER } DROP TABLE t_subcolumns_local; DROP TABLE t_subcolumns_dist; diff --git a/tests/queries/0_stateless/01888_bloom_filter_hasAny.sql b/tests/queries/0_stateless/01888_bloom_filter_hasAny.sql index ea2c81f2b37..aef32791fd4 100644 --- a/tests/queries/0_stateless/01888_bloom_filter_hasAny.sql +++ b/tests/queries/0_stateless/01888_bloom_filter_hasAny.sql @@ -15,7 +15,7 @@ SELECT count() FROM bftest WHERE hasAny(x, materialize([1,2,3])) FORMAT Null; -- verify the expression in WHERE works on non-index col the same way as on index cols SELECT count() FROM bftest WHERE hasAny(y, [NULL,-42]) FORMAT Null; SELECT count() FROM bftest WHERE hasAny(y, [0,NULL]) FORMAT Null; -SELECT count() FROM bftest WHERE hasAny(y, [[123], -42]) FORMAT Null; -- { serverError 386 } +SELECT count() FROM bftest WHERE hasAny(y, [[123], -42]) FORMAT Null; -- { serverError NO_COMMON_TYPE } SELECT count() FROM bftest WHERE hasAny(y, [toDecimal32(123, 3), 2]) FORMAT Null; -- different, doesn't fail SET force_data_skipping_indices='ix1'; @@ -25,15 +25,15 @@ SELECT count() FROM bftest WHERE hasAny(x, []) FORMAT Null; SELECT count() FROM bftest WHERE hasAny(x, [1]) FORMAT Null; -- can't use bloom_filter with `hasAny` on non-constant arguments (just like `has`) -SELECT count() FROM bftest WHERE hasAny(x, materialize([1,2,3])) FORMAT Null; -- { serverError 277 } +SELECT count() FROM bftest WHERE hasAny(x, materialize([1,2,3])) FORMAT Null; -- { serverError INDEX_NOT_USED } -- NULLs are not Ok -SELECT count() FROM bftest WHERE hasAny(x, 
[NULL,-42]) FORMAT Null; -- { serverError 277 } -SELECT count() FROM bftest WHERE hasAny(x, [0,NULL]) FORMAT Null; -- { serverError 277 } +SELECT count() FROM bftest WHERE hasAny(x, [NULL,-42]) FORMAT Null; -- { serverError INDEX_NOT_USED } +SELECT count() FROM bftest WHERE hasAny(x, [0,NULL]) FORMAT Null; -- { serverError INDEX_NOT_USED } -- non-compatible types -SELECT count() FROM bftest WHERE hasAny(x, [[123], -42]) FORMAT Null; -- { serverError 386 } -SELECT count() FROM bftest WHERE hasAny(x, [toDecimal32(123, 3), 2]) FORMAT Null; -- { serverError 277 } +SELECT count() FROM bftest WHERE hasAny(x, [[123], -42]) FORMAT Null; -- { serverError NO_COMMON_TYPE } +SELECT count() FROM bftest WHERE hasAny(x, [toDecimal32(123, 3), 2]) FORMAT Null; -- { serverError INDEX_NOT_USED } -- Bug discovered by AST fuzzer (fixed, shouldn't crash). SELECT 1 FROM bftest WHERE has(x, -0.) OR 0. FORMAT Null; diff --git a/tests/queries/0_stateless/01888_read_int_safe.sql b/tests/queries/0_stateless/01888_read_int_safe.sql index 197338775c4..e70db497f2b 100644 --- a/tests/queries/0_stateless/01888_read_int_safe.sql +++ b/tests/queries/0_stateless/01888_read_int_safe.sql @@ -1,10 +1,10 @@ -select toInt64('--1'); -- { serverError 72 } -select toInt64('+-1'); -- { serverError 72 } -select toInt64('++1'); -- { serverError 72 } -select toInt64('++'); -- { serverError 72 } -select toInt64('+'); -- { serverError 72 } -select toInt64('1+1'); -- { serverError 6 } -select toInt64('1-1'); -- { serverError 6 } -select toInt64(''); -- { serverError 32 } +select toInt64('--1'); -- { serverError CANNOT_PARSE_NUMBER } +select toInt64('+-1'); -- { serverError CANNOT_PARSE_NUMBER } +select toInt64('++1'); -- { serverError CANNOT_PARSE_NUMBER } +select toInt64('++'); -- { serverError CANNOT_PARSE_NUMBER } +select toInt64('+'); -- { serverError CANNOT_PARSE_NUMBER } +select toInt64('1+1'); -- { serverError CANNOT_PARSE_TEXT } +select toInt64('1-1'); -- { serverError CANNOT_PARSE_TEXT } +select toInt64(''); -- { serverError ATTEMPT_TO_READ_AFTER_EOF } select toInt64('1'); select toInt64('-1'); diff --git a/tests/queries/0_stateless/01889_sql_json_functions.reference b/tests/queries/0_stateless/01889_sql_json_functions.reference index 244860571cf..125e8dfd7b7 100644 --- a/tests/queries/0_stateless/01889_sql_json_functions.reference +++ b/tests/queries/0_stateless/01889_sql_json_functions.reference @@ -47,10 +47,10 @@ SELECT JSON_VALUE('{"hello":1}', '$[\'hello\']'); 1 SELECT JSON_VALUE('{"hello 1":1}', '$["hello 1"]'); 1 -SELECT JSON_VALUE('{"1key":1}', '$..1key'); -- { serverError 36 } -SELECT JSON_VALUE('{"1key":1}', '$1key'); -- { serverError 36 } -SELECT JSON_VALUE('{"1key":1}', '$key'); -- { serverError 36 } -SELECT JSON_VALUE('{"1key":1}', '$.[key]'); -- { serverError 36 } +SELECT JSON_VALUE('{"1key":1}', '$..1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_VALUE('{"1key":1}', '$1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_VALUE('{"1key":1}', '$key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_VALUE('{"1key":1}', '$.[key]'); -- { serverError BAD_ARGUMENTS } SELECT '--JSON_QUERY--'; --JSON_QUERY-- SELECT JSON_QUERY('{"hello":1}', '$'); @@ -105,10 +105,10 @@ SELECT JSON_QUERY('{"hello":1}', '$[\'hello\']'); [1] SELECT JSON_QUERY('{"hello 1":1}', '$["hello 1"]'); [1] -SELECT JSON_QUERY('{"1key":1}', '$..1key'); -- { serverError 36 } -SELECT JSON_QUERY('{"1key":1}', '$1key'); -- { serverError 36 } -SELECT JSON_QUERY('{"1key":1}', '$key'); -- { serverError 36 } -SELECT JSON_QUERY('{"1key":1}', '$.[key]'); -- { 
serverError 36 } +SELECT JSON_QUERY('{"1key":1}', '$..1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_QUERY('{"1key":1}', '$1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_QUERY('{"1key":1}', '$key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_QUERY('{"1key":1}', '$.[key]'); -- { serverError BAD_ARGUMENTS } SELECT '--JSON_EXISTS--'; --JSON_EXISTS-- SELECT JSON_EXISTS('{"hello":1}', '$'); diff --git a/tests/queries/0_stateless/01889_sql_json_functions.sql b/tests/queries/0_stateless/01889_sql_json_functions.sql index 4cba985c2df..6e0852029ba 100644 --- a/tests/queries/0_stateless/01889_sql_json_functions.sql +++ b/tests/queries/0_stateless/01889_sql_json_functions.sql @@ -25,10 +25,10 @@ SELECT JSON_VALUE('{"hello":1}', '$[hello]'); SELECT JSON_VALUE('{"hello":1}', '$["hello"]'); SELECT JSON_VALUE('{"hello":1}', '$[\'hello\']'); SELECT JSON_VALUE('{"hello 1":1}', '$["hello 1"]'); -SELECT JSON_VALUE('{"1key":1}', '$..1key'); -- { serverError 36 } -SELECT JSON_VALUE('{"1key":1}', '$1key'); -- { serverError 36 } -SELECT JSON_VALUE('{"1key":1}', '$key'); -- { serverError 36 } -SELECT JSON_VALUE('{"1key":1}', '$.[key]'); -- { serverError 36 } +SELECT JSON_VALUE('{"1key":1}', '$..1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_VALUE('{"1key":1}', '$1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_VALUE('{"1key":1}', '$key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_VALUE('{"1key":1}', '$.[key]'); -- { serverError BAD_ARGUMENTS } SELECT '--JSON_QUERY--'; SELECT JSON_QUERY('{"hello":1}', '$'); @@ -57,10 +57,10 @@ SELECT JSON_QUERY('{"hello":1}', '$[hello]'); SELECT JSON_QUERY('{"hello":1}', '$["hello"]'); SELECT JSON_QUERY('{"hello":1}', '$[\'hello\']'); SELECT JSON_QUERY('{"hello 1":1}', '$["hello 1"]'); -SELECT JSON_QUERY('{"1key":1}', '$..1key'); -- { serverError 36 } -SELECT JSON_QUERY('{"1key":1}', '$1key'); -- { serverError 36 } -SELECT JSON_QUERY('{"1key":1}', '$key'); -- { serverError 36 } -SELECT JSON_QUERY('{"1key":1}', '$.[key]'); -- { serverError 36 } +SELECT JSON_QUERY('{"1key":1}', '$..1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_QUERY('{"1key":1}', '$1key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_QUERY('{"1key":1}', '$key'); -- { serverError BAD_ARGUMENTS } +SELECT JSON_QUERY('{"1key":1}', '$.[key]'); -- { serverError BAD_ARGUMENTS } SELECT '--JSON_EXISTS--'; SELECT JSON_EXISTS('{"hello":1}', '$'); diff --git a/tests/queries/0_stateless/01890_state_of_state.sql b/tests/queries/0_stateless/01890_state_of_state.sql index 7391228f4e8..bec3ddad370 100644 --- a/tests/queries/0_stateless/01890_state_of_state.sql +++ b/tests/queries/0_stateless/01890_state_of_state.sql @@ -7,16 +7,16 @@ SELECT toTypeName(initializeAggregation('uniqExact', 0)); SELECT toTypeName(initializeAggregation('uniqExactState', 0)); SELECT toTypeName(initializeAggregation('uniqExactState', initializeAggregation('quantileState', 0))); SELECT hex(toString(initializeAggregation('quantileState', 0))); -SELECT toTypeName(initializeAggregation('sumState', initializeAggregation('quantileState', 0))); -- { serverError 43 } +SELECT toTypeName(initializeAggregation('sumState', initializeAggregation('quantileState', 0))); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT toTypeName(initializeAggregation('anyState', initializeAggregation('quantileState', 0))); SELECT toTypeName(initializeAggregation('anyState', initializeAggregation('uniqState', 0))); SELECT hex(toString(initializeAggregation('uniqState', initializeAggregation('uniqState', 0)))); SELECT 
hex(toString(initializeAggregation('uniqState', initializeAggregation('quantileState', 0)))); SELECT hex(toString(initializeAggregation('anyLastState', initializeAggregation('uniqState', 0)))); SELECT hex(toString(initializeAggregation('anyState', initializeAggregation('uniqState', 0)))); -SELECT hex(toString(initializeAggregation('maxState', initializeAggregation('uniqState', 0)))); -- { serverError 43 } +SELECT hex(toString(initializeAggregation('maxState', initializeAggregation('uniqState', 0)))); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT hex(toString(initializeAggregation('uniqExactState', initializeAggregation('uniqState', 0)))); SELECT finalizeAggregation(initializeAggregation('uniqExactState', initializeAggregation('uniqState', 0))); -SELECT toTypeName(quantileState(x)) FROM (SELECT uniqState(number) AS x FROM numbers(1000)); -- { serverError 43 } -SELECT hex(toString(quantileState(x))) FROM (SELECT uniqState(number) AS x FROM numbers(1000)); -- { serverError 43 } +SELECT toTypeName(quantileState(x)) FROM (SELECT uniqState(number) AS x FROM numbers(1000)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT hex(toString(quantileState(x))) FROM (SELECT uniqState(number) AS x FROM numbers(1000)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT hex(toString(anyState(x))), hex(toString(any(x))) FROM (SELECT uniqState(number) AS x FROM numbers(1000)) FORMAT Vertical; diff --git a/tests/queries/0_stateless/01902_self_aliases_in_columns.sql b/tests/queries/0_stateless/01902_self_aliases_in_columns.sql index b03d7c15f62..1b2745af1b4 100644 --- a/tests/queries/0_stateless/01902_self_aliases_in_columns.sql +++ b/tests/queries/0_stateless/01902_self_aliases_in_columns.sql @@ -4,11 +4,11 @@ CREATE TABLE a `x` MATERIALIZED x ) ENGINE = MergeTree -ORDER BY number; --{ serverError 174} +ORDER BY number; --{ serverError CYCLIC_ALIASES} CREATE TABLE foo ( i Int32, j ALIAS j + 1 ) -ENGINE = MergeTree() ORDER BY i; --{ serverError 174} +ENGINE = MergeTree() ORDER BY i; --{ serverError CYCLIC_ALIASES} diff --git a/tests/queries/0_stateless/01904_dictionary_default_nullable_type.sql b/tests/queries/0_stateless/01904_dictionary_default_nullable_type.sql index e6831c92c9f..4c623941a19 100644 --- a/tests/queries/0_stateless/01904_dictionary_default_nullable_type.sql +++ b/tests/queries/0_stateless/01904_dictionary_default_nullable_type.sql @@ -109,7 +109,7 @@ LAYOUT(IP_TRIE()); -- Nullable type is not supported by IPTrie dictionary SELECT 'IPTrie dictionary'; -SELECT dictGet('ip_trie_dictionary', 'value', tuple(IPv4StringToNum('127.0.0.0'))); --{serverError 1} +SELECT dictGet('ip_trie_dictionary', 'value', tuple(IPv4StringToNum('127.0.0.0'))); --{serverError UNSUPPORTED_METHOD} DROP TABLE dictionary_nullable_source_table; DROP TABLE dictionary_nullable_default_source_table; diff --git a/tests/queries/0_stateless/01906_bigint_accurate_cast_ubsan.sql b/tests/queries/0_stateless/01906_bigint_accurate_cast_ubsan.sql index 4b9fa9662a9..c038b3b563e 100644 --- a/tests/queries/0_stateless/01906_bigint_accurate_cast_ubsan.sql +++ b/tests/queries/0_stateless/01906_bigint_accurate_cast_ubsan.sql @@ -1,15 +1,15 @@ -SELECT accurateCast(1e35, 'UInt32'); -- { serverError 70 } -SELECT accurateCast(1e35, 'UInt64'); -- { serverError 70 } -SELECT accurateCast(1e35, 'UInt128'); -- { serverError 70 } -SELECT accurateCast(1e35, 'UInt256'); -- { serverError 70 } +SELECT accurateCast(1e35, 'UInt32'); -- { serverError CANNOT_CONVERT_TYPE } +SELECT accurateCast(1e35, 'UInt64'); -- { serverError CANNOT_CONVERT_TYPE } 
+SELECT accurateCast(1e35, 'UInt128'); -- { serverError CANNOT_CONVERT_TYPE } +SELECT accurateCast(1e35, 'UInt256'); -- { serverError CANNOT_CONVERT_TYPE } SELECT accurateCast(1e19, 'UInt64'); SELECT accurateCast(1e19, 'UInt128'); SELECT accurateCast(1e19, 'UInt256'); -SELECT accurateCast(1e20, 'UInt64'); -- { serverError 70 } -SELECT accurateCast(1e20, 'UInt128'); -- { serverError 70 } -SELECT accurateCast(1e20, 'UInt256'); -- { serverError 70 } +SELECT accurateCast(1e20, 'UInt64'); -- { serverError CANNOT_CONVERT_TYPE } +SELECT accurateCast(1e20, 'UInt128'); -- { serverError CANNOT_CONVERT_TYPE } +SELECT accurateCast(1e20, 'UInt256'); -- { serverError CANNOT_CONVERT_TYPE } -SELECT accurateCast(1e19, 'Int64'); -- { serverError 70 } +SELECT accurateCast(1e19, 'Int64'); -- { serverError CANNOT_CONVERT_TYPE } SELECT accurateCast(1e19, 'Int128'); SELECT accurateCast(1e19, 'Int256'); diff --git a/tests/queries/0_stateless/01910_memory_tracking_topk.sql b/tests/queries/0_stateless/01910_memory_tracking_topk.sql index ea0b4f9047e..c638d7a96e9 100644 --- a/tests/queries/0_stateless/01910_memory_tracking_topk.sql +++ b/tests/queries/0_stateless/01910_memory_tracking_topk.sql @@ -3,4 +3,4 @@ -- Memory limit must correctly apply, triggering an exception: SET max_memory_usage = '100M'; -SELECT length(topK(5592405)(tuple(number))) FROM numbers(10) GROUP BY number; -- { serverError 241 } +SELECT length(topK(5592405)(tuple(number))) FROM numbers(10) GROUP BY number; -- { serverError MEMORY_LIMIT_EXCEEDED } diff --git a/tests/queries/0_stateless/01917_distinct_on.sql b/tests/queries/0_stateless/01917_distinct_on.sql index ae528b6e838..fe202184f07 100644 --- a/tests/queries/0_stateless/01917_distinct_on.sql +++ b/tests/queries/0_stateless/01917_distinct_on.sql @@ -14,7 +14,7 @@ SELECT DISTINCT ON (a) * FROM t1; -- SELECT DISTINCT ON a a, b FROM t1; -- { clientError 62 } -- "Code: 47. 
DB::Exception: Missing columns: 'DISTINCT'" - error can be better --- SELECT DISTINCT ON (a, b) DISTINCT a, b FROM t1; -- { serverError 47 } +-- SELECT DISTINCT ON (a, b) DISTINCT a, b FROM t1; -- { serverError UNKNOWN_IDENTIFIER } -- SELECT DISTINCT DISTINCT ON (a, b) a, b FROM t1; -- { clientError 62 } -- SELECT ALL DISTINCT ON (a, b) a, b FROM t1; -- { clientError 62 } diff --git a/tests/queries/0_stateless/01917_prewhere_column_type.sql b/tests/queries/0_stateless/01917_prewhere_column_type.sql index 9ce87ab548c..7aedfe7cb1c 100644 --- a/tests/queries/0_stateless/01917_prewhere_column_type.sql +++ b/tests/queries/0_stateless/01917_prewhere_column_type.sql @@ -10,12 +10,12 @@ SELECT s FROM t1 WHERE f AND (e = 1); SELECT s FROM t1 WHERE f AND (e = 1) SETTINGS optimize_move_to_prewhere=true; SELECT s FROM t1 WHERE f AND (e = 1) SETTINGS optimize_move_to_prewhere=false; SELECT s FROM t1 PREWHERE f AND (e = 1); -SELECT s FROM t1 PREWHERE f; -- { serverError 59 } -SELECT s FROM t1 PREWHERE f WHERE (e = 1); -- { serverError 59 } -SELECT s FROM t1 PREWHERE f WHERE f AND (e = 1); -- { serverError 59 } +SELECT s FROM t1 PREWHERE f; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } +SELECT s FROM t1 PREWHERE f WHERE (e = 1); -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } +SELECT s FROM t1 PREWHERE f WHERE f AND (e = 1); -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } SELECT s FROM t1 WHERE e AND (e = 1); -SELECT s FROM t1 PREWHERE e; -- { serverError 59 } -SELECT s FROM t1 PREWHERE e WHERE (e = 1); -- { serverError 59 } -SELECT s FROM t1 PREWHERE e WHERE f AND (e = 1); -- { serverError 59 } +SELECT s FROM t1 PREWHERE e; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } +SELECT s FROM t1 PREWHERE e WHERE (e = 1); -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } +SELECT s FROM t1 PREWHERE e WHERE f AND (e = 1); -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } diff --git a/tests/queries/0_stateless/01921_datatype_date32.sql b/tests/queries/0_stateless/01921_datatype_date32.sql index 717afc483aa..490617e3e58 100644 --- a/tests/queries/0_stateless/01921_datatype_date32.sql +++ b/tests/queries/0_stateless/01921_datatype_date32.sql @@ -17,11 +17,11 @@ select toDayOfWeek(x1) from t1; select '-------toDayOfYear---------'; select toDayOfYear(x1) from t1; select '-------toHour---------'; -select toHour(x1) from t1; -- { serverError 43 } +select toHour(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toMinute---------'; -select toMinute(x1) from t1; -- { serverError 43 } +select toMinute(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toSecond---------'; -select toSecond(x1) from t1; -- { serverError 43 } +select toSecond(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toStartOfDay---------'; select toStartOfDay(x1, 'Asia/Istanbul') from t1; select '-------toMonday---------'; @@ -45,17 +45,17 @@ select toStartOfQuarter(x1) from t1; select '-------toStartOfYear---------'; select toStartOfYear(x1) from t1; select '-------toStartOfSecond---------'; -select toStartOfSecond(x1) from t1; -- { serverError 43 } +select toStartOfSecond(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toStartOfMinute---------'; -select toStartOfMinute(x1) from t1; -- { serverError 43 } +select toStartOfMinute(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toStartOfFiveMinutes---------'; -select toStartOfFiveMinutes(x1) from t1; -- { serverError 43 } +select toStartOfFiveMinutes(x1) from t1; -- 
{ serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toStartOfTenMinutes---------'; -select toStartOfTenMinutes(x1) from t1; -- { serverError 43 } +select toStartOfTenMinutes(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toStartOfFifteenMinutes---------'; -select toStartOfFifteenMinutes(x1) from t1; -- { serverError 43 } +select toStartOfFifteenMinutes(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toStartOfHour---------'; -select toStartOfHour(x1) from t1; -- { serverError 43 } +select toStartOfHour(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toStartOfISOYear---------'; select toStartOfISOYear(x1) from t1; select '-------toRelativeYearNum---------'; @@ -75,7 +75,7 @@ select toRelativeMinuteNum(x1, 'Asia/Istanbul') from t1; select '-------toRelativeSecondNum---------'; select toRelativeSecondNum(x1, 'Asia/Istanbul') from t1; select '-------toTime---------'; -select toTime(x1) from t1; -- { serverError 43 } +select toTime(x1) from t1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select '-------toYYYYMM---------'; select toYYYYMM(x1) from t1; select '-------toYYYYMMDD---------'; diff --git a/tests/queries/0_stateless/01923_ttl_with_modify_column.sql b/tests/queries/0_stateless/01923_ttl_with_modify_column.sql index 650f32fb588..732a699b254 100644 --- a/tests/queries/0_stateless/01923_ttl_with_modify_column.sql +++ b/tests/queries/0_stateless/01923_ttl_with_modify_column.sql @@ -38,6 +38,6 @@ INSERT INTO t_ttl_modify_column VALUES (now()); SELECT sum(rows), groupUniqArray(type) FROM system.parts_columns WHERE database = currentDatabase() AND table = 't_ttl_modify_column' AND column = 'InsertionDateTime' AND active; -ALTER TABLE t_ttl_modify_column MODIFY COLUMN InsertionDateTime Float32; -- { serverError 43 } +ALTER TABLE t_ttl_modify_column MODIFY COLUMN InsertionDateTime Float32; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } DROP TABLE IF EXISTS t_ttl_modify_column; diff --git a/tests/queries/0_stateless/01925_broken_partition_id_zookeeper.sql b/tests/queries/0_stateless/01925_broken_partition_id_zookeeper.sql index 9c6aa3146ee..d0fc34e4f8d 100644 --- a/tests/queries/0_stateless/01925_broken_partition_id_zookeeper.sql +++ b/tests/queries/0_stateless/01925_broken_partition_id_zookeeper.sql @@ -11,9 +11,9 @@ ENGINE = ReplicatedMergeTree('/clickhouse/test_01925_{database}/rmt', 'r1') ORDER BY tuple() PARTITION BY date; -ALTER TABLE broken_partition DROP PARTITION ID '20210325_0_13241_6_12747'; --{serverError 248} +ALTER TABLE broken_partition DROP PARTITION ID '20210325_0_13241_6_12747'; --{serverError INVALID_PARTITION_VALUE} -ALTER TABLE broken_partition DROP PARTITION ID '20210325_0_13241_6_12747'; --{serverError 248} +ALTER TABLE broken_partition DROP PARTITION ID '20210325_0_13241_6_12747'; --{serverError INVALID_PARTITION_VALUE} DROP TABLE IF EXISTS broken_partition; @@ -22,7 +22,7 @@ DROP TABLE IF EXISTS old_partition_key; set allow_deprecated_syntax_for_merge_tree=1; CREATE TABLE old_partition_key (sd Date, dh UInt64, ak UInt32, ed Date) ENGINE=MergeTree(sd, dh, (ak, ed, dh), 8192); -ALTER TABLE old_partition_key DROP PARTITION ID '20210325_0_13241_6_12747'; --{serverError 248} +ALTER TABLE old_partition_key DROP PARTITION ID '20210325_0_13241_6_12747'; --{serverError INVALID_PARTITION_VALUE} ALTER TABLE old_partition_key DROP PARTITION ID '202103'; diff --git a/tests/queries/0_stateless/01925_map_populate_series_on_map.reference b/tests/queries/0_stateless/01925_map_populate_series_on_map.reference index 
318f5ced231..dd5738331c9 100644 --- a/tests/queries/0_stateless/01925_map_populate_series_on_map.reference +++ b/tests/queries/0_stateless/01925_map_populate_series_on_map.reference @@ -67,6 +67,6 @@ select mapPopulateSeries(map(toInt64(-10), toInt64(1), 2, 1)) as res, toTypeName {-10:1,-9:0,-8:0,-7:0,-6:0,-5:0,-4:0,-3:0,-2:0,-1:0,0:0,1:0,2:1} Map(Int64, Int64) select mapPopulateSeries(map(toInt64(-10), toInt64(1), 2, 1), toInt64(-5)) as res, toTypeName(res); {-10:1,-9:0,-8:0,-7:0,-6:0,-5:0} Map(Int64, Int64) -select mapPopulateSeries(); -- { serverError 42 } -select mapPopulateSeries('asdf'); -- { serverError 43 } -select mapPopulateSeries(map('1', 1, '2', 1)) as res, toTypeName(res); -- { serverError 43 } +select mapPopulateSeries(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select mapPopulateSeries('asdf'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select mapPopulateSeries(map('1', 1, '2', 1)) as res, toTypeName(res); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01925_map_populate_series_on_map.sql b/tests/queries/0_stateless/01925_map_populate_series_on_map.sql index 635fba37cc8..f2b0dd60286 100644 --- a/tests/queries/0_stateless/01925_map_populate_series_on_map.sql +++ b/tests/queries/0_stateless/01925_map_populate_series_on_map.sql @@ -31,6 +31,6 @@ select mapPopulateSeries(map(toInt32(-10), toInt32(1), 2, 1)) as res, toTypeName select mapPopulateSeries(map(toInt64(-10), toInt64(1), 2, 1)) as res, toTypeName(res); select mapPopulateSeries(map(toInt64(-10), toInt64(1), 2, 1), toInt64(-5)) as res, toTypeName(res); -select mapPopulateSeries(); -- { serverError 42 } -select mapPopulateSeries('asdf'); -- { serverError 43 } -select mapPopulateSeries(map('1', 1, '2', 1)) as res, toTypeName(res); -- { serverError 43 } +select mapPopulateSeries(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select mapPopulateSeries('asdf'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select mapPopulateSeries(map('1', 1, '2', 1)) as res, toTypeName(res); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/01933_invalid_date.sql b/tests/queries/0_stateless/01933_invalid_date.sql index b9ea9319aea..26beea4d551 100644 --- a/tests/queries/0_stateless/01933_invalid_date.sql +++ b/tests/queries/0_stateless/01933_invalid_date.sql @@ -1,10 +1,10 @@ -SELECT toDate('07-08-2019'); -- { serverError 38 } -SELECT toDate('2019-0708'); -- { serverError 38 } -SELECT toDate('201907-08'); -- { serverError 38 } +SELECT toDate('07-08-2019'); -- { serverError CANNOT_PARSE_DATE } +SELECT toDate('2019-0708'); -- { serverError CANNOT_PARSE_DATE } +SELECT toDate('201907-08'); -- { serverError CANNOT_PARSE_DATE } SELECT toDate('2019^7^8'); CREATE TEMPORARY TABLE test (d Date); INSERT INTO test VALUES ('2018-01-01'); -SELECT * FROM test WHERE d >= '07-08-2019'; -- { serverError 38 } +SELECT * FROM test WHERE d >= '07-08-2019'; -- { serverError CANNOT_PARSE_DATE } SELECT * FROM test WHERE d >= '2019-07-08'; diff --git a/tests/queries/0_stateless/01934_constexpr_aggregate_function_parameters.sql b/tests/queries/0_stateless/01934_constexpr_aggregate_function_parameters.sql index 95d411c4cec..3146c01eed0 100644 --- a/tests/queries/0_stateless/01934_constexpr_aggregate_function_parameters.sql +++ b/tests/queries/0_stateless/01934_constexpr_aggregate_function_parameters.sql @@ -1,10 +1,10 @@ SELECT groupArray(2 + 3)(number) FROM numbers(10); SELECT groupArray('5'::UInt8)(number) FROM numbers(10); -SELECT groupArray(NULL)(number) FROM numbers(10); -- { 
serverError 36 } -SELECT groupArray(NULL + NULL)(number) FROM numbers(10); -- { serverError 36 } -SELECT groupArray([])(number) FROM numbers(10); -- { serverError 36 } -SELECT groupArray(throwIf(1))(number) FROM numbers(10); -- { serverError 36, 134 } +SELECT groupArray(NULL)(number) FROM numbers(10); -- { serverError BAD_ARGUMENTS } +SELECT groupArray(NULL + NULL)(number) FROM numbers(10); -- { serverError BAD_ARGUMENTS } +SELECT groupArray([])(number) FROM numbers(10); -- { serverError BAD_ARGUMENTS } +SELECT groupArray(throwIf(1))(number) FROM numbers(10); -- { serverError BAD_ARGUMENTS, 134 } -- Not the best error message, can be improved. -SELECT groupArray(number)(number) FROM numbers(10); -- { serverError 36, 47 } +SELECT groupArray(number)(number) FROM numbers(10); -- { serverError BAD_ARGUMENTS, 47 } diff --git a/tests/queries/0_stateless/01942_create_table_with_sample.sql b/tests/queries/0_stateless/01942_create_table_with_sample.sql index 6320edd7a31..8e919027f65 100644 --- a/tests/queries/0_stateless/01942_create_table_with_sample.sql +++ b/tests/queries/0_stateless/01942_create_table_with_sample.sql @@ -2,7 +2,7 @@ CREATE TABLE IF NOT EXISTS sample_incorrect (`x` UUID) ENGINE = MergeTree ORDER BY tuple(x) -SAMPLE BY x; -- { serverError 59 } +SAMPLE BY x; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } DROP TABLE IF EXISTS sample_correct; CREATE TABLE IF NOT EXISTS sample_correct diff --git a/tests/queries/0_stateless/01943_non_deterministic_order_key.sql b/tests/queries/0_stateless/01943_non_deterministic_order_key.sql index 200a88ec677..781ea1b1cc7 100644 --- a/tests/queries/0_stateless/01943_non_deterministic_order_key.sql +++ b/tests/queries/0_stateless/01943_non_deterministic_order_key.sql @@ -1,3 +1,3 @@ -CREATE TABLE a (number UInt64) ENGINE = MergeTree ORDER BY if(now() > toDateTime('2020-06-01 13:31:40'), toInt64(number), -number); -- { serverError 36 } -CREATE TABLE b (number UInt64) ENGINE = MergeTree ORDER BY now() > toDateTime(number); -- { serverError 36 } -CREATE TABLE c (number UInt64) ENGINE = MergeTree ORDER BY now(); -- { serverError 36 } +CREATE TABLE a (number UInt64) ENGINE = MergeTree ORDER BY if(now() > toDateTime('2020-06-01 13:31:40'), toInt64(number), -number); -- { serverError BAD_ARGUMENTS } +CREATE TABLE b (number UInt64) ENGINE = MergeTree ORDER BY now() > toDateTime(number); -- { serverError BAD_ARGUMENTS } +CREATE TABLE c (number UInt64) ENGINE = MergeTree ORDER BY now(); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/01944_range_max_elements.sql b/tests/queries/0_stateless/01944_range_max_elements.sql index c18f61e3190..d08f41e449a 100644 --- a/tests/queries/0_stateless/01944_range_max_elements.sql +++ b/tests/queries/0_stateless/01944_range_max_elements.sql @@ -1,7 +1,7 @@ SET function_range_max_elements_in_block = 10; SELECT range(number % 3) FROM numbers(10); SELECT range(number % 3) FROM numbers(11); -SELECT range(number % 3) FROM numbers(12); -- { serverError 69 } +SELECT range(number % 3) FROM numbers(12); -- { serverError ARGUMENT_OUT_OF_BOUND } SET function_range_max_elements_in_block = 12; SELECT range(number % 3) FROM numbers(12); diff --git a/tests/queries/0_stateless/01950_aliases_bad_cast.sql b/tests/queries/0_stateless/01950_aliases_bad_cast.sql index bdd2339f855..370e83b1eef 100644 --- a/tests/queries/0_stateless/01950_aliases_bad_cast.sql +++ b/tests/queries/0_stateless/01950_aliases_bad_cast.sql @@ -1,2 +1,2 @@ -SELECT 1, * FROM (SELECT NULL AS `1`); -- { serverError 352 } -SELECT '7', 'xyz', * 
FROM (SELECT NULL AS `'xyz'`); -- { serverError 352 } +SELECT 1, * FROM (SELECT NULL AS `1`); -- { serverError AMBIGUOUS_COLUMN_NAME } +SELECT '7', 'xyz', * FROM (SELECT NULL AS `'xyz'`); -- { serverError AMBIGUOUS_COLUMN_NAME } diff --git a/tests/queries/0_stateless/01961_roaring_memory_tracking.sql b/tests/queries/0_stateless/01961_roaring_memory_tracking.sql index 043febbcf55..485c8192f69 100644 --- a/tests/queries/0_stateless/01961_roaring_memory_tracking.sql +++ b/tests/queries/0_stateless/01961_roaring_memory_tracking.sql @@ -3,4 +3,4 @@ SET max_bytes_before_external_group_by = 0; SET max_memory_usage = '100M'; -SELECT cityHash64(rand() % 1000) as n, groupBitmapState(number) FROM numbers_mt(200000000) GROUP BY n FORMAT Null; -- { serverError 241 } +SELECT cityHash64(rand() % 1000) as n, groupBitmapState(number) FROM numbers_mt(200000000) GROUP BY n FORMAT Null; -- { serverError MEMORY_LIMIT_EXCEEDED } diff --git a/tests/queries/0_stateless/02000_join_on_const.sql b/tests/queries/0_stateless/02000_join_on_const.sql index a68e75443d8..2c1152e0ae6 100644 --- a/tests/queries/0_stateless/02000_join_on_const.sql +++ b/tests/queries/0_stateless/02000_join_on_const.sql @@ -15,11 +15,11 @@ SELECT 70 = 10 * sum(t1.id) + sum(t2.id) AND count() == 4 FROM t1 JOIN t2 ON toL SELECT 70 = 10 * sum(t1.id) + sum(t2.id) AND count() == 4 FROM t1 JOIN t2 ON toLowCardinality(toNullable(1)); SELECT 70 = 10 * sum(t1.id) + sum(t2.id) AND count() == 4 FROM t1 JOIN t2 ON toNullable(toLowCardinality(1)); -SELECT * FROM t1 JOIN t2 ON toUInt16(1); -- { serverError 403 } -SELECT * FROM t1 JOIN t2 ON toInt8(1); -- { serverError 403 } -SELECT * FROM t1 JOIN t2 ON 256; -- { serverError 403 } -SELECT * FROM t1 JOIN t2 ON -1; -- { serverError 403 } -SELECT * FROM t1 JOIN t2 ON toString(1); -- { serverError 403 } +SELECT * FROM t1 JOIN t2 ON toUInt16(1); -- { serverError INVALID_JOIN_ON_EXPRESSION } +SELECT * FROM t1 JOIN t2 ON toInt8(1); -- { serverError INVALID_JOIN_ON_EXPRESSION } +SELECT * FROM t1 JOIN t2 ON 256; -- { serverError INVALID_JOIN_ON_EXPRESSION } +SELECT * FROM t1 JOIN t2 ON -1; -- { serverError INVALID_JOIN_ON_EXPRESSION } +SELECT * FROM t1 JOIN t2 ON toString(1); -- { serverError INVALID_JOIN_ON_EXPRESSION } SELECT '- ON NULL -'; diff --git a/tests/queries/0_stateless/02002_global_subqueries_subquery_or_table_name.sql b/tests/queries/0_stateless/02002_global_subqueries_subquery_or_table_name.sql index 8ac8dc35276..525021785c1 100644 --- a/tests/queries/0_stateless/02002_global_subqueries_subquery_or_table_name.sql +++ b/tests/queries/0_stateless/02002_global_subqueries_subquery_or_table_name.sql @@ -4,4 +4,4 @@ SELECT cityHash64(number GLOBAL IN (NULL, -2147483648, -9223372036854775808), nan, 1024, NULL, NULL, 1.000100016593933, NULL), (NULL, cityHash64(inf, -2147483648, NULL, NULL, 10.000100135803223), cityHash64(1.1754943508222875e-38, NULL, NULL, NULL), 2147483647) FROM cluster(test_cluster_two_shards_localhost, numbers((NULL, cityHash64(0., 65536, NULL, NULL, 10000000000., NULL), 0) GLOBAL IN (some_identifier), 65536)) -WHERE number GLOBAL IN [1025] --{serverError 36, 284} +WHERE number GLOBAL IN [1025] --{serverError BAD_ARGUMENTS, 284} diff --git a/tests/queries/0_stateless/02004_invalid_partition_mutation_stuck.sql b/tests/queries/0_stateless/02004_invalid_partition_mutation_stuck.sql index 71c8b9af652..07706c27cdf 100644 --- a/tests/queries/0_stateless/02004_invalid_partition_mutation_stuck.sql +++ b/tests/queries/0_stateless/02004_invalid_partition_mutation_stuck.sql @@ -12,7 +12,7 @@ PARTITION 
BY p ORDER BY t SETTINGS number_of_free_entries_in_pool_to_execute_mutation=0; INSERT INTO rep_data VALUES (1, now()); -ALTER TABLE rep_data MATERIALIZE INDEX idx IN PARTITION ID 'NO_SUCH_PART'; -- { serverError 248 } +ALTER TABLE rep_data MATERIALIZE INDEX idx IN PARTITION ID 'NO_SUCH_PART'; -- { serverError INVALID_PARTITION_VALUE } ALTER TABLE rep_data MATERIALIZE INDEX idx IN PARTITION ID '1'; ALTER TABLE rep_data MATERIALIZE INDEX idx IN PARTITION ID '2'; @@ -28,6 +28,6 @@ PARTITION BY p ORDER BY t SETTINGS number_of_free_entries_in_pool_to_execute_mutation=0; INSERT INTO data VALUES (1, now()); -ALTER TABLE data MATERIALIZE INDEX idx IN PARTITION ID 'NO_SUCH_PART'; -- { serverError 248 } +ALTER TABLE data MATERIALIZE INDEX idx IN PARTITION ID 'NO_SUCH_PART'; -- { serverError INVALID_PARTITION_VALUE } ALTER TABLE data MATERIALIZE INDEX idx IN PARTITION ID '1'; ALTER TABLE data MATERIALIZE INDEX idx IN PARTITION ID '2'; diff --git a/tests/queries/0_stateless/02004_max_hyperscan_regex_length.sql b/tests/queries/0_stateless/02004_max_hyperscan_regex_length.sql index 17d3796e88c..2133bcf888d 100644 --- a/tests/queries/0_stateless/02004_max_hyperscan_regex_length.sql +++ b/tests/queries/0_stateless/02004_max_hyperscan_regex_length.sql @@ -6,51 +6,51 @@ set max_hyperscan_regexp_total_length = 1; SELECT '- const pattern'; select multiMatchAny('123', ['1']); -select multiMatchAny('123', ['12']); -- { serverError 36 } -select multiMatchAny('123', ['1', '2']); -- { serverError 36 } +select multiMatchAny('123', ['12']); -- { serverError BAD_ARGUMENTS } +select multiMatchAny('123', ['1', '2']); -- { serverError BAD_ARGUMENTS } select multiMatchAnyIndex('123', ['1']); -select multiMatchAnyIndex('123', ['12']); -- { serverError 36 } -select multiMatchAnyIndex('123', ['1', '2']); -- { serverError 36 } +select multiMatchAnyIndex('123', ['12']); -- { serverError BAD_ARGUMENTS } +select multiMatchAnyIndex('123', ['1', '2']); -- { serverError BAD_ARGUMENTS } select multiMatchAllIndices('123', ['1']); -select multiMatchAllIndices('123', ['12']); -- { serverError 36 } -select multiMatchAllIndices('123', ['1', '2']); -- { serverError 36 } +select multiMatchAllIndices('123', ['12']); -- { serverError BAD_ARGUMENTS } +select multiMatchAllIndices('123', ['1', '2']); -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAny('123', 0, ['1']); -select multiFuzzyMatchAny('123', 0, ['12']); -- { serverError 36 } -select multiFuzzyMatchAny('123', 0, ['1', '2']); -- { serverError 36 } +select multiFuzzyMatchAny('123', 0, ['12']); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAny('123', 0, ['1', '2']); -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAnyIndex('123', 0, ['1']); -select multiFuzzyMatchAnyIndex('123', 0, ['12']); -- { serverError 36 } -select multiFuzzyMatchAnyIndex('123', 0, ['1', '2']); -- { serverError 36 } +select multiFuzzyMatchAnyIndex('123', 0, ['12']); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAnyIndex('123', 0, ['1', '2']); -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAllIndices('123', 0, ['1']); -select multiFuzzyMatchAllIndices('123', 0, ['12']); -- { serverError 36 } -select multiFuzzyMatchAllIndices('123', 0, ['1', '2']); -- { serverError 36 } +select multiFuzzyMatchAllIndices('123', 0, ['12']); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAllIndices('123', 0, ['1', '2']); -- { serverError BAD_ARGUMENTS } SELECT '- non-const pattern'; select multiMatchAny(materialize('123'), materialize(['1'])); -select multiMatchAny(materialize('123'), 
materialize(['12'])); -- { serverError 36 } -select multiMatchAny(materialize('123'), materialize(['1', '2'])); -- { serverError 36 } +select multiMatchAny(materialize('123'), materialize(['12'])); -- { serverError BAD_ARGUMENTS } +select multiMatchAny(materialize('123'), materialize(['1', '2'])); -- { serverError BAD_ARGUMENTS } select multiMatchAnyIndex(materialize('123'), materialize(['1'])); -select multiMatchAnyIndex(materialize('123'), materialize(['12'])); -- { serverError 36 } -select multiMatchAnyIndex(materialize('123'), materialize(['1', '2'])); -- { serverError 36 } +select multiMatchAnyIndex(materialize('123'), materialize(['12'])); -- { serverError BAD_ARGUMENTS } +select multiMatchAnyIndex(materialize('123'), materialize(['1', '2'])); -- { serverError BAD_ARGUMENTS } select multiMatchAllIndices(materialize('123'), materialize(['1'])); -select multiMatchAllIndices(materialize('123'), materialize(['12'])); -- { serverError 36 } -select multiMatchAllIndices(materialize('123'), materialize(['1', '2'])); -- { serverError 36 } +select multiMatchAllIndices(materialize('123'), materialize(['12'])); -- { serverError BAD_ARGUMENTS } +select multiMatchAllIndices(materialize('123'), materialize(['1', '2'])); -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAny(materialize('123'), 0, materialize(['1'])); -select multiFuzzyMatchAny(materialize('123'), 0, materialize(['12'])); -- { serverError 36 } -select multiFuzzyMatchAny(materialize('123'), 0, materialize(['1', '2'])); -- { serverError 36 } +select multiFuzzyMatchAny(materialize('123'), 0, materialize(['12'])); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAny(materialize('123'), 0, materialize(['1', '2'])); -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAnyIndex(materialize('123'), 0, materialize(['1'])); -select multiFuzzyMatchAnyIndex(materialize('123'), 0, materialize(['12'])); -- { serverError 36 } -select multiFuzzyMatchAnyIndex(materialize('123'), 0, materialize(['1', '2'])); -- { serverError 36 } +select multiFuzzyMatchAnyIndex(materialize('123'), 0, materialize(['12'])); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAnyIndex(materialize('123'), 0, materialize(['1', '2'])); -- { serverError BAD_ARGUMENTS } select multiFuzzyMatchAllIndices(materialize('123'), 0, materialize(['1'])); -select multiFuzzyMatchAllIndices(materialize('123'), 0, materialize(['12'])); -- { serverError 36 } -select multiFuzzyMatchAllIndices(materialize('123'), 0, materialize(['1', '2'])); -- { serverError 36 } +select multiFuzzyMatchAllIndices(materialize('123'), 0, materialize(['12'])); -- { serverError BAD_ARGUMENTS } +select multiFuzzyMatchAllIndices(materialize('123'), 0, materialize(['1', '2'])); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02006_test_positional_arguments.reference b/tests/queries/0_stateless/02006_test_positional_arguments.reference index 079bd071103..b3f46c12492 100644 --- a/tests/queries/0_stateless/02006_test_positional_arguments.reference +++ b/tests/queries/0_stateless/02006_test_positional_arguments.reference @@ -164,10 +164,10 @@ FROM test GROUP BY 1 + greatest(x1, 1), x2 -select max(x1), x2 from test group by 1, 2; -- { serverError 43, 184 } -select 1 + max(x1), x2 from test group by 1, 2; -- { serverError 43, 184 } -select max(x1), x2 from test group by -2, -1; -- { serverError 43, 184 } -select 1 + max(x1), x2 from test group by -2, -1; -- { serverError 43, 184 } +select max(x1), x2 from test group by 1, 2; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } 
+select 1 + max(x1), x2 from test group by 1, 2; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } +select max(x1), x2 from test group by -2, -1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } +select 1 + max(x1), x2 from test group by -2, -1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } explain syntax select x1 + x3, x3 from test group by 1, 2; SELECT x1 + x3, diff --git a/tests/queries/0_stateless/02006_test_positional_arguments.sql b/tests/queries/0_stateless/02006_test_positional_arguments.sql index 6f427e0298d..96b1aa4cebd 100644 --- a/tests/queries/0_stateless/02006_test_positional_arguments.sql +++ b/tests/queries/0_stateless/02006_test_positional_arguments.sql @@ -46,10 +46,10 @@ explain syntax select max(x1), x2 from test group by -1 order by -2, -1; explain syntax select 1 + greatest(x1, 1), x2 from test group by 1, 2; explain syntax select 1 + greatest(x1, 1), x2 from test group by -2, -1; -select max(x1), x2 from test group by 1, 2; -- { serverError 43, 184 } -select 1 + max(x1), x2 from test group by 1, 2; -- { serverError 43, 184 } -select max(x1), x2 from test group by -2, -1; -- { serverError 43, 184 } -select 1 + max(x1), x2 from test group by -2, -1; -- { serverError 43, 184 } +select max(x1), x2 from test group by 1, 2; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } +select 1 + max(x1), x2 from test group by 1, 2; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } +select max(x1), x2 from test group by -2, -1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } +select 1 + max(x1), x2 from test group by -2, -1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT, 184 } explain syntax select x1 + x3, x3 from test group by 1, 2; explain syntax select x1 + x3, x3 from test group by -2, -1; diff --git a/tests/queries/0_stateless/02008_materialize_column.sql b/tests/queries/0_stateless/02008_materialize_column.sql index cc7d3096402..aeddda2a27e 100644 --- a/tests/queries/0_stateless/02008_materialize_column.sql +++ b/tests/queries/0_stateless/02008_materialize_column.sql @@ -5,7 +5,7 @@ SET mutations_sync = 2; CREATE TABLE tmp (x Int64) ENGINE = MergeTree() ORDER BY tuple() PARTITION BY tuple(); INSERT INTO tmp SELECT * FROM system.numbers LIMIT 20; -ALTER TABLE tmp MATERIALIZE COLUMN x; -- { serverError 36 } +ALTER TABLE tmp MATERIALIZE COLUMN x; -- { serverError BAD_ARGUMENTS } ALTER TABLE tmp ADD COLUMN s String DEFAULT toString(x); SELECT arraySort(arraySort(groupArray(x))), groupArray(s) FROM tmp; diff --git a/tests/queries/0_stateless/02008_tuple_to_name_value_pairs.sql b/tests/queries/0_stateless/02008_tuple_to_name_value_pairs.sql index 1f6026bb61e..9f3443cf605 100644 --- a/tests/queries/0_stateless/02008_tuple_to_name_value_pairs.sql +++ b/tests/queries/0_stateless/02008_tuple_to_name_value_pairs.sql @@ -19,7 +19,7 @@ INSERT INTO test02008 VALUES (tuple(3.3, 5.5, 6.6)); SELECT untuple(arrayJoin(tupleToNameValuePairs(col))) from test02008; DROP TABLE IF EXISTS test02008; -SELECT tupleToNameValuePairs(tuple(1, 1.3)); -- { serverError 43 } -SELECT tupleToNameValuePairs(tuple(1, [1,2])); -- { serverError 43 } -SELECT tupleToNameValuePairs(tuple(1, 'a')); -- { serverError 43 } -SELECT tupleToNameValuePairs(33); -- { serverError 43 } +SELECT tupleToNameValuePairs(tuple(1, 1.3)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT tupleToNameValuePairs(tuple(1, [1,2])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT tupleToNameValuePairs(tuple(1, 'a')); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT tupleToNameValuePairs(33); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } 
diff --git a/tests/queries/0_stateless/02009_array_join_partition.sql b/tests/queries/0_stateless/02009_array_join_partition.sql index b8eedb5592f..3b9468947dd 100644 --- a/tests/queries/0_stateless/02009_array_join_partition.sql +++ b/tests/queries/0_stateless/02009_array_join_partition.sql @@ -1,4 +1,4 @@ CREATE TABLE table_2009_part (`i` Int64, `d` Date, `s` String) ENGINE = MergeTree PARTITION BY toYYYYMM(d) ORDER BY i; -ALTER TABLE table_2009_part ATTACH PARTITION tuple(arrayJoin([0, 1])); -- {serverError 36} -ALTER TABLE table_2009_part ATTACH PARTITION tuple(toYYYYMM(toDate([arrayJoin([arrayJoin([arrayJoin([arrayJoin([3, materialize(NULL), arrayJoin([1025, materialize(NULL), materialize(NULL)]), NULL])])]), materialize(NULL)])], NULL))); -- {serverError 36} +ALTER TABLE table_2009_part ATTACH PARTITION tuple(arrayJoin([0, 1])); -- {serverError BAD_ARGUMENTS} +ALTER TABLE table_2009_part ATTACH PARTITION tuple(toYYYYMM(toDate([arrayJoin([arrayJoin([arrayJoin([arrayJoin([3, materialize(NULL), arrayJoin([1025, materialize(NULL), materialize(NULL)]), NULL])])]), materialize(NULL)])], NULL))); -- {serverError BAD_ARGUMENTS} diff --git a/tests/queries/0_stateless/02010_array_index_bad_cast.sql b/tests/queries/0_stateless/02010_array_index_bad_cast.sql index 42a6556fc77..14162e0d2e2 100644 --- a/tests/queries/0_stateless/02010_array_index_bad_cast.sql +++ b/tests/queries/0_stateless/02010_array_index_bad_cast.sql @@ -1,3 +1,3 @@ -- This query throws exception about uncomparable data types (but at least it does not introduce bad cast in code). SET allow_suspicious_low_cardinality_types=1; -SELECT has(materialize(CAST(['2021-07-14'] AS Array(LowCardinality(Nullable(DateTime))))), materialize('2021-07-14'::DateTime64(7))); -- { serverError 44 } +SELECT has(materialize(CAST(['2021-07-14'] AS Array(LowCardinality(Nullable(DateTime))))), materialize('2021-07-14'::DateTime64(7))); -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/02011_normalize_utf8.sql b/tests/queries/0_stateless/02011_normalize_utf8.sql index 5abb6b4d8fb..acb76b38dd0 100644 --- a/tests/queries/0_stateless/02011_normalize_utf8.sql +++ b/tests/queries/0_stateless/02011_normalize_utf8.sql @@ -38,7 +38,7 @@ FROM normalize_test ORDER BY id; -SELECT char(228) AS value, normalizeUTF8NFC(value); -- { serverError 621 } -SELECT char(228) AS value, normalizeUTF8NFD(value); -- { serverError 621 } -SELECT char(228) AS value, normalizeUTF8NFKC(value); -- { serverError 621 } -SELECT char(228) AS value, normalizeUTF8NFKD(value); -- { serverError 621 } +SELECT char(228) AS value, normalizeUTF8NFC(value); -- { serverError CANNOT_NORMALIZE_STRING } +SELECT char(228) AS value, normalizeUTF8NFD(value); -- { serverError CANNOT_NORMALIZE_STRING } +SELECT char(228) AS value, normalizeUTF8NFKC(value); -- { serverError CANNOT_NORMALIZE_STRING } +SELECT char(228) AS value, normalizeUTF8NFKD(value); -- { serverError CANNOT_NORMALIZE_STRING } diff --git a/tests/queries/0_stateless/02011_tuple_vector_functions.sql b/tests/queries/0_stateless/02011_tuple_vector_functions.sql index 14f013937bb..d0cd89dc464 100644 --- a/tests/queries/0_stateless/02011_tuple_vector_functions.sql +++ b/tests/queries/0_stateless/02011_tuple_vector_functions.sql @@ -19,7 +19,7 @@ SELECT tupleDivideByNumber(tuple(1), materialize(1)); SELECT materialize((1, 2.0, 3.1)) * 3; SELECT 5.5 * (2, 4); SELECT (1, 2) / 2; -SELECT 2 / (1, 1); -- { serverError 43 } +SELECT 2 / (1, 1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT tuple(1, 2, 3) * tuple(2, 3, 4); 
SELECT dotProduct(materialize((-1, 2, 3.002)), materialize((2, 3.4, 4))); @@ -75,21 +75,21 @@ SELECT L1Normalize((NULL, 1)); SELECT cosineDistance((NULL, 1), (NULL, NULL)); SELECT max2(NULL, 1) - min2(NULL, 1); -SELECT L1Norm(1); -- { serverError 43 } -SELECT (1, 1) / toString(1); -- { serverError 43 } -SELECT -(1, toString(1)); -- { serverError 43 } -SELECT LpNorm((1, 2), toDecimal32(2, 4)); -- { serverError 43 } +SELECT L1Norm(1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT (1, 1) / toString(1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT -(1, toString(1)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT LpNorm((1, 2), toDecimal32(2, 4)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT (1, 2) * toDecimal32(3.1, 8); -SELECT cosineDistance((1, 2), (2, 3, 4)); -- { serverError 43 } -SELECT tuple() + tuple(); -- { serverError 42 } -SELECT LpNorm((1, 2, 3)); -- { serverError 42 } -SELECT max2(1, 2, -1); -- { serverError 42 } +SELECT cosineDistance((1, 2), (2, 3, 4)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT tuple() + tuple(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT LpNorm((1, 2, 3)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT max2(1, 2, -1); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -SELECT LpNorm((1, 2, 3), materialize(4.)); -- { serverError 44 } +SELECT LpNorm((1, 2, 3), materialize(4.)); -- { serverError ILLEGAL_COLUMN } SELECT tuple(*, 1) + tuple(2, *) FROM numbers(3); -SELECT LpDistance(tuple(*, 1), tuple(2, *), * + 1.) FROM numbers(3, 2); -- { serverError 44 } +SELECT LpDistance(tuple(*, 1), tuple(2, *), * + 1.) FROM numbers(3, 2); -- { serverError ILLEGAL_COLUMN } SELECT cosineDistance(tuple(*, * + 1), tuple(1, 2)) FROM numbers(1, 3); SELECT -tuple(NULL, * * 2, *) FROM numbers(2); @@ -99,12 +99,12 @@ SELECT normalizeL1((1, 1)), normalizeL2((1, 1)), normalizeLinf((1, 1)), normaliz SELECT LpNorm((1, 2, 3), 2.2); SELECT LpNorm((1.5, 2.5, 4), pi()); -SELECT LpNorm((3, 1, 4), 0); -- { serverError 69 } -SELECT LpNorm((1, 2, 3), 0.5); -- { serverError 69 } -SELECT LpNorm((1, 2, 3), inf); -- { serverError 69 } -SELECT LpNorm((1, 2, 3), -1.); -- { serverError 69 } -SELECT LpNorm((1, 2, 3), -1); -- { serverError 44 } -SELECT LpNorm((1, 2, 3), 0.); -- { serverError 69 } +SELECT LpNorm((3, 1, 4), 0); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT LpNorm((1, 2, 3), 0.5); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT LpNorm((1, 2, 3), inf); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT LpNorm((1, 2, 3), -1.); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT LpNorm((1, 2, 3), -1); -- { serverError ILLEGAL_COLUMN } +SELECT LpNorm((1, 2, 3), 0.); -- { serverError ARGUMENT_OUT_OF_BOUND } SELECT cosineDistance(materialize((NULL, -2147483648)), (1048577, 1048575)); -- not extra parentheses diff --git a/tests/queries/0_stateless/02013_bloom_filter_hasAll.sql b/tests/queries/0_stateless/02013_bloom_filter_hasAll.sql index adba3db6cf5..02ac2279686 100644 --- a/tests/queries/0_stateless/02013_bloom_filter_hasAll.sql +++ b/tests/queries/0_stateless/02013_bloom_filter_hasAll.sql @@ -16,7 +16,7 @@ SELECT count() FROM bftest WHERE hasAll(x, materialize([1,2,3])) FORMAT Null; -- verify the expression in WHERE works on non-index col the same way as on index cols SELECT count() FROM bftest WHERE hasAll(y, [NULL,-42]) FORMAT Null; SELECT count() FROM bftest WHERE hasAll(y, [0,NULL]) FORMAT Null; -SELECT count() FROM bftest WHERE hasAll(y, [[123], -42]) FORMAT Null; -- { serverError 386 } +SELECT count() FROM bftest WHERE 
hasAll(y, [[123], -42]) FORMAT Null; -- { serverError NO_COMMON_TYPE } SELECT count() FROM bftest WHERE hasAll(y, [toDecimal32(123, 3), 2]) FORMAT Null; -- different, doesn't fail SET force_data_skipping_indices='ix1'; @@ -26,15 +26,15 @@ SELECT count() FROM bftest WHERE hasAll(x, []) FORMAT Null; SELECT count() FROM bftest WHERE hasAll(x, [1]) FORMAT Null; -- can't use bloom_filter with `hasAll` on non-constant arguments (just like `has`) -SELECT count() FROM bftest WHERE hasAll(x, materialize([1,2,3])) FORMAT Null; -- { serverError 277 } +SELECT count() FROM bftest WHERE hasAll(x, materialize([1,2,3])) FORMAT Null; -- { serverError INDEX_NOT_USED } -- NULLs are not Ok -SELECT count() FROM bftest WHERE hasAll(x, [NULL,-42]) FORMAT Null; -- { serverError 277 } -SELECT count() FROM bftest WHERE hasAll(x, [0,NULL]) FORMAT Null; -- { serverError 277 } +SELECT count() FROM bftest WHERE hasAll(x, [NULL,-42]) FORMAT Null; -- { serverError INDEX_NOT_USED } +SELECT count() FROM bftest WHERE hasAll(x, [0,NULL]) FORMAT Null; -- { serverError INDEX_NOT_USED } -- non-compatible types -SELECT count() FROM bftest WHERE hasAll(x, [[123], -42]) FORMAT Null; -- { serverError 386 } -SELECT count() FROM bftest WHERE hasAll(x, [toDecimal32(123, 3), 2]) FORMAT Null; -- { serverError 277 } +SELECT count() FROM bftest WHERE hasAll(x, [[123], -42]) FORMAT Null; -- { serverError NO_COMMON_TYPE } +SELECT count() FROM bftest WHERE hasAll(x, [toDecimal32(123, 3), 2]) FORMAT Null; -- { serverError INDEX_NOT_USED } -- Bug discovered by AST fuzzier (fixed, shouldn't crash). SELECT 1 FROM bftest WHERE has(x, -0.) OR 0. FORMAT Null; diff --git a/tests/queries/0_stateless/02018_multiple_with_fill_for_the_same_column.sql b/tests/queries/0_stateless/02018_multiple_with_fill_for_the_same_column.sql index 32b38388cf6..0db88defaa9 100644 --- a/tests/queries/0_stateless/02018_multiple_with_fill_for_the_same_column.sql +++ b/tests/queries/0_stateless/02018_multiple_with_fill_for_the_same_column.sql @@ -1 +1 @@ -SELECT x, y FROM (SELECT 5 AS x, 'Hello' AS y) ORDER BY x WITH FILL FROM 3 TO 7, y, x WITH FILL FROM 1 TO 10; -- { serverError 475 } +SELECT x, y FROM (SELECT 5 AS x, 'Hello' AS y) ORDER BY x WITH FILL FROM 3 TO 7, y, x WITH FILL FROM 1 TO 10; -- { serverError INVALID_WITH_FILL_EXPRESSION } diff --git a/tests/queries/0_stateless/02024_merge_regexp_assert.sql b/tests/queries/0_stateless/02024_merge_regexp_assert.sql index fed26b08ad9..de1fdbbb56b 100644 --- a/tests/queries/0_stateless/02024_merge_regexp_assert.sql +++ b/tests/queries/0_stateless/02024_merge_regexp_assert.sql @@ -3,8 +3,8 @@ DROP TABLE IF EXISTS t; CREATE TABLE t (b UInt8) ENGINE = Memory; -SELECT a FROM merge(REGEXP('.'), '^t$'); -- { serverError 47 } -SELECT a FROM merge(REGEXP('\0'), '^t$'); -- { serverError 47 } -SELECT a FROM merge(REGEXP('\0a'), '^t$'); -- { serverError 47 } -SELECT a FROM merge(REGEXP('\0a'), '^$'); -- { serverError 36 } +SELECT a FROM merge(REGEXP('.'), '^t$'); -- { serverError UNKNOWN_IDENTIFIER } +SELECT a FROM merge(REGEXP('\0'), '^t$'); -- { serverError UNKNOWN_IDENTIFIER } +SELECT a FROM merge(REGEXP('\0a'), '^t$'); -- { serverError UNKNOWN_IDENTIFIER } +SELECT a FROM merge(REGEXP('\0a'), '^$'); -- { serverError BAD_ARGUMENTS } DROP TABLE t; diff --git a/tests/queries/0_stateless/02030_tuple_filter.sql b/tests/queries/0_stateless/02030_tuple_filter.sql index 1b79ad6c83c..42853dec681 100644 --- a/tests/queries/0_stateless/02030_tuple_filter.sql +++ b/tests/queries/0_stateless/02030_tuple_filter.sql @@ -35,9 +35,9 @@ SELECT * FROM 
test_tuple_filter WHERE (1, value) = (id, 'A'); SELECT * FROM test_tuple_filter WHERE tuple(id) = tuple(1); SELECT * FROM test_tuple_filter WHERE (id, (id, id) = (1, NULL)) == (NULL, NULL); -SELECT * FROM test_tuple_filter WHERE (log_date, value) = tuple('2021-01-01'); -- { serverError 43 } -SELECT * FROM test_tuple_filter WHERE (id, value) = tuple(1); -- { serverError 43 } -SELECT * FROM test_tuple_filter WHERE tuple(id, value) = tuple(value, id); -- { serverError 386 } -SELECT * FROM test_tuple_filter WHERE equals((id, value)); -- { serverError 42 } +SELECT * FROM test_tuple_filter WHERE (log_date, value) = tuple('2021-01-01'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT * FROM test_tuple_filter WHERE (id, value) = tuple(1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT * FROM test_tuple_filter WHERE tuple(id, value) = tuple(value, id); -- { serverError NO_COMMON_TYPE } +SELECT * FROM test_tuple_filter WHERE equals((id, value)); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } DROP TABLE IF EXISTS test_tuple_filter; diff --git a/tests/queries/0_stateless/02041_test_fuzzy_alter.sql b/tests/queries/0_stateless/02041_test_fuzzy_alter.sql index a330defc316..59e536cd68c 100644 --- a/tests/queries/0_stateless/02041_test_fuzzy_alter.sql +++ b/tests/queries/0_stateless/02041_test_fuzzy_alter.sql @@ -5,7 +5,7 @@ ENGINE = MergeTree ORDER BY a; ALTER TABLE alter_table - MODIFY COLUMN `b` DateTime DEFAULT now(([NULL, NULL, NULL, [-2147483648], [NULL, NULL, NULL, NULL, NULL, NULL, NULL]] AND (1048576 AND NULL) AND (NULL AND 1048575 AND NULL AND -2147483649) AND NULL) IN (test_01103.t1_distr.id)); --{serverError 47} + MODIFY COLUMN `b` DateTime DEFAULT now(([NULL, NULL, NULL, [-2147483648], [NULL, NULL, NULL, NULL, NULL, NULL, NULL]] AND (1048576 AND NULL) AND (NULL AND 1048575 AND NULL AND -2147483649) AND NULL) IN (test_01103.t1_distr.id)); --{serverError UNKNOWN_IDENTIFIER} SELECT 1; diff --git a/tests/queries/0_stateless/02096_sample_by_tuple.sql b/tests/queries/0_stateless/02096_sample_by_tuple.sql index 4996c9b8384..1a86e1bcab8 100644 --- a/tests/queries/0_stateless/02096_sample_by_tuple.sql +++ b/tests/queries/0_stateless/02096_sample_by_tuple.sql @@ -1,7 +1,7 @@ DROP TABLE IF EXISTS t; -CREATE TABLE t (n UInt8) ENGINE=MergeTree ORDER BY n SAMPLE BY tuple(); -- { serverError 80 } +CREATE TABLE t (n UInt8) ENGINE=MergeTree ORDER BY n SAMPLE BY tuple(); -- { serverError INCORRECT_QUERY } CREATE TABLE t (n UInt8) ENGINE=MergeTree ORDER BY tuple(); -ALTER TABLE t MODIFY SAMPLE BY tuple(); -- { serverError 80 } +ALTER TABLE t MODIFY SAMPLE BY tuple(); -- { serverError INCORRECT_QUERY } diff --git a/tests/queries/0_stateless/02097_polygon_dictionary_store_key.sql b/tests/queries/0_stateless/02097_polygon_dictionary_store_key.sql index 95557da481e..97297a776ee 100644 --- a/tests/queries/0_stateless/02097_polygon_dictionary_store_key.sql +++ b/tests/queries/0_stateless/02097_polygon_dictionary_store_key.sql @@ -18,7 +18,7 @@ SOURCE(CLICKHOUSE(TABLE 'polygons_test_table')) LAYOUT(POLYGON()) LIFETIME(0); -SELECT * FROM polygons_test_dictionary_no_option; -- {serverError 1} +SELECT * FROM polygons_test_dictionary_no_option; -- {serverError UNSUPPORTED_METHOD} DROP DICTIONARY IF EXISTS polygons_test_dictionary; CREATE DICTIONARY polygons_test_dictionary diff --git a/tests/queries/0_stateless/02097_remove_sample_by.sql b/tests/queries/0_stateless/02097_remove_sample_by.sql index 89fbfe0c4c5..d9e3c7eab67 100644 --- a/tests/queries/0_stateless/02097_remove_sample_by.sql +++ 
b/tests/queries/0_stateless/02097_remove_sample_by.sql @@ -7,8 +7,8 @@ CREATE TABLE t_remove_sample_by(id UInt64) ENGINE = MergeTree ORDER BY id SAMPLE ALTER TABLE t_remove_sample_by REMOVE SAMPLE BY; SHOW CREATE TABLE t_remove_sample_by; -ALTER TABLE t_remove_sample_by REMOVE SAMPLE BY; -- { serverError 36 } -SELECT * FROM t_remove_sample_by SAMPLE 1 / 10; -- { serverError 141 } +ALTER TABLE t_remove_sample_by REMOVE SAMPLE BY; -- { serverError BAD_ARGUMENTS } +SELECT * FROM t_remove_sample_by SAMPLE 1 / 10; -- { serverError SAMPLING_NOT_SUPPORTED } DROP TABLE t_remove_sample_by; @@ -22,7 +22,7 @@ SHOW CREATE TABLE t_remove_sample_by; DROP TABLE t_remove_sample_by; CREATE TABLE t_remove_sample_by(id UInt64) ENGINE = Memory; -ALTER TABLE t_remove_sample_by REMOVE SAMPLE BY; -- { serverError 36 } +ALTER TABLE t_remove_sample_by REMOVE SAMPLE BY; -- { serverError BAD_ARGUMENTS } DROP TABLE t_remove_sample_by; @@ -36,7 +36,7 @@ DETACH TABLE t_remove_sample_by; ATTACH TABLE t_remove_sample_by; INSERT INTO t_remove_sample_by VALUES (1); -SELECT * FROM t_remove_sample_by SAMPLE 1 / 10; -- { serverError 59 } +SELECT * FROM t_remove_sample_by SAMPLE 1 / 10; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } ALTER TABLE t_remove_sample_by REMOVE SAMPLE BY; SHOW CREATE TABLE t_remove_sample_by; diff --git a/tests/queries/0_stateless/02101_sql_user_defined_functions_drop_if_exists.sql b/tests/queries/0_stateless/02101_sql_user_defined_functions_drop_if_exists.sql index 09e2677774c..8061f227ba2 100644 --- a/tests/queries/0_stateless/02101_sql_user_defined_functions_drop_if_exists.sql +++ b/tests/queries/0_stateless/02101_sql_user_defined_functions_drop_if_exists.sql @@ -5,5 +5,5 @@ CREATE FUNCTION 02101_test_function AS x -> x + 1; SELECT 02101_test_function(1); DROP FUNCTION 02101_test_function; -DROP FUNCTION 02101_test_function; --{serverError 46} +DROP FUNCTION 02101_test_function; --{serverError UNKNOWN_FUNCTION} DROP FUNCTION IF EXISTS 02101_test_function; diff --git a/tests/queries/0_stateless/02102_sql_user_defined_functions_create_if_not_exists.sql b/tests/queries/0_stateless/02102_sql_user_defined_functions_create_if_not_exists.sql index 092fa660cb0..5dba8a2e715 100644 --- a/tests/queries/0_stateless/02102_sql_user_defined_functions_create_if_not_exists.sql +++ b/tests/queries/0_stateless/02102_sql_user_defined_functions_create_if_not_exists.sql @@ -3,6 +3,6 @@ CREATE FUNCTION IF NOT EXISTS 02102_test_function AS x -> x + 1; SELECT 02102_test_function(1); -CREATE FUNCTION 02102_test_function AS x -> x + 1; --{serverError 609} +CREATE FUNCTION 02102_test_function AS x -> x + 1; --{serverError FUNCTION_ALREADY_EXISTS} CREATE FUNCTION IF NOT EXISTS 02102_test_function AS x -> x + 1; DROP FUNCTION 02102_test_function; diff --git a/tests/queries/0_stateless/02112_with_fill_interval.sql b/tests/queries/0_stateless/02112_with_fill_interval.sql index d2416f9a84b..1210b0f2a28 100644 --- a/tests/queries/0_stateless/02112_with_fill_interval.sql +++ b/tests/queries/0_stateless/02112_with_fill_interval.sql @@ -18,7 +18,7 @@ SELECT toStartOfMonth(d) as d, count() FROM with_fill_date GROUP BY d ORDER BY d TO toDate('2021-01-01') STEP INTERVAL 3 MONTH; -SELECT d, count() FROM with_fill_date GROUP BY d ORDER BY d WITH FILL STEP INTERVAL 1 HOUR LIMIT 5; -- { serverError 475 } +SELECT d, count() FROM with_fill_date GROUP BY d ORDER BY d WITH FILL STEP INTERVAL 1 HOUR LIMIT 5; -- { serverError INVALID_WITH_FILL_EXPRESSION } SELECT '1 DAY'; SELECT d32, count() FROM with_fill_date GROUP BY d32 ORDER BY d32 WITH 
FILL STEP INTERVAL 1 DAY LIMIT 5; @@ -32,7 +32,7 @@ SELECT toStartOfMonth(d32) as d32, count() FROM with_fill_date GROUP BY d32 ORDE TO toDate('2021-01-01') STEP INTERVAL 3 MONTH; -SELECT d, count() FROM with_fill_date GROUP BY d ORDER BY d WITH FILL STEP INTERVAL 1 HOUR LIMIT 5; -- { serverError 475 } +SELECT d, count() FROM with_fill_date GROUP BY d ORDER BY d WITH FILL STEP INTERVAL 1 HOUR LIMIT 5; -- { serverError INVALID_WITH_FILL_EXPRESSION } DROP TABLE with_fill_date; @@ -58,7 +58,7 @@ SELECT toStartOfDay(d64) as d64, count() FROM with_fill_date GROUP BY d64 ORDER DROP TABLE with_fill_date; -SELECT number FROM numbers(100) ORDER BY number WITH FILL STEP INTERVAL 1 HOUR; -- { serverError 475 } +SELECT number FROM numbers(100) ORDER BY number WITH FILL STEP INTERVAL 1 HOUR; -- { serverError INVALID_WITH_FILL_EXPRESSION } CREATE TABLE with_fill_date (d Date, id UInt32) ENGINE = Memory; diff --git a/tests/queries/0_stateless/02113_format_row_bug.sql b/tests/queries/0_stateless/02113_format_row_bug.sql index c2144ca1537..ce934201297 100644 --- a/tests/queries/0_stateless/02113_format_row_bug.sql +++ b/tests/queries/0_stateless/02113_format_row_bug.sql @@ -1,6 +1,6 @@ -- Tags: no-fasttest -select formatRow('ORC', number, toDate(number)) from numbers(5); -- { serverError 36 } -select formatRow('Parquet', number, toDate(number)) from numbers(5); -- { serverError 36 } -select formatRow('Arrow', number, toDate(number)) from numbers(5); -- { serverError 36 } -select formatRow('Native', number, toDate(number)) from numbers(5); -- { serverError 36 } +select formatRow('ORC', number, toDate(number)) from numbers(5); -- { serverError BAD_ARGUMENTS } +select formatRow('Parquet', number, toDate(number)) from numbers(5); -- { serverError BAD_ARGUMENTS } +select formatRow('Arrow', number, toDate(number)) from numbers(5); -- { serverError BAD_ARGUMENTS } +select formatRow('Native', number, toDate(number)) from numbers(5); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02119_sumcount.sql b/tests/queries/0_stateless/02119_sumcount.sql index 86625996f44..dc66a822dcf 100644 --- a/tests/queries/0_stateless/02119_sumcount.sql +++ b/tests/queries/0_stateless/02119_sumcount.sql @@ -163,8 +163,8 @@ SELECT toTypeName(sumCount(v)), sumCount(v) FROM (SELECT '1'::UInt256 AS v FROM SELECT toTypeName(sumCount(v)), sumCount(v) FROM (SELECT '1.001'::Decimal(3, 3) AS v FROM numbers(100)); -- Other types -SELECT toTypeName(sumCount(v)), sumCount(v) FROM (SELECT 'a'::String AS v); -- { serverError 43 } -SELECT toTypeName(sumCount(v)), sumCount(v) FROM (SELECT now()::DateTime AS v); -- { serverError 43 } +SELECT toTypeName(sumCount(v)), sumCount(v) FROM (SELECT 'a'::String AS v); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT toTypeName(sumCount(v)), sumCount(v) FROM (SELECT now()::DateTime AS v); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -- SumCountIf diff --git a/tests/queries/0_stateless/02124_insert_deduplication_token_materialized_views.sql b/tests/queries/0_stateless/02124_insert_deduplication_token_materialized_views.sql index 88d3165d060..fdd75b91b1f 100644 --- a/tests/queries/0_stateless/02124_insert_deduplication_token_materialized_views.sql +++ b/tests/queries/0_stateless/02124_insert_deduplication_token_materialized_views.sql @@ -22,7 +22,7 @@ CREATE MATERIALIZED VIEW test_mv_c Engine=ReplicatedMergeTree ('/clickhouse/tabl order by tuple() AS SELECT test, A, count() c FROM test group by test, A; SET max_partitions_per_insert_block = 1; -INSERT INTO test SELECT 'case1', number%3, 1 
FROM numbers(9); -- { serverError 252 } +INSERT INTO test SELECT 'case1', number%3, 1 FROM numbers(9); -- { serverError TOO_MANY_PARTS } SET max_partitions_per_insert_block = 0; INSERT INTO test SELECT 'case1', number%3, 1 FROM numbers(9); INSERT INTO test SELECT 'case1', number%3, 2 FROM numbers(9); @@ -40,7 +40,7 @@ select 'deduplicate_blocks_in_dependent_materialized_views=1, insert_deduplicati set deduplicate_blocks_in_dependent_materialized_views=1; SET max_partitions_per_insert_block = 1; -INSERT INTO test SELECT 'case2', number%3, 1 FROM numbers(9) ; -- { serverError 252 } +INSERT INTO test SELECT 'case2', number%3, 1 FROM numbers(9) ; -- { serverError TOO_MANY_PARTS } SET max_partitions_per_insert_block = 0; INSERT INTO test SELECT 'case2', number%3, 1 FROM numbers(9); INSERT INTO test SELECT 'case2', number%3, 2 FROM numbers(9); @@ -58,7 +58,7 @@ select 'deduplicate_blocks_in_dependent_materialized_views=0, insert_deduplicati set deduplicate_blocks_in_dependent_materialized_views=0; SET max_partitions_per_insert_block = 1; -INSERT INTO test SELECT 'case3', number%3, 1 FROM numbers(9) SETTINGS insert_deduplication_token = 'case3test1'; -- { serverError 252 } +INSERT INTO test SELECT 'case3', number%3, 1 FROM numbers(9) SETTINGS insert_deduplication_token = 'case3test1'; -- { serverError TOO_MANY_PARTS } SET max_partitions_per_insert_block = 0; INSERT INTO test SELECT 'case3', number%3, 1 FROM numbers(9) SETTINGS insert_deduplication_token = 'case3test1'; INSERT INTO test SELECT 'case3', number%3, 2 FROM numbers(9) SETTINGS insert_deduplication_token = 'case3test2'; @@ -75,7 +75,7 @@ select 'deduplicate_blocks_in_dependent_materialized_views=1, insert_deduplicati set deduplicate_blocks_in_dependent_materialized_views=1; SET max_partitions_per_insert_block = 1; -INSERT INTO test SELECT 'case4', number%3, 1 FROM numbers(9) SETTINGS insert_deduplication_token = 'case4test1' ; -- { serverError 252 } +INSERT INTO test SELECT 'case4', number%3, 1 FROM numbers(9) SETTINGS insert_deduplication_token = 'case4test1' ; -- { serverError TOO_MANY_PARTS } SET max_partitions_per_insert_block = 0; INSERT INTO test SELECT 'case4', number%3, 1 FROM numbers(9) SETTINGS insert_deduplication_token = 'case4test1'; INSERT INTO test SELECT 'case4', number%3, 2 FROM numbers(9) SETTINGS insert_deduplication_token = 'case4test2'; diff --git a/tests/queries/0_stateless/02125_dict_get_type_nullable_fix.sql b/tests/queries/0_stateless/02125_dict_get_type_nullable_fix.sql index 01fea381bf3..1d08dc636c5 100644 --- a/tests/queries/0_stateless/02125_dict_get_type_nullable_fix.sql +++ b/tests/queries/0_stateless/02125_dict_get_type_nullable_fix.sql @@ -19,4 +19,4 @@ SOURCE(CLICKHOUSE(TABLE '02125_test_table')) LAYOUT(DIRECT()); SELECT dictGet('02125_test_dictionary', 'value', toUInt64(0)); -SELECT dictGetString('02125_test_dictionary', 'value', toUInt64(0)); --{serverError 53} +SELECT dictGetString('02125_test_dictionary', 'value', toUInt64(0)); --{serverError TYPE_MISMATCH} diff --git a/tests/queries/0_stateless/02125_fix_storage_filelog.sql b/tests/queries/0_stateless/02125_fix_storage_filelog.sql index 7586df1ee00..1ac33586bad 100644 --- a/tests/queries/0_stateless/02125_fix_storage_filelog.sql +++ b/tests/queries/0_stateless/02125_fix_storage_filelog.sql @@ -1,3 +1,3 @@ -CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 'CSV'); -- {serverError 36 } -CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 'CSV'); -- {serverError 36 } -CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 
'CSV'); -- {serverError 36 } +CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 'CSV'); -- {serverError BAD_ARGUMENTS } +CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 'CSV'); -- {serverError BAD_ARGUMENTS } +CREATE TABLE log (A String) ENGINE= FileLog('/tmp/aaa.csv', 'CSV'); -- {serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02125_recursive_sql_user_defined_functions.sql b/tests/queries/0_stateless/02125_recursive_sql_user_defined_functions.sql index 1870521c255..883ca6f9ab7 100644 --- a/tests/queries/0_stateless/02125_recursive_sql_user_defined_functions.sql +++ b/tests/queries/0_stateless/02125_recursive_sql_user_defined_functions.sql @@ -2,7 +2,7 @@ DROP FUNCTION IF EXISTS 02125_function; CREATE FUNCTION 02125_function AS x -> 02125_function(x); -SELECT 02125_function(1); --{serverError 1}; +SELECT 02125_function(1); --{serverError UNSUPPORTED_METHOD}; DROP FUNCTION 02125_function; DROP FUNCTION IF EXISTS 02125_function_1; @@ -11,8 +11,8 @@ CREATE FUNCTION 02125_function_1 AS x -> 02125_function_2(x); DROP FUNCTION IF EXISTS 02125_function_2; CREATE FUNCTION 02125_function_2 AS x -> 02125_function_1(x); -SELECT 02125_function_1(1); --{serverError 1}; -SELECT 02125_function_2(2); --{serverError 1}; +SELECT 02125_function_1(1); --{serverError UNSUPPORTED_METHOD}; +SELECT 02125_function_2(2); --{serverError UNSUPPORTED_METHOD}; CREATE OR REPLACE FUNCTION 02125_function_2 AS x -> x + 1; diff --git a/tests/queries/0_stateless/02126_identity_user_defined_function.sql b/tests/queries/0_stateless/02126_identity_user_defined_function.sql index a53c6e28a48..8a108ed21c6 100644 --- a/tests/queries/0_stateless/02126_identity_user_defined_function.sql +++ b/tests/queries/0_stateless/02126_identity_user_defined_function.sql @@ -6,7 +6,7 @@ SELECT 02126_function(1); DROP FUNCTION 02126_function; CREATE FUNCTION 02126_function AS () -> x; -SELECT 02126_function(); --{ serverError 47 } +SELECT 02126_function(); --{ serverError UNKNOWN_IDENTIFIER } DROP FUNCTION 02126_function; CREATE FUNCTION 02126_function AS () -> 5; diff --git a/tests/queries/0_stateless/02151_hash_table_sizes_stats.reference b/tests/queries/0_stateless/02151_hash_table_sizes_stats.reference new file mode 100644 index 00000000000..712e2b058a4 --- /dev/null +++ b/tests/queries/0_stateless/02151_hash_table_sizes_stats.reference @@ -0,0 +1,21 @@ +1 +-- +1 +-- +1 +-- +1 +-- +1 +1 +-- +1 +-- +1 +1 +-- +1 +-- +1 +1 +-- diff --git a/tests/queries/0_stateless/02151_hash_table_sizes_stats.sh b/tests/queries/0_stateless/02151_hash_table_sizes_stats.sh new file mode 100755 index 00000000000..f99dbdacec2 --- /dev/null +++ b/tests/queries/0_stateless/02151_hash_table_sizes_stats.sh @@ -0,0 +1,96 @@ +#!/usr/bin/env bash +# Tags: long, no-debug, no-tsan, no-msan, no-ubsan, no-asan, no-random-settings, no-random-merge-tree-settings + +# shellcheck disable=SC2154,SC2162 + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CURDIR"/../shell_config.sh + + +# tests rely on that all the rows are unique and max_threads divides table_size +table_size=1000005 +max_threads=5 + + +prepare_table() { + table_name="t_hash_table_sizes_stats_$RANDOM$RANDOM" + $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS $table_name;" + if [ -z "$1" ]; then + $CLICKHOUSE_CLIENT -q "CREATE TABLE $table_name(number UInt64) Engine=MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi';" + else + $CLICKHOUSE_CLIENT -q "CREATE TABLE $table_name(number UInt64) Engine=MergeTree() ORDER BY $1 SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi';" + fi + $CLICKHOUSE_CLIENT -q "SYSTEM STOP MERGES $table_name;" + for ((i = 1; i <= max_threads; i++)); do + cnt=$((table_size / max_threads)) + from=$(((i - 1) * cnt)) + $CLICKHOUSE_CLIENT -q "INSERT INTO $table_name SELECT * FROM numbers($from, $cnt);" + done +} + +prepare_table_with_sorting_key() { + prepare_table "$1" +} + +run_query() { + query_id="${CLICKHOUSE_DATABASE}_hash_table_sizes_stats_$RANDOM$RANDOM" + $CLICKHOUSE_CLIENT --query_id="$query_id" --multiquery -q " + SET max_block_size = $((table_size / 10)); + SET merge_tree_min_rows_for_concurrent_read = 1; + SET max_untracked_memory = 0; + SET max_size_to_preallocate_for_aggregation = 1e12; + $query" +} + +check_preallocated_elements() { + # rows may be distributed in any way including "everything goes to the one particular thread" + $CLICKHOUSE_CLIENT --param_query_id="$1" -q " + SELECT COUNT(*) + FROM system.query_log + WHERE event_date >= yesterday() AND query_id = {query_id:String} AND current_database = currentDatabase() + AND ProfileEvents['AggregationPreallocatedElementsInHashTables'] BETWEEN $2 AND $3" +} + +check_convertion_to_two_level() { + # rows may be distributed in any way including "everything goes to the one particular thread" + $CLICKHOUSE_CLIENT --param_query_id="$1" -q " + SELECT SUM(ProfileEvents['AggregationHashTablesInitializedAsTwoLevel']) BETWEEN 1 AND $max_threads + FROM system.query_log + WHERE event_date >= yesterday() AND query_id = {query_id:String} AND current_database = currentDatabase()" +} + +print_border() { + echo "--" +} + +# each test case appends to this array +expected_results=() + +check_expectations() { + $CLICKHOUSE_CLIENT -q "SYSTEM FLUSH LOGS" + + for i in "${!expected_results[@]}"; do + read -a args <<< "${expected_results[$i]}" + if [ ${#args[@]} -eq 4 ]; then + check_convertion_to_two_level "${args[0]}" + fi + check_preallocated_elements "${args[@]}" + print_border + done +} + +# shellcheck source=./02151_hash_table_sizes_stats.testcases +source "$CURDIR"/02151_hash_table_sizes_stats.testcases + +test_one_thread_simple_group_by +test_one_thread_simple_group_by_with_limit +test_one_thread_simple_group_by_with_join_and_subquery +test_several_threads_simple_group_by_with_limit_single_level_ht +test_several_threads_simple_group_by_with_limit_two_level_ht +test_several_threads_simple_group_by_with_limit_and_rollup_single_level_ht +test_several_threads_simple_group_by_with_limit_and_rollup_two_level_ht +test_several_threads_simple_group_by_with_limit_and_cube_single_level_ht +test_several_threads_simple_group_by_with_limit_and_cube_two_level_ht + +check_expectations diff --git a/tests/queries/0_stateless/02151_hash_table_sizes_stats.testcases b/tests/queries/0_stateless/02151_hash_table_sizes_stats.testcases new file mode 100644 index 00000000000..7612108a700 --- /dev/null +++ b/tests/queries/0_stateless/02151_hash_table_sizes_stats.testcases @@ 
-0,0 +1,183 @@ +test_one_thread_simple_group_by() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint -- + SELECT number + FROM $table_name + GROUP BY number + SETTINGS max_threads = 1 + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $expected_size_hint $expected_size_hint") +} + +test_one_thread_simple_group_by_with_limit() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint despite the presence of LIMIT -- + SELECT number + FROM $table_name + GROUP BY number + LIMIT 5 + SETTINGS max_threads = 1 + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $expected_size_hint $expected_size_hint") +} + +test_one_thread_simple_group_by_with_join_and_subquery() { + expected_size_hint=$((table_size + table_size / 2)) + prepare_table + + query=" + -- two size_hints are expected for different keys: $table_size for the inner aggregation and $((table_size / 2)) for the outer one + SELECT number + FROM $table_name AS t1 + JOIN + ( + SELECT number + FROM $table_name AS t2 + GROUP BY number + LIMIT $((table_size / 2)) + ) AS t3 USING(number) + GROUP BY number + SETTINGS max_threads = 1, + distributed_product_mode = 'local' + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $expected_size_hint $expected_size_hint") +} + +test_several_threads_simple_group_by_with_limit_single_level_ht() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint despite the presence of LIMIT -- + SELECT number + FROM $table_name + GROUP BY number + LIMIT 5 + SETTINGS max_threads = $max_threads, + group_by_two_level_threshold = $((expected_size_hint + 1)), + group_by_two_level_threshold_bytes = $((table_size * 1000)) + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $((expected_size_hint / max_threads)) $((expected_size_hint * max_threads))") +} + +test_several_threads_simple_group_by_with_limit_two_level_ht() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint despite the presence of LIMIT -- + SELECT number + FROM $table_name + GROUP BY number + LIMIT 5 + SETTINGS max_threads = $max_threads, + group_by_two_level_threshold = $expected_size_hint, + group_by_two_level_threshold_bytes = $((table_size * 1000)) + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $((expected_size_hint / max_threads)) $((expected_size_hint * max_threads)) check_two_level") +} + +test_several_threads_simple_group_by_with_limit_and_rollup_single_level_ht() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint despite the presence of LIMIT -- + SELECT number + FROM $table_name + GROUP BY number + WITH ROLLUP + LIMIT 5 + SETTINGS max_threads = $max_threads, + group_by_two_level_threshold = $((expected_size_hint + 1)), + group_by_two_level_threshold_bytes = $((table_size * 1000)) + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $((expected_size_hint / max_threads)) $((expected_size_hint * max_threads))") +} + +test_several_threads_simple_group_by_with_limit_and_rollup_two_level_ht() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint despite the presence of LIMIT -- + SELECT number + FROM $table_name + GROUP BY number + WITH ROLLUP + LIMIT 5 + SETTINGS max_threads = $max_threads, + group_by_two_level_threshold = $expected_size_hint, 
group_by_two_level_threshold_bytes = $((table_size * 1000)) + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $((expected_size_hint / max_threads)) $((expected_size_hint * max_threads)) check_two_level") +} + +test_several_threads_simple_group_by_with_limit_and_cube_single_level_ht() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint despite the presence of LIMIT -- + SELECT number + FROM $table_name + GROUP BY number + WITH CUBE + LIMIT 5 + SETTINGS max_threads = $max_threads, + group_by_two_level_threshold = $((expected_size_hint + 1)), + group_by_two_level_threshold_bytes = $((table_size * 1000)) + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $((expected_size_hint / max_threads)) $((expected_size_hint * max_threads))") +} + +test_several_threads_simple_group_by_with_limit_and_cube_two_level_ht() { + expected_size_hint=$table_size + prepare_table + + query=" + -- size_hint = $expected_size_hint despite the presence of LIMIT -- + SELECT number + FROM $table_name + GROUP BY number + WITH CUBE + LIMIT 5 + SETTINGS max_threads = $max_threads, + group_by_two_level_threshold = $expected_size_hint, + group_by_two_level_threshold_bytes = $((table_size * 1000)) + FORMAT Null;" + + run_query + run_query + expected_results+=("$query_id $((expected_size_hint / max_threads)) $((expected_size_hint * max_threads)) check_two_level") +} diff --git a/tests/queries/0_stateless/02151_hash_table_sizes_stats_distributed.reference b/tests/queries/0_stateless/02151_hash_table_sizes_stats_distributed.reference new file mode 100644 index 00000000000..0d10114f4ff --- /dev/null +++ b/tests/queries/0_stateless/02151_hash_table_sizes_stats_distributed.reference @@ -0,0 +1,33 @@ +1 +1 +-- +1 +1 +-- +1 +1 +-- +1 +1 +-- +1 +1 +1 +1 +-- +1 +1 +-- +1 +1 +1 +1 +-- +1 +1 +-- +1 +1 +1 +1 +-- diff --git a/tests/queries/0_stateless/02151_hash_table_sizes_stats_distributed.sh b/tests/queries/0_stateless/02151_hash_table_sizes_stats_distributed.sh new file mode 100755 index 00000000000..056c176c1ff --- /dev/null +++ b/tests/queries/0_stateless/02151_hash_table_sizes_stats_distributed.sh @@ -0,0 +1,103 @@ +#!/usr/bin/env bash +# Tags: long, distributed, no-debug, no-tsan, no-msan, no-ubsan, no-asan, no-random-settings, no-random-merge-tree-settings + +# These tests don't use the `current_database = currentDatabase()` condition, because the database name isn't propagated during remote queries. + +# shellcheck disable=SC2154,SC2162 + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. 
"$CURDIR"/../shell_config.sh + + +# tests rely on that all the rows are unique and max_threads divides table_size +table_size=1000005 +max_threads=5 + + +prepare_table() { + table_name="t_hash_table_sizes_stats_$RANDOM$RANDOM" + $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS $table_name;" + if [ -z "$1" ]; then + $CLICKHOUSE_CLIENT -q "CREATE TABLE $table_name(number UInt64) Engine=MergeTree() ORDER BY tuple() SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi';" + else + $CLICKHOUSE_CLIENT -q "CREATE TABLE $table_name(number UInt64) Engine=MergeTree() ORDER BY $1 SETTINGS index_granularity = 8192, index_granularity_bytes = '10Mi';" + fi + $CLICKHOUSE_CLIENT -q "SYSTEM STOP MERGES $table_name;" + for ((i = 1; i <= max_threads; i++)); do + cnt=$((table_size / max_threads)) + from=$(((i - 1) * cnt)) + $CLICKHOUSE_CLIENT -q "INSERT INTO $table_name SELECT * FROM numbers($from, $cnt);" + done + $CLICKHOUSE_CLIENT -q "DROP TABLE IF EXISTS ${table_name}_d;" + $CLICKHOUSE_CLIENT -q "CREATE TABLE ${table_name}_d AS $table_name ENGINE = Distributed(test_cluster_two_shards, currentDatabase(), $table_name);" + table_name="${table_name}_d" +} + +prepare_table_with_sorting_key() { + prepare_table "$1" +} + +run_query() { + query_id="${CLICKHOUSE_DATABASE}_hash_table_sizes_stats_$RANDOM$RANDOM" + $CLICKHOUSE_CLIENT --query_id="$query_id" --multiquery -q " + SET max_block_size = $((table_size / 10)); + SET merge_tree_min_rows_for_concurrent_read = 1; + SET max_untracked_memory = 0; + SET prefer_localhost_replica = 1; + $query" +} + +check_preallocated_elements() { + # rows may be distributed in any way including "everything goes to the one particular thread" + $CLICKHOUSE_CLIENT --param_query_id="$1" -q " + SELECT COUNT(*) + FROM system.query_log + WHERE event_date >= yesterday() AND (query_id = {query_id:String} OR initial_query_id = {query_id:String}) + AND ProfileEvents['AggregationPreallocatedElementsInHashTables'] BETWEEN $2 AND $3 + GROUP BY query_id" +} + +check_convertion_to_two_level() { + # rows may be distributed in any way including "everything goes to the one particular thread" + $CLICKHOUSE_CLIENT --param_query_id="$1" -q " + SELECT SUM(ProfileEvents['AggregationHashTablesInitializedAsTwoLevel']) BETWEEN 1 AND $max_threads + FROM system.query_log + WHERE event_date >= yesterday() AND (query_id = {query_id:String} OR initial_query_id = {query_id:String}) + GROUP BY query_id" +} + +print_border() { + echo "--" +} + +# each test case appends to this array +expected_results=() + +check_expectations() { + $CLICKHOUSE_CLIENT -q "SYSTEM FLUSH LOGS" + + for i in "${!expected_results[@]}"; do + read -a args <<< "${expected_results[$i]}" + if [ ${#args[@]} -eq 4 ]; then + check_convertion_to_two_level "${args[0]}" + fi + check_preallocated_elements "${args[@]}" + print_border + done +} + +# shellcheck source=./02151_hash_table_sizes_stats.testcases +source "$CURDIR"/02151_hash_table_sizes_stats.testcases + +test_one_thread_simple_group_by +test_one_thread_simple_group_by_with_limit +test_one_thread_simple_group_by_with_join_and_subquery +test_several_threads_simple_group_by_with_limit_single_level_ht +test_several_threads_simple_group_by_with_limit_two_level_ht +test_several_threads_simple_group_by_with_limit_and_rollup_single_level_ht +test_several_threads_simple_group_by_with_limit_and_rollup_two_level_ht +test_several_threads_simple_group_by_with_limit_and_cube_single_level_ht +test_several_threads_simple_group_by_with_limit_and_cube_two_level_ht + +check_expectations diff --git 
a/tests/queries/0_stateless/02155_dictionary_comment.sql b/tests/queries/0_stateless/02155_dictionary_comment.sql index e31d9d28366..30b85e16a7c 100644 --- a/tests/queries/0_stateless/02155_dictionary_comment.sql +++ b/tests/queries/0_stateless/02155_dictionary_comment.sql @@ -19,7 +19,7 @@ LAYOUT(DIRECT()); SELECT name, comment FROM system.dictionaries WHERE name == '02155_test_dictionary' AND database == currentDatabase(); -ALTER TABLE 02155_test_dictionary COMMENT COLUMN value 'value_column'; --{serverError 48} +ALTER TABLE 02155_test_dictionary COMMENT COLUMN value 'value_column'; --{serverError NOT_IMPLEMENTED} ALTER TABLE 02155_test_dictionary MODIFY COMMENT '02155_test_dictionary_comment_0'; SELECT name, comment FROM system.dictionaries WHERE name == '02155_test_dictionary' AND database == currentDatabase(); @@ -42,7 +42,7 @@ CREATE TABLE 02155_test_dictionary_view SELECT * FROM 02155_test_dictionary_view; -ALTER TABLE 02155_test_dictionary_view COMMENT COLUMN value 'value_column'; --{serverError 48} +ALTER TABLE 02155_test_dictionary_view COMMENT COLUMN value 'value_column'; --{serverError NOT_IMPLEMENTED} ALTER TABLE 02155_test_dictionary_view MODIFY COMMENT '02155_test_dictionary_view_comment_0'; SELECT name, comment FROM system.tables WHERE name == '02155_test_dictionary_view' AND database == currentDatabase(); diff --git a/tests/queries/0_stateless/02155_read_in_order_max_rows_to_read.sql b/tests/queries/0_stateless/02155_read_in_order_max_rows_to_read.sql index 4b47a860071..acd2379ff54 100644 --- a/tests/queries/0_stateless/02155_read_in_order_max_rows_to_read.sql +++ b/tests/queries/0_stateless/02155_read_in_order_max_rows_to_read.sql @@ -17,8 +17,8 @@ SELECT a FROM t_max_rows_to_read ORDER BY a LIMIT 5 SETTINGS max_rows_to_read = SELECT a FROM t_max_rows_to_read WHERE a = 10 OR a = 20 SETTINGS max_rows_to_read = 12; -SELECT a FROM t_max_rows_to_read ORDER BY a LIMIT 20 FORMAT Null SETTINGS max_rows_to_read = 12; -- { serverError 158 } -SELECT a FROM t_max_rows_to_read WHERE a > 10 ORDER BY a LIMIT 5 FORMAT Null SETTINGS max_rows_to_read = 12; -- { serverError 158 } -SELECT a FROM t_max_rows_to_read WHERE a = 10 OR a = 20 FORMAT Null SETTINGS max_rows_to_read = 4; -- { serverError 158 } +SELECT a FROM t_max_rows_to_read ORDER BY a LIMIT 20 FORMAT Null SETTINGS max_rows_to_read = 12; -- { serverError TOO_MANY_ROWS } +SELECT a FROM t_max_rows_to_read WHERE a > 10 ORDER BY a LIMIT 5 FORMAT Null SETTINGS max_rows_to_read = 12; -- { serverError TOO_MANY_ROWS } +SELECT a FROM t_max_rows_to_read WHERE a = 10 OR a = 20 FORMAT Null SETTINGS max_rows_to_read = 4; -- { serverError TOO_MANY_ROWS } DROP TABLE t_max_rows_to_read; diff --git a/tests/queries/0_stateless/02158_proportions_ztest.sql b/tests/queries/0_stateless/02158_proportions_ztest.sql index bda50b43a97..aee50e57f9d 100644 --- a/tests/queries/0_stateless/02158_proportions_ztest.sql +++ b/tests/queries/0_stateless/02158_proportions_ztest.sql @@ -10,4 +10,4 @@ DROP TABLE IF EXISTS proportions_ztest; SELECT NULL, proportionsZTest(257, 1048575, 1048575, 257, -inf, NULL), - proportionsZTest(1024, 1025, 2, 2, 'unpooled'); -- { serverError 43 } \ No newline at end of file + proportionsZTest(1024, 1025, 2, 2, 'unpooled'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } \ No newline at end of file diff --git a/tests/queries/0_stateless/02161_addressToLineWithInlines.sql b/tests/queries/0_stateless/02161_addressToLineWithInlines.sql index 78b414378f1..cf400ed34c5 100644 --- a/tests/queries/0_stateless/02161_addressToLineWithInlines.sql 
+++ b/tests/queries/0_stateless/02161_addressToLineWithInlines.sql @@ -1,7 +1,7 @@ -- Tags: no-tsan, no-asan, no-ubsan, no-msan, no-debug SET allow_introspection_functions = 0; -SELECT addressToLineWithInlines(1); -- { serverError 446 } +SELECT addressToLineWithInlines(1); -- { serverError FUNCTION_NOT_ALLOWED } SET allow_introspection_functions = 1; SET query_profiler_real_time_period_ns = 0; diff --git a/tests/queries/0_stateless/02165_h3_num_hexagons.sql b/tests/queries/0_stateless/02165_h3_num_hexagons.sql index 7ab48b3738b..9753d6daee9 100644 --- a/tests/queries/0_stateless/02165_h3_num_hexagons.sql +++ b/tests/queries/0_stateless/02165_h3_num_hexagons.sql @@ -16,4 +16,4 @@ SELECT h3NumHexagons(12); SELECT h3NumHexagons(13); SELECT h3NumHexagons(14); SELECT h3NumHexagons(15); -SELECT h3NumHexagons(16); -- { serverError 69 } +SELECT h3NumHexagons(16); -- { serverError ARGUMENT_OUT_OF_BOUND } diff --git a/tests/queries/0_stateless/02176_dict_get_has_implicit_key_cast.sql b/tests/queries/0_stateless/02176_dict_get_has_implicit_key_cast.sql index 54f5c12cdb4..fbc0990e49b 100644 --- a/tests/queries/0_stateless/02176_dict_get_has_implicit_key_cast.sql +++ b/tests/queries/0_stateless/02176_dict_get_has_implicit_key_cast.sql @@ -20,12 +20,12 @@ LAYOUT(DIRECT()); SELECT dictGet('02176_test_simple_key_dictionary', 'value', toUInt64(0)); SELECT dictGet('02176_test_simple_key_dictionary', 'value', toUInt8(0)); SELECT dictGet('02176_test_simple_key_dictionary', 'value', '0'); -SELECT dictGet('02176_test_simple_key_dictionary', 'value', [0]); --{serverError 43} +SELECT dictGet('02176_test_simple_key_dictionary', 'value', [0]); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} SELECT dictHas('02176_test_simple_key_dictionary', toUInt64(0)); SELECT dictHas('02176_test_simple_key_dictionary', toUInt8(0)); SELECT dictHas('02176_test_simple_key_dictionary', '0'); -SELECT dictHas('02176_test_simple_key_dictionary', [0]); --{serverError 43} +SELECT dictHas('02176_test_simple_key_dictionary', [0]); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} DROP DICTIONARY 02176_test_simple_key_dictionary; DROP TABLE 02176_test_simple_key_table; @@ -54,13 +54,13 @@ LAYOUT(COMPLEX_KEY_DIRECT()); SELECT dictGet('02176_test_complex_key_dictionary', 'value', tuple(toUInt64(0), '0')); SELECT dictGet('02176_test_complex_key_dictionary', 'value', tuple(toUInt8(0), '0')); SELECT dictGet('02176_test_complex_key_dictionary', 'value', tuple('0', '0')); -SELECT dictGet('02176_test_complex_key_dictionary', 'value', tuple([0], '0')); --{serverError 43} +SELECT dictGet('02176_test_complex_key_dictionary', 'value', tuple([0], '0')); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} SELECT dictGet('02176_test_complex_key_dictionary', 'value', tuple(toUInt64(0), 0)); SELECT dictHas('02176_test_complex_key_dictionary', tuple(toUInt64(0), '0')); SELECT dictHas('02176_test_complex_key_dictionary', tuple(toUInt8(0), '0')); SELECT dictHas('02176_test_complex_key_dictionary', tuple('0', '0')); -SELECT dictHas('02176_test_complex_key_dictionary', tuple([0], '0')); --{serverError 43} +SELECT dictHas('02176_test_complex_key_dictionary', tuple([0], '0')); --{serverError ILLEGAL_TYPE_OF_ARGUMENT} SELECT dictHas('02176_test_complex_key_dictionary', tuple(toUInt64(0), 0)); DROP DICTIONARY 02176_test_complex_key_dictionary; diff --git a/tests/queries/0_stateless/02177_sum_if_not_found.sql b/tests/queries/0_stateless/02177_sum_if_not_found.sql index c888f8b39aa..42ccb0ee5cf 100644 --- a/tests/queries/0_stateless/02177_sum_if_not_found.sql +++ 
b/tests/queries/0_stateless/02177_sum_if_not_found.sql @@ -1,7 +1,7 @@ SELECT sumIf(1, 0); SELECT SumIf(1, 0); SELECT sUmIf(1, 0); -SELECT sumIF(1, 0); -- { serverError 46 } +SELECT sumIF(1, 0); -- { serverError UNKNOWN_FUNCTION } DROP TABLE IF EXISTS data; DROP TABLE IF EXISTS agg; @@ -20,7 +20,7 @@ SELECT t, sumIF(n, 0) FROM data -GROUP BY t; -- { serverError 46} +GROUP BY t; -- { serverError UNKNOWN_FUNCTION} CREATE TABLE agg ENGINE = AggregatingMergeTree diff --git a/tests/queries/0_stateless/02179_map_cast_to_array.sql b/tests/queries/0_stateless/02179_map_cast_to_array.sql index 25b090c10b7..5720e4eb0b5 100644 --- a/tests/queries/0_stateless/02179_map_cast_to_array.sql +++ b/tests/queries/0_stateless/02179_map_cast_to_array.sql @@ -2,7 +2,7 @@ WITH map(1, 'Test') AS value, 'Array(Tuple(UInt64, String))' AS type SELECT value, cast(value, type), cast(materialize(value), type); WITH map(1, 'Test') AS value, 'Array(Tuple(UInt64, UInt64))' AS type -SELECT value, cast(value, type), cast(materialize(value), type); --{serverError 6} +SELECT value, cast(value, type), cast(materialize(value), type); --{serverError CANNOT_PARSE_TEXT} WITH map(1, '1234') AS value, 'Array(Tuple(UInt64, UInt64))' AS type SELECT value, cast(value, type), cast(materialize(value), type); diff --git a/tests/queries/0_stateless/02181_dictionary_attach_detach.sql b/tests/queries/0_stateless/02181_dictionary_attach_detach.sql index fb7a2aa71fb..1c30c5a47a9 100644 --- a/tests/queries/0_stateless/02181_dictionary_attach_detach.sql +++ b/tests/queries/0_stateless/02181_dictionary_attach_detach.sql @@ -19,8 +19,8 @@ SOURCE(CLICKHOUSE(TABLE '02181_test_table')) LAYOUT(HASHED()) LIFETIME(0); -DETACH TABLE 02181_test_dictionary; --{serverError 520} -ATTACH TABLE 02181_test_dictionary; --{serverError 80} +DETACH TABLE 02181_test_dictionary; --{serverError CANNOT_DETACH_DICTIONARY_AS_TABLE} +ATTACH TABLE 02181_test_dictionary; --{serverError INCORRECT_QUERY} DETACH DICTIONARY 02181_test_dictionary; ATTACH DICTIONARY 02181_test_dictionary; diff --git a/tests/queries/0_stateless/02181_sql_user_defined_functions_invalid_lambda.sql b/tests/queries/0_stateless/02181_sql_user_defined_functions_invalid_lambda.sql index c436394ab99..115f3b2f9c0 100644 --- a/tests/queries/0_stateless/02181_sql_user_defined_functions_invalid_lambda.sql +++ b/tests/queries/0_stateless/02181_sql_user_defined_functions_invalid_lambda.sql @@ -1,4 +1,4 @@ -CREATE FUNCTION 02181_invalid_lambda AS lambda(((x * 2) AS x_doubled) + x_doubled); --{serverError 1} -CREATE FUNCTION 02181_invalid_lambda AS lambda(x); --{serverError 1} -CREATE FUNCTION 02181_invalid_lambda AS lambda(); --{serverError 1} -CREATE FUNCTION 02181_invalid_lambda AS lambda(tuple(x)) --{serverError 1} +CREATE FUNCTION 02181_invalid_lambda AS lambda(((x * 2) AS x_doubled) + x_doubled); --{serverError UNSUPPORTED_METHOD} +CREATE FUNCTION 02181_invalid_lambda AS lambda(x); --{serverError UNSUPPORTED_METHOD} +CREATE FUNCTION 02181_invalid_lambda AS lambda(); --{serverError UNSUPPORTED_METHOD} +CREATE FUNCTION 02181_invalid_lambda AS lambda(tuple(x)) --{serverError UNSUPPORTED_METHOD} diff --git a/tests/queries/0_stateless/02183_dictionary_no_attributes.sql b/tests/queries/0_stateless/02183_dictionary_no_attributes.sql index bd3d73594f8..b9c9f1ba9c6 100644 --- a/tests/queries/0_stateless/02183_dictionary_no_attributes.sql +++ b/tests/queries/0_stateless/02183_dictionary_no_attributes.sql @@ -16,7 +16,7 @@ LIFETIME(0); SELECT 'FlatDictionary'; -SELECT dictGet('02183_flat_dictionary', 'value', 0); -- 
{serverError 36} +SELECT dictGet('02183_flat_dictionary', 'value', 0); -- {serverError BAD_ARGUMENTS} SELECT dictHas('02183_flat_dictionary', 0); SELECT dictHas('02183_flat_dictionary', 1); SELECT dictHas('02183_flat_dictionary', 2); diff --git a/tests/queries/0_stateless/02184_default_table_engine.sql b/tests/queries/0_stateless/02184_default_table_engine.sql index aff30eeea98..2c7ffbbced3 100644 --- a/tests/queries/0_stateless/02184_default_table_engine.sql +++ b/tests/queries/0_stateless/02184_default_table_engine.sql @@ -1,13 +1,13 @@ SET default_table_engine = 'None'; -CREATE TABLE table_02184 (x UInt8); --{serverError 119} +CREATE TABLE table_02184 (x UInt8); --{serverError ENGINE_REQUIRED} SET default_table_engine = 'Log'; CREATE TABLE table_02184 (x UInt8); SHOW CREATE TABLE table_02184; DROP TABLE table_02184; SET default_table_engine = 'MergeTree'; -CREATE TABLE table_02184 (x UInt8); --{serverError 42} +CREATE TABLE table_02184 (x UInt8); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} CREATE TABLE table_02184 (x UInt8, PRIMARY KEY (x)); SHOW CREATE TABLE table_02184; DROP TABLE table_02184; @@ -15,7 +15,7 @@ DROP TABLE table_02184; CREATE TABLE test_optimize_exception (date Date) PARTITION BY toYYYYMM(date) ORDER BY date; SHOW CREATE TABLE test_optimize_exception; DROP TABLE test_optimize_exception; -CREATE TABLE table_02184 (x UInt8) PARTITION BY x; --{serverError 36} +CREATE TABLE table_02184 (x UInt8) PARTITION BY x; --{serverError BAD_ARGUMENTS} CREATE TABLE table_02184 (x UInt8) ORDER BY x; SHOW CREATE TABLE table_02184; DROP TABLE table_02184; @@ -67,8 +67,8 @@ DROP TABLE t1; DROP TABLE t2; -CREATE DATABASE test_02184 ORDER BY kek; -- {serverError 80} -CREATE DATABASE test_02184 SETTINGS x=1; -- {serverError 115} +CREATE DATABASE test_02184 ORDER BY kek; -- {serverError INCORRECT_QUERY} +CREATE DATABASE test_02184 SETTINGS x=1; -- {serverError UNKNOWN_SETTING} CREATE TABLE table_02184 (x UInt8, y int, PRIMARY KEY (x)) ENGINE=MergeTree PRIMARY KEY y; -- {clientError 36} SET default_table_engine = 'MergeTree'; CREATE TABLE table_02184 (x UInt8, y int, PRIMARY KEY (x)) PRIMARY KEY y; -- {clientError 36} @@ -85,8 +85,8 @@ CREATE TEMPORARY TABLE tmp (n int); SHOW CREATE TEMPORARY TABLE tmp; CREATE TEMPORARY TABLE tmp1 (n int) ENGINE=Memory; CREATE TEMPORARY TABLE tmp2 (n int) ENGINE=Log; -CREATE TEMPORARY TABLE tmp2 (n int) ORDER BY n; -- {serverError 36} -CREATE TEMPORARY TABLE tmp2 (n int, PRIMARY KEY (n)); -- {serverError 36} +CREATE TEMPORARY TABLE tmp2 (n int) ORDER BY n; -- {serverError BAD_ARGUMENTS} +CREATE TEMPORARY TABLE tmp2 (n int, PRIMARY KEY (n)); -- {serverError BAD_ARGUMENTS} CREATE TABLE log (n int); SHOW CREATE log; @@ -100,9 +100,9 @@ DROP TABLE log1; DROP TABLE mem; SET default_table_engine = 'None'; -CREATE TABLE mem AS SELECT 1 as n; --{serverError 119} +CREATE TABLE mem AS SELECT 1 as n; --{serverError ENGINE_REQUIRED} SET default_table_engine = 'Memory'; -CREATE TABLE mem ORDER BY n AS SELECT 1 as n; -- {serverError 36} +CREATE TABLE mem ORDER BY n AS SELECT 1 as n; -- {serverError BAD_ARGUMENTS} SET default_table_engine = 'MergeTree'; CREATE TABLE mt ORDER BY n AS SELECT 1 as n; CREATE TABLE mem ENGINE=Memory AS SELECT 1 as n; diff --git a/tests/queries/0_stateless/02212_h3_get_pentagon_indexes.sql b/tests/queries/0_stateless/02212_h3_get_pentagon_indexes.sql index c7a72fed6bc..d4eab090ab2 100644 --- a/tests/queries/0_stateless/02212_h3_get_pentagon_indexes.sql +++ b/tests/queries/0_stateless/02212_h3_get_pentagon_indexes.sql @@ -23,7 +23,7 @@ INSERT 
INTO table1 VALUES(15); SELECT h3GetPentagonIndexes(resolution) AS indexes from table1 order by indexes; -SELECT h3GetPentagonIndexes(20) AS indexes; -- { serverError 69 } +SELECT h3GetPentagonIndexes(20) AS indexes; -- { serverError ARGUMENT_OUT_OF_BOUND } DROP TABLE table1; diff --git a/tests/queries/0_stateless/02212_h3_get_res0_indexes.sql b/tests/queries/0_stateless/02212_h3_get_res0_indexes.sql index e84f1f43964..648463f9d9d 100644 --- a/tests/queries/0_stateless/02212_h3_get_res0_indexes.sql +++ b/tests/queries/0_stateless/02212_h3_get_res0_indexes.sql @@ -1,6 +1,6 @@ -- Tags: no-fasttest SELECT h3GetRes0Indexes(); -SELECT h3GetRes0Indexes(3); -- { serverError 42 } +SELECT h3GetRes0Indexes(3); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } SELECT h3GetRes0Indexes() FROM system.numbers LIMIT 5; diff --git a/tests/queries/0_stateless/02223_h3_test_const_columns.sql b/tests/queries/0_stateless/02223_h3_test_const_columns.sql index 50ccfaaf173..8a2deef0eab 100644 --- a/tests/queries/0_stateless/02223_h3_test_const_columns.sql +++ b/tests/queries/0_stateless/02223_h3_test_const_columns.sql @@ -5,9 +5,9 @@ select h3ToParent(641573946153969375, arrayJoin([1,2])); SELECT round(h3HexAreaM2(arrayJoin([1,2])), 2); SELECT round(h3HexAreaKm2(arrayJoin([1,2])), 2); SELECT round(h3CellAreaM2(arrayJoin([579205133326352383,589753847883235327,594082350283882495])), 2); -SELECT NULL, toFloat64('-1'), -2147483648, h3CellAreaM2(arrayJoin([9223372036854775807, 65535, NULL])); -- { serverError 117 } +SELECT NULL, toFloat64('-1'), -2147483648, h3CellAreaM2(arrayJoin([9223372036854775807, 65535, NULL])); -- { serverError INCORRECT_DATA } SELECT round(h3CellAreaRads2(arrayJoin([579205133326352383,589753847883235327,594082350283882495])), 2); -SELECT NULL, toFloat64('-1'), -2147483648, h3CellAreaRads2(arrayJoin([9223372036854775807, 65535, NULL])); -- { serverError 117 } +SELECT NULL, toFloat64('-1'), -2147483648, h3CellAreaRads2(arrayJoin([9223372036854775807, 65535, NULL])); -- { serverError INCORRECT_DATA } SELECT h3GetResolution(arrayJoin([579205133326352383,589753847883235327,594082350283882495])); SELECT round(h3EdgeAngle(arrayJoin([0,1,2])), 2); SELECT round(h3EdgeLengthM(arrayJoin([0,1,2])), 2); @@ -30,5 +30,5 @@ SELECT round(h3ExactEdgeLengthKm(arrayJoin([1298057039473278975,1370114633511206 SELECT round(h3ExactEdgeLengthRads(arrayJoin([1298057039473278975,1370114633511206911,1442172227549134847,1514229821587062783])), 2); SELECT h3NumHexagons(arrayJoin([1,2,3])); SELECT h3Line(arrayJoin([stringToH3('85283473fffffff')]), arrayJoin([stringToH3('8528342bfffffff')])); -SELECT h3HexRing(arrayJoin([579205133326352383]), arrayJoin([toUInt16(1),toUInt16(2),toUInt16(3)])); -- { serverError 117 } +SELECT h3HexRing(arrayJoin([579205133326352383]), arrayJoin([toUInt16(1),toUInt16(2),toUInt16(3)])); -- { serverError INCORRECT_DATA } SELECT h3HexRing(arrayJoin([581276613233082367]), arrayJoin([toUInt16(0),toUInt16(1),toUInt16(2)])); diff --git a/tests/queries/0_stateless/02232_dist_insert_send_logs_level_hung.sh b/tests/queries/0_stateless/02232_dist_insert_send_logs_level_hung.sh index 734cef06214..618dc83c223 100755 --- a/tests/queries/0_stateless/02232_dist_insert_send_logs_level_hung.sh +++ b/tests/queries/0_stateless/02232_dist_insert_send_logs_level_hung.sh @@ -1,7 +1,8 @@ #!/usr/bin/env bash -# Tags: long, no-parallel -# Tag: no-parallel - to heavy -# Tag: long - to heavy +# Tags: long, no-parallel, disabled +# Tag: no-parallel - too heavy +# Tag: long - too heavy +# Tag: disabled - Always takes 4+ 
minutes in serial mode, which is too much to always run in CI # This is the regression test when remote peer send some logs for INSERT, # it is easy to archive using materialized views, with small block size. @@ -49,10 +50,10 @@ insert_client_opts=( timeout 250s $CLICKHOUSE_CLIENT "${client_opts[@]}" "${insert_client_opts[@]}" -q "insert into function remote('127.2', currentDatabase(), in_02232) select * from numbers(1e6)" # Kill underlying query of remote() to make KILL faster -# This test is reproducing very interesting bahaviour. +# This test is reproducing very interesting behaviour. # The block size is 1, so the secondary query creates InterpreterSelectQuery for each row due to pushing to the MV. # It works extremely slow, and the initial query produces new blocks and writes them to the socket much faster -then the secondary query can read and process them. Therefore, it fills network buffers in the kernel. +than the secondary query can read and process them. Therefore, it fills network buffers in the kernel. # Once a buffer in the kernel is full, send(...) blocks until the secondary query will finish processing data # that it already has in ReadBufferFromPocoSocket and call recv. # Or until the kernel will decide to resize the buffer (seems like it has non-trivial rules for that). diff --git a/tests/queries/0_stateless/02233_interpolate_1.sql b/tests/queries/0_stateless/02233_interpolate_1.sql index d589a18421b..36b7c4dbc6a 100644 --- a/tests/queries/0_stateless/02233_interpolate_1.sql +++ b/tests/queries/0_stateless/02233_interpolate_1.sql @@ -21,22 +21,22 @@ SELECT n, source, inter FROM ( # Test INTERPOLATE with incompatible const - should produce error SELECT n, source, inter FROM ( SELECT toFloat32(number % 10) AS n, 'original' AS source, number as inter FROM numbers(10) WHERE number % 3 = 1 -) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (inter AS 'inter'); -- { serverError 6 } +) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (inter AS 'inter'); -- { serverError CANNOT_PARSE_TEXT } # Test INTERPOLATE with incompatible expression - should produce error SELECT n, source, inter FROM ( SELECT toFloat32(number % 10) AS n, 'original' AS source, number as inter FROM numbers(10) WHERE number % 3 = 1 -) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (inter AS reverse(inter)); -- { serverError 44 } +) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (inter AS reverse(inter)); -- { serverError ILLEGAL_COLUMN } # Test INTERPOLATE with column from WITH FILL expression - should produce error SELECT n, source, inter FROM ( SELECT toFloat32(number % 10) AS n, 'original' AS source, number as inter FROM numbers(10) WHERE number % 3 = 1 -) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (n AS n); -- { serverError 475 } +) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (n AS n); -- { serverError INVALID_WITH_FILL_EXPRESSION } # Test INTERPOLATE with inconsistent column - should produce error SELECT n, source, inter FROM ( SELECT toFloat32(number % 10) AS n, 'original' AS source, number as inter FROM numbers(10) WHERE number % 3 = 1 -) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (inter AS source); -- { serverError 6, 32 } +) ORDER BY n WITH FILL FROM 0 TO 11.51 STEP 0.5 INTERPOLATE (inter AS source); -- { serverError CANNOT_PARSE_TEXT, ATTEMPT_TO_READ_AFTER_EOF } # Test INTERPOLATE with aliased column SELECT n, source, inter + 1 AS inter_p FROM ( diff --git a/tests/queries/0_stateless/02233_with_total_empty_chunk.sql 
b/tests/queries/0_stateless/02233_with_total_empty_chunk.sql index e1e8186ed76..d59319ac75e 100644 --- a/tests/queries/0_stateless/02233_with_total_empty_chunk.sql +++ b/tests/queries/0_stateless/02233_with_total_empty_chunk.sql @@ -1,3 +1,3 @@ SET allow_experimental_analyzer = 1; -SELECT (NULL, NULL, NULL, NULL, NULL, NULL, NULL) FROM numbers(0) GROUP BY number WITH TOTALS HAVING sum(number) <= arrayJoin([]) -- { serverError 59 }; +SELECT (NULL, NULL, NULL, NULL, NULL, NULL, NULL) FROM numbers(0) GROUP BY number WITH TOTALS HAVING sum(number) <= arrayJoin([]) -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER }; diff --git a/tests/queries/0_stateless/02242_make_date.sql b/tests/queries/0_stateless/02242_make_date.sql index 78feabfffb3..14d9dbdec91 100644 --- a/tests/queries/0_stateless/02242_make_date.sql +++ b/tests/queries/0_stateless/02242_make_date.sql @@ -17,14 +17,14 @@ select makeDate(cast('-1980.1' as Decimal(20,5)), 9, 18); select makeDate(cast(1980.1 as Float32), 9, 19); select makeDate(cast(-1980.1 as Float32), 9, 20); -select makeDate(cast(1980 as Date), 10, 30); -- { serverError 43 } -select makeDate(cast(-1980 as Date), 10, 30); -- { serverError 43 } -select makeDate(cast(1980 as Date32), 10, 30); -- { serverError 43 } -select makeDate(cast(-1980 as Date32), 10, 30); -- { serverError 43 } -select makeDate(cast(1980 as DateTime), 10, 30); -- { serverError 43 } -select makeDate(cast(-1980 as DateTime), 10, 30); -- { serverError 43 } -select makeDate(cast(1980 as DateTime64), 10, 30); -- { serverError 43 } -select makeDate(cast(-1980 as DateTime64), 10, 30); -- { serverError 43 } +select makeDate(cast(1980 as Date), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as Date), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(1980 as Date32), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as Date32), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(1980 as DateTime), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as DateTime), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(1980 as DateTime64), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as DateTime64), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select makeDate(0.0, 1, 2); select makeDate(1980, 15, 1); @@ -60,12 +60,12 @@ select makeDate(0xffffffff+2010,1,4); select makeDate(0x7fffffffffffffff+2010,1,3); select makeDate(0xffffffffffffffff+2010,1,4); -select makeDate('1980', '10', '20'); -- { serverError 43 } -select makeDate('-1980', 3, 17); -- { serverError 43 } +select makeDate('1980', '10', '20'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate('-1980', 3, 17); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -select makeDate('aa', 3, 24); -- { serverError 43 } -select makeDate(1994, 'aa', 24); -- { serverError 43 } -select makeDate(1984, 3, 'aa'); -- { serverError 43 } +select makeDate('aa', 3, 24); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(1994, 'aa', 24); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(1984, 3, 'aa'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select makeDate(True, 3, 24); select makeDate(1994, True, 24); @@ -78,8 +78,8 @@ select makeDate(NULL, 3, 4); select makeDate(1980, NULL, 4); select makeDate(1980, 3, NULL); -select makeDate(1980); -- { serverError 42 } -select makeDate(1980, 1, 1, 1); -- { serverError 42 } +select makeDate(1980); -- { 
serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select makeDate(1980, 1, 1, 1); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } select MAKEDATE(1980, 1, 1); select MAKEDATE(1980, 1); diff --git a/tests/queries/0_stateless/02242_make_date_mysql.sql b/tests/queries/0_stateless/02242_make_date_mysql.sql index 82d80579788..5070c78d4a4 100644 --- a/tests/queries/0_stateless/02242_make_date_mysql.sql +++ b/tests/queries/0_stateless/02242_make_date_mysql.sql @@ -13,18 +13,18 @@ select makeDate(cast('-1980.1' as Decimal(20,5)), 9); select makeDate(cast(1980.1 as Float32), 9); select makeDate(cast(-1980.1 as Float32), 9); -select makeDate(cast(1980 as Date), 10); -- { serverError 43 } -select makeDate(cast(-1980 as Date), 10); -- { serverError 43 } -select makeDate(cast(1980 as Date32), 10); -- { serverError 43 } -select makeDate(cast(-1980 as Date32), 10); -- { serverError 43 } -select makeDate(cast(1980 as DateTime), 10); -- { serverError 43 } -select makeDate(cast(-1980 as DateTime), 10); -- { serverError 43 } -select makeDate(cast(1980 as DateTime64), 10); -- { serverError 43 } -select makeDate(cast(-1980 as DateTime64), 10); -- { serverError 43 } -select makeDate('1980', '10'); -- { serverError 43 } -select makeDate('-1980', 3); -- { serverError 43 } -select makeDate('aa', 3); -- { serverError 43 } -select makeDate(1994, 'aa'); -- { serverError 43 } +select makeDate(cast(1980 as Date), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as Date), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(1980 as Date32), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as Date32), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(1980 as DateTime), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as DateTime), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(1980 as DateTime64), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(cast(-1980 as DateTime64), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate('1980', '10'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate('-1980', 3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate('aa', 3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate(1994, 'aa'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select makeDate(0, 1); select makeDate(19800, 12); diff --git a/tests/queries/0_stateless/02243_make_date32.sql b/tests/queries/0_stateless/02243_make_date32.sql index 9b0009b33a2..e6a319d31dd 100644 --- a/tests/queries/0_stateless/02243_make_date32.sql +++ b/tests/queries/0_stateless/02243_make_date32.sql @@ -17,14 +17,14 @@ select makeDate32(cast('-1980.1' as Decimal(20,5)), 9, 18); select makeDate32(cast(1980.1 as Float32), 9, 19); select makeDate32(cast(-1980.1 as Float32), 9, 20); -select makeDate32(cast(1980 as Date), 10, 30); -- { serverError 43 } -select makeDate32(cast(-1980 as Date), 10, 30); -- { serverError 43 } -select makeDate32(cast(1980 as Date32), 10, 30); -- { serverError 43 } -select makeDate32(cast(-1980 as Date32), 10, 30); -- { serverError 43 } -select makeDate32(cast(1980 as DateTime), 10, 30); -- { serverError 43 } -select makeDate32(cast(-1980 as DateTime), 10, 30); -- { serverError 43 } -select makeDate32(cast(1980 as DateTime64), 10, 30); -- { serverError 43 } -select makeDate32(cast(-1980 as DateTime64), 10, 30); -- { serverError 43 } +select makeDate32(cast(1980 as Date), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } 
+select makeDate32(cast(-1980 as Date), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(1980 as Date32), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(-1980 as Date32), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(1980 as DateTime), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(-1980 as DateTime), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(1980 as DateTime64), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(-1980 as DateTime64), 10, 30); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select makeDate32(0.0, 1, 2); select makeDate32(1980, 15, 1); @@ -59,12 +59,12 @@ select makeDate32(0xffffffff+2010,1,4); select makeDate32(0x7fffffffffffffff+2010,1,3); select makeDate32(0xffffffffffffffff+2010,1,4); -select makeDate32('1980', '10', '20'); -- { serverError 43 } -select makeDate32('-1980', 3, 17); -- { serverError 43 } +select makeDate32('1980', '10', '20'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32('-1980', 3, 17); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } -select makeDate32('aa', 3, 24); -- { serverError 43 } -select makeDate32(1994, 'aa', 24); -- { serverError 43 } -select makeDate32(1984, 3, 'aa'); -- { serverError 43 } +select makeDate32('aa', 3, 24); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(1994, 'aa', 24); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(1984, 3, 'aa'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select makeDate32(True, 3, 24); select makeDate32(1994, True, 24); @@ -77,8 +77,8 @@ select makeDate32(NULL, 3, 4); select makeDate32(1980, NULL, 4); select makeDate32(1980, 3, NULL); -select makeDate32(1980); -- { serverError 42 } -select makeDate32(1980, 1, 1, 1); -- { serverError 42 } +select makeDate32(1980); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +select makeDate32(1980, 1, 1, 1); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } select makeDate32(year, month, day) from (select NULL as year, 2 as month, 3 as day union all select 1984 as year, 2 as month, 3 as day) order by year, month, day; diff --git a/tests/queries/0_stateless/02243_make_date32_mysql.sql b/tests/queries/0_stateless/02243_make_date32_mysql.sql index 4a67dcd80de..dc2dd77d91e 100644 --- a/tests/queries/0_stateless/02243_make_date32_mysql.sql +++ b/tests/queries/0_stateless/02243_make_date32_mysql.sql @@ -13,18 +13,18 @@ select makeDate32(cast('-1980.1' as Decimal(20,5)), 9); select makeDate32(cast(1980.1 as Float32), 9); select makeDate32(cast(-1980.1 as Float32), 9); -select makeDate32(cast(1980 as Date), 10); -- { serverError 43 } -select makeDate32(cast(-1980 as Date), 10); -- { serverError 43 } -select makeDate32(cast(1980 as Date32), 10); -- { serverError 43 } -select makeDate32(cast(-1980 as Date32), 10); -- { serverError 43 } -select makeDate32(cast(1980 as DateTime), 10); -- { serverError 43 } -select makeDate32(cast(-1980 as DateTime), 10); -- { serverError 43 } -select makeDate32(cast(1980 as DateTime64), 10); -- { serverError 43 } -select makeDate32(cast(-1980 as DateTime64), 10); -- { serverError 43 } -select makeDate32('1980', '10'); -- { serverError 43 } -select makeDate32('-1980', 3); -- { serverError 43 } -select makeDate32('aa', 3); -- { serverError 43 } -select makeDate32(1994, 'aa'); -- { serverError 43 } +select makeDate32(cast(1980 as Date), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(-1980 as Date), 10); 
-- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(1980 as Date32), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(-1980 as Date32), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(1980 as DateTime), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(-1980 as DateTime), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(1980 as DateTime64), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(cast(-1980 as DateTime64), 10); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32('1980', '10'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32('-1980', 3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32('aa', 3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select makeDate32(1994, 'aa'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select makeDate32(0, 1); select makeDate32(19800, 12); diff --git a/tests/queries/0_stateless/02244_url_engine_headers_test.sql b/tests/queries/0_stateless/02244_url_engine_headers_test.sql index 6df01055289..c172e810119 100644 --- a/tests/queries/0_stateless/02244_url_engine_headers_test.sql +++ b/tests/queries/0_stateless/02244_url_engine_headers_test.sql @@ -1,12 +1,12 @@ -select * from url(url_with_headers, url='http://127.0.0.1:8123?query=select+12', format='RawBLOB'); -- { serverError 86 } +select * from url(url_with_headers, url='http://127.0.0.1:8123?query=select+12', format='RawBLOB'); -- { serverError RECEIVED_ERROR_FROM_REMOTE_IO_SERVER } select * from url(url_with_headers, url='http://127.0.0.1:8123?query=select+12', format='RawBLOB', headers('X-ClickHouse-Database'='default')); select * from url(url_with_headers, url='http://127.0.0.1:8123?query=select+12', format='RawBLOB', headers('X-ClickHouse-Database'='default', 'X-ClickHouse-Format'='JSONEachRow')); -select * from url(url_with_headers, url='http://127.0.0.1:8123?query=select+12', format='RawBLOB', headers('X-ClickHouse-Database'='kek')); -- { serverError 86 } +select * from url(url_with_headers, url='http://127.0.0.1:8123?query=select+12', format='RawBLOB', headers('X-ClickHouse-Database'='kek')); -- { serverError RECEIVED_ERROR_FROM_REMOTE_IO_SERVER } select * from url('http://127.0.0.1:8123?query=select+12', 'RawBLOB'); select * from url('http://127.0.0.1:8123?query=select+12', 'RawBLOB', headers('X-ClickHouse-Database'='default')); select * from url('http://127.0.0.1:8123?query=select+12', 'RawBLOB', headers('X-ClickHouse-Database'='default', 'X-ClickHouse-Format'='JSONEachRow')); -select * from url('http://127.0.0.1:8123?query=select+12', 'RawBLOB', headers('X-ClickHouse-Format'='JSONEachRow', 'X-ClickHouse-Database'='kek')); -- { serverError 86 } -select * from url('http://127.0.0.1:8123?query=select+12', 'RawBLOB', headers('X-ClickHouse-Format'='JSONEachRow', 'X-ClickHouse-Database'=1)); -- { serverError 36 } +select * from url('http://127.0.0.1:8123?query=select+12', 'RawBLOB', headers('X-ClickHouse-Format'='JSONEachRow', 'X-ClickHouse-Database'='kek')); -- { serverError RECEIVED_ERROR_FROM_REMOTE_IO_SERVER } +select * from url('http://127.0.0.1:8123?query=select+12', 'RawBLOB', headers('X-ClickHouse-Format'='JSONEachRow', 'X-ClickHouse-Database'=1)); -- { serverError BAD_ARGUMENTS } drop table if exists url; create table url (i String) engine=URL('http://127.0.0.1:8123?query=select+12', 'RawBLOB', headers('X-ClickHouse-Format'='JSONEachRow')); select * from url; diff --git a/tests/queries/0_stateless/02245_make_datetime64.sql 
b/tests/queries/0_stateless/02245_make_datetime64.sql index 71629ad8dff..45a058fc452 100644 --- a/tests/queries/0_stateless/02245_make_datetime64.sql +++ b/tests/queries/0_stateless/02245_make_datetime64.sql @@ -13,9 +13,9 @@ select toTypeName(cast(makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 7, 'CET') as select makeDateTime64(1900, 1, 1, 0, 0, 0, 0, 9, 'UTC'); select makeDateTime64(1899, 12, 31, 23, 59, 59, 999999999, 9, 'UTC'); select makeDateTime64(2299, 12, 31, 23, 59, 59, 99999999, 8, 'UTC'); -select makeDateTime64(2299, 12, 31, 23, 59, 59, 999999999, 9, 'UTC'); -- { serverError 407 } +select makeDateTime64(2299, 12, 31, 23, 59, 59, 999999999, 9, 'UTC'); -- { serverError DECIMAL_OVERFLOW } select makeDateTime64(2262, 4, 11, 23, 47, 16, 854775807, 9, 'UTC'); -select makeDateTime64(2262, 4, 11, 23, 47, 16, 854775808, 9, 'UTC'); -- { serverError 407 } +select makeDateTime64(2262, 4, 11, 23, 47, 16, 854775808, 9, 'UTC'); -- { serverError DECIMAL_OVERFLOW } select makeDateTime64(2262, 4, 11, 23, 47, 16, 85477581, 8, 'UTC'); select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 0, 'CET'); @@ -28,8 +28,8 @@ select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 6, 'CET'); select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 7, 'CET'); select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 8, 'CET'); select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 9, 'CET'); -select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 10, 'CET'); -- { serverError 69 } -select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, -1, 'CET'); -- { serverError 69 } +select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, 10, 'CET'); -- { serverError ARGUMENT_OUT_OF_BOUND } +select makeDateTime64(1991, 8, 24, 21, 4, 0, 1234, -1, 'CET'); -- { serverError ARGUMENT_OUT_OF_BOUND } select makeDateTime64(1984, 0, 1, 0, 0, 0, 0, 9, 'UTC'); select makeDateTime64(1984, 1, 0, 0, 0, 0, 0, 9, 'UTC'); @@ -89,4 +89,4 @@ select makeDateTime64(year, 1, 1, 1, 0, 0, 0, precision, timezone) from ( select 1984 as year, 5 as precision, 'UTC' as timezone union all select 1985 as year, 5 as precision, 'UTC' as timezone -); -- { serverError 44 } +); -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/02252_reset_non_existing_setting.sql b/tests/queries/0_stateless/02252_reset_non_existing_setting.sql index 362388c4a10..47865c7bbc8 100644 --- a/tests/queries/0_stateless/02252_reset_non_existing_setting.sql +++ b/tests/queries/0_stateless/02252_reset_non_existing_setting.sql @@ -7,7 +7,7 @@ CREATE TABLE most_ordinary_mt ENGINE = MergeTree() ORDER BY tuple(); -ALTER TABLE most_ordinary_mt RESET SETTING ttl; --{serverError 36} -ALTER TABLE most_ordinary_mt RESET SETTING allow_remote_fs_zero_copy_replication, xxx; --{serverError 36} +ALTER TABLE most_ordinary_mt RESET SETTING ttl; --{serverError BAD_ARGUMENTS} +ALTER TABLE most_ordinary_mt RESET SETTING allow_remote_fs_zero_copy_replication, xxx; --{serverError BAD_ARGUMENTS} DROP TABLE IF EXISTS most_ordinary_mt; diff --git a/tests/queries/0_stateless/02269_to_start_of_interval_overflow.sql b/tests/queries/0_stateless/02269_to_start_of_interval_overflow.sql index 84204834614..a3e03c7e89f 100644 --- a/tests/queries/0_stateless/02269_to_start_of_interval_overflow.sql +++ b/tests/queries/0_stateless/02269_to_start_of_interval_overflow.sql @@ -1,6 +1,6 @@ -select toStartOfInterval(toDateTime64('\0930-12-12 12:12:12.1234567', 3), toIntervalNanosecond(1024)); -- {serverError 407} +select toStartOfInterval(toDateTime64('\0930-12-12 12:12:12.1234567', 3), toIntervalNanosecond(1024)); -- {serverError 
DECIMAL_OVERFLOW} SELECT toDateTime64(-9223372036854775808, 1048575, toIntervalNanosecond(9223372036854775806), NULL), toStartOfInterval(toDateTime64(toIntervalNanosecond(toIntervalNanosecond(257), toDateTime64(toStartOfInterval(toDateTime64(NULL)))), '', 100), toIntervalNanosecond(toStartOfInterval(toDateTime64(toIntervalNanosecond(NULL), NULL)), -1)), - toStartOfInterval(toDateTime64('\0930-12-12 12:12:12.1234567', 3), toIntervalNanosecond(1024)); -- {serverError 407} + toStartOfInterval(toDateTime64('\0930-12-12 12:12:12.1234567', 3), toIntervalNanosecond(1024)); -- {serverError DECIMAL_OVERFLOW} diff --git a/tests/queries/0_stateless/02280_add_query_level_settings.sql b/tests/queries/0_stateless/02280_add_query_level_settings.sql index a44f8eb854e..2d4e2a9e6d5 100644 --- a/tests/queries/0_stateless/02280_add_query_level_settings.sql +++ b/tests/queries/0_stateless/02280_add_query_level_settings.sql @@ -6,11 +6,11 @@ CREATE TABLE table_for_alter ( ) ENGINE = MergeTree() ORDER BY id SETTINGS parts_to_throw_insert = 1, parts_to_delay_insert = 1; INSERT INTO table_for_alter VALUES (1, '1'); -INSERT INTO table_for_alter VALUES (2, '2'); -- { serverError 252 } +INSERT INTO table_for_alter VALUES (2, '2'); -- { serverError TOO_MANY_PARTS } INSERT INTO table_for_alter settings parts_to_throw_insert = 100, parts_to_delay_insert = 100 VALUES (2, '2'); -INSERT INTO table_for_alter VALUES (3, '3'); -- { serverError 252 } +INSERT INTO table_for_alter VALUES (3, '3'); -- { serverError TOO_MANY_PARTS } ALTER TABLE table_for_alter MODIFY SETTING parts_to_throw_insert = 100, parts_to_delay_insert = 100; diff --git a/tests/queries/0_stateless/02283_array_norm.sql b/tests/queries/0_stateless/02283_array_norm.sql index dcb5288a1ac..f48d88e3f50 100644 --- a/tests/queries/0_stateless/02283_array_norm.sql +++ b/tests/queries/0_stateless/02283_array_norm.sql @@ -34,13 +34,13 @@ SELECT id, L1Norm(materialize([5., 6.])) FROM vec1f; SELECT id, L1Norm(v), L2Norm(v), L2SquaredNorm(v), LpNorm(v, 2.7), LinfNorm(v) FROM vec1d; SELECT id, L1Norm(materialize([5., 6.])) FROM vec1d; -SELECT L1Norm(1, 2); -- { serverError 42 } +SELECT L1Norm(1, 2); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } -SELECT LpNorm([1,2]); -- { serverError 42 } -SELECT LpNorm([1,2], -3.4); -- { serverError 69 } -SELECT LpNorm([1,2], 'aa'); -- { serverError 43 } -SELECT LpNorm([1,2], [1]); -- { serverError 43 } -SELECT LpNorm([1,2], materialize(3.14)); -- { serverError 44 } +SELECT LpNorm([1,2]); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT LpNorm([1,2], -3.4); -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT LpNorm([1,2], 'aa'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT LpNorm([1,2], [1]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT LpNorm([1,2], materialize(3.14)); -- { serverError ILLEGAL_COLUMN } DROP TABLE vec1; DROP TABLE vec1f; diff --git a/tests/queries/0_stateless/02292_h3_unidirectional_funcs.sql b/tests/queries/0_stateless/02292_h3_unidirectional_funcs.sql index 4082671356e..7436b1ba930 100644 --- a/tests/queries/0_stateless/02292_h3_unidirectional_funcs.sql +++ b/tests/queries/0_stateless/02292_h3_unidirectional_funcs.sql @@ -23,7 +23,7 @@ SELECT h3GetUnidirectionalEdgesFromHexagon(stringToH3('85283473ffffff')); select h3GetUnidirectionalEdge(stringToH3('85283473fffffff'), stringToH3('85283477fffffff')); select h3GetUnidirectionalEdge(stringToH3('85283473fffffff'), stringToH3('85283473fffffff')); -SELECT h3GetUnidirectionalEdge(stringToH3('85283473ffffff'), stringToH3('852\03477fffffff')); -- { 
serverError 43 } +SELECT h3GetUnidirectionalEdge(stringToH3('85283473ffffff'), stringToH3('852\03477fffffff')); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT h3UnidirectionalEdgeIsValid(1248204388774707199) as edge; SELECT h3UnidirectionalEdgeIsValid(1248204388774707197) as edge; diff --git a/tests/queries/0_stateless/02293_h3_hex_ring.sql b/tests/queries/0_stateless/02293_h3_hex_ring.sql index 5651f5ce557..4d9f3699453 100644 --- a/tests/queries/0_stateless/02293_h3_hex_ring.sql +++ b/tests/queries/0_stateless/02293_h3_hex_ring.sql @@ -1,9 +1,9 @@ -- Tags: no-fasttest SELECT h3HexRing(581276613233082367, toUInt16(0)); -SELECT h3HexRing(579205132326352334, toUInt16(1)) as hexRing; -- { serverError 117 } -SELECT h3HexRing(581276613233082367, -1); -- { serverError 43 } -SELECT h3HexRing(581276613233082367, toUInt16(-1)); -- { serverError 12 } +SELECT h3HexRing(579205132326352334, toUInt16(1)) as hexRing; -- { serverError INCORRECT_DATA } +SELECT h3HexRing(581276613233082367, -1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT h3HexRing(581276613233082367, toUInt16(-1)); -- { serverError PARAMETER_OUT_OF_BOUND } DROP TABLE IF EXISTS h3_indexes; diff --git a/tests/queries/0_stateless/02293_h3_line.sql b/tests/queries/0_stateless/02293_h3_line.sql index 476587ebe7c..d0c5140810f 100644 --- a/tests/queries/0_stateless/02293_h3_line.sql +++ b/tests/queries/0_stateless/02293_h3_line.sql @@ -51,6 +51,6 @@ https://h3geo.org/docs/api/traversal SELECT length(h3Line(stringToH3(start), stringToH3(end))) FROM h3_indexes ORDER BY id; -SELECT h3Line(0xffffffffffffff, 0xffffffffffffff); -- { serverError 117 } +SELECT h3Line(0xffffffffffffff, 0xffffffffffffff); -- { serverError INCORRECT_DATA } DROP TABLE h3_indexes; diff --git a/tests/queries/0_stateless/02294_dictionaries_hierarchical_index.sql b/tests/queries/0_stateless/02294_dictionaries_hierarchical_index.sql index bc2a1020ab8..7904e612f48 100644 --- a/tests/queries/0_stateless/02294_dictionaries_hierarchical_index.sql +++ b/tests/queries/0_stateless/02294_dictionaries_hierarchical_index.sql @@ -16,7 +16,7 @@ CREATE DICTIONARY hierarchy_flat_dictionary_index PRIMARY KEY id SOURCE(CLICKHOUSE(TABLE 'test_hierarchy_source_table')) LAYOUT(FLAT()) -LIFETIME(0); -- {serverError 36 } +LIFETIME(0); -- {serverError BAD_ARGUMENTS } DROP DICTIONARY IF EXISTS hierarchy_flat_dictionary_index; CREATE DICTIONARY hierarchy_flat_dictionary_index diff --git a/tests/queries/0_stateless/02294_stringsearch_with_nonconst_needle.sql b/tests/queries/0_stateless/02294_stringsearch_with_nonconst_needle.sql index 6dd4c4f396d..4afa7e315ea 100644 --- a/tests/queries/0_stateless/02294_stringsearch_with_nonconst_needle.sql +++ b/tests/queries/0_stateless/02294_stringsearch_with_nonconst_needle.sql @@ -40,7 +40,7 @@ drop table if exists non_const_needle; -- rudimentary tests of "multiSearchFirstIndex()", "multiSearchAnyPosition()" and "multiSearchFirstIndex()" functions select 'MULTISEARCHANY'; -select multiSearchAny(materialize('Hello World'), materialize([])); -- { serverError 43 } +select multiSearchAny(materialize('Hello World'), materialize([])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select 0 = multiSearchAny('Hello World', CAST([], 'Array(String)')); select 1 = multiSearchAny(materialize('Hello World'), materialize(['orld'])); select 0 = multiSearchAny(materialize('Hello World'), materialize(['Hallo', 'Welt'])); @@ -50,7 +50,7 @@ select 1 = multiSearchAnyUTF8(materialize('Hello World £'), materialize(['WORLD select 1 = 
multiSearchAnyCaseInsensitiveUTF8(materialize('Hello World £'), materialize(['WORLD'])); select 'MULTISEARCHFIRSTINDEX'; -select multiSearchFirstIndex(materialize('Hello World'), materialize([])); -- { serverError 43 } +select multiSearchFirstIndex(materialize('Hello World'), materialize([])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select 0 = multiSearchFirstIndex('Hello World', CAST([], 'Array(String)')); select 1 = multiSearchFirstIndex(materialize('Hello World'), materialize(['orld'])); select 0 = multiSearchFirstIndex(materialize('Hello World'), materialize(['Hallo', 'Welt'])); @@ -60,7 +60,7 @@ select 2 = multiSearchFirstIndexUTF8(materialize('Hello World £'), materialize( select 1 = multiSearchFirstIndexCaseInsensitiveUTF8(materialize('Hello World £'), materialize(['WORLD'])); select 'MULTISEARCHFIRSTPOSITION'; -select multiSearchFirstPosition(materialize('Hello World'), materialize([])); -- { serverError 43 } +select multiSearchFirstPosition(materialize('Hello World'), materialize([])); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } select 0 = multiSearchFirstPosition('Hello World', CAST([], 'Array(String)')); select 8 = multiSearchFirstPosition(materialize('Hello World'), materialize(['orld'])); select 0 = multiSearchFirstPosition(materialize('Hello World'), materialize(['Hallo', 'Welt'])); diff --git a/tests/queries/0_stateless/02302_s3_file_pruning.reference b/tests/queries/0_stateless/02302_s3_file_pruning.reference index 7e69bdd55db..52de703714a 100644 --- a/tests/queries/0_stateless/02302_s3_file_pruning.reference +++ b/tests/queries/0_stateless/02302_s3_file_pruning.reference @@ -2,7 +2,7 @@ drop table if exists test_02302; create table test_02302 (a UInt64) engine = S3(s3_conn, filename='test_02302_{_partition_id}', format=Parquet) partition by a; insert into test_02302 select number from numbers(10) settings s3_truncate_on_insert=1; -select * from test_02302; -- { serverError 48 } +select * from test_02302; -- { serverError NOT_IMPLEMENTED } drop table test_02302; set max_rows_to_read = 1; -- Test s3 table function with glob diff --git a/tests/queries/0_stateless/02302_s3_file_pruning.sql b/tests/queries/0_stateless/02302_s3_file_pruning.sql index 647dfd5e5eb..58afb682fac 100644 --- a/tests/queries/0_stateless/02302_s3_file_pruning.sql +++ b/tests/queries/0_stateless/02302_s3_file_pruning.sql @@ -7,7 +7,7 @@ SET merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injectio drop table if exists test_02302; create table test_02302 (a UInt64) engine = S3(s3_conn, filename='test_02302_{_partition_id}', format=Parquet) partition by a; insert into test_02302 select number from numbers(10) settings s3_truncate_on_insert=1; -select * from test_02302; -- { serverError 48 } +select * from test_02302; -- { serverError NOT_IMPLEMENTED } drop table test_02302; set max_rows_to_read = 1; diff --git a/tests/queries/0_stateless/02310_uuid_v7.reference b/tests/queries/0_stateless/02310_uuid_v7.reference index ca4150bded2..1fa98ca522a 100644 --- a/tests/queries/0_stateless/02310_uuid_v7.reference +++ b/tests/queries/0_stateless/02310_uuid_v7.reference @@ -1,18 +1,3 @@ --- generateUUIDv7 -- -UUID -7 -2 -0 -0 -1 --- generateUUIDv7ThreadMonotonic -- -UUID -7 -2 -0 -0 -1 --- generateUUIDv7NonMonotonic -- UUID 7 2 diff --git a/tests/queries/0_stateless/02310_uuid_v7.sql b/tests/queries/0_stateless/02310_uuid_v7.sql index 0f12de07d20..e1aa3189d93 100644 --- a/tests/queries/0_stateless/02310_uuid_v7.sql +++ b/tests/queries/0_stateless/02310_uuid_v7.sql @@ -1,23 +1,8 @@ -SELECT '-- 
generateUUIDv7 --'; +-- Tests function generateUUIDv7 + SELECT toTypeName(generateUUIDv7()); SELECT substring(hex(generateUUIDv7()), 13, 1); -- check version bits SELECT bitAnd(bitShiftRight(toUInt128(generateUUIDv7()), 62), 3); -- check variant bits SELECT generateUUIDv7(1) = generateUUIDv7(2); SELECT generateUUIDv7() = generateUUIDv7(1); SELECT generateUUIDv7(1) = generateUUIDv7(1); - -SELECT '-- generateUUIDv7ThreadMonotonic --'; -SELECT toTypeName(generateUUIDv7ThreadMonotonic()); -SELECT substring(hex(generateUUIDv7ThreadMonotonic()), 13, 1); -- check version bits -SELECT bitAnd(bitShiftRight(toUInt128(generateUUIDv7ThreadMonotonic()), 62), 3); -- check variant bits -SELECT generateUUIDv7ThreadMonotonic(1) = generateUUIDv7ThreadMonotonic(2); -SELECT generateUUIDv7ThreadMonotonic() = generateUUIDv7ThreadMonotonic(1); -SELECT generateUUIDv7ThreadMonotonic(1) = generateUUIDv7ThreadMonotonic(1); - -SELECT '-- generateUUIDv7NonMonotonic --'; -SELECT toTypeName(generateUUIDv7NonMonotonic()); -SELECT substring(hex(generateUUIDv7NonMonotonic()), 13, 1); -- check version bits -SELECT bitAnd(bitShiftRight(toUInt128(generateUUIDv7NonMonotonic()), 62), 3); -- check variant bits -SELECT generateUUIDv7NonMonotonic(1) = generateUUIDv7NonMonotonic(2); -SELECT generateUUIDv7NonMonotonic() = generateUUIDv7NonMonotonic(1); -SELECT generateUUIDv7NonMonotonic(1) = generateUUIDv7NonMonotonic(1); diff --git a/tests/queries/0_stateless/02311_system_zookeeper_insert.sql b/tests/queries/0_stateless/02311_system_zookeeper_insert.sql index e1c42278086..8f183608713 100644 --- a/tests/queries/0_stateless/02311_system_zookeeper_insert.sql +++ b/tests/queries/0_stateless/02311_system_zookeeper_insert.sql @@ -31,13 +31,13 @@ insert into system.zookeeper (name, path, value) SELECT name, '/' || currentData SELECT * FROM (SELECT path, name, value FROM system.zookeeper ORDER BY path, name) WHERE path LIKE '/' || currentDatabase() || '/2-insert-test%'; -- test exceptions -insert into system.zookeeper (name, value) values ('abc', 'y'); -- { serverError 36 } -insert into system.zookeeper (path, value) values ('a/b/c', 'y'); -- { serverError 36 } -insert into system.zookeeper (name, version) values ('abc', 111); -- { serverError 44 } -insert into system.zookeeper (name, versionxyz) values ('abc', 111); -- { serverError 16 } -insert into system.zookeeper (name, path, value) values ('a/b/c', '/', 'y'); -- { serverError 36 } -insert into system.zookeeper (name, path, value) values ('/', '/a/b/c', 'z'); -- { serverError 36 } -insert into system.zookeeper (name, path, value) values ('', '/', 'y'); -- { serverError 36 } -insert into system.zookeeper (name, path, value) values ('abc', 
'/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/a
bc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc', 'y'); -- { serverError 36 } +insert into system.zookeeper (name, value) values ('abc', 'y'); -- { serverError BAD_ARGUMENTS } +insert into system.zookeeper (path, value) values ('a/b/c', 'y'); -- { serverError BAD_ARGUMENTS } +insert into system.zookeeper (name, version) values ('abc', 111); -- { serverError ILLEGAL_COLUMN } +insert into system.zookeeper (name, versionxyz) values ('abc', 111); -- { serverError NO_SUCH_COLUMN_IN_TABLE } +insert into system.zookeeper (name, path, value) values ('a/b/c', '/', 'y'); -- { serverError BAD_ARGUMENTS } +insert into system.zookeeper (name, path, value) values ('/', '/a/b/c', 'z'); -- { serverError BAD_ARGUMENTS } +insert into system.zookeeper (name, path, value) values ('', '/', 'y'); -- { serverError BAD_ARGUMENTS } +insert into system.zookeeper (name, path, value) values ('abc', '/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/ab
c/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc/abc', 'y'); -- { serverError BAD_ARGUMENTS } drop table if exists test_zkinsert; diff --git a/tests/queries/0_stateless/02317_like_with_trailing_escape.sql b/tests/queries/0_stateless/02317_like_with_trailing_escape.sql index a5017e920c2..521b4a16f4a 100644 --- a/tests/queries/0_stateless/02317_like_with_trailing_escape.sql +++ b/tests/queries/0_stateless/02317_like_with_trailing_escape.sql @@ -5,9 +5,9 @@ CREATE TABLE tab (haystack String, pattern String) engine = MergeTree() ORDER BY INSERT INTO tab VALUES ('haystack', 'pattern\\'); -- const pattern -SELECT haystack LIKE 'pattern\\' from tab; -- { serverError 25 } +SELECT haystack LIKE 'pattern\\' from tab; -- { serverError CANNOT_PARSE_ESCAPE_SEQUENCE } -- non-const pattern -SELECT haystack LIKE pattern from tab; -- { serverError 25 } +SELECT haystack LIKE pattern from tab; -- { serverError CANNOT_PARSE_ESCAPE_SEQUENCE } DROP TABLE IF EXISTS tab; diff --git a/tests/queries/0_stateless/02319_dict_get_check_arguments_size.sql b/tests/queries/0_stateless/02319_dict_get_check_arguments_size.sql index 089d783a123..e1d1ab9fa73 100644 --- a/tests/queries/0_stateless/02319_dict_get_check_arguments_size.sql +++ b/tests/queries/0_stateless/02319_dict_get_check_arguments_size.sql @@ -19,9 +19,9 @@ SOURCE(CLICKHOUSE(TABLE 'dictionary_source_table')) LIFETIME(0); SELECT dictGet('test_dictionary', 'value', 0); -SELECT dictGet('test_dictionary', 'value', 0, 'DefaultValue'); --{serverError 42} +SELECT dictGet('test_dictionary', 'value', 0, 'DefaultValue'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} SELECT dictGetOrDefault('test_dictionary', 'value', 1, 
'DefaultValue'); -SELECT dictGetOrDefault('test_dictionary', 'value', 1, 'DefaultValue', 1); --{serverError 42} +SELECT dictGetOrDefault('test_dictionary', 'value', 1, 'DefaultValue', 1); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} DROP DICTIONARY test_dictionary; @@ -51,9 +51,9 @@ RANGE(MIN start MAX end) LIFETIME(0); SELECT dictGet('range_hashed_dictionary', 'value', 0, toUInt64(4)); -SELECT dictGet('range_hashed_dictionary', 'value', 4, toUInt64(6), 'DefaultValue'); --{serverError 42} +SELECT dictGet('range_hashed_dictionary', 'value', 4, toUInt64(6), 'DefaultValue'); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} SELECT dictGetOrDefault('range_hashed_dictionary', 'value', 1, toUInt64(6), 'DefaultValue'); -SELECT dictGetOrDefault('range_hashed_dictionary', 'value', 1, toUInt64(6), 'DefaultValue', 1); --{serverError 42} +SELECT dictGetOrDefault('range_hashed_dictionary', 'value', 1, toUInt64(6), 'DefaultValue', 1); --{serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH} DROP DICTIONARY range_hashed_dictionary; DROP TABLE dictionary_source_table; diff --git a/tests/queries/0_stateless/02319_timeslots_dt64.sql b/tests/queries/0_stateless/02319_timeslots_dt64.sql index 3d8f8a22e5a..a6838b8b638 100644 --- a/tests/queries/0_stateless/02319_timeslots_dt64.sql +++ b/tests/queries/0_stateless/02319_timeslots_dt64.sql @@ -2,8 +2,8 @@ SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.12', 2, 'UTC'), toDecimal64(1 SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.233', 3, 'UTC'), toDecimal64(10000.12, 2), toDecimal64(634.1, 1)); SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.3456', 4, 'UTC'), toDecimal64(600, 0), toDecimal64(30, 0)); -SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.23', 2, 'UTC')); -- { serverError 42 } -SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.345', 3, 'UTC'), toDecimal64(62.3, 1), toDecimal64(12.34, 2), 'one more'); -- { serverError 42 } -SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.456', 3, 'UTC'), 'wrong argument'); -- { serverError 43 } -SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.123', 3, 'UTC'), toDecimal64(600, 0), 'wrong argument'); -- { serverError 43 } -SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.1232', 4, 'UTC'), toDecimal64(600, 0), toDecimal64(0, 0)); -- { serverError 44 } \ No newline at end of file +SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.23', 2, 'UTC')); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.345', 3, 'UTC'), toDecimal64(62.3, 1), toDecimal64(12.34, 2), 'one more'); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.456', 3, 'UTC'), 'wrong argument'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.123', 3, 'UTC'), toDecimal64(600, 0), 'wrong argument'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT timeSlots(toDateTime64('2000-01-02 03:04:05.1232', 4, 'UTC'), toDecimal64(600, 0), toDecimal64(0, 0)); -- { serverError ILLEGAL_COLUMN } \ No newline at end of file diff --git a/tests/queries/0_stateless/02337_analyzer_columns_basic.sql b/tests/queries/0_stateless/02337_analyzer_columns_basic.sql index 76f9f8b25e4..167eecc6fb8 100644 --- a/tests/queries/0_stateless/02337_analyzer_columns_basic.sql +++ b/tests/queries/0_stateless/02337_analyzer_columns_basic.sql @@ -30,8 +30,8 @@ INSERT INTO test_table VALUES (0, 'Value'); SELECT 'Table access without table name qualification'; -SELECT test_id FROM test_table; -- { serverError 47 } -SELECT test_id FROM 
test_unknown_table; -- { serverError 60 } +SELECT test_id FROM test_table; -- { serverError UNKNOWN_IDENTIFIER } +SELECT test_id FROM test_unknown_table; -- { serverError UNKNOWN_TABLE } DESCRIBE (SELECT id FROM test_table); SELECT id FROM test_table; diff --git a/tests/queries/0_stateless/02337_base58.sql b/tests/queries/0_stateless/02337_base58.sql index 416f975ecf6..bc1b2c301e5 100644 --- a/tests/queries/0_stateless/02337_base58.sql +++ b/tests/queries/0_stateless/02337_base58.sql @@ -1,8 +1,8 @@ -- Tags: no-fasttest SELECT base58Encode('Hold my beer...'); -SELECT base58Encode('Hold my beer...', 'Second arg'); -- { serverError 42 } -SELECT base58Decode('Hold my beer...'); -- { serverError 36 } +SELECT base58Encode('Hold my beer...', 'Second arg'); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } +SELECT base58Decode('Hold my beer...'); -- { serverError BAD_ARGUMENTS } SELECT base58Decode(encoded) FROM (SELECT base58Encode(val) as encoded FROM (select arrayJoin(['', 'f', 'fo', 'foo', 'foob', 'fooba', 'foobar', 'Hello world!']) val)); SELECT tryBase58Decode(encoded) FROM (SELECT base58Encode(val) as encoded FROM (select arrayJoin(['', 'f', 'fo', 'foo', 'foob', 'fooba', 'foobar', 'Hello world!']) val)); diff --git a/tests/queries/0_stateless/02341_analyzer_aliases_basics.sql b/tests/queries/0_stateless/02341_analyzer_aliases_basics.sql index 467073fc4e8..9f21db8e659 100644 --- a/tests/queries/0_stateless/02341_analyzer_aliases_basics.sql +++ b/tests/queries/0_stateless/02341_analyzer_aliases_basics.sql @@ -12,7 +12,7 @@ SELECT 1 AS x, x, x + 1; SELECT x, x + 1, 1 AS x; SELECT x, 1 + (2 + (3 AS x)); -SELECT a AS b, b AS a; -- { serverError 174 } +SELECT a AS b, b AS a; -- { serverError CYCLIC_ALIASES } DROP TABLE IF EXISTS test_table; CREATE TABLE test_table @@ -30,8 +30,8 @@ SELECT id_1, value_1, id as id_1, value as value_1 FROM test_table; WITH value_1 as value_2, id_1 as id_2, id AS id_1, value AS value_1 SELECT id_2, value_2 FROM test_table; -SELECT (id + b) AS id, id as b FROM test_table; -- { serverError 174 } -SELECT (1 + b + 1 + id) AS id, b as c, id as b FROM test_table; -- { serverError 174 } +SELECT (id + b) AS id, id as b FROM test_table; -- { serverError CYCLIC_ALIASES } +SELECT (1 + b + 1 + id) AS id, b as c, id as b FROM test_table; -- { serverError CYCLIC_ALIASES } SELECT 'Alias conflict with identifier inside expression'; diff --git a/tests/queries/0_stateless/02343_analyzer_column_transformers_strict.sql b/tests/queries/0_stateless/02343_analyzer_column_transformers_strict.sql index 98ee7bc8f58..7e323c570b8 100644 --- a/tests/queries/0_stateless/02343_analyzer_column_transformers_strict.sql +++ b/tests/queries/0_stateless/02343_analyzer_column_transformers_strict.sql @@ -10,9 +10,9 @@ CREATE TABLE test_table INSERT INTO test_table VALUES (0, 'Value'); SELECT * EXCEPT (id) FROM test_table; -SELECT * EXCEPT STRICT (id, value1) FROM test_table; -- { serverError 36 } +SELECT * EXCEPT STRICT (id, value1) FROM test_table; -- { serverError BAD_ARGUMENTS } SELECT * REPLACE STRICT (1 AS id, 2 AS value) FROM test_table; -SELECT * REPLACE STRICT (1 AS id, 2 AS value_1) FROM test_table; -- { serverError 36 } +SELECT * REPLACE STRICT (1 AS id, 2 AS value_1) FROM test_table; -- { serverError BAD_ARGUMENTS } DROP TABLE IF EXISTS test_table; diff --git a/tests/queries/0_stateless/02343_analyzer_lambdas.sql b/tests/queries/0_stateless/02343_analyzer_lambdas.sql index 25928acb2c3..80fa47fc325 100644 --- a/tests/queries/0_stateless/02343_analyzer_lambdas.sql +++ 
b/tests/queries/0_stateless/02343_analyzer_lambdas.sql @@ -49,11 +49,11 @@ WITH x -> * AS lambda SELECT lambda(1); WITH x -> * AS lambda SELECT lambda(1) FROM test_table; WITH cast(tuple(1), 'Tuple (value UInt64)') AS compound_value SELECT arrayMap(x -> compound_value.*, [1,2,3]); -WITH cast(tuple(1, 1), 'Tuple (value_1 UInt64, value_2 UInt64)') AS compound_value SELECT arrayMap(x -> compound_value.*, [1,2,3]); -- { serverError 1 } +WITH cast(tuple(1, 1), 'Tuple (value_1 UInt64, value_2 UInt64)') AS compound_value SELECT arrayMap(x -> compound_value.*, [1,2,3]); -- { serverError UNSUPPORTED_METHOD } WITH cast(tuple(1, 1), 'Tuple (value_1 UInt64, value_2 UInt64)') AS compound_value SELECT arrayMap(x -> plus(compound_value.*), [1,2,3]); WITH cast(tuple(1), 'Tuple (value UInt64)') AS compound_value SELECT id, test_table.* APPLY x -> compound_value.* FROM test_table; -WITH cast(tuple(1, 1), 'Tuple (value_1 UInt64, value_2 UInt64)') AS compound_value SELECT id, test_table.* APPLY x -> compound_value.* FROM test_table; -- { serverError 1 } +WITH cast(tuple(1, 1), 'Tuple (value_1 UInt64, value_2 UInt64)') AS compound_value SELECT id, test_table.* APPLY x -> compound_value.* FROM test_table; -- { serverError UNSUPPORTED_METHOD } WITH cast(tuple(1, 1), 'Tuple (value_1 UInt64, value_2 UInt64)') AS compound_value SELECT id, test_table.* APPLY x -> plus(compound_value.*) FROM test_table; SELECT 'Lambda untuple'; diff --git a/tests/queries/0_stateless/02344_analyzer_multiple_aliases_for_expression.sql b/tests/queries/0_stateless/02344_analyzer_multiple_aliases_for_expression.sql index cd1bca8285b..ee02b79cc32 100644 --- a/tests/queries/0_stateless/02344_analyzer_multiple_aliases_for_expression.sql +++ b/tests/queries/0_stateless/02344_analyzer_multiple_aliases_for_expression.sql @@ -14,14 +14,14 @@ SELECT id AS value, id AS value FROM test_table; WITH x -> x + 1 AS lambda, x -> x + 1 AS lambda SELECT lambda(1); SELECT (SELECT 1) AS subquery, (SELECT 1) AS subquery; -SELECT 1 AS value, 2 AS value; -- { serverError 179 } -SELECT plus(1, 1) AS value, 2 AS value; -- { serverError 179 } -SELECT (SELECT 1) AS subquery, 1 AS subquery; -- { serverError 179 } -WITH x -> x + 1 AS lambda, x -> x + 2 AS lambda SELECT lambda(1); -- { serverError 179 } -WITH x -> x + 1 AS lambda SELECT (SELECT 1) AS lambda; -- { serverError 179 } -WITH x -> x + 1 AS lambda SELECT 1 AS lambda; -- { serverError 179 } -SELECT id AS value, value AS value FROM test_table; -- { serverError 179 } -SELECT id AS value_1, value AS value_1 FROM test_table; -- { serverError 179 } -SELECT id AS value, (id + 1) AS value FROM test_table; -- { serverError 179 } +SELECT 1 AS value, 2 AS value; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +SELECT plus(1, 1) AS value, 2 AS value; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +SELECT (SELECT 1) AS subquery, 1 AS subquery; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +WITH x -> x + 1 AS lambda, x -> x + 2 AS lambda SELECT lambda(1); -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +WITH x -> x + 1 AS lambda SELECT (SELECT 1) AS lambda; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +WITH x -> x + 1 AS lambda SELECT 1 AS lambda; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +SELECT id AS value, value AS value FROM test_table; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +SELECT id AS value_1, value AS value_1 FROM test_table; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS } +SELECT id AS value, (id + 1) AS value FROM test_table; -- { serverError MULTIPLE_EXPRESSIONS_FOR_ALIAS 
} DROP TABLE test_table; diff --git a/tests/queries/0_stateless/02345_implicit_transaction.sql b/tests/queries/0_stateless/02345_implicit_transaction.sql index b0cb4ab6305..ee2e0a07c3e 100644 --- a/tests/queries/0_stateless/02345_implicit_transaction.sql +++ b/tests/queries/0_stateless/02345_implicit_transaction.sql @@ -6,7 +6,7 @@ CREATE MATERIALIZED VIEW landing_to_target TO target AS SELECT n + throwIf(n == 3333) FROM landing; -INSERT INTO landing SELECT * FROM numbers(10000); -- { serverError 395 } +INSERT INTO landing SELECT * FROM numbers(10000); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } SELECT 'no_transaction_landing', count() FROM landing; SELECT 'no_transaction_target', count() FROM target; @@ -15,40 +15,40 @@ TRUNCATE TABLE target; BEGIN TRANSACTION; -INSERT INTO landing SELECT * FROM numbers(10000); -- { serverError 395 } +INSERT INTO landing SELECT * FROM numbers(10000); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } ROLLBACK; SELECT 'after_transaction_landing', count() FROM landing; SELECT 'after_transaction_target', count() FROM target; -- Same but using implicit_transaction -INSERT INTO landing SETTINGS implicit_transaction=True SELECT * FROM numbers(10000); -- { serverError 395 } +INSERT INTO landing SETTINGS implicit_transaction=True SELECT * FROM numbers(10000); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } SELECT 'after_implicit_txn_in_query_settings_landing', count() FROM landing; SELECT 'after_implicit_txn_in_query_settings_target', count() FROM target; -- Same but using implicit_transaction in a session SET implicit_transaction=True; -INSERT INTO landing SELECT * FROM numbers(10000); -- { serverError 395 } +INSERT INTO landing SELECT * FROM numbers(10000); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } SET implicit_transaction=False; SELECT 'after_implicit_txn_in_session_landing', count() FROM landing; SELECT 'after_implicit_txn_in_session_target', count() FROM target; -- Reading from incompatible sources with implicit_transaction works the same way as with normal transactions: -- Currently reading from system tables inside a transaction is Not implemented: -SELECT name, value, changed FROM system.settings where name = 'implicit_transaction' SETTINGS implicit_transaction=True; -- { serverError 48 } +SELECT name, value, changed FROM system.settings where name = 'implicit_transaction' SETTINGS implicit_transaction=True; -- { serverError NOT_IMPLEMENTED } -- Verify that you don't have to manually close transactions with implicit_transaction SET implicit_transaction=True; -SELECT throwIf(number == 0) FROM numbers(100); -- { serverError 395 } -SELECT throwIf(number == 0) FROM numbers(100); -- { serverError 395 } -SELECT throwIf(number == 0) FROM numbers(100); -- { serverError 395 } -SELECT throwIf(number == 0) FROM numbers(100); -- { serverError 395 } +SELECT throwIf(number == 0) FROM numbers(100); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } +SELECT throwIf(number == 0) FROM numbers(100); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } +SELECT throwIf(number == 0) FROM numbers(100); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } +SELECT throwIf(number == 0) FROM numbers(100); -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } SET implicit_transaction=False; -- implicit_transaction is ignored when inside a transaction (no recursive transaction error) BEGIN TRANSACTION; SELECT 'inside_txn_and_implicit', 1 SETTINGS implicit_transaction=True; -SELECT throwIf(number == 0) FROM numbers(100) SETTINGS 
implicit_transaction=True; -- { serverError 395 } +SELECT throwIf(number == 0) FROM numbers(100) SETTINGS implicit_transaction=True; -- { serverError FUNCTION_THROW_IF_VALUE_IS_NON_ZERO } ROLLBACK; SELECT 'inside_txn_and_implicit', 1 SETTINGS implicit_transaction=True; diff --git a/tests/queries/0_stateless/02346_fulltext_index_search.sql b/tests/queries/0_stateless/02346_fulltext_index_search.sql index 3c172bfdaf7..62cd6073842 100644 --- a/tests/queries/0_stateless/02346_fulltext_index_search.sql +++ b/tests/queries/0_stateless/02346_fulltext_index_search.sql @@ -266,4 +266,4 @@ CREATE TABLE tab(k UInt64, s String, INDEX af(s) TYPE full_text(3, 123)) SELECT number, format('{},{},{},{}', hex(12345678), hex(87654321), hex(number/17 + 5), hex(13579012)) as s - FROM numbers(1024); -- { serverError 80 } + FROM numbers(1024); -- { serverError INCORRECT_QUERY } diff --git a/tests/queries/0_stateless/02353_translate.sql b/tests/queries/0_stateless/02353_translate.sql index a7059ec85a7..f6f40c4265d 100644 --- a/tests/queries/0_stateless/02353_translate.sql +++ b/tests/queries/0_stateless/02353_translate.sql @@ -9,5 +9,5 @@ SELECT translateUTF8(toString(number), '1234567890', 'ዩय𐑿𐐏নՅðй¿ SELECT translate('abc', '', ''); SELECT translateUTF8('abc', '', ''); -SELECT translate('abc', 'Ááéíóúôè', 'aaeiouoe'); -- { serverError 36 } -SELECT translateUTF8('abc', 'efg', ''); -- { serverError 36 } +SELECT translate('abc', 'Ááéíóúôè', 'aaeiouoe'); -- { serverError BAD_ARGUMENTS } +SELECT translateUTF8('abc', 'efg', ''); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02357_file_default_value.sql b/tests/queries/0_stateless/02357_file_default_value.sql index 008fc4edb1f..070b868c7f9 100644 --- a/tests/queries/0_stateless/02357_file_default_value.sql +++ b/tests/queries/0_stateless/02357_file_default_value.sql @@ -1,3 +1,3 @@ -SELECT file('nonexistent.txt'); -- { serverError 107 } +SELECT file('nonexistent.txt'); -- { serverError FILE_DOESNT_EXIST } SELECT file('nonexistent.txt', 'default'); SELECT file('nonexistent.txt', NULL); diff --git a/tests/queries/0_stateless/02364_multiSearch_function_family.sql b/tests/queries/0_stateless/02364_multiSearch_function_family.sql index 99690e1545e..65ad3a7ed02 100644 --- a/tests/queries/0_stateless/02364_multiSearch_function_family.sql +++ b/tests/queries/0_stateless/02364_multiSearch_function_family.sql @@ -525,7 +525,7 @@ select multiSearchAllPositions(materialize('string'), 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', -'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'str']); -- { serverError 42 } +'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'str']); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } select multiSearchFirstIndex(materialize('string'), ['o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 
'o', @@ -535,4 +535,4 @@ select multiSearchFirstIndex(materialize('string'), 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', -'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'str']); -- { serverError 42 } +'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'o', 'str']); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } diff --git a/tests/queries/0_stateless/02366_kql_tabular.sql b/tests/queries/0_stateless/02366_kql_tabular.sql index 1a3d1ed92eb..f4c2de2b160 100644 --- a/tests/queries/0_stateless/02366_kql_tabular.sql +++ b/tests/queries/0_stateless/02366_kql_tabular.sql @@ -31,7 +31,7 @@ print '-- Query has second Column selection --'; Customers | project FirstName,LastName,Occupation | take 3 | project FirstName,LastName; print '-- Query has second Column selection with extra column --'; -Customers| project FirstName,LastName,Occupation | take 3 | project FirstName,LastName,Education;-- { serverError 47 } +Customers| project FirstName,LastName,Occupation | take 3 | project FirstName,LastName,Education;-- { serverError UNKNOWN_IDENTIFIER } print '-- Query with desc sort --'; Customers | project FirstName | take 5 | sort by FirstName desc; @@ -89,5 +89,5 @@ StormEvents | where startswith "W" | summarize Count=count() by State; -- { clie SET max_query_size = 55; SET dialect='kusto'; -Customers | where Education contains 'degree' | order by LastName; -- { serverError 62 } +Customers | where Education contains 'degree' | order by LastName; -- { serverError SYNTAX_ERROR } SET max_query_size=262144; diff --git a/tests/queries/0_stateless/02366_with_fill_date.sql b/tests/queries/0_stateless/02366_with_fill_date.sql index aca57b127af..baaec92deb5 100644 --- a/tests/queries/0_stateless/02366_with_fill_date.sql +++ b/tests/queries/0_stateless/02366_with_fill_date.sql @@ -1,5 +1,5 @@ SELECT toDate('2022-02-01') AS d1 FROM numbers(18) AS number -ORDER BY d1 ASC WITH FILL FROM toDateTime('2022-02-01') TO toDateTime('2022-07-01') STEP toIntervalMonth(1); -- { serverError 475 } +ORDER BY d1 ASC WITH FILL FROM toDateTime('2022-02-01') TO toDateTime('2022-07-01') STEP toIntervalMonth(1); -- { serverError INVALID_WITH_FILL_EXPRESSION } diff --git a/tests/queries/0_stateless/02369_analyzer_array_join_function.sql b/tests/queries/0_stateless/02369_analyzer_array_join_function.sql index 9a9939d2a2f..e60ec7e71a9 100644 --- a/tests/queries/0_stateless/02369_analyzer_array_join_function.sql +++ b/tests/queries/0_stateless/02369_analyzer_array_join_function.sql @@ -22,9 +22,9 @@ SELECT '--'; SELECT arrayMap(x -> arrayJoin([1, 2, 3]), [1, 2, 3]); -SELECT arrayMap(x -> arrayJoin(x), [[1, 2, 3]]); -- { serverError 36 } +SELECT arrayMap(x -> arrayJoin(x), [[1, 2, 3]]); -- { serverError BAD_ARGUMENTS } -SELECT arrayMap(x -> arrayJoin(cast(x, 'Array(UInt8)')), [[1, 2, 3]]); -- { serverError 36 } +SELECT arrayMap(x -> arrayJoin(cast(x, 'Array(UInt8)')), [[1, 2, 3]]); -- { serverError 
BAD_ARGUMENTS } SELECT '--'; diff --git a/tests/queries/0_stateless/02370_analyzer_in_function.sql b/tests/queries/0_stateless/02370_analyzer_in_function.sql index a7128ced449..7287c94deda 100644 --- a/tests/queries/0_stateless/02370_analyzer_in_function.sql +++ b/tests/queries/0_stateless/02370_analyzer_in_function.sql @@ -17,7 +17,7 @@ SELECT 1 IN [1, 2]; SELECT (1, 1) IN [(1, 1), (1, 2)]; SELECT (1, 1) IN [(1, 2), (1, 2)]; -SELECT (1, 2) IN 1; -- { serverError 43 } -SELECT (1, 2) IN [1]; -- { serverError 124 } -SELECT (1, 2) IN (((1, 2), (1, 2)), ((1, 2), (1, 2))); -- { serverError 43 } -SELECT (1, 2) IN [((1, 2), (1, 2)), ((1, 2), (1, 2))]; -- { serverError 43 } +SELECT (1, 2) IN 1; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT (1, 2) IN [1]; -- { serverError INCORRECT_ELEMENT_OF_SET } +SELECT (1, 2) IN (((1, 2), (1, 2)), ((1, 2), (1, 2))); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT (1, 2) IN [((1, 2), (1, 2)), ((1, 2), (1, 2))]; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/02371_analyzer_join_cross.sql b/tests/queries/0_stateless/02371_analyzer_join_cross.sql index 17388de68ab..3624a1d2282 100644 --- a/tests/queries/0_stateless/02371_analyzer_join_cross.sql +++ b/tests/queries/0_stateless/02371_analyzer_join_cross.sql @@ -70,9 +70,9 @@ SELECT t1.id, test_table_join_1.id, t1.value, test_table_join_1.value, t2.id, te t3.id, test_table_join_3.id, t3.value, test_table_join_3.value FROM test_table_join_1 AS t1, test_table_join_2 AS t2, test_table_join_3 AS t3; -SELECT id FROM test_table_join_1, test_table_join_2; -- { serverError 207 } +SELECT id FROM test_table_join_1, test_table_join_2; -- { serverError AMBIGUOUS_IDENTIFIER } -SELECT value FROM test_table_join_1, test_table_join_2; -- { serverError 207 } +SELECT value FROM test_table_join_1, test_table_join_2; -- { serverError AMBIGUOUS_IDENTIFIER } DROP TABLE test_table_join_1; DROP TABLE test_table_join_2; diff --git a/tests/queries/0_stateless/02372_analyzer_join.reference b/tests/queries/0_stateless/02372_analyzer_join.reference index b8a658106ff..eefcb1e50dc 100644 --- a/tests/queries/0_stateless/02372_analyzer_join.reference +++ b/tests/queries/0_stateless/02372_analyzer_join.reference @@ -26,8 +26,8 @@ SELECT t1.value, t2.value FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 ON t1.id = t2.id; Join_1_Value_0 Join_2_Value_0 Join_1_Value_1 Join_2_Value_1 -SELECT id FROM test_table_join_1 INNER JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } -SELECT value FROM test_table_join_1 INNER JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } +SELECT id FROM test_table_join_1 INNER JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } +SELECT value FROM test_table_join_1 INNER JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } SELECT 'JOIN ON with conditions'; JOIN ON with conditions SELECT t1.id, t1.value, t2.id, t2.value @@ -94,8 +94,8 @@ FROM test_table_join_1 AS t1 LEFT JOIN test_table_join_2 AS t2 ON t1.id = t2.id; Join_1_Value_0 Join_2_Value_0 Join_1_Value_1 Join_2_Value_1 Join_1_Value_2 -SELECT id FROM test_table_join_1 LEFT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } -SELECT value FROM test_table_join_1 LEFT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } +SELECT id FROM 
test_table_join_1 LEFT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } +SELECT value FROM test_table_join_1 LEFT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } SELECT 'JOIN ON with conditions'; JOIN ON with conditions SELECT t1.id, t1.value, t2.id, t2.value @@ -171,8 +171,8 @@ FROM test_table_join_1 AS t1 RIGHT JOIN test_table_join_2 AS t2 ON t1.id = t2.id Join_1_Value_0 Join_2_Value_0 Join_1_Value_1 Join_2_Value_1 Join_2_Value_3 -SELECT id FROM test_table_join_1 RIGHT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } -SELECT value FROM test_table_join_1 RIGHT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } +SELECT id FROM test_table_join_1 RIGHT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } +SELECT value FROM test_table_join_1 RIGHT JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } SELECT 'JOIN ON with conditions'; JOIN ON with conditions SELECT t1.id, t1.value, t2.id, t2.value @@ -252,8 +252,8 @@ Join_1_Value_0 Join_2_Value_0 Join_1_Value_1 Join_2_Value_1 Join_1_Value_2 Join_2_Value_3 -SELECT id FROM test_table_join_1 FULL JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } -SELECT value FROM test_table_join_1 FULL JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } +SELECT id FROM test_table_join_1 FULL JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } +SELECT value FROM test_table_join_1 FULL JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } SELECT 'JOIN ON with conditions'; JOIN ON with conditions SELECT t1.id, t1.value, t2.id, t2.value diff --git a/tests/queries/0_stateless/02372_analyzer_join.sql.j2 b/tests/queries/0_stateless/02372_analyzer_join.sql.j2 index f6032a96b33..facf4dc018b 100644 --- a/tests/queries/0_stateless/02372_analyzer_join.sql.j2 +++ b/tests/queries/0_stateless/02372_analyzer_join.sql.j2 @@ -62,9 +62,9 @@ SELECT '--'; SELECT t1.value, t2.value FROM test_table_join_1 AS t1 {{ join_type }} JOIN test_table_join_2 AS t2 ON t1.id = t2.id; -SELECT id FROM test_table_join_1 {{ join_type }} JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } +SELECT id FROM test_table_join_1 {{ join_type }} JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } -SELECT value FROM test_table_join_1 {{ join_type }} JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError 207 } +SELECT value FROM test_table_join_1 {{ join_type }} JOIN test_table_join_2 ON test_table_join_1.id = test_table_join_2.id; -- { serverError AMBIGUOUS_IDENTIFIER } SELECT 'JOIN ON with conditions'; diff --git a/tests/queries/0_stateless/02372_now_in_block.sql b/tests/queries/0_stateless/02372_now_in_block.sql index 815f74e5845..aee4572ce8d 100644 --- a/tests/queries/0_stateless/02372_now_in_block.sql +++ b/tests/queries/0_stateless/02372_now_in_block.sql @@ -1,4 +1,4 @@ SELECT count() FROM (SELECT DISTINCT nowInBlock(), nowInBlock('Pacific/Pitcairn') FROM system.numbers LIMIT 2); -SELECT nowInBlock(1); -- { serverError 43 } +SELECT nowInBlock(1); -- { 
serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT nowInBlock(NULL) IS NULL; SELECT nowInBlock('UTC', 'UTC'); -- { serverError TOO_MANY_ARGUMENTS_FOR_FUNCTION } diff --git a/tests/queries/0_stateless/02374_analyzer_array_join.reference b/tests/queries/0_stateless/02374_analyzer_array_join.reference index 6dd384c7d9c..ad7750228d6 100644 --- a/tests/queries/0_stateless/02374_analyzer_array_join.reference +++ b/tests/queries/0_stateless/02374_analyzer_array_join.reference @@ -45,7 +45,13 @@ SELECT id, value, value_1, value_2 FROM test_table ARRAY JOIN [[1, 2, 3]] AS val 0 Value [1,2,3] 1 0 Value [1,2,3] 2 0 Value [1,2,3] 3 -SELECT 1 AS value FROM test_table ARRAY JOIN [1,2,3] AS value; -- { serverError 179 } +SELECT 1 AS value FROM test_table ARRAY JOIN [1,2,3] AS value; +1 +1 +1 +1 +1 +1 SELECT 'ARRAY JOIN with column'; ARRAY JOIN with column SELECT id, value, test_table.value_array FROM test_table ARRAY JOIN value_array; @@ -84,7 +90,13 @@ SELECT id, value, value_array AS value_array_array_alias FROM test_table ARRAY J 0 Value [4,5,6] SELECT '--'; -- -SELECT id AS value FROM test_table ARRAY JOIN value_array AS value; -- { serverError 179 } +SELECT id AS value FROM test_table ARRAY JOIN value_array AS value; +0 +0 +0 +0 +0 +0 SELECT '--'; -- SELECT id, value, value_array AS value_array_array_alias, value_array_array_alias_element FROM test_table ARRAY JOIN value_array_array_alias AS value_array_array_alias_element; @@ -120,3 +132,7 @@ WHERE NOT ignore(elem) GROUP BY sum(ignore(ignore(ignore(1., 1, 36, 8, 8), ignore(52, 37, 37, '03147_parquet_memory_tracking.parquet', 37, 37, toUInt256(37), 37, 37, toNullable(37), 37, 37), 1., 1, 36, 8, 8), emptyArrayToSingle(arrayMap(x -> toString(x), arrayMap(x -> nullIf(x, 2), arrayJoin([[1]])))))) IGNORE NULLS, modulo(toLowCardinality('03147_parquet_memory_tracking.parquet'), number, toLowCardinality(3)); -- { serverError UNKNOWN_IDENTIFIER } +[1,2] 1 +[1,2] 2 +1 +2 diff --git a/tests/queries/0_stateless/02374_analyzer_array_join.sql b/tests/queries/0_stateless/02374_analyzer_array_join.sql index bc4bb6616c1..8c26df1806e 100644 --- a/tests/queries/0_stateless/02374_analyzer_array_join.sql +++ b/tests/queries/0_stateless/02374_analyzer_array_join.sql @@ -33,7 +33,7 @@ SELECT '--'; SELECT id, value, value_1, value_2 FROM test_table ARRAY JOIN [[1, 2, 3]] AS value_1 ARRAY JOIN value_1 AS value_2; -SELECT 1 AS value FROM test_table ARRAY JOIN [1,2,3] AS value; -- { serverError 179 } +SELECT 1 AS value FROM test_table ARRAY JOIN [1,2,3] AS value; SELECT 'ARRAY JOIN with column'; @@ -53,7 +53,7 @@ SELECT id, value, value_array AS value_array_array_alias FROM test_table ARRAY J SELECT '--'; -SELECT id AS value FROM test_table ARRAY JOIN value_array AS value; -- { serverError 179 } +SELECT id AS value FROM test_table ARRAY JOIN value_array AS value; SELECT '--'; @@ -80,3 +80,6 @@ GROUP BY -- { echoOff } DROP TABLE test_table; + +select [1, 2] as arr, x from system.one array join arr as x; +select x + 1 as x from (select [number] as arr from numbers(2)) as s array join arr as x; diff --git a/tests/queries/0_stateless/02379_analyzer_subquery_depth.sql b/tests/queries/0_stateless/02379_analyzer_subquery_depth.sql index c2109f543eb..5699a15aead 100644 --- a/tests/queries/0_stateless/02379_analyzer_subquery_depth.sql +++ b/tests/queries/0_stateless/02379_analyzer_subquery_depth.sql @@ -1,4 +1,4 @@ SET allow_experimental_analyzer = 1; -SELECT (SELECT a FROM (SELECT 1 AS a)) SETTINGS max_subquery_depth = 1; -- { serverError 162 } +SELECT (SELECT a FROM (SELECT 1 AS a)) 
SETTINGS max_subquery_depth = 1; -- { serverError TOO_DEEP_SUBQUERIES } SELECT (SELECT a FROM (SELECT 1 AS a)) SETTINGS max_subquery_depth = 2; diff --git a/tests/queries/0_stateless/02382_analyzer_matcher_join_using.reference b/tests/queries/0_stateless/02382_analyzer_matcher_join_using.reference index f3c57eb2889..4076b5696fc 100644 --- a/tests/queries/0_stateless/02382_analyzer_matcher_join_using.reference +++ b/tests/queries/0_stateless/02382_analyzer_matcher_join_using.reference @@ -3,7 +3,7 @@ SELECT * FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 USING (id) ORDER BY id, t1.value; 0 Join_1_Value_0 Join_2_Value_0 1 Join_1_Value_1 Join_2_Value_1 -SELECT * FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 USING (id, id, id) ORDER BY id, t1.value; -- { serverError 36 } +SELECT * FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 USING (id, id, id) ORDER BY id, t1.value; -- { serverError BAD_ARGUMENTS } SELECT '--'; -- SELECT * FROM test_table_join_1 AS t1 LEFT JOIN test_table_join_2 AS t2 USING (id) ORDER BY id, t1.value; diff --git a/tests/queries/0_stateless/02382_analyzer_matcher_join_using.sql b/tests/queries/0_stateless/02382_analyzer_matcher_join_using.sql index 25d493dc422..7983b05a69e 100644 --- a/tests/queries/0_stateless/02382_analyzer_matcher_join_using.sql +++ b/tests/queries/0_stateless/02382_analyzer_matcher_join_using.sql @@ -37,7 +37,7 @@ INSERT INTO test_table_join_3 VALUES (4, 'Join_3_Value_4'); SELECT * FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 USING (id) ORDER BY id, t1.value; -SELECT * FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 USING (id, id, id) ORDER BY id, t1.value; -- { serverError 36 } +SELECT * FROM test_table_join_1 AS t1 INNER JOIN test_table_join_2 AS t2 USING (id, id, id) ORDER BY id, t1.value; -- { serverError BAD_ARGUMENTS } SELECT '--'; diff --git a/tests/queries/0_stateless/02384_analyzer_dict_get_join_get.sql b/tests/queries/0_stateless/02384_analyzer_dict_get_join_get.sql index ff6e417d756..f4619f20765 100644 --- a/tests/queries/0_stateless/02384_analyzer_dict_get_join_get.sql +++ b/tests/queries/0_stateless/02384_analyzer_dict_get_join_get.sql @@ -30,7 +30,7 @@ SELECT dictGet(test_dictionary, 'value', toUInt64(0)); WITH 'test_dictionary' AS dictionary SELECT dictGet(dictionary, 'value', toUInt64(0)); -WITH 'invalid_dictionary' AS dictionary SELECT dictGet(dictionary, 'value', toUInt64(0)); -- { serverError 36 } +WITH 'invalid_dictionary' AS dictionary SELECT dictGet(dictionary, 'value', toUInt64(0)); -- { serverError BAD_ARGUMENTS } DROP DICTIONARY test_dictionary; DROP TABLE test_table; @@ -54,6 +54,6 @@ SELECT joinGet(test_table_join, 'value', toUInt64(0)); WITH 'test_table_join' AS join_table SELECT joinGet(join_table, 'value', toUInt64(0)); -WITH 'invalid_test_table_join' AS join_table SELECT joinGet(join_table, 'value', toUInt64(0)); -- { serverError 60 } +WITH 'invalid_test_table_join' AS join_table SELECT joinGet(join_table, 'value', toUInt64(0)); -- { serverError UNKNOWN_TABLE } DROP TABLE test_table_join; diff --git a/tests/queries/0_stateless/02384_decrypt_bad_arguments.sql b/tests/queries/0_stateless/02384_decrypt_bad_arguments.sql index d29768558c4..7a3042513bc 100644 --- a/tests/queries/0_stateless/02384_decrypt_bad_arguments.sql +++ b/tests/queries/0_stateless/02384_decrypt_bad_arguments.sql @@ -1,2 +1,2 @@ -- Tags: no-fasttest -SELECT decrypt('aes-128-gcm', [1024, 65535, NULL, NULL, 9223372036854775807, 1048576, NULL], 'text', 'key', 'IV'); -- { 
serverError 43 } +SELECT decrypt('aes-128-gcm', [1024, 65535, NULL, NULL, 9223372036854775807, 1048576, NULL], 'text', 'key', 'IV'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/02385_analyzer_aliases_compound_expression.sql b/tests/queries/0_stateless/02385_analyzer_aliases_compound_expression.sql index 1a195bbfffe..861ada9623a 100644 --- a/tests/queries/0_stateless/02385_analyzer_aliases_compound_expression.sql +++ b/tests/queries/0_stateless/02385_analyzer_aliases_compound_expression.sql @@ -6,7 +6,7 @@ SELECT '--'; WITH (x -> x + 1) AS lambda SELECT lambda(1); -WITH (x -> x + 1) AS lambda SELECT lambda.nested(1); -- { serverError 36 } +WITH (x -> x + 1) AS lambda SELECT lambda.nested(1); -- { serverError BAD_ARGUMENTS } SELECT '--'; @@ -16,6 +16,6 @@ SELECT '--'; SELECT * FROM t1 AS t2, (SELECT 1) AS t1; -SELECT * FROM (SELECT 1) AS t1, t1.nested AS t2; -- { serverError 36 } +SELECT * FROM (SELECT 1) AS t1, t1.nested AS t2; -- { serverError BAD_ARGUMENTS } -SELECT * FROM t1.nested AS t2, (SELECT 1) AS t1; -- { serverError 36 } +SELECT * FROM t1.nested AS t2, (SELECT 1) AS t1; -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02385_profile_events_overflow.sql b/tests/queries/0_stateless/02385_profile_events_overflow.sql index 518efab6d48..9006241dd9c 100644 --- a/tests/queries/0_stateless/02385_profile_events_overflow.sql +++ b/tests/queries/0_stateless/02385_profile_events_overflow.sql @@ -9,7 +9,7 @@ SELECT max(x) - min(x) FROM t; TRUNCATE TABLE t; INSERT INTO t SELECT value FROM system.events WHERE event = 'OverflowThrow'; -SELECT count() FROM system.numbers SETTINGS max_rows_to_read = 1, read_overflow_mode = 'throw'; -- { serverError 158 } +SELECT count() FROM system.numbers SETTINGS max_rows_to_read = 1, read_overflow_mode = 'throw'; -- { serverError TOO_MANY_ROWS } INSERT INTO t SELECT value FROM system.events WHERE event = 'OverflowThrow'; SELECT max(x) - min(x) FROM t; diff --git a/tests/queries/0_stateless/02388_analyzer_recursive_lambda.sql b/tests/queries/0_stateless/02388_analyzer_recursive_lambda.sql index 6fc8ff2aae0..9fd2f73703d 100644 --- a/tests/queries/0_stateless/02388_analyzer_recursive_lambda.sql +++ b/tests/queries/0_stateless/02388_analyzer_recursive_lambda.sql @@ -1,5 +1,5 @@ SET allow_experimental_analyzer = 1; -WITH x -> plus(lambda(1), x) AS lambda SELECT lambda(1048576); -- { serverError 1 }; +WITH x -> plus(lambda(1), x) AS lambda SELECT lambda(1048576); -- { serverError UNSUPPORTED_METHOD }; -WITH lambda(lambda(plus(x, x, -1)), tuple(x), x + 2147483646) AS lambda, x -> plus(lambda(1), x, 2) AS lambda SELECT 1048576, lambda(1048576); -- { serverError 1 }; +WITH lambda(lambda(plus(x, x, -1)), tuple(x), x + 2147483646) AS lambda, x -> plus(lambda(1), x, 2) AS lambda SELECT 1048576, lambda(1048576); -- { serverError UNSUPPORTED_METHOD }; diff --git a/tests/queries/0_stateless/02388_conversion_from_string_with_datetime64_to_date_and_date32.sql b/tests/queries/0_stateless/02388_conversion_from_string_with_datetime64_to_date_and_date32.sql index b1f905993b4..4fa2b024dc2 100644 --- a/tests/queries/0_stateless/02388_conversion_from_string_with_datetime64_to_date_and_date32.sql +++ b/tests/queries/0_stateless/02388_conversion_from_string_with_datetime64_to_date_and_date32.sql @@ -17,17 +17,17 @@ SELECT toDate('2022-08-22T01:02:03.123456'); SELECT toDate32('2022-08-22T01:02:03.123456'); -SELECT toDate('2022-08-22+01:02:03'); -- { serverError 6 } -SELECT toDate32('2022-08-22+01:02:03'); -- { serverError 6 } 
+SELECT toDate('2022-08-22+01:02:03'); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDate32('2022-08-22+01:02:03'); -- { serverError CANNOT_PARSE_TEXT } -SELECT toDate('2022-08-22 01:02:0'); -- { serverError 6 } -SELECT toDate32('2022-08-22 01:02:0'); -- { serverError 6 } +SELECT toDate('2022-08-22 01:02:0'); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDate32('2022-08-22 01:02:0'); -- { serverError CANNOT_PARSE_TEXT } -SELECT toDate('2022-08-22 01:02:03.'); -- { serverError 6 } -SELECT toDate32('2022-08-22 01:02:03.'); -- { serverError 6 } +SELECT toDate('2022-08-22 01:02:03.'); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDate32('2022-08-22 01:02:03.'); -- { serverError CANNOT_PARSE_TEXT } -SELECT toDate('2022-08-22 01:02:03.111a'); -- { serverError 6 } -SELECT toDate32('2022-08-22 01:02:03.2b'); -- { serverError 6 } +SELECT toDate('2022-08-22 01:02:03.111a'); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDate32('2022-08-22 01:02:03.2b'); -- { serverError CANNOT_PARSE_TEXT } -SELECT toDate('2022-08-22 01:02:03.a'); -- { serverError 6 } -SELECT toDate32('2022-08-22 01:02:03.b'); -- { serverError 6 } +SELECT toDate('2022-08-22 01:02:03.a'); -- { serverError CANNOT_PARSE_TEXT } +SELECT toDate32('2022-08-22 01:02:03.b'); -- { serverError CANNOT_PARSE_TEXT } diff --git a/tests/queries/0_stateless/02389_analyzer_nested_lambda.reference b/tests/queries/0_stateless/02389_analyzer_nested_lambda.reference index 68eb282a6a1..b44cd678109 100644 --- a/tests/queries/0_stateless/02389_analyzer_nested_lambda.reference +++ b/tests/queries/0_stateless/02389_analyzer_nested_lambda.reference @@ -117,5 +117,5 @@ SELECT arrayMap(x -> concat(concat(concat(concat(concat(toString(id), '___\0____ FROM test_table WHERE concat(concat(concat(toString(id), '___\0_______\0____'), toString(id)), concat(toString(id), NULL), toString(id)); SELECT '--'; -- -SELECT arrayMap(x -> splitByChar(toString(id), arrayMap(x -> toString(1), [NULL])), [NULL]) FROM test_table; -- { serverError 44 }; +SELECT arrayMap(x -> splitByChar(toString(id), arrayMap(x -> toString(1), [NULL])), [NULL]) FROM test_table; -- { serverError ILLEGAL_COLUMN }; DROP TABLE test_table; diff --git a/tests/queries/0_stateless/02389_analyzer_nested_lambda.sql b/tests/queries/0_stateless/02389_analyzer_nested_lambda.sql index be4b64888ca..8e3777ebc15 100644 --- a/tests/queries/0_stateless/02389_analyzer_nested_lambda.sql +++ b/tests/queries/0_stateless/02389_analyzer_nested_lambda.sql @@ -122,7 +122,7 @@ FROM test_table WHERE concat(concat(concat(toString(id), '___\0_______\0____'), SELECT '--'; -SELECT arrayMap(x -> splitByChar(toString(id), arrayMap(x -> toString(1), [NULL])), [NULL]) FROM test_table; -- { serverError 44 }; +SELECT arrayMap(x -> splitByChar(toString(id), arrayMap(x -> toString(1), [NULL])), [NULL]) FROM test_table; -- { serverError ILLEGAL_COLUMN }; DROP TABLE test_table; diff --git a/tests/queries/0_stateless/02391_recursive_buffer.sql b/tests/queries/0_stateless/02391_recursive_buffer.sql index c0954ed834b..1a630722b5a 100644 --- a/tests/queries/0_stateless/02391_recursive_buffer.sql +++ b/tests/queries/0_stateless/02391_recursive_buffer.sql @@ -3,16 +3,16 @@ DROP TABLE IF EXISTS test; CREATE TABLE test (key UInt32) Engine = Buffer(currentDatabase(), test, 16, 10, 100, 10000, 1000000, 10000000, 100000000); -SELECT * FROM test; -- { serverError 269 } -SELECT * FROM system.tables WHERE table = 'test' AND database = currentDatabase() FORMAT Null; -- { serverError 269 } +SELECT * FROM test; -- { serverError INFINITE_LOOP } +SELECT * FROM 
system.tables WHERE table = 'test' AND database = currentDatabase() FORMAT Null; -- { serverError INFINITE_LOOP } DROP TABLE test; DROP TABLE IF EXISTS test1; DROP TABLE IF EXISTS test2; CREATE TABLE test1 (key UInt32) Engine = Buffer(currentDatabase(), test2, 16, 10, 100, 10000, 1000000, 10000000, 100000000); CREATE TABLE test2 (key UInt32) Engine = Buffer(currentDatabase(), test1, 16, 10, 100, 10000, 1000000, 10000000, 100000000); -SELECT * FROM test1; -- { serverError 306 } -SELECT * FROM test2; -- { serverError 306 } -SELECT * FROM system.tables WHERE table IN ('test1', 'test2') AND database = currentDatabase(); -- { serverError 306 } +SELECT * FROM test1; -- { serverError TOO_DEEP_RECURSION } +SELECT * FROM test2; -- { serverError TOO_DEEP_RECURSION } +SELECT * FROM system.tables WHERE table IN ('test1', 'test2') AND database = currentDatabase(); -- { serverError TOO_DEEP_RECURSION } DROP TABLE test1; DROP TABLE test2; diff --git a/tests/queries/0_stateless/02411_legacy_geobase.sql b/tests/queries/0_stateless/02411_legacy_geobase.sql index a7d82f3beb9..48525bcdc4f 100644 --- a/tests/queries/0_stateless/02411_legacy_geobase.sql +++ b/tests/queries/0_stateless/02411_legacy_geobase.sql @@ -1,7 +1,7 @@ -- Tags: no-fasttest SELECT regionToName(number::UInt32, 'en') FROM numbers(13); -SELECT regionToName(number::UInt32, 'xy') FROM numbers(13); -- { serverError 1000 } +SELECT regionToName(number::UInt32, 'xy') FROM numbers(13); -- { serverError POCO_EXCEPTION } SELECT regionToName(number::UInt32, 'en'), regionToCity(number::UInt32) AS id, regionToName(id, 'en') FROM numbers(13); SELECT regionToName(number::UInt32, 'en'), regionToArea(number::UInt32) AS id, regionToName(id, 'en') FROM numbers(13); diff --git a/tests/queries/0_stateless/02416_keeper_map.sql b/tests/queries/0_stateless/02416_keeper_map.sql index c191b539de6..6037a8835a2 100644 --- a/tests/queries/0_stateless/02416_keeper_map.sql +++ b/tests/queries/0_stateless/02416_keeper_map.sql @@ -2,10 +2,10 @@ DROP TABLE IF EXISTS 02416_test SYNC; -CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416'); -- { serverError 36 } -CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416') PRIMARY KEY(key2); -- { serverError 47 } -CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416') PRIMARY KEY(key, value); -- { serverError 36 } -CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416') PRIMARY KEY(concat(key, value)); -- { serverError 36 } +CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416'); -- { serverError BAD_ARGUMENTS } +CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416') PRIMARY KEY(key2); -- { serverError UNKNOWN_IDENTIFIER } +CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416') PRIMARY KEY(key, value); -- { serverError BAD_ARGUMENTS } +CREATE TABLE 02416_test (key String, value UInt32) Engine=KeeperMap('/' || currentDatabase() || '/test2416') PRIMARY KEY(concat(key, value)); -- { serverError BAD_ARGUMENTS } CREATE TABLE 02416_test (key Tuple(String, UInt32), value UInt64) Engine=KeeperMap('/' || currentDatabase() || '/test2416') PRIMARY KEY(key); DROP TABLE IF EXISTS 02416_test SYNC; diff --git a/tests/queries/0_stateless/02416_rocksdb_delete_update.sql 
b/tests/queries/0_stateless/02416_rocksdb_delete_update.sql index 28953a108d7..489788f890c 100644 --- a/tests/queries/0_stateless/02416_rocksdb_delete_update.sql +++ b/tests/queries/0_stateless/02416_rocksdb_delete_update.sql @@ -31,7 +31,7 @@ ALTER TABLE 02416_rocksdb UPDATE value = 'Another' WHERE key > 2; SELECT * FROM 02416_rocksdb ORDER BY key; SELECT '-----------'; -ALTER TABLE 02416_rocksdb UPDATE key = key * 10 WHERE 1 = 1; -- { serverError 36 } +ALTER TABLE 02416_rocksdb UPDATE key = key * 10 WHERE 1 = 1; -- { serverError BAD_ARGUMENTS } SELECT * FROM 02416_rocksdb ORDER BY key; SELECT '-----------'; diff --git a/tests/queries/0_stateless/02419_contingency_array_nullable.sql b/tests/queries/0_stateless/02419_contingency_array_nullable.sql index 5d56e259d2f..92e37127201 100644 --- a/tests/queries/0_stateless/02419_contingency_array_nullable.sql +++ b/tests/queries/0_stateless/02419_contingency_array_nullable.sql @@ -1 +1 @@ -SELECT contingency(1, [1, NULL]); -- { serverError 48 } +SELECT contingency(1, [1, NULL]); -- { serverError NOT_IMPLEMENTED } diff --git a/tests/queries/0_stateless/02422_insert_different_granularity.sql b/tests/queries/0_stateless/02422_insert_different_granularity.sql index e122cd134fe..8d5c43fd990 100644 --- a/tests/queries/0_stateless/02422_insert_different_granularity.sql +++ b/tests/queries/0_stateless/02422_insert_different_granularity.sql @@ -78,4 +78,4 @@ SETTINGS index_granularity = 8192, index_granularity_bytes = 0, min_bytes_for_wi INSERT INTO table_one SELECT intDiv(number, 10), number FROM numbers(100); -ALTER TABLE table_two REPLACE PARTITION 0 FROM table_one; -- { serverError 36 } +ALTER TABLE table_two REPLACE PARTITION 0 FROM table_one; -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02423_multidimensional_array_get_data_at.sql b/tests/queries/0_stateless/02423_multidimensional_array_get_data_at.sql index a47fbdfc789..5a98159a7fd 100644 --- a/tests/queries/0_stateless/02423_multidimensional_array_get_data_at.sql +++ b/tests/queries/0_stateless/02423_multidimensional_array_get_data_at.sql @@ -1,7 +1,7 @@ -SELECT formatRow('RawBLOB', [[[33]], []]); -- { serverError 48 } -SELECT formatRow('RawBLOB', [[[]], []]); -- { serverError 48 } -SELECT formatRow('RawBLOB', [[[[[[[0x48, 0x65, 0x6c, 0x6c, 0x6f]]]]]], []]); -- { serverError 48 } -SELECT formatRow('RawBLOB', []::Array(Array(Nothing))); -- { serverError 48 } -SELECT formatRow('RawBLOB', [[], [['Hello']]]); -- { serverError 48 } -SELECT formatRow('RawBLOB', [[['World']], []]); -- { serverError 48 } -SELECT formatRow('RawBLOB', []::Array(String)); -- { serverError 48 } +SELECT formatRow('RawBLOB', [[[33]], []]); -- { serverError NOT_IMPLEMENTED } +SELECT formatRow('RawBLOB', [[[]], []]); -- { serverError NOT_IMPLEMENTED } +SELECT formatRow('RawBLOB', [[[[[[[0x48, 0x65, 0x6c, 0x6c, 0x6f]]]]]], []]); -- { serverError NOT_IMPLEMENTED } +SELECT formatRow('RawBLOB', []::Array(Array(Nothing))); -- { serverError NOT_IMPLEMENTED } +SELECT formatRow('RawBLOB', [[], [['Hello']]]); -- { serverError NOT_IMPLEMENTED } +SELECT formatRow('RawBLOB', [[['World']], []]); -- { serverError NOT_IMPLEMENTED } +SELECT formatRow('RawBLOB', []::Array(String)); -- { serverError NOT_IMPLEMENTED } diff --git a/tests/queries/0_stateless/02425_categorical_information_value_properties.sql b/tests/queries/0_stateless/02425_categorical_information_value_properties.sql index 81ed8400680..bc033ec4a7b 100644 --- a/tests/queries/0_stateless/02425_categorical_information_value_properties.sql +++ 
b/tests/queries/0_stateless/02425_categorical_information_value_properties.sql @@ -3,12 +3,12 @@ SELECT corr(c1, c2) FROM VALUES((0, 0), (NULL, 2), (1, 0), (1, 1)); SELECT round(arrayJoin(categoricalInformationValue(c1, c2)), 3) FROM VALUES((0, 0), (NULL, 2), (1, 0), (1, 1)); SELECT round(arrayJoin(categoricalInformationValue(c1, c2)), 3) FROM VALUES((0, 0), (NULL, 1), (1, 0), (1, 1)); SELECT categoricalInformationValue(c1, c2) FROM VALUES((0, 0), (NULL, 1)); -SELECT categoricalInformationValue(c1, c2) FROM VALUES((NULL, 1)); -- { serverError 43 } +SELECT categoricalInformationValue(c1, c2) FROM VALUES((NULL, 1)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT categoricalInformationValue(dummy, dummy); SELECT categoricalInformationValue(dummy, dummy) WHERE 0; SELECT categoricalInformationValue(c1, c2) FROM VALUES((toNullable(0), 0)); SELECT groupUniqArray(*) FROM VALUES(toNullable(0)); SELECT groupUniqArray(*) FROM VALUES(NULL); -SELECT categoricalInformationValue(c1, c2) FROM VALUES((NULL, NULL)); -- { serverError 43 } +SELECT categoricalInformationValue(c1, c2) FROM VALUES((NULL, NULL)); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT categoricalInformationValue(c1, c2) FROM VALUES((0, 0), (NULL, 0)); SELECT quantiles(0.5, 0.9)(c1) FROM VALUES(0::Nullable(UInt8)); diff --git a/tests/queries/0_stateless/02426_create_suspicious_fixed_string.sql b/tests/queries/0_stateless/02426_create_suspicious_fixed_string.sql index c681c3c54d6..9bcbeb608eb 100644 --- a/tests/queries/0_stateless/02426_create_suspicious_fixed_string.sql +++ b/tests/queries/0_stateless/02426_create_suspicious_fixed_string.sql @@ -1,4 +1,4 @@ CREATE TABLE fixed_string (id UInt64, s FixedString(256)) ENGINE = MergeTree() ORDER BY id; -CREATE TABLE suspicious_fixed_string (id UInt64, s FixedString(257)) ENGINE = MergeTree() ORDER BY id; -- { serverError 44 } +CREATE TABLE suspicious_fixed_string (id UInt64, s FixedString(257)) ENGINE = MergeTree() ORDER BY id; -- { serverError ILLEGAL_COLUMN } SET allow_suspicious_fixed_string_types = 1; CREATE TABLE suspicious_fixed_string (id UInt64, s FixedString(257)) ENGINE = MergeTree() ORDER BY id; diff --git a/tests/queries/0_stateless/02429_groupBitmap_chain_state.sql b/tests/queries/0_stateless/02429_groupBitmap_chain_state.sql index e55a07dc49c..27e549b603b 100644 --- a/tests/queries/0_stateless/02429_groupBitmap_chain_state.sql +++ b/tests/queries/0_stateless/02429_groupBitmap_chain_state.sql @@ -1,6 +1,6 @@ SELECT groupBitmapAnd(z) y FROM ( SELECT groupBitmapState(u) AS z FROM ( SELECT 123 AS u ) AS a1 ); SELECT groupBitmapAnd(y) FROM (SELECT groupBitmapAndState(z) y FROM ( SELECT groupBitmapState(u) AS z FROM ( SELECT 123 AS u ) AS a1 ) AS a2); -SELECT groupBitmapAnd(z) FROM ( SELECT minState(u) AS z FROM ( SELECT 123 AS u ) AS a1 ) AS a2; -- { serverError 43 } -SELECT groupBitmapOr(z) FROM ( SELECT maxState(u) AS z FROM ( SELECT '123' AS u ) AS a1 ) AS a2; -- { serverError 43 } -SELECT groupBitmapXor(z) FROM ( SELECT countState() AS z FROM ( SELECT '123' AS u ) AS a1 ) AS a2; -- { serverError 43 } +SELECT groupBitmapAnd(z) FROM ( SELECT minState(u) AS z FROM ( SELECT 123 AS u ) AS a1 ) AS a2; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT groupBitmapOr(z) FROM ( SELECT maxState(u) AS z FROM ( SELECT '123' AS u ) AS a1 ) AS a2; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT groupBitmapXor(z) FROM ( SELECT countState() AS z FROM ( SELECT '123' AS u ) AS a1 ) AS a2; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git 
a/tests/queries/0_stateless/02454_disable_mergetree_with_lightweight_delete_column.sql b/tests/queries/0_stateless/02454_disable_mergetree_with_lightweight_delete_column.sql index da60f22d977..9487307e71d 100644 --- a/tests/queries/0_stateless/02454_disable_mergetree_with_lightweight_delete_column.sql +++ b/tests/queries/0_stateless/02454_disable_mergetree_with_lightweight_delete_column.sql @@ -1,6 +1,6 @@ drop table if exists t_row_exists; -create table t_row_exists(a int, _row_exists int) engine=MergeTree order by a; --{serverError 44} +create table t_row_exists(a int, _row_exists int) engine=MergeTree order by a; --{serverError ILLEGAL_COLUMN} create table t_row_exists(a int, b int) engine=MergeTree order by a; alter table t_row_exists add column _row_exists int; --{serverError ILLEGAL_COLUMN} diff --git a/tests/queries/0_stateless/02457_tuple_of_intervals.sql b/tests/queries/0_stateless/02457_tuple_of_intervals.sql index be9ccb50d92..9b2c3a475d2 100644 --- a/tests/queries/0_stateless/02457_tuple_of_intervals.sql +++ b/tests/queries/0_stateless/02457_tuple_of_intervals.sql @@ -18,12 +18,12 @@ SELECT '---'; SELECT '2022-10-11'::Date + tuple(INTERVAL 1 DAY); SELECT '2022-10-11'::Date - tuple(INTERVAL 1 DAY); SELECT tuple(INTERVAL 1 DAY) + '2022-10-11'::Date; -SELECT tuple(INTERVAL 1 DAY) - '2022-10-11'::Date; -- { serverError 43 } +SELECT tuple(INTERVAL 1 DAY) - '2022-10-11'::Date; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } WITH tuple(INTERVAL 1 SECOND) + INTERVAL 1 SECOND as expr SELECT expr, toTypeName(expr); WITH tuple(INTERVAL 1 SECOND) - INTERVAL 1 SECOND as expr SELECT expr, toTypeName(expr); -WITH INTERVAL 1 SECOND + tuple(INTERVAL 1 SECOND) as expr SELECT expr, toTypeName(expr); -- { serverError 43 } -WITH INTERVAL 1 SECOND - tuple(INTERVAL 1 SECOND) as expr SELECT expr, toTypeName(expr); -- { serverError 43 } +WITH INTERVAL 1 SECOND + tuple(INTERVAL 1 SECOND) as expr SELECT expr, toTypeName(expr); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +WITH INTERVAL 1 SECOND - tuple(INTERVAL 1 SECOND) as expr SELECT expr, toTypeName(expr); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT '---'; diff --git a/tests/queries/0_stateless/02458_key_condition_not_like_prefix.sql b/tests/queries/0_stateless/02458_key_condition_not_like_prefix.sql index e821b16ed5c..a6f0b9133f2 100644 --- a/tests/queries/0_stateless/02458_key_condition_not_like_prefix.sql +++ b/tests/queries/0_stateless/02458_key_condition_not_like_prefix.sql @@ -4,9 +4,9 @@ INSERT INTO data (str) SELECT 'ba' FROM numbers(100000); INSERT INTO data (str) SELECT 'ca' FROM numbers(100000); SELECT count() FROM data WHERE str NOT LIKE 'a%' SETTINGS force_primary_key=1; SELECT count() FROM data WHERE str NOT LIKE 'a%%' SETTINGS force_primary_key=1; -SELECT count() FROM data WHERE str NOT LIKE 'a' SETTINGS force_primary_key=1; -- { serverError 277 } -SELECT count() FROM data WHERE str NOT LIKE '%a' SETTINGS force_primary_key=1; -- { serverError 277 } -SELECT count() FROM data WHERE str NOT LIKE 'a_' SETTINGS force_primary_key=1; -- { serverError 277 } -SELECT count() FROM data WHERE str NOT LIKE 'a%_' SETTINGS force_primary_key=1; -- { serverError 277 } -SELECT count() FROM data WHERE str NOT LIKE '_a' SETTINGS force_primary_key=1; -- { serverError 277 } -SELECT count() FROM data WHERE str NOT LIKE 'a%\_' SETTINGS force_primary_key=1; -- { serverError 277 } +SELECT count() FROM data WHERE str NOT LIKE 'a' SETTINGS force_primary_key=1; -- { serverError INDEX_NOT_USED } +SELECT count() FROM data WHERE str NOT LIKE '%a' SETTINGS 
force_primary_key=1; -- { serverError INDEX_NOT_USED } +SELECT count() FROM data WHERE str NOT LIKE 'a_' SETTINGS force_primary_key=1; -- { serverError INDEX_NOT_USED } +SELECT count() FROM data WHERE str NOT LIKE 'a%_' SETTINGS force_primary_key=1; -- { serverError INDEX_NOT_USED } +SELECT count() FROM data WHERE str NOT LIKE '_a' SETTINGS force_primary_key=1; -- { serverError INDEX_NOT_USED } +SELECT count() FROM data WHERE str NOT LIKE 'a%\_' SETTINGS force_primary_key=1; -- { serverError INDEX_NOT_USED } diff --git a/tests/queries/0_stateless/02463_julian_day_ubsan.sql b/tests/queries/0_stateless/02463_julian_day_ubsan.sql index a8583d7b0a8..2174a5cb4fa 100644 --- a/tests/queries/0_stateless/02463_julian_day_ubsan.sql +++ b/tests/queries/0_stateless/02463_julian_day_ubsan.sql @@ -1 +1 @@ -SELECT fromModifiedJulianDay(9223372036854775807 :: Int64); -- { serverError 490 } +SELECT fromModifiedJulianDay(9223372036854775807 :: Int64); -- { serverError CANNOT_FORMAT_DATETIME } diff --git a/tests/queries/0_stateless/02465_limit_trivial_max_rows_to_read.sql b/tests/queries/0_stateless/02465_limit_trivial_max_rows_to_read.sql index c2e97c8c704..700a5404427 100644 --- a/tests/queries/0_stateless/02465_limit_trivial_max_rows_to_read.sql +++ b/tests/queries/0_stateless/02465_limit_trivial_max_rows_to_read.sql @@ -10,13 +10,13 @@ SET max_block_size = 10; SET max_rows_to_read = 20; SET read_overflow_mode = 'throw'; -SELECT number FROM numbers(30); -- { serverError 158 } -SELECT number FROM numbers(30) LIMIT 21; -- { serverError 158 } +SELECT number FROM numbers(30); -- { serverError TOO_MANY_ROWS } +SELECT number FROM numbers(30) LIMIT 21; -- { serverError TOO_MANY_ROWS } SELECT number FROM numbers(30) LIMIT 1; SELECT number FROM numbers(5); SELECT a FROM t_max_rows_to_read LIMIT 1; -SELECT a FROM t_max_rows_to_read LIMIT 11 offset 11; -- { serverError 158 } -SELECT a FROM t_max_rows_to_read WHERE a > 50 LIMIT 1; -- { serverError 158 } +SELECT a FROM t_max_rows_to_read LIMIT 11 offset 11; -- { serverError TOO_MANY_ROWS } +SELECT a FROM t_max_rows_to_read WHERE a > 50 LIMIT 1; -- { serverError TOO_MANY_ROWS } DROP TABLE t_max_rows_to_read; diff --git a/tests/queries/0_stateless/02473_prewhere_with_bigint.sql b/tests/queries/0_stateless/02473_prewhere_with_bigint.sql index 29c6f0da2a1..ef1ec490450 100644 --- a/tests/queries/0_stateless/02473_prewhere_with_bigint.sql +++ b/tests/queries/0_stateless/02473_prewhere_with_bigint.sql @@ -5,20 +5,20 @@ DROP TABLE IF EXISTS prewhere_uint256; CREATE TABLE prewhere_int128 (a Int128) ENGINE=MergeTree ORDER BY a; INSERT INTO prewhere_int128 VALUES (1); -SELECT a FROM prewhere_int128 PREWHERE a; -- { serverError 59 } +SELECT a FROM prewhere_int128 PREWHERE a; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } DROP TABLE prewhere_int128; CREATE TABLE prewhere_int256 (a Int256) ENGINE=MergeTree ORDER BY a; INSERT INTO prewhere_int256 VALUES (1); -SELECT a FROM prewhere_int256 PREWHERE a; -- { serverError 59 } +SELECT a FROM prewhere_int256 PREWHERE a; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } DROP TABLE prewhere_int256; CREATE TABLE prewhere_uint128 (a UInt128) ENGINE=MergeTree ORDER BY a; INSERT INTO prewhere_uint128 VALUES (1); -SELECT a FROM prewhere_uint128 PREWHERE a; -- { serverError 59 } +SELECT a FROM prewhere_uint128 PREWHERE a; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } DROP TABLE prewhere_uint128; CREATE TABLE prewhere_uint256 (a UInt256) ENGINE=MergeTree ORDER BY a; INSERT INTO prewhere_uint256 VALUES (1); -SELECT a FROM 
prewhere_uint256 PREWHERE a; -- { serverError 59 } +SELECT a FROM prewhere_uint256 PREWHERE a; -- { serverError ILLEGAL_TYPE_OF_COLUMN_FOR_FILTER } DROP TABLE prewhere_uint256; diff --git a/tests/queries/0_stateless/02474_analyzer_subqueries_table_expression_modifiers.sql b/tests/queries/0_stateless/02474_analyzer_subqueries_table_expression_modifiers.sql index 456783cad26..5ac8c79d4ed 100644 --- a/tests/queries/0_stateless/02474_analyzer_subqueries_table_expression_modifiers.sql +++ b/tests/queries/0_stateless/02474_analyzer_subqueries_table_expression_modifiers.sql @@ -1,17 +1,17 @@ SET allow_experimental_analyzer = 1; -SELECT * FROM (SELECT 1) FINAL; -- { serverError 1 } -SELECT * FROM (SELECT 1) SAMPLE 1/2; -- { serverError 1 } -SELECT * FROM (SELECT 1) FINAL SAMPLE 1/2; -- { serverError 1 } +SELECT * FROM (SELECT 1) FINAL; -- { serverError UNSUPPORTED_METHOD } +SELECT * FROM (SELECT 1) SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } +SELECT * FROM (SELECT 1) FINAL SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } -WITH cte_subquery AS (SELECT 1) SELECT * FROM cte_subquery FINAL; -- { serverError 1 } -WITH cte_subquery AS (SELECT 1) SELECT * FROM cte_subquery SAMPLE 1/2; -- { serverError 1 } -WITH cte_subquery AS (SELECT 1) SELECT * FROM cte_subquery FINAL SAMPLE 1/2; -- { serverError 1 } +WITH cte_subquery AS (SELECT 1) SELECT * FROM cte_subquery FINAL; -- { serverError UNSUPPORTED_METHOD } +WITH cte_subquery AS (SELECT 1) SELECT * FROM cte_subquery SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } +WITH cte_subquery AS (SELECT 1) SELECT * FROM cte_subquery FINAL SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } -SELECT * FROM (SELECT 1 UNION ALL SELECT 1) FINAL; -- { serverError 1 } -SELECT * FROM (SELECT 1 UNION ALL SELECT 1) SAMPLE 1/2; -- { serverError 1 } -SELECT * FROM (SELECT 1 UNION ALL SELECT 1) FINAL SAMPLE 1/2; -- { serverError 1 } +SELECT * FROM (SELECT 1 UNION ALL SELECT 1) FINAL; -- { serverError UNSUPPORTED_METHOD } +SELECT * FROM (SELECT 1 UNION ALL SELECT 1) SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } +SELECT * FROM (SELECT 1 UNION ALL SELECT 1) FINAL SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } -WITH cte_subquery AS (SELECT 1 UNION ALL SELECT 1) SELECT * FROM cte_subquery FINAL; -- { serverError 1 } -WITH cte_subquery AS (SELECT 1 UNION ALL SELECT 1) SELECT * FROM cte_subquery SAMPLE 1/2; -- { serverError 1 } -WITH cte_subquery AS (SELECT 1 UNION ALL SELECT 1) SELECT * FROM cte_subquery FINAL SAMPLE 1/2; -- { serverError 1 } +WITH cte_subquery AS (SELECT 1 UNION ALL SELECT 1) SELECT * FROM cte_subquery FINAL; -- { serverError UNSUPPORTED_METHOD } +WITH cte_subquery AS (SELECT 1 UNION ALL SELECT 1) SELECT * FROM cte_subquery SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } +WITH cte_subquery AS (SELECT 1 UNION ALL SELECT 1) SELECT * FROM cte_subquery FINAL SAMPLE 1/2; -- { serverError UNSUPPORTED_METHOD } diff --git a/tests/queries/0_stateless/02475_precise_decimal_arithmetics.sql b/tests/queries/0_stateless/02475_precise_decimal_arithmetics.sql index 3bd7906c7d8..435b72c019f 100644 --- a/tests/queries/0_stateless/02475_precise_decimal_arithmetics.sql +++ b/tests/queries/0_stateless/02475_precise_decimal_arithmetics.sql @@ -2,7 +2,7 @@ -- check cases when one of operands is zero SELECT divideDecimal(toDecimal32(0, 2), toDecimal128(11.123456, 6)); -SELECT divideDecimal(toDecimal64(123.123, 3), toDecimal64(0, 1)); -- { serverError 153 } +SELECT divideDecimal(toDecimal64(123.123, 3), toDecimal64(0, 1)); -- { serverError ILLEGAL_DIVISION } SELECT 
multiplyDecimal(toDecimal32(0, 2), toDecimal128(11.123456, 6)); SELECT multiplyDecimal(toDecimal32(123.123, 3), toDecimal128(0, 1)); @@ -11,13 +11,13 @@ SELECT multiplyDecimal(toDecimal256(1e38, 0), toDecimal256(1e38, 0)); SELECT divideDecimal(toDecimal256(1e66, 0), toDecimal256(1e-10, 10), 0); -- fits Decimal256, but scale is too big to fit -SELECT multiplyDecimal(toDecimal256(1e38, 0), toDecimal256(1e38, 0), 2); -- { serverError 407 } -SELECT divideDecimal(toDecimal256(1e72, 0), toDecimal256(1e-5, 5), 2); -- { serverError 407 } +SELECT multiplyDecimal(toDecimal256(1e38, 0), toDecimal256(1e38, 0), 2); -- { serverError DECIMAL_OVERFLOW } +SELECT divideDecimal(toDecimal256(1e72, 0), toDecimal256(1e-5, 5), 2); -- { serverError DECIMAL_OVERFLOW } -- does not fit Decimal256 -SELECT multiplyDecimal(toDecimal256('1e38', 0), toDecimal256('1e38', 0)); -- { serverError 407 } -SELECT multiplyDecimal(toDecimal256(1e39, 0), toDecimal256(1e39, 0), 0); -- { serverError 407 } -SELECT divideDecimal(toDecimal256(1e39, 0), toDecimal256(1e-38, 39)); -- { serverError 407 } +SELECT multiplyDecimal(toDecimal256('1e38', 0), toDecimal256('1e38', 0)); -- { serverError DECIMAL_OVERFLOW } +SELECT multiplyDecimal(toDecimal256(1e39, 0), toDecimal256(1e39, 0), 0); -- { serverError DECIMAL_OVERFLOW } +SELECT divideDecimal(toDecimal256(1e39, 0), toDecimal256(1e-38, 39)); -- { serverError DECIMAL_OVERFLOW } -- test different signs SELECT divideDecimal(toDecimal128(123.76, 2), toDecimal128(11.123456, 6)); diff --git a/tests/queries/0_stateless/02475_split_with_max_substrings.sql b/tests/queries/0_stateless/02475_split_with_max_substrings.sql index 3f367c75433..e0b7bf0a8ee 100644 --- a/tests/queries/0_stateless/02475_split_with_max_substrings.sql +++ b/tests/queries/0_stateless/02475_split_with_max_substrings.sql @@ -1,11 +1,11 @@ SELECT '-- negative tests'; -SELECT splitByChar(',', '1,2,3', ''); -- { serverError 43 } -SELECT splitByRegexp('[ABC]', 'oneAtwoBthreeC', ''); -- { serverError 43 } -SELECT alphaTokens('abca1abc', ''); -- { serverError 43 } -SELECT splitByAlpha('abca1abc', ''); -- { serverError 43 } -SELECT splitByNonAlpha(' 1! a, b. ', ''); -- { serverError 43 } -SELECT splitByWhitespace(' 1! a, b. ', ''); -- { serverError 43 } -SELECT splitByString(', ', '1, 2 3, 4,5, abcde', ''); -- { serverError 43 } +SELECT splitByChar(',', '1,2,3', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT splitByRegexp('[ABC]', 'oneAtwoBthreeC', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT alphaTokens('abca1abc', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT splitByAlpha('abca1abc', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT splitByNonAlpha(' 1! a, b. ', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT splitByWhitespace(' 1! a, b. 
', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT splitByString(', ', '1, 2 3, 4,5, abcde', ''); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT '-- splitByChar'; SELECT '-- (default)'; diff --git a/tests/queries/0_stateless/02477_invalid_reads.sql b/tests/queries/0_stateless/02477_invalid_reads.sql index 08748af3378..1e362fc7575 100644 --- a/tests/queries/0_stateless/02477_invalid_reads.sql +++ b/tests/queries/0_stateless/02477_invalid_reads.sql @@ -1,61 +1,61 @@ -- MIN, MAX AND FAMILY should check for errors in its input -SELECT finalizeAggregation(CAST(unhex('0F00000030'), 'AggregateFunction(min, String)')); -- { serverError 33 } -SELECT finalizeAggregation(CAST(unhex('FFFF000030'), 'AggregateFunction(min, String)')); -- { serverError 33 } +SELECT finalizeAggregation(CAST(unhex('0F00000030'), 'AggregateFunction(min, String)')); -- { serverError CANNOT_READ_ALL_DATA } +SELECT finalizeAggregation(CAST(unhex('FFFF000030'), 'AggregateFunction(min, String)')); -- { serverError CANNOT_READ_ALL_DATA } -- UBSAN SELECT 'ubsan', hex(finalizeAggregation(CAST(unhex('4000000030313233343536373839303132333435363738393031323334353637383930313233343536373839303132333435363738393031323334353637383930313233010000000000000000'), 'AggregateFunction(argMax, String, UInt64)'))); -- aggThrow should check for errors in its input -SELECT finalizeAggregation(CAST('', 'AggregateFunction(aggThrow(0.), UInt8)')); -- { serverError 32 } +SELECT finalizeAggregation(CAST('', 'AggregateFunction(aggThrow(0.), UInt8)')); -- { serverError ATTEMPT_TO_READ_AFTER_EOF } -- categoricalInformationValue should check for errors in its input SELECT finalizeAggregation(CAST(unhex('01000000000000000100000000000000'), - 'AggregateFunction(categoricalInformationValue, UInt8, UInt8)')); -- { serverError 33 } + 'AggregateFunction(categoricalInformationValue, UInt8, UInt8)')); -- { serverError CANNOT_READ_ALL_DATA } SELECT finalizeAggregation(CAST(unhex('0101000000000000000100000000000000020000000000000001000000000000'), - 'AggregateFunction(categoricalInformationValue, Nullable(UInt8), UInt8)')); -- { serverError 33 } + 'AggregateFunction(categoricalInformationValue, Nullable(UInt8), UInt8)')); -- { serverError CANNOT_READ_ALL_DATA } -- groupArray should check for errors in its input -SELECT finalizeAggregation(CAST(unhex('5FF3001310132'), 'AggregateFunction(groupArray, String)')); -- { serverError 33 } -SELECT finalizeAggregation(CAST(unhex('FF000000000000000001000000000000000200000000000000'), 'AggregateFunction(groupArray, UInt64)')); -- { serverError 33 } +SELECT finalizeAggregation(CAST(unhex('5FF3001310132'), 'AggregateFunction(groupArray, String)')); -- { serverError CANNOT_READ_ALL_DATA } +SELECT finalizeAggregation(CAST(unhex('FF000000000000000001000000000000000200000000000000'), 'AggregateFunction(groupArray, UInt64)')); -- { serverError CANNOT_READ_ALL_DATA } -- Same for groupArrayMovingXXXX -SELECT finalizeAggregation(CAST(unhex('0FF00000000000000001000000000000000300000000000000'), 'AggregateFunction(groupArrayMovingSum, UInt64)')); -- { serverError 33 } -SELECT finalizeAggregation(CAST(unhex('0FF00000000000000001000000000000000300000000000000'), 'AggregateFunction(groupArrayMovingAvg, UInt64)')); -- { serverError 33 } +SELECT finalizeAggregation(CAST(unhex('0FF00000000000000001000000000000000300000000000000'), 'AggregateFunction(groupArrayMovingSum, UInt64)')); -- { serverError CANNOT_READ_ALL_DATA } +SELECT finalizeAggregation(CAST(unhex('0FF00000000000000001000000000000000300000000000000'), 
'AggregateFunction(groupArrayMovingAvg, UInt64)')); -- { serverError CANNOT_READ_ALL_DATA } -- Histogram SELECT finalizeAggregation(CAST(unhex('00000000000024C000000000000018C00500000000000024C0000000000000F03F00000000000022C0000000000000F03F00000000000020C0000000000000'), - 'AggregateFunction(histogram(5), Int64)')); -- { serverError 33 } + 'AggregateFunction(histogram(5), Int64)')); -- { serverError CANNOT_READ_ALL_DATA } -- StatisticalSample SELECT finalizeAggregation(CAST(unhex('0F01000000000000244000000000000026400000000000002840000000000000244000000000000026400000000000002840000000000000F03F'), - 'AggregateFunction(mannWhitneyUTest, Float64, UInt8)')); -- { serverError 33 } + 'AggregateFunction(mannWhitneyUTest, Float64, UInt8)')); -- { serverError CANNOT_READ_ALL_DATA } -- maxIntersections SELECT finalizeAggregation(CAST(unhex('0F010000000000000001000000000000000300000000000000FFFFFFFFFFFFFFFF03340B9B047F000001000000000000000500000065000000FFFFFFFFFFFFFFFF'), - 'AggregateFunction(maxIntersections, UInt8, UInt8)')); -- { serverError 33 } + 'AggregateFunction(maxIntersections, UInt8, UInt8)')); -- { serverError CANNOT_READ_ALL_DATA } -- sequenceNextNode (This was fine because it would fail in the next readBinary call, but better to add a test) SELECT finalizeAggregation(CAST(unhex('FFFFFFF014181056F38010000000000000001FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF'), 'AggregateFunction(sequenceNextNode(''forward'', ''head''), DateTime, Nullable(String), UInt8, Nullable(UInt8))')) - SETTINGS allow_experimental_funnel_functions=1; -- { serverError 33 } + SETTINGS allow_experimental_funnel_functions=1; -- { serverError CANNOT_READ_ALL_DATA } -- Fuzzer (ALL) SELECT finalizeAggregation(CAST(unhex('FFFFFFF014181056F38010000000000000001FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF014181056F38010000000000000001FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF'), 'AggregateFunction(sequenceNextNode(\'forward\', \'head\'), DateTime, Nullable(String), UInt8, Nullable(UInt8))')) - SETTINGS allow_experimental_funnel_functions = 1; -- { serverError 128 } + SETTINGS allow_experimental_funnel_functions = 1; -- { serverError TOO_LARGE_ARRAY_SIZE } -- Fuzzer 2 (UBSAN) SELECT finalizeAggregation(CAST(unhex('FFFFFFF014181056F38010000000000000001FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF'), 'AggregateFunction(sequenceNextNode(\'forward\', \'head\'), DateTime, Nullable(String), UInt8, Nullable(UInt8))')) - SETTINGS allow_experimental_funnel_functions = 1; -- { serverError 33 } + SETTINGS allow_experimental_funnel_functions = 1; -- { serverError CANNOT_READ_ALL_DATA } -- uniqUpTo SELECT finalizeAggregation(CAST(unhex('04128345AA2BC97190'), - 'AggregateFunction(uniqUpTo(10), String)')); -- { serverError 33 } + 'AggregateFunction(uniqUpTo(10), String)')); -- { serverError CANNOT_READ_ALL_DATA } -- quantiles SELECT finalizeAggregation(CAST(unhex('0F0000000000000000'), - 'AggregateFunction(quantileExact, UInt64)')); -- { serverError 33 } + 'AggregateFunction(quantileExact, UInt64)')); -- { serverError CANNOT_READ_ALL_DATA } SELECT finalizeAggregation(CAST(unhex('0F000000000000803F'), - 'AggregateFunction(quantileTDigest, UInt64)')); -- { serverError 33 } + 'AggregateFunction(quantileTDigest, UInt64)')); -- { serverError CANNOT_READ_ALL_DATA } diff --git a/tests/queries/0_stateless/02478_factorial.sql b/tests/queries/0_stateless/02478_factorial.sql index e1a0f7d60e5..74d34bd9884 100644 --- a/tests/queries/0_stateless/02478_factorial.sql +++ 
b/tests/queries/0_stateless/02478_factorial.sql @@ -2,6 +2,6 @@ select factorial(-1) = 1; select factorial(0) = 1; select factorial(10) = 3628800; -select factorial(100); -- { serverError 36 } -select factorial('100'); -- { serverError 43 } -select factorial(100.1234); -- { serverError 43 } +select factorial(100); -- { serverError BAD_ARGUMENTS } +select factorial('100'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +select factorial(100.1234); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/02478_window_frame_type_groups.sql b/tests/queries/0_stateless/02478_window_frame_type_groups.sql index 4c6d663791b..f762bcb61ee 100644 --- a/tests/queries/0_stateless/02478_window_frame_type_groups.sql +++ b/tests/queries/0_stateless/02478_window_frame_type_groups.sql @@ -1,7 +1,7 @@ SET allow_experimental_analyzer = 0; -SELECT toUInt64(dense_rank(1) OVER (ORDER BY 100 ASC GROUPS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) FROM numbers(10); -- { serverError 48 } +SELECT toUInt64(dense_rank(1) OVER (ORDER BY 100 ASC GROUPS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) FROM numbers(10); -- { serverError NOT_IMPLEMENTED } SET allow_experimental_analyzer = 1; -SELECT toUInt64(dense_rank(1) OVER (ORDER BY 100 ASC GROUPS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) FROM numbers(10); -- { serverError 48 } +SELECT toUInt64(dense_rank(1) OVER (ORDER BY 100 ASC GROUPS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)) FROM numbers(10); -- { serverError NOT_IMPLEMENTED } diff --git a/tests/queries/0_stateless/02479_analyzer_join_with_constants.sql b/tests/queries/0_stateless/02479_analyzer_join_with_constants.sql index bf081bed228..9f77cf39f47 100644 --- a/tests/queries/0_stateless/02479_analyzer_join_with_constants.sql +++ b/tests/queries/0_stateless/02479_analyzer_join_with_constants.sql @@ -24,7 +24,7 @@ SELECT * FROM (SELECT 1 AS id, 1 AS value) AS t1 ASOF LEFT JOIN (SELECT 1 AS id, SELECT '--'; -SELECT b.dt FROM (SELECT NULL > NULL AS pk, 1 AS dt FROM numbers(5)) AS a ASOF LEFT JOIN (SELECT NULL AS pk, 1 AS dt) AS b ON (a.pk = b.pk) AND 1 != 1 AND (a.dt >= b.dt); -- { serverError 403, NOT_FOUND_COLUMN_IN_BLOCK } +SELECT b.dt FROM (SELECT NULL > NULL AS pk, 1 AS dt FROM numbers(5)) AS a ASOF LEFT JOIN (SELECT NULL AS pk, 1 AS dt) AS b ON (a.pk = b.pk) AND 1 != 1 AND (a.dt >= b.dt); -- { serverError INVALID_JOIN_ON_EXPRESSION, NOT_FOUND_COLUMN_IN_BLOCK } SELECT '--'; diff --git a/tests/queries/0_stateless/02480_suspicious_lowcard_in_key.sql b/tests/queries/0_stateless/02480_suspicious_lowcard_in_key.sql index 8d537514dbf..4408bd2f0a5 100644 --- a/tests/queries/0_stateless/02480_suspicious_lowcard_in_key.sql +++ b/tests/queries/0_stateless/02480_suspicious_lowcard_in_key.sql @@ -6,6 +6,6 @@ create table test (val LowCardinality(Float32)) engine MergeTree order by val; insert into test values (nan); -select count() from test where toUInt64(val) = -1; -- { serverError 70 } +select count() from test where toUInt64(val) = -1; -- { serverError CANNOT_CONVERT_TYPE } drop table if exists test; diff --git a/tests/queries/0_stateless/02481_analyzer_join_alias_unknown_identifier_crash.sql b/tests/queries/0_stateless/02481_analyzer_join_alias_unknown_identifier_crash.sql index b0983159eaf..0c5f0eba750 100644 --- a/tests/queries/0_stateless/02481_analyzer_join_alias_unknown_identifier_crash.sql +++ b/tests/queries/0_stateless/02481_analyzer_join_alias_unknown_identifier_crash.sql @@ -24,7 +24,7 @@ SELECT toTypeName(t2_value), t2.value AS t2_value FROM 
test_table_join_1 AS t1 -INNER JOIN test_table_join_2 USING (id); -- { serverError 47 }; +INNER JOIN test_table_join_2 USING (id); -- { serverError UNKNOWN_IDENTIFIER }; SELECT toTypeName(t2_value), diff --git a/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.reference b/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.reference index 02650f92607..a7096a686f5 100644 --- a/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.reference +++ b/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.reference @@ -3,5 +3,5 @@ drop table if exists test_02481_mismatch_files; create table test_02481_mismatch_files (a UInt64, b String) engine = S3(s3_conn, filename='test_02481_mismatch_files_{_partition_id}', format=Parquet) partition by a; set s3_truncate_on_insert=1; insert into test_02481_mismatch_files values (1, 'a'), (22, 'b'), (333, 'c'); -select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet); -- { serverError 636 } -select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet) settings s3_throw_on_zero_files_match=1; -- { serverError 107 } +select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet); -- { serverError CANNOT_EXTRACT_TABLE_STRUCTURE } +select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet) settings s3_throw_on_zero_files_match=1; -- { serverError FILE_DOESNT_EXIST } diff --git a/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.sql b/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.sql index 6e6f456bfad..7ec1d3ebd5f 100644 --- a/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.sql +++ b/tests/queries/0_stateless/02481_s3_throw_if_mismatch_files.sql @@ -7,6 +7,6 @@ create table test_02481_mismatch_files (a UInt64, b String) engine = S3(s3_conn, set s3_truncate_on_insert=1; insert into test_02481_mismatch_files values (1, 'a'), (22, 'b'), (333, 'c'); -select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet); -- { serverError 636 } +select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet); -- { serverError CANNOT_EXTRACT_TABLE_STRUCTURE } -select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet) settings s3_throw_on_zero_files_match=1; -- { serverError 107 } +select a, b from s3(s3_conn, filename='test_02481_mismatch_filesxxx*', format=Parquet) settings s3_throw_on_zero_files_match=1; -- { serverError FILE_DOESNT_EXIST } diff --git a/tests/queries/0_stateless/02494_analyzer_compound_expression_crash_fix.sql b/tests/queries/0_stateless/02494_analyzer_compound_expression_crash_fix.sql index b8d43acbef2..3e6f9f42724 100644 --- a/tests/queries/0_stateless/02494_analyzer_compound_expression_crash_fix.sql +++ b/tests/queries/0_stateless/02494_analyzer_compound_expression_crash_fix.sql @@ -11,6 +11,6 @@ INSERT INTO test_table VALUES (0, [[1]], ['1']); SELECT fields.name FROM (SELECT fields.name FROM test_table); -SELECT fields.name, fields.value FROM (SELECT fields.name FROM test_table); -- { serverError 47 } +SELECT fields.name, fields.value FROM (SELECT fields.name FROM test_table); -- { serverError UNKNOWN_IDENTIFIER } DROP TABLE IF EXISTS test_table; diff --git a/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql b/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql index 6dc45350c68..89021e8561f 100644 --- a/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql +++ 
b/tests/queries/0_stateless/02496_format_datetime_in_joda_syntax.sql @@ -80,16 +80,16 @@ with '2018-01-12 22:33:44.55' as s, toDateTime64(s, 6) as datetime64 SELECT form with '2018-01-12 22:33:44.55' as s, toDateTime64(s, 6) as datetime64 SELECT formatDateTimeInJodaSyntax(datetime64, 'SSSSSSSSSS'); -- { echoOff } -SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'z'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'zz'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'zzz'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'Z'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'b'); -- { serverError 48 } +SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'z'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'zz'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'zzz'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'Z'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDateTime('2018-01-12 22:33:44'), 'b'); -- { serverError NOT_IMPLEMENTED } -SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'z'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'zz'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'zzz'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'Z'); -- { serverError 48 } -SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'b'); -- { serverError 48 } +SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'z'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'zz'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'zzz'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'Z'); -- { serverError NOT_IMPLEMENTED } +SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), 'b'); -- { serverError NOT_IMPLEMENTED } -SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), '\'aaaa\'\''); -- { serverError 36 } +SELECT formatDateTimeInJodaSyntax(toDate32('2018-01-12 22:33:44'), '\'aaaa\'\''); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02496_remove_redundant_sorting.reference b/tests/queries/0_stateless/02496_remove_redundant_sorting.reference index dbb8ad02293..77ef213b36d 100644 --- a/tests/queries/0_stateless/02496_remove_redundant_sorting.reference +++ b/tests/queries/0_stateless/02496_remove_redundant_sorting.reference @@ -478,7 +478,7 @@ FROM ORDER BY number DESC ) ORDER BY number ASC -SETTINGS allow_deprecated_functions = 1 +SETTINGS allow_deprecated_error_prone_window_functions = 1 -- explain Expression (Projection) Sorting (Sorting for ORDER BY) diff --git a/tests/queries/0_stateless/02496_remove_redundant_sorting.sh b/tests/queries/0_stateless/02496_remove_redundant_sorting.sh index 31d2936628b..661b32fce72 100755 --- a/tests/queries/0_stateless/02496_remove_redundant_sorting.sh +++ b/tests/queries/0_stateless/02496_remove_redundant_sorting.sh @@ -315,7 +315,7 @@ FROM ORDER BY number DESC ) ORDER BY number 
ASC -SETTINGS allow_deprecated_functions = 1" +SETTINGS allow_deprecated_error_prone_window_functions = 1" run_query "$query" echo "-- non-stateful function does _not_ prevent removing inner ORDER BY" diff --git a/tests/queries/0_stateless/02496_remove_redundant_sorting_analyzer.reference b/tests/queries/0_stateless/02496_remove_redundant_sorting_analyzer.reference index d74ef70a23f..b6a2e3182df 100644 --- a/tests/queries/0_stateless/02496_remove_redundant_sorting_analyzer.reference +++ b/tests/queries/0_stateless/02496_remove_redundant_sorting_analyzer.reference @@ -477,7 +477,7 @@ FROM ORDER BY number DESC ) ORDER BY number ASC -SETTINGS allow_deprecated_functions = 1 +SETTINGS allow_deprecated_error_prone_window_functions = 1 -- explain Expression (Project names) Sorting (Sorting for ORDER BY) diff --git a/tests/queries/0_stateless/02497_having_without_actual_aggregation_bug.sql b/tests/queries/0_stateless/02497_having_without_actual_aggregation_bug.sql index e38de101a22..b28cbd4861e 100644 --- a/tests/queries/0_stateless/02497_having_without_actual_aggregation_bug.sql +++ b/tests/queries/0_stateless/02497_having_without_actual_aggregation_bug.sql @@ -4,5 +4,5 @@ select number from numbers_mt(10) having number >= 9; select count() from numbers_mt(100) having count() > 1; -select queryID() as t from numbers(10) with totals having t = initialQueryID(); -- { serverError 48 } -select count() from (select queryID() as t from remote('127.0.0.{1..3}', numbers(10)) with totals having t = initialQueryID()) settings prefer_localhost_replica = 1; -- { serverError 48 } +select queryID() as t from numbers(10) with totals having t = initialQueryID(); -- { serverError NOT_IMPLEMENTED } +select count() from (select queryID() as t from remote('127.0.0.{1..3}', numbers(10)) with totals having t = initialQueryID()) settings prefer_localhost_replica = 1; -- { serverError NOT_IMPLEMENTED } diff --git a/tests/queries/0_stateless/02497_if_transform_strings_to_enum.sql b/tests/queries/0_stateless/02497_if_transform_strings_to_enum.sql index c3db61d1fb2..131eac390f1 100644 --- a/tests/queries/0_stateless/02497_if_transform_strings_to_enum.sql +++ b/tests/queries/0_stateless/02497_if_transform_strings_to_enum.sql @@ -37,9 +37,9 @@ SELECT transform(number, [NULL], ['google', 'censor.net', 'yahoo'], 'other') FRO EXPLAIN SYNTAX SELECT transform(number, [NULL], ['google', 'censor.net', 'yahoo'], 'other') FROM (SELECT NULL as number FROM system.numbers LIMIT 10); EXPLAIN QUERY TREE run_passes = 1 SELECT transform(number, [NULL], ['google', 'censor.net', 'yahoo'], 'other') FROM (SELECT NULL as number FROM system.numbers LIMIT 10); -SELECT transform(number, NULL, ['google', 'censor.net', 'yahoo'], 'other') FROM system.numbers LIMIT 10; -- { serverError 43 } -EXPLAIN SYNTAX SELECT transform(number, NULL, ['google', 'censor.net', 'yahoo'], 'other') FROM system.numbers LIMIT 10; -- { serverError 43 } -EXPLAIN QUERY TREE run_passes = 1 SELECT transform(number, NULL, ['google', 'censor.net', 'yahoo'], 'other') FROM system.numbers LIMIT 10; -- { serverError 43 } +SELECT transform(number, NULL, ['google', 'censor.net', 'yahoo'], 'other') FROM system.numbers LIMIT 10; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +EXPLAIN SYNTAX SELECT transform(number, NULL, ['google', 'censor.net', 'yahoo'], 'other') FROM system.numbers LIMIT 10; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +EXPLAIN QUERY TREE run_passes = 1 SELECT transform(number, NULL, ['google', 'censor.net', 'yahoo'], 'other') FROM system.numbers LIMIT 10; -- { serverError 
ILLEGAL_TYPE_OF_ARGUMENT } SET optimize_if_transform_strings_to_enum = 0; diff --git a/tests/queries/0_stateless/02499_analyzer_aggregate_function_lambda_crash_fix.sql b/tests/queries/0_stateless/02499_analyzer_aggregate_function_lambda_crash_fix.sql index 80a89a0306d..f2698512112 100644 --- a/tests/queries/0_stateless/02499_analyzer_aggregate_function_lambda_crash_fix.sql +++ b/tests/queries/0_stateless/02499_analyzer_aggregate_function_lambda_crash_fix.sql @@ -1,4 +1,4 @@ SET allow_experimental_analyzer = 1; -SELECT count((t, x_0, x_1) -> ((key_2, x_0, x_1) IN (NULL, NULL, '0.3'))) FROM numbers(10); -- { serverError 1 } -SELECT count((t, x_0, x_1) -> ((key_2, x_0, x_1) IN (NULL, NULL, '0.3'))) OVER (PARTITION BY id) FROM numbers(10); -- { serverError 1 } +SELECT count((t, x_0, x_1) -> ((key_2, x_0, x_1) IN (NULL, NULL, '0.3'))) FROM numbers(10); -- { serverError UNSUPPORTED_METHOD } +SELECT count((t, x_0, x_1) -> ((key_2, x_0, x_1) IN (NULL, NULL, '0.3'))) OVER (PARTITION BY id) FROM numbers(10); -- { serverError UNSUPPORTED_METHOD } diff --git a/tests/queries/0_stateless/02501_limits_on_result_for_view.sql b/tests/queries/0_stateless/02501_limits_on_result_for_view.sql index 17e6024d973..aa9bcb0e527 100644 --- a/tests/queries/0_stateless/02501_limits_on_result_for_view.sql +++ b/tests/queries/0_stateless/02501_limits_on_result_for_view.sql @@ -16,7 +16,7 @@ CREATE VIEW 02501_view(`a` UInt64) AS SELECT a FROM 02501_dist; insert into 02501_test values(5),(6),(7),(8); -- test -SELECT * from 02501_view settings max_result_rows = 1; -- { serverError 396 } +SELECT * from 02501_view settings max_result_rows = 1; -- { serverError TOO_MANY_ROWS_OR_BYTES } SELECT sum(a) from 02501_view settings max_result_rows = 1; diff --git a/tests/queries/0_stateless/02504_regexp_dictionary_table_source.sql b/tests/queries/0_stateless/02504_regexp_dictionary_table_source.sql index 42d7acbf057..487b6e7f58e 100644 --- a/tests/queries/0_stateless/02504_regexp_dictionary_table_source.sql +++ b/tests/queries/0_stateless/02504_regexp_dictionary_table_source.sql @@ -55,19 +55,19 @@ select dictGet(regexp_dict1, ('name', 'version'), key) from needle_table; -- test invalid INSERT INTO regexp_dictionary_source_table VALUES (6, 2, '3[12]/tclwebkit', ['version'], ['10']) -SYSTEM RELOAD dictionary regexp_dict1; -- { serverError 489 } +SYSTEM RELOAD dictionary regexp_dict1; -- { serverError INCORRECT_DICTIONARY_DEFINITION } truncate table regexp_dictionary_source_table; INSERT INTO regexp_dictionary_source_table VALUES (6, 2, '3[12]/tclwebkit', ['version'], ['10']) -SYSTEM RELOAD dictionary regexp_dict1; -- { serverError 489 } +SYSTEM RELOAD dictionary regexp_dict1; -- { serverError INCORRECT_DICTIONARY_DEFINITION } truncate table regexp_dictionary_source_table; INSERT INTO regexp_dictionary_source_table VALUES (1, 2, 'Linux/(\d+[\.\d]*).+tlinux', ['name', 'version'], ['TencentOS', '\1']) INSERT INTO regexp_dictionary_source_table VALUES (2, 3, '(\d+)/tclwebkit(\d+[\.\d]*)', ['name', 'version', 'comment'], ['Android', '$1', 'test $1 and $2']) INSERT INTO regexp_dictionary_source_table VALUES (3, 1, '(\d+)/tclwebkit(\d+[\.\d]*)', ['name', 'version', 'comment'], ['Android', '$1', 'test $1 and $2']) -SYSTEM RELOAD dictionary regexp_dict1; -- { serverError 489 } +SYSTEM RELOAD dictionary regexp_dict1; -- { serverError INCORRECT_DICTIONARY_DEFINITION } -- test priority truncate table regexp_dictionary_source_table; @@ -78,7 +78,7 @@ SYSTEM RELOAD dictionary regexp_dict1; select dictGet(regexp_dict1, ('name', 'version', 
'comment'), '33/tclwebkit'); truncate table regexp_dictionary_source_table; -SYSTEM RELOAD dictionary regexp_dict1; -- { serverError 489 } +SYSTEM RELOAD dictionary regexp_dict1; -- { serverError INCORRECT_DICTIONARY_DEFINITION } select * from dictionary(regexp_dict1); diff --git a/tests/queries/0_stateless/02521_analyzer_array_join_crash.reference b/tests/queries/0_stateless/02521_analyzer_array_join_crash.reference index 5e7728e0590..426cfe35e73 100644 --- a/tests/queries/0_stateless/02521_analyzer_array_join_crash.reference +++ b/tests/queries/0_stateless/02521_analyzer_array_join_crash.reference @@ -1,11 +1,10 @@ -- { echoOn } -SELECT id, value_element, value FROM test_table ARRAY JOIN [[1,2,3]] AS value_element, value_element AS value; -0 [1,2,3] [1,2,3] +SELECT id, value_element, value FROM test_table ARRAY JOIN [[1,2,3]] AS value_element, value_element AS value; -- { serverError UNKNOWN_IDENTIFIER } SELECT id, value_element, value FROM test_table ARRAY JOIN [[1,2,3]] AS value_element ARRAY JOIN value_element AS value; 0 [1,2,3] 1 0 [1,2,3] 2 0 [1,2,3] 3 -SELECT value_element, value FROM test_table ARRAY JOIN [1048577] AS value_element, arrayMap(x -> value_element, ['']) AS value; -1048577 [1048577] -SELECT arrayFilter(x -> notEmpty(concat(x)), [NULL, NULL]) FROM system.one ARRAY JOIN [1048577] AS elem, arrayMap(x -> splitByChar(x, elem), ['']) AS unused; -- { serverError 44 } +SELECT value_element, value FROM test_table ARRAY JOIN [1048577] AS value_element ARRAY JOIN arrayMap(x -> value_element, ['']) AS value; +1048577 1048577 +SELECT arrayFilter(x -> notEmpty(concat(x)), [NULL, NULL]) FROM system.one ARRAY JOIN [1048577] AS elem ARRAY JOIN arrayMap(x -> splitByChar(x, elem), ['']) AS unused; -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/02521_analyzer_array_join_crash.sql b/tests/queries/0_stateless/02521_analyzer_array_join_crash.sql index 53606e01ab7..7842d47d757 100644 --- a/tests/queries/0_stateless/02521_analyzer_array_join_crash.sql +++ b/tests/queries/0_stateless/02521_analyzer_array_join_crash.sql @@ -11,13 +11,13 @@ INSERT INTO test_table VALUES (0, 'Value'); -- { echoOn } -SELECT id, value_element, value FROM test_table ARRAY JOIN [[1,2,3]] AS value_element, value_element AS value; +SELECT id, value_element, value FROM test_table ARRAY JOIN [[1,2,3]] AS value_element, value_element AS value; -- { serverError UNKNOWN_IDENTIFIER } SELECT id, value_element, value FROM test_table ARRAY JOIN [[1,2,3]] AS value_element ARRAY JOIN value_element AS value; -SELECT value_element, value FROM test_table ARRAY JOIN [1048577] AS value_element, arrayMap(x -> value_element, ['']) AS value; +SELECT value_element, value FROM test_table ARRAY JOIN [1048577] AS value_element ARRAY JOIN arrayMap(x -> value_element, ['']) AS value; -SELECT arrayFilter(x -> notEmpty(concat(x)), [NULL, NULL]) FROM system.one ARRAY JOIN [1048577] AS elem, arrayMap(x -> splitByChar(x, elem), ['']) AS unused; -- { serverError 44 } +SELECT arrayFilter(x -> notEmpty(concat(x)), [NULL, NULL]) FROM system.one ARRAY JOIN [1048577] AS elem ARRAY JOIN arrayMap(x -> splitByChar(x, elem), ['']) AS unused; -- { serverError ILLEGAL_COLUMN } -- { echoOff } diff --git a/tests/queries/0_stateless/02521_to_custom_day_of_week.sql b/tests/queries/0_stateless/02521_to_custom_day_of_week.sql index 5475e15a984..4b194bf4b9f 100644 --- a/tests/queries/0_stateless/02521_to_custom_day_of_week.sql +++ b/tests/queries/0_stateless/02521_to_custom_day_of_week.sql @@ -7,4 +7,4 @@ with toDate('2023-01-09') as 
date_mon, date_mon - 1 as date_sun select toDayOfWe with toDate('2023-01-09') as date_mon, date_mon - 1 as date_sun select toDayOfWeek(date_mon, 4), toDayOfWeek(date_sun, 4); with toDate('2023-01-09') as date_mon, date_mon - 1 as date_sun select toDayOfWeek(date_mon, 5), toDayOfWeek(date_sun, 5); -select toDayOfWeek(today(), -1); -- { serverError 43 } +select toDayOfWeek(today(), -1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/02552_analyzer_optimize_group_by_function_keys_crash.sql b/tests/queries/0_stateless/02552_analyzer_optimize_group_by_function_keys_crash.sql index 5fca43ace92..ee9032472a7 100644 --- a/tests/queries/0_stateless/02552_analyzer_optimize_group_by_function_keys_crash.sql +++ b/tests/queries/0_stateless/02552_analyzer_optimize_group_by_function_keys_crash.sql @@ -1,3 +1,3 @@ SET allow_experimental_analyzer = 1; -SELECT NULL GROUP BY tuple('0.0000000007'), count(NULL) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) -- { serverError 184 }; +SELECT NULL GROUP BY tuple('0.0000000007'), count(NULL) OVER (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) -- { serverError ILLEGAL_AGGREGATION }; diff --git a/tests/queries/0_stateless/02554_log_faminy_support_storage_policy.sql b/tests/queries/0_stateless/02554_log_faminy_support_storage_policy.sql index 4dbb4569c0f..0c040d688ee 100644 --- a/tests/queries/0_stateless/02554_log_faminy_support_storage_policy.sql +++ b/tests/queries/0_stateless/02554_log_faminy_support_storage_policy.sql @@ -24,4 +24,4 @@ SELECT * FROM test_2554_stripelog; DROP TABLE test_2554_stripelog; -CREATE TABLE test_2554_error (n UInt32) ENGINE = Log SETTINGS disk = 'default', storage_policy = 'default'; -- { serverError 471 } +CREATE TABLE test_2554_error (n UInt32) ENGINE = Log SETTINGS disk = 'default', storage_policy = 'default'; -- { serverError INVALID_SETTING_VALUE } diff --git a/tests/queries/0_stateless/02560_window_ntile.reference b/tests/queries/0_stateless/02560_window_ntile.reference index 1045fc1011a..d877b2034cb 100644 --- a/tests/queries/0_stateless/02560_window_ntile.reference +++ b/tests/queries/0_stateless/02560_window_ntile.reference @@ -208,15 +208,15 @@ select a, b, ntile(65535) over (partition by a order by b) from (select 1 as a, 1 98 99 1 99 100 -- Bad arguments -select a, b, ntile(3.0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile('2') over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(-2) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(b + 1) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } +select a, b, ntile(3.0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile('2') over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, 
ntile(-2) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(b + 1) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } -- Bad window type -select a, b, ntile(2) over (partition by a) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and unbounded following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between unbounded preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between current row and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b range unbounded preceding) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError 36 } +select a, b, ntile(2) over (partition by a) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and unbounded following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between unbounded preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between current row and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b range unbounded preceding) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02560_window_ntile.sql b/tests/queries/0_stateless/02560_window_ntile.sql index f2acf8fc94e..44e9b865052 100644 --- a/tests/queries/0_stateless/02560_window_ntile.sql +++ b/tests/queries/0_stateless/02560_window_ntile.sql @@ -11,16 +11,16 @@ select a, b, ntile(65535) over (partition by a order by b) from (select 1 as a, -- Bad arguments -select a, b, ntile(3.0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile('2') over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(-2) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, 
ntile(b + 1) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } +select a, b, ntile(3.0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile('2') over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(0) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(-2) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(b + 1) over (partition by a order by b) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } -- Bad window type -select a, b, ntile(2) over (partition by a) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and unbounded following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between unbounded preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b rows between current row and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError 36 } -select a, b, ntile(2) over (partition by a order by b range unbounded preceding) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError 36 } +select a, b, ntile(2) over (partition by a) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and unbounded following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between unbounded preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20)); -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between 4 preceding and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b rows between current row and 4 following) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError BAD_ARGUMENTS } +select a, b, ntile(2) over (partition by a order by b range unbounded preceding) from(select intDiv(number,10) as a, number%10 as b from numbers(20));; -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02560_with_fill_int256_int.sql b/tests/queries/0_stateless/02560_with_fill_int256_int.sql index 2039f7ec233..42647f1099d 100644 --- a/tests/queries/0_stateless/02560_with_fill_int256_int.sql +++ b/tests/queries/0_stateless/02560_with_fill_int256_int.sql @@ -5,5 +5,5 @@ SELECT (number * 2)::UInt256 FROM numbers(10) 
ORDER BY 1 ASC WITH FILL FROM 3 TO SELECT (number * 2)::Int128 FROM numbers(10) ORDER BY 1 ASC WITH FILL FROM -3 TO 5; SELECT (number * 2)::Int256 FROM numbers(10) ORDER BY 1 ASC WITH FILL FROM -3 TO 5; -SELECT (number * 2)::UInt128 FROM numbers(10) ORDER BY 1 ASC WITH FILL FROM -3 TO 5; -- { serverError 69 } -SELECT (number * 2)::UInt256 FROM numbers(10) ORDER BY 1 ASC WITH FILL FROM -3 TO 5; -- { serverError 69 } +SELECT (number * 2)::UInt128 FROM numbers(10) ORDER BY 1 ASC WITH FILL FROM -3 TO 5; -- { serverError ARGUMENT_OUT_OF_BOUND } +SELECT (number * 2)::UInt256 FROM numbers(10) ORDER BY 1 ASC WITH FILL FROM -3 TO 5; -- { serverError ARGUMENT_OUT_OF_BOUND } diff --git a/tests/queries/0_stateless/02561_with_fill_date_datetime_incompatible.sql b/tests/queries/0_stateless/02561_with_fill_date_datetime_incompatible.sql index 458e5047a63..ed634d66cc8 100644 --- a/tests/queries/0_stateless/02561_with_fill_date_datetime_incompatible.sql +++ b/tests/queries/0_stateless/02561_with_fill_date_datetime_incompatible.sql @@ -1,2 +1,2 @@ SELECT today() AS a -ORDER BY a ASC WITH FILL FROM now() - toIntervalMonth(1) TO now() + toIntervalDay(1) STEP 82600; -- { serverError 475 } +ORDER BY a ASC WITH FILL FROM now() - toIntervalMonth(1) TO now() + toIntervalDay(1) STEP 82600; -- { serverError INVALID_WITH_FILL_EXPRESSION } diff --git a/tests/queries/0_stateless/02597_column_delete_and_replication.sql b/tests/queries/0_stateless/02597_column_delete_and_replication.sql index b0257f666d9..9a627645c35 100644 --- a/tests/queries/0_stateless/02597_column_delete_and_replication.sql +++ b/tests/queries/0_stateless/02597_column_delete_and_replication.sql @@ -16,7 +16,7 @@ ALTER TABLE test UPDATE d = d || toString(sleepEachRow(0.3)) where 1; ALTER TABLE test ADD COLUMN x UInt32 default 0; ALTER TABLE test UPDATE d = d || '1' where x = 42; -ALTER TABLE test DROP COLUMN x SETTINGS mutations_sync = 2; --{serverError 36} +ALTER TABLE test DROP COLUMN x SETTINGS mutations_sync = 2; --{serverError BAD_ARGUMENTS} ALTER TABLE test UPDATE x = x + 1 where 1 SETTINGS mutations_sync = 2; diff --git a/tests/queries/0_stateless/02597_column_update_and_replication.sql b/tests/queries/0_stateless/02597_column_update_and_replication.sql index 42fe813f8a1..6f785db317f 100644 --- a/tests/queries/0_stateless/02597_column_update_and_replication.sql +++ b/tests/queries/0_stateless/02597_column_update_and_replication.sql @@ -16,7 +16,7 @@ ALTER TABLE test UPDATE d = d || toString(sleepEachRow(0.3)) where 1; ALTER TABLE test ADD COLUMN x UInt32 default 0; ALTER TABLE test UPDATE x = x + 1 where 1; -ALTER TABLE test DROP COLUMN x SETTINGS mutations_sync = 2; --{serverError 36} +ALTER TABLE test DROP COLUMN x SETTINGS mutations_sync = 2; --{serverError BAD_ARGUMENTS} ALTER TABLE test UPDATE x = x + 1 where 1 SETTINGS mutations_sync = 2; diff --git a/tests/queries/0_stateless/02597_column_update_tricy_expression_and_replication.sql b/tests/queries/0_stateless/02597_column_update_tricy_expression_and_replication.sql index b07b3b54514..34f88b19b7e 100644 --- a/tests/queries/0_stateless/02597_column_update_tricy_expression_and_replication.sql +++ b/tests/queries/0_stateless/02597_column_update_tricy_expression_and_replication.sql @@ -16,7 +16,7 @@ ALTER TABLE test UPDATE d = d + sleepEachRow(0.3) where 1; ALTER TABLE test ADD COLUMN x UInt32 default 0; ALTER TABLE test UPDATE d = x + 1 where 1; -ALTER TABLE test DROP COLUMN x SETTINGS mutations_sync = 2; --{serverError 36} +ALTER TABLE test DROP COLUMN x SETTINGS mutations_sync = 2; 
--{serverError BAD_ARGUMENTS} ALTER TABLE test UPDATE x = x + 1 where 1 SETTINGS mutations_sync = 2; diff --git a/tests/queries/0_stateless/02597_projection_materialize_and_replication.sql b/tests/queries/0_stateless/02597_projection_materialize_and_replication.sql index 031cb3cb6fb..fbdb9027841 100644 --- a/tests/queries/0_stateless/02597_projection_materialize_and_replication.sql +++ b/tests/queries/0_stateless/02597_projection_materialize_and_replication.sql @@ -16,7 +16,7 @@ ALTER TABLE test UPDATE d = d || toString(sleepEachRow(0.3)) where 1; ALTER TABLE test ADD PROJECTION d_order ( SELECT min(c_id) GROUP BY `d`); ALTER TABLE test MATERIALIZE PROJECTION d_order; -ALTER TABLE test DROP PROJECTION d_order SETTINGS mutations_sync = 2; --{serverError 36} +ALTER TABLE test DROP PROJECTION d_order SETTINGS mutations_sync = 2; --{serverError BAD_ARGUMENTS} -- just to wait prev mutation ALTER TABLE test DELETE where d = 'Hello' SETTINGS mutations_sync = 2; diff --git a/tests/queries/0_stateless/02677_analyzer_bitmap_has_any.sql b/tests/queries/0_stateless/02677_analyzer_bitmap_has_any.sql index f0f9845d91d..c06ea009c1d 100644 --- a/tests/queries/0_stateless/02677_analyzer_bitmap_has_any.sql +++ b/tests/queries/0_stateless/02677_analyzer_bitmap_has_any.sql @@ -18,7 +18,7 @@ FROM bitmapHasAny(bitmapBuild([toUInt64(1)]), ( SELECT groupBitmapState(toUInt64(2)) )) has2 -) SETTINGS allow_experimental_analyzer = 0; -- { serverError 43 } +) SETTINGS allow_experimental_analyzer = 0; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } SELECT '--------------'; diff --git a/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.sql b/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.sql index 7705b860e8e..b835ecd2c8c 100644 --- a/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.sql +++ b/tests/queries/0_stateless/02679_query_parameters_dangling_pointer.sql @@ -1,4 +1,4 @@ -- There is no use-after-free in the following query: SET param_o = 'a'; -CREATE TABLE test.xxx (a Int64) ENGINE=MergeTree ORDER BY ({o:String}); -- { serverError 44 } +CREATE TABLE test.xxx (a Int64) ENGINE=MergeTree ORDER BY ({o:String}); -- { serverError ILLEGAL_COLUMN } diff --git a/tests/queries/0_stateless/02701_non_parametric_function.sql b/tests/queries/0_stateless/02701_non_parametric_function.sql index b242bdc72ef..6c708d9aca6 100644 --- a/tests/queries/0_stateless/02701_non_parametric_function.sql +++ b/tests/queries/0_stateless/02701_non_parametric_function.sql @@ -1 +1 @@ -SELECT * FROM system.numbers WHERE number > toUInt64(10)(number) LIMIT 10; -- { serverError 309 } +SELECT * FROM system.numbers WHERE number > toUInt64(10)(number) LIMIT 10; -- { serverError FUNCTION_CANNOT_HAVE_PARAMETERS } diff --git a/tests/queries/0_stateless/02703_row_policies_for_database_combination.sql b/tests/queries/0_stateless/02703_row_policies_for_database_combination.sql index f9b466f1ade..8c93fc595ba 100644 --- a/tests/queries/0_stateless/02703_row_policies_for_database_combination.sql +++ b/tests/queries/0_stateless/02703_row_policies_for_database_combination.sql @@ -73,7 +73,7 @@ SELECT * FROM 02703_db.02703_rptable; CREATE TABLE 02703_db.02703_unexpected_columns (xx UInt8, yy UInt8) ENGINE = MergeTree ORDER BY xx; SELECT 'Policy not applicable'; -SELECT * FROM 02703_db.02703_unexpected_columns; -- { serverError 47 } -- Missing columns: 'x' while processing query +SELECT * FROM 02703_db.02703_unexpected_columns; -- { serverError UNKNOWN_IDENTIFIER } -- Missing columns: 'x' while processing query DROP 
ROW POLICY 02703_filter_5 ON 02703_db.*; SELECT 'None'; diff --git a/tests/queries/0_stateless/02707_keeper_map_delete_update_strict.sql b/tests/queries/0_stateless/02707_keeper_map_delete_update_strict.sql index cf59af2f388..cc599035322 100644 --- a/tests/queries/0_stateless/02707_keeper_map_delete_update_strict.sql +++ b/tests/queries/0_stateless/02707_keeper_map_delete_update_strict.sql @@ -33,7 +33,7 @@ ALTER TABLE 02707_keepermap_delete_update UPDATE value = 'Another' WHERE key > 2 SELECT *, _version FROM 02707_keepermap_delete_update ORDER BY key; SELECT '-----------'; -ALTER TABLE 02707_keepermap_delete_update UPDATE key = key * 10 WHERE 1 = 1; -- { serverError 36 } +ALTER TABLE 02707_keepermap_delete_update UPDATE key = key * 10 WHERE 1 = 1; -- { serverError BAD_ARGUMENTS } SELECT *, _version FROM 02707_keepermap_delete_update ORDER BY key; SELECT '-----------'; diff --git a/tests/queries/0_stateless/02716_create_direct_dict_with_lifetime_throws.sql b/tests/queries/0_stateless/02716_create_direct_dict_with_lifetime_throws.sql index d96f6249e43..763f74fb3a3 100644 --- a/tests/queries/0_stateless/02716_create_direct_dict_with_lifetime_throws.sql +++ b/tests/queries/0_stateless/02716_create_direct_dict_with_lifetime_throws.sql @@ -1,3 +1,3 @@ CREATE TABLE IF NOT EXISTS dict_source (key UInt64, value String) ENGINE=MergeTree ORDER BY key; -CREATE DICTIONARY dict(`key` UInt64,`value` String) PRIMARY KEY key SOURCE(CLICKHOUSE(table 'dict_source')) LAYOUT(DIRECT()) LIFETIME(0); -- { serverError 36 } +CREATE DICTIONARY dict(`key` UInt64,`value` String) PRIMARY KEY key SOURCE(CLICKHOUSE(table 'dict_source')) LAYOUT(DIRECT()) LIFETIME(0); -- { serverError BAD_ARGUMENTS } diff --git a/tests/queries/0_stateless/02771_ignore_data_skipping_indices.sql b/tests/queries/0_stateless/02771_ignore_data_skipping_indices.sql index 951d87fd2c0..d86b65c3291 100644 --- a/tests/queries/0_stateless/02771_ignore_data_skipping_indices.sql +++ b/tests/queries/0_stateless/02771_ignore_data_skipping_indices.sql @@ -16,11 +16,11 @@ ORDER BY key; INSERT INTO data_02771 VALUES (1, 2, 3); SELECT * FROM data_02771; -SELECT * FROM data_02771 SETTINGS ignore_data_skipping_indices=''; -- { serverError 6 } +SELECT * FROM data_02771 SETTINGS ignore_data_skipping_indices=''; -- { serverError CANNOT_PARSE_TEXT } SELECT * FROM data_02771 SETTINGS ignore_data_skipping_indices='x_idx'; SELECT * FROM data_02771 SETTINGS ignore_data_skipping_indices='na_idx'; -SELECT * FROM data_02771 WHERE x = 1 AND y = 1 SETTINGS ignore_data_skipping_indices='xy_idx',force_data_skipping_indices='xy_idx' ; -- { serverError 277 } +SELECT * FROM data_02771 WHERE x = 1 AND y = 1 SETTINGS ignore_data_skipping_indices='xy_idx',force_data_skipping_indices='xy_idx' ; -- { serverError INDEX_NOT_USED } SELECT * FROM data_02771 WHERE x = 1 AND y = 2 SETTINGS ignore_data_skipping_indices='xy_idx'; SET allow_experimental_analyzer = 0; diff --git a/tests/queries/0_stateless/02772_jit_date_time_add.sql b/tests/queries/0_stateless/02772_jit_date_time_add.sql index 61028ac4172..0ba994580f2 100644 --- a/tests/queries/0_stateless/02772_jit_date_time_add.sql +++ b/tests/queries/0_stateless/02772_jit_date_time_add.sql @@ -1,6 +1,6 @@ SET compile_expressions = 1; SET min_count_to_compile_expression = 0; -SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + (number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT 
Null; -- { serverError 407 } +SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + (number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT Null; -- { serverError DECIMAL_OVERFLOW } SELECT DISTINCT result FROM (SELECT toStartOfFifteenMinutes(toDateTime(toStartOfFifteenMinutes(toDateTime(1000.0001220703125) + (number * 65536))) + toInt64(number * 9223372036854775807)) AS result FROM system.numbers LIMIT 1048576) ORDER BY result DESC NULLS FIRST FORMAT Null; SELECT round(round(round(round(round(100)), round(round(round(round(NULL), round(65535)), toTypeName(now() + 9223372036854775807) LIKE 'DateTime%DateTime%DateTime%DateTime%', round(-2)), 255), round(NULL)))); diff --git a/tests/queries/0_stateless/02786_max_execution_time_leaf.sql b/tests/queries/0_stateless/02786_max_execution_time_leaf.sql index 1d02e82569c..f678c913b46 100644 --- a/tests/queries/0_stateless/02786_max_execution_time_leaf.sql +++ b/tests/queries/0_stateless/02786_max_execution_time_leaf.sql @@ -1,4 +1,4 @@ -- Tags: no-fasttest -SELECT count() FROM cluster('test_cluster_two_shards', view( SELECT * FROM numbers(100000000000) )) SETTINGS max_execution_time_leaf = 1; -- { serverError 159 } +SELECT count() FROM cluster('test_cluster_two_shards', view( SELECT * FROM numbers(100000000000) )) SETTINGS max_execution_time_leaf = 1; -- { serverError TIMEOUT_EXCEEDED } -- Can return partial result SELECT count() FROM cluster('test_cluster_two_shards', view( SELECT * FROM numbers(100000000000) )) FORMAT Null SETTINGS max_execution_time_leaf = 1, timeout_overflow_mode_leaf = 'break'; diff --git a/tests/queries/0_stateless/02788_current_schemas_function.sql b/tests/queries/0_stateless/02788_current_schemas_function.sql index 408b21c0e34..8cf03738db2 100644 --- a/tests/queries/0_stateless/02788_current_schemas_function.sql +++ b/tests/queries/0_stateless/02788_current_schemas_function.sql @@ -1,4 +1,4 @@ SELECT current_schemas(true) AS result; SELECT current_schemas(false) AS result; -SELECT current_schemas(1); -- { serverError 43 } -SELECT current_schemas(); -- { serverError 42 } \ No newline at end of file +SELECT current_schemas(1); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT current_schemas(); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } \ No newline at end of file diff --git a/tests/queries/0_stateless/02788_fix_logical_error_in_sorting.sql b/tests/queries/0_stateless/02788_fix_logical_error_in_sorting.sql index 6964d8cf47d..97741e6fcc9 100644 --- a/tests/queries/0_stateless/02788_fix_logical_error_in_sorting.sql +++ b/tests/queries/0_stateless/02788_fix_logical_error_in_sorting.sql @@ -1,4 +1,4 @@ -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; DROP TABLE IF EXISTS session_events; DROP TABLE IF EXISTS event_types; diff --git a/tests/queries/0_stateless/02811_invalid_embedded_rocksdb_create.sql b/tests/queries/0_stateless/02811_invalid_embedded_rocksdb_create.sql index a87ac5e0de0..eca64d3865a 100644 --- a/tests/queries/0_stateless/02811_invalid_embedded_rocksdb_create.sql +++ b/tests/queries/0_stateless/02811_invalid_embedded_rocksdb_create.sql @@ -1,2 +1,2 @@ -- Tags: no-fasttest -CREATE TABLE dict (`k` String, `v` String) ENGINE = EmbeddedRocksDB(k) PRIMARY KEY k; -- {serverError 36} +CREATE TABLE dict (`k` String, `v` String) ENGINE = EmbeddedRocksDB(k) PRIMARY KEY k; -- {serverError BAD_ARGUMENTS} diff --git 
a/tests/queries/0_stateless/02812_pointwise_array_operations.sql b/tests/queries/0_stateless/02812_pointwise_array_operations.sql index e28c4bda347..c10332e4a17 100644 --- a/tests/queries/0_stateless/02812_pointwise_array_operations.sql +++ b/tests/queries/0_stateless/02812_pointwise_array_operations.sql @@ -11,8 +11,8 @@ SELECT ([1,2::UInt64]+[1,number]) from numbers(5); CREATE TABLE my_table (values Array(Int32)) ENGINE = MergeTree() ORDER BY values; INSERT INTO my_table (values) VALUES ([12, 3, 1]); SELECT values - [1,2,3] FROM my_table WHERE arrayExists(x -> x > 5, values); -SELECT ([12,13] % [5,6]); -- { serverError 43 } -SELECT ([2,3,4]-[1,-2,10,29]); -- { serverError 190 } +SELECT ([12,13] % [5,6]); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } +SELECT ([2,3,4]-[1,-2,10,29]); -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } CREATE TABLE a ( x Array(UInt64), y Array(UInt64)) ENGINE = Memory; INSERT INTO a VALUES ([2,3],[4,5]),([1,2,3], [4,5]),([6,7],[8,9,10]); -SELECT x, y, x+y FROM a; -- { serverError 190 } +SELECT x, y, x+y FROM a; -- { serverError SIZES_OF_ARRAYS_DONT_MATCH } diff --git a/tests/queries/0_stateless/02831_regexp_analyze_recursion.sql b/tests/queries/0_stateless/02831_regexp_analyze_recursion.sql index a2075ae903b..800b2c87192 100644 --- a/tests/queries/0_stateless/02831_regexp_analyze_recursion.sql +++ b/tests/queries/0_stateless/02831_regexp_analyze_recursion.sql @@ -1 +1 @@ -SELECT match('', repeat('(', 100000)); -- { serverError 427 } +SELECT match('', repeat('(', 100000)); -- { serverError CANNOT_COMPILE_REGEXP } diff --git a/tests/queries/0_stateless/02834_analyzer_with_statement_references.sql b/tests/queries/0_stateless/02834_analyzer_with_statement_references.sql index 6254c054eec..29ed6e3f0da 100644 --- a/tests/queries/0_stateless/02834_analyzer_with_statement_references.sql +++ b/tests/queries/0_stateless/02834_analyzer_with_statement_references.sql @@ -4,4 +4,4 @@ WITH test_aliases AS (SELECT number FROM numbers(20)), alias2 AS (SELECT number SELECT number FROM alias2 SETTINGS enable_global_with_statement = 1; WITH test_aliases AS (SELECT number FROM numbers(20)), alias2 AS (SELECT number FROM test_aliases) -SELECT number FROM alias2 SETTINGS enable_global_with_statement = 0; -- { serverError 60 } +SELECT number FROM alias2 SETTINGS enable_global_with_statement = 0; -- { serverError UNKNOWN_TABLE } diff --git a/tests/queries/0_stateless/02842_largestTriangleThreeBuckets_aggregate_function.sql b/tests/queries/0_stateless/02842_largestTriangleThreeBuckets_aggregate_function.sql index 254875ba041..d5ef564469e 100644 --- a/tests/queries/0_stateless/02842_largestTriangleThreeBuckets_aggregate_function.sql +++ b/tests/queries/0_stateless/02842_largestTriangleThreeBuckets_aggregate_function.sql @@ -1,4 +1,4 @@ -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; drop table if exists largestTriangleThreeBucketsTestFloat64Float64; CREATE TABLE largestTriangleThreeBucketsTestFloat64Float64 @@ -55,10 +55,10 @@ CREATE TABLE largestTriangleTreeBucketsBucketSizeTest INSERT INTO largestTriangleTreeBucketsBucketSizeTest (x, y) SELECT (number + 1) AS x, (x % 1000) AS y FROM numbers(9999); -SELECT - arrayJoin(lttb(1000)(x, y)) AS point, - tupleElement(point, 1) AS point_x, - point_x - neighbor(point_x, -1) AS point_x_diff_with_previous_row +SELECT + arrayJoin(lttb(1000)(x, y)) AS point, + tupleElement(point, 1) AS point_x, + point_x - neighbor(point_x, -1) AS point_x_diff_with_previous_row FROM largestTriangleTreeBucketsBucketSizeTest 
LIMIT 990, 10; DROP TABLE largestTriangleTreeBucketsBucketSizeTest; diff --git a/tests/queries/0_stateless/02842_truncate_database.sql b/tests/queries/0_stateless/02842_truncate_database.sql index a767acba14c..09ac844cfe2 100644 --- a/tests/queries/0_stateless/02842_truncate_database.sql +++ b/tests/queries/0_stateless/02842_truncate_database.sql @@ -62,14 +62,14 @@ SHOW DICTIONARIES FROM test_truncate_database; TRUNCATE DATABASE test_truncate_database; -SELECT * FROM dest_view_set ORDER BY x LIMIT 1; -- {serverError 60} -SELECT * FROM dest_view_memory ORDER BY x LIMIT 1; -- {serverError 60} -SELECT * FROM dest_view_log ORDER BY x LIMIT 1; -- {serverError 60} -SELECT * FROM dest_view_tiny_log ORDER BY x LIMIT 1; -- {serverError 60} -SELECT * FROM dest_view_stripe_log ORDER BY x LIMIT 1; -- {serverError 60} -SELECT * FROM dest_view_merge_tree ORDER BY x LIMIT 1; -- {serverError 60} +SELECT * FROM dest_view_set ORDER BY x LIMIT 1; -- {serverError UNKNOWN_TABLE} +SELECT * FROM dest_view_memory ORDER BY x LIMIT 1; -- {serverError UNKNOWN_TABLE} +SELECT * FROM dest_view_log ORDER BY x LIMIT 1; -- {serverError UNKNOWN_TABLE} +SELECT * FROM dest_view_tiny_log ORDER BY x LIMIT 1; -- {serverError UNKNOWN_TABLE} +SELECT * FROM dest_view_stripe_log ORDER BY x LIMIT 1; -- {serverError UNKNOWN_TABLE} +SELECT * FROM dest_view_merge_tree ORDER BY x LIMIT 1; -- {serverError UNKNOWN_TABLE} SELECT name, database, element_count FROM system.dictionaries WHERE database = 'test_truncate_database' AND name = 'dest_dictionary'; -SELECT * FROM dest_dictionary; -- {serverError 60} +SELECT * FROM dest_dictionary; -- {serverError UNKNOWN_TABLE} SHOW TABLES FROM test_truncate_database; SHOW DICTIONARIES FROM test_truncate_database; diff --git a/tests/queries/0_stateless/02843_context_has_expired.sql b/tests/queries/0_stateless/02843_context_has_expired.sql index 8355ce2c18c..93204822f47 100644 --- a/tests/queries/0_stateless/02843_context_has_expired.sql +++ b/tests/queries/0_stateless/02843_context_has_expired.sql @@ -26,7 +26,7 @@ SELECT 1 IN (SELECT joinGetOrNull(02843_join, 'value', materialize(1))); SELECT 1 IN (SELECT materialize(connectionId())); SELECT 1000000 IN (SELECT materialize(getSetting('max_threads'))); -SELECT 1 in (SELECT file(materialize('a'))); -- { serverError 107 } +SELECT 1 in (SELECT file(materialize('a'))); -- { serverError FILE_DOESNT_EXIST } EXPLAIN ESTIMATE SELECT 1 IN (SELECT dictGet('02843_dict', 'value', materialize('1'))); EXPLAIN ESTIMATE SELECT 1 IN (SELECT joinGet(`02843_join`, 'value', materialize(1))); diff --git a/tests/queries/0_stateless/02883_array_scalar_mult_div_modulo.sql b/tests/queries/0_stateless/02883_array_scalar_mult_div_modulo.sql index 46145fec08b..28746300330 100644 --- a/tests/queries/0_stateless/02883_array_scalar_mult_div_modulo.sql +++ b/tests/queries/0_stateless/02883_array_scalar_mult_div_modulo.sql @@ -22,4 +22,4 @@ SELECT values * 5 FROM my_table WHERE arrayExists(x -> x > 5, values); DROP TABLE my_table; SELECT [6, 6, 3] % 2; SELECT [6, 6, 3] / 2.5::Decimal(1, 1); -SELECT [1] / 'a'; -- { serverError 43 } +SELECT [1] / 'a'; -- { serverError ILLEGAL_TYPE_OF_ARGUMENT } diff --git a/tests/queries/0_stateless/02888_attach_partition_from_different_tables.sql b/tests/queries/0_stateless/02888_attach_partition_from_different_tables.sql index 98f841394e1..ae930408bef 100644 --- a/tests/queries/0_stateless/02888_attach_partition_from_different_tables.sql +++ b/tests/queries/0_stateless/02888_attach_partition_from_different_tables.sql @@ -17,7 +17,7 @@ CREATE TABLE 
attach_partition_t2 ( ENGINE = MergeTree ORDER BY a; -ALTER TABLE attach_partition_t2 ATTACH PARTITION tuple() FROM attach_partition_t1; -- { serverError 36 } +ALTER TABLE attach_partition_t2 ATTACH PARTITION tuple() FROM attach_partition_t1; -- { serverError BAD_ARGUMENTS } -- test different projection name CREATE TABLE attach_partition_t3 ( @@ -50,7 +50,7 @@ CREATE TABLE attach_partition_t4 ( ENGINE = MergeTree ORDER BY a; -ALTER TABLE attach_partition_t4 ATTACH PARTITION tuple() FROM attach_partition_t3; -- { serverError 36 } +ALTER TABLE attach_partition_t4 ATTACH PARTITION tuple() FROM attach_partition_t3; -- { serverError BAD_ARGUMENTS } -- check attach with same index and projection CREATE TABLE attach_partition_t5 ( diff --git a/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql b/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql index adcdeecb9e1..783a922dfa4 100644 --- a/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql +++ b/tests/queries/0_stateless/02888_system_tables_with_inaccessible_table_function.sql @@ -13,8 +13,8 @@ CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc05 (a int) AS redis('127. CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc06 (a int) AS s3('http://some_addr:9000/cloud-storage-01/data.tsv', 'M9O7o0SX5I4udXhWxI12', '9ijqzmVN83fzD9XDkEAAAAAAAA', 'TSV'); -CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc01_without_schema AS postgresql('127.121.0.1:5432', 'postgres_db', 'postgres_table', 'postgres_user', '124444'); -- { serverError 614 } -CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc02_without_schema AS mysql('127.123.0.1:3306', 'mysql_db', 'mysql_table', 'mysql_user','123123'); -- {serverError 279 } +CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc01_without_schema AS postgresql('127.121.0.1:5432', 'postgres_db', 'postgres_table', 'postgres_user', '124444'); -- { serverError POSTGRESQL_CONNECTION_FAILURE } +CREATE TABLE {CLICKHOUSE_DATABASE:Identifier}.tablefunc02_without_schema AS mysql('127.123.0.1:3306', 'mysql_db', 'mysql_table', 'mysql_user','123123'); -- {serverError ALL_CONNECTION_TRIES_FAILED } SELECT name, engine, engine_full, create_table_query, data_paths, notEmpty([metadata_path]), notEmpty([uuid]) FROM system.tables diff --git a/tests/queries/0_stateless/02899_indexing_by_space_filling_curves.sql b/tests/queries/0_stateless/02899_indexing_by_space_filling_curves.sql index a3989039f50..8e9b9fcadef 100644 --- a/tests/queries/0_stateless/02899_indexing_by_space_filling_curves.sql +++ b/tests/queries/0_stateless/02899_indexing_by_space_filling_curves.sql @@ -9,7 +9,7 @@ SET max_rows_to_read = 8192, force_primary_key = 1, analyze_index_with_space_fil SELECT count() FROM test WHERE x >= 10 AND x <= 20 AND y >= 20 AND y <= 30; SET max_rows_to_read = 8192, force_primary_key = 1, analyze_index_with_space_filling_curves = 0; -SELECT count() FROM test WHERE x >= 10 AND x <= 20 AND y >= 20 AND y <= 30; -- { serverError 277 } +SELECT count() FROM test WHERE x >= 10 AND x <= 20 AND y >= 20 AND y <= 30; -- { serverError INDEX_NOT_USED } DROP TABLE test; diff --git a/tests/queries/0_stateless/02900_issue_55858.sql b/tests/queries/0_stateless/02900_issue_55858.sql index b7b6704cdb5..af01442ce33 100644 --- a/tests/queries/0_stateless/02900_issue_55858.sql +++ b/tests/queries/0_stateless/02900_issue_55858.sql @@ -1,9 +1,9 @@ set precise_float_parsing = 1; -select cast('2023-01-01' as Float64); -- { serverError 6 } -select 
cast('2023-01-01' as Float32); -- { serverError 6 } -select toFloat32('2023-01-01'); -- { serverError 6 } -select toFloat64('2023-01-01'); -- { serverError 6 } +select cast('2023-01-01' as Float64); -- { serverError CANNOT_PARSE_TEXT } +select cast('2023-01-01' as Float32); -- { serverError CANNOT_PARSE_TEXT } +select toFloat32('2023-01-01'); -- { serverError CANNOT_PARSE_TEXT } +select toFloat64('2023-01-01'); -- { serverError CANNOT_PARSE_TEXT } select toFloat32OrZero('2023-01-01'); select toFloat64OrZero('2023-01-01'); select toFloat32OrNull('2023-01-01'); diff --git a/tests/queries/0_stateless/02901_predicate_pushdown_cte_stateful.sql b/tests/queries/0_stateless/02901_predicate_pushdown_cte_stateful.sql index a208519b655..d65b0da42a4 100644 --- a/tests/queries/0_stateless/02901_predicate_pushdown_cte_stateful.sql +++ b/tests/queries/0_stateless/02901_predicate_pushdown_cte_stateful.sql @@ -1,4 +1,4 @@ -SET allow_deprecated_functions = 1; +SET allow_deprecated_error_prone_window_functions = 1; CREATE TABLE t ( diff --git a/tests/queries/0_stateless/02906_force_optimize_projection_name.sql b/tests/queries/0_stateless/02906_force_optimize_projection_name.sql index 6b9d7f74f9f..4fad50f5066 100644 --- a/tests/queries/0_stateless/02906_force_optimize_projection_name.sql +++ b/tests/queries/0_stateless/02906_force_optimize_projection_name.sql @@ -17,9 +17,9 @@ INSERT INTO test SELECT number, 'test' FROM numbers(1, 100); SELECT name FROM test GROUP BY name SETTINGS force_optimize_projection_name='projection_name'; -SELECT name FROM test GROUP BY name SETTINGS force_optimize_projection_name='non_existing_projection'; -- { serverError 117 } +SELECT name FROM test GROUP BY name SETTINGS force_optimize_projection_name='non_existing_projection'; -- { serverError INCORRECT_DATA } -SELECT name FROM test SETTINGS force_optimize_projection_name='projection_name'; -- { serverError 117 } +SELECT name FROM test SETTINGS force_optimize_projection_name='projection_name'; -- { serverError INCORRECT_DATA } INSERT INTO test SELECT number, 'test' FROM numbers(1, 100) SETTINGS force_optimize_projection_name='projection_name'; SELECT 1 SETTINGS force_optimize_projection_name='projection_name'; diff --git a/tests/queries/0_stateless/02906_interval_comparison.sql b/tests/queries/0_stateless/02906_interval_comparison.sql index 92400caa878..feaf403261c 100644 --- a/tests/queries/0_stateless/02906_interval_comparison.sql +++ b/tests/queries/0_stateless/02906_interval_comparison.sql @@ -1,7 +1,7 @@ -- Comparing the same types is ok: SELECT INTERVAL 1 SECOND = INTERVAL 1 SECOND; -- It is reasonable to not give an answer for this: -SELECT INTERVAL 30 DAY < INTERVAL 1 MONTH; -- { serverError 386 } +SELECT INTERVAL 30 DAY < INTERVAL 1 MONTH; -- { serverError NO_COMMON_TYPE } -- This we could change in the future: -SELECT INTERVAL 1 SECOND = INTERVAL 1 YEAR; -- { serverError 386 } -SELECT INTERVAL 1 SECOND <= INTERVAL 1 YEAR; -- { serverError 386 } +SELECT INTERVAL 1 SECOND = INTERVAL 1 YEAR; -- { serverError NO_COMMON_TYPE } +SELECT INTERVAL 1 SECOND <= INTERVAL 1 YEAR; -- { serverError NO_COMMON_TYPE } diff --git a/tests/queries/0_stateless/02910_prefetch_unexpceted_exception.sql b/tests/queries/0_stateless/02910_prefetch_unexpceted_exception.sql index 820e6a2d1e5..d03acf7c7e3 100644 --- a/tests/queries/0_stateless/02910_prefetch_unexpceted_exception.sql +++ b/tests/queries/0_stateless/02910_prefetch_unexpceted_exception.sql @@ -17,7 +17,7 @@ SET allow_prefetched_read_pool_for_local_filesystem=1; SYSTEM ENABLE FAILPOINT 
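The interval test above encodes a deliberate design choice: intervals of the same unit compare directly, while mixed units such as 30 days versus 1 month have no single common type (a month spans 28 to 31 days), so the server refuses with NO_COMMON_TYPE rather than guessing. A sketch, assuming ordering comparisons behave like the same-type equality check the test already exercises:

SELECT INTERVAL 2 DAY < INTERVAL 3 DAY;     -- same unit: well-defined, returns 1
SELECT INTERVAL 30 DAY < INTERVAL 1 MONTH;  -- { serverError NO_COMMON_TYPE }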
prefetched_reader_pool_failpoint; -SELECT * FROM prefetched_table FORMAT Null; --{serverError 36} +SELECT * FROM prefetched_table FORMAT Null; --{serverError BAD_ARGUMENTS} SYSTEM DISABLE FAILPOINT prefetched_reader_pool_failpoint; diff --git a/tests/queries/0_stateless/02932_query_settings_max_size_drop.sql b/tests/queries/0_stateless/02932_query_settings_max_size_drop.sql index 1685861bd2e..b3535ae3f52 100644 --- a/tests/queries/0_stateless/02932_query_settings_max_size_drop.sql +++ b/tests/queries/0_stateless/02932_query_settings_max_size_drop.sql @@ -5,7 +5,7 @@ AS SELECT number FROM numbers(1000) ; -DROP TABLE test_max_size_drop SETTINGS max_table_size_to_drop = 1; -- { serverError 359 } +DROP TABLE test_max_size_drop SETTINGS max_table_size_to_drop = 1; -- { serverError TABLE_SIZE_EXCEEDS_MAX_DROP_SIZE_LIMIT } DROP TABLE test_max_size_drop; CREATE TABLE test_max_size_drop @@ -15,7 +15,7 @@ AS SELECT number FROM numbers(1000) ; -ALTER TABLE test_max_size_drop DROP PARTITION tuple() SETTINGS max_partition_size_to_drop = 1; -- { serverError 359 } +ALTER TABLE test_max_size_drop DROP PARTITION tuple() SETTINGS max_partition_size_to_drop = 1; -- { serverError TABLE_SIZE_EXCEEDS_MAX_DROP_SIZE_LIMIT } ALTER TABLE test_max_size_drop DROP PARTITION tuple(); DROP TABLE test_max_size_drop; @@ -26,6 +26,6 @@ AS SELECT number FROM numbers(1000) ; -ALTER TABLE test_max_size_drop DROP PART 'all_1_1_0' SETTINGS max_partition_size_to_drop = 1; -- { serverError 359 } +ALTER TABLE test_max_size_drop DROP PART 'all_1_1_0' SETTINGS max_partition_size_to_drop = 1; -- { serverError TABLE_SIZE_EXCEEDS_MAX_DROP_SIZE_LIMIT } ALTER TABLE test_max_size_drop DROP PART 'all_1_1_0'; DROP TABLE test_max_size_drop; diff --git a/tests/queries/0_stateless/02943_variant_read_subcolumns.sh b/tests/queries/0_stateless/02943_variant_read_subcolumns.sh index 6bbd127d933..5ca8dd5f36f 100755 --- a/tests/queries/0_stateless/02943_variant_read_subcolumns.sh +++ b/tests/queries/0_stateless/02943_variant_read_subcolumns.sh @@ -7,8 +7,7 @@ CLICKHOUSE_LOG_COMMENT= # shellcheck source=../shell_config.sh . 
"$CUR_DIR"/../shell_config.sh -CH_CLIENT="$CLICKHOUSE_CLIENT --allow_experimental_variant_type=1 --use_variant_as_common_type=1 --allow_suspicious_variant_types=1 --max_insert_threads 4 --group_by_two_level_threshold 752249 --group_by_two_level_threshold_bytes 15083870 --distributed_aggregation_memory_efficient 1 --fsync_metadata 1 --output_format_parallel_formatting 0 --input_format_parallel_parsing 0 --min_chunk_bytes_for_parallel_parsing 6583861 --max_read_buffer_size 640584 --prefer_localhost_replica 1 --max_block_size 38844 --max_threads 48 --optimize_append_index 0 --optimize_if_chain_to_multiif 1 --optimize_if_transform_strings_to_enum 0 --optimize_read_in_order 1 --optimize_or_like_chain 0 --optimize_substitute_columns 1 --enable_multiple_prewhere_read_steps 1 --read_in_order_two_level_merge_threshold 4 --optimize_aggregation_in_order 0 --aggregation_in_order_max_block_bytes 18284646 --use_uncompressed_cache 1 --min_bytes_to_use_direct_io 10737418240 --min_bytes_to_use_mmap_io 10737418240 --local_filesystem_read_method pread --remote_filesystem_read_method read --local_filesystem_read_prefetch 1 --filesystem_cache_segments_batch_size 0 --read_from_filesystem_cache_if_exists_otherwise_bypass_cache 0 --throw_on_error_from_cache_on_write_operations 1 --remote_filesystem_read_prefetch 0 --allow_prefetched_read_pool_for_remote_filesystem 0 --filesystem_prefetch_max_memory_usage 128Mi --filesystem_prefetches_limit 0 --filesystem_prefetch_min_bytes_for_single_read_task 16Mi --filesystem_prefetch_step_marks 50 --filesystem_prefetch_step_bytes 0 --compile_aggregate_expressions 1 --compile_sort_description 0 --merge_tree_coarse_index_granularity 31 --optimize_distinct_in_order 1 --max_bytes_before_external_sort 1 --max_bytes_before_external_group_by 1 --max_bytes_before_remerge_sort 2640239625 --min_compress_block_size 3114155 --max_compress_block_size 226550 --merge_tree_compact_parts_min_granules_to_multibuffer_read 118 --optimize_sorting_by_input_stream_properties 0 --http_response_buffer_size 543038 --http_wait_end_of_query False --enable_memory_bound_merging_of_aggregation_results 1 --min_count_to_compile_expression 3 --min_count_to_compile_aggregate_expression 3 --min_count_to_compile_sort_description 0 --session_timezone America/Mazatlan --prefer_warmed_unmerged_parts_seconds 8 --use_page_cache_for_disks_without_file_cache False --page_cache_inject_eviction True --merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability 0.82 " - +CH_CLIENT="$CLICKHOUSE_CLIENT --allow_experimental_variant_type=1 --use_variant_as_common_type=1 --allow_suspicious_variant_types=1" function test() { diff --git a/tests/queries/0_stateless/02972_parallel_replicas_cte.reference b/tests/queries/0_stateless/02972_parallel_replicas_cte.reference index bbb5a960463..d3a06db1745 100644 --- a/tests/queries/0_stateless/02972_parallel_replicas_cte.reference +++ b/tests/queries/0_stateless/02972_parallel_replicas_cte.reference @@ -1,6 +1,6 @@ -990000 -990000 +900 +900 10 -990000 +900 1 -1000000 +1000 diff --git a/tests/queries/0_stateless/02972_parallel_replicas_cte.sql b/tests/queries/0_stateless/02972_parallel_replicas_cte.sql index c9ab83ff9ad..083b0ecc5c9 100644 --- a/tests/queries/0_stateless/02972_parallel_replicas_cte.sql +++ b/tests/queries/0_stateless/02972_parallel_replicas_cte.sql @@ -3,25 +3,25 @@ DROP TABLE IF EXISTS pr_2; DROP TABLE IF EXISTS numbers_1e6; CREATE TABLE pr_1 (`a` UInt32) ENGINE = MergeTree ORDER BY a PARTITION BY a % 10 AS -SELECT 10 * intDiv(number, 10) + 1 
FROM numbers(1_000_000); +SELECT 10 * intDiv(number, 10) + 1 FROM numbers(1_000); CREATE TABLE pr_2 (`a` UInt32) ENGINE = MergeTree ORDER BY a AS -SELECT * FROM numbers(1_000_000); +SELECT * FROM numbers(1_000); -WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 10000) +WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 100) SELECT count() FROM pr_2 INNER JOIN filtered_groups ON pr_2.a = filtered_groups.a; -WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 10000) +WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 100) SELECT count() FROM pr_2 INNER JOIN filtered_groups ON pr_2.a = filtered_groups.a SETTINGS allow_experimental_parallel_reading_from_replicas = 1, parallel_replicas_for_non_replicated_merge_tree = 1, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', max_parallel_replicas = 3; -- Testing that it is disabled for allow_experimental_analyzer=0. With analyzer it will be supported (with correct result) -WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 10000) +WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 100) SELECT count() FROM pr_2 INNER JOIN filtered_groups ON pr_2.a = filtered_groups.a SETTINGS allow_experimental_analyzer = 0, allow_experimental_parallel_reading_from_replicas = 2, parallel_replicas_for_non_replicated_merge_tree = 1, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', max_parallel_replicas = 3; -- { serverError SUPPORT_IS_DISABLED } -- Disabled for any value of allow_experimental_parallel_reading_from_replicas != 1, not just 2 -WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 10000) +WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 100) SELECT count() FROM pr_2 INNER JOIN filtered_groups ON pr_2.a = filtered_groups.a SETTINGS allow_experimental_analyzer = 0, allow_experimental_parallel_reading_from_replicas = 512, parallel_replicas_for_non_replicated_merge_tree = 1, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', max_parallel_replicas = 3; -- { serverError SUPPORT_IS_DISABLED } @@ -33,7 +33,7 @@ SETTINGS allow_experimental_parallel_reading_from_replicas = 1, parallel_replica SELECT * FROM ( - WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 10000) + WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 100) SELECT count() FROM pr_2 INNER JOIN filtered_groups ON pr_2.a = filtered_groups.a ) SETTINGS allow_experimental_parallel_reading_from_replicas = 1, parallel_replicas_for_non_replicated_merge_tree = 1, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', max_parallel_replicas = 3; @@ -45,31 +45,31 @@ FROM SELECT c + 1 FROM ( - WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 10000) + WITH filtered_groups AS (SELECT a FROM pr_1 WHERE a >= 100) SELECT count() as c FROM pr_2 INNER JOIN filtered_groups ON pr_2.a = filtered_groups.a ) ) SETTINGS allow_experimental_parallel_reading_from_replicas = 1, parallel_replicas_for_non_replicated_merge_tree = 1, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', max_parallel_replicas = 3; -CREATE TABLE numbers_1e6 +CREATE TABLE numbers_1e3 ( `n` UInt64 ) ENGINE = MergeTree ORDER BY n -AS SELECT * FROM numbers(1_000_000); +AS SELECT * FROM numbers(1_000); -- Same but nested CTE's WITH cte1 AS ( SELECT n - FROM numbers_1e6 + FROM numbers_1e3 ), cte2 AS ( SELECT n - FROM numbers_1e6 + FROM numbers_1e3 WHERE n IN (cte1) ) SELECT count() diff --git a/tests/queries/0_stateless/02992_analyzer_group_by_const.reference 
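The new reference values for 02972_parallel_replicas_cte follow directly from the scaled-down data. pr_1 stores 10 * intDiv(n, 10) + 1 for n in [0, 1000), that is 1, 11, ..., 991 with ten copies of each; the filter a >= 100 keeps the 90 distinct values 101..991, and each matches exactly one row of pr_2 (0..999). A quick check of the arithmetic:

SELECT count()
FROM (SELECT 10 * intDiv(number, 10) + 1 AS a FROM numbers(1_000))
WHERE a >= 100;  -- 90 distinct values * 10 copies = 900, matching the updated reference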
b/tests/queries/0_stateless/02992_analyzer_group_by_const.reference index ff61ab0a515..ea9492581c9 100644 --- a/tests/queries/0_stateless/02992_analyzer_group_by_const.reference +++ b/tests/queries/0_stateless/02992_analyzer_group_by_const.reference @@ -4,3 +4,5 @@ a|x String, Const(size = 1, String(size = 1)) String, Const(size = 1, String(size = 1)) 5128475243952187658 +0 0 +0 0 diff --git a/tests/queries/0_stateless/02992_analyzer_group_by_const.sql b/tests/queries/0_stateless/02992_analyzer_group_by_const.sql index f30a49887c7..ede6e0deed9 100644 --- a/tests/queries/0_stateless/02992_analyzer_group_by_const.sql +++ b/tests/queries/0_stateless/02992_analyzer_group_by_const.sql @@ -10,3 +10,23 @@ select dumpColumnStructure('x') GROUP BY 'x'; select dumpColumnStructure('x'); -- from https://github.com/ClickHouse/ClickHouse/pull/60046 SELECT cityHash64('limit', _CAST(materialize('World'), 'LowCardinality(String)')) FROM system.one GROUP BY GROUPING SETS ('limit'); + +WITH ( + SELECT dummy AS x + FROM system.one + ) AS y +SELECT + y, + min(dummy) +FROM remote('127.0.0.{1,2}', system.one) +GROUP BY y; + +WITH ( + SELECT dummy AS x + FROM system.one + ) AS y +SELECT + y, + min(dummy) +FROM remote('127.0.0.{2,3}', system.one) +GROUP BY y; diff --git a/tests/queries/0_stateless/02997_insert_select_too_many_parts_multithread.sql b/tests/queries/0_stateless/02997_insert_select_too_many_parts_multithread.sql index 2dfc8094115..f902f191cb7 100644 --- a/tests/queries/0_stateless/02997_insert_select_too_many_parts_multithread.sql +++ b/tests/queries/0_stateless/02997_insert_select_too_many_parts_multithread.sql @@ -11,6 +11,6 @@ SET max_block_size = 1, min_insert_block_size_rows = 0, min_insert_block_size_by INSERT INTO too_many_parts SELECT * FROM numbers_mt(100); SELECT count() FROM too_many_parts; -INSERT INTO too_many_parts SELECT * FROM numbers_mt(10); -- { serverError 252 } +INSERT INTO too_many_parts SELECT * FROM numbers_mt(10); -- { serverError TOO_MANY_PARTS } DROP TABLE too_many_parts; diff --git a/tests/queries/0_stateless/03008_local_plain_rewritable.sh b/tests/queries/0_stateless/03008_local_plain_rewritable.sh index 1761c7d79b1..5fac964a219 100755 --- a/tests/queries/0_stateless/03008_local_plain_rewritable.sh +++ b/tests/queries/0_stateless/03008_local_plain_rewritable.sh @@ -6,13 +6,13 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CUR_DIR"/../shell_config.sh -${CLICKHOUSE_CLIENT} --query "drop table if exists test_mt sync" +${CLICKHOUSE_CLIENT} --query "drop table if exists 03008_test_local_mt sync" ${CLICKHOUSE_CLIENT} -nm --query " -create table test_mt (a Int32, b Int64, c Int64) +create table 03008_test_local_mt (a Int32, b Int64, c Int64) engine = MergeTree() partition by intDiv(a, 1000) order by tuple(a, b) settings disk = disk( - name = disk_s3_plain, + name = 03008_local_plain_rewritable, type = object_storage, object_storage_type = local, metadata_type = plain_rewritable, @@ -20,34 +20,36 @@ settings disk = disk( " ${CLICKHOUSE_CLIENT} -nm --query " -insert into test_mt (*) values (1, 2, 0), (2, 2, 2), (3, 1, 9), (4, 7, 7), (5, 10, 2), (6, 12, 5); -insert into test_mt (*) select number, number, number from numbers_mt(10000); +insert into 03008_test_local_mt (*) values (1, 2, 0), (2, 2, 2), (3, 1, 9), (4, 7, 7), (5, 10, 2), (6, 12, 5); +insert into 03008_test_local_mt (*) select number, number, number from numbers_mt(10000); " ${CLICKHOUSE_CLIENT} -nm --query " -select count(*) from test_mt; -select (*) from test_mt order by tuple(a, b) limit 10; +select count(*) from 03008_test_local_mt; +select (*) from 03008_test_local_mt order by tuple(a, b) limit 10; " -${CLICKHOUSE_CLIENT} --query "optimize table test_mt final" +${CLICKHOUSE_CLIENT} --query "optimize table 03008_test_local_mt final;" ${CLICKHOUSE_CLIENT} -nm --query " -alter table test_mt modify setting disk = 'disk_s3_plain', old_parts_lifetime = 3600; -select engine_full from system.tables WHERE database = currentDatabase() AND name = 'test_mt'; +alter table 03008_test_local_mt modify setting disk = '03008_local_plain_rewritable', old_parts_lifetime = 3600; +select engine_full from system.tables WHERE database = currentDatabase() AND name = '03008_test_local_mt'; " | grep -c "old_parts_lifetime = 3600" ${CLICKHOUSE_CLIENT} -nm --query " -select count(*) from test_mt; -select (*) from test_mt order by tuple(a, b) limit 10; +select count(*) from 03008_test_local_mt; +select (*) from 03008_test_local_mt order by tuple(a, b) limit 10; " ${CLICKHOUSE_CLIENT} -nm --query " -alter table test_mt update c = 0 where a % 2 = 1; -alter table test_mt add column d Int64 after c; -alter table test_mt drop column c; +alter table 03008_test_local_mt update c = 0 where a % 2 = 1; +alter table 03008_test_local_mt add column d Int64 after c; +alter table 03008_test_local_mt drop column c; " 2>&1 | grep -Fq "SUPPORT_IS_DISABLED" ${CLICKHOUSE_CLIENT} -nm --query " -truncate table test_mt; -select count(*) from test_mt; +truncate table 03008_test_local_mt; +select count(*) from 03008_test_local_mt; " + +${CLICKHOUSE_CLIENT} --query "drop table 03008_test_local_mt sync" diff --git a/tests/queries/0_stateless/03008_s3_plain_rewritable.sh b/tests/queries/0_stateless/03008_s3_plain_rewritable.sh index d72fc47f689..4d5989f6f12 100755 --- a/tests/queries/0_stateless/03008_s3_plain_rewritable.sh +++ b/tests/queries/0_stateless/03008_s3_plain_rewritable.sh @@ -7,47 +7,49 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) # shellcheck source=../shell_config.sh . 
"$CUR_DIR"/../shell_config.sh -${CLICKHOUSE_CLIENT} --query "drop table if exists test_mt" +${CLICKHOUSE_CLIENT} --query "drop table if exists test_s3_mt" ${CLICKHOUSE_CLIENT} -nm --query " -create table test_mt (a Int32, b Int64, c Int64) engine = MergeTree() partition by intDiv(a, 1000) order by tuple(a, b) +create table test_s3_mt (a Int32, b Int64, c Int64) engine = MergeTree() partition by intDiv(a, 1000) order by tuple(a, b) settings disk = disk( - name = s3_plain_rewritable, + name = 03008_s3_plain_rewritable, type = s3_plain_rewritable, - endpoint = 'http://localhost:11111/test/test_mt/', + endpoint = 'http://localhost:11111/test/03008_test_s3_mt/', access_key_id = clickhouse, secret_access_key = clickhouse); " ${CLICKHOUSE_CLIENT} -nm --query " -insert into test_mt (*) values (1, 2, 0), (2, 2, 2), (3, 1, 9), (4, 7, 7), (5, 10, 2), (6, 12, 5); -insert into test_mt (*) select number, number, number from numbers_mt(10000); -select count(*) from test_mt; -select (*) from test_mt order by tuple(a, b) limit 10; +insert into test_s3_mt (*) values (1, 2, 0), (2, 2, 2), (3, 1, 9), (4, 7, 7), (5, 10, 2), (6, 12, 5); +insert into test_s3_mt (*) select number, number, number from numbers_mt(10000); +select count(*) from test_s3_mt; +select (*) from test_s3_mt order by tuple(a, b) limit 10; " -${CLICKHOUSE_CLIENT} --query "optimize table test_mt final" +${CLICKHOUSE_CLIENT} --query "optimize table test_s3_mt final" ${CLICKHOUSE_CLIENT} -m --query " -alter table test_mt add projection test_mt_projection (select * order by b)" 2>&1 | grep -Fq "SUPPORT_IS_DISABLED" +alter table test_s3_mt add projection test_s3_mt_projection (select * order by b)" 2>&1 | grep -Fq "SUPPORT_IS_DISABLED" ${CLICKHOUSE_CLIENT} -nm --query " -alter table test_mt update c = 0 where a % 2 = 1; -alter table test_mt add column d Int64 after c; -alter table test_mt drop column c; +alter table test_s3_mt update c = 0 where a % 2 = 1; +alter table test_s3_mt add column d Int64 after c; +alter table test_s3_mt drop column c; " 2>&1 | grep -Fq "SUPPORT_IS_DISABLED" ${CLICKHOUSE_CLIENT} -nm --query " -detach table test_mt; -attach table test_mt; +detach table test_s3_mt; +attach table test_s3_mt; " -${CLICKHOUSE_CLIENT} --query "drop table if exists test_mt_dst" +${CLICKHOUSE_CLIENT} --query "drop table if exists test_s3_mt_dst" ${CLICKHOUSE_CLIENT} -m --query " -create table test_mt_dst (a Int32, b Int64, c Int64) engine = MergeTree() partition by intDiv(a, 1000) order by tuple(a, b) -settings disk = 's3_plain_rewritable' +create table test_s3_mt_dst (a Int32, b Int64, c Int64) engine = MergeTree() partition by intDiv(a, 1000) order by tuple(a, b) +settings disk = '03008_s3_plain_rewritable' " ${CLICKHOUSE_CLIENT} -m --query " -alter table test_mt move partition 0 to table test_mt_dst" 2>&1 | grep -Fq "SUPPORT_IS_DISABLED" +alter table test_s3_mt move partition 0 to table test_s3_mt_dst" 2>&1 | grep -Fq "SUPPORT_IS_DISABLED" + +${CLICKHOUSE_CLIENT} --query "drop table test_s3_mt sync" diff --git a/tests/queries/0_stateless/03009_storage_memory_circ_buffer_usage.sql b/tests/queries/0_stateless/03009_storage_memory_circ_buffer_usage.sql index fa4ba96277d..8dd96ae2efc 100644 --- a/tests/queries/0_stateless/03009_storage_memory_circ_buffer_usage.sql +++ b/tests/queries/0_stateless/03009_storage_memory_circ_buffer_usage.sql @@ -57,7 +57,7 @@ INSERT INTO memory SELECT * FROM numbers(9000, 10000); SELECT total_bytes FROM system.tables WHERE name = 'memory' and database = currentDatabase(); SELECT 'TESTING INVALID SETTINGS'; -CREATE 
TABLE faulty_memory (i UInt32) ENGINE = Memory SETTINGS min_rows_to_keep = 100; -- { serverError 452 } -CREATE TABLE faulty_memory (i UInt32) ENGINE = Memory SETTINGS min_bytes_to_keep = 100; -- { serverError 452 } +CREATE TABLE faulty_memory (i UInt32) ENGINE = Memory SETTINGS min_rows_to_keep = 100; -- { serverError SETTING_CONSTRAINT_VIOLATION } +CREATE TABLE faulty_memory (i UInt32) ENGINE = Memory SETTINGS min_bytes_to_keep = 100; -- { serverError SETTING_CONSTRAINT_VIOLATION } DROP TABLE memory; \ No newline at end of file diff --git a/tests/queries/0_stateless/03032_storage_memory_modify_settings.sql b/tests/queries/0_stateless/03032_storage_memory_modify_settings.sql index 1507107c37f..2815e8e04d0 100644 --- a/tests/queries/0_stateless/03032_storage_memory_modify_settings.sql +++ b/tests/queries/0_stateless/03032_storage_memory_modify_settings.sql @@ -67,8 +67,8 @@ SELECT total_rows FROM system.tables WHERE name = 'memory' and database = curren SELECT 'TESTING INVALID SETTINGS'; DROP TABLE IF EXISTS memory; CREATE TABLE memory (i UInt32) ENGINE = Memory; -ALTER TABLE memory MODIFY SETTING min_rows_to_keep = 100; -- { serverError 452 } -ALTER TABLE memory MODIFY SETTING min_bytes_to_keep = 100; -- { serverError 452 } +ALTER TABLE memory MODIFY SETTING min_rows_to_keep = 100; -- { serverError SETTING_CONSTRAINT_VIOLATION } +ALTER TABLE memory MODIFY SETTING min_bytes_to_keep = 100; -- { serverError SETTING_CONSTRAINT_VIOLATION } ALTER TABLE memory MODIFY SETTING max_rows_to_keep = 1000; ALTER TABLE memory MODIFY SETTING max_bytes_to_keep = 1000; diff --git a/tests/queries/0_stateless/03039_dynamic_aggregating_merge_tree.reference b/tests/queries/0_stateless/03039_dynamic_aggregating_merge_tree.reference new file mode 100644 index 00000000000..3c186fcc935 --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_aggregating_merge_tree.reference @@ -0,0 +1,32 @@ +MergeTree compact + horizontal merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 +MergeTree wide + horizontal merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 +MergeTree compact + vertical merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 +MergeTree wide + vertical merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 diff --git a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_1.sh b/tests/queries/0_stateless/03039_dynamic_aggregating_merge_tree.sh similarity index 55% rename from tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_1.sh rename to tests/queries/0_stateless/03039_dynamic_aggregating_merge_tree.sh index 9cfd2294c8d..b8760ec0e1d 100755 --- a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_1.sh +++ b/tests/queries/0_stateless/03039_dynamic_aggregating_merge_tree.sh @@ -7,36 +7,11 @@ CLICKHOUSE_LOG_COMMENT= # shellcheck source=../shell_config.sh . 
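Both Memory-engine tests treat a bare min_rows_to_keep or min_bytes_to_keep as SETTING_CONSTRAINT_VIOLATION, while later setting max_rows_to_keep and max_bytes_to_keep succeeds, which suggests the constraint is that a lower bound is only meaningful alongside its upper bound. A hedged sketch of the presumably valid form:

CREATE TABLE memory_ok (i UInt32) ENGINE = Memory
SETTINGS max_bytes_to_keep = 1000, min_bytes_to_keep = 100; -- min together with max should be accepted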
"$CUR_DIR"/../shell_config.sh -CH_CLIENT="$CLICKHOUSE_CLIENT --allow_merge_tree_settings --allow_experimental_dynamic_type=1 --optimize_aggregation_in_order 0 --index_granularity_bytes 10485760 --index_granularity 8128 --merge_max_block_size 8128" - +# Fix some settings to avoid timeouts because of some settings randomization +CH_CLIENT="$CLICKHOUSE_CLIENT --allow_merge_tree_settings --allow_experimental_dynamic_type=1 --index_granularity_bytes 10485760 --index_granularity 8128 --merge_max_block_size 8128 --optimize_aggregation_in_order 0" function test() { - echo "ReplacingMergeTree" - $CH_CLIENT -q "create table test (id UInt64, d Dynamic) engine=ReplacingMergeTree order by id settings $1;" - $CH_CLIENT -q "system stop merges test" - $CH_CLIENT -q "insert into test select number, number from numbers(100000)" - $CH_CLIENT -q "insert into test select number, 'str_' || toString(number) from numbers(50000, 100000)" - - $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" - $CH_CLIENT -nm -q "system start merges test; optimize table test final" - $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" - $CH_CLIENT -q "drop table test" - - echo "SummingMergeTree" - $CH_CLIENT -q "create table test (id UInt64, sum UInt64, d Dynamic) engine=SummingMergeTree(sum) order by id settings $1;" - $CH_CLIENT -q "system stop merges test" - $CH_CLIENT -q "insert into test select number, 1, number from numbers(100000)" - $CH_CLIENT -q "insert into test select number, 1, 'str_' || toString(number) from numbers(50000, 100000)" - - $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" - $CH_CLIENT -q "select count(), sum from test group by sum order by sum, count()" - $CH_CLIENT -nm -q "system start merges test; optimize table test final" - $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" - $CH_CLIENT -q "select count(), sum from test group by sum order by sum, count()" - $CH_CLIENT -q "drop table test" - - echo "AggregatingMergeTree" $CH_CLIENT -q "create table test (id UInt64, sum AggregateFunction(sum, UInt64), d Dynamic) engine=AggregatingMergeTree() order by id settings $1;" $CH_CLIENT -q "system stop merges test" $CH_CLIENT -q "insert into test select number, sumState(1::UInt64), number from numbers(100000) group by number" diff --git a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_1.reference b/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_1.reference deleted file mode 100644 index 6c69b81c183..00000000000 --- a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_1.reference +++ /dev/null @@ -1,88 +0,0 @@ -MergeTree compact + horizontal merge -ReplacingMergeTree -100000 String -100000 UInt64 -50000 UInt64 -100000 String -SummingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 -AggregatingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 -MergeTree wide + horizontal merge -ReplacingMergeTree -100000 String -100000 UInt64 -50000 UInt64 -100000 String -SummingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 -AggregatingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 -MergeTree compact + vertical merge -ReplacingMergeTree -100000 String 
-100000 UInt64 -50000 UInt64 -100000 String -SummingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 -AggregatingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 -MergeTree wide + vertical merge -ReplacingMergeTree -100000 String -100000 UInt64 -50000 UInt64 -100000 String -SummingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 -AggregatingMergeTree -100000 String -100000 UInt64 -200000 1 -50000 String -100000 UInt64 -100000 1 -50000 2 diff --git a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_2.reference b/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_2.reference deleted file mode 100644 index af6c7d8d567..00000000000 --- a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_2.reference +++ /dev/null @@ -1,44 +0,0 @@ -MergeTree compact + horizontal merge -CollapsingMergeTree -100000 String -100000 UInt64 -50000 String -50000 UInt64 -VersionedCollapsingMergeTree -100000 String -100000 UInt64 -75000 String -75000 UInt64 -MergeTree wide + horizontal merge -CollapsingMergeTree -100000 String -100000 UInt64 -50000 String -50000 UInt64 -VersionedCollapsingMergeTree -100000 String -100000 UInt64 -75000 String -75000 UInt64 -MergeTree compact + vertical merge -CollapsingMergeTree -100000 String -100000 UInt64 -50000 String -50000 UInt64 -VersionedCollapsingMergeTree -100000 String -100000 UInt64 -75000 String -75000 UInt64 -MergeTree wide + vertical merge -CollapsingMergeTree -100000 String -100000 UInt64 -50000 String -50000 UInt64 -VersionedCollapsingMergeTree -100000 String -100000 UInt64 -75000 String -75000 UInt64 diff --git a/tests/queries/0_stateless/03039_dynamic_collapsing_merge_tree.reference b/tests/queries/0_stateless/03039_dynamic_collapsing_merge_tree.reference new file mode 100644 index 00000000000..fc293cc2ec8 --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_collapsing_merge_tree.reference @@ -0,0 +1,20 @@ +MergeTree compact + horizontal merge +100000 String +100000 UInt64 +50000 String +50000 UInt64 +MergeTree wide + horizontal merge +100000 String +100000 UInt64 +50000 String +50000 UInt64 +MergeTree compact + vertical merge +100000 String +100000 UInt64 +50000 String +50000 UInt64 +MergeTree wide + vertical merge +100000 String +100000 UInt64 +50000 String +50000 UInt64 diff --git a/tests/queries/0_stateless/03039_dynamic_collapsing_merge_tree.sh b/tests/queries/0_stateless/03039_dynamic_collapsing_merge_tree.sh new file mode 100755 index 00000000000..881c9ec64cc --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_collapsing_merge_tree.sh @@ -0,0 +1,38 @@ +#!/usr/bin/env bash +# Tags: long + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# reset --log_comment +CLICKHOUSE_LOG_COMMENT= +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +# Fix some settings to avoid timeouts because of some settings randomization +CH_CLIENT="$CLICKHOUSE_CLIENT --allow_merge_tree_settings --allow_experimental_dynamic_type=1 --index_granularity_bytes 10485760 --index_granularity 8128 --merge_max_block_size 8128" + +function test() +{ + $CH_CLIENT -q "create table test (id UInt64, sign Int8, d Dynamic) engine=CollapsingMergeTree(sign) order by id settings $1;" + $CH_CLIENT -q "system stop merges test" + $CH_CLIENT -q "insert into test select number, 1, number from numbers(100000)" + $CH_CLIENT -q "insert into test select number, -1, 'str_' || toString(number) from numbers(50000, 100000)" + + $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" + $CH_CLIENT -nm -q "system start merges test; optimize table test final" + $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" + $CH_CLIENT -q "drop table test" +} + +$CH_CLIENT -q "drop table if exists test;" + +echo "MergeTree compact + horizontal merge" +test "min_rows_for_wide_part=100000000000, min_bytes_for_wide_part=1000000000000" + +echo "MergeTree wide + horizontal merge" +test "min_rows_for_wide_part=1, min_bytes_for_wide_part=1" + +echo "MergeTree compact + vertical merge" +test "min_rows_for_wide_part=100000000000, min_bytes_for_wide_part=1000000000000, vertical_merge_algorithm_min_rows_to_activate=1, vertical_merge_algorithm_min_columns_to_activate=1" + +echo "MergeTree wide + vertical merge" +test "min_rows_for_wide_part=1, min_bytes_for_wide_part=1, vertical_merge_algorithm_min_rows_to_activate=1, vertical_merge_algorithm_min_columns_to_activate=1" diff --git a/tests/queries/0_stateless/03039_dynamic_replacing_merge_tree.reference b/tests/queries/0_stateless/03039_dynamic_replacing_merge_tree.reference new file mode 100644 index 00000000000..132b9df6b26 --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_replacing_merge_tree.reference @@ -0,0 +1,20 @@ +MergeTree compact + horizontal merge +100000 String +100000 UInt64 +50000 UInt64 +100000 String +MergeTree wide + horizontal merge +100000 String +100000 UInt64 +50000 UInt64 +100000 String +MergeTree compact + vertical merge +100000 String +100000 UInt64 +50000 UInt64 +100000 String +MergeTree wide + vertical merge +100000 String +100000 UInt64 +50000 UInt64 +100000 String diff --git a/tests/queries/0_stateless/03039_dynamic_replacing_merge_tree.sh b/tests/queries/0_stateless/03039_dynamic_replacing_merge_tree.sh new file mode 100755 index 00000000000..fc9039ac98c --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_replacing_merge_tree.sh @@ -0,0 +1,39 @@ +#!/usr/bin/env bash +# Tags: long + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# reset --log_comment +CLICKHOUSE_LOG_COMMENT= +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +# Fix some settings to avoid timeouts because of some settings randomization +CH_CLIENT="$CLICKHOUSE_CLIENT --allow_merge_tree_settings --allow_experimental_dynamic_type=1 --index_granularity_bytes 10485760 --index_granularity 8128 --merge_max_block_size 8128" + + +function test() +{ + $CH_CLIENT -q "create table test (id UInt64, d Dynamic) engine=ReplacingMergeTree order by id settings $1;" + $CH_CLIENT -q "system stop merges test" + $CH_CLIENT -q "insert into test select number, number from numbers(100000)" + $CH_CLIENT -q "insert into test select number, 'str_' || toString(number) from numbers(50000, 100000)" + + $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" + $CH_CLIENT -nm -q "system start merges test; optimize table test final" + $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" + $CH_CLIENT -q "drop table test" +} + +$CH_CLIENT -q "drop table if exists test;" + +echo "MergeTree compact + horizontal merge" +test "min_rows_for_wide_part=100000000000, min_bytes_for_wide_part=1000000000000, vertical_merge_algorithm_min_rows_to_activate=10000000000, vertical_merge_algorithm_min_columns_to_activate=100000000000" + +echo "MergeTree wide + horizontal merge" +test "min_rows_for_wide_part=1, min_bytes_for_wide_part=1,vertical_merge_algorithm_min_rows_to_activate=1000000000, vertical_merge_algorithm_min_columns_to_activate=1000000000000" + +echo "MergeTree compact + vertical merge" +test "min_rows_for_wide_part=100000000000, min_bytes_for_wide_part=1000000000000, vertical_merge_algorithm_min_rows_to_activate=1, vertical_merge_algorithm_min_columns_to_activate=1" + +echo "MergeTree wide + vertical merge" +test "min_rows_for_wide_part=1, min_bytes_for_wide_part=1, vertical_merge_algorithm_min_rows_to_activate=1, vertical_merge_algorithm_min_columns_to_activate=1" diff --git a/tests/queries/0_stateless/03039_dynamic_summing_merge_tree.reference b/tests/queries/0_stateless/03039_dynamic_summing_merge_tree.reference new file mode 100644 index 00000000000..3c186fcc935 --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_summing_merge_tree.reference @@ -0,0 +1,32 @@ +MergeTree compact + horizontal merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 +MergeTree wide + horizontal merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 +MergeTree compact + vertical merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 +MergeTree wide + vertical merge +100000 String +100000 UInt64 +200000 1 +50000 String +100000 UInt64 +100000 1 +50000 2 diff --git a/tests/queries/0_stateless/03039_dynamic_summing_merge_tree.sh b/tests/queries/0_stateless/03039_dynamic_summing_merge_tree.sh new file mode 100755 index 00000000000..f9da70e95ca --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_summing_merge_tree.sh @@ -0,0 +1,40 @@ +#!/usr/bin/env bash +# Tags: long + +CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# reset --log_comment +CLICKHOUSE_LOG_COMMENT= +# shellcheck source=../shell_config.sh +. 
"$CUR_DIR"/../shell_config.sh + +# Fix some settings to avoid timeouts because of some settings randomization +CH_CLIENT="$CLICKHOUSE_CLIENT --allow_merge_tree_settings --allow_experimental_dynamic_type=1 --index_granularity_bytes 10485760 --index_granularity 8128 --merge_max_block_size 8128" + +function test() +{ + $CH_CLIENT -q "create table test (id UInt64, sum UInt64, d Dynamic) engine=SummingMergeTree(sum) order by id settings $1;" + $CH_CLIENT -q "system stop merges test" + $CH_CLIENT -q "insert into test select number, 1, number from numbers(100000)" + $CH_CLIENT -q "insert into test select number, 1, 'str_' || toString(number) from numbers(50000, 100000)" + + $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" + $CH_CLIENT -q "select count(), sum from test group by sum order by sum, count()" + $CH_CLIENT -nm -q "system start merges test; optimize table test final" + $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" + $CH_CLIENT -q "select count(), sum from test group by sum order by sum, count()" + $CH_CLIENT -q "drop table test" +} + +$CH_CLIENT -q "drop table if exists test;" + +echo "MergeTree compact + horizontal merge" +test "min_rows_for_wide_part=100000000000, min_bytes_for_wide_part=1000000000000, vertical_merge_algorithm_min_rows_to_activate=10000000000, vertical_merge_algorithm_min_columns_to_activate=100000000000" + +echo "MergeTree wide + horizontal merge" +test "min_rows_for_wide_part=1, min_bytes_for_wide_part=1,vertical_merge_algorithm_min_rows_to_activate=1000000000, vertical_merge_algorithm_min_columns_to_activate=1000000000000" + +echo "MergeTree compact + vertical merge" +test "min_rows_for_wide_part=100000000000, min_bytes_for_wide_part=1000000000000, vertical_merge_algorithm_min_rows_to_activate=1, vertical_merge_algorithm_min_columns_to_activate=1" + +echo "MergeTree wide + vertical merge" +test "min_rows_for_wide_part=1, min_bytes_for_wide_part=1, vertical_merge_algorithm_min_rows_to_activate=1, vertical_merge_algorithm_min_columns_to_activate=1" diff --git a/tests/queries/0_stateless/03039_dynamic_versioned_collapsing_merge_tree.reference b/tests/queries/0_stateless/03039_dynamic_versioned_collapsing_merge_tree.reference new file mode 100644 index 00000000000..cabb0fdefab --- /dev/null +++ b/tests/queries/0_stateless/03039_dynamic_versioned_collapsing_merge_tree.reference @@ -0,0 +1,20 @@ +MergeTree compact + horizontal merge +100000 String +100000 UInt64 +75000 String +75000 UInt64 +MergeTree wide + horizontal merge +100000 String +100000 UInt64 +75000 String +75000 UInt64 +MergeTree compact + vertical merge +100000 String +100000 UInt64 +75000 String +75000 UInt64 +MergeTree wide + vertical merge +100000 String +100000 UInt64 +75000 String +75000 UInt64 diff --git a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_2.sh b/tests/queries/0_stateless/03039_dynamic_versioned_collapsing_merge_tree.sh similarity index 64% rename from tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_2.sh rename to tests/queries/0_stateless/03039_dynamic_versioned_collapsing_merge_tree.sh index 02362012960..ca313307a6d 100755 --- a/tests/queries/0_stateless/03039_dynamic_all_merge_algorithms_2.sh +++ b/tests/queries/0_stateless/03039_dynamic_versioned_collapsing_merge_tree.sh @@ -7,23 +7,11 @@ CLICKHOUSE_LOG_COMMENT= # shellcheck source=../shell_config.sh . 
"$CUR_DIR"/../shell_config.sh -CH_CLIENT="$CLICKHOUSE_CLIENT --allow_experimental_dynamic_type=1 --index_granularity_bytes 10485760 --index_granularity 8128 --merge_max_block_size 8128" - +# Fix some settings to avoid timeouts because of some settings randomization +CH_CLIENT="$CLICKHOUSE_CLIENT --allow_merge_tree_settings --allow_experimental_dynamic_type=1 --index_granularity_bytes 10485760 --index_granularity 8128 --merge_max_block_size 8128" function test() { - echo "CollapsingMergeTree" - $CH_CLIENT -q "create table test (id UInt64, sign Int8, d Dynamic) engine=CollapsingMergeTree(sign) order by id settings $1;" - $CH_CLIENT -q "system stop merges test" - $CH_CLIENT -q "insert into test select number, 1, number from numbers(100000)" - $CH_CLIENT -q "insert into test select number, -1, 'str_' || toString(number) from numbers(50000, 100000)" - - $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" - $CH_CLIENT -nm -q "system start merges test; optimize table test final" - $CH_CLIENT -q "select count(), dynamicType(d) from test group by dynamicType(d) order by count(), dynamicType(d)" - $CH_CLIENT -q "drop table test" - - echo "VersionedCollapsingMergeTree" $CH_CLIENT -q "create table test (id UInt64, sign Int8, version UInt8, d Dynamic) engine=VersionedCollapsingMergeTree(sign, version) order by id settings $1;" $CH_CLIENT -q "system stop merges test" $CH_CLIENT -q "insert into test select number, 1, 1, number from numbers(100000)" diff --git a/tests/queries/0_stateless/03095_window_functions_qualify.sql b/tests/queries/0_stateless/03095_window_functions_qualify.sql index 35e203a2ffc..adedff2e2cf 100644 --- a/tests/queries/0_stateless/03095_window_functions_qualify.sql +++ b/tests/queries/0_stateless/03095_window_functions_qualify.sql @@ -27,10 +27,10 @@ SELECT '--'; EXPLAIN header = 1, actions = 1 SELECT number, COUNT() OVER (PARTITION BY number % 3) AS partition_count FROM numbers(10) QUALIFY COUNT() OVER (PARTITION BY number % 3) = 4 ORDER BY number; -SELECT number % toUInt256(2) AS key, count() FROM numbers(10) GROUP BY key WITH CUBE WITH TOTALS QUALIFY key = toNullable(toNullable(0)); -- { serverError 48 } +SELECT number % toUInt256(2) AS key, count() FROM numbers(10) GROUP BY key WITH CUBE WITH TOTALS QUALIFY key = toNullable(toNullable(0)); -- { serverError NOT_IMPLEMENTED } -SELECT number % 2 AS key, count(materialize(5)) IGNORE NULLS FROM numbers(10) WHERE toLowCardinality(toLowCardinality(materialize(2))) GROUP BY key WITH CUBE WITH TOTALS QUALIFY key = 0; -- { serverError 48 } +SELECT number % 2 AS key, count(materialize(5)) IGNORE NULLS FROM numbers(10) WHERE toLowCardinality(toLowCardinality(materialize(2))) GROUP BY key WITH CUBE WITH TOTALS QUALIFY key = 0; -- { serverError NOT_IMPLEMENTED } -SELECT 4, count(4) IGNORE NULLS, number % 2 AS key FROM numbers(10) GROUP BY key WITH ROLLUP WITH TOTALS QUALIFY key = materialize(0); -- { serverError 48 } +SELECT 4, count(4) IGNORE NULLS, number % 2 AS key FROM numbers(10) GROUP BY key WITH ROLLUP WITH TOTALS QUALIFY key = materialize(0); -- { serverError NOT_IMPLEMENTED } -SELECT 3, number % toLowCardinality(2) AS key, count() IGNORE NULLS FROM numbers(10) GROUP BY key WITH ROLLUP WITH TOTALS QUALIFY key = 0; -- { serverError 48 } +SELECT 3, number % toLowCardinality(2) AS key, count() IGNORE NULLS FROM numbers(10) GROUP BY key WITH ROLLUP WITH TOTALS QUALIFY key = 0; -- { serverError NOT_IMPLEMENTED } diff --git 
a/tests/queries/0_stateless/03096_text_log_format_string_args_not_empty.sql b/tests/queries/0_stateless/03096_text_log_format_string_args_not_empty.sql index cffc8a49c67..b1ddd141e04 100644 --- a/tests/queries/0_stateless/03096_text_log_format_string_args_not_empty.sql +++ b/tests/queries/0_stateless/03096_text_log_format_string_args_not_empty.sql @@ -1,8 +1,8 @@ set allow_experimental_analyzer = true; -select count; -- { serverError 47 } +select count; -- { serverError UNKNOWN_IDENTIFIER } -select conut(); -- { serverError 46 } +select conut(); -- { serverError UNKNOWN_FUNCTION } system flush logs; diff --git a/tests/queries/0_stateless/03130_generateSnowflakeId.reference b/tests/queries/0_stateless/03130_generateSnowflakeId.reference index f5b7872f81e..39669d21bee 100644 --- a/tests/queries/0_stateless/03130_generateSnowflakeId.reference +++ b/tests/queries/0_stateless/03130_generateSnowflakeId.reference @@ -1,9 +1,5 @@ --- generateSnowflakeID 1 0 0 1 100 --- generateSnowflakeIDThreadMonotonic -1 -100 diff --git a/tests/queries/0_stateless/03130_generateSnowflakeId.sql b/tests/queries/0_stateless/03130_generateSnowflakeId.sql index 57cdd21a9fe..0717c81aa0d 100644 --- a/tests/queries/0_stateless/03130_generateSnowflakeId.sql +++ b/tests/queries/0_stateless/03130_generateSnowflakeId.sql @@ -1,4 +1,4 @@ -SELECT '-- generateSnowflakeID'; +-- Test SQL function 'generateSnowflakeID' SELECT bitAnd(bitShiftRight(toUInt64(generateSnowflakeID()), 63), 1) = 0; -- check first bit is zero @@ -14,16 +14,3 @@ FROM SELECT DISTINCT generateSnowflakeID() FROM numbers(100) ); - -SELECT '-- generateSnowflakeIDThreadMonotonic'; - -SELECT bitAnd(bitShiftRight(toUInt64(generateSnowflakeIDThreadMonotonic()), 63), 1) = 0; -- check first bit is zero - -SELECT generateSnowflakeIDThreadMonotonic(1, 2); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH } - -SELECT count(*) -FROM -( - SELECT DISTINCT generateSnowflakeIDThreadMonotonic() - FROM numbers(100) -); diff --git a/tests/queries/0_stateless/03131_deprecated_functions.sql b/tests/queries/0_stateless/03131_deprecated_functions.sql index 35cfe648c00..acdf36a50da 100644 --- a/tests/queries/0_stateless/03131_deprecated_functions.sql +++ b/tests/queries/0_stateless/03131_deprecated_functions.sql @@ -1,10 +1,10 @@ -SELECT number, neighbor(number, 2) FROM system.numbers LIMIT 10; -- { serverError 721 } +SELECT number, neighbor(number, 2) FROM system.numbers LIMIT 10; -- { serverError DEPRECATED_FUNCTION } -SELECT runningDifference(number) FROM system.numbers LIMIT 10; -- { serverError 721 } +SELECT runningDifference(number) FROM system.numbers LIMIT 10; -- { serverError DEPRECATED_FUNCTION } -SELECT k, runningAccumulate(sum_k) AS res FROM (SELECT number as k, sumState(k) AS sum_k FROM numbers(10) GROUP BY k ORDER BY k); -- { serverError 721 } +SELECT k, runningAccumulate(sum_k) AS res FROM (SELECT number as k, sumState(k) AS sum_k FROM numbers(10) GROUP BY k ORDER BY k); -- { serverError DEPRECATED_FUNCTION } -SET allow_deprecated_functions=1; +SET allow_deprecated_error_prone_window_functions=1; SELECT number, neighbor(number, 2) FROM system.numbers LIMIT 10 FORMAT Null; diff --git a/tests/queries/0_stateless/03143_prewhere_profile_events.reference b/tests/queries/0_stateless/03143_prewhere_profile_events.reference new file mode 100644 index 00000000000..32c93b89dc5 --- /dev/null +++ b/tests/queries/0_stateless/03143_prewhere_profile_events.reference @@ -0,0 +1,4 @@ +52503 10000000 +52503 10052503 +26273 10000000 +0 10052503 diff --git 
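The remaining assertions in the Snowflake test follow the classic layout of these ids: a zero top bit, then a millisecond timestamp, a machine id and a per-machine counter packed into one UInt64 (the usual 41/10/12 field split, stated here as an assumption rather than read from this diff). The first check isolates the top bit:

SELECT bitAnd(bitShiftRight(toUInt64(generateSnowflakeID()), 63), 1); -- highest bit, expected to be 0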
a/tests/queries/0_stateless/03143_prewhere_profile_events.sh b/tests/queries/0_stateless/03143_prewhere_profile_events.sh new file mode 100755 index 00000000000..863fcc1fe01 --- /dev/null +++ b/tests/queries/0_stateless/03143_prewhere_profile_events.sh @@ -0,0 +1,84 @@ +#!/usr/bin/env bash +# Tags: no-random-merge-tree-settings + +CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd) +# shellcheck source=../shell_config.sh +. "$CURDIR"/../shell_config.sh + +${CLICKHOUSE_CLIENT} -nq " + DROP TABLE IF EXISTS t; + + CREATE TABLE t(a UInt32, b UInt32, c UInt32, d UInt32) ENGINE=MergeTree ORDER BY a SETTINGS min_bytes_for_wide_part=1, min_rows_for_wide_part=1; + + INSERT INTO t SELECT number, number, number, number FROM numbers_mt(1e7); + + OPTIMIZE TABLE t FINAL; +" + +query_id_1=$RANDOM$RANDOM +query_id_2=$RANDOM$RANDOM +query_id_3=$RANDOM$RANDOM +query_id_4=$RANDOM$RANDOM + +client_opts=( + --max_block_size 65409 + --max_threads 8 +) + +${CLICKHOUSE_CLIENT} "${client_opts[@]}" --query_id "$query_id_1" -nq " + SELECT * + FROM t +PREWHERE (b % 8192) = 42 + WHERE c = 42 + FORMAT Null +" + +${CLICKHOUSE_CLIENT} "${client_opts[@]}" --query_id "$query_id_2" -nq " + SELECT * + FROM t +PREWHERE (b % 8192) = 42 AND (c % 8192) = 42 + WHERE d = 42 + FORMAT Null +settings enable_multiple_prewhere_read_steps=1; +" + +${CLICKHOUSE_CLIENT} "${client_opts[@]}" --query_id "$query_id_3" -nq " + SELECT * + FROM t +PREWHERE (b % 8192) = 42 AND (c % 16384) = 42 + WHERE d = 42 + FORMAT Null +settings enable_multiple_prewhere_read_steps=0; +" + +${CLICKHOUSE_CLIENT} "${client_opts[@]}" --query_id "$query_id_4" -nq " + SELECT b, c + FROM t +PREWHERE (b % 8192) = 42 AND (c % 8192) = 42 + FORMAT Null +settings enable_multiple_prewhere_read_steps=1; +" + +${CLICKHOUSE_CLIENT} -nq " + SYSTEM FLUSH LOGS; + + -- 52503 which is 43 * number of granules, 10000000 + SELECT ProfileEvents['RowsReadByMainReader'], ProfileEvents['RowsReadByPrewhereReaders'] + FROM system.query_log + WHERE current_database=currentDatabase() AND query_id = '$query_id_1' and type = 'QueryFinish'; + + -- 52503, 10052503 which is the sum of 10000000 from the first prewhere step plus 52503 from the second + SELECT ProfileEvents['RowsReadByMainReader'], ProfileEvents['RowsReadByPrewhereReaders'] + FROM system.query_log + WHERE current_database=currentDatabase() AND query_id = '$query_id_2' and type = 'QueryFinish'; + + -- 26273 the same as query #1 but twice less data (43 * ceil((52503 / 43) / 2)), 10000000 + SELECT ProfileEvents['RowsReadByMainReader'], ProfileEvents['RowsReadByPrewhereReaders'] + FROM system.query_log + WHERE current_database=currentDatabase() AND query_id = '$query_id_3' and type = 'QueryFinish'; + + -- 0, 10052503 + SELECT ProfileEvents['RowsReadByMainReader'], ProfileEvents['RowsReadByPrewhereReaders'] + FROM system.query_log + WHERE current_database=currentDatabase() AND query_id = '$query_id_4' and type = 'QueryFinish'; +" diff --git a/tests/queries/0_stateless/03143_window_functions_qualify_validation.sql b/tests/queries/0_stateless/03143_window_functions_qualify_validation.sql index 2b6d1820b00..5adbe7ff2a7 100644 --- a/tests/queries/0_stateless/03143_window_functions_qualify_validation.sql +++ b/tests/queries/0_stateless/03143_window_functions_qualify_validation.sql @@ -19,8 +19,8 @@ CREATE TABLE uk_price_paid ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2); -SELECT count(), (quantile(0.9)(price) OVER ()) AS price_quantile FROM uk_price_paid WHERE toYear(date) = 2023 QUALIFY price > price_quantile; -- { 
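The reference values of the prewhere test are granule arithmetic, assuming the default index_granularity of 8192 (the test is tagged no-random-merge-tree-settings, so the default should hold): 10 million rows make ceil(10000000 / 8192) = 1221 granules, and the 43-rows-per-granule factor from the in-test comments gives 43 * 1221 = 52503 and 43 * ceil(1221 / 2) = 26273. The numbers can be reproduced directly:

SELECT ceil(10000000 / 8192), 43 * 1221, 43 * ceil(1221 / 2); -- 1221, 52503, 26273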
serverError 215 } +SELECT count(), (quantile(0.9)(price) OVER ()) AS price_quantile FROM uk_price_paid WHERE toYear(date) = 2023 QUALIFY price > price_quantile; -- { serverError NOT_AN_AGGREGATE } -SELECT count() FROM uk_price_paid WHERE toYear(date) = 2023 QUALIFY price > (quantile(0.9)(price) OVER ()); -- { serverError 215 } +SELECT count() FROM uk_price_paid WHERE toYear(date) = 2023 QUALIFY price > (quantile(0.9)(price) OVER ()); -- { serverError NOT_AN_AGGREGATE } DROP TABLE uk_price_paid; diff --git a/tests/queries/0_stateless/03147_table_function_loop.reference b/tests/queries/0_stateless/03147_table_function_loop.reference new file mode 100644 index 00000000000..46a2310b65f --- /dev/null +++ b/tests/queries/0_stateless/03147_table_function_loop.reference @@ -0,0 +1,65 @@ +0 +1 +2 +0 +1 +2 +0 +1 +2 +0 +0 +1 +2 +0 +1 +2 +0 +1 +2 +0 +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +0 +1 +2 +3 +4 +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +0 +1 +2 +3 +4 +0 +1 +2 +3 +4 +5 +6 +7 +8 +9 +0 +1 +2 +3 +4 diff --git a/tests/queries/0_stateless/03147_table_function_loop.sql b/tests/queries/0_stateless/03147_table_function_loop.sql new file mode 100644 index 00000000000..af48e4b11e3 --- /dev/null +++ b/tests/queries/0_stateless/03147_table_function_loop.sql @@ -0,0 +1,14 @@ +-- Tags: no-parallel + +SELECT * FROM loop(numbers(3)) LIMIT 10; +SELECT * FROM loop (numbers(3)) LIMIT 10 settings max_block_size = 1; + +DROP DATABASE IF EXISTS 03147_db; +CREATE DATABASE IF NOT EXISTS 03147_db; +CREATE TABLE 03147_db.t (n Int8) ENGINE=MergeTree ORDER BY n; +INSERT INTO 03147_db.t SELECT * FROM numbers(10); +USE 03147_db; + +SELECT * FROM loop(03147_db.t) LIMIT 15; +SELECT * FROM loop(t) LIMIT 15; +SELECT * FROM loop(03147_db, t) LIMIT 15; diff --git a/tests/queries/0_stateless/03156_analyzer_array_join_distributed.reference b/tests/queries/0_stateless/03156_analyzer_array_join_distributed.reference new file mode 100644 index 00000000000..b5b2aec9c12 --- /dev/null +++ b/tests/queries/0_stateless/03156_analyzer_array_join_distributed.reference @@ -0,0 +1,12 @@ +Hello [1,2] 1 +Hello [1,2] 2 +Hello [1,2] 1 +Hello [1,2] 1 +Hello [1,2] 2 +Hello [1,2] 2 +Hello 1 +Hello 2 +Hello 1 +Hello 1 +Hello 2 +Hello 2 diff --git a/tests/queries/0_stateless/03156_analyzer_array_join_distributed.sql b/tests/queries/0_stateless/03156_analyzer_array_join_distributed.sql new file mode 100644 index 00000000000..f605a369822 --- /dev/null +++ b/tests/queries/0_stateless/03156_analyzer_array_join_distributed.sql @@ -0,0 +1,10 @@ +CREATE TABLE arrays_test (s String, arr Array(UInt8)) ENGINE = MergeTree() ORDER BY (s); + +INSERT INTO arrays_test VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []); + +SELECT s, arr, a FROM remote('127.0.0.2', currentDatabase(), arrays_test) ARRAY JOIN arr AS a WHERE a < 3 ORDER BY a; +SELECT s, arr, a FROM remote('127.0.0.{1,2}', currentDatabase(), arrays_test) ARRAY JOIN arr AS a WHERE a < 3 ORDER BY a; + + +SELECT s, arr FROM remote('127.0.0.2', currentDatabase(), arrays_test) ARRAY JOIN arr WHERE arr < 3 ORDER BY arr; +SELECT s, arr FROM remote('127.0.0.{1,2}', currentDatabase(), arrays_test) ARRAY JOIN arr WHERE arr < 3 ORDER BY arr; diff --git a/tests/queries/0_stateless/03156_tuple_map_low_cardinality.reference b/tests/queries/0_stateless/03156_tuple_map_low_cardinality.reference new file mode 100644 index 00000000000..5b2a36927ee --- /dev/null +++ b/tests/queries/0_stateless/03156_tuple_map_low_cardinality.reference @@ -0,0 +1,6 @@ +100000 +100000 +100000 +100000 +100000 +100000 diff --git 
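The loop table function test above is easiest to read against its reference file: loop restarts its inner source endlessly, so the first query yields 0 1 2 0 1 2 0 1 2 0 once LIMIT 10 cuts the cycle, and the LIMIT 15 queries wrap a full pass of 0..9 plus 0..4. The three call forms exercised are equivalent:

SELECT * FROM loop(numbers(3)) LIMIT 10;   -- cycles the source: 0 1 2 0 1 2 0 1 2 0
SELECT * FROM loop(03147_db.t) LIMIT 15;   -- qualified table reference
SELECT * FROM loop(03147_db, t) LIMIT 15;  -- database and table as separate arguments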
diff --git a/tests/queries/0_stateless/03156_tuple_map_low_cardinality.sql b/tests/queries/0_stateless/03156_tuple_map_low_cardinality.sql
new file mode 100644
index 00000000000..836b426a9a9
--- /dev/null
+++ b/tests/queries/0_stateless/03156_tuple_map_low_cardinality.sql
@@ -0,0 +1,33 @@
+DROP TABLE IF EXISTS t_map_lc;
+
+CREATE TABLE t_map_lc
+(
+    id UInt64,
+    t Tuple(m Map(LowCardinality(String), LowCardinality(String)))
+)
+ENGINE = MergeTree ORDER BY id SETTINGS min_bytes_for_wide_part = 0;
+
+INSERT INTO t_map_lc SELECT * FROM generateRandom('id UInt64, t Tuple(m Map(LowCardinality(String), LowCardinality(String)))') LIMIT 100000;
+
+SELECT count(), FROM t_map_lc WHERE NOT ignore(*, mapKeys(t.m));
+SELECT count(), FROM t_map_lc WHERE NOT ignore(*, t.m.keys);
+SELECT count(), FROM t_map_lc WHERE NOT ignore(*, t.m.values);
+SELECT * FROM t_map_lc WHERE mapContains(t.m, 'not_existing_key_1337');
+
+DROP TABLE t_map_lc;
+
+CREATE TABLE t_map_lc
+(
+    id UInt64,
+    t Tuple(m Map(LowCardinality(String), LowCardinality(String)))
+)
+ENGINE = MergeTree ORDER BY id SETTINGS min_bytes_for_wide_part = '10G';
+
+INSERT INTO t_map_lc SELECT * FROM generateRandom('id UInt64, t Tuple(m Map(LowCardinality(String), LowCardinality(String)))') LIMIT 100000;
+
+SELECT count(), FROM t_map_lc WHERE NOT ignore(*, mapKeys(t.m));
+SELECT count(), FROM t_map_lc WHERE NOT ignore(*, t.m.keys);
+SELECT count(), FROM t_map_lc WHERE NOT ignore(*, t.m.values);
+SELECT * FROM t_map_lc WHERE mapContains(t.m, 'not_existing_key_1337');
+
+DROP TABLE t_map_lc;
diff --git a/tests/queries/0_stateless/03161_cnf_reduction.reference b/tests/queries/0_stateless/03161_cnf_reduction.reference
new file mode 100644
index 00000000000..5e39c0f3223
--- /dev/null
+++ b/tests/queries/0_stateless/03161_cnf_reduction.reference
@@ -0,0 +1,23 @@
+-- Expected plan with analyzer:
+SELECT id
+FROM `03161_table`
+WHERE f
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 1
+
+-- Expected result with analyzer:
+1
+
+-- Expected plan w/o analyzer:
+SELECT id
+FROM `03161_table`
+WHERE f
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 0
+
+-- Expected result w/o analyzer:
+1
+
+-- Reproducer from the issue with analyzer
+2
+
+-- Reproducer from the issue w/o analyzer
+2
diff --git a/tests/queries/0_stateless/03161_cnf_reduction.sql b/tests/queries/0_stateless/03161_cnf_reduction.sql
new file mode 100644
index 00000000000..b34e9171d45
--- /dev/null
+++ b/tests/queries/0_stateless/03161_cnf_reduction.sql
@@ -0,0 +1,72 @@
+DROP TABLE IF EXISTS 03161_table;
+
+CREATE TABLE 03161_table (id UInt32, f UInt8) ENGINE = Memory;
+
+INSERT INTO 03161_table VALUES (0, 0), (1, 1), (2, 0);
+
+SELECT '-- Expected plan with analyzer:';
+
+EXPLAIN SYNTAX
+SELECT id
+FROM 03161_table
+WHERE f AND (NOT(f) OR f)
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 1;
+
+SELECT '';
+
+SELECT '-- Expected result with analyzer:';
+
+SELECT id
+FROM 03161_table
+WHERE f AND (NOT(f) OR f)
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 1;
+
+SELECT '';
+
+SELECT '-- Expected plan w/o analyzer:';
+
+EXPLAIN SYNTAX
+SELECT id
+FROM 03161_table
+WHERE f AND (NOT(f) OR f)
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 0;
+
+SELECT '';
+
+SELECT '-- Expected result w/o analyzer:';
+
+SELECT id
+FROM 03161_table
+WHERE f AND (NOT(f) OR f)
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 0;
+
+DROP TABLE IF EXISTS 03161_table;
+
+-- Checking reproducer from GitHub issue
+-- https://github.com/ClickHouse/ClickHouse/issues/57400
+
+DROP TABLE IF EXISTS 03161_reproducer;
+
+CREATE TABLE 03161_reproducer (c0 UInt8, c1 UInt8, c2 UInt8, c3 UInt8, c4 UInt8, c5 UInt8, c6 UInt8, c7 UInt8, c8 UInt8, c9 UInt8) ENGINE = Memory;
+
+INSERT INTO 03161_reproducer VALUES (0, 0, 0, 0, 0, 0, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0, 0, 0, 0, 1), (0, 0, 0, 0, 0, 0, 0, 0, 1, 0), (0, 0, 0, 0, 0, 0, 0, 0, 1, 1), (0, 0, 0, 0, 0, 0, 0, 1, 0, 0), (0, 0, 0, 0, 0, 0, 0, 1, 0, 1), (0, 0, 0, 0, 0, 0, 0, 1, 1, 0), (0, 0, 0, 0, 0, 0, 0, 1, 1, 1);
+
+SELECT '';
+
+SELECT '-- Reproducer from the issue with analyzer';
+
+SELECT count()
+FROM 03161_reproducer
+WHERE ((NOT c2) AND c2 AND (NOT c1)) OR ((NOT c2) AND c3 AND (NOT c5)) OR ((NOT c7) AND (NOT c8)) OR (c9 AND c6 AND c8 AND (NOT c8) AND (NOT c7))
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 1;
+
+SELECT '';
+
+SELECT '-- Reproducer from the issue w/o analyzer';
+
+SELECT count()
+FROM 03161_reproducer
+WHERE ((NOT c2) AND c2 AND (NOT c1)) OR ((NOT c2) AND c3 AND (NOT c5)) OR ((NOT c7) AND (NOT c8)) OR (c9 AND c6 AND c8 AND (NOT c8) AND (NOT c7))
+SETTINGS convert_query_to_cnf = 1, optimize_using_constraints = 1, allow_experimental_analyzer = 0;
+
+DROP TABLE IF EXISTS 03161_reproducer;
diff --git a/tests/queries/0_stateless/03161_ipv4_ipv6_equality.reference b/tests/queries/0_stateless/03161_ipv4_ipv6_equality.reference
new file mode 100644
index 00000000000..2a4cb2e658f
--- /dev/null
+++ b/tests/queries/0_stateless/03161_ipv4_ipv6_equality.reference
@@ -0,0 +1,8 @@
+1
+1
+0
+0
+0
+0
+0
+0
diff --git a/tests/queries/0_stateless/03161_ipv4_ipv6_equality.sql b/tests/queries/0_stateless/03161_ipv4_ipv6_equality.sql
new file mode 100644
index 00000000000..da2a660977a
--- /dev/null
+++ b/tests/queries/0_stateless/03161_ipv4_ipv6_equality.sql
@@ -0,0 +1,11 @@
+-- Equal
+SELECT toIPv4('127.0.0.1') = toIPv6('::ffff:127.0.0.1');
+SELECT toIPv6('::ffff:127.0.0.1') = toIPv4('127.0.0.1');
+
+-- Not equal
+SELECT toIPv4('127.0.0.1') = toIPv6('::ffff:127.0.0.2');
+SELECT toIPv4('127.0.0.2') = toIPv6('::ffff:127.0.0.1');
+SELECT toIPv6('::ffff:127.0.0.1') = toIPv4('127.0.0.2');
+SELECT toIPv6('::ffff:127.0.0.2') = toIPv4('127.0.0.1');
+SELECT toIPv4('127.0.0.1') = toIPv6('::ffef:127.0.0.1');
+SELECT toIPv6('::ffef:127.0.0.1') = toIPv4('127.0.0.1');
\ No newline at end of file
diff --git a/tests/queries/0_stateless/03164_s3_settings_for_queries_and_merges.reference b/tests/queries/0_stateless/03164_s3_settings_for_queries_and_merges.reference
new file mode 100644
index 00000000000..a2aef9837d3
--- /dev/null
+++ b/tests/queries/0_stateless/03164_s3_settings_for_queries_and_merges.reference
@@ -0,0 +1,3 @@
+655360
+18 0
+2 1
diff --git a/tests/queries/0_stateless/03164_s3_settings_for_queries_and_merges.sql b/tests/queries/0_stateless/03164_s3_settings_for_queries_and_merges.sql
new file mode 100644
index 00000000000..652b27b8a67
--- /dev/null
+++ b/tests/queries/0_stateless/03164_s3_settings_for_queries_and_merges.sql
@@ -0,0 +1,40 @@
+-- Tags: no-random-settings, no-fasttest
+
+SET allow_prefetched_read_pool_for_remote_filesystem=0;
+SET allow_prefetched_read_pool_for_local_filesystem=0;
+SET max_threads = 1;
+SET remote_read_min_bytes_for_seek = 100000;
+-- Will affect INSERT, but not merges
+SET s3_check_objects_after_upload=1;
+
+DROP TABLE IF EXISTS t_compact_bytes_s3;
+CREATE TABLE t_compact_bytes_s3(c1 UInt32, c2 UInt32, c3 UInt32, c4 UInt32, c5 UInt32)
+ENGINE = MergeTree ORDER BY c1
+SETTINGS index_granularity = 512, min_bytes_for_wide_part = '10G', storage_policy = 's3_no_cache';
+
+INSERT INTO t_compact_bytes_s3 SELECT number, number, number, number, number FROM numbers(512 * 32 * 40);
+
+SYSTEM DROP MARK CACHE;
+OPTIMIZE TABLE t_compact_bytes_s3 FINAL;
+
+SYSTEM DROP MARK CACHE;
+SELECT count() FROM t_compact_bytes_s3 WHERE NOT ignore(c2, c4);
+SYSTEM FLUSH LOGS;
+
+SELECT
+    ProfileEvents['S3ReadRequestsCount'],
+    ProfileEvents['ReadBufferFromS3Bytes'] < ProfileEvents['ReadCompressedBytes'] * 1.1
+FROM system.query_log
+WHERE event_date >= yesterday() AND type = 'QueryFinish'
+    AND current_database = currentDatabase()
+    AND query ilike '%INSERT INTO t_compact_bytes_s3 SELECT number, number, number%';
+
+SELECT
+    ProfileEvents['S3ReadRequestsCount'],
+    ProfileEvents['ReadBufferFromS3Bytes'] < ProfileEvents['ReadCompressedBytes'] * 1.1
+FROM system.query_log
+WHERE event_date >= yesterday() AND type = 'QueryFinish'
+    AND current_database = currentDatabase()
+    AND query ilike '%OPTIMIZE TABLE t_compact_bytes_s3 FINAL%';
+
+DROP TABLE IF EXISTS t_compact_bytes_s3;
diff --git a/tests/queries/0_stateless/data_bson/comments.bson b/tests/queries/0_stateless/data_bson/comments.bson
index 9aa4b6e6562..06681c51976 100644
Binary files a/tests/queries/0_stateless/data_bson/comments.bson and b/tests/queries/0_stateless/data_bson/comments.bson differ
diff --git a/tests/queries/0_stateless/data_bson/comments_new.bson b/tests/queries/0_stateless/data_bson/comments_new.bson
new file mode 100644
index 00000000000..aa9ee9bdbb4
Binary files /dev/null and b/tests/queries/0_stateless/data_bson/comments_new.bson differ
diff --git a/tests/queries/1_stateful/00144_functions_of_aggregation_states.sql b/tests/queries/1_stateful/00144_functions_of_aggregation_states.sql
index c5cd45d68b3..e30c132d242 100644
--- a/tests/queries/1_stateful/00144_functions_of_aggregation_states.sql
+++ b/tests/queries/1_stateful/00144_functions_of_aggregation_states.sql
@@ -1,3 +1,3 @@
-SET allow_deprecated_functions = 1;
+SET allow_deprecated_error_prone_window_functions = 1;
 
 SELECT EventDate, finalizeAggregation(state), runningAccumulate(state) FROM (SELECT EventDate, uniqState(UserID) AS state FROM test.hits GROUP BY EventDate ORDER BY EventDate);
diff --git a/utils/changelog/changelog.py b/utils/changelog/changelog.py
index 6b70952eced..304ab568e3c 100755
--- a/utils/changelog/changelog.py
+++ b/utils/changelog/changelog.py
@@ -3,18 +3,20 @@
 import argparse
 import logging
-import os.path as p
 import os
+import os.path as p
 import re
 from datetime import date, timedelta
-from subprocess import CalledProcessError, DEVNULL
+from subprocess import DEVNULL, CalledProcessError
 from typing import Dict, List, Optional, TextIO
 
 from fuzzywuzzy.fuzz import ratio  # type: ignore
-from github_helper import GitHub, PullRequest, PullRequests, Repository
 from github.GithubException import RateLimitExceededException, UnknownObjectException
 from github.NamedUser import NamedUser
-from git_helper import is_shallow, git_runner as runner
+
+from git_helper import git_runner as runner
+from git_helper import is_shallow
+from github_helper import GitHub, PullRequest, PullRequests, Repository
 
 
 # This array gives the preferred category order, and is also used to
 # normalize category names.
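Two of the changelog.py hunks below change behavior rather than layout: the category list gains "Critical Bug Fix", and the issue-URL regex is widened so that links carrying an anchor such as #issuecomment-... or #event-... are also converted to markdown links. A quick sketch of the widened substitution's effect (standard-library `re` only, using the exact pattern from the hunk):

```python
import re

# The widened pattern from the hunk below: group 2 captures the full URL,
# anchor included, while group 3 captures just the issue number.
entry = "See https://github.com/ClickHouse/ClickHouse/issues/57400#issuecomment-1 for details"
linked = re.sub(
    r"([^(])(https://github.com/ClickHouse/ClickHouse/issues/([0-9]{4,})[-#a-z0-9]*)",
    r"\1[#\3](\2)",
    entry,
)
print(linked)
# See [#57400](https://github.com/ClickHouse/ClickHouse/issues/57400#issuecomment-1) for details
```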
@@ -25,6 +27,7 @@ categories_preferred_order = (
     "New Feature",
     "Performance Improvement",
     "Improvement",
+    "Critical Bug Fix",
     "Bug Fix",
     "Build/Testing/Packaging Improvement",
     "Other",
@@ -57,9 +60,10 @@ class Description:
             self.entry,
         )
         # 2) issue URL w/o markdown link
+        # including #issuecomment-1 or #event-12
         entry = re.sub(
-            r"([^(])https://github.com/ClickHouse/ClickHouse/issues/([0-9]{4,})",
-            r"\1[#\2](https://github.com/ClickHouse/ClickHouse/issues/\2)",
+            r"([^(])(https://github.com/ClickHouse/ClickHouse/issues/([0-9]{4,})[-#a-z0-9]*)",
+            r"\1[#\3](\2)",
             entry,
         )
         # It's possible that we face a secondary rate limit.
@@ -112,7 +116,7 @@ def get_descriptions(prs: PullRequests) -> Dict[str, List[Description]]:
         in_changelog = merge_commit in SHA_IN_CHANGELOG
         if in_changelog:
             desc = generate_description(pr, repos[repo_name])
-            if desc is not None:
+            if desc:
                 if desc.category not in descriptions:
                     descriptions[desc.category] = []
                 descriptions[desc.category].append(desc)
@@ -187,7 +191,7 @@ def parse_args() -> argparse.Namespace:
 
 
 # This function mirrors the PR description checks in ClickhousePullRequestTrigger.
-# Returns False if the PR should not be mentioned changelog.
+# Returns None if the PR should not be mentioned in the changelog.
 def generate_description(item: PullRequest, repo: Repository) -> Optional[Description]:
     backport_number = item.number
     if item.head.ref.startswith("backport/"):
@@ -270,7 +274,6 @@ def generate_description(item: PullRequest, repo: Repository) -> Optional[Descri
             category,
         ):
             category = "Bug Fix (user-visible misbehavior in an official stable release)"
-            return Description(item.number, item.user, item.html_url, item.title, category)
 
     # Filter out documentations changelog
     if re.match(
@@ -299,8 +302,9 @@ def generate_description(item: PullRequest, repo: Repository) -> Optional[Descri
     return Description(item.number, item.user, item.html_url, entry, category)
 
 
-def write_changelog(fd: TextIO, descriptions: Dict[str, List[Description]]):
-    year = date.today().year
+def write_changelog(
+    fd: TextIO, descriptions: Dict[str, List[Description]], year: int
+) -> None:
     to_commit = runner(f"git rev-parse {TO_REF}^{{}}")[:11]
     from_commit = runner(f"git rev-parse {FROM_REF}^{{}}")[:11]
     fd.write(
@@ -358,6 +362,12 @@ def set_sha_in_changelog():
     ).split("\n")
 
 
+def get_year(prs: PullRequests) -> int:
+    if not prs:
+        return date.today().year
+    return max(pr.created_at.year for pr in prs)
+
+
 def main():
     log_levels = [logging.WARN, logging.INFO, logging.DEBUG]
     args = parse_args()
@@ -411,8 +421,9 @@ def main():
     prs = gh.get_pulls_from_search(query=query, merged=merged, sort="created")
 
     descriptions = get_descriptions(prs)
+    changelog_year = get_year(prs)
 
-    write_changelog(args.output, descriptions)
+    write_changelog(args.output, descriptions, changelog_year)
 
 
 if __name__ == "__main__":
diff --git a/utils/check-style/aspell-ignore/en/aspell-dict.txt b/utils/check-style/aspell-ignore/en/aspell-dict.txt
index 8f8d74f39ad..244f2ad98ff 100644
--- a/utils/check-style/aspell-ignore/en/aspell-dict.txt
+++ b/utils/check-style/aspell-ignore/en/aspell-dict.txt
@@ -246,7 +246,6 @@ DockerHub
 DoubleDelta
 Doxygen
 Durre
-doesnt
 ECMA
 Ecto
 EdgeAngle
@@ -1358,6 +1357,7 @@ cond
 conf
 config
 configs
+conformant
 congruential
 conjuction
 conjuctive
@@ -1414,8 +1414,12 @@ cutQueryString
 cutQueryStringAndFragment
 cutToFirstSignificantSubdomain
 cutToFirstSignificantSubdomainCustom
+cutToFirstSignificantSubdomainCustomRFC
 cutToFirstSignificantSubdomainCustomWithWWW
+cutToFirstSignificantSubdomainCustomWithWWWRFC
+cutToFirstSignificantSubdomainRFC
 cutToFirstSignificantSubdomainWithWWW
+cutToFirstSignificantSubdomainWithWWWRFC
 cutURLParameter
 cutWWW
 cyrus
@@ -1502,7 +1506,10 @@ displaySecretsInShowAndSelect
 distro
 divideDecimal
 dmesg
+doesnt
+domainRFC
 domainWithoutWWW
+domainWithoutWWWRFC
 dont
 dotProduct
 downsampling
@@ -1575,8 +1582,11 @@ filesystems
 finalizeAggregation
 fips
 firstLine
+firstSignficantSubdomain
 firstSignificantSubdomain
 firstSignificantSubdomainCustom
+firstSignificantSubdomainCustomRFC
+firstSignificantSubdomainRFC
 fixedstring
 flamegraph
 flatbuffers
@@ -1619,7 +1629,6 @@ generateRandom
 generateRandomStructure
 generateSeries
 generateSnowflakeID
-generateSnowflakeIDThreadMonotonic
 generateULID
 generateUUIDv
 geoDistance
@@ -2157,6 +2166,7 @@ polygonsUnionSpherical
 polygonsWithinCartesian
 polygonsWithinSpherical
 popcnt
+portRFC
 porthttps
 positionCaseInsensitive
 positionCaseInsensitiveUTF
@@ -2661,6 +2671,9 @@ toStartOfSecond
 toStartOfTenMinutes
 toStartOfWeek
 toStartOfYear
+toStartOfMicrosecond
+toStartOfMillisecond
+toStartOfNanosecond
 toString
 toStringCutToZero
 toTime
@@ -2691,6 +2704,7 @@ toolset
 topK
 topKWeighted
 topLevelDomain
+topLevelDomainRFC
 topk
 topkweighted
 transactional
@@ -2789,6 +2803,7 @@ urls
 usearch
 userspace
 userver
+UTCTimestamp
 utils
 uuid
 uuidv