Merge branch 'master' into kssenii-patch-12

Kseniia Sumarokova 2024-08-23 10:20:49 +02:00 committed by GitHub
commit d50a9cdec1
62 changed files with 1818 additions and 83 deletions

View File

@@ -130,6 +130,7 @@ jobs:
     with:
       build_name: package_debug
       data: ${{ needs.RunConfig.outputs.data }}
+      force: true
   BuilderBinDarwin:
     needs: [RunConfig, BuildDockers]
     if: ${{ !failure() && !cancelled() }}

View File

@@ -34,7 +34,7 @@ curl https://clickhouse.com/ | sh
 Every month we get together with the community (users, contributors, customers, those interested in learning more about ClickHouse) to discuss what is coming in the latest release. If you are interested in sharing what you've built on ClickHouse, let us know.
-* [v24.8 Community Call](https://clickhouse.com/company/events/v24-8-community-release-call) - August 20
+* [v24.9 Community Call](https://clickhouse.com/company/events/v24-9-community-release-call) - September 26

 ## Upcoming Events
@@ -45,12 +45,20 @@ The following upcoming meetups are featuring creator of ClickHouse & CTO, Alexey
 * [ClickHouse Guangzhou User Group Meetup](https://mp.weixin.qq.com/s/GSvo-7xUoVzCsuUvlLTpCw) - August 25
 * [San Francisco Meetup (Cloudflare)](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/302540575) - September 5
 * [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/clickhouse-nc-meetup-group/events/302557230) - September 9
-* [New York Meetup (Ramp)](https://www.meetup.com/clickhouse-new-york-user-group/events/302575342) - September 10
+* [New York Meetup (Rokt)](https://www.meetup.com/clickhouse-new-york-user-group/events/302575342) - September 10
 * [Chicago Meetup (Jump Capital)](https://lu.ma/43tvmrfw) - September 12
+
+Other upcoming meetups
+* [Seattle Meetup (Statsig)](https://www.meetup.com/clickhouse-seattle-user-group/events/302518075/) - August 27
+* [Melbourne Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302732666/) - August 27
+* [Sydney Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302862966/) - September 5
+* [Zurich Meetup](https://www.meetup.com/clickhouse-switzerland-meetup-group/events/302267429/) - September 5
+* [Toronto Meetup (Shopify)](https://www.meetup.com/clickhouse-toronto-user-group/events/301490855/) - September 10
+* [London Meetup](https://www.meetup.com/clickhouse-london-user-group/events/302977267) - September 17

 ## Recent Recordings
 * **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Current featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"
-* **Recording available**: [**v24.4 Release Call**](https://www.youtube.com/watch?v=dtUqgcfOGmE) All the features of 24.4, one convenient video! Watch it now!
+* **Recording available**: [**v24.8 LTS Release Call**](https://www.youtube.com/watch?v=AeLmp2jc51k) All the features of 24.8 LTS, one convenient video! Watch it now!

 ## Interested in joining ClickHouse and making it your full-time job?

View File

@@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.8.1.2684"
+ARG VERSION="24.8.2.3"
 ARG PACKAGES="clickhouse-keeper"
 ARG DIRECT_DOWNLOAD_URLS=""

View File

@@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.8.1.2684"
+ARG VERSION="24.8.2.3"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
 ARG DIRECT_DOWNLOAD_URLS=""

View File

@@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="24.8.1.2684"
+ARG VERSION="24.8.2.3"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
 #docker-official-library:off

View File

@@ -0,0 +1,71 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.5.5.41-stable (441d4a6ebe3) compared to v24.5.4.49-stable (63b760955a0)
#### Improvement
* Backported in [#66768](https://github.com/ClickHouse/ClickHouse/issues/66768): Make allow_experimental_analyzer be controlled by the initiator for distributed queries. This ensures compatibility and correctness during operations in mixed version clusters. [#65777](https://github.com/ClickHouse/ClickHouse/pull/65777) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
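
As a rough illustration of the entry above (the cluster and table names are hypothetical; the setting itself is real), the initiator's value of `allow_experimental_analyzer` is now the one that remote nodes honour for a distributed query:

```sql
-- Hypothetical cluster/table names; with this backport the value set here on the
-- initiator is propagated to the remote shards instead of each shard using its own default.
SELECT count()
FROM cluster('default', 'db', 'events')
SETTINGS allow_experimental_analyzer = 1;
```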
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#65350](https://github.com/ClickHouse/ClickHouse/issues/65350): Fix possible abort on uncaught exception in ~WriteBufferFromFileDescriptor in StatusFile. [#64206](https://github.com/ClickHouse/ClickHouse/pull/64206) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#65621](https://github.com/ClickHouse/ClickHouse/issues/65621): Fix `Cannot find column` in distributed query with `ARRAY JOIN` by `Nested` column. Fixes [#64755](https://github.com/ClickHouse/ClickHouse/issues/64755). [#64801](https://github.com/ClickHouse/ClickHouse/pull/64801) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67902](https://github.com/ClickHouse/ClickHouse/issues/67902): Fixing the `Not-ready Set` error after the `PREWHERE` optimization for StorageMerge. [#65057](https://github.com/ClickHouse/ClickHouse/pull/65057) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66884](https://github.com/ClickHouse/ClickHouse/issues/66884): Fix unexpected size of low cardinality column in function calls. [#65298](https://github.com/ClickHouse/ClickHouse/pull/65298) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#65933](https://github.com/ClickHouse/ClickHouse/issues/65933): For queries that read from `PostgreSQL`, cancel the internal `PostgreSQL` query if the ClickHouse query is finished. Otherwise, the ClickHouse query cannot be canceled until the internal `PostgreSQL` query is finished. [#65771](https://github.com/ClickHouse/ClickHouse/pull/65771) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#66301](https://github.com/ClickHouse/ClickHouse/issues/66301): Better handling of join conditions involving `IS NULL` checks (for example `ON (a = b AND (a IS NOT NULL) AND (b IS NOT NULL) ) OR ( (a IS NULL) AND (b IS NULL) )` is rewritten to `ON a <=> b`), and fix an incorrect optimization when conditions other than `IS NULL` are present. [#65835](https://github.com/ClickHouse/ClickHouse/pull/65835) ([vdimir](https://github.com/vdimir)).
* Backported in [#66328](https://github.com/ClickHouse/ClickHouse/issues/66328): Add missing settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines/input_format_csv_try_infer_numbers_from_strings/input_format_csv_try_infer_strings_from_quoted_tuples` to the schema inference cache because they can change the resulting schema. This prevents incorrect schema inference results when these settings are changed. [#65980](https://github.com/ClickHouse/ClickHouse/pull/65980) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68252](https://github.com/ClickHouse/ClickHouse/issues/68252): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#66155](https://github.com/ClickHouse/ClickHouse/issues/66155): Fixed buffer overflow bug in `unbin`/`unhex` implementation. [#66106](https://github.com/ClickHouse/ClickHouse/pull/66106) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#66454](https://github.com/ClickHouse/ClickHouse/issues/66454): Fixed a bug in ZooKeeper client: a session could get stuck in unusable state after receiving a hardware error from ZooKeeper. For example, this might happen due to "soft memory limit" in ClickHouse Keeper. [#66140](https://github.com/ClickHouse/ClickHouse/pull/66140) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66226](https://github.com/ClickHouse/ClickHouse/issues/66226): Fix issue in SumIfToCountIfVisitor and signed integers. [#66146](https://github.com/ClickHouse/ClickHouse/pull/66146) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66680](https://github.com/ClickHouse/ClickHouse/issues/66680): Fix handling limit for `system.numbers_mt` when no index can be used. [#66231](https://github.com/ClickHouse/ClickHouse/pull/66231) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66604](https://github.com/ClickHouse/ClickHouse/issues/66604): Fixed how the ClickHouse server detects the maximum number of usable CPU cores as specified by cgroups v2 if the server runs in a container such as Docker. In more detail, containers often run their process in the root cgroup which has an empty name. In that case, ClickHouse ignored the CPU limits set by cgroups v2. [#66237](https://github.com/ClickHouse/ClickHouse/pull/66237) ([filimonov](https://github.com/filimonov)).
* Backported in [#66360](https://github.com/ClickHouse/ClickHouse/issues/66360): Fix the `Not-ready set` error when a subquery with `IN` is used in the constraint. [#66261](https://github.com/ClickHouse/ClickHouse/pull/66261) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68064](https://github.com/ClickHouse/ClickHouse/issues/68064): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68158](https://github.com/ClickHouse/ClickHouse/issues/68158): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#66972](https://github.com/ClickHouse/ClickHouse/issues/66972): Fix `Column identifier is already registered` error with `group_by_use_nulls=true` and new analyzer. [#66400](https://github.com/ClickHouse/ClickHouse/pull/66400) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66691](https://github.com/ClickHouse/ClickHouse/issues/66691): Fix the VALID UNTIL clause in the user definition resetting after a restart. Closes [#66405](https://github.com/ClickHouse/ClickHouse/issues/66405). [#66409](https://github.com/ClickHouse/ClickHouse/pull/66409) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#66969](https://github.com/ClickHouse/ClickHouse/issues/66969): Fix `Cannot find column` error for queries with constant expression in `GROUP BY` key and new analyzer enabled. [#66433](https://github.com/ClickHouse/ClickHouse/pull/66433) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66720](https://github.com/ClickHouse/ClickHouse/issues/66720): Correctly track memory for `Allocator::realloc`. [#66548](https://github.com/ClickHouse/ClickHouse/pull/66548) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66951](https://github.com/ClickHouse/ClickHouse/issues/66951): Fix an invalid result for queries with `WINDOW`. This could happen when `PARTITION` columns have sparse serialization and window functions are executed in parallel. [#66579](https://github.com/ClickHouse/ClickHouse/pull/66579) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66757](https://github.com/ClickHouse/ClickHouse/issues/66757): Fix `Unknown identifier` and `Column is not under aggregate function` errors for queries with the expression `(column IS NULL).` The bug was triggered by [#65088](https://github.com/ClickHouse/ClickHouse/issues/65088), with the disabled analyzer only. [#66654](https://github.com/ClickHouse/ClickHouse/pull/66654) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66948](https://github.com/ClickHouse/ClickHouse/issues/66948): Fix `Method getResultType is not supported for QUERY query node` error when scalar subquery was used as the first argument of IN (with new analyzer). [#66655](https://github.com/ClickHouse/ClickHouse/pull/66655) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68115](https://github.com/ClickHouse/ClickHouse/issues/68115): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67633](https://github.com/ClickHouse/ClickHouse/issues/67633): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67481](https://github.com/ClickHouse/ClickHouse/issues/67481): In rare cases ClickHouse could consider parts as broken because of some unexpected projections on disk. Now it's fixed. [#66898](https://github.com/ClickHouse/ClickHouse/pull/66898) ([alesapin](https://github.com/alesapin)).
* Backported in [#67814](https://github.com/ClickHouse/ClickHouse/issues/67814): Only relevant to the experimental Variant data type. Fix crash with Variant + AggregateFunction type. [#67122](https://github.com/ClickHouse/ClickHouse/pull/67122) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67197](https://github.com/ClickHouse/ClickHouse/issues/67197): TRUNCATE DATABASE used to stop replication as if it was a DROP DATABASE query, it's fixed. [#67129](https://github.com/ClickHouse/ClickHouse/pull/67129) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#67379](https://github.com/ClickHouse/ClickHouse/issues/67379): Fix error `Cannot convert column because it is non constant in source stream but must be constant in result.` for a query that reads from the `Merge` table over the `Distributed` table with one shard. [#67146](https://github.com/ClickHouse/ClickHouse/pull/67146) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67501](https://github.com/ClickHouse/ClickHouse/issues/67501): Fix crash in DistributedAsyncInsert when connection is empty. [#67219](https://github.com/ClickHouse/ClickHouse/pull/67219) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#67886](https://github.com/ClickHouse/ClickHouse/issues/67886): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67576](https://github.com/ClickHouse/ClickHouse/issues/67576): Fix execution of nested short-circuit functions. [#67520](https://github.com/ClickHouse/ClickHouse/pull/67520) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67850](https://github.com/ClickHouse/ClickHouse/issues/67850): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68272](https://github.com/ClickHouse/ClickHouse/issues/68272): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67807](https://github.com/ClickHouse/ClickHouse/issues/67807): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67836](https://github.com/ClickHouse/ClickHouse/issues/67836): Fix potential stack overflow in `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong. The previous name is still kept for compatibility. Improved diagnostic of errors in the function. This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67991](https://github.com/ClickHouse/ClickHouse/issues/67991): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68207](https://github.com/ClickHouse/ClickHouse/issues/68207): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68091](https://github.com/ClickHouse/ClickHouse/issues/68091): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68122](https://github.com/ClickHouse/ClickHouse/issues/68122): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to its predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68171](https://github.com/ClickHouse/ClickHouse/issues/68171): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68337](https://github.com/ClickHouse/ClickHouse/issues/68337): Try fix postgres crash when query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68667](https://github.com/ClickHouse/ClickHouse/issues/68667): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
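
As a minimal sketch of the last entry above (key values are arbitrary): before the fix, applying the keyed SipHash variants to empty arrays or tuples could fail with `LOGICAL_ERROR`.

```sql
-- Keyed SipHash over empty containers; such calls previously could raise LOGICAL_ERROR.
SELECT
    sipHash64Keyed((1::UInt64, 2::UInt64), []) AS hash_of_empty_array,
    sipHash64Keyed((1::UInt64, 2::UInt64), tuple()) AS hash_of_empty_tuple;
```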
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#66387](https://github.com/ClickHouse/ClickHouse/issues/66387): Disable broken cases from 02911_join_on_nullsafe_optimization. [#66310](https://github.com/ClickHouse/ClickHouse/pull/66310) ([vdimir](https://github.com/vdimir)).
* Backported in [#66426](https://github.com/ClickHouse/ClickHouse/issues/66426): Ignore subquery for IN in DDLLoadingDependencyVisitor. [#66395](https://github.com/ClickHouse/ClickHouse/pull/66395) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66544](https://github.com/ClickHouse/ClickHouse/issues/66544): Add additional log masking in CI. [#66523](https://github.com/ClickHouse/ClickHouse/pull/66523) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66859](https://github.com/ClickHouse/ClickHouse/issues/66859): Fix data race in S3::ClientCache. [#66644](https://github.com/ClickHouse/ClickHouse/pull/66644) ([Konstantin Morozov](https://github.com/k-morozov)).
* Backported in [#66875](https://github.com/ClickHouse/ClickHouse/issues/66875): Support one more case in JOIN ON ... IS NULL. [#66725](https://github.com/ClickHouse/ClickHouse/pull/66725) ([vdimir](https://github.com/vdimir)).
* Backported in [#67059](https://github.com/ClickHouse/ClickHouse/issues/67059): Increase asio pool size in case the server is tiny. [#66761](https://github.com/ClickHouse/ClickHouse/pull/66761) ([alesapin](https://github.com/alesapin)).
* Backported in [#66945](https://github.com/ClickHouse/ClickHouse/issues/66945): Small fix in realloc memory tracking. [#66820](https://github.com/ClickHouse/ClickHouse/pull/66820) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67252](https://github.com/ClickHouse/ClickHouse/issues/67252): Followup [#66725](https://github.com/ClickHouse/ClickHouse/issues/66725). [#66869](https://github.com/ClickHouse/ClickHouse/pull/66869) ([vdimir](https://github.com/vdimir)).
* Backported in [#67412](https://github.com/ClickHouse/ClickHouse/issues/67412): CI: Fix build results for release branches. [#67402](https://github.com/ClickHouse/ClickHouse/pull/67402) ([Max K.](https://github.com/maxknv)).
* Update version after release. [#67862](https://github.com/ClickHouse/ClickHouse/pull/67862) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68077](https://github.com/ClickHouse/ClickHouse/issues/68077): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
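
A hedged sketch of the last entry (object names are hypothetical): `SQL SECURITY` applies to views, so attempting to modify it on an ordinary table now fails with an explicit error instead of being silently accepted.

```sql
-- Changing SQL SECURITY on a view is the supported case (hypothetical view name).
ALTER TABLE my_view MODIFY SQL SECURITY INVOKER;

-- On a plain MergeTree table this statement now returns an explicit error.
ALTER TABLE my_merge_tree_table MODIFY SQL SECURITY INVOKER;
```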

View File

@@ -0,0 +1,33 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.5.6.45-stable (bdca8604c29) compared to v24.5.5.78-stable (0138248cb62)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#67902](https://github.com/ClickHouse/ClickHouse/issues/67902): Fixing the `Not-ready Set` error after the `PREWHERE` optimization for StorageMerge. [#65057](https://github.com/ClickHouse/ClickHouse/pull/65057) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68252](https://github.com/ClickHouse/ClickHouse/issues/68252): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#68064](https://github.com/ClickHouse/ClickHouse/issues/68064): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68158](https://github.com/ClickHouse/ClickHouse/issues/68158): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68115](https://github.com/ClickHouse/ClickHouse/issues/68115): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67886](https://github.com/ClickHouse/ClickHouse/issues/67886): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#68272](https://github.com/ClickHouse/ClickHouse/issues/68272): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67807](https://github.com/ClickHouse/ClickHouse/issues/67807): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67836](https://github.com/ClickHouse/ClickHouse/issues/67836): Fix potential stack overflow in `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong. The previous name is still kept for compatibility. Improved diagnostic of errors in the function. This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67991](https://github.com/ClickHouse/ClickHouse/issues/67991): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68207](https://github.com/ClickHouse/ClickHouse/issues/68207): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68091](https://github.com/ClickHouse/ClickHouse/issues/68091): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68122](https://github.com/ClickHouse/ClickHouse/issues/68122): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to its predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68171](https://github.com/ClickHouse/ClickHouse/issues/68171): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68337](https://github.com/ClickHouse/ClickHouse/issues/68337): Try fix postgres crash when query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68667](https://github.com/ClickHouse/ClickHouse/issues/68667): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
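
For the `JSONMergePatch` entry above, a small sketch of the renamed function, assuming the usual RFC 7386 merge-patch semantics (a `null` value removes a key):

```sql
-- Merge two JSON documents: "b" is removed by the null patch value, "c" is added.
SELECT JSONMergePatch('{"a":1,"b":2}', '{"b":null,"c":3}') AS patched;
-- Expected result under RFC 7386 semantics: {"a":1,"c":3}
```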
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Update version after release. [#67862](https://github.com/ClickHouse/ClickHouse/pull/67862) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68077](https://github.com/ClickHouse/ClickHouse/issues/68077): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Backported in [#68756](https://github.com/ClickHouse/ClickHouse/issues/68756): To make patch release possible from every commit on release branch, package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).

View File

@@ -0,0 +1,83 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.6.3.38-stable (4e33c831589) compared to v24.6.2.17-stable (5710a8b5c0c)
#### Improvement
* Backported in [#66770](https://github.com/ClickHouse/ClickHouse/issues/66770): Make allow_experimental_analyzer be controlled by the initiator for distributed queries. This ensures compatibility and correctness during operations in mixed version clusters. [#65777](https://github.com/ClickHouse/ClickHouse/pull/65777) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#66885](https://github.com/ClickHouse/ClickHouse/issues/66885): Fix unexpected size of low cardinality column in function calls. [#65298](https://github.com/ClickHouse/ClickHouse/pull/65298) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66303](https://github.com/ClickHouse/ClickHouse/issues/66303): Better handling of join conditions involving `IS NULL` checks (for example `ON (a = b AND (a IS NOT NULL) AND (b IS NOT NULL) ) OR ( (a IS NULL) AND (b IS NULL) )` is rewritten to `ON a <=> b`; see the sketch after this list), and fix an incorrect optimization when conditions other than `IS NULL` are present. [#65835](https://github.com/ClickHouse/ClickHouse/pull/65835) ([vdimir](https://github.com/vdimir)).
* Backported in [#66330](https://github.com/ClickHouse/ClickHouse/issues/66330): Add missing settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines/input_format_csv_try_infer_numbers_from_strings/input_format_csv_try_infer_strings_from_quoted_tuples` to the schema inference cache because they can change the resulting schema. This prevents incorrect schema inference results when these settings are changed. [#65980](https://github.com/ClickHouse/ClickHouse/pull/65980) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#66157](https://github.com/ClickHouse/ClickHouse/issues/66157): Fixed buffer overflow bug in `unbin`/`unhex` implementation. [#66106](https://github.com/ClickHouse/ClickHouse/pull/66106) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#66210](https://github.com/ClickHouse/ClickHouse/issues/66210): Disable the `merge-filters` optimization introduced in [#64760](https://github.com/ClickHouse/ClickHouse/issues/64760). It may cause an exception if optimization merges two filter expressions and does not apply a short-circuit evaluation. [#66126](https://github.com/ClickHouse/ClickHouse/pull/66126) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66456](https://github.com/ClickHouse/ClickHouse/issues/66456): Fixed a bug in ZooKeeper client: a session could get stuck in unusable state after receiving a hardware error from ZooKeeper. For example, this might happen due to "soft memory limit" in ClickHouse Keeper. [#66140](https://github.com/ClickHouse/ClickHouse/pull/66140) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66228](https://github.com/ClickHouse/ClickHouse/issues/66228): Fix issue in SumIfToCountIfVisitor and signed integers. [#66146](https://github.com/ClickHouse/ClickHouse/pull/66146) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66183](https://github.com/ClickHouse/ClickHouse/issues/66183): Fix rare case with missing data in the result of distributed query, close [#61432](https://github.com/ClickHouse/ClickHouse/issues/61432). [#66174](https://github.com/ClickHouse/ClickHouse/pull/66174) ([vdimir](https://github.com/vdimir)).
* Backported in [#66271](https://github.com/ClickHouse/ClickHouse/issues/66271): Don't throw `TIMEOUT_EXCEEDED` for `none_only_active` mode of `distributed_ddl_output_mode`. [#66218](https://github.com/ClickHouse/ClickHouse/pull/66218) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66682](https://github.com/ClickHouse/ClickHouse/issues/66682): Fix handling limit for `system.numbers_mt` when no index can be used. [#66231](https://github.com/ClickHouse/ClickHouse/pull/66231) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66587](https://github.com/ClickHouse/ClickHouse/issues/66587): Fixed how the ClickHouse server detects the maximum number of usable CPU cores as specified by cgroups v2 if the server runs in a container such as Docker. In more detail, containers often run their process in the root cgroup which has an empty name. In that case, ClickHouse ignored the CPU limits set by cgroups v2. [#66237](https://github.com/ClickHouse/ClickHouse/pull/66237) ([filimonov](https://github.com/filimonov)).
* Backported in [#66362](https://github.com/ClickHouse/ClickHouse/issues/66362): Fix the `Not-ready set` error when a subquery with `IN` is used in the constraint. [#66261](https://github.com/ClickHouse/ClickHouse/pull/66261) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68066](https://github.com/ClickHouse/ClickHouse/issues/68066): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68566](https://github.com/ClickHouse/ClickHouse/issues/68566): Fix indexHint function case found by fuzzer. [#66286](https://github.com/ClickHouse/ClickHouse/pull/66286) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68159](https://github.com/ClickHouse/ClickHouse/issues/68159): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#66613](https://github.com/ClickHouse/ClickHouse/issues/66613): Fix `Column identifier is already registered` error with `group_by_use_nulls=true` and new analyzer. [#66400](https://github.com/ClickHouse/ClickHouse/pull/66400) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66693](https://github.com/ClickHouse/ClickHouse/issues/66693): Fix the VALID UNTIL clause in the user definition resetting after a restart. Closes [#66405](https://github.com/ClickHouse/ClickHouse/issues/66405). [#66409](https://github.com/ClickHouse/ClickHouse/pull/66409) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#66577](https://github.com/ClickHouse/ClickHouse/issues/66577): Fix `Cannot find column` error for queries with constant expression in `GROUP BY` key and new analyzer enabled. [#66433](https://github.com/ClickHouse/ClickHouse/pull/66433) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66721](https://github.com/ClickHouse/ClickHouse/issues/66721): Correctly track memory for `Allocator::realloc`. [#66548](https://github.com/ClickHouse/ClickHouse/pull/66548) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66670](https://github.com/ClickHouse/ClickHouse/issues/66670): Fix reading of uninitialized memory when hashing empty tuples. This closes [#66559](https://github.com/ClickHouse/ClickHouse/issues/66559). [#66562](https://github.com/ClickHouse/ClickHouse/pull/66562) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#66952](https://github.com/ClickHouse/ClickHouse/issues/66952): Fix an invalid result for queries with `WINDOW`. This could happen when `PARTITION` columns have sparse serialization and window functions are executed in parallel. [#66579](https://github.com/ClickHouse/ClickHouse/pull/66579) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66956](https://github.com/ClickHouse/ClickHouse/issues/66956): Fix removing named collections in local storage. [#66599](https://github.com/ClickHouse/ClickHouse/pull/66599) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66716](https://github.com/ClickHouse/ClickHouse/issues/66716): Fix removing named collections in local storage. [#66599](https://github.com/ClickHouse/ClickHouse/pull/66599) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66759](https://github.com/ClickHouse/ClickHouse/issues/66759): Fix `Unknown identifier` and `Column is not under aggregate function` errors for queries with the expression `(column IS NULL).` The bug was triggered by [#65088](https://github.com/ClickHouse/ClickHouse/issues/65088), with the disabled analyzer only. [#66654](https://github.com/ClickHouse/ClickHouse/pull/66654) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66751](https://github.com/ClickHouse/ClickHouse/issues/66751): Fix `Method getResultType is not supported for QUERY query node` error when scalar subquery was used as the first argument of IN (with new analyzer). [#66655](https://github.com/ClickHouse/ClickHouse/pull/66655) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68116](https://github.com/ClickHouse/ClickHouse/issues/68116): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67635](https://github.com/ClickHouse/ClickHouse/issues/67635): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67482](https://github.com/ClickHouse/ClickHouse/issues/67482): In rare cases ClickHouse could consider parts as broken because of some unexpected projections on disk. Now it's fixed. [#66898](https://github.com/ClickHouse/ClickHouse/pull/66898) ([alesapin](https://github.com/alesapin)).
* Backported in [#67816](https://github.com/ClickHouse/ClickHouse/issues/67816): Only relevant to the experimental Variant data type. Fix crash with Variant + AggregateFunction type. [#67122](https://github.com/ClickHouse/ClickHouse/pull/67122) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67199](https://github.com/ClickHouse/ClickHouse/issues/67199): TRUNCATE DATABASE used to stop replication as if it was a DROP DATABASE query, it's fixed. [#67129](https://github.com/ClickHouse/ClickHouse/pull/67129) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#67381](https://github.com/ClickHouse/ClickHouse/issues/67381): Fix error `Cannot convert column because it is non constant in source stream but must be constant in result.` for a query that reads from the `Merge` table over the `Distributed` table with one shard. [#67146](https://github.com/ClickHouse/ClickHouse/pull/67146) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67244](https://github.com/ClickHouse/ClickHouse/issues/67244): This closes [#67156](https://github.com/ClickHouse/ClickHouse/issues/67156). This closes [#66447](https://github.com/ClickHouse/ClickHouse/issues/66447). The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/62907. [#67178](https://github.com/ClickHouse/ClickHouse/pull/67178) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#67503](https://github.com/ClickHouse/ClickHouse/issues/67503): Fix crash in DistributedAsyncInsert when connection is empty. [#67219](https://github.com/ClickHouse/ClickHouse/pull/67219) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#67887](https://github.com/ClickHouse/ClickHouse/issues/67887): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67578](https://github.com/ClickHouse/ClickHouse/issues/67578): Fix execution of nested short-circuit functions. [#67520](https://github.com/ClickHouse/ClickHouse/pull/67520) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68611](https://github.com/ClickHouse/ClickHouse/issues/68611): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#67852](https://github.com/ClickHouse/ClickHouse/issues/67852): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68275](https://github.com/ClickHouse/ClickHouse/issues/68275): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67808](https://github.com/ClickHouse/ClickHouse/issues/67808): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67838](https://github.com/ClickHouse/ClickHouse/issues/67838): Fix potential stack overflow in `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong. The previous name is still kept for compatibility. Improved diagnostic of errors in the function. This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67993](https://github.com/ClickHouse/ClickHouse/issues/67993): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68208](https://github.com/ClickHouse/ClickHouse/issues/68208): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68093](https://github.com/ClickHouse/ClickHouse/issues/68093): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68124](https://github.com/ClickHouse/ClickHouse/issues/68124): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to its predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68221](https://github.com/ClickHouse/ClickHouse/issues/68221): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via hopEnd, hopStart, tumbleEnd, and tumbleStart. [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68173](https://github.com/ClickHouse/ClickHouse/issues/68173): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68339](https://github.com/ClickHouse/ClickHouse/issues/68339): Try fix postgres crash when query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68396](https://github.com/ClickHouse/ClickHouse/issues/68396): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68668](https://github.com/ClickHouse/ClickHouse/issues/68668): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
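
The sketch referenced from the join-condition entry above (table and column names are hypothetical); a null-safe join written this way is what the optimizer rewrites into the `a <=> b` form:

```sql
-- Null-safe join: rows match when both sides are equal or both are NULL.
SELECT *
FROM t1
JOIN t2
    ON (t1.a = t2.a AND t1.a IS NOT NULL AND t2.a IS NOT NULL)
    OR (t1.a IS NULL AND t2.a IS NULL);
```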
#### NO CL ENTRY
* NO CL ENTRY: 'Revert "Backport [#66599](https://github.com/ClickHouse/ClickHouse/issues/66599) to 24.6: Fix dropping named collection in local storage"'. [#66922](https://github.com/ClickHouse/ClickHouse/pull/66922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#66332](https://github.com/ClickHouse/ClickHouse/issues/66332): Do not raise a NOT_IMPLEMENTED error when getting s3 metrics with a multiple disk configuration. [#65403](https://github.com/ClickHouse/ClickHouse/pull/65403) ([Elena Torró](https://github.com/elenatorro)).
* Backported in [#66142](https://github.com/ClickHouse/ClickHouse/issues/66142): Fix flaky test_storage_s3_queue tests. [#66009](https://github.com/ClickHouse/ClickHouse/pull/66009) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#66389](https://github.com/ClickHouse/ClickHouse/issues/66389): Disable broken cases from 02911_join_on_nullsafe_optimization. [#66310](https://github.com/ClickHouse/ClickHouse/pull/66310) ([vdimir](https://github.com/vdimir)).
* Backported in [#66428](https://github.com/ClickHouse/ClickHouse/issues/66428): Ignore subquery for IN in DDLLoadingDependencyVisitor. [#66395](https://github.com/ClickHouse/ClickHouse/pull/66395) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66546](https://github.com/ClickHouse/ClickHouse/issues/66546): Add additional log masking in CI. [#66523](https://github.com/ClickHouse/ClickHouse/pull/66523) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66861](https://github.com/ClickHouse/ClickHouse/issues/66861): Fix data race in S3::ClientCache. [#66644](https://github.com/ClickHouse/ClickHouse/pull/66644) ([Konstantin Morozov](https://github.com/k-morozov)).
* Backported in [#66877](https://github.com/ClickHouse/ClickHouse/issues/66877): Support one more case in JOIN ON ... IS NULL. [#66725](https://github.com/ClickHouse/ClickHouse/pull/66725) ([vdimir](https://github.com/vdimir)).
* Backported in [#67061](https://github.com/ClickHouse/ClickHouse/issues/67061): Increase asio pool size in case the server is tiny. [#66761](https://github.com/ClickHouse/ClickHouse/pull/66761) ([alesapin](https://github.com/alesapin)).
* Backported in [#66940](https://github.com/ClickHouse/ClickHouse/issues/66940): Small fix in realloc memory tracking. [#66820](https://github.com/ClickHouse/ClickHouse/pull/66820) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67254](https://github.com/ClickHouse/ClickHouse/issues/67254): Followup [#66725](https://github.com/ClickHouse/ClickHouse/issues/66725). [#66869](https://github.com/ClickHouse/ClickHouse/pull/66869) ([vdimir](https://github.com/vdimir)).
* Backported in [#67414](https://github.com/ClickHouse/ClickHouse/issues/67414): CI: Fix build results for release branches. [#67402](https://github.com/ClickHouse/ClickHouse/pull/67402) ([Max K.](https://github.com/maxknv)).
* Update version after release. [#67909](https://github.com/ClickHouse/ClickHouse/pull/67909) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68079](https://github.com/ClickHouse/ClickHouse/issues/68079): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).

View File

@@ -0,0 +1,55 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.7.3.47-stable (2e50fe27a14) compared to v24.7.2.13-stable (6e41f601b2f)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68232](https://github.com/ClickHouse/ClickHouse/issues/68232): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#67969](https://github.com/ClickHouse/ClickHouse/issues/67969): Fixed reading of subcolumns after `ALTER ADD COLUMN` query. [#66243](https://github.com/ClickHouse/ClickHouse/pull/66243) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68068](https://github.com/ClickHouse/ClickHouse/issues/68068): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#67637](https://github.com/ClickHouse/ClickHouse/issues/67637): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67820](https://github.com/ClickHouse/ClickHouse/issues/67820): Fix possible deadlock on query cancel with parallel replicas. [#66905](https://github.com/ClickHouse/ClickHouse/pull/66905) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67818](https://github.com/ClickHouse/ClickHouse/issues/67818): Only relevant to the experimental Variant data type. Fix crash with Variant + AggregateFunction type. [#67122](https://github.com/ClickHouse/ClickHouse/pull/67122) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67766](https://github.com/ClickHouse/ClickHouse/issues/67766): Fix crash of `uniq` and `uniqTheta` with a `tuple()` argument. Closes [#67303](https://github.com/ClickHouse/ClickHouse/issues/67303). [#67306](https://github.com/ClickHouse/ClickHouse/pull/67306) ([flynn](https://github.com/ucasfl)).
* Backported in [#67881](https://github.com/ClickHouse/ClickHouse/issues/67881): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#68613](https://github.com/ClickHouse/ClickHouse/issues/68613): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#67854](https://github.com/ClickHouse/ClickHouse/issues/67854): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68278](https://github.com/ClickHouse/ClickHouse/issues/68278): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68040](https://github.com/ClickHouse/ClickHouse/issues/68040): Fix creation of view with recursive CTE. [#67587](https://github.com/ClickHouse/ClickHouse/pull/67587) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Backported in [#68038](https://github.com/ClickHouse/ClickHouse/issues/68038): Fix crash on `percent_rank`. `percent_rank`'s default frame type is changed to `range unbounded preceding and unbounded following`. `IWindowFunction`'s default window frame is now taken into account, so window functions without a window frame definition in SQL can be placed into the correct `WindowTransfomer`s (see the sketch after this list). [#67661](https://github.com/ClickHouse/ClickHouse/pull/67661) ([lgbo](https://github.com/lgbo-ustc)).
* Backported in [#67713](https://github.com/ClickHouse/ClickHouse/issues/67713): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67840](https://github.com/ClickHouse/ClickHouse/issues/67840): Fix potential stack overflow in `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong. The previous name is still kept for compatibility. Improved diagnostic of errors in the function. This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67995](https://github.com/ClickHouse/ClickHouse/issues/67995): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68224](https://github.com/ClickHouse/ClickHouse/issues/68224): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68095](https://github.com/ClickHouse/ClickHouse/issues/68095): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68126](https://github.com/ClickHouse/ClickHouse/issues/68126): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to its predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68223](https://github.com/ClickHouse/ClickHouse/issues/68223): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via hopEnd, hopStart, tumbleEnd, and tumbleStart. [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68175](https://github.com/ClickHouse/ClickHouse/issues/68175): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68341](https://github.com/ClickHouse/ClickHouse/issues/68341): Try fix postgres crash when query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68398](https://github.com/ClickHouse/ClickHouse/issues/68398): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68669](https://github.com/ClickHouse/ClickHouse/issues/68669): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
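As a quick illustration of the `percent_rank` fix above (a minimal sketch; the output values are assumed), a window function without an explicit frame definition now picks up the corrected default frame:

```sql
-- percent_rank without an explicit window frame; expected to return 0, 0.25, 0.5, 0.75, 1.
SELECT number, percent_rank() OVER (ORDER BY number) AS pr
FROM numbers(5);
```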
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#67518](https://github.com/ClickHouse/ClickHouse/issues/67518): Split slow test 03036_dynamic_read_subcolumns. [#66954](https://github.com/ClickHouse/ClickHouse/pull/66954) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67516](https://github.com/ClickHouse/ClickHouse/issues/67516): Split 01508_partition_pruning_long. [#66983](https://github.com/ClickHouse/ClickHouse/pull/66983) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67529](https://github.com/ClickHouse/ClickHouse/issues/67529): Reduce max time of 00763_long_lock_buffer_alter_destination_table. [#67185](https://github.com/ClickHouse/ClickHouse/pull/67185) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#67803](https://github.com/ClickHouse/ClickHouse/issues/67803): Disable some Dynamic tests under sanitizers, rewrite 03202_dynamic_null_map_subcolumn to sql. [#67359](https://github.com/ClickHouse/ClickHouse/pull/67359) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67643](https://github.com/ClickHouse/ClickHouse/issues/67643): [Green CI] Fix potentially flaky test_mask_sensitive_info integration test. [#67506](https://github.com/ClickHouse/ClickHouse/pull/67506) ([Alexey Katsman](https://github.com/alexkats)).
* Backported in [#67609](https://github.com/ClickHouse/ClickHouse/issues/67609): Fix test_zookeeper_config_load_balancing after adding the xdist worker name to the instance. [#67590](https://github.com/ClickHouse/ClickHouse/pull/67590) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#67871](https://github.com/ClickHouse/ClickHouse/issues/67871): Fix 02434_cancel_insert_when_client_dies. [#67600](https://github.com/ClickHouse/ClickHouse/pull/67600) ([vdimir](https://github.com/vdimir)).
* Backported in [#67704](https://github.com/ClickHouse/ClickHouse/issues/67704): Fix 02910_bad_logs_level_in_local in fast tests. [#67603](https://github.com/ClickHouse/ClickHouse/pull/67603) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#67689](https://github.com/ClickHouse/ClickHouse/issues/67689): Fix 01605_adaptive_granularity_block_borders. [#67605](https://github.com/ClickHouse/ClickHouse/pull/67605) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67827](https://github.com/ClickHouse/ClickHouse/issues/67827): Try fix 03143_asof_join_ddb_long. [#67620](https://github.com/ClickHouse/ClickHouse/pull/67620) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67892](https://github.com/ClickHouse/ClickHouse/issues/67892): Revert "Merge pull request [#66510](https://github.com/ClickHouse/ClickHouse/issues/66510) from canhld94/fix_trivial_count_non_deterministic_func". [#67800](https://github.com/ClickHouse/ClickHouse/pull/67800) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68081](https://github.com/ClickHouse/ClickHouse/issues/68081): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Update version after release. [#68044](https://github.com/ClickHouse/ClickHouse/pull/68044) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68269](https://github.com/ClickHouse/ClickHouse/issues/68269): [Green CI] Fix test 01903_correct_block_size_prediction_with_default. [#68203](https://github.com/ClickHouse/ClickHouse/pull/68203) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#68432](https://github.com/ClickHouse/ClickHouse/issues/68432): tests: make 01600_parts_states_metrics_long better. [#68265](https://github.com/ClickHouse/ClickHouse/pull/68265) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68538](https://github.com/ClickHouse/ClickHouse/issues/68538): CI: Native build for package_aarch64. [#68457](https://github.com/ClickHouse/ClickHouse/pull/68457) ([Max K.](https://github.com/maxknv)).
* Backported in [#68555](https://github.com/ClickHouse/ClickHouse/issues/68555): CI: Minor release workflow fix. [#68536](https://github.com/ClickHouse/ClickHouse/pull/68536) ([Max K.](https://github.com/maxknv)).

View File

@ -0,0 +1,12 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.8.2.3-lts (b54f79ed323) FIXME as compared to v24.8.1.2684-lts (161c62fd295)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68670](https://github.com/ClickHouse/ClickHouse/issues/68670): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).

View File

@ -155,6 +155,8 @@ SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'tag 1';
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'tag 2'; SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'tag 2';
``` ```
To remove only entries with a specific tag from the query cache, use the statement `SYSTEM DROP QUERY CACHE TAG 'tag'`.
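For example (a minimal sketch built from the statements documented here; the tag name is arbitrary):

```sql
-- Cache a result under a tag, then evict only the entries carrying that tag.
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc';
SYSTEM DROP QUERY CACHE TAG 'abc';
```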
ClickHouse reads table data in blocks of [max_block_size](settings/settings.md#setting-max_block_size) rows. Due to filtering, aggregation, ClickHouse reads table data in blocks of [max_block_size](settings/settings.md#setting-max_block_size) rows. Due to filtering, aggregation,
etc., result blocks are typically much smaller than 'max_block_size' but there are also cases where they are much bigger. Setting etc., result blocks are typically much smaller than 'max_block_size' but there are also cases where they are much bigger. Setting
[query_cache_squash_partial_results](settings/settings.md#query-cache-squash-partial-results) (enabled by default) controls if result blocks [query_cache_squash_partial_results](settings/settings.md#query-cache-squash-partial-results) (enabled by default) controls if result blocks

View File

@ -2855,7 +2855,7 @@ The minimum chunk size in bytes, which each thread will parse in parallel.
## merge_selecting_sleep_ms {#merge_selecting_sleep_ms} ## merge_selecting_sleep_ms {#merge_selecting_sleep_ms}
Sleep time for merge selecting when no part is selected. A lower setting triggers selecting tasks in `background_schedule_pool` frequently, which results in a large number of requests to ClickHouse Keeper in large-scale clusters. Minimum time to wait before trying to select parts to merge again after no parts were selected. A lower setting triggers selecting tasks in `background_schedule_pool` frequently, which results in a large number of requests to ClickHouse Keeper in large-scale clusters.
Possible values: Possible values:
@ -2863,6 +2863,16 @@ Possible values:
Default value: `5000`. Default value: `5000`.
## max_merge_selecting_sleep_ms
Maximum time to wait before trying to select parts to merge again after no parts were selected. A lower setting triggers selecting tasks in `background_schedule_pool` frequently, which results in a large number of requests to ClickHouse Keeper in large-scale clusters.
Possible values:
- Any positive integer.
Default value: `60000`.
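A minimal sketch of tuning both bounds, assuming they are applied as MergeTree-level settings on a table:

```sql
CREATE TABLE t (x UInt64) ENGINE = MergeTree ORDER BY x
SETTINGS merge_selecting_sleep_ms = 5000, max_merge_selecting_sleep_ms = 60000;
```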
## parallel_distributed_insert_select {#parallel_distributed_insert_select} ## parallel_distributed_insert_select {#parallel_distributed_insert_select}
Enables parallel distributed `INSERT ... SELECT` query. Enables parallel distributed `INSERT ... SELECT` query.

View File

@ -70,7 +70,7 @@ SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON as json
└────────────────────────────────────────────────┘ └────────────────────────────────────────────────┘
``` ```
CAST from named `Tuple`, `Map` and `Object('json')` to `JSON` type will be supported later. CAST from `JSON`, named `Tuple`, `Map` and `Object('json')` to `JSON` type will be supported later.
## Reading JSON paths as subcolumns ## Reading JSON paths as subcolumns

View File

@ -53,29 +53,28 @@ SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY
└─────────────────────┴───────────────────────────────┘ └─────────────────────┴───────────────────────────────┘
``` ```
Intervals with different types can't be combined. You can't use intervals like `4 DAY 1 HOUR`. Specify intervals in units that are smaller than or equal to the smallest unit of the interval; for example, the interval `1 day and an hour` can be expressed as `25 HOUR` or `90000 SECOND`. Also, it is possible to use multiple intervals simultaneously:
You can't perform arithmetical operations with `Interval`-type values, but you can consecutively add intervals of different types to values of the `Date` or `DateTime` data types. For example:
``` sql ``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR SELECT now() AS current_date_time, current_date_time + (INTERVAL 4 DAY + INTERVAL 3 HOUR)
``` ```
``` text ``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐ ┌───current_date_time─┬─plus(current_date_time, plus(toIntervalDay(4), toIntervalHour(3)))─┐
│ 2019-10-23 11:16:28 │ 2019-10-27 14:16:28 │ 2024-08-08 18:31:39 │ 2024-08-12 21:31:39
└─────────────────────┴────────────────────────────────────────────────────────┘ └─────────────────────┴────────────────────────────────────────────────────────────────────
``` ```
The following query causes an exception: You can also compare values with different intervals:
``` sql ``` sql
select now() AS current_date_time, current_date_time + (INTERVAL 4 DAY + INTERVAL 3 HOUR) SELECT toIntervalMicrosecond(3600000000) = toIntervalHour(1);
``` ```
``` text ``` text
Received exception from server (version 19.14.1): ┌─equals(toIntervalMicrosecond(3600000000), toIntervalHour(1))─┐
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argument types for function plus: if one argument is Interval, then another must be Date or DateTime.. │ 1 │
└─────────────────────────────────────────────────────────────┘
``` ```
## See Also ## See Also

View File

@ -8,6 +8,78 @@ sidebar_label: Replacing in Strings
[General strings functions](string-functions.md) and [functions for searching in strings](string-search-functions.md) are described separately. [General strings functions](string-functions.md) and [functions for searching in strings](string-search-functions.md) are described separately.
## overlay
Replace part of the string `input` with another string `replace`, starting at the 1-based index `offset`.
**Syntax**
```sql
overlay(input, replace, offset[, length])
```
**Parameters**
- `input`: A string type [String](../data-types/string.md).
- `replace`: A string type [String](../data-types/string.md).
- `offset`: An integer type [Int](../data-types/int-uint.md). If `offset` is negative, it is counted from the end of the `input` string.
- `length`: Optional. An integer type [Int](../data-types/int-uint.md). `length` specifies the length of the snippet within input to be replaced. If `length` is not specified, the number of bytes removed from `input` equals the length of `replace`; otherwise `length` bytes are removed.
**Returned value**
- A [String](../data-types/string.md) data type value.
**Example**
```sql
SELECT overlay('ClickHouse SQL', 'CORE', 12) AS res;
```
Result:
```text
┌─res─────────────┐
│ ClickHouse CORE │
└─────────────────┘
```
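A sketch of the optional `length` argument and of a negative `offset` (outputs assumed from the semantics described above):

```sql
SELECT
    overlay('My father is from Mexico City', 'mother', 4, 6) AS with_length,  -- 'My mother is from Mexico City'
    overlay('ClickHouse SQL', 'CORE', -3) AS negative_offset;                 -- 'ClickHouse CORE'
```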
## overlayUTF8
Replace part of the string `input` with another string `replace`, starting at the 1-based index `offset`.
Assumes that the string contains valid UTF-8 encoded text. If this assumption is violated, no exception is thrown and the result is undefined.
**Syntax**
```sql
overlayUTF8(input, replace, offset[, length])
```
**Parameters**
- `input`: A string type [String](../data-types/string.md).
- `replace`: A string type [String](../data-types/string.md).
- `offset`: An integer type [Int](../data-types/int-uint.md). If `offset` is negative, it is counted from the end of the `input` string.
- `length`: Optional. An integer type [Int](../data-types/int-uint.md). `length` specifies the length of the snippet within input to be replaced. If `length` is not specified, the number of characters removed from `input` equals the length of `replace`; otherwise `length` characters are removed.
**Returned value**
- A [String](../data-types/string.md) data type value.
**Example**
```sql
SELECT overlayUTF8('ClickHouse是一款OLAP数据库', '开源', 12, 2) AS res;
```
Result:
```text
┌─res────────────────────────┐
│ ClickHouse是开源OLAP数据库 │
└────────────────────────────┘
```
## replaceOne ## replaceOne
Replaces the first occurrence of the substring `pattern` in `haystack` by the `replacement` string. Replaces the first occurrence of the substring `pattern` in `haystack` by the `replacement` string.

View File

@ -136,7 +136,13 @@ The compiled expression cache is enabled/disabled with the query/user/profile-le
## DROP QUERY CACHE ## DROP QUERY CACHE
```sql
SYSTEM DROP QUERY CACHE;
SYSTEM DROP QUERY CACHE TAG '<tag>';
```
Clears the [query cache](../../operations/query-cache.md). Clears the [query cache](../../operations/query-cache.md).
If a tag is specified, only query cache entries with the specified tag are deleted.
## DROP FORMAT SCHEMA CACHE {#system-drop-schema-format} ## DROP FORMAT SCHEMA CACHE {#system-drop-schema-format}

View File

@ -22,18 +22,26 @@ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not su
### From deb packages {#install-from-deb-packages} ### From deb packages {#install-from-deb-packages}
Yandex recommends using the official precompiled `deb` packages for Debian or Ubuntu. To install the packages, run: It is recommended to use the official precompiled `deb` packages for Debian or Ubuntu. To install the packages, run:
``` bash ``` bash
sudo apt-get install -y apt-transport-https ca-certificates dirmngr sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754 curl -fsSL 'https://packages.clickhouse.com/rpm/lts/repodata/repomd.xml.key' | sudo gpg --dearmor -o /usr/share/keyrings/clickhouse-keyring.gpg
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \ echo "deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb stable main" | sudo tee \
/etc/apt/sources.list.d/clickhouse.list /etc/apt/sources.list.d/clickhouse.list
sudo apt-get update sudo apt-get update
```
#### Installing ClickHouse server and client
```bash
sudo apt-get install -y clickhouse-server clickhouse-client sudo apt-get install -y clickhouse-server clickhouse-client
```
#### Starting the ClickHouse server
```bash
sudo service clickhouse-server start sudo service clickhouse-server start
clickhouse-client # or "clickhouse-client --password" if you've set up a password. clickhouse-client # or "clickhouse-client --password" if you've set up a password.
``` ```
@ -55,7 +63,7 @@ clickhouse-client # or "clickhouse-client --password" if you've set up a passwor
::: :::
### From rpm packages {#from-rpm-packages} ### From rpm packages {#from-rpm-packages}
The ClickHouse team at Yandex recommends using the official precompiled `rpm` packages for CentOS, RedHat, and all other rpm-based Linux distributions. The ClickHouse team recommends using the official precompiled `rpm` packages for CentOS, RedHat, and all other rpm-based Linux distributions.
#### Installing the official repository #### Installing the official repository
@ -102,7 +110,7 @@ sudo yum install clickhouse-server clickhouse-client
### From tgz archives {#from-tgz-archives} ### From tgz archives {#from-tgz-archives}
The ClickHouse team at Yandex recommends using precompiled binaries from `tgz` archives for all distributions where installation of the `deb` and `rpm` packages is not possible. The ClickHouse team recommends using precompiled binaries from `tgz` archives for all distributions where installation of the `deb` and `rpm` packages is not possible.
The required version of the archives can be downloaded manually with `curl` or `wget` from the repository https://packages.clickhouse.com/tgz/. The required version of the archives can be downloaded manually with `curl` or `wget` from the repository https://packages.clickhouse.com/tgz/.
After that, the archives need to be unpacked and the installation scripts run. Example of installing the latest version: After that, the archives need to be unpacked and the installation scripts run. Example of installing the latest version:

View File

@ -54,29 +54,28 @@ SELECT now() as current_date_time, current_date_time + INTERVAL 4 DAY
└─────────────────────┴───────────────────────────────┘ └─────────────────────┴───────────────────────────────┘
``` ```
Intervals of different types can't be combined. You can't use intervals like `4 DAY 1 HOUR`. Instead, express the interval in units that are smaller than or equal to the smallest unit of the interval; for example, the interval "1 day and 1 hour" can be expressed as `25 HOUR` or `90000 SECOND`. It is also possible to use different interval types simultaneously:
Arithmetic operations on `Interval` values are not available, but you can consecutively add intervals of different types to values of the `Date` and `DateTime` types. For example:
``` sql ``` sql
SELECT now() AS current_date_time, current_date_time + INTERVAL 4 DAY + INTERVAL 3 HOUR SELECT now() AS current_date_time, current_date_time + (INTERVAL 4 DAY + INTERVAL 3 HOUR)
``` ```
``` text ``` text
┌───current_date_time─┬─plus(plus(now(), toIntervalDay(4)), toIntervalHour(3))─┐ ┌───current_date_time─┬─plus(current_date_time, plus(toIntervalDay(4), toIntervalHour(3)))─┐
│ 2019-10-23 11:16:28 │ 2019-10-27 14:16:28 │ 2024-08-08 18:31:39 │ 2024-08-12 21:31:39
└─────────────────────┴────────────────────────────────────────────────────────┘ └─────────────────────┴────────────────────────────────────────────────────────────────────
``` ```
The following query causes an exception: You can also compare values with different intervals:
``` sql ``` sql
select now() AS current_date_time, current_date_time + (INTERVAL 4 DAY + INTERVAL 3 HOUR) SELECT toIntervalMicrosecond(3600000000) = toIntervalHour(1);
``` ```
``` text ``` text
Received exception from server (version 19.14.1): ┌─equals(toIntervalMicrosecond(3600000000), toIntervalHour(1))─┐
Code: 43. DB::Exception: Received from localhost:9000. DB::Exception: Wrong argument types for function plus: if one argument is Interval, then another must be Date or DateTime.. │ 1 │
└─────────────────────────────────────────────────────────────┘
``` ```
## See Also {#smotrite-takzhe} ## See Also {#smotrite-takzhe}

View File

@ -197,6 +197,12 @@ public:
cache_policy->remove(key); cache_policy->remove(key);
} }
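/// Remove all entries for which the predicate returns true (used, for example, to drop query cache entries carrying a given tag).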
void remove(std::function<bool(const Key&, const MappedPtr &)> predicate)
{
std::lock_guard lock(mutex);
cache_policy->remove(predicate);
}
size_t sizeInBytes() const size_t sizeInBytes() const
{ {
std::lock_guard lock(mutex); std::lock_guard lock(mutex);

View File

@ -55,6 +55,7 @@ public:
virtual void set(const Key & key, const MappedPtr & mapped) = 0; virtual void set(const Key & key, const MappedPtr & mapped) = 0;
virtual void remove(const Key & key) = 0; virtual void remove(const Key & key) = 0;
virtual void remove(std::function<bool(const Key & key, const MappedPtr & mapped)> predicate) = 0;
virtual void clear() = 0; virtual void clear() = 0;
virtual std::vector<KeyMapped> dump() const = 0; virtual std::vector<KeyMapped> dump() const = 0;

View File

@ -79,6 +79,22 @@ public:
cells.erase(it); cells.erase(it);
} }
void remove(std::function<bool(const Key &, const MappedPtr &)> predicate) override
{
for (auto it = cells.begin(); it != cells.end();)
{
if (predicate(it->first, it->second.value))
{
Cell & cell = it->second;
current_size_in_bytes -= cell.size;
queue.erase(cell.queue_iterator);
it = cells.erase(it);
}
else
++it;
}
}
MappedPtr get(const Key & key) override MappedPtr get(const Key & key) override
{ {
auto it = cells.find(key); auto it = cells.find(key);

View File

@ -95,6 +95,27 @@ public:
cells.erase(it); cells.erase(it);
} }
void remove(std::function<bool(const Key &, const MappedPtr &)> predicate) override
{
for (auto it = cells.begin(); it != cells.end();)
{
if (predicate(it->first, it->second.value))
{
auto & cell = it->second;
current_size_in_bytes -= cell.size;
if (cell.is_protected)
current_protected_size -= cell.size;
auto & queue = cell.is_protected ? protected_queue : probationary_queue;
queue.erase(cell.queue_iterator);
it = cells.erase(it);
}
else
++it;
}
}
MappedPtr get(const Key & key) override MappedPtr get(const Key & key) override
{ {
auto it = cells.find(key); auto it = cells.find(key);

View File

@ -145,6 +145,23 @@ public:
size_in_bytes -= sz; size_in_bytes -= sz;
} }
void remove(std::function<bool(const Key &, const MappedPtr &)> predicate) override
{
for (auto it = cache.begin(); it != cache.end();)
{
if (predicate(it->first, it->second))
{
size_t sz = weight_function(*it->second);
if (it->first.user_id.has_value())
Base::user_quotas->decreaseActual(*it->first.user_id, sz);
it = cache.erase(it);
size_in_bytes -= sz;
}
else
++it;
}
}
MappedPtr get(const Key & key) override MappedPtr get(const Key & key) override
{ {
auto it = cache.find(key); auto it = cache.find(key);

View File

@ -3,6 +3,7 @@
#include <utility> #include <utility>
#include <Core/Types.h> #include <Core/Types.h>
#include <DataTypes/DataTypeInterval.h>
namespace DB namespace DB
@ -212,6 +213,8 @@ static bool callOnIndexAndDataType(TypeIndex number, F && f, ExtraArgs && ... ar
case TypeIndex::IPv4: return f(TypePair<DataTypeIPv4, T>(), std::forward<ExtraArgs>(args)...); case TypeIndex::IPv4: return f(TypePair<DataTypeIPv4, T>(), std::forward<ExtraArgs>(args)...);
case TypeIndex::IPv6: return f(TypePair<DataTypeIPv6, T>(), std::forward<ExtraArgs>(args)...); case TypeIndex::IPv6: return f(TypePair<DataTypeIPv6, T>(), std::forward<ExtraArgs>(args)...);
case TypeIndex::Interval: return f(TypePair<DataTypeInterval, T>(), std::forward<ExtraArgs>(args)...);
default: default:
break; break;
} }

View File

@ -149,6 +149,8 @@ std::unique_ptr<IDataType::SubstreamData> IDataType::getSubcolumnData(
ISerialization::EnumerateStreamsSettings settings; ISerialization::EnumerateStreamsSettings settings;
settings.position_independent_encoding = false; settings.position_independent_encoding = false;
/// Don't enumerate dynamic subcolumns, they are handled separately.
settings.enumerate_dynamic_streams = false;
data.serialization->enumerateStreams(settings, callback_with_data, data); data.serialization->enumerateStreams(settings, callback_with_data, data);
if (!res && data.type->hasDynamicSubcolumnsData()) if (!res && data.type->hasDynamicSubcolumnsData())

View File

@ -241,6 +241,10 @@ public:
{ {
SubstreamPath path; SubstreamPath path;
bool position_independent_encoding = true; bool position_independent_encoding = true;
/// If set to false, don't enumerate dynamic subcolumns
/// (such as dynamic types in Dynamic column or dynamic paths in JSON column).
/// It may be needed when dynamic subcolumns are processed separately.
bool enumerate_dynamic_streams = true;
}; };
virtual void enumerateStreams( virtual void enumerateStreams(

View File

@ -64,7 +64,7 @@ void SerializationDynamic::enumerateStreams(
const auto * deserialize_state = data.deserialize_state ? checkAndGetState<DeserializeBinaryBulkStateDynamic>(data.deserialize_state) : nullptr; const auto * deserialize_state = data.deserialize_state ? checkAndGetState<DeserializeBinaryBulkStateDynamic>(data.deserialize_state) : nullptr;
/// If column is nullptr and we don't have deserialize state yet, nothing to enumerate as we don't have any variants. /// If column is nullptr and we don't have deserialize state yet, nothing to enumerate as we don't have any variants.
if (!column_dynamic && !deserialize_state) if (!settings.enumerate_dynamic_streams || (!column_dynamic && !deserialize_state))
return; return;
const auto & variant_type = column_dynamic ? column_dynamic->getVariantInfo().variant_type : checkAndGetState<DeserializeBinaryBulkStateDynamicStructure>(deserialize_state->structure_state)->variant_type; const auto & variant_type = column_dynamic ? column_dynamic->getVariantInfo().variant_type : checkAndGetState<DeserializeBinaryBulkStateDynamicStructure>(deserialize_state->structure_state)->variant_type;

View File

@ -130,7 +130,7 @@ void SerializationObject::enumerateStreams(EnumerateStreamsSettings & settings,
} }
/// If column or deserialization state was provided, iterate over dynamic paths, /// If column or deserialization state was provided, iterate over dynamic paths,
if (column_object || structure_state) if (settings.enumerate_dynamic_streams && (column_object || structure_state))
{ {
/// Enumerate dynamic paths in sorted order for consistency. /// Enumerate dynamic paths in sorted order for consistency.
const auto * dynamic_paths = column_object ? &column_object->getDynamicPaths() : nullptr; const auto * dynamic_paths = column_object ? &column_object->getDynamicPaths() : nullptr;

View File

@ -228,6 +228,39 @@ void convertUInt64toInt64IfPossible(const DataTypes & types, TypeIndexSet & type
} }
} }
DataTypePtr findSmallestIntervalSuperType(const DataTypes &types, TypeIndexSet &types_set)
{
auto min_interval = IntervalKind::Kind::Year;
DataTypePtr smallest_type;
bool is_higher_interval = false; // For Years, Quarters and Months
for (const auto &type : types)
{
if (const auto * interval_type = typeid_cast<const DataTypeInterval *>(type.get()))
{
auto current_interval = interval_type->getKind().kind;
if (current_interval > IntervalKind::Kind::Week)
is_higher_interval = true;
if (current_interval < min_interval)
{
min_interval = current_interval;
smallest_type = type;
}
}
}
if (is_higher_interval && min_interval <= IntervalKind::Kind::Week)
throw Exception(ErrorCodes::NO_COMMON_TYPE, "Cannot compare intervals {} and {} because the number of days in a month is not fixed", types[0]->getName(), types[1]->getName());
if (smallest_type)
{
types_set.clear();
types_set.insert(smallest_type->getTypeId());
}
return smallest_type;
}
} }
template <LeastSupertypeOnError on_error> template <LeastSupertypeOnError on_error>
@ -652,6 +685,13 @@ DataTypePtr getLeastSupertype(const DataTypes & types)
return numeric_type; return numeric_type;
} }
/// For interval data types.
{
auto res = findSmallestIntervalSuperType(types, type_ids);
if (res)
return res;
}
/// All other data types (UUID, AggregateFunction, Enum...) are compatible only if they are the same (checked in trivial cases). /// All other data types (UUID, AggregateFunction, Enum...) are compatible only if they are the same (checked in trivial cases).
return throwOrReturn<on_error>(types, "", ErrorCodes::NO_COMMON_TYPE); return throwOrReturn<on_error>(types, "", ErrorCodes::NO_COMMON_TYPE);
} }

View File

@ -1,5 +1,7 @@
#pragma once #pragma once
#include <DataTypes/IDataType.h> #include <DataTypes/IDataType.h>
#include <DataTypes/DataTypeInterval.h>
#include <Common/IntervalKind.h>
namespace DB namespace DB
{ {
@ -48,4 +50,7 @@ DataTypePtr getLeastSupertypeOrString(const TypeIndexSet & types);
DataTypePtr tryGetLeastSupertype(const TypeIndexSet & types); DataTypePtr tryGetLeastSupertype(const TypeIndexSet & types);
/// A vector that shows the conversion rates to the next Interval type starting from NanoSecond
static std::vector<int> interval_conversions = {1, 1000, 1000, 1000, 60, 60, 24, 7, 4, 3, 4};
} }

View File

@ -123,7 +123,7 @@ public:
class Executor class Executor
{ {
public: public:
static ColumnPtr run(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count, uint32_t parse_depth, uint32_t parse_backtracks, const ContextPtr & context) static ColumnPtr run(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count, uint32_t parse_depth, uint32_t parse_backtracks, bool function_json_value_return_type_allow_complex)
{ {
MutableColumnPtr to{result_type->createColumn()}; MutableColumnPtr to{result_type->createColumn()};
to->reserve(input_rows_count); to->reserve(input_rows_count);
@ -191,7 +191,7 @@ public:
{ {
/// Instead of creating a new generator for each row, we can reuse the same one. /// Instead of creating a new generator for each row, we can reuse the same one.
generator_json_path.reinitialize(); generator_json_path.reinitialize();
added_to_column = impl.insertResultToColumn(*to, document, generator_json_path, context); added_to_column = impl.insertResultToColumn(*to, document, generator_json_path, function_json_value_return_type_allow_complex);
} }
if (!added_to_column) if (!added_to_column)
{ {
@ -204,11 +204,18 @@ public:
}; };
template <typename Name, template <typename, typename> typename Impl> template <typename Name, template <typename, typename> typename Impl>
class FunctionSQLJSON : public IFunction, WithConstContext class FunctionSQLJSON : public IFunction
{ {
public: public:
static FunctionPtr create(ContextPtr context_) { return std::make_shared<FunctionSQLJSON>(context_); } static FunctionPtr create(ContextPtr context_) { return std::make_shared<FunctionSQLJSON>(context_); }
explicit FunctionSQLJSON(ContextPtr context_) : WithConstContext(context_) { } explicit FunctionSQLJSON(ContextPtr context_)
: max_parser_depth(context_->getSettingsRef().max_parser_depth),
max_parser_backtracks(context_->getSettingsRef().max_parser_backtracks),
allow_simdjson(context_->getSettingsRef().allow_simdjson),
function_json_value_return_type_allow_complex(context_->getSettingsRef().function_json_value_return_type_allow_complex),
function_json_value_return_type_allow_nullable(context_->getSettingsRef().function_json_value_return_type_allow_nullable)
{
}
static constexpr auto name = Name::name; static constexpr auto name = Name::name;
String getName() const override { return Name::name; } String getName() const override { return Name::name; }
@ -221,7 +228,7 @@ public:
DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
{ {
return Impl<DummyJSONParser, DefaultJSONStringSerializer<DummyJSONParser::Element>>::getReturnType( return Impl<DummyJSONParser, DefaultJSONStringSerializer<DummyJSONParser::Element>>::getReturnType(
Name::name, arguments, getContext()); Name::name, arguments, function_json_value_return_type_allow_nullable);
} }
ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const override ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const override
@ -231,19 +238,25 @@ public:
/// 2. Create ASTPtr /// 2. Create ASTPtr
/// 3. Parser(Tokens, ASTPtr) -> complete AST /// 3. Parser(Tokens, ASTPtr) -> complete AST
/// 4. Execute functions: call getNextItem on generator and handle each item /// 4. Execute functions: call getNextItem on generator and handle each item
unsigned parse_depth = static_cast<unsigned>(getContext()->getSettingsRef().max_parser_depth); unsigned parse_depth = static_cast<unsigned>(max_parser_depth);
unsigned parse_backtracks = static_cast<unsigned>(getContext()->getSettingsRef().max_parser_backtracks); unsigned parse_backtracks = static_cast<unsigned>(max_parser_backtracks);
#if USE_SIMDJSON #if USE_SIMDJSON
if (getContext()->getSettingsRef().allow_simdjson) if (allow_simdjson)
return FunctionSQLJSONHelpers::Executor< return FunctionSQLJSONHelpers::Executor<
Name, Name,
Impl<SimdJSONParser, JSONStringSerializer<SimdJSONParser::Element, SimdJSONElementFormatter>>, Impl<SimdJSONParser, JSONStringSerializer<SimdJSONParser::Element, SimdJSONElementFormatter>>,
SimdJSONParser>::run(arguments, result_type, input_rows_count, parse_depth, parse_backtracks, getContext()); SimdJSONParser>::run(arguments, result_type, input_rows_count, parse_depth, parse_backtracks, function_json_value_return_type_allow_complex);
#endif #endif
return FunctionSQLJSONHelpers:: return FunctionSQLJSONHelpers::
Executor<Name, Impl<DummyJSONParser, DefaultJSONStringSerializer<DummyJSONParser::Element>>, DummyJSONParser>::run( Executor<Name, Impl<DummyJSONParser, DefaultJSONStringSerializer<DummyJSONParser::Element>>, DummyJSONParser>::run(
arguments, result_type, input_rows_count, parse_depth, parse_backtracks, getContext()); arguments, result_type, input_rows_count, parse_depth, parse_backtracks, function_json_value_return_type_allow_complex);
} }
private:
const size_t max_parser_depth;
const size_t max_parser_backtracks;
const bool allow_simdjson;
const bool function_json_value_return_type_allow_complex;
const bool function_json_value_return_type_allow_nullable;
}; };
struct NameJSONExists struct NameJSONExists
@ -267,11 +280,11 @@ class JSONExistsImpl
public: public:
using Element = typename JSONParser::Element; using Element = typename JSONParser::Element;
static DataTypePtr getReturnType(const char *, const ColumnsWithTypeAndName &, const ContextPtr &) { return std::make_shared<DataTypeUInt8>(); } static DataTypePtr getReturnType(const char *, const ColumnsWithTypeAndName &, bool) { return std::make_shared<DataTypeUInt8>(); }
static size_t getNumberOfIndexArguments(const ColumnsWithTypeAndName & arguments) { return arguments.size() - 1; } static size_t getNumberOfIndexArguments(const ColumnsWithTypeAndName & arguments) { return arguments.size() - 1; }
static bool insertResultToColumn(IColumn & dest, const Element & root, GeneratorJSONPath<JSONParser> & generator_json_path, const ContextPtr &) static bool insertResultToColumn(IColumn & dest, const Element & root, GeneratorJSONPath<JSONParser> & generator_json_path, bool)
{ {
Element current_element = root; Element current_element = root;
VisitorStatus status; VisitorStatus status;
@ -305,9 +318,9 @@ class JSONValueImpl
public: public:
using Element = typename JSONParser::Element; using Element = typename JSONParser::Element;
static DataTypePtr getReturnType(const char *, const ColumnsWithTypeAndName &, const ContextPtr & context) static DataTypePtr getReturnType(const char *, const ColumnsWithTypeAndName &, bool function_json_value_return_type_allow_nullable)
{ {
if (context->getSettingsRef().function_json_value_return_type_allow_nullable) if (function_json_value_return_type_allow_nullable)
{ {
DataTypePtr string_type = std::make_shared<DataTypeString>(); DataTypePtr string_type = std::make_shared<DataTypeString>();
return std::make_shared<DataTypeNullable>(string_type); return std::make_shared<DataTypeNullable>(string_type);
@ -320,7 +333,7 @@ public:
static size_t getNumberOfIndexArguments(const ColumnsWithTypeAndName & arguments) { return arguments.size() - 1; } static size_t getNumberOfIndexArguments(const ColumnsWithTypeAndName & arguments) { return arguments.size() - 1; }
static bool insertResultToColumn(IColumn & dest, const Element & root, GeneratorJSONPath<JSONParser> & generator_json_path, const ContextPtr & context) static bool insertResultToColumn(IColumn & dest, const Element & root, GeneratorJSONPath<JSONParser> & generator_json_path, bool function_json_value_return_type_allow_complex)
{ {
Element current_element = root; Element current_element = root;
VisitorStatus status; VisitorStatus status;
@ -329,7 +342,7 @@ public:
{ {
if (status == VisitorStatus::Ok) if (status == VisitorStatus::Ok)
{ {
if (context->getSettingsRef().function_json_value_return_type_allow_complex) if (function_json_value_return_type_allow_complex)
{ {
break; break;
} }
@ -383,11 +396,11 @@ class JSONQueryImpl
public: public:
using Element = typename JSONParser::Element; using Element = typename JSONParser::Element;
static DataTypePtr getReturnType(const char *, const ColumnsWithTypeAndName &, const ContextPtr &) { return std::make_shared<DataTypeString>(); } static DataTypePtr getReturnType(const char *, const ColumnsWithTypeAndName &, bool) { return std::make_shared<DataTypeString>(); }
static size_t getNumberOfIndexArguments(const ColumnsWithTypeAndName & arguments) { return arguments.size() - 1; } static size_t getNumberOfIndexArguments(const ColumnsWithTypeAndName & arguments) { return arguments.size() - 1; }
static bool insertResultToColumn(IColumn & dest, const Element & root, GeneratorJSONPath<JSONParser> & generator_json_path, const ContextPtr &) static bool insertResultToColumn(IColumn & dest, const Element & root, GeneratorJSONPath<JSONParser> & generator_json_path, bool)
{ {
ColumnString & col_str = assert_cast<ColumnString &>(dest); ColumnString & col_str = assert_cast<ColumnString &>(dest);

View File

@ -48,6 +48,7 @@
#include <DataTypes/DataTypesBinaryEncoding.h> #include <DataTypes/DataTypesBinaryEncoding.h>
#include <DataTypes/ObjectUtils.h> #include <DataTypes/ObjectUtils.h>
#include <DataTypes/Serializations/SerializationDecimal.h> #include <DataTypes/Serializations/SerializationDecimal.h>
#include <DataTypes/getLeastSupertype.h>
#include <Formats/FormatSettings.h> #include <Formats/FormatSettings.h>
#include <Formats/FormatFactory.h> #include <Formats/FormatFactory.h>
#include <Functions/CastOverloadResolver.h> #include <Functions/CastOverloadResolver.h>
@ -1576,6 +1577,35 @@ struct ConvertImpl
arguments, result_type, input_rows_count, additions); arguments, result_type, input_rows_count, additions);
} }
} }
else if constexpr (std::is_same_v<FromDataType, DataTypeInterval> && std::is_same_v<ToDataType, DataTypeInterval>)
{
IntervalKind to = typeid_cast<const DataTypeInterval *>(result_type.get())->getKind();
IntervalKind from = typeid_cast<const DataTypeInterval *>(arguments[0].type.get())->getKind();
if (from == to || arguments[0].column->empty())
return arguments[0].column;
Int64 conversion_factor = 1;
Int64 result_value;
int from_position = static_cast<int>(from.kind);
int to_position = static_cast<int>(to.kind); /// Positions of each interval according to granularity map
if (from_position < to_position)
{
for (int i = from_position + 1; i <= to_position; ++i)
conversion_factor *= interval_conversions[i];
result_value = arguments[0].column->getInt(0) / conversion_factor;
}
else
{
for (int i = from_position; i > to_position; --i)
conversion_factor *= interval_conversions[i];
result_value = arguments[0].column->getInt(0) * conversion_factor;
}
return ColumnConst::create(ColumnInt64::create(1, result_value), input_rows_count);
}
else else
{ {
using FromFieldType = typename FromDataType::FieldType; using FromFieldType = typename FromDataType::FieldType;
@ -2184,7 +2214,7 @@ private:
const DataTypePtr from_type = removeNullable(arguments[0].type); const DataTypePtr from_type = removeNullable(arguments[0].type);
ColumnPtr result_column; ColumnPtr result_column;
[[maybe_unused]] FormatSettings::DateTimeOverflowBehavior date_time_overflow_behavior = default_date_time_overflow_behavior; FormatSettings::DateTimeOverflowBehavior date_time_overflow_behavior = default_date_time_overflow_behavior;
if (context) if (context)
date_time_overflow_behavior = context->getSettingsRef().date_time_overflow_behavior.value; date_time_overflow_behavior = context->getSettingsRef().date_time_overflow_behavior.value;
@ -2280,7 +2310,7 @@ private:
} }
} }
else else
result_column = ConvertImpl<LeftDataType, RightDataType, Name>::execute(arguments, result_type, input_rows_count, from_string_tag); result_column = ConvertImpl<LeftDataType, RightDataType, Name>::execute(arguments, result_type, input_rows_count, from_string_tag);
return true; return true;
}; };
@ -2337,6 +2367,10 @@ private:
else else
done = callOnIndexAndDataType<ToDataType>(from_type->getTypeId(), call, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag); done = callOnIndexAndDataType<ToDataType>(from_type->getTypeId(), call, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag);
} }
if constexpr (std::is_same_v<ToDataType, DataTypeInterval>)
if (WhichDataType(from_type).isInterval())
done = callOnIndexAndDataType<ToDataType>(from_type->getTypeId(), call, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag);
} }
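At the SQL level this enables casts between interval units (a sketch; output assumed from the conversion factors above):

```sql
SELECT CAST(toIntervalSecond(120) AS IntervalMinute);  -- 2, i.e. toIntervalMinute(2)
```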
if (!done) if (!done)

View File

@ -6,7 +6,6 @@
#include <Columns/ColumnString.h> #include <Columns/ColumnString.h>
#include <Functions/LowerUpperImpl.h> #include <Functions/LowerUpperImpl.h>
#include <base/find_symbols.h>
#include <unicode/unistr.h> #include <unicode/unistr.h>
#include <Common/StringUtils.h> #include <Common/StringUtils.h>
@ -43,7 +42,7 @@ struct LowerUpperUTF8Impl
String output; String output;
size_t curr_offset = 0; size_t curr_offset = 0;
for (size_t i = 0; i < offsets.size(); ++i) for (size_t i = 0; i < input_rows_count; ++i)
{ {
const auto * data_start = reinterpret_cast<const char *>(&data[offsets[i - 1]]); const auto * data_start = reinterpret_cast<const char *>(&data[offsets[i - 1]]);
size_t size = offsets[i] - offsets[i - 1]; size_t size = offsets[i] - offsets[i - 1];
@ -57,13 +56,15 @@ struct LowerUpperUTF8Impl
output.clear(); output.clear();
input.toUTF8String(output); input.toUTF8String(output);
/// For valid UTF-8 input strings, ICU sometimes produces output with extra '\0's at the end. Only the data before the first /// For valid UTF-8 input strings, ICU sometimes produces output with an extra '\0' at the end. Only the data before that
/// '\0' is valid. It the input is not valid UTF-8, then the behavior of lower/upperUTF8 is undefined by definition. In this /// '\0' is valid. If the input is not valid UTF-8, then the behavior of lower/upperUTF8 is undefined by definition. In this
/// case, the behavior is also reasonable. /// case, the behavior is also reasonable.
const char * res_end = find_last_not_symbols_or_null<'\0'>(output.data(), output.data() + output.size()); size_t valid_size = output.size();
size_t valid_size = res_end ? res_end - output.data() + 1 : 0; if (!output.empty() && output.back() == '\0')
--valid_size;
res_data.resize(curr_offset + valid_size + 1); res_data.resize(curr_offset + valid_size + 1);
memcpy(&res_data[curr_offset], output.data(), valid_size); memcpy(&res_data[curr_offset], output.data(), valid_size);
res_data[curr_offset + valid_size] = 0; res_data[curr_offset + valid_size] = 0;
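For reference, a sketch of the functions this code path implements (outputs assumed):

```sql
SELECT upperUTF8('münchen') AS u, lowerUTF8('MÜNCHEN') AS l;  -- 'MÜNCHEN', 'münchen'
```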

src/Functions/overlay.cpp Normal file (718 lines)
View File

@ -0,0 +1,718 @@
#include <Columns/ColumnConst.h>
#include <Columns/ColumnString.h>
#include <DataTypes/DataTypeString.h>
#include <Functions/FunctionFactory.h>
#include <Functions/FunctionHelpers.h>
#include <Functions/GatherUtils/Sources.h>
#include <Functions/IFunction.h>
#include <Common/StringUtils.h>
#include <Common/UTF8Helpers.h>
namespace DB
{
namespace
{
/// If 'is_utf8' - measure offset and length in code points instead of bytes.
/// Syntax:
/// - overlay(input, replace, offset[, length])
/// - overlayUTF8(input, replace, offset[, length]) - measure offset and length in code points instead of bytes
template <bool is_utf8>
class FunctionOverlay : public IFunction
{
public:
static constexpr auto name = is_utf8 ? "overlayUTF8" : "overlay";
static FunctionPtr create(ContextPtr) { return std::make_shared<FunctionOverlay>(); }
String getName() const override { return name; }
bool isVariadic() const override { return true; }
size_t getNumberOfArguments() const override { return 0; }
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }
bool useDefaultImplementationForConstants() const override { return true; }
DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
{
FunctionArgumentDescriptors mandatory_args{
{"input", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isString), nullptr, "String"},
{"replace", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isString), nullptr, "String"},
{"offset", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isNativeInteger), nullptr, "(U)Int8/16/32/64"},
};
FunctionArgumentDescriptors optional_args{
{"length", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isNativeInteger), nullptr, "(U)Int8/16/32/64"},
};
validateFunctionArguments(*this, arguments, mandatory_args, optional_args);
return std::make_shared<DataTypeString>();
}
ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr &, size_t input_rows_count) const override
{
if (input_rows_count == 0)
return ColumnString::create();
bool has_four_args = (arguments.size() == 4);
ColumnPtr col_input = arguments[0].column;
const auto * col_input_const = checkAndGetColumn<ColumnConst>(col_input.get());
const auto * col_input_string = checkAndGetColumn<ColumnString>(col_input.get());
bool input_is_const = (col_input_const != nullptr);
ColumnPtr col_replace = arguments[1].column;
const auto * col_replace_const = checkAndGetColumn<ColumnConst>(col_replace.get());
const auto * col_replace_string = checkAndGetColumn<ColumnString>(col_replace.get());
bool replace_is_const = (col_replace_const != nullptr);
ColumnPtr col_offset = arguments[2].column;
const ColumnConst * col_offset_const = checkAndGetColumn<ColumnConst>(col_offset.get());
bool offset_is_const = false;
Int64 offset = -1;
if (col_offset_const)
{
offset = col_offset_const->getInt(0);
offset_is_const = true;
}
ColumnPtr col_length = has_four_args ? arguments[3].column : nullptr;
const ColumnConst * col_length_const = has_four_args ? checkAndGetColumn<ColumnConst>(col_length.get()) : nullptr;
bool length_is_const = false;
Int64 length = -1;
if (col_length_const)
{
length = col_length_const->getInt(0);
length_is_const = true;
}
auto res_col = ColumnString::create();
auto & res_data = res_col->getChars();
auto & res_offsets = res_col->getOffsets();
res_offsets.resize_exact(input_rows_count);
if (col_input_const)
{
StringRef input = col_input_const->getDataAt(0);
res_data.reserve((input.size + 1) * input_rows_count);
}
else
{
res_data.reserve(col_input_string->getChars().size());
}
#define OVERLAY_EXECUTE_CASE(HAS_FOUR_ARGS, OFFSET_IS_CONST, LENGTH_IS_CONST) \
if (input_is_const && replace_is_const) \
constantConstant<HAS_FOUR_ARGS, OFFSET_IS_CONST, LENGTH_IS_CONST>( \
input_rows_count, \
col_input_const->getDataAt(0), \
col_replace_const->getDataAt(0), \
col_offset, \
col_length, \
offset, \
length, \
res_data, \
res_offsets); \
else if (input_is_const && !replace_is_const) \
constantVector<HAS_FOUR_ARGS, OFFSET_IS_CONST, LENGTH_IS_CONST>( \
input_rows_count, \
col_input_const->getDataAt(0), \
col_replace_string->getChars(), \
col_replace_string->getOffsets(), \
col_offset, \
col_length, \
offset, \
length, \
res_data, \
res_offsets); \
else if (!input_is_const && replace_is_const) \
vectorConstant<HAS_FOUR_ARGS, OFFSET_IS_CONST, LENGTH_IS_CONST>( \
input_rows_count, \
col_input_string->getChars(), \
col_input_string->getOffsets(), \
col_replace_const->getDataAt(0), \
col_offset, \
col_length, \
offset, \
length, \
res_data, \
res_offsets); \
else \
vectorVector<HAS_FOUR_ARGS, OFFSET_IS_CONST, LENGTH_IS_CONST>( \
input_rows_count, \
col_input_string->getChars(), \
col_input_string->getOffsets(), \
col_replace_string->getChars(), \
col_replace_string->getOffsets(), \
col_offset, \
col_length, \
offset, \
length, \
res_data, \
res_offsets);
if (!has_four_args)
{
if (offset_is_const)
{
OVERLAY_EXECUTE_CASE(false, true, false)
}
else
{
OVERLAY_EXECUTE_CASE(false, false, false)
}
}
else
{
if (offset_is_const && length_is_const)
{
OVERLAY_EXECUTE_CASE(true, true, true)
}
else if (offset_is_const && !length_is_const)
{
OVERLAY_EXECUTE_CASE(true, true, false)
}
else if (!offset_is_const && length_is_const)
{
OVERLAY_EXECUTE_CASE(true, false, true)
}
else
{
OVERLAY_EXECUTE_CASE(true, false, false)
}
}
#undef OVERLAY_EXECUTE_CASE
return res_col;
}
private:
/// input offset is 1-based, maybe negative
/// output result is 0-based valid offset, within [0, input_size]
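/// e.g. with input_size = 3: offset 10 clamps to 3 (append at the end), offset -10 clamps to 0 (prepend at the start)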
static size_t getValidOffset(Int64 offset, size_t input_size)
{
if (offset > 0)
{
if (static_cast<size_t>(offset) > input_size + 1)
return input_size;
else
return offset - 1;
}
else
{
if (input_size < -static_cast<size_t>(offset))
return 0;
else
return input_size + offset;
}
}
/// get character count of a slice [data, data+bytes)
static size_t getSliceSize(const UInt8 * data, size_t bytes)
{
if constexpr (is_utf8)
return UTF8::countCodePoints(data, bytes);
else
return bytes;
}
template <bool has_four_args, bool offset_is_const, bool length_is_const>
void constantConstant(
size_t rows,
const StringRef & input,
const StringRef & replace,
const ColumnPtr & column_offset,
const ColumnPtr & column_length,
Int64 const_offset,
Int64 const_length,
ColumnString::Chars & res_data,
ColumnString::Offsets & res_offsets) const
{
if (has_four_args && length_is_const && const_length < 0)
{
constantConstant<true, offset_is_const, false>(
rows, input, replace, column_offset, column_length, const_offset, -1, res_data, res_offsets);
return;
}
size_t input_size = getSliceSize(reinterpret_cast<const UInt8 *>(input.data), input.size);
size_t valid_offset = 0; // start from 0, not negative
if constexpr (offset_is_const)
valid_offset = getValidOffset(const_offset, input_size);
size_t replace_size = getSliceSize(reinterpret_cast<const UInt8 *>(replace.data), replace.size);
size_t valid_length = 0; // not negative
if constexpr (has_four_args && length_is_const)
{
assert(const_length >= 0);
valid_length = const_length;
}
else if constexpr (!has_four_args)
{
valid_length = replace_size;
}
Int64 offset = 0; // start from 1, maybe negative
Int64 length = 0; // maybe negative
const UInt8 * input_begin = reinterpret_cast<const UInt8 *>(input.data);
const UInt8 * input_end = reinterpret_cast<const UInt8 *>(input.data + input.size);
size_t res_offset = 0;
for (size_t i = 0; i < rows; ++i)
{
if constexpr (!offset_is_const)
{
offset = column_offset->getInt(i);
valid_offset = getValidOffset(offset, input_size);
}
if constexpr (has_four_args && !length_is_const)
{
length = column_length->getInt(i);
valid_length = length >= 0 ? length : replace_size;
}
size_t prefix_size = valid_offset;
size_t suffix_size = (prefix_size + valid_length > input_size) ? 0 : (input_size - prefix_size - valid_length);
if constexpr (!is_utf8)
{
size_t new_res_size = res_data.size() + prefix_size + replace_size + suffix_size + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], input.data, prefix_size);
res_offset += prefix_size;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], replace.data, replace_size);
res_offset += replace_size;
/// copy suffix after replaced region. It is not necessary to copy if suffix_size is zero.
if (suffix_size)
{
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], input.data + prefix_size + valid_length, suffix_size);
res_offset += suffix_size;
}
}
else
{
const auto * prefix_end = GatherUtils::UTF8StringSource::skipCodePointsForward(input_begin, prefix_size, input_end);
size_t prefix_bytes = prefix_end > input_end ? input.size : prefix_end - input_begin;
const auto * suffix_begin = GatherUtils::UTF8StringSource::skipCodePointsBackward(input_end, suffix_size, input_begin);
size_t suffix_bytes = input_end - suffix_begin;
size_t new_res_size = res_data.size() + prefix_bytes + replace.size + suffix_bytes + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], input_begin, prefix_bytes);
res_offset += prefix_bytes;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], replace.data, replace.size);
res_offset += replace.size;
/// copy suffix after replaced region. It is not necessary to copy if suffix_bytes is zero.
if (suffix_bytes)
{
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], suffix_begin, suffix_bytes);
res_offset += suffix_bytes;
}
}
/// add zero terminator
res_data[res_offset] = 0;
++res_offset;
res_offsets[i] = res_offset;
}
}
template <bool has_four_args, bool offset_is_const, bool length_is_const>
void vectorConstant(
size_t rows,
const ColumnString::Chars & input_data,
const ColumnString::Offsets & input_offsets,
const StringRef & replace,
const ColumnPtr & column_offset,
const ColumnPtr & column_length,
Int64 const_offset,
Int64 const_length,
ColumnString::Chars & res_data,
ColumnString::Offsets & res_offsets) const
{
if (has_four_args && length_is_const && const_length < 0)
{
vectorConstant<true, offset_is_const, false>(
rows, input_data, input_offsets, replace, column_offset, column_length, const_offset, -1, res_data, res_offsets);
return;
}
size_t replace_size = getSliceSize(reinterpret_cast<const UInt8 *>(replace.data), replace.size);
Int64 length = 0; // maybe negative
size_t valid_length = 0; // not negative
if constexpr (has_four_args && length_is_const)
{
assert(const_length >= 0);
valid_length = const_length;
}
else if constexpr (!has_four_args)
{
valid_length = replace_size;
}
Int64 offset = 0; // start from 1, maybe negative
size_t valid_offset = 0; // start from 0, not negative
size_t res_offset = 0;
for (size_t i = 0; i < rows; ++i)
{
size_t input_offset = input_offsets[i - 1];
size_t input_bytes = input_offsets[i] - input_offsets[i - 1] - 1;
size_t input_size = getSliceSize(&input_data[input_offset], input_bytes);
if constexpr (offset_is_const)
{
valid_offset = getValidOffset(const_offset, input_size);
}
else
{
offset = column_offset->getInt(i);
valid_offset = getValidOffset(offset, input_size);
}
if constexpr (has_four_args && !length_is_const)
{
length = column_length->getInt(i);
valid_length = length >= 0 ? length : replace_size;
}
size_t prefix_size = valid_offset;
size_t suffix_size = (prefix_size + valid_length > input_size) ? 0 : (input_size - prefix_size - valid_length);
if constexpr (!is_utf8)
{
size_t new_res_size = res_data.size() + prefix_size + replace_size + suffix_size + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], &input_data[input_offset], prefix_size);
res_offset += prefix_size;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], replace.data, replace_size);
res_offset += replace_size;
/// copy suffix after replaced region. It is not necessary to copy if suffix_size is zero.
if (suffix_size)
{
memcpySmallAllowReadWriteOverflow15(
&res_data[res_offset], &input_data[input_offset + prefix_size + valid_length], suffix_size);
res_offset += suffix_size;
}
}
else
{
const auto * input_begin = &input_data[input_offset];
const auto * input_end = &input_data[input_offset + input_bytes];
const auto * prefix_end = GatherUtils::UTF8StringSource::skipCodePointsForward(input_begin, prefix_size, input_end);
size_t prefix_bytes = prefix_end > input_end ? input_bytes : prefix_end - input_begin;
const auto * suffix_begin = GatherUtils::UTF8StringSource::skipCodePointsBackward(input_end, suffix_size, input_begin);
size_t suffix_bytes = input_end - suffix_begin;
size_t new_res_size = res_data.size() + prefix_bytes + replace.size + suffix_bytes + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], &input_data[input_offset], prefix_bytes);
res_offset += prefix_bytes;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], replace.data, replace.size);
res_offset += replace.size;
/// copy suffix after replaced region. It is not necessary to copy if suffix_bytes is zero.
if (suffix_bytes)
{
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], suffix_begin, suffix_bytes);
res_offset += suffix_bytes;
}
}
/// add zero terminator
res_data[res_offset] = 0;
++res_offset;
res_offsets[i] = res_offset;
}
}
template <bool has_four_args, bool offset_is_const, bool length_is_const>
void constantVector(
size_t rows,
const StringRef & input,
const ColumnString::Chars & replace_data,
const ColumnString::Offsets & replace_offsets,
const ColumnPtr & column_offset,
const ColumnPtr & column_length,
Int64 const_offset,
Int64 const_length,
ColumnString::Chars & res_data,
ColumnString::Offsets & res_offsets) const
{
if (has_four_args && length_is_const && const_length < 0)
{
constantVector<true, offset_is_const, false>(
rows, input, replace_data, replace_offsets, column_offset, column_length, const_offset, -1, res_data, res_offsets);
return;
}
size_t input_size = getSliceSize(reinterpret_cast<const UInt8 *>(input.data), input.size);
size_t valid_offset = 0; // start from 0, not negative
if constexpr (offset_is_const)
valid_offset = getValidOffset(const_offset, input_size);
Int64 length = 0; // maybe negative
size_t valid_length = 0; // not negative
if constexpr (has_four_args && length_is_const)
{
assert(const_length >= 0);
valid_length = const_length;
}
const auto * input_begin = reinterpret_cast<const UInt8 *>(input.data);
const auto * input_end = reinterpret_cast<const UInt8 *>(input.data + input.size);
Int64 offset = 0; // start from 1, maybe negative
size_t res_offset = 0;
for (size_t i = 0; i < rows; ++i)
{
size_t replace_offset = replace_offsets[i - 1];
size_t replace_bytes = replace_offsets[i] - replace_offsets[i - 1] - 1;
size_t replace_size = getSliceSize(&replace_data[replace_offset], replace_bytes);
if constexpr (!offset_is_const)
{
offset = column_offset->getInt(i);
valid_offset = getValidOffset(offset, input_size);
}
if constexpr (!has_four_args)
{
valid_length = replace_size;
}
else if constexpr (!length_is_const)
{
length = column_length->getInt(i);
valid_length = length >= 0 ? length : replace_size;
}
size_t prefix_size = valid_offset;
size_t suffix_size = (prefix_size + valid_length > input_size) ? 0 : (input_size - prefix_size - valid_length);
if constexpr (!is_utf8)
{
size_t new_res_size = res_data.size() + prefix_size + replace_size + suffix_size + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], input.data, prefix_size);
res_offset += prefix_size;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], &replace_data[replace_offset], replace_size);
res_offset += replace_size;
/// copy suffix after replaced region. It is not necessary to copy if suffix_size is zero.
if (suffix_size)
{
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], input.data + prefix_size + valid_length, suffix_size);
res_offset += suffix_size;
}
}
else
{
const auto * prefix_end = GatherUtils::UTF8StringSource::skipCodePointsForward(input_begin, prefix_size, input_end);
size_t prefix_bytes = prefix_end > input_end ? input.size : prefix_end - input_begin;
const auto * suffix_begin = GatherUtils::UTF8StringSource::skipCodePointsBackward(input_end, suffix_size, input_begin);
size_t suffix_bytes = input_end - suffix_begin;
size_t new_res_size = res_data.size() + prefix_bytes + replace_bytes + suffix_bytes + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], input_begin, prefix_bytes);
res_offset += prefix_bytes;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], &replace_data[replace_offset], replace_bytes);
res_offset += replace_bytes;
/// copy suffix after replaced region. It is not necessary to copy if suffix_bytes is zero
if (suffix_bytes)
{
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], suffix_begin, suffix_bytes);
res_offset += suffix_bytes;
}
}
/// add zero terminator
res_data[res_offset] = 0;
++res_offset;
res_offsets[i] = res_offset;
}
}
template <bool has_four_args, bool offset_is_const, bool length_is_const>
void vectorVector(
size_t rows,
const ColumnString::Chars & input_data,
const ColumnString::Offsets & input_offsets,
const ColumnString::Chars & replace_data,
const ColumnString::Offsets & replace_offsets,
const ColumnPtr & column_offset,
const ColumnPtr & column_length,
Int64 const_offset,
Int64 const_length,
ColumnString::Chars & res_data,
ColumnString::Offsets & res_offsets) const
{
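/// As above: a negative constant length falls back to the length of the replacement string;
/// re-dispatch with a non-constant length so the per-row fallback applies.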
if (has_four_args && length_is_const && const_length < 0)
{
vectorVector<true, offset_is_const, false>(
rows,
input_data,
input_offsets,
replace_data,
replace_offsets,
column_offset,
column_length,
const_offset,
-1,
res_data,
res_offsets);
return;
}
Int64 length = 0; // maybe negative
size_t valid_length = 0; // not negative
if constexpr (has_four_args && length_is_const)
{
assert(const_length >= 0);
valid_length = const_length;
}
Int64 offset = 0; // start from 1, maybe negative
size_t valid_offset = 0; // start from 0, not negative
size_t res_offset = 0;
for (size_t i = 0; i < rows; ++i)
{
size_t input_offset = input_offsets[i - 1];
size_t input_bytes = input_offsets[i] - input_offsets[i - 1] - 1;
size_t input_size = getSliceSize(&input_data[input_offset], input_bytes);
size_t replace_offset = replace_offsets[i - 1];
size_t replace_bytes = replace_offsets[i] - replace_offsets[i - 1] - 1;
size_t replace_size = getSliceSize(&replace_data[replace_offset], replace_bytes);
if constexpr (offset_is_const)
{
valid_offset = getValidOffset(const_offset, input_size);
}
else
{
offset = column_offset->getInt(i);
valid_offset = getValidOffset(offset, input_size);
}
if constexpr (!has_four_args)
{
valid_length = replace_size;
}
else if constexpr (!length_is_const)
{
length = column_length->getInt(i);
valid_length = length >= 0 ? length : replace_size;
}
size_t prefix_size = valid_offset;
size_t suffix_size = (prefix_size + valid_length > input_size) ? 0 : (input_size - prefix_size - valid_length);
if constexpr (!is_utf8)
{
size_t new_res_size = res_data.size() + prefix_size + replace_size + suffix_size + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], &input_data[input_offset], prefix_size);
res_offset += prefix_size;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], &replace_data[replace_offset], replace_size);
res_offset += replace_size;
/// copy suffix after replaced region. It is not necessary to copy if suffix_size is zero.
if (suffix_size)
{
memcpySmallAllowReadWriteOverflow15(
&res_data[res_offset], &input_data[input_offset + prefix_size + valid_length], suffix_size);
res_offset += suffix_size;
}
}
else
{
const auto * input_begin = &input_data[input_offset];
const auto * input_end = &input_data[input_offset + input_bytes];
const auto * prefix_end = GatherUtils::UTF8StringSource::skipCodePointsForward(input_begin, prefix_size, input_end);
size_t prefix_bytes = prefix_end > input_end ? input_bytes : prefix_end - input_begin;
const auto * suffix_begin = GatherUtils::UTF8StringSource::skipCodePointsBackward(input_end, suffix_size, input_begin);
size_t suffix_bytes = input_end - suffix_begin;
size_t new_res_size = res_data.size() + prefix_bytes + replace_bytes + suffix_bytes + 1; /// +1 for zero terminator
res_data.resize(new_res_size);
/// copy prefix before replaced region
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], input_begin, prefix_bytes);
res_offset += prefix_bytes;
/// copy replace
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], &replace_data[replace_offset], replace_bytes);
res_offset += replace_bytes;
/// copy suffix after replaced region. It is not necessary to copy if suffix_bytes is zero.
if (suffix_bytes)
{
memcpySmallAllowReadWriteOverflow15(&res_data[res_offset], suffix_begin, suffix_bytes);
res_offset += suffix_bytes;
}
}
/// add zero terminator
res_data[res_offset] = 0;
++res_offset;
res_offsets[i] = res_offset;
}
}
};
}
REGISTER_FUNCTION(Overlay)
{
factory.registerFunction<FunctionOverlay<false>>(
{.description = R"(
Replace a part of a string `input` with another string `replace`, starting at 1-based index `offset`. By default, the number of bytes removed from `input` equals the length of `replace`. If `length` (the optional fourth argument) is specified, a different number of bytes is removed.
)",
.categories{"String"}},
FunctionFactory::Case::Insensitive);
factory.registerFunction<FunctionOverlay<true>>(
{.description = R"(
Replace a part of a string `input` with another string `replace`, starting at 1-based index `offset`. By default, the number of characters removed from `input` equals the length of `replace`. If `length` (the optional fourth argument) is specified, a different number of characters is removed.
Assumes that the string contains valid UTF-8 encoded text. If this assumption is violated, no exception is thrown and the result is undefined.
)",
.categories{"String"}},
FunctionFactory::Case::Sensitive);
}
}
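For illustration, a minimal usage sketch of the two functions registered above; the inputs are the same sample strings used by the new test further below, so the results can be cross-checked against its reference output:
SELECT overlay('Spark SQL', '_', 6);                 -- 'Spark_SQL' (3-arg form: removes as many bytes as `replace` has)
SELECT overlayUTF8('Spark SQL和CH', '_', 6);          -- 'Spark_SQL和CH'
SELECT overlay('Spark SQL', 'ANSI ', 7, 0);           -- 'Spark ANSI SQL' (4-arg form: length 0 removes nothing, i.e. inserts)
SELECT overlayUTF8('Spark SQL和CH', 'ANSI ', 7, 0);   -- 'Spark ANSI SQL和CH'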

View File

@ -619,9 +619,18 @@ QueryCache::Writer QueryCache::createWriter(const Key & key, std::chrono::millis
return Writer(cache, key, max_entry_size_in_bytes, max_entry_size_in_rows, min_query_runtime, squash_partial_results, max_block_size); return Writer(cache, key, max_entry_size_in_bytes, max_entry_size_in_rows, min_query_runtime, squash_partial_results, max_block_size);
} }
void QueryCache::clear() void QueryCache::clear(const std::optional<String> & tag)
{ {
cache.clear(); if (tag)
{
auto predicate = [tag](const Key & key, const Cache::MappedPtr &) { return key.tag == tag.value(); };
cache.remove(predicate);
}
else
{
cache.clear();
}
std::lock_guard lock(mutex); std::lock_guard lock(mutex);
times_executed.clear(); times_executed.clear();
} }
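As a usage sketch (the tag value 'abc' is illustrative), entries are tagged at cache time via the `query_cache_tag` setting and can then be dropped selectively; the same statements appear in the updated test below:
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc';
SYSTEM DROP QUERY CACHE TAG 'abc';   -- removes only entries whose key.tag is 'abc'
SYSTEM DROP QUERY CACHE;             -- removes all entries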

View File

@ -211,7 +211,7 @@ public:
Reader createReader(const Key & key); Reader createReader(const Key & key);
Writer createWriter(const Key & key, std::chrono::milliseconds min_query_runtime, bool squash_partial_results, size_t max_block_size, size_t max_query_cache_size_in_bytes_quota, size_t max_query_cache_entries_quota); Writer createWriter(const Key & key, std::chrono::milliseconds min_query_runtime, bool squash_partial_results, size_t max_block_size, size_t max_query_cache_size_in_bytes_quota, size_t max_query_cache_entries_quota);
void clear(); void clear(const std::optional<String> & tag);
size_t sizeInBytes() const; size_t sizeInBytes() const;
size_t count() const; size_t count() const;

View File

@ -3228,12 +3228,12 @@ QueryCachePtr Context::getQueryCache() const
return shared->query_cache; return shared->query_cache;
} }
void Context::clearQueryCache() const void Context::clearQueryCache(const std::optional<String> & tag) const
{ {
std::lock_guard lock(shared->mutex); std::lock_guard lock(shared->mutex);
if (shared->query_cache) if (shared->query_cache)
shared->query_cache->clear(); shared->query_cache->clear(tag);
} }
void Context::clearCaches() const void Context::clearCaches() const

View File

@ -1068,7 +1068,7 @@ public:
void setQueryCache(size_t max_size_in_bytes, size_t max_entries, size_t max_entry_size_in_bytes, size_t max_entry_size_in_rows); void setQueryCache(size_t max_size_in_bytes, size_t max_entries, size_t max_entry_size_in_bytes, size_t max_entry_size_in_rows);
void updateQueryCacheConfiguration(const Poco::Util::AbstractConfiguration & config); void updateQueryCacheConfiguration(const Poco::Util::AbstractConfiguration & config);
std::shared_ptr<QueryCache> getQueryCache() const; std::shared_ptr<QueryCache> getQueryCache() const;
void clearQueryCache() const; void clearQueryCache(const std::optional<String> & tag) const;
/** Clear the caches of the uncompressed blocks and marks. /** Clear the caches of the uncompressed blocks and marks.
* This is usually done when renaming tables, changing the type of columns, deleting a table. * This is usually done when renaming tables, changing the type of columns, deleting a table.

View File

@ -369,9 +369,12 @@ BlockIO InterpreterSystemQuery::execute()
system_context->clearMMappedFileCache(); system_context->clearMMappedFileCache();
break; break;
case Type::DROP_QUERY_CACHE: case Type::DROP_QUERY_CACHE:
{
getContext()->checkAccess(AccessType::SYSTEM_DROP_QUERY_CACHE); getContext()->checkAccess(AccessType::SYSTEM_DROP_QUERY_CACHE);
getContext()->clearQueryCache(); getContext()->clearQueryCache(query.query_cache_tag);
break; break;
}
case Type::DROP_COMPILED_EXPRESSION_CACHE: case Type::DROP_COMPILED_EXPRESSION_CACHE:
#if USE_EMBEDDED_COMPILER #if USE_EMBEDDED_COMPILER
getContext()->checkAccess(AccessType::SYSTEM_DROP_COMPILED_EXPRESSION_CACHE); getContext()->checkAccess(AccessType::SYSTEM_DROP_COMPILED_EXPRESSION_CACHE);

View File

@ -131,6 +131,8 @@ public:
String disk; String disk;
UInt64 seconds{}; UInt64 seconds{};
std::optional<String> query_cache_tag;
String filesystem_cache_name; String filesystem_cache_name;
std::string key_to_drop; std::string key_to_drop;
std::optional<size_t> offset_to_drop; std::optional<size_t> offset_to_drop;

View File

@ -470,6 +470,7 @@ namespace DB
MR_MACROS(TABLE_OVERRIDE, "TABLE OVERRIDE") \ MR_MACROS(TABLE_OVERRIDE, "TABLE OVERRIDE") \
MR_MACROS(TABLE, "TABLE") \ MR_MACROS(TABLE, "TABLE") \
MR_MACROS(TABLES, "TABLES") \ MR_MACROS(TABLES, "TABLES") \
MR_MACROS(TAG, "TAG") \
MR_MACROS(TAGS, "TAGS") \ MR_MACROS(TAGS, "TAGS") \
MR_MACROS(TAGS_INNER_UUID, "TAGS INNER UUID") \ MR_MACROS(TAGS_INNER_UUID, "TAGS INNER UUID") \
MR_MACROS(TEMPORARY_TABLE, "TEMPORARY TABLE") \ MR_MACROS(TEMPORARY_TABLE, "TEMPORARY TABLE") \

View File

@ -471,6 +471,16 @@ bool ParserSystemQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expected &
res->seconds = seconds->as<ASTLiteral>()->value.safeGet<UInt64>(); res->seconds = seconds->as<ASTLiteral>()->value.safeGet<UInt64>();
break; break;
} }
case Type::DROP_QUERY_CACHE:
{
ParserLiteral tag_parser;
ASTPtr ast;
if (ParserKeyword{Keyword::TAG}.ignore(pos, expected) && tag_parser.parse(pos, ast, expected))
res->query_cache_tag = std::make_optional<String>(ast->as<ASTLiteral>()->value.safeGet<String>());
if (!parseQueryWithOnCluster(res, pos, expected))
return false;
break;
}
case Type::DROP_FILESYSTEM_CACHE: case Type::DROP_FILESYSTEM_CACHE:
{ {
ParserLiteral path_parser; ParserLiteral path_parser;
ParserLiteral path_parser; ParserLiteral path_parser;

View File

@ -67,8 +67,8 @@ struct Settings;
M(Bool, fsync_part_directory, false, "Do fsync for part directory after all part operations (writes, renames, etc.).", 0) \ M(Bool, fsync_part_directory, false, "Do fsync for part directory after all part operations (writes, renames, etc.).", 0) \
M(UInt64, non_replicated_deduplication_window, 0, "How many last blocks of hashes should be kept on disk (0 - disabled).", 0) \ M(UInt64, non_replicated_deduplication_window, 0, "How many last blocks of hashes should be kept on disk (0 - disabled).", 0) \
M(UInt64, max_parts_to_merge_at_once, 100, "Max amount of parts which can be merged at once (0 - disabled). Doesn't affect OPTIMIZE FINAL query.", 0) \ M(UInt64, max_parts_to_merge_at_once, 100, "Max amount of parts which can be merged at once (0 - disabled). Doesn't affect OPTIMIZE FINAL query.", 0) \
M(UInt64, merge_selecting_sleep_ms, 5000, "Maximum sleep time for merge selecting, a lower setting will trigger selecting tasks in background_schedule_pool frequently which result in large amount of requests to zookeeper in large-scale clusters", 0) \ M(UInt64, merge_selecting_sleep_ms, 5000, "Minimum time to wait before trying to select parts to merge again after no parts were selected. A lower setting will trigger selecting tasks in background_schedule_pool frequently which result in large amount of requests to zookeeper in large-scale clusters", 0) \
M(UInt64, max_merge_selecting_sleep_ms, 60000, "Maximum sleep time for merge selecting, a lower setting will trigger selecting tasks in background_schedule_pool frequently which result in large amount of requests to zookeeper in large-scale clusters", 0) \ M(UInt64, max_merge_selecting_sleep_ms, 60000, "Maximum time to wait before trying to select parts to merge again after no parts were selected. A lower setting will trigger selecting tasks in background_schedule_pool frequently which result in large amount of requests to zookeeper in large-scale clusters", 0) \
M(Float, merge_selecting_sleep_slowdown_factor, 1.2f, "The sleep time for merge selecting task is multiplied by this factor when there's nothing to merge and divided when a merge was assigned", 0) \ M(Float, merge_selecting_sleep_slowdown_factor, 1.2f, "The sleep time for merge selecting task is multiplied by this factor when there's nothing to merge and divided when a merge was assigned", 0) \
M(UInt64, merge_tree_clear_old_temporary_directories_interval_seconds, 60, "The period of executing the clear old temporary directories operation in background.", 0) \ M(UInt64, merge_tree_clear_old_temporary_directories_interval_seconds, 60, "The period of executing the clear old temporary directories operation in background.", 0) \
M(UInt64, merge_tree_clear_old_parts_interval_seconds, 1, "The period of executing the clear old parts operation in background.", 0) \ M(UInt64, merge_tree_clear_old_parts_interval_seconds, 1, "The period of executing the clear old parts operation in background.", 0) \
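A sketch of how these settings are typically overridden per table (the values simply restate the defaults above; the table definition itself is illustrative, not part of this patch):
CREATE TABLE t_example (x UInt64) ENGINE = MergeTree ORDER BY x
SETTINGS merge_selecting_sleep_ms = 5000,
         max_merge_selecting_sleep_ms = 60000,
         merge_selecting_sleep_slowdown_factor = 1.2;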

View File

@ -538,6 +538,9 @@ static StoragePtr create(const StorageFactory::Arguments & args)
if (replica_name.empty()) if (replica_name.empty())
throw Exception(ErrorCodes::NO_REPLICA_NAME_GIVEN, "No replica name in config{}", verbose_help_message); throw Exception(ErrorCodes::NO_REPLICA_NAME_GIVEN, "No replica name in config{}", verbose_help_message);
// '\t' and '\n' will interrupt parsing 'source replica' in ReplicatedMergeTreeLogEntryData::readText
if (replica_name.find('\t') != String::npos || replica_name.find('\n') != String::npos)
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Replica name must not contain '\\t' or '\\n'");
arg_cnt = engine_args.size(); /// Update `arg_cnt` here because extractZooKeeperPathAndReplicaNameFromEngineArgs() could add arguments. arg_cnt = engine_args.size(); /// Update `arg_cnt` here because extractZooKeeperPathAndReplicaNameFromEngineArgs() could add arguments.
arg_num = 2; /// zookeeper_path and replica_name together are always two arguments. arg_num = 2; /// zookeeper_path and replica_name together are always two arguments.

View File

@ -290,6 +290,10 @@ VirtualColumnsDescription StorageDistributed::createVirtuals()
desc.addEphemeral("_shard_num", std::make_shared<DataTypeUInt32>(), "Deprecated. Use function shardNum instead"); desc.addEphemeral("_shard_num", std::make_shared<DataTypeUInt32>(), "Deprecated. Use function shardNum instead");
/// Add virtual columns from table with Merge engine.
desc.addEphemeral("_database", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>()), "The name of database which the row comes from");
desc.addEphemeral("_table", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>()), "The name of table which the row comes from");
return desc; return desc;
} }
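Usage sketch: with the Distributed-over-Merge setup created by the new test at the end of this change, the added virtual columns can now be selected through the Distributed table:
SELECT a, _database, _table FROM t_distr ORDER BY a;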

View File

@ -642,10 +642,6 @@ std::vector<ReadFromMerge::ChildPlan> ReadFromMerge::createChildrenPlans(SelectQ
column_names_as_aliases.push_back(ExpressionActions::getSmallestColumn(storage_metadata_snapshot->getColumns().getAllPhysical()).name); column_names_as_aliases.push_back(ExpressionActions::getSmallestColumn(storage_metadata_snapshot->getColumns().getAllPhysical()).name);
} }
} }
else
{
}
auto child = createPlanForTable( auto child = createPlanForTable(
nested_storage_snaphsot, nested_storage_snaphsot,
@ -657,6 +653,7 @@ std::vector<ReadFromMerge::ChildPlan> ReadFromMerge::createChildrenPlans(SelectQ
row_policy_data_opt, row_policy_data_opt,
modified_context, modified_context,
current_streams); current_streams);
child.plan.addInterpreterContext(modified_context); child.plan.addInterpreterContext(modified_context);
if (child.plan.isInitialized()) if (child.plan.isInitialized())
@ -914,12 +911,14 @@ SelectQueryInfo ReadFromMerge::getModifiedQueryInfo(const ContextMutablePtr & mo
modified_query_info.table_expression = replacement_table_expression; modified_query_info.table_expression = replacement_table_expression;
modified_query_info.planner_context->getOrCreateTableExpressionData(replacement_table_expression); modified_query_info.planner_context->getOrCreateTableExpressionData(replacement_table_expression);
auto get_column_options = GetColumnsOptions(GetColumnsOptions::All).withExtendedObjects().withVirtuals(); auto get_column_options = GetColumnsOptions(GetColumnsOptions::All)
if (storage_snapshot_->storage.supportsSubcolumns()) .withExtendedObjects()
get_column_options.withSubcolumns(); .withSubcolumns(storage_snapshot_->storage.supportsSubcolumns());
std::unordered_map<std::string, QueryTreeNodePtr> column_name_to_node; std::unordered_map<std::string, QueryTreeNodePtr> column_name_to_node;
/// Consider only non-virtual columns of storage while checking for _table and _database columns.
/// I.e. always override virtual columns with these names from underlying table (if any).
if (!storage_snapshot_->tryGetColumn(get_column_options, "_table")) if (!storage_snapshot_->tryGetColumn(get_column_options, "_table"))
{ {
auto table_name_node = std::make_shared<ConstantNode>(current_storage_id.table_name); auto table_name_node = std::make_shared<ConstantNode>(current_storage_id.table_name);
@ -946,6 +945,7 @@ SelectQueryInfo ReadFromMerge::getModifiedQueryInfo(const ContextMutablePtr & mo
column_name_to_node.emplace("_database", function_node); column_name_to_node.emplace("_database", function_node);
} }
get_column_options.withVirtuals();
auto storage_columns = storage_snapshot_->metadata->getColumns(); auto storage_columns = storage_snapshot_->metadata->getColumns();
bool with_aliases = /* common_processed_stage == QueryProcessingStage::FetchColumns && */ !storage_columns.getAliases().empty(); bool with_aliases = /* common_processed_stage == QueryProcessingStage::FetchColumns && */ !storage_columns.getAliases().empty();

View File

@ -339,7 +339,7 @@ export -f run_tests
if [ "$NUM_TRIES" -gt "1" ]; then if [ "$NUM_TRIES" -gt "1" ]; then
# We don't run tests with Ordinary database in PRs, only in master. # We don't run tests with Ordinary database in PRs, only in master.
# So run new/changed tests with Ordinary at least once in flaky check. # So run new/changed tests with Ordinary at least once in flaky check.
NUM_TRIES=1; USE_DATABASE_ORDINARY=1; run_tests \ NUM_TRIES=1 USE_DATABASE_ORDINARY=1 run_tests \
| sed 's/All tests have finished/Redacted: a message about tests finish is deleted/' | sed 's/No tests were run/Redacted: a message about no tests run is deleted/' ||: | sed 's/All tests have finished/Redacted: a message about tests finish is deleted/' | sed 's/No tests were run/Redacted: a message about no tests run is deleted/' ||:
fi fi

View File

@ -26,3 +26,4 @@
1 1
1 1
1 1
2

View File

@ -38,3 +38,6 @@ select lowerUTF8('ır') = 'ır';
-- German language -- German language
select upper('öäüß') = 'öäüß'; select upper('öäüß') = 'öäüß';
select lower('ÖÄÜẞ') = 'ÖÄÜẞ'; select lower('ÖÄÜẞ') = 'ÖÄÜẞ';
-- Bug 68680
SELECT lengthUTF8(lowerUTF8('Ä\0'));

View File

@ -1,3 +1,17 @@
Cache query result in query cache
1 1
1 1
DROP entries with a certain tag, no entry will match
1
After a full DROP, the cache is empty now
0
Cache query result with different or no tag in query cache
1
1
1
2
4
DROP entries with certain tags
2
1
0 0

View File

@ -4,10 +4,31 @@
-- (it's silly to use what will be tested below but we have to assume other tests cluttered the query cache) -- (it's silly to use what will be tested below but we have to assume other tests cluttered the query cache)
SYSTEM DROP QUERY CACHE; SYSTEM DROP QUERY CACHE;
-- Cache query result in query cache SELECT 'Cache query result in query cache';
SELECT 1 SETTINGS use_query_cache = true; SELECT 1 SETTINGS use_query_cache = true;
SELECT count(*) FROM system.query_cache; SELECT count(*) FROM system.query_cache;
-- No query results are cached after DROP SELECT 'DROP entries with a certain tag, no entry will match';
SYSTEM DROP QUERY CACHE TAG 'tag';
SELECT count(*) FROM system.query_cache;
SELECT 'After a full DROP, the cache is empty now';
SYSTEM DROP QUERY CACHE; SYSTEM DROP QUERY CACHE;
SELECT count(*) FROM system.query_cache; SELECT count(*) FROM system.query_cache;
-- More tests for DROP with tags:
SELECT 'Cache query result with different or no tag in query cache';
SELECT 1 SETTINGS use_query_cache = true;
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc';
SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'def';
SELECT 2 SETTINGS use_query_cache = true;
SELECT count(*) FROM system.query_cache;
SELECT 'DROP entries with certain tags';
SYSTEM DROP QUERY CACHE TAG '';
SELECT count(*) FROM system.query_cache;
SYSTEM DROP QUERY CACHE TAG 'def';
SELECT count(*) FROM system.query_cache;
SYSTEM DROP QUERY CACHE TAG 'abc';
SELECT count(*) FROM system.query_cache;

View File

@ -54,6 +54,8 @@ _row_exists UInt8 Persisted mask created by lightweight delete that show wheth
_block_number UInt64 Persisted original number of block that was assigned at insert Delta, LZ4 1 _block_number UInt64 Persisted original number of block that was assigned at insert Delta, LZ4 1
_block_offset UInt64 Persisted original number of row in block that was assigned at insert Delta, LZ4 1 _block_offset UInt64 Persisted original number of row in block that was assigned at insert Delta, LZ4 1
_shard_num UInt32 Deprecated. Use function shardNum instead 1 _shard_num UInt32 Deprecated. Use function shardNum instead 1
_database LowCardinality(String) The name of database which the row comes from 1
_table LowCardinality(String) The name of table which the row comes from 1
SET describe_compact_output = 0, describe_include_virtual_columns = 1, describe_include_subcolumns = 1; SET describe_compact_output = 0, describe_include_virtual_columns = 1, describe_include_subcolumns = 1;
DESCRIBE TABLE t_describe_options; DESCRIBE TABLE t_describe_options;
id UInt64 index column 0 0 id UInt64 index column 0 0
@ -87,6 +89,8 @@ _row_exists UInt8 Persisted mask created by lightweight delete that show wheth
_block_number UInt64 Persisted original number of block that was assigned at insert Delta, LZ4 0 1 _block_number UInt64 Persisted original number of block that was assigned at insert Delta, LZ4 0 1
_block_offset UInt64 Persisted original number of row in block that was assigned at insert Delta, LZ4 0 1 _block_offset UInt64 Persisted original number of row in block that was assigned at insert Delta, LZ4 0 1
_shard_num UInt32 Deprecated. Use function shardNum instead 0 1 _shard_num UInt32 Deprecated. Use function shardNum instead 0 1
_database LowCardinality(String) The name of database which the row comes from 0 1
_table LowCardinality(String) The name of table which the row comes from 0 1
arr.size0 UInt64 1 0 arr.size0 UInt64 1 0
t.a String ZSTD(1) 1 0 t.a String ZSTD(1) 1 0
t.b UInt64 ZSTD(1) 1 0 t.b UInt64 ZSTD(1) 1 0
@ -144,6 +148,8 @@ _row_exists UInt8 1
_block_number UInt64 1 _block_number UInt64 1
_block_offset UInt64 1 _block_offset UInt64 1
_shard_num UInt32 1 _shard_num UInt32 1
_database LowCardinality(String) 1
_table LowCardinality(String) 1
SET describe_compact_output = 1, describe_include_virtual_columns = 1, describe_include_subcolumns = 1; SET describe_compact_output = 1, describe_include_virtual_columns = 1, describe_include_subcolumns = 1;
DESCRIBE TABLE t_describe_options; DESCRIBE TABLE t_describe_options;
id UInt64 0 0 id UInt64 0 0
@ -177,6 +183,8 @@ _row_exists UInt8 0 1
_block_number UInt64 0 1 _block_number UInt64 0 1
_block_offset UInt64 0 1 _block_offset UInt64 0 1
_shard_num UInt32 0 1 _shard_num UInt32 0 1
_database LowCardinality(String) 0 1
_table LowCardinality(String) 0 1
arr.size0 UInt64 1 0 arr.size0 UInt64 1 0
t.a String 1 0 t.a String 1 0
t.b UInt64 1 0 t.b UInt64 1 0

View File

@ -0,0 +1,68 @@
Negative test of overlay
Test with 3 arguments and various combinations of const/non-const columns
Spark_SQL Spark_SQL和CH
Spark_SQL Spark_SQL和CH
Spark_SQL Spark_SQL和CH
Spark_SQL Spark_SQL和CH
Spark_SQL Spark_SQL和CH
Spark_SQL Spark_SQL和CH
Spark_SQL Spark_SQL和CH
Spark_SQL Spark_SQL和CH
Test with 4 arguments and various combinations of const/non-const columns
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Spark ANSI SQL Spark ANSI SQL和CH
Test with special offset values
-12 __ark SQL 之park SQL和CH
-11 __ark SQL S之ark SQL和CH
-10 __ark SQL Sp之rk SQL和CH
-9 __ark SQL Spa之k SQL和CH
-8 S__rk SQL Spar之 SQL和CH
-7 Sp__k SQL Spark之SQL和CH
-6 Spa__ SQL Spark 之QL和CH
-5 Spar__SQL Spark S之L和CH
-4 Spark__QL Spark SQ之和CH
-3 Spark __L Spark SQL之CH
-2 Spark S__ Spark SQL和之H
-1 Spark SQ__ Spark SQL和C之
0 Spark SQL__ Spark SQL和CH之
1 __ark SQL 之park SQL和CH
2 S__rk SQL S之ark SQL和CH
3 Sp__k SQL Sp之rk SQL和CH
4 Spa__ SQL Spa之k SQL和CH
5 Spar__SQL Spar之 SQL和CH
6 Spark__QL Spark之SQL和CH
7 Spark __L Spark 之QL和CH
8 Spark S__ Spark S之L和CH
9 Spark SQ__ Spark SQ之和CH
10 Spark SQL__ Spark SQL之CH
11 Spark SQL__ Spark SQL和之H
12 Spark SQL__ Spark SQL和C之
13 Spark SQL__ Spark SQL和CH之
Test with special length values
-1 Spark ANSI Spark ANSI H
0 Spark ANSI SQL Spark ANSI SQL和CH
1 Spark ANSI QL Spark ANSI QL和CH
2 Spark ANSI L Spark ANSI L和CH
3 Spark ANSI Spark ANSI 和CH
4 Spark ANSI Spark ANSI CH
5 Spark ANSI Spark ANSI H
6 Spark ANSI Spark ANSI
Test with special input and replace values
_ _
Spark SQL Spark SQL和CH
ANSI ANSI
Spark SQL Spark SQL和CH

View File

@ -0,0 +1,47 @@
SELECT 'Negative test of overlay';
SELECT overlay('hello', 'world'); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
SELECT overlay('hello', 'world', 2, 3, 'extra'); -- { serverError NUMBER_OF_ARGUMENTS_DOESNT_MATCH }
SELECT overlay(123, 'world', 2, 3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
SELECT overlay('hello', 456, 2, 3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
SELECT overlay('hello', 'world', 'two', 3); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
SELECT overlay('hello', 'world', 2, 'three'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
SELECT 'Test with 3 arguments and various combinations of const/non-const columns';
SELECT overlay('Spark SQL', '_', 6), overlayUTF8('Spark SQL和CH', '_', 6);
SELECT overlay(materialize('Spark SQL'), '_', 6), overlayUTF8(materialize('Spark SQL和CH'), '_', 6);
SELECT overlay('Spark SQL', materialize('_'), 6), overlayUTF8('Spark SQL和CH', materialize('_'), 6);
SELECT overlay('Spark SQL', '_', materialize(6)), overlayUTF8('Spark SQL和CH', '_', materialize(6));
SELECT overlay(materialize('Spark SQL'), materialize('_'), 6), overlayUTF8(materialize('Spark SQL和CH'), materialize('_'), 6);
SELECT overlay(materialize('Spark SQL'), '_', materialize(6)), overlayUTF8(materialize('Spark SQL和CH'), '_', materialize(6));
SELECT overlay('Spark SQL', materialize('_'), materialize(6)), overlayUTF8('Spark SQL和CH', materialize('_'), materialize(6));
SELECT overlay(materialize('Spark SQL'), materialize('_'), materialize(6)), overlayUTF8(materialize('Spark SQL和CH'), materialize('_'), materialize(6));
SELECT 'Test with 4 arguments and various combinations of const/non-const columns';
SELECT overlay('Spark SQL', 'ANSI ', 7, 0), overlayUTF8('Spark SQL和CH', 'ANSI ', 7, 0);
SELECT overlay(materialize('Spark SQL'), 'ANSI ', 7, 0), overlayUTF8(materialize('Spark SQL和CH'), 'ANSI ', 7, 0);
SELECT overlay('Spark SQL', materialize('ANSI '), 7, 0), overlayUTF8('Spark SQL和CH', materialize('ANSI '), 7, 0);
SELECT overlay('Spark SQL', 'ANSI ', materialize(7), 0), overlayUTF8('Spark SQL和CH', 'ANSI ', materialize(7), 0);
SELECT overlay('Spark SQL', 'ANSI ', 7, materialize(0)), overlayUTF8('Spark SQL和CH', 'ANSI ', 7, materialize(0));
SELECT overlay(materialize('Spark SQL'), materialize('ANSI '), 7, 0), overlayUTF8(materialize('Spark SQL和CH'), materialize('ANSI '), 7, 0);
SELECT overlay(materialize('Spark SQL'), 'ANSI ', materialize(7), 0), overlayUTF8(materialize('Spark SQL和CH'), 'ANSI ', materialize(7), 0);
SELECT overlay(materialize('Spark SQL'), 'ANSI ', 7, materialize(0)), overlayUTF8(materialize('Spark SQL和CH'), 'ANSI ', 7, materialize(0));
SELECT overlay('Spark SQL', materialize('ANSI '), materialize(7), 0), overlayUTF8('Spark SQL和CH', materialize('ANSI '), materialize(7), 0);
SELECT overlay('Spark SQL', materialize('ANSI '), 7, materialize(0)), overlayUTF8('Spark SQL和CH', materialize('ANSI '), 7, materialize(0));
SELECT overlay('Spark SQL', 'ANSI ', materialize(7), materialize(0)), overlayUTF8('Spark SQL和CH', 'ANSI ', materialize(7), materialize(0));
SELECT overlay(materialize('Spark SQL'), materialize('ANSI '), materialize(7), 0), overlayUTF8(materialize('Spark SQL和CH'), materialize('ANSI '), materialize(7), 0);
SELECT overlay(materialize('Spark SQL'), materialize('ANSI '), 7, materialize(0)), overlayUTF8(materialize('Spark SQL和CH'), materialize('ANSI '), 7, materialize(0));
SELECT overlay(materialize('Spark SQL'), 'ANSI ', materialize(7), materialize(0)), overlayUTF8(materialize('Spark SQL和CH'), 'ANSI ', materialize(7), materialize(0));
SELECT overlay('Spark SQL', materialize('ANSI '), materialize(7), materialize(0)), overlayUTF8('Spark SQL和CH', materialize('ANSI '), materialize(7), materialize(0));
SELECT overlay(materialize('Spark SQL'), materialize('ANSI '), materialize(7), materialize(0)), overlayUTF8(materialize('Spark SQL和CH'), materialize('ANSI '), materialize(7), materialize(0));
SELECT 'Test with special offset values';
WITH number - 12 AS offset SELECT offset, overlay('Spark SQL', '__', offset), overlayUTF8('Spark SQL和CH', '之', offset) FROM numbers(26);
SELECT 'Test with special length values';
WITH number - 1 AS length SELECT length, overlay('Spark SQL', 'ANSI ', 7, length), overlayUTF8('Spark SQL和CH', 'ANSI ', 7, length) FROM numbers(8);
SELECT 'Test with special input and replace values';
SELECT overlay('', '_', 6), overlayUTF8('', '_', 6);
SELECT overlay('Spark SQL', '', 6), overlayUTF8('Spark SQL和CH', '', 6);
SELECT overlay('', 'ANSI ', 7, 0), overlayUTF8('', 'ANSI ', 7, 0);
SELECT overlay('Spark SQL', '', 7, 0), overlayUTF8('Spark SQL和CH', '', 7, 0);

View File

@ -0,0 +1,99 @@
Comparing nanoseconds
1
1
1
1
1
1
1
1
1
1
0
0
0
0
0
0
0
0
0
0
Comparing microseconds
1
1
1
1
1
1
1
0
0
0
0
0
0
0
Comparing milliseconds
1
1
1
1
1
1
0
0
0
0
0
0
Comparing seconds
1
1
1
1
1
0
0
0
0
0
Comparing minutes
1
1
1
1
0
0
0
0
Comparing hours
1
1
1
0
0
0
Comparing days
1
1
0
0
Comparing weeks
1
0
Comparing months
1
1
1
0
0
0
Comparing quarters
1
1
0
0
Comparing years
1
0

View File

@ -0,0 +1,142 @@
SELECT('Comparing nanoseconds');
SELECT INTERVAL 500 NANOSECOND > INTERVAL 300 NANOSECOND;
SELECT INTERVAL 1000 NANOSECOND < INTERVAL 1500 NANOSECOND;
SELECT INTERVAL 2000 NANOSECOND = INTERVAL 2000 NANOSECOND;
SELECT INTERVAL 1000 NANOSECOND >= INTERVAL 1 MICROSECOND;
SELECT INTERVAL 1000001 NANOSECOND > INTERVAL 1 MILLISECOND;
SELECT INTERVAL 2000000001 NANOSECOND > INTERVAL 2 SECOND;
SELECT INTERVAL 60000000000 NANOSECOND = INTERVAL 1 MINUTE;
SELECT INTERVAL 7199999999999 NANOSECOND < INTERVAL 2 HOUR;
SELECT INTERVAL 1 NANOSECOND < INTERVAL 2 DAY;
SELECT INTERVAL 5 NANOSECOND < INTERVAL 1 WEEK;
SELECT INTERVAL 500 NANOSECOND < INTERVAL 300 NANOSECOND;
SELECT INTERVAL 1000 NANOSECOND > INTERVAL 1500 NANOSECOND;
SELECT INTERVAL 2000 NANOSECOND != INTERVAL 2000 NANOSECOND;
SELECT INTERVAL 1000 NANOSECOND < INTERVAL 1 MICROSECOND;
SELECT INTERVAL 1000001 NANOSECOND < INTERVAL 1 MILLISECOND;
SELECT INTERVAL 2000000001 NANOSECOND < INTERVAL 2 SECOND;
SELECT INTERVAL 60000000000 NANOSECOND != INTERVAL 1 MINUTE;
SELECT INTERVAL 7199999999999 NANOSECOND > INTERVAL 2 HOUR;
SELECT INTERVAL 1 NANOSECOND > INTERVAL 2 DAY;
SELECT INTERVAL 5 NANOSECOND > INTERVAL 1 WEEK;
SELECT INTERVAL 1 NANOSECOND < INTERVAL 2 MONTH; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing microseconds');
SELECT INTERVAL 1 MICROSECOND < INTERVAL 999 MICROSECOND;
SELECT INTERVAL 1001 MICROSECOND > INTERVAL 1 MILLISECOND;
SELECT INTERVAL 2000000 MICROSECOND = INTERVAL 2 SECOND;
SELECT INTERVAL 179999999 MICROSECOND < INTERVAL 3 MINUTE;
SELECT INTERVAL 3600000000 MICROSECOND = INTERVAL 1 HOUR;
SELECT INTERVAL 36000000000000 MICROSECOND > INTERVAL 2 DAY;
SELECT INTERVAL 1209600000000 MICROSECOND = INTERVAL 2 WEEK;
SELECT INTERVAL 1 MICROSECOND > INTERVAL 999 MICROSECOND;
SELECT INTERVAL 1001 MICROSECOND < INTERVAL 1 MILLISECOND;
SELECT INTERVAL 2000000 MICROSECOND != INTERVAL 2 SECOND;
SELECT INTERVAL 179999999 MICROSECOND > INTERVAL 3 MINUTE;
SELECT INTERVAL 3600000000 MICROSECOND != INTERVAL 1 HOUR;
SELECT INTERVAL 36000000000000 MICROSECOND < INTERVAL 2 DAY;
SELECT INTERVAL 1209600000000 MICROSECOND != INTERVAL 2 WEEK;
SELECT INTERVAL 36000000000000 MICROSECOND < INTERVAL 1 QUARTER; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing milliseconds');
SELECT INTERVAL 2000 MILLISECOND > INTERVAL 2 MILLISECOND;
SELECT INTERVAL 2000 MILLISECOND = INTERVAL 2 SECOND;
SELECT INTERVAL 170000 MILLISECOND < INTERVAL 3 MINUTE;
SELECT INTERVAL 144000001 MILLISECOND > INTERVAL 40 HOUR;
SELECT INTERVAL 1728000000 MILLISECOND = INTERVAL 20 DAY;
SELECT INTERVAL 1198599999 MILLISECOND < INTERVAL 2 WEEK;
SELECT INTERVAL 2000 MILLISECOND < INTERVAL 2 MILLISECOND;
SELECT INTERVAL 2000 MILLISECOND != INTERVAL 2 SECOND;
SELECT INTERVAL 170000 MILLISECOND > INTERVAL 3 MINUTE;
SELECT INTERVAL 144000001 MILLISECOND < INTERVAL 40 HOUR;
SELECT INTERVAL 1728000000 MILLISECOND != INTERVAL 20 DAY;
SELECT INTERVAL 1198599999 MILLISECOND > INTERVAL 2 WEEK;
SELECT INTERVAL 36000000000000 MILLISECOND < INTERVAL 1 YEAR; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing seconds');
SELECT INTERVAL 120 SECOND > INTERVAL 2 SECOND;
SELECT INTERVAL 120 SECOND = INTERVAL 2 MINUTE;
SELECT INTERVAL 1 SECOND < INTERVAL 2 HOUR;
SELECT INTERVAL 86401 SECOND >= INTERVAL 1 DAY;
SELECT INTERVAL 1209600 SECOND = INTERVAL 2 WEEK;
SELECT INTERVAL 120 SECOND < INTERVAL 2 SECOND;
SELECT INTERVAL 120 SECOND != INTERVAL 2 MINUTE;
SELECT INTERVAL 1 SECOND > INTERVAL 2 HOUR;
SELECT INTERVAL 86401 SECOND < INTERVAL 1 DAY;
SELECT INTERVAL 1209600 SECOND != INTERVAL 2 WEEK;
SELECT INTERVAL 36000000000000 SECOND < INTERVAL 1 MONTH; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing minutes');
SELECT INTERVAL 1 MINUTE < INTERVAL 59 MINUTE;
SELECT INTERVAL 1 MINUTE < INTERVAL 59 HOUR;
SELECT INTERVAL 1440 MINUTE = INTERVAL 1 DAY;
SELECT INTERVAL 30241 MINUTE > INTERVAL 3 WEEK;
SELECT INTERVAL 1 MINUTE > INTERVAL 59 MINUTE;
SELECT INTERVAL 1 MINUTE > INTERVAL 59 HOUR;
SELECT INTERVAL 1440 MINUTE != INTERVAL 1 DAY;
SELECT INTERVAL 30241 MINUTE < INTERVAL 3 WEEK;
SELECT INTERVAL 2 MINUTE = INTERVAL 120 QUARTER; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing hours');
SELECT INTERVAL 48 HOUR > INTERVAL 2 HOUR;
SELECT INTERVAL 48 HOUR >= INTERVAL 2 DAY;
SELECT INTERVAL 672 HOUR = INTERVAL 4 WEEK;
SELECT INTERVAL 48 HOUR < INTERVAL 2 HOUR;
SELECT INTERVAL 48 HOUR < INTERVAL 2 DAY;
SELECT INTERVAL 672 HOUR != INTERVAL 4 WEEK;
SELECT INTERVAL 2 HOUR < INTERVAL 1 YEAR; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing days');
SELECT INTERVAL 1 DAY < INTERVAL 23 DAY;
SELECT INTERVAL 25 DAY > INTERVAL 3 WEEK;
SELECT INTERVAL 1 DAY > INTERVAL 23 DAY;
SELECT INTERVAL 25 DAY < INTERVAL 3 WEEK;
SELECT INTERVAL 2 DAY = INTERVAL 48 MONTH; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing weeks');
SELECT INTERVAL 1 WEEK < INTERVAL 6 WEEK;
SELECT INTERVAL 1 WEEK > INTERVAL 6 WEEK;
SELECT INTERVAL 124 WEEK > INTERVAL 8 QUARTER; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing months');
SELECT INTERVAL 1 MONTH < INTERVAL 3 MONTH;
SELECT INTERVAL 124 MONTH > INTERVAL 5 QUARTER;
SELECT INTERVAL 36 MONTH = INTERVAL 3 YEAR;
SELECT INTERVAL 1 MONTH > INTERVAL 3 MONTH;
SELECT INTERVAL 124 MONTH < INTERVAL 5 QUARTER;
SELECT INTERVAL 36 MONTH != INTERVAL 3 YEAR;
SELECT INTERVAL 6 MONTH = INTERVAL 26 MICROSECOND; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing quarters');
SELECT INTERVAL 5 QUARTER > INTERVAL 4 QUARTER;
SELECT INTERVAL 20 QUARTER = INTERVAL 5 YEAR;
SELECT INTERVAL 5 QUARTER < INTERVAL 4 QUARTER;
SELECT INTERVAL 20 QUARTER != INTERVAL 5 YEAR;
SELECT INTERVAL 2 QUARTER = INTERVAL 6 NANOSECOND; -- { serverError NO_COMMON_TYPE }
SELECT('Comparing years');
SELECT INTERVAL 1 YEAR < INTERVAL 3 YEAR;
SELECT INTERVAL 1 YEAR > INTERVAL 3 YEAR;
SELECT INTERVAL 2 YEAR = INTERVAL 8 SECOND; -- { serverError NO_COMMON_TYPE }

View File

@ -0,0 +1,15 @@
\N
\N
\N
\N
\N
str_0
str_1
str_2
str_3
str_4
\N
\N
\N
\N
\N

View File

@ -0,0 +1,9 @@
set allow_experimental_json_type=1;
drop table if exists test;
create table test (json JSON) engine=Memory;
insert into test select toJSONString(map('a', 'str_' || number)) from numbers(5);
select json.a.String from test;
select json.a.:String from test;
select json.a.UInt64 from test;
drop table test;

View File

@ -0,0 +1,8 @@
1 t_local_1
2 t_local_2
1 t_local_1
2 t_local_2
1 1
2 1
1 1
2 1

View File

@ -0,0 +1,24 @@
DROP TABLE IF EXISTS t_local_1;
DROP TABLE IF EXISTS t_local_2;
DROP TABLE IF EXISTS t_merge;
DROP TABLE IF EXISTS t_distr;
CREATE TABLE t_local_1 (a UInt32) ENGINE = MergeTree ORDER BY a;
CREATE TABLE t_local_2 (a UInt32) ENGINE = MergeTree ORDER BY a;
INSERT INTO t_local_1 VALUES (1);
INSERT INTO t_local_2 VALUES (2);
CREATE TABLE t_merge AS t_local_1 ENGINE = Merge(currentDatabase(), '^(t_local_1|t_local_2)$');
CREATE TABLE t_distr AS t_local_1 engine=Distributed('test_shard_localhost', currentDatabase(), t_merge, rand());
SELECT a, _table FROM t_merge ORDER BY a;
SELECT a, _table FROM t_distr ORDER BY a;
SELECT a, _database = currentDatabase() FROM t_merge ORDER BY a;
SELECT a, _database = currentDatabase() FROM t_distr ORDER BY a;
DROP TABLE IF EXISTS t_local_1;
DROP TABLE IF EXISTS t_local_2;
DROP TABLE IF EXISTS t_merge;
DROP TABLE IF EXISTS t_distr;

View File

@ -2210,6 +2210,7 @@ outfile
overcommit overcommit
overcommitted overcommitted
overfitting overfitting
overlayUTF
overparallelization overparallelization
packetpool packetpool
packetsize packetsize

View File

@ -1,3 +1,4 @@
v24.8.2.3-lts 2024-08-22
v24.8.1.2684-lts 2024-08-21 v24.8.1.2684-lts 2024-08-21
v24.7.3.42-stable 2024-08-08 v24.7.3.42-stable 2024-08-08
v24.7.2.13-stable 2024-08-01 v24.7.2.13-stable 2024-08-01
@ -5,6 +6,7 @@ v24.7.1.2915-stable 2024-07-30
v24.6.3.95-stable 2024-08-06 v24.6.3.95-stable 2024-08-06
v24.6.2.17-stable 2024-07-05 v24.6.2.17-stable 2024-07-05
v24.6.1.4423-stable 2024-07-01 v24.6.1.4423-stable 2024-07-01
v24.5.6.45-stable 2024-08-23
v24.5.5.78-stable 2024-08-05 v24.5.5.78-stable 2024-08-05
v24.5.4.49-stable 2024-07-01 v24.5.4.49-stable 2024-07-01
v24.5.3.5-stable 2024-06-13 v24.5.3.5-stable 2024-06-13
@ -14,6 +16,7 @@ v24.4.4.113-stable 2024-08-02
v24.4.3.25-stable 2024-06-14 v24.4.3.25-stable 2024-06-14
v24.4.2.141-stable 2024-06-07 v24.4.2.141-stable 2024-06-07
v24.4.1.2088-stable 2024-05-01 v24.4.1.2088-stable 2024-05-01
v24.3.9.5-lts 2024-08-22
v24.3.8.13-lts 2024-08-20 v24.3.8.13-lts 2024-08-20
v24.3.7.30-lts 2024-08-14 v24.3.7.30-lts 2024-08-14
v24.3.6.48-lts 2024-08-02 v24.3.6.48-lts 2024-08-02
