Merge branch 'master' into fix-jit-aggregation

Alexey Milovidov 2023-04-27 00:46:45 +03:00 committed by GitHub
commit a34f94833c
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
30 changed files with 492 additions and 130 deletions


@ -24,19 +24,19 @@
* Added a `lost_part_count` column to the `system.replicas` table. The column value shows the total number of lost parts in the corresponding table. The value is stored in ZooKeeper and can be used instead of the non-persistent `ReplicatedDataLoss` profile event for monitoring. [#48526](https://github.com/ClickHouse/ClickHouse/pull/48526) ([Sergei Trifonov](https://github.com/serxa)).
* Add `soundex` function for compatibility. Closes [#39880](https://github.com/ClickHouse/ClickHouse/issues/39880). [#48567](https://github.com/ClickHouse/ClickHouse/pull/48567) ([FriendLey](https://github.com/FriendLey)).
* Support `Map` type for JSONExtract. [#48629](https://github.com/ClickHouse/ClickHouse/pull/48629) ([李扬](https://github.com/taiyang-li)).
* Add `PrettyJSONEachRow` format to output pretty JSON with new line delimieters and 4 space indents. [#48898](https://github.com/ClickHouse/ClickHouse/pull/48898) ([Kruglov Pavel](https://github.com/Avogar)).
* Add `PrettyJSONEachRow` format to output pretty JSON with new line delimiters and 4 space indents. [#48898](https://github.com/ClickHouse/ClickHouse/pull/48898) ([Kruglov Pavel](https://github.com/Avogar)).
* Add `ParquetMetadata` input format to read Parquet file metadata. [#48911](https://github.com/ClickHouse/ClickHouse/pull/48911) ([Kruglov Pavel](https://github.com/Avogar)).
* Add `extractKeyValuePairs` function to extract key value pairs from strings. Input strings might contain noise (i.e log files / do not need to be 100% formatted in key-value-pair format), the algorithm will look for key value pairs matching the arguments passed to the function. As of now, function accepts the following arguments: `data_column` (mandatory), `key_value_pair_delimiter` (defaults to `:`), `pair_delimiters` (defaults to `\space \, \;`) and `quoting_character` (defaults to double quotes). [#43606](https://github.com/ClickHouse/ClickHouse/pull/43606) ([Arthur Passos](https://github.com/arthurpassos)).
* Add `extractKeyValuePairs` function to extract key value pairs from strings. Input strings might contain noise (i.e. log files / do not need to be 100% formatted in key-value-pair format), the algorithm will look for key value pairs matching the arguments passed to the function. As of now, function accepts the following arguments: `data_column` (mandatory), `key_value_pair_delimiter` (defaults to `:`), `pair_delimiters` (defaults to `\space \, \;`) and `quoting_character` (defaults to double quotes). [#43606](https://github.com/ClickHouse/ClickHouse/pull/43606) ([Arthur Passos](https://github.com/arthurpassos)).
* Functions replaceOne(), replaceAll(), replaceRegexpOne() and replaceRegexpAll() can now be called with non-const pattern and replacement arguments. [#46589](https://github.com/ClickHouse/ClickHouse/pull/46589) ([Robert Schulze](https://github.com/rschu1ze)).
* Added functions to work with columns of type `Map`: `mapConcat`, `mapSort`, `mapExists`. [#48071](https://github.com/ClickHouse/ClickHouse/pull/48071) ([Anton Popov](https://github.com/CurtizJ)).
#### Performance Improvement
* Reading files in `Parquet` format is now much faster. IO and decoding are parallelized (controlled by `max_threads` setting), and only required data ranges are read. [#47964](https://github.com/ClickHouse/ClickHouse/pull/47964) ([Michael Kolupaev](https://github.com/al13n321)).
* If we run a mutation with IN (subquery) like this: `ALTER TABLE t UPDATE col='new value' WHERE id IN (SELECT id FROM huge_table)` and the table `t` has multiple parts, then for each part a set for subquery `SELECT id FROM huge_table` is built in memory. And if there are many parts then this might consume a lot of memory (and lead to an OOM) and CPU. The solution is to introduce a short-lived cache of sets that are currently being built by mutation tasks. If another task of the same mutation is executed concurrently it can lookup the set in the cache, wait for it to be built and reuse it. [#46835](https://github.com/ClickHouse/ClickHouse/pull/46835) ([Alexander Gololobov](https://github.com/davenger)).
* If we run a mutation with IN (subquery) like this: `ALTER TABLE t UPDATE col='new value' WHERE id IN (SELECT id FROM huge_table)` and the table `t` has multiple parts, then for each part a set for subquery `SELECT id FROM huge_table` is built in memory. And if there are many parts then this might consume a lot of memory (and lead to an OOM) and CPU. The solution is to introduce a short-lived cache of sets that are currently being built by mutation tasks. If another task of the same mutation is executed concurrently it can look up the set in the cache, wait for it to be built and reuse it. [#46835](https://github.com/ClickHouse/ClickHouse/pull/46835) ([Alexander Gololobov](https://github.com/davenger)).
* Only check dependencies if necessary when applying `ALTER TABLE` queries. [#48062](https://github.com/ClickHouse/ClickHouse/pull/48062) ([Raúl Marín](https://github.com/Algunenano)).
* Optimize function `mapUpdate`. [#48118](https://github.com/ClickHouse/ClickHouse/pull/48118) ([Anton Popov](https://github.com/CurtizJ)).
* Now an internal query to local replica is sent explicitly and data from it received through loopback interface. Setting `prefer_localhost_replica` is not respected for parallel replicas. This is needed for better scheduling and makes the code cleaner: the initiator is only responsible for coordinating of the reading process and merging results, continiously answering for requests while all the secondary queries read the data. Note: Using loopback interface is not so performant, otherwise some replicas could starve for tasks which could lead to even slower query execution and not utilizing all possible resources. The initialization of the coordinator is now even more lazy. All incoming requests contain the information about the reading algorithm we initialize the coordinator with it when first request comes. If any replica will decide to read with different algorithm - an exception will be thrown and a query will be aborted. [#48246](https://github.com/ClickHouse/ClickHouse/pull/48246) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Do not build set for the right side of `IN` clause with subquery when it is used only for analysis of skip indexes and they are disabled by setting (`use_skip_indexes=0`). Previously it might affect the performance of queries. [#48299](https://github.com/ClickHouse/ClickHouse/pull/48299) ([Anton Popov](https://github.com/CurtizJ)).
* Now an internal query to local replica is sent explicitly and data from it received through loopback interface. Setting `prefer_localhost_replica` is not respected for parallel replicas. This is needed for better scheduling and makes the code cleaner: the initiator is only responsible for coordinating of the reading process and merging results, continuously answering for requests while all the secondary queries read the data. Note: Using loopback interface is not so performant, otherwise some replicas could starve for tasks which could lead to even slower query execution and not utilizing all possible resources. The initialization of the coordinator is now even more lazy. All incoming requests contain the information about the reading algorithm we initialize the coordinator with it when first request comes. If any replica decides to read with a different algorithm - an exception will be thrown and a query will be aborted. [#48246](https://github.com/ClickHouse/ClickHouse/pull/48246) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Do not build set for the right side of `IN` clause with subquery when it is used only for analysis of skip indexes, and they are disabled by setting (`use_skip_indexes=0`). Previously it might affect the performance of queries. [#48299](https://github.com/ClickHouse/ClickHouse/pull/48299) ([Anton Popov](https://github.com/CurtizJ)).
* Query processing is parallelized right after reading `FROM file(...)`. Related to [#38755](https://github.com/ClickHouse/ClickHouse/issues/38755). [#48525](https://github.com/ClickHouse/ClickHouse/pull/48525) ([Igor Nikonov](https://github.com/devcrafter)).
* Query processing is parallelized right after reading from a data source. Affected data sources are mostly simple or external storages like table functions `url`, `file`. [#48727](https://github.com/ClickHouse/ClickHouse/pull/48727) ([Igor Nikonov](https://github.com/devcrafter)).
* Lowered contention of ThreadPool mutex (may increase performance for a huge amount of small jobs). [#48750](https://github.com/ClickHouse/ClickHouse/pull/48750) ([Sergei Trifonov](https://github.com/serxa)).
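As a quick illustration of the `extractKeyValuePairs` and `Map`-function entries above, a minimal sketch (the input string, aliases, and outputs shown as comments are invented for the example):

``` sql
-- Default delimiters: ':' between key and value; space, ',' and ';' between pairs.
SELECT extractKeyValuePairs('name:neymar, age:31 team:psg') AS kv;
-- {'name':'neymar','age':'31','team':'psg'}

-- One of the new Map functions from the same release:
SELECT mapConcat(map('a', 1), map('b', 2)) AS m;
-- {'a':1,'b':2}
```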
@ -58,11 +58,11 @@
* The `bitCount` function supports `FixedString` and `String` data types. [#49044](https://github.com/ClickHouse/ClickHouse/pull/49044) ([flynn](https://github.com/ucasfl)).
* Added configurable retries for all operations with [Zoo]Keeper for Backup queries. [#47224](https://github.com/ClickHouse/ClickHouse/pull/47224) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Enable `use_environment_credentials` for S3 by default, so the entire provider chain is constructed by default. [#47397](https://github.com/ClickHouse/ClickHouse/pull/47397) ([Antonio Andelic](https://github.com/antonio2368)).
* Currently, the JSON_VALUE function is similar as spark's get_json_object function, which support to get value from json string by a path like '$.key'. But still has something different - 1. in spark's get_json_object will return null while the path is not exist, but in JSON_VALUE will return empty string; - 2. in spark's get_json_object will return a complext type value, such as a json object/array value, but in JSON_VALUE will return empty string. [#47494](https://github.com/ClickHouse/ClickHouse/pull/47494) ([KevinyhZou](https://github.com/KevinyhZou)).
* Currently, the JSON_VALUE function is similar as spark's get_json_object function, which support to get value from JSON string by a path like '$.key'. But still has something different - 1. in spark's get_json_object will return null while the path is not exist, but in JSON_VALUE will return empty string; - 2. in spark's get_json_object will return a complex type value, such as a JSON object/array value, but in JSON_VALUE will return empty string. [#47494](https://github.com/ClickHouse/ClickHouse/pull/47494) ([KevinyhZou](https://github.com/KevinyhZou)).
* `use_structure_from_insertion_table_in_table_functions` now propagates the insert table structure to table functions more flexibly. Fixed an issue with name mapping and using virtual columns. No more need for the 'auto' setting. [#47962](https://github.com/ClickHouse/ClickHouse/pull/47962) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Do not continue retrying to connect to ZK if the query is killed or over limits. [#47985](https://github.com/ClickHouse/ClickHouse/pull/47985) ([Raúl Marín](https://github.com/Algunenano)).
* Do not continue retrying to connect to Keeper if the query is killed or over limits. [#47985](https://github.com/ClickHouse/ClickHouse/pull/47985) ([Raúl Marín](https://github.com/Algunenano)).
* Support Enum output/input in `BSONEachRow`, allow all map key types and avoid extra calculations on output. [#48122](https://github.com/ClickHouse/ClickHouse/pull/48122) ([Kruglov Pavel](https://github.com/Avogar)).
* Support more ClickHouse types in `ORC`/`Arrow`/`Parquet` formats: Enum(8|16), (U)Int(128|256), Decimal256 (for ORC), allow reading IPv4 from Int32 values (ORC outputs IPv4 as Int32 and we couldn't read it back), fix reading Nullable(IPv6) from binary data for `ORC`. [#48126](https://github.com/ClickHouse/ClickHouse/pull/48126) ([Kruglov Pavel](https://github.com/Avogar)).
* Support more ClickHouse types in `ORC`/`Arrow`/`Parquet` formats: Enum(8|16), (U)Int(128|256), Decimal256 (for ORC), allow reading IPv4 from Int32 values (ORC outputs IPv4 as Int32, and we couldn't read it back), fix reading Nullable(IPv6) from binary data for `ORC`. [#48126](https://github.com/ClickHouse/ClickHouse/pull/48126) ([Kruglov Pavel](https://github.com/Avogar)).
* Add columns `perform_ttl_move_on_insert`, `load_balancing` for table `system.storage_policies`, change the type of column `volume_type` to `Enum8`. [#48167](https://github.com/ClickHouse/ClickHouse/pull/48167) ([lizhuoyu5](https://github.com/lzydmxy)).
* Added support for the `BACKUP ALL` command which backs up all tables and databases, including temporary and system ones. [#48189](https://github.com/ClickHouse/ClickHouse/pull/48189) ([Vitaly Baranov](https://github.com/vitlibar)).
* Function mapFromArrays supports `Map` type as an input. [#48207](https://github.com/ClickHouse/ClickHouse/pull/48207) ([李扬](https://github.com/taiyang-li)).
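A hedged sketch of the `JSON_VALUE` behavior described above (JSON documents and paths are illustrative): both a missing path and a complex value yield an empty string, unlike Spark's `get_json_object`:

``` sql
SELECT JSON_VALUE('{"a": {"b": 1}, "s": "hello"}', '$.s');  -- 'hello'
SELECT JSON_VALUE('{"a": {"b": 1}}', '$.missing');          -- '' (Spark: NULL)
SELECT JSON_VALUE('{"a": {"b": 1}}', '$.a');                -- '' (Spark: '{"b":1}')
```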
@ -73,7 +73,7 @@
* Add new setting `keeper_map_strict_mode` which enforces extra guarantees on operations made on top of `KeeperMap` tables. [#48293](https://github.com/ClickHouse/ClickHouse/pull/48293) ([Antonio Andelic](https://github.com/antonio2368)).
* Check that the primary key type for a simple dictionary is a native unsigned integer type. Add setting `check_dictionary_primary_key` for compatibility (set `check_dictionary_primary_key = false` to disable checking). [#48335](https://github.com/ClickHouse/ClickHouse/pull/48335) ([lizhuoyu5](https://github.com/lzydmxy)).
* Don't replicate mutations for `KeeperMap` because it's unnecessary. [#48354](https://github.com/ClickHouse/ClickHouse/pull/48354) ([Antonio Andelic](https://github.com/antonio2368)).
* Allow write/read unnamed tuple as nested Message in Protobuf format. Tuple elements and Message fields are mathced by position. [#48390](https://github.com/ClickHouse/ClickHouse/pull/48390) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow to write/read unnamed tuple as nested Message in Protobuf format. Tuple elements and Message fields are matched by position. [#48390](https://github.com/ClickHouse/ClickHouse/pull/48390) ([Kruglov Pavel](https://github.com/Avogar)).
* Support `additional_table_filters` and `additional_result_filter` settings in the new planner. Also, add a documentation entry for `additional_result_filter`. [#48405](https://github.com/ClickHouse/ClickHouse/pull/48405) ([Dmitry Novik](https://github.com/novikd)).
* `parseDateTime` now understands format string '%f' (fractional seconds). [#48420](https://github.com/ClickHouse/ClickHouse/pull/48420) ([Robert Schulze](https://github.com/rschu1ze)).
* Format string "%f" in formatDateTime() now prints "000000" if the formatted value has no fractional seconds, the previous behavior (single zero) can be restored using setting "formatdatetime_f_prints_single_zero = 1". [#48422](https://github.com/ClickHouse/ClickHouse/pull/48422) ([Robert Schulze](https://github.com/rschu1ze)).
@ -103,7 +103,7 @@
* Add fallback to password authentication when authentication with SSL user certificate has failed. Closes [#48974](https://github.com/ClickHouse/ClickHouse/issues/48974). [#48989](https://github.com/ClickHouse/ClickHouse/pull/48989) ([Nikolay Degterinsky](https://github.com/evillique)).
* Improve the embedded dashboard. Close [#46671](https://github.com/ClickHouse/ClickHouse/issues/46671). [#49036](https://github.com/ClickHouse/ClickHouse/pull/49036) ([Kevin Zhang](https://github.com/Kinzeng)).
* Add profile events for log messages, so you can easily see the count of log messages by severity. [#49042](https://github.com/ClickHouse/ClickHouse/pull/49042) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* In previous versions, the `LineAsString` format worked inconsistently when the parallel parsing was enabled or not, in presence of DOS or MacOS Classic line breaks. This closes [#49039](https://github.com/ClickHouse/ClickHouse/issues/49039). [#49052](https://github.com/ClickHouse/ClickHouse/pull/49052) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* In previous versions, the `LineAsString` format worked inconsistently when the parallel parsing was enabled or not, in presence of DOS or macOS Classic line breaks. This closes [#49039](https://github.com/ClickHouse/ClickHouse/issues/49039). [#49052](https://github.com/ClickHouse/ClickHouse/pull/49052) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The exception message about the unparsed query parameter will also tell about the name of the parameter. Reimplement [#48878](https://github.com/ClickHouse/ClickHouse/issues/48878). Close [#48772](https://github.com/ClickHouse/ClickHouse/issues/48772). [#49061](https://github.com/ClickHouse/ClickHouse/pull/49061) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### Build/Testing/Packaging Improvement
@ -121,12 +121,12 @@
* Fix system.query_views_log for MVs that are pushed from background threads [#46668](https://github.com/ClickHouse/ClickHouse/pull/46668) ([Azat Khuzhin](https://github.com/azat)).
* Fix several `RENAME COLUMN` bugs [#46946](https://github.com/ClickHouse/ClickHouse/pull/46946) ([alesapin](https://github.com/alesapin)).
* Fix minor highlighting issues in clickhouse-format [#47610](https://github.com/ClickHouse/ClickHouse/pull/47610) ([Natasha Murashkina](https://github.com/murfel)).
* Fix a bug in LLVM's libc++ leading to a crash for uploading parts to S3 which size is greater then INT_MAX [#47693](https://github.com/ClickHouse/ClickHouse/pull/47693) ([Azat Khuzhin](https://github.com/azat)).
* Fix a bug in LLVM's libc++ leading to a crash for uploading parts to S3 which size is greater than INT_MAX [#47693](https://github.com/ClickHouse/ClickHouse/pull/47693) ([Azat Khuzhin](https://github.com/azat)).
* Fix overflow in the `sparkbar` function [#48121](https://github.com/ClickHouse/ClickHouse/pull/48121) ([Vladimir C](https://github.com/vdimir)).
* Fix race in S3 [#48190](https://github.com/ClickHouse/ClickHouse/pull/48190) ([Anton Popov](https://github.com/CurtizJ)).
* Disable JIT for aggregate functions due to inconsistent behavior [#48195](https://github.com/ClickHouse/ClickHouse/pull/48195) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix alter formatting (minor) [#48289](https://github.com/ClickHouse/ClickHouse/pull/48289) ([Natasha Murashkina](https://github.com/murfel)).
* Fix cpu usage in RabbitMQ (was worsened in 23.2 after [#44404](https://github.com/ClickHouse/ClickHouse/issues/44404)) [#48311](https://github.com/ClickHouse/ClickHouse/pull/48311) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix CPU usage in RabbitMQ (was worsened in 23.2 after [#44404](https://github.com/ClickHouse/ClickHouse/issues/44404)) [#48311](https://github.com/ClickHouse/ClickHouse/pull/48311) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix crash in EXPLAIN PIPELINE for Merge over Distributed [#48320](https://github.com/ClickHouse/ClickHouse/pull/48320) ([Azat Khuzhin](https://github.com/azat)).
* Fix serializing LowCardinality as Arrow dictionary [#48361](https://github.com/ClickHouse/ClickHouse/pull/48361) ([Kruglov Pavel](https://github.com/Avogar)).
* Reset downloader for cache file segment in TemporaryFileStream [#48386](https://github.com/ClickHouse/ClickHouse/pull/48386) ([Vladimir C](https://github.com/vdimir)).
@ -155,7 +155,6 @@
* Fix `UNKNOWN_IDENTIFIER` error while selecting from table with row policy and column with dots [#48976](https://github.com/ClickHouse/ClickHouse/pull/48976) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix aggregation by empty nullable strings [#48999](https://github.com/ClickHouse/ClickHouse/pull/48999) ([LiuNeng](https://github.com/liuneng1994)).
### <a id="233"></a> ClickHouse release 23.3 LTS, 2023-03-30
#### Upgrade Notes

contrib/qpl vendored

@ -1 +1 @@
Subproject commit d75a29d95d8a548297fce3549d21020005364dc8
Subproject commit 0bce2b03423f6fbeb8bce66cc8be0bf558058848


@ -40,9 +40,10 @@ set (LOG_HW_INIT OFF)
set (SANITIZE_MEMORY OFF)
set (SANITIZE_THREADS OFF)
set (LIB_FUZZING_ENGINE OFF)
set (DYNAMIC_LOADING_LIBACCEL_CONFIG OFF)
function(GetLibraryVersion _content _outputVar)
string(REGEX MATCHALL "Qpl VERSION (.+) LANGUAGES" VERSION_REGEX "${_content}")
string(REGEX MATCHALL "QPL VERSION (.+) LANGUAGES" VERSION_REGEX "${_content}")
SET(${_outputVar} ${CMAKE_MATCH_1} PARENT_SCOPE)
endfunction()
@ -240,7 +241,9 @@ add_library(core_iaa OBJECT ${HW_PATH_SRC})
target_include_directories(core_iaa
PRIVATE ${UUID_DIR}
PUBLIC $<BUILD_INTERFACE:${QPL_SRC_DIR}/core-iaa/include>
PRIVATE $<BUILD_INTERFACE:${QPL_SRC_DIR}/core-iaa/sources/include>
PUBLIC $<BUILD_INTERFACE:${QPL_SRC_DIR}/core-iaa/sources/include>
PRIVATE $<BUILD_INTERFACE:${QPL_PROJECT_DIR}/include> # status.h in own_checkers.h
PRIVATE $<BUILD_INTERFACE:${QPL_PROJECT_DIR}/sources/c_api> # own_checkers.h
PRIVATE $<TARGET_PROPERTY:qplcore_avx512,INTERFACE_INCLUDE_DIRECTORIES>)
target_compile_options(core_iaa
@ -339,4 +342,7 @@ target_link_libraries(_qpl
PRIVATE ${CMAKE_DL_LIBS})
add_library (ch_contrib::qpl ALIAS _qpl)
target_include_directories(_qpl SYSTEM BEFORE PUBLIC "${QPL_PROJECT_DIR}/include")
target_include_directories(_qpl SYSTEM BEFORE
PUBLIC "${QPL_PROJECT_DIR}/include"
PUBLIC "${LIBACCEL_SOURCE_DIR}/accfg"
PUBLIC ${UUID_DIR})


@ -24,6 +24,90 @@ SELECT
└─────────────────────┴────────────┴────────────┴─────────────────────┘
```
## makeDate
Creates a [Date](../../sql-reference/data-types/date.md) from year, month and day arguments.
**Syntax**
``` sql
makeDate(year, month, day)
```
**Arguments**
- `year` — Year. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `month` — Month. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `day` — Day. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
**Returned value**
- A date created from the arguments.
Type: [Date](../../sql-reference/data-types/date.md).
**Example**
``` sql
SELECT makeDate(2023, 2, 28) AS Date;
```
Result:
``` text
┌───────Date─┐
│ 2023-02-28 │
└────────────┘
```
## makeDate32
Like [makeDate](#makedate) but produces a [Date32](../../sql-reference/data-types/date32.md).
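A possible usage example, by analogy with `makeDate` above (the output shown is illustrative):

``` sql
-- illustrative example
SELECT makeDate32(2023, 2, 28) AS Date32;
```

Result:

``` text
┌─────Date32─┐
│ 2023-02-28 │
└────────────┘
```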
## makeDateTime
Creates a [DateTime](../../sql-reference/data-types/datetime.md) from year, month, day, hour, minute and second arguments.
**Syntax**
``` sql
makeDateTime(year, month, day, hour, minute, second[, timezone])
```
**Arguments**
- `year` — Year. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `month` — Month. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `day` — Day. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `hour` — Hour. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `minute` — Minute. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `second` — Second. [Integer](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md) or [Decimal](../../sql-reference/data-types/decimal.md).
- `timezone` — [Timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional).
**Returned value**
- A date with time created from the arguments.
Type: [DateTime](../../sql-reference/data-types/datetime.md).
**Example**
``` sql
SELECT makeDateTime(2023, 2, 28, 17, 12, 33) AS DateTime;
```
Result:
``` text
┌────────────DateTime─┐
│ 2023-02-28 17:12:33 │
└─────────────────────┘
```
## makeDateTime64
Like [makeDateTime](#makedatetime) but produces a [DateTime64](../../sql-reference/data-types/datetime64.md).
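A possible usage example, assuming optional `fraction` and `precision` arguments (not listed above) follow `second`; the output shown is illustrative:

``` sql
-- illustrative example; 123 with precision 3 gives a .123 fractional part
SELECT makeDateTime64(2023, 2, 28, 17, 12, 33, 123, 3) AS DateTime64;
```

Result:

``` text
┌──────────────DateTime64─┐
│ 2023-02-28 17:12:33.123 │
└─────────────────────────┘
```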
## timeZone
Returns the timezone of the server.


@ -109,7 +109,7 @@ For the query to run successfully, the following conditions must be met:
- Both tables must have the same structure.
- Both tables must have the same partition key, the same order by key and the same primary key.
- Both tables must have the same storage policy (a disk where the partition is stored should be available for both tables).
- Both tables must have the same storage policy.
## REPLACE PARTITION
@ -123,7 +123,7 @@ For the query to run successfully, the following conditions must be met:
- Both tables must have the same structure.
- Both tables must have the same partition key, the same order by key and the same primary key.
- Both tables must have the same storage policy (a disk where the partition is stored should be available for both tables).
- Both tables must have the same storage policy.
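A hedged sketch of the command these conditions apply to (table names and the partition expression are placeholders):

``` sql
-- Copies partition 202304 from table1 into table2, replacing the existing data in table2.
ALTER TABLE table2 REPLACE PARTITION 202304 FROM table1;
```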
## MOVE PARTITION TO TABLE
@ -137,7 +137,7 @@ For the query to run successfully, the following conditions must be met:
- Both tables must have the same structure.
- Both tables must have the same partition key, the same order by key and the same primary key.
- Both tables must have the same storage policy (a disk where the partition is stored should be available for both tables).
- Both tables must have the same storage policy.
- Both tables must be the same engine family (replicated or non-replicated).
## CLEAR COLUMN IN PARTITION


@ -7,6 +7,7 @@
#include <Parsers/ASTIdentifier.h>
#include <Poco/Logger.h>
#include <Common/logger_useful.h>
#include "libaccel_config.h"
namespace DB
{
@ -16,11 +17,6 @@ namespace ErrorCodes
extern const int CANNOT_DECOMPRESS;
}
std::array<qpl_job *, DeflateQplJobHWPool::MAX_HW_JOB_NUMBER> DeflateQplJobHWPool::hw_job_ptr_pool;
std::array<std::atomic_bool, DeflateQplJobHWPool::MAX_HW_JOB_NUMBER> DeflateQplJobHWPool::hw_job_ptr_locks;
bool DeflateQplJobHWPool::job_pool_ready = false;
std::unique_ptr<uint8_t[]> DeflateQplJobHWPool::hw_jobs_buffer;
DeflateQplJobHWPool & DeflateQplJobHWPool::instance()
{
static DeflateQplJobHWPool pool;
@ -28,47 +24,69 @@ DeflateQplJobHWPool & DeflateQplJobHWPool::instance()
}
DeflateQplJobHWPool::DeflateQplJobHWPool()
: random_engine(std::random_device()())
, distribution(0, MAX_HW_JOB_NUMBER - 1)
: max_hw_jobs(0)
, random_engine(std::random_device()())
{
Poco::Logger * log = &Poco::Logger::get("DeflateQplJobHWPool");
UInt32 job_size = 0;
const char * qpl_version = qpl_get_library_version();
/// Get size required for saving a single qpl job object
qpl_get_job_size(qpl_path_hardware, &job_size);
/// Allocate entire buffer for storing all job objects
hw_jobs_buffer = std::make_unique<uint8_t[]>(job_size * MAX_HW_JOB_NUMBER);
/// Initialize pool for storing all job object pointers
/// Reallocate buffer by shifting address offset for each job object.
for (UInt32 index = 0; index < MAX_HW_JOB_NUMBER; ++index)
// Loop over all configured work queue sizes to get the maximum job number.
accfg_ctx * ctx_ptr = nullptr;
auto ctx_status = accfg_new(&ctx_ptr);
if (ctx_status == 0)
{
qpl_job * qpl_job_ptr = reinterpret_cast<qpl_job *>(hw_jobs_buffer.get() + index * job_size);
if (auto status = qpl_init_job(qpl_path_hardware, qpl_job_ptr); status != QPL_STS_OK)
auto * dev_ptr = accfg_device_get_first(ctx_ptr);
while (dev_ptr != nullptr)
{
for (auto * wq_ptr = accfg_wq_get_first(dev_ptr); wq_ptr != nullptr; wq_ptr = accfg_wq_get_next(wq_ptr))
max_hw_jobs += accfg_wq_get_size(wq_ptr);
dev_ptr = accfg_device_get_next(dev_ptr);
}
}
else
{
job_pool_ready = false;
LOG_WARNING(log, "Initialization of hardware-assisted DeflateQpl codec failed, falling back to software DeflateQpl codec. Failed to create new libaccel_config context -> status: {}, QPL Version: {}.", ctx_status, qpl_version);
return;
}
if (max_hw_jobs == 0)
{
job_pool_ready = false;
LOG_WARNING(log, "Initialization of hardware-assisted DeflateQpl codec failed, falling back to software DeflateQpl codec. Failed to get available workqueue size -> total_wq_size: {}, QPL Version: {}.", max_hw_jobs, qpl_version);
return;
}
distribution = std::uniform_int_distribution<int>(0, max_hw_jobs - 1);
/// Get size required for saving a single qpl job object
qpl_get_job_size(qpl_path_hardware, &per_job_size);
/// Allocate job buffer pool for storing all job objects
hw_jobs_buffer = std::make_unique<uint8_t[]>(per_job_size * max_hw_jobs);
hw_job_ptr_locks = std::make_unique<std::atomic_bool[]>(max_hw_jobs);
/// Initialize all job objects in job buffer pool
for (UInt32 index = 0; index < max_hw_jobs; ++index)
{
qpl_job * job_ptr = reinterpret_cast<qpl_job *>(hw_jobs_buffer.get() + index * per_job_size);
if (auto status = qpl_init_job(qpl_path_hardware, job_ptr); status != QPL_STS_OK)
{
job_pool_ready = false;
LOG_WARNING(log, "Initialization of hardware-assisted DeflateQpl codec failed: {} , falling back to software DeflateQpl codec. Please check if Intel In-Memory Analytics Accelerator (IAA) is properly set up. QPL Version: {}.", static_cast<UInt32>(status), qpl_version);
LOG_WARNING(log, "Initialization of hardware-assisted DeflateQpl codec failed, falling back to software DeflateQpl codec. Failed to Initialize qpl job -> status: {}, QPL Version: {}.", static_cast<UInt32>(status), qpl_version);
return;
}
hw_job_ptr_pool[index] = qpl_job_ptr;
unLockJob(index);
}
job_pool_ready = true;
LOG_DEBUG(log, "Hardware-assisted DeflateQpl codec is ready! QPL Version: {}",qpl_version);
LOG_DEBUG(log, "Hardware-assisted DeflateQpl codec is ready! QPL Version: {}, max_hw_jobs: {}",qpl_version, max_hw_jobs);
}
DeflateQplJobHWPool::~DeflateQplJobHWPool()
{
for (UInt32 i = 0; i < MAX_HW_JOB_NUMBER; ++i)
for (UInt32 i = 0; i < max_hw_jobs; ++i)
{
if (hw_job_ptr_pool[i])
{
while (!tryLockJob(i));
qpl_fini_job(hw_job_ptr_pool[i]);
unLockJob(i);
hw_job_ptr_pool[i] = nullptr;
}
qpl_job * job_ptr = reinterpret_cast<qpl_job *>(hw_jobs_buffer.get() + i * per_job_size);
while (!tryLockJob(i));
qpl_fini_job(job_ptr);
unLockJob(i);
}
job_pool_ready = false;
}
@ -83,14 +101,14 @@ qpl_job * DeflateQplJobHWPool::acquireJob(UInt32 & job_id)
{
index = distribution(random_engine);
retry++;
if (retry > MAX_HW_JOB_NUMBER)
if (retry > max_hw_jobs)
{
return nullptr;
}
}
job_id = MAX_HW_JOB_NUMBER - index;
assert(index < MAX_HW_JOB_NUMBER);
return hw_job_ptr_pool[index];
job_id = max_hw_jobs - index;
assert(index < max_hw_jobs);
return reinterpret_cast<qpl_job *>(hw_jobs_buffer.get() + index * per_job_size);
}
else
return nullptr;
@ -99,19 +117,19 @@ qpl_job * DeflateQplJobHWPool::acquireJob(UInt32 & job_id)
void DeflateQplJobHWPool::releaseJob(UInt32 job_id)
{
if (isJobPoolReady())
unLockJob(MAX_HW_JOB_NUMBER - job_id);
unLockJob(max_hw_jobs - job_id);
}
bool DeflateQplJobHWPool::tryLockJob(UInt32 index)
{
bool expected = false;
assert(index < MAX_HW_JOB_NUMBER);
assert(index < max_hw_jobs);
return hw_job_ptr_locks[index].compare_exchange_strong(expected, true);
}
void DeflateQplJobHWPool::unLockJob(UInt32 index)
{
assert(index < MAX_HW_JOB_NUMBER);
assert(index < max_hw_jobs);
hw_job_ptr_locks[index].store(false);
}


@ -24,22 +24,23 @@ public:
static DeflateQplJobHWPool & instance();
qpl_job * acquireJob(UInt32 & job_id);
static void releaseJob(UInt32 job_id);
static const bool & isJobPoolReady() { return job_pool_ready; }
void releaseJob(UInt32 job_id);
const bool & isJobPoolReady() { return job_pool_ready; }
private:
static bool tryLockJob(UInt32 index);
static void unLockJob(UInt32 index);
bool tryLockJob(UInt32 index);
void unLockJob(UInt32 index);
/// Size of each job object
UInt32 per_job_size;
/// Maximum jobs running in parallel supported by IAA hardware
static constexpr auto MAX_HW_JOB_NUMBER = 1024;
UInt32 max_hw_jobs;
/// Entire buffer for storing all job objects
static std::unique_ptr<uint8_t[]> hw_jobs_buffer;
/// Job pool for storing all job object pointers
static std::array<qpl_job *, MAX_HW_JOB_NUMBER> hw_job_ptr_pool;
std::unique_ptr<uint8_t[]> hw_jobs_buffer;
/// Locks for accessing the job object pointers
static std::array<std::atomic_bool, MAX_HW_JOB_NUMBER> hw_job_ptr_locks;
static bool job_pool_ready;
std::unique_ptr<std::atomic_bool[]> hw_job_ptr_locks;
bool job_pool_ready;
std::mt19937 random_engine;
std::uniform_int_distribution<int> distribution;
};


@ -47,7 +47,9 @@ public:
ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr &, size_t /*input_rows_count*/) const override
{
const ColumnPtr column_haystack = arguments[0].column;
ColumnPtr column_haystack = arguments[0].column;
column_haystack = column_haystack->convertToFullColumnIfConst();
const ColumnPtr column_needle = arguments[1].column;
const ColumnPtr column_replacement = arguments[2].column;


@ -50,8 +50,8 @@ public:
bool useDefaultImplementationForConstants() const override { return true; }
protected:
template <class AgrumentNames>
void checkRequiredArguments(const ColumnsWithTypeAndName & arguments, const AgrumentNames & argument_names, const size_t optional_argument_count) const
template <class ArgumentNames>
void checkRequiredArguments(const ColumnsWithTypeAndName & arguments, const ArgumentNames & argument_names, const size_t optional_argument_count) const
{
if (arguments.size() < argument_names.size() || arguments.size() > argument_names.size() + optional_argument_count)
throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
@ -67,8 +67,8 @@ protected:
}
}
template <class AgrumentNames>
void convertRequiredArguments(const ColumnsWithTypeAndName & arguments, const AgrumentNames & argument_names, Columns & converted_arguments) const
template <class ArgumentNames>
void convertRequiredArguments(const ColumnsWithTypeAndName & arguments, const ArgumentNames & argument_names, Columns & converted_arguments) const
{
const DataTypePtr converted_argument_type = std::make_shared<DataTypeFloat32>();
converted_arguments.clear();
@ -87,7 +87,7 @@ template <typename Traits>
class FunctionMakeDate : public FunctionWithNumericParamsBase
{
private:
static constexpr std::array<const char*, 3> argument_names = {"year", "month", "day"};
static constexpr std::array argument_names = {"year", "month", "day"};
public:
static constexpr auto name = Traits::name;
@ -112,7 +112,7 @@ public:
Columns converted_arguments;
convertRequiredArguments(arguments, argument_names, converted_arguments);
auto res_column = Traits::ReturnColumnType::create(input_rows_count);
auto res_column = Traits::ReturnDataType::ColumnType::create(input_rows_count);
auto & result_data = res_column->getData();
const auto & year_data = typeid_cast<const ColumnFloat32 &>(*converted_arguments[0]).getData();
@ -150,7 +150,6 @@ struct MakeDateTraits
{
static constexpr auto name = "makeDate";
using ReturnDataType = DataTypeDate;
using ReturnColumnType = ColumnDate;
static constexpr auto MIN_YEAR = 1970;
static constexpr auto MAX_YEAR = 2149;
@ -163,7 +162,6 @@ struct MakeDate32Traits
{
static constexpr auto name = "makeDate32";
using ReturnDataType = DataTypeDate32;
using ReturnColumnType = ColumnDate32;
static constexpr auto MIN_YEAR = 1900;
static constexpr auto MAX_YEAR = 2299;
@ -174,7 +172,7 @@ struct MakeDate32Traits
class FunctionMakeDateTimeBase : public FunctionWithNumericParamsBase
{
protected:
static constexpr std::array<const char*, 6> argument_names = {"year", "month", "day", "hour", "minute", "second"};
static constexpr std::array argument_names = {"year", "month", "day", "hour", "minute", "second"};
public:
bool isVariadic() const override { return true; }
@ -197,13 +195,13 @@ protected:
{
/// Note that hour, minute and second are checked against 99 to behave consistently with parsing DateTime from String
/// E.g. "select cast('1984-01-01 99:99:99' as DateTime);" returns "1984-01-05 04:40:39"
if (unlikely(std::isnan(year) || std::isnan(month) || std::isnan(day_of_month) ||
if (std::isnan(year) || std::isnan(month) || std::isnan(day_of_month) ||
std::isnan(hour) || std::isnan(minute) || std::isnan(second) ||
year < DATE_LUT_MIN_YEAR || month < 1 || month > 12 || day_of_month < 1 || day_of_month > 31 ||
hour < 0 || hour > 99 || minute < 0 || minute > 99 || second < 0 || second > 99))
hour < 0 || hour > 99 || minute < 0 || minute > 99 || second < 0 || second > 99) [[unlikely]]
return minDateTime(lut);
if (unlikely(year > DATE_LUT_MAX_YEAR))
if (year > DATE_LUT_MAX_YEAR) [[unlikely]]
return maxDateTime(lut);
return lut.makeDateTime(
@ -290,9 +288,9 @@ public:
const auto second = second_data[i];
auto date_time = dateTime(year, month, day, hour, minute, second, date_lut);
if (unlikely(date_time < 0))
if (date_time < 0) [[unlikely]]
date_time = 0;
else if (unlikely(date_time > 0x0ffffffffll))
else if (date_time > 0x0ffffffffll) [[unlikely]]
date_time = 0x0ffffffffll;
result_data[i] = static_cast<UInt32>(date_time);
@ -394,21 +392,21 @@ public:
auto date_time = dateTime(year, month, day, hour, minute, second, date_lut);
double fraction = 0;
if (unlikely(date_time == min_date_time))
if (date_time == min_date_time) [[unlikely]]
fraction = 0;
else if (unlikely(date_time == max_date_time))
else if (date_time == max_date_time) [[unlikely]]
fraction = 999999999;
else
{
fraction = fraction_data ? (*fraction_data)[i] : 0;
if (unlikely(std::isnan(fraction)))
if (std::isnan(fraction)) [[unlikely]]
{
date_time = min_date_time;
fraction = 0;
}
else if (unlikely(fraction < 0))
else if (fraction < 0) [[unlikely]]
fraction = 0;
else if (unlikely(fraction > max_fraction))
else if (fraction > max_fraction) [[unlikely]]
fraction = max_fraction;
}


@ -1154,7 +1154,7 @@ void FileCache::reduceSizeToDownloaded(
chassert(cell->queue_iterator);
chassert(cell->queue_iterator->size() >= downloaded_size);
const ssize_t diff = cell->queue_iterator->size() - downloaded_size;
const int64_t diff = cell->queue_iterator->size() - downloaded_size;
if (diff > 0)
cell->queue_iterator->updateSize(-diff, cache_lock);


@ -61,7 +61,7 @@ public:
/// the iterator should automatically point to the next record.
virtual void removeAndGetNext(std::lock_guard<std::mutex> &) = 0;
virtual void updateSize(ssize_t, std::lock_guard<std::mutex> &) = 0;
virtual void updateSize(int64_t, std::lock_guard<std::mutex> &) = 0;
};
public:


@ -94,7 +94,7 @@ void LRUFileCachePriority::LRUFileCacheIterator::removeAndGetNext(std::lock_guar
queue_iter = cache_priority->queue.erase(queue_iter);
}
void LRUFileCachePriority::LRUFileCacheIterator::updateSize(ssize_t size, std::lock_guard<std::mutex> &)
void LRUFileCachePriority::LRUFileCacheIterator::updateSize(int64_t size, std::lock_guard<std::mutex> &)
{
cache_priority->cache_size += size;


@ -54,7 +54,7 @@ public:
void removeAndGetNext(std::lock_guard<std::mutex> &) override;
void updateSize(ssize_t size, std::lock_guard<std::mutex> &) override;
void updateSize(int64_t size, std::lock_guard<std::mutex> &) override;
void use(std::lock_guard<std::mutex> &) override;


@ -331,7 +331,7 @@ void ReadFromParallelRemoteReplicasStep::initializePipeline(QueryPipelineBuilder
all_replicas_count = cluster->getShardsInfo().size();
}
/// Find local shard
/// Find local shard. It might happen that there is no local shard, but that's fine
for (const auto & shard: cluster->getShardsInfo())
{
if (shard.isLocal())
@ -346,9 +346,6 @@ void ReadFromParallelRemoteReplicasStep::initializePipeline(QueryPipelineBuilder
}
}
if (pipes.empty())
throw Exception(ErrorCodes::LOGICAL_ERROR, "No local shard");
auto current_shard = cluster->getShardsInfo().begin();
while (pipes.size() != all_replicas_count)
{


@ -84,7 +84,15 @@ namespace
}
void operator() (const UUID & x) const
{
#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
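/// On big-endian machines the bytes are reversed first, so the hashed layout (and thus the resulting hash) matches little-endian platforms.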
auto tmp_x = x.toUnderType();
char * start = reinterpret_cast<char *>(&tmp_x);
char * end = start + sizeof(tmp_x);
std::reverse(start, end);
operator()(tmp_x);
#else
operator()(x.toUnderType());
#endif
}
void operator() (const IPv4 & x) const
{


@ -345,6 +345,7 @@ private:
PreparedInsert(pqxx::connection & connection_, const String & table, const String & schema,
const ColumnsWithTypeAndName & columns, const String & on_conflict_)
: Inserter(connection_)
, statement_name("insert_" + getHexUIntLowercase(thread_local_rng()))
{
WriteBufferFromOwnString buf;
buf << getInsertQuery(schema, table, columns, IdentifierQuotingStyle::DoubleQuotes);
@ -357,12 +358,14 @@ private:
}
buf << ") ";
buf << on_conflict_;
connection.prepare("insert", buf.str());
connection.prepare(statement_name, buf.str());
prepared = true;
}
void complete() override
{
connection.unprepare("insert");
connection.unprepare(statement_name);
prepared = false;
tx.commit();
}
@ -371,8 +374,24 @@ private:
pqxx::params params;
params.reserve(row.size());
params.append_multi(row);
tx.exec_prepared("insert", params);
tx.exec_prepared(statement_name, params);
}
~PreparedInsert() override
{
try
{
if (prepared)
connection.unprepare(statement_name);
}
catch (...)
{
tryLogCurrentException(__PRETTY_FUNCTION__);
}
}
const String statement_name;
bool prepared = false;
};
StorageMetadataPtr metadata_snapshot;


@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Tags: long
# Tags: long, no-debug
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh


@ -3,15 +3,27 @@
1 l \N Nullable(String)
2 \N Nullable(String)
-
1 l Nullable(String) \N Nullable(String)
0 \N Nullable(String) \N Nullable(String)
0 \N Nullable(String) \N Nullable(String)
1 l Nullable(String) \N Nullable(String)
-
1 l LowCardinality(String) \N Nullable(String)
0 LowCardinality(String) \N Nullable(String)
0 LowCardinality(String) \N Nullable(String)
1 l LowCardinality(String) \N Nullable(String)
-
1 l \N Nullable(String)
0 \N \N Nullable(String)
0 \N \N Nullable(String)
1 l \N Nullable(String)
-
1 l \N Nullable(String)
0 \N Nullable(String)
0 \N Nullable(String)
1 l \N Nullable(String)
-
1 l \N Nullable(String)
0 \N Nullable(String)
0 \N Nullable(String)
1 l \N Nullable(String)
0 \N
-
0
-


@ -15,19 +15,37 @@ SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (x) ORD
SELECT '-';
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x;
-- lc should be supertype for l.lc and r.lc, so expect Nullable(String)
SELECT x, lc, toTypeName(lc), r.lc, toTypeName(r.lc) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, toTypeName(lc), r.lc, toTypeName(r.lc) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, toTypeName(lc), r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT '-';
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x;
-- old behavior is different
SELECT x, lc, toTypeName(lc), r.lc, toTypeName(r.lc) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, toTypeName(lc), r.lc, toTypeName(r.lc) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, toTypeName(lc), r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT '-';
SELECT x, lc FROM t AS l RIGHT JOIN nr AS r USING (lc);
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT '-';
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT '-';
SELECT x, lc FROM t AS l RIGHT JOIN nr AS r USING (lc) SETTINGS allow_experimental_analyzer = 1;
SELECT '-';
SELECT x, lc FROM t AS l RIGHT JOIN nr AS r USING (lc) SETTINGS allow_experimental_analyzer = 0;
SELECT '-';


@ -4,6 +4,16 @@
2 \N Nullable(String)
-
1 l \N Nullable(String)
0 \N \N Nullable(String)
0 \N \N Nullable(String)
1 l \N Nullable(String)
-
1 l \N Nullable(String)
0 \N \N Nullable(String)
0 \N \N Nullable(String)
1 l \N Nullable(String)
-
1 l \N Nullable(String)
0 \N Nullable(String)
0 \N Nullable(String)
1 l \N Nullable(String)


@ -17,15 +17,27 @@ SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (x) ORD
SELECT '-';
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT '-';
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 1;
SELECT '-';
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, r.lc, toTypeName(r.lc) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT '-';
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l LEFT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l RIGHT JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT x, lc, materialize(r.lc) y, toTypeName(y) FROM t AS l FULL JOIN nr AS r USING (lc) ORDER BY x SETTINGS allow_experimental_analyzer = 0;
SELECT '-';


@ -1,5 +1,5 @@
#!/usr/bin/env bash
# Tags: long, no-parallel
# Tags: long, no-parallel, no-debug
set -e


@ -3,6 +3,16 @@
1 l \N LowCardinality(String) Nullable(String)
2 \N LowCardinality(String) Nullable(String)
-
\N \N Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(String)
\N \N Nullable(String) LowCardinality(String)
-
1 l \N LowCardinality(String) Nullable(String)
2 \N LowCardinality(String) Nullable(String)
1 l \N LowCardinality(String) Nullable(String)
2 \N LowCardinality(String) Nullable(String)
-
0 \N Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(String)
0 \N Nullable(String) LowCardinality(String)


@ -10,8 +10,27 @@ CREATE TABLE nr (`x` Nullable(UInt32), `s` Nullable(String)) ENGINE = Memory;
INSERT INTO t VALUES (1, 'l');
INSERT INTO nr VALUES (2, NULL);
SET join_use_nulls = 0;
SET allow_experimental_analyzer = 1;
-- t.x is supertype for `x` from left and right since `x` is inside `USING`.
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l LEFT JOIN nr AS r USING (x) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l RIGHT JOIN nr AS r USING (x) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l FULL JOIN nr AS r USING (x) ORDER BY t.x;
SELECT '-';
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l LEFT JOIN t AS r USING (x) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l RIGHT JOIN t AS r USING (x) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l FULL JOIN t AS r USING (x) ORDER BY t.x;
SELECT '-';
SET allow_experimental_analyzer = 0;
-- t.x is supertype for `x` from left and right since `x` is inside `USING`.
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l LEFT JOIN nr AS r USING (x) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l RIGHT JOIN nr AS r USING (x) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l FULL JOIN nr AS r USING (x) ORDER BY t.x;


@ -17,7 +17,7 @@
1 \N l Nullable(String) LowCardinality(String)
0 \N Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(String)
-
- join_use_nulls -
1 l \N LowCardinality(String) Nullable(String)
2 \N \N LowCardinality(Nullable(String)) Nullable(String)
1 l \N LowCardinality(Nullable(String)) Nullable(String)
@ -33,3 +33,47 @@
1 l \N LowCardinality(Nullable(String)) Nullable(String)
\N \N \N LowCardinality(Nullable(String)) Nullable(String)
-
\N \N \N Nullable(String) LowCardinality(Nullable(String))
1 \N l Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(Nullable(String))
\N \N \N Nullable(String) LowCardinality(Nullable(String))
- analyzer -
1 l \N LowCardinality(String) Nullable(String)
2 \N LowCardinality(String) Nullable(String)
1 l \N LowCardinality(String) Nullable(String)
2 \N LowCardinality(String) Nullable(String)
-
\N \N Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(String)
\N \N Nullable(String) LowCardinality(String)
-
1 l \N Nullable(String) Nullable(String)
0 \N \N Nullable(String) Nullable(String)
0 \N \N Nullable(String) Nullable(String)
1 l \N Nullable(String) Nullable(String)
-
0 \N \N Nullable(String) Nullable(String)
1 \N l Nullable(String) Nullable(String)
0 \N \N Nullable(String) Nullable(String)
1 \N l Nullable(String) Nullable(String)
- join_use_nulls -
1 l \N LowCardinality(String) Nullable(String)
2 \N \N LowCardinality(Nullable(String)) Nullable(String)
1 l \N LowCardinality(Nullable(String)) Nullable(String)
2 \N \N LowCardinality(Nullable(String)) Nullable(String)
-
\N \N \N Nullable(String) LowCardinality(Nullable(String))
1 \N l Nullable(String) LowCardinality(String)
1 \N l Nullable(String) LowCardinality(Nullable(String))
\N \N \N Nullable(String) LowCardinality(Nullable(String))
-
1 l \N Nullable(String) Nullable(String)
\N \N \N Nullable(String) Nullable(String)
1 l \N Nullable(String) Nullable(String)
\N \N \N Nullable(String) Nullable(String)
-
\N \N \N Nullable(String) Nullable(String)
1 \N l Nullable(String) Nullable(String)
1 \N l Nullable(String) Nullable(String)
\N \N \N Nullable(String) Nullable(String)


@ -10,6 +10,14 @@ CREATE TABLE nr (`x` Nullable(UInt32), `s` Nullable(String)) ENGINE = Memory;
INSERT INTO t VALUES (1, 'l');
INSERT INTO nr VALUES (2, NULL);
{% for allow_experimental_analyzer in [0, 1] -%}
SET allow_experimental_analyzer = {{ allow_experimental_analyzer }};
{% if allow_experimental_analyzer -%}
SELECT '- analyzer -';
{% endif -%}
SET join_use_nulls = 0;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l LEFT JOIN nr AS r USING (x) ORDER BY t.x;
@ -36,7 +44,7 @@ SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l FULL JOIN t
SET join_use_nulls = 1;
SELECT '-';
SELECT '- join_use_nulls -';
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l LEFT JOIN nr AS r USING (x) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l RIGHT JOIN nr AS r USING (x) ORDER BY t.x;
@ -56,10 +64,11 @@ SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM t AS l FULL JOIN nr
SELECT '-';
-- TODO
-- SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l LEFT JOIN t AS r USING (s) ORDER BY t.x;
-- SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l RIGHT JOIN t AS r USING (s) ORDER BY t.x;
-- SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l FULL JOIN t AS r USING (s) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l LEFT JOIN t AS r USING (s) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l RIGHT JOIN t AS r USING (s) ORDER BY t.x;
SELECT t.x, l.s, r.s, toTypeName(l.s), toTypeName(r.s) FROM nr AS l FULL JOIN t AS r USING (s) ORDER BY t.x;
{% endfor %}
DROP TABLE t;
DROP TABLE nr;


@ -1 +1,2 @@
6ba51fa36c625adab5d58007c96e32bf
ebc1c2f37455caea601feeb840757dd3


@ -1,7 +1,32 @@
drop table if exists tab;
create table tab (i8 Int8, i16 Int16, i32 Int32, i64 Int64, i128 Int128, i256 Int256, u8 UInt8, u16 UInt16, u32 UInt32, u64 UInt64, u128 UInt128, u256 UInt256, id UUID, s String, fs FixedString(33), a Array(UInt8), t Tuple(UInt16, UInt32), d Date, dt DateTime('Asia/Istanbul'), dt64 DateTime64(3, 'Asia/Istanbul'), dec128 Decimal128(3), dec256 Decimal256(4), lc LowCardinality(String)) engine = MergeTree PARTITION BY (i8, i16, i32, i64, i128, i256, u8, u16, u32, u64, u128, u256, id, s, fs, a, t, d, dt, dt64, dec128, dec256, lc) order by tuple();
insert into tab values (-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, '61f0c404-5cb3-11e7-907b-a6006ad3dba0', 'a', 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', [1, 2, 3], (-1, -2), '2020-01-01', '2020-01-01 01:01:01', '2020-01-01 01:01:01', '123.456', '78.9101', 'a');
DROP TABLE IF EXISTS tab;
CREATE TABLE tab (
i8 Int8,
i16 Int16,
i32 Int32,
i64 Int64,
i128 Int128,
i256 Int256,
u8 UInt8,
u16 UInt16,
u32 UInt32,
u64 UInt64,
u128 UInt128,
u256 UInt256,
id UUID,
s String,
fs FixedString(33),
a Array(UInt8),
t Tuple(UInt16, UInt32),
d Date,
dt DateTime('Asia/Istanbul'),
dt64 DateTime64(3, 'Asia/Istanbul'),
dec128 Decimal128(3),
dec256 Decimal256(4),
lc LowCardinality(String))
engine = MergeTree PARTITION BY (i8, i16, i32, i64, i128, i256, u8, u16, u32, u64, u128, u256, id, s, fs, a, t, d, dt, dt64, dec128, dec256, lc) ORDER BY tuple();
INSERT INTO tab VALUES (-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, '61f0c404-5cb3-11e7-907b-a6006ad3dba0', 'a', 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', [1, 2, 3], (-1, -2), '2020-01-01', '2020-01-01 01:01:01', '2020-01-01 01:01:01', '123.456', '78.9101', 'a');
INSERT INTO tab VALUES (123, 12345, 1234567890, 1234567890000000000, 123456789000000000000000000000000000000, 123456789000000000000000000000000000000000000000000000000000000000000000000000, 123, 12345, 1234567890, 1234567890000000000, 123456789000000000000000000000000000000, 123456789000000000000000000000000000000000000000000000000000000000000000000000, '61f0c404-5cb3-11e7-907b-a6006ad3dba0', 'a', 'aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa', [1, 2, 3], (-1, -2), '2020-01-01', '2020-01-01 01:01:01', '2020-01-01 01:01:01', '123.456', '78.9101', 'a');
-- Here we check that partition id did not change.
-- Different result means Backward Incompatible Change. Old partitions will not be accepted by new server.
select partition_id from system.parts where table = 'tab' and database = currentDatabase();
drop table if exists tab;
SELECT partition_id FROM system.parts WHERE table = 'tab' AND database = currentDatabase();
DROP TABLE IF EXISTS tab;
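For intuition about what these pinned values are: with a simple single-expression partition key the partition_id is the formatted key value itself, while a composite key like the one above is reduced to a hash. A minimal sketch (the table name is illustrative and not part of the test above):

DROP TABLE IF EXISTS part_id_demo;
CREATE TABLE part_id_demo (d Date, v UInt8) ENGINE = MergeTree PARTITION BY toYYYYMM(d) ORDER BY tuple();
INSERT INTO part_id_demo VALUES ('2020-01-01', 1);
-- Expected to return the human-readable partition id '202001'.
SELECT partition_id FROM system.parts WHERE table = 'part_id_demo' AND database = currentDatabase();
DROP TABLE part_id_demo;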

View File

@ -5,18 +5,33 @@
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
1 Hello World l x Hexxo Worxd
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
- const needle, non-const replacement
1 Hello World l xx Hexxxxo Worxxd
2 Hello World l x Hexxo Worxd
3 Hello World l x Hexxo Worxd
4 Hello World l x Hexxo Worxd
5 Hello World l x Hexxo Worxd
1 Hello World l xx Hexxxxo Worxxd
2 Hello World l x Hexxo Worxd
3 Hello World l x Hexxo Worxd
4 Hello World l x Hexxo Worxd
5 Hello World l x Hexxo Worxd
- non-const needle, non-const replacement
1 Hello World l xx Hexxxxo Worxxd
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
1 Hello World l xx Hexxxxo Worxxd
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
** replaceOne() **
- non-const needle, const replacement
1 Hello World l x Hexlo World
@ -24,18 +39,33 @@
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
1 Hello World l x Hexlo World
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
- const needle, non-const replacement
1 Hello World l xx Hexxlo World
2 Hello World l x Hexlo World
3 Hello World l x Hexlo World
4 Hello World l x Hexlo World
5 Hello World l x Hexlo World
1 Hello World l xx Hexxlo World
2 Hello World l x Hexlo World
3 Hello World l x Hexlo World
4 Hello World l x Hexlo World
5 Hello World l x Hexlo World
- non-const needle, non-const replacement
1 Hello World l xx Hexxlo World
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
1 Hello World l xx Hexxlo World
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hello World
5 Hello World . x Hello World
** replaceRegexpAll() **
- non-const needle, const replacement
1 Hello World l x Hexxo Worxd
@ -43,18 +73,33 @@
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllx Wxrld
5 Hello World . x xxxxxxxxxxx
1 Hello World l x Hexxo Worxd
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllx Wxrld
5 Hello World . x xxxxxxxxxxx
- const needle, non-const replacement
1 Hello World l xx Hexxxxo Worxxd
2 Hello World l x Hexxo Worxd
3 Hello World l x Hexxo Worxd
4 Hello World l x Hexxo Worxd
5 Hello World l x Hexxo Worxd
1 Hello World l xx Hexxxxo Worxxd
2 Hello World l x Hexxo Worxd
3 Hello World l x Hexxo Worxd
4 Hello World l x Hexxo Worxd
5 Hello World l x Hexxo Worxd
- non-const needle, non-const replacement
1 Hello World l xx Hexxxxo Worxxd
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllx Wxrld
5 Hello World . x xxxxxxxxxxx
1 Hello World l xx Hexxxxo Worxxd
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllx Wxrld
5 Hello World . x xxxxxxxxxxx
** replaceRegexpOne() **
- non-const needle, const replacement
1 Hello World l x Hexlo World
@ -62,16 +107,31 @@
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllo World
5 Hello World . x xello World
1 Hello World l x Hexlo World
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllo World
5 Hello World . x xello World
- const needle, non-const replacement
1 Hello World l xx Hexxlo World
2 Hello World l x Hexlo World
3 Hello World l x Hexlo World
4 Hello World l x Hexlo World
5 Hello World l x Hexlo World
1 Hello World l xx Hexxlo World
2 Hello World l x Hexlo World
3 Hello World l x Hexlo World
4 Hello World l x Hexlo World
5 Hello World l x Hexlo World
- non-const needle, non-const replacement
1 Hello World l xx Hexxlo World
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllo World
5 Hello World . x xello World
1 Hello World l xx Hexxlo World
2 Hello World ll x Hexo World
3 Hello World not_found x Hello World
4 Hello World [eo] x Hxllo World
5 Hello World . x xello World
Check that an exception is thrown if the needle is empty

View File

@ -9,53 +9,63 @@ CREATE TABLE test_tab
INSERT INTO test_tab VALUES (1, 'Hello World', 'l', 'xx') (2, 'Hello World', 'll', 'x') (3, 'Hello World', 'not_found', 'x') (4, 'Hello World', '[eo]', 'x') (5, 'Hello World', '.', 'x')
SELECT '** replaceAll() **';
SELECT '- non-const needle, const replacement';
SELECT id, haystack, needle, 'x', replaceAll(haystack, needle, 'x') FROM test_tab ORDER BY id;
SELECT id, haystack, needle, 'x', replaceAll('Hello World', needle, 'x') FROM test_tab ORDER BY id;
SELECT '- const needle, non-const replacement';
SELECT id, haystack, 'l', replacement, replaceAll(haystack, 'l', replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, 'l', replacement, replaceAll('Hello World', 'l', replacement) FROM test_tab ORDER BY id;
SELECT '- non-const needle, non-const replacement';
SELECT id, haystack, needle, replacement, replaceAll(haystack, needle, replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, needle, replacement, replaceAll('Hello World', needle, replacement) FROM test_tab ORDER BY id;
SELECT '** replaceOne() **';
SELECT '- non-const needle, const replacement';
SELECT id, haystack, needle, 'x', replaceOne(haystack, needle, 'x') FROM test_tab ORDER BY id;
SELECT id, haystack, needle, 'x', replaceOne('Hello World', needle, 'x') FROM test_tab ORDER BY id;
SELECT '- const needle, non-const replacement';
SELECT id, haystack, 'l', replacement, replaceOne(haystack, 'l', replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, 'l', replacement, replaceOne('Hello World', 'l', replacement) FROM test_tab ORDER BY id;
SELECT '- non-const needle, non-const replacement';
SELECT id, haystack, needle, replacement, replaceOne(haystack, needle, replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, needle, replacement, replaceOne('Hello World', needle, replacement) FROM test_tab ORDER BY id;
SELECT '** replaceRegexpAll() **';
SELECT '- non-const needle, const replacement';
SELECT id, haystack, needle, 'x', replaceRegexpAll(haystack, needle, 'x') FROM test_tab ORDER BY id;
SELECT id, haystack, needle, 'x', replaceRegexpAll('Hello World', needle, 'x') FROM test_tab ORDER BY id;
SELECT '- const needle, non-const replacement';
SELECT id, haystack, 'l', replacement, replaceRegexpAll(haystack, 'l', replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, 'l', replacement, replaceRegexpAll('Hello World', 'l', replacement) FROM test_tab ORDER BY id;
SELECT '- non-const needle, non-const replacement';
SELECT id, haystack, needle, replacement, replaceRegexpAll(haystack, needle, replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, needle, replacement, replaceRegexpAll('Hello World', needle, replacement) FROM test_tab ORDER BY id;
SELECT '** replaceRegexpOne() **';
SELECT '- non-const needle, const replacement';
SELECT id, haystack, needle, 'x', replaceRegexpOne(haystack, needle, 'x') FROM test_tab ORDER BY id;
SELECT id, haystack, needle, 'x', replaceRegexpOne('Hello World', needle, 'x') FROM test_tab ORDER BY id;
SELECT '- const needle, non-const replacement';
SELECT id, haystack, 'l', replacement, replaceRegexpOne(haystack, 'l', replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, 'l', replacement, replaceRegexpOne('Hello World', 'l', replacement) FROM test_tab ORDER BY id;
SELECT '- non-const needle, non-const replacement';
SELECT id, haystack, needle, replacement, replaceRegexpOne(haystack, needle, replacement) FROM test_tab ORDER BY id;
SELECT id, haystack, needle, replacement, replaceRegexpOne('Hello World', needle, replacement) FROM test_tab ORDER BY id;
DROP TABLE IF EXISTS test_tab;
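Each query in this file now runs twice, once over the haystack column and once over a constant haystack, which is what produces the doubled blocks in the reference output above. A minimal usage sketch of the non-constant needle and replacement arguments being exercised here (the values table function input is illustrative):

SELECT replaceAll(haystack, needle, replacement)
FROM values('haystack String, needle String, replacement String',
            ('Hello World', 'l', 'L'),
            ('Hello World', 'o', '0'));
-- HeLLo WorLd
-- Hell0 W0rld

An empty needle is rejected with an exception, as the reference output above records.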