Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-16 12:44:42 +00:00
Table of Contents
ClickHouse release v23.5, 2023-06-08
ClickHouse release v23.4, 2023-04-26
ClickHouse release v23.3 LTS, 2023-03-30
ClickHouse release v23.2, 2023-02-23
ClickHouse release v23.1, 2023-01-25
Changelog for 2022
2023 Changelog
ClickHouse release 23.5, 2023-06-08
Upgrade Notes
- Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key, or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `<merge_tree>` section of the server configuration file. Upgrade notes: if you upgrade from versions prior to 22.9, you should either upgrade all replicas at once or disable the compression before the upgrade, or upgrade through an intermediate version where the compressed marks are supported but not enabled by default, such as 23.3. #42587 (Alexey Milovidov).
- Make local object storage work consistently with s3 object storage, fix a problem with append (closes #48465), make it configurable as an independent storage. The change is backward incompatible because the cache on top of local object storage is not compatible with previous versions. #48791 (Kseniia Sumarokova).
- The experimental feature "in-memory data parts" is removed. The data format is still supported, but the settings are no-op, and compact or wide parts will be used instead. This closes #45409. #49429 (Alexey Milovidov).
- Changed default values of settings `parallelize_output_from_storages` and `input_format_parquet_preserve_order`. This allows ClickHouse to reorder rows when reading from files (e.g. CSV or Parquet), greatly improving performance in many cases. To restore the old behavior of preserving order, use `parallelize_output_from_storages = 0`, `input_format_parquet_preserve_order = 1`. #49479 (Michael Kolupaev).
- Make projections production-ready. Add the `optimize_use_projections` setting to control whether the projections will be selected for SELECT queries. The setting `allow_experimental_projection_optimization` is obsolete and does nothing. #49719 (Alexey Milovidov).
- Mark `joinGet` as non-deterministic (as `dictGet` is). This allows using them in mutations without an extra setting. #49843 (Azat Khuzhin).
- Revert the "`groupArray` returns cannot be nullable" change (due to binary compatibility breakage for `groupArray`/`groupArrayLast`/`groupArraySample` over `Nullable` types, which would likely lead to `TOO_LARGE_ARRAY_SIZE` or `CANNOT_READ_ALL_DATA`). #49971 (Azat Khuzhin).
- Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. If you update from a version prior to 22.12, we recommend setting this flag to `false` until the update is finished. #50319 (Nikita Taranov).
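For reference, a server-configuration fragment that explicitly disables the new defaults might look like this (a sketch based on the setting names described above; placement follows the `<merge_tree>` section mentioned in the entry):

```xml
<clickhouse>
    <merge_tree>
        <!-- turn off the 23.5 defaults to keep downgrade to 22.8 or earlier possible -->
        <compress_marks>false</compress_marks>
        <compress_primary_key>false</compress_primary_key>
    </merge_tree>
</clickhouse>
```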
New Feature
- Added storage engine AzureBlobStorage and azureBlobStorage table function. The supported set of features is very similar to storage/table function S3. #50604 (alesapin) (SmitaRKulkarni).
- Added native ClickHouse Keeper CLI Client, available as `clickhouse keeper-client`. #47414 (pufit).
- Add `urlCluster` table function. Refactor all *Cluster table functions to reduce code duplication. Make schema inference work for all possible *Cluster function signatures and for named collections. Closes #38499. #45427 (attack204), Pavel Kruglov.
- The query cache can now be used for production workloads. #47977 (Robert Schulze). The query cache can now support queries with totals and extremes modifier. #48853 (Robert Schulze). Make the `allow_experimental_query_cache` setting obsolete for backward compatibility. It was removed in https://github.com/ClickHouse/ClickHouse/pull/47977. #49934 (Timur Solodovnikov).
- Geographical data types (`Point`, `Ring`, `Polygon`, and `MultiPolygon`) are production-ready. #50022 (Alexey Milovidov).
- Add schema inference to PostgreSQL, MySQL, MeiliSearch, and SQLite table engines. Closes #49972. #50000 (Nikolay Degterinsky).
- The password type in queries like `CREATE USER u IDENTIFIED BY 'p'` will be automatically set according to the setting `default_password_type` in the `config.xml` on the server. Closes #42915. #44674 (Nikolay Degterinsky).
- Add bcrypt password authentication type. Closes #34599. #44905 (Nikolay Degterinsky).
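A sketch of requesting the new authentication type explicitly (the user name and password are hypothetical):

```sql
-- store only a bcrypt hash of the password on the server
CREATE USER app_user IDENTIFIED WITH bcrypt_password BY 'S3cret!';
```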
- Introduce a new keyword `INTO OUTFILE 'file.txt' APPEND`. #48880 (alekar).
- Added `system.zookeeper_connection` table that shows information about Keeper connections. #45245 (mateng915).
- Add new function `generateRandomStructure` that generates a random table structure. It can be used in combination with the table function `generateRandom`. #47409 (Kruglov Pavel).
- Allow the use of `CASE` without an `ELSE` branch and extend `transform` to deal with more types. Also fix some issues that made `transform()` return incorrect results when decimal types were mixed with other numeric types. This closes #2655. This closes #9596. This closes #38666. #48300 (Salvatore Mesoraca).
- Added server-side encryption using KMS keys with S3 tables, and the `header` setting with S3 disks. Closes #48723. #48724 (Johann Gan).
- Add MemoryTracker for the background tasks (merges and mutations). Introduces `merges_mutations_memory_usage_soft_limit` and `merges_mutations_memory_usage_to_ram_ratio` settings that represent the soft memory limit for merges and mutations. If this limit is reached, ClickHouse won't schedule new merge or mutation tasks. Also a `MergesMutationsMemoryTracking` metric is introduced to allow observing the current memory usage of background tasks. Resubmit #46089. Closes #48774. #48787 (Dmitry Novik).
- Function `dotProduct` works for arrays. #49050 (FFFFFFFHHHHHHH).
- Support statement `SHOW INDEX` to improve compatibility with MySQL. #49158 (Robert Schulze).
- Add virtual columns `_file` and `_path` support to table function `url`. Improve the error message for table function `url`. Resolves #49231, resolves #49232. #49356 (Ziyi Tan).
- Add the `grants` field in the users.xml file, which allows specifying grants for users. #49381 (pufit).
- Support full/right join by using the grace hash join algorithm. #49483 (lgbo).
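To illustrate the extended `CASE`/`transform` behavior described above (a sketch; a `CASE` without `ELSE` yields NULL for unmatched values):

```sql
-- CASE without an ELSE branch
SELECT CASE number WHEN 0 THEN 'zero' WHEN 1 THEN 'one' END
FROM system.numbers LIMIT 3;

-- transform() with a default for unmatched values
SELECT transform(number, [0, 1], ['zero', 'one'], 'other')
FROM system.numbers LIMIT 3;
```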
- `WITH FILL` modifier groups filling by sorting prefix. Controlled by the `use_with_fill_by_sorting_prefix` setting (enabled by default). Related to #33203#issuecomment-1418736794. #49503 (Igor Nikonov).
- clickhouse-client now accepts queries after "--multiquery" when "--query" (or "-q") is absent. Example: `clickhouse-client --multiquery "select 1; select 2;"`. #49870 (Alexey Gerasimchuk).
- Add separate `handshake_timeout` for receiving the Hello packet from a replica. Closes #48854. #49948 (Kruglov Pavel).
- Added a function "space" which repeats a space as many times as specified. #50103 (Robert Schulze).
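A minimal example of the new `space` function:

```sql
SELECT concat('[', space(3), ']');  -- '[   ]'
```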
- Added `--input_format_csv_trim_whitespaces` option. #50215 (Alexey Gerasimchuk).
- Allow the `dictGetAll` function for regexp tree dictionaries to return values from multiple matches as arrays. Closes #50254. #50255 (Johann Gan).
- Added `toLastDayOfWeek` function to round a date or a date with time up to the nearest Saturday or Sunday. #50315 (Victor Krasnov).
- Ability to ignore a skip index by specifying `ignore_data_skipping_indices`. #50329 (Boris Kuschel).
- Add `system.user_processes` table and `SHOW USER PROCESSES` query to show memory info and ProfileEvents on the user level. #50492 (János Benjamin Antal).
- Add server and format settings `display_secrets_in_show_and_select` for displaying secrets of tables, databases, table functions, and dictionaries. Add privilege `displaySecretsInShowAndSelect` controlling which users can view secrets. #46528 (Mike Kot).
- Allow setting up a ROW POLICY for all tables that belong to a DATABASE. #47640 (Ilya Golshtein).
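A database-wide row policy per the last entry might be written like this (a sketch; the database name and filter column are hypothetical):

```sql
-- applies to every table in mydb; `user` is a hypothetical column
CREATE ROW POLICY per_user ON mydb.* USING user = currentUser() TO ALL;
```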
Performance Improvement
- Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `<merge_tree>` section of the server configuration file. #42587 (Alexey Milovidov).
- New setting `s3_max_inflight_parts_for_one_file` sets the limit of concurrently loaded parts with multipart upload requests in the scope of one file. #49961 (Sema Checherinda).
- When reading from multiple files reduce parallel parsing threads for each file. Resolves #42192. #46661 (SmitaRKulkarni).
- Use aggregate projection only if it reads fewer granules than normal reading. It should help in case the query hits the PK of the table, but not the projection. Fixes #49150. #49417 (Nikolai Kochetov).
- Do not store blocks in `ANY` hash join if nothing is inserted. #48633 (vdimir).
- Fixes aggregate combinator `-If` when JIT compiled, and enable JIT compilation for aggregate functions. Closes #48120. #49083 (Igor Nikonov).
- For reading from remote tables we use smaller tasks (instead of reading the whole part) to make task stealing work: task size is determined by the size of columns to read; always use 1 MB buffers for reading from S3; boundaries of cache segments are aligned to 1 MB so they have decent size even with small tasks, which should also prevent fragmentation. #49287 (Nikita Taranov).
- Introduced settings: `merge_max_block_size_bytes`, to limit the amount of memory used for background operations; `vertical_merge_algorithm_min_bytes_to_activate`, to add another condition to activate vertical merges. #49313 (Nikita Mikhaylov).
- Default size of a read buffer for reading from local filesystem changed to a slightly better value. Also two new settings are introduced: `max_read_buffer_size_local_fs` and `max_read_buffer_size_remote_fs`. #49321 (Nikita Taranov).
- Improve memory usage and speed of `SPARSE_HASHED`/`HASHED` dictionaries (e.g. `SPARSE_HASHED` now eats 2.6x less memory, and is ~2x faster). #49380 (Azat Khuzhin).
- Optimize the `system.query_log` and `system.query_thread_log` tables by applying `LowCardinality` when appropriate. The queries over these tables will be faster. #49530 (Alexey Milovidov).
- Better performance when reading local `Parquet` files (through parallel reading). #49539 (Michael Kolupaev).
- Improve the performance of `RIGHT/FULL JOIN` by up to 2 times in certain scenarios, especially when joining a small left table with a large right table. #49585 (lgbo).
- Improve performance of BLAKE3 by 11% by enabling LTO for Rust. #49600 (Azat Khuzhin). Now it is on par with C++.
- Optimize the structure of the `system.opentelemetry_span_log`. Use `LowCardinality` where appropriate. Although this table is generally stupid (it is using the Map data type even for common attributes), it will be slightly better. #49647 (Alexey Milovidov).
- Try to reserve the hash table's size in `grace_hash` join. #49816 (lgbo).
- Parallel merge of `uniqExactIf` states. Closes #49885. #50285 (flynn).
- Keeper improvement: add `CheckNotExists` request to Keeper, which allows improving the performance of Replicated tables. #48897 (Antonio Andelic).
- Keeper performance improvements: avoid serializing the same request twice while processing. Cache deserialization results of large requests. Controlled by the new coordination setting `min_request_size_for_cache`. #49004 (Antonio Andelic).
- Reduced the number of `List` ZooKeeper requests when selecting parts to merge and a lot of partitions do not have anything to merge. #49637 (Alexander Tokmakov).
- Rework locking in the FS cache. #44985 (Kseniia Sumarokova).
- Disable pure parallel replicas if trivial count optimization is possible. #50594 (Raúl Marín).
- Don't send a HEAD request for all keys in Iceberg schema inference, only for keys that are used for reading data. #50203 (Kruglov Pavel).
- Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. #50319 (Nikita Taranov).
Experimental Feature
- `DEFLATE_QPL` codec: lower the minimum SIMD version to SSE 4.2. Doc change in QPL: Intel® QPL relies on a run-time kernel dispatcher and cpuid check to choose the best available implementation (SSE/AVX2/AVX512). Restructured the cmake file for the QPL build in ClickHouse to align with the latest upstream QPL. #49811 (jasperzhu).
- Add initial support to do JOINs with pure parallel replicas. #49544 (Raúl Marín).
- More parallelism on `Outdated` parts removal with "zero-copy replication". #49630 (Alexander Tokmakov).
- Parallel Replicas: 1) Fixed an error `NOT_FOUND_COLUMN_IN_BLOCK` in case of using parallel replicas with non-replicated storage with disabled setting `parallel_replicas_for_non_replicated_merge_tree`. 2) Now `allow_experimental_parallel_reading_from_replicas` has 3 possible values: 0 - disabled; 1 - enabled, silently disable them in case of failure (in case of FINAL or JOIN); 2 - enabled, throw an exception in case of failure. 3) If the FINAL modifier is used in a SELECT query and parallel replicas are enabled, ClickHouse will try to disable them if `allow_experimental_parallel_reading_from_replicas` is set to 1, and throw an exception otherwise. #50195 (Nikita Mikhaylov).
- When parallel replicas are enabled, they will always skip unavailable servers (the behavior is controlled by the setting `skip_unavailable_shards`, enabled by default and can only be disabled). This closes #48565. #50293 (Nikita Mikhaylov).
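Putting the settings above together, a session opting into parallel replicas with silent fallback might look like this (a sketch; cluster configuration is not shown):

```sql
SET allow_experimental_parallel_reading_from_replicas = 1; -- 1 = fall back silently on failure
SET parallel_replicas_for_non_replicated_merge_tree = 1;   -- also cover non-replicated MergeTree
SET skip_unavailable_shards = 1;                           -- default: skip unavailable servers
```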
Improvement
- The `BACKUP` command will not decrypt data from encrypted disks while making a backup. Instead, the data will be stored in a backup in encrypted form. Such backups can be restored only to an encrypted disk with the same (or extended) list of encryption keys. #48896 (Vitaly Baranov).
- Added the possibility to use temporary tables in the FROM part of ATTACH PARTITION FROM and REPLACE PARTITION FROM. #49436 (Roman Vasin).
- Added setting `async_insert` for `MergeTree` tables. It has the same meaning as the query-level setting `async_insert` and enables asynchronous inserts for a specific table. Note: it doesn't take effect for insert queries from `clickhouse-client`; use the query-level setting in that case. #49122 (Anton Popov).
- Add support for size suffixes in quota creation statement parameters. #49087 (Eridanus).
- Extend `first_value` and `last_value` to accept NULL. #46467 (lgbo).
- Add aliases `str_to_map` and `mapFromString` for `extractKeyValuePairs`. Closes https://github.com/clickhouse/clickhouse/issues/47185. #49466 (flynn).
- Add support for CGroup version 2 for asynchronous metrics about the memory usage and availability. This closes #37983. #45999 (sichenzhao).
- Cluster table functions should always skip unavailable shards. Closes #46314. #46765 (zk_kiger).
- Allow CSV file to contain empty columns in its header. #47496 (你不要过来啊).
- Add Google Cloud Storage S3-compatible table function `gcs`. Like the `oss` and `cosn` functions, it is just an alias over the `s3` table function, and it does not bring any new features. #47815 (Kuba Kaflik).
- Add ability to use strict parts size for S3 (compatibility with CloudFlare R2 S3 Storage). #48492 (Azat Khuzhin).
- Added new columns with info about `Replicated` database replicas to `system.clusters`: `database_shard_name`, `database_replica_name`, `is_active`. Added an optional `FROM SHARD` clause to the `SYSTEM DROP DATABASE REPLICA` query. #48548 (Alexander Tokmakov).
- Add a new column `zookeeper_name` in system.replicas, to indicate on which (auxiliary) ZooKeeper cluster the replicated table's metadata is stored. #48549 (cangyin).
- The `IN` operator supports the comparison of `Date` and `Date32`. Closes #48736. #48806 (flynn).
- Support for erasure codes in `HDFS`, authors: @M1eyu2018, @tomscut. #48833 (M1eyu).
- Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters; possibly closes #48931. #48932 (wangxiaobo).
- Add Array data type to MongoDB. Closes #48598. #48983 (Nikolay Degterinsky).
- Support storing `Interval` data types in tables. #49085 (larryluogit).
- Allow using the `ntile` window function without an explicit window frame definition: `ntile(3) OVER (ORDER BY a)`. Closes #46763. #49093 (vdimir).
- Added settings (`number_of_mutations_to_delay`, `number_of_mutations_to_throw`) to delay or throw `ALTER` queries that create mutations (`ALTER UPDATE`, `ALTER DELETE`, `ALTER MODIFY COLUMN`, ...) in case when the table already has a lot of unfinished mutations. #49117 (Anton Popov).
- Catch exception from `create_directories` in the filesystem cache. #49203 (Kseniia Sumarokova).
- Copies embedded examples to a new field `example` in `system.functions` to supplement the field `description`. #49222 (Dan Roscigno).
- Enable connection options for the MongoDB dictionary. Example:

```xml
<source>
    <mongodb>
        <host>localhost</host>
        <port>27017</port>
        <user></user>
        <password></password>
        <db>test</db>
        <collection>dictionary_source</collection>
        <options>ssl=true</options>
    </mongodb>
</source>
```

#49225 (MikhailBurdukov).
- Added an alias `asymptotic` for the `asymp` computational method for `kolmogorovSmirnovTest`. Improved documentation. #49286 (Nikita Mikhaylov).
- Aggregation functions groupBitAnd/Or/Xor now work on signed integer data. This makes them consistent with the behavior of the scalar functions bitAnd/Or/Xor. #49292 (exmy).
- Split function-documentation into more fine-granular fields. #49300 (Robert Schulze).
- Use multiple threads shared between all tables within a server to load outdated data parts. The size of the pool and its queue are controlled by the `max_outdated_parts_loading_thread_pool_size` and `outdated_part_loading_thread_pool_queue_size` settings. #49317 (Nikita Mikhaylov).
- Don't overestimate the size of processed data for `LowCardinality` columns when they share dictionaries between blocks. This closes #49322. See also #48745. #49323 (Alexey Milovidov).
- Parquet writer now uses a reasonable row group size when invoked through `OUTFILE`. #49325 (Michael Kolupaev).
- Allow restricted keywords like `ARRAY` as an alias if the alias is quoted. Closes #49324. #49360 (Nikolay Degterinsky).
- Data parts loading and deletion jobs were moved to shared server-wide pools instead of per-table pools. Pool sizes are controlled via the settings `max_active_parts_loading_thread_pool_size`, `max_outdated_parts_loading_thread_pool_size` and `max_parts_cleaning_thread_pool_size` in the top-level config. Table-level settings `max_part_loading_threads` and `max_part_removal_threads` became obsolete. #49474 (Nikita Mikhaylov).
- Allow `?password=pass` in the URL of the Play UI. The password is replaced in browser history. #49505 (Mike Kot).
- Allow reading zero-size objects from remote filesystems (because empty files are not backed up, we might end up with zero blobs in the metadata file). Closes #49480. #49519 (Kseniia Sumarokova).
- Attach thread MemoryTracker to `total_memory_tracker` after `ThreadGroup` detached. #49527 (Dmitry Novik).
- Fix parameterized views when a query parameter is used multiple times in the query. #49556 (Azat Khuzhin).
- Release memory allocated for the last sent ProfileEvents snapshot in the context of a query. Followup #47564. #49561 (Dmitry Novik).
- Function "makeDate" now provides a MySQL-compatible overload (year & day of the year argument). #49603 (Robert Schulze).
- Support the `dictionary` table function for `RegExpTreeDictionary`. #49666 (Han Fei).
- Added weighted fair IO scheduling policy. Added dynamic resource manager, which allows the IO scheduling hierarchy to be updated in runtime w/o server restarts. #49671 (Sergei Trifonov).
- Add a compose request after multipart upload to GCS. This enables the usage of the copy operation on objects uploaded with multipart upload. It's recommended to set `s3_strict_upload_part_size` to some value because compose requests can fail on objects created with parts of different sizes. #49693 (Antonio Andelic).
- For the `extractKeyValuePairs` function: improve the "best-effort" parsing logic to accept `key_value_delimiter` as a valid part of the value. This also simplifies branching and might even speed things up a bit. #49760 (Arthur Passos).
- Add `initial_query_id` field for system.processors_profile_log. #49777 (helifu).
- System log tables can now have custom sorting keys. #49778 (helifu).
- A new field `partitions` in `system.query_log` is used to indicate which partitions are participating in the calculation. #49779 (helifu).
- Added `enable_the_endpoint_id_with_zookeeper_name_prefix` setting for `ReplicatedMergeTree` (disabled by default). When enabled, it adds the ZooKeeper cluster name to the table's interserver communication endpoint. It avoids `Duplicate interserver IO endpoint` errors when having replicated tables with the same path, but different auxiliary ZooKeepers. #49780 (helifu).
- Add query parameters to `clickhouse-local`. Closes #46561. #49785 (Nikolay Degterinsky).
- Allow loading dictionaries and functions from YAML by default. In previous versions, it required editing the `dictionaries_config` or `user_defined_executable_functions_config` in the configuration file, as they expected `*.xml` files. #49812 (Alexey Milovidov).
- The Kafka table engine now allows using alias columns. #49824 (Aleksandr Musorin).
- Add a setting to limit the max number of pairs produced by `extractKeyValuePairs`, a safeguard to avoid using way too much memory. #49836 (Arthur Passos).
- Add support for (an unusual) case where the arguments in the `IN` operator are single-element tuples. #49844 (MikhailBurdukov).
- The `bitHammingDistance` function supports `String` and `FixedString` data types. Closes #48827. #49858 (flynn).
- Fix timeout resetting errors in the client on OS X. #49863 (alekar).
- Add support for big integers, such as UInt128, Int128, UInt256, and Int256 in the function `bitCount`. This enables Hamming distance over large bit masks for AI applications. #49867 (Alexey Milovidov).
- Fingerprints to be used instead of key IDs in encrypted disks. This simplifies the configuration of encrypted disks. #49882 (Vitaly Baranov).
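As an illustration of Hamming distance via `bitCount` (the big-integer types from the entry above work the same way):

```sql
-- 15 = 1111b, 9 = 1001b; they differ in two bits
SELECT bitCount(bitXor(15, 9));  -- 2
```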
- Add UUID data type to PostgreSQL. Closes #49739. #49894 (Nikolay Degterinsky).
- Function `toUnixTimestamp` now accepts `Date` and `Date32` arguments. #49989 (Victor Krasnov).
- Charge only server memory for dictionaries. #49995 (Azat Khuzhin).
- The server will allow using the `SQL_*` settings such as `SQL_AUTO_IS_NULL` as no-ops for MySQL compatibility. This closes #49927. #50013 (Alexey Milovidov).
- Preserve initial_query_id for ON CLUSTER queries, which is useful for introspection (under `distributed_ddl_entry_format_version=5`). #50015 (Azat Khuzhin).
- Preserve backward compatibility for renamed settings by using aliases (`allow_experimental_projection_optimization` for `optimize_use_projections`, `allow_experimental_lightweight_delete` for `enable_lightweight_delete`). #50044 (Azat Khuzhin).
- Support passing FQDN through the setting my_hostname to register a cluster node in Keeper. Add the setting invisible to support multiple compute groups. A compute group, as a cluster, is invisible to other compute groups. #50186 (Yangkuan Liu).
- Fix PostgreSQL reading all the data even though `LIMIT n` was specified. #50187 (Kseniia Sumarokova).
- Add new profile events for queries with subqueries (`QueriesWithSubqueries`/`SelectQueriesWithSubqueries`/`InsertQueriesWithSubqueries`). #50204 (Azat Khuzhin).
- Adding the roles field in the users.xml file, which allows specifying roles with grants via a config file. #50278 (pufit).
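A users.xml entry combining the config-file `grants` field (#49381, see the New Feature section) with this entry might look roughly like the sketch below; the user name and grant text are hypothetical, and the exact element layout should be checked against the PRs:

```xml
<users>
    <alice> <!-- hypothetical user -->
        <password></password>
        <grants>
            <!-- each <query> holds one GRANT statement -->
            <query>GRANT SELECT ON mydb.*</query>
        </grants>
    </alice>
</users>
```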
- Report `CGroupCpuCfsPeriod` and `CGroupCpuCfsQuota` in AsynchronousMetrics. Respect cgroup v2 memory limits during server startup. #50379 (alekar).
- Add a signal handler for SIGQUIT to work the same way as SIGINT. Closes #50298. #50435 (Nikolay Degterinsky).
- If JSON parsing fails due to the large size of the object, output the last position, to allow debugging. #50474 (Valentin Alexeev).
- Support decimals with a non-fixed size. Closes #49130. #50586 (Kruglov Pavel).
Build/Testing/Packaging Improvement
- New and improved `keeper-bench`. Everything can be customized from a YAML/XML file: request generators (each type of request generator can have a specific set of fields; multi requests can be generated just by doing the same under a `multi` key; for each request or subrequest in a multi, a `weight` field can be defined to control distribution); trees that need to be set up for a test run; hosts, with all timeouts customizable and the possibility to control how many sessions to generate for each host; integers defined with `min_value` and `max_value` fields are random number generators. #48547 (Antonio Andelic).
- io_uring is not supported on macOS; don't choose it when running tests locally, to avoid occasional failures. #49250 (Frank Chen).
- Support named fault injection for testing. #49361 (Han Fei).
- Allow running ClickHouse in an OS where the `prctl` (process control) syscall is not available, such as AWS Lambda. #49538 (Alexey Milovidov).
- Fixed the issue of a build conflict between contrib/isa-l and isa-l in QPL #49296. #49584 (jasperzhu).
- Utilities are now only built if explicitly requested ("-DENABLE_UTILS=1") instead of by default; this reduces link times in typical development builds. #49620 (Robert Schulze).
- Pull build description of idxd-config into a separate CMake file to avoid accidental removal in future. #49651 (jasperzhu).
- Add CI check with an enabled analyzer in the master. Follow-up #49562. #49668 (Dmitry Novik).
- Switch to LLVM/clang 16. #49678 (Azat Khuzhin).
- Allow building ClickHouse with clang-17. #49851 (Alexey Milovidov). #50410 (Alexey Milovidov).
- ClickHouse is now easier to be integrated into other cmake projects. #49991 (Amos Bird). (Which is strongly discouraged - Alexey Milovidov).
- Fix strange additional QEMU logging after #47151, see https://s3.amazonaws.com/clickhouse-test-reports/50078/a4743996ee4f3583884d07bcd6501df0cfdaa346/stateless_tests__release__databasereplicated__[3_4].html. #50442 (Mikhail f. Shiryaev).
- ClickHouse can work on Linux RISC-V 6.1.22. This closes #50456. #50457 (Alexey Milovidov).
- Bump internal protobuf to v3.18 (fixes bogus CVE-2022-1941). #50400 (Robert Schulze).
- Bump internal libxml2 to v2.10.4 (fixes bogus CVE-2023-28484 and bogus CVE-2023-29469). #50402 (Robert Schulze).
- Bump c-ares to v1.19.1 (bogus CVE-2023-32067, bogus CVE-2023-31130, bogus CVE-2023-31147). #50403 (Robert Schulze).
- Fix bogus CVE-2022-2469 in libgsasl. #50404 (Robert Schulze).
Bug Fix (user-visible misbehavior in an official stable release)
- ActionsDAG: fix wrong optimization #47584 (Salvatore Mesoraca).
- Correctly handle concurrent snapshots in Keeper #48466 (Antonio Andelic).
- MergeTreeMarksLoader holds DataPart instead of DataPartStorage #48515 (SmitaRKulkarni).
- Sequence state fix #48603 (Ilya Golshtein).
- Back/Restore concurrency check on previous fails #48726 (SmitaRKulkarni).
- Fix Attaching a table with non-existent ZK path does not increase the ReadonlyReplica metric #48954 (wangxiaobo).
- Fix possible terminate called for uncaught exception in some places #49112 (Kruglov Pavel).
- Fix key not found error for queries with multiple StorageJoin #49137 (vdimir).
- Fix wrong query result when using nullable primary key #49172 (Duc Canh Le).
- Fix reinterpretAs*() on big endian machines #49198 (Suzy Wang).
- (Experimental zero-copy replication) Lock zero copy parts more atomically #49211 (alesapin).
- Fix race on Outdated parts loading #49223 (Alexander Tokmakov).
- Fix all key value is null and group use rollup return wrong answer #49282 (Shuai li).
- Fix calculating load_factor for HASHED dictionaries with SHARDS #49319 (Azat Khuzhin).
- Disallow configuring compression CODECs for alias columns #49363 (Timur Solodovnikov).
- Fix bug in removal of existing part directory #49365 (alesapin).
- Properly fix GCS when HMAC is used #49390 (Antonio Andelic).
- Fix fuzz bug when subquery set is not built when reading from remote() #49425 (Alexander Gololobov).
- Invert `shutdown_wait_unfinished_queries` #49427 (Konstantin Bogdanov).
- (Experimental zero-copy replication) Fix another zero copy bug #49473 (alesapin).
- Fix postgres database setting #49481 (Mal Curtis).
- Correctly handle `s3Cluster` arguments #49490 (Antonio Andelic).
- Fix bug in TraceCollector destructor. #49508 (Yakov Olkhovskiy).
- Fix AsynchronousReadIndirectBufferFromRemoteFS breaking on short seeks #49525 (Michael Kolupaev).
- Fix dictionaries loading order #49560 (Alexander Tokmakov).
- Forbid the change of data type of Object('json') column #49563 (Nikolay Degterinsky).
- Fix stress test (Logical error: Expected 7134 >= 11030) #49623 (Kseniia Sumarokova).
- Fix bug in DISTINCT #49628 (Alexey Milovidov).
- Fix: DISTINCT in order with zero values in non-sorted columns #49636 (Igor Nikonov).
- Fix one-off error in big integers found by UBSan with fuzzer #49645 (Alexey Milovidov).
- Fix reading from sparse columns after restart #49660 (Anton Popov).
- Fix assert in SpanHolder::finish() with fibers #49673 (Kruglov Pavel).
- Fix short circuit functions and mutations with sparse arguments #49716 (Anton Popov).
- Fix writing appended files to incremental backups #49725 (Vitaly Baranov).
- Fix "There is no physical column _row_exists in table" error occurring during lightweight delete mutation on a table with Object column. #49737 (Alexander Gololobov).
- Fix msan issue in randomStringUTF8(uneven number) #49750 (Robert Schulze).
- Fix aggregate function kolmogorovSmirnovTest #49768 (FFFFFFFHHHHHHH).
- Fix settings aliases in native protocol #49776 (Azat Khuzhin).
- Fix `arrayMap` with array of tuples with single argument #49789 (Anton Popov).
- Fix per-query IO/BACKUPs throttling settings #49797 (Azat Khuzhin).
- Fix setting NULL in profile definition #49831 (Vitaly Baranov).
- Fix a bug with projections and the aggregate_functions_null_for_empty setting (for query_plan_optimize_projection) #49873 (Amos Bird).
- Fix processing pending batch for Distributed async INSERT after restart #49884 (Azat Khuzhin).
- Fix assertion in CacheMetadata::doCleanup #49914 (Kseniia Sumarokova).
- Fix `is_prefix` in OptimizeRegularExpression #49919 (Han Fei).
- Fix metrics `WriteBufferFromS3Bytes`, `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` #49930 (Aleksandr Musorin).
- Fix IPv6 encoding in protobuf #49933 (Yakov Olkhovskiy).
- Fix possible Logical error on bad Nullable parsing for text formats #49960 (Kruglov Pavel).
- Add setting output_format_parquet_compliant_nested_types to produce more compatible Parquet files #50001 (Michael Kolupaev).
- Fix logical error in stress test "Not enough space to add ..." #50021 (Kseniia Sumarokova).
- Avoid deadlock when starting a table in the attach thread of `ReplicatedMergeTree` #50026 (Antonio Andelic).
- Fix assert in SpanHolder::finish() with fibers, attempt 2 #50034 (Kruglov Pavel).
- Add proper escaping for DDL OpenTelemetry context serialization #50045 (Azat Khuzhin).
- Fix reporting broken projection parts #50052 (Amos Bird).
- JIT compilation not equals NaN fix #50056 (Maksim Kita).
- Fix crashing in case of Replicated database without arguments #50058 (Azat Khuzhin).
- Fix crash with `multiIf` and constant condition and nullable arguments #50123 (Anton Popov).
- Fix invalid index analysis for date related keys #50153 (Amos Bird).
- Do not allow modifying ORDER BY when there are no ORDER BY columns #50154 (Han Fei).
- Fix broken index analysis when binary operator contains a null constant argument #50177 (Amos Bird).
- clickhouse-client: disallow usage of `--query` and `--queries-file` at the same time #50210 (Alexey Gerasimchuk).
- Fix UB for INTO OUTFILE extensions (APPEND / AND STDOUT) and WATCH EVENTS #50216 (Azat Khuzhin).
- Fix skipping spaces at end of row in CustomSeparatedIgnoreSpaces format #50224 (Kruglov Pavel).
- Fix iceberg metadata parsing #50232 (Kseniia Sumarokova).
- Fix nested distributed SELECT in WITH clause #50234 (Azat Khuzhin).
- Fix msan issue in keyed siphash #50245 (Robert Schulze).
- Fix bugs in Poco sockets in non-blocking mode, use true non-blocking sockets #50252 (Kruglov Pavel).
- Fix checksum calculation for backup entries #50264 (Vitaly Baranov).
- Comparison functions NaN fix #50287 (Maksim Kita).
- JIT aggregation nullable key fix #50291 (Maksim Kita).
- Fix clickhouse-local crashing when writing empty Arrow or Parquet output #50328 (Michael Kolupaev).
- Fix crash when Pool::Entry::disconnect() is called #50334 (Val Doroshchuk).
- Improved fetch part by holding directory lock longer #50339 (SmitaRKulkarni).
- Fix bitShift* functions with both constant arguments #50343 (Kruglov Pavel).
- Fix Keeper deadlock on exception when preprocessing requests. #50387 (frinkr).
- Fix hashing of const integer values #50421 (Robert Schulze).
- Fix merge_tree_min_rows_for_seek/merge_tree_min_bytes_for_seek for data skipping indexes #50432 (Azat Khuzhin).
- Limit the number of in-flight tasks for loading outdated parts #50450 (Nikita Mikhaylov).
- Keeper fix: apply uncommitted state after snapshot install #50483 (Antonio Andelic).
- Fix incorrect constant folding #50536 (Alexey Milovidov).
- Fix logical error in stress test (Not enough space to add ...) #50583 (Kseniia Sumarokova).
- Fix converting Null to LowCardinality(Nullable) in values table function #50637 (Kruglov Pavel).
- Revert invalid RegExpTreeDictionary optimization #50642 (Johann Gan).
ClickHouse release 23.4, 2023-04-26
Backward Incompatible Change
- Formatter '%M' in function formatDateTime() now prints the month name instead of the minutes. This makes the behavior consistent with MySQL. The previous behavior can be restored using setting "formatdatetime_parsedatetime_m_is_month_name = 0". #47246 (Robert Schulze).
- This change makes sense only if you are using the virtual filesystem cache. If `path` in the virtual filesystem cache configuration is not empty and is not an absolute path, then it will be put in `<clickhouse server data directory>/caches/<path_from_cache_config>`. #48784 (Kseniia Sumarokova).
- Primary/secondary indices and sorting keys with identical expressions are now rejected. This behavior can be disabled using setting `allow_suspicious_indices`. #48536 (凌涛).
New Feature
- Support new aggregate functions `quantileGK`/`quantilesGK`, like approx_percentile in Spark. For the Greenwald-Khanna algorithm, refer to http://infolab.stanford.edu/~datar/courses/cs361a/papers/quantiles.pdf. #46428 (李扬).
- Add a statement `SHOW COLUMNS` which shows distilled information from system.columns. #48017 (Robert Schulze).
- Added `LIGHTWEIGHT` and `PULL` modifiers for the `SYSTEM SYNC REPLICA` query. The `LIGHTWEIGHT` version waits for fetches and drop-ranges only (merges and mutations are ignored). The `PULL` version pulls new entries from ZooKeeper and does not wait for them. Fixes #47794. #48085 (Alexander Tokmakov).
- Add `kafkaMurmurHash` function for compatibility with Kafka DefaultPartitioner. Closes #47834. #48185 (Nikolay Degterinsky).
- Allow to easily create a user with the same grants as the current user by using `GRANT CURRENT GRANTS`. #48262 (pufit).
- Add statistical aggregate function `kolmogorovSmirnovTest`. Closes #48228. #48325 (FFFFFFFHHHHHHH).
- Added a `lost_part_count` column to the `system.replicas` table. The column value shows the total number of lost parts in the corresponding table. The value is stored in ZooKeeper and can be used instead of the non-persistent `ReplicatedDataLoss` profile event for monitoring. #48526 (Sergei Trifonov).
- Add `soundex` function for compatibility. Closes #39880. #48567 (FriendLey).
- Support `Map` type for JSONExtract. #48629 (李扬).
- Add `PrettyJSONEachRow` format to output pretty JSON with new line delimiters and 4-space indents. #48898 (Kruglov Pavel).
- Add `ParquetMetadata` input format to read Parquet file metadata. #48911 (Kruglov Pavel).
- Add `extractKeyValuePairs` function to extract key-value pairs from strings. Input strings might contain noise (e.g. log files), i.e. they do not need to be 100% formatted in key-value-pair format; the algorithm will look for key-value pairs matching the arguments passed to the function. As of now, the function accepts the following arguments: `data_column` (mandatory), `key_value_pair_delimiter` (defaults to `:`), `pair_delimiters` (defaults to `\space \, \;`) and `quoting_character` (defaults to double quotes). #43606 (Arthur Passos).
- Functions replaceOne(), replaceAll(), replaceRegexpOne() and replaceRegexpAll() can now be called with non-const pattern and replacement arguments. #46589 (Robert Schulze).
- Added functions to work with columns of type `Map`: `mapConcat`, `mapSort`, `mapExists`. #48071 (Anton Popov).
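The `soundex` entry above refers to the classic American Soundex code (first letter plus three digits). As an illustration only, here is a minimal Python sketch of that algorithm; ClickHouse's built-in implementation may differ in edge cases such as non-alphabetic or non-ASCII input:

```python
def soundex(name: str) -> str:
    """Classic American Soundex: first letter plus three digits, zero-padded."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    letters = [c for c in name.lower() if c.isalpha()]
    if not letters:
        return ""
    first = letters[0].upper()
    digits = []
    prev = codes.get(letters[0], "")
    for c in letters[1:]:
        code = codes.get(c, "")
        if code and code != prev:      # skip vowels and collapse adjacent duplicates
            digits.append(code)
        if c not in "hw":              # 'h' and 'w' do not separate duplicate codes
            prev = code
    return (first + "".join(digits) + "000")[:4]
```

For example, `soundex("Robert")` and `soundex("Rupert")` both yield `R163`, which is the point of the encoding: similar-sounding names collide.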
Performance Improvement
- Reading files in `Parquet` format is now much faster. IO and decoding are parallelized (controlled by the `max_threads` setting), and only the required data ranges are read. #47964 (Michael Kolupaev).
- If we run a mutation with IN (subquery) like this: `ALTER TABLE t UPDATE col='new value' WHERE id IN (SELECT id FROM huge_table)` and the table `t` has multiple parts, then for each part a set for the subquery `SELECT id FROM huge_table` is built in memory. And if there are many parts, this might consume a lot of memory (and lead to an OOM) and CPU. The solution is to introduce a short-lived cache of sets that are currently being built by mutation tasks. If another task of the same mutation is executed concurrently, it can look up the set in the cache, wait for it to be built and reuse it. #46835 (Alexander Gololobov).
- Only check dependencies if necessary when applying `ALTER TABLE` queries. #48062 (Raúl Marín).
- Optimize function `mapUpdate`. #48118 (Anton Popov).
- Now an internal query to a local replica is sent explicitly and data from it is received through the loopback interface. Setting `prefer_localhost_replica` is not respected for parallel replicas. This is needed for better scheduling and makes the code cleaner: the initiator is only responsible for coordinating the reading process and merging results, continuously answering requests while all the secondary queries read the data. Note: using the loopback interface is not as performant, but otherwise some replicas could starve for tasks, which could lead to even slower query execution and not utilizing all possible resources. The initialization of the coordinator is now even more lazy. All incoming requests contain the information about the reading algorithm, and we initialize the coordinator with it when the first request comes. If any replica decides to read with a different algorithm, an exception will be thrown and the query will be aborted. #48246 (Nikita Mikhaylov).
- Do not build a set for the right side of an `IN` clause with subquery when it is used only for analysis of skip indexes and they are disabled by setting (`use_skip_indexes=0`). Previously it might affect the performance of queries. #48299 (Anton Popov).
- Query processing is parallelized right after reading `FROM file(...)`. Related to #38755. #48525 (Igor Nikonov). Query processing is parallelized right after reading from any data source. Affected data sources are mostly simple or external storages like table functions `url`, `file`. #48727 (Igor Nikonov). This is controlled by the setting `parallelize_output_from_storages`, which is not enabled by default.
- Lowered contention of the ThreadPool mutex (may increase performance for a huge amount of small jobs). #48750 (Sergei Trifonov).
- Reduce memory usage for multiple `ALTER DELETE` mutations. #48522 (Nikolai Kochetov).
- Remove the excessive connection attempts if the `skip_unavailable_shards` setting is enabled. #48771 (Azat Khuzhin).
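The mutation entry for #46835 describes a short-lived cache of in-memory sets where the first task builds the set and concurrent tasks of the same mutation wait for it and reuse it. A hypothetical Python sketch of that "first builder wins, others wait" pattern (class and method names are illustrative, not ClickHouse's actual code):

```python
import threading
from concurrent.futures import Future


class SetCache:
    """First caller for a key builds the value; concurrent callers wait and reuse it."""

    def __init__(self):
        self._lock = threading.Lock()
        self._entries: dict[str, Future] = {}

    def get_or_build(self, key: str, build):
        with self._lock:
            fut = self._entries.get(key)
            owner = fut is None
            if owner:                  # register a future so others can wait on it
                fut = Future()
                self._entries[key] = fut
        if owner:                      # build outside the lock; it may be slow
            try:
                fut.set_result(build())
            except Exception as exc:
                fut.set_exception(exc)
        return fut.result()            # waiters block here until the set is ready
```

A key like `"<mutation_id>:<subquery_hash>"` would make tasks of the same mutation share one set while different mutations stay independent.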
Experimental Feature
- Entries in the query cache are now squashed to max_block_size and compressed. #45912 (Robert Schulze).
- It is now possible to define per-user quotas in the query cache. #48284 (Robert Schulze).
- Some fixes for parallel replicas #48433 (Nikita Mikhaylov).
- Implement zero-copy-replication (an experimental feature) on encrypted disks. #48741 (Vitaly Baranov).
Improvement
- Increase default value for `connect_timeout_with_failover_ms` to 1000 ms (because of adding async connections in https://github.com/ClickHouse/ClickHouse/pull/47229). Closes #5188. #49009 (Kruglov Pavel).
- Several improvements around data lakes: - Make `Iceberg` work with non-partitioned data. - Support `Iceberg` format version v2 (previously only v1 was supported). - Support reading partitioned data for `DeltaLake`/`Hudi`. - Faster reading of `DeltaLake` metadata by using Delta's checkpoint files. - Fixed incorrect `Hudi` reads: previously it incorrectly chose which data to read and therefore was able to read correctly only small-size tables. - Made these engines pick up updates of changed data (previously the state was set on table creation). - Made proper testing for `Iceberg`/`DeltaLake`/`Hudi` using Spark. #47307 (Kseniia Sumarokova).
- Add async connection to socket and async writing to socket. Make creating connections and sending query/external tables async across shards. Refactor code with fibers. Closes #46931. We will be able to increase `connect_timeout_with_failover_ms` by default after this PR (https://github.com/ClickHouse/ClickHouse/issues/5188). #47229 (Kruglov Pavel).
- Support config sections `keeper`/`keeper_server` as an alternative to `zookeeper`. Closes #34766, #34767. #35113 (李扬).
- It is possible to set the secure flag in named_collections for a dictionary with a ClickHouse table source. Addresses #38450. #46323 (Ilya Golshtein).
- `bitCount` function supports `FixedString` and `String` data types. #49044 (flynn).
- Added configurable retries for all operations with [Zoo]Keeper for Backup queries. #47224 (Nikita Mikhaylov).
- Enable `use_environment_credentials` for S3 by default, so the entire provider chain is constructed by default. #47397 (Antonio Andelic).
- Currently, the JSON_VALUE function is similar to Spark's get_json_object function, which supports getting a value from a JSON string by a path like '$.key'. But there are still some differences: 1. Spark's get_json_object returns null when the path does not exist, while JSON_VALUE returns an empty string; 2. Spark's get_json_object returns a complex-type value, such as a JSON object/array value, while JSON_VALUE returns an empty string. #47494 (KevinyhZou).
- For `use_structure_from_insertion_table_in_table_functions`, more flexible insert-table structure propagation to table functions. Fixed an issue with name mapping and using virtual columns. No more need for the 'auto' setting. #47962 (Yakov Olkhovskiy).
- Do not continue retrying to connect to Keeper if the query is killed or over limits. #47985 (Raúl Marín).
- Support Enum output/input in `BSONEachRow`, allow all map key types and avoid extra calculations on output. #48122 (Kruglov Pavel).
- Support more ClickHouse types in `ORC`/`Arrow`/`Parquet` formats: Enum(8|16), (U)Int(128|256), Decimal256 (for ORC), allow reading IPv4 from Int32 values (ORC outputs IPv4 as Int32, and we couldn't read it back), fix reading Nullable(IPv6) from binary data for `ORC`. #48126 (Kruglov Pavel).
- Add columns `perform_ttl_move_on_insert`, `load_balancing` for table `system.storage_policies`, modify column `volume_type` type to `Enum8`. #48167 (lizhuoyu5).
- Added support for the `BACKUP ALL` command which backs up all tables and databases, including temporary and system ones. #48189 (Vitaly Baranov).
- Function mapFromArrays supports `Map` type as an input. #48207 (李扬).
- The output of some SHOW PROCESSLIST is now sorted. #48241 (Robert Schulze).
- Per-query/per-server throttling for remote IO/local IO/BACKUPs (server settings: `max_remote_read_network_bandwidth_for_server`, `max_remote_write_network_bandwidth_for_server`, `max_local_read_bandwidth_for_server`, `max_local_write_bandwidth_for_server`, `max_backup_bandwidth_for_server`; query settings: `max_remote_read_network_bandwidth`, `max_remote_write_network_bandwidth`, `max_local_read_bandwidth`, `max_local_write_bandwidth`, `max_backup_bandwidth`). #48242 (Azat Khuzhin).
- Support more types in `CapnProto` format: Map, (U)Int(128|256), Decimal(128|256). Allow integer conversions during input/output. #48257 (Kruglov Pavel).
- Don't throw CURRENT_WRITE_BUFFER_IS_EXHAUSTED for normal behaviour. #48288 (Raúl Marín).
- Add new setting `keeper_map_strict_mode` which enforces extra guarantees on operations made on top of `KeeperMap` tables. #48293 (Antonio Andelic).
- Check that the primary key type for a simple dictionary is a native unsigned integer type. Add setting `check_dictionary_primary_key` for compatibility (set `check_dictionary_primary_key = false` to disable checking). #48335 (lizhuoyu5).
- Don't replicate mutations for `KeeperMap` because it's unnecessary. #48354 (Antonio Andelic).
- Allow to write/read unnamed tuple as nested Message in Protobuf format. Tuple elements and Message fields are matched by position. #48390 (Kruglov Pavel).
- Support `additional_table_filters` and `additional_result_filter` settings in the new planner. Also, add a documentation entry for `additional_result_filter`. #48405 (Dmitry Novik).
- `parseDateTime` now understands format string '%f' (fractional seconds). #48420 (Robert Schulze).
- Format string "%f" in formatDateTime() now prints "000000" if the formatted value has no fractional seconds; the previous behavior (single zero) can be restored using setting "formatdatetime_f_prints_single_zero = 1". #48422 (Robert Schulze).
- Don't replicate DELETE and TRUNCATE for KeeperMap. #48434 (Antonio Andelic).
- Generate valid Decimals and Bools in generateRandom function. #48436 (Kruglov Pavel).
- Allow trailing commas in the expression list of a SELECT query, for example `SELECT a, b, c, FROM table`. Closes #37802. #48438 (Nikolay Degterinsky).
- Override `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD` environment variables with `--user` and `--password` client parameters. Closes #38909. #48440 (Nikolay Degterinsky).
- Added retries to loading of data parts in `MergeTree` tables in case of retryable errors. #48442 (Anton Popov).
- Add support for `Date`, `Date32`, `DateTime`, `DateTime64` data types to `arrayMin`, `arrayMax`, `arrayDifference` functions. Closes #21645. #48445 (Nikolay Degterinsky).
- Add support for the `{server_uuid}` macro. It is useful for identifying replicas in autoscaled clusters when new replicas are constantly added and removed in runtime. This closes #48554. #48563 (Alexey Milovidov).
- The installation script will create a hard link instead of copying if it is possible. #48578 (Alexey Milovidov).
- Support `SHOW TABLE` syntax meaning the same as `SHOW CREATE TABLE`. Closes #48580. #48591 (flynn).
- HTTP temporary buffers now support working by evicting data from the virtual filesystem cache. #48664 (Vladimir C).
- Make schema inference work for `CREATE AS SELECT`. Closes #47599. #48679 (flynn).
- Added a `replicated_max_mutations_in_one_entry` setting for `ReplicatedMergeTree` that allows limiting the number of mutation commands per one `MUTATE_PART` entry (default is 10000). #48731 (Alexander Tokmakov).
- In AggregateFunction types, don't count unused arena bytes as `read_bytes`. #48745 (Raúl Marín).
- Fix some MySQL-related settings not being handled with the MySQL dictionary source + named collection. Closes #48402. #48759 (Kseniia Sumarokova).
- If a user set `max_single_part_upload_size` to a very large value, it can lead to a crash due to a bug in the AWS S3 SDK. This fixes #47679. #48816 (Alexey Milovidov).
- Fix data race in `RabbitMQ` (report), refactor the code. #48845 (Kseniia Sumarokova).
- Add aliases `name` and `part_name` for `system.parts` and `system.part_log`. Closes #48718. #48850 (sichenzhao).
- Functions "arrayDifference()", "arrayCumSum()" and "arrayCumSumNonNegative()" now support input arrays of wide integer types (U)Int128/256. #48866 (cluster).
- Multi-line history in clickhouse-client is now no longer padded. This makes pasting more natural. #48870 (Joanna Hulboj).
- Implement a slight improvement for the rare case when ClickHouse is run inside LXC and LXCFS is used. LXCFS has an issue: sometimes it returns the error "Transport endpoint is not connected" on reading from a file inside `/proc`. This error was correctly logged into ClickHouse's server log. We have additionally worked around this issue by reopening the file. This is a minuscule change. #48922 (Real).
- Improve memory accounting for prefetches. Randomise prefetch settings in CI. #48973 (Kseniia Sumarokova).
- Correctly set headers for native copy operations on GCS. #48981 (Antonio Andelic).
- Add support for specifying setting names in the command line with dashes instead of underscores, for example, `--max-threads` instead of `--max_threads`. Additionally, support Unicode dash characters like `—` instead of `--`. This is useful when you communicate with a team in another company, and a manager from that team copy-pasted code from MS Word. #48985 (alekseygolub).
- Add fallback to password authentication when authentication with SSL user certificate has failed. Closes #48974. #48989 (Nikolay Degterinsky).
- Improve the embedded dashboard. Close #46671. #49036 (Kevin Zhang).
- Add profile events for log messages, so you can easily see the count of log messages by severity. #49042 (Alexey Milovidov).
- In previous versions, the `LineAsString` format worked inconsistently depending on whether parallel parsing was enabled, in the presence of DOS or macOS Classic line breaks. This closes #49039. #49052 (Alexey Milovidov).
- The exception message about an unparsed query parameter will also tell the name of the parameter. Reimplements #48878. Closes #48772. #49061 (Alexey Milovidov).
Build/Testing/Packaging Improvement
- Update time zones. The following were updated: Africa/Cairo, Africa/Casablanca, Africa/El_Aaiun, America/Bogota, America/Cambridge_Bay, America/Ciudad_Juarez, America/Godthab, America/Inuvik, America/Iqaluit, America/Nuuk, America/Ojinaga, America/Pangnirtung, America/Rankin_Inlet, America/Resolute, America/Whitehorse, America/Yellowknife, Asia/Gaza, Asia/Hebron, Asia/Kuala_Lumpur, Asia/Singapore, Canada/Yukon, Egypt, Europe/Kirov, Europe/Volgograd, Singapore. #48572 (Alexey Milovidov).
- Reduce the number of dependencies in the header files to speed up the build. #47984 (Dmitry Novik).
- Randomize compression of marks and indices in tests. #48286 (Alexey Milovidov).
- Bump internal ZSTD from 1.5.4 to 1.5.5. #46797 (Robert Schulze).
- Randomize vertical merges from compact to wide parts in tests. #48287 (Raúl Marín).
- Support for CRC32 checksum in HDFS. Fix performance issues. #48614 (Alexey Milovidov).
- Remove remainders of GCC support. #48671 (Robert Schulze).
- Add CI run with new analyzer infrastructure enabled. #48719 (Dmitry Novik).
Bug Fix (user-visible misbehavior in an official stable release)
- Fix system.query_views_log for MVs that are pushed from background threads #46668 (Azat Khuzhin).
- Fix several `RENAME COLUMN` bugs #46946 (alesapin).
- Fix minor highlighting issues in clickhouse-format #47610 (Natasha Murashkina).
- Fix a bug in LLVM's libc++ leading to a crash when uploading parts to S3 whose size is greater than INT_MAX #47693 (Azat Khuzhin).
- Fix overflow in the `sparkbar` function #48121 (Vladimir C).
- Fix race in S3 #48190 (Anton Popov).
- Disable JIT for aggregate functions due to inconsistent behavior #48195 (Alexey Milovidov).
- Fix alter formatting (minor) #48289 (Natasha Murashkina).
- Fix CPU usage in RabbitMQ (was worsened in 23.2 after #44404) #48311 (Kseniia Sumarokova).
- Fix crash in EXPLAIN PIPELINE for Merge over Distributed #48320 (Azat Khuzhin).
- Fix serializing LowCardinality as Arrow dictionary #48361 (Kruglov Pavel).
- Reset downloader for cache file segment in TemporaryFileStream #48386 (Vladimir C).
- Fix possible SYSTEM SYNC REPLICA stuck in case of DROP/REPLACE PARTITION #48391 (Azat Khuzhin).
- Fix a startup error when loading a distributed table that depends on a dictionary #48419 (MikhailBurdukov).
- Don't check dependencies when renaming system tables automatically #48431 (Raúl Marín).
- Update only affected rows in KeeperMap storage #48435 (Antonio Andelic).
- Fix possible segfault in the VFS cache #48469 (Kseniia Sumarokova).
- `toTimeZone` function throws an error when no constant string is provided #48471 (Jordi Villar).
- Fix logical error with IPv4 in Protobuf, add support for Date32 #48486 (Kruglov Pavel).
- "changed" flag in system.settings was calculated incorrectly for settings with multiple values #48516 (MikhailBurdukov).
- Fix storage `Memory` with enabled compression #48517 (Anton Popov).
- Fix bracketed-paste mode messing up password input in the event of client reconnection #48528 (Michael Kolupaev).
- Fix nested map for keys of IP and UUID types #48556 (Yakov Olkhovskiy).
- Fix an uncaught exception in case of parallel loader for hashed dictionaries #48571 (Azat Khuzhin).
- The `groupArray` aggregate function correctly works for empty results over nullable types #48593 (lgbo).
- Fix a bug in Keeper when a node is sometimes not created with scheme `auth` in ACL. #48595 (Aleksei Filatov).
- Allow IPv4 comparison operators with UInt #48611 (Yakov Olkhovskiy).
- Fix possible error from cache #48636 (Kseniia Sumarokova).
- Async inserts with empty data will no longer throw exception. #48663 (Anton Popov).
- Fix table dependencies in case of failed RENAME TABLE #48683 (Azat Khuzhin).
- If the primary key has duplicate columns (which is only possible for projections), in previous versions it might lead to a bug #48838 (Amos Bird).
- Fix for a race condition in ZooKeeper when joining send_thread/receive_thread #48849 (Alexander Gololobov).
- Fix unexpected part name error when trying to drop an ignored detached part with zero-copy replication #48862 (Michael Lex).
- Fix reading a `Date32` Parquet/Arrow column into a non-`Date32` column #48864 (Kruglov Pavel).
- Fix `UNKNOWN_IDENTIFIER` error while selecting from a table with a row policy and a column with dots #48976 (Kruglov Pavel).
- Fix aggregation by empty nullable strings #48999 (LiuNeng).
ClickHouse release 23.3 LTS, 2023-03-30
Upgrade Notes
- Lightweight DELETEs are production ready and enabled by default. The `DELETE` query for MergeTree tables is now available by default.
- The behavior of `*domain*RFC` and `netloc` functions is slightly changed: relaxed the set of symbols that are allowed in the URL authority for better conformance. #46841 (Azat Khuzhin).
- Prohibited creating tables based on KafkaEngine with DEFAULT/EPHEMERAL/ALIAS/MATERIALIZED statements for columns. #47138 (Aleksandr Musorin).
- An "asynchronous connection drain" feature is removed. Related settings and metrics are removed as well. It was an internal feature, so the removal should not affect users who had never heard about that feature. #47486 (Alexander Tokmakov).
- Support 256-bit Decimal data type (more than 38 digits) in `arraySum`/`Min`/`Max`/`Avg`/`Product`, `arrayCumSum`/`CumSumNonNegative`, `arrayDifference`, array construction, IN operator, query parameters, `groupArrayMovingSum`, statistical functions, `min`/`max`/`any`/`argMin`/`argMax`, PostgreSQL wire protocol, MySQL table engine and function, `sumMap`, `mapAdd`, `mapSubtract`, `arrayIntersect`. Add support for big integers in `arrayIntersect`. Statistical aggregate functions involving moments (such as `corr` or various `TTest`s) will use `Float64` as their internal representation (they were using `Decimal128` before this change, but it was pointless), and these functions can return `nan` instead of `inf` in case of infinite variance. Some functions were allowed on `Decimal256` data types but returned `Decimal128` in previous versions - now it is fixed. This closes #47569. This closes #44864. This closes #28335. #47594 (Alexey Milovidov).
- Make backup_threads/restore_threads server settings (instead of user settings). #47881 (Azat Khuzhin).
- Do not allow const and non-deterministic secondary indices #46839 (Anton Popov).
New Feature
- Add a new mode for splitting the work on replicas using settings `parallel_replicas_custom_key` and `parallel_replicas_custom_key_filter_type`. If the cluster consists of a single shard with multiple replicas, up to `max_parallel_replicas` will be randomly picked and turned into shards. For each shard, a corresponding filter is added to the query on the initiator before being sent to the shard. If the cluster consists of multiple shards, it will behave the same as `sample_key` but with the possibility to define an arbitrary key. #45108 (Antonio Andelic).
- An option to display a partial result on cancel: Added query setting `partial_result_on_first_cancel` allowing the canceled query (e.g. due to Ctrl-C) to return a partial result. #45689 (Alexey Perevyshin).
- Added support of arbitrary table engines for temporary tables (except for Replicated and KeeperMap engines). Closes #31497. #46071 (Roman Vasin).
- Add support for replication of user-defined SQL functions using centralized storage in Keeper. #46085 (Aleksei Filatov).
- Implement `system.server_settings` (similar to `system.settings`), which will contain server configurations. #46550 (pufit).
- Support for `UNDROP TABLE` query. Closes #46811. #47241 (chen).
- Allow separate grants for named collections (e.g. to be able to give `SHOW/CREATE/ALTER/DROP named collection` access only to certain collections, instead of all at once). Closes #40894. Adds new access type `NAMED_COLLECTION_CONTROL`, which is not given to the default user unless explicitly added to the user config (it is required to be able to do `GRANT ALL`); also `show_named_collections` is no longer obligatory to be manually specified for the default user to have full access rights, as was the case in 23.2. #46241 (Kseniia Sumarokova).
- Allow nested custom disks. Previously custom disks supported only a flat disk structure. #47106 (Kseniia Sumarokova).
- Introduce a function `widthBucket` (with a `WIDTH_BUCKET` alias for compatibility). #42974. #46790 (avoiderboi).
- Add new functions `parseDateTime`/`parseDateTimeInJodaSyntax` according to the specified format string. parseDateTime parses a String to DateTime in MySQL syntax, parseDateTimeInJodaSyntax parses in Joda syntax. #46815 (李扬).
- Use `dummy UInt8` for the default structure of table function `null`. Closes #46930. #47006 (flynn).
- Support for date format with a comma, like `Dec 15, 2021`, in the `parseDateTimeBestEffort` function. Closes #46816. #47071 (chen).
- Add settings `http_wait_end_of_query` and `http_response_buffer_size` that correspond to URL params `wait_end_of_query` and `buffer_size` for the HTTP interface. This allows changing these settings in the profiles. #47108 (Vladimir C).
- Add `system.dropped_tables` table that shows tables that were dropped from `Atomic` databases but were not completely removed yet. #47364 (chen).
- Add `INSTR` as alias of `positionCaseInsensitive` for MySQL compatibility. Closes #47529. #47535 (flynn).
- Added `toDecimalString` function allowing to convert numbers to string with fixed precision. #47838 (Andrey Zvonov).
- Add a merge tree setting `max_number_of_mutations_for_replica`. It limits the number of part mutations per replica to the specified amount. Zero means no limit on the number of mutations per replica (the execution can still be constrained by other settings). #48047 (Vladimir C).
- Add the Map-related function `mapFromArrays`, which allows the creation of a map from a pair of arrays. #31125 (李扬).
- Allow control of compression in Parquet/ORC/Arrow output formats, add support for more compression input formats. This closes #13541. #47114 (Kruglov Pavel).
- Add SSL User Certificate authentication to the native protocol. Closes #47077. #47596 (Nikolay Degterinsky).
- Add *OrNull() and *OrZero() variants for `parseDateTime`, add alias `str_to_date` for MySQL parity. #48000 (Robert Schulze).
- Added operator `REGEXP` (similar to operators "LIKE", "IN", "MOD" etc.) for better compatibility with MySQL #47869 (Robert Schulze).
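`widthBucket(operand, low, high, count)`, added above, follows the standard SQL WIDTH_BUCKET semantics: values are mapped to one of `count` equal-width buckets between `low` and `high`, with 0 returned below the range and `count + 1` at or above its upper bound. A rough Python equivalent, assuming those semantics:

```python
import math


def width_bucket(x: float, low: float, high: float, count: int) -> int:
    """Equi-width histogram bucket number: 1..count inside the range,
    0 below it, count + 1 at or above the upper bound."""
    if x < low:
        return 0
    if x >= high:
        return count + 1
    return int(math.floor((x - low) * count / (high - low))) + 1
```

For instance, with `low=0`, `high=10`, `count=5` the buckets are `[0,2), [2,4), [4,6), [6,8), [8,10)`, so a value of 5 falls into bucket 3.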
Performance Improvement
- Marks in memory are now compressed, using 3-6x less memory. #47290 (Michael Kolupaev).
- Backups for large numbers of files were unbelievably slow in previous versions. Not anymore. Now they are unbelievably fast. #47251 (Alexey Milovidov). Introduced a separate thread pool for backup's IO operations. This will allow scaling it independently of other pools and increase performance. #47174 (Nikita Mikhaylov). Use MultiRead request and retries for collecting metadata at the final stage of backup processing. #47243 (Nikita Mikhaylov). If a backup and restoring data are both in S3 then server-side copy should be used from now on. #47546 (Vitaly Baranov).
- Fixed excessive reading in queries with `FINAL`. #47801 (Nikita Taranov).
- Setting `max_final_threads` would be set to the number of cores at server startup (by the same algorithm as used for `max_threads`). This improves the concurrency of `FINAL` execution on servers with a high number of CPUs. #47915 (Nikita Taranov).
- Allow executing the reading pipeline for the DIRECT dictionary with a CLICKHOUSE source in multiple threads. To enable, set `dictionary_use_async_executor=1` in the `SETTINGS` section for the source in the `CREATE DICTIONARY` statement. #47986 (Vladimir C).
- Optimize one nullable key aggregate performance. #45772 (LiuNeng).
- Implemented lowercase `tokenbf_v1` index utilization for `hasTokenOrNull`, `hasTokenCaseInsensitive` and `hasTokenCaseInsensitiveOrNull`. #46252 (ltrk2).
- Optimize functions `position` and `LIKE` by searching the first two chars using SIMD. #46289 (Jiebin Sun).
- Optimize queries from the `system.detached_parts`, which could be significantly large. Added several sources with respect to the block size limitation; in each block, an IO thread pool is used to calculate the part size, i.e. to make syscalls in parallel. #46624 (Sema Checherinda).
- Increase the default value of `max_replicated_merges_in_queue` for ReplicatedMergeTree tables from 16 to 1000. It allows faster background merge operation on clusters with a very large number of replicas, such as clusters with shared storage in ClickHouse Cloud. #47050 (Alexey Milovidov).
- Updated `clickhouse-copier` to use `GROUP BY` instead of `DISTINCT` to get the list of partitions. For large tables, this reduced the select time from over 500s to under 1s. #47386 (Clayton McClure).
- Fix performance degradation in `ASOF JOIN`. #47544 (Ongkong).
- Even more batching in Keeper. Improve performance by avoiding breaking batches on read requests. #47978 (Antonio Andelic).
- Allow PREWHERE for Merge with different DEFAULT expressions for columns. #46831 (Azat Khuzhin).
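The `position`/`LIKE` optimization above searches for the first two needle characters at once before doing a full comparison. A scalar Python sketch of the same idea (the real implementation uses SIMD; the 1-based result with 0 for "not found" follows ClickHouse's documented `position` semantics):

```python
def position(haystack: bytes, needle: bytes) -> int:
    """1-based index of the first occurrence of needle, 0 if absent."""
    if not needle:
        return 1
    if len(needle) == 1:
        return haystack.find(needle) + 1
    anchor = needle[:2]
    i = haystack.find(anchor)              # cheap two-byte anchor scan first
    while i != -1:
        if haystack[i:i + len(needle)] == needle:   # full check only on anchor hits
            return i + 1
        i = haystack.find(anchor, i + 1)
    return 0
```

Filtering candidate positions by the first two bytes rejects most non-matches before the expensive full comparison, which is the same trade-off the SIMD version exploits.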
Experimental Feature
- Parallel replicas: Improved the overall performance by better utilizing the local replica, and forbid the reading with parallel replicas from non-replicated MergeTree by default. #47858 (Nikita Mikhaylov).
- Support filter push down to the left table for JOIN with `Join`, `Dictionary` and `EmbeddedRocksDB` tables if the experimental Analyzer is enabled. #47280 (Maksim Kita).
- Now ReplicatedMergeTree with zero-copy replication puts less load on Keeper. #47676 (alesapin).
- Fix create materialized view with MaterializedPostgreSQL #40807 (Maksim Buren).
Improvement
- Enable `input_format_json_ignore_unknown_keys_in_named_tuple` by default. #46742 (Kruglov Pavel).
- Allow errors to be ignored while pushing to MATERIALIZED VIEW (add new setting `materialized_views_ignore_errors`, by default `false`, but it is set to `true` for flushing logs to `system.*_log` tables unconditionally). #46658 (Azat Khuzhin).
- Track the file queue of distributed sends in memory. #45491 (Azat Khuzhin).
- Now `X-ClickHouse-Query-Id` and `X-ClickHouse-Timezone` headers are added to responses in all queries via the HTTP protocol. Previously it was done only for `SELECT` queries. #46364 (Anton Popov).
- External tables from `MongoDB`: support for connection to a replica set via a URI with a host:port enum and support for the readPreference option in MongoDB dictionaries. Example URI: mongodb://db0.example.com:27017,db1.example.com:27017,db2.example.com:27017/?replicaSet=myRepl&readPreference=primary. #46524 (artem-yadr).
- This improvement should be invisible for users. Re-implement projection analysis on top of the query plan. Added setting `query_plan_optimize_projection=1` to switch between the old and new versions. Fixes #44963. #46537 (Nikolai Kochetov).
- Use Parquet format v2 instead of v1 in the output format by default. Add setting `output_format_parquet_version` to control the parquet version, possible values `1.0`, `2.4`, `2.6`, `2.latest` (default). #46617 (Kruglov Pavel).
- It is now possible to use the new configuration syntax to configure Kafka topics with periods (`.`) in their name. #46752 (Robert Schulze).
- Fix heuristics that check hyperscan patterns for problematic repeats. #46819 (Robert Schulze).
- Don't report ZK node exists to system.errors when a block was created concurrently by a different replica. #46820 (Raúl Marín).
- Increase the limit for opened files in `clickhouse-local`. It will be able to read from `web` tables on servers with a huge number of CPU cores. Do not back off reading from the URL table engine in case of too many opened files. This closes #46852. #46853 (Alexey Milovidov).
- Exceptions thrown when numbers cannot be parsed now have an easier-to-read exception message. #46917 (Robert Schulze).
- Added an update of `system.backups` after every processed task to track the progress of backups. #46989 (Aleksandr Musorin).
- Allow types conversion in Native input format. Add setting `input_format_native_allow_types_conversion` that controls it (enabled by default). #46990 (Kruglov Pavel).
- Allow IPv4 in the `range` function to generate IP ranges. #46995 (Yakov Olkhovskiy).
- Improve exception message when it's impossible to move a part from one volume/disk to another. #47032 (alesapin).
- Support `Bool` type in `JSONType` function. Previously `Null` type was mistakenly returned for bool values. #47046 (Anton Popov).
- Use `_request_body` parameter to configure predefined HTTP queries. #47086 (Constantine Peresypkin).
- Automatic indentation in the built-in UI SQL editor when Enter is pressed. #47113 (Alexey Korepanov).
- Self-extraction with 'sudo' will attempt to set uid and gid of extracted files to running user. #47116 (Yakov Olkhovskiy).
- Previously, the `repeat` function's second argument only accepted an unsigned integer type, which meant it could not accept values such as -1. This behavior differed from that of the Spark function. In this update, the `repeat` function has been modified to match the behavior of the Spark function. It now accepts the same types of inputs, including negative integers. Extensive testing has been performed to verify the correctness of the updated implementation. #47134 (KevinyhZou). Note: the changelog entry was rewritten by ChatGPT.
- Remove `::__1` part from stacktraces. Display `std::basic_string<char, ...` as `String` in stacktraces. #47171 (Mike Kot).
- Reimplement interserver mode to avoid replay attacks (note that this change is backward compatible with older servers). #47213 (Azat Khuzhin).
- Improve recognition of regular expression groups and refine the regexp_tree dictionary. #47218 (Han Fei).
- Keeper improvement: Add new 4LW `clrs` to clean resources used by Keeper (e.g. release unused memory). #47256 (Antonio Andelic).
- Add optional arguments to codecs `DoubleDelta(bytes_size)`, `Gorilla(bytes_size)`, `FPC(level, float_size)`, this allows using these codecs without column type in `clickhouse-compressor`. Fix possible aborts and arithmetic errors in `clickhouse-compressor` with these codecs. Fixes: https://github.com/ClickHouse/ClickHouse/discussions/47262. #47271 (Kruglov Pavel).
- Add support for big int types to the `runningDifference` function. Closes #47194. #47322 (Nikolay Degterinsky).
- Add an expiration window for S3 credentials that have an expiration time to avoid `ExpiredToken` errors in some edge cases. It can be controlled with the `expiration_window_seconds` config, the default is 120 seconds. #47423 (Antonio Andelic).
- Support Decimals and Date32 in `Avro` format. #47434 (Kruglov Pavel).
- Do not start the server if an interrupted conversion from `Ordinary` to `Atomic` was detected, print a better error message with troubleshooting instructions. #47487 (Alexander Tokmakov).
- Add a new column `kind` to the `system.opentelemetry_span_log`. This column holds the value of SpanKind defined in OpenTelemetry. #47499 (Frank Chen).
- Allow reading/writing nested arrays in `Protobuf` format with only the root field name as column name. Previously the column name should've contained all nested field names (like `a.b.c Array(Array(Array(UInt32)))`), now you can use just `a Array(Array(Array(UInt32)))`. #47650 (Kruglov Pavel).
- Added an optional `STRICT` modifier for `SYSTEM SYNC REPLICA` which makes the query wait for the replication queue to become empty (just like it worked before https://github.com/ClickHouse/ClickHouse/pull/45648). #47659 (Alexander Tokmakov).
- Improve the naming of some OpenTelemetry span logs. #47667 (Frank Chen).
- Prevent using too long chains of aggregate function combinators (they can lead to slow queries in the analysis stage). This closes #47715. #47716 (Alexey Milovidov).
- Support for subquery in parameterized views; resolves #46741 #47725 (SmitaRKulkarni).
- Fix memory leak in MySQL integration (reproduces with `connection_auto_close=1`). #47732 (Kseniia Sumarokova).
- Improved error handling in the code related to Decimal parameters, resulting in more informative error messages. Previously, when incorrect Decimal parameters were supplied, the error message generated was unclear or unhelpful. With this update, the error message printed has been fixed to provide more detailed and useful information, making it easier to identify and correct issues related to Decimal parameters. #47812 (Yu Feng). Note: this changelog entry is rewritten by ChatGPT.
- The parameter `exact_rows_before_limit` makes `rows_before_limit_at_least` accurately reflect the number of rows returned before the limit is reached. This pull request addresses issues encountered when the query involves distributed processing across multiple shards or sorting operations. Prior to this update, these scenarios were not functioning as intended. #47874 (Amos Bird).
- ThreadPools metrics introspection. #47880 (Azat Khuzhin).
- Add `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` profile events. #47885 (Antonio Andelic).
- Add `--link` and `--noninteractive` (`-y`) options to ClickHouse install. Closes #47750. #47887 (Nikolay Degterinsky).
- Fixed `UNKNOWN_TABLE` exception when attaching to a materialized view that has dependent tables that are not available. This might be useful when trying to restore state from a backup. #47975 (MikhailBurdukov).
- Fix case when the (optional) path is not added to an encrypted disk configuration. #47981 (Kseniia Sumarokova).
- Support for CTE in parameterized views. Implementation: updated to allow query parameters while evaluating scalar subqueries. #48065 (SmitaRKulkarni).
- Support big integers `(U)Int128/(U)Int256`, `Map` with any key type and `DateTime64` with any precision (not only 3 and 6). #48119 (Kruglov Pavel).
- Allow skipping errors related to unknown enum values in row input formats. #48133 (Alexey Milovidov).
Build/Testing/Packaging Improvement
- ClickHouse now builds with `C++23`. #47424 (Robert Schulze).
- Fuzz `EXPLAIN` queries in the AST Fuzzer. #47803 #47852 (flynn).
- Split stress test and the automated backward compatibility check (now Upgrade check). #44879 (Kruglov Pavel).
- Updated the Ubuntu Image for Docker to calm down some bogus security reports. #46784 (Julio Jimenez). Please note that ClickHouse has no dependencies and does not require Docker.
- Adds a prompt to allow the removal of an existing `clickhouse` download when using "curl | sh" download of ClickHouse. Prompt is "ClickHouse binary clickhouse already exists. Overwrite? [y/N]". #46859 (Dan Roscigno).
- Fix error during server startup on old distros (e.g. Amazon Linux 2) and on ARM that glibc 2.28 symbols are not found. #47008 (Robert Schulze).
- Prepare for clang 16. #47027 (Amos Bird).
- Added a CI check which ensures ClickHouse can run with an old glibc on ARM. #47063 (Robert Schulze).
- Add a style check to prevent incorrect usage of the `NDEBUG` macro. #47699 (Alexey Milovidov).
- Speed up the build a little. #47714 (Alexey Milovidov).
- Bump `vectorscan` to 5.4.9. #47955 (Robert Schulze).
- Add a unit test to assert Apache Arrow's fatal logging does not abort. It covers the changes in ClickHouse/arrow#16. #47958 (Arthur Passos).
- Restore the ability of native macOS debug server build to start. #48050 (Robert Schulze). Note: this change is only relevant for development, as the ClickHouse official builds are done with cross-compilation.
Bug Fix (user-visible misbehavior in an official stable release)
- Fix formats parser resetting, test processing bad messages in `Kafka` #45693 (Kruglov Pavel).
- Fix data size calculation in Keeper #46086 (Antonio Andelic).
- Fixed a bug in automatic retries of `DROP TABLE` query with `ReplicatedMergeTree` tables and `Atomic` databases. In rare cases it could lead to `Can't get data for node /zk_path/log_pointer` and `The specified key does not exist` errors if the ZooKeeper session expired during DROP and a new replicated table with the same path in ZooKeeper was created in parallel. #46384 (Alexander Tokmakov).
- Fix incorrect alias recursion while normalizing queries that prevented some queries to run. #46609 (Raúl Marín).
- Fix IPv4/IPv6 serialization/deserialization in binary formats #46616 (Kruglov Pavel).
- ActionsDAG: do not change result of `and` during optimization #46653 (Salvatore Mesoraca).
- Improve query cancellation when a client dies #46681 (Alexander Tokmakov).
- Fix arithmetic operations in aggregate optimization #46705 (Duc Canh Le).
- Fix possible `clickhouse-local`'s abort on JSONEachRow schema inference #46731 (Kruglov Pavel).
- Fix changing an expired role #46772 (Vitaly Baranov).
- Fix combined PREWHERE column accumulation from multiple steps #46785 (Alexander Gololobov).
- Use initial range for fetching file size in HTTP read buffer. Without this change, some remote files couldn't be processed. #46824 (Antonio Andelic).
- Fix the incorrect progress bar while using the URL tables #46830 (Antonio Andelic).
- Fix MSan report in `maxIntersections` function #46847 (Alexey Milovidov).
- Fix a bug in `Map` data type #46856 (Alexey Milovidov).
- Fix wrong results of some LIKE searches when the LIKE pattern contains quoted non-quotable characters #46875 (Robert Schulze).
- Fix: WITH FILL would produce an abort when the Filling Transform processed an empty block #46897 (Yakov Olkhovskiy).
- Fix date and int inference from string in JSON #46972 (Kruglov Pavel).
- Fix bug in zero-copy replication disk choice during fetch #47010 (alesapin).
- Fix a typo in systemd service definition #47051 (Palash Goel).
- Fix the NOT_IMPLEMENTED error with CROSS JOIN and algorithm = auto #47068 (Vladimir C).
- Fix the problem that a `ReplicatedMergeTree` table failed to insert two similar pieces of data when `part_type` is configured as `InMemory` (experimental feature). #47121 (liding1992).
- External dictionaries / library-bridge: Fix error "unknown library method 'extDict_libClone'" #47136 (alex filatov).
- Fix race condition in a grace hash join with limit #47153 (Vladimir C).
- Fix concrete columns PREWHERE support #47154 (Azat Khuzhin).
- Fix possible deadlock in Query Status #47161 (Kruglov Pavel).
- Forbid insert select for the same `Join` table, as it leads to a deadlock #47260 (Vladimir C).
- Skip merged partitions for `min_age_to_force_merge_seconds` merges #47303 (Antonio Andelic).
- Modify find_first_symbols, so it works as expected for find_first_not_symbols #47304 (Arthur Passos).
- Fix big numbers inference in CSV #47410 (Kruglov Pavel).
- Disable logical expression optimizer for expression with aliases. #47451 (Nikolai Kochetov).
- Fix error in `decodeURLComponent` #47457 (Alexey Milovidov).
- Fix explain graph with projection #47473 (flynn).
- Fix query parameters #47488 (Alexey Milovidov).
- Parameterized view: a bug fix. #47495 (SmitaRKulkarni).
- Fuzzer of data formats, and the corresponding fixes. #47519 (Alexey Milovidov).
- Fix monotonicity check for `DateTime64` #47526 (Antonio Andelic).
- Fix "block structure mismatch" for a Nullable LowCardinality column #47537 (Nikolai Kochetov).
- Proper fix for a bug in Apache Parquet #45878 #47538 (Kruglov Pavel).
- Fix `BSONEachRow` parallel parsing when document size is invalid #47540 (Kruglov Pavel).
- Preserve error in `system.distribution_queue` on `SYSTEM FLUSH DISTRIBUTED` #47541 (Azat Khuzhin).
- Check for duplicate column in `BSONEachRow` format #47609 (Kruglov Pavel).
- Fix wait for zero copy lock during move #47631 (alesapin).
- Fix aggregation by partitions #47634 (Nikita Taranov).
- Fix bug in tuple as array serialization in `BSONEachRow` format #47690 (Kruglov Pavel).
- Fix crash in `polygonsSymDifferenceCartesian` #47702 (pufit).
- Fix reading from storage `File` compressed files with `zlib` and `gzip` compression #47796 (Anton Popov).
- Improve empty query detection for PostgreSQL (for pgx golang driver) #47854 (Azat Khuzhin).
- Fix DateTime monotonicity check for LowCardinality types #47860 (Antonio Andelic).
- Use restore_threads (not backup_threads) for RESTORE ASYNC #47861 (Azat Khuzhin).
- Fix DROP COLUMN with ReplicatedMergeTree containing projections #47883 (Antonio Andelic).
- Fix for Replicated database recovery #47901 (Alexander Tokmakov).
- Hotfix for too verbose warnings in HTTP #47903 (Alexander Tokmakov).
- Fix "Field value too long" in `catboostEvaluate` #47970 (Robert Schulze).
- Fix #36971: Watchdog: exit with non-zero code if child process exits #47973 (Коренберг Марк).
- Fix for "index file `cidx` is unexpectedly long" #48010 (SmitaRKulkarni).
- Fix MaterializedPostgreSQL query to get attributes (replica-identity) #48015 (Solomatov Sergei).
- parseDateTime(): Fix UB (signed integer overflow) #48019 (Robert Schulze).
- Use unique names for Records in Avro to avoid reusing its schema #48057 (Kruglov Pavel).
- Correctly set TCP/HTTP socket timeouts in Keeper #48108 (Antonio Andelic).
- Fix possible member call on null pointer in `Avro` format #48184 (Kruglov Pavel).
ClickHouse release 23.2, 2023-02-23
Backward Incompatible Change
- Extend function "toDayOfWeek()" (alias: "DAYOFWEEK") with a mode argument that encodes whether the week starts on Monday or Sunday and whether counting starts at 0 or 1. For consistency with other date time functions, the mode argument was inserted between the time and the time zone arguments. This breaks existing usage of the (previously undocumented) 2-argument syntax "toDayOfWeek(time, time_zone)". A fix is to rewrite the function into "toDayOfWeek(time, 0, time_zone)". #45233 (Robert Schulze).
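The mode semantics described above (which day starts the week, and whether counting is 0- or 1-based) can be sketched in Python. The exact mode-to-behavior mapping below is an assumption for illustration only; consult the ClickHouse documentation for the authoritative table:

```python
from datetime import date

def to_day_of_week(d: date, mode: int = 0) -> int:
    # Hypothetical mapping of the four mode combinations:
    #   mode 0: week starts Monday, counting starts at 1 (Mon=1..Sun=7)
    #   mode 1: week starts Monday, counting starts at 0 (Mon=0..Sun=6)
    #   mode 2: week starts Sunday, counting starts at 1 (Sun=1..Sat=7)
    #   mode 3: week starts Sunday, counting starts at 0 (Sun=0..Sat=6)
    monday0 = d.weekday()          # Python: Mon=0..Sun=6
    sunday0 = (monday0 + 1) % 7    # shift so Sun=0..Sat=6
    if mode == 0:
        return monday0 + 1
    if mode == 1:
        return monday0
    if mode == 2:
        return sunday0 + 1
    return sunday0

# 2023-02-23 is a Thursday
print(to_day_of_week(date(2023, 2, 23), 0))  # 4
```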
- Rename setting `max_query_cache_size` to `filesystem_cache_max_download_size`. #45614 (Kseniia Sumarokova).
- The `default` user will not have permissions for access type `SHOW NAMED COLLECTION` by default (e.g. `default` user will no longer be able to grant ALL to other users as it was before, therefore this PR is backward incompatible). #46010 (Kseniia Sumarokova).
- If the SETTINGS clause is specified before the FORMAT clause, the settings will be applied to formatting as well. #46003 (Azat Khuzhin).
- Remove support for setting `materialized_postgresql_allow_automatic_update` (which was by default turned off). #46106 (Kseniia Sumarokova).
- Slightly improve performance of `countDigits` on realistic datasets. This closed #44518. In previous versions, `countDigits(0)` returned `0`; now it returns `1`, which is more correct, and follows the existing documentation. #46187 (Alexey Milovidov).
- Disallow creation of new columns compressed by a combination of codecs "Delta" or "DoubleDelta" followed by codecs "Gorilla" or "FPC". This can be bypassed using setting "allow_suspicious_codecs = true". #45652 (Robert Schulze).
New Feature
- Add `StorageIceberg` and table function `iceberg` to access iceberg table store on S3. #45384 (flynn).
- Allow configuring storage as `SETTINGS disk = '<disk_name>'` (instead of `storage_policy`) and with explicit disk creation `SETTINGS disk = disk(type=s3, ...)`. #41976 (Kseniia Sumarokova).
- Expose `ProfileEvents` counters in `system.part_log`. #38614 (Bharat Nallan).
- Enrichment of the existing `ReplacingMergeTree` engine to allow duplicate insertions. It leverages the power of both `ReplacingMergeTree` and `CollapsingMergeTree` in one MergeTree engine. Deleted data are not returned when queried, but not removed from disk either. #41005 (youennL-cs).
- Add `generateULID` function. Closes #36536. #44662 (Nikolay Degterinsky).
- Add `corrMatrix` aggregate function, calculating the correlation between each pair of columns. In addition, since the aggregate functions `covarSamp` and `covarPop` are similar to `corr`, I add `covarSampMatrix`, `covarPopMatrix` by the way. @alexey-milovidov closes #44587. #44680 (FFFFFFFHHHHHHH).
- Introduce arrayShuffle function for random array permutations. #45271 (Joanna Hulboj).
- Support type `FIXED_SIZE_BINARY` in Arrow and `FIXED_LENGTH_BYTE_ARRAY` in `Parquet` and match them to `FixedString`. Add settings `output_format_parquet_fixed_string_as_fixed_byte_array`/`output_format_arrow_fixed_string_as_fixed_byte_array` to control default output type for FixedString. Closes #45326. #45340 (Kruglov Pavel).
- Add a new column `last_exception_time` to system.replication_queue. #45457 (Frank Chen).
- Add two new functions which allow for user-defined keys/seeds with SipHash{64,128}. #45513 (Salvatore Mesoraca).
- Allow a three-argument version for table function `format`. Closes #45808. #45873 (FFFFFFFHHHHHHH).
- Add `JodaTime` format support for 'x','w','S'. Refer to https://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html. #46073 (zk_kiger).
- Support window function `ntile`. (lgbo).
- Add setting `final` to implicitly apply the `FINAL` modifier to every table. #40945 (Arthur Passos).
- Added `arrayPartialSort` and `arrayPartialReverseSort` functions. #46296 (Joanna Hulboj).
- The new http parameter `client_protocol_version` allows setting a client protocol version for HTTP responses using the Native format. #40397. #46360 (Geoff Genz).
- Add new function `regexpExtract`, like the Spark function `REGEXP_EXTRACT`, for compatibility. It is similar to the existing function `extract`. #46469 (李扬).
- Add new function `JSONArrayLength`, which returns the number of elements in the outermost JSON array. The function returns NULL if the input JSON string is invalid. #46631 (李扬).
Performance Improvement
- The introduced logic works if PREWHERE condition is a conjunction of multiple conditions (cond1 AND cond2 AND ... ). It groups those conditions that require reading the same columns into steps. After each step the corresponding part of the full condition is computed and the result rows might be filtered. This allows to read fewer rows in the next steps thus saving IO bandwidth and doing less computation. This logic is disabled by default for now. It will be enabled by default in one of the future releases once it is known to not have any regressions, so it is highly encouraged to be used for testing. It can be controlled by 2 settings: "enable_multiple_prewhere_read_steps" and "move_all_conditions_to_prewhere". #46140 (Alexander Gololobov).
- An option added to aggregate partitions independently if table partition key and group by key are compatible. Controlled by the setting `allow_aggregate_partitions_independently`. Disabled by default because of limited applicability (please refer to the docs). #45364 (Nikita Taranov).
- Allow using Vertical merge algorithm with parts in Compact format. This will allow ClickHouse server to use much less memory for background operations. This closes #46084. #45681 #46282 (Anton Popov).
- Optimize `Parquet` reader by using batch reader. #45878 (LiuNeng).
- Add new `local_filesystem_read_method` method `io_uring` based on the asynchronous Linux io_uring subsystem, improving read performance almost universally compared to the default `pread` method. #38456 (Saulius Valatka).
- Rewrite aggregate functions with `if` expression as argument when logically equivalent. For example, `avg(if(cond, col, null))` can be rewritten to `avgIf(cond, col)`. It is helpful for performance. #44730 (李扬).
- Improve lower/upper function performance with avx512 instructions. #37894 (yaqi-zhao).
- Remove the limitation that on systems with >=32 cores and SMT disabled ClickHouse uses only half of the cores (the case when you disable Hyper Threading in BIOS). #44973 (Robert Schulze).
- Improve performance of function `multiIf` by columnar execution, a 2.3x speedup. #45296 (李扬).
- Add fast path for function `position` when the needle is empty. #45382 (李扬).
- Enable `query_plan_remove_redundant_sorting` optimization by default. Optimization implemented in #45420. #45567 (Igor Nikonov).
- Increased HTTP Transfer Encoding chunk size to improve performance of large queries using the HTTP interface. #45593 (Geoff Genz).
- Fixed performance of short `SELECT` queries that read from tables with a large number of `Array`/`Map`/`Nested` columns. #45630 (Anton Popov).
- Improve performance of filtering for big integers and decimal types. #45949 (李扬).
- This change could effectively reduce the overhead of obtaining the filter from ColumnNullable(UInt8) and improve the overall query performance. To evaluate the impact of this change, we adopted TPC-H benchmark but revised the column types from non-nullable to nullable, and we measured the QPS of its queries as the performance indicator. #45962 (Zhiguo Zhou).
- Make the `_part` and `_partition_id` virtual columns be of `LowCardinality(String)` type. Closes #45964. #45975 (flynn).
- Improve the performance of Decimal conversion when the scale does not change. #46095 (Alexey Milovidov).
- Allow to increase prefetching for read data. #46168 (Kseniia Sumarokova).
- Rewrite `arrayExists(x -> x = 1, arr)` -> `has(arr, 1)`, which improves performance by 1.34x. #46188 (李扬).
- Fix too big memory usage for vertical merges on non-remote disk. Respect `max_insert_delayed_streams_for_parallel_write` for the remote disk. #46275 (Nikolai Kochetov).
- Update zstd to v1.5.4. It has some minor improvements in performance and compression ratio. If you run replicas with different versions of ClickHouse you may see reasonable error messages `Data after merge/mutation is not byte-identical to data on another replicas.` with explanation. These messages are Ok and you should not worry. #46280 (Raúl Marín).
- Fix performance degradation caused by #39737. #46309 (Alexey Milovidov).
- The `replicas_status` handle will answer quickly even in case of a large replication queue. #46310 (Alexey Milovidov).
- Add avx512 support for aggregate function `sum`, function unary arithmetic, function comparison. #37870 (zhao zhou).
- Rewrote the code around marks distribution and the overall coordination of the reading in order to achieve the maximum performance improvement. This closes #34527. #43772 (Nikita Mikhaylov).
- Remove redundant DISTINCT clauses in query (subqueries). Implemented on top of query plan. It does similar optimization as `optimize_duplicate_order_by_and_distinct` regarding DISTINCT clauses. Can be enabled via `query_plan_remove_redundant_distinct` setting. Related to #42648. #44176 (Igor Nikonov).
- A few query rewrite optimizations: `sumIf(123, cond) -> 123 * countIf(1, cond)`, `sum(if(cond, 123, 0)) -> 123 * countIf(cond)`, `sum(if(cond, 0, 123)) -> 123 * countIf(not(cond))` #44728 (李扬).
- Improved how memory bound merging and aggregation in order on top query plan interact. Previously we fell back to explicit sorting for AIO in some cases when it wasn't actually needed. #45892 (Nikita Taranov).
- Concurrent merges are scheduled using round-robin by default to ensure fair and starvation-free operation. Previously in heavily overloaded shards, big merges could possibly be starved by smaller merges due to the use of strict priority scheduling. Added `background_merges_mutations_scheduling_policy` server config option to select scheduling algorithm (`round_robin` or `shortest_task_first`). #46247 (Sergei Trifonov).
Improvement
- Enable retries for INSERT by default in case of ZooKeeper session loss. We already use it in production. #46308 (Alexey Milovidov).
- Add ability to ignore unknown keys in JSON object for named tuples (`input_format_json_ignore_unknown_keys_in_named_tuple`). #45678 (Azat Khuzhin).
- Support optimizing the `where` clause with sorting key expression move to `prewhere` for query with `final`. #38893. #38950 (hexiaoting).
- Add new metrics for backups: num_processed_files and processed_files_size describe the actual number and total size of processed files. #42244 (Aleksandr).
- Added retries on interserver DNS errors. #43179 (Anton Kozlov).
- Keeper improvement: try preallocating space on the disk to avoid undefined out-of-space issues. Introduce setting `max_log_file_size` for the maximum size of Keeper's Raft log files. #44370 (Antonio Andelic).
- Optimize behavior for a replica delay api logic in case the replica is read-only. #45148 (mateng915).
- Ask for the password in clickhouse-client interactively in a case when the empty password is wrong. Closes #46702. #46730 (Nikolay Degterinsky).
- Mark `Gorilla` compression on columns of non-Float* type as suspicious. #45376 (Robert Schulze).
- Show replica name that is executing a merge in the `postpone_reason` column. #45458 (Frank Chen).
- Save exception stack trace in part_log. #45459 (Frank Chen).
- The `regexp_tree` dictionary is polished and now it is compatible with https://github.com/ua-parser/uap-core. #45631 (Han Fei).
- Updated checking of `SYSTEM SYNC REPLICA`, resolves #45508 #45648 (SmitaRKulkarni).
- Rename setting `replication_alter_partitions_sync` to `alter_sync`. #45659 (Antonio Andelic).
- The `generateRandom` table function and the engine now support `LowCardinality` data types. This is useful for testing, for example you can write `INSERT INTO table SELECT * FROM generateRandom() LIMIT 1000`. This is needed to debug #45590. #45661 (Alexey Milovidov).
- The experimental query result cache now provides more modular configuration settings. #45679 (Robert Schulze).
- Renamed "query result cache" to "query cache". #45682 (Robert Schulze).
- Add `SYSTEM SYNC FILE CACHE` command. It will do the `sync` syscall. #8921. #45685 (DR).
- Add a new S3 setting `allow_head_object_request`. This PR makes usage of `GetObjectAttributes` request instead of `HeadObject` introduced in https://github.com/ClickHouse/ClickHouse/pull/45288 optional (and disabled by default). #45701 (Vitaly Baranov).
- Add ability to override connection settings based on connection names (that said that now you can forget about storing password for each connection, you can simply put everything into `~/.clickhouse-client/config.xml` and even use different history files for them, which can be also useful). #45715 (Azat Khuzhin).
- Arrow format: support the duration type. Closes #45669. #45750 (flynn).
- Extend the logging in the Query Cache to improve investigations of the caching behavior. #45751 (Robert Schulze).
- The query cache's server-level settings are now reconfigurable at runtime. #45758 (Robert Schulze).
- Hide password in logs when a table function's arguments are specified with a named collection. #45774 (Vitaly Baranov).
- Improve internal S3 client to correctly deduce regions and redirections for different types of URLs. #45783 (Antonio Andelic).
- Add support for Map, IPv4 and IPv6 types in generateRandom. Mostly useful for testing. #45785 (Raúl Marín).
- Support empty/notEmpty for IP types. #45799 (Yakov Olkhovskiy).
- The column `num_processed_files` was split into two columns: `num_files` (for BACKUP) and `files_read` (for RESTORE). The column `processed_files_size` was split into two columns: `total_size` (for BACKUP) and `bytes_read` (for RESTORE). #45800 (Vitaly Baranov).
- Add support for `SHOW ENGINES` query for MySQL compatibility. #45859 (Filatenkov Artur).
- Improved how the obfuscator deals with queries. #45867 (Raúl Marín).
- Improve behaviour of conversion into Date for boundary value 65535 (2149-06-06). #46042 #45914 (Joanna Hulboj).
- Add setting `check_referential_table_dependencies` to check referential dependencies on `DROP TABLE`. This PR solves #38326. #45936 (Vitaly Baranov).
- Fix `tupleElement` to return `Null` when having `Null` argument. Closes #45894. #45952 (flynn).
- Throw an error on no files satisfying the S3 wildcard. Closes #45587. #45957 (chen).
- Use cluster state data to check concurrent backup/restore. #45982 (SmitaRKulkarni).
- ClickHouse Client: Use "exact" matching for fuzzy search, which has correct case ignorance and more appropriate algorithm for matching SQL queries. #46000 (Azat Khuzhin).
- Forbid wrong create View syntax `CREATE View X TO Y AS SELECT`. Closes #4331. #46043 (flynn).
- Storage `Log` family supports setting the `storage_policy`. Closes #43421. #46044 (flynn).
- Improve `JSONColumns` format when the result is empty. Closes #46024. #46053 (flynn).
- Add reference implementation for SipHash128. #46065 (Salvatore Mesoraca).
- Add a new metric to record allocations times and bytes using mmap. #46068 (李扬).
- Currently for functions like `leftPad`, `rightPad`, `leftPadUTF8`, `rightPadUTF8`, the second argument `length` must be UInt8|16|32|64|128|256, which is too strict for ClickHouse users; besides, it is not consistent with other similar functions like `arrayResize`, `substring` and so on. #46103 (李扬).
- Fix assertion in the `welchTTest` function in debug build when the resulting statistics is NaN. Unified the behavior with other similar functions. Change the behavior of `studentTTest` to return NaN instead of throwing an exception because the previous behavior was inconvenient. This closes #41176. This closes #42162. #46141 (Alexey Milovidov).
- More convenient usage of big integers and ORDER BY WITH FILL. Allow using plain integers for start and end points in WITH FILL when ORDER BY big (128-bit and 256-bit) integers. Fix the wrong result for big integers with negative start or end points. This closes #16733. #46152 (Alexey Milovidov).
- Add `parts`, `active_parts` and `total_marks` columns to `system.tables` on issue. #46161 (attack204).
- Functions "multi[Fuzzy]Match(Any|AnyIndex|AllIndices)" now reject regexes which will likely evaluate very slowly in vectorscan. #46167 (Robert Schulze).
- When `insert_null_as_default` is enabled and the column doesn't have a defined default value, the default of the column type will be used. Also this PR fixes using default values on nulls in case of LowCardinality columns. #46171 (Kruglov Pavel).
- Prefer explicitly defined access keys for S3 clients. If `use_environment_credentials` is set to `true`, and the user has provided the access key through query or config, they will be used instead of the ones from the environment variable. #46191 (Antonio Andelic).
- Add an alias "DATE_FORMAT()" for function "formatDateTime()" to improve compatibility with MySQL's SQL dialect, extend function `formatDateTime` with substitutions "a", "b", "c", "h", "i", "k", "l", "r", "s", "W". `DATE_FORMAT` is an alias of `formatDateTime`. It formats a Time according to the given Format string. Format is a constant expression, so you cannot have multiple formats for a single result column. #46302 (Jake Bamrah).
- Add `ProfileEvents` and `CurrentMetrics` about the callback tasks for parallel replicas (`s3Cluster` and `MergeTree` tables). #46313 (Alexey Milovidov).
- Add support for `DELETE` and `UPDATE` for tables using the `KeeperMap` storage engine. #46330 (Antonio Andelic).
- Allow writing RENAME queries with query parameters. Resolves #45778. #46407 (Nikolay Degterinsky).
- Fix parameterized SELECT queries with REPLACE transformer. Resolves #33002. #46420 (Nikolay Degterinsky).
- Exclude the internal database used for temporary/external tables from the calculation of asynchronous metric "NumberOfDatabases". This makes the behavior consistent with system table "system.databases". #46435 (Robert Schulze).
- Added `last_exception_time` column into distribution_queue table. #46564 (Aleksandr).
- Support for IN clause with parameter in parameterized views. #46583 (SmitaRKulkarni).
- Do not load named collections on server startup (load them on first access instead). #46607 (Kseniia Sumarokova).
Build/Testing/Packaging Improvement
- Introduce GWP-ASan implemented by the LLVM runtime. This closes #27039. #45226 (Han Fei).
- We want to make our tests less stable and more flaky: add randomization for merge tree settings in tests. #38983 (Anton Popov).
- Enable HDFS support on PowerPC, which helps fix the following functional tests: 02113_hdfs_assert.sh, 02244_hdfs_cluster.sql, and 02368_cancel_write_into_hdfs.sh. #44949 (MeenaRenganathan22).
- Add systemd.service file for clickhouse-keeper. Fixes #44293. #45568 (Mikhail f. Shiryaev).
- ClickHouse's fork of poco was moved from "contrib/" to "base/poco/". #46075 (Robert Schulze).
- Add an option for `clickhouse-watchdog` to restart the child process. This is of limited use on its own. #46312 (Alexey Milovidov). - If the environment variable `CLICKHOUSE_DOCKER_RESTART_ON_EXIT` is set to 1, the Docker container will run `clickhouse-server` as a child process instead of the first process, and restart it when it exits. #46391 (Alexey Milovidov). - Fix Systemd service file. #46461 (SuperDJY).
- Raised the minimum Clang version needed to build ClickHouse from 12 to 15. #46710 (Robert Schulze).
- Upgrade Intel QPL from v0.3.0 to v1.0.0. Build libaccel-config and link it statically to the QPL library instead of dynamically. #45809 (jasperzhu).
Bug Fix (user-visible misbehavior in official stable release)
- Flush data exactly by `rabbitmq_flush_interval_ms` or by `rabbitmq_max_block_size` in `StorageRabbitMQ`. Closes #42389. Closes #45160. #44404 (Kseniia Sumarokova). - Use PODArray to render in the sparkBar function, so we can control the memory usage. Close #44467. #44489 (Duc Canh Le).
- Fix functions (quantilesExactExclusive, quantilesExactInclusive) returning an unsorted array. #45379 (wujunfu).
- Fix uncaught exception in HTTPHandler when open telemetry is enabled. #45456 (Frank Chen).
- Don't infer Dates from 8 digit numbers. It could lead to wrong data to be read. #45581 (Kruglov Pavel).
- Fixes to correctly use
odbc_bridge_use_connection_pooling
setting. #45591 (Bharat Nallan). - When the callback in the cache is called, it is possible that this cache is destructed. To keep it safe, we capture members by value. It's also safe for task schedule because it will be deactivated before storage is destroyed. Resolve #45548. #45601 (Han Fei).
- Fix data corruption when codecs Delta or DoubleDelta are combined with codec Gorilla. #45615 (Robert Schulze).
- Correctly check types when using N-gram bloom filter index to avoid invalid reads. #45617 (Antonio Andelic).
- A couple of segfaults have been reported around
c-ares
. They were introduced in my previous pull requests. I have fixed them with the help of Alexander Tokmakov. #45629 (Arthur Passos). - Fix key description when encountering duplicate primary keys. This can happen in projections. See #45590 for details. #45686 (Amos Bird).
- Set compression method and level for backup Closes #45690. #45737 (Pradeep Chhetri).
- Should use `select_query_typed.limitByOffset()` instead of `select_query_typed.limitOffset()`. #45817 (刘陶峰). - When using the experimental analyzer, queries like `SELECT number FROM numbers(100) LIMIT 10 OFFSET 10;` got wrong results (an empty result for this SQL). That was caused by an unnecessary offset step added by the planner. #45822 (刘陶峰). - Backward compatibility - allow implicit narrowing conversion from UInt64 to IPv4 - required for "INSERT ... VALUES ..." expressions. #45865 (Yakov Olkhovskiy).
- Bugfix IPv6 parser for mixed ip4 address with missed first octet (like
::.1.2.3
). #45871 (Yakov Olkhovskiy). - Add the
query_kind
column to thesystem.processes
table and theSHOW PROCESSLIST
query. Remove duplicate code. It fixes a bug: the global configuration parametermax_concurrent_select_queries
was not respected to queries withINTERSECT
orEXCEPT
chains. #45872 (Alexey Milovidov). - Fix crash in a function
stochasticLinearRegression
. Found by WingFuzz. #45985 (Nikolai Kochetov). - Fix crash in
SELECT
queries withINTERSECT
andEXCEPT
modifiers that read data from tables with enabled sparse columns (controlled by settingratio_of_defaults_for_sparse_serialization
). #45987 (Anton Popov). - Fix read in order optimization for DESC sorting with FINAL, close #45815. #46009 (Vladimir C).
- Fix reading of non-existing nested columns with multiple levels in compact parts. #46045 (Azat Khuzhin).
- Fix elapsed column in system.processes (10x error). #46047 (Azat Khuzhin).
- Follow-up fix for Replace domain IP types (IPv4, IPv6) with native https://github.com/ClickHouse/ClickHouse/pull/43221. #46087 (Yakov Olkhovskiy).
- Fix environment variable substitution in the configuration when a parameter already has a value. This closes #46131. This closes #9547. #46144 (pufit).
- Fix incorrect predicate push down with grouping sets. Closes #45947. #46151 (flynn).
- Fix possible pipeline stuck error on
fulls_sorting_join
with constant keys. #46175 (Vladimir C). - Never rewrite tuple functions as literals during formatting to avoid incorrect results. #46232 (Salvatore Mesoraca).
- Fix possible out of bounds error while reading LowCardinality(Nullable) in Arrow format. #46270 (Kruglov Pavel).
- Fix
SYSTEM UNFREEZE
queries failing with the exceptionCANNOT_PARSE_INPUT_ASSERTION_FAILED
. #46325 (Aleksei Filatov). - Fix possible crash which can be caused by an integer overflow while deserializing aggregating state of a function that stores HashTable. #46349 (Nikolai Kochetov).
- Fix possible
LOGICAL_ERROR
in asynchronous inserts with invalid data sent in formatVALUES
. #46350 (Anton Popov). - Fixed a LOGICAL_ERROR on an attempt to execute
ALTER ... MOVE PART ... TO TABLE
. This type of query was never actually supported. #46359 (Alexander Tokmakov). - Fix s3Cluster schema inference in parallel distributed insert select when
parallel_distributed_insert_select
is enabled. #46381 (Kruglov Pavel). - Fix queries like
ALTER TABLE ... UPDATE nested.arr1 = nested.arr2 ...
, wherearr1
andarr2
are fields of the sameNested
column. #46387 (Anton Popov). - Scheduler may fail to schedule a task. If it happens, the whole MulityPartUpload should be aborted and
UploadHelper
must wait for already scheduled tasks. #46451 (Dmitry Novik). - Fix PREWHERE for Merge with different default types (fixes some
NOT_FOUND_COLUMN_IN_BLOCK
when the default type for the column differs, also allowPREWHERE
when the type of column is the same across tables, and prohibit it, only if it differs). #46454 (Azat Khuzhin). - Fix a crash that could happen when constant values are used in
ORDER BY
. Fixes #46466. #46493 (Nikolai Kochetov). - Do not throw exception if
disk
setting was specified on query level, butstorage_policy
was specified in config merge tree settings section.disk
will override setting from config. #46533 (Kseniia Sumarokova). - Fix an invalid processing of constant
LowCardinality
argument in functionarrayMap
. This bug could lead to a segfault in release, and logical errorBad cast
in debug build. #46569 (Alexey Milovidov). - fixes #46557. #46611 (Alexander Gololobov).
- Fix endless restarts of clickhouse-server systemd unit if server cannot start within 1m30sec (Disable timeout logic for starting clickhouse-server from systemd service). #46613 (Azat Khuzhin).
- Memory buffers allocated during asynchronous inserts were deallocated in the global context, and the MemoryTracker counters for the corresponding user and query were not updated correctly. That led to false positive OOM exceptions. #46622 (Dmitry Novik).
- Updated to not clear on_expression from table_join, as it is used by future analyze runs. Resolves #45185. #46487 (SmitaRKulkarni).
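The fix above for `ALTER TABLE ... UPDATE nested.arr1 = nested.arr2` can be illustrated with a minimal sketch (the schema is illustrative):

```sql
-- Two array fields of the same Nested column; mutations assigning one
-- from the other previously failed for such queries.
CREATE TABLE t
(
    id UInt64,
    nested Nested(arr1 UInt32, arr2 UInt32)
)
ENGINE = MergeTree ORDER BY id;

ALTER TABLE t UPDATE `nested.arr1` = `nested.arr2` WHERE 1;
```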
ClickHouse release 23.1, 2023-01-26
Upgrade Notes
- The
SYSTEM RESTART DISK
query becomes a no-op. #44647 (alesapin). - The
PREALLOCATE
option forHASHED
/SPARSE_HASHED
dictionaries becomes a no-op. #45388 (Azat Khuzhin). It does not give significant advantages anymore. - Disallow
Gorilla
codec on columns of non-Float32 or non-Float64 type. #45252 (Robert Schulze). It was pointless and led to inconsistencies. - Parallel quorum inserts might work incorrectly with
*MergeTree
tables created with the deprecated syntax. Therefore, parallel quorum inserts support is completely disabled for such tables. It does not affect tables created with a new syntax. #45430 (Alexander Tokmakov). - Use the
GetObjectAttributes
request instead of theHeadObject
request to get the size of an object in AWS S3. This change fixes handling endpoints without explicit regions after updating the AWS SDK, for example. #45288 (Vitaly Baranov). AWS S3 and Minio are tested, but keep in mind that various S3-compatible services (GCS, R2, B2) may have subtle incompatibilities. This change also may require you to adjust the ACL to allow theGetObjectAttributes
request. - Forbid paths in timezone names. For example, a timezone name like
/usr/share/zoneinfo/Asia/Aden
is not allowed; the IANA timezone database name likeAsia/Aden
should be used. #44225 (Kruglov Pavel). - Queries combining equijoin and constant expressions (e.g.,
JOIN ON t1.x = t2.x AND 1 = 1
) are forbidden due to incorrect results. #44016 (Vladimir C).
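To illustrate the upgrade note above, a sketch of the now-rejected form and an accepted equivalent (table names are illustrative):

```sql
-- Rejected starting with this release (could produce incorrect results):
-- SELECT * FROM t1 JOIN t2 ON t1.x = t2.x AND 1 = 1;

-- Accepted: keep only the equijoin condition.
SELECT * FROM t1 JOIN t2 ON t1.x = t2.x;
```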
New Feature
- Dictionary source for extracting keys by traversing regular expressions tree. It can be used for User-Agent parsing. #40878 (Vage Ogannisian). #43858 (Han Fei).
- Added parametrized view functionality, now it's possible to specify query parameters for the View table engine. resolves #40907. #41687 (SmitaRKulkarni).
- Add
quantileInterpolatedWeighted
/quantilesInterpolatedWeighted
functions. #38252 (Bharat Nallan). - Array join support for the
Map
type, like the function "explode" in Spark. #43239 (李扬). - Support SQL standard binary and hex string literals. #43785 (Mo Xuan).
- Allow formatting
DateTime
in Joda-Time style. Refer to the Joda-Time docs. #43818 (李扬). - Implemented a fractional second formatter (
%f
) forformatDateTime
. #44060 (ltrk2). #44497 (Alexander Gololobov). - Added
age
function to calculate the difference between two dates or dates with time values expressed as the number of full units. Closes #41115. #44421 (Robert Schulze). - Add
Null
source for dictionaries. Closes #44240. #44502 (mayamika). - Allow configuring the S3 storage class with the
s3_storage_class
configuration option. Such as<s3_storage_class>STANDARD/INTELLIGENT_TIERING</s3_storage_class>
Closes #44443. #44707 (chen). - Insert default values in case of missing elements in JSON object while parsing named tuple. Add setting
input_format_json_defaults_for_missing_elements_in_named_tuple
that controls this behaviour. Closes #45142#issuecomment-1380153217. #45231 (Kruglov Pavel). - Record server startup time in ProfileEvents (
ServerStartupMilliseconds
). Resolves #43188. #45250 (SmitaRKulkarni). - Refactor and Improve streaming engines Kafka/RabbitMQ/NATS and add support for all formats, also refactor formats a bit: - Fix producing messages in row-based formats with suffixes/prefixes. Now every message is formatted completely with all delimiters and can be parsed back using input format. - Support block-based formats like Native, Parquet, ORC, etc. Every block is formatted as a separate message. The number of rows in one message depends on the block size, so you can control it via the setting
max_block_size
. - Add new engine settingskafka_max_rows_per_message/rabbitmq_max_rows_per_message/nats_max_rows_per_message
. They control the number of rows formatted in one message in row-based formats. Default value: 1. - Fix high memory consumption in the NATS table engine. - Support arbitrary binary data in NATS producer (previously it worked only with strings contained \0 at the end) - Add missing Kafka/RabbitMQ/NATS engine settings in the documentation. - Refactor producing and consuming in Kafka/RabbitMQ/NATS, separate it from WriteBuffers/ReadBuffers semantic. - Refactor output formats: remove callbacks on each row used in Kafka/RabbitMQ/NATS (now we don't use callbacks there), allow to use IRowOutputFormat directly, clarify row end and row between delimiters, make it possible to reset output format to start formatting again - Add proper implementation in formatRow function (bonus after formats refactoring). #42777 (Kruglov Pavel). - Support reading/writing
Nested
tables asList
ofStruct
inCapnProto
format. Read/writeDecimal32/64
asInt32/64
. Closes #43319. #43379 (Kruglov Pavel). - Added a
message_format_string
column tosystem.text_log
. The column contains a pattern that was used to format the message. #44543 (Alexander Tokmakov). This allows various analytics over the ClickHouse logs. - Try to autodetect headers with column names (and maybe types) for CSV/TSV/CustomSeparated input formats. Add settings input_format_tsv/csv/custom_detect_header that enable this behaviour (enabled by default). Closes #44640. #44953 (Kruglov Pavel).
Experimental Feature
- Add an experimental inverted index as a new secondary index type for efficient text search. #38667 (larryluogit).
- Add experimental query result cache. #43797 (Robert Schulze).
- Added extendable and configurable scheduling subsystem for IO requests (not yet integrated with IO code itself). #41840 (Sergei Trifonov). This feature does nothing at all, enjoy.
- Added
SYSTEM DROP DATABASE REPLICA
that removes metadata of a dead replica of aReplicated
database. Resolves #41794. #42807 (Alexander Tokmakov).
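A minimal sketch of the experimental inverted index above, assuming it is gated by the `allow_experimental_inverted_index` setting as other experimental features are; table, column, and index names are illustrative:

```sql
SET allow_experimental_inverted_index = 1;

CREATE TABLE docs
(
    id UInt64,
    body String,
    INDEX body_idx body TYPE inverted(2) GRANULARITY 1  -- 2 = n-gram size
)
ENGINE = MergeTree ORDER BY id;

-- The index can then accelerate text predicates such as:
SELECT count() FROM docs WHERE hasToken(body, 'clickhouse');
```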
Performance Improvement
- Do not load inactive parts at startup of
MergeTree
tables. #42181 (Anton Popov). - Improved latency of reading from storage
S3
and table functions3
with large numbers of small files. Now settingsremote_filesystem_read_method
andremote_filesystem_read_prefetch
take effect while reading from storageS3
. #43726 (Anton Popov). - Optimization for reading struct fields in Parquet/ORC files. Only the required fields are loaded. #44484 (lgbo).
- The two-level aggregation algorithm was mistakenly disabled for queries over the HTTP interface. It has been enabled again, which leads to a major performance improvement. #45450 (Nikolai Kochetov).
- Added mmap support for StorageFile, which should improve the performance of clickhouse-local. #43927 (pufit).
- Added sharding support in HashedDictionary to allow parallel load (almost linear scaling based on number of shards). #40003 (Azat Khuzhin).
- Speed up query parsing. #42284 (Raúl Marín).
- Always replace OR chain
expr = x1 OR ... OR expr = xN
toexpr IN (x1, ..., xN)
in the case whereexpr
is aLowCardinality
column. Settingoptimize_min_equality_disjunction_chain_length
is ignored in this case. #42889 (Guo Wangyang). - Slightly improve performance by optimizing the code around ThreadStatus. #43586 (Zhiguo Zhou).
- Optimize the column-wise ternary logic evaluation by achieving auto-vectorization. In the performance test of this microbenchmark, we've observed a peak performance gain of 21x on the ICX device (Intel Xeon Platinum 8380 CPU). #43669 (Zhiguo Zhou).
- Avoid acquiring read locks in the
system.tables
table if possible. #43840 (Raúl Marín). - Optimize ThreadPool. The performance experiments of SSB (Star Schema Benchmark) on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) shows that this change could effectively decrease the lock contention for ThreadPoolImpl::mutex by 75%, increasing the CPU utilization and improving the overall performance by 2.4%. #44308 (Zhiguo Zhou).
- Now the optimisation for predicting the hash table size is applied only if the cached hash table size is sufficiently large (thresholds were determined empirically and hardcoded). #44455 (Nikita Taranov).
- Small performance improvement for asynchronous reading from remote filesystems. #44868 (Kseniia Sumarokova).
- Add fast path for: - `col like '%%'`; - `col like '%'`; - `col not like '%%'`; - `col not like '%'`; - `match(col, '.*')`. #45244 (李扬). - Slightly improve happy path optimisation in filtering (WHERE clause). #45289 (Nikita Taranov).
- Provide monotonicity info for
toUnixTimestamp64*
to enable more algebraic optimizations for index analysis. #44116 (Nikita Taranov). - Allow the configuration of temporary data for query processing (spilling to disk) to cooperate with the filesystem cache (taking up the space from the cache disk) #43972 (Vladimir C). This mainly improves ClickHouse Cloud, but can be used for self-managed setups as well, if you know what to do.
- Make
system.replicas
table do parallel fetches of replicas statuses. Closes #43918. #43998 (Nikolay Degterinsky). - Optimize memory consumption during backup to S3: files to S3 now will be copied directly without using
WriteBufferFromS3
(which could use a lot of memory). #45188 (Vitaly Baranov). - Add a cache for async block ids. This will reduce the number of requests of ZooKeeper when we enable async inserts deduplication. #45106 (Han Fei).
Improvement
- Use structure from insertion table in generateRandom without arguments. #45239 (Kruglov Pavel).
- Allow to implicitly convert floats stored in string fields of JSON to integers in
JSONExtract
functions. E.g.JSONExtract('{"a": "1000.111"}', 'a', 'UInt64')
->1000
, previously it returned 0. #45432 (Anton Popov). - Added fields
supports_parallel_parsing
andsupports_parallel_formatting
to tablesystem.formats
for better introspection. #45499 (Anton Popov). - Improve reading CSV field in CustomSeparated/Template format. Closes #42352 Closes #39620. #43332 (Kruglov Pavel).
- Unify query elapsed time measurements. #43455 (Raúl Marín).
- Improve automatic usage of structure from the insertion table in table functions file/hdfs/s3 when virtual columns are present in a select query; this fixes the possible errors `Block structure mismatch` or `number of columns mismatch`. #43695 (Kruglov Pavel). - Add support for signed arguments in the function
range
. Fixes #43333. #43733 (sanyu). - Remove redundant sorting, for example, sorting related ORDER BY clauses in subqueries. Implemented on top of query plan. It does similar optimization as
optimize_duplicate_order_by_and_distinct
regardingORDER BY
clauses, but more generic, since it's applied to any redundant sorting steps (not only caused by ORDER BY clause) and applied to subqueries of any depth. Related to #42648. #43905 (Igor Nikonov). - Add the ability to disable deduplication of files for BACKUP (for backups without deduplication ATTACH can be used instead of full RESTORE). For example
BACKUP foo TO S3(...) SETTINGS deduplicate_files=0
(defaultdeduplicate_files=1
). #43947 (Azat Khuzhin). - Refactor and improve schema inference for text formats. Add new setting
schema_inference_make_columns_nullable
that controls making result typesNullable
(enabled by default);. #44019 (Kruglov Pavel). - Better support for
PROXYv1
protocol. #44135 (Yakov Olkhovskiy). - Add information about the latest part check by cleanup threads into
system.parts
table. #44244 (Dmitry Novik). - Disable table functions in readonly mode for inserts. #44290 (SmitaRKulkarni).
- Add a setting
simultaneous_parts_removal_limit
to allow limiting the number of parts being processed by one iteration of CleanupThread. #44461 (Dmitry Novik). - Do not initialize ReadBufferFromS3 when only virtual columns are needed in a query. This may be helpful to #44246. #44493 (chen).
- Prevent duplicate column names hints. Closes #44130. #44519 (Joanna Hulboj).
- Allow macro substitution in endpoint of disks. Resolve #40951. #44533 (SmitaRKulkarni).
- Improve schema inference when
input_format_json_read_object_as_string
is enabled. #44546 (Kruglov Pavel). - Add a user-level setting
database_replicated_allow_replicated_engine_arguments
which allows banning the creation ofReplicatedMergeTree
tables with arguments inDatabaseReplicated
. #44566 (alesapin). - Prevent users from mistakenly specifying zero (invalid) value for
index_granularity
. This closes #44536. #44578 (Alexey Milovidov). - Added possibility to set path to service keytab file in
keytab
parameter inkerberos
section of config.xml. #44594 (Roman Vasin). - Use already written part of the query for fuzzy search (pass to the
skim
library, which is written in Rust and linked statically to ClickHouse). #44600 (Azat Khuzhin). - Enable
input_format_json_read_objects_as_strings
by default to be able to read nested JSON objects while JSON Object type is experimental. #44657 (Kruglov Pavel). - Improvement for deduplication of async inserts: when users do duplicate async inserts, we should deduplicate inside the memory before we query Keeper. #44682 (Han Fei).
- Input/output `Avro` format will parse the bool type as the ClickHouse bool type. #44684 (Kruglov Pavel). - Support Bool type in Arrow/Parquet/ORC. Closes #43970. #44698 (Kruglov Pavel).
- Don't greedily parse beyond the quotes when reading UUIDs - it may lead to mistakenly successful parsing of incorrect data. #44686 (Raúl Marín).
- Infer UInt64 in case of Int64 overflow and fix some transforms in schema inference. #44696 (Kruglov Pavel).
- Previously dependency resolving inside
Replicated
database was done in a hacky way, and now it's done right using an explicit graph. #44697 (Nikita Mikhaylov). - Fix
output_format_pretty_row_numbers
does not preserve the counter across the blocks. Closes #44815. #44832 (flynn). - Don't report errors in
system.errors
due to parts being merged concurrently with the background cleanup process. #44874 (Raúl Marín). - Optimize and fix metrics for Distributed async INSERT. #44922 (Azat Khuzhin).
- Added settings to disallow concurrent backups and restores. Resolves #43891. Implementation: * Added server-level settings to disallow concurrent backups and restores, which are read and set when BackupWorker is created in Context. * The settings are set to true by default. * Before starting a backup or restore, a check was added to see whether any other backups/restores are running. For internal requests, it checks whether the request comes from the same node using backup_uuid. #45072 (SmitaRKulkarni).
- Add
<storage_policy>
config parameter for system logs. #45320 (Stig Bakken).
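The `JSONExtract` improvement above (floats stored in JSON string fields can now be converted to integers) comes straight from the entry:

```sql
-- Previously returned 0; now the string "1000.111" is converted to an integer.
SELECT JSONExtract('{"a": "1000.111"}', 'a', 'UInt64');  -- 1000
```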
Build/Testing/Packaging Improvement
- Statically link with the
skim
library (it is written in Rust) for fuzzy search in clickhouse client/local history. #44239 (Azat Khuzhin). - We removed support for shared linking because of Rust. Actually, Rust is only an excuse for this removal, and we wanted to remove it nevertheless. #44828 (Alexey Milovidov).
- Remove the dependency on the
adduser
tool from the packages, because we don't use it. This fixes #44934. #45011 (Alexey Milovidov). - The
SQLite
library is updated to the latest. It is used for the SQLite database and table integration engines. Also, fixed a false-positive TSan report. This closes #45027. #45031 (Alexey Milovidov). - CRC-32 changes to address the WeakHash collision issue in PowerPC. #45144 (MeenaRenganathan22).
- Update aws-c* submodules #43020 (Vitaly Baranov).
- Automatically merge green backport PRs and green approved PRs #41110 (Mikhail f. Shiryaev).
- Introduce a website for the status of ClickHouse CI. Source.
Bug Fix
- Replace domain IP types (IPv4, IPv6) with native. #43221 (Yakov Olkhovskiy). It automatically fixes some missing implementations in the code.
- Fix the backup process if mutations get killed during the backup process. #45351 (Vitaly Baranov).
- Fix the
Invalid number of rows in Chunk
exception message. #41404. #42126 (Alexander Gololobov). - Fix possible use of an uninitialized value after executing expressions after sorting. Closes #43386 #43635 (Kruglov Pavel).
- Better handling of NULL in aggregate combinators, fix possible segfault/logical error while using an obscure optimization
optimize_rewrite_sum_if_to_count_if
. Closes #43758. #43813 (Kruglov Pavel). - Fix CREATE USER/ROLE query settings constraints. #43993 (Nikolay Degterinsky).
- Fixed bug with non-parsable default value for
EPHEMERAL
column in table metadata. #44026 (Yakov Olkhovskiy). - Fix parsing of bad version from compatibility setting. #44224 (Kruglov Pavel).
- Bring interval subtraction from datetime in line with addition. #44241 (ltrk2).
- Remove limits on the maximum size of the result for view. #44261 (lizhuoyu5).
- Fix possible logical error in cache if
do_not_evict_index_and_mrk_files=1
. Closes #42142. #44268 (Kseniia Sumarokova). - Fix possible too early cache write interruption in write-through cache (caching could be stopped due to false assumption when it shouldn't have). #44289 (Kseniia Sumarokova).
- Fix possible crash in the case function
IN
with constant arguments was used as a constant argument together withLowCardinality
. Fixes #44221. #44346 (Nikolai Kochetov). - Fix support for complex parameters (like arrays) of parametric aggregate functions. This closes #30975. The aggregate function
sumMapFiltered
was unusable in distributed queries before this change. #44358 (Alexey Milovidov). - Fix reading ObjectId in BSON schema inference. #44382 (Kruglov Pavel).
- Fix race which can lead to premature temp parts removal before merge finishes in ReplicatedMergeTree. This issue could lead to errors like
No such file or directory: xxx
. Fixes #43983. #44383 (alesapin). - Some invalid
SYSTEM ... ON CLUSTER
queries worked in an unexpected way if a cluster name was not specified. It's fixed, now invalid queries throwSYNTAX_ERROR
as they should. Fixes #44264. #44387 (Alexander Tokmakov). - Fix reading Map type in ORC format. #44400 (Kruglov Pavel).
- Fix reading columns that are not presented in input data in Parquet/ORC formats. Previously it could lead to error
INCORRECT_NUMBER_OF_COLUMNS
. Closes #44333. #44405 (Kruglov Pavel). - Previously the
bar
function used the same '▋' (U+258B "Left five eighths block") character to display both 5/8 and 6/8 bars. This change corrects this behavior by using '▊' (U+258A "Left three quarters block") for displaying 6/8 bar. #44410 (Alexander Gololobov). - Placing profile settings after profile settings constraints in the configuration file made constraints ineffective. #44411 (Konstantin Bogdanov).
- Fix
SYNTAX_ERROR
while runningEXPLAIN AST INSERT
queries with data. Closes #44207. #44413 (save-my-heart). - Fix reading bool value with CRLF in CSV format. Closes #44401. #44442 (Kruglov Pavel).
- Don't execute and/or/if/multiIf on a LowCardinality dictionary, so the result type cannot be LowCardinality. It could lead to the error
Illegal column ColumnLowCardinality
in some cases. Fixes #43603. #44469 (Kruglov Pavel). - Fix mutations with the setting
max_streams_for_merge_tree_reading
. #44472 (Anton Popov). - Fix potential null pointer dereference with GROUPING SETS in ASTSelectQuery::formatImpl (#43049). #44479 (Robert Schulze).
- Validate types in table function arguments, CAST function arguments, JSONAsObject schema inference according to settings. #44501 (Kruglov Pavel).
- Fix IN function with LowCardinality and const column, close #44503. #44506 (Duc Canh Le).
- Fixed a bug in the normalization of a
DEFAULT
expression inCREATE TABLE
statement. The second argument of the functionin
(or the right argument of operatorIN
) might be replaced with the result of its evaluation during CREATE query execution. Fixes #44496. #44547 (Alexander Tokmakov). - Projections do not work in presence of WITH ROLLUP, WITH CUBE and WITH TOTALS. In previous versions, a query produced an exception instead of skipping the usage of projections. This closes #44614. This closes #42772. #44615 (Alexey Milovidov).
- Async blocks were not cleaned because the function
get all blocks sorted by time
didn't get async blocks. #44651 (Han Fei). - Fix
LOGICAL_ERROR
The top step of the right pipeline should be ExpressionStep
for JOIN with subquery, UNION, and TOTALS. Fixes #43687. #44673 (Nikolai Kochetov). - Avoid
std::out_of_range
exception in the Executable table engine. #44681 (Kruglov Pavel). - Do not apply
optimize_syntax_fuse_functions
to quantiles on AST, close #44712. #44713 (Vladimir C). - Fix bug with wrong type in Merge table and PREWHERE, close #43324. #44716 (Vladimir C).
- Fix a possible crash during shutdown (while destroying TraceCollector). Fixes #44757. #44758 (Nikolai Kochetov).
- Fix a possible crash in distributed query processing. The crash could happen if a query with totals or extremes returned an empty result and there are mismatched types in the Distributed and the local tables. Fixes #44738. #44760 (Nikolai Kochetov).
- Fix fsync for fetches (
min_compressed_bytes_to_fsync_after_fetch
)/small files (ttl.txt, columns.txt) in mutations (min_rows_to_fsync_after_merge
/min_compressed_bytes_to_fsync_after_merge
). #44781 (Azat Khuzhin). - A rare race condition was possible when querying the
system.parts
orsystem.parts_columns
tables in the presence of parts being moved between disks. Introduced in #41145. #44809 (Alexey Milovidov). - Fix the error
Context has expired
which could appear with enabled projections optimization. Can be reproduced for queries with specific functions, likedictHas/dictGet
which use context in runtime. Fixes #44844. #44850 (Nikolai Kochetov). - A fix for
Cannot read all data
error which could happen while readingLowCardinality
dictionary from remote fs. Fixes #44709. #44875 (Nikolai Kochetov). - Ignore cases when hardware monitor sensors cannot be read instead of showing a full exception message in logs. #44895 (Raúl Marín).
- Use
max_delay_to_insert
value in case the calculated time to delay INSERT exceeds the setting value. Related to #44902. #44916 (Igor Nikonov). - Fix error
Different order of columns in UNION subquery
for queries withUNION
. Fixes #44866. #44920 (Nikolai Kochetov). - Delay for INSERT can be calculated incorrectly, which can lead to always using
max_delay_to_insert
setting as delay instead of a correct value. Using simple formulamax_delay_to_insert * (parts_over_threshold/max_allowed_parts_over_threshold)
i.e. delay grows proportionally to parts over threshold. Closes #44902. #44954 (Igor Nikonov). - Fix alter table TTL error when a wide part has the lightweight delete mask. #44959 (Mingliang Pan).
- Follow-up fix for Replace domain IP types (IPv4, IPv6) with native #43221. #45024 (Yakov Olkhovskiy).
- Follow-up fix for Replace domain IP types (IPv4, IPv6) with native https://github.com/ClickHouse/ClickHouse/pull/43221. #45043 (Yakov Olkhovskiy).
- A buffer overflow was possible in the parser. Found by fuzzer. #45047 (Alexey Milovidov).
- Fix possible cannot-read-all-data error in storage FileLog. Closes #45051, #38257. #45057 (Kseniia Sumarokova).
- Memory efficient aggregation (setting
distributed_aggregation_memory_efficient
) is disabled when grouping sets are present in the query. #45058 (Nikita Taranov). - Fix
RANGE_HASHED
dictionary to count range columns as part of the primary key during updates whenupdate_field
is specified. Closes #44588. #45061 (Maksim Kita). - Fix error
Cannot capture column
forLowCardinality
captured argument of nested lambda. Fixes #45028. #45065 (Nikolai Kochetov). - Fix the wrong query result of
additional_table_filters
(additional filter was not applied) in case the minmax/count projection is used. #45133 (Nikolai Kochetov). - Fixed bug in
histogram
function accepting negative values. #45147 (simpleton). - Fix wrong column nullability in StoreageJoin, close #44940. #45184 (Vladimir C).
- Fix
background_fetches_pool_size
settings reload (increase at runtime). #45189 (Raúl Marín). - Correctly process
SELECT
queries on KV engines (e.g. KeeperMap, EmbeddedRocksDB) usingIN
on the key with subquery producing different type. #45215 (Antonio Andelic). - Fix logical error in SEMI JOIN & join_use_nulls in some cases, close #45163, close #45209. #45230 (Vladimir C).
- Fix heap-use-after-free in reading from s3. #45253 (Kruglov Pavel).
- Fix bug when the Avro Union type is ['null', Nested type], closes #45275. Fix bug that incorrectly infers
bytes
type toFloat
. #45276 (flynn). - Throw a correct exception when explicit PREWHERE cannot be used with a table using the storage engine
Merge
. #45319 (Antonio Andelic). - Under WSL1 Ubuntu self-extracting ClickHouse fails to decompress due to inconsistency - /proc/self/maps reporting 32bit file's inode, while stat reporting 64bit inode. #45339 (Yakov Olkhovskiy).
- Fix race in Distributed table startup (that could lead to processing file of async INSERT multiple times). #45360 (Azat Khuzhin).
- Fix a possible crash while reading from storage
S3
and table functions3
in the case whenListObject
request has failed. #45371 (Anton Popov). - Fix
SELECT ... FROM system.dictionaries
exception when there is a dictionary with a bad structure (e.g. incorrect type in XML config). #45399 (Aleksei Filatov). - Fix s3Cluster schema inference when structure from insertion table is used in
INSERT INTO ... SELECT * FROM s3Cluster
queries. #45422 (Kruglov Pavel). - Fix bug in JSON/BSONEachRow parsing with HTTP that could lead to using default values for some columns instead of values from data. #45424 (Kruglov Pavel).
- Fixed bug (Code: 632. DB::Exception: Unexpected data ... after parsed IPv6 value ...) with typed parsing of IP types from text source. #45425 (Yakov Olkhovskiy).
- Add a check for empty regular expressions. Closes #45297. #45428 (Han Fei).
- Fix possible (likely distributed) query hung. #45448 (Azat Khuzhin).
- Fix possible deadlock with
allow_asynchronous_read_from_io_pool_for_merge_tree
enabled in case of exception fromThreadPool::schedule
. #45481 (Nikolai Kochetov). - Fix possible in-use table after DETACH. #45493 (Azat Khuzhin).
- Fix rare abort in the case when a query is canceled and parallel parsing was used during its execution. #45498 (Anton Popov).
- Fix a race between Distributed table creation and INSERT into it (could lead to CANNOT_LINK during INSERT into the table). #45502 (Azat Khuzhin).
- Add proper default (SLRU) to cache policy getter. Closes #45514. #45524 (Kseniia Sumarokova).
- Disallow array join in mutations closes #42637 #44447 (SmitaRKulkarni).
- Fix for qualified asterisks with alias table name and column transformer. Resolves #44736. #44755 (SmitaRKulkarni).
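The parametric-aggregate fix in this section (complex parameters such as arrays, e.g. for `sumMapFiltered`) can be sketched as follows; the data is illustrative:

```sql
-- sumMapFiltered takes an array parameter; before the fix this was
-- unusable in distributed queries.
SELECT sumMapFiltered([1, 3])(k, v)
FROM values('k Array(UInt8), v Array(UInt8)', ([1, 2, 3], [10, 20, 30]));
```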