Table of Contents
ClickHouse release v23.12, 2023-12-28
ClickHouse release v23.11, 2023-12-06
ClickHouse release v23.10, 2023-11-02
ClickHouse release v23.9, 2023-09-28
ClickHouse release v23.8 LTS, 2023-08-31
ClickHouse release v23.7, 2023-07-27
ClickHouse release v23.6, 2023-06-30
ClickHouse release v23.5, 2023-06-08
ClickHouse release v23.4, 2023-04-26
ClickHouse release v23.3 LTS, 2023-03-30
ClickHouse release v23.2, 2023-02-23
ClickHouse release v23.1, 2023-01-25
Changelog for 2022
2023 Changelog
ClickHouse release 23.12, 2023-12-28
Backward Incompatible Change
- Fix check for non-deterministic functions in TTL expressions. Previously, you could create a TTL expression with non-deterministic functions in some cases, which could lead to undefined behavior later. This fixes #37250. Disallow TTL expressions that don't depend on any columns of a table by default. It can be allowed back by `SET allow_suspicious_ttl_expressions = 1` or `SET compatibility = '23.11'` (see the sketch after this list). Closes #37286. #51858 (Alexey Milovidov).
- The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (it can be unlocked with the `allow_experimental_replacing_merge_with_cleanup` setting). #58267 (Alexander Tokmakov). This fixes #57930. This closes #54988. This closes #54570. This closes #50346. This closes #47579. The feature has to be removed because it is not good. We have to remove it as quickly as possible, because there is no other option. #57932 (Alexey Milovidov).
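
As an illustration of the new TTL restriction, a minimal sketch (the table and column names are hypothetical):

```sql
-- A TTL expression that depends on no table columns and uses a non-deterministic
-- function is rejected by default in 23.12.
CREATE TABLE ttl_example (d Date, x UInt32)
ENGINE = MergeTree ORDER BY x
TTL now() + INTERVAL 1 MONTH;

-- Re-enable the old behavior if you rely on such expressions:
SET allow_suspicious_ttl_expressions = 1;  -- or: SET compatibility = '23.11'
```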
New Feature
- Implement Refreshable Materialized Views, requested in #33919. #56946 (Michael Kolupaev, Michael Guzov).
- Introduce `PASTE JOIN`, which allows users to join tables without an `ON` clause, simply by row numbers. Example: `SELECT * FROM (SELECT number AS a FROM numbers(2)) AS t1 PASTE JOIN (SELECT number AS a FROM numbers(2) ORDER BY a DESC) AS t2` (see the sketch after this list). #57995 (Yarik Briukhovetskyi).
- The `ORDER BY` clause now supports specifying `ALL`, meaning that ClickHouse sorts by all columns in the `SELECT` clause. Example: `SELECT col1, col2 FROM tab WHERE [...] ORDER BY ALL`. #57875 (zhongyuankai).
- Added a new mutation command `ALTER TABLE <table> APPLY DELETED MASK`, which forces applying the mask written by lightweight deletes and removes rows marked as deleted from disk. #57433 (Anton Popov).
- A handler `/binary` opens a visual viewer of symbols inside the ClickHouse binary. #58211 (Alexey Milovidov).
- Added a new SQL function `sqid` to generate Sqids (https://sqids.org/), example: `SELECT sqid(125, 126)`. #57512 (Robert Schulze).
- Add a new function `seriesPeriodDetectFFT` to detect series period using FFT. #57574 (Bhavna Jindal).
- Add an HTTP endpoint for checking if Keeper is ready to accept traffic. #55876 (Konstantin Bogdanov).
- Add 'union' mode for schema inference. In this mode the resulting table schema is the union of the schemas of all files (so the schema is inferred from each file). The mode of schema inference is controlled by the setting `schema_inference_mode` with two possible values - `default` and `union`. Closes #55428. #55892 (Kruglov Pavel).
- Add new setting `input_format_csv_try_infer_numbers_from_strings` that allows inferring numbers from strings in CSV format. Closes #56455. #56859 (Kruglov Pavel).
- When the number of databases or tables exceeds a configurable threshold, show a warning to the user. #57375 (凌涛).
- Dictionaries with `HASHED_ARRAY` (and `COMPLEX_KEY_HASHED_ARRAY`) layout support `SHARDS` similarly to `HASHED`. #57544 (vdimir).
- Add asynchronous metrics for total primary key bytes and total allocated primary key bytes in memory. #57551 (Bharat Nallan).
- Add `SHA512_256` function. #57645 (Bharat Nallan).
- Add `FORMAT_BYTES` as an alias for `formatReadableSize`. #57592 (Bharat Nallan).
- Allow passing an optional session token to the `s3` table function. #57850 (Shani Elharrar).
- Introduce a new setting `http_make_head_request`. If it is turned off, the URL table engine will not do a HEAD request to determine the file size. This is needed to support inefficient, misconfigured, or incapable HTTP servers. #54602 (Fionera).
- It is now possible to refer to an ALIAS column in index (non-primary-key) definitions (issue #55650). Example: `CREATE TABLE tab(col UInt32, col_alias ALIAS col + 1, INDEX idx (col_alias) TYPE minmax) ENGINE = MergeTree ORDER BY col;`. #57546 (Robert Schulze).
- Added a new setting `readonly` which can be used to specify that an S3 disk is read-only. It can be useful to create a table on a disk of `s3_plain` type, while having read-only access to the underlying S3 bucket. #57977 (Pengyuan Bian).
- The primary key analysis in MergeTree tables will now be applied to predicates that include the virtual column `_part_offset` (optionally with `_part`). This feature can serve as a special kind of secondary index. #58224 (Amos Bird).
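
The examples below are taken from the entries above (`PASTE JOIN`, `ORDER BY ALL`, `sqid`); `tab`, `col1` and `col2` are hypothetical names:

```sql
-- Join two subqueries by row number, without an ON clause.
SELECT *
FROM (SELECT number AS a FROM numbers(2)) AS t1
PASTE JOIN (SELECT number AS a FROM numbers(2) ORDER BY a DESC) AS t2;

-- Sort by all columns of the SELECT clause.
SELECT col1, col2 FROM tab ORDER BY ALL;

-- Generate a Sqid from a set of numbers.
SELECT sqid(125, 126);
```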
Performance Improvement
- Extract non-intersecting parts ranges from MergeTree tables during FINAL processing. That way we can avoid additional FINAL logic for these non-intersecting part ranges. When the number of duplicate values with the same primary key is low, performance will be almost the same as without FINAL. Improve reading performance for MergeTree FINAL when the `do_not_merge_across_partitions_select_final` setting is set (see the sketch after this list). #58120 (Maksim Kita).
- Made copying between S3 disks use server-side copy instead of copying through the buffer. Improves `BACKUP`/`RESTORE` operations and the `clickhouse-disks copy` command. #56744 (MikhailBurdukov).
- Hash JOIN respects the setting `max_joined_block_size_rows` and does not produce large blocks for `ALL JOIN`. #56996 (vdimir).
- Release memory for aggregation earlier. This may avoid unnecessary external aggregation. #57691 (Nikolai Kochetov).
- Improve performance of string serialization. #57717 (Maksim Kita).
- Support trivial count optimization for `Merge`-engine tables. #57867 (skyoct).
- Optimized aggregation in some cases. #57872 (Anton Popov).
- The `hasAny` function can now take advantage of the full-text skipping indices. #57878 (Jpnock).
- Function `if(cond, then, else)` (and its alias `cond ? then : else`) was optimized to use branch-free evaluation. #57885 (zhanglistar).
- MergeTree automatically derives the `do_not_merge_across_partitions_select_final` setting if the partition key expression contains only columns from the primary key expression. #58218 (Maksim Kita).
- Speed up `MIN` and `MAX` for native types. #58231 (Raúl Marín).
- Implement the `SLRU` cache policy for the filesystem cache. #57076 (Kseniia Sumarokova).
- The limit for the number of connections per endpoint for background fetches was raised from `15` to the value of the `background_fetches_pool_size` setting. The MergeTree-level setting `replicated_max_parallel_fetches_for_host` became obsolete. The MergeTree-level settings `replicated_fetches_http_connection_timeout`, `replicated_fetches_http_send_timeout` and `replicated_fetches_http_receive_timeout` are moved to the server level. The setting `keep_alive_timeout` is added to the list of server-level settings. #57523 (Nikita Mikhaylov).
- Make querying `system.filesystem_cache` not memory intensive. #57687 (Kseniia Sumarokova).
- Reduce memory usage on strings deserialization. #57787 (Maksim Kita).
- More efficient constructor for Enum - it makes sense when Enum has a boatload of values. #57887 (Duc Canh Le).
- An improvement for reading from the filesystem cache: always use the `pread` method. #57970 (Nikita Taranov).
- Add optimization for AND notEquals chains in the logical expression optimizer. This optimization is only available with the experimental Analyzer enabled. #58214 (Kevin Mingtarja).
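
A minimal sketch of the improved FINAL read path (the table name is hypothetical); the setting is the one referenced in the first entry of this list:

```sql
-- Reading with FINAL now avoids the extra merging logic for non-intersecting
-- part ranges; the setting below additionally keeps FINAL within partitions.
SELECT *
FROM replacing_table FINAL
SETTINGS do_not_merge_across_partitions_select_final = 1;
```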
Improvement
- Support for soft memory limit in Keeper. It will refuse requests if the memory usage is close to the maximum. #57271 (Han Fei). #57699 (Han Fei).
- Make inserts into distributed tables handle updated cluster configuration properly. When the list of cluster nodes is dynamically updated, the Directory Monitor of the distributed table will update it. #42826 (zhongyuankai).
- Do not allow creating a replicated table with inconsistent merge parameters. #56833 (Duc Canh Le).
- Show uncompressed size in `system.tables`. #56618. #57186 (Chen Lixiang).
- Add `skip_unavailable_shards` as a setting for `Distributed` tables that is similar to the corresponding query-level setting. Closes #43666. #57218 (Gagan Goel).
- The function `substring` (aliases: `substr`, `mid`) can now be used with `Enum` types. Previously, the first function argument had to be a value of type `String` or `FixedString`. This improves compatibility with 3rd party tools such as Tableau via the MySQL interface. #57277 (Serge Klochkov).
- Function `format` now supports arbitrary argument types (instead of only `String` and `FixedString` arguments). This is important to calculate `SELECT format('The {0} to all questions is {1}', 'answer', 42)`. #57549 (Robert Schulze).
- Allow using the `date_trunc` function with a case-insensitive first argument. Both cases are now supported: `SELECT date_trunc('day', now())` and `SELECT date_trunc('DAY', now())` (see the sketch after this list). #57624 (Yarik Briukhovetskyi).
- Better hints when a table doesn't exist. #57342 (Bharat Nallan).
- Allow overwriting the `max_partition_size_to_drop` and `max_table_size_to_drop` server settings at query time. #57452 (Jordi Villar).
- Slightly better inference of unnamed tuples in JSON formats. #57751 (Kruglov Pavel).
- Add support for the read-only flag when connecting to Keeper (fixes #53749). #57479 (Mikhail Koviazin).
- Fix possible distributed sends stuck due to "No such file or directory" (during recovering a batch from disk). Fix possible issues with `error_count` from `system.distribution_queue` (in case of `distributed_directory_monitor_max_sleep_time_ms` > 5 min). Introduce a profile event to track async INSERT failures - `DistributedAsyncInsertionFailures`. #57480 (Azat Khuzhin).
- Support PostgreSQL generated columns and default column values in `MaterializedPostgreSQL` (experimental feature). Closes #40449. #57568 (Kseniia Sumarokova).
- Allow applying some filesystem cache config settings changes without server restart. #57578 (Kseniia Sumarokova).
- Properly handle PostgreSQL table structure with empty array. #57618 (Mike Kot).
- Expose the total number of errors occurred since the last server restart as a `ClickHouseErrorMetric_ALL` metric. #57627 (Nikita Mikhaylov).
- Allow nodes in the configuration file with `from_env`/`from_zk` reference and a non-empty element with `replace=1`. #57628 (Azat Khuzhin).
- Add a table function `fuzzJSON` which allows generating a lot of malformed JSON for fuzzing. #57646 (Julia Kartseva).
- Allow IPv6 to UInt128 conversion and binary arithmetic. #57707 (Yakov Olkhovskiy).
- Add a setting for the async inserts deduplication cache - how long we wait for the cache update. Deprecate the setting `async_block_ids_cache_min_update_interval_ms`. Now the cache is updated only in case of conflicts. #57743 (alesapin).
- The `sleep()` function can now be cancelled with `KILL QUERY`. #57746 (Vitaly Baranov).
- Forbid `CREATE TABLE ... AS SELECT` queries for `Replicated` table engines in the experimental `Replicated` database because they are not supported. Reference #35408. #57796 (Nikolay Degterinsky).
- Fix and improve transforming queries for external databases, to recursively obtain all compatible predicates. #57888 (flynn).
- Support dynamic reloading of the filesystem cache size. Closes #57866. #57897 (Kseniia Sumarokova).
- Correctly support `system.stack_trace` for threads with blocked SIGRTMIN (these threads can exist in low-quality external libraries such as Apache rdkafka). #57907 (Azat Khuzhin). Also, send the signal to a thread only if it is not blocked, to avoid waiting for `storage_system_stack_trace_pipe_read_timeout_ms` when it does not make any sense. #58136 (Azat Khuzhin).
- Tolerate keeper failures in the quorum inserts' check. #57986 (Raúl Marín).
- Add max/peak RSS (`MemoryResidentMax`) into `system.asynchronous_metrics`. #58095 (Azat Khuzhin).
- Allow users to use S3-style links (`https://` and `s3://`) without specifying the region if it is not the default one. Also, the correct region is determined if the user specified a wrong one. #58148 (Yarik Briukhovetskyi).
- `clickhouse-format --obfuscate` will know about Settings, MergeTreeSettings, and time zones and keep their names unchanged. #58179 (Alexey Milovidov).
- Added an explicit `finalize()` function in `ZipArchiveWriter`. Simplify too complicated code in `ZipArchiveWriter`. This fixes #58074. #58202 (Vitaly Baranov).
- Make caches with the same path use the same cache objects. This behaviour existed before, but was broken in 23.4. If such caches with the same path have different sets of cache settings, an exception will be thrown that this is not allowed. #58264 (Kseniia Sumarokova).
- Parallel replicas (experimental feature): friendly settings #57542 (Igor Nikonov).
- Parallel replicas (experimental feature): announcement response handling improvement #57749 (Igor Nikonov).
- Parallel replicas (experimental feature): give more respect to `min_number_of_marks` in `ParallelReplicasReadingCoordinator` #57763 (Nikita Taranov).
- Parallel replicas (experimental feature): disable parallel replicas with IN (subquery) #58133 (Igor Nikonov).
- Parallel replicas (experimental feature): add profile event 'ParallelReplicasUsedCount' #58173 (Igor Nikonov).
- Non-POST requests such as HEAD will be readonly, similar to GET. #58060 (San).
- Add `bytes_uncompressed` column to `system.part_log` #58167 (Jordi Villar).
- Add base backup name to `system.backups` and `system.backup_log` tables #58178 (Pradeep Chhetri).
- Add support for specifying query parameters in the command line in clickhouse-local #58210 (Pradeep Chhetri).
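
Two of the improvements above, shown as queries (both examples come from the corresponding entries):

```sql
-- Case-insensitive unit name in date_trunc.
SELECT date_trunc('DAY', now());

-- format() with non-String arguments.
SELECT format('The {0} to all questions is {1}', 'answer', 42);
```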
Build/Testing/Packaging Improvement
- Randomize more settings #39663 (Anton Popov).
- Randomize disabled optimizations in CI #57315 (Raúl Marín).
- Allow usage of Azure-related table engines/functions on macOS. #51866 (Alexey Milovidov).
- ClickHouse Fast Test now uses Musl instead of GLibc. #57711 (Alexey Milovidov). The fully-static Musl build is available to download from the CI.
- Run ClickBench for every commit. This closes #57708. #57712 (Alexey Milovidov).
- Remove the usage of the harmful C/POSIX `select` function from external libraries. #57467 (Igor Nikonov).
- Settings only available in ClickHouse Cloud will also be present in the open-source ClickHouse build for convenience. #57638 (Nikita Mikhaylov).
Bug Fix (user-visible misbehavior in an official stable release)
- Fixed a possibility of sorting order breakage in TTL GROUP BY #49103 (Nikita Mikhaylov).
- Fix: split `lttb` bucket strategy, first bucket and last bucket should only contain a single point #57003 (FFish).
- Fix possible deadlock in the `Template` format during sync after error #57004 (Kruglov Pavel).
- Fix early stop while parsing a file with skipping lots of errors #57006 (Kruglov Pavel).
- Prevent dictionary's ACL bypass via the `dictionary` table function #57362 (Salvatore Mesoraca).
- Fix another case of a "non-ready set" error found by Fuzzer. #57423 (Nikolai Kochetov).
- Fix several issues regarding PostgreSQL `array_ndims` usage. #57436 (Ryan Jacobs).
- Fix RWLock inconsistency after write lock timeout #57454 (Vitaly Baranov). Fix RWLock inconsistency after write lock timeout (again) #57733 (Vitaly Baranov).
- Fix: don't exclude ephemeral column when building pushing to view chain #57461 (Yakov Olkhovskiy).
- MaterializedPostgreSQL (experimental feature): fix issue #41922, add test for #41923 #57515 (Kseniia Sumarokova).
- Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. #57538 (MikhailBurdukov).
- Fix crash in clickhouse-local #57553 (Nikolay Degterinsky).
- A fix for Hash JOIN. #57564 (vdimir).
- Fix possible error in PostgreSQL source #57567 (Kseniia Sumarokova).
- Fix type correction in Hash JOIN for nested LowCardinality. #57614 (vdimir).
- Avoid hangs of `system.stack_trace` by correctly prohibiting parallel reading from it. #57641 (Azat Khuzhin).
- Fix an error for aggregation of sparse columns with `any(...) RESPECT NULLS` #57710 (Azat Khuzhin).
- Fix unary operators parsing #57713 (Nikolay Degterinsky).
- Fix dependency loading for the experimental table engine `MaterializedPostgreSQL`. #57754 (Kseniia Sumarokova).
- Fix retries for disconnected nodes for BACKUP/RESTORE ON CLUSTER #57764 (Vitaly Baranov).
- Fix result of external aggregation in case of partially materialized projection #57790 (Anton Popov).
- Fix merge in aggregation functions with `*Map` combinator #57795 (Anton Popov).
- Disable `system.kafka_consumers` because it has a bug. #57822 (Azat Khuzhin).
- Fix LowCardinality keys support in Merge JOIN. #57827 (vdimir).
- A fix for `InterpreterCreateQuery` related to the sample block. #57855 (Maksim Kita).
- `addresses_expr` was ignored for named collections from PostgreSQL. #57874 (joelynch).
- Fix invalid memory access in BLAKE3 (Rust) #57876 (Raúl Marín). Then it was rewritten from Rust to C++ for better memory-safety. #57994 (Raúl Marín).
- Normalize function names in `CREATE INDEX` #57906 (Alexander Tokmakov).
- Fix handling of unavailable replicas before the first request happened #57933 (Nikita Taranov).
- Fix literal alias misclassification #57988 (Chen768959).
- Fix invalid preprocessing on Keeper #58069 (Antonio Andelic).
- Fix integer overflow in the `Poco` library, related to `UTF32Encoding` #58073 (Andrey Fedotov).
- Fix parallel replicas (experimental feature) in presence of a scalar subquery with a big integer value #58118 (Alexey Milovidov).
- Fix `accurateCastOrNull` for out-of-range `DateTime` #58139 (Andrey Zvonov).
- Fix possible `PARAMETER_OUT_OF_BOUND` error during subcolumns reading from a wide part in MergeTree #58175 (Kruglov Pavel).
- Fix a slow-down of CREATE VIEW with an enormous number of subqueries #58220 (Tao Wang).
- Fix parallel parsing for JSONCompactEachRow #58181 (Alexey Milovidov). #58250 (Kruglov Pavel).
ClickHouse release 23.11, 2023-12-06
Backward Incompatible Change
- The default ClickHouse server configuration file has enabled `access_management` (user manipulation by SQL queries) and `named_collection_control` (manipulation of named collections by SQL queries) for the `default` user by default. This closes #56482. #56619 (Alexey Milovidov).
- Multiple improvements for `RESPECT NULLS`/`IGNORE NULLS` for window functions. If you use them as aggregate functions and store the states of aggregate functions with these modifiers, they might become incompatible. #57189 (Raúl Marín).
- Remove optimization `optimize_move_functions_out_of_any`. #57190 (Raúl Marín).
- Formatters `%l`/`%k`/`%c` in function `parseDateTime` are now able to parse hours/months without leading zeros, e.g. `SELECT parseDateTime('2023-11-26 8:14', '%F %k:%i')` now works (see the sketch after this list). Set `parsedatetime_parse_without_leading_zeros = 0` to restore the previous behavior which required two digits. Function `formatDateTime` is now also able to print hours/months without leading zeros. This is controlled by setting `formatdatetime_format_without_leading_zeros` but off by default to not break existing use cases. #55872 (Azat Khuzhin).
- You can no longer use the aggregate function `avgWeighted` with arguments of type `Decimal`. Workaround: convert arguments to `Float64`. This closes #43928. This closes #31768. This closes #56435. If you have used this function inside materialized views or projections with `Decimal` arguments, contact support@clickhouse.com. Fixed error in aggregate function `sumMap` and made it slower around 1.5..2 times. It does not matter because the function is garbage anyway. This closes #54955. This closes #53134. This closes #55148. Fix a bug in function `groupArraySample` - it used the same random seed in case more than one aggregate state is generated in a query. #56350 (Alexey Milovidov).
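
A sketch of the relaxed `parseDateTime` behavior described above:

```sql
-- A single-digit hour is now accepted with %k.
SELECT parseDateTime('2023-11-26 8:14', '%F %k:%i');

-- Restore the previous behavior that required two digits.
SET parsedatetime_parse_without_leading_zeros = 0;
```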
New Feature
- Added server setting
async_load_databases
for asynchronous loading of databases and tables. Speeds up the server start time. Applies to databases withOrdinary
,Atomic
andReplicated
engines. Their tables load metadata asynchronously. Query to a table increases the priority of the load job and waits for it to be done. Added a new tablesystem.asynchronous_loader
for introspection. #49351 (Sergei Trifonov). - Add system table
blob_storage_log
. It allows auditing all the data written to S3 and other object storages. #52918 (vdimir). - Use statistics to order prewhere conditions better. #53240 (Han Fei).
- Added support for compression in the Keeper's protocol. It can be enabled on the ClickHouse side by using this flag
use_compression
insidezookeeper
section. Keep in mind that only ClickHouse Keeper supports compression, while Apache ZooKeeper does not. Resolves #49507. #54957 (SmitaRKulkarni). - Introduce the feature
storage_metadata_write_full_object_key
. If it is set astrue
then metadata files are written with the new format. With that format ClickHouse stores full remote object key in the metadata file which allows better flexibility and optimization. #55566 (Sema Checherinda). - Add new settings and syntax to protect named collections' fields from being overridden. This is meant to prevent a malicious user from obtaining unauthorized access to secrets. #55782 (Salvatore Mesoraca).
- Add
hostname
column to all system log tables - it is useful if you make the system tables replicated, shared, or distributed. #55894 (Bharat Nallan). - Add
CHECK ALL TABLES
query. #56022 (vdimir). - Added function
fromDaysSinceYearZero
which is similar to MySQL'sFROM_DAYS
. E.g.SELECT fromDaysSinceYearZero(739136)
returns2023-09-08
. #56088 (Joanna Hulboj). - Add an external Python tool to view backups and to extract information from them without using ClickHouse. #56268 (Vitaly Baranov).
- Implement a new setting called
preferred_optimize_projection_name
. If it is set to a non-empty string, the specified projection would be used if possible instead of choosing from all the candidates. #56309 (Yarik Briukhovetskyi). - Add 4-letter command for yielding/resigning leadership (https://github.com/ClickHouse/ClickHouse/issues/56352). #56354 (Pradeep Chhetri). #56620 (Pradeep Chhetri).
- Added a new SQL function,
arrayRandomSample(arr, k)
which returns a sample of k elements from the input array. Similar functionality could previously be achieved only with less convenient syntax, e.g.SELECT arrayReduce('groupArraySample(3)', range(10))
. #56416 (Robert Schulze). - Added support for
Float16
type data to use in.npy
files. Closes #56344. #56424 (Yarik Briukhovetskyi). - Added a system view
information_schema.statistics
for better compatibility with Tableau Online. #56425 (Serge Klochkov). - Add
system.symbols
table useful for introspection of the binary. #56548 (Alexey Milovidov). - Configurable dashboards. Queries for charts are now loaded using a query, which by default uses a new
system.dashboards
table. #56771 (Sergei Trifonov). - Introduce
fileCluster
table function - it is useful if you mount a shared filesystem (NFS and similar) into theuser_files
directory. #56868 (Andrey Zvonov). - Add
_size
virtual column with file size in bytes tos3/file/hdfs/url/azureBlobStorage
engines. #57126 (Kruglov Pavel). - Expose the number of errors for each error code occurred on a server since last restart from the Prometheus endpoint. #57209 (Nikita Mikhaylov).
- ClickHouse keeper reports its running availability zone at
/keeper/availability-zone
path. This can be configured via<availability_zone><value>us-west-1a</value></availability_zone>
. #56715 (Jianfei Hu). - Make ALTER materialized_view MODIFY QUERY non experimental and deprecate
allow_experimental_alter_materialized_view_structure
setting. Fixes #15206. #57311 (alesapin). - Setting
join_algorithm
respects specified order #51745 (vdimir). - Add support for the well-known Protobuf types in the Protobuf format. #56741 (János Benjamin Antal).
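
Two of the new functions above, shown as queries (the first example and its result come directly from the corresponding entry):

```sql
-- Similar to MySQL's FROM_DAYS.
SELECT fromDaysSinceYearZero(739136);   -- 2023-09-08

-- Take 3 random elements from an array.
SELECT arrayRandomSample(range(10), 3);
```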
Performance Improvement
- Adaptive timeouts for interacting with S3. The first attempt is made with low send and receive timeouts. #56314 (Sema Checherinda).
- Increase the default value of
max_concurrent_queries
from 100 to 1000. This makes sense when there is a large number of connecting clients, which are slowly sending or receiving data, so the server is not limited by CPU, or when the number of CPU cores is larger than 100. Also, enable the concurrency control by default, and set the desired number of query processing threads in total as twice the number of CPU cores. It improves performance in scenarios with a very large number of concurrent queries. #46927 (Alexey Milovidov). - Support parallel evaluation of window functions. Fixes #34688. #39631 (Dmitry Novik).
Numbers
table engine (of thesystem.numbers
table) now analyzes the condition to generate the needed subset of data, like table's index. #50909 (JackyWoo).- Improved the performance of filtering by
IN (...)
condition forMerge
table engine. #54905 (Nikita Taranov). - An improvement which takes place when the filesystem cache is full and there are big reads. #55158 (Kseniia Sumarokova).
- Add ability to disable checksums for S3 to avoid excessive pass over the file (this is controlled by the setting
s3_disable_checksum
). #55559 (Azat Khuzhin). - Now we read synchronously from remote tables when data is in page cache (like we do for local tables). It is faster, it doesn't require synchronisation inside the thread pool, and doesn't hesitate to do
seek
-s on local FS, and reduces CPU wait. #55841 (Nikita Taranov). - Optimization for getting value from
map
,arrayElement
. It will bring about 30% speedup. - reduce the reserved memory - reduce theresize
call. #55957 (lgbo). - Optimization of multi-stage filtering with AVX-512. The performance experiments of the OnTime dataset on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) show that this change could bring the improvements of 7.4%, 5.9%, 4.7%, 3.0%, and 4.6% to the QPS of the query Q2, Q3, Q4, Q5 and Q6 respectively while having no impact on others. #56079 (Zhiguo Zhou).
- Limit the number of threads busy inside the query profiler. If there are more - they will skip profiling. #56105 (Alexey Milovidov).
- Decrease the amount of virtual function calls in window functions. #56120 (Maksim Kita).
- Allow recursive Tuple field pruning in ORC data format to speed up scanning. #56122 (李扬).
- Trivial count optimization for the `Npy` data format: queries like `select count() from 'data.npy'` will work much faster because the results are cached (see the sketch after this list). #56304 (Yarik Briukhovetskyi).
- Queries with aggregation and a large number of streams will use less memory during the plan's construction. #57074 (Alexey Milovidov).
- Improve performance of executing queries for use cases with many users and highly concurrent queries (>2000 QPS) by optimizing the access to ProcessList. #57106 (Andrej Hoos).
- Trivial improvement on array join, reuse some intermediate results. #57183 (李扬).
- There are cases when stack unwinding was slow. Not anymore. #57221 (Alexey Milovidov).
- Now we use default read pool for reading from external storage when
max_streams = 1
. It is beneficial when read prefetches are enabled. #57334 (Nikita Taranov). - Keeper improvement: improve memory-usage during startup by delaying log preprocessing. #55660 (Antonio Andelic).
- Improved performance of glob matching for
File
andHDFS
storages. #56141 (Andrey Zvonov). - Posting lists in experimental full text indexes are now compressed which reduces their size by 10-30%. #56226 (Harry Lee).
- Parallelise
BackupEntriesCollector
in backups. #56312 (Kseniia Sumarokova).
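
The trivial count optimization for the Npy format mentioned above, as a query (the file name is hypothetical; the `file()` form follows the Npy example in the 23.10 release notes below):

```sql
-- The count() result is cached, so the file is not scanned repeatedly.
SELECT count() FROM file('data.npy', Npy);
```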
Improvement
- Add a new
MergeTree
settingadd_implicit_sign_column_constraint_for_collapsing_engine
(disabled by default). When enabled, it adds an implicit CHECK constraint forCollapsingMergeTree
tables that restricts the value of theSign
column to be only -1 or 1. #56701. #56986 (Kevin Mingtarja). - Enable adding new disk to storage configuration without restart. #56367 (Duc Canh Le).
- Support creating and materializing index in the same alter query, also support "modify TTL" and "materialize TTL" in the same query. Closes #55651. #56331 (flynn).
- Add a new table function named
fuzzJSON
with rows containing perturbed versions of the source JSON string with random variations. #56490 (Julia Kartseva). - Engine
Merge
filters the records according to the row policies of the underlying tables, so you don't have to create another row policy on aMerge
table. #50209 (Ilya Golshtein). - Add a setting
max_execution_time_leaf
to limit the execution time on shard for distributed query, andtimeout_overflow_mode_leaf
to control the behaviour if timeout happens. #51823 (Duc Canh Le). - Add ClickHouse setting to disable tunneling for HTTPS requests over HTTP proxy. #55033 (Arthur Passos).
- Set `background_fetches_pool_size` to 16 and `background_schedule_pool_size` to 512, which is better for production usage with frequent small insertions. #54327 (Denny Crane).
- When reading data from a CSV file in which a line ends with `\r` not followed by `\n`, ClickHouse would raise the exception `Cannot parse CSV format: found \r (CR) not followed by \n (LF). Line must end by \n (LF) or \r\n (CR LF) or \n\r.` In ClickHouse, a CSV line must end with `\n`, `\r\n` or `\n\r`, so `\r` must be followed by `\n`; however, in some situations the CSV input data is abnormal and, as above, `\r` appears at the end of a line. #54340 (KevinyhZou).
- Update Arrow library to release-13.0.0 that supports new encodings. Closes #44505. #54800 (Kruglov Pavel).
- Improve performance of ON CLUSTER queries by removing heavy system calls to get all network interfaces when looking for local ip address in the DDL entry hosts list. #54909 (Duc Canh Le).
- Fixed accounting of memory allocated before attaching a thread to a query or a user. #56089 (Nikita Taranov).
- Add support for
LARGE_LIST
in Apache Arrow formats. #56118 (edef). - Allow manual compaction of
EmbeddedRocksDB
viaOPTIMIZE
query. #56225 (Azat Khuzhin). - Add ability to specify BlockBasedTableOptions for
EmbeddedRocksDB
tables. #56264 (Azat Khuzhin). SHOW COLUMNS
now displays MySQL's equivalent data type name when the connection was made through the MySQL protocol. Previously, this was the case when settinguse_mysql_types_in_show_columns = 1
. The setting is retained but made obsolete. #56277 (Robert Schulze).- Fixed possible
The local set of parts of table doesn't look like the set of parts in ZooKeeper
error if server was restarted just afterTRUNCATE
orDROP PARTITION
. #56282 (Alexander Tokmakov). - Fixed handling of non-const query strings in functions
formatQuery
/formatQuerySingleLine
. Also addedOrNull
variants of both functions that return a NULL when a query cannot be parsed instead of throwing an exception. #56327 (Robert Schulze). - Allow backup of materialized view with dropped inner table instead of failing the backup. #56387 (Kseniia Sumarokova).
- Queries to
system.replicas
initiate requests to ZooKeeper when certain columns are queried. When there are thousands of tables these requests might produce a considerable load on ZooKeeper. If there are multiple simultaneous queries tosystem.replicas
they do same requests multiple times. The change is to "deduplicate" requests from concurrent queries. #56420 (Alexander Gololobov). - Fix translation to MySQL compatible query for querying external databases. #56456 (flynn).
- Add support for backing up and restoring tables using
KeeperMap
engine. #56460 (Antonio Andelic). - 404 response for CompleteMultipartUpload has to be rechecked. Operation could be done on server even if client got timeout or other network errors. The next retry of CompleteMultipartUpload receives 404 response. If the object key exists that operation is considered as successful. #56475 (Sema Checherinda).
- Enable the HTTP OPTIONS method by default - it simplifies requesting ClickHouse from a web browser. #56483 (Alexey Milovidov).
- The value for
dns_max_consecutive_failures
was changed by mistake in #46550 - this is reverted and adjusted to a better value. Also, increased the HTTP keep-alive timeout to a reasonable value from production. #56485 (Alexey Milovidov). - Load base backups lazily (a base backup won't be loaded until it's needed). Also add some log message and profile events for backups. #56516 (Vitaly Baranov).
- Setting
query_cache_store_results_of_queries_with_nondeterministic_functions
(with valuesfalse
ortrue
) was marked obsolete. It was replaced by settingquery_cache_nondeterministic_function_handling
, a three-valued enum that controls how the query cache handles queries with non-deterministic functions: a) throw an exception (default behavior), b) save the non-deterministic query result regardless, or c) ignore, i.e. don't throw an exception and don't cache the result. #56519 (Robert Schulze). - Rewrite equality with
is null
check in JOIN ON section. Experimental Analyzer only. #56538 (vdimir). - Function
concat
now supports arbitrary argument types (instead of only String and FixedString arguments). This makes it behave more similar to MySQLconcat
implementation. For example,SELECT concat('ab', 42)
now returnsab42
. #56540 (Serge Klochkov). - Allow getting cache configuration from 'named_collection' section in config or from SQL created named collections. #56541 (Kseniia Sumarokova).
- PostgreSQL database engine: Make the removal of outdated tables less aggressive with unsuccessful postgres connection. #56609 (jsc0218).
- It took too much time to connect to PostgreSQL when the URL was not right, so the relevant query was stuck there and got cancelled. #56648 (jsc0218).
- Keeper improvement: disable compressed logs by default in Keeper. #56763 (Antonio Andelic).
- Add config setting
wait_dictionaries_load_at_startup
. #56782 (Vitaly Baranov). - There was a potential vulnerability in previous ClickHouse versions: if a user has connected and unsuccessfully tried to authenticate with the "interserver secret" method, the server didn't terminate the connection immediately but continued to receive and ignore the leftover packets from the client. While these packets are ignored, they are still parsed, and if they use a compression method with another known vulnerability, it will lead to exploitation of it without authentication. This issue was found with ClickHouse Bug Bounty Program by https://twitter.com/malacupa. #56794 (Alexey Milovidov).
- Fetching a part waits when that part is fully committed on remote replica. It is better not send part in PreActive state. In case of zero copy this is mandatory restriction. #56808 (Sema Checherinda).
- Fix possible postgresql logical replication conversion error when using experimental
MaterializedPostgreSQL
. #53721 (takakawa). - Implement user-level setting
alter_move_to_space_execute_async
which allow to execute queriesALTER TABLE ... MOVE PARTITION|PART TO DISK|VOLUME
asynchronously. The size of pool for background executions is controlled bybackground_move_pool_size
. Default behavior is synchronous execution. Fixes #47643. #56809 (alesapin). - Able to filter by engine when scanning system.tables, avoid unnecessary (potentially time-consuming) connection. #56813 (jsc0218).
- Show
total_bytes
andtotal_rows
in system tables for RocksDB storage. #56816 (Aleksandr Musorin). - Allow basic commands in ALTER for TEMPORARY tables. #56892 (Sergey).
- LZ4 compression. Buffer compressed block in a rare case when out buffer capacity is not enough for writing compressed block directly to out's buffer. #56938 (Sema Checherinda).
- Add metrics for the number of queued jobs, which is useful for the IO thread pool. #56958 (Alexey Milovidov).
- Add a setting for PostgreSQL table engine setting in the config file. Added a check for the setting Added documentation around the additional setting. #56959 (Peignon Melvyn).
- Function
concat
can now be called with a single argument, e.g.,SELECT concat('abc')
. This makes its behavior more consistent with MySQL's concat implementation. #57000 (Serge Klochkov). - Signs all
x-amz-*
headers as required by AWS S3 docs. #57001 (Arthur Passos). - Function
fromDaysSinceYearZero
(alias:FROM_DAYS
) can now be used with unsigned and signed integer types (previously, it had to be an unsigned integer). This improve compatibility with 3rd party tools such as Tableau Online. #57002 (Serge Klochkov). - Add
system.s3queue_log
to default config. #57036 (Kseniia Sumarokova). - Change the default for
wait_dictionaries_load_at_startup
to true, and use this setting only ifdictionaries_lazy_load
is false. #57133 (Vitaly Baranov). - Check dictionary source type on creation even if
dictionaries_lazy_load
is enabled. #57134 (Vitaly Baranov). - Plan-level optimizations can now be enabled/disabled individually. Previously, it was only possible to disable them all. The setting which previously did that (
query_plan_enable_optimizations
) is retained and can still be used to disable all optimizations. #57152 (Robert Schulze). - The server's exit code will correspond to the exception code. For example, if the server cannot start due to memory limit, it will exit with the code 241 = MEMORY_LIMIT_EXCEEDED. In previous versions, the exit code for exceptions was always 70 = Poco::Util::ExitCode::EXIT_SOFTWARE. #57153 (Alexey Milovidov).
- Do not demangle and symbolize stack frames from
functional
C++ header. #57201 (Mike Kot). - HTTP server page
/dashboard
now supports charts with multiple lines. #57236 (Sergei Trifonov). - The
max_memory_usage_in_client
command line option supports a string value with a suffix (K, M, G, etc). Closes #56879. #57273 (Yarik Briukhovetskyi). - Bumped Intel QPL (used by codec
DEFLATE_QPL
) from v1.2.0 to v1.3.1 . Also fixed a bug in case of BOF (Block On Fault) = 0, changed to handle page faults by falling back to SW path. #57291 (jasperzhu). - Increase default
replicated_deduplication_window
of MergeTree settings from 100 to 1k. #57335 (sichenzhao). - Stop using
INCONSISTENT_METADATA_FOR_BACKUP
that much. If possible prefer to continue scanning instead of stopping and starting the scanning for backup from the beginning. #57385 (Vitaly Baranov).
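
The `concat` changes above, shown as queries (the result comments restate the examples from the entries):

```sql
-- Non-String argument types are now supported.
SELECT concat('ab', 42);   -- 'ab42'

-- concat can also be called with a single argument.
SELECT concat('abc');
```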
Build/Testing/Packaging Improvement
- Add SQLLogic test. #56078 (Han Fei).
- Make
clickhouse-local
andclickhouse-client
available under short names (ch
,chl
,chc
) for usability. #56634 (Alexey Milovidov). - Optimized build size further by removing unused code from external libraries. #56786 (Alexey Milovidov).
- Add automatic check that there are no large translation units. #56559 (Alexey Milovidov).
- Lower the size of the single-binary distribution. This closes #55181. #56617 (Alexey Milovidov).
- Information about the sizes of every translation unit and binary file after each build will be sent to the CI database in ClickHouse Cloud. This closes #56107. #56636 (Alexey Milovidov).
- Certain files of "Apache Arrow" library (which we use only for non-essential things like parsing the arrow format) were rebuilt all the time regardless of the build cache. This is fixed. #56657 (Alexey Milovidov).
- Avoid recompiling translation units depending on the autogenerated source file about version. #56660 (Alexey Milovidov).
- Tracing data of the linker invocations will be sent to the CI database in ClickHouse Cloud. #56725 (Alexey Milovidov).
- Use DWARF 5 debug symbols for the clickhouse binary (was DWARF 4 previously). #56770 (Michael Kolupaev).
- Add a new build option
SANITIZE_COVERAGE
. If it is enabled, the code is instrumented to track the coverage. The collected information is available inside ClickHouse with: (1) a new functioncoverage
that returns an array of unique addresses in the code found after the previous coverage reset; (2)SYSTEM RESET COVERAGE
query that resets the accumulated data. This allows us to compare the coverage of different tests, including differential code coverage. Continuation of #20539. #56102 (Alexey Milovidov). - Some of the stack frames might not be resolved when collecting stacks. In such cases the raw address might be helpful. #56267 (Alexander Gololobov).
- Add an option to disable
libssh
. #56333 (Alexey Milovidov). - Enable temporary_data_in_cache in S3 tests in CI. #48425 (vdimir).
- Set the max memory usage for clickhouse-client (
1G
) in the CI. #56873 (Nikita Mikhaylov).
Bug Fix (user-visible misbehavior in an official stable release)
- Fix experimental Analyzer - insertion from SELECT with a subquery referencing the insertion table should process only the insertion block. #50857 (Yakov Olkhovskiy).
- Fix a bug in
str_to_map
function. #56423 (Arthur Passos). - Keeper
reconfig
: add timeout before yielding/taking leadership #53481 (Mike Kot). - Fix incorrect header in grace hash join and filter pushdown #53922 (vdimir).
- Select from system tables when table based on table function. #55540 (MikhailBurdukov).
- RFC: Fix "Cannot find column X in source stream" for Distributed queries with LIMIT BY #55836 (Azat Khuzhin).
- Fix 'Cannot read from file:' while running client in a background #55976 (Kruglov Pavel).
- Fix clickhouse-local exit on bad send_logs_level setting #55994 (Kruglov Pavel).
- Bug fix explain ast with parameterized view #56004 (SmitaRKulkarni).
- Fix a crash during table loading on startup #56232 (Nikolay Degterinsky).
- Fix ClickHouse-sourced dictionaries with an explicit query #56236 (Nikolay Degterinsky).
- Fix segfault in signal handler for Keeper #56266 (Antonio Andelic).
- Fix incomplete query result for UNION in view() function. #56274 (Nikolai Kochetov).
- Fix inconsistency of "cast('0' as DateTime64(3))" and "cast('0' as Nullable(DateTime64(3)))" #56286 (李扬).
- Fix rare race condition related to Memory allocation failure #56303 (alesapin).
- Fix restore from backup with
flatten_nested
anddata_type_default_nullable
#56306 (Kseniia Sumarokova). - Fix crash in case of adding a column with type Object(JSON) #56307 (Nikita Mikhaylov).
- Fix crash in filterPushDown #56380 (vdimir).
- Fix restore from backup with mat view and dropped source table #56383 (Kseniia Sumarokova).
- Fix segfault during Kerberos initialization #56401 (Nikolay Degterinsky).
- Fix buffer overflow in T64 #56434 (Alexey Milovidov).
- Fix nullable primary key in final (2) #56452 (Amos Bird).
- Fix ON CLUSTER queries without database on initial node #56484 (Nikolay Degterinsky).
- Fix startup failure due to TTL dependency #56489 (Nikolay Degterinsky).
- Fix ALTER COMMENT queries ON CLUSTER #56491 (Nikolay Degterinsky).
- Fix ALTER COLUMN with ALIAS #56493 (Nikolay Degterinsky).
- Fix empty NAMED COLLECTIONs #56494 (Nikolay Degterinsky).
- Fix two cases of projection analysis. #56502 (Amos Bird).
- Fix handling of aliases in query cache #56545 (Robert Schulze).
- Fix conversion from
Nullable(Enum)
toNullable(String)
#56644 (Nikolay Degterinsky). - More reliable log handling in Keeper #56670 (Antonio Andelic).
- Fix configuration merge for nodes with substitution attributes #56694 (Konstantin Bogdanov).
- Fix duplicate usage of table function input(). #56695 (Nikolai Kochetov).
- Fix: RabbitMQ OpenSSL dynamic loading issue #56703 (Igor Nikonov).
- Fix crash in GCD codec in case when zeros present in data #56704 (Nikita Mikhaylov).
- Fix 'mutex lock failed: Invalid argument' in clickhouse-local during insert into function #56710 (Kruglov Pavel).
- Fix Date text parsing in optimistic path #56765 (Kruglov Pavel).
- Fix crash in FPC codec #56795 (Alexey Milovidov).
- DatabaseReplicated: fix DDL query timeout after recovering a replica #56796 (Alexander Tokmakov).
- Fix incorrect nullable columns reporting in MySQL binary protocol #56799 (Serge Klochkov).
- Support Iceberg metadata files for metastore tables #56810 (Kruglov Pavel).
- Fix TSAN report under transform #56817 (Raúl Marín).
- Fix SET query and SETTINGS formatting #56825 (Nikolay Degterinsky).
- Fix failure to start due to table dependency in joinGet #56828 (Nikolay Degterinsky).
- Fix flattening existing Nested columns during ADD COLUMN #56830 (Nikolay Degterinsky).
- Fix allow cr end of line for csv #56901 (KevinyhZou).
- Fix
tryBase64Decode
with invalid input #56913 (Robert Schulze). - Fix generating deep nested columns in CapnProto/Protobuf schemas #56941 (Kruglov Pavel).
- Prevent incompatible ALTER of projection columns #56948 (Amos Bird).
- Fix sqlite file path validation #56984 (San).
- S3Queue: fix metadata reference increment #56990 (Kseniia Sumarokova).
- S3Queue minor fix #56999 (Kseniia Sumarokova).
- Fix file path validation for DatabaseFileSystem #57029 (San).
- Fix
fuzzBits
withARRAY JOIN
#57033 (Antonio Andelic). - Fix Nullptr dereference in partial merge join with joined_subquery_re… #57048 (vdimir).
- Fix race condition in RemoteSource #57052 (Raúl Marín).
- Implement
bitHammingDistance
for big integers #57073 (Alexey Milovidov). - S3-style links bug fix #57075 (Yarik Briukhovetskyi).
- Fix JSON_QUERY function with multiple numeric paths #57096 (KevinyhZou).
- Fix buffer overflow in Gorilla codec #57107 (Nikolay Degterinsky).
- Close interserver connection on any exception before authentication #57142 (Antonio Andelic).
- Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column #57147 (Nikolay Degterinsky).
- Fix incorrect JOIN plan optimization with partially materialized normal projection #57196 (Amos Bird).
- Ignore comments when comparing column descriptions #57259 (Antonio Andelic).
- Fix
ReadonlyReplica
metric for all cases #57267 (Antonio Andelic). - Background merges correctly use temporary data storage in the cache #57275 (vdimir).
- Keeper fix for changelog and snapshots #57299 (Antonio Andelic).
- Ignore finished ON CLUSTER tasks if hostname changed #57339 (Alexander Tokmakov).
- MergeTree mutations reuse source part index granularity #57352 (Maksim Kita).
- FS cache: add a limit for background download #57424 (Kseniia Sumarokova).
ClickHouse release 23.10, 2023-11-02
Backward Incompatible Change
- There is no longer an option to automatically remove broken data parts. This closes #55174. #55184 (Alexey Milovidov). #55557 (Jihyuk Bok).
- The obsolete in-memory data parts can no longer be read from the write-ahead log. If you have configured in-memory parts before, they have to be removed before the upgrade. #55186 (Alexey Milovidov).
- Remove the integration with Meilisearch. Reason: it was compatible only with the old version 0.18. The recent version of Meilisearch changed the protocol and does not work anymore. Note: we would appreciate it if you help to return it back. #55189 (Alexey Milovidov).
- Rename directory monitor concept into background INSERT. All the settings
*directory_monitor*
had been renamed todistributed_background_insert*
. Backward compatibility should be preserved (since old settings had been added as an alias). #55978 (Azat Khuzhin). - Do not interpret the
send_timeout
set on the client side as thereceive_timeout
on the server side and vise-versa. #56035 (Azat Khuzhin). - Comparison of time intervals with different units will throw an exception. This closes #55942. You might have occasionally rely on the previous behavior when the underlying numeric values were compared regardless of the units. #56090 (Alexey Milovidov).
- Rewrote the experimental `S3Queue` table engine completely: changed the way we keep information in ZooKeeper, which allows making fewer ZooKeeper requests; added caching of ZooKeeper state in cases when we know the state will not change; improved the polling from the S3 process to make it less aggressive; changed the way the TTL and the max set for tracked files are maintained, now it is a background process. Added `system.s3queue` and `system.s3queue_log` tables. Closes #54998. #54422 (Kseniia Sumarokova).
- Arbitrary paths on the HTTP endpoint are no longer interpreted as a request to the `/query` endpoint. #55521 (Konstantin Bogdanov).
New Feature
- Add function
arrayFold(accumulator, x1, ..., xn -> expression, initial, array1, ..., arrayn)
which applies a lambda function to multiple arrays of the same cardinality and collects the result in an accumulator. #49794 (Lirikl). - Support for
Npy
format.SELECT * FROM file('example_array.npy', Npy)
. #55982 (Yarik Briukhovetskyi). - If a table has a space-filling curve in its key, e.g.,
ORDER BY mortonEncode(x, y)
, the conditions on its arguments, e.g.,x >= 10 AND x <= 20 AND y >= 20 AND y <= 30
can be used for indexing. A settinganalyze_index_with_space_filling_curves
is added to enable or disable this analysis. This closes #41195. Continuation of #4538. Continuation of #6286. Continuation of #28130. Continuation of #41753. #55642 (Alexey Milovidov). - A new setting called
force_optimize_projection_name
, it takes a name of projection as an argument. If it's value set to a non-empty string, ClickHouse checks that this projection is used in the query at least once. Closes #55331. #56134 (Yarik Briukhovetskyi). - Support asynchronous inserts with external data via native protocol. Previously it worked only if data is inlined into query. #54730 (Anton Popov).
- Added aggregation function
lttb
which uses the Largest-Triangle-Three-Buckets algorithm for downsampling data for visualization. #53145 (Sinan). - Query
CHECK TABLE
has better performance and usability (sends progress updates, cancellable). Support checking particular part withCHECK TABLE ... PART 'part_name'
. #53404 (vdimir). - Added function
jsonMergePatch
. When working with JSON data as strings, it provides a way to merge these strings (of JSON objects) together to form a single string containing a single JSON object. #54364 (Memo). - The second part of Kusto Query Language dialect support. Phase 1 implementation has been merged. #42510 (larryluogit).
- Added a new SQL function,
arrayRandomSample(arr, k)
which returns a sample of k elements from the input array. Similar functionality could previously be achieved only with less convenient syntax, e.g. "SELECT arrayReduce('groupArraySample(3)', range(10))". #54391 (itayisraelov). - Introduce
-ArgMin
/-ArgMax
aggregate combinators which allow to aggregate by min/max values only. One use case can be found in #54818. This PR also reorganize combinators into dedicated folder. #54947 (Amos Bird). - Allow to drop cache for Protobuf format with
SYSTEM DROP SCHEMA FORMAT CACHE [FOR Protobuf]
. #55064 (Aleksandr Musorin). - Add external HTTP Basic authenticator. #55199 (Aleksei Filatov).
- Added function `byteSwap` which reverses the bytes of unsigned integers. This is particularly useful for reversing values of types which are represented as unsigned integers internally, such as IPv4 (see the sketch after this list). #55211 (Priyansh Agrawal).
- Added function `formatQuery` which returns a formatted version (possibly spanning multiple lines) of a SQL query string. Also added function `formatQuerySingleLine` which does the same but the returned string will not contain linebreaks. #55239 (Salvatore Mesoraca).
- Added `DWARF` input format that reads debug symbols from an ELF executable/library/object file. #55450 (Michael Kolupaev).
- Allow to save unparsed records and errors in RabbitMQ, NATS and FileLog engines. Add virtual columns `_error` and `_raw_message` (for NATS and RabbitMQ), `_raw_record` (for FileLog) that are filled when ClickHouse fails to parse a new record. The behaviour is controlled by the storage settings `nats_handle_error_mode` for NATS, `rabbitmq_handle_error_mode` for RabbitMQ, and `handle_error_mode` for FileLog, similar to `kafka_handle_error_mode`. If it's set to `default`, an exception will be thrown when ClickHouse fails to parse a record; if it's set to `stream`, the error and raw record will be saved into virtual columns. Closes #36035. #55477 (Kruglov Pavel).
- Keeper client improvement: add `get_all_children_number` command that returns the number of all children nodes under a specific path. #55485 (guoxiaolong).
- Keeper client improvement: add `get_direct_children_number` command that returns the number of direct children nodes under a path. #55898 (xuzifu666).
- Add statement `SHOW SETTING setting_name` which is a simpler version of the existing statement `SHOW SETTINGS`. #55979 (Maksim Kita).
- Added fields `substreams` and `filenames` to the `system.parts_columns` table. #55108 (Anton Popov).
- Add support for `SHOW MERGES` query. #55815 (megao).
- Introduce a setting `create_table_empty_primary_key_by_default` for default `ORDER BY ()`. #55899 (Srikanth Chekuri).
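
A short sketch of two of the functions above; the results in the comments are illustrative, not authoritative:

```sql
-- Reverse the byte order of an unsigned integer.
SELECT byteSwap(toUInt32(0x01020304));   -- 0x04030201

-- Return a pretty-printed version of a SQL query string.
SELECT formatQuery('select   1 as x');   -- SELECT 1 AS x
```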
Performance Improvement
- Add option
query_plan_preserve_num_streams_after_window_functions
to preserve the number of streams after evaluating window functions to allow parallel stream processing. #50771 (frinkr). - Release more streams if data is small. #53867 (Jiebin Sun).
- RoaringBitmaps being optimized before serialization. #55044 (UnamedRus).
- Posting lists in inverted indexes are now optimized to use the smallest possible representation for internal bitmaps. Depending on the repetitiveness of the data, this may significantly reduce the space consumption of inverted indexes. #55069 (Harry Lee).
- Fix contention on Context lock, this significantly improves performance for a lot of short-running concurrent queries. #55121 (Maksim Kita).
- Improved the performance of inverted index creation by 30%. This was achieved by replacing
std::unordered_map
withabsl::flat_hash_map
. #55210 (Harry Lee). - Support ORC filter push down (rowgroup level). #55330 (李扬).
- Improve performance of external aggregation with a lot of temporary files. #55489 (Maksim Kita).
- Set a reasonable size for the marks cache for secondary indices by default to avoid loading the marks over and over again. #55654 (Alexey Milovidov).
- Avoid unnecessary reconstruction of index granules when reading skip indexes. This addresses #55653. #55683 (Amos Bird).
- Cache CAST function in set during execution to improve the performance of function
IN
when set element type doesn't exactly match column type. #55712 (Duc Canh Le). - Performance improvement for
ColumnVector::insertMany
andColumnVector::insertManyFrom
. #55714 (frinkr). - Optimized Map subscript operations by predicting the next row's key position and reduce the comparisons. #55929 (lgbo).
- Support struct fields pruning in Parquet (in previous versions it didn't work in some cases). #56117 (lgbo).
- Add the ability to tune the number of parallel replicas used in a query execution based on the estimation of rows to read. #51692 (Raúl Marín).
- Optimized external aggregation memory consumption in case many temporary files were generated. #54798 (Nikita Taranov).
- Distributed queries executed in
async_socket_for_remote
mode (default) now respectmax_threads
limit. Previously, some queries could create excessive threads (up tomax_distributed_connections
), causing server performance issues. #53504 (filimonov). - Caching skip-able entries while executing DDL from Zookeeper distributed DDL queue. #54828 (Duc Canh Le).
- Experimental inverted indexes do not store tokens with too many matches (i.e. row ids in the posting list). This saves space and avoids ineffective index lookups when sequential scans would be equally fast or faster. The previous heuristics (
density
parameter passed to the index definition) that controlled when tokens would not be stored was too confusing for users. A much simpler heuristics based on parametermax_rows_per_postings_list
(default: 64k) is introduced which directly controls the maximum allowed number of row ids in a postings list. #55616 (Harry Lee). - Improve write performance to
EmbeddedRocksDB
tables. #55732 (Duc Canh Le). - Improved overall resilience for ClickHouse in case of many parts within partition (more than 1000). It might reduce the number of
TOO_MANY_PARTS
errors. #55526 (Nikita Mikhaylov). - Reduced memory consumption during loading of hierarchical dictionaries. #55838 (Nikita Taranov).
- All dictionaries support setting `dictionary_use_async_executor`. #55839 (vdimir).
- Prevent excessive memory usage when deserializing AggregateFunctionTopKGenericData. #55947 (Raúl Marín).
- On a Keeper with lots of watches, AsyncMetrics threads can consume 100% of CPU for noticeable time in `DB::KeeperStorage::getSessionsWithWatchesCount`. The fix is to avoid traversing the heavy `watches` and `list_watches` sets. #56054 (Alexander Gololobov).
- Add setting `optimize_trivial_approximate_count_query` to use `count` approximation for storage EmbeddedRocksDB (see the sketch after this list). Enable trivial count for StorageJoin. #55806 (Duc Canh Le).
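
The approximate-count setting above, applied to a hypothetical EmbeddedRocksDB table:

```sql
-- With this setting, count() may return an approximate value for EmbeddedRocksDB
-- instead of scanning all rows.
SELECT count() FROM rocksdb_table
SETTINGS optimize_trivial_approximate_count_query = 1;
```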
Improvement
- Functions
toDayOfWeek
(MySQL alias:DAYOFWEEK
),toYearWeek
(YEARWEEK
) andtoWeek
(WEEK
) now supportsString
arguments. This makes its behavior consistent with MySQL's behavior. #55589 (Robert Schulze). - Introduced setting
date_time_overflow_behavior
with possible valuesignore
,throw
,saturate
that controls the overflow behavior when converting from Date, Date32, DateTime64, Integer or Float to Date, Date32, DateTime or DateTime64. #55696 (Andrey Zvonov). - Implement query parameters support for
ALTER TABLE ... ACTION PARTITION [ID] {parameter_name:ParameterType}
. Merges #49516. Closes #49449. #55604 (alesapin). - Print processor ids in a prettier manner in EXPLAIN. #48852 (Vlad Seliverstov).
- Creating a direct dictionary with a lifetime field will be rejected at create time (as the lifetime does not make sense for direct dictionaries). Fixes: #27861. #49043 (Rory Crispin).
- Allow parameters in queries with partitions like `ALTER TABLE t DROP PARTITION`. Closes #49449. #49516 (Nikolay Degterinsky).
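A sketch of the parameterized form from the item above (the table name `t` comes from the entry; the parameter name, its `String` type and the partition value are assumptions):

```sql
-- The partition expression is supplied as a query parameter instead of being inlined.
SET param_part = '2023';
ALTER TABLE t DROP PARTITION {part:String};
```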
- Add a new column `xid` for `system.zookeeper_connection`. #50702 (helifu).
- Display the correct server settings in `system.server_settings` after configuration reload. #53774 (helifu).
- Add support for the mathematical minus `−` character in queries, similar to `-`. #54100 (Alexey Milovidov).
- Add replica groups to the experimental `Replicated` database engine. Closes #53620. #54421 (Nikolay Degterinsky).
- Retry retriable S3 errors instead of failing the whole query. The default value of `s3_retry_attempts` was increased. #54770 (Sema Checherinda).
- Add load balancing mode `hostname_levenshtein_distance`. #54826 (JackyWoo).
- Improve hiding secrets in logs. #55089 (Vitaly Baranov).
- Projection analysis is now performed only on top of the query plan. The setting `query_plan_optimize_projection` became obsolete (it had been enabled by default a long time ago). #55112 (Nikita Mikhaylov).
- When the function `untuple` is called on a tuple with named elements and itself has an alias (e.g. `select untuple(tuple(1)::Tuple(element_alias Int)) AS untuple_alias`), the result column name is now generated from the untuple alias and the tuple element alias (in the example: "untuple_alias.element_alias"). #55123 (garcher22).
- Added setting `describe_include_virtual_columns`, which allows to include virtual columns of a table into the result of a `DESCRIBE` query. Added setting `describe_compact_output`: if it is set to `true`, a `DESCRIBE` query returns only names and types of columns without extra information. #55129 (Anton Popov).
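For example (a sketch; the table name is hypothetical, the two settings are taken from the item above):

```sql
-- Compact output: only column names and types, with virtual columns included.
DESCRIBE TABLE hits
SETTINGS describe_compact_output = 1, describe_include_virtual_columns = 1;
```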
- Sometimes `OPTIMIZE` with `optimize_throw_if_noop=1` could fail with the error `unknown reason`, while the real cause was different projections in different parts. This behavior is fixed. #55130 (Nikita Mikhaylov).
- Allow to have several `MaterializedPostgreSQL` tables following the same Postgres table. By default this behaviour is not enabled (for compatibility, because it is a backward-incompatible change), but it can be turned on with the setting `materialized_postgresql_use_unique_replication_consumer_identifier`. Closes #54918. #55145 (Kseniia Sumarokova).
- Allow to parse negative `DateTime64` and `DateTime` with fractional part from short strings. #55146 (Andrey Zvonov).
- To improve compatibility with MySQL, 1. `information_schema.tables` now includes the new field `table_rows`, and 2. `information_schema.columns` now includes the new field `extra`. #55215 (Robert Schulze).
- clickhouse-client won't show "0 rows in set" if it is zero and an exception was thrown. #55240 (Salvatore Mesoraca).
- Support renaming a table without the `TABLE` keyword, like `RENAME db.t1 TO db.t2`. #55373 (凌涛).
- Add `internal_replication` to `system.clusters`. #55377 (Konstantin Morozov).
- Select the remote proxy resolver based on the request protocol, add proxy feature docs, and remove `DB::ProxyConfiguration::Protocol::ANY`. #55430 (Arthur Passos).
- Avoid retrying Keeper operations on INSERT after table shutdown. #55519 (Azat Khuzhin).
- `SHOW COLUMNS` now correctly reports type `FixedString` as `BLOB` if setting `use_mysql_types_in_show_columns` is on. Also added two new settings, `mysql_map_string_to_text_in_show_columns` and `mysql_map_fixed_string_to_text_in_show_columns`, to switch the output for types `String` and `FixedString` to `TEXT` or `BLOB`. #55617 (Serge Klochkov).
- During startup of ReplicatedMergeTree tables, the ClickHouse server checks the set of parts for unexpected parts (which exist locally, but not in ZooKeeper). All unexpected parts are moved to the detached directory, and instead of them the server tries to restore some ancestor (covered) parts. Now the server tries to restore the closest ancestors instead of random covered parts. #55645 (alesapin).
- The advanced dashboard now supports draggable charts on touch devices. This closes #54206. #55649 (Alexey Milovidov).
- Use the default query format, if declared, when outputting an exception with `http_write_exception_in_output_format`. #55739 (Raúl Marín).
- Provide a better message for common MATERIALIZED VIEW pitfalls. #55826 (Raúl Marín).
- If you drop the current database, you will still be able to run some queries in `clickhouse-local` and switch to another database. This makes the behavior consistent with `clickhouse-client`. This closes #55834. #55853 (Alexey Milovidov).
- Functions `(add|subtract)(Year|Quarter|Month|Week|Day|Hour|Minute|Second|Millisecond|Microsecond|Nanosecond)` now support string-encoded date arguments, e.g. `SELECT addDays('2023-10-22', 1)`. This increases compatibility with MySQL and is needed by Tableau Online. #55869 (Robert Schulze).
- The setting `apply_deleted_mask`, when disabled, allows to read rows that were marked as deleted by lightweight DELETE queries. This is useful for debugging. #55952 (Alexander Gololobov).
- Allow skipping `null` values when serializing a Tuple to JSON objects, which makes it possible to keep compatibility with Spark's `to_json` function; this is also useful for Gluten. #55956 (李扬).
- Functions `(add|sub)Date` now support string-encoded date arguments, e.g. `SELECT addDate('2023-10-22 11:12:13', INTERVAL 5 MINUTE)`. The same support for string-encoded date arguments is added to the plus and minus operators, e.g. `SELECT '2023-10-23' + INTERVAL 1 DAY`. This increases compatibility with MySQL and is needed by Tableau Online. #55960 (Robert Schulze).
- Allow unquoted strings with CR (`\r`) in CSV format. Closes #39930. #56046 (Kruglov Pavel).
- Allow to run `clickhouse-keeper` using an embedded config. #56086 (Maksim Kita).
- Set a limit on the maximum configuration value for `queued.min.messages` to avoid problems with starting to fetch data from Kafka. #56121 (Stas Morozov).
- Fixed a typo in SQL function `minSampleSizeContinous` (renamed to `minSampleSizeContinuous`). The old name is preserved for backward compatibility. This closes: #56139. #56143 (Dorota Szeremeta).
- Print the path of broken parts on disk before shutting down the server. Before this change, if a part was corrupted on disk and the server could not start, it was almost impossible to understand which part was broken. This is fixed. #56181 (Duc Canh Le).
Build/Testing/Packaging Improvement
- If the database in Docker is already initialized, it doesn't need to be initialized again upon subsequent launches. This can potentially fix the issue of infinite container restarts when the database fails to load within 1000 attempts (relevant for very large databases and multi-node setups). #50724 (Alexander Nikolaev).
- A resource with the source code including submodules is built in the Darwin special build task. It may be used to build ClickHouse without checking out the submodules. #51435 (Ilya Yatsishin).
- An error occurred when building ClickHouse with the AVX series of instructions enabled globally (which isn't recommended). The reason is that snappy does not enable `SNAPPY_HAVE_X86_CRC32`. #55049 (monchickey).
- Solve an issue with launching a standalone `clickhouse-keeper` from the `clickhouse-server` package. #55226 (Mikhail f. Shiryaev).
- In the tests, the RabbitMQ version is updated to 3.12.6. Improved logs collection for RabbitMQ tests. #55424 (Ilya Yatsishin).
- Modified the error message difference between openssl and boringssl to fix the functional test. #55975 (MeenaRenganathan22).
- Use upstream repo for apache datasketches. #55787 (Nikita Taranov).
Bug Fix (user-visible misbehavior in an official stable release)
- Skip hardlinking inverted index files in mutation #47663 (cangyin).
- Fixed a bug where the `match` function (regex) with a pattern containing alternation produced an incorrect key condition. Closes #53222. #54696 (Yakov Olkhovskiy).
- Fix 'Cannot find column' in read-in-order optimization with ARRAY JOIN #51746 (Nikolai Kochetov).
- Support missed experimental `Object(Nullable(json))` subcolumns in queries. #54052 (zps).
- Re-add fix for `accurateCastOrNull` #54629 (Salvatore Mesoraca).
- Fix detecting `DEFAULT` for columns of a Distributed table created without AS #55060 (Vitaly Baranov).
- Proper cleanup in case of an exception in the constructor of ShellCommandSource #55103 (Alexander Gololobov).
- Fix deadlock in LDAP assigned role update #55119 (Julian Maicher).
- Suppress error statistics update for internal exceptions #55128 (Robert Schulze).
- Fix deadlock in backups #55132 (alesapin).
- Fix storage Iceberg files retrieval #55144 (Kseniia Sumarokova).
- Fix partition pruning of extra columns in set. #55172 (Amos Bird).
- Fix recalculation of skip indexes in ALTER UPDATE queries when table has adaptive granularity #55202 (Duc Canh Le).
- Fix for background download in fs cache #55252 (Kseniia Sumarokova).
- Avoid possible memory leaks in compressors in case of missing buffer finalization #55262 (Azat Khuzhin).
- Fix functions execution over sparse columns #55275 (Azat Khuzhin).
- Fix incorrect merging of Nested for SELECT FINAL FROM SummingMergeTree #55276 (Azat Khuzhin).
- Fix bug with inability to drop detached partition in replicated merge tree on top of S3 without zero copy #55309 (alesapin).
- Fix a crash in MergeSortingPartialResultTransform (due to zero chunks after `remerge`) #55335 (Azat Khuzhin).
- Fix data-race in CreatingSetsTransform (on errors) due to throwing a shared exception #55338 (Azat Khuzhin).
- Fix trash optimization (up to a certain extent) #55353 (Alexey Milovidov).
- Fix leak in StorageHDFS #55370 (Azat Khuzhin).
- Fix parsing of arrays in cast operator #55417 (Anton Popov).
- Fix filtering by virtual columns with OR filter in query #55418 (Azat Khuzhin).
- Fix MongoDB connection issues #55419 (Nikolay Degterinsky).
- Fix MySQL interface boolean representation #55427 (Serge Klochkov).
- Fix MySQL text protocol DateTime formatting and LowCardinality(Nullable(T)) types reporting #55479 (Serge Klochkov).
- Make `use_mysql_types_in_show_columns` affect only `SHOW COLUMNS` #55481 (Robert Schulze).
- Fix the stack symbolizer parsing `DW_FORM_ref_addr` incorrectly and sometimes crashing #55483 (Michael Kolupaev).
- Destroy the fiber in case of an exception in cancelBefore in AsyncTaskExecutor #55516 (Kruglov Pavel).
- Fix Query Parameters not working with custom HTTP handlers #55521 (Konstantin Bogdanov).
- Fix checking of non handled data for Values format #55527 (Azat Khuzhin).
- Fix 'Invalid cursor state' in odbc interacting with MS SQL Server #55558 (vdimir).
- Fix max execution time and 'break' overflow mode #55577 (Alexander Gololobov).
- Fix crash in QueryNormalizer with cyclic aliases #55602 (vdimir).
- Disable wrong optimization and add a test #55609 (Alexey Milovidov).
- Merging #52352 #55621 (Alexey Milovidov).
- Add a test to avoid incorrect decimal sorting #55662 (Amos Bird).
- Fix progress bar for s3 and azure Cluster functions with url without globs #55666 (Kruglov Pavel).
- Fix filtering by virtual columns with OR filter in query (resubmit) #55678 (Azat Khuzhin).
- Fixes and improvements for Iceberg storage #55695 (Kruglov Pavel).
- Fix data race in CreatingSetsTransform (v2) #55786 (Azat Khuzhin).
- Throw exception when parsing illegal string as float if precise_float_parsing is true #55861 (李扬).
- Disable predicate pushdown if the CTE contains stateful functions #55871 (Raúl Marín).
- Fix normalization of ASTSelectWithUnionQuery, as it was stripping `FORMAT` from the query #55887 (flynn).
- Try to fix a possible segfault in the Native ORC input format #55891 (Kruglov Pavel).
- Fix window functions in case of sparse columns. #55895 (János Benjamin Antal).
- fix: StorageNull supports subcolumns #55912 (FFish).
- Do not write retriable errors for Replicated mutate/merge into error log #55944 (Azat Khuzhin).
- Fix `SHOW DATABASES LIMIT <N>` #55962 (Raúl Marín).
- Fix autogenerated Protobuf schema with fields with underscores #55974 (Kruglov Pavel).
- Fix dateTime64ToSnowflake64() with non-default scale #55983 (Robert Schulze).
- Fix output/input of Arrow dictionary column #55989 (Kruglov Pavel).
- Fix fetching schema from schema registry in AvroConfluent #55991 (Kruglov Pavel).
- Fix 'Block structure mismatch' on concurrent ALTER and INSERTs in Buffer table #55995 (Michael Kolupaev).
- Fix incorrect free space accounting for least_used JBOD policy #56030 (Azat Khuzhin).
- Fix missing scalar issue when evaluating subqueries inside table functions #56057 (Amos Bird).
- Fix wrong query result when http_write_exception_in_output_format=1 #56135 (Kruglov Pavel).
- Fix schema cache for fallback JSON->JSONEachRow with changed settings #56172 (Kruglov Pavel).
- Add error handler to odbc-bridge #56185 (Yakov Olkhovskiy).
ClickHouse release 23.9, 2023-09-28
Backward Incompatible Change
- Remove the `status_info` configuration option and dictionaries status from the default Prometheus handler. #54090 (Alexey Milovidov).
- The experimental parts metadata cache is removed from the codebase. #54215 (Alexey Milovidov).
- Disable setting `input_format_json_try_infer_numbers_from_strings` by default, so we don't try to infer numbers from strings in JSON formats, to avoid possible parsing errors when sample data contains strings that look like numbers. #55099 (Kruglov Pavel).
New Feature
- Improve schema inference from JSON formats: 1) It's now possible to infer named Tuples from JSON objects without the experimental JSON type, under the setting `input_format_json_try_infer_named_tuples_from_objects`. Previously, without the experimental JSON type, we could only infer JSON objects as Strings or Maps; now we can infer named Tuples. The resulting Tuple type will contain all keys of the objects that were read in the data sample during schema inference. It can be useful for reading structured JSON data without sparse objects. The setting is enabled by default. 2) Allow parsing a JSON array into a column with type String under the setting `input_format_json_read_arrays_as_strings`. It can help reading arrays whose values have different types. 3) Allow to use type String for JSON keys with unknown types (`null`/`[]`/`{}`) in sample data under the setting `input_format_json_infer_incomplete_types_as_strings`. Now in JSON formats we can read any value into a String column, and we can avoid getting the error `Cannot determine type for column 'column_name' by first 25000 rows of data, most likely this column contains only Nulls or empty Arrays/Maps` during schema inference by using type String for unknown types, so the data will be read successfully. #54427 (Kruglov Pavel).
- Added IO scheduling support for remote disks. Storage configuration for disk types `s3`, `s3_plain`, `hdfs` and `azure_blob_storage` can now contain `read_resource` and `write_resource` elements holding resource names. Scheduling policies for these resources can be configured in a separate server configuration section `resources`. Queries can be marked using the setting `workload` and classified using the server configuration section `workload_classifiers` to achieve diverse resource scheduling goals. More details in the docs. #47009 (Sergei Trifonov). Added the "bandwidth_limit" IO scheduling node type. It allows you to specify `max_speed` and `max_burst` constraints on traffic passing through this node. #54618 (Sergei Trifonov).
- Added a new type of authentication based on SSH keys. It works only for the native TCP protocol. #41109 (George Gamezardashvili).
- Added a new column `_block_number` for MergeTree tables. #44532. #47532 (SmitaRKulkarni).
- Add an `IF EMPTY` clause for `DROP TABLE` queries. #48915 (Pavel Novitskiy).
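A sketch of the new clause (the table name is hypothetical; only the `IF EMPTY` clause is taken from the item above):

```sql
-- Drops the table only if it contains no rows; otherwise an error is returned.
DROP TABLE IF EMPTY staging_events;
```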
- SQL functions `toString(datetime, timezone)` and `formatDateTime(datetime, format, timezone)` now support non-constant timezone arguments. #53680 (Yarik Briukhovetskyi).
- Add support for `ALTER TABLE MODIFY COMMENT`. Note: something similar was added by an external contributor a long time ago, but the feature did not work at all and only confused users. This closes #36377. #51304 (Alexey Milovidov). Note: this command does not propagate between replicas, so the replicas of a table could have different comments.
- Added `GCD` a.k.a. "greatest common divisor" as a new data compression codec. The codec computes the GCD of all column values, and then divides each value by the GCD. The GCD codec is a data-preparation codec (similar to Delta and DoubleDelta) and cannot be used stand-alone. It works with integer, decimal and date/time data types. A viable use case for the GCD codec are column values that change (increase/decrease) in multiples of the GCD, e.g. 24 - 28 - 16 - 24 - 8 - 24 (assuming GCD = 4). #53149 (Alexander Nam).
- Two new type aliases `DECIMAL(P)` (as a shortcut for `DECIMAL(P, 0)`) and `DECIMAL` (as a shortcut for `DECIMAL(10, 0)`) were added. This makes ClickHouse more compatible with MySQL's SQL dialect. #53328 (Val Doroshchuk).
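The aliases in use (a sketch; the table and column names are hypothetical):

```sql
-- DECIMAL(4) is a shortcut for DECIMAL(4, 0); bare DECIMAL is a shortcut for DECIMAL(10, 0).
CREATE TABLE prices (price DECIMAL(4), total DECIMAL) ENGINE = Memory;
```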
- Added a new system log table `backup_log` to track all `BACKUP` and `RESTORE` operations. #53638 (Victor Krasnov).
- Added a format setting `output_format_markdown_escape_special_characters` (default: false). The setting controls whether special characters like `!`, `#`, `$` etc. are escaped (i.e. prefixed by a backslash) in the `Markdown` output format. #53860 (irenjj).
- Add function `decodeHTMLComponent`. #54097 (Bharat Nallan).
- Added `peak_threads_usage` to the `query_log` table. #54335 (Alexey Gerasimchuck).
- Add `SHOW FUNCTIONS` support to clickhouse-client. #54337 (Julia Kartseva).
- Added function `toDaysSinceYearZero` with alias `TO_DAYS` (for compatibility with MySQL) which returns the number of days passed since `0001-01-01` (in the Proleptic Gregorian Calendar). #54479 (Robert Schulze). Function `toDaysSinceYearZero` now supports arguments of type `DateTime` and `DateTime64`. #54856 (Serge Klochkov).
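For example (both names are taken from the item above):

```sql
-- Days since 0001-01-01 in the proleptic Gregorian calendar; TO_DAYS is the MySQL-compatible alias.
SELECT
    toDaysSinceYearZero(toDate('2023-09-28')) AS days,
    TO_DAYS(toDate('2023-09-28')) AS mysql_alias;
```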
- Added functions `YYYYMMDDtoDate`, `YYYYMMDDtoDate32`, `YYYYMMDDhhmmssToDateTime` and `YYYYMMDDhhmmssToDateTime64`. They convert a date or date with time encoded as an integer (e.g. 20230911) into a native date or date with time. As such, they provide the opposite functionality of existing functions `YYYYMMDDToDate`, `YYYYMMDDToDateTime`, `YYYYMMDDhhmmddToDateTime`, `YYYYMMDDhhmmddToDateTime64`. #54509 (Quanfa Fu) (Robert Schulze).
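For example, following the integer encoding described above:

```sql
-- 20230911 is interpreted as the date 2023-09-11; the longer integer also carries hh:mm:ss.
SELECT
    YYYYMMDDtoDate(20230911) AS d,
    YYYYMMDDhhmmssToDateTime(20230911131415) AS dt;
```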
- Add several string distance functions, including `byteHammingDistance`, `editDistance`. #54935 (flynn).
- Allow specifying the expiration date and, optionally, the time for user credentials with the `VALID UNTIL datetime` clause. #51261 (Nikolay Degterinsky).
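A sketch of the clause from the item above (the user name, password and timestamp are hypothetical):

```sql
-- The credentials stop working after the given point in time.
CREATE USER report_user IDENTIFIED WITH sha256_password BY 'secret'
VALID UNTIL '2024-01-01 00:00:00';
```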
- Allow S3-style URLs for table functions `s3`, `gcs`, `oss`. The URL is automatically converted to HTTP. Example: `'s3://clickhouse-public-datasets/hits.csv'` is converted to `'https://clickhouse-public-datasets.s3.amazonaws.com/hits.csv'`. #54931 (Yarik Briukhovetskyi).
- Add new setting `print_pretty_type_names` to print deeply nested types like Tuples/Maps/Arrays in a prettier way. #55095 (Kruglov Pavel).
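A sketch of how the setting might be applied, assuming it affects `DESCRIBE` output (the table name is hypothetical; only the setting name comes from the item above):

```sql
-- With the setting enabled, nested Tuple/Map/Array types are printed with indentation
-- instead of a single long line.
DESCRIBE TABLE events SETTINGS print_pretty_type_names = 1;
```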
Performance Improvement
- Speed up reading from S3 by enabling prefetches by default. #53709 (Alexey Milovidov).
- Do not implicitly read PK and version columns in lonely parts if unnecessary for queries with FINAL. #53919 (Duc Canh Le).
- Optimize group by constant keys. Will optimize queries with group by `_file/_path` after https://github.com/ClickHouse/ClickHouse/pull/53529. #53549 (Kruglov Pavel).
- Improve performance of sorting for `Decimal` columns. Improve performance of insertion into `MergeTree` if ORDER BY contains a `Decimal` column. Improve performance of sorting when data is already sorted or almost sorted. #35961 (Maksim Kita).
- Improve performance for huge query analysis. Fixes #51224. #51469 (frinkr).
- An optimization to rewrite `COUNT(DISTINCT ...)` and various `uniq` variants to `count` if it is selected from a subquery with GROUP BY. #52082 #52645 (JackyWoo).
- Remove manual calls to `mmap/mremap/munmap` and delegate all this work to `jemalloc` - and it slightly improves performance. #52792 (Nikita Taranov).
- Fixed high CPU consumption when working with NATS. #54399 (Vasilev Pyotr).
- Since we use separate instructions for executing `toString` with a datetime argument, it is possible to improve performance a bit for non-datetime arguments and have some parts of the code cleaner. Follows up #53680. #54443 (Yarik Briukhovetskyi).
- Instead of serializing JSON elements into a `std::stringstream`, this PR puts the serialization result into `ColumnString` directly. #54613 (lgbo).
- Enable ORDER BY optimization for reading data in corresponding order from a MergeTree table in case the table is behind a view. #54628 (Vitaly Baranov).
- Improve JSON SQL functions by reusing `GeneratorJSONPath` and removing several shared pointers. #54735 (lgbo).
- Keeper tries to batch flush requests for better performance. #53049 (Antonio Andelic).
- Now `clickhouse-client` processes files in parallel in case of `INFILE 'glob_expression'`. Closes #54218. #54533 (Max K.).
- Allow to use the primary key for the IN function where primary key column types are different from the `IN` function's right-side column types. Example: `SELECT id FROM test_table WHERE id IN (SELECT '5')`. Closes #48936. #54544 (Maksim Kita).
- Hash JOIN tries to shrink internal buffers, consuming half of the maximum available memory (set by `max_bytes_in_join`). #54584 (vdimir).
- Respect `max_block_size` for array join to avoid possible OOM. Close #54290. #54664 (李扬).
- Reuse HTTP connections in the `s3` table function. #54812 (Michael Kolupaev).
- Replace the linear search in `MergeTreeRangeReader::Stream::ceilRowsToCompleteGranules` with a binary search. #54869 (usurai).
Experimental Feature
- The creation of `Annoy` indexes can now be parallelized using the setting `max_threads_for_annoy_index_creation`. #54047 (Robert Schulze).
- Parallel replicas over distributed don't read from all replicas #54199 (Igor Nikonov).
Improvement
- Allow to replace long names of files of columns in `MergeTree` data parts with hashes of names. It helps to avoid a `File name too long` error in some cases. #50612 (Anton Popov).
- Parse data in `JSON` format as `JSONEachRow` if it failed to parse metadata. It will allow to read files with the `.json` extension even if the real format is JSONEachRow. Closes #45740. #54405 (Kruglov Pavel).
- Output valid JSON/XML on exception during HTTP query execution. Add setting `http_write_exception_in_output_format` to enable/disable this behaviour (enabled by default). #52853 (Kruglov Pavel).
- View `information_schema.tables` now has a new field `data_length` which shows the approximate size of the data on disk. Required to run queries generated by Amazon QuickSight. #55037 (Robert Schulze).
- The MySQL interface gained a minimal implementation of prepared statements, just enough to allow a connection from Tableau Online to ClickHouse via the MySQL connector. #54115 (Serge Klochkov). Please note: the prepared statements implementation is pretty minimal, we do not support argument binding yet, as it is not required in this particular Tableau Online use case. It will be implemented as a follow-up if necessary after extensive testing of Tableau Online in case we discover issues.
- Support case-insensitive and dot-all matching modes in `regexp_tree` dictionaries. #50906 (Johann Gan).
- Keeper improvement: Add a `createIfNotExists` Keeper command. #48855 (Konstantin Bogdanov).
- More precise integer type inference, fix #51236. #53003 (Chen768959).
- Introduced resolving of charsets in the string literals for MaterializedMySQL. #53220 (Val Doroshchuk).
- Fix a subtle issue with a rarely used `EmbeddedRocksDB` table engine in an extremely rare scenario: sometimes the `EmbeddedRocksDB` table engine did not close files correctly in NFS after running `DROP TABLE`. #53502 (Mingliang Pan).
- `RESTORE TABLE ON CLUSTER` must create replicated tables with a matching UUID on hosts. Otherwise the macro `{uuid}` in the ZooKeeper path can't work correctly after RESTORE. This PR implements that. #53765 (Vitaly Baranov).
- Added restore setting `restore_broken_parts_as_detached`: if it's true, the RESTORE process won't stop on broken parts while restoring; instead, all the broken parts will be copied to the `detached` folder with the prefix `broken-from-backup`. If it's false, the RESTORE process will stop on the first broken part (if any). The default value is false. #53877 (Vitaly Baranov).
- Add `elapsed_ns` field to HTTP headers X-ClickHouse-Progress and X-ClickHouse-Summary. #54179 (joelynch).
- Implementation of `reconfig` (https://github.com/ClickHouse/ClickHouse/pull/49450), `sync`, and `exists` commands for keeper-client. #54201 (pufit).
- `clickhouse-local` and `clickhouse-client` now allow to specify the `--query` parameter multiple times, e.g. `./clickhouse-client --query "SELECT 1" --query "SELECT 2"`. This syntax is slightly more intuitive than `./clickhouse-client --multiquery "SELECT 1; SELECT 2"`, a bit easier to script (e.g. `queries.push_back('--query "$q"')`) and more consistent with the behavior of the existing parameter `--queries-file` (e.g. `./clickhouse client --queries-file queries1.sql --queries-file queries2.sql`). #54249 (Robert Schulze).
formatReadableTimeDelta
. #54250 (Andrey Zvonov). - Enable
allow_remove_stale_moving_parts
by default. #54260 (vdimir). - Fix using count from cache and improve progress bar for reading from archives. #54271 (Kruglov Pavel).
- Add support for S3 credentials using SSO. To define a profile to be used with SSO, set
AWS_PROFILE
environment variable. #54347 (Antonio Andelic). - Support NULL as default for nested types Array/Tuple/Map for input formats. Closes #51100. #54351 (Kruglov Pavel).
- Allow reading some unusual configuration of chunks from Arrow/Parquet formats. #54370 (Arthur Passos).
- Add
STD
alias tostddevPop
function for MySQL compatibility. Closes #54274. #54382 (Nikolay Degterinsky). - Add
addDate
function for compatibility with MySQL andsubDate
for consistency. Reference #54275. #54400 (Nikolay Degterinsky). - Add
modification_time
intosystem.detached_parts
. #54506 (Azat Khuzhin). - Added a setting
splitby_max_substrings_includes_remaining_string
which controls if functions "splitBy*()" with argument "max_substring" > 0 include the remaining string (if any) in the result array (Python/Spark semantics) or not. The default behavior does not change. #54518 (Robert Schulze). - Better integer types inference for
Int64
/UInt64
fields. Continuation of #53003. Now it works also for nested types like Arrays of Arrays and for functions likemap/tuple
. Issue: #51236. #54553 (Kruglov Pavel). - Added array operations for multiplying, dividing and modulo on scalar. Works in each way, for example
5 * [5, 5]
and[5, 5] * 5
- both cases are possible. #54608 (Yarik Briukhovetskyi). - Add optional
version
argument torm
command inkeeper-client
to support safer deletes. #54708 (János Benjamin Antal). - Disable killing the server by systemd (that may lead to data loss when using Buffer tables). #54744 (Azat Khuzhin).
- Added field
is_deterministic
to system tablesystem.functions
which indicates whether the result of a function is stable between two invocations (given exactly the same inputs) or not. #54766 #55035 (Robert Schulze). - Made the views in schema
information_schema
more compatible with the equivalent views in MySQL (i.e. modified and extended them) up to a point where Tableau Online is able to connect to ClickHouse. More specifically: 1. The type of fieldinformation_schema.tables.table_type
changed from Enum8 to String. 2. Added fieldstable_comment
andtable_collation
to viewinformation_schema.table
. 3. Added viewsinformation_schema.key_column_usage
andreferential_constraints
. 4. Replaced uppercase aliases ininformation_schema
views with concrete uppercase columns. #54773 (Serge Klochkov). - The query cache now returns an error if the user tries to cache the result of a query with a non-deterministic function such as
now
,randomString
anddictGet
. Compared to the previous behavior (silently don't cache the result), this reduces confusion and surprise for users. #54801 (Robert Schulze). - Forbid special columns like materialized/ephemeral/alias for
file
/s3
/url
/... storages, fix insert into ephemeral columns from files. Closes #53477. #54803 (Kruglov Pavel). - More configurable collecting metadata for backup. #54804 (Vitaly Baranov).
clickhouse-local
's log file (if enabled with --server_logs_file flag) will now prefix each line with timestamp, thread id, etc, just likeclickhouse-server
. #54807 (Michael Kolupaev).- Field
is_obsolete
in thesystem.merge_tree_settings
table - it is now 1 for obsolete merge tree settings. Previously, only the description indicated that the setting is obsolete. #54837 (Robert Schulze). - Make it possible to use plural when using interval literals.
INTERVAL 2 HOURS
should be equivalent toINTERVAL 2 HOUR
. #54860 (Jordi Villar). - Always allow the creation of a projection with
Nullable
PK. This fixes #54814. #54895 (Amos Bird). - Retry backup's S3 operations after connection reset failure. #54900 (Vitaly Baranov).
- Make the exception message exact in case of the maximum value of a settings is less than the minimum value. #54925 (János Benjamin Antal).
LIKE
,match
, and other regular expressions matching functions now allow matching with patterns containing non-UTF-8 substrings by falling back to binary matching. Example: you can usestring LIKE '\xFE\xFF%'
to detect BOM. This closes #54486. #54942 (Alexey Milovidov).- Added
ContextLockWaitMicroseconds
profile event. #55029 (Maksim Kita). - The Keeper dynamically adjusts log levels. #50372 (helifu).
- Added function
timestamp
for compatibility with MySQL. Closes #54275. #54639 (Nikolay Degterinsky).
Build/Testing/Packaging Improvement
- Bumped the compiler of official and continuous integration builds of ClickHouse from Clang 16 to 17. #53831 (Robert Schulze).
- Regenerated tld data for lookups (
tldLookup.generated.cpp
). #54269 (Bharat Nallan). - Remove the redundant
clickhouse-keeper-client
symlink. #54587 (Tomas Barton). - Use
/usr/bin/env
to resolve bash - now it supports Nix OS. #54603 (Fionera). - CMake added
PROFILE_CPU
option needed to performperf record
without using a DWARF call graph. #54917 (Maksim Kita). - If the linker is different than LLD, stop with a fatal error. #55036 (Alexey Milovidov).
- Replaced the library to handle (encode/decode) base64 values from Turbo-Base64 to aklomp-base64. Both are SIMD-accelerated on x86 and ARM but 1. the license of the latter (BSD-2) is more favorable for ClickHouse, Turbo64 switched in the meantime to GPL-3, 2. with more GitHub stars, aklomp-base64 seems more future-proof, 3. aklomp-base64 has a slightly nicer API (which is arguably subjective), and 4. aklomp-base64 does not require us to hack around bugs (like non-threadsafe initialization). Note: aklomp-base64 rejects unpadded base64 values whereas Turbo-Base64 decodes them on a best-effort basis. RFC-4648 leaves it open whether padding is mandatory or not, but depending on the context this may be a behavioral change to be aware of. #54119 (Mikhail Koviazin).
Bug Fix (user-visible misbehavior in an official stable release)
- Fix REPLACE/MOVE PARTITION with zero-copy replication (note: "zero-copy replication" is an experimental feature) #54193 (Alexander Tokmakov).
- Fix zero copy locks with hardlinks (note: "zero-copy replication" is an experimental feature) #54859 (Alexander Tokmakov).
- Fix zero copy garbage (note: "zero-copy replication" is an experimental feature) #54550 (Alexander Tokmakov).
- Pass HTTP retry timeout as milliseconds (it was incorrect before). #54438 (Duc Canh Le).
- Fix misleading error message in OUTFILE with
CapnProto
/Protobuf
#52870 (Kruglov Pavel). - Fix summary reporting with parallel replicas with LIMIT #53050 (Raúl Marín).
- Fix throttling of BACKUPs from/to S3 (in case native copy was not used) and in some other places as well #53336 (Azat Khuzhin).
- Fix IO throttling during copying whole directories #53338 (Azat Khuzhin).
- Fix: moved to prewhere condition actions can lose column #53492 (Yakov Olkhovskiy).
- Fixed internal error when replacing with byte-equal parts #53735 (Pedro Riera).
- Fix: require columns participating in interpolate expression #53754 (Yakov Olkhovskiy).
- Fix cluster discovery initialization + setting up fail points in config #54113 (vdimir).
- Fix issues in
accurateCastOrNull
#54136 (Salvatore Mesoraca). - Fix nullable primary key with the FINAL modifier #54164 (Amos Bird).
- Fixed error that prevented insertion in replicated materialized view of new data in presence of duplicated data. #54184 (Pedro Riera).
- Fix: allow
IPv6
for bloom filter #54200 (Yakov Olkhovskiy). - fix possible type mismatch with
IPv4
#54212 (Bharat Nallan). - Fix
system.data_skipping_indices
for recreated indices #54225 (Artur Malchanau). - fix name clash for multiple join rewriter v2 #54240 (Tao Wang).
- Fix unexpected errors in
system.errors
after join #54306 (vdimir). - Fix
isZeroOrNull(NULL)
#54316 (flynn). - Fix: parallel replicas over distributed with
prefer_localhost_replica
= 1 #54334 (Igor Nikonov). - Fix logical error in vertical merge + replacing merge tree + optimize cleanup #54368 (alesapin).
- Fix possible error
URI contains invalid characters
in thes3
table function #54373 (Kruglov Pavel). - Fix segfault in AST optimization of
arrayExists
function #54379 (Nikolay Degterinsky). - Check for overflow before addition in
analysisOfVariance
function #54385 (Antonio Andelic). - Reproduce and fix the bug in removeSharedRecursive #54430 (Sema Checherinda).
- Fix possible incorrect result with SimpleAggregateFunction in PREWHERE and FINAL #54436 (Azat Khuzhin).
- Fix filtering parts with indexHint for non analyzer #54449 (Azat Khuzhin).
- Fix aggregate projections with normalized states #54480 (Amos Bird).
clickhouse-local
: something for multiquery parameter #54498 (CuiShuoGuo).clickhouse-local
supports--database
command line argument #54503 (vdimir).- Fix possible parsing error in
-WithNames
formats with disabledinput_format_with_names_use_header
#54513 (Kruglov Pavel). - Fix rare case of CHECKSUM_DOESNT_MATCH error #54549 (alesapin).
- Fix sorting of UNION ALL of already sorted results #54564 (Vitaly Baranov).
- Fix snapshot install in Keeper #54572 (Antonio Andelic).
- Fix race in
ColumnUnique
#54575 (Nikita Taranov). - Annoy/Usearch index: Fix LOGICAL_ERROR during build-up with default values #54600 (Robert Schulze).
- Fix serialization of
ColumnDecimal
#54601 (Nikita Taranov). - Fix schema inference for *Cluster functions for column names with spaces #54635 (Kruglov Pavel).
- Fix using structure from insertion tables in case of defaults and explicit insert columns #54655 (Kruglov Pavel).
- Fix: avoid using regex match, possibly containing alternation, as a key condition. #54696 (Yakov Olkhovskiy).
- Fix ReplacingMergeTree with vertical merge and cleanup #54706 (SmitaRKulkarni).
- Fix virtual columns having incorrect values after ORDER BY #54811 (Michael Kolupaev).
- Fix filtering parts with indexHint for non analyzer #54825 #54449 (Azat Khuzhin).
- Fix Keeper segfault during shutdown #54841 (Antonio Andelic).
- Fix
Invalid number of rows in Chunk
in MaterializedPostgreSQL #54844 (Kseniia Sumarokova). - Move obsolete format settings to separate section #54855 (Kruglov Pavel).
- Rebuild
minmax_count_projection
when partition key gets modified #54943 (Amos Bird). - Fix bad cast to
ColumnVector<Int128>
in functionif
#55019 (Kruglov Pavel). - Prevent attaching parts from tables with different projections or indices #55062 (János Benjamin Antal).
- Store NULL in scalar result map for empty subquery result #52240 (vdimir).
- Fix
FINAL
produces invalid read ranges in a rare case #54934 (Nikita Taranov). - Fix: insert quorum w/o keeper retries #55026 (Igor Nikonov).
- Fix simple state with nullable #55030 (Pedro Riera).
ClickHouse release 23.8 LTS, 2023-08-31
Backward Incompatible Change
- If a dynamic disk contains a name, it should be specified as
disk = disk(name = 'disk_name'
, ...) in disk function arguments. In previous version it could be specified asdisk = disk_<disk_name>(...)
, which is no longer supported. #52820 (Kseniia Sumarokova). clickhouse-benchmark
will establish connections in parallel when invoked with--concurrency
more than one. Previously it was unusable if you ran it with 1000 concurrent connections from Europe to the US. Correct calculation of QPS for connections with high latency. Backward incompatible change: the option for JSON output ofclickhouse-benchmark
is removed. If you've used this option, you can also extract data from thesystem.query_log
in JSON format as a workaround. #53293 (Alexey Milovidov).- The
microseconds
column is removed from thesystem.text_log
, and themilliseconds
column is removed from thesystem.metric_log
, because they are redundant in the presence of theevent_time_microseconds
column. #53601 (Alexey Milovidov). - Deprecate the metadata cache feature. It is experimental and we have never used it. The feature is dangerous: #51182. Remove the
system.merge_tree_metadata_cache
system table. The metadata cache is still available in this version but will be removed soon. This closes #39197. #51303 (Alexey Milovidov). - Disable support for 3DES in TLS connections. #52893 (Kenji Noguchi).
New Feature
- Direct import from zip/7z/tar archives. Example:
file('*.zip :: *.csv')
. #50321 (nikitakeba). - Add column
ptr
tosystem.trace_log
fortrace_type = 'MemorySample'
. This column contains an address of allocation. Added functionflameGraph
which can build flamegraph containing allocated and not released memory. Reworking of #38391. #45322 (Nikolai Kochetov). - Added table function
azureBlobStorageCluster
. The supported set of features is very similar to table functions3Cluster
. #50795 (SmitaRKulkarni). - Allow using
cluster
,clusterAllReplicas
,remote
, andremoteSecure
without table name in issue #50808. #50848 (Yangkuan Liu). - A system table to monitor Kafka consumers. #50999 (Ilya Golshtein).
- Added
max_sessions_for_user
setting. #51724 (Alexey Gerasimchuck). - New functions
toUTCTimestamp/fromUTCTimestamp
to act same as spark'sto_utc_timestamp/from_utc_timestamp
. #52117 (KevinyhZou). - Add new functions
structureToCapnProtoSchema
/structureToProtobufSchema
that convert ClickHouse table structure to CapnProto/Protobuf format schema. Allow to input/output data in CapnProto/Protobuf format without external format schema using autogenerated schema from table structure (controlled by settingsformat_capn_proto_use_autogenerated_schema
/format_protobuf_use_autogenerated_schema
). Allow to export autogenerated schema while input/output using settingoutput_format_schema
. #52278 (Kruglov Pavel). - A new field
query_cache_usage
insystem.query_log
now shows if and how the query cache was used. #52384 (Robert Schulze). - Add new function
startsWithUTF8
andendsWithUTF8
. #52555 (李扬). - Allow variable number of columns in TSV/CustomSeparated/JSONCompactEachRow, make schema inference work with variable number of columns. Add settings
input_format_tsv_allow_variable_number_of_columns
,input_format_custom_allow_variable_number_of_columns
,input_format_json_compact_allow_variable_number_of_columns
. #52692 (Kruglov Pavel). - Added
SYSTEM STOP/START PULLING REPLICATION LOG
queries (for testingReplicatedMergeTree
). #52881 (Alexander Tokmakov). - Allow to execute constant non-deterministic functions in mutations on initiator. #53129 (Anton Popov).
- Add input format
One
that doesn't read any data and always returns single row with columndummy
with typeUInt8
and value0
likesystem.one
. It can be used together with_file/_path
virtual columns to list files in file/s3/url/hdfs/etc table functions without reading any data. #53209 (Kruglov Pavel). - Add
tupleConcat
function. Closes #52759. #53239 (Nikolay Degterinsky). - Support
TRUNCATE DATABASE
operation. #53261 (Bharat Nallan). - Add
max_threads_for_indexes
setting to limit number of threads used for primary key processing. #53313 (jorisgio). - Re-add SipHash keyed functions. #53525 (Salvatore Mesoraca).
- (#52755 , #52895) Added functions
arrayRotateLeft
,arrayRotateRight
,arrayShiftLeft
,arrayShiftRight
. #53557 (Mikhail Koviazin). - Add column
name
tosystem.clusters
as an alias to cluster. #53605 (irenjj). - The advanced dashboard now allows mass editing (save/load). #53608 (Alexey Milovidov).
- The advanced dashboard now has an option to maximize charts and move them around. #53622 (Alexey Milovidov).
- Added support for adding and subtracting arrays:
[5,2] + [1,7]
. Division and multiplication were not implemented due to confusion between pointwise multiplication and the scalar product of arguments. Closes #49939. #52625 (Yarik Briukhovetskyi). - Add support for string literals as table names. Closes #52178. #52635 (hendrik-m).
Experimental Feature
- Add new table engine
S3Queue
for streaming data import from s3. Closes #37012. #49086 (s-kat). It is not ready to use. Do not use it. - Enable parallel reading from replicas over distributed table. Related to #49708. #53005 (Igor Nikonov).
- Add experimental support for HNSW as approximate neighbor search method. #53447 (Davit Vardanyan). This is currently intended for those who continue working on the implementation. Do not use it.
Performance Improvement
- Parquet filter pushdown. I.e. when reading Parquet files, row groups (chunks of the file) are skipped based on the WHERE condition and the min/max values in each column. In particular, if the file is roughly sorted by some column, queries that filter by a short range of that column will be much faster. #52951 (Michael Kolupaev).
- Optimize reading small row groups by batching them together in Parquet. Closes #53069. #53281 (Kruglov Pavel).
- Optimize count from files in most input formats. Closes #44334. #53637 (Kruglov Pavel).
- Use filter by file/path before reading in
url
/file
/hdfs
table functions. #53529 (Kruglov Pavel). - Enable JIT compilation for AArch64, PowerPC, SystemZ, RISC-V. #38217 (Maksim Kita).
- Add setting
rewrite_count_distinct_if_with_count_distinct_implementation
to rewritecountDistinctIf
withcount_distinct_implementation
. Closes #30642. #46051 (flynn). - Speed up merging of states of
uniq
anduniqExact
aggregate functions by parallelizing conversion before merge. #50748 (Jiebin Sun). - Optimize aggregation performance of nullable string key when using a large number of variable length keys. #51399 (LiuNeng).
- Add a pass in Analyzer for time filter optimization with preimage. The performance experiments of SSB on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) show that this change could bring an improvement of 8.5% to the geomean QPS when the experimental analyzer is enabled. #52091 (Zhiguo Zhou).
- Optimize the merge if all hash sets are single-level in the
uniqExact
(COUNT DISTINCT) function. #52973 (Jiebin Sun). Join
table engine: do not clone hash join data structure with all columns. #53046 (Duc Canh Le).- Implement native
ORC
input format without the "apache arrow" library to improve performance. #53324 (李扬). - The dashboard will tell the server to compress the data, which is useful for large time frames over slow internet connections. For example, one chart with 86400 points can be 1.5 MB uncompressed and 60 KB compressed with
br
. #53569 (Alexey Milovidov). - Better utilization of thread pool for BACKUPs and RESTOREs. #53649 (Nikita Mikhaylov).
- Load filesystem cache metadata on startup in parallel. Configured by
load_metadata_threads
(default: 1) cache config setting. Related to #52037. #52943 (Kseniia Sumarokova). - Improve
move_primary_key_columns_to_end_of_prewhere
. #53337 (Han Fei). - This optimizes the interaction with ClickHouse Keeper. Previously the caller could register the same watch callback multiple times. In that case each entry was consuming memory and the same callback was called multiple times which didn't make much sense. In order to avoid this the caller could have some logic to not add the same watch multiple times. With this change this deduplication is done internally if the watch callback is passed via shared_ptr. #53452 (Alexander Gololobov).
- Cache number of rows in files for count in file/s3/url/hdfs/azure functions. The cache can be enabled/disabled by setting
use_cache_for_count_from_files
(enabled by default). Continuation of https://github.com/ClickHouse/ClickHouse/pull/53637. #53692 (Kruglov Pavel). - More careful thread management will improve the speed of the S3 table function over a large number of files by more than ~25%. #53668 (pufit).
Improvement
- Add
stderr_reaction
configuration/setting to control the reaction (none, log or throw) when external command stderr has data. This helps make debugging external command easier. #43210 (Amos Bird). - Add
partition
column to thesystem part_log
and merge table. #48990 (Jianfei Hu). - The sizes of the (index) uncompressed/mark, mmap and query caches can now be configured dynamically at runtime (without server restart). #51446 (Robert Schulze).
- If a dictionary is created with a complex key, automatically choose the "complex key" layout variant. #49587 (xiebin).
- Add setting
use_concurrency_control
for better testing of the new concurrency control feature. #49618 (Alexey Milovidov). - Added suggestions for mistyped names for databases and tables. #49801 (Yarik Briukhovetskyi).
- When reading small files from HDFS via Gluten, we found that it took more time compared to querying directly via Spark, and we improved that. #50063 (KevinyhZou).
- There were too many worthless error logs after session expiration, which we didn't like. #50171 (helifu).
- Introduce fallback ZooKeeper sessions which are time-bound. Fixed
index
column in system.zookeeper_connection for DNS addresses. #50424 (Anton Kozlov). - Add ability to log when max_partitions_per_insert_block is reached. #50948 (Sean Haynes).
- Added a bunch of custom commands to clickhouse-keeper-client (mostly to make ClickHouse debugging easier). #51117 (pufit).
- Updated check for connection string in
azureBlobStorage
table function as connection string with "sas" does not always begin with the default endpoint and updated connection URL to include "sas" token after adding Azure's container to URL. #51141 (SmitaRKulkarni). - Fix description for filtering sets in the
full_sorting_merge
JOIN algorithm. #51329 (Tanay Tummalapalli). - Fixed memory consumption in
Aggregator
whenmax_block_size
is huge. #51566 (Nikita Taranov). - Add
SYSTEM SYNC FILESYSTEM CACHE
command. It will compare in-memory state of filesystem cache with what it has on disk and fix in-memory state if needed. This is only needed if you are making manual interventions in on-disk data, which is highly discouraged. #51622 (Kseniia Sumarokova). - Attempt to create a generic proxy resolver for CH while keeping backwards compatibility with existing S3 storage conf proxy resolver. #51749 (Arthur Passos).
- Support reading tuple subcolumns from file/s3/hdfs/url/azureBlobStorage table functions. #51806 (Kruglov Pavel).
- Function
arrayIntersect
now returns the values in the order, corresponding to the first argument. Closes #27622. #51850 (Yarik Briukhovetskyi). - Add new queries, which allow to create/drop of access entities in specified access storage or move access entities from one access storage to another. #51912 (pufit).
- Make
ALTER TABLE FREEZE
queries not replicated in the Replicated database engine. #52064 (Mike Kot). - Added possibility to flush system tables on unexpected shutdown. #52174 (Alexey Gerasimchuck).
- Fix the case when
s3
table function refused to work with pre-signed URLs. close #50846. #52310 (chen). - Add column
name
as an alias toevent
andmetric
in thesystem.events
andsystem.metrics
tables. Closes #51257. #52315 (chen). - Added support of syntax
CREATE UNIQUE INDEX
in parser as a no-op for better SQL compatibility.UNIQUE
index is not supported. Setcreate_index_ignore_unique = 1
to ignore UNIQUE keyword in queries. #52320 (Ilya Yatsishin). - Add support of predefined macro (
{database}
and{table}
) in some Kafka engine settings: topic, consumer, client_id, etc. #52386 (Yury Bogomolov). - Disable updating the filesystem cache during backup/restore. Filesystem cache must not be updated during backup/restore, it seems it just slows down the process without any profit (because the BACKUP command can read a lot of data and it's no use to put all the data to the filesystem cache and immediately evict it). #52402 (Vitaly Baranov).
- The configuration of the S3 endpoint allows using it from the root, and appends '/' automatically if needed. #47809. #52600 (xiaolei565).
- For clickhouse-local allow positional options and populate global UDF settings (user_scripts_path and user_defined_executable_functions_config). #52643 (Yakov Olkhovskiy).
system.asynchronous_metrics
now includes metrics "QueryCacheEntries" and "QueryCacheBytes" to inspect the query cache. #52650 (Robert Schulze).- Added possibility to use
s3_storage_class
parameter in theSETTINGS
clause of theBACKUP
statement for backups to S3. #52658 (Roman Vasin). - Add utility
print-backup-info.py
which parses a backup metadata file and prints information about the backup. #52690 (Vitaly Baranov). - Closes #49510. Currently we have database and table names case-sensitive, but BI tools query
information_schema
sometimes in lowercase, sometimes in uppercase. For this reason we haveinformation_schema
database, containing lowercase tables, such asinformation_schema.tables
andINFORMATION_SCHEMA
database, containing uppercase tables, such asINFORMATION_SCHEMA.TABLES
. But some tools are queryingINFORMATION_SCHEMA.tables
andinformation_schema.TABLES
. The proposed solution is to duplicate both lowercase and uppercase tables in lowercase and uppercaseinformation_schema
database. #52695 (Yarik Briukhovetskyi). - Query
CHECK TABLE
has better performance and usability (sends progress updates, cancellable). #52745 (vdimir). - Add support for
modulo
,intDiv
,intDivOrZero
for tuples by distributing them across tuple's elements. #52758 (Yakov Olkhovskiy). - Search for default
yaml
andyml
configs in clickhouse-client afterxml
. #52767 (Alexey Milovidov). - When merging into non-'clickhouse' rooted configuration, configs with different root node name just bypassed without exception. #52770 (Yakov Olkhovskiy).
- Now it's possible to specify min (
memory_profiler_sample_min_allocation_size
) and max (memory_profiler_sample_max_allocation_size
) size for allocations to be tracked with sampling memory profiler. #52779 (alesapin). - Add
precise_float_parsing
setting to switch float parsing methods (fast/precise). #52791 (Andrey Zvonov). - Use the same default paths for
clickhouse-keeper
(symlink) as forclickhouse-keeper
(executable). #52861 (Vitaly Baranov). - Improve error message for table function
remote
. Closes #40220. #52959 (jiyoungyoooo). - Added the possibility to specify custom storage policy in the
SETTINGS
clause ofRESTORE
queries. #52970 (Victor Krasnov). - Add the ability to throttle the S3 requests on backup operations (
BACKUP
andRESTORE
commands now honors3_max_[get/put]_[rps/burst]
). #52974 (Daniel Pozo Escalona). - Add settings to ignore ON CLUSTER clause in queries for management of replicated user-defined functions or access control entities with replicated storage. #52975 (Aleksei Filatov).
- EXPLAIN actions for JOIN step. #53006 (Maksim Kita).
- Make
hasTokenOrNull
andhasTokenCaseInsensitiveOrNull
return null for empty needles. #53059 (ltrk2). - Allow to restrict allowed paths for filesystem caches. Mainly useful for dynamic disks. If in server config
filesystem_caches_path
is specified, all filesystem caches' paths will be restricted to this directory. E.g. if thepath
in cache config is relative - it will be put infilesystem_caches_path
; ifpath
in cache config is absolute, it will be required to lie insidefilesystem_caches_path
. Iffilesystem_caches_path
is not specified in config, then behaviour will be the same as in earlier versions. #53124 (Kseniia Sumarokova). - Added a bunch of custom commands (mostly to make ClickHouse debugging easier). #53127 (pufit).
- Add diagnostic info about file name during schema inference - it helps when you process multiple files with globs. #53135 (Alexey Milovidov).
- Client will load suggestions using the main connection if the second connection is not allowed to create a session. #53177 (Alexey Gerasimchuck).
- Add EXCEPT clause to
SYSTEM STOP/START LISTEN QUERIES [ALL/DEFAULT/CUSTOM]
query, for exampleSYSTEM STOP LISTEN QUERIES ALL EXCEPT TCP, HTTP
. #53280 (Nikolay Degterinsky). - Change the default of
max_concurrent_queries
from 100 to 1000. It's ok to have many concurrent queries if they are not heavy, and mostly waiting for the network. Note: don't confuse concurrent queries and QPS: for example, ClickHouse server can do tens of thousands of QPS with less than 100 concurrent queries. #53285 (Alexey Milovidov). - Limit number of concurrent background partition optimize merges. #53405 (Duc Canh Le).
- Added a setting
allow_moving_table_directory_to_trash
that allows to ignoreDirectory for table data already exists
error when replicating/recovering aReplicated
database. #53425 (Alexander Tokmakov). - If server settings
asynchronous_metrics_update_period_s
andasynchronous_heavy_metrics_update_period_s
are misconfigured to 0, it will now fail gracefully instead of terminating the application. #53428 (Robert Schulze). - The ClickHouse server now respects memory limits changed via cgroups when reloading its configuration. #53455 (Robert Schulze).
- Add ability to turn off flush of Distributed tables on
DETACH
,DROP
, or server shutdown. #53501 (Azat Khuzhin). - The
domainRFC
function now supports IPv6 in square brackets. #53506 (Chen768959). - Use longer timeout for S3 CopyObject requests, which are used in backups. #53533 (Michael Kolupaev).
- Added server setting
aggregate_function_group_array_max_element_size
. This setting is used to limit array size forgroupArray
function at serialization. The default value is16777215
. #53550 (Nikolai Kochetov). SCHEMA
was added as alias forDATABASE
to improve MySQL compatibility. #53587 (Daniël van Eeden).- Add asynchronous metrics about tables in the system database. For example,
TotalBytesOfMergeTreeTablesSystem
. This closes #53603. #53604 (Alexey Milovidov). - SQL editor in the Play UI and Dashboard will not use Grammarly. #53614 (Alexey Milovidov).
- As expert-level settings, it is now possible to (1) configure the size_ratio (i.e. the relative size of the protected queue) of the [index] mark/uncompressed caches, (2) configure the cache policy of the index mark and index uncompressed caches. #53657 (Robert Schulze).
- Added client info validation to the query packet in TCPHandler. #53673 (Alexey Gerasimchuck).
- Retry loading parts in case of network errors while interaction with Microsoft Azure. #53750 (SmitaRKulkarni).
- Stack traces for exceptions; Materialized view exceptions are propagated. #53766 (Ilya Golshtein).
- If no hostname or port were specified, keeper client will try to search for a connection string in the ClickHouse's config.xml. #53769 (pufit).
- Add profile event
PartsLockMicroseconds
which shows the amount of microseconds we hold the data parts lock in MergeTree table engine family. #53797 (alesapin). - Make reconnect limit in RAFT limits configurable for keeper. This configuration can help to make keeper to rebuild connection with peers quicker if the current connection is broken. #53817 (Pengyuan Bian).
- Ignore foreign keys in tables definition to improve compatibility with MySQL, so a user wouldn't need to rewrite his SQL of the foreign key part, ref #53380. #53864 (jsc0218).
Build/Testing/Packaging Improvement
- Don't expose symbols from ClickHouse binary to dynamic linker. It might fix #43933. #47475 (Alexey Milovidov).
- Add
clickhouse-keeper-client
symlink to the clickhouse-server package. #51882 (Mikhail f. Shiryaev). - Add https://github.com/elliotchance/sqltest to CI to report the SQL 2016 conformance. #52293 (Alexey Milovidov).
- Upgrade PRQL to 0.9.3. #53060 (Maximilian Roos).
- System tables from CI checks are exported to ClickHouse Cloud. #53086 (Alexey Milovidov).
- The compiler's profile data (
-ftime-trace
) is uploaded to ClickHouse Cloud. #53100 (Alexey Milovidov). - Speed up Debug and Tidy builds. #53178 (Alexey Milovidov).
- Speed up the build by removing tons and tonnes of garbage. One of the frequently included headers was poisoned by boost. #53180 (Alexey Milovidov).
- Remove even more garbage. #53182 (Alexey Milovidov).
- The function
arrayAUC
was using heavy C++ templates - ditched them. #53183 (Alexey Milovidov). - Some translation units were always rebuilt regardless of ccache. The culprit is found and fixed. #53184 (Alexey Milovidov).
- The compiler's profile data (
-ftime-trace
) is uploaded to ClickHouse Cloud., the second attempt after #53100. #53213 (Alexey Milovidov). - Export logs from CI in stateful tests to ClickHouse Cloud. #53351 (Alexey Milovidov).
- Export logs from CI in stress tests. #53353 (Alexey Milovidov).
- Export logs from CI in fuzzer. #53354 (Alexey Milovidov).
- Preserve environment parameters in the `clickhouse start` command. Fixes #51962. #53418 (Mikhail f. Shiryaev).
- Follow-up for #53418. Small improvements for install_check.py, adding tests for proper ENV parameter passing to the main process on `init.d start`. #53457 (Mikhail f. Shiryaev).
- Reorganize file management in CMake to prevent potential duplications. For instance, `indexHint.cpp` is duplicated in both `dbms_sources` and `clickhouse_functions_sources`. #53621 (Amos Bird).
- Upgrade snappy to 1.1.10. #53672 (李扬).
- Slightly improve cmake build by sanitizing some dependencies and removing some duplicates. Each commit includes a short description of the changes made. #53759 (Amos Bird).
Bug Fix (user-visible misbehavior in an official stable release)
- Do not reset (experimental) Annoy index during build-up with more than one mark #51325 (Tian Xinhui).
- Fix usage of temporary directories during RESTORE #51493 (Azat Khuzhin).
- Fix binary arithmetic for Nullable(IPv4) #51642 (Yakov Olkhovskiy).
- Support IPv4 and IPv6 data types as dictionary attributes #51756 (Yakov Olkhovskiy).
- A fix for the checksum of compressed marks #51777 (SmitaRKulkarni).
- Fix a comma being mistakenly parsed as part of a datetime in CSV best-effort parsing #51950 (Kruglov Pavel).
- Don't throw exception when executable UDF has parameters #51961 (Nikita Taranov).
- Fix recalculation of skip indexes and projections in `ALTER DELETE` queries #52530 (Anton Popov).
- MaterializedMySQL: Fix the infinite loop in ReadBuffer::read #52621 (Val Doroshchuk).
- Load suggestions only with the `clickhouse` dialect #52628 (János Benjamin Antal).
- Init and destroy ares channel on demand. #52634 (Arthur Passos).
- Fix filtering by virtual columns with OR expression #52653 (Azat Khuzhin).
- Fix crash in function `tuple` with one sparse column argument #52659 (Anton Popov).
- Fix named collections on cluster #52687 (Al Korgun).
- Fix reading of an unnecessary column in case of multistage `PREWHERE` #52689 (Anton Popov).
- Fix unexpected sort result on multiple columns with nulls-first direction #52761 (copperybean).
- Fix data race in Keeper reconfiguration #52804 (Antonio Andelic).
- Fix sorting of sparse columns with large limit #52827 (Anton Popov).
- clickhouse-keeper: fix implementation of server with poll. #52833 (Andy Fiddaman).
- Make regexp analyzer recognize named capturing groups #52840 (Han Fei).
- Fix possible assert in `~PushingAsyncPipelineExecutor` in clickhouse-local #52862 (Kruglov Pavel).
- Fix reading of empty `Nested(Array(LowCardinality(...)))` #52949 (Anton Popov).
- Added new tests for session_log and fixed the inconsistency between login and logout. #52958 (Alexey Gerasimchuck).
- Fix password leak in show create mysql table #52962 (Duc Canh Le).
- Convert sparse column format to full in CreateSetAndFilterOnTheFlyStep #53000 (vdimir).
- Fix rare race condition with empty key prefix directory deletion in fs cache #53055 (Kseniia Sumarokova).
- Fix ZstdDeflatingWriteBuffer truncating the output sometimes #53064 (Michael Kolupaev).
- Fix query_id in part_log with async flush queries #53103 (Raúl Marín).
- Fix possible error from cache "Read unexpected size" #53121 (Kseniia Sumarokova).
- Disable the new parquet encoder #53130 (Alexey Milovidov).
- Fix "Not-ready Set" exception #53162 (Nikolai Kochetov).
- Fix character escaping in the PostgreSQL engine #53250 (Nikolay Degterinsky).
- Experimental session_log table: Added new tests for session_log and fixed the inconsistency between login and logout. #53255 (Alexey Gerasimchuck). Fixed inconsistency between login success and logout #53302 (Alexey Gerasimchuck).
- Fix adding sub-second intervals to DateTime #53309 (Michael Kolupaev).
- Fix "Context has expired" error in dictionaries #53342 (Alexey Milovidov).
- Fix incorrect normal projection AST format #53347 (Amos Bird).
- Forbid use_structure_from_insertion_table_in_table_functions when execute Scalar #53348 (flynn).
- Fix loading lazy database during system.table select query #53372 (SmitaRKulkarni).
- Fixed system.data_skipping_indices for MaterializedMySQL #53381 (Filipp Ozinov).
- Fix processing single carriage return in TSV file segmentation engine #53407 (Kruglov Pavel).
- Fix `Context has expired` error properly #53433 (Michael Kolupaev).
- Fix `timeout_overflow_mode` when having a subquery in the RHS of IN #53439 (Duc Canh Le).
- Fix an unexpected behavior in #53152 #53440 (Zhiguo Zhou).
- Fix JSON_QUERY function parse error when the path is all numbers #53470 (KevinyhZou).
- Fix wrong columns order for queries with parallel FINAL. #53489 (Nikolai Kochetov).
- Fixed SELECTing from ReplacingMergeTree with do_not_merge_across_partitions_select_final #53511 (Vasily Nemkov).
- Flush async insert queue first on shutdown #53547 (joelynch).
- Fix crash in join on sparse columns #53548 (vdimir).
- Fix possible UB in Set skipping index for functions with incorrect args #53559 (Azat Khuzhin).
- Fix possible UB in inverted indexes (experimental feature) #53560 (Azat Khuzhin).
- Fix: the interpolate expression took the source column instead of the same-name alias from the select expression. #53572 (Yakov Olkhovskiy).
- Fix number of dropped granules in EXPLAIN PLAN index=1 #53616 (wangxiaobo).
- Correctly handle totals and extremes with `DelayedSource` #53644 (Antonio Andelic).
- Fix prepared set cache getting stuck in the mutation pipeline #53645 (Nikolai Kochetov).
- Fix bug on mutations with subcolumns of type JSON in predicates of UPDATE and DELETE queries. #53677 (VanDarkholme7).
- Fix filter pushdown for full_sorting_merge join #53699 (vdimir).
- Try to fix bug with `NULL::LowCardinality(Nullable(...)) NOT IN` #53706 (Andrey Zvonov).
- Fix: sorted distinct with sparse columns #53711 (Igor Nikonov).
- `transform`: correctly handle default column with multiple rows #53742 (Salvatore Mesoraca).
- Fix fuzzer crash in parseDateTime #53764 (Robert Schulze).
- MaterializedPostgreSQL: fix uncaught exception in getCreateTableQueryImpl #53832 (Kseniia Sumarokova).
- Fix possible segfault while using PostgreSQL engine #53847 (Kseniia Sumarokova).
- Fix named_collection_admin alias #54066 (Kseniia Sumarokova).
ClickHouse release 23.7, 2023-07-27
Backward Incompatible Change
- Add the `NAMED COLLECTION` access type (aliases `USE NAMED COLLECTION`, `NAMED COLLECTION USAGE`). This PR is backward incompatible because this access type is disabled by default (because a parent access type `NAMED COLLECTION ADMIN` is disabled by default as well). Proposed in #50277. To grant, use `GRANT NAMED COLLECTION ON collection_name TO user` or `GRANT NAMED COLLECTION ON * TO user` (see the sketch after this list). To be able to give these grants, `named_collection_admin` is required in the config (previously it was named `named_collection_control`, so it will remain as an alias). #50625 (Kseniia Sumarokova).
- Fixed a typo in the `system.parts` column name `last_removal_attemp_time`. Now it is named `last_removal_attempt_time`. #52104 (filimonov).
- Bump the version of `distributed_ddl_entry_format_version` to 5 by default (enables opentelemetry and initial_query_id pass-through). This will not allow processing of existing entries for distributed DDL after a downgrade (but note that usually there should be no such unprocessed entries). #52128 (Azat Khuzhin).
- Check projection metadata the same way we check ordinary metadata. This change may prevent the server from starting in case there was a table with an invalid projection. An example is a projection that created positional columns in PK (e.g. `projection p (select * order by 1, 4)`, which is not allowed in table PK and can cause a crash during insert/merge). Drop such projections before the update. Fixes #52353. #52361 (Nikolai Kochetov).
- The experimental feature `hashid` is removed due to a bug. The quality of implementation was questionable at the start, and it didn't get through the experimental status. This closes #52406. #52449 (Alexey Milovidov).
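A short sketch of the new grants described in the first item of this list; `my_collection` and `alice` are placeholder names.

```sql
-- grant access to one named collection, then to all of them
GRANT NAMED COLLECTION ON my_collection TO alice;
GRANT NAMED COLLECTION ON * TO alice;
```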
New Feature
- Added the `Overlay` database engine to combine multiple databases into one. Added the `Filesystem` database engine to represent a directory in the filesystem as a set of implicitly available tables with auto-detected formats and structures. A new `S3` database engine allows read-only interaction with S3 storage by representing a prefix as a set of tables. A new `HDFS` database engine allows interacting with HDFS storage in the same way. #48821 (alekseygolub).
- Add support for external disks in Keeper for storing snapshots and logs. #50098 (Antonio Andelic).
- Add support for multi-directory selection (`{}`) globs. #50559 (Andrey Zvonov).
- Kafka connector can fetch Avro schema from schema registry with basic authentication using URL-encoded credentials. #49664 (Ilya Golshtein).
- Add function `arrayJaccardIndex` which computes the Jaccard similarity between two arrays. #50076 (FFFFFFFHHHHHHH).
- Add a column `is_obsolete` to `system.settings` and similar tables. Closes #50819. #50826 (flynn).
- Implement support for encrypted elements in the configuration file. Added the possibility to use encrypted text in leaf elements of the configuration file. The text is encrypted using encryption codecs from the `<encryption_codecs>` section. #50986 (Roman Vasin).
- The Grace Hash Join algorithm is now applicable to FULL and RIGHT JOINs. #49483. #51013 (lgbo).
- Add `SYSTEM STOP LISTEN` query for more graceful termination. Closes #47972. #51016 (Nikolay Degterinsky).
- Add `input_format_csv_allow_variable_number_of_columns` options. #51273 (Dmitry Kardymon).
- Another boring feature: add function `substring_index`, as in Spark or MySQL. #51472 (李扬).
- A system table `jemalloc_bins` to show stats for jemalloc bins. Example: `SELECT *, size * (nmalloc - ndalloc) AS allocated_bytes FROM system.jemalloc_bins WHERE allocated_bytes > 0 ORDER BY allocated_bytes DESC LIMIT 10`. Enjoy. #51674 (Alexander Gololobov).
- Add `RowBinaryWithDefaults` format with an extra byte before each column as a flag for using the column's default value. Closes #50854. #51695 (Kruglov Pavel).
- Added the `default_temporary_table_engine` setting. Same as `default_table_engine` but for temporary tables. #51292. #51708 (velavokr).
- Added new `initcap`/`initcapUTF8` functions which convert the first letter of each word to upper case and the rest to lower case. #51735 (Dmitry Kardymon).
- Create table now supports `PRIMARY KEY` syntax in column definition. Columns are added to the primary index in the same order columns are defined. #51881 (Ilya Yatsishin).
- Added the possibility to use date and time format specifiers in log and error log file names, either in config files (`log` and `errorlog` tags) or command line arguments (`--log-file` and `--errorlog-file`). #51945 (Victor Krasnov).
- Added Peak Memory Usage statistic to HTTP headers. #51946 (Dmitry Kardymon).
- Added new `hasSubsequence` (+ `CaseInsensitive` and `UTF8` versions) functions to match subsequences in strings. #52050 (Dmitry Kardymon).
- Add `array_agg` as an alias of `groupArray` for PostgreSQL compatibility (see the sketch after this list). Closes #52100. #52135 (flynn).
- Add `any_value` as a compatibility alias for the `any` aggregate function. Closes #52140. #52147 (flynn).
- Add aggregate function `array_concat_agg` for compatibility with BigQuery; it's an alias of `groupArrayArray`. Closes #52139. #52149 (flynn).
- Add `OCTET_LENGTH` as an alias to `length`. Closes #52153. #52176 (FFFFFFFHHHHHHH).
- Added `firstLine` function to extract the first line from a multi-line string. This closes #51172. #52209 (Mikhail Koviazin).
- Implement KQL-style formatting for the `Interval` data type. This is only needed for compatibility with the `Kusto` query language. #45671 (ltrk2).
- Added query `SYSTEM FLUSH ASYNC INSERT QUEUE` which flushes all pending asynchronous inserts to the destination tables. Added a server-side setting `async_insert_queue_flush_on_shutdown` (`true` by default) which determines whether to flush the queue of asynchronous inserts on graceful shutdown. Setting `async_insert_threads` is now a server-side setting. #49160 (Anton Popov).
- Aliases `current_database` and a new function `current_schemas` for compatibility with PostgreSQL. #51076 (Pedro Riera).
- Add aliases for functions `today` (now available under the `curdate`/`current_date` names) and `now` (`current_timestamp`). #52106 (Lloyd-Pottiger).
- Support `async_deduplication_token` for async insert. #52136 (Han Fei).
- Add new setting `disable_url_encoding` that allows disabling decoding/encoding of the path in the URI in the URL engine. #52337 (Kruglov Pavel).
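A quick, hedged illustration of the compatibility aliases added in this list; results assume the usual behavior of the underlying `groupArray`, `any`, and `length` functions.

```sql
-- PostgreSQL/BigQuery-style aliases over existing functions
SELECT array_agg(number) AS arr          -- alias of groupArray
FROM numbers(3);

SELECT any_value(number) AS v            -- alias of any
FROM numbers(3);

SELECT OCTET_LENGTH('ClickHouse') AS n;  -- alias of length, returns 10
```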
Performance Improvement
- Enable automatic selection of the sparse serialization format by default. It improves performance. The format is supported since version 22.1. After this change, downgrading to versions older than 22.1 might not be possible. A downgrade may require setting `ratio_of_defaults_for_sparse_serialization=0.9375` #55153. You can turn off the usage of the sparse serialization format by providing the `ratio_of_defaults_for_sparse_serialization = 1` setting for your MergeTree tables (see the sketch after this list). #49631 (Alexey Milovidov).
- Enable `move_all_conditions_to_prewhere` and `enable_multiple_prewhere_read_steps` settings by default. #46365 (Alexander Gololobov).
- Improves performance of some queries by tuning the allocator. #46416 (Azat Khuzhin).
- Now we use fixed-size tasks in `MergeTreePrefetchedReadPool` as in `MergeTreeReadPool`. Also, from now on we use a connection pool for S3 requests. #49732 (Nikita Taranov).
- More pushdown to the right side of join. #50532 (Nikita Taranov).
- Improve grace_hash join by reserving hash table's size (resubmit). #50875 (lgbo).
- Waiting on the lock in `OpenedFileCache` could sometimes be noticeable. We sharded it into multiple sub-maps (each with its own lock) to avoid contention. #51341 (Nikita Taranov).
- Move conditions with primary key columns to the end of the PREWHERE chain. The idea is that conditions with PK columns are likely to be used in PK analysis and will not contribute much more to PREWHERE filtering. #51958 (Alexander Gololobov).
- Speed up `COUNT(DISTINCT)` for String types by inlining SipHash. The performance experiments of OnTime on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) show that this change could bring an improvement of 11.6% to the QPS of the query Q8 while having no impact on others. #52036 (Zhiguo Zhou).
- Enable `allow_vertical_merges_from_compact_to_wide_parts` by default. It will save memory usage during merges. #52295 (Alexey Milovidov).
- Fix incorrect projection analysis which invalidates primary keys. This issue only exists when `query_plan_optimize_primary_key = 1, query_plan_optimize_projection = 1`. This fixes #48823. This fixes #51173. #52308 (Amos Bird).
- Reduce the number of syscalls in `FileCache::loadMetadata` - this speeds up server startup if the filesystem cache is configured. #52435 (Raúl Marín).
- Allow to have a strict lower boundary for file segment size by downloading remaining data in the background. The minimum size of a file segment (if the actual file size is bigger) is configured as the cache configuration setting `boundary_alignment`, by default `4Mi`. The number of background threads is configured as the cache configuration setting `background_download_threads`, by default `2`. Also, `max_file_segment_size` was increased from `8Mi` to `32Mi` in this PR. #51000 (Kseniia Sumarokova).
- Decreased default timeouts for S3 from 30 seconds to 3 seconds, and for other HTTP from 180 seconds to 30 seconds. #51171 (Michael Kolupaev).
- New setting `merge_tree_determine_task_size_by_prewhere_columns` added. If set to `true`, only sizes of the columns from the `PREWHERE` section will be considered to determine the reading task size. Otherwise, all the columns from the query are considered. #52606 (Nikita Taranov).
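A hedged sketch of the sparse-serialization opt-out mentioned in the first item of this list; the table and column names are placeholders.

```sql
-- Disable sparse serialization for this table by requiring a 100% ratio of default values
CREATE TABLE events
(
    id UInt64,
    value UInt64
)
ENGINE = MergeTree
ORDER BY id
SETTINGS ratio_of_defaults_for_sparse_serialization = 1;
```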
Improvement
- Use read_bytes/total_bytes_to_read for progress bar in s3/file/url/... table functions for better progress indication. #51286 (Kruglov Pavel).
- Introduce a table setting `wait_for_unique_parts_send_before_shutdown_ms` which specifies the amount of time a replica will wait before closing the interserver handler for replicated sends. Also fix an inconsistency with the shutdown of tables and interserver handlers: now the server shuts down tables first and only after that shuts down interserver handlers. #51851 (alesapin).
- Allow SQL standard `FETCH` without `OFFSET` (see the sketch after this list). See https://antonz.org/sql-fetch/. #51293 (Alexey Milovidov).
- Allow filtering HTTP headers for the URL/S3 table functions with the new `http_forbid_headers` section in config. Both exact matching and regexp filters are available. #51038 (Nikolay Degterinsky).
- Don't show messages about `16 EiB` free space in logs, as they don't make sense. This closes #49320. #49342 (Alexey Milovidov).
- Properly check the limit for the `sleepEachRow` function. Add a setting `function_sleep_max_microseconds_per_block`. This is needed for the generic query fuzzer. #49343 (Alexey Milovidov).
- Fix two issues in `geoHash` functions. #50066 (李扬).
- Log async insert flush queries into `system.query_log`. #51160 (Raúl Marín).
- Functions `date_diff` and `age` now support the millisecond/microsecond unit and work with microsecond precision. #51291 (Dmitry Kardymon).
- Improve parsing of path in clickhouse-keeper-client. #51359 (Azat Khuzhin).
- A third-party product depending on ClickHouse (Gluten: a Plugin to Double SparkSQL's Performance) had a bug. This fix avoids heap overflow in that third-party product while reading from HDFS. #51386 (李扬).
- Add ability to disable native copy for S3 (setting for BACKUP/RESTORE `allow_s3_native_copy`, and `s3_allow_native_copy` for `s3`/`s3_plain` disks). #51448 (Azat Khuzhin).
- Add column `primary_key_size` to the `system.parts` table to show the compressed primary key size on disk. Closes #51400. #51496 (Yarik Briukhovetskyi).
- Allow running `clickhouse-local` without procfs, without a home directory existing, and without name resolution plugins from glibc. #51518 (Alexey Milovidov).
- Add placeholder `%a` for the full filename in the rename_files_after_processing setting. #51603 (Kruglov Pavel).
- Add column `modification_time` into `system.parts_columns`. #51685 (Azat Khuzhin).
- Add new setting `input_format_csv_use_default_on_bad_values` to the CSV format that allows inserting the default value when parsing of a single field failed. #51716 (KevinyhZou).
- Added a crash log flush to the disk after an unexpected crash. #51720 (Alexey Gerasimchuck).
- Fix behavior in dashboard page where errors unrelated to authentication are not shown. Also fix 'overlapping' chart behavior. #51744 (Zach Naimon).
- Allow UUID to UInt128 conversion. #51765 (Dmitry Kardymon).
- Added support for the function `range` with Nullable arguments. #51767 (Dmitry Kardymon).
- Convert conditions like `toyear(x) = c` to `c1 <= x < c2`. #51795 (Han Fei).
- Improve MySQL compatibility of the statement `SHOW INDEX`. #51796 (Robert Schulze).
- Fix `use_structure_from_insertion_table_in_table_functions` does not work with `MATERIALIZED` and `ALIAS` columns. Closes #51817. Closes #51019. #51825 (flynn).
- Cache dictionary now requests only unique keys from the source. Closes #51762. #51853 (Maksim Kita).
- Fixed the case when settings were not applied for EXPLAIN query when FORMAT was provided. #51859 (Nikita Taranov).
- Allow SETTINGS before FORMAT in DESCRIBE TABLE query for compatibility with SELECT query. Closes #51544. #51899 (Nikolay Degterinsky).
- Var-Int encoded integers (e.g. used by the native protocol) can now use the full 64-bit range. 3rd party clients are advised to update their var-int code accordingly. #51905 (Robert Schulze).
- Update certificates when they change without the need to manually SYSTEM RELOAD CONFIG. #52030 (Mike Kot).
- Added `allow_create_index_without_type` setting that allows ignoring `ADD INDEX` queries without a specified `TYPE`. Standard SQL queries will just succeed without changing the table schema. #52056 (Ilya Yatsishin).
- Log messages are written to the `system.text_log` from the server startup. #52113 (Dmitry Kardymon).
- In cases where the HTTP endpoint has multiple IP addresses and the first of them is unreachable, a timeout exception was thrown. Made session creation handle all resolved endpoints. #52116 (Aleksei Filatov).
- Avro input format now supports Union even if it contains only a single type. Closes #52131. #52137 (flynn).
- Add setting `optimize_use_implicit_projections` to disable implicit projections (currently only the `min_max_count` projection). #52152 (Amos Bird).
- It was possible to use the function `hasToken` for an infinite loop. Now this possibility is removed. This closes #52156. #52160 (Alexey Milovidov).
- Create ZK ancestors optimistically. #52195 (Raúl Marín).
- Fix #50582. Avoid the `Not found column ... in block` error in some cases of reading in-order and constants. #52259 (Chen768959).
- Check whether S2 geo primitives are invalid as early as possible on the ClickHouse side. This closes: #27090. #52260 (Nikita Mikhaylov).
- Add back missing projection QueryAccessInfo when `query_plan_optimize_projection = 1`. This fixes #50183. This fixes #50093. #52327 (Amos Bird).
- When `ZooKeeperRetriesControl` rethrows an error, it's more useful to see its original stack trace, not the one from `ZooKeeperRetriesControl` itself. #52347 (Vitaly Baranov).
- Wait for zero copy replication lock even if some disks don't support it. #52376 (Raúl Marín).
- Now interserver port will be closed only after tables are shut down. #52498 (alesapin).
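A hedged sketch of the standalone `FETCH` clause mentioned earlier in this list; the exact accepted keyword variants may differ, this follows the SQL standard form.

```sql
-- SQL standard FETCH without a preceding OFFSET clause
SELECT number
FROM numbers(100)
ORDER BY number
FETCH FIRST 5 ROWS ONLY;
```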
Experimental Feature
- Writing parquet files is 10x faster; it's multi-threaded now. Almost the same speed as reading. #49367 (Michael Kolupaev). This is controlled by the setting `output_format_parquet_use_custom_encoder`, which is disabled by default because the feature is non-ideal (see the sketch after this list).
- Added support for PRQL as a query language. #50686 (János Benjamin Antal).
- Allow to add a disk name for custom disks. Previously custom disks would use an internally generated disk name. Now it will be possible with `disk = disk_<name>(...)` (e.g. the disk will have the name `name`). #51552 (Kseniia Sumarokova). This syntax can be changed in this release.
- (experimental MaterializedMySQL) Fixed crash when `mysqlxx::Pool::Entry` is used after it was disconnected. #52063 (Val Doroshchuk).
- (experimental MaterializedMySQL) `CREATE TABLE ... AS SELECT` .. is now supported in MaterializedMySQL. #52067 (Val Doroshchuk).
- (experimental MaterializedMySQL) Introduced automatic conversion of text types to utf8 for MaterializedMySQL. #52084 (Val Doroshchuk).
- (experimental MaterializedMySQL) Now unquoted UTF-8 strings are supported in DDL for MaterializedMySQL. #52318 (Val Doroshchuk).
- (experimental MaterializedMySQL) Now double quoted comments are supported in MaterializedMySQL. #52355 (Val Doroshchuk).
- Upgrade Intel QPL from v1.1.0 to v1.2.0, upgrade Intel accel-config from v3.5 to v4.0, and fix an issue where a Device IOTLB miss has a big performance impact for IAA accelerators. #52180 (jasperzhu).
- The `session_timezone` setting (new in version 23.6) is demoted to experimental. #52445 (Alexey Milovidov).
- Support ZooKeeper `reconfig` command for ClickHouse Keeper with incremental reconfiguration, which can be enabled via the `keeper_server.enable_reconfiguration` setting. Support adding servers, removing servers, and changing server priorities. #49450 (Mike Kot). It is suspected that this feature is incomplete.
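A hedged sketch of enabling the multi-threaded Parquet writer mentioned in the first item of this list; the output path is a placeholder and the query is arbitrary.

```sql
-- Opt in to the new (experimental) Parquet encoder, then export a query result
SET output_format_parquet_use_custom_encoder = 1;

SELECT number, toString(number) AS s
FROM numbers(1000000)
INTO OUTFILE '/tmp/numbers.parquet'
FORMAT Parquet;
```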
Build/Testing/Packaging Improvement
- Add experimental ClickHouse builds for Linux RISC-V 64 to CI. #31398 (Alexey Milovidov).
- Add integration test check with the enabled Analyzer. #50926 #52210 (Dmitry Novik).
- Reproducible builds for Rust. #52395 (Azat Khuzhin).
- Update Cargo dependencies. #51721 (Raúl Marín).
- Make the function `CHColumnToArrowColumn::fillArrowArrayWithArrayColumnData` work with nullable arrays, which are not possible in ClickHouse, but needed for Gluten. #52112 (李扬).
- We've updated the CCTZ library to master, but there are no user-visible changes. #52124 (Alexey Milovidov).
- The `system.licenses` table now includes the hard-forked library Poco. This closes #52066. #52127 (Alexey Milovidov).
- Check that there are no cases of bad punctuation: whitespace before a comma like `Hello ,world` instead of `Hello, world`. #52549 (Alexey Milovidov).
Bug Fix (user-visible misbehavior in an official stable release)
- Fix MaterializedPostgreSQL syncTables #49698 (Kseniia Sumarokova).
- Fix projection with optimize_aggregators_of_group_by_keys #49709 (Amos Bird).
- Fix optimize_skip_unused_shards with JOINs #51037 (Azat Khuzhin).
- Fix formatDateTime() with fractional negative datetime64 #51290 (Dmitry Kardymon).
- Functions `hasToken*` were totally wrong. Add a test for #43358 #51378 (Alexey Milovidov).
- Fix optimization to move functions before sorting. #51481 (Nikolai Kochetov).
- Fix Block structure mismatch in Pipe::unitePipes for FINAL #51492 (Nikita Taranov).
- Fix SIGSEGV for clusters with zero weight across all shards (fixes INSERT INTO FUNCTION clusterAllReplicas()) #51545 (Azat Khuzhin).
- Fix timeout for hedged requests #51582 (Azat Khuzhin).
- Fix logical error in ANTI join with NULL #51601 (vdimir).
- Fix for moving 'IN' conditions to PREWHERE #51610 (Alexander Gololobov).
- Do not apply PredicateExpressionsOptimizer for ASOF/ANTI join #51633 (vdimir).
- Fix async insert with deduplication for ReplicatedMergeTree using merging algorithms #51676 (Antonio Andelic).
- Fix reading from an empty column in `parseSipHashKey` #51804 (Nikita Taranov).
- Fix segfault when creating an invalid EmbeddedRocksdb table #51847 (Duc Canh Le).
- Fix inserts into MongoDB tables #51876 (Nikolay Degterinsky).
- Fix deadlock on DatabaseCatalog shutdown #51908 (Alexander Tokmakov).
- Fix error in subquery operators #51922 (Alexey Milovidov).
- Fix async connect to hosts with multiple ips #51934 (Kruglov Pavel).
- Do not remove inputs after ActionsDAG::merge #51947 (Nikolai Kochetov).
- Check refcount in `RemoveManyObjectStorageOperation::finalize` instead of `execute` #51954 (vdimir).
- Allow parametric UDFs #51964 (Alexey Milovidov).
- Small fix for toDateTime64() for dates after 2283-12-31 #52130 (Andrey Zvonov).
- Fix ORDER BY tuple of WINDOW functions #52145 (Alexey Milovidov).
- Fix incorrect projection analysis when aggregation expression contains monotonic functions #52151 (Amos Bird).
- Fix error in `groupArrayMoving` functions #52161 (Alexey Milovidov).
- Disable direct join for range dictionary #52187 (Duc Canh Le).
- Fix sticky mutations test (and extremely rare race condition) #52197 (alesapin).
- Fix race in Web disk #52211 (Kseniia Sumarokova).
- Fix data race in Connection::setAsyncCallback on unknown packet from server #52219 (Kruglov Pavel).
- Fix temp data deletion on startup, add test #52275 (vdimir).
- Don't use minmax_count projections when counting nullable columns #52297 (Amos Bird).
- MergeTree/ReplicatedMergeTree should use server timezone for log entries #52325 (Azat Khuzhin).
- Fix parameterized view with cte and multiple usage #52328 (SmitaRKulkarni).
- Disable expression templates for time intervals #52335 (Alexander Tokmakov).
- Fix `apply_snapshot` in Keeper #52358 (Antonio Andelic).
- Update build-osx.md #52377 (AlexBykovski).
- Fix `countSubstrings` hang with an empty needle and a column haystack #52409 (Sergei Trifonov).
- Fix normal projection with merge table #52432 (Amos Bird).
- Fix possible double-free in Aggregator #52439 (Nikita Taranov).
- Fixed inserting into Buffer engine #52440 (Vasily Nemkov).
- The implementation of AnyHash was non-conformant. #52448 (Alexey Milovidov).
- Check recursion depth in OptimizedRegularExpression #52451 (Alexey Milovidov).
- Fix data-race DatabaseReplicated::startupTables()/canExecuteReplicatedMetadataAlter() #52490 (Azat Khuzhin).
- Fix abort in function `transform` #52513 (Alexey Milovidov).
- Fix lightweight delete after drop of projection #52517 (Anton Popov).
- Fix possible error "Cannot drain connections: cancel first" #52585 (Kruglov Pavel).
ClickHouse release 23.6, 2023-06-29
Backward Incompatible Change
- Delete feature `do_not_evict_index_and_mark_files` in the fs cache. This feature was only making things worse. #51253 (Kseniia Sumarokova).
- Remove ALTER support for experimental LIVE VIEW. #51287 (Alexey Milovidov).
- Decrease the default values for `http_max_field_value_size` and `http_max_field_name_size` to 128 KiB. #51163 (Mikhail f. Shiryaev).
- CGroups metrics related to CPU are replaced with one metric, `CGroupMaxCPU`, for better usability. The `Normalized` CPU usage metrics will be normalized to CGroups limits instead of the total number of CPUs when they are set. This closes #50836. #50835 (Alexey Milovidov).
New Feature
- The function `transform` as well as `CASE` with value matching started to support all data types. This closes #29730. This closes #32387. This closes #50827. This closes #31336. This closes #40493. #51351 (Alexey Milovidov).
- Added option `--rename_files_after_processing <pattern>`. This closes #34207. #49626 (alekseygolub).
- Add support for the `TRUNCATE` modifier in the `INTO OUTFILE` clause. Suggest using `APPEND` or `TRUNCATE` for `INTO OUTFILE` when the file exists. #50950 (alekar).
- Add table engine `Redis` and table function `redis`. It allows querying external Redis servers. #50150 (JackyWoo).
- Allow to skip empty files in file/s3/url/hdfs table functions using settings `s3_skip_empty_files`, `hdfs_skip_empty_files`, `engine_file_skip_empty_files`, `engine_url_skip_empty_files`. #50364 (Kruglov Pavel).
- Add a new setting named `use_mysql_types_in_show_columns` to alter the `SHOW COLUMNS` SQL statement to display MySQL equivalent types when a client is connected via the MySQL compatibility port. #49577 (Thomas Panetti).
- Clickhouse-client can now be called with a connection string instead of "--host", "--port", "--user" etc. #50689 (Alexey Gerasimchuck).
- Add setting `session_timezone`; it is used as the default timezone for a session when not explicitly specified (see the sketch after this list). #44149 (Andrey Zvonov).
- Codec DEFLATE_QPL is now controlled via server setting "enable_deflate_qpl_codec" (default: false) instead of setting "allow_experimental_codecs". This marks DEFLATE_QPL non-experimental. #50775 (Robert Schulze).
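A minimal, hedged illustration of the `session_timezone` setting from this list; the timezone value is a placeholder.

```sql
-- Use Europe/Berlin as the implicit timezone for this session
SET session_timezone = 'Europe/Berlin';

SELECT now();        -- rendered in the session timezone
SELECT timeZone();   -- should report the session timezone
```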
Performance Improvement
- Improved scheduling of merge selecting and cleanup tasks in `ReplicatedMergeTree`. The tasks will not be executed too frequently when there's nothing to merge or clean up. Added settings `max_merge_selecting_sleep_ms`, `merge_selecting_sleep_slowdown_factor`, `max_cleanup_delay_period` and `cleanup_thread_preferred_points_per_iteration`. It should close #31919. #50107 (Alexander Tokmakov).
- Make filter push down through cross join. #50605 (Han Fei).
- Improve performance with enabled QueryProfiler using thread-local timer_id instead of global object. #48778 (Jiebin Sun).
- Rewrite CapnProto input/output format to improve its performance. Map column names and CapnProto fields case insensitive, fix reading/writing of nested structure fields. #49752 (Kruglov Pavel).
- Optimize parquet write performance for parallel threads. #50102 (Hongbin Ma).
- Disable `parallelize_output_from_storages` for processing MATERIALIZED VIEWs and storages with one block only. #50214 (Azat Khuzhin).
- Merge PR #46558. Avoid block permutation during sort if the block is already sorted. #50697 (Alexey Milovidov, Maksim Kita).
- Make multiple list requests to ZooKeeper in parallel to speed up reading from system.zookeeper table. #51042 (Alexander Gololobov).
- Speedup initialization of DateTime lookup tables for time zones. This should reduce startup/connect time of clickhouse-client especially in debug build as it is rather heavy. #51347 (Alexander Gololobov).
- Fix data lakes slowness because of synchronous head requests. (Related to Iceberg/Deltalake/Hudi being slow with a lot of files). #50976 (Kseniia Sumarokova).
- Do not read all the columns from right GLOBAL JOIN table. #50721 (Nikolai Kochetov).
Experimental Feature
- Support parallel replicas with the analyzer. #50441 (Raúl Marín).
- Add random sleep before large merges/mutations execution to split load more evenly between replicas in case of zero-copy replication. #51282 (alesapin).
- Do not replicate `ALTER PARTITION` queries and mutations through a `Replicated` database if it has only one shard and the underlying table is `ReplicatedMergeTree`. #51049 (Alexander Tokmakov).
Improvement
- Relax the thresholds for "too many parts" to be more modern. Return the backpressure during long-running insert queries. #50856 (Alexey Milovidov).
- Allow to cast IPv6 to IPv4 address for CIDR ::ffff:0:0/96 (IPv4-mapped addresses). #49759 (Yakov Olkhovskiy).
- Update MongoDB protocol to support MongoDB 5.1 version and newer. Support for the versions with the old protocol (<3.6) is preserved. Closes #45621, #49879. #50061 (Nikolay Degterinsky).
- Add setting `input_format_max_bytes_to_read_for_schema_inference` to limit the number of bytes to read in schema inference. Closes #50577. #50592 (Kruglov Pavel).
- Respect setting `input_format_null_as_default` in schema inference. #50602 (Kruglov Pavel).
- Allow to skip trailing empty lines in CSV/TSV/CustomSeparated formats via settings `input_format_csv_skip_trailing_empty_lines`, `input_format_tsv_skip_trailing_empty_lines` and `input_format_custom_skip_trailing_empty_lines` (disabled by default; see the sketch after this list). Closes #49315. #50635 (Kruglov Pavel).
- Functions "toDateOrDefault|OrNull" and "accurateCast[OrDefault|OrNull]" now correctly parse numeric arguments. #50709 (Dmitry Kardymon).
- Support CSV with whitespace or `\t` field delimiters; these delimiters are supported in Spark. #50712 (KevinyhZou).
- Settings `number_of_mutations_to_delay` and `number_of_mutations_to_throw` are enabled by default now with values 500 and 1000 respectively. #50726 (Anton Popov).
- The dashboard correctly shows missing values. This closes #50831. #50832 (Alexey Milovidov).
- Added the possibility to use date and time arguments in the syslog timestamp format in functions `parseDateTimeBestEffort*` and `parseDateTime64BestEffort*`. #50925 (Victor Krasnov).
- Command line parameter "--password" in clickhouse-client can now be specified only once. #50966 (Alexey Gerasimchuck).
- Use `hash_of_all_files` from `system.parts` to check the identity of parts during on-cluster backups. #50997 (Vitaly Baranov).
- In the system table `zookeeper_connection`, the `connected_time` column identifies the time when the connection was established (standard format), and `session_uptime_elapsed_seconds` is added, which shows the duration of the established connection session (in seconds). #51026 (郭小龙).
- Improve the progress bar for file/s3/hdfs/url table functions by using chunk size from source data and using incremental total size counting in each thread. Fix the progress bar for *Cluster functions. This closes #47250. #51088 (Kruglov Pavel).
- Add total_bytes_to_read to the Progress packet in TCP protocol for better Progress bar. #51158 (Kruglov Pavel).
- Better checking of data parts on disks with filesystem cache. #51164 (Anton Popov).
- Fix sometimes incorrect current_elements_num in fs cache. #51242 (Kseniia Sumarokova).
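A hedged sketch of the trailing-empty-lines setting from this list; 'data.csv' and the column list are placeholders.

```sql
-- Ignore empty lines at the end of a CSV file when reading it
SET input_format_csv_skip_trailing_empty_lines = 1;

SELECT *
FROM file('data.csv', 'CSV', 'id UInt32, name String');
```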
Build/Testing/Packaging Improvement
- Add embedded keeper-client to standalone keeper binary. #50964 (pufit).
- Actual LZ4 version is used now. #50621 (Nikita Taranov).
- ClickHouse server will print the list of changed settings on fatal errors. This closes #51137. #51138 (Alexey Milovidov).
- Allow building ClickHouse with clang-17. #51300 (Alexey Milovidov).
- SQLancer check is considered stable as bugs that were triggered by it are fixed. Now failures of SQLancer check will be reported as failed check status. #51340 (Ilya Yatsishin).
- Split the huge `RUN` in the Dockerfile into smaller conditional ones. Install the necessary tools on demand in the same `RUN` layer, and remove them after that. Upgrade the OS only once at the beginning. Use a modern way to check the signed repository. Downgrade the base repo to ubuntu:20.04 to address the issues on older docker versions. Upgrade golang version to address golang vulnerabilities. #51504 (Mikhail f. Shiryaev).
Bug Fix (user-visible misbehavior in an official stable release)
- Report loading status for executable dictionaries correctly #48775 (Anton Kozlov).
- Proper mutation of skip indices and projections #50104 (Amos Bird).
- Cleanup moving parts #50489 (vdimir).
- Fix backward compatibility for IP types hashing in aggregate functions #50551 (Yakov Olkhovskiy).
- Fix Log family table return wrong rows count after truncate #50585 (flynn).
- Fix bug in `uniqExact` parallel merging #50590 (Nikita Taranov).
- Revert recent grace hash join changes #50699 (vdimir).
- Query Cache: Try to fix bad cast from `ColumnConst` to `ColumnVector<char8_t>` #50704 (Robert Schulze).
- Avoid storing logs in Keeper containing unknown operations #50751 (Antonio Andelic).
- SummingMergeTree support for DateTime64 #50797 (Jordi Villar).
- Add compatibility setting for non-const timezones #50834 (Robert Schulze).
- Fix hashing of LDAP params in the cache entries #50865 (Julian Maicher).
- Fallback to parsing big integer from String instead of exception in Parquet format #50873 (Kruglov Pavel).
- Fix checking the lock file too often while writing a backup #50889 (Vitaly Baranov).
- Do not apply projection if read-in-order was enabled. #50923 (Nikolai Kochetov).
- Fix race in the Azure blob storage iterator #50936 (SmitaRKulkarni).
- Fix erroneous `sort_description` propagation in `CreatingSets` #50955 (Nikita Taranov).
- Fix Iceberg v2 optional metadata parsing #50974 (Kseniia Sumarokova).
- MaterializedMySQL: Keep parentheses for empty table overrides #50977 (Val Doroshchuk).
- Fix crash in BackupCoordinationStageSync::setError() #51012 (Vitaly Baranov).
- Fix subtly broken copy-on-write of ColumnLowCardinality dictionary #51064 (Michael Kolupaev).
- Generate safe IVs #51086 (Salvatore Mesoraca).
- Fix ineffective query cache for SELECTs with subqueries #51132 (Robert Schulze).
- Fix Set index with constant nullable comparison. #51205 (Nikolai Kochetov).
- Fix a crash in s3 and s3Cluster functions #51209 (Nikolay Degterinsky).
- Fix a crash with compiled expressions #51231 (LiuNeng).
- Fix use-after-free in StorageURL when switching URLs #51260 (Michael Kolupaev).
- Updated check for parameterized view #51272 (SmitaRKulkarni).
- Fix multiple writing of same file to backup #51299 (Vitaly Baranov).
- Fix fuzzer failure in ActionsDAG #51301 (Alexey Milovidov).
- Remove garbage from function `transform` #51350 (Alexey Milovidov).
ClickHouse release 23.5, 2023-06-08
Upgrade Notes
- Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `<merge_tree>` section of the server configuration file. Upgrade notes: If you upgrade from versions prior to 22.9, you should either upgrade all replicas at once or disable the compression before upgrade, or upgrade through an intermediate version, where the compressed marks are supported but not enabled by default, such as 23.3. #42587 (Alexey Milovidov).
- Make local object storage work consistently with s3 object storage, fix problem with append (closes #48465), make it configurable as an independent storage. The change is backward incompatible because the cache on top of local object storage is not compatible with previous versions. #48791 (Kseniia Sumarokova).
- The experimental feature "in-memory data parts" is removed. The data format is still supported, but the settings are no-op, and compact or wide parts will be used instead. This closes #45409. #49429 (Alexey Milovidov).
- Changed default values of settings `parallelize_output_from_storages` and `input_format_parquet_preserve_order`. This allows ClickHouse to reorder rows when reading from files (e.g. CSV or Parquet), greatly improving performance in many cases. To restore the old behavior of preserving order, use `parallelize_output_from_storages = 0`, `input_format_parquet_preserve_order = 1`. #49479 (Michael Kolupaev).
- Make projections production-ready. Add the `optimize_use_projections` setting to control whether the projections will be selected for SELECT queries. The setting `allow_experimental_projection_optimization` is obsolete and does nothing. #49719 (Alexey Milovidov).
- Mark `joinGet` as non-deterministic (as is `dictGet`). It allows using them in mutations without an extra setting. #49843 (Azat Khuzhin).
- Revert the "`groupArray` returns cannot be nullable" change (due to binary compatibility breakage for `groupArray`/`groupArrayLast`/`groupArraySample` over `Nullable` types, which likely will lead to `TOO_LARGE_ARRAY_SIZE` or `CANNOT_READ_ALL_DATA`). #49971 (Azat Khuzhin).
- Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. If you update from a version prior to 22.12, we recommend setting this flag to `false` until the update is finished. #50319 (Nikita Taranov).
New Feature
- Added storage engine AzureBlobStorage and azureBlobStorage table function. The supported set of features is very similar to storage/table function S3. [#50604](https://github.com/ClickHouse/ClickHouse/pull/50604) (alesapin) (SmitaRKulkarni).
- Added native ClickHouse Keeper CLI Client, it is available as `clickhouse keeper-client` #47414 (pufit).
- Add `urlCluster` table function. Refactor all *Cluster table functions to reduce code duplication. Make schema inference work for all possible *Cluster function signatures and for named collections. Closes #38499. #45427 (attack204), Pavel Kruglov.
- The query cache can now be used for production workloads. #47977 (Robert Schulze). The query cache can now support queries with totals and extremes modifier. #48853 (Robert Schulze). Make the `allow_experimental_query_cache` setting obsolete for backward compatibility. It was removed in https://github.com/ClickHouse/ClickHouse/pull/47977. #49934 (Timur Solodovnikov).
- Geographical data types (`Point`, `Ring`, `Polygon`, and `MultiPolygon`) are production-ready. #50022 (Alexey Milovidov).
- Add schema inference to PostgreSQL, MySQL, MeiliSearch, and SQLite table engines. Closes #49972. #50000 (Nikolay Degterinsky).
- Password type in queries like `CREATE USER u IDENTIFIED BY 'p'` will be automatically set according to the setting `default_password_type` in the `config.xml` on the server. Closes #42915. #44674 (Nikolay Degterinsky).
- Add bcrypt password authentication type. Closes #34599. #44905 (Nikolay Degterinsky).
- Introduces new keyword `INTO OUTFILE 'file.txt' APPEND`. #48880 (alekar).
- Added `system.zookeeper_connection` table that shows information about Keeper connections. #45245 (mateng915).
- Add new function `generateRandomStructure` that generates a random table structure. It can be used in combination with table function `generateRandom`. #47409 (Kruglov Pavel).
- Allow the use of `CASE` without an `ELSE` branch and extended `transform` to deal with more types. Also fix some issues that made transform() return incorrect results when decimal types were mixed with other numeric types. #48300 (Salvatore Mesoraca). This closes #2655. This closes #9596. This closes #38666.
- Added server-side encryption using KMS keys with S3 tables, and the `header` setting with S3 disks. Closes #48723. #48724 (Johann Gan).
- Add MemoryTracker for the background tasks (merges and mutation). Introduces `merges_mutations_memory_usage_soft_limit` and `merges_mutations_memory_usage_to_ram_ratio` settings that represent the soft memory limit for merges and mutations. If this limit is reached, ClickHouse won't schedule new merge or mutation tasks. Also a `MergesMutationsMemoryTracking` metric is introduced to allow observing the current memory usage of background tasks. Resubmit #46089. Closes #48774. #48787 (Dmitry Novik).
- Function `dotProduct` works for arrays. #49050 (FFFFFFFHHHHHHH).
- Support statement `SHOW INDEX` to improve compatibility with MySQL. #49158 (Robert Schulze).
- Add virtual columns `_file` and `_path` support to table function `url`. Improve the error message for table function `url`. Resolves #49231, resolves #49232. #49356 (Ziyi Tan).
- Adding the `grants` field in the users.xml file, which allows specifying grants for users. #49381 (pufit).
- Support full/right join by using the grace hash join algorithm. #49483 (lgbo).
- `WITH FILL` modifier groups filling by sorting prefix. Controlled by the `use_with_fill_by_sorting_prefix` setting (enabled by default). Related to #33203#issuecomment-1418736794. #49503 (Igor Nikonov).
- Clickhouse-client now accepts queries after "--multiquery" when "--query" (or "-q") is absent. Example: clickhouse-client --multiquery "select 1; select 2;". #49870 (Alexey Gerasimchuk).
- Add separate `handshake_timeout` for receiving the Hello packet from a replica. Closes #48854. #49948 (Kruglov Pavel).
- Added a function "space" which repeats a space as many times as specified. #50103 (Robert Schulze).
- Added --input_format_csv_trim_whitespaces option. #50215 (Alexey Gerasimchuk).
- Allow the `dictGetAll` function for regexp tree dictionaries to return values from multiple matches as arrays. Closes #50254. #50255 (Johann Gan).
- Added `toLastDayOfWeek` function to round a date or a date with time up to the nearest Saturday or Sunday (see the sketch after this list). #50315 (Victor Krasnov).
- Ability to ignore a skip index by specifying `ignore_data_skipping_indices`. #50329 (Boris Kuschel).
- Add `system.user_processes` table and `SHOW USER PROCESSES` query to show memory info and ProfileEvents on the user level. #50492 (János Benjamin Antal).
- Add server and format settings `display_secrets_in_show_and_select` for displaying secrets of tables, databases, table functions, and dictionaries. Add privilege `displaySecretsInShowAndSelect` controlling which users can view secrets. #46528 (Mike Kot).
- Allow to set up a ROW POLICY for all tables that belong to a DATABASE. #47640 (Ilya Golshtein).
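A small, hedged illustration of `toLastDayOfWeek` from this list; the date literal is arbitrary, and the exact result depends on the week mode in use.

```sql
-- Round a date up to the end of its week (nearest Saturday or Sunday, depending on the week mode)
SELECT toLastDayOfWeek(toDate('2023-05-03')) AS last_day;
```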
Performance Improvement
- Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the `compress_marks` and `compress_primary_key` settings in the `<merge_tree>` section of the server configuration file. #42587 (Alexey Milovidov).
- New setting s3_max_inflight_parts_for_one_file sets the limit of concurrently loaded parts with multipart upload requests in the scope of one file. #49961 (Sema Checherinda).
- When reading from multiple files reduce parallel parsing threads for each file. Resolves #42192. #46661 (SmitaRKulkarni).
- Use aggregate projection only if it reads fewer granules than normal reading. It should help in case if query hits the PK of the table, but not the projection. Fixes #49150. #49417 (Nikolai Kochetov).
- Do not store blocks in `ANY` hash join if nothing is inserted. #48633 (vdimir).
- Fixes aggregate combinator `-If` when JIT compiled, and enable JIT compilation for aggregate functions. Closes #48120. #49083 (Igor Nikonov).
- For reading from remote tables we use smaller tasks (instead of reading the whole part) to make task stealing work: task size is determined by the size of columns to read; always use 1 MB buffers for reading from S3; boundaries of cache segments are aligned to 1 MB so they have a decent size even with small tasks. It also should prevent fragmentation. #49287 (Nikita Taranov).
- Introduced settings: `merge_max_block_size_bytes` to limit the amount of memory used for background operations, and `vertical_merge_algorithm_min_bytes_to_activate` to add another condition to activate vertical merges. #49313 (Nikita Mikhaylov).
- Default size of a read buffer for reading from the local filesystem changed to a slightly better value. Also two new settings are introduced: `max_read_buffer_size_local_fs` and `max_read_buffer_size_remote_fs`. #49321 (Nikita Taranov).
- Improve memory usage and speed of `SPARSE_HASHED`/`HASHED` dictionaries (e.g. `SPARSE_HASHED` now eats 2.6x less memory, and is ~2x faster). #49380 (Azat Khuzhin).
- Optimize the `system.query_log` and `system.query_thread_log` tables by applying `LowCardinality` when appropriate. The queries over these tables will be faster. #49530 (Alexey Milovidov).
- Better performance when reading local `Parquet` files (through parallel reading). #49539 (Michael Kolupaev).
- Improve the performance of `RIGHT/FULL JOIN` by up to 2 times in certain scenarios, especially when joining a small left table with a large right table. #49585 (lgbo).
- Improve performance of BLAKE3 by 11% by enabling LTO for Rust. #49600 (Azat Khuzhin). Now it is on par with C++.
- Optimize the structure of the `system.opentelemetry_span_log`. Use `LowCardinality` where appropriate. Although this table is generally stupid (it is using the Map data type even for common attributes), it will be slightly better. #49647 (Alexey Milovidov).
- Try to reserve the hash table's size in `grace_hash` join. #49816 (lgbo).
- Parallel merge of `uniqExactIf` states. Closes #49885. #50285 (flynn).
- Keeper improvement: add `CheckNotExists` request to Keeper, which allows to improve the performance of Replicated tables. #48897 (Antonio Andelic).
- Keeper performance improvements: avoid serializing the same request twice while processing. Cache deserialization results of large requests. Controlled by new coordination setting `min_request_size_for_cache`. #49004 (Antonio Andelic).
- Reduced the number of `List` ZooKeeper requests when selecting parts to merge and a lot of partitions do not have anything to merge. #49637 (Alexander Tokmakov).
- Rework locking in the FS cache #44985 (Kseniia Sumarokova).
- Disable pure parallel replicas if trivial count optimization is possible. #50594 (Raúl Marín).
- Don't send head requests for all keys in Iceberg schema inference, only for keys that are used for reading data. #50203 (Kruglov Pavel).
- Setting `enable_memory_bound_merging_of_aggregation_results` is enabled by default. #50319 (Nikita Taranov).
Experimental Feature
- `DEFLATE_QPL` codec: lower the minimum SIMD version to SSE 4.2. Doc change in QPL - Intel® QPL relies on a run-time kernels dispatcher and cpuid check to choose the best available implementation (sse/avx2/avx512) - restructured the cmakefile for the QPL build in ClickHouse to align with the latest upstream QPL. #49811 (jasperzhu).
- Add initial support to do JOINs with pure parallel replicas. #49544 (Raúl Marín).
- More parallelism on `Outdated` parts removal with "zero-copy replication". #49630 (Alexander Tokmakov).
- Parallel Replicas: 1) Fixed an error `NOT_FOUND_COLUMN_IN_BLOCK` in case of using parallel replicas with non-replicated storage with the setting `parallel_replicas_for_non_replicated_merge_tree` disabled. 2) Now `allow_experimental_parallel_reading_from_replicas` has 3 possible values - 0, 1 and 2: 0 - disabled, 1 - enabled, silently disable them in case of failure (in case of FINAL or JOIN), 2 - enabled, throw an exception in case of failure. 3) If the FINAL modifier is used in a SELECT query and parallel replicas are enabled, ClickHouse will try to disable them if `allow_experimental_parallel_reading_from_replicas` is set to 1 and throw an exception otherwise. #50195 (Nikita Mikhaylov).
- When parallel replicas are enabled they will always skip unavailable servers (the behavior is controlled by the setting `skip_unavailable_shards`, enabled by default and can only be disabled). This closes: #48565. #50293 (Nikita Mikhaylov).
Improvement
- The `BACKUP` command will not decrypt data from encrypted disks while making a backup. Instead the data will be stored in a backup in encrypted form. Such backups can be restored only to an encrypted disk with the same (or extended) list of encryption keys. #48896 (Vitaly Baranov).
- Added possibility to use temporary tables in the FROM part of ATTACH PARTITION FROM and REPLACE PARTITION FROM. #49436 (Roman Vasin).
- Added setting `async_insert` for `MergeTree` tables. It has the same meaning as the query-level setting `async_insert` and enables asynchronous inserts for a specific table. Note: it doesn't take effect for insert queries from `clickhouse-client`, use the query-level setting in that case. #49122 (Anton Popov).
- Add support for size suffixes in quota creation statement parameters. #49087 (Eridanus).
- Extend `first_value` and `last_value` to accept NULL. #46467 (lgbo).
- Add aliases `str_to_map` and `mapFromString` for `extractKeyValuePairs`. Closes https://github.com/clickhouse/clickhouse/issues/47185. #49466 (flynn).
- Add support for CGroup version 2 for asynchronous metrics about the memory usage and availability. This closes #37983. #45999 (sichenzhao).
- Cluster table functions should always skip unavailable shards. close #46314. #46765 (zk_kiger).
- Allow CSV file to contain empty columns in its header. #47496 (你不要过来啊).
- Add Google Cloud Storage S3-compatible table function `gcs`. Like the `oss` and `cosn` functions, it is just an alias over the `s3` table function, and it does not bring any new features. #47815 (Kuba Kaflik).
- Add ability to use strict parts size for S3 (compatibility with CloudFlare R2 S3 Storage). #48492 (Azat Khuzhin).
- Added new columns with info about `Replicated` database replicas to `system.clusters`: `database_shard_name`, `database_replica_name`, `is_active`. Added an optional `FROM SHARD` clause to the `SYSTEM DROP DATABASE REPLICA` query. #48548 (Alexander Tokmakov).
- Add a new column `zookeeper_name` in system.replicas, to indicate on which (auxiliary) zookeeper cluster the replicated table's metadata is stored. #48549 (cangyin).
- The `IN` operator supports the comparison of `Date` and `Date32`. Closes #48736. #48806 (flynn).
- Support for erasure codes in `HDFS`, author: @M1eyu2018, @tomscut. #48833 (M1eyu).
- Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters, may close #48931. #48932 (wangxiaobo).
- Add Array data type to MongoDB. Closes #48598. #48983 (Nikolay Degterinsky).
- Support storing
Interval
data types in tables. #49085 (larryluogit). - Allow using
ntile
window function without explicit window frame definition:ntile(3) OVER (ORDER BY a)
, close #46763. #49093 (vdimir). - Added settings (
number_of_mutations_to_delay
,number_of_mutations_to_throw
) to delay or throwALTER
queries that create mutations (ALTER UPDATE
,ALTER DELETE
,ALTER MODIFY COLUMN
, ...) in case when table already has a lot of unfinished mutations. #49117 (Anton Popov). - Catch exception from
create_directories
in filesystem cache. #49203 (Kseniia Sumarokova). - Copies embedded examples to a new field
example
insystem.functions
to supplement the fielddescription
. #49222 (Dan Roscigno). - Enable connection options for the MongoDB dictionary. Example:
xml <source> <mongodb> <host>localhost</host> <port>27017</port> <user></user> <password></password> <db>test</db> <collection>dictionary_source</collection> <options>ssl=true</options> </mongodb> </source>
### Documentation entry for user-facing changes. #49225 (MikhailBurdukov). - Added an alias
asymptotic
forasymp
computational method forkolmogorovSmirnovTest
. Improved documentation. #49286 (Nikita Mikhaylov). - Aggregation function groupBitAnd/Or/Xor now work on signed integer data. This makes them consistent with the behavior of scalar functions bitAnd/Or/Xor. #49292 (exmy).
- Split function-documentation into more fine-granular fields. #49300 (Robert Schulze).
- Use multiple threads shared between all tables within a server to load outdated data parts. The the size of the pool and its queue is controlled by
max_outdated_parts_loading_thread_pool_size
andoutdated_part_loading_thread_pool_queue_size
settings. #49317 (Nikita Mikhaylov). - Don't overestimate the size of processed data for
LowCardinality
columns when they share dictionaries between blocks. This closes #49322. See also #48745. #49323 (Alexey Milovidov). - Parquet writer now uses reasonable row group size when invoked through
OUTFILE
. #49325 (Michael Kolupaev). - Allow restricted keywords like
ARRAY
as an alias if the alias is quoted. Closes #49324. #49360 (Nikolay Degterinsky). - Data parts loading and deletion jobs were moved to shared server-wide pools instead of per-table pools. Pools sizes are controlled via settings
max_active_parts_loading_thread_pool_size
,max_outdated_parts_loading_thread_pool_size
andmax_parts_cleaning_thread_pool_size
in top-level config. Table-level settingsmax_part_loading_threads
andmax_part_removal_threads
became obsolete. #49474 (Nikita Mikhaylov). - Allow
?password=pass
in URL of the Play UI. Password is replaced in browser history. #49505 (Mike Kot). - Allow reading zero-size objects from remote filesystems. (because empty files are not backup'd, so we might end up with zero blobs in metadata file). Closes #49480. #49519 (Kseniia Sumarokova).
- Attach thread MemoryTracker to
total_memory_tracker
afterThreadGroup
detached. #49527 (Dmitry Novik). - Fix parameterized views when a query parameter is used multiple times in the query. #49556 (Azat Khuzhin).
- Release memory allocated for the last sent ProfileEvents snapshot in the context of a query. Followup #47564. #49561 (Dmitry Novik).
- Function "makeDate" now provides a MySQL-compatible overload (year & day of the year argument). #49603 (Robert Schulze).
- Support the `dictionary` table function for `RegExpTreeDictionary`. #49666 (Han Fei).
- Added a weighted fair IO scheduling policy. Added a dynamic resource manager, which allows the IO scheduling hierarchy to be updated at runtime without server restarts. #49671 (Sergei Trifonov).
- Add a compose request after multipart upload to GCS. This enables the usage of the copy operation on objects uploaded with multipart upload. It's recommended to set `s3_strict_upload_part_size` to some value because the compose request can fail on objects created with parts of different sizes. #49693 (Antonio Andelic).
- For the `extractKeyValuePairs` function: improve the "best-effort" parsing logic to accept `key_value_delimiter` as a valid part of the value. This also simplifies branching and might even speed up things a bit. #49760 (Arthur Passos).
- Add an `initial_query_id` field to system.processors_profile_log #49777 (helifu).
- System log tables can now have custom sorting keys. #49778 (helifu).
- A new field `partitions` in `system.query_log` is used to indicate which partitions are participating in the calculation. #49779 (helifu).
- Added an `enable_the_endpoint_id_with_zookeeper_name_prefix` setting for `ReplicatedMergeTree` (disabled by default). When enabled, it adds the ZooKeeper cluster name to the table's interserver communication endpoint. It avoids `Duplicate interserver IO endpoint` errors when having replicated tables with the same path but different auxiliary ZooKeepers. #49780 (helifu).
- Add query parameters to `clickhouse-local`. Closes #46561. #49785 (Nikolay Degterinsky).
- Allow loading dictionaries and functions from YAML by default. In previous versions, it required editing the `dictionaries_config` or `user_defined_executable_functions_config` in the configuration file, as they expected `*.xml` files. #49812 (Alexey Milovidov).
- The Kafka table engine now allows using alias columns. #49824 (Aleksandr Musorin).
- Add a setting to limit the max number of pairs produced by `extractKeyValuePairs`, a safeguard to avoid using way too much memory. #49836 (Arthur Passos).
- Add support for the (unusual) case where the arguments in the `IN` operator are single-element tuples. #49844 (MikhailBurdukov).
- The `bitHammingDistance` function supports the `String` and `FixedString` data types. Closes #48827. #49858 (flynn).
- Fix timeout resetting errors in the client on OS X. #49863 (alekar).
- Add support for big integers, such as UInt128, Int128, UInt256, and Int256 in the function `bitCount`. This enables Hamming distance over large bit masks for AI applications. #49867 (Alexey Milovidov).
- Fingerprints are used instead of key IDs in encrypted disks. This simplifies the configuration of encrypted disks. #49882 (Vitaly Baranov).
- Add UUID data type to PostgreSQL. Closes #49739. #49894 (Nikolay Degterinsky).
- The function `toUnixTimestamp` now accepts `Date` and `Date32` arguments. #49989 (Victor Krasnov).
- Charge only server memory for dictionaries. #49995 (Azat Khuzhin).
- The server will allow using the `SQL_*` settings such as `SQL_AUTO_IS_NULL` as no-ops for MySQL compatibility. This closes #49927. #50013 (Alexey Milovidov).
- Preserve initial_query_id for ON CLUSTER queries, which is useful for introspection (under `distributed_ddl_entry_format_version=5`). #50015 (Azat Khuzhin).
- Preserve backward compatibility for renamed settings by using aliases (`allow_experimental_projection_optimization` for `optimize_use_projections`, `allow_experimental_lightweight_delete` for `enable_lightweight_delete`). #50044 (Azat Khuzhin).
- Support passing an FQDN through the setting my_hostname to register a cluster node in Keeper. Add the setting invisible to support multiple compute groups. A compute group, as a cluster, is invisible to other compute groups. #50186 (Yangkuan Liu).
- Fix PostgreSQL reading all the data even though `LIMIT n` could be specified. #50187 (Kseniia Sumarokova).
- Add new profile events for queries with subqueries (`QueriesWithSubqueries`/`SelectQueriesWithSubqueries`/`InsertQueriesWithSubqueries`). #50204 (Azat Khuzhin).
- Add the roles field in the users.xml file, which allows specifying roles with grants via a config file. #50278 (pufit).
- Report `CGroupCpuCfsPeriod` and `CGroupCpuCfsQuota` in AsynchronousMetrics. Respect cgroup v2 memory limits during server startup. #50379 (alekar).
- Add a signal handler for SIGQUIT to work the same way as SIGINT. Closes #50298. #50435 (Nikolay Degterinsky).
- In case JSON parsing fails due to the large size of the object, output the last position to allow debugging. #50474 (Valentin Alexeev).
- Support decimals with non-fixed size. Closes #49130. #50586 (Kruglov Pavel).
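As a small, hedged illustration of the `ntile` change mentioned earlier in this section; the table `t` and column `a` are made up for the example.

```sql
-- Hypothetical table, only for illustration.
CREATE TABLE t (a UInt32) ENGINE = Memory;
INSERT INTO t SELECT number FROM numbers(6);

-- ntile can now be used without an explicit window frame definition.
SELECT a, ntile(3) OVER (ORDER BY a) AS bucket FROM t;
```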
Build/Testing/Packaging Improvement
- New and improved `keeper-bench`. Everything can be customized from a YAML/XML file: request generators (each type of request generator can have a specific set of fields); multi requests can be generated just by doing the same under a `multi` key; for each request or subrequest in multi, a `weight` field can be defined to control the distribution; trees that need to be set up for a test run can be defined; hosts can be defined with all timeouts customizable, and it's possible to control how many sessions to generate for each host; integers defined with `min_value` and `max_value` fields are random number generators. #48547 (Antonio Andelic).
- io_uring is not supported on macOS; don't choose it when running tests locally to avoid occasional failures. #49250 (Frank Chen).
- Support named fault injection for testing. #49361 (Han Fei).
- Allow running ClickHouse in an OS where the `prctl` (process control) syscall is not available, such as AWS Lambda. #49538 (Alexey Milovidov).
- Fixed the issue of a build conflict between contrib/isa-l and isa-l in qpl #49296. #49584 (jasperzhu).
- Utilities are now only built if explicitly requested ("-DENABLE_UTILS=1") instead of by default; this reduces link times in typical development builds. #49620 (Robert Schulze).
- Pull build description of idxd-config into a separate CMake file to avoid accidental removal in future. #49651 (jasperzhu).
- Add CI check with an enabled analyzer in the master. Follow-up #49562. #49668 (Dmitry Novik).
- Switch to LLVM/clang 16. #49678 (Azat Khuzhin).
- Allow building ClickHouse with clang-17. #49851 (Alexey Milovidov). #50410 (Alexey Milovidov).
- ClickHouse is now easier to integrate into other CMake projects. #49991 (Amos Bird). (Which is strongly discouraged - Alexey Milovidov).
- Fix strange additional QEMU logging after #47151, see https://s3.amazonaws.com/clickhouse-test-reports/50078/a4743996ee4f3583884d07bcd6501df0cfdaa346/stateless_tests__release__databasereplicated__[3_4].html. #50442 (Mikhail f. Shiryaev).
- ClickHouse can work on Linux RISC-V 6.1.22. This closes #50456. #50457 (Alexey Milovidov).
- Bump internal protobuf to v3.18 (fixes bogus CVE-2022-1941). #50400 (Robert Schulze).
- Bump internal libxml2 to v2.10.4 (fixes bogus CVE-2023-28484 and bogus CVE-2023-29469). #50402 (Robert Schulze).
- Bump c-ares to v1.19.1 (bogus CVE-2023-32067, bogus CVE-2023-31130, bogus CVE-2023-31147). #50403 (Robert Schulze).
- Fix bogus CVE-2022-2469 in libgsasl. #50404 (Robert Schulze).
Bug Fix (user-visible misbehavior in an official stable release)
- ActionsDAG: fix wrong optimization #47584 (Salvatore Mesoraca).
- Correctly handle concurrent snapshots in Keeper #48466 (Antonio Andelic).
- MergeTreeMarksLoader holds DataPart instead of DataPartStorage #48515 (SmitaRKulkarni).
- Sequence state fix #48603 (Ilya Golshtein).
- Backup/Restore: concurrency check on previous failures #48726 (SmitaRKulkarni).
- Fix: attaching a table with a non-existent ZK path does not increase the ReadonlyReplica metric #48954 (wangxiaobo).
- Fix possible terminate called for uncaught exception in some places #49112 (Kruglov Pavel).
- Fix key not found error for queries with multiple StorageJoin #49137 (vdimir).
- Fix wrong query result when using nullable primary key #49172 (Duc Canh Le).
- Fix reinterpretAs*() on big endian machines #49198 (Suzy Wang).
- (Experimental zero-copy replication) Lock zero copy parts more atomically #49211 (alesapin).
- Fix race on Outdated parts loading #49223 (Alexander Tokmakov).
- Fix wrong answer when all key values are NULL and GROUP BY uses ROLLUP #49282 (Shuai li).
- Fix calculating load_factor for HASHED dictionaries with SHARDS #49319 (Azat Khuzhin).
- Disallow configuring compression CODECs for alias columns #49363 (Timur Solodovnikov).
- Fix bug in removal of existing part directory #49365 (alesapin).
- Properly fix GCS when HMAC is used #49390 (Antonio Andelic).
- Fix fuzz bug when subquery set is not built when reading from remote() #49425 (Alexander Gololobov).
- Invert `shutdown_wait_unfinished_queries` #49427 (Konstantin Bogdanov).
- (Experimental zero-copy replication) Fix another zero copy bug #49473 (alesapin).
- Fix postgres database setting #49481 (Mal Curtis).
- Correctly handle `s3Cluster` arguments #49490 (Antonio Andelic).
- Fix bug in TraceCollector destructor. #49508 (Yakov Olkhovskiy).
- Fix AsynchronousReadIndirectBufferFromRemoteFS breaking on short seeks #49525 (Michael Kolupaev).
- Fix dictionaries loading order #49560 (Alexander Tokmakov).
- Forbid the change of data type of Object('json') column #49563 (Nikolay Degterinsky).
- Fix stress test (Logical error: Expected 7134 >= 11030) #49623 (Kseniia Sumarokova).
- Fix bug in DISTINCT #49628 (Alexey Milovidov).
- Fix: DISTINCT in order with zero values in non-sorted columns #49636 (Igor Nikonov).
- Fix off-by-one error in big integers found by UBSan with fuzzer #49645 (Alexey Milovidov).
- Fix reading from sparse columns after restart #49660 (Anton Popov).
- Fix assert in SpanHolder::finish() with fibers #49673 (Kruglov Pavel).
- Fix short circuit functions and mutations with sparse arguments #49716 (Anton Popov).
- Fix writing appended files to incremental backups #49725 (Vitaly Baranov).
- Fix "There is no physical column _row_exists in table" error occurring during lightweight delete mutation on a table with Object column. #49737 (Alexander Gololobov).
- Fix msan issue in randomStringUTF8(uneven number) #49750 (Robert Schulze).
- Fix aggregate function kolmogorovSmirnovTest #49768 (FFFFFFFHHHHHHH).
- Fix settings aliases in native protocol #49776 (Azat Khuzhin).
- Fix `arrayMap` with an array of tuples with a single argument #49789 (Anton Popov).
- Fix per-query IO/BACKUPs throttling settings #49797 (Azat Khuzhin).
- Fix setting NULL in profile definition #49831 (Vitaly Baranov).
- Fix a bug with projections and the aggregate_functions_null_for_empty setting (for query_plan_optimize_projection) #49873 (Amos Bird).
- Fix processing pending batch for Distributed async INSERT after restart #49884 (Azat Khuzhin).
- Fix assertion in CacheMetadata::doCleanup #49914 (Kseniia Sumarokova).
- Fix `is_prefix` in OptimizeRegularExpression #49919 (Han Fei).
- Fix metrics `WriteBufferFromS3Bytes`, `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` #49930 (Aleksandr Musorin).
- Fix IPv6 encoding in protobuf #49933 (Yakov Olkhovskiy).
- Fix possible Logical error on bad Nullable parsing for text formats #49960 (Kruglov Pavel).
- Add setting output_format_parquet_compliant_nested_types to produce more compatible Parquet files #50001 (Michael Kolupaev).
- Fix logical error in stress test "Not enough space to add ..." #50021 (Kseniia Sumarokova).
- Avoid deadlock when starting a table in the attach thread of `ReplicatedMergeTree` #50026 (Antonio Andelic).
- Fix assert in SpanHolder::finish() with fibers, attempt 2 #50034 (Kruglov Pavel).
- Add proper escaping for DDL OpenTelemetry context serialization #50045 (Azat Khuzhin).
- Fix reporting broken projection parts #50052 (Amos Bird).
- JIT compilation not equals NaN fix #50056 (Maksim Kita).
- Fix crashing in case of Replicated database without arguments #50058 (Azat Khuzhin).
- Fix crash with `multiIf` and constant condition and nullable arguments #50123 (Anton Popov).
- Fix invalid index analysis for date-related keys #50153 (Amos Bird).
- Do not allow modifying ORDER BY when there are no ORDER BY columns #50154 (Han Fei).
- Fix broken index analysis when binary operator contains a null constant argument #50177 (Amos Bird).
- clickhouse-client: disallow usage of `--query` and `--queries-file` at the same time #50210 (Alexey Gerasimchuk).
- Fix UB for INTO OUTFILE extensions (APPEND / AND STDOUT) and WATCH EVENTS #50216 (Azat Khuzhin).
- Fix skipping spaces at end of row in CustomSeparatedIgnoreSpaces format #50224 (Kruglov Pavel).
- Fix iceberg metadata parsing #50232 (Kseniia Sumarokova).
- Fix nested distributed SELECT in WITH clause #50234 (Azat Khuzhin).
- Fix msan issue in keyed siphash #50245 (Robert Schulze).
- Fix bugs in Poco sockets in non-blocking mode, use true non-blocking sockets #50252 (Kruglov Pavel).
- Fix checksum calculation for backup entries #50264 (Vitaly Baranov).
- Comparison functions NaN fix #50287 (Maksim Kita).
- JIT aggregation nullable key fix #50291 (Maksim Kita).
- Fix clickhouse-local crashing when writing empty Arrow or Parquet output #50328 (Michael Kolupaev).
- Fix crash when Pool::Entry::disconnect() is called #50334 (Val Doroshchuk).
- Improved fetch part by holding directory lock longer #50339 (SmitaRKulkarni).
- Fix bitShift* functions with both constant arguments #50343 (Kruglov Pavel).
- Fix Keeper deadlock on exception when preprocessing requests. #50387 (frinkr).
- Fix hashing of const integer values #50421 (Robert Schulze).
- Fix merge_tree_min_rows_for_seek/merge_tree_min_bytes_for_seek for data skipping indexes #50432 (Azat Khuzhin).
- Limit the number of in-flight tasks for loading outdated parts #50450 (Nikita Mikhaylov).
- Keeper fix: apply uncommitted state after snapshot install #50483 (Antonio Andelic).
- Fix incorrect constant folding #50536 (Alexey Milovidov).
- Fix logical error in stress test (Not enough space to add ...) #50583 (Kseniia Sumarokova).
- Fix converting Null to LowCardinality(Nullable) in values table function #50637 (Kruglov Pavel).
- Revert invalid RegExpTreeDictionary optimization #50642 (Johann Gan).
ClickHouse release 23.4, 2023-04-26
Backward Incompatible Change
- Formatter '%M' in function formatDateTime() now prints the month name instead of the minutes. This makes the behavior consistent with MySQL. The previous behavior can be restored using setting "formatdatetime_parsedatetime_m_is_month_name = 0". #47246 (Robert Schulze).
- This change makes sense only if you are using the virtual filesystem cache. If `path` in the virtual filesystem cache configuration is not empty and is not an absolute path, then it will be put in `<clickhouse server data directory>/caches/<path_from_cache_config>`. #48784 (Kseniia Sumarokova).
- Primary/secondary indices and sorting keys with identical expressions are now rejected. This behavior can be disabled using the setting `allow_suspicious_indices`. #48536 (凌涛).
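To make the '%M' change above concrete, here is a hedged sketch; the timestamp literal is made up, and the compatibility setting name is taken from the entry above.

```sql
-- '%M' now prints the month name (MySQL-compatible behavior).
SELECT formatDateTime(toDateTime('2023-04-26 11:22:33'), '%M');

-- Restore the previous behavior (minutes) if needed.
SELECT formatDateTime(toDateTime('2023-04-26 11:22:33'), '%M')
SETTINGS formatdatetime_parsedatetime_m_is_month_name = 0;
```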
New Feature
- Support new aggregate functions `quantileGK`/`quantilesGK`, similar to approx_percentile in Spark. For the Greenwald-Khanna algorithm, refer to http://infolab.stanford.edu/~datar/courses/cs361a/papers/quantiles.pdf. #46428 (李扬).
- Add a statement `SHOW COLUMNS` which shows distilled information from system.columns. #48017 (Robert Schulze).
- Added `LIGHTWEIGHT` and `PULL` modifiers for the `SYSTEM SYNC REPLICA` query. The `LIGHTWEIGHT` version waits for fetches and drop-ranges only (merges and mutations are ignored). The `PULL` version pulls new entries from ZooKeeper and does not wait for them. Fixes #47794. #48085 (Alexander Tokmakov).
- Add the `kafkaMurmurHash` function for compatibility with the Kafka DefaultPartitioner. Closes #47834. #48185 (Nikolay Degterinsky).
- Allow to easily create a user with the same grants as the current user by using `GRANT CURRENT GRANTS`. #48262 (pufit).
- Add the statistical aggregate function `kolmogorovSmirnovTest`. Close #48228. #48325 (FFFFFFFHHHHHHH).
- Added a `lost_part_count` column to the `system.replicas` table. The column value shows the total number of lost parts in the corresponding table. The value is stored in ZooKeeper and can be used instead of the non-persistent `ReplicatedDataLoss` profile event for monitoring. #48526 (Sergei Trifonov).
- Add the `soundex` function for compatibility. Closes #39880. #48567 (FriendLey).
- Support the `Map` type for JSONExtract. #48629 (李扬).
- Add the `PrettyJSONEachRow` format to output pretty JSON with newline delimiters and 4-space indents. #48898 (Kruglov Pavel).
- Add the `ParquetMetadata` input format to read Parquet file metadata. #48911 (Kruglov Pavel).
- Add the `extractKeyValuePairs` function to extract key-value pairs from strings. Input strings might contain noise (i.e. log files / do not need to be 100% formatted in key-value-pair format), the algorithm will look for key-value pairs matching the arguments passed to the function. As of now, the function accepts the following arguments: `data_column` (mandatory), `key_value_pair_delimiter` (defaults to `:`), `pair_delimiters` (defaults to `\space \, \;`) and `quoting_character` (defaults to double quotes). #43606 (Arthur Passos). See the example at the end of this section.
- Functions replaceOne(), replaceAll(), replaceRegexpOne() and replaceRegexpAll() can now be called with non-const pattern and replacement arguments. #46589 (Robert Schulze).
- Added functions to work with columns of type `Map`: `mapConcat`, `mapSort`, `mapExists`. #48071 (Anton Popov).
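A hedged sketch of the `extractKeyValuePairs` function described above; the input string is made up for illustration and relies on the documented defaults (':' as the key-value delimiter, space/','/';' as pair delimiters, double quotes as the quoting character).

```sql
-- Returns a Map of the recognized key-value pairs, e.g.
-- {'name':'neymar','age':'31','team':'psg'}.
SELECT extractKeyValuePairs('name:neymar, age:31 team:psg') AS kv;
```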
Performance Improvement
- Reading files in `Parquet` format is now much faster. IO and decoding are parallelized (controlled by the `max_threads` setting), and only the required data ranges are read. #47964 (Michael Kolupaev).
- If we run a mutation with IN (subquery) like this: `ALTER TABLE t UPDATE col='new value' WHERE id IN (SELECT id FROM huge_table)` and the table `t` has multiple parts, then for each part a set for the subquery `SELECT id FROM huge_table` is built in memory. And if there are many parts, this might consume a lot of memory (and lead to an OOM) and CPU. The solution is to introduce a short-lived cache of sets that are currently being built by mutation tasks. If another task of the same mutation is executed concurrently, it can look up the set in the cache, wait for it to be built and reuse it. #46835 (Alexander Gololobov).
- Only check dependencies if necessary when applying `ALTER TABLE` queries. #48062 (Raúl Marín).
- Optimize the function `mapUpdate`. #48118 (Anton Popov).
- Now an internal query to the local replica is sent explicitly and data from it is received through the loopback interface. The setting `prefer_localhost_replica` is not respected for parallel replicas. This is needed for better scheduling and makes the code cleaner: the initiator is only responsible for coordinating the reading process and merging results, continuously answering requests while all the secondary queries read the data. Note: using the loopback interface is not so performant, otherwise some replicas could starve for tasks, which could lead to even slower query execution and not utilizing all possible resources. The initialization of the coordinator is now even more lazy. All incoming requests contain the information about the reading algorithm; we initialize the coordinator with it when the first request comes. If any replica decides to read with a different algorithm, an exception will be thrown and the query will be aborted. #48246 (Nikita Mikhaylov).
- Do not build a set for the right side of an `IN` clause with a subquery when it is used only for analysis of skip indexes, and they are disabled by setting (`use_skip_indexes=0`). Previously it might affect the performance of queries. #48299 (Anton Popov).
- Query processing is parallelized right after reading `FROM file(...)`. Related to #38755. #48525 (Igor Nikonov). Query processing is parallelized right after reading from any data source. Affected data sources are mostly simple or external storages like the table functions `url`, `file`. #48727 (Igor Nikonov). This is controlled by the setting `parallelize_output_from_storages`, which is not enabled by default. See the example at the end of this section.
- Lowered contention of the ThreadPool mutex (may increase performance for a huge amount of small jobs). #48750 (Sergei Trifonov).
- Reduce memory usage for multiple `ALTER DELETE` mutations. #48522 (Nikolai Kochetov).
- Remove the excessive connection attempts if the `skip_unavailable_shards` setting is enabled. #48771 (Azat Khuzhin).
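A minimal, hedged sketch of the `parallelize_output_from_storages` setting mentioned above; the file name and schema are assumptions for illustration.

```sql
-- Parallelize query processing right after reading from the file.
SELECT count()
FROM file('data.csv', 'CSV', 'id UInt64, value String')
WHERE value != ''
SETTINGS parallelize_output_from_storages = 1;
```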
Experimental Feature
- Entries in the query cache are now squashed to max_block_size and compressed. #45912 (Robert Schulze).
- It is now possible to define per-user quotas in the query cache. #48284 (Robert Schulze).
- Some fixes for parallel replicas #48433 (Nikita Mikhaylov).
- Implement zero-copy-replication (an experimental feature) on encrypted disks. #48741 (Vitaly Baranov).
Improvement
- Increase the default value for `connect_timeout_with_failover_ms` to 1000 ms (because of adding async connections in https://github.com/ClickHouse/ClickHouse/pull/47229). Closes #5188. #49009 (Kruglov Pavel).
- Several improvements around data lakes: make `Iceberg` work with non-partitioned data; support `Iceberg` format version v2 (previously only v1 was supported); support reading partitioned data for `DeltaLake`/`Hudi`; faster reading of `DeltaLake` metadata by using Delta's checkpoint files; fixed incorrect `Hudi` reads (previously it incorrectly chose which data to read and therefore was able to read correctly only small tables); made these engines pick up updates of changed data (previously the state was set on table creation); proper testing for `Iceberg`/`DeltaLake`/`Hudi` using Spark. #47307 (Kseniia Sumarokova).
- Add async connection to socket and async writing to socket. Make creating connections and sending query/external tables async across shards. Refactor code with fibers. Closes #46931. We will be able to increase `connect_timeout_with_failover_ms` by default after this PR (https://github.com/ClickHouse/ClickHouse/issues/5188). #47229 (Kruglov Pavel).
- Support config sections `keeper`/`keeper_server` as an alternative to `zookeeper`. Close #34766, #34767. #35113 (李扬).
- It is possible to set the secure flag in named_collections for a dictionary with a ClickHouse table source. Addresses #38450. #46323 (Ilya Golshtein).
- The `bitCount` function supports the `FixedString` and `String` data types. #49044 (flynn).
- Added configurable retries for all operations with [Zoo]Keeper for Backup queries. #47224 (Nikita Mikhaylov).
- Enable `use_environment_credentials` for S3 by default, so the entire provider chain is constructed by default. #47397 (Antonio Andelic).
- The JSON_VALUE function is similar to Spark's get_json_object function, which supports getting a value from a JSON string by a path like '$.key', but with some differences: 1. Spark's get_json_object returns null when the path does not exist, while JSON_VALUE returns an empty string; 2. Spark's get_json_object can return a complex value, such as a JSON object/array value, while JSON_VALUE returns an empty string. #47494 (KevinyhZou).
- For `use_structure_from_insertion_table_in_table_functions`: more flexible insert table structure propagation to table functions. Fixed an issue with name mapping and using virtual columns. No more need for the 'auto' setting. #47962 (Yakov Olkhovskiy).
- Do not continue retrying to connect to Keeper if the query is killed or over limits. #47985 (Raúl Marín).
- Support Enum output/input in `BSONEachRow`, allow all map key types and avoid extra calculations on output. #48122 (Kruglov Pavel).
- Support more ClickHouse types in the `ORC`/`Arrow`/`Parquet` formats: Enum(8|16), (U)Int(128|256), Decimal256 (for ORC), allow reading IPv4 from Int32 values (ORC outputs IPv4 as Int32, and we couldn't read it back), fix reading Nullable(IPv6) from binary data for `ORC`. #48126 (Kruglov Pavel).
- Add columns `perform_ttl_move_on_insert`, `load_balancing` to the table `system.storage_policies`, change the type of the column `volume_type` to `Enum8`. #48167 (lizhuoyu5).
- Added support for the `BACKUP ALL` command which backs up all tables and databases, including temporary and system ones. #48189 (Vitaly Baranov). See the example at the end of this section.
- The function mapFromArrays supports the `Map` type as an input. #48207 (李扬).
- The output of some SHOW PROCESSLIST is now sorted. #48241 (Robert Schulze).
- Per-query/per-server throttling for remote IO/local IO/BACKUPs (server settings: `max_remote_read_network_bandwidth_for_server`, `max_remote_write_network_bandwidth_for_server`, `max_local_read_bandwidth_for_server`, `max_local_write_bandwidth_for_server`, `max_backup_bandwidth_for_server`; query settings: `max_remote_read_network_bandwidth`, `max_remote_write_network_bandwidth`, `max_local_read_bandwidth`, `max_local_write_bandwidth`, `max_backup_bandwidth`). #48242 (Azat Khuzhin).
- Support more types in the `CapnProto` format: Map, (U)Int(128|256), Decimal(128|256). Allow integer conversions during input/output. #48257 (Kruglov Pavel).
- Don't throw CURRENT_WRITE_BUFFER_IS_EXHAUSTED for normal behaviour. #48288 (Raúl Marín).
- Add a new setting `keeper_map_strict_mode` which enforces extra guarantees on operations made on top of `KeeperMap` tables. #48293 (Antonio Andelic).
- Check that the primary key type for a simple dictionary is a native unsigned integer type. Add the setting `check_dictionary_primary_key` for compatibility (set `check_dictionary_primary_key = false` to disable checking). #48335 (lizhuoyu5).
- Don't replicate mutations for `KeeperMap` because it's unnecessary. #48354 (Antonio Andelic).
- Allow writing/reading unnamed tuples as nested Messages in Protobuf format. Tuple elements and Message fields are matched by position. #48390 (Kruglov Pavel).
- Support the `additional_table_filters` and `additional_result_filter` settings in the new planner. Also, add a documentation entry for `additional_result_filter`. #48405 (Dmitry Novik).
- `parseDateTime` now understands the format string '%f' (fractional seconds). #48420 (Robert Schulze).
- Format string "%f" in formatDateTime() now prints "000000" if the formatted value has no fractional seconds; the previous behavior (single zero) can be restored using setting "formatdatetime_f_prints_single_zero = 1". #48422 (Robert Schulze).
- Don't replicate DELETE and TRUNCATE for KeeperMap. #48434 (Antonio Andelic).
- Generate valid Decimals and Bools in generateRandom function. #48436 (Kruglov Pavel).
- Allow trailing commas in the expression list of a SELECT query, for example `SELECT a, b, c, FROM table`. Closes #37802. #48438 (Nikolay Degterinsky).
- Override the `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD` environment variables with the `--user` and `--password` client parameters. Closes #38909. #48440 (Nikolay Degterinsky).
- Added retries to loading of data parts in `MergeTree` tables in case of retryable errors. #48442 (Anton Popov).
- Add support for the `Date`, `Date32`, `DateTime`, `DateTime64` data types to the `arrayMin`, `arrayMax`, `arrayDifference` functions. Closes #21645. #48445 (Nikolay Degterinsky).
- Add support for the `{server_uuid}` macro. It is useful for identifying replicas in autoscaled clusters when new replicas are constantly added and removed at runtime. This closes #48554. #48563 (Alexey Milovidov).
- The installation script will create a hard link instead of copying if it is possible. #48578 (Alexey Milovidov).
- Support the `SHOW TABLE` syntax meaning the same as `SHOW CREATE TABLE`. Closes #48580. #48591 (flynn).
- HTTP temporary buffers now support working by evicting data from the virtual filesystem cache. #48664 (Vladimir C).
- Make schema inference work for `CREATE AS SELECT`. Closes #47599. #48679 (flynn).
- Added a `replicated_max_mutations_in_one_entry` setting for `ReplicatedMergeTree` that allows limiting the number of mutation commands per one `MUTATE_PART` entry (default is 10000). #48731 (Alexander Tokmakov).
- In AggregateFunction types, don't count unused arena bytes as `read_bytes`. #48745 (Raúl Marín).
- Fix some MySQL-related settings not being handled with the MySQL dictionary source + named collection. Closes #48402. #48759 (Kseniia Sumarokova).
- If a user set `max_single_part_upload_size` to a very large value, it could lead to a crash due to a bug in the AWS S3 SDK. This fixes #47679. #48816 (Alexey Milovidov).
- Fix data race in `RabbitMQ` (report), refactor the code. #48845 (Kseniia Sumarokova).
- Add aliases `name` and `part_name` for `system.parts` and `system.part_log`. Closes #48718. #48850 (sichenzhao).
- Functions "arrayDifferenceSupport()", "arrayCumSum()" and "arrayCumSumNonNegative()" now support input arrays of wide integer types (U)Int128/256. #48866 (cluster).
- Multi-line history in clickhouse-client is now no longer padded. This makes pasting more natural. #48870 (Joanna Hulboj).
- Implement a slight improvement for the rare case when ClickHouse is run inside LXC and LXCFS is used. LXCFS has an issue: sometimes it returns an error "Transport endpoint is not connected" on reading from a file inside `/proc`. This error was correctly logged into ClickHouse's server log. We additionally work around this issue by reopening the file. This is a minuscule change. #48922 (Real).
- Improve memory accounting for prefetches. Randomise prefetch settings in CI. #48973 (Kseniia Sumarokova).
- Correctly set headers for native copy operations on GCS. #48981 (Antonio Andelic).
- Add support for specifying setting names on the command line with dashes instead of underscores, for example, `--max-threads` instead of `--max_threads`. Additionally, support Unicode dash characters like `—` instead of `--`; this is useful when you communicate with a team in another company, and a manager from that team copy-pasted code from MS Word. #48985 (alekseygolub).
- Add fallback to password authentication when authentication with an SSL user certificate has failed. Closes #48974. #48989 (Nikolay Degterinsky).
- Improve the embedded dashboard. Close #46671. #49036 (Kevin Zhang).
- Add profile events for log messages, so you can easily see the count of log messages by severity. #49042 (Alexey Milovidov).
- In previous versions, the `LineAsString` format worked inconsistently depending on whether parallel parsing was enabled, in the presence of DOS or macOS Classic line breaks. This closes #49039. #49052 (Alexey Milovidov).
- The exception message about an unparsed query parameter will also tell the name of the parameter. Reimplement #48878. Close #48772. #49061 (Alexey Milovidov).
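A hedged sketch of the `BACKUP ALL` command referenced earlier in this section; the 'backups' disk and the archive name are assumptions and must match your server configuration.

```sql
-- Back up all databases and tables, including temporary and system ones.
BACKUP ALL TO Disk('backups', 'full_backup.zip');
```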
Build/Testing/Packaging Improvement
- Update time zones. The following were updated: Africa/Cairo, Africa/Casablanca, Africa/El_Aaiun, America/Bogota, America/Cambridge_Bay, America/Ciudad_Juarez, America/Godthab, America/Inuvik, America/Iqaluit, America/Nuuk, America/Ojinaga, America/Pangnirtung, America/Rankin_Inlet, America/Resolute, America/Whitehorse, America/Yellowknife, Asia/Gaza, Asia/Hebron, Asia/Kuala_Lumpur, Asia/Singapore, Canada/Yukon, Egypt, Europe/Kirov, Europe/Volgograd, Singapore. #48572 (Alexey Milovidov).
- Reduce the number of dependencies in the header files to speed up the build. #47984 (Dmitry Novik).
- Randomize compression of marks and indices in tests. #48286 (Alexey Milovidov).
- Bump internal ZSTD from 1.5.4 to 1.5.5. #46797 (Robert Schulze).
- Randomize vertical merges from compact to wide parts in tests. #48287 (Raúl Marín).
- Support for CRC32 checksum in HDFS. Fix performance issues. #48614 (Alexey Milovidov).
- Remove remainders of GCC support. #48671 (Robert Schulze).
- Add CI run with new analyzer infrastructure enabled. #48719 (Dmitry Novik).
Bug Fix (user-visible misbehavior in an official stable release)
- Fix system.query_views_log for MVs that are pushed from background threads #46668 (Azat Khuzhin).
- Fix several `RENAME COLUMN` bugs #46946 (alesapin).
- Fix minor highlighting issues in clickhouse-format #47610 (Natasha Murashkina).
- Fix a bug in LLVM's libc++ leading to a crash when uploading parts to S3 whose size is greater than INT_MAX #47693 (Azat Khuzhin).
- Fix overflow in the `sparkbar` function #48121 (Vladimir C).
- Fix race in S3 #48190 (Anton Popov).
- Disable JIT for aggregate functions due to inconsistent behavior #48195 (Alexey Milovidov).
- Fix alter formatting (minor) #48289 (Natasha Murashkina).
- Fix CPU usage in RabbitMQ (was worsened in 23.2 after #44404) #48311 (Kseniia Sumarokova).
- Fix crash in EXPLAIN PIPELINE for Merge over Distributed #48320 (Azat Khuzhin).
- Fix serializing LowCardinality as Arrow dictionary #48361 (Kruglov Pavel).
- Reset downloader for cache file segment in TemporaryFileStream #48386 (Vladimir C).
- Fix possible SYSTEM SYNC REPLICA stuck in case of DROP/REPLACE PARTITION #48391 (Azat Khuzhin).
- Fix a startup error when loading a distributed table that depends on a dictionary #48419 (MikhailBurdukov).
- Don't check dependencies when renaming system tables automatically #48431 (Raúl Marín).
- Update only affected rows in KeeperMap storage #48435 (Antonio Andelic).
- Fix possible segfault in the VFS cache #48469 (Kseniia Sumarokova).
- The `toTimeZone` function throws an error when no constant string is provided #48471 (Jordi Villar).
- Fix logical error with IPv4 in Protobuf, add support for Date32 #48486 (Kruglov Pavel).
- "changed" flag in system.settings was calculated incorrectly for settings with multiple values #48516 (MikhailBurdukov).
- Fix storage `Memory` with enabled compression #48517 (Anton Popov).
- Fix bracketed-paste mode messing up password input in the event of client reconnection #48528 (Michael Kolupaev).
- Fix nested map for keys of IP and UUID types #48556 (Yakov Olkhovskiy).
- Fix an uncaught exception in case of parallel loader for hashed dictionaries #48571 (Azat Khuzhin).
- The `groupArray` aggregate function correctly works for empty results over nullable types #48593 (lgbo).
- Fix a bug in Keeper when a node is sometimes not created with scheme `auth` in the ACL. #48595 (Aleksei Filatov).
- Allow IPv4 comparison operators with UInt #48611 (Yakov Olkhovskiy).
- Fix possible error from cache #48636 (Kseniia Sumarokova).
- Async inserts with empty data will no longer throw exception. #48663 (Anton Popov).
- Fix table dependencies in case of failed RENAME TABLE #48683 (Azat Khuzhin).
- If the primary key has duplicate columns (which is only possible for projections), in previous versions it might lead to a bug #48838 (Amos Bird).
- Fix for a race condition in ZooKeeper when joining send_thread/receive_thread #48849 (Alexander Gololobov).
- Fix unexpected part name error when trying to drop an ignored detached part with zero copy replication #48862 (Michael Lex).
- Fix reading a `Date32` Parquet/Arrow column into a non-`Date32` column #48864 (Kruglov Pavel).
- Fix `UNKNOWN_IDENTIFIER` error while selecting from a table with a row policy and a column with dots #48976 (Kruglov Pavel).
- Fix aggregation by empty nullable strings #48999 (LiuNeng).
ClickHouse release 23.3 LTS, 2023-03-30
Upgrade Notes
- Lightweight DELETEs are production ready and enabled by default. The `DELETE` query for MergeTree tables is now available by default (see the example at the end of this section).
- The behavior of the `*domain*RFC` and `netloc` functions is slightly changed: relaxed the set of symbols that are allowed in the URL authority for better conformance. #46841 (Azat Khuzhin).
- Prohibited creating tables based on KafkaEngine with DEFAULT/EPHEMERAL/ALIAS/MATERIALIZED statements for columns. #47138 (Aleksandr Musorin).
- An "asynchronous connection drain" feature is removed. Related settings and metrics are removed as well. It was an internal feature, so the removal should not affect users who had never heard about that feature. #47486 (Alexander Tokmakov).
- Support the 256-bit Decimal data type (more than 38 digits) in `arraySum`/`Min`/`Max`/`Avg`/`Product`, `arrayCumSum`/`CumSumNonNegative`, `arrayDifference`, array construction, the IN operator, query parameters, `groupArrayMovingSum`, statistical functions, `min`/`max`/`any`/`argMin`/`argMax`, the PostgreSQL wire protocol, the MySQL table engine and function, `sumMap`, `mapAdd`, `mapSubtract`, `arrayIntersect`. Add support for big integers in `arrayIntersect`. Statistical aggregate functions involving moments (such as `corr` or various `TTest`s) will use `Float64` as their internal representation (they were using `Decimal128` before this change, but it was pointless), and these functions can return `nan` instead of `inf` in case of infinite variance. Some functions were allowed on `Decimal256` data types but returned `Decimal128` in previous versions; now it is fixed. This closes #47569. This closes #44864. This closes #28335. #47594 (Alexey Milovidov).
- Make backup_threads/restore_threads server settings (instead of user settings). #47881 (Azat Khuzhin).
- Do not allow const and non-deterministic secondary indices #46839 (Anton Popov).
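A hedged sketch of the lightweight DELETE mentioned in the first item of this section; the table and predicate are made up for illustration.

```sql
-- Lightweight delete on a MergeTree table: rows are masked as deleted
-- instead of parts being rewritten immediately.
DELETE FROM orders WHERE status = 'cancelled';
```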
New Feature
- Add a new mode for splitting the work on replicas using the settings `parallel_replicas_custom_key` and `parallel_replicas_custom_key_filter_type`. If the cluster consists of a single shard with multiple replicas, up to `max_parallel_replicas` will be randomly picked and turned into shards. For each shard, a corresponding filter is added to the query on the initiator before being sent to the shard. If the cluster consists of multiple shards, it will behave the same as `sample_key` but with the possibility to define an arbitrary key. #45108 (Antonio Andelic).
- An option to display a partial result on cancel: added the query setting `partial_result_on_first_cancel` allowing a canceled query (e.g. due to Ctrl-C) to return a partial result. #45689 (Alexey Perevyshin).
- Added support of arbitrary table engines for temporary tables (except for Replicated and KeeperMap engines). Close #31497. #46071 (Roman Vasin).
- Add support for replication of user-defined SQL functions using centralized storage in Keeper. #46085 (Aleksei Filatov).
- Implement `system.server_settings` (similar to `system.settings`), which will contain server configurations. #46550 (pufit).
- Support for the `UNDROP TABLE` query. Closes #46811. #47241 (chen).
- Allow separate grants for named collections (e.g. to be able to give `SHOW/CREATE/ALTER/DROP named collection` access only to certain collections, instead of all at once). Closes #40894. Add a new access type `NAMED_COLLECTION_CONTROL` which is not given to the user `default` unless explicitly added to the user config (it is required to be able to do `GRANT ALL`); also `show_named_collections` is no longer obligatory to be manually specified for the user `default` to have full access rights, as was the case in 23.2. #46241 (Kseniia Sumarokova).
- Allow nested custom disks. Previously custom disks supported only a flat disk structure. #47106 (Kseniia Sumarokova).
- Introduce a function `widthBucket` (with a `WIDTH_BUCKET` alias for compatibility). #42974. #46790 (avoiderboi).
- Add new functions `parseDateTime`/`parseDateTimeInJodaSyntax` according to the specified format string. parseDateTime parses a String to DateTime in MySQL syntax, parseDateTimeInJodaSyntax parses in Joda syntax. #46815 (李扬). See the example at the end of this section.
- Use `dummy UInt8` for the default structure of the table function `null`. Closes #46930. #47006 (flynn).
- Support for a date format with a comma, like `Dec 15, 2021`, in the `parseDateTimeBestEffort` function. Closes #46816. #47071 (chen).
- Add settings `http_wait_end_of_query` and `http_response_buffer_size` that correspond to the URL params `wait_end_of_query` and `buffer_size` for the HTTP interface. This allows changing these settings in the profiles. #47108 (Vladimir C).
- Add a `system.dropped_tables` table that shows tables that were dropped from `Atomic` databases but were not completely removed yet. #47364 (chen).
- Add `INSTR` as an alias of `positionCaseInsensitive` for MySQL compatibility. Closes #47529. #47535 (flynn).
- Added the `toDecimalString` function allowing to convert numbers to string with fixed precision. #47838 (Andrey Zvonov).
- Add a merge tree setting `max_number_of_mutations_for_replica`. It limits the number of part mutations per replica to the specified amount. Zero means no limit on the number of mutations per replica (the execution can still be constrained by other settings). #48047 (Vladimir C).
- Add the Map-related function `mapFromArrays`, which allows the creation of a map from a pair of arrays. #31125 (李扬).
- Allow control of compression in Parquet/ORC/Arrow output formats, add support for more compression input formats. This closes #13541. #47114 (Kruglov Pavel).
- Add SSL User Certificate authentication to the native protocol. Closes #47077. #47596 (Nikolay Degterinsky).
- Add *OrNull() and *OrZero() variants for `parseDateTime`, add alias `str_to_date` for MySQL parity. #48000 (Robert Schulze).
- Added the operator `REGEXP` (similar to the operators "LIKE", "IN", "MOD" etc.) for better compatibility with MySQL #47869 (Robert Schulze).
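A hedged sketch of two of the new functions above (`parseDateTime` and `mapFromArrays`); the literal values are made up for illustration.

```sql
-- Parse a String to DateTime using a MySQL-style format string.
SELECT parseDateTime('2023-03-30 21:15:00', '%Y-%m-%d %H:%i:%s');

-- Build a Map from a pair of key and value arrays.
SELECT mapFromArrays(['a', 'b', 'c'], [1, 2, 3]);
```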
Performance Improvement
- Marks in memory are now compressed, using 3-6x less memory. #47290 (Michael Kolupaev).
- Backups for large numbers of files were unbelievably slow in previous versions. Not anymore. Now they are unbelievably fast. #47251 (Alexey Milovidov). Introduced a separate thread pool for backup's IO operations. This will allow scaling it independently of other pools and increase performance. #47174 (Nikita Mikhaylov). Use MultiRead request and retries for collecting metadata at the final stage of backup processing. #47243 (Nikita Mikhaylov). If a backup and restoring data are both in S3 then server-side copy should be used from now on. #47546 (Vitaly Baranov).
- Fixed excessive reading in queries with `FINAL`. #47801 (Nikita Taranov).
- The setting `max_final_threads` would be set to the number of cores at server startup (by the same algorithm as used for `max_threads`). This improves the concurrency of `final` execution on servers with a high number of CPUs. #47915 (Nikita Taranov).
- Allow executing the reading pipeline for a DIRECT dictionary with a CLICKHOUSE source in multiple threads. To enable, set `dictionary_use_async_executor=1` in the `SETTINGS` section for the source in the `CREATE DICTIONARY` statement. #47986 (Vladimir C).
- Optimize one nullable key aggregate performance. #45772 (LiuNeng).
- Implemented lowercase `tokenbf_v1` index utilization for `hasTokenOrNull`, `hasTokenCaseInsensitive` and `hasTokenCaseInsensitiveOrNull`. #46252 (ltrk2).
- Optimize the functions `position` and `LIKE` by searching the first two chars using SIMD. #46289 (Jiebin Sun).
- Optimize queries from `system.detached_parts`, which could be significantly large. Added several sources with respect to the block size limitation; in each block, an IO thread pool is used to calculate the part size, i.e. to make syscalls in parallel. #46624 (Sema Checherinda).
- Increase the default value of `max_replicated_merges_in_queue` for ReplicatedMergeTree tables from 16 to 1000. It allows faster background merge operation on clusters with a very large number of replicas, such as clusters with shared storage in ClickHouse Cloud. #47050 (Alexey Milovidov).
- Updated `clickhouse-copier` to use `GROUP BY` instead of `DISTINCT` to get the list of partitions. For large tables, this reduced the select time from over 500s to under 1s. #47386 (Clayton McClure).
- Fix performance degradation in `ASOF JOIN`. #47544 (Ongkong).
- Even more batching in Keeper. Improve performance by avoiding breaking batches on read requests. #47978 (Antonio Andelic).
- Allow PREWHERE for Merge with different DEFAULT expressions for columns. #46831 (Azat Khuzhin).
Experimental Feature
- Parallel replicas: Improved the overall performance by better utilizing the local replica, and forbid the reading with parallel replicas from non-replicated MergeTree by default. #47858 (Nikita Mikhaylov).
- Support filter push down to the left table for JOIN with `Join`, `Dictionary` and `EmbeddedRocksDB` tables if the experimental Analyzer is enabled. #47280 (Maksim Kita).
- Now ReplicatedMergeTree with zero copy replication puts less load on Keeper. #47676 (alesapin).
- Fix create materialized view with MaterializedPostgreSQL #40807 (Maksim Buren).
Improvement
- Enable `input_format_json_ignore_unknown_keys_in_named_tuple` by default. #46742 (Kruglov Pavel).
- Allow errors to be ignored while pushing to MATERIALIZED VIEW (add a new setting `materialized_views_ignore_errors`, `false` by default, but it is set to `true` for flushing logs to `system.*_log` tables unconditionally). #46658 (Azat Khuzhin).
- Track the file queue of distributed sends in memory. #45491 (Azat Khuzhin).
- Now the `X-ClickHouse-Query-Id` and `X-ClickHouse-Timezone` headers are added to responses in all queries via the HTTP protocol. Previously it was done only for `SELECT` queries. #46364 (Anton Popov).
- External tables from `MongoDB`: support for connection to a replica set via a URI with a host:port enum and support for the readPreference option in MongoDB dictionaries. Example URI: mongodb://db0.example.com:27017,db1.example.com:27017,db2.example.com:27017/?replicaSet=myRepl&readPreference=primary. #46524 (artem-yadr).
- This improvement should be invisible for users. Re-implement projection analysis on top of the query plan. Added setting `query_plan_optimize_projection=1` to switch between the old and new version. Fixes #44963. #46537 (Nikolai Kochetov).
- Use Parquet format v2 instead of v1 in the output format by default. Add setting `output_format_parquet_version` to control the Parquet version, possible values `1.0`, `2.4`, `2.6`, `2.latest` (default). #46617 (Kruglov Pavel).
- It is now possible to use the new configuration syntax to configure Kafka topics with periods (`.`) in their name. #46752 (Robert Schulze).
- Fix heuristics that check hyperscan patterns for problematic repeats. #46819 (Robert Schulze).
- Don't report ZK node exists to system.errors when a block was created concurrently by a different replica. #46820 (Raúl Marín).
- Increase the limit for opened files in `clickhouse-local`. It will be able to read from `web` tables on servers with a huge number of CPU cores. Do not back off reading from the URL table engine in case of too many opened files. This closes #46852. #46853 (Alexey Milovidov).
- Exceptions thrown when numbers cannot be parsed now have an easier-to-read exception message. #46917 (Robert Schulze).
- Added an update of `system.backups` after every processed task to track the progress of backups. #46989 (Aleksandr Musorin).
- Allow types conversion in the Native input format. Add setting `input_format_native_allow_types_conversion` that controls it (enabled by default). #46990 (Kruglov Pavel).
- Allow IPv4 in the `range` function to generate IP ranges. #46995 (Yakov Olkhovskiy).
- Improve exception message when it's impossible to move a part from one volume/disk to another. #47032 (alesapin).
- Support the `Bool` type in the `JSONType` function. Previously the `Null` type was mistakenly returned for bool values. #47046 (Anton Popov). See the example at the end of this section.
- Use the `_request_body` parameter to configure predefined HTTP queries. #47086 (Constantine Peresypkin).
- Automatic indentation in the built-in UI SQL editor when Enter is pressed. #47113 (Alexey Korepanov).
- Self-extraction with 'sudo' will attempt to set uid and gid of extracted files to running user. #47116 (Yakov Olkhovskiy).
- Previously, the `repeat` function's second argument only accepted an unsigned integer type, which meant it could not accept values such as -1. This behavior differed from that of the Spark function. In this update, the repeat function has been modified to match the behavior of the Spark function. It now accepts the same types of inputs, including negative integers. Extensive testing has been performed to verify the correctness of the updated implementation. #47134 (KevinyhZou). Note: the changelog entry was rewritten by ChatGPT.
- Remove the `::__1` part from stacktraces. Display `std::basic_string<char, ...` as `String` in stacktraces. #47171 (Mike Kot).
- Reimplement interserver mode to avoid replay attacks (note that the change is backward compatible with older servers). #47213 (Azat Khuzhin).
- Improve recognition of regular expression groups and refine the regexp_tree dictionary. #47218 (Han Fei).
- Keeper improvement: add a new 4LW `clrs` to clean resources used by Keeper (e.g. release unused memory). #47256 (Antonio Andelic).
- Add optional arguments to codecs `DoubleDelta(bytes_size)`, `Gorilla(bytes_size)`, `FPC(level, float_size)`; this allows using these codecs without a column type in `clickhouse-compressor`. Fix possible aborts and arithmetic errors in `clickhouse-compressor` with these codecs. Fixes: https://github.com/ClickHouse/ClickHouse/discussions/47262. #47271 (Kruglov Pavel).
- Add support for big int types to the `runningDifference` function. Closes #47194. #47322 (Nikolay Degterinsky).
- Add an expiration window for S3 credentials that have an expiration time to avoid `ExpiredToken` errors in some edge cases. It can be controlled with the `expiration_window_seconds` config, the default is 120 seconds. #47423 (Antonio Andelic).
- Support Decimals and Date32 in the `Avro` format. #47434 (Kruglov Pavel).
- Do not start the server if an interrupted conversion from `Ordinary` to `Atomic` was detected; print a better error message with troubleshooting instructions. #47487 (Alexander Tokmakov).
- Add a new column `kind` to `system.opentelemetry_span_log`. This column holds the value of SpanKind defined in OpenTelemetry. #47499 (Frank Chen).
- Allow reading/writing nested arrays in `Protobuf` format with only the root field name as the column name. Previously the column name should have contained all nested field names (like `a.b.c Array(Array(Array(UInt32)))`); now you can use just `a Array(Array(Array(UInt32)))`. #47650 (Kruglov Pavel).
- Added an optional `STRICT` modifier for `SYSTEM SYNC REPLICA` which makes the query wait for the replication queue to become empty (just like it worked before https://github.com/ClickHouse/ClickHouse/pull/45648). #47659 (Alexander Tokmakov).
- Improve the naming of some OpenTelemetry span logs. #47667 (Frank Chen).
- Prevent using too long chains of aggregate function combinators (they can lead to slow queries in the analysis stage). This closes #47715. #47716 (Alexey Milovidov).
- Support for subquery in parameterized views; resolves #46741 #47725 (SmitaRKulkarni).
- Fix memory leak in MySQL integration (reproduces with `connection_auto_close=1`). #47732 (Kseniia Sumarokova).
- Improved error handling in the code related to Decimal parameters, resulting in more informative error messages. Previously, when incorrect Decimal parameters were supplied, the error message generated was unclear or unhelpful. With this update, the error message printed has been fixed to provide more detailed and useful information, making it easier to identify and correct issues related to Decimal parameters. #47812 (Yu Feng). Note: this changelog entry is rewritten by ChatGPT.
- The parameter `exact_rows_before_limit` is used to make `rows_before_limit_at_least` accurately reflect the number of rows returned before the limit is reached. This pull request addresses issues encountered when the query involves distributed processing across multiple shards or sorting operations. Prior to this update, these scenarios were not functioning as intended. #47874 (Amos Bird).
- ThreadPools metrics introspection. #47880 (Azat Khuzhin).
- Add `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` profile events. #47885 (Antonio Andelic).
- Add `--link` and `--noninteractive` (`-y`) options to ClickHouse install. Closes #47750. #47887 (Nikolay Degterinsky).
- Fixed `UNKNOWN_TABLE` exception when attaching to a materialized view that has dependent tables that are not available. This might be useful when trying to restore state from a backup. #47975 (MikhailBurdukov).
- Fix the case when the (optional) path is not added to an encrypted disk configuration. #47981 (Kseniia Sumarokova).
- Support for CTE in parameterized views. Implementation: updated to allow query parameters while evaluating scalar subqueries. #48065 (SmitaRKulkarni).
- Support big integers `(U)Int128/(U)Int256`, `Map` with any key type and `DateTime64` with any precision (not only 3 and 6). #48119 (Kruglov Pavel).
- Allow skipping errors related to unknown enum values in row input formats. #48133 (Alexey Milovidov).
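A hedged sketch of the `JSONType` change referenced earlier in this section; the JSON literal is made up for illustration.

```sql
-- Boolean JSON values are now reported as 'Bool' (previously 'Null').
SELECT JSONType('{"flag": true}', 'flag');
```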
Build/Testing/Packaging Improvement
- ClickHouse now builds with `C++23`. #47424 (Robert Schulze).
- Fuzz `EXPLAIN` queries in the AST Fuzzer. #47803 #47852 (flynn).
- Split the stress test and the automated backward compatibility check (now the Upgrade check). #44879 (Kruglov Pavel).
- Updated the Ubuntu Image for Docker to calm down some bogus security reports. #46784 (Julio Jimenez). Please note that ClickHouse has no dependencies and does not require Docker.
- Adds a prompt to allow the removal of an existing `clickhouse` download when using the "curl | sh" download of ClickHouse. The prompt is "ClickHouse binary clickhouse already exists. Overwrite? [y/N]". #46859 (Dan Roscigno).
- Fix error during server startup on old distros (e.g. Amazon Linux 2) and on ARM that glibc 2.28 symbols are not found. #47008 (Robert Schulze).
- Prepare for clang 16. #47027 (Amos Bird).
- Added a CI check which ensures ClickHouse can run with an old glibc on ARM. #47063 (Robert Schulze).
- Add a style check to prevent incorrect usage of the `NDEBUG` macro. #47699 (Alexey Milovidov).
- Speed up the build a little. #47714 (Alexey Milovidov).
- Bump `vectorscan` to 5.4.9. #47955 (Robert Schulze).
- Add a unit test to assert that Apache Arrow's fatal logging does not abort. It covers the changes in ClickHouse/arrow#16. #47958 (Arthur Passos).
- Restore the ability of native macOS debug server build to start. #48050 (Robert Schulze). Note: this change is only relevant for development, as the ClickHouse official builds are done with cross-compilation.
Bug Fix (user-visible misbehavior in an official stable release)
- Fix formats parser resetting, test processing bad messages in `Kafka` #45693 (Kruglov Pavel).
- Fix data size calculation in Keeper #46086 (Antonio Andelic).
- Fixed a bug in automatic retries of the `DROP TABLE` query with `ReplicatedMergeTree` tables and `Atomic` databases. In rare cases it could lead to `Can't get data for node /zk_path/log_pointer` and `The specified key does not exist` errors if the ZooKeeper session expired during DROP and a new replicated table with the same path in ZooKeeper was created in parallel. #46384 (Alexander Tokmakov).
- Fix incorrect alias recursion while normalizing queries that prevented some queries from running. #46609 (Raúl Marín).
- Fix IPv4/IPv6 serialization/deserialization in binary formats #46616 (Kruglov Pavel).
- ActionsDAG: do not change the result of `and` during optimization #46653 (Salvatore Mesoraca).
- Improve query cancellation when a client dies #46681 (Alexander Tokmakov).
- Fix arithmetic operations in aggregate optimization #46705 (Duc Canh Le).
- Fix possible `clickhouse-local`'s abort on JSONEachRow schema inference #46731 (Kruglov Pavel).
- Fix changing an expired role #46772 (Vitaly Baranov).
- Fix combined PREWHERE column accumulation from multiple steps #46785 (Alexander Gololobov).
- Use initial range for fetching file size in HTTP read buffer. Without this change, some remote files couldn't be processed. #46824 (Antonio Andelic).
- Fix the incorrect progress bar while using the URL tables #46830 (Antonio Andelic).
- Fix MSan report in the `maxIntersections` function #46847 (Alexey Milovidov).
- Fix a bug in the `Map` data type #46856 (Alexey Milovidov).
- Fix wrong results of some LIKE searches when the LIKE pattern contains quoted non-quotable characters #46875 (Robert Schulze).
- Fix: WITH FILL would abort when the Filling Transform processes an empty block #46897 (Yakov Olkhovskiy).
- Fix date and int inference from string in JSON #46972 (Kruglov Pavel).
- Fix bug in zero-copy replication disk choice during fetch #47010 (alesapin).
- Fix a typo in systemd service definition #47051 (Palash Goel).
- Fix the NOT_IMPLEMENTED error with CROSS JOIN and algorithm = auto #47068 (Vladimir C).
- Fix the problem that a 'ReplicatedMergeTree' table failed to insert two similar pieces of data when 'part_type' is configured as 'InMemory' (an experimental feature). #47121 (liding1992).
- External dictionaries / library-bridge: Fix error "unknown library method 'extDict_libClone'" #47136 (alex filatov).
- Fix race condition in a grace hash join with limit #47153 (Vladimir C).
- Fix concrete columns PREWHERE support #47154 (Azat Khuzhin).
- Fix possible deadlock in Query Status #47161 (Kruglov Pavel).
- Forbid insert select for the same `Join` table, as it leads to a deadlock #47260 (Vladimir C).
- Skip merged partitions for `min_age_to_force_merge_seconds` merges #47303 (Antonio Andelic).
- Modify find_first_symbols so that it works as expected for find_first_not_symbols #47304 (Arthur Passos).
- Fix big numbers inference in CSV #47410 (Kruglov Pavel).
- Disable logical expression optimizer for expression with aliases. #47451 (Nikolai Kochetov).
- Fix error in `decodeURLComponent` #47457 (Alexey Milovidov).
- Fix explain graph with projection #47473 (flynn).
- Fix query parameters #47488 (Alexey Milovidov).
- Parameterized view: a bug fix. #47495 (SmitaRKulkarni).
- Fuzzer of data formats, and the corresponding fixes. #47519 (Alexey Milovidov).
- Fix monotonicity check for `DateTime64` #47526 (Antonio Andelic).
- Fix "block structure mismatch" for a Nullable LowCardinality column #47537 (Nikolai Kochetov).
- Proper fix for a bug in Apache Parquet #45878 #47538 (Kruglov Pavel).
- Fix BSONEachRow parallel parsing when the document size is invalid #47540 (Kruglov Pavel).
- Preserve error in system.distribution_queue on SYSTEM FLUSH DISTRIBUTED #47541 (Azat Khuzhin).
- Check for duplicate column in BSONEachRow format #47609 (Kruglov Pavel).
- Fix wait for zero copy lock during move #47631 (alesapin).
- Fix aggregation by partitions #47634 (Nikita Taranov).
- Fix bug in tuple-as-array serialization in BSONEachRow format #47690 (Kruglov Pavel).
- Fix crash in polygonsSymDifferenceCartesian #47702 (pufit).
- Fix reading files with zlib and gzip compression from storage File #47796 (Anton Popov).
- Improve empty query detection for PostgreSQL (for the pgx golang driver) #47854 (Azat Khuzhin).
- Fix DateTime monotonicity check for LowCardinality types #47860 (Antonio Andelic).
- Use restore_threads (not backup_threads) for RESTORE ASYNC #47861 (Azat Khuzhin).
- Fix DROP COLUMN with ReplicatedMergeTree containing projections #47883 (Antonio Andelic).
- Fix for Replicated database recovery #47901 (Alexander Tokmakov).
- Hotfix for too verbose warnings in HTTP #47903 (Alexander Tokmakov).
- Fix "Field value too long" in
catboostEvaluate
#47970 (Robert Schulze). - Fix #36971: Watchdog: exit with non-zero code if child process exits #47973 (Коренберг Марк).
- Fix for "index file
cidx
is unexpectedly long" #48010 (SmitaRKulkarni). - Fix MaterializedPostgreSQL query to get attributes (replica-identity) #48015 (Solomatov Sergei).
- parseDateTime(): Fix UB (signed integer overflow) #48019 (Robert Schulze).
- Use unique names for Records in Avro to avoid reusing its schema #48057 (Kruglov Pavel).
- Correctly set TCP/HTTP socket timeouts in Keeper #48108 (Antonio Andelic).
- Fix possible member call on null pointer in Avro format #48184 (Kruglov Pavel).
ClickHouse release 23.2, 2023-02-23
Backward Incompatible Change
- Extend function "toDayOfWeek()" (alias: "DAYOFWEEK") with a mode argument that encodes whether the week starts on Monday or Sunday and whether counting starts at 0 or 1. For consistency with other date time functions, the mode argument was inserted between the time and the time zone arguments. This breaks existing usage of the (previously undocumented) 2-argument syntax "toDayOfWeek(time, time_zone)". A fix is to rewrite the function into "toDayOfWeek(time, 0, time_zone)"; see the example after this list. #45233 (Robert Schulze).
- Rename setting max_query_cache_size to filesystem_cache_max_download_size. #45614 (Kseniia Sumarokova).
- The default user will not have permissions for access type SHOW NAMED COLLECTION by default (e.g. the default user will no longer be able to grant ALL to other users as it was before, therefore this PR is backward incompatible). #46010 (Kseniia Sumarokova).
- If the SETTINGS clause is specified before the FORMAT clause, the settings will be applied to formatting as well. #46003 (Azat Khuzhin).
- Remove support for setting materialized_postgresql_allow_automatic_update (which was turned off by default). #46106 (Kseniia Sumarokova).
- Slightly improve performance of countDigits on realistic datasets. This closed #44518. In previous versions, countDigits(0) returned 0; now it returns 1, which is more correct and follows the existing documentation. #46187 (Alexey Milovidov).
- Disallow creation of new columns compressed by a combination of codecs "Delta" or "DoubleDelta" followed by codecs "Gorilla" or "FPC". This can be bypassed using setting "allow_suspicious_codecs = true". #45652 (Robert Schulze).
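A minimal sketch of the toDayOfWeek rewrite mentioned in the first entry of this list; the timestamp and time zone are arbitrary values chosen only for illustration.

```sql
-- Before 23.2, the undocumented two-argument form was used:
--   SELECT toDayOfWeek(toDateTime('2023-02-23 12:00:00'), 'UTC');
-- After this change, pass the default mode 0 explicitly as the second argument:
SELECT toDayOfWeek(toDateTime('2023-02-23 12:00:00'), 0, 'UTC');
```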
New Feature
- Add StorageIceberg and table function iceberg to access Iceberg tables stored on S3. #45384 (flynn).
- Allow configuring storage as SETTINGS disk = '<disk_name>' (instead of storage_policy) and with explicit disk creation SETTINGS disk = disk(type=s3, ...). #41976 (Kseniia Sumarokova).
- Expose ProfileEvents counters in system.part_log. #38614 (Bharat Nallan).
- Enrichment of the existing ReplacingMergeTree engine to allow duplicates on insertion. It leverages the power of both ReplacingMergeTree and CollapsingMergeTree in one MergeTree engine. Deleted data is not returned when queried, but it is not removed from disk either. #41005 (youennL-cs).
- Add generateULID function. Closes #36536. #44662 (Nikolay Degterinsky).
- Add corrMatrix aggregate function, which calculates the correlation between each pair of columns. In addition, since the aggregate functions covarSamp and covarPop are similar to corr, covarSampMatrix and covarPopMatrix were added as well. @alexey-milovidov closes #44587. #44680 (FFFFFFFHHHHHHH).
- Introduce the arrayShuffle function for random array permutations. #45271 (Joanna Hulboj).
- Support type FIXED_SIZE_BINARY in Arrow and FIXED_LENGTH_BYTE_ARRAY in Parquet and match them to FixedString. Add settings output_format_parquet_fixed_string_as_fixed_byte_array/output_format_arrow_fixed_string_as_fixed_byte_array to control the default output type for FixedString. Closes #45326. #45340 (Kruglov Pavel).
- Add a new column last_exception_time to system.replication_queue. #45457 (Frank Chen).
- Add two new functions which allow for user-defined keys/seeds with SipHash{64,128}. #45513 (Salvatore Mesoraca).
- Allow a three-argument version for table function format. Closes #45808. #45873 (FFFFFFFHHHHHHH).
- Add JodaTime format support for 'x', 'w', 'S'. Refer to https://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html. #46073 (zk_kiger).
- Support window function ntile. (lgbo).
- Add setting final to implicitly apply the FINAL modifier to every table. #40945 (Arthur Passos).
- Added arrayPartialSort and arrayPartialReverseSort functions. #46296 (Joanna Hulboj).
- The new HTTP parameter client_protocol_version allows setting a client protocol version for HTTP responses using the Native format. #40397. #46360 (Geoff Genz).
- Add new function regexpExtract, like the Spark function REGEXP_EXTRACT, for compatibility. It is similar to the existing function extract. #46469 (李扬).
- Add new function JSONArrayLength, which returns the number of elements in the outermost JSON array. The function returns NULL if the input JSON string is invalid. #46631 (李扬). Both new functions are shown in the example after this list.
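A brief sketch of the two new functions from the last entries above, using made-up input values; expected results are shown in comments.

```sql
SELECT
    regexpExtract('100-200', '(\\d+)-(\\d+)', 1) AS first_group, -- '100'
    JSONArrayLength('[1, 2, 3]') AS array_len,                   -- 3
    JSONArrayLength('not a JSON array') AS invalid_input;        -- NULL
```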
Performance Improvement
- The introduced logic works if the PREWHERE condition is a conjunction of multiple conditions (cond1 AND cond2 AND ...). It groups conditions that require reading the same columns into steps. After each step, the corresponding part of the full condition is computed and the resulting rows may be filtered out. This allows reading fewer rows in the next steps, thus saving IO bandwidth and doing less computation. This logic is disabled by default for now. It will be enabled by default in one of the future releases once it is known to not have any regressions, so you are highly encouraged to use it for testing. It can be controlled by two settings: "enable_multiple_prewhere_read_steps" and "move_all_conditions_to_prewhere"; see the example after this list. #46140 (Alexander Gololobov).
- An option was added to aggregate partitions independently if the table partition key and the group by key are compatible. Controlled by the setting allow_aggregate_partitions_independently. Disabled by default because of limited applicability (please refer to the docs). #45364 (Nikita Taranov).
- Allow using the Vertical merge algorithm with parts in Compact format. This will allow ClickHouse server to use much less memory for background operations. This closes #46084. #45681 #46282 (Anton Popov).
- Optimize Parquet reader by using batch reader. #45878 (LiuNeng).
- Add new local_filesystem_read_method method io_uring based on the asynchronous Linux io_uring subsystem, improving read performance almost universally compared to the default pread method. #38456 (Saulius Valatka).
- Rewrite aggregate functions with an if expression as argument when logically equivalent. For example, avg(if(cond, col, null)) can be rewritten to avgIf(cond, col). It is helpful for performance. #44730 (李扬).
- Improve lower/upper function performance with AVX-512 instructions. #37894 (yaqi-zhao).
- Remove the limitation that on systems with >=32 cores and SMT disabled ClickHouse uses only half of the cores (the case when you disable Hyper Threading in BIOS). #44973 (Robert Schulze).
- Improve performance of function multiIf by columnar execution, a speedup of 2.3x. #45296 (李扬).
- Add fast path for function position when the needle is empty. #45382 (李扬).
- Enable query_plan_remove_redundant_sorting optimization by default. Optimization implemented in #45420. #45567 (Igor Nikonov).
- Increased HTTP Transfer Encoding chunk size to improve performance of large queries using the HTTP interface. #45593 (Geoff Genz).
- Fixed performance of short SELECT queries that read from tables with a large number of Array/Map/Nested columns. #45630 (Anton Popov).
- Improve performance of filtering for big integers and decimal types. #45949 (李扬).
- This change could effectively reduce the overhead of obtaining the filter from ColumnNullable(UInt8) and improve the overall query performance. To evaluate the impact of this change, we adopted TPC-H benchmark but revised the column types from non-nullable to nullable, and we measured the QPS of its queries as the performance indicator. #45962 (Zhiguo Zhou).
- Make the _part and _partition_id virtual columns be of LowCardinality(String) type. Closes #45964. #45975 (flynn).
- Improve the performance of Decimal conversion when the scale does not change. #46095 (Alexey Milovidov).
- Allow increasing prefetching for read data. #46168 (Kseniia Sumarokova).
- Rewrite arrayExists(x -> x = 1, arr) -> has(arr, 1), which improves performance by 1.34x. #46188 (李扬).
- Fix too big memory usage for vertical merges on non-remote disk. Respect max_insert_delayed_streams_for_parallel_write for the remote disk. #46275 (Nikolai Kochetov).
- Update zstd to v1.5.4. It has some minor improvements in performance and compression ratio. If you run replicas with different versions of ClickHouse you may see reasonable error messages Data after merge/mutation is not byte-identical to data on another replicas. with explanation. These messages are OK and you should not worry. #46280 (Raúl Marín).
- Fix performance degradation caused by #39737. #46309 (Alexey Milovidov).
- The replicas_status handle will answer quickly even in case of a large replication queue. #46310 (Alexey Milovidov).
- Add AVX-512 support for the aggregate function sum, unary arithmetic functions, and comparison functions. #37870 (zhao zhou).
- Rewrote the code around marks distribution and the overall coordination of the reading in order to achieve the maximum performance improvement. This closes #34527. #43772 (Nikita Mikhaylov).
- Remove redundant DISTINCT clauses in queries (subqueries). Implemented on top of query plan. It does similar optimization as optimize_duplicate_order_by_and_distinct regarding DISTINCT clauses. Can be enabled via the query_plan_remove_redundant_distinct setting. Related to #42648. #44176 (Igor Nikonov).
- A few query rewrite optimizations: sumIf(123, cond) -> 123 * countIf(1, cond), sum(if(cond, 123, 0)) -> 123 * countIf(cond), sum(if(cond, 0, 123)) -> 123 * countIf(not(cond)) #44728 (李扬).
- Improved how memory bound merging and aggregation in order on top of the query plan interact. Previously we fell back to explicit sorting for AIO in some cases when it wasn't actually needed. #45892 (Nikita Taranov).
- Concurrent merges are scheduled using round-robin by default to ensure fair and starvation-free operation. Previously in heavily overloaded shards, big merges could possibly be starved by smaller merges due to the use of strict priority scheduling. Added background_merges_mutations_scheduling_policy server config option to select the scheduling algorithm (round_robin or shortest_task_first). #46247 (Sergei Trifonov).
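A minimal sketch of enabling the multi-step PREWHERE logic described at the top of this list; the table hits and the columns in the condition are hypothetical, only the setting name comes from the entry above.

```sql
-- Conditions that need the same columns are grouped into separate read steps,
-- so later steps read fewer rows.
SELECT count()
FROM hits
PREWHERE CounterID = 62 AND EventDate >= '2013-07-01'
SETTINGS enable_multiple_prewhere_read_steps = 1;
```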
Improvement
- Enable retries for INSERT by default in case of ZooKeeper session loss. We already use it in production. #46308 (Alexey Milovidov).
- Add ability to ignore unknown keys in JSON object for named tuples (
input_format_json_ignore_unknown_keys_in_named_tuple
). #45678 (Azat Khuzhin). - Support optimizing the
where
clause with sorting key expression move toprewhere
for query withfinal
. #38893. #38950 (hexiaoting). - Add new metrics for backups: num_processed_files and processed_files_size described actual number of processed files. #42244 (Aleksandr).
- Added retries on interserver DNS errors. #43179 (Anton Kozlov).
- Keeper improvement: try preallocating space on the disk to avoid undefined out-of-space issues. Introduce setting
max_log_file_size
for the maximum size of Keeper's Raft log files. #44370 (Antonio Andelic). - Optimize behavior for a replica delay api logic in case the replica is read-only. #45148 (mateng915).
- Ask for the password in clickhouse-client interactively in a case when the empty password is wrong. Closes #46702. #46730 (Nikolay Degterinsky).
- Mark
Gorilla
compression on columns of non-Float* type as suspicious. #45376 (Robert Schulze). - Show replica name that is executing a merge in the
postpone_reason
column. #45458 (Frank Chen). - Save exception stack trace in part_log. #45459 (Frank Chen).
- The
regexp_tree
dictionary is polished and now it is compatible with https://github.com/ua-parser/uap-core. #45631 (Han Fei). - Updated checking of
SYSTEM SYNC REPLICA
, resolves #45508 #45648 (SmitaRKulkarni). - Rename setting
replication_alter_partitions_sync
toalter_sync
. #45659 (Antonio Andelic). - The
generateRandom
table function and the engine now supportLowCardinality
data types. This is useful for testing, for example you can writeINSERT INTO table SELECT * FROM generateRandom() LIMIT 1000
. This is needed to debug #45590. #45661 (Alexey Milovidov). - The experimental query result cache now provides more modular configuration settings. #45679 (Robert Schulze).
- Renamed "query result cache" to "query cache". #45682 (Robert Schulze).
- add
SYSTEM SYNC FILE CACHE
command. It will do thesync
syscall. #8921. #45685 (DR). - Add a new S3 setting
allow_head_object_request
. This PR makes usage ofGetObjectAttributes
request instead ofHeadObject
introduced in https://github.com/ClickHouse/ClickHouse/pull/45288 optional (and disabled by default). #45701 (Vitaly Baranov). - Add ability to override connection settings based on connection names (that said that now you can forget about storing password for each connection, you can simply put everything into
~/.clickhouse-client/config.xml
and even use different history files for them, which can be also useful). #45715 (Azat Khuzhin). - Arrow format: support the duration type. Closes #45669. #45750 (flynn).
- Extend the logging in the Query Cache to improve investigations of the caching behavior. #45751 (Robert Schulze).
- The query cache's server-level settings are now reconfigurable at runtime. #45758 (Robert Schulze).
- Hide password in logs when a table function's arguments are specified with a named collection. #45774 (Vitaly Baranov).
- Improve internal S3 client to correctly deduce regions and redirections for different types of URLs. #45783 (Antonio Andelic).
- Add support for Map, IPv4 and IPv6 types in generateRandom. Mostly useful for testing. #45785 (Raúl Marín).
- Support empty/notEmpty for IP types. #45799 (Yakov Olkhovskiy).
- The column
num_processed_files
was split into two columns:num_files
(for BACKUP) andfiles_read
(for RESTORE). The columnprocessed_files_size
was split into two columns:total_size
(for BACKUP) andbytes_read
(for RESTORE). #45800 (Vitaly Baranov). - Add support for
SHOW ENGINES
query for MySQL compatibility. #45859 (Filatenkov Artur). - Improved how the obfuscator deals with queries. #45867 (Raúl Marín).
- Improve behaviour of conversion into Date for boundary value 65535 (2149-06-06). #46042 #45914 (Joanna Hulboj).
- Add setting
check_referential_table_dependencies
to check referential dependencies onDROP TABLE
. This PR solves #38326. #45936 (Vitaly Baranov). - Fix
tupleElement
to returnNull
when havingNull
argument. Closes #45894. #45952 (flynn). - Throw an error on no files satisfying the S3 wildcard. Closes #45587. #45957 (chen).
- Use cluster state data to check concurrent backup/restore. #45982 (SmitaRKulkarni).
- ClickHouse Client: Use "exact" matching for fuzzy search, which has correct case ignorance and more appropriate algorithm for matching SQL queries. #46000 (Azat Khuzhin).
- Forbid wrong create View syntax
CREATE View X TO Y AS SELECT
. Closes #4331. #46043 (flynn). - Storage
Log
family support setting thestorage_policy
. Closes #43421. #46044 (flynn). - Improve
JSONColumns
format when the result is empty. Closes #46024. #46053 (flynn). - Add reference implementation for SipHash128. #46065 (Salvatore Mesoraca).
- Add a new metric to record allocations times and bytes using mmap. #46068 (李扬).
- Previously, for functions like leftPad, rightPad, leftPadUTF8 and rightPadUTF8, the second argument length had to be UInt8|16|32|64|128|256, which is too strict for ClickHouse users and not consistent with other similar functions like arrayResize, substring and so on. #46103 (李扬).
welchTTest
function in debug build when the resulting statistics is NaN. Unified the behavior with other similar functions. Change the behavior ofstudentTTest
to return NaN instead of throwing an exception because the previous behavior was inconvenient. This closes #41176 This closes #42162. #46141 (Alexey Milovidov). - More convenient usage of big integers and ORDER BY WITH FILL. Allow using plain integers for start and end points in WITH FILL when ORDER BY big (128-bit and 256-bit) integers. Fix the wrong result for big integers with negative start or end points. This closes #16733. #46152 (Alexey Milovidov).
- Add
parts
,active_parts
andtotal_marks
columns tosystem.tables
on issue. #46161 (attack204). - Functions "multi[Fuzzy]Match(Any|AnyIndex|AllIndices}" now reject regexes which will likely evaluate very slowly in vectorscan. #46167 (Robert Schulze).
- When
insert_null_as_default
is enabled and column doesn't have defined default value, the default of column type will be used. Also this PR fixes using default values on nulls in case of LowCardinality columns. #46171 (Kruglov Pavel). - Prefer explicitly defined access keys for S3 clients. If
use_environment_credentials
is set totrue
, and the user has provided the access key through query or config, they will be used instead of the ones from the environment variable. #46191 (Antonio Andelic). - Add an alias "DATE_FORMAT()" for function "formatDateTime()" to improve compatibility with MySQL's SQL dialect, extend function
formatDateTime
with substitutions "a", "b", "c", "h", "i", "k", "l" "r", "s", "W". ### Documentation entry for user-facing changes User-readable short description:DATE_FORMAT
is an alias offormatDateTime
. Formats a Time according to the given Format string. Format is a constant expression, so you cannot have multiple formats for a single result column. (Provide link to formatDateTime). #46302 (Jake Bamrah). - Add
ProfileEvents
andCurrentMetrics
about the callback tasks for parallel replicas (s3Cluster
andMergeTree
tables). #46313 (Alexey Milovidov). - Add support for
DELETE
andUPDATE
for tables usingKeeperMap
storage engine. #46330 (Antonio Andelic). - Allow writing RENAME queries with query parameters. Resolves #45778. #46407 (Nikolay Degterinsky).
- Fix parameterized SELECT queries with REPLACE transformer. Resolves #33002. #46420 (Nikolay Degterinsky).
- Exclude the internal database used for temporary/external tables from the calculation of asynchronous metric "NumberOfDatabases". This makes the behavior consistent with system table "system.databases". #46435 (Robert Schulze).
- Added last_exception_time column into the distribution_queue table. #46564 (Aleksandr).
- Support for IN clause with parameter in parameterized views. #46583 (SmitaRKulkarni).
- Do not load named collections on server startup (load them on first access instead). #46607 (Kseniia Sumarokova).
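A small sketch of the DATE_FORMAT alias mentioned above; the timestamp is an arbitrary example value and only long-standing formatDateTime substitutions are used.

```sql
-- DATE_FORMAT is an alias of formatDateTime (for MySQL compatibility).
SELECT DATE_FORMAT(toDateTime('2023-02-23 12:30:45'), '%Y-%m-%d') AS formatted;
-- 2023-02-23
```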
Build/Testing/Packaging Improvement
- Introduce GWP-ASan implemented by the LLVM runtime. This closes #27039. #45226 (Han Fei).
- We want to make our tests less stable and more flaky: add randomization for merge tree settings in tests. #38983 (Anton Popov).
- Enable HDFS support on PowerPC, which helps to fix the following functional tests: 02113_hdfs_assert.sh, 02244_hdfs_cluster.sql and 02368_cancel_write_into_hdfs.sh. #44949 (MeenaRenganathan22).
- Add systemd.service file for clickhouse-keeper. Fixes #44293. #45568 (Mikhail f. Shiryaev).
- ClickHouse's fork of poco was moved from "contrib/" to "base/poco/". #46075 (Robert Schulze).
- Add an option for clickhouse-watchdog to restart the child process. This does not have a lot of use. #46312 (Alexey Milovidov).
- If the environment variable CLICKHOUSE_DOCKER_RESTART_ON_EXIT is set to 1, the Docker container will run clickhouse-server as a child instead of the first process, and restart it when it exits. #46391 (Alexey Milovidov).
- Fix Systemd service file. #46461 (SuperDJY).
- Raised the minimum Clang version needed to build ClickHouse from 12 to 15. #46710 (Robert Schulze).
- Upgrade Intel QPL from v0.3.0 to v1.0.0. Build libaccel-config and link it statically to the QPL library instead of dynamically. #45809 (jasperzhu).
Bug Fix (user-visible misbehavior in official stable release)
- Flush data exactly by rabbitmq_flush_interval_ms or by rabbitmq_max_block_size in StorageRabbitMQ. Closes #42389. Closes #45160. #44404 (Kseniia Sumarokova).
- Use PODArray to render in the sparkBar function, so we can control the memory usage. Close #44467. #44489 (Duc Canh Le).
- Fix functions (quantilesExactExclusive, quantilesExactInclusive) return unsorted array element. #45379 (wujunfu).
- Fix uncaught exception in HTTPHandler when open telemetry is enabled. #45456 (Frank Chen).
- Don't infer Dates from 8 digit numbers. It could lead to wrong data to be read. #45581 (Kruglov Pavel).
- Fixes to correctly use
odbc_bridge_use_connection_pooling
setting. #45591 (Bharat Nallan). - When the callback in the cache is called, it is possible that this cache is destructed. To keep it safe, we capture members by value. It's also safe for task schedule because it will be deactivated before storage is destroyed. Resolve #45548. #45601 (Han Fei).
- Fix data corruption when codecs Delta or DoubleDelta are combined with codec Gorilla. #45615 (Robert Schulze).
- Correctly check types when using N-gram bloom filter index to avoid invalid reads. #45617 (Antonio Andelic).
- A couple of segfaults have been reported around
c-ares
. They were introduced in my previous pull requests. I have fixed them with the help of Alexander Tokmakov. #45629 (Arthur Passos). - Fix key description when encountering duplicate primary keys. This can happen in projections. See #45590 for details. #45686 (Amos Bird).
- Set compression method and level for backup Closes #45690. #45737 (Pradeep Chhetri).
- Should use select_query_typed.limitByOffset instead of select_query_typed.limitOffset. #45817 (刘陶峰).
- When using the experimental analyzer, queries like SELECT number FROM numbers(100) LIMIT 10 OFFSET 10; got wrong results (an empty result for this SQL). That was caused by an unnecessary offset step added by the planner. #45822 (刘陶峰).
- Bugfix IPv6 parser for mixed ip4 address with missed first octet (like
::.1.2.3
). #45871 (Yakov Olkhovskiy). - Add the
query_kind
column to thesystem.processes
table and theSHOW PROCESSLIST
query. Remove duplicate code. It fixes a bug: the global configuration parametermax_concurrent_select_queries
was not respected to queries withINTERSECT
orEXCEPT
chains. #45872 (Alexey Milovidov). - Fix crash in a function
stochasticLinearRegression
. Found by WingFuzz. #45985 (Nikolai Kochetov). - Fix crash in
SELECT
queries withINTERSECT
andEXCEPT
modifiers that read data from tables with enabled sparse columns (controlled by settingratio_of_defaults_for_sparse_serialization
). #45987 (Anton Popov). - Fix read in order optimization for DESC sorting with FINAL, close #45815. #46009 (Vladimir C).
- Fix reading of non existing nested columns with multiple level in compact parts. #46045 (Azat Khuzhin).
- Fix elapsed column in system.processes (10x error). #46047 (Azat Khuzhin).
- Follow-up fix for Replace domain IP types (IPv4, IPv6) with native https://github.com/ClickHouse/ClickHouse/pull/43221. #46087 (Yakov Olkhovskiy).
- Fix environment variable substitution in the configuration when a parameter already has a value. This closes #46131. This closes #9547. #46144 (pufit).
- Fix incorrect predicate push down with grouping sets. Closes #45947. #46151 (flynn).
- Fix possible pipeline stuck error on
fulls_sorting_join
with constant keys. #46175 (Vladimir C). - Never rewrite tuple functions as literals during formatting to avoid incorrect results. #46232 (Salvatore Mesoraca).
- Fix possible out of bounds error while reading LowCardinality(Nullable) in Arrow format. #46270 (Kruglov Pavel).
- Fix
SYSTEM UNFREEZE
queries failing with the exceptionCANNOT_PARSE_INPUT_ASSERTION_FAILED
. #46325 (Aleksei Filatov). - Fix possible crash which can be caused by an integer overflow while deserializing aggregating state of a function that stores HashTable. #46349 (Nikolai Kochetov).
- Fix possible
LOGICAL_ERROR
in asynchronous inserts with invalid data sent in formatVALUES
. #46350 (Anton Popov). - Fixed a LOGICAL_ERROR on an attempt to execute
ALTER ... MOVE PART ... TO TABLE
. This type of query was never actually supported. #46359 (Alexander Tokmakov). - Fix s3Cluster schema inference in parallel distributed insert select when
parallel_distributed_insert_select
is enabled. #46381 (Kruglov Pavel). - Fix queries like
ALTER TABLE ... UPDATE nested.arr1 = nested.arr2 ...
, wherearr1
andarr2
are fields of the sameNested
column. #46387 (Anton Popov). - Scheduler may fail to schedule a task. If it happens, the whole MulityPartUpload should be aborted and
UploadHelper
must wait for already scheduled tasks. #46451 (Dmitry Novik). - Fix PREWHERE for Merge with different default types (fixes some
NOT_FOUND_COLUMN_IN_BLOCK
when the default type for the column differs, also allowPREWHERE
when the type of column is the same across tables, and prohibit it, only if it differs). #46454 (Azat Khuzhin). - Fix a crash that could happen when constant values are used in
ORDER BY
. Fixes #46466. #46493 (Nikolai Kochetov). - Do not throw exception if
disk
setting was specified on query level, butstorage_policy
was specified in config merge tree settings section.disk
will override setting from config. #46533 (Kseniia Sumarokova). - Fix an invalid processing of constant
LowCardinality
argument in functionarrayMap
. This bug could lead to a segfault in release, and logical errorBad cast
in debug build. #46569 (Alexey Milovidov). - fixes #46557. #46611 (Alexander Gololobov).
- Fix endless restarts of clickhouse-server systemd unit if server cannot start within 1m30sec (Disable timeout logic for starting clickhouse-server from systemd service). #46613 (Azat Khuzhin).
- Allocated during asynchronous inserts memory buffers were deallocated in the global context and MemoryTracker counters for corresponding user and query were not updated correctly. That led to false positive OOM exceptions. #46622 (Dmitry Novik).
- Updated to not clear on_expression from table_join as its used by future analyze runs resolves #45185. #46487 (SmitaRKulkarni).
ClickHouse release 23.1, 2023-01-26
ClickHouse release 23.1
Upgrade Notes
- The SYSTEM RESTART DISK query becomes a no-op. #44647 (alesapin).
- The PREALLOCATE option for HASHED/SPARSE_HASHED dictionaries becomes a no-op. #45388 (Azat Khuzhin). It does not give significant advantages anymore.
- Disallow Gorilla codec on columns of non-Float32 or non-Float64 type. #45252 (Robert Schulze). It was pointless and led to inconsistencies.
- Parallel quorum inserts might work incorrectly with *MergeTree tables created with the deprecated syntax. Therefore, parallel quorum inserts support is completely disabled for such tables. It does not affect tables created with a new syntax. #45430 (Alexander Tokmakov).
- Use the GetObjectAttributes request instead of the HeadObject request to get the size of an object in AWS S3. This change fixes handling endpoints without explicit regions after updating the AWS SDK, for example. #45288 (Vitaly Baranov). AWS S3 and Minio are tested, but keep in mind that various S3-compatible services (GCS, R2, B2) may have subtle incompatibilities. This change also may require you to adjust the ACL to allow the GetObjectAttributes request.
- Forbid paths in timezone names. For example, a timezone name like /usr/share/zoneinfo/Asia/Aden is not allowed; the IANA timezone database name like Asia/Aden should be used. #44225 (Kruglov Pavel).
- Queries combining equijoin and constant expressions (e.g., JOIN ON t1.x = t2.x AND 1 = 1) are forbidden due to incorrect results. #44016 (Vladimir C).
New Feature
- Dictionary source for extracting keys by traversing regular expressions tree. It can be used for User-Agent parsing. #40878 (Vage Ogannisian). #43858 (Han Fei).
- Added parametrized view functionality, now it's possible to specify query parameters for the View table engine. resolves #40907. #41687 (SmitaRKulkarni).
- Add
quantileInterpolatedWeighted
/quantilesInterpolatedWeighted
functions. #38252 (Bharat Nallan). - Array join support for the
Map
type, like the function "explode" in Spark. #43239 (李扬). - Support SQL standard binary and hex string literals. #43785 (Mo Xuan).
- Allow formatting DateTime in Joda-Time style. Refer to the Joda-Time docs. #43818 (李扬).
- Implemented a fractional second formatter (%f) for formatDateTime. #44060 (ltrk2). #44497 (Alexander Gololobov).
- Added age function to calculate the difference between two dates or dates with time values, expressed as the number of full units. Closes #41115. #44421 (Robert Schulze). See the example after this list.
- Add Null source for dictionaries. Closes #44240. #44502 (mayamika).
- Allow configuring the S3 storage class with the s3_storage_class configuration option. Such as <s3_storage_class>STANDARD/INTELLIGENT_TIERING</s3_storage_class>
Closes #44443. #44707 (chen). - Insert default values in case of missing elements in JSON object while parsing named tuple. Add setting
input_format_json_defaults_for_missing_elements_in_named_tuple
that controls this behaviour. Closes #45142#issuecomment-1380153217. #45231 (Kruglov Pavel). - Record server startup time in ProfileEvents (
ServerStartupMilliseconds
). Resolves #43188. #45250 (SmitaRKulkarni). - Refactor and Improve streaming engines Kafka/RabbitMQ/NATS and add support for all formats, also refactor formats a bit: - Fix producing messages in row-based formats with suffixes/prefixes. Now every message is formatted completely with all delimiters and can be parsed back using input format. - Support block-based formats like Native, Parquet, ORC, etc. Every block is formatted as a separate message. The number of rows in one message depends on the block size, so you can control it via the setting
max_block_size
. - Add new engine settingskafka_max_rows_per_message/rabbitmq_max_rows_per_message/nats_max_rows_per_message
. They control the number of rows formatted in one message in row-based formats. Default value: 1. - Fix high memory consumption in the NATS table engine. - Support arbitrary binary data in NATS producer (previously it worked only with strings contained \0 at the end) - Add missing Kafka/RabbitMQ/NATS engine settings in the documentation. - Refactor producing and consuming in Kafka/RabbitMQ/NATS, separate it from WriteBuffers/ReadBuffers semantic. - Refactor output formats: remove callbacks on each row used in Kafka/RabbitMQ/NATS (now we don't use callbacks there), allow to use IRowOutputFormat directly, clarify row end and row between delimiters, make it possible to reset output format to start formatting again - Add proper implementation in formatRow function (bonus after formats refactoring). #42777 (Kruglov Pavel). - Support reading/writing
Nested
tables asList
ofStruct
inCapnProto
format. Read/writeDecimal32/64
asInt32/64
. Closes #43319. #43379 (Kruglov Pavel). - Added a
message_format_string
column tosystem.text_log
. The column contains a pattern that was used to format the message. #44543 (Alexander Tokmakov). This allows various analytics over the ClickHouse logs. - Try to autodetect headers with column names (and maybe types) for CSV/TSV/CustomSeparated input formats. Add settings input_format_tsv/csv/custom_detect_header that enable this behaviour (enabled by default). Closes #44640. #44953 (Kruglov Pavel).
Experimental Feature
- Add an experimental inverted index as a new secondary index type for efficient text search. #38667 (larryluogit).
- Add experimental query result cache. #43797 (Robert Schulze).
- Added extendable and configurable scheduling subsystem for IO requests (not yet integrated with IO code itself). #41840 (Sergei Trifonov). This feature does nothing at all, enjoy.
- Added
SYSTEM DROP DATABASE REPLICA
that removes metadata of a dead replica of aReplicated
database. Resolves #41794. #42807 (Alexander Tokmakov).
Performance Improvement
- Do not load inactive parts at startup of
MergeTree
tables. #42181 (Anton Popov). - Improved latency of reading from storage
S3
and table functions3
with large numbers of small files. Now settingsremote_filesystem_read_method
andremote_filesystem_read_prefetch
take effect while reading from storageS3
. #43726 (Anton Popov). - Optimization for reading struct fields in Parquet/ORC files. Only the required fields are loaded. #44484 (lgbo).
- Two-level aggregation algorithm was mistakenly disabled for queries over the HTTP interface. It was enabled back, and it leads to a major performance improvement. #45450 (Nikolai Kochetov).
- Added mmap support for StorageFile, which should improve the performance of clickhouse-local. #43927 (pufit).
- Added sharding support in HashedDictionary to allow parallel load (almost linear scaling based on number of shards). #40003 (Azat Khuzhin).
- Speed up query parsing. #42284 (Raúl Marín).
- Always replace OR chain
expr = x1 OR ... OR expr = xN
toexpr IN (x1, ..., xN)
in the case whereexpr
is aLowCardinality
column. Settingoptimize_min_equality_disjunction_chain_length
is ignored in this case. #42889 (Guo Wangyang). - Slightly improve performance by optimizing the code around ThreadStatus. #43586 (Zhiguo Zhou).
- Optimize the column-wise ternary logic evaluation by achieving auto-vectorization. In the performance test of this microbenchmark, we've observed a peak performance gain of 21x on the ICX device (Intel Xeon Platinum 8380 CPU). #43669 (Zhiguo Zhou).
- Avoid acquiring read locks in the
system.tables
table if possible. #43840 (Raúl Marín). - Optimize ThreadPool. The performance experiments of SSB (Star Schema Benchmark) on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) shows that this change could effectively decrease the lock contention for ThreadPoolImpl::mutex by 75%, increasing the CPU utilization and improving the overall performance by 2.4%. #44308 (Zhiguo Zhou).
- Now the optimisation for predicting the hash table size is applied only if the cached hash table size is sufficiently large (thresholds were determined empirically and hardcoded). #44455 (Nikita Taranov).
- Small performance improvement for asynchronous reading from remote filesystems. #44868 (Kseniia Sumarokova).
- Add fast path for: -
col like '%%'
; -col like '%'
; -col not like '%'
; -col not like '%'
; -match(col, '.*')
. #45244 (李扬). - Slightly improve happy path optimisation in filtering (WHERE clause). #45289 (Nikita Taranov).
- Provide monotonicity info for
toUnixTimestamp64*
to enable more algebraic optimizations for index analysis. #44116 (Nikita Taranov). - Allow the configuration of temporary data for query processing (spilling to disk) to cooperate with the filesystem cache (taking up the space from the cache disk) #43972 (Vladimir C). This mainly improves ClickHouse Cloud, but can be used for self-managed setups as well, if you know what to do.
- Make
system.replicas
table do parallel fetches of replicas statuses. Closes #43918. #43998 (Nikolay Degterinsky). - Optimize memory consumption during backup to S3: files to S3 now will be copied directly without using
WriteBufferFromS3
(which could use a lot of memory). #45188 (Vitaly Baranov). - Add a cache for async block ids. This will reduce the number of requests of ZooKeeper when we enable async inserts deduplication. #45106 (Han Fei).
Improvement
- Use structure from insertion table in generateRandom without arguments. #45239 (Kruglov Pavel).
- Allow to implicitly convert floats stored in string fields of JSON to integers in
JSONExtract
functions. E.g.JSONExtract('{"a": "1000.111"}', 'a', 'UInt64')
->1000
, previously it returned 0. #45432 (Anton Popov). - Added fields
supports_parallel_parsing
andsupports_parallel_formatting
to tablesystem.formats
for better introspection. #45499 (Anton Popov). - Improve reading CSV field in CustomSeparated/Template format. Closes #42352 Closes #39620. #43332 (Kruglov Pavel).
- Unify query elapsed time measurements. #43455 (Raúl Marín).
- Improve automatic usage of structure from the insertion table in table functions file/hdfs/s3 when virtual columns are present in a select query; it fixes the possible errors Block structure mismatch or number of columns mismatch. #43695 (Kruglov Pavel).
- Add support for signed arguments in the function range. Fixes #43333. #43733 (sanyu). See the example after this list.
- Remove redundant sorting, for example, sorting related ORDER BY clauses in subqueries. Implemented on top of query plan. It does similar optimization as optimize_duplicate_order_by_and_distinct regarding ORDER BY clauses, but is more generic, since it is applied to any redundant sorting steps (not only caused by an ORDER BY clause) and to subqueries of any depth. Related to #42648. #43905 (Igor Nikonov).
- Add the ability to disable deduplication of files for BACKUP (for backups without deduplication ATTACH can be used instead of full RESTORE). For example
BACKUP foo TO S3(...) SETTINGS deduplicate_files=0
(defaultdeduplicate_files=1
). #43947 (Azat Khuzhin). - Refactor and improve schema inference for text formats. Add new setting
schema_inference_make_columns_nullable
that controls making result typesNullable
(enabled by default);. #44019 (Kruglov Pavel). - Better support for
PROXYv1
protocol. #44135 (Yakov Olkhovskiy). - Add information about the latest part check by cleanup threads into
system.parts
table. #44244 (Dmitry Novik). - Disable table functions in readonly mode for inserts. #44290 (SmitaRKulkarni).
- Add a setting
simultaneous_parts_removal_limit
to allow limiting the number of parts being processed by one iteration of CleanupThread. #44461 (Dmitry Novik). - Do not initialize ReadBufferFromS3 when only virtual columns are needed in a query. This may be helpful to #44246. #44493 (chen).
- Prevent duplicate column names hints. Closes #44130. #44519 (Joanna Hulboj).
- Allow macro substitution in endpoint of disks. Resolve #40951. #44533 (SmitaRKulkarni).
- Improve schema inference when
input_format_json_read_object_as_string
is enabled. #44546 (Kruglov Pavel). - Add a user-level setting
database_replicated_allow_replicated_engine_arguments
which allows banning the creation ofReplicatedMergeTree
tables with arguments inDatabaseReplicated
. #44566 (alesapin). - Prevent users from mistakenly specifying zero (invalid) value for
index_granularity
. This closes #44536. #44578 (Alexey Milovidov). - Added possibility to set path to service keytab file in
keytab
parameter inkerberos
section of config.xml. #44594 (Roman Vasin). - Use already written part of the query for fuzzy search (pass to the
skim
library, which is written in Rust and linked statically to ClickHouse). #44600 (Azat Khuzhin). - Enable
input_format_json_read_objects_as_strings
by default to be able to read nested JSON objects while JSON Object type is experimental. #44657 (Kruglov Pavel). - Improvement for deduplication of async inserts: when users do duplicate async inserts, we should deduplicate inside the memory before we query Keeper. #44682 (Han Fei).
- Input/output
Avro
format will parse bool type as ClickHouse bool type. #44684 (Kruglov Pavel). - Support Bool type in Arrow/Parquet/ORC. Closes #43970. #44698 (Kruglov Pavel).
- Don't greedily parse beyond the quotes when reading UUIDs - it may lead to mistakenly successful parsing of incorrect data. #44686 (Raúl Marín).
- Infer UInt64 in case of Int64 overflow and fix some transforms in schema inference. #44696 (Kruglov Pavel).
- Previously dependency resolving inside
Replicated
database was done in a hacky way, and now it's done right using an explicit graph. #44697 (Nikita Mikhaylov). - Fix
output_format_pretty_row_numbers
does not preserve the counter across the blocks. Closes #44815. #44832 (flynn). - Don't report errors in
system.errors
due to parts being merged concurrently with the background cleanup process. #44874 (Raúl Marín). - Optimize and fix metrics for Distributed async INSERT. #44922 (Azat Khuzhin).
- Added settings to disallow concurrent backups and restores resolves #43891 Implementation: * Added server-level settings to disallow concurrent backups and restores, which are read and set when BackupWorker is created in Context. * Settings are set to true by default. * Before starting backup or restores, added a check to see if any other backups/restores are running. For internal requests, it checks if it is from the self node using backup_uuid. #45072 (SmitaRKulkarni).
- Add
<storage_policy>
config parameter for system logs. #45320 (Stig Bakken).
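A small sketch of signed arguments in the range function mentioned above; the values are arbitrary and the upper bound is exclusive.

```sql
SELECT range(-3, 3) AS r;
-- [-3, -2, -1, 0, 1, 2]
```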
Build/Testing/Packaging Improvement
- Statically link with the
skim
library (it is written in Rust) for fuzzy search in clickhouse client/local history. #44239 (Azat Khuzhin). - We removed support for shared linking because of Rust. Actually, Rust is only an excuse for this removal, and we wanted to remove it nevertheless. #44828 (Alexey Milovidov).
- Remove the dependency on the
adduser
tool from the packages, because we don't use it. This fixes #44934. #45011 (Alexey Milovidov). - The
SQLite
library is updated to the latest. It is used for the SQLite database and table integration engines. Also, fixed a false-positive TSan report. This closes #45027. #45031 (Alexey Milovidov). - CRC-32 changes to address the WeakHash collision issue in PowerPC. #45144 (MeenaRenganathan22).
- Update aws-c* submodules #43020 (Vitaly Baranov).
- Automatically merge green backport PRs and green approved PRs #41110 (Mikhail f. Shiryaev).
- Introduce a website for the status of ClickHouse CI. Source.
Bug Fix
- Replace domain IP types (IPv4, IPv6) with native. #43221 (Yakov Olkhovskiy). It automatically fixes some missing implementations in the code.
- Fix the backup process if mutations get killed during the backup process. #45351 (Vitaly Baranov).
- Fix the
Invalid number of rows in Chunk
exception message. #41404. #42126 (Alexander Gololobov). - Fix possible use of an uninitialized value after executing expressions after sorting. Closes #43386 #43635 (Kruglov Pavel).
- Better handling of NULL in aggregate combinators, fix possible segfault/logical error while using an obscure optimization
optimize_rewrite_sum_if_to_count_if
. Closes #43758. #43813 (Kruglov Pavel). - Fix CREATE USER/ROLE query settings constraints. #43993 (Nikolay Degterinsky).
- Fixed bug with non-parsable default value for
EPHEMERAL
column in table metadata. #44026 (Yakov Olkhovskiy). - Fix parsing of bad version from compatibility setting. #44224 (Kruglov Pavel).
- Bring interval subtraction from datetime in line with addition. #44241 (ltrk2).
- Remove limits on the maximum size of the result for view. #44261 (lizhuoyu5).
- Fix possible logical error in cache if
do_not_evict_index_and_mrk_files=1
. Closes #42142. #44268 (Kseniia Sumarokova). - Fix possible too early cache write interruption in write-through cache (caching could be stopped due to false assumption when it shouldn't have). #44289 (Kseniia Sumarokova).
- Fix possible crash in the case function
IN
with constant arguments was used as a constant argument together withLowCardinality
. Fixes #44221. #44346 (Nikolai Kochetov). - Fix support for complex parameters (like arrays) of parametric aggregate functions. This closes #30975. The aggregate function
sumMapFiltered
was unusable in distributed queries before this change. #44358 (Alexey Milovidov). - Fix reading ObjectId in BSON schema inference. #44382 (Kruglov Pavel).
- Fix race which can lead to premature temp parts removal before merge finishes in ReplicatedMergeTree. This issue could lead to errors like
No such file or directory: xxx
. Fixes #43983. #44383 (alesapin). - Some invalid
SYSTEM ... ON CLUSTER
queries worked in an unexpected way if a cluster name was not specified. It's fixed, now invalid queries throwSYNTAX_ERROR
as they should. Fixes #44264. #44387 (Alexander Tokmakov). - Fix reading Map type in ORC format. #44400 (Kruglov Pavel).
- Fix reading columns that are not presented in input data in Parquet/ORC formats. Previously it could lead to error
INCORRECT_NUMBER_OF_COLUMNS
. Closes #44333. #44405 (Kruglov Pavel). - Previously the
bar
function used the same '▋' (U+258B "Left five eighths block") character to display both 5/8 and 6/8 bars. This change corrects this behavior by using '▊' (U+258A "Left three quarters block") for displaying 6/8 bar. #44410 (Alexander Gololobov). - Placing profile settings after profile settings constraints in the configuration file made constraints ineffective. #44411 (Konstantin Bogdanov).
- Fix
SYNTAX_ERROR
while runningEXPLAIN AST INSERT
queries with data. Closes #44207. #44413 (save-my-heart). - Fix reading bool value with CRLF in CSV format. Closes #44401. #44442 (Kruglov Pavel).
- Don't execute and/or/if/multiIf on a LowCardinality dictionary, so the result type cannot be LowCardinality. It could lead to the error
Illegal column ColumnLowCardinality
in some cases. Fixes #43603. #44469 (Kruglov Pavel). - Fix mutations with the setting
max_streams_for_merge_tree_reading
. #44472 (Anton Popov). - Fix potential null pointer dereference with GROUPING SETS in ASTSelectQuery::formatImpl (#43049). #44479 (Robert Schulze).
- Validate types in table function arguments, CAST function arguments, JSONAsObject schema inference according to settings. #44501 (Kruglov Pavel).
- Fix IN function with LowCardinality and const column, close #44503. #44506 (Duc Canh Le).
- Fixed a bug in the normalization of a
DEFAULT
expression inCREATE TABLE
statement. The second argument of the functionin
(or the right argument of operatorIN
) might be replaced with the result of its evaluation during CREATE query execution. Fixes #44496. #44547 (Alexander Tokmakov). - Projections do not work in presence of WITH ROLLUP, WITH CUBE and WITH TOTALS. In previous versions, a query produced an exception instead of skipping the usage of projections. This closes #44614. This closes #42772. #44615 (Alexey Milovidov).
- Async blocks were not cleaned because the function
get all blocks sorted by time
didn't get async blocks. #44651 (Han Fei). - Fix
LOGICAL_ERROR
The top step of the right pipeline should be ExpressionStep
for JOIN with subquery, UNION, and TOTALS. Fixes #43687. #44673 (Nikolai Kochetov). - Avoid
std::out_of_range
exception in the Executable table engine. #44681 (Kruglov Pavel). - Do not apply
optimize_syntax_fuse_functions
to quantiles on AST, close #44712. #44713 (Vladimir C). - Fix bug with wrong type in Merge table and PREWHERE, close #43324. #44716 (Vladimir C).
- Fix a possible crash during shutdown (while destroying TraceCollector). Fixes #44757. #44758 (Nikolai Kochetov).
- Fix a possible crash in distributed query processing. The crash could happen if a query with totals or extremes returned an empty result and there are mismatched types in the Distributed and the local tables. Fixes #44738. #44760 (Nikolai Kochetov).
- Fix fsync for fetches (
min_compressed_bytes_to_fsync_after_fetch
)/small files (ttl.txt, columns.txt) in mutations (min_rows_to_fsync_after_merge
/min_compressed_bytes_to_fsync_after_merge
). #44781 (Azat Khuzhin). - A rare race condition was possible when querying the
system.parts
orsystem.parts_columns
tables in the presence of parts being moved between disks. Introduced in #41145. #44809 (Alexey Milovidov). - Fix the error
Context has expired
which could appear with enabled projections optimization. Can be reproduced for queries with specific functions, likedictHas/dictGet
which use context in runtime. Fixes #44844. #44850 (Nikolai Kochetov). - A fix for
Cannot read all data
error which could happen while readingLowCardinality
dictionary from remote fs. Fixes #44709. #44875 (Nikolai Kochetov). - Ignore cases when hardware monitor sensors cannot be read instead of showing a full exception message in logs. #44895 (Raúl Marín).
- Use
max_delay_to_insert
value in case the calculated time to delay INSERT exceeds the setting value. Related to #44902. #44916 (Igor Nikonov). - Fix error
Different order of columns in UNION subquery
for queries withUNION
. Fixes #44866. #44920 (Nikolai Kochetov). - Delay for INSERT can be calculated incorrectly, which can lead to always using
max_delay_to_insert
setting as delay instead of a correct value. Using simple formulamax_delay_to_insert * (parts_over_threshold/max_allowed_parts_over_threshold)
i.e. delay grows proportionally to parts over threshold. Closes #44902. #44954 (Igor Nikonov). - Fix alter table TTL error when a wide part has the lightweight delete mask. #44959 (Mingliang Pan).
- Follow-up fix for Replace domain IP types (IPv4, IPv6) with native #43221. #45024 (Yakov Olkhovskiy).
- Follow-up fix for Replace domain IP types (IPv4, IPv6) with native https://github.com/ClickHouse/ClickHouse/pull/43221. #45043 (Yakov Olkhovskiy).
- A buffer overflow was possible in the parser. Found by fuzzer. #45047 (Alexey Milovidov).
- Fix possible cannot-read-all-data error in storage FileLog. Closes #45051, #38257. #45057 (Kseniia Sumarokova).
- Memory efficient aggregation (setting
distributed_aggregation_memory_efficient
) is disabled when grouping sets are present in the query. #45058 (Nikita Taranov). - Fix
RANGE_HASHED
dictionary to count range columns as part of the primary key during updates whenupdate_field
is specified. Closes #44588. #45061 (Maksim Kita). - Fix error
Cannot capture column
forLowCardinality
captured argument of nested lambda. Fixes #45028. #45065 (Nikolai Kochetov). - Fix the wrong query result of
additional_table_filters
(additional filter was not applied) in case the minmax/count projection is used. #45133 (Nikolai Kochetov). - Fixed bug in
histogram
function accepting negative values. #45147 (simpleton). - Fix wrong column nullability in StoreageJoin, close #44940. #45184 (Vladimir C).
- Fix
background_fetches_pool_size
settings reload (increase at runtime). #45189 (Raúl Marín). - Correctly process
SELECT
queries on KV engines (e.g. KeeperMap, EmbeddedRocksDB) usingIN
on the key with subquery producing different type. #45215 (Antonio Andelic). - Fix logical error in SEMI JOIN & join_use_nulls in some cases, close #45163, close #45209. #45230 (Vladimir C).
- Fix heap-use-after-free in reading from s3. #45253 (Kruglov Pavel).
- Fix bug when the Avro Union type is ['null', Nested type], closes #45275. Fix bug that incorrectly infers
bytes
type toFloat
. #45276 (flynn). - Throw a correct exception when explicit PREWHERE cannot be used with a table using the storage engine
Merge
. #45319 (Antonio Andelic). - Under WSL1 Ubuntu self-extracting ClickHouse fails to decompress due to inconsistency - /proc/self/maps reporting 32bit file's inode, while stat reporting 64bit inode. #45339 (Yakov Olkhovskiy).
- Fix race in Distributed table startup (that could lead to processing file of async INSERT multiple times). #45360 (Azat Khuzhin).
- Fix a possible crash while reading from storage
S3
and table functions3
in the case whenListObject
request has failed. #45371 (Anton Popov). - Fix
SELECT ... FROM system.dictionaries
exception when there is a dictionary with a bad structure (e.g. incorrect type in XML config). #45399 (Aleksei Filatov). - Fix s3Cluster schema inference when structure from insertion table is used in
INSERT INTO ... SELECT * FROM s3Cluster
queries. #45422 (Kruglov Pavel). - Fix bug in JSON/BSONEachRow parsing with HTTP that could lead to using default values for some columns instead of values from data. #45424 (Kruglov Pavel).
- Fixed bug (Code: 632. DB::Exception: Unexpected data ... after parsed IPv6 value ...) with typed parsing of IP types from text source. #45425 (Yakov Olkhovskiy).
- close #45297 Add check for empty regular expressions. #45428 (Han Fei).
- Fix possible (likely distributed) query hung. #45448 (Azat Khuzhin).
- Fix possible deadlock with
allow_asynchronous_read_from_io_pool_for_merge_tree
enabled in case of exception fromThreadPool::schedule
. #45481 (Nikolai Kochetov). - Fix possible in-use table after DETACH. #45493 (Azat Khuzhin).
- Fix rare abort in the case when a query is canceled and parallel parsing was used during its execution. #45498 (Anton Popov).
- Fix a race between Distributed table creation and INSERT into it (could lead to CANNOT_LINK during INSERT into the table). #45502 (Azat Khuzhin).
- Add proper default (SLRU) to cache policy getter. Closes #45514. #45524 (Kseniia Sumarokova).
- Disallow array join in mutations closes #42637 #44447 (SmitaRKulkarni).
- Fix for qualified asterisks with alias table name and column transformer. Resolves #44736. #44755 (SmitaRKulkarni).