ClickHouse/CHANGELOG.md

Table of Contents

ClickHouse release v23.9, 2023-09-28
ClickHouse release v23.8 LTS, 2023-08-31
ClickHouse release v23.7, 2023-07-27
ClickHouse release v23.6, 2023-06-30
ClickHouse release v23.5, 2023-06-08
ClickHouse release v23.4, 2023-04-26
ClickHouse release v23.3 LTS, 2023-03-30
ClickHouse release v23.2, 2023-02-23
ClickHouse release v23.1, 2023-01-25
Changelog for 2022

2023 Changelog

ClickHouse release 23.9, 2023-09-28

Backward Incompatible Change

  • Remove the status_info configuration option and dictionaries status from the default Prometheus handler. #54090 (Alexey Milovidov).
  • The experimental parts metadata cache is removed from the codebase. #54215 (Alexey Milovidov).

New Feature

  • Improve schema inference from JSON formats: 1) It is now possible to infer named Tuples from JSON objects without the experimental JSON type, under the setting input_format_json_try_infer_named_tuples_from_objects in JSON formats. Previously, without the experimental JSON type, JSON objects could only be inferred as Strings or Maps; now they can be inferred as named Tuples. The resulting Tuple type contains all keys of the objects that were read in the data sample during schema inference. This can be useful for reading structured JSON data without sparse objects. The setting is enabled by default. 2) Allow parsing a JSON array into a column of type String under the setting input_format_json_read_arrays_as_strings. This helps with reading arrays whose values have different types. 3) Allow using the type String for JSON keys with unknown types (null/[]/{}) in the sample data, under the setting input_format_json_infer_incomplete_types_as_strings. In JSON formats any value can now be read into a String column, so using the type String for unknown types avoids the error Cannot determine type for column 'column_name' by first 25000 rows of data, most likely this column contains only Nulls or empty Arrays/Maps during schema inference, and the data will be read successfully. #54427 (Kruglov Pavel).
  • Added IO scheduling support for remote disks. Storage configuration for disk types s3, s3_plain, hdfs and azure_blob_storage can now contain read_resource and write_resource elements holding resource names. Scheduling policies for these resources can be configured in a separate server configuration section resources. Queries can be marked using the setting workload and classified using the server configuration section workload_classifiers to achieve diverse resource scheduling goals. More details in the docs. #47009 (Sergei Trifonov). Added the "bandwidth_limit" IO scheduling node type. It allows you to specify max_speed and max_burst constraints on traffic passing through this node. #54618 (Sergei Trifonov).
  • Added new type of authentication based on SSH keys. It works only for the native TCP protocol. #41109 (George Gamezardashvili).
  • Added a new column _block_number for MergeTree tables. #44532. #47532 (SmitaRKulkarni).
  • Add IF EMPTY clause for DROP TABLE queries. #48915 (Pavel Novitskiy).
  • SQL functions toString(datetime, timezone) and formatDateTime(datetime, format, timezone) now support non-constant timezone arguments. #53680 (Yarik Briukhovetskyi).
  • Add support for ALTER TABLE MODIFY COMMENT. Note: something similar was added by an external contributor a long time ago, but the feature did not work at all and only confused users. This closes #36377. #51304 (Alexey Milovidov). Note: this command does not propagate between replicas, so the replicas of a table could have different comments.
  • Added GCD (greatest common divisor) as a new data compression codec. The codec computes the GCD of all column values, and then divides each value by the GCD. The GCD codec is a data preparation codec (similar to Delta and DoubleDelta) and cannot be used stand-alone. It works with integer, decimal and date/time data types. A viable use case for the GCD codec is column values that change (increase/decrease) in multiples of the GCD, e.g. 24 - 28 - 16 - 24 - 8 - 24 (assuming GCD = 4). See the sketch after this list. #53149 (Alexander Nam).
  • Two new type aliases DECIMAL(P) (as a shortcut for DECIMAL(P, 0)) and DECIMAL (as a shortcut for DECIMAL(10, 0)) were added. This makes ClickHouse more compatible with MySQL's SQL dialect. #53328 (Val Doroshchuk).
  • Added a new system log table backup_log to track all BACKUP and RESTORE operations. #53638 (Victor Krasnov).
  • Added a format setting output_format_markdown_escape_special_characters (default: false). The setting controls whether special characters like !, #, $ etc. are escaped (i.e. prefixed by a backslash) in the Markdown output format. #53860 (irenjj).
  • Add function decodeHTMLComponent. #54097 (Bharat Nallan).
  • Added peak_threads_usage to query_log table. #54335 (Alexey Gerasimchuck).
  • Add SHOW FUNCTIONS support to clickhouse-client. #54337 (Julia Kartseva).
  • Added function toDaysSinceYearZero with alias TO_DAYS (for compatibility with MySQL) which returns the number of days passed since 0001-01-01 (in Proleptic Gregorian Calendar). #54479 (Robert Schulze). Function toDaysSinceYearZero() now supports arguments of type DateTime and DateTime64. #54856 (Serge Klochkov).
  • Added functions YYYYMMDDtoDate, YYYYMMDDtoDate32, YYYYMMDDhhmmssToDateTime and YYYYMMDDhhmmssToDateTime64. They convert a date or date with time encoded as an integer (e.g. 20230911) into a native date or date with time. As such, they provide the opposite functionality of the existing functions toYYYYMMDD and toYYYYMMDDhhmmss (see the example after this list). #54509 (Quanfa Fu) (Robert Schulze).
  • Add several string distance functions, including byteHammingDistance, byteJaccardIndex and byteEditDistance. #54935 (flynn).
  • Allow specifying the expiration date and, optionally, the time for user credentials with VALID UNTIL datetime clause. #51261 (Nikolay Degterinsky).
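
A minimal sketch of the GCD codec mentioned above, assuming it is combined with a general-purpose codec such as ZSTD (since it cannot be used stand-alone); the table and column names are illustrative:

```sql
-- Hypothetical table; the GCD codec divides each value by the per-column GCD
-- before the general-purpose codec compresses the result.
CREATE TABLE sensor_readings
(
    ts    DateTime,
    value UInt64 CODEC(GCD, ZSTD)  -- values changing in multiples of a common divisor compress well
)
ENGINE = MergeTree
ORDER BY ts;
```

And a quick illustration of the new date conversion functions listed above (the literal values are arbitrary examples):

```sql
SELECT
    toDaysSinceYearZero(toDate('2023-09-28')),  -- days since 0001-01-01
    YYYYMMDDtoDate(20230911),                   -- 2023-09-11
    YYYYMMDDhhmmssToDateTime(20230911131415);   -- 2023-09-11 13:14:15 (in the server timezone)
```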

Performance Improvement

  • Speed up reading from S3 by enabling prefetches by default. #53709 (Alexey Milovidov).
  • Do not implicitly read PK and version columns in lonely parts if unnecessary for queries with FINAL. #53919 (Duc Canh Le).
  • Optimize group by constant keys. Will optimize queries with group by _file/_path after https://github.com/ClickHouse/ClickHouse/pull/53529. #53549 (Kruglov Pavel).
  • Improve performance of sorting for Decimal columns. Improve performance of insertion into MergeTree if ORDER BY contains a Decimal column. Improve performance of sorting when data is already sorted or almost sorted. #35961 (Maksim Kita).
  • Improve performance for huge query analysis. Fixes #51224. #51469 (frinkr).
  • An optimization to rewrite COUNT(DISTINCT ...) and various uniq variants to count if it is selected from a subquery with GROUP BY. #52082 #52645 (JackyWoo).
  • Remove manual calls to mmap/mremap/munmap and delegate all this work to jemalloc - and it slightly improves performance. #52792 (Nikita Taranov).
  • Fixed high CPU consumption when working with NATS. #54399 (Vasilev Pyotr).
  • Since separate code paths are now used for executing toString() with a datetime argument, performance is slightly improved for non-datetime arguments, and some parts of the code are cleaner. Follows up #53680. #54443 (Yarik Briukhovetskyi).
  • Instead of serializing JSON elements into a std::stringstream, the serialization result is now put into ColumnString directly. #54613 (lgbo).
  • Enable ORDER BY optimization for reading data in corresponding order from a MergeTree table in case that the table is behind a view. #54628 (Vitaly Baranov).
  • Improve JSON SQL functions by reusing GeneratorJSONPath and removing several shared pointers. #54735 (lgbo).
  • Keeper tries to batch flush requests for better performance. #53049 (Antonio Andelic).
  • Now clickhouse-client processes files in parallel in case of INFILE 'glob_expression' (see the example after this list). Closes #54218. #54533 (Max K.).
  • Allow to use primary key for IN function where primary key column types are different from IN function right side column types. Example: SELECT id FROM test_table WHERE id IN (SELECT '5'). Closes #48936. #54544 (Maksim Kita).
  • Hash JOIN tries to shrink internal buffers consuming half of maximal available memory (set by max_bytes_in_join). #54584 (vdimir).
  • Respect max_block_size for array join to avoid possible OOM. Close #54290. #54664 (李扬).
  • Reuse HTTP connections in the s3 table function. #54812 (Michael Kolupaev).
  • Replace the linear search in MergeTreeRangeReader::Stream::ceilRowsToCompleteGranules with a binary search. #54869 (usurai).
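
A small example of the parallel INFILE globs mentioned above, run from clickhouse-client; the table name and file pattern are hypothetical:

```sql
-- Files matching the glob are now read and inserted in parallel.
INSERT INTO events FROM INFILE 'events_2023-09-*.csv' FORMAT CSV;
```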

Experimental Feature

  • The creation of Annoy indexes can now be parallelized using setting max_threads_for_annoy_index_creation. #54047 (Robert Schulze).
  • Parallel replicas over a distributed table no longer read from all replicas. #54199 (Igor Nikonov).

Improvement

  • Allow replacing long file names of columns in MergeTree data parts with hashes of the names. This helps to avoid the File name too long error in some cases. #50612 (Anton Popov).
  • Parse data in the JSON format as JSONEachRow if parsing the metadata failed. This allows reading files with the .json extension even if the real format is JSONEachRow. Closes #45740. #54405 (Kruglov Pavel).
  • Output valid JSON/XML on exception during HTTP query execution. Add setting http_write_exception_in_output_format to enable/disable this behaviour (enabled by default). #52853 (Kruglov Pavel).
  • View information_schema.tables now has a new field data_length which shows the approximate size of the data on disk. Required to run queries generated by Amazon QuickSight. #55037 (Robert Schulze).
  • The MySQL interface gained a minimal implementation of prepared statements, just enough to allow a connection from Tableau Online to ClickHouse via the MySQL connector. #54115 (Serge Klochkov). Please note: the prepared statements implementation is pretty minimal; we do not support argument binding yet, as it is not required for this particular Tableau Online use case. It will be implemented as a follow-up if necessary after extensive testing of Tableau Online in case we discover issues.
  • Support case-insensitive and dot-all matching modes in regexp_tree dictionaries. #50906 (Johann Gan).
  • Keeper improvement: Add a createIfNotExists Keeper command. #48855 (Konstantin Bogdanov).
  • More precise integer type inference, fix #51236. #53003 (Chen768959).
  • Introduced resolving of charsets in the string literals for MaterializedMySQL. #53220 (Val Doroshchuk).
  • Fix a subtle issue with a rarely used EmbeddedRocksDB table engine in an extremely rare scenario: sometimes the EmbeddedRocksDB table engine does not close files correctly in NFS after running DROP TABLE. #53502 (Mingliang Pan).
  • RESTORE TABLE ON CLUSTER must create replicated tables with a matching UUID on hosts. Otherwise the macro {uuid} in ZooKeeper path can't work correctly after RESTORE. This PR implements that. #53765 (Vitaly Baranov).
  • Added restore setting restore_broken_parts_as_detached: if it's true, the RESTORE process won't stop on broken parts while restoring; instead, all the broken parts will be copied to the detached folder with the prefix broken-from-backup. If it's false, the RESTORE process will stop on the first broken part (if any). The default value is false. #53877 (Vitaly Baranov).
  • Add elapsed_ns field to HTTP headers X-ClickHouse-Progress and X-ClickHouse-Summary. #54179 (joelynch).
  • Implementation of reconfig (https://github.com/ClickHouse/ClickHouse/pull/49450), sync, and exists commands for keeper-client. #54201 (pufit).
  • clickhouse-local and clickhouse-client now allow specifying the --query parameter multiple times, e.g. ./clickhouse-client --query "SELECT 1" --query "SELECT 2". This syntax is slightly more intuitive than ./clickhouse-client --multiquery "SELECT 1; SELECT 2", a bit easier to script (e.g. queries.push_back('--query "$q"')) and more consistent with the behavior of the existing parameter --queries-file (e.g. ./clickhouse client --queries-file queries1.sql --queries-file queries2.sql). #54249 (Robert Schulze).
  • Add sub-second precision to formatReadableTimeDelta. #54250 (Andrey Zvonov).
  • Enable allow_remove_stale_moving_parts by default. #54260 (vdimir).
  • Fix using count from cache and improve progress bar for reading from archives. #54271 (Kruglov Pavel).
  • Add support for S3 credentials using SSO. To define a profile to be used with SSO, set AWS_PROFILE environment variable. #54347 (Antonio Andelic).
  • Support NULL as default for nested types Array/Tuple/Map for input formats. Closes #51100. #54351 (Kruglov Pavel).
  • Allow reading some unusual configuration of chunks from Arrow/Parquet formats. #54370 (Arthur Passos).
  • Add STD alias to stddevPop function for MySQL compatibility. Closes #54274. #54382 (Nikolay Degterinsky).
  • Add addDate function for compatibility with MySQL and subDate for consistency. Reference #54275. #54400 (Nikolay Degterinsky).
  • Support SAMPLE BY for views. #54477 (Azat Khuzhin).
  • Add modification_time into system.detached_parts. #54506 (Azat Khuzhin).
  • Added a setting splitby_max_substrings_includes_remaining_string which controls whether functions "splitBy*()" with argument "max_substrings" > 0 include the remaining string (if any) in the result array (Python/Spark semantics) or not. The default behavior does not change. See the example after this list. #54518 (Robert Schulze).
  • Better integer types inference for Int64/UInt64 fields. Continuation of #53003. Now it works also for nested types like Arrays of Arrays and for functions like map/tuple. Issue: #51236. #54553 (Kruglov Pavel).
  • Added array operations for multiplying, dividing and modulo by a scalar. It works in both directions, e.g. both 5 * [5, 5] and [5, 5] * 5 are possible. #54608 (Yarik Briukhovetskyi).
  • Add optional version argument to rm command in keeper-client to support safer deletes. #54708 (János Benjamin Antal).
  • Disable killing the server by systemd (that may lead to data loss when using Buffer tables). #54744 (Azat Khuzhin).
  • Added field is_deterministic to system table system.functions which indicates whether the result of a function is stable between two invocations (given exactly the same inputs) or not. #54766 #55035 (Robert Schulze).
  • Made the views in schema information_schema more compatible with the equivalent views in MySQL (i.e. modified and extended them) up to a point where Tableau Online is able to connect to ClickHouse. More specifically: 1. The type of field information_schema.tables.table_type changed from Enum8 to String. 2. Added fields table_comment and table_collation to view information_schema.table. 3. Added views information_schema.key_column_usage and referential_constraints. 4. Replaced uppercase aliases in information_schema views with concrete uppercase columns. #54773 (Serge Klochkov).
  • The query cache now returns an error if the user tries to cache the result of a query with a non-deterministic function such as now, randomString and dictGet. Compared to the previous behavior (silently don't cache the result), this reduces confusion and surprise for users. #54801 (Robert Schulze).
  • Forbid special columns like materialized/ephemeral/alias for file/s3/url/... storages, fix insert into ephemeral columns from files. Closes #53477. #54803 (Kruglov Pavel).
  • More configurable collecting metadata for backup. #54804 (Vitaly Baranov).
  • clickhouse-local's log file (if enabled with --server_logs_file flag) will now prefix each line with timestamp, thread id, etc, just like clickhouse-server. #54807 (Michael Kolupaev).
  • The field is_obsolete in the system.merge_tree_settings table is now 1 for obsolete merge tree settings. Previously, only the description indicated that the setting is obsolete. #54837 (Robert Schulze).
  • Make it possible to use the plural form in interval literals: INTERVAL 2 HOURS is now equivalent to INTERVAL 2 HOUR. #54860 (Jordi Villar).
  • Always allow the creation of a projection with Nullable PK. This fixes #54814. #54895 (Amos Bird).
  • Retry backup's S3 operations after connection reset failure. #54900 (Vitaly Baranov).
  • Make the exception message exact in case the maximum value of a setting is less than the minimum value. #54925 (János Benjamin Antal).
  • LIKE, match, and other regular expressions matching functions now allow matching with patterns containing non-UTF-8 substrings by falling back to binary matching. Example: you can use string LIKE '\xFE\xFF%' to detect BOM. This closes #54486. #54942 (Alexey Milovidov).
  • Added ContextLockWaitMicroseconds profile event. #55029 (Maksim Kita).
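
An illustration of the splitby_max_substrings_includes_remaining_string setting described above; the expected results assume the Python/Spark-style semantics stated in the entry:

```sql
SELECT splitByChar(',', 'a,b,c,d', 2)
SETTINGS splitby_max_substrings_includes_remaining_string = 1;
-- expected: ['a', 'b,c,d']  (with the setting disabled, the default: ['a', 'b'])
```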

Build/Testing/Packaging Improvement

  • Bumped the compiler of official and continuous integration builds of ClickHouse from Clang 16 to 17. #53831 (Robert Schulze).
  • Regenerated tld data for lookups (tldLookup.generated.cpp). #54269 (Bharat Nallan).
  • Remove the redundant clickhouse-keeper-client symlink. #54587 (Tomas Barton).
  • Use /usr/bin/env to resolve bash - now it supports NixOS. #54603 (Fionera).
  • CMake added PROFILE_CPU option needed to perform perf record without using a DWARF call graph. #54917 (Maksim Kita).
  • If the linker is different than LLD, stop with a fatal error. #55036 (Alexey Milovidov).
  • Replaced the library to handle (encode/decode) base64 values from Turbo-Base64 to aklomp-base64. Both are SIMD-accelerated on x86 and ARM but 1. the license of the latter (BSD-2) is more favorable for ClickHouse, while Turbo-Base64 switched in the meantime to GPL-3, 2. with more GitHub stars, aklomp-base64 seems more future-proof, 3. aklomp-base64 has a slightly nicer API (which is arguably subjective), and 4. aklomp-base64 does not require us to hack around bugs (like non-threadsafe initialization). Note: aklomp-base64 rejects unpadded base64 values whereas Turbo-Base64 decodes them on a best-effort basis. RFC-4648 leaves it open whether padding is mandatory or not, but depending on the context this may be a behavioral change to be aware of. #54119 (Mikhail Koviazin).

Bug Fix (user-visible misbehavior in an official stable release)

ClickHouse release 23.8 LTS, 2023-08-31

Backward Incompatible Change

  • If a dynamic disk contains a name, it should be specified as disk = disk(name = 'disk_name', ...) in disk function arguments. In previous version it could be specified as disk = disk_<disk_name>(...), which is no longer supported. #52820 (Kseniia Sumarokova).
  • clickhouse-benchmark will establish connections in parallel when invoked with --concurrency more than one. Previously it was unusable if you ran it with 1000 concurrent connections from Europe to the US. Correct calculation of QPS for connections with high latency. Backward incompatible change: the option for JSON output of clickhouse-benchmark is removed. If you've used this option, you can also extract data from the system.query_log in JSON format as a workaround. #53293 (Alexey Milovidov).
  • The microseconds column is removed from the system.text_log, and the milliseconds column is removed from the system.metric_log, because they are redundant in the presence of the event_time_microseconds column. #53601 (Alexey Milovidov).
  • Deprecate the metadata cache feature. It is experimental and we have never used it. The feature is dangerous: #51182. Remove the system.merge_tree_metadata_cache system table. The metadata cache is still available in this version but will be removed soon. This closes #39197. #51303 (Alexey Milovidov).
  • Disable support for 3DES in TLS connections. #52893 (Kenji Noguchi).

New Feature

  • Direct import from zip/7z/tar archives. Example: file('*.zip :: *.csv'). #50321 (nikitakeba).
  • Add column ptr to system.trace_log for trace_type = 'MemorySample'. This column contains an address of allocation. Added function flameGraph which can build flamegraph containing allocated and not released memory. Reworking of #38391. #45322 (Nikolai Kochetov).
  • Added table function azureBlobStorageCluster. The supported set of features is very similar to table function s3Cluster. #50795 (SmitaRKulkarni).
  • Allow using cluster, clusterAllReplicas, remote, and remoteSecure without a table name. Closes #50808. #50848 (Yangkuan Liu).
  • A system table to monitor Kafka consumers. #50999 (Ilya Golshtein).
  • Added max_sessions_for_user setting. #51724 (Alexey Gerasimchuck).
  • New functions toUTCTimestamp/fromUTCTimestamp to act the same as Spark's to_utc_timestamp/from_utc_timestamp. #52117 (KevinyhZou).
  • Add new functions structureToCapnProtoSchema/structureToProtobufSchema that convert a ClickHouse table structure to a CapnProto/Protobuf format schema. Allow input/output of data in CapnProto/Protobuf format without an external format schema, using the autogenerated schema from the table structure (controlled by the settings format_capn_proto_use_autogenerated_schema/format_protobuf_use_autogenerated_schema). Allow exporting the autogenerated schema during input/output using the setting output_format_schema. #52278 (Kruglov Pavel).
  • A new field query_cache_usage in system.query_log now shows if and how the query cache was used. #52384 (Robert Schulze).
  • Add new function startsWithUTF8 and endsWithUTF8. #52555 (李扬).
  • Allow variable number of columns in TSV/CustomSeparated/JSONCompactEachRow, make schema inference work with variable number of columns. Add settings input_format_tsv_allow_variable_number_of_columns, input_format_custom_allow_variable_number_of_columns, input_format_json_compact_allow_variable_number_of_columns. #52692 (Kruglov Pavel).
  • Added SYSTEM STOP/START PULLING REPLICATION LOG queries (for testing ReplicatedMergeTree). #52881 (Alexander Tokmakov).
  • Allow to execute constant non-deterministic functions in mutations on initiator. #53129 (Anton Popov).
  • Add input format One that doesn't read any data and always returns single row with column dummy with type UInt8 and value 0 like system.one. It can be used together with _file/_path virtual columns to list files in file/s3/url/hdfs/etc table functions without reading any data. #53209 (Kruglov Pavel).
  • Add tupleConcat function (see the example after this list). Closes #52759. #53239 (Nikolay Degterinsky).
  • Support TRUNCATE DATABASE operation. #53261 (Bharat Nallan).
  • Add max_threads_for_indexes setting to limit number of threads used for primary key processing. #53313 (jorisgio).
  • Re-add SipHash keyed functions. #53525 (Salvatore Mesoraca).
  • (#52755, #52895) Added functions arrayRotateLeft, arrayRotateRight, arrayShiftLeft, arrayShiftRight. #53557 (Mikhail Koviazin).
  • Add column name to system.clusters as an alias to cluster. #53605 (irenjj).
  • The advanced dashboard now allows mass editing (save/load). #53608 (Alexey Milovidov).
  • The advanced dashboard now has an option to maximize charts and move them around. #53622 (Alexey Milovidov).
  • Added support for adding and subtracting arrays: [5,2] + [1,7]. Division and multiplication were not implemented due to confusion between pointwise multiplication and the scalar product of arguments. Closes #49939. #52625 (Yarik Briukhovetskyi).
  • Add support for string literals as table names. Closes #52178. #52635 (hendrik-m).
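
A quick illustration of the new tupleConcat function mentioned above; the tuple values are arbitrary:

```sql
SELECT tupleConcat((1, 'a'), (2, 'b')) AS t;
-- expected: (1, 'a', 2, 'b')
```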

Experimental Feature

  • Add new table engine S3Queue for streaming data import from s3. Closes #37012. #49086 (s-kat). It is not ready to use. Do not use it.
  • Enable parallel reading from replicas over distributed table. Related to #49708. #53005 (Igor Nikonov).
  • Add experimental support for HNSW as approximate neighbor search method. #53447 (Davit Vardanyan). This is currently intended for those who continue working on the implementation. Do not use it.

Performance Improvement

  • Parquet filter pushdown. I.e. when reading Parquet files, row groups (chunks of the file) are skipped based on the WHERE condition and the min/max values in each column. In particular, if the file is roughly sorted by some column, queries that filter by a short range of that column will be much faster. #52951 (Michael Kolupaev).
  • Optimize reading small row groups by batching them together in Parquet. Closes #53069. #53281 (Kruglov Pavel).
  • Optimize count from files in most input formats. Closes #44334. #53637 (Kruglov Pavel).
  • Use filter by file/path before reading in url/file/hdfs table functions (see the sketch after this list). #53529 (Kruglov Pavel).
  • Enable JIT compilation for AArch64, PowerPC, SystemZ, RISC-V. #38217 (Maksim Kita).
  • Add setting rewrite_count_distinct_if_with_count_distinct_implementation to rewrite countDistinctIf with count_distinct_implementation. Closes #30642. #46051 (flynn).
  • Speed up merging of states of uniq and uniqExact aggregate functions by parallelizing conversion before merge. #50748 (Jiebin Sun).
  • Optimize aggregation performance of nullable string key when using a large number of variable length keys. #51399 (LiuNeng).
  • Add a pass in Analyzer for time filter optimization with preimage. The performance experiments of SSB on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) show that this change could bring an improvement of 8.5% to the geomean QPS when the experimental analyzer is enabled. #52091 (Zhiguo Zhou).
  • Optimize the merge if all hash sets are single-level in the uniqExact (COUNT DISTINCT) function. #52973 (Jiebin Sun).
  • Join table engine: do not clone hash join data structure with all columns. #53046 (Duc Canh Le).
  • Implement native ORC input format without the "apache arrow" library to improve performance. #53324 (李扬).
  • The dashboard will tell the server to compress the data, which is useful for large time frames over slow internet connections. For example, one chart with 86400 points can be 1.5 MB uncompressed and 60 KB compressed with br. #53569 (Alexey Milovidov).
  • Better utilization of thread pool for BACKUPs and RESTOREs. #53649 (Nikita Mikhaylov).
  • Load filesystem cache metadata on startup in parallel. Configured by load_metadata_threads (default: 1) cache config setting. Related to #52037. #52943 (Kseniia Sumarokova).
  • Improve move_primary_key_columns_to_end_of_prewhere. #53337 (Han Fei).
  • This optimizes the interaction with ClickHouse Keeper. Previously the caller could register the same watch callback multiple times. In that case each entry was consuming memory and the same callback was called multiple times which didn't make much sense. In order to avoid this the caller could have some logic to not add the same watch multiple times. With this change this deduplication is done internally if the watch callback is passed via shared_ptr. #53452 (Alexander Gololobov).
  • Cache number of rows in files for count in file/s3/url/hdfs/azure functions. The cache can be enabled/disabled by setting use_cache_for_count_from_files (enabled by default). Continuation of https://github.com/ClickHouse/ClickHouse/pull/53637. #53692 (Kruglov Pavel).
  • More careful thread management will improve the speed of the S3 table function over a large number of files by more than ~25%. #53668 (pufit).
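
A sketch of the _file/_path filtering mentioned above; the glob and the filter value are hypothetical. The filter on the virtual column is applied before the files are read, so non-matching files are skipped entirely:

```sql
SELECT count()
FROM file('logs/*.csv', 'CSV')
WHERE _file LIKE '2023-08-%';
```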

Improvement

  • Add stderr_reaction configuration/setting to control the reaction (none, log or throw) when an external command writes data to stderr. This helps make debugging external commands easier. #43210 (Amos Bird).
  • Add the partition column to the system.part_log and system.merges tables. #48990 (Jianfei Hu).
  • The sizes of the (index) uncompressed/mark, mmap and query caches can now be configured dynamically at runtime (without server restart). #51446 (Robert Schulze).
  • If a dictionary is created with a complex key, automatically choose the "complex key" layout variant. #49587 (xiebin).
  • Add setting use_concurrency_control for better testing of the new concurrency control feature. #49618 (Alexey Milovidov).
  • Added suggestions for mistyped names for databases and tables. #49801 (Yarik Briukhovetskyi).
  • Reading small files from HDFS via Gluten was slower than querying them directly with Spark; this has been improved. #50063 (KevinyhZou).
  • Reduced the excessive number of useless error log messages after session expiration. #50171 (helifu).
  • Introduce fallback ZooKeeper sessions which are time-bound. Fixed index column in system.zookeeper_connection for DNS addresses. #50424 (Anton Kozlov).
  • Add ability to log when max_partitions_per_insert_block is reached. #50948 (Sean Haynes).
  • Added a bunch of custom commands to clickhouse-keeper-client (mostly to make ClickHouse debugging easier). #51117 (pufit).
  • Updated the check for the connection string in the azureBlobStorage table function, as a connection string with "sas" does not always begin with the default endpoint, and updated the connection URL to include the "sas" token after adding Azure's container to the URL. #51141 (SmitaRKulkarni).
  • Fix description for filtering sets in the full_sorting_merge JOIN algorithm. #51329 (Tanay Tummalapalli).
  • Fixed memory consumption in Aggregator when max_block_size is huge. #51566 (Nikita Taranov).
  • Add SYSTEM SYNC FILESYSTEM CACHE command. It will compare in-memory state of filesystem cache with what it has on disk and fix in-memory state if needed. This is only needed if you are making manual interventions in on-disk data, which is highly discouraged. #51622 (Kseniia Sumarokova).
  • Attempt to create a generic proxy resolver for CH while keeping backwards compatibility with existing S3 storage conf proxy resolver. #51749 (Arthur Passos).
  • Support reading tuple subcolumns from file/s3/hdfs/url/azureBlobStorage table functions. #51806 (Kruglov Pavel).
  • Function arrayIntersect now returns the values in the order corresponding to the first argument. Closes #27622. #51850 (Yarik Briukhovetskyi).
  • Add new queries which allow creating/dropping access entities in a specified access storage, or moving access entities from one access storage to another. #51912 (pufit).
  • Make ALTER TABLE FREEZE queries not replicated in the Replicated database engine. #52064 (Mike Kot).
  • Added possibility to flush system tables on unexpected shutdown. #52174 (Alexey Gerasimchuck).
  • Fix the case when the s3 table function refused to work with pre-signed URLs. Closes #50846. #52310 (chen).
  • Add column name as an alias to event and metric in the system.events and system.metrics tables. Closes #51257. #52315 (chen).
  • Added support of syntax CREATE UNIQUE INDEX in parser as a no-op for better SQL compatibility. UNIQUE index is not supported. Set create_index_ignore_unique = 1 to ignore UNIQUE keyword in queries. #52320 (Ilya Yatsishin).
  • Add support of predefined macro ({database} and {table}) in some Kafka engine settings: topic, consumer, client_id, etc. #52386 (Yury Bogomolov).
  • Disable updating the filesystem cache during backup/restore. Filesystem cache must not be updated during backup/restore, it seems it just slows down the process without any profit (because the BACKUP command can read a lot of data and it's no use to put all the data to the filesystem cache and immediately evict it). #52402 (Vitaly Baranov).
  • The configuration of the S3 endpoint now allows using it from the root, and '/' is appended automatically if needed. #47809. #52600 (xiaolei565).
  • For clickhouse-local allow positional options and populate global UDF settings (user_scripts_path and user_defined_executable_functions_config). #52643 (Yakov Olkhovskiy).
  • system.asynchronous_metrics now includes metrics "QueryCacheEntries" and "QueryCacheBytes" to inspect the query cache. #52650 (Robert Schulze).
  • Added possibility to use s3_storage_class parameter in the SETTINGS clause of the BACKUP statement for backups to S3. #52658 (Roman Vasin).
  • Add utility print-backup-info.py which parses a backup metadata file and prints information about the backup. #52690 (Vitaly Baranov).
  • Closes #49510. Currently we have database and table names case-sensitive, but BI tools query information_schema sometimes in lowercase, sometimes in uppercase. For this reason we have information_schema database, containing lowercase tables, such as information_schema.tables and INFORMATION_SCHEMA database, containing uppercase tables, such as INFORMATION_SCHEMA.TABLES. But some tools are querying INFORMATION_SCHEMA.tables and information_schema.TABLES. The proposed solution is to duplicate both lowercase and uppercase tables in lowercase and uppercase information_schema database. #52695 (Yarik Briukhovetskyi).
  • The query CHECK TABLE has better performance and usability (it sends progress updates and is cancellable). #52745 (vdimir).
  • Add support for modulo, intDiv, intDivOrZero for tuples by distributing them across the tuple's elements (see the example after this list). #52758 (Yakov Olkhovskiy).
  • Search for default yaml and yml configs in clickhouse-client after xml. #52767 (Alexey Milovidov).
  • When merging into a non-'clickhouse'-rooted configuration, configs with a different root node name are now just bypassed without an exception. #52770 (Yakov Olkhovskiy).
  • Now it's possible to specify min (memory_profiler_sample_min_allocation_size) and max (memory_profiler_sample_max_allocation_size) size for allocations to be tracked with sampling memory profiler. #52779 (alesapin).
  • Add precise_float_parsing setting to switch float parsing methods (fast/precise). #52791 (Andrey Zvonov).
  • Use the same default paths for clickhouse-keeper (symlink) as for clickhouse-keeper (executable). #52861 (Vitaly Baranov).
  • Improve error message for table function remote. Closes #40220. #52959 (jiyoungyoooo).
  • Added the possibility to specify custom storage policy in the SETTINGS clause of RESTORE queries. #52970 (Victor Krasnov).
  • Add the ability to throttle the S3 requests on backup operations (BACKUP and RESTORE commands now honor s3_max_[get/put]_[rps/burst]). #52974 (Daniel Pozo Escalona).
  • Add settings to ignore ON CLUSTER clause in queries for management of replicated user-defined functions or access control entities with replicated storage. #52975 (Aleksei Filatov).
  • EXPLAIN actions for JOIN step. #53006 (Maksim Kita).
  • Make hasTokenOrNull and hasTokenCaseInsensitiveOrNull return null for empty needles. #53059 (ltrk2).
  • Allow to restrict allowed paths for filesystem caches. Mainly useful for dynamic disks. If in server config filesystem_caches_path is specified, all filesystem caches' paths will be restricted to this directory. E.g. if the path in cache config is relative - it will be put in filesystem_caches_path; if path in cache config is absolute, it will be required to lie inside filesystem_caches_path. If filesystem_caches_path is not specified in config, then behaviour will be the same as in earlier versions. #53124 (Kseniia Sumarokova).
  • Added a bunch of custom commands (mostly to make ClickHouse debugging easier). #53127 (pufit).
  • Add diagnostic info about file name during schema inference - it helps when you process multiple files with globs. #53135 (Alexey Milovidov).
  • Client will load suggestions using the main connection if the second connection is not allowed to create a session. #53177 (Alexey Gerasimchuck).
  • Add EXCEPT clause to SYSTEM STOP/START LISTEN QUERIES [ALL/DEFAULT/CUSTOM] query, for example SYSTEM STOP LISTEN QUERIES ALL EXCEPT TCP, HTTP. #53280 (Nikolay Degterinsky).
  • Change the default of max_concurrent_queries from 100 to 1000. It's ok to have many concurrent queries if they are not heavy, and mostly waiting for the network. Note: don't confuse concurrent queries and QPS: for example, ClickHouse server can do tens of thousands of QPS with less than 100 concurrent queries. #53285 (Alexey Milovidov).
  • Limit number of concurrent background partition optimize merges. #53405 (Duc Canh Le).
  • Added a setting allow_moving_table_directory_to_trash that allows to ignore Directory for table data already exists error when replicating/recovering a Replicated database. #53425 (Alexander Tokmakov).
  • If server settings asynchronous_metrics_update_period_s and asynchronous_heavy_metrics_update_period_s are misconfigured to 0, it will now fail gracefully instead of terminating the application. #53428 (Robert Schulze).
  • The ClickHouse server now respects memory limits changed via cgroups when reloading its configuration. #53455 (Robert Schulze).
  • Add ability to turn off flush of Distributed tables on DETACH, DROP, or server shutdown. #53501 (Azat Khuzhin).
  • The domainRFC function now supports IPv6 in square brackets. #53506 (Chen768959).
  • Use longer timeout for S3 CopyObject requests, which are used in backups. #53533 (Michael Kolupaev).
  • Added server setting aggregate_function_group_array_max_element_size. This setting is used to limit array size for groupArray function at serialization. The default value is 16777215. #53550 (Nikolai Kochetov).
  • SCHEMA() was added as alias for DATABASE() to improve MySQL compatibility. #53587 (Daniël van Eeden).
  • Add asynchronous metrics about tables in the system database. For example, TotalBytesOfMergeTreeTablesSystem. This closes #53603. #53604 (Alexey Milovidov).
  • SQL editor in the Play UI and Dashboard will not use Grammarly. #53614 (Alexey Milovidov).
  • As expert-level settings, it is now possible to (1) configure the size_ratio (i.e. the relative size of the protected queue) of the [index] mark/uncompressed caches, (2) configure the cache policy of the index mark and index uncompressed caches. #53657 (Robert Schulze).
  • Added client info validation to the query packet in TCPHandler. #53673 (Alexey Gerasimchuck).
  • Retry loading parts in case of network errors while interaction with Microsoft Azure. #53750 (SmitaRKulkarni).
  • Stack traces for exceptions are preserved, and materialized view exceptions are propagated. #53766 (Ilya Golshtein).
  • If no hostname or port were specified, keeper client will try to search for a connection string in ClickHouse's config.xml. #53769 (pufit).
  • Add profile event PartsLockMicroseconds which shows the amount of microseconds we hold the data parts lock in MergeTree table engine family. #53797 (alesapin).
  • Make the reconnect limit in RAFT configurable for Keeper. This configuration can help Keeper rebuild connections with peers more quickly if the current connection is broken. #53817 (Pengyuan Bian).
  • Ignore foreign keys in table definitions to improve compatibility with MySQL, so a user would not need to rewrite the foreign key part of their SQL. Ref #53380. #53864 (jsc0218).
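
An illustration of the element-wise tuple arithmetic mentioned above (modulo/intDiv for tuples); the expected results follow from applying the operation to each pair of elements:

```sql
SELECT
    (10, 7) % (3, 4)        AS mod_result,  -- expected: (1, 3)
    intDiv((10, 7), (3, 4)) AS div_result;  -- expected: (3, 1)
```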

Build/Testing/Packaging Improvement

Bug Fix (user-visible misbehavior in an official stable release)

ClickHouse release 23.7, 2023-07-27

Backward Incompatible Change

  • Add NAMED COLLECTION access type (aliases USE NAMED COLLECTION, NAMED COLLECTION USAGE). This PR is backward incompatible because this access type is disabled by default (because a parent access type NAMED COLLECTION ADMIN is disabled by default as well). Proposed in #50277. To grant use GRANT NAMED COLLECTION ON collection_name TO user or GRANT NAMED COLLECTION ON * TO user, to be able to give these grants named_collection_admin is required in config (previously it was named named_collection_control, so will remain as an alias). #50625 (Kseniia Sumarokova).
  • Fixed a typo in the system.parts column name last_removal_attemp_time. Now it is named last_removal_attempt_time. #52104 (filimonov).
  • Bump the version of distributed_ddl_entry_format_version to 5 by default (enables opentelemetry and initial_query_id pass-through). This will prevent processing of existing distributed DDL entries after a downgrade (but note that usually there should be no such unprocessed entries). #52128 (Azat Khuzhin).
  • Check projection metadata the same way we check ordinary metadata. This change may prevent the server from starting in case there was a table with an invalid projection. An example is a projection that created positional columns in PK (e.g. projection p (select * order by 1, 4) which is not allowed in table PK and can cause a crash during insert/merge). Drop such projections before the update. Fixes #52353. #52361 (Nikolai Kochetov).
  • The experimental feature hashid is removed due to a bug. The quality of implementation was questionable at the start, and it didn't get through the experimental status. This closes #52406. #52449 (Alexey Milovidov).

New Feature

  • Added the Overlay database engine to combine multiple databases into one. Added the Filesystem database engine to represent a directory in the filesystem as a set of implicitly available tables with auto-detected formats and structures. A new S3 database engine allows read-only interaction with S3 storage by representing a prefix as a set of tables. A new HDFS database engine allows interacting with HDFS storage in the same way. #48821 (alekseygolub).
  • Add support for external disks in Keeper for storing snapshots and logs. #50098 (Antonio Andelic).
  • Add support for multi-directory selection ({}) globs. #50559 (Andrey Zvonov).
  • Kafka connector can fetch Avro schema from schema registry with basic authentication using url-encoded credentials. #49664 (Ilya Golshtein).
  • Add function arrayJaccardIndex which computes the Jaccard similarity between two arrays. #50076 (FFFFFFFHHHHHHH).
  • Add a column is_obsolete to system.settings and similar tables. Closes #50819. #50826 (flynn).
  • Implement support of encrypted elements in configuration file. Added possibility to use encrypted text in leaf elements of configuration file. The text is encrypted using encryption codecs from <encryption_codecs> section. #50986 (Roman Vasin).
  • Grace Hash Join algorithm is now applicable to FULL and RIGHT JOINs. #49483. #51013 (lgbo).
  • Add SYSTEM STOP LISTEN query for more graceful termination. Closes #47972. #51016 (Nikolay Degterinsky).
  • Add input_format_csv_allow_variable_number_of_columns options. #51273 (Dmitry Kardymon).
  • Another boring feature: add function substring_index, as in Spark or MySQL. #51472 (李扬).
  • A system table jemalloc_bins to show stats for jemalloc bins. Example SELECT *, size * (nmalloc - ndalloc) AS allocated_bytes FROM system.jemalloc_bins WHERE allocated_bytes > 0 ORDER BY allocated_bytes DESC LIMIT 10. Enjoy. #51674 (Alexander Gololobov).
  • Add RowBinaryWithDefaults format with extra byte before each column as a flag for using the column's default value. Closes #50854. #51695 (Kruglov Pavel).
  • Added default_temporary_table_engine setting. Same as default_table_engine but for temporary tables. #51292. #51708 (velavokr).
  • Added new initcap / initcapUTF8 functions which convert the first letter of each word to upper case and the rest to lower case. #51735 (Dmitry Kardymon).
  • CREATE TABLE now supports PRIMARY KEY syntax in the column definition. Columns are added to the primary index in the same order they are defined. #51881 (Ilya Yatsishin).
  • Added the possibility to use date and time format specifiers in log and error log file names, either in config files (log and errorlog tags) or command line arguments (--log-file and --errorlog-file). #51945 (Victor Krasnov).
  • Added Peak Memory Usage statistic to HTTP headers. #51946 (Dmitry Kardymon).
  • Added new hasSubsequence (+CaseInsensitive and UTF8 versions) functions to match subsequences in strings. #52050 (Dmitry Kardymon).
  • Add array_agg as alias of groupArray for PostgreSQL compatibility. Closes #52100. #52135 (flynn).
  • Add any_value as a compatibility alias for any aggregate function. Closes #52140. #52147 (flynn).
  • Add aggregate function array_concat_agg for compatibility with BigQuery, it's alias of groupArrayArray. Closes #52139. #52149 (flynn).
  • Add OCTET_LENGTH as an alias to length. Closes #52153. #52176 (FFFFFFFHHHHHHH).
  • Added the firstLine function to extract the first line from a multi-line string (see the example after this list). This closes #51172. #52209 (Mikhail Koviazin).
  • Implement KQL-style formatting for the Interval data type. This is only needed for compatibility with the Kusto query language. #45671 (ltrk2).
  • Added query SYSTEM FLUSH ASYNC INSERT QUEUE which flushes all pending asynchronous inserts to the destination tables. Added a server-side setting async_insert_queue_flush_on_shutdown (true by default) which determines whether to flush queue of asynchronous inserts on graceful shutdown. Setting async_insert_threads is now a server-side setting. #49160 (Anton Popov).
  • Aliases current_database and a new function current_schemas for compatibility with PostgreSQL. #51076 (Pedro Riera).
  • Add alias for functions today (now available under the curdate/current_date names) and now (current_timestamp). #52106 (Lloyd-Pottiger).
  • Support async_deduplication_token for async insert. #52136 (Han Fei).
  • Add a new setting disable_url_encoding that allows disabling decoding/encoding of the path in the URI in the URL engine. #52337 (Kruglov Pavel).
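
A quick example of the new firstLine function mentioned above; the string literal is arbitrary:

```sql
SELECT firstLine('first line\nsecond line');
-- expected: 'first line'
```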

Performance Improvement

  • Enable automatic selection of the sparse serialization format by default. It improves performance. The format is supported since version 22.1. After this change, downgrading to versions older than 22.1 might not be possible. You can turn off the usage of the sparse serialization format by providing the ratio_of_defaults_for_sparse_serialization = 1 setting for your MergeTree tables (see the sketch after this list). #49631 (Alexey Milovidov).
  • Enable move_all_conditions_to_prewhere and enable_multiple_prewhere_read_steps settings by default. #46365 (Alexander Gololobov).
  • Improves performance of some queries by tuning allocator. #46416 (Azat Khuzhin).
  • Now we use fixed-size tasks in MergeTreePrefetchedReadPool as in MergeTreeReadPool. Also from now we use connection pool for S3 requests. #49732 (Nikita Taranov).
  • More pushdown to the right side of join. #50532 (Nikita Taranov).
  • Improve grace_hash join by reserving hash table's size (resubmit). #50875 (lgbo).
  • Waiting on lock in OpenedFileCache could be noticeable sometimes. We sharded it into multiple sub-maps (each with its own lock) to avoid contention. #51341 (Nikita Taranov).
  • Move conditions with primary key columns to the end of PREWHERE chain. The idea is that conditions with PK columns are likely to be used in PK analysis and will not contribute much more to PREWHERE filtering. #51958 (Alexander Gololobov).
  • Speed up COUNT(DISTINCT) for String types by inlining SipHash. The performance experiments of OnTime on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) show that this change could bring an improvement of 11.6% to the QPS of the query Q8 while having no impact on others. #52036 (Zhiguo Zhou).
  • Enable allow_vertical_merges_from_compact_to_wide_parts by default. It will save memory usage during merges. #52295 (Alexey Milovidov).
  • Fix incorrect projection analysis which invalidates primary keys. This issue only exists when query_plan_optimize_primary_key = 1, query_plan_optimize_projection = 1. This fixes #48823. This fixes #51173. #52308 (Amos Bird).
  • Reduce the number of syscalls in FileCache::loadMetadata - this speeds up server startup if the filesystem cache is configured. #52435 (Raúl Marín).
  • Allow to have strict lower boundary for file segment size by downloading remaining data in the background. Minimum size of file segment (if actual file size is bigger) is configured as cache configuration setting boundary_alignment, by default 4Mi. Number of background threads are configured as cache configuration setting background_download_threads, by default 2. Also max_file_segment_size was increased from 8Mi to 32Mi in this PR. #51000 (Kseniia Sumarokova).
  • Decreased default timeouts for S3 from 30 seconds to 3 seconds, and for other HTTP from 180 seconds to 30 seconds. #51171 (Michael Kolupaev).
  • New setting merge_tree_determine_task_size_by_prewhere_columns added. If set to true only sizes of the columns from PREWHERE section will be considered to determine reading task size. Otherwise all the columns from query are considered. #52606 (Nikita Taranov).
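
A minimal sketch of opting out of the sparse serialization format for an existing MergeTree table, as described in the first entry of this list; the table name is illustrative:

```sql
ALTER TABLE my_table MODIFY SETTING ratio_of_defaults_for_sparse_serialization = 1;
```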

Improvement

  • Use read_bytes/total_bytes_to_read for progress bar in s3/file/url/... table functions for better progress indication. #51286 (Kruglov Pavel).
  • Introduce a table setting wait_for_unique_parts_send_before_shutdown_ms which specifies the amount of time a replica will wait before closing the interserver handler for replicated sends. Also fix an inconsistency between the shutdown of tables and interserver handlers: now the server shuts down tables first and only afterwards shuts down interserver handlers. #51851 (alesapin).
  • Allow SQL standard FETCH without OFFSET (see the example after this list). See https://antonz.org/sql-fetch/. #51293 (Alexey Milovidov).
  • Allow filtering HTTP headers for the URL/S3 table functions with the new http_forbid_headers section in config. Both exact matching and regexp filters are available. #51038 (Nikolay Degterinsky).
  • Don't show messages about 16 EiB free space in logs, as they don't make sense. This closes #49320. #49342 (Alexey Milovidov).
  • Properly check the limit for the sleepEachRow function. Add a setting function_sleep_max_microseconds_per_block. This is needed for generic query fuzzer. #49343 (Alexey Milovidov).
  • Fix two issues in geoHash functions. #50066 (李扬).
  • Log async insert flush queries into system.query_log. #51160 (Raúl Marín).
  • Functions date_diff and age now support millisecond/microsecond unit and work with microsecond precision. #51291 (Dmitry Kardymon).
  • Improve parsing of path in clickhouse-keeper-client. #51359 (Azat Khuzhin).
  • A third-party product depending on ClickHouse (Gluten: a Plugin to Double SparkSQL's Performance) had a bug. This fix avoids heap overflow in that third-party product while reading from HDFS. #51386 (李扬).
  • Add ability to disable native copy for S3 (setting for BACKUP/RESTORE allow_s3_native_copy, and s3_allow_native_copy for s3/s3_plain disks). #51448 (Azat Khuzhin).
  • Add column primary_key_size to system.parts table to show compressed primary key size on disk. Closes #51400. #51496 (Yarik Briukhovetskyi).
  • Allow running clickhouse-local without procfs, without home directory existing, and without name resolution plugins from glibc. #51518 (Alexey Milovidov).
  • Add placeholder %a for the full filename in the rename_files_after_processing setting. #51603 (Kruglov Pavel).
  • Add column modification_time into system.parts_columns. #51685 (Azat Khuzhin).
  • Add new setting input_format_csv_use_default_on_bad_values to the CSV format that allows inserting a default value when parsing of a single field fails. #51716 (KevinyhZou).
  • Added flushing of the crash log to disk after an unexpected crash. #51720 (Alexey Gerasimchuck).
  • Fix behavior in dashboard page where errors unrelated to authentication are not shown. Also fix 'overlapping' chart behavior. #51744 (Zach Naimon).
  • Allow UUID to UInt128 conversion. #51765 (Dmitry Kardymon).
  • Added support for function range of Nullable arguments. #51767 (Dmitry Kardymon).
  • Convert conditions like toYear(x) = c to c1 <= x < c2. #51795 (Han Fei).
  • Improve MySQL compatibility of the statement SHOW INDEX. #51796 (Robert Schulze).
  • Fix use_structure_from_insertion_table_in_table_functions not working with MATERIALIZED and ALIAS columns. Closes #51817. Closes #51019. #51825 (flynn).
  • Cache dictionary now requests only unique keys from source. Closes #51762. #51853 (Maksim Kita).
  • Fixed the case when settings were not applied for EXPLAIN query when FORMAT was provided. #51859 (Nikita Taranov).
  • Allow SETTINGS before FORMAT in DESCRIBE TABLE query for compatibility with SELECT query. Closes #51544. #51899 (Nikolay Degterinsky).
  • Var-Int encoded integers (e.g. used by the native protocol) can now use the full 64-bit range. 3rd party clients are advised to update their var-int code accordingly. #51905 (Robert Schulze).
  • Update certificates when they change without the need to manually SYSTEM RELOAD CONFIG. #52030 (Mike Kot).
  • Added the allow_create_index_without_type setting that allows ignoring ADD INDEX queries without a specified TYPE. Standard SQL queries will just succeed without changing the table schema. #52056 (Ilya Yatsishin).
  • Log messages are written to the system.text_log from the server startup. #52113 (Dmitry Kardymon).
  • In cases where the HTTP endpoint has multiple IP addresses and the first of them is unreachable, a timeout exception was thrown. Session creation now handles all resolved endpoints. #52116 (Aleksei Filatov).
  • Avro input format now supports Union even if it contains only a single type. Closes #52131. #52137 (flynn).
  • Add setting optimize_use_implicit_projections to disable implicit projections (currently only min_max_count projection). #52152 (Amos Bird).
  • It was possible to cause an infinite loop with the function hasToken. This possibility has been removed. This closes #52156. #52160 (Alexey Milovidov).
  • Create ZK ancestors optimistically. #52195 (Raúl Marín).
  • Fix #50582. Avoid the Not found column ... in block error in some cases of reading in-order and constants. #52259 (Chen768959).
  • Check whether S2 geo primitives are invalid as early as possible on ClickHouse side. This closes: #27090. #52260 (Nikita Mikhaylov).
  • Add back missing projection QueryAccessInfo when query_plan_optimize_projection = 1. This fixes #50183 . This fixes #50093. #52327 (Amos Bird).
  • When ZooKeeperRetriesControl rethrows an error, it's more useful to see its original stack trace, not the one from ZooKeeperRetriesControl itself. #52347 (Vitaly Baranov).
  • Wait for zero copy replication lock even if some disks don't support it. #52376 (Raúl Marín).
  • Now interserver port will be closed only after tables are shut down. #52498 (alesapin).
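
An example of the SQL-standard FETCH clause used without a preceding OFFSET, as allowed by the entry above:

```sql
SELECT number
FROM numbers(100)
ORDER BY number
FETCH FIRST 10 ROWS ONLY;
```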

Experimental Feature

  • Writing parquet files is 10x faster, it's multi-threaded now. Almost the same speed as reading. #49367 (Michael Kolupaev). This is controlled by the setting output_format_parquet_use_custom_encoder which is disabled by default, because the feature is non-ideal.
  • Added support for PRQL as a query language. #50686 (János Benjamin Antal).
  • Allow to add disk name for custom disks. Previously custom disks would use an internal generated disk name. Now it will be possible with disk = disk_<name>(...) (e.g. disk will have name name) . #51552 (Kseniia Sumarokova). This syntax can be changed in this release.
  • (experimental MaterializedMySQL) Fixed crash when mysqlxx::Pool::Entry is used after it was disconnected. #52063 (Val Doroshchuk).
  • (experimental MaterializedMySQL) CREATE TABLE ... AS SELECT .. is now supported in MaterializedMySQL. #52067 (Val Doroshchuk).
  • (experimental MaterializedMySQL) Introduced automatic conversion of text types to utf8 for MaterializedMySQL. #52084 (Val Doroshchuk).
  • (experimental MaterializedMySQL) Now unquoted UTF-8 strings are supported in DDL for MaterializedMySQL. #52318 (Val Doroshchuk).
  • (experimental MaterializedMySQL) Now double quoted comments are supported in MaterializedMySQL. #52355 (Val Doroshchuk).
  • 1. Upgraded Intel QPL from v1.1.0 to v1.2.0. 2. Upgraded Intel accel-config from v3.5 to v4.0. 3. Fixed an issue where a Device IOTLB miss had a big performance impact for IAA accelerators. #52180 (jasperzhu).
  • The session_timezone setting (new in version 23.6) is demoted to experimental. #52445 (Alexey Milovidov).
  • Support ZooKeeper reconfig command for ClickHouse Keeper with incremental reconfiguration which can be enabled via keeper_server.enable_reconfiguration setting. Support adding servers, removing servers, and changing server priorities. #49450 (Mike Kot). It is suspected that this feature is incomplete.

Build/Testing/Packaging Improvement

  • Add experimental ClickHouse builds for Linux RISC-V 64 to CI. #31398 (Alexey Milovidov).
  • Add integration test check with the enabled Analyzer. #50926 #52210 (Dmitry Novik).
  • Reproducible builds for Rust. #52395 (Azat Khuzhin).
  • Update Cargo dependencies. #51721 (Raúl Marín).
  • Make the function CHColumnToArrowColumn::fillArrowArrayWithArrayColumnData to work with nullable arrays, which are not possible in ClickHouse, but needed for Gluten. #52112 (李扬).
  • We've updated the CCTZ library to master, but there are no user-visible changes. #52124 (Alexey Milovidov).
  • The system.licenses table now includes the hard-forked library Poco. This closes #52066. #52127 (Alexey Milovidov).
  • Check that there are no cases of bad punctuation: whitespace before a comma like Hello ,world instead of Hello, world. #52549 (Alexey Milovidov).

Bug Fix (user-visible misbehavior in an official stable release)

ClickHouse release 23.6, 2023-06-29

Backward Incompatible Change

  • Delete feature do_not_evict_index_and_mark_files in the fs cache. This feature was only making things worse. #51253 (Kseniia Sumarokova).
  • Remove ALTER support for experimental LIVE VIEW. #51287 (Alexey Milovidov).
  • Decrease the default values for http_max_field_value_size and http_max_field_name_size to 128 KiB. #51163 (Mikhail f. Shiryaev).
  • CGroups CPU-related metrics are replaced with one metric, CGroupMaxCPU, for better usability. The normalized CPU usage metrics will be normalized to CGroup limits instead of the total number of CPUs when limits are set. This closes #50836. #50835 (Alexey Milovidov).

New Feature

  • The function transform as well as CASE with value matching started to support all data types. This closes #29730. This closes #32387. This closes #50827. This closes #31336. This closes #40493. #51351 (Alexey Milovidov).
  • Added option --rename_files_after_processing <pattern>. This closes #34207. #49626 (alekseygolub).
  • Add support for the TRUNCATE modifier in the INTO OUTFILE clause. Suggest using APPEND or TRUNCATE for INTO OUTFILE when the file already exists; see the sketch after this list. #50950 (alekar).
  • Add table engine Redis and table function redis. It allows querying external Redis servers. #50150 (JackyWoo).
  • Allow to skip empty files in file/s3/url/hdfs table functions using settings s3_skip_empty_files, hdfs_skip_empty_files, engine_file_skip_empty_files, engine_url_skip_empty_files. #50364 (Kruglov Pavel).
  • Add a new setting named use_mysql_types_in_show_columns to alter the SHOW COLUMNS SQL statement to display MySQL equivalent types when a client is connected via the MySQL compatibility port. #49577 (Thomas Panetti).
  • Clickhouse-client can now be called with a connection string instead of "--host", "--port", "--user" etc. #50689 (Alexey Gerasimchuck).
  • Add setting session_timezone; it is used as the default timezone for a session when not explicitly specified. #44149 (Andrey Zvonov).
  • Codec DEFLATE_QPL is now controlled via server setting "enable_deflate_qpl_codec" (default: false) instead of setting "allow_experimental_codecs". This marks DEFLATE_QPL non-experimental. #50775 (Robert Schulze).
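
A short sketch of the new TRUNCATE modifier for INTO OUTFILE (the file name is arbitrary):

```sql
-- Without a modifier the query fails if 'result.csv' already exists;
-- APPEND adds to the existing file, TRUNCATE overwrites it.
SELECT number, number * 2 AS doubled
FROM numbers(10)
INTO OUTFILE 'result.csv' TRUNCATE
FORMAT CSV;
```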

Performance Improvement

  • Improved scheduling of merge selecting and cleanup tasks in ReplicatedMergeTree. The tasks will not be executed too frequently when there's nothing to merge or cleanup. Added settings max_merge_selecting_sleep_ms, merge_selecting_sleep_slowdown_factor, max_cleanup_delay_period and cleanup_thread_preferred_points_per_iteration. It should close #31919. #50107 (Alexander Tokmakov).
  • Make filter push down through cross join. #50605 (Han Fei).
  • Improve performance with enabled QueryProfiler using thread-local timer_id instead of global object. #48778 (Jiebin Sun).
  • Rewrite the CapnProto input/output format to improve its performance. Map column names and CapnProto fields case-insensitively; fix reading/writing of nested structure fields. #49752 (Kruglov Pavel).
  • Optimize parquet write performance for parallel threads. #50102 (Hongbin Ma).
  • Disable parallelize_output_from_storages for processing MATERIALIZED VIEWs and storages with one block only. #50214 (Azat Khuzhin).
  • Merge PR #46558. Avoid block permutation during sort if the block is already sorted. #50697 (Alexey Milovidov, Maksim Kita).
  • Make multiple list requests to ZooKeeper in parallel to speed up reading from system.zookeeper table. #51042 (Alexander Gololobov).
  • Speedup initialization of DateTime lookup tables for time zones. This should reduce startup/connect time of clickhouse-client especially in debug build as it is rather heavy. #51347 (Alexander Gololobov).
  • Fix slowness of data lakes caused by synchronous HEAD requests (related to Iceberg/DeltaLake/Hudi being slow with a lot of files). #50976 (Kseniia Sumarokova).
  • Do not read all the columns from the right table of a GLOBAL JOIN. #50721 (Nikolai Kochetov).

Experimental Feature

  • Support parallel replicas with the analyzer. #50441 (Raúl Marín).
  • Add random sleep before large merges/mutations execution to split load more evenly between replicas in case of zero-copy replication. #51282 (alesapin).
  • Do not replicate ALTER PARTITION queries and mutations through Replicated database if it has only one shard and the underlying table is ReplicatedMergeTree. #51049 (Alexander Tokmakov).

Improvement

  • Relax the thresholds for "too many parts" to be more modern. Return the backpressure during long-running insert queries. #50856 (Alexey Milovidov).
  • Allow to cast IPv6 to IPv4 address for CIDR ::ffff:0:0/96 (IPv4-mapped addresses). #49759 (Yakov Olkhovskiy).
  • Update MongoDB protocol to support MongoDB 5.1 version and newer. Support for the versions with the old protocol (<3.6) is preserved. Closes #45621, #49879. #50061 (Nikolay Degterinsky).
  • Add setting input_format_max_bytes_to_read_for_schema_inference to limit the number of bytes to read in schema inference. Closes #50577. #50592 (Kruglov Pavel).
  • Respect setting input_format_null_as_default in schema inference. #50602 (Kruglov Pavel).
  • Allow to skip trailing empty lines in CSV/TSV/CustomSeparated formats via settings input_format_csv_skip_trailing_empty_lines, input_format_tsv_skip_trailing_empty_lines and input_format_custom_skip_trailing_empty_lines (disabled by default). Closes #49315. #50635 (Kruglov Pavel).
  • Functions "toDateOrDefault|OrNull" and "accuateCast[OrDefault|OrNull]" now correctly parse numeric arguments. #50709 (Dmitry Kardymon).
  • Support CSV with whitespace or \t field delimiters (these delimiters are also supported by Spark). #50712 (KevinyhZou).
  • Settings number_of_mutations_to_delay and number_of_mutations_to_throw are enabled by default now with values 500 and 1000 respectively. #50726 (Anton Popov).
  • The dashboard correctly shows missing values. This closes #50831. #50832 (Alexey Milovidov).
  • Added the possibility to use date and time arguments in the syslog timestamp format in the functions parseDateTimeBestEffort* and parseDateTime64BestEffort*; see the sketch after this list. #50925 (Victor Krasnov).
  • Command line parameter "--password" in clickhouse-client can now be specified only once. #50966 (Alexey Gerasimchuck).
  • Use hash_of_all_files from system.parts to check identity of parts during on-cluster backups. #50997 (Vitaly Baranov).
  • In the system table zookeeper_connection, the connected_time column now identifies the time when the connection was established (in a standard format), and the new column session_uptime_elapsed_seconds shows the duration of the established connection session (in seconds). #51026 (郭小龙).
  • Improve the progress bar for file/s3/hdfs/url table functions by using chunk size from source data and using incremental total size counting in each thread. Fix the progress bar for *Cluster functions. This closes #47250. #51088 (Kruglov Pavel).
  • Add total_bytes_to_read to the Progress packet in TCP protocol for better Progress bar. #51158 (Kruglov Pavel).
  • Better checking of data parts on disks with filesystem cache. #51164 (Anton Popov).
  • Fix occasionally incorrect current_elements_num in the filesystem cache. #51242 (Kseniia Sumarokova).
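
A sketch of the syslog-style timestamp support mentioned above (syslog timestamps carry no year, so the functions infer it relative to the current date; the exact inference rules are described in the documentation):

```sql
-- "MMM d HH:MM:SS" is the classic syslog timestamp layout.
SELECT
    parseDateTimeBestEffort('Jun  8 12:34:56')       AS ts,
    parseDateTime64BestEffort('Jun  8 12:34:56.789') AS ts64;
```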

Build/Testing/Packaging Improvement

  • Add embedded keeper-client to standalone keeper binary. #50964 (pufit).
  • Actual LZ4 version is used now. #50621 (Nikita Taranov).
  • ClickHouse server will print the list of changed settings on fatal errors. This closes #51137. #51138 (Alexey Milovidov).
  • Allow building ClickHouse with clang-17. #51300 (Alexey Milovidov).
  • SQLancer check is considered stable as bugs that were triggered by it are fixed. Now failures of SQLancer check will be reported as failed check status. #51340 (Ilya Yatsishin).
  • Split huge RUN in Dockerfile into smaller conditional. Install the necessary tools on demand in the same RUN layer, and remove them after that. Upgrade the OS only once at the beginning. Use a modern way to check the signed repository. Downgrade the base repo to ubuntu:20.04 to address the issues on older docker versions. Upgrade golang version to address golang vulnerabilities. #51504 (Mikhail f. Shiryaev).

Bug Fix (user-visible misbehavior in an official stable release)

ClickHouse release 23.5, 2023-06-08

Upgrade Notes

  • Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key, or installed version 23.5 or newer, which has compressed marks and primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the compress_marks and compress_primary_key settings in the <merge_tree> section of the server configuration file (see the sketch after this list). Upgrade notes: if you upgrade from a version prior to 22.9, you should either upgrade all replicas at once or disable the compression before the upgrade, or upgrade through an intermediate version where the compressed marks are supported but not enabled by default, such as 23.3. #42587 (Alexey Milovidov).
  • Make local object storage work consistently with S3 object storage, fix a problem with append (closes #48465), and make it configurable as an independent storage. The change is backward incompatible because the cache on top of local object storage is not compatible with previous versions. #48791 (Kseniia Sumarokova).
  • The experimental feature "in-memory data parts" is removed. The data format is still supported, but the settings are no-op, and compact or wide parts will be used instead. This closes #45409. #49429 (Alexey Milovidov).
  • Changed default values of settings parallelize_output_from_storages and input_format_parquet_preserve_order. This allows ClickHouse to reorder rows when reading from files (e.g. CSV or Parquet), greatly improving performance in many cases. To restore the old behavior of preserving order, use parallelize_output_from_storages = 0, input_format_parquet_preserve_order = 1. #49479 (Michael Kolupaev).
  • Make projections production-ready. Add the optimize_use_projections setting to control whether the projections will be selected for SELECT queries. The setting allow_experimental_projection_optimization is obsolete and does nothing. #49719 (Alexey Milovidov).
  • Mark joinGet as non-deterministic (as dictGet already is). It allows using them in mutations without an extra setting. #49843 (Azat Khuzhin).
  • Revert the "groupArray returns cannot be nullable" change (due to binary compatibility breakage for groupArray/groupArrayLast/groupArraySample over Nullable types, which likely will lead to TOO_LARGE_ARRAY_SIZE or CANNOT_READ_ALL_DATA). #49971 (Azat Khuzhin).
  • Setting enable_memory_bound_merging_of_aggregation_results is enabled by default. If you upgrade from a version prior to 22.12, we recommend setting this flag to false until the upgrade is finished. #50319 (Nikita Taranov).
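
For the compressed marks and primary key note above, a minimal sketch of disabling the feature per table, assuming compress_marks and compress_primary_key are also accepted as MergeTree table-level settings (the server-wide way is the <merge_tree> configuration section mentioned in the entry):

```sql
-- Hypothetical per-table override of the new defaults.
CREATE TABLE events_uncompressed_marks
(
    event_date Date,
    user_id    UInt64
)
ENGINE = MergeTree
ORDER BY (event_date, user_id)
SETTINGS compress_marks = 0, compress_primary_key = 0;
```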

New Feature

  • Added the AzureBlobStorage storage engine and the azureBlobStorage table function. The supported set of features is very similar to the S3 storage/table function. #50604 (alesapin, SmitaRKulkarni).
  • Added a native ClickHouse Keeper CLI client; it is available as clickhouse keeper-client. #47414 (pufit).
  • Add urlCluster table function. Refactor all *Cluster table functions to reduce code duplication. Make schema inference work for all possible *Cluster function signatures and for named collections. Closes #38499. #45427 (attack204), Pavel Kruglov.
  • The query cache can now be used for production workloads. #47977 (Robert Schulze). The query cache now supports queries with the totals and extremes modifiers. #48853 (Robert Schulze). Make the allow_experimental_query_cache setting obsolete for backward compatibility. It was removed in https://github.com/ClickHouse/ClickHouse/pull/47977. #49934 (Timur Solodovnikov).
  • Geographical data types (Point, Ring, Polygon, and MultiPolygon) are production-ready. #50022 (Alexey Milovidov).
  • Add schema inference to PostgreSQL, MySQL, MeiliSearch, and SQLite table engines. Closes #49972. #50000 (Nikolay Degterinsky).
  • Password type in queries like CREATE USER u IDENTIFIED BY 'p' will be automatically set according to the setting default_password_type in the config.xml on the server. Closes #42915. #44674 (Nikolay Degterinsky).
  • Add bcrypt password authentication type. Closes #34599. #44905 (Nikolay Degterinsky).
  • Introduces new keyword INTO OUTFILE 'file.txt' APPEND. #48880 (alekar).
  • Added system.zookeeper_connection table that shows information about Keeper connections. #45245 (mateng915).
  • Add new function generateRandomStructure that generates random table structure. It can be used in combination with table function generateRandom. #47409 (Kruglov Pavel).
  • Allow the use of CASE without an ELSE branch and extended transform to deal with more types. Also fix some issues that made transform() return incorrect results when decimal types were mixed with other numeric types. #48300 (Salvatore Mesoraca). This closes #2655. This closes #9596. This closes #38666.
  • Added server-side encryption using KMS keys with S3 tables, and the header setting with S3 disks. Closes #48723. #48724 (Johann Gan).
  • Add MemoryTracker for the background tasks (merges and mutation). Introduces merges_mutations_memory_usage_soft_limit and merges_mutations_memory_usage_to_ram_ratio settings that represent the soft memory limit for merges and mutations. If this limit is reached ClickHouse won't schedule new merge or mutation tasks. Also MergesMutationsMemoryTracking metric is introduced to allow observing current memory usage of background tasks. Resubmit #46089. Closes #48774. #48787 (Dmitry Novik).
  • The function dotProduct now works for arrays. #49050 (FFFFFFFHHHHHHH).
  • Support statement SHOW INDEX to improve compatibility with MySQL. #49158 (Robert Schulze).
  • Add virtual columns _file and _path to the table function url, and improve the error message for the table function url. Resolves #49231, resolves #49232. #49356 (Ziyi Tan).
  • Adding the grants field in the users.xml file, which allows specifying grants for users. #49381 (pufit).
  • Support full/right join by using grace hash join algorithm. #49483 (lgbo).
  • WITH FILL modifier groups filling by sorting prefix. Controlled by use_with_fill_by_sorting_prefix setting (enabled by default). Related to #33203#issuecomment-1418736794. #49503 (Igor Nikonov).
  • Clickhouse-client now accepts queries after "--multiquery" when "--query" (or "-q") is absent. example: clickhouse-client --multiquery "select 1; select 2;". #49870 (Alexey Gerasimchuk).
  • Add separate handshake_timeout for receiving Hello packet from replica. Closes #48854. #49948 (Kruglov Pavel).
  • Added a function "space" which repeats a space as many times as specified. #50103 (Robert Schulze).
  • Added --input_format_csv_trim_whitespaces option. #50215 (Alexey Gerasimchuk).
  • Allow the dictGetAll function for regexp tree dictionaries to return values from multiple matches as arrays. Closes #50254. #50255 (Johann Gan).
  • Added the toLastDayOfWeek function to round a date or a date with time up to the nearest Saturday or Sunday; see the sketch after this list. #50315 (Victor Krasnov).
  • Ability to ignore a skip index by specifying ignore_data_skipping_indices. #50329 (Boris Kuschel).
  • Add system.user_processes table and SHOW USER PROCESSES query to show memory info and ProfileEvents on user level. #50492 (János Benjamin Antal).
  • Add server and format settings display_secrets_in_show_and_select for displaying secrets of tables, databases, table functions, and dictionaries. Add privilege displaySecretsInShowAndSelect controlling which users can view secrets. #46528 (Mike Kot).
  • Allow to set up a ROW POLICY for all tables that belong to a DATABASE. #47640 (Ilya Golshtein).
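
A sketch of the new toLastDayOfWeek function mentioned above (the optional mode argument is assumed to follow the toStartOfWeek convention, where mode 1 makes weeks start on Monday):

```sql
-- 2023-05-10 is a Wednesday: with the default mode the week ends on
-- Saturday (2023-05-13), with mode 1 it ends on Sunday (2023-05-14).
SELECT
    toLastDayOfWeek(toDate('2023-05-10'))    AS ends_saturday,
    toLastDayOfWeek(toDate('2023-05-10'), 1) AS ends_sunday;
```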

Performance Improvement

  • Compress marks and primary key by default. It significantly reduces the cold query time. Upgrade notes: the support for compressed marks and primary key has been added in version 22.9. If you turned on compressed marks or primary key or installed version 23.5 or newer, which has compressed marks or primary key on by default, you will not be able to downgrade to version 22.8 or earlier. You can also explicitly disable compressed marks or primary keys by specifying the compress_marks and compress_primary_key settings in the <merge_tree> section of the server configuration file. #42587 (Alexey Milovidov).
  • New setting s3_max_inflight_parts_for_one_file sets the limit of concurrently loaded parts with multipart upload request in scope of one file. #49961 (Sema Checherinda).
  • When reading from multiple files, reduce the number of parallel parsing threads for each file. Resolves #42192. #46661 (SmitaRKulkarni).
  • Use an aggregate projection only if it reads fewer granules than normal reading. It should help in case a query hits the PK of the table but not the projection. Fixes #49150. #49417 (Nikolai Kochetov).
  • Do not store blocks in ANY hash join if nothing is inserted. #48633 (vdimir).
  • Fixes aggregate combinator -If when JIT compiled, and enable JIT compilation for aggregate functions. Closes #48120. #49083 (Igor Nikonov).
  • For reading from remote tables we use smaller tasks (instead of reading the whole part) to make task stealing work: the task size is determined by the size of the columns to read; 1 MB buffers are always used for reading from S3; boundaries of cache segments are aligned to 1 MB so they have a decent size even with small tasks, which should also prevent fragmentation. #49287 (Nikita Taranov).
  • Introduced settings merge_max_block_size_bytes, to limit the amount of memory used for background operations, and vertical_merge_algorithm_min_bytes_to_activate, to add another condition to activate vertical merges. #49313 (Nikita Mikhaylov).
  • Default size of a read buffer for reading from local filesystem changed to a slightly better value. Also two new settings are introduced: max_read_buffer_size_local_fs and max_read_buffer_size_remote_fs. #49321 (Nikita Taranov).
  • Improve memory usage and speed of SPARSE_HASHED/HASHED dictionaries (e.g. SPARSE_HASHED now eats 2.6x less memory, and is ~2x faster). #49380 (Azat Khuzhin).
  • Optimize the system.query_log and system.query_thread_log tables by applying LowCardinality when appropriate. The queries over these tables will be faster. #49530 (Alexey Milovidov).
  • Better performance when reading local Parquet files (through parallel reading). #49539 (Michael Kolupaev).
  • Improve the performance of RIGHT/FULL JOIN by up to 2 times in certain scenarios, especially when joining a small left table with a large right table. #49585 (lgbo).
  • Improve performance of BLAKE3 by 11% by enabling LTO for Rust. #49600 (Azat Khuzhin). Now it is on par with C++.
  • Optimize the structure of the system.opentelemetry_span_log. Use LowCardinality where appropriate. Although this table is generally stupid (it is using the Map data type even for common attributes), it will be slightly better. #49647 (Alexey Milovidov).
  • Try to reserve hash table's size in grace_hash join. #49816 (lgbo).
  • Parallel merge of uniqExactIf states. Closes #49885. #50285 (flynn).
  • Keeper improvement: add CheckNotExists request to Keeper, which allows to improve the performance of Replicated tables. #48897 (Antonio Andelic).
  • Keeper performance improvements: avoid serializing same request twice while processing. Cache deserialization results of large requests. Controlled by new coordination setting min_request_size_for_cache. #49004 (Antonio Andelic).
  • Reduced number of List ZooKeeper requests when selecting parts to merge and a lot of partitions do not have anything to merge. #49637 (Alexander Tokmakov).
  • Rework locking in the filesystem cache. #44985 (Kseniia Sumarokova).
  • Disable pure parallel replicas if trivial count optimization is possible. #50594 (Raúl Marín).
  • Don't send a HEAD request for all keys in Iceberg schema inference, only for keys that are used for reading data. #50203 (Kruglov Pavel).
  • Setting enable_memory_bound_merging_of_aggregation_results is enabled by default. #50319 (Nikita Taranov).

Experimental Feature

  • The DEFLATE_QPL codec lowers the minimum required SIMD version to SSE 4.2. Documentation change for QPL: Intel® QPL relies on a run-time kernel dispatcher and a CPUID check to choose the best available implementation (SSE/AVX2/AVX-512). Restructured the CMake files for the QPL build in ClickHouse to align with the latest upstream QPL. #49811 (jasperzhu).
  • Add initial support to do JOINs with pure parallel replicas. #49544 (Raúl Marín).
  • More parallelism on Outdated parts removal with "zero-copy replication". #49630 (Alexander Tokmakov).
  • Parallel Replicas: 1) Fixed an error NOT_FOUND_COLUMN_IN_BLOCK in case of using parallel replicas with non-replicated storage with the setting parallel_replicas_for_non_replicated_merge_tree disabled. 2) Now allow_experimental_parallel_reading_from_replicas has 3 possible values: 0 - disabled; 1 - enabled, silently disabling parallel replicas in case of failure (e.g. with FINAL or JOIN); 2 - enabled, throwing an exception in case of failure. 3) If the FINAL modifier is used in a SELECT query and parallel replicas are enabled, ClickHouse will try to disable them if allow_experimental_parallel_reading_from_replicas is set to 1 and throw an exception otherwise. #50195 (Nikita Mikhaylov).
  • When parallel replicas are enabled they will always skip unavailable servers (the behavior is controlled by the setting skip_unavailable_shards, enabled by default and can be only disabled). This closes: #48565. #50293 (Nikita Mikhaylov).

Improvement

  • The BACKUP command will not decrypt data from encrypted disks while making a backup. Instead the data will be stored in a backup in encrypted form. Such backups can be restored only to an encrypted disk with the same (or extended) list of encryption keys. #48896 (Vitaly Baranov).
  • Added possibility to use temporary tables in FROM part of ATTACH PARTITION FROM and REPLACE PARTITION FROM. #49436 (Roman Vasin).
  • Added setting async_insert for MergeTree tables. It has the same meaning as query-level setting async_insert and enables asynchronous inserts for specific table. Note: it doesn't take effect for insert queries from clickhouse-client, use query-level setting in that case. #49122 (Anton Popov).
  • Add support for size suffixes in quota creation statement parameters. #49087 (Eridanus).
  • Extend first_value and last_value to accept NULL. #46467 (lgbo).
  • Add alias str_to_map and mapFromString for extractKeyValuePairs. closes https://github.com/clickhouse/clickhouse/issues/47185. #49466 (flynn).
  • Add support for CGroup version 2 for asynchronous metrics about the memory usage and availability. This closes #37983. #45999 (sichenzhao).
  • Cluster table functions should always skip unavailable shards. close #46314. #46765 (zk_kiger).
  • Allow CSV file to contain empty columns in its header. #47496 (你不要过来啊).
  • Add Google Cloud Storage S3 compatible table function gcs. Like the oss and cosn functions, it is just an alias over the s3 table function, and it does not bring any new features. #47815 (Kuba Kaflik).
  • Add ability to use strict parts size for S3 (compatibility with CloudFlare R2 S3 Storage). #48492 (Azat Khuzhin).
  • Added new columns with info about Replicated database replicas to system.clusters: database_shard_name, database_replica_name, is_active. Added an optional FROM SHARD clause to SYSTEM DROP DATABASE REPLICA query. #48548 (Alexander Tokmakov).
  • Add a new column zookeeper_name in system.replicas, to indicate on which (auxiliary) zookeeper cluster the replicated table's metadata is stored. #48549 (cangyin).
  • The IN operator supports comparison of Date and Date32. Closes #48736. #48806 (flynn).
  • Support for erasure codes in HDFS, author: @M1eyu2018, @tomscut. #48833 (M1eyu).
  • Implement SYSTEM DROP REPLICA from auxiliary ZooKeeper clusters, may be close #48931. #48932 (wangxiaobo).
  • Add Array data type to MongoDB. Closes #48598. #48983 (Nikolay Degterinsky).
  • Support storing Interval data types in tables. #49085 (larryluogit).
  • Allow using the ntile window function without an explicit window frame definition: ntile(3) OVER (ORDER BY a); see the sketch after this list. Close #46763. #49093 (vdimir).
  • Added settings (number_of_mutations_to_delay, number_of_mutations_to_throw) to delay or throw ALTER queries that create mutations (ALTER UPDATE, ALTER DELETE, ALTER MODIFY COLUMN, ...) in case when table already has a lot of unfinished mutations. #49117 (Anton Popov).
  • Catch exception from create_directories in filesystem cache. #49203 (Kseniia Sumarokova).
  • Copies embedded examples to a new field example in system.functions to supplement the field description. #49222 (Dan Roscigno).
  • Enable connection options for the MongoDB dictionary. Example: <source> <mongodb> <host>localhost</host> <port>27017</port> <user></user> <password></password> <db>test</db> <collection>dictionary_source</collection> <options>ssl=true</options> </mongodb> </source>. #49225 (MikhailBurdukov).
  • Added an alias asymptotic for asymp computational method for kolmogorovSmirnovTest. Improved documentation. #49286 (Nikita Mikhaylov).
  • Aggregation function groupBitAnd/Or/Xor now work on signed integer data. This makes them consistent with the behavior of scalar functions bitAnd/Or/Xor. #49292 (exmy).
  • Split function-documentation into more fine-granular fields. #49300 (Robert Schulze).
  • Use multiple threads shared between all tables within a server to load outdated data parts. The size of the pool and its queue are controlled by the max_outdated_parts_loading_thread_pool_size and outdated_part_loading_thread_pool_queue_size settings. #49317 (Nikita Mikhaylov).
  • Don't overestimate the size of processed data for LowCardinality columns when they share dictionaries between blocks. This closes #49322. See also #48745. #49323 (Alexey Milovidov).
  • Parquet writer now uses reasonable row group size when invoked through OUTFILE. #49325 (Michael Kolupaev).
  • Allow restricted keywords like ARRAY as an alias if the alias is quoted. Closes #49324. #49360 (Nikolay Degterinsky).
  • Data parts loading and deletion jobs were moved to shared server-wide pools instead of per-table pools. Pools sizes are controlled via settings max_active_parts_loading_thread_pool_size, max_outdated_parts_loading_thread_pool_size and max_parts_cleaning_thread_pool_size in top-level config. Table-level settings max_part_loading_threads and max_part_removal_threads became obsolete. #49474 (Nikita Mikhaylov).
  • Allow ?password=pass in URL of the Play UI. Password is replaced in browser history. #49505 (Mike Kot).
  • Allow reading zero-size objects from remote filesystems. (because empty files are not backup'd, so we might end up with zero blobs in metadata file). Closes #49480. #49519 (Kseniia Sumarokova).
  • Attach thread MemoryTracker to total_memory_tracker after ThreadGroup detached. #49527 (Dmitry Novik).
  • Fix parameterized views when a query parameter is used multiple times in the query. #49556 (Azat Khuzhin).
  • Release memory allocated for the last sent ProfileEvents snapshot in the context of a query. Followup #47564. #49561 (Dmitry Novik).
  • Function "makeDate" now provides a MySQL-compatible overload (year & day of the year argument). #49603 (Robert Schulze).
  • Support dictionary table function for RegExpTreeDictionary. #49666 (Han Fei).
  • Added weighted fair IO scheduling policy. Added dynamic resource manager, which allows IO scheduling hierarchy to be updated in runtime w/o server restarts. #49671 (Sergei Trifonov).
  • Add compose request after multipart upload to GCS. This enables the usage of copy operation on objects uploaded with the multipart upload. It's recommended to set s3_strict_upload_part_size to some value because compose request can fail on objects created with parts of different sizes. #49693 (Antonio Andelic).
  • For the extractKeyValuePairs function: improve the "best-effort" parsing logic to accept key_value_delimiter as a valid part of the value. This also simplifies branching and might even speed up things a bit. #49760 (Arthur Passos).
  • Add the initial_query_id field to system.processors_profile_log. #49777 (helifu).
  • System log tables can now have custom sorting keys. #49778 (helifu).
  • A new field partitions in system.query_log indicates which partitions participated in the calculation. #49779 (helifu).
  • Added enable_the_endpoint_id_with_zookeeper_name_prefix setting for ReplicatedMergeTree (disabled by default). When enabled, it adds ZooKeeper cluster name to table's interserver communication endpoint. It avoids Duplicate interserver IO endpoint errors when having replicated tables with the same path, but different auxiliary ZooKeepers. #49780 (helifu).
  • Add query parameters to clickhouse-local. Closes #46561. #49785 (Nikolay Degterinsky).
  • Allow loading dictionaries and functions from YAML by default. In previous versions, it required editing the dictionaries_config or user_defined_executable_functions_config in the configuration file, as they expected *.xml files. #49812 (Alexey Milovidov).
  • The Kafka table engine now allows to use alias columns. #49824 (Aleksandr Musorin).
  • Add setting to limit the max number of pairs produced by extractKeyValuePairs, a safeguard to avoid using way too much memory. #49836 (Arthur Passos).
  • Add support for (an unusual) case where the arguments in the IN operator are single-element tuples. #49844 (MikhailBurdukov).
  • The bitHammingDistance function supports the String and FixedString data types. Closes #48827. #49858 (flynn).
  • Fix timeout resetting errors in the client on OS X. #49863 (alekar).
  • Add support for big integers, such as UInt128, Int128, UInt256, and Int256 in the function bitCount. This enables Hamming distance over large bit masks for AI applications. #49867 (Alexey Milovidov).
  • Fingerprints to be used instead of key IDs in encrypted disks. This simplifies the configuration of encrypted disks. #49882 (Vitaly Baranov).
  • Add UUID data type to PostgreSQL. Closes #49739. #49894 (Nikolay Degterinsky).
  • Function toUnixTimestamp now accepts Date and Date32 arguments. #49989 (Victor Krasnov).
  • Charge only server memory for dictionaries. #49995 (Azat Khuzhin).
  • The server will allow using the SQL_* settings such as SQL_AUTO_IS_NULL as no-ops for MySQL compatibility. This closes #49927. #50013 (Alexey Milovidov).
  • Preserve initial_query_id for ON CLUSTER queries, which is useful for introspection (under distributed_ddl_entry_format_version=5). #50015 (Azat Khuzhin).
  • Preserve backward incompatibility for renamed settings by using aliases (allow_experimental_projection_optimization for optimize_use_projections, allow_experimental_lightweight_delete for enable_lightweight_delete). #50044 (Azat Khuzhin).
  • Support passing an FQDN through the setting my_hostname to register a cluster node in Keeper. Add the setting invisible to support multiple compute groups: a compute group, as a cluster, is invisible to other compute groups. #50186 (Yangkuan Liu).
  • Fix PostgreSQL reading all the data even though LIMIT n could be specified. #50187 (Kseniia Sumarokova).
  • Add new profile events for queries with subqueries (QueriesWithSubqueries/SelectQueriesWithSubqueries/InsertQueriesWithSubqueries). #50204 (Azat Khuzhin).
  • Adding the roles field in the users.xml file, which allows specifying roles with grants via a config file. #50278 (pufit).
  • Report CGroupCpuCfsPeriod and CGroupCpuCfsQuota in AsynchronousMetrics. Respect cgroup v2 memory limits during server startup. #50379 (alekar).
  • Add a signal handler for SIGQUIT to work the same way as SIGINT. Closes #50298. #50435 (Nikolay Degterinsky).
  • If JSON parsing fails due to the large size of the object, output the last position to allow debugging. #50474 (Valentin Alexeev).
  • Support decimals with not fixed size. Closes #49130. #50586 (Kruglov Pavel).
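
A sketch of the ntile call without an explicit window frame, as described above:

```sql
-- Splits the ordered rows into 3 roughly equal buckets; no explicit
-- frame (ROWS BETWEEN ...) needs to be written anymore.
SELECT
    number,
    ntile(3) OVER (ORDER BY number) AS bucket
FROM numbers(9);
```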

Build/Testing/Packaging Improvement

Bug Fix (user-visible misbehavior in an official stable release)

ClickHouse release 23.4, 2023-04-26

Backward Incompatible Change

  • The formatter '%M' in the function formatDateTime() now prints the month name instead of the minutes. This makes the behavior consistent with MySQL. The previous behavior can be restored using the setting "formatdatetime_parsedatetime_m_is_month_name = 0"; see the sketch after this list. #47246 (Robert Schulze).
  • This change makes sense only if you are using the virtual filesystem cache. If path in the virtual filesystem cache configuration is not empty and is not an absolute path, then it will be put in <clickhouse server data directory>/caches/<path_from_cache_config>. #48784 (Kseniia Sumarokova).
  • Primary/secondary indices and sorting keys with identical expressions are now rejected. This behavior can be disabled using setting allow_suspicious_indices. #48536 (凌涛).
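
A brief sketch of the '%M' change described above:

```sql
-- '%M' now prints the month name (MySQL-compatible behavior).
SELECT formatDateTime(toDateTime('2023-04-26 12:34:56'), '%M') AS month_name;

-- Restore the previous behavior ('%M' = minutes) if needed:
SELECT formatDateTime(toDateTime('2023-04-26 12:34:56'), '%M') AS minutes
SETTINGS formatdatetime_parsedatetime_m_is_month_name = 0;
```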

New Feature

  • Support the new aggregate functions quantileGK/quantilesGK, similar to approx_percentile in Spark. For the Greenwald-Khanna algorithm, refer to http://infolab.stanford.edu/~datar/courses/cs361a/papers/quantiles.pdf. #46428 (李扬).
  • Add a statement SHOW COLUMNS which shows distilled information from system.columns. #48017 (Robert Schulze).
  • Added LIGHTWEIGHT and PULL modifiers for SYSTEM SYNC REPLICA query. LIGHTWEIGHT version waits for fetches and drop-ranges only (merges and mutations are ignored). PULL version pulls new entries from ZooKeeper and does not wait for them. Fixes #47794. #48085 (Alexander Tokmakov).
  • Add kafkaMurmurHash function for compatibility with Kafka DefaultPartitioner. Closes #47834. #48185 (Nikolay Degterinsky).
  • Allow to easily create a user with the same grants as the current user by using GRANT CURRENT GRANTS. #48262 (pufit).
  • Add statistical aggregate function kolmogorovSmirnovTest. Close #48228. #48325 (FFFFFFFHHHHHHH).
  • Added a lost_part_count column to the system.replicas table. The column value shows the total number of lost parts in the corresponding table. Value is stored in zookeeper and can be used instead of not persistent ReplicatedDataLoss profile event for monitoring. #48526 (Sergei Trifonov).
  • Add soundex function for compatibility. Closes #39880. #48567 (FriendLey).
  • Support Map type for JSONExtract. #48629 (李扬).
  • Add PrettyJSONEachRow format to output pretty JSON with new line delimiters and 4 space indents. #48898 (Kruglov Pavel).
  • Add ParquetMetadata input format to read Parquet file metadata. #48911 (Kruglov Pavel).
  • Add the extractKeyValuePairs function to extract key-value pairs from strings. Input strings might contain noise (i.e. log files do not need to be 100% formatted in key-value-pair format); the algorithm will look for key-value pairs matching the arguments passed to the function. As of now, the function accepts the following arguments: data_column (mandatory), key_value_pair_delimiter (defaults to :), pair_delimiters (defaults to \space \, \;) and quoting_character (defaults to double quotes). See the sketch after this list. #43606 (Arthur Passos).
  • Functions replaceOne(), replaceAll(), replaceRegexpOne() and replaceRegexpAll() can now be called with non-const pattern and replacement arguments. #46589 (Robert Schulze).
  • Added functions to work with columns of type Map: mapConcat, mapSort, mapExists. #48071 (Anton Popov).
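
A minimal sketch of extractKeyValuePairs with the default delimiters described above:

```sql
-- With the defaults, ':' separates a key from its value and
-- ' ', ',' and ';' separate pairs; unmatched noise is skipped.
SELECT extractKeyValuePairs('name:neymar, age:31 team:psg') AS kv;
-- kv is a Map, roughly {'name':'neymar','age':'31','team':'psg'}
```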

Performance Improvement

  • Reading files in Parquet format is now much faster. IO and decoding are parallelized (controlled by max_threads setting), and only required data ranges are read. #47964 (Michael Kolupaev).
  • If we run a mutation with IN (subquery), like ALTER TABLE t UPDATE col='new value' WHERE id IN (SELECT id FROM huge_table), and the table t has multiple parts, then for each part a set for the subquery SELECT id FROM huge_table is built in memory. If there are many parts, this might consume a lot of memory (and lead to an OOM) and CPU. The solution is to introduce a short-lived cache of sets that are currently being built by mutation tasks. If another task of the same mutation is executed concurrently, it can look up the set in the cache, wait for it to be built, and reuse it. #46835 (Alexander Gololobov).
  • Only check dependencies if necessary when applying ALTER TABLE queries. #48062 (Raúl Marín).
  • Optimize function mapUpdate. #48118 (Anton Popov).
  • Now an internal query to the local replica is sent explicitly, and data from it is received through the loopback interface. The setting prefer_localhost_replica is not respected for parallel replicas. This is needed for better scheduling and makes the code cleaner: the initiator is only responsible for coordinating the reading process and merging results, continuously answering requests while all the secondary queries read the data. Note: using the loopback interface is not as performant, but otherwise some replicas could starve for tasks, which could lead to even slower query execution and not utilizing all possible resources. The initialization of the coordinator is now even more lazy. All incoming requests contain the information about the reading algorithm, and we initialize the coordinator with it when the first request comes. If any replica decides to read with a different algorithm, an exception will be thrown and the query will be aborted. #48246 (Nikita Mikhaylov).
  • Do not build set for the right side of IN clause with subquery when it is used only for analysis of skip indexes, and they are disabled by setting (use_skip_indexes=0). Previously it might affect the performance of queries. #48299 (Anton Popov).
  • Query processing is parallelized right after reading FROM file(...). Related to #38755. #48525 (Igor Nikonov). Query processing is parallelized right after reading from any data source. Affected data sources are mostly simple or external storages like the table functions url and file. #48727 (Igor Nikonov). This is controlled by the setting parallelize_output_from_storages, which is not enabled by default; see the sketch after this list.
  • Lowered contention of ThreadPool mutex (may increase performance for a huge amount of small jobs). #48750 (Sergei Trifonov).
  • Reduce memory usage for multiple ALTER DELETE mutations. #48522 (Nikolai Kochetov).
  • Remove the excessive connection attempts if the skip_unavailable_shards setting is enabled. #48771 (Azat Khuzhin).
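
A sketch of enabling the parallelization setting mentioned above when reading from a file (the file name is a placeholder; c1 is the automatically inferred column name for header-less CSV):

```sql
-- Processing is parallelized right after reading, so the output row
-- order is no longer guaranteed to match the order in the file.
SELECT c1, count()
FROM file('data.csv', 'CSV')
GROUP BY c1
SETTINGS parallelize_output_from_storages = 1;
```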

Experimental Feature

Improvement

  • Increase the default value for connect_timeout_with_failover_ms to 1000 ms (because of adding async connections in https://github.com/ClickHouse/ClickHouse/pull/47229). Closes #5188. #49009 (Kruglov Pavel).
  • Several improvements around data lakes: make Iceberg work with non-partitioned data; support Iceberg format version v2 (previously only v1 was supported); support reading partitioned data for DeltaLake/Hudi; faster reading of DeltaLake metadata by using Delta's checkpoint files; fixed incorrect Hudi reads (previously it incorrectly chose which data to read and therefore could only read small tables correctly); made these engines pick up updates of changed data (previously the state was set at table creation); added proper testing for Iceberg/DeltaLake/Hudi using Spark. #47307 (Kseniia Sumarokova).
  • Add async connection to socket and async writing to socket. Make creating connections and sending query/external tables async across shards. Refactor code with fibers. Closes #46931. We will be able to increase connect_timeout_with_failover_ms by default after this PR (https://github.com/ClickHouse/ClickHouse/issues/5188). #47229 (Kruglov Pavel).
  • Support config sections keeper/keeper_server as an alternative to zookeeper. Close #34766 , #34767. #35113 (李扬).
  • It is possible to set secure flag in named_collections for a dictionary with a ClickHouse table source. Addresses #38450 . #46323 (Ilya Golshtein).
  • bitCount function support FixedString and String data type. #49044 (flynn).
  • Added configurable retries for all operations with [Zoo]Keeper for Backup queries. #47224 (Nikita Mikhaylov).
  • Enable use_environment_credentials for S3 by default, so the entire provider chain is constructed by default. #47397 (Antonio Andelic).
  • Currently, the JSON_VALUE function is similar to Spark's get_json_object function, which supports getting a value from a JSON string by a path like '$.key'. There are still some differences: 1) if the path does not exist, Spark's get_json_object returns null, while JSON_VALUE returns an empty string; 2) for a complex value, such as a JSON object or array, Spark's get_json_object returns that value, while JSON_VALUE returns an empty string. #47494 (KevinyhZou).
  • For use_structure_from_insertion_table_in_table_functions, the propagation of the insert table structure to table functions is more flexible. Fixed an issue with name mapping and using virtual columns. There is no more need for the 'auto' setting. #47962 (Yakov Olkhovskiy).
  • Do not continue retrying to connect to Keeper if the query is killed or over limits. #47985 (Raúl Marín).
  • Support Enum output/input in BSONEachRow, allow all map key types and avoid extra calculations on output. #48122 (Kruglov Pavel).
  • Support more ClickHouse types in ORC/Arrow/Parquet formats: Enum(8|16), (U)Int(128|256), Decimal256 (for ORC), allow reading IPv4 from Int32 values (ORC outputs IPv4 as Int32, and we couldn't read it back), fix reading Nullable(IPv6) from binary data for ORC. #48126 (Kruglov Pavel).
  • Add columns perform_ttl_move_on_insert, load_balancing for table system.storage_policies, modify column volume_type type to Enum8. #48167 (lizhuoyu5).
  • Added support for BACKUP ALL command which backups all tables and databases, including temporary and system ones. #48189 (Vitaly Baranov).
  • Function mapFromArrays supports Map type as an input. #48207 (李扬).
  • The output of some SHOW PROCESSLIST is now sorted. #48241 (Robert Schulze).
  • Per-query/per-server throttling for remote IO/local IO/BACKUPs (server settings: max_remote_read_network_bandwidth_for_server, max_remote_write_network_bandwidth_for_server, max_local_read_bandwidth_for_server, max_local_write_bandwidth_for_server, max_backup_bandwidth_for_server, settings: max_remote_read_network_bandwidth, max_remote_write_network_bandwidth, max_local_read_bandwidth, max_local_write_bandwidth, max_backup_bandwidth). #48242 (Azat Khuzhin).
  • Support more types in CapnProto format: Map, (U)Int(128|256), Decimal(128|256). Allow integer conversions during input/output. #48257 (Kruglov Pavel).
  • Don't throw CURRENT_WRITE_BUFFER_IS_EXHAUSTED for normal behaviour. #48288 (Raúl Marín).
  • Add new setting keeper_map_strict_mode which enforces extra guarantees on operations made on top of KeeperMap tables. #48293 (Antonio Andelic).
  • Check that the primary key type for simple-layout dictionaries is a native unsigned integer type. Add the setting check_dictionary_primary_key for compatibility (set check_dictionary_primary_key = false to disable the check). #48335 (lizhuoyu5).
  • Don't replicate mutations for KeeperMap because it's unnecessary. #48354 (Antonio Andelic).
  • Allow to write/read unnamed tuple as nested Message in Protobuf format. Tuple elements and Message fields are matched by position. #48390 (Kruglov Pavel).
  • Support additional_table_filters and additional_result_filter settings in the new planner. Also, add a documentation entry for additional_result_filter. #48405 (Dmitry Novik).
  • parseDateTime now understands format string '%f' (fractional seconds). #48420 (Robert Schulze).
  • Format string "%f" in formatDateTime() now prints "000000" if the formatted value has no fractional seconds, the previous behavior (single zero) can be restored using setting "formatdatetime_f_prints_single_zero = 1". #48422 (Robert Schulze).
  • Don't replicate DELETE and TRUNCATE for KeeperMap. #48434 (Antonio Andelic).
  • Generate valid Decimals and Bools in generateRandom function. #48436 (Kruglov Pavel).
  • Allow trailing commas in the expression list of a SELECT query, for example SELECT a, b, c, FROM table; see the sketch after this list. Closes #37802. #48438 (Nikolay Degterinsky).
  • Override CLICKHOUSE_USER and CLICKHOUSE_PASSWORD environment variables with --user and --password client parameters. Closes #38909. #48440 (Nikolay Degterinsky).
  • Added retries to loading of data parts in MergeTree tables in case of retryable errors. #48442 (Anton Popov).
  • Add support for Date, Date32, DateTime, DateTime64 data types to arrayMin, arrayMax, arrayDifference functions. Closes #21645. #48445 (Nikolay Degterinsky).
  • Add support for {server_uuid} macro. It is useful for identifying replicas in autoscaled clusters when new replicas are constantly added and removed in runtime. This closes #48554. #48563 (Alexey Milovidov).
  • The installation script will create a hard link instead of copying if it is possible. #48578 (Alexey Milovidov).
  • Support SHOW TABLE syntax meaning the same as SHOW CREATE TABLE. Closes #48580. #48591 (flynn).
  • HTTP temporary buffers now support working by evicting data from the virtual filesystem cache. #48664 (Vladimir C).
  • Make schema inference work for CREATE AS SELECT. Closes #47599. #48679 (flynn).
  • Added a replicated_max_mutations_in_one_entry setting for ReplicatedMergeTree that allows limiting the number of mutation commands per one MUTATE_PART entry (default is 10000). #48731 (Alexander Tokmakov).
  • In AggregateFunction types, don't count unused arena bytes as read_bytes. #48745 (Raúl Marín).
  • Fix some MySQL-related settings not being handled with the MySQL dictionary source + named collection. Closes #48402. #48759 (Kseniia Sumarokova).
  • If a user set max_single_part_upload_size to a very large value, it can lead to a crash due to a bug in the AWS S3 SDK. This fixes #47679. #48816 (Alexey Milovidov).
  • Fix data race in RabbitMQ (report), refactor the code. #48845 (Kseniia Sumarokova).
  • Add aliases name and part_name for system.parts and system.part_log. Closes #48718. #48850 (sichenzhao).
  • Functions "arrayDifferenceSupport()", "arrayCumSum()" and "arrayCumSumNonNegative()" now support input arrays of wide integer types (U)Int128/256. #48866 (cluster).
  • Multi-line history in clickhouse-client is now no longer padded. This makes pasting more natural. #48870 (Joanna Hulboj).
  • Implement a slight improvement for the rare case when ClickHouse is run inside LXC and LXCFS is used. LXCFS has an issue: sometimes it returns an error "Transport endpoint is not connected" on reading from a file inside /proc. This error was correctly logged into ClickHouse's server log. We have additionally worked around this issue by reopening the file. This is a minuscule change. #48922 (Real).
  • Improve memory accounting for prefetches. Randomise prefetch settings In CI. #48973 (Kseniia Sumarokova).
  • Correctly set headers for native copy operations on GCS. #48981 (Antonio Andelic).
  • Add support for specifying setting names in the command line with dashes instead of underscores, for example, --max-threads instead of --max_threads. Additionally, support Unicode dash characters (such as the em dash) instead of --; this is useful when you communicate with a team in another company, and a manager from that team copy-pasted code from MS Word. #48985 (alekseygolub).
  • Add fallback to password authentication when authentication with SSL user certificate has failed. Closes #48974. #48989 (Nikolay Degterinsky).
  • Improve the embedded dashboard. Close #46671. #49036 (Kevin Zhang).
  • Add profile events for log messages, so you can easily see the count of log messages by severity. #49042 (Alexey Milovidov).
  • In previous versions, the LineAsString format worked inconsistently when the parallel parsing was enabled or not, in presence of DOS or macOS Classic line breaks. This closes #49039. #49052 (Alexey Milovidov).
  • The exception message about the unparsed query parameter will also tell about the name of the parameter. Reimplement #48878. Close #48772. #49061 (Alexey Milovidov).
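
The trailing-comma relaxation from the entry above, as a tiny sketch:

```sql
-- A trailing comma before FROM is now accepted in the expression list.
SELECT
    number,
    number * 2,
FROM numbers(3);
```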

Build/Testing/Packaging Improvement

  • Update time zones. The following were updated: Africa/Cairo, Africa/Casablanca, Africa/El_Aaiun, America/Bogota, America/Cambridge_Bay, America/Ciudad_Juarez, America/Godthab, America/Inuvik, America/Iqaluit, America/Nuuk, America/Ojinaga, America/Pangnirtung, America/Rankin_Inlet, America/Resolute, America/Whitehorse, America/Yellowknife, Asia/Gaza, Asia/Hebron, Asia/Kuala_Lumpur, Asia/Singapore, Canada/Yukon, Egypt, Europe/Kirov, Europe/Volgograd, Singapore. #48572 (Alexey Milovidov).
  • Reduce the number of dependencies in the header files to speed up the build. #47984 (Dmitry Novik).
  • Randomize compression of marks and indices in tests. #48286 (Alexey Milovidov).
  • Bump internal ZSTD from 1.5.4 to 1.5.5. #46797 (Robert Schulze).
  • Randomize vertical merges from compact to wide parts in tests. #48287 (Raúl Marín).
  • Support for CRC32 checksum in HDFS. Fix performance issues. #48614 (Alexey Milovidov).
  • Remove remainders of GCC support. #48671 (Robert Schulze).
  • Add CI run with new analyzer infrastructure enabled. #48719 (Dmitry Novik).

Bug Fix (user-visible misbehavior in an official stable release)

ClickHouse release 23.3 LTS, 2023-03-30

Upgrade Notes

  • Lightweight DELETEs are production ready and enabled by default. The DELETE query for MergeTree tables is now available by default; see the sketch after this list.
  • The behavior of *domain*RFC and netloc functions is slightly changed: relaxed the set of symbols that are allowed in the URL authority for better conformance. #46841 (Azat Khuzhin).
  • Prohibited creating tables based on KafkaEngine with DEFAULT/EPHEMERAL/ALIAS/MATERIALIZED statements for columns. #47138 (Aleksandr Musorin).
  • An "asynchronous connection drain" feature is removed. Related settings and metrics are removed as well. It was an internal feature, so the removal should not affect users who had never heard about that feature. #47486 (Alexander Tokmakov).
  • Support 256-bit Decimal data type (more than 38 digits) in arraySum/Min/Max/Avg/Product, arrayCumSum/CumSumNonNegative, arrayDifference, array construction, IN operator, query parameters, groupArrayMovingSum, statistical functions, min/max/any/argMin/argMax, PostgreSQL wire protocol, MySQL table engine and function, sumMap, mapAdd, mapSubtract, arrayIntersect. Add support for big integers in arrayIntersect. Statistical aggregate functions involving moments (such as corr or various TTests) will use Float64 as their internal representation (they were using Decimal128 before this change, but it was pointless), and these functions can return nan instead of inf in case of infinite variance. Some functions were allowed on Decimal256 data types but returned Decimal128 in previous versions - now it is fixed. This closes #47569. This closes #44864. This closes #28335. #47594 (Alexey Milovidov).
  • Make backup_threads/restore_threads server settings (instead of user settings). #47881 (Azat Khuzhin).
  • Do not allow const and non-deterministic secondary indices #46839 (Anton Popov).
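
A minimal sketch of the now-default lightweight DELETE on a MergeTree table (table and data are placeholders):

```sql
CREATE TABLE events (user_id UInt64, msg String) ENGINE = MergeTree ORDER BY user_id;
INSERT INTO events VALUES (1, 'a'), (42, 'b');

-- Marks matching rows as deleted and hides them from subsequent SELECTs;
-- the physical cleanup happens during later merges.
DELETE FROM events WHERE user_id = 42;
```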

New Feature

  • Add a new mode for splitting the work on replicas using settings parallel_replicas_custom_key and parallel_replicas_custom_key_filter_type. If the cluster consists of a single shard with multiple replicas, up to max_parallel_replicas will be randomly picked and turned into shards. For each shard, a corresponding filter is added to the query on the initiator before being sent to the shard. If the cluster consists of multiple shards, it will behave the same as sample_key but with the possibility to define an arbitrary key. #45108 (Antonio Andelic).
  • An option to display partial result on cancel: Added query setting partial_result_on_first_cancel allowing the canceled query (e.g. due to Ctrl-C) to return a partial result. #45689 (Alexey Perevyshin).
  • Added support of arbitrary tables engines for temporary tables (except for Replicated and KeeperMap engines). Close #31497. #46071 (Roman Vasin).
  • Add support for replication of user-defined SQL functions using centralized storage in Keeper. #46085 (Aleksei Filatov).
  • Implement system.server_settings (similar to system.settings), which will contain server configurations. #46550 (pufit).
  • Support for UNDROP TABLE query. Closes #46811. #47241 (chen).
  • Allow separate grants for named collections (e.g. to be able to give SHOW/CREATE/ALTER/DROP named collection access only to certain collections, instead of all at once). Closes #40894. Add new access type NAMED_COLLECTION_CONTROL which is not given to user default unless explicitly added to the user config (is required to be able to do GRANT ALL), also show_named_collections is no longer obligatory to be manually specified for user default to be able to have full access rights as was in 23.2. #46241 (Kseniia Sumarokova).
  • Allow nested custom disks. Previously custom disks supported only flat disk structure. #47106 (Kseniia Sumarokova).
  • Introduce a function widthBucket (with a WIDTH_BUCKET alias for compatibility). #42974. #46790 (avoiderboi).
  • Add the new functions parseDateTime/parseDateTimeInJodaSyntax, which parse a String to DateTime according to a specified format string: parseDateTime uses MySQL syntax, parseDateTimeInJodaSyntax uses Joda syntax. See the sketch after this list. #46815 (李扬).
  • Use dummy UInt8 for the default structure of table function null. Closes #46930. #47006 (flynn).
  • Support for date format with a comma, like Dec 15, 2021 in the parseDateTimeBestEffort function. Closes #46816. #47071 (chen).
  • Add settings http_wait_end_of_query and http_response_buffer_size that corresponds to URL params wait_end_of_query and buffer_size for the HTTP interface. This allows changing these settings in the profiles. #47108 (Vladimir C).
  • Add system.dropped_tables table that shows tables that were dropped from Atomic databases but were not completely removed yet. #47364 (chen).
  • Add INSTR as alias of positionCaseInsensitive for MySQL compatibility. Closes #47529. #47535 (flynn).
  • Added toDecimalString function allowing to convert numbers to string with fixed precision. #47838 (Andrey Zvonov).
  • Add a merge tree setting max_number_of_mutations_for_replica. It limits the number of part mutations per replica to the specified amount. Zero means no limit on the number of mutations per replica (the execution can still be constrained by other settings). #48047 (Vladimir C).
  • Add the Map-related function mapFromArrays, which allows the creation of a map from a pair of arrays. #31125 (李扬).
  • Allow control of compression in Parquet/ORC/Arrow output formats, adds support for more compression input formats. This closes #13541. #47114 (Kruglov Pavel).
  • Add SSL User Certificate authentication to the native protocol. Closes #47077. #47596 (Nikolay Degterinsky).
  • Add *OrNull() and *OrZero() variants for parseDateTime, add alias str_to_date for MySQL parity. #48000 (Robert Schulze).
  • Added operator REGEXP (similar to operators "LIKE", "IN", "MOD" etc.) for better compatibility with MySQL #47869 (Robert Schulze).
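
A sketch of the two parsing functions mentioned above, one with a MySQL-style and one with a Joda-style format string (only the most common specifiers are shown; see the documentation for the full list):

```sql
-- MySQL-style format string.
SELECT parseDateTime('2023-03-30 15:04:05', '%Y-%m-%d %H:%i:%s') AS mysql_style;

-- Joda-style format string.
SELECT parseDateTimeInJodaSyntax('2023-03-30 15:04:05', 'yyyy-MM-dd HH:mm:ss') AS joda_style;
```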

Performance Improvement

  • Marks in memory are now compressed, using 3-6x less memory. #47290 (Michael Kolupaev).
  • Backups for large numbers of files were unbelievably slow in previous versions. Not anymore. Now they are unbelievably fast. #47251 (Alexey Milovidov). Introduced a separate thread pool for backup's IO operations. This will allow scaling it independently of other pools and increase performance. #47174 (Nikita Mikhaylov). Use MultiRead request and retries for collecting metadata at the final stage of backup processing. #47243 (Nikita Mikhaylov). If a backup and restoring data are both in S3 then server-side copy should be used from now on. #47546 (Vitaly Baranov).
  • Fixed excessive reading in queries with FINAL. #47801 (Nikita Taranov).
  • Setting max_final_threads would be set to the number of cores at server startup (by the same algorithm as used for max_threads). This improves the concurrency of final execution on servers with high number of CPUs. #47915 (Nikita Taranov).
  • Allow executing reading pipeline for DIRECT dictionary with CLICKHOUSE source in multiple threads. To enable set dictionary_use_async_executor=1 in SETTINGS section for source in CREATE DICTIONARY statement. #47986 (Vladimir C).
  • Optimize one nullable key aggregate performance. #45772 (LiuNeng).
  • Implemented lowercase tokenbf_v1 index utilization for hasTokenOrNull, hasTokenCaseInsensitive and hasTokenCaseInsensitiveOrNull. #46252 (ltrk2).
  • Optimize functions position and LIKE by searching the first two chars using SIMD. #46289 (Jiebin Sun).
  • Optimize queries from the system.detached_parts, which could be significantly large. Added several sources with respect to the block size limitation; in each block, an IO thread pool is used to calculate the part size, i.e. to make syscalls in parallel. #46624 (Sema Checherinda).
  • Increase the default value of max_replicated_merges_in_queue for ReplicatedMergeTree tables from 16 to 1000. It allows faster background merge operation on clusters with a very large number of replicas, such as clusters with shared storage in ClickHouse Cloud. #47050 (Alexey Milovidov).
  • Updated clickhouse-copier to use GROUP BY instead of DISTINCT to get the list of partitions. For large tables, this reduced the select time from over 500s to under 1s. #47386 (Clayton McClure).
  • Fix performance degradation in ASOF JOIN. #47544 (Ongkong).
  • Even more batching in Keeper. Improve performance by avoiding breaking batches on read requests. #47978 (Antonio Andelic).
  • Allow PREWHERE for Merge with different DEFAULT expressions for columns. #46831 (Azat Khuzhin).
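
A rough sketch of the dictionary_use_async_executor setting described above; the dictionary, table, and column names are hypothetical, and the exact set of clauses may need adjusting for a real deployment:

    CREATE DICTIONARY dict_direct
    (
        id UInt64,
        value String
    )
    PRIMARY KEY id
    SOURCE(CLICKHOUSE(TABLE 'source_table' SETTINGS(dictionary_use_async_executor = 1)))
    LAYOUT(DIRECT());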

Experimental Feature

  • Parallel replicas: Improved the overall performance by better utilizing the local replica, and forbid reading with parallel replicas from non-replicated MergeTree tables by default. #47858 (Nikita Mikhaylov).
  • Support filter push down to left table for JOIN with Join, Dictionary and EmbeddedRocksDB tables if the experimental Analyzer is enabled. #47280 (Maksim Kita).
  • Now ReplicatedMergeTree with zero copy replication has less load to Keeper. #47676 (alesapin).
  • Fix create materialized view with MaterializedPostgreSQL #40807 (Maksim Buren).

Improvement

  • Enable input_format_json_ignore_unknown_keys_in_named_tuple by default. #46742 (Kruglov Pavel).
  • Allow errors to be ignored while pushing to MATERIALIZED VIEW (add new setting materialized_views_ignore_errors, false by default, but set to true unconditionally for flushing logs to system.*_log tables). #46658 (Azat Khuzhin).
  • Track the file queue of distributed sends in memory. #45491 (Azat Khuzhin).
  • Now X-ClickHouse-Query-Id and X-ClickHouse-Timezone headers are added to responses in all queries via HTTP protocol. Previously it was done only for SELECT queries. #46364 (Anton Popov).
  • External tables from MongoDB: support for connection to a replica set via a URI with a list of host:port pairs and support for the readPreference option in MongoDB dictionaries. Example URI: mongodb://db0.example.com:27017,db1.example.com:27017,db2.example.com:27017/?replicaSet=myRepl&readPreference=primary. #46524 (artem-yadr).
  • This improvement should be invisible to users. Re-implement projection analysis on top of the query plan. Added setting query_plan_optimize_projection=1 to switch between the old and new versions. Fixes #44963. #46537 (Nikolai Kochetov).
  • Use Parquet format v2 instead of v1 in output format by default. Add setting output_format_parquet_version to control parquet version, possible values 1.0, 2.4, 2.6, 2.latest (default). #46617 (Kruglov Pavel).
  • It is now possible to use the new configuration syntax to configure Kafka topics with periods (.) in their name. #46752 (Robert Schulze).
  • Fix heuristics that check hyperscan patterns for problematic repeats. #46819 (Robert Schulze).
  • Don't report the ZK node exists error to system.errors when a block was created concurrently by a different replica. #46820 (Raúl Marín).
  • Increase the limit for opened files in clickhouse-local. It will be able to read from web tables on servers with a huge number of CPU cores. Do not back off reading from the URL table engine in case of too many opened files. This closes #46852. #46853 (Alexey Milovidov).
  • Exceptions thrown when numbers cannot be parsed now have an easier-to-read exception message. #46917 (Robert Schulze).
  • The system.backups table is now updated after every processed task to track the progress of backups. #46989 (Aleksandr Musorin).
  • Allow types conversion in Native input format. Add settings input_format_native_allow_types_conversion that controls it (enabled by default). #46990 (Kruglov Pavel).
  • Allow IPv4 in the range function to generate IP ranges. #46995 (Yakov Olkhovskiy).
  • Improve exception message when it's impossible to move a part from one volume/disk to another. #47032 (alesapin).
  • Support Bool type in JSONType function. Previously Null type was mistakenly returned for bool values. #47046 (Anton Popov).
  • Use _request_body parameter to configure predefined HTTP queries. #47086 (Constantine Peresypkin).
  • Automatic indentation in the built-in UI SQL editor when Enter is pressed. #47113 (Alexey Korepanov).
  • Self-extraction with 'sudo' will attempt to set uid and gid of extracted files to the running user. #47116 (Yakov Olkhovskiy).
  • Previously, the repeat function's second argument only accepted an unsigned integer type, which meant it could not accept values such as -1. This behavior differed from that of the Spark function. In this update, the repeat function has been modified to match the behavior of the Spark function. It now accepts the same types of inputs, including negative integers. Extensive testing has been performed to verify the correctness of the updated implementation. #47134 (KevinyhZou). Note: the changelog entry was rewritten by ChatGPT.
  • Remove ::__1 part from stacktraces. Display std::basic_string<char, ... as String in stacktraces. #47171 (Mike Kot).
  • Reimplement interserver mode to avoid replay attacks (note, that change is backward compatible with older servers). #47213 (Azat Khuzhin).
  • Improve recognition of regular expression groups and refine the regexp_tree dictionary. #47218 (Han Fei).
  • Keeper improvement: Add new 4LW clrs to clean resources used by Keeper (e.g. release unused memory). #47256 (Antonio Andelic).
  • Add optional arguments to codecs DoubleDelta(bytes_size), Gorilla(bytes_size), FPC(level, float_size), this allows using these codecs without column type in clickhouse-compressor. Fix possible aborts and arithmetic errors in clickhouse-compressor with these codecs. Fixes: https://github.com/ClickHouse/ClickHouse/discussions/47262. #47271 (Kruglov Pavel).
  • Add support for big int types to the runningDifference function. Closes #47194. #47322 (Nikolay Degterinsky).
  • Add an expiration window for S3 credentials that have an expiration time to avoid ExpiredToken errors in some edge cases. It can be controlled with expiration_window_seconds config, the default is 120 seconds. #47423 (Antonio Andelic).
  • Support Decimals and Date32 in Avro format. #47434 (Kruglov Pavel).
  • Do not start the server if an interrupted conversion from Ordinary to Atomic was detected, print a better error message with troubleshooting instructions. #47487 (Alexander Tokmakov).
  • Add a new column kind to the system.opentelemetry_span_log. This column holds the value of SpanKind defined in OpenTelemetry. #47499 (Frank Chen).
  • Allow reading/writing nested arrays in Protobuf format with only the root field name as the column name. Previously, the column name had to contain all nested field names (like a.b.c Array(Array(Array(UInt32)))); now you can use just a Array(Array(Array(UInt32))). #47650 (Kruglov Pavel).
  • Added an optional STRICT modifier for SYSTEM SYNC REPLICA which makes the query wait for the replication queue to become empty (just like it worked before https://github.com/ClickHouse/ClickHouse/pull/45648). See the example after this list. #47659 (Alexander Tokmakov).
  • Improve the naming of some OpenTelemetry span logs. #47667 (Frank Chen).
  • Prevent using too long chains of aggregate function combinators (they can lead to slow queries in the analysis stage). This closes #47715. #47716 (Alexey Milovidov).
  • Support for subquery in parameterized views; resolves #46741 #47725 (SmitaRKulkarni).
  • Fix memory leak in MySQL integration (reproduces with connection_auto_close=1). #47732 (Kseniia Sumarokova).
  • Improved error handling in the code related to Decimal parameters, resulting in more informative error messages. Previously, when incorrect Decimal parameters were supplied, the error message generated was unclear or unhelpful. With this update, the error message printed has been fixed to provide more detailed and useful information, making it easier to identify and correct issues related to Decimal parameters. #47812 (Yu Feng). Note: this changelog entry is rewritten by ChatGPT.
  • The setting exact_rows_before_limit makes rows_before_limit_at_least accurately reflect the number of rows read before the limit was reached. This pull request addresses issues encountered when the query involves distributed processing across multiple shards or sorting operations; prior to this update, these scenarios were not functioning as intended. #47874 (Amos Bird).
  • ThreadPools metrics introspection. #47880 (Azat Khuzhin).
  • Add WriteBufferFromS3Microseconds and WriteBufferFromS3RequestsErrors profile events. #47885 (Antonio Andelic).
  • Add --link and --noninteractive (-y) options to ClickHouse install. Closes #47750. #47887 (Nikolay Degterinsky).
  • Fixed UNKNOWN_TABLE exception when attaching to a materialized view that has dependent tables that are not available. This might be useful when trying to restore state from a backup. #47975 (MikhailBurdukov).
  • Fix case when the (optional) path is not added to an encrypted disk configuration. #47981 (Kseniia Sumarokova).
  • Support for CTEs in parameterized views. Implementation: updated to allow query parameters while evaluating scalar subqueries. #48065 (SmitaRKulkarni).
  • Support big integers (U)Int128/(U)Int256, Map with any key type and DateTime64 with any precision (not only 3 and 6). #48119 (Kruglov Pavel).
  • Allow skipping errors related to unknown enum values in row input formats. #48133 (Alexey Milovidov).
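
For example, the STRICT modifier mentioned above can be used as follows (the database and table names are hypothetical):

    -- Wait for the replication queue to become empty (the new STRICT behavior).
    SYSTEM SYNC REPLICA db.replicated_table STRICT;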

Build/Testing/Packaging Improvement

  • ClickHouse now builds with C++23. #47424 (Robert Schulze).
  • Fuzz EXPLAIN queries in the AST Fuzzer. #47803 #47852 (flynn).
  • Split stress test and the automated backward compatibility check (now Upgrade check). #44879 (Kruglov Pavel).
  • Updated the Ubuntu Image for Docker to calm down some bogus security reports. #46784 (Julio Jimenez). Please note that ClickHouse has no dependencies and does not require Docker.
  • Adds a prompt to allow the removal of an existing clickhouse download when using "curl | sh" download of ClickHouse. Prompt is "ClickHouse binary clickhouse already exists. Overwrite? [y/N]". #46859 (Dan Roscigno).
  • Fix an error during server startup on old distros (e.g. Amazon Linux 2) and on ARM, where glibc 2.28 symbols were not found. #47008 (Robert Schulze).
  • Prepare for clang 16. #47027 (Amos Bird).
  • Added a CI check which ensures ClickHouse can run with an old glibc on ARM. #47063 (Robert Schulze).
  • Add a style check to prevent incorrect usage of the NDEBUG macro. #47699 (Alexey Milovidov).
  • Speed up the build a little. #47714 (Alexey Milovidov).
  • Bump vectorscan to 5.4.9. #47955 (Robert Schulze).
  • Add a unit test to assert Apache Arrow's fatal logging does not abort. It covers the changes in ClickHouse/arrow#16. #47958 (Arthur Passos).
  • Restore the ability of native macOS debug server build to start. #48050 (Robert Schulze). Note: this change is only relevant for development, as the ClickHouse official builds are done with cross-compilation.

Bug Fix (user-visible misbehavior in an official stable release)

ClickHouse release 23.2, 2023-02-23

Backward Incompatible Change

  • Extend function "toDayOfWeek()" (alias: "DAYOFWEEK") with a mode argument that encodes whether the week starts on Monday or Sunday and whether counting starts at 0 or 1. For consistency with other date time functions, the mode argument was inserted between the time and the time zone arguments. This breaks existing usage of the (previously undocumented) 2-argument syntax "toDayOfWeek(time, time_zone)". A fix is to rewrite the function into "toDayOfWeek(time, 0, time_zone)". #45233 (Robert Schulze).
  • Rename setting max_query_cache_size to filesystem_cache_max_download_size. #45614 (Kseniia Sumarokova).
  • The default user no longer has permissions for the access type SHOW NAMED COLLECTION (e.g. the default user will no longer be able to grant ALL to other users as it could before; therefore, this PR is backward incompatible). #46010 (Kseniia Sumarokova).
  • If the SETTINGS clause is specified before the FORMAT clause, the settings will be applied to formatting as well. #46003 (Azat Khuzhin).
  • Remove support for setting materialized_postgresql_allow_automatic_update (which was by default turned off). #46106 (Kseniia Sumarokova).
  • Slightly improve performance of countDigits on realistic datasets. This closed #44518. In previous versions, countDigits(0) returned 0; now it returns 1, which is more correct, and follows the existing documentation. #46187 (Alexey Milovidov).
  • Disallow creation of new columns compressed by a combination of codecs "Delta" or "DoubleDelta" followed by codecs "Gorilla" or "FPC". This can be bypassed using setting "allow_suspicious_codecs = true". #45652 (Robert Schulze).
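
As a concrete illustration of the toDayOfWeek() change above (the timezone value is just an example):

    -- The previously undocumented 2-argument form
    --   toDayOfWeek(now(), 'Europe/Amsterdam')
    -- must now pass the mode explicitly; mode 0 preserves the old behavior:
    SELECT toDayOfWeek(now(), 0, 'Europe/Amsterdam');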

New Feature

  • Add StorageIceberg and table function iceberg to access iceberg table store on S3. #45384 (flynn).
  • Allow configuring storage as SETTINGS disk = '<disk_name>' (instead of storage_policy) and with explicit disk creation SETTINGS disk = disk(type=s3, ...). #41976 (Kseniia Sumarokova).
  • Expose ProfileEvents counters in system.part_log. #38614 (Bharat Nallan).
  • Enrichment of the existing ReplacingMergeTree engine to allow duplicate insertion. It leverages the power of both ReplacingMergeTree and CollapsingMergeTree in one MergeTree engine. Deleted data is not returned when queried, but it is not removed from disk either. #41005 (youennL-cs).
  • Add generateULID function. Closes #36536. #44662 (Nikolay Degterinsky).
  • Add the corrMatrix aggregate function, which calculates the correlation for each pair of columns. In addition, since the aggregate functions covarSamp and covarPop are similar to corr, covarSampMatrix and covarPopMatrix were added as well. Closes #44587. #44680 (FFFFFFFHHHHHHH).
  • Introduce arrayShuffle function for random array permutations. #45271 (Joanna Hulboj).
  • Support the FIXED_SIZE_BINARY type in Arrow and FIXED_LENGTH_BYTE_ARRAY in Parquet, and match them to FixedString. Add settings output_format_parquet_fixed_string_as_fixed_byte_array/output_format_arrow_fixed_string_as_fixed_byte_array to control the default output type for FixedString. Closes #45326. #45340 (Kruglov Pavel).
  • Add a new column last_exception_time to system.replication_queue. #45457 (Frank Chen).
  • Add two new functions which allow for user-defined keys/seeds with SipHash{64,128}. #45513 (Salvatore Mesoraca).
  • Allow a three-argument version of the table function format. Closes #45808. #45873 (FFFFFFFHHHHHHH).
  • Add JodaTime format support for 'x','w','S'. Refer to https://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html. #46073 (zk_kiger).
  • Support window function ntile. (lgbo).
  • Add setting final to implicitly apply the FINAL modifier to every table. #40945 (Arthur Passos).
  • Added arrayPartialSort and arrayPartialReverseSort functions. #46296 (Joanna Hulboj).
  • The new http parameter client_protocol_version allows setting a client protocol version for HTTP responses using the Native format. #40397. #46360 (Geoff Genz).
  • Add new function regexpExtract, like spark function REGEXP_EXTRACT for compatibility. It is similar to the existing function extract. #46469 (李扬).
  • Add new function JSONArrayLength, which returns the number of elements in the outermost JSON array. The function returns NULL if the input JSON string is invalid (see the examples after this list). #46631 (李扬).
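
Two of the new functions above in action; the inputs are arbitrary examples:

    -- Spark-compatible capture-group extraction.
    SELECT regexpExtract('100-200', '(\\d+)-(\\d+)', 1);  -- '100'
    -- Number of elements in the outermost JSON array (NULL for invalid JSON).
    SELECT JSONArrayLength('[1, 2, "three"]');            -- 3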

Performance Improvement

  • The introduced logic works if the PREWHERE condition is a conjunction of multiple conditions (cond1 AND cond2 AND ... ). It groups those conditions that require reading the same columns into steps. After each step the corresponding part of the full condition is computed and the result rows might be filtered. This allows reading fewer rows in the next steps, thus saving IO bandwidth and doing less computation. This logic is disabled by default for now. It will be enabled by default in one of the future releases once it is known to not have any regressions, so it is highly encouraged to be used for testing. It can be controlled by two settings: "enable_multiple_prewhere_read_steps" and "move_all_conditions_to_prewhere". #46140 (Alexander Gololobov).
  • An option added to aggregate partitions independently if table partition key and group by key are compatible. Controlled by the setting allow_aggregate_partitions_independently. Disabled by default because of limited applicability (please refer to the docs). #45364 (Nikita Taranov).
  • Allow using Vertical merge algorithm with parts in Compact format. This will allow ClickHouse server to use much less memory for background operations. This closes #46084. #45681 #46282 (Anton Popov).
  • Optimize Parquet reader by using batch reader. #45878 (LiuNeng).
  • Add new local_filesystem_read_method method io_uring based on the asynchronous Linux io_uring subsystem, improving read performance almost universally compared to the default pread method. #38456 (Saulius Valatka).
  • Rewrite aggregate functions with an if expression as argument when logically equivalent. For example, avg(if(cond, col, null)) can be rewritten to avgIf(col, cond). This helps performance (see the example after this list). #44730 (李扬).
  • Improve lower/upper function performance with avx512 instructions. #37894 (yaqi-zhao).
  • Remove the limitation that on systems with >=32 cores and SMT disabled ClickHouse uses only half of the cores (the case when you disable Hyper Threading in BIOS). #44973 (Robert Schulze).
  • Improve performance of the function multiIf by columnar execution, a speedup of 2.3x. #45296 (李扬).
  • Add fast path for function position when the needle is empty. #45382 (李扬).
  • Enable query_plan_remove_redundant_sorting optimization by default. Optimization implemented in #45420. #45567 (Igor Nikonov).
  • Increased HTTP Transfer Encoding chunk size to improve performance of large queries using the HTTP interface. #45593 (Geoff Genz).
  • Fixed performance of short SELECT queries that read from tables with large number of Array/Map/Nested columns. #45630 (Anton Popov).
  • Improve performance of filtering for big integers and decimal types. #45949 (李扬).
  • This change could effectively reduce the overhead of obtaining the filter from ColumnNullable(UInt8) and improve the overall query performance. To evaluate the impact of this change, we adopted TPC-H benchmark but revised the column types from non-nullable to nullable, and we measured the QPS of its queries as the performance indicator. #45962 (Zhiguo Zhou).
  • Make the _part and _partition_id virtual columns LowCardinality(String). Closes #45964. #45975 (flynn).
  • Improve the performance of Decimal conversion when the scale does not change. #46095 (Alexey Milovidov).
  • Allow increasing prefetching for read data. #46168 (Kseniia Sumarokova).
  • Rewrite arrayExists(x -> x = 1, arr) -> has(arr, 1), which improves performance by 1.34x. #46188 (李扬).
  • Fix too big memory usage for vertical merges on non-remote disk. Respect max_insert_delayed_streams_for_parallel_write for the remote disk. #46275 (Nikolai Kochetov).
  • Update zstd to v1.5.4. It has some minor improvements in performance and compression ratio. If you run replicas with different versions of ClickHouse you may see reasonable error messages Data after merge/mutation is not byte-identical to data on another replicas. with explanation. These messages are Ok and you should not worry. #46280 (Raúl Marín).
  • Fix performance degradation caused by #39737. #46309 (Alexey Milovidov).
  • The replicas_status handle will answer quickly even in case of a large replication queue. #46310 (Alexey Milovidov).
  • Add avx512 support for the aggregate function sum, unary arithmetic functions, and comparison functions. #37870 (zhao zhou).
  • Rewrote the code around marks distribution and the overall coordination of the reading in order to achieve the maximum performance improvement. This closes #34527. #43772 (Nikita Mikhaylov).
  • Remove redundant DISTINCT clauses in query (subqueries). Implemented on top of query plan. It does similar optimization as optimize_duplicate_order_by_and_distinct regarding DISTINCT clauses. Can be enabled via query_plan_remove_redundant_distinct setting. Related to #42648. #44176 (Igor Nikonov).
  • A few query rewrite optimizations: sumIf(123, cond) -> 123 * countIf(1, cond), sum(if(cond, 123, 0)) -> 123 * countIf(cond), sum(if(cond, 0, 123)) -> 123 * countIf(not(cond)) #44728 (李扬).
  • Improved how memory-bound merging and aggregation in order interact on top of the query plan. Previously we fell back to explicit sorting for aggregation in order in some cases when it wasn't actually needed. #45892 (Nikita Taranov).
  • Concurrent merges are scheduled using round-robin by default to ensure fair and starvation-free operation. Previously in heavily overloaded shards, big merges could possibly be starved by smaller merges due to the use of strict priority scheduling. Added background_merges_mutations_scheduling_policy server config option to select scheduling algorithm (round_robin or shortest_task_first). #46247 (Sergei Trifonov).
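
To make the aggregate rewrite above concrete: the two queries below are logically equivalent, and the first is now rewritten into the second internally (the table and column names are hypothetical):

    -- As written by the user:
    SELECT avg(if(status = 200, latency_ms, NULL)) FROM requests;
    -- Form produced by the rewrite, using the -If combinator:
    SELECT avgIf(latency_ms, status = 200) FROM requests;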

Improvement

  • Enable retries for INSERT by default in case of ZooKeeper session loss. We already use it in production. #46308 (Alexey Milovidov).
  • Add ability to ignore unknown keys in JSON object for named tuples (input_format_json_ignore_unknown_keys_in_named_tuple). #45678 (Azat Khuzhin).
  • Support the optimization that moves WHERE conditions on the sorting key expression to PREWHERE for queries with FINAL. #38893. #38950 (hexiaoting).
  • Add new metrics for backups: num_processed_files and processed_files_size, describing the actual number and total size of processed files. #42244 (Aleksandr).
  • Added retries on interserver DNS errors. #43179 (Anton Kozlov).
  • Keeper improvement: try preallocating space on the disk to avoid undefined out-of-space issues. Introduce setting max_log_file_size for the maximum size of Keeper's Raft log files. #44370 (Antonio Andelic).
  • Optimize the behavior of the replica delay API logic in case the replica is read-only. #45148 (mateng915).
  • Ask for the password in clickhouse-client interactively in a case when the empty password is wrong. Closes #46702. #46730 (Nikolay Degterinsky).
  • Mark Gorilla compression on columns of non-Float* type as suspicious. #45376 (Robert Schulze).
  • Show replica name that is executing a merge in the postpone_reason column. #45458 (Frank Chen).
  • Save exception stack trace in part_log. #45459 (Frank Chen).
  • The regexp_tree dictionary is polished and now it is compatible with https://github.com/ua-parser/uap-core. #45631 (Han Fei).
  • Updated checking of SYSTEM SYNC REPLICA, resolves #45508 #45648 (SmitaRKulkarni).
  • Rename setting replication_alter_partitions_sync to alter_sync. #45659 (Antonio Andelic).
  • The generateRandom table function and the engine now support LowCardinality data types. This is useful for testing, for example you can write INSERT INTO table SELECT * FROM generateRandom() LIMIT 1000. This is needed to debug #45590. #45661 (Alexey Milovidov).
  • The experimental query result cache now provides more modular configuration settings. #45679 (Robert Schulze).
  • Renamed "query result cache" to "query cache". #45682 (Robert Schulze).
  • Add the SYSTEM SYNC FILE CACHE command. It performs the sync syscall. #8921. #45685 (DR).
  • Add a new S3 setting allow_head_object_request. This PR makes usage of GetObjectAttributes request instead of HeadObject introduced in https://github.com/ClickHouse/ClickHouse/pull/45288 optional (and disabled by default). #45701 (Vitaly Baranov).
  • Add the ability to override connection settings based on connection names (this means you no longer need to store a password for each connection: you can simply put everything into ~/.clickhouse-client/config.xml and even use different history files for them, which can also be useful). #45715 (Azat Khuzhin).
  • Arrow format: support the duration type. Closes #45669. #45750 (flynn).
  • Extend the logging in the Query Cache to improve investigations of the caching behavior. #45751 (Robert Schulze).
  • The query cache's server-level settings are now reconfigurable at runtime. #45758 (Robert Schulze).
  • Hide password in logs when a table function's arguments are specified with a named collection. #45774 (Vitaly Baranov).
  • Improve internal S3 client to correctly deduce regions and redirections for different types of URLs. #45783 (Antonio Andelic).
  • Add support for Map, IPv4 and IPv6 types in generateRandom. Mostly useful for testing. #45785 (Raúl Marín).
  • Support empty/notEmpty for IP types. #45799 (Yakov Olkhovskiy).
  • The column num_processed_files was split into two columns: num_files (for BACKUP) and files_read (for RESTORE). The column processed_files_size was split into two columns: total_size (for BACKUP) and bytes_read (for RESTORE). #45800 (Vitaly Baranov).
  • Add support for SHOW ENGINES query for MySQL compatibility. #45859 (Filatenkov Artur).
  • Improved how the obfuscator deals with queries. #45867 (Raúl Marín).
  • Improve behaviour of conversion into Date for boundary value 65535 (2149-06-06). #46042 #45914 (Joanna Hulboj).
  • Add setting check_referential_table_dependencies to check referential dependencies on DROP TABLE. This PR solves #38326. #45936 (Vitaly Baranov).
  • Fix tupleElement to return Null when having Null argument. Closes #45894. #45952 (flynn).
  • Throw an error when no files satisfy the S3 wildcard. Closes #45587. #45957 (chen).
  • Use cluster state data to check concurrent backup/restore. #45982 (SmitaRKulkarni).
  • ClickHouse Client: Use "exact" matching for fuzzy search, which has correct case ignorance and more appropriate algorithm for matching SQL queries. #46000 (Azat Khuzhin).
  • Forbid the incorrect CREATE VIEW syntax CREATE VIEW X TO Y AS SELECT. Closes #4331. #46043 (flynn).
  • The Log storage family supports setting storage_policy. Closes #43421. #46044 (flynn).
  • Improve JSONColumns format when the result is empty. Closes #46024. #46053 (flynn).
  • Add reference implementation for SipHash128. #46065 (Salvatore Mesoraca).
  • Add a new metric to record allocations times and bytes using mmap. #46068 (李扬).
  • Previously, for functions like leftPad, rightPad, leftPadUTF8 and rightPadUTF8, the second argument length had to be of type UInt8|16|32|64|128|256, which was too strict for ClickHouse users and inconsistent with other similar functions like arrayResize and substring; this restriction has been relaxed. #46103 (李扬).
  • Fix assertion in the welchTTest function in debug build when the resulting statistics is NaN. Unified the behavior with other similar functions. Change the behavior of studentTTest to return NaN instead of throwing an exception because the previous behavior was inconvenient. This closes #41176 This closes #42162. #46141 (Alexey Milovidov).
  • More convenient usage of big integers and ORDER BY WITH FILL. Allow using plain integers for start and end points in WITH FILL when ORDER BY big (128-bit and 256-bit) integers. Fix the wrong result for big integers with negative start or end points. This closes #16733. #46152 (Alexey Milovidov).
  • Add parts, active_parts and total_marks columns to system.tables (requested in an issue). #46161 (attack204).
  • Functions "multi[Fuzzy]Match(Any|AnyIndex|AllIndices}" now reject regexes which will likely evaluate very slowly in vectorscan. #46167 (Robert Schulze).
  • When insert_null_as_default is enabled and a column doesn't have a defined default value, the default of the column type will be used. This PR also fixes using default values on NULLs in the case of LowCardinality columns. #46171 (Kruglov Pavel).
  • Prefer explicitly defined access keys for S3 clients. If use_environment_credentials is set to true, and the user has provided the access key through query or config, they will be used instead of the ones from the environment variable. #46191 (Antonio Andelic).
  • Add an alias "DATE_FORMAT()" for function "formatDateTime()" to improve compatibility with MySQL's SQL dialect, extend function formatDateTime with substitutions "a", "b", "c", "h", "i", "k", "l" "r", "s", "W". ### Documentation entry for user-facing changes User-readable short description: DATE_FORMAT is an alias of formatDateTime. Formats a Time according to the given Format string. Format is a constant expression, so you cannot have multiple formats for a single result column. (Provide link to formatDateTime). #46302 (Jake Bamrah).
  • Add ProfileEvents and CurrentMetrics about the callback tasks for parallel replicas (s3Cluster and MergeTree tables). #46313 (Alexey Milovidov).
  • Add support for DELETE and UPDATE for tables using KeeperMap storage engine. #46330 (Antonio Andelic).
  • Allow writing RENAME queries with query parameters. Resolves #45778. #46407 (Nikolay Degterinsky).
  • Fix parameterized SELECT queries with REPLACE transformer. Resolves #33002. #46420 (Nikolay Degterinsky).
  • Exclude the internal database used for temporary/external tables from the calculation of asynchronous metric "NumberOfDatabases". This makes the behavior consistent with system table "system.databases". #46435 (Robert Schulze).
  • Added last_exception_time column into distribution_queue table. #46564 (Aleksandr).
  • Support for IN clause with parameter in parameterized views. #46583 (SmitaRKulkarni).
  • Do not load named collections on server startup (load them on first access instead). #46607 (Kseniia Sumarokova).
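
A quick illustration of the DATE_FORMAT alias described above; the timestamp is an arbitrary example:

    -- DATE_FORMAT is an alias of formatDateTime; %i (minutes) is one of the newly added substitutions.
    SELECT DATE_FORMAT(toDateTime('2023-02-23 12:34:56'), '%Y-%m-%d %H:%i:%s');  -- '2023-02-23 12:34:56'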

Build/Testing/Packaging Improvement

  • Introduce GWP-ASan implemented by the LLVM runtime. This closes #27039. #45226 (Han Fei).
  • We want to make our tests less stable and more flaky: add randomization for merge tree settings in tests. #38983 (Anton Popov).
  • Enable HDFS support on PowerPC, which helps to fix the following functional tests: 02113_hdfs_assert.sh, 02244_hdfs_cluster.sql and 02368_cancel_write_into_hdfs.sh. #44949 (MeenaRenganathan22).
  • Add systemd.service file for clickhouse-keeper. Fixes #44293. #45568 (Mikhail f. Shiryaev).
  • ClickHouse's fork of poco was moved from "contrib/" to "base/poco/". #46075 (Robert Schulze).
  • Add an option for clickhouse-watchdog to restart the child process. This does not make a lot of use. #46312 (Alexey Milovidov).
  • If the environment variable CLICKHOUSE_DOCKER_RESTART_ON_EXIT is set to 1, the Docker container will run clickhouse-server as a child instead of the first process, and restart it when it exits. #46391 (Alexey Milovidov).
  • Fix Systemd service file. #46461 (SuperDJY).
  • Raised the minimum Clang version needed to build ClickHouse from 12 to 15. #46710 (Robert Schulze).
  • Upgrade Intel QPL from v0.3.0 to v1.0.0. Build libaccel-config and link it statically to the QPL library instead of dynamically. #45809 (jasperzhu).

Bug Fix (user-visible misbehavior in official stable release)

  • Flush data exactly by rabbitmq_flush_interval_ms or by rabbitmq_max_block_size in StorageRabbitMQ. Closes #42389. Closes #45160. #44404 (Kseniia Sumarokova).
  • Use PODArray for rendering in the sparkBar function, so we can control the memory usage. Closes #44467. #44489 (Duc Canh Le).
  • Fix functions (quantilesExactExclusive, quantilesExactInclusive) returning unsorted array elements. #45379 (wujunfu).
  • Fix uncaught exception in HTTPHandler when open telemetry is enabled. #45456 (Frank Chen).
  • Don't infer Dates from 8 digit numbers. It could lead to wrong data to be read. #45581 (Kruglov Pavel).
  • Fixes to correctly use odbc_bridge_use_connection_pooling setting. #45591 (Bharat Nallan).
  • When the callback in the cache is called, it is possible that this cache is destructed. To keep it safe, we capture members by value. It's also safe for task schedule because it will be deactivated before storage is destroyed. Resolve #45548. #45601 (Han Fei).
  • Fix data corruption when codecs Delta or DoubleDelta are combined with codec Gorilla. #45615 (Robert Schulze).
  • Correctly check types when using N-gram bloom filter index to avoid invalid reads. #45617 (Antonio Andelic).
  • A couple of segfaults have been reported around c-ares. They were introduced in my previous pull requests. I have fixed them with the help of Alexander Tokmakov. #45629 (Arthur Passos).
  • Fix key description when encountering duplicate primary keys. This can happen in projections. See #45590 for details. #45686 (Amos Bird).
  • Set the compression method and level for backups. Closes #45690. #45737 (Pradeep Chhetri).
  • Should use select_query_typed.limitByOffset() instead of select_query_typed.limitOffset(). #45817 (刘陶峰).
  • When using the experimental analyzer, queries like SELECT number FROM numbers(100) LIMIT 10 OFFSET 10; got wrong results (an empty result for this SQL). That was caused by an unnecessary offset step added by the planner. #45822 (刘陶峰).
  • Backward compatibility - allow implicit narrowing conversion from UInt64 to IPv4 - required for "INSERT ... VALUES ..." expression. #45865 (Yakov Olkhovskiy).
  • Fix the IPv6 parser for mixed IPv4 addresses with a missing first octet (like ::.1.2.3). #45871 (Yakov Olkhovskiy).
  • Add the query_kind column to the system.processes table and the SHOW PROCESSLIST query. Remove duplicate code. It fixes a bug: the global configuration parameter max_concurrent_select_queries was not respected to queries with INTERSECT or EXCEPT chains. #45872 (Alexey Milovidov).
  • Fix crash in a function stochasticLinearRegression. Found by WingFuzz. #45985 (Nikolai Kochetov).
  • Fix crash in SELECT queries with INTERSECT and EXCEPT modifiers that read data from tables with enabled sparse columns (controlled by setting ratio_of_defaults_for_sparse_serialization). #45987 (Anton Popov).
  • Fix read in order optimization for DESC sorting with FINAL, close #45815. #46009 (Vladimir C).
  • Fix reading of non-existing nested columns with multiple levels in compact parts. #46045 (Azat Khuzhin).
  • Fix elapsed column in system.processes (10x error). #46047 (Azat Khuzhin).
  • Follow-up fix for Replace domain IP types (IPv4, IPv6) with native https://github.com/ClickHouse/ClickHouse/pull/43221. #46087 (Yakov Olkhovskiy).
  • Fix environment variable substitution in the configuration when a parameter already has a value. This closes #46131. This closes #9547. #46144 (pufit).
  • Fix incorrect predicate push down with grouping sets. Closes #45947. #46151 (flynn).
  • Fix possible pipeline stuck error on fulls_sorting_join with constant keys. #46175 (Vladimir C).
  • Never rewrite tuple functions as literals during formatting to avoid incorrect results. #46232 (Salvatore Mesoraca).
  • Fix possible out of bounds error while reading LowCardinality(Nullable) in Arrow format. #46270 (Kruglov Pavel).
  • Fix SYSTEM UNFREEZE queries failing with the exception CANNOT_PARSE_INPUT_ASSERTION_FAILED. #46325 (Aleksei Filatov).
  • Fix possible crash which can be caused by an integer overflow while deserializing aggregating state of a function that stores HashTable. #46349 (Nikolai Kochetov).
  • Fix possible LOGICAL_ERROR in asynchronous inserts with invalid data sent in format VALUES. #46350 (Anton Popov).
  • Fixed a LOGICAL_ERROR on an attempt to execute ALTER ... MOVE PART ... TO TABLE. This type of query was never actually supported. #46359 (Alexander Tokmakov).
  • Fix s3Cluster schema inference in parallel distributed insert select when parallel_distributed_insert_select is enabled. #46381 (Kruglov Pavel).
  • Fix queries like ALTER TABLE ... UPDATE nested.arr1 = nested.arr2 ..., where arr1 and arr2 are fields of the same Nested column. #46387 (Anton Popov).
  • The scheduler may fail to schedule a task. If that happens, the whole MultipartUpload should be aborted and UploadHelper must wait for already scheduled tasks. #46451 (Dmitry Novik).
  • Fix PREWHERE for Merge with different default types (fixes some NOT_FOUND_COLUMN_IN_BLOCK when the default type for the column differs, also allow PREWHERE when the type of column is the same across tables, and prohibit it, only if it differs). #46454 (Azat Khuzhin).
  • Fix a crash that could happen when constant values are used in ORDER BY. Fixes #46466. #46493 (Nikolai Kochetov).
  • Do not throw an exception if the disk setting was specified at the query level, but storage_policy was specified in the config merge tree settings section; disk will override the setting from the config (see the example after this list). #46533 (Kseniia Sumarokova).
  • Fix an invalid processing of constant LowCardinality argument in function arrayMap. This bug could lead to a segfault in release, and logical error Bad cast in debug build. #46569 (Alexey Milovidov).
  • Fixes #46557. #46611 (Alexander Gololobov).
  • Fix endless restarts of clickhouse-server systemd unit if server cannot start within 1m30sec (Disable timeout logic for starting clickhouse-server from systemd service). #46613 (Azat Khuzhin).
  • Memory buffers allocated during asynchronous inserts were deallocated in the global context, and MemoryTracker counters for the corresponding user and query were not updated correctly. That led to false positive OOM exceptions. #46622 (Dmitry Novik).
  • Updated to not clear on_expression from table_join, as it's used by future analyze runs. Resolves #45185. #46487 (SmitaRKulkarni).
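
To illustrate the query-level disk setting mentioned above (the disk name is hypothetical and must exist in the server configuration):

    CREATE TABLE events
    (
        id UInt64,
        payload String
    )
    ENGINE = MergeTree
    ORDER BY id
    SETTINGS disk = 's3_disk';  -- takes precedence over storage_policy from the config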

ClickHouse release 23.1, 2023-01-26

Upgrade Notes

  • The SYSTEM RESTART DISK query becomes a no-op. #44647 (alesapin).
  • The PREALLOCATE option for HASHED/SPARSE_HASHED dictionaries becomes a no-op. #45388 (Azat Khuzhin). It does not give significant advantages anymore.
  • Disallow Gorilla codec on columns of non-Float32 or non-Float64 type. #45252 (Robert Schulze). It was pointless and led to inconsistencies.
  • Parallel quorum inserts might work incorrectly with *MergeTree tables created with the deprecated syntax. Therefore, parallel quorum inserts support is completely disabled for such tables. It does not affect tables created with a new syntax. #45430 (Alexander Tokmakov).
  • Use the GetObjectAttributes request instead of the HeadObject request to get the size of an object in AWS S3. This change fixes handling endpoints without explicit regions after updating the AWS SDK, for example. #45288 (Vitaly Baranov). AWS S3 and Minio are tested, but keep in mind that various S3-compatible services (GCS, R2, B2) may have subtle incompatibilities. This change also may require you to adjust the ACL to allow the GetObjectAttributes request.
  • Forbid paths in timezone names. For example, a timezone name like /usr/share/zoneinfo/Asia/Aden is not allowed; the IANA timezone database name like Asia/Aden should be used (see the example after this list). #44225 (Kruglov Pavel).
  • Queries combining equijoin and constant expressions (e.g., JOIN ON t1.x = t2.x AND 1 = 1) are forbidden due to incorrect results. #44016 (Vladimir C).
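
For the timezone-name restriction above: IANA names keep working, while filesystem paths are now rejected.

    -- OK: IANA timezone database name.
    SELECT toTimeZone(now(), 'Asia/Aden');
    -- Rejected starting with 23.1: a filesystem path used as a timezone name.
    -- SELECT toTimeZone(now(), '/usr/share/zoneinfo/Asia/Aden');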

New Feature

  • Dictionary source for extracting keys by traversing a tree of regular expressions. It can be used for User-Agent parsing. #40878 (Vage Ogannisian). #43858 (Han Fei).
  • Added parameterized view functionality: it is now possible to specify query parameters for the View table engine (see the sketch after this list). Resolves #40907. #41687 (SmitaRKulkarni).
  • Add quantileInterpolatedWeighted/quantilesInterpolatedWeighted functions. #38252 (Bharat Nallan).
  • Array join support for the Map type, like the function "explode" in Spark. #43239 (李扬).
  • Support SQL standard binary and hex string literals. #43785 (Mo Xuan).
  • Allow formatting DateTime in Joda-Time style. Refer to the Joda-Time docs. #43818 (李扬).
  • Implemented a fractional second formatter (%f) for formatDateTime. #44060 (ltrk2). #44497 (Alexander Gololobov).
  • Added age function to calculate the difference between two dates or dates with time values expressed as the number of full units. Closes #41115. #44421 (Robert Schulze).
  • Add Null source for dictionaries. Closes #44240. #44502 (mayamika).
  • Allow configuring the S3 storage class with the s3_storage_class configuration option. Such as <s3_storage_class>STANDARD/INTELLIGENT_TIERING</s3_storage_class> Closes #44443. #44707 (chen).
  • Insert default values in case of missing elements in JSON object while parsing named tuple. Add setting input_format_json_defaults_for_missing_elements_in_named_tuple that controls this behaviour. Closes #45142#issuecomment-1380153217. #45231 (Kruglov Pavel).
  • Record server startup time in ProfileEvents (ServerStartupMilliseconds). Resolves #43188. #45250 (SmitaRKulkarni).
  • Refactor and Improve streaming engines Kafka/RabbitMQ/NATS and add support for all formats, also refactor formats a bit: - Fix producing messages in row-based formats with suffixes/prefixes. Now every message is formatted completely with all delimiters and can be parsed back using input format. - Support block-based formats like Native, Parquet, ORC, etc. Every block is formatted as a separate message. The number of rows in one message depends on the block size, so you can control it via the setting max_block_size. - Add new engine settings kafka_max_rows_per_message/rabbitmq_max_rows_per_message/nats_max_rows_per_message. They control the number of rows formatted in one message in row-based formats. Default value: 1. - Fix high memory consumption in the NATS table engine. - Support arbitrary binary data in NATS producer (previously it worked only with strings contained \0 at the end) - Add missing Kafka/RabbitMQ/NATS engine settings in the documentation. - Refactor producing and consuming in Kafka/RabbitMQ/NATS, separate it from WriteBuffers/ReadBuffers semantic. - Refactor output formats: remove callbacks on each row used in Kafka/RabbitMQ/NATS (now we don't use callbacks there), allow to use IRowOutputFormat directly, clarify row end and row between delimiters, make it possible to reset output format to start formatting again - Add proper implementation in formatRow function (bonus after formats refactoring). #42777 (Kruglov Pavel).
  • Support reading/writing Nested tables as List of Struct in CapnProto format. Read/write Decimal32/64 as Int32/64. Closes #43319. #43379 (Kruglov Pavel).
  • Added a message_format_string column to system.text_log. The column contains a pattern that was used to format the message. #44543 (Alexander Tokmakov). This allows various analytics over the ClickHouse logs.
  • Try to autodetect headers with column names (and maybe types) for CSV/TSV/CustomSeparated input formats. Add settings input_format_tsv/csv/custom_detect_header that enable this behaviour (enabled by default). Closes #44640. #44953 (Kruglov Pavel).
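
A minimal sketch of the parameterized view feature above; the table, view, column, and parameter names are hypothetical:

    CREATE VIEW events_by_user AS
        SELECT * FROM events WHERE user_id = {uid:UInt64};

    -- The parameter is supplied when the view is queried:
    SELECT count() FROM events_by_user(uid = 42);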

Experimental Feature

  • Add an experimental inverted index as a new secondary index type for efficient text search. #38667 (larryluogit).
  • Add experimental query result cache. #43797 (Robert Schulze).
  • Added extendable and configurable scheduling subsystem for IO requests (not yet integrated with IO code itself). #41840 (Sergei Trifonov). This feature does nothing at all, enjoy.
  • Added SYSTEM DROP DATABASE REPLICA that removes metadata of a dead replica of a Replicated database. Resolves #41794. #42807 (Alexander Tokmakov).

Performance Improvement

  • Do not load inactive parts at startup of MergeTree tables. #42181 (Anton Popov).
  • Improved latency of reading from storage S3 and table function s3 with large numbers of small files. Now settings remote_filesystem_read_method and remote_filesystem_read_prefetch take effect while reading from storage S3. #43726 (Anton Popov).
  • Optimization for reading struct fields in Parquet/ORC files. Only the required fields are loaded. #44484 (lgbo).
  • Two-level aggregation algorithm was mistakenly disabled for queries over the HTTP interface. It was enabled back, and it leads to a major performance improvement. #45450 (Nikolai Kochetov).
  • Added mmap support for StorageFile, which should improve the performance of clickhouse-local. #43927 (pufit).
  • Added sharding support in HashedDictionary to allow parallel load (almost linear scaling based on number of shards). #40003 (Azat Khuzhin).
  • Speed up query parsing. #42284 (Raúl Marín).
  • Always replace the OR chain expr = x1 OR ... OR expr = xN with expr IN (x1, ..., xN) in the case where expr is a LowCardinality column. The setting optimize_min_equality_disjunction_chain_length is ignored in this case (see the example after this list). #42889 (Guo Wangyang).
  • Slightly improve performance by optimizing the code around ThreadStatus. #43586 (Zhiguo Zhou).
  • Optimize the column-wise ternary logic evaluation by achieving auto-vectorization. In the performance test of this microbenchmark, we've observed a peak performance gain of 21x on the ICX device (Intel Xeon Platinum 8380 CPU). #43669 (Zhiguo Zhou).
  • Avoid acquiring read locks in the system.tables table if possible. #43840 (Raúl Marín).
  • Optimize ThreadPool. The performance experiments of SSB (Star Schema Benchmark) on the ICX device (Intel Xeon Platinum 8380 CPU, 80 cores, 160 threads) shows that this change could effectively decrease the lock contention for ThreadPoolImpl::mutex by 75%, increasing the CPU utilization and improving the overall performance by 2.4%. #44308 (Zhiguo Zhou).
  • Now the optimisation for predicting the hash table size is applied only if the cached hash table size is sufficiently large (thresholds were determined empirically and hardcoded). #44455 (Nikita Taranov).
  • Small performance improvement for asynchronous reading from remote filesystems. #44868 (Kseniia Sumarokova).
  • Add fast path for: - col like '%%'; - col like '%'; - col not like '%%'; - col not like '%'; - match(col, '.*'). #45244 (李扬).
  • Slightly improve happy path optimisation in filtering (WHERE clause). #45289 (Nikita Taranov).
  • Provide monotonicity info for toUnixTimestamp64* to enable more algebraic optimizations for index analysis. #44116 (Nikita Taranov).
  • Allow the configuration of temporary data for query processing (spilling to disk) to cooperate with the filesystem cache (taking up the space from the cache disk) #43972 (Vladimir C). This mainly improves ClickHouse Cloud, but can be used for self-managed setups as well, if you know what to do.
  • Make the system.replicas table do parallel fetches of replica statuses. Closes #43918. #43998 (Nikolay Degterinsky).
  • Optimize memory consumption during backup to S3: files to S3 now will be copied directly without using WriteBufferFromS3 (which could use a lot of memory). #45188 (Vitaly Baranov).
  • Add a cache for async block ids. This will reduce the number of requests of ZooKeeper when we enable async inserts deduplication. #45106 (Han Fei).
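
The OR-chain rewrite above effectively turns the first query below into the second when status is a LowCardinality(String) column (the table and column names are hypothetical):

    -- As written:
    SELECT count() FROM hits WHERE status = 'ok' OR status = 'redirect' OR status = 'error';
    -- Equivalent form produced by the rewrite:
    SELECT count() FROM hits WHERE status IN ('ok', 'redirect', 'error');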

Improvement

  • Use structure from insertion table in generateRandom without arguments. #45239 (Kruglov Pavel).
  • Allow implicitly converting floats stored in string fields of JSON to integers in JSONExtract functions. E.g. JSONExtract('{"a": "1000.111"}', 'a', 'UInt64') -> 1000; previously it returned 0 (see the example after this list). #45432 (Anton Popov).
  • Added fields supports_parallel_parsing and supports_parallel_formatting to table system.formats for better introspection. #45499 (Anton Popov).
  • Improve reading CSV field in CustomSeparated/Template format. Closes #42352 Closes #39620. #43332 (Kruglov Pavel).
  • Unify query elapsed time measurements. #43455 (Raúl Marín).
  • Improve automatic usage of structure from insertion table in table functions file/hdfs/s3 when virtual columns are present in a select query, it fixes the possible error Block structure mismatch or number of columns mismatch. #43695 (Kruglov Pavel).
  • Add support for signed arguments in the function range. Fixes #43333. #43733 (sanyu).
  • Remove redundant sorting, for example, sorting related ORDER BY clauses in subqueries. Implemented on top of query plan. It does similar optimization as optimize_duplicate_order_by_and_distinct regarding ORDER BY clauses, but more generic, since it's applied to any redundant sorting steps (not only caused by ORDER BY clause) and applied to subqueries of any depth. Related to #42648. #43905 (Igor Nikonov).
  • Add the ability to disable deduplication of files for BACKUP (for backups without deduplication ATTACH can be used instead of full RESTORE). For example BACKUP foo TO S3(...) SETTINGS deduplicate_files=0 (default deduplicate_files=1). #43947 (Azat Khuzhin).
  • Refactor and improve schema inference for text formats. Add new setting schema_inference_make_columns_nullable that controls making result types Nullable (enabled by default). #44019 (Kruglov Pavel).
  • Better support for PROXYv1 protocol. #44135 (Yakov Olkhovskiy).
  • Add information about the latest part check by cleanup threads into system.parts table. #44244 (Dmitry Novik).
  • Disable table functions in readonly mode for inserts. #44290 (SmitaRKulkarni).
  • Add a setting simultaneous_parts_removal_limit to allow limiting the number of parts being processed by one iteration of CleanupThread. #44461 (Dmitry Novik).
  • Do not initialize ReadBufferFromS3 when only virtual columns are needed in a query. This may be helpful to #44246. #44493 (chen).
  • Prevent duplicate column names hints. Closes #44130. #44519 (Joanna Hulboj).
  • Allow macro substitution in endpoint of disks. Resolve #40951. #44533 (SmitaRKulkarni).
  • Improve schema inference when input_format_json_read_object_as_string is enabled. #44546 (Kruglov Pavel).
  • Add a user-level setting database_replicated_allow_replicated_engine_arguments which allows banning the creation of ReplicatedMergeTree tables with arguments in DatabaseReplicated. #44566 (alesapin).
  • Prevent users from mistakenly specifying zero (invalid) value for index_granularity. This closes #44536. #44578 (Alexey Milovidov).
  • Added possibility to set path to service keytab file in keytab parameter in kerberos section of config.xml. #44594 (Roman Vasin).
  • Use already written part of the query for fuzzy search (pass to the skim library, which is written in Rust and linked statically to ClickHouse). #44600 (Azat Khuzhin).
  • Enable input_format_json_read_objects_as_strings by default to be able to read nested JSON objects while JSON Object type is experimental. #44657 (Kruglov Pavel).
  • Improvement for deduplication of async inserts: when users do duplicate async inserts, we should deduplicate in memory before we query Keeper. #44682 (Han Fei).
  • Input/output Avro format will parse bool type as ClickHouse bool type. #44684 (Kruglov Pavel).
  • Support Bool type in Arrow/Parquet/ORC. Closes #43970. #44698 (Kruglov Pavel).
  • Don't greedily parse beyond the quotes when reading UUIDs - it may lead to mistakenly successful parsing of incorrect data. #44686 (Raúl Marín).
  • Infer UInt64 in case of Int64 overflow and fix some transforms in schema inference. #44696 (Kruglov Pavel).
  • Previously dependency resolving inside Replicated database was done in a hacky way, and now it's done right using an explicit graph. #44697 (Nikita Mikhaylov).
  • Fix output_format_pretty_row_numbers not preserving the counter across blocks. Closes #44815. #44832 (flynn).
  • Don't report errors in system.errors due to parts being merged concurrently with the background cleanup process. #44874 (Raúl Marín).
  • Optimize and fix metrics for Distributed async INSERT. #44922 (Azat Khuzhin).
  • Added settings to disallow concurrent backups and restores. Resolves #43891. Implementation: added server-level settings to disallow concurrent backups and restores, which are read and set when BackupWorker is created in Context; the settings are set to true by default; before starting a backup or restore, a check was added to see if any other backups/restores are running (for internal requests, it checks whether the request comes from the same node using backup_uuid). #45072 (SmitaRKulkarni).
  • Add <storage_policy> config parameter for system logs. #45320 (Stig Bakken).
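
The JSONExtract improvement above, shown directly:

    -- Floats stored as strings inside JSON can now be extracted as integers.
    SELECT JSONExtract('{"a": "1000.111"}', 'a', 'UInt64');  -- 1000 (previously 0)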

Build/Testing/Packaging Improvement

  • Statically link with the skim library (it is written in Rust) for fuzzy search in clickhouse client/local history. #44239 (Azat Khuzhin).
  • We removed support for shared linking because of Rust. Actually, Rust is only an excuse for this removal, and we wanted to remove it nevertheless. #44828 (Alexey Milovidov).
  • Remove the dependency on the adduser tool from the packages, because we don't use it. This fixes #44934. #45011 (Alexey Milovidov).
  • The SQLite library is updated to the latest. It is used for the SQLite database and table integration engines. Also, fixed a false-positive TSan report. This closes #45027. #45031 (Alexey Milovidov).
  • CRC-32 changes to address the WeakHash collision issue in PowerPC. #45144 (MeenaRenganathan22).
  • Update aws-c* submodules #43020 (Vitaly Baranov).
  • Automatically merge green backport PRs and green approved PRs #41110 (Mikhail f. Shiryaev).
  • Introduce a website for the status of ClickHouse CI. Source.

Bug Fix

  • Replace domain IP types (IPv4, IPv6) with native. #43221 (Yakov Olkhovskiy). It automatically fixes some missing implementations in the code.
  • Fix the backup process if mutations get killed during the backup process. #45351 (Vitaly Baranov).
  • Fix the Invalid number of rows in Chunk exception message. #41404. #42126 (Alexander Gololobov).
  • Fix possible use of an uninitialized value after executing expressions after sorting. Closes #43386 #43635 (Kruglov Pavel).
  • Better handling of NULL in aggregate combinators, fix possible segfault/logical error while using an obscure optimization optimize_rewrite_sum_if_to_count_if. Closes #43758. #43813 (Kruglov Pavel).
  • Fix CREATE USER/ROLE query settings constraints. #43993 (Nikolay Degterinsky).
  • Fixed bug with non-parsable default value for EPHEMERAL column in table metadata. #44026 (Yakov Olkhovskiy).
  • Fix parsing of bad version from compatibility setting. #44224 (Kruglov Pavel).
  • Bring interval subtraction from datetime in line with addition. #44241 (ltrk2).
  • Remove limits on the maximum size of the result for view. #44261 (lizhuoyu5).
  • Fix possible logical error in cache if do_not_evict_index_and_mrk_files=1. Closes #42142. #44268 (Kseniia Sumarokova).
  • Fix possible too early cache write interruption in write-through cache (caching could be stopped due to false assumption when it shouldn't have). #44289 (Kseniia Sumarokova).
  • Fix possible crash in the case function IN with constant arguments was used as a constant argument together with LowCardinality. Fixes #44221. #44346 (Nikolai Kochetov).
  • Fix support for complex parameters (like arrays) of parametric aggregate functions. This closes #30975. The aggregate function sumMapFiltered was unusable in distributed queries before this change. #44358 (Alexey Milovidov).
  • Fix reading ObjectId in BSON schema inference. #44382 (Kruglov Pavel).
  • Fix race which can lead to premature temp parts removal before merge finishes in ReplicatedMergeTree. This issue could lead to errors like No such file or directory: xxx. Fixes #43983. #44383 (alesapin).
  • Some invalid SYSTEM ... ON CLUSTER queries worked in an unexpected way if a cluster name was not specified. It's fixed, now invalid queries throw SYNTAX_ERROR as they should. Fixes #44264. #44387 (Alexander Tokmakov).
  • Fix reading Map type in ORC format. #44400 (Kruglov Pavel).
  • Fix reading columns that are not presented in input data in Parquet/ORC formats. Previously it could lead to error INCORRECT_NUMBER_OF_COLUMNS. Closes #44333. #44405 (Kruglov Pavel).
  • Previously the bar function used the same '▋' (U+258B "Left five eighths block") character to display both 5/8 and 6/8 bars. This change corrects this behavior by using '▊' (U+258A "Left three quarters block") for displaying 6/8 bar. #44410 (Alexander Gololobov).
  • Placing profile settings after profile settings constraints in the configuration file made constraints ineffective. #44411 (Konstantin Bogdanov).
  • Fix SYNTAX_ERROR while running EXPLAIN AST INSERT queries with data. Closes #44207. #44413 (save-my-heart).
  • Fix reading bool value with CRLF in CSV format. Closes #44401. #44442 (Kruglov Pavel).
  • Don't execute and/or/if/multiIf on a LowCardinality dictionary, so the result type cannot be LowCardinality. It could lead to the error Illegal column ColumnLowCardinality in some cases. Fixes #43603. #44469 (Kruglov Pavel).
  • Fix mutations with the setting max_streams_for_merge_tree_reading. #44472 (Anton Popov).
  • Fix potential null pointer dereference with GROUPING SETS in ASTSelectQuery::formatImpl (#43049). #44479 (Robert Schulze).
  • Validate types in table function arguments, CAST function arguments, JSONAsObject schema inference according to settings. #44501 (Kruglov Pavel).
  • Fix IN function with LowCardinality and const column, close #44503. #44506 (Duc Canh Le).
  • Fixed a bug in the normalization of a DEFAULT expression in a CREATE TABLE statement. The second argument of the function in (or the right argument of the operator IN) might be replaced with the result of its evaluation during CREATE query execution. Fixes #44496. #44547 (Alexander Tokmakov).
  • Projections cannot be used in the presence of WITH ROLLUP, WITH CUBE and WITH TOTALS. In previous versions, such a query produced an exception instead of skipping the usage of projections (see the sketch after this list). This closes #44614. This closes #42772. #44615 (Alexey Milovidov).
  • Async insert blocks were not cleaned up because the function that gets all blocks sorted by time did not return async blocks. #44651 (Han Fei).
  • Fix LOGICAL_ERROR The top step of the right pipeline should be ExpressionStep for JOIN with subquery, UNION, and TOTALS. Fixes #43687. #44673 (Nikolai Kochetov).
  • Avoid std::out_of_range exception in the Executable table engine. #44681 (Kruglov Pavel).
  • Do not apply optimize_syntax_fuse_functions to quantiles on AST, close #44712. #44713 (Vladimir C).
  • Fix a bug with a wrong type in a Merge table with PREWHERE, close #43324. #44716 (Vladimir C).
  • Fix a possible crash during shutdown (while destroying TraceCollector). Fixes #44757. #44758 (Nikolai Kochetov).
  • Fix a possible crash in distributed query processing. The crash could happen if a query with totals or extremes returned an empty result and there are mismatched types in the Distributed and the local tables. Fixes #44738. #44760 (Nikolai Kochetov).
  • Fix fsync for fetches (min_compressed_bytes_to_fsync_after_fetch) and for small files (ttl.txt, columns.txt) in mutations (min_rows_to_fsync_after_merge/min_compressed_bytes_to_fsync_after_merge). #44781 (Azat Khuzhin).
  • A rare race condition was possible when querying the system.parts or system.parts_columns tables in the presence of parts being moved between disks. Introduced in #41145. #44809 (Alexey Milovidov).
  • Fix the error Context has expired which could appear with the enabled projections optimization. It can be reproduced for queries with specific functions, like dictHas/dictGet, which use the context at runtime. Fixes #44844. #44850 (Nikolai Kochetov).
  • A fix for Cannot read all data error which could happen while reading LowCardinality dictionary from remote fs. Fixes #44709. #44875 (Nikolai Kochetov).
  • Ignore cases when hardware monitor sensors cannot be read instead of showing a full exception message in logs. #44895 (Raúl Marín).
  • Use max_delay_to_insert value in case the calculated time to delay INSERT exceeds the setting value. Related to #44902. #44916 (Igor Nikonov).
  • Fix error Different order of columns in UNION subquery for queries with UNION. Fixes #44866. #44920 (Nikolai Kochetov).
  • The delay for INSERT could be calculated incorrectly, which could lead to always using the max_delay_to_insert setting as the delay instead of a correct value. The delay is now computed with the simple formula max_delay_to_insert * (parts_over_threshold / max_allowed_parts_over_threshold), i.e. the delay grows proportionally to the number of parts over the threshold (a worked example follows this list). Closes #44902. #44954 (Igor Nikonov).
  • Fix an ALTER TABLE TTL error when a wide part has the lightweight delete mask. #44959 (Mingliang Pan).
  • Follow-up fix for Replace domain IP types (IPv4, IPv6) with native #43221. #45024 (Yakov Olkhovskiy).
  • Follow-up fix for Replace domain IP types (IPv4, IPv6) with native #43221. #45043 (Yakov Olkhovskiy).
  • A buffer overflow was possible in the parser. Found by fuzzer. #45047 (Alexey Milovidov).
  • Fix possible cannot-read-all-data error in storage FileLog. Closes #45051, #38257. #45057 (Kseniia Sumarokova).
  • Memory efficient aggregation (setting distributed_aggregation_memory_efficient) is disabled when grouping sets are present in the query. #45058 (Nikita Taranov).
  • Fix RANGE_HASHED dictionary to count range columns as part of the primary key during updates when update_field is specified. Closes #44588. #45061 (Maksim Kita).
  • Fix error Cannot capture column for LowCardinality captured argument of nested lambda. Fixes #45028. #45065 (Nikolai Kochetov).
  • Fix a wrong query result with additional_table_filters (the additional filter was not applied) in case the minmax/count projection is used. #45133 (Nikolai Kochetov).
  • Fixed bug in histogram function accepting negative values. #45147 (simpleton).
  • Fix wrong column nullability in StorageJoin, close #44940. #45184 (Vladimir C).
  • Fix background_fetches_pool_size setting reload (increase at runtime). #45189 (Raúl Marín).
  • Correctly process SELECT queries on KV engines (e.g. KeeperMap, EmbeddedRocksDB) using IN on the key with a subquery producing a different type. #45215 (Antonio Andelic).
  • Fix logical error in SEMI JOIN & join_use_nulls in some cases, close #45163, close #45209. #45230 (Vladimir C).
  • Fix heap-use-after-free in reading from s3. #45253 (Kruglov Pavel).
  • Fix a bug when the Avro Union type is ['null', Nested type]; closes #45275. Also fix a bug that incorrectly inferred the bytes type as Float. #45276 (flynn).
  • Throw a correct exception when explicit PREWHERE cannot be used with a table using the storage engine Merge. #45319 (Antonio Andelic).
  • Under WSL1 Ubuntu, the self-extracting ClickHouse binary failed to decompress due to an inconsistency: /proc/self/maps reports a 32-bit file inode, while stat reports a 64-bit inode. #45339 (Yakov Olkhovskiy).
  • Fix race in Distributed table startup (that could lead to processing file of async INSERT multiple times). #45360 (Azat Khuzhin).
  • Fix a possible crash while reading from storage S3 and table function s3 in the case when ListObject request has failed. #45371 (Anton Popov).
  • Fix SELECT ... FROM system.dictionaries exception when there is a dictionary with a bad structure (e.g. incorrect type in XML config). #45399 (Aleksei Filatov).
  • Fix s3Cluster schema inference when structure from insertion table is used in INSERT INTO ... SELECT * FROM s3Cluster queries. #45422 (Kruglov Pavel).
  • Fix bug in JSON/BSONEachRow parsing with HTTP that could lead to using default values for some columns instead of values from data. #45424 (Kruglov Pavel).
  • Fixed bug (Code: 632. DB::Exception: Unexpected data ... after parsed IPv6 value ...) with typed parsing of IP types from text source. #45425 (Yakov Olkhovskiy).
  • Add a check for empty regular expressions. Closes #45297. #45428 (Han Fei).
  • Fix a possible (likely distributed) query hang. #45448 (Azat Khuzhin).
  • Fix possible deadlock with allow_asynchronous_read_from_io_pool_for_merge_tree enabled in case of exception from ThreadPool::schedule. #45481 (Nikolai Kochetov).
  • Fix possible in-use table after DETACH. #45493 (Azat Khuzhin).
  • Fix rare abort in the case when a query is canceled and parallel parsing was used during its execution. #45498 (Anton Popov).
  • Fix a race between Distributed table creation and INSERT into it (could lead to CANNOT_LINK during INSERT into the table). #45502 (Azat Khuzhin).
  • Add proper default (SLRU) to cache policy getter. Closes #45514. #45524 (Kseniia Sumarokova).
  • Disallow array join in mutations (a sketch of a now-rejected statement follows this list). Closes #42637. #44447 (SmitaRKulkarni).
  • Fix qualified asterisks with an aliased table name and a column transformer. Resolves #44736. #44755 (SmitaRKulkarni).
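
As an illustration of the bar rendering fix above, the query below is a minimal sketch; the width argument of 1 is chosen only so that a single character is produced:

```sql
-- With width = 1, a value of 5/8 of the range renders as '▋' (U+258B),
-- and 6/8 now renders as '▊' (U+258A) instead of reusing '▋'.
SELECT bar(5, 0, 8, 1), bar(6, 0, 8, 1);
```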
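
A minimal sketch of the projection behavior described above (the table, column, and projection names are made up for the example): a query using WITH TOTALS against a table with an aggregate projection is now answered by skipping the projection instead of throwing an exception.

```sql
CREATE TABLE t (k UInt64, v UInt64) ENGINE = MergeTree ORDER BY k;
INSERT INTO t SELECT number % 10, number FROM numbers(1000);

ALTER TABLE t ADD PROJECTION p (SELECT k, sum(v) GROUP BY k);
ALTER TABLE t MATERIALIZE PROJECTION p;

-- The projection cannot serve a query with WITH TOTALS (the same applies to
-- WITH ROLLUP and WITH CUBE); it is now skipped instead of causing an exception.
SELECT k, sum(v) FROM t GROUP BY k WITH TOTALS;
```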
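
A worked example of the INSERT delay formula above (the numbers are purely illustrative):

$$
\text{delay} = \text{max\_delay\_to\_insert} \times \frac{\text{parts\_over\_threshold}}{\text{max\_allowed\_parts\_over\_threshold}} = 1\,\text{s} \times \frac{150}{300} = 0.5\,\text{s}
$$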
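
A sketch of the kind of mutation that is now rejected (the table and column names are hypothetical):

```sql
-- arrayJoin is not allowed in a mutation expression; this statement is now rejected.
ALTER TABLE t UPDATE v = arrayJoin([1, 2, 3]) WHERE 1;
```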

Changelog for 2022