Table of Contents
ClickHouse release v22.4, 2022-04-20
ClickHouse release v22.3-lts, 2022-03-17
ClickHouse release v22.2, 2022-02-17
ClickHouse release v22.1, 2022-01-18
Changelog for 2021
ClickHouse release v22.4, 2022-04-20
Backward Incompatible Change
- Do not allow SETTINGS after FORMAT for INSERT queries (there is a compatibility setting `parser_settings_after_format_compact` to accept such queries, but it is turned OFF by default). #35883 (Azat Khuzhin).
- Function `yandexConsistentHash` (consistent hashing algorithm by Konstantin "kostik" Oblakov) is renamed to `kostikConsistentHash`. The old name is left as an alias for compatibility. Although this change is backward compatible, we may remove the alias in subsequent releases, so it's recommended to update the usages of this function in your apps. #35553 (Alexey Milovidov).
New Feature
- Added INTERPOLATE extension to the ORDER BY ... WITH FILL (see the sketch after this list). Closes #34903. #35349 (Yakov Olkhovskiy).
- Profiling on Processors level (under the `log_processors_profiles` setting, ClickHouse will write the time each processor spent during execution/waiting for data to the `system.processors_profile_log` table). #34355 (Azat Khuzhin).
- Added functions makeDate(year, month, day), makeDate32(year, month, day). #35628 (Alexander Gololobov). Implementation of makeDateTime() and makeDateTime64(). #35934 (Alexander Gololobov).
- Support a new type of quota, `WRITTEN BYTES`, to limit the amount of written bytes during insert queries. #35736 (Anton Popov).
- Added function `flattenTuple`. It receives a nested named `Tuple` as an argument and returns a flat `Tuple` whose elements are the paths from the original `Tuple`, e.g. `Tuple(a Int, Tuple(b Int, c Int)) -> Tuple(a Int, b Int, c Int)`. `flattenTuple` can be used to select all paths from type `Object` as separate columns. #35690 (Anton Popov).
- Added functions `arrayFirstOrNull`, `arrayLastOrNull`. Closes #35238. #35414 (Maksim Kita).
- Added functions `minSampleSizeContinous` and `minSampleSizeConversion`. Author achimbab. #35360 (Maksim Kita).
- New functions minSampleSizeContinous and minSampleSizeConversion. #34354 (achimbab).
- Introduce format `ProtobufList` (all records as repeated messages inside a single outer Protobuf message). Closes #16436. #35152 (Nikolai Kochetov).
- Add `h3PointDistM`, `h3PointDistKm`, `h3PointDistRads`, `h3GetRes0Indexes`, `h3GetPentagonIndexes` functions. #34568 (Bharat Nallan).
- Add `toLastDayOfMonth` function which rounds up a date or date with time to the last day of the month. #33501. #34394 (Habibullah Oladepo).
- New aggregation function groupSortedArray to obtain an array of the first N values. #34055 (palegre-tiny).
- Added load balancing setting for [Zoo]Keeper client. Closes #29617. #30325 (小路).
- Add a new kind of row policy named `simple`. Before this PR we had two kinds of row policies: `permissive` and `restrictive`. A `simple` row policy adds a new filter on a table without any of the side effects that permissive and restrictive policies have. #35345 (Vitaly Baranov).
- Added an ability to specify a cluster secret in a replicated database. #35333 (Nikita Mikhaylov).
- Added sanity checks on server startup (available memory and disk space, max thread count, etc). #34566 (Sergei Trifonov).
- INTERVAL improvement: it can be used with `[MILLI|MICRO|NANO]SECOND`. Added `toStartOf[Milli|Micro|Nano]second()` functions. Added `[add|subtract][Milli|Micro|Nano]seconds()`. #34353 (Andrey Zvonov).
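A minimal sketch of the new INTERPOLATE extension for ORDER BY ... WITH FILL; the column names and the interpolation expression below are hypothetical, chosen only to illustrate the syntax:
```sql
-- Rows exist only for every second day; WITH FILL adds the missing days and
-- INTERPOLATE computes their `visits` from the previous row's value.
SELECT
    toDate('2022-04-01') + number AS day,
    number AS visits
FROM numbers(5)
WHERE number % 2 = 0
ORDER BY day WITH FILL STEP 1
INTERPOLATE (visits AS visits + 10);
```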
Experimental Feature
- Added support for transactions for simple `MergeTree` tables. This feature is highly experimental and not recommended for production. Part of #22086. #24258 (tavplubix). A hedged usage sketch is shown after this list.
- Support schema inference for type `Object` in format `JSONEachRow`. Allow to convert columns of type `Map` to columns of type `Object`. #35629 (Anton Popov).
- Allow to write the remote FS cache on all write operations. Add the `system.remote_filesystem_cache` table. Add a `drop remote filesystem cache` query. Add introspection for s3 metadata with the `system.remote_data_paths` table. Closes #34021. Add a cache option for merges by adding mode `read_from_filesystem_cache_if_exists_otherwise_bypass_cache` (turned on by default for merges and can also be turned on by a query setting with the same name). Rename cache-related settings (`remote_fs_enable_cache` -> `enable_filesystem_cache`, etc). #35475 (Kseniia Sumarokova).
- An option to store parts metadata in RocksDB. Speeds up the parts loading process of MergeTree to accelerate starting up of clickhouse-server. With this improvement, clickhouse-server was able to decrease its startup time from 75 minutes to 20 seconds, with 700k MergeTree parts. #32928 (李扬).
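A minimal sketch of the experimental MergeTree transactions, assuming a plain MergeTree table `t` (hypothetical) and that experimental transaction support is enabled in the server configuration (the exact option name may differ between versions):
```sql
BEGIN TRANSACTION;
INSERT INTO t VALUES (1), (2), (3);
-- Make the inserted parts visible atomically...
COMMIT;
-- ...or discard them by issuing ROLLBACK instead of COMMIT.
```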
Performance Improvement
- A new query plan optimization. Evaluate functions after `ORDER BY` when possible. As an example, for the query `SELECT sipHash64(number) FROM numbers(1e8) ORDER BY number LIMIT 5`, the function `sipHash64` would be evaluated after `ORDER BY` and `LIMIT`, which gives a ~20x speed-up. #35623 (Nikita Taranov).
- Sizes of hash tables used during aggregation are now collected and used in later queries to avoid hash table resizes. #33439 (Nikita Taranov).
- Improvement for hasAll function using SIMD instructions (SSE and AVX2). #27653 (youennL-cs). #35723 (Maksim Kita).
- Multiple changes to improve ASOF JOIN performance (1.2 - 1.6x as fast). It also adds support to use big integers. #34733 (Raúl Marín).
- Improve performance of ASOF JOIN if key is native integer. #35525 (Maksim Kita).
- Parallelization of multipart upload into S3 storage. #35343 (Sergei Trifonov).
- The URL storage engine now downloads multiple chunks in parallel if the endpoint supports HTTP Range. Two additional settings were added, `max_download_threads` and `max_download_buffer_size`, which control the maximum number of threads a single query can use to download the file and the maximum number of bytes each thread can process (see the sketch after this list). #35150 (Antonio Andelic).
- Use multiple threads to download objects from S3. Downloading is controllable using the `max_download_threads` and `max_download_buffer_size` settings. #35571 (Antonio Andelic).
- Narrow mutex scope when interacting with HDFS. Related to #35292. #35646 (shuchaome).
- Require mutations for per-table TTL only when it has been changed. #35953 (Azat Khuzhin).
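A hedged sketch of tuning the parallel-download settings for the URL engine; the URL and column list are hypothetical, and the endpoint must support HTTP Range requests:
```sql
SELECT count()
FROM url('https://example.com/large_file.csv', 'CSV', 'id UInt64, value String')
SETTINGS max_download_threads = 4, max_download_buffer_size = 10485760;  -- 10 MiB per thread
```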
Improvement
- Multiple improvements for schema inference. Use some tweaks and heuristics to determine numbers, strings, arrays, tuples and maps in CSV, TSV and TSVRaw data formats. Add setting `input_format_csv_use_best_effort_in_schema_inference` for the CSV format that enables/disables using these heuristics; if it's disabled, we treat everything as a string. Add a similar setting `input_format_tsv_use_best_effort_in_schema_inference` for the TSV/TSVRaw formats. These settings are enabled by default. - Add Maps support for schema inference in the Values format. - Fix possible segfault in schema inference in the Values format. - Allow to skip columns with unsupported types in Arrow/ORC/Parquet formats. Add corresponding settings for it: `input_format_{parquet|orc|arrow}_skip_columns_with_unsupported_types_in_schema_inference`. These settings are disabled by default. - Allow to convert a column with type Null to a Nullable column with all NULL values in Arrow/Parquet formats. - Allow to specify column names in schema inference via setting `column_names_for_schema_inference` for formats that don't contain column names (like CSV, TSV, JSONCompactEachRow, etc). - Fix schema inference in ORC/Arrow/Parquet formats in terms of working with Nullable columns. Previously all inferred types were not Nullable and it blocked reading Nullable columns from data; now it's fixed and all inferred types are always Nullable (because we cannot understand whether a column is Nullable or not by reading the schema). - Fix schema inference in Template format with CSV escaping rules. #35582 (Kruglov Pavel).
- Add parallel parsing and schema inference for format `JSONAsObject`. #35592 (Anton Popov).
- Added support for automatic schema inference to the `s3Cluster` table function. Synced the signatures of `s3` and `s3Cluster`. #35544 (Nikita Mikhaylov).
- Added support for schema inference for `hdfsCluster`. #35602 (Nikita Mikhaylov).
- Add new setting `input_format_json_read_bools_as_numbers` that allows to infer and parse bools as numbers in JSON input formats. It's enabled by default. Suggested by @alexey-milovidov. #35735 (Kruglov Pavel).
- Improve column ordering in schema inference for formats TSKV and JSONEachRow, closes #35640. Don't stop schema inference when reading an empty row in schema inference for formats TSKV and JSONEachRow. #35724 (Kruglov Pavel).
- Add settings `input_format_orc_case_insensitive_column_matching`, `input_format_arrow_case_insensitive_column_matching`, and `input_format_parquet_case_insensitive_column_matching`, which allow ClickHouse to use case-insensitive matching of columns while reading data from ORC, Arrow or Parquet files. #35459 (Antonio Andelic).
- Added an `is_secure` column to `system.query_log` which denotes if the client is using a secure connection over TCP or HTTP. #35705 (Antonio Andelic).
- Now `kafka_num_consumers` can be bigger than the number of physical cores in case of a low-resource machine (fewer than 16 cores). #35926 (alesapin).
- Add some basic metrics to monitor engine=Kafka tables. #35916 (filimonov).
- Now it's not allowed to `ALTER TABLE ... RESET SETTING` for non-existing settings for the MergeTree engine family. Fixes #35816. #35884 (alesapin).
- Now some `ALTER MODIFY COLUMN` queries for `Array` and `Nullable` types can be done at the metadata level without mutations. For example, alter from `Array(Enum8('Option1'=1))` to `Array(Enum8('Option1'=1, 'Option2'=2))` (see the sketch after this list). #35882 (alesapin).
- Added an animation to the hourglass icon to indicate to the user that a query is running. #35860 (peledni).
- Support ALTER TABLE t DETACH PARTITION (ALL). #35794 (awakeljw).
- Improve projection analysis to optimize trivial queries such as `count()`. #35788 (Amos Bird).
- Support schema inference for INSERT SELECT when using the `input` table function. Get the schema from the insertion table instead of inferring it from the data in case of INSERT SELECT from table functions that support schema inference. Closes #35639. #35760 (Kruglov Pavel).
- Respect `remote_url_allow_hosts` for Hive tables. #35743 (李扬).
- Implement `send_logs_level` for clickhouse-local. Closes #35653. #35716 (Kseniia Sumarokova).
- Closes #35641. Allow `EPHEMERAL` columns without an explicit default expression. #35706 (Yakov Olkhovskiy).
- Add profile event counter `AsyncInsertBytes` about the size of async INSERTs. #35644 (Alexey Milovidov).
- Improve the pipeline description for JOIN. #35612 (何李夫).
- Deduce absolute hdfs config path. #35572 (李扬).
- Improve pasting performance and compatibility of clickhouse-client. This helps #35501. #35541 (Amos Bird).
- It was possible to get a stack overflow in distributed queries if one of the settings `async_socket_for_remote` or `use_hedged_requests` is enabled while parsing a very deeply nested data type (at least in debug builds). Closes #35509. #35524 (Kruglov Pavel).
- Add sizes of subcolumns to the `system.parts_columns` table. #35488 (Anton Popov).
- Add explicit table info to the scan node of the query plan and pipeline. #35460 (何李夫).
- Allow the server to bind to low-numbered ports (e.g. 443). The ClickHouse installation script will set the `cap_net_bind_service` capability on the binary file. #35451 (Alexey Milovidov).
- Fix INSERT INTO table FROM INFILE: it did not display the progress bar. #35429 (xiedeyantu).
- Add arguments `--user`, `--password`, `--host`, `--port` for the `clickhouse-diagnostics` tool. #35422 (李扬).
- Support uuid for Postgres engines. Closes #35384. #35403 (Kseniia Sumarokova).
- For the table functions `s3Cluster`, `HDFSCluster` or `hive`, we couldn't get the right `AccessType` via `StorageFactory::instance().getSourceAccessType(getStorageTypeName())`. This PR fixes it. #35365 (李扬).
- Remove the `--testmode` option for clickhouse-client, enable it unconditionally. #35354 (Kseniia Sumarokova).
- Don't allow the `wchc` operation (four letter command) for clickhouse-keeper. #35320 (zhangyuli1).
- Add function `getTypeSerializationStreams`. For a specified type (which is detected from a column), it returns an array with all the serialization substream paths. This function is useful mainly for developers. #35290 (李扬).
- If `port` is not specified in the cluster configuration, the default server port will be used. This closes #34769. #34772 (Alexey Milovidov).
- Use the `minmax` index for ORC/Parquet files in the Hive engine. Related PR: https://github.com/ClickHouse/arrow/pull/10. #34631 (李扬).
- System log tables now allow specifying COMMENT in the ENGINE declaration. Closes #33768. #34536 (Maksim Kita).
- Proper support of the setting `max_rows_to_read` in case of reading in order of the sorting key with a specified limit. Previously the exception `Limit for rows or bytes to read exceeded` could be thrown even if the query actually required reading fewer rows. #33230 (Anton Popov).
- Respect only quota & period from cgroups, ignore shares (which do not really limit the number of cores that can be used). #35815 (filimonov).
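A minimal sketch of an `ALTER MODIFY COLUMN` that is now metadata-only (no mutation); the table and column names below are hypothetical:
```sql
CREATE TABLE opts (id UInt64, flags Array(Enum8('Option1' = 1)))
ENGINE = MergeTree ORDER BY id;

-- Extending the Enum inside an Array/Nullable type no longer rewrites data parts.
ALTER TABLE opts MODIFY COLUMN flags Array(Enum8('Option1' = 1, 'Option2' = 2));
```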
Build/Testing/Packaging Improvement
- Add next batch of randomization settings in functional tests. #35047 (Kruglov Pavel).
- Add backward compatibility check in stress test. Closes #25088. #27928 (Kruglov Pavel).
- Migrate package building to `nfpm` - Deprecate the `release` script in favor of `packages/build` - Build everything in the clickhouse/binary-builder image (cleanup: clickhouse/deb-builder) - Add symbol stripping to cmake (todo: use $prefix/lib/$bin_dir/clickhouse/$binary.debug) - Fix issue with DWARF symbols - Add Alpine APK packages - Rename `alien` to `additional_pkgs`. #33664 (Mikhail f. Shiryaev).
- Add a nightly scan and upload for Coverity. #34895 (Boris Kuschel).
- A dedicated small package for `clickhouse-keeper`. #35308 (Mikhail f. Shiryaev).
- Running with podman was failing: it complained about specifying the same volume twice. #35978 (Roman Nikonov).
- Minor improvement in contrib/krb5 build configuration. #35832 (Anton Kozlov).
- Add a label to recognize a building task for every image. #35583 (Mikhail f. Shiryaev).
- Apply the `black` formatter to Python code and add a per-commit check. #35466 (Mikhail f. Shiryaev).
- Redo the Alpine image to use a clean Dockerfile. Create a script in tests/ci to build both Ubuntu and Alpine images. Add a clickhouse-keeper image (cc @nikitamikhaylov). Add a build check to PullRequestCI. Add a job to ReleaseCI. Add a job to MasterCI to build and push `clickhouse/clickhouse-server:head` and `clickhouse/clickhouse-keeper:head` images for each merged PR. #35211 (Mikhail f. Shiryaev).
- Fix the stress-test report in CI; now we upload the runlog with information about started stress tests only once. #35093 (Mikhail f. Shiryaev).
- Switch to libcxx / libcxxabi from LLVM 14. #34906 (Raúl Marín).
- Update unixodbc to mitigate CVE-2018-7485. Note: this CVE is not relevant for ClickHouse as it implements its own isolation layer for ODBC. #35943 (Mikhail f. Shiryaev).
Bug Fix
- Added settings `input_format_ipv4_default_on_conversion_error`, `input_format_ipv6_default_on_conversion_error` to allow inserting invalid IP address values as defaults into tables (see the sketch after this list). Closes #35726. #35733 (Maksim Kita).
- Avoid erasing a column from a block if it doesn't exist while reading data from Hive. #35393 (lgbo).
- Add type checking when creating materialized view. Close: #23684. #24896 (hexiaoting).
- Fix formatting of INSERT INFILE queries (missing quotes). #35886 (Azat Khuzhin).
- Disable `session_log` because a memory safety issue has been found by fuzzing. See #35714. #35873 (Alexey Milovidov).
- Avoid processing per-column TTL multiple times. #35820 (Azat Khuzhin).
- Fix inserts to columns of type `Object` in the case when the insert query contains data related to several partitions. #35806 (Anton Popov).
- Fix a bug in indexes of columns not present in -WithNames formats that led to the error `INCORRECT_NUMBER_OF_COLUMNS` when the number of columns is more than 256. Closes #35793. #35803 (Kruglov Pavel).
- Fixes #35751. #35799 (Nikolay Degterinsky).
- Fix for reading from HDFS in Snappy format. #35771 (shuchaome).
- Fix bug in conversion from custom types to string that could lead to segfault or unexpected error messages. Closes #35752. #35755 (Kruglov Pavel).
- Fix any/all (subquery) implementation. Closes #35489. #35727 (Kseniia Sumarokova).
- Fix dropping non-empty database in clickhouse-local. Closes #35692. #35711 (Kseniia Sumarokova).
- Fix bug in creating materialized view with subquery after server restart. Materialized view was not getting updated after inserts into underlying table after server restart. Closes #35511. #35691 (Kruglov Pavel).
- Fix a possible `Can't adjust last granule` exception while reading subcolumns of the experimental type `Object`. #35687 (Anton Popov).
- Enable build with JIT compilation by default. #35683 (Maksim Kita).
- Fix possible loss of subcolumns in the experimental type `Object`. #35682 (Anton Popov).
- Fix the check of ASOF JOIN key nullability, close #35565. #35674 (Vladimir C).
- Fix part checking logic for parts with projections. Error happened when projection and main part had different types. This is similar to https://github.com/ClickHouse/ClickHouse/pull/33774 . The bug is addressed by @caoyang10. #35667 (Amos Bird).
- Fix a server crash when a large number of arguments are passed into the `format` function. Please refer to the test file to see how to reproduce the crash. #35651 (Amos Bird).
- Fix usage of quotas with asynchronous inserts. #35645 (Anton Popov).
- Fix positional arguments with aliases. Closes #35600. #35620 (Kseniia Sumarokova).
- Check `remote_url_allow_hosts` before schema inference in the URL engine. Closes #35064. #35619 (Kruglov Pavel).
- Fix `HashJoin` when columns with `LowCardinality` type are used. This closes #35548. #35616 (Antonio Andelic).
- Fix a possible segfault in MaterializedPostgreSQL which happened if an exception occurred when data collected in memory was synced into the underlying tables. Closes #35611. #35614 (Kseniia Sumarokova).
- The setting `database_atomic_wait_for_drop_and_detach_synchronously` worked incorrectly for the `ATTACH TABLE` query when the previously detached table was still in use. It's fixed. #35594 (tavplubix).
- Fix HTTP headers with named collections, add compression_method. Closes #35273. Closes #35269. #35593 (Kseniia Sumarokova).
- Fix s3 engine getting virtual columns. Closes #35411. #35586 (Kseniia Sumarokova).
- Fixed return type deduction for `caseWithExpression`. The type of the ELSE branch is now correctly taken into account. #35576 (Antonio Andelic).
- Fix parsing of IPv6 addresses longer than 39 characters. Closes #34022. #35539 (Maksim Kita).
- Fix cast into IPv4, IPv6 address in IN section. Fixes #35528. #35534 (Maksim Kita).
- Fix crash during short circuit function evaluation when one of arguments is nullable constant. Closes #35497. Closes #35496. #35502 (Maksim Kita).
- Fix a crash for the function `throwIf` with constant arguments. #35500 (Maksim Kita).
- Fix a bug in Keeper which can lead to unstable client connections. Introduced in #35031. #35498 (alesapin).
- Fix a bug in the function `if` when the resulting column type differs from the resulting data type, which led to logical errors like `Logical error: 'Bad cast from type DB::ColumnVector<int> to DB::ColumnVector<long>'`. Closes #35367. #35476 (Kruglov Pavel).
- Fix excessive logging when using S3 as a backend for MergeTree or as a separate table engine/function. Fixes #30559. #35434 (alesapin).
- Now merges executed with zero-copy replication (experimental) will not spam logs with the message `Found parts with the same min block and with the same max block as the missing part _ on replica _. Hoping that it will eventually appear as a result of a merge.`. #35430 (alesapin).
- Skip a possible exception if empty chunks appear in GroupingAggregatedTransform. #35417 (Nikita Taranov).
- Fix working with columns that are not needed in the query in Arrow/Parquet/ORC formats; it prevents possible errors like `Unsupported <format> type <type> of an input column <column_name>` when a file contains a column with an unsupported type and we don't use it in the query. #35406 (Kruglov Pavel).
- Fix for the local cache for remote filesystem (experimental feature) for high concurrency in corner cases. #35381 (Kseniia Sumarokova). Fix a possible deadlock in the cache. #35378 (Kseniia Sumarokova).
- Fix partition pruning in case of a comparison with a constant in `WHERE`. If the column and constant had different types, overflow was possible. The query could return an incorrect empty result. This fixes #35304. #35334 (Amos Bird).
- Fix schema inference for the TSKV format while using a small max_read_buffer_size. #35332 (Kruglov Pavel).
- Fix mutations in tables with enabled sparse columns. #35284 (Anton Popov).
- Do not delay final part writing by default (fixes possible `Memory limit exceeded` during `INSERT` by adding `max_insert_delayed_streams_for_parallel_write` with a default of 1000 for writes to s3, and disabled as before otherwise). #34780 (Azat Khuzhin).
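A hedged sketch of the new per-format conversion-error settings mentioned in the first bullet of this list; the table is hypothetical, and the inline data form is intended for clickhouse-client:
```sql
CREATE TABLE ips (addr IPv4) ENGINE = MergeTree ORDER BY addr;

-- With the setting enabled, an invalid address is inserted as the default value
-- (0.0.0.0) instead of aborting the whole INSERT.
INSERT INTO ips
SETTINGS input_format_ipv4_default_on_conversion_error = 1
FORMAT CSV
not-an-ip
```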
ClickHouse release v22.3-lts, 2022-03-17
Backward Incompatible Change
- Make the `arrayCompact` function behave like other higher-order functions: perform compaction not on the lambda function results but on the original array. If you're using nontrivial lambda functions in arrayCompact you may restore the old behaviour by wrapping the `arrayCompact` arguments into `arrayMap`. Closes #34010 #18535 #14778. #34795 (Alexandre Snarskii).
- Change implementation-specific behavior on overflow of the function `toDatetime`. It will be saturated to the nearest min/max supported instant of datetime instead of wrapping around. This change is highlighted as "backward incompatible" because someone may unintentionally rely on the old behavior. #32898 (HaiBo Li).
- Make the functions `cast(value, 'IPv4')`, `cast(value, 'IPv6')` behave the same as the `toIPv4`, `toIPv6` functions. Changed the behavior of incorrect IP addresses passed into the functions `toIPv4`, `toIPv6`: now if an invalid IP address is passed into these functions an exception will be raised; before, these functions returned a default value. Added functions `IPv4StringToNumOrDefault`, `IPv4StringToNumOrNull`, `IPv6StringToNumOrDefault`, `IPv6StringToNumOrNull`, `toIPv4OrDefault`, `toIPv4OrNull`, `toIPv6OrDefault`, `toIPv6OrNull`. The functions `IPv4StringToNumOrDefault`, `toIPv4OrDefault`, `toIPv6OrDefault` should be used if previous logic relied on `IPv4StringToNum`, `toIPv4`, `toIPv6` returning a default value for an invalid address. Added setting `cast_ipv4_ipv6_default_on_conversion_error`; if this setting is enabled, then IP address conversion functions will behave as before (see the sketch after this list). Closes #22825. Closes #5799. Closes #35156. #35240 (Maksim Kita).
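A hedged sketch of the changed IP conversion behavior described above:
```sql
SELECT toIPv4OrDefault('not-an-ip');  -- returns 0.0.0.0 instead of throwing
SELECT toIPv4OrNull('not-an-ip');     -- returns NULL

-- Restore the old (pre-22.3) non-throwing behavior of cast/toIPv4 for invalid addresses:
SELECT cast('not-an-ip', 'IPv4')
SETTINGS cast_ipv4_ipv6_default_on_conversion_error = 1;
```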
New Feature
- Support for caching data locally for remote filesystems. It can be enabled for `s3` disks. Closes #28961. #33717 (Kseniia Sumarokova). In the meantime, we enabled the test suite on the s3 filesystem and no more known issues exist, so it is starting to be production ready.
- Add new table function `hive`. It can be used as follows: `hive('<hive metastore url>', '<hive database>', '<hive table name>', '<columns definition>', '<partition columns>')`, for example `SELECT * FROM hive('thrift://hivetest:9083', 'test', 'demo', 'id Nullable(String), score Nullable(Int32), day Nullable(String)', 'day')`. #34946 (lgbo).
- Support authentication of users connected via SSL by their X.509 certificate. #31484 (eungenue).
- Support schema inference for inserting into the table functions `file`/`hdfs`/`s3`/`url`. #34732 (Kruglov Pavel).
- Now you can read the `system.zookeeper` table without restrictions on the path or using a `like` expression. These reads can generate quite a heavy load for ZooKeeper, so to enable this ability you have to enable the setting `allow_unrestricted_reads_from_keeper`. #34609 (Sergei Trifonov).
- Display CPU and memory metrics in clickhouse-local. Close #34545. #34605 (李扬).
- Implement the `startsWith` and `endsWith` functions for arrays, closes #33982. #34368 (usurai).
- Add three functions for the Map data type: 1. `mapReplace(map1, map2)` - replaces values for keys in map1 with the values of the corresponding keys in map2; adds keys from map2 that don't exist in map1. 2. `mapFilter` 3. `mapMap`. mapFilter and mapMap are higher-order functions accepting two arguments: the first argument is a lambda function with a k, v pair as arguments, the second argument is a column of type Map (see the sketch after this list). #33698 (hexiaoting).
- Allow getting the default user and password for clickhouse-client from the `CLICKHOUSE_USER` and `CLICKHOUSE_PASSWORD` environment variables. Close #34538. #34947 (DR).
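A hedged sketch of the new higher-order Map functions described above; only `mapFilter` is shown, and the lambda receives each (key, value) pair:
```sql
SELECT mapFilter((k, v) -> v > 1, map('a', 1, 'b', 2, 'c', 3)) AS filtered;
-- filtered = {'b':2,'c':3}
```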
Experimental Feature
- New data type `Object(<schema_format>)`, which supports storing semi-structured data (for now JSON only). Data is written to such types as a string. Then all paths are extracted according to the format of the semi-structured data and written as separate columns in the most optimal types that can store all their values. Those columns can be queried by names that match paths in the source data, e.g. `data.key1.key2` or with the cast operator `data.key1.key2::Int64` (see the sketch after this list).
- Add the `database_replicated_allow_only_replicated_engine` setting. When enabled, it is only allowed to create `Replicated` tables or tables with stateless engines in `Replicated` databases. #35214 (Nikolai Kochetov). Note that the `Replicated` database is still an experimental feature.
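A hedged sketch of the experimental Object data type; the table name is hypothetical and the gating setting (assumed here to be allow_experimental_object_type) may differ between versions:
```sql
SET allow_experimental_object_type = 1;

CREATE TABLE events (data Object('json'))
ENGINE = MergeTree ORDER BY tuple();

-- Data is written as a string; paths become typed subcolumns on read.
INSERT INTO events VALUES ('{"key1": {"key2": 42}}');

SELECT data.key1.key2::Int64 FROM events;
```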
Performance Improvement
- Improve performance of insertion into `MergeTree` tables by optimizing sorting. Up to 2x improvement is observed on realistic benchmarks. #34750 (Maksim Kita).
- Column pruning when reading Parquet, ORC and Arrow files from URL and S3. Closes #34163. #34849 (Kseniia Sumarokova).
- Columns pruning when reading Parquet, ORC and Arrow files from Hive. #34954 (lgbo).
- A bunch of performance optimizations from a performance superhero. Improve performance of processing queries with a large `IN` section. Improve performance of the `direct` dictionary if its source is `ClickHouse`. Improve performance of the `detectCharset`, `detectLanguageUnknown` functions. #34888 (Maksim Kita).
- Improve performance of the `any` aggregate function by using more batching. #34760 (Raúl Marín).
- Multiple improvements for performance of `clickhouse-keeper`: less locking #35010 (zhanglistar), lower memory usage by streaming reading and writing of snapshots instead of a full copy #34584 (zhanglistar), optimizing compaction of the log store in the RAFT implementation #34534 (zhanglistar), versioning of the internal data structure #34486 (zhanglistar).
Improvement
- Allow asynchronous inserts to table functions. Fixes #34864. #34866 (Anton Popov).
- Implicit type casting of the key argument for the functions `dictGetHierarchy`, `dictIsIn`, `dictGetChildren`, `dictGetDescendants`. Closes #34970. #35027 (Maksim Kita).
- `EXPLAIN AST` query can output the AST in the form of a graph in Graphviz format: `EXPLAIN AST graph = 1 SELECT * FROM system.parts`. #35173 (李扬).
- When large files were written with the `s3` table function or table engine, the content type on the files was mistakenly set to `application/xml` due to a bug in the AWS SDK. This closes #33964. #34433 (Alexey Milovidov).
- Change restrictive row policies a bit to make them an easier alternative to permissive policies in easy cases. If for a particular table only restrictive policies exist (without permissive policies), users will be able to see some rows. Also `SHOW CREATE ROW POLICY` will always show `AS permissive` or `AS restrictive` in a row policy's definition. #34596 (Vitaly Baranov).
- Improve schema inference with globs in File/S3/HDFS/URL engines. Try to use the next path for schema inference in case of error. #34465 (Kruglov Pavel).
- Play UI now correctly detects the preferred light/dark theme from the OS. #35068 (peledni).
- Added `date_time_input_format = 'best_effort_us'`. Closes #34799. #34982 (WenYao).
- New settings called `allow_plaintext_password` and `allow_no_password` are added to the server configuration, which turn on/off authentication types that can be potentially insecure in some environments. They are allowed by default. #34738 (Heena Bansal).
- Support for the `DateTime64` data type in `Arrow` format, closes #8280 and closes #28574. #34561 (李扬).
- Reload `remote_url_allow_hosts` (filtering of outgoing connections) on config update. #35294 (Nikolai Kochetov).
- Support the `--testmode` parameter for `clickhouse-local`. This parameter enables interpretation of test hints that we use in functional tests. #35264 (Kseniia Sumarokova).
- Add `distributed_depth` to the query log. It is like a more detailed variant of `is_initial_query`. #35207 (李扬).
- Respect `remote_url_allow_hosts` for the `MySQL` and `PostgreSQL` table functions. #35191 (Heena Bansal).
- Added a `disk_name` field to `system.part_log`. #35178 (Artyom Yurkov).
- Do not retry non-retriable errors when querying remote URLs. Closes #35161. #35172 (Kseniia Sumarokova).
- Support distributed INSERT SELECT queries (the setting `parallel_distributed_insert_select`) with the table function `view()`. #35132 (Azat Khuzhin).
- More precise memory tracking during `INSERT` into `Buffer` with `AggregateFunction`. #35072 (Azat Khuzhin).
- Avoid division by zero in Query Profiler if the Linux kernel has a bug. Closes #34787. #35032 (Alexey Milovidov).
- Add more sanity checks for keeper configuration: now mixing of localhost and non-local servers is not allowed, also add checks for same value of internal raft port and keeper client port. #35004 (alesapin).
- Currently, if the user changes the settings of the system tables there will be tons of logs and ClickHouse will rename the tables every minute. This fixes #34929. #34949 (Nikita Mikhaylov).
- Use connection pool for Hive metastore client. #34940 (lgbo).
- Ignore per-column `TTL` in `CREATE TABLE AS` if the new table engine does not support it (i.e. if the engine is not of the `MergeTree` family). #34938 (Azat Khuzhin).
- Allow `LowCardinality` strings for `ngrambf_v1`/`tokenbf_v1` indexes. Closes #21865. #34911 (Lars Hiller Eidnes).
- Allow opening an empty sqlite db if the file doesn't exist. Closes #33367. #34907 (Kseniia Sumarokova).
- Implement memory statistics for FreeBSD - this is required for `max_server_memory_usage` to work correctly. #34902 (Alexandre Snarskii).
- In previous versions the progress bar in clickhouse-client could jump forward near 50% for no reason. This closes #34324. #34801 (Alexey Milovidov).
- Now `ALTER TABLE DROP COLUMN columnX` queries for `MergeTree` table engines will work instantly when `columnX` is an `ALIAS` column. Fixes #34660. #34786 (alesapin).
- Show hints when the user mistyped the name of a data skipping index. Closes #29698. #34764 (flynn).
- Support the `remote()`/`cluster()` table functions for `parallel_distributed_insert_select` (see the sketch after this list). #34728 (Azat Khuzhin).
- Do not reset logging that is configured via the `--log-file`/`--errorlog-file` command line options in case of an empty configuration in the config file. #34718 (Amos Bird).
- Extract the schema only once on table creation and prevent reading from local files/external sources to extract the schema on each server startup. #34684 (Kruglov Pavel).
- Allow specifying argument names for executable UDFs. This is necessary for formats where the argument name is part of serialization, like `Native`, `JSONEachRow`. Closes #34604. #34653 (Maksim Kita).
- `MaterializedMySQL` (experimental feature) now supports `materialized_mysql_tables_list` (a comma-separated list of MySQL database tables which will be replicated by the MaterializedMySQL database engine; default value: empty list, which means all the tables will be replicated), mentioned at #32977. #34487 (zzsmdfj).
- Improve OpenTelemetry span logs for the INSERT operation on distributed tables. #34480 (Frank Chen).
- Make the znode `ctime` and `mtime` consistent between servers in ClickHouse Keeper. #33441 (小路).
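A hedged sketch of pushing a distributed INSERT SELECT down to the shards when reading through the `cluster()` table function; the cluster, database and table names are hypothetical:
```sql
INSERT INTO distributed_table
SELECT * FROM cluster('my_cluster', 'default', 'source_table')
SETTINGS parallel_distributed_insert_select = 2;
```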
Build/Testing/Packaging Improvement
- Package repository is migrated to JFrog Artifactory (Mikhail f. Shiryaev).
- Randomize some settings in functional tests, so more possible combinations of settings will be tested. This is yet another fuzzing method to ensure better test coverage. This closes #32268. #34092 (Kruglov Pavel).
- Drop PVS-Studio from our CI. #34680 (Mikhail f. Shiryaev).
- Add an ability to build stripped binaries with CMake. In previous versions it was performed by dh-tools. #35196 (alesapin).
- Smaller "fat-free" `clickhouse-keeper` build. #35031 (alesapin).
- Use @robot-clickhouse as an author and committer for PRs like https://github.com/ClickHouse/ClickHouse/pull/34685. #34793 (Mikhail f. Shiryaev).
- Limit DWARF version for debug info by 4 max, because our internal stack symbolizer cannot parse DWARF version 5. This makes sense if you compile ClickHouse with clang-15. #34777 (Alexey Milovidov).
- Remove the `clickhouse-test` debian package as an unneeded complication. CI uses tests from the repository, and standalone testing via the deb package is no longer supported. #34606 (Ilya Yatsishin).
Bug Fix (user-visible misbehaviour in official stable or prestable release)
- A fix for HDFS integration: when the inner buffer size is too small, NEED_MORE_INPUT in `HadoopSnappyDecoder` will run multiple times (>=3) for one compressed block. This makes the input data be copied into the wrong place in `HadoopSnappyDecoder::buffer`. #35116 (lgbo).
- Ignore obsolete grants in ATTACH GRANT statements. This PR fixes #34815. #34855 (Vitaly Baranov).
- Fix segfault in Postgres database when getting create table query if database was created using named collections. Closes #35312. #35313 (Kseniia Sumarokova).
- Fix partial merge join duplicate rows bug, close #31009. #35311 (Vladimir C).
- Fix possible `Assertion 'position() != working_buffer.end()' failed` while using bzip2 compression with a small `max_read_buffer_size` setting value. The bug was found in https://github.com/ClickHouse/ClickHouse/pull/35047. #35300 (Kruglov Pavel). While using lz4 compression with a small `max_read_buffer_size` setting value. #35296 (Kruglov Pavel). While using lzma compression with a small `max_read_buffer_size` setting value. #35295 (Kruglov Pavel). While using `brotli` compression with a small `max_read_buffer_size` setting value. The bug was found in https://github.com/ClickHouse/ClickHouse/pull/35047. #35281 (Kruglov Pavel).
- Fix a possible segfault in `JSONEachRow` schema inference. #35291 (Kruglov Pavel).
- Fix the `CHECK TABLE` query in the case when sparse columns are enabled in the table. #35274 (Anton Popov).
- Avoid std::terminate in case of an exception while reading from remote VFS. #35257 (Azat Khuzhin).
- Fix reading port from config, close #34776. #35193 (Vladimir C).
- Fix error in query with `WITH TOTALS` in the case when `HAVING` returned an empty result. This fixes #33711. #35186 (Amos Bird).
- Fix a corner case of `replaceRegexpAll`, close #35117. #35182 (Vladimir C).
- Schema inference didn't work properly in the case of `INSERT INTO FUNCTION s3(...) FROM ...`: it tried to read the schema from the s3 file instead of from the select query. #35176 (Kruglov Pavel).
- Fix MaterializedPostgreSQL (experimental feature) `table overrides` for partition by, etc. Closes #35048. #35162 (Kseniia Sumarokova).
- Fix MaterializedPostgreSQL (experimental feature) adding a new table to replication (ATTACH TABLE) after manually removing it (DETACH TABLE). Closes #33800. Closes #34922. Closes #34315. #35158 (Kseniia Sumarokova).
- Fix partition pruning error when non-monotonic function is used with IN operator. This fixes #35136. #35146 (Amos Bird).
- Fixed slightly incorrect translation of YAML configs to XML. #35135 (Miel Donkers).
- Fix `optimize_skip_unused_shards_rewrite_in` for signed columns and negative values. #35134 (Azat Khuzhin).
- The `update_lag` external dictionary configuration option was unusable, showing the error message "Unexpected key `update_lag` in dictionary source configuration". #35089 (Jason Chu).
- Avoid a possible deadlock on server shutdown. #35081 (Azat Khuzhin).
- Fix a missing alias after a function is optimized to a subcolumn when the setting `optimize_functions_to_subcolumns` is enabled. Closes #33798. #35079 (qieqieplus).
- Fix reading from the `system.asynchronous_inserts` table if there exists an asynchronous insert into a table function. #35050 (Anton Popov).
- Fix possible exception `Reading for MergeTree family tables must be done with last position boundary` (relevant to operation on remote VFS). Closes #34979. #35001 (Kseniia Sumarokova).
- Fix unexpected result when using a -State type aggregate function in a window frame. #34999 (metahys).
- Fix possible segfault in FileLog (experimental feature). Closes #30749. #34996 (Kseniia Sumarokova).
- Fix possible rare error `Cannot push block to port which already has data`. #34993 (Nikolai Kochetov).
- Fix wrong schema inference for unquoted dates in CSV. Closes #34768. #34961 (Kruglov Pavel).
- Integration with Hive: fix unexpected result when using `in` in `where` in a Hive query. #34945 (lgbo).
- Avoid busy polling in ClickHouse Keeper while searching for changelog files to delete. #34931 (Azat Khuzhin).
- Fix DateTime64 conversion from PostgreSQL. Closes #33364. #34910 (Kseniia Sumarokova).
- Fix possible "Part directory doesn't exist" during `INSERT` into a MergeTree table backed by VFS over s3. #34876 (Azat Khuzhin).
- Support DDLs like CREATE USER to be executed on a cross replicated cluster. #34860 (Jianmei Zhang).
- Fix bugs for multiple-column GROUP BY in `WindowView` (experimental feature). #34859 (vxider).
- Fix possible failures in S2 functions when queries contain const columns. #34745 (Bharat Nallan).
- Fix bug for H3 funcs containing const columns which cause queries to fail. #34743 (Bharat Nallan).
- Fix `No such file or directory` with enabled `fsync_part_directory` and vertical merge. #34739 (Azat Khuzhin).
- Fix serialization/printing for the system queries `RELOAD MODEL`, `RELOAD FUNCTION`, `RESTART DISK` when used with `ON CLUSTER`. Closes #34514. #34696 (Maksim Kita).
- Fix `allow_experimental_projection_optimization` with `enable_global_with_statement` (before, it could lead to a `Stack size too large` error in case of multiple expressions in the `WITH` clause, and it also executed scalar subqueries again and again; now it will be more optimal). #34650 (Azat Khuzhin).
- Stop selecting a part for mutation when the other replica has already updated the transaction log for the `ReplicatedMergeTree` engine. #34633 (Jianmei Zhang).
- Fix incorrect result of trivial count query when the part movement feature is used #34089. #34385 (nvartolomei).
- Fix inconsistency of the `max_query_size` limitation in distributed subqueries. #34078 (Chao Ma).
ClickHouse release v22.2, 2022-02-17
Upgrade Notes
- Applying data skipping indexes for queries with FINAL may produce an incorrect result. In this release we disabled data skipping indexes by default for queries with FINAL (a new setting `use_skip_indexes_if_final` is introduced and disabled by default). #34243 (Azat Khuzhin).
New Feature
- Projections are production ready. Set `allow_experimental_projection_optimization` by default and deprecate this setting. #34456 (Nikolai Kochetov).
- An option to create new files on insert for the `File`/`S3`/`HDFS` engines. Allow to overwrite a file in `HDFS`. Throw an exception on an attempt to overwrite a file in `S3` by default. Throw an exception on an attempt to append data to a file in formats that have a suffix (and thus don't support appends, like `Parquet`, `ORC`). Closes #31640 Closes #31622 Closes #23862 Closes #15022 Closes #16674. #33302 (Kruglov Pavel).
- Add a setting that allows a user to provide their own deduplication semantics in `MergeTree`/`ReplicatedMergeTree`. If provided, it's used instead of the data digest to generate the block ID. So, for example, by providing a unique value for the setting in each INSERT statement, the user can avoid the same inserted data being deduplicated. This closes: #7461. #32304 (Igor Nikonov).
- Add support of the `DEFAULT` keyword for INSERT statements. Closes #6331. #33141 (Andrii Buriachevskyi).
- The `EPHEMERAL` column specifier is added to the `CREATE TABLE` query. Closes #9436. #34424 (yakov-olkhovskiy).
- Support the `IF EXISTS` clause for the `TTL expr TO [DISK|VOLUME] [IF EXISTS] 'xxx'` feature. Parts will be moved to a disk or volume only if it exists on the replica, so `MOVE TTL` rules will be able to behave differently on replicas according to the existing storage policies. Resolves #34455. #34504 (Anton Popov).
- Allow setting a default table engine and creating tables without specifying ENGINE. #34187 (Ilya Yatsishin).
- Add table function `format(format_name, data)` (see the sketch after this list). #34125 (Kruglov Pavel).
- Detect the format in `clickhouse-local` by file name even in the case when it is passed to stdin. #33829 (Kruglov Pavel).
- Add schema inference for the `values` table function. Closes #33811. #34017 (Kruglov Pavel).
- Dynamic reload of server TLS certificates on config reload. Closes #15764. #15765 (johnskopis). #31257 (Filatenkov Artur).
- Now ReplicatedMergeTree can recover data when some of its disks are broken. #13544 (Amos Bird).
- Fault-tolerant connections in clickhouse-client: `clickhouse-client ... --host host1 --host host2 --port port2 --host host3 --port port --host host4`. #34490 (Kruglov Pavel). #33824 (Filippov Denis).
- Add `DEGREES` and `RADIANS` functions for MySQL compatibility. #33769 (Bharat Nallan).
- Add `h3ToCenterChild` function. #33313 (Bharat Nallan). Add new h3 miscellaneous functions: `edgeLengthKm`, `exactEdgeLengthKm`, `exactEdgeLengthM`, `exactEdgeLengthRads`, `numHexagons`. #33621 (Bharat Nallan).
- Add function `bitSlice` to extract bit subsequences from String/FixedString. #33360 (RogerYK).
- Implemented the `meanZTest` aggregate function. #33354 (achimbab).
- Add confidence intervals to T-test aggregate functions. #33260 (achimbab).
- Add function `addressToLineWithInlines`. Close #26211. #33467 (SuperDJY).
- Added `#!` and `#` as a recognised start of a single-line comment. Closes #34138. #34230 (Aaron Katz).
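A hedged sketch of the new `format` table function; the inline JSONEachRow payload below is made up for illustration:
```sql
SELECT *
FROM format(JSONEachRow, '{"a": 1, "b": "hello"}\n{"a": 2, "b": "world"}');
-- The structure of the result is inferred from the data.
```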
Experimental Feature
- Functions for text classification: language and charset detection. See #23271. #33314 (Nikolay Degterinsky).
- Add memory overcommit to `MemoryTracker`. Added `guaranteed` settings for memory limits which represent soft memory limits. When the hard memory limit is reached, `MemoryTracker` tries to cancel the most overcommitted query. New setting `memory_usage_overcommit_max_wait_microseconds` specifies how long queries may wait for another query to stop. Closes #28375. #31182 (Dmitry Novik).
- Enable stream-to-table join in WindowView. #33729 (vxider).
- Support `SET`, `YEAR`, `TIME` and `GEOMETRY` data types in `MaterializedMySQL` (experimental feature). Fixes #18091, #21536, #26361. #33429 (zzsmdfj).
- Fix various issues when projection is enabled by default. Each issue is described in a separate commit. This is for #33678. This fixes #34273. #34305 (Amos Bird).
Performance Improvement
- Support `optimize_read_in_order` if a prefix of the sorting key is already sorted. E.g. if we have sorting key `ORDER BY (a, b)` in the table and a query with `WHERE a = const ORDER BY b` clauses, now it will be executed reading in order of the sorting key instead of doing a full sort (see the sketch after this list). #32748 (Anton Popov).
- Improve performance of partitioned insert into the table functions `URL`, `S3`, `File`, `HDFS`. Closes #34348. #34510 (Maksim Kita).
- Multiple performance improvements of clickhouse-keeper. #34484 #34587 (zhanglistar).
- `FlatDictionary`: improve performance of dictionary data load. #33871 (Maksim Kita).
- Improve performance of the `mapPopulateSeries` function. Closes #33944. #34318 (Maksim Kita).
- The `_file` and `_path` virtual columns (in file-like table engines) are made `LowCardinality` - it will make queries for multiple files faster. Closes #34300. #34317 (flynn).
- Speed up loading of data parts. It was not parallelized before: the setting `part_loading_threads` did not have an effect. See #4699. #34310 (alexey-milovidov).
- Improve performance of the `LineAsString` format. This closes #34303. #34306 (alexey-milovidov).
- Optimize `quantilesExact{Low,High}` to use `nth_element` instead of `sort`. #34287 (Danila Kutenin).
- Slightly improve performance of the `Regexp` format. #34202 (alexey-milovidov).
- Minor improvement for analysis of scalar subqueries. #34128 (Federico Rodriguez).
- Make ORDER BY tuple almost as fast as ORDER BY columns. We have special optimizations for multiple column ORDER BY: https://github.com/ClickHouse/ClickHouse/pull/10831 . It's beneficial to also apply them to tuple columns. #34060 (Amos Bird).
- Rework and reintroduce the scalar subqueries cache to Materialized Views execution. #33958 (Raúl Marín).
- Slightly improve performance of `ORDER BY` by adding x86-64 AVX-512 support for `memcmpSmall` functions to accelerate memory comparison. It works only if you compile ClickHouse by yourself. #33706 (hanqf-git).
- Improve `range_hashed` dictionary performance if for a key there are a lot of intervals. Fixes #23821. #33516 (Maksim Kita).
- For inserts and merges into S3, write files in parallel whenever possible (TODO: check if it's merged). #33291 (Nikolai Kochetov).
- Improve `clickhouse-keeper` performance and fix several memory leaks in the NuRaft library. #33329 (alesapin).
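A hedged illustration of the `optimize_read_in_order` prefix case described in the first bullet of this list; the table and values are hypothetical:
```sql
CREATE TABLE t_order (a UInt32, b UInt32, value String)
ENGINE = MergeTree ORDER BY (a, b);

-- The filter fixes the prefix `a`, so the ORDER BY b can be served by reading
-- in sorting-key order instead of performing a full sort.
SELECT * FROM t_order WHERE a = 42 ORDER BY b
SETTINGS optimize_read_in_order = 1;
```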
Improvement
- Support asynchronous inserts in `clickhouse-client` for queries with inlined data. #34267 (Anton Popov).
- Functions `dictGet`, `dictHas` implicitly cast the key argument to the dictionary key structure, if they are different. #33672 (Maksim Kita).
- Improvements for `range_hashed` dictionaries. Improve performance of load time if there are multiple attributes. Allow to create a dictionary without attributes. Added an option to specify the strategy when intervals `start` and `end` have `Nullable` type: `convert_null_range_bound_to_open`, by default `true`. Closes #29791. Allow to specify `Float`, `Decimal`, `DateTime64`, `Int128`, `Int256`, `UInt128`, `UInt256` as range types. `RangeHashedDictionary` added support for range values that extend the `Int64` type. Closes #28322. Added option `range_lookup_strategy` to specify the range lookup type, `min` or `max`, by default `min`. Closes #21647. Fixed allocated bytes calculations. Fixed type name in `system.dictionaries` in case of `ComplexKeyHashedDictionary`. #33927 (Maksim Kita).
- `flat`, `hashed`, `hashed_array` dictionaries now support creation with empty attributes, with support for reading the keys and using `dictHas`. Fixes #33820. #33918 (Maksim Kita).
- Added support for the `DateTime64` data type in dictionaries. #33914 (Maksim Kita).
- Allow to write `s3(url, access_key_id, secret_access_key)` (autodetection of data format and table structure, but with explicit credentials; see the sketch after this list). #34503 (Kruglov Pavel).
- Added sending of the output format back to the client like it's done in the HTTP protocol, as suggested in #34362. Closes #34362. #34499 (Vitaly Baranov).
- Send ProfileEvents statistics in case of an INSERT SELECT query (to display query metrics in `clickhouse-client` for this type of queries). #34498 (Dmitry Novik).
- Recognize the `.jsonl` extension for the JSONEachRow format. #34496 (Kruglov Pavel).
- Improve schema inference in clickhouse-local. Allow to write just `clickhouse-local -q "select * from table" < data.format`. #34495 (Kruglov Pavel).
- The privileges CREATE/ALTER/DROP ROW POLICY now can be granted on a table or on `database.*` as well as globally `*.*`. #34489 (Vitaly Baranov).
- Allow to export arbitrarily large files to `s3`. Add two new settings: `s3_upload_part_size_multiply_factor` and `s3_upload_part_size_multiply_parts_count_threshold`. Now each time `s3_upload_part_size_multiply_parts_count_threshold` parts are uploaded to S3 from a single query, `s3_min_upload_part_size` is multiplied by `s3_upload_part_size_multiply_factor`. Fixes #34244. #34422 (alesapin).
- Allow to skip not-found (404) URLs for globs when using the URL storage / table function. Also closes #34359. #34392 (Kseniia Sumarokova).
- Default input and output formats for `clickhouse-local` that can be overridden by --input-format and --output-format. Close #30631. #34352 (李扬).
- Add options for `clickhouse-format`, which close #30528: `-max_query_size`, `-max_parser_depth`. #34349 (李扬).
- Better handling of pre-inputs before client start. This is for #34308. #34336 (Amos Bird).
- `REGEXP_MATCHES` and `REGEXP_REPLACE` function aliases for compatibility with PostgreSQL. Close #30885. #34334 (李扬).
- Some servers expect a User-Agent header in their HTTP requests. A `User-Agent` header entry has been added to HTTP requests of the form: User-Agent: ClickHouse/VERSION_STRING. #34330 (Saad Ur Rahman).
- Cancel merges before acquiring the table lock for a `TRUNCATE` query to avoid a `DEADLOCK_AVOIDED` error in some cases. Fixes #34302. #34304 (tavplubix).
- Change severity of the "Cancelled merging parts" message in logs, because it's not an error. This closes #34148. #34232 (alexey-milovidov).
- Add ability to compose the PostgreSQL-style cast operator `::` with expressions using the `[]` and `.` operators (array and tuple indexing). #34229 (Nikolay Degterinsky).
- Recognize the `YYYYMMDD-hhmmss` format in the `parseDateTimeBestEffort` function. This closes #34206. #34208 (alexey-milovidov).
- Allow a carriage return in the middle of the line while parsing by the `Regexp` format. This closes #34200. #34205 (alexey-milovidov).
- Allow to parse a dictionary's `PRIMARY KEY` as `PRIMARY KEY (id, value)`; previously only `PRIMARY KEY id, value` was supported. Closes #34135. #34141 (Maksim Kita).
- An optional argument for `splitByChar` to limit the number of resulting elements. Close #34081. #34140 (李扬).
- Improving the experience of multiple line editing for clickhouse-client. This is a follow-up of #31123. #34114 (Amos Bird).
- Add `UUID` support in the `MsgPack` input/output format. #34065 (Kruglov Pavel).
- Tracing context (for OpenTelemetry) is now propagated from GRPC client metadata (this change is relevant for the GRPC client-server protocol). #34064 (andremarianiello).
- Support all types of `SYSTEM` queries with the `ON CLUSTER` clause. #34005 (小路).
- Improve memory accounting for queries that are using less than `max_untracked_memory`. #34001 (Azat Khuzhin).
- Fixed UTF-8 string case-insensitive search when lowercase and uppercase characters are represented by different numbers of bytes. An example is `ẞ` and `ß`. This closes #7334. #33992 (Harry Lee).
- Detect format and schema from stdin in `clickhouse-local`. #33960 (Kruglov Pavel).
- Correctly handle the case of misconfiguration when multiple disks are using the same path on the filesystem. #29072. #33905 (zhongyuankai).
- Try every resolved IP address while getting S3 proxy. S3 proxies are rarely used, mostly in Yandex Cloud. #33862 (Nikolai Kochetov).
- Support the EXPLAIN AST CREATE FUNCTION query: `EXPLAIN AST CREATE FUNCTION mycast AS (n) -> cast(n as String)` will return `EXPLAIN AST CREATE FUNCTION mycast AS n -> CAST(n, 'String')`. #33819 (李扬).
- Added support for cast from `Map(Key, Value)` to `Array(Tuple(Key, Value))`. #33794 (Maksim Kita).
- Add some improvements and fixes for the `Bool` data type. Fixes #33244. #33737 (Kruglov Pavel).
- Parse and store the OpenTelemetry trace-id in big-endian order. #33723 (Frank Chen).
- Improvement for the `fromUnixTimestamp64` family of functions. They now accept any integer value that can be converted to `Int64`. This closes: #14648. #33505 (Andrey Zvonov).
- Reimplement `_shard_num` from constants (see #7624) with the `shardNum()` function (see #27020), to avoid possible issues (like those that had been found in #16947). #33392 (Azat Khuzhin).
- Enable binary arithmetic (plus, minus, multiply, division, least, greatest) between Decimal and Float. #33355 (flynn).
- Respect cgroups limits in max_threads autodetection. #33342 (JaySon).
- Add new clickhouse-keeper setting `min_session_timeout_ms`. Now clickhouse-keeper will determine the client session timeout according to the `min_session_timeout_ms` and `session_timeout_ms` settings. #33288 (JackyWoo).
- Added `UUID` data type support for the functions `hex` and `bin`. #32170 (Frank Chen).
- Fix reading of subcolumns with dots in their names. In particular, fixed reading of `Nested` columns if their element names contain dots (e.g. Nested(`keys.name` String, `keys.id` UInt64, values UInt64)). #34228 (Anton Popov).
- Fixes `parallel_view_processing = 0` not working when inserting into a table using `VALUES`. Fixes `view_duration_ms` in the `query_views_log` not being set correctly for materialized views. #34067 (Raúl Marín).
- Fix parsing table structure from ZooKeeper: now metadata from ZooKeeper is compared with local metadata in canonical form. It helps when canonical function names change between ClickHouse versions. #33933 (sunny).
- Properly escape some characters for interaction with LDAP. #33401 (IlyaTsoi).
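A hedged sketch of the `s3(url, access_key_id, secret_access_key)` form noted above (format and structure are auto-detected); the bucket URL and credentials are hypothetical placeholders:
```sql
SELECT *
FROM s3('https://my-bucket.s3.amazonaws.com/data/events.parquet',
        'AKIAEXAMPLEKEYID', 'example-secret-access-key');
```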
Build/Testing/Packaging Improvement
- Remove unbundled build support. #33690 (Azat Khuzhin).
- Ensure that tests don't depend on the result of non-stable sorting of equal elements. Added equal items ranges randomization in debug after sort to prevent issues when we rely on equal items sort order. #34393 (Maksim Kita).
- Add verbosity to a style check. #34289 (Mikhail f. Shiryaev).
- Remove the `clickhouse-test` debian package because it's obsolete. #33948 (Ilya Yatsishin).
- Multiple improvements for the build system to remove the possibility of occasionally using packages from the OS and to enforce hermetic builds. #33695 (Amos Bird).
Bug Fix (user-visible misbehaviour in official stable or prestable release)
- Fixed the assertion in case of using `allow_experimental_parallel_reading_from_replicas` with `max_parallel_replicas` equal to 1. This fixes #34525. #34613 (Nikita Mikhaylov).
- Fix a rare bug while reading empty arrays, which could lead to a `Data compressed with different methods` error. It can reproduce if you have mostly empty arrays, but not always, and reading is performed in backward direction with ORDER BY ... DESC. This error is extremely unlikely to happen. #34327 (Anton Popov).
- Fix wrong result of `round`/`roundBankers` if integer values of small types are rounded. Closes #33267. #34562 (李扬).
- Sometimes query cancellation did not work immediately when we were reading multiple files from s3 or HDFS. Fixes #34301. Relates to #34397. #34539 (Dmitry Novik).
- Fix exception `Chunk should have AggregatedChunkInfo in MergingAggregatedTransform` (in the case of `optimize_aggregation_in_order = 1` and `distributed_aggregation_memory_efficient = 0`). Fixes #34526. #34532 (Anton Popov).
- Fix comparison between integers and floats in index analysis. Previously it could lead to skipping some granules for reading by mistake. Fixes #34493. #34528 (Anton Popov).
- Fix compression support in URL engine. #34524 (Frank Chen).
- Fix possible error 'file_size: Operation not supported' in files' schema autodetection. #34479 (Kruglov Pavel).
- Fixes possible race with table deletion. #34416 (Kseniia Sumarokova).
- Fix possible error `Cannot convert column Function to mask` in short circuit function evaluation. Closes #34171. #34415 (Kruglov Pavel).
- Fix potential crash when doing schema inference from a url source. Closes #34147. #34405 (Kruglov Pavel).
- For UDFs access permissions were checked for database level instead of global level as it should be. Closes #34281. #34404 (Maksim Kita).
- Fix wrong engine syntax in the result of a `SHOW CREATE DATABASE` query for databases with engine `Memory`. This closes #34335. #34345 (alexey-milovidov).
- Fixed a couple of extremely rare race conditions that might lead to a broken state of the replication queue and an "intersecting parts" error. #34297 (tavplubix).
- Fix progress bar width. It was incorrectly rounded to integer number of characters. #34275 (alexey-milovidov).
- Fix current_user/current_address client information fields for inter-server communication (before this patch current_user/current_address will be preserved from the previous query). #34263 (Azat Khuzhin).
- Fix a memory leak in case of some Exception during query processing with `optimize_aggregation_in_order=1`. #34234 (Azat Khuzhin).
- Fix metric `Query`, which shows the number of executing queries. In the last several releases it was always 0. #34224 (Anton Popov).
- Fix schema inference for the table function `s3`. #34186 (Kruglov Pavel).
- Fix a rare and benign race condition in the `HDFS`, `S3` and `URL` storage engines which can lead to additional connections. #34172 (alesapin).
- Fix a bug which can rarely lead to the error "Cannot read all data" while reading LowCardinality columns of the MergeTree table engines family which store data on a remote file system like S3 (virtual filesystem over s3 is an experimental feature that is not ready for production). #34139 (alesapin).
- Fix inserts to distributed tables in case of a change of native protocol. The last change was in the version 22.1, so there may be some failures of inserts to distributed tables after upgrade to that version. #34132 (Anton Popov).
- Fix a possible data race in the `File` table engine that was introduced in #33960. Closes #34111. #34113 (Kruglov Pavel).
- Fixed a minor race condition that might cause an "intersecting parts" error in extremely rare cases after ZooKeeper connection loss. #34096 (tavplubix).
- Fix asynchronous inserts with the `Native` format. #34068 (Anton Popov).
- Fix a bug which led to inability for the server to start when both replicated access storage and keeper (embedded in clickhouse-server) are used. Introduced two settings for keeper socket timeouts instead of settings from the default user: `keeper_server.socket_receive_timeout_sec` and `keeper_server.socket_send_timeout_sec`. Fixes #33973. #33988 (alesapin).
- Fix segfault while parsing an ORC file with a corrupted footer. Closes #33797. #33984 (Kruglov Pavel).
- Fix parsing IPv6 from query parameter (prepared statements) and fix IPv6 to string conversion. Closes #33928. #33971 (Kruglov Pavel).
- Fix crash while reading of nested tuples. Fixes #33838. #33956 (Anton Popov).
- Fix usage of the functions `array` and `tuple` with literal arguments in distributed queries. Previously it could lead to a `Not found columns` exception. #33938 (Anton Popov).
- Aggregate function combinator `-If` did not correctly process a `Nullable` filter argument. This closes #27073. #33920 (alexey-milovidov).
- Fix a potential race condition when doing remote disk reads (virtual filesystem over s3 is an experimental feature that is not ready for production). #33912 (Amos Bird).
- Fix crash if SQL UDF is created with lambda with non-identifier arguments. Closes #33866. #33868 (Maksim Kita).
- Fix usage of sparse columns (which can be enabled by experimental setting `ratio_of_defaults_for_sparse_serialization`). #33849 (Anton Popov).
- Fixed `replica is not readonly` logical error on `SYSTEM RESTORE REPLICA` query when replica is actually readonly. Fixes #33806. #33847 (tavplubix).
- Fix memory leak in `clickhouse-keeper` in case compression is used (default). #33840 (Azat Khuzhin).
- Fix index analysis with no common types available. #33833 (Amos Bird).
- Fix schema inference for `JSONEachRow` and `JSONCompactEachRow`. #33830 (Kruglov Pavel).
- Fix usage of external dictionaries with `redis` source and large number of keys. #33804 (Anton Popov).
- Fix bug in client that led to 'Connection reset by peer' in server. Closes #33309. #33790 (Kruglov Pavel).
- Fix parsing query INSERT INTO ... VALUES SETTINGS ... (...), ... #33776 (Kruglov Pavel).
- Fix bug of check table when creating data part with wide format and projection. #33774 (李扬).
- Fix tiny race between count() and INSERT/merges/... in MergeTree (it is possible to return incorrect number of rows for SELECT with optimize_trivial_count_query). #33753 (Azat Khuzhin).
- Throw exception when directory listing request has failed in storage HDFS. #33724 (LiuNeng).
- Fix mutation when table contains projections. This fixes #33010. This fixes #33275. #33679 (Amos Bird).
- Correctly determine current database if `CREATE TEMPORARY TABLE AS SELECT` is queried inside a named HTTP session. This is a very rare use case. This closes #8340. #33676 (alexey-milovidov).
- Allow some queries with sorting, LIMIT BY, ARRAY JOIN and lambda functions. This closes #7462. #33675 (alexey-milovidov).
- Fix bug in "zero copy replication" (a feature that is under development and should not be used in production) which lead to data duplication in case of TTL move. Fixes #33643. #33642 (alesapin).
- Fix `Chunk should have AggregatedChunkInfo in GroupingAggregatedTransform` (in case of `optimize_aggregation_in_order = 1`). #33637 (Azat Khuzhin).
- Fix error `Bad cast from type ... to DB::DataTypeArray` which may happen when table has `Nested` column with dots in name, and default value is generated for it (e.g. during insert, when column is not listed). Continuation of #28762. #33588 (Alexey Pavlenko).
- Export into `lz4` files has been fixed. Closes #31421. #31862 (Kruglov Pavel).
- Fix potential crash if `group_by_overflow_mode` was set to `any` (approximate GROUP BY) and aggregation was performed by single column of type `LowCardinality`. #34506 (DR).
- Fix inserting to temporary tables via gRPC client-server protocol. Fixes #34347, issue `#2`. #34364 (Vitaly Baranov).
- Fix issue #19429. #34225 (Vitaly Baranov).
- Fix issue #18206. #33977 (Vitaly Baranov).
- This PR allows using multiple LDAP storages in the same list of user directories. It worked earlier but was broken because LDAP tests are disabled (they are part of the testflows tests). #33574 (Vitaly Baranov).
ClickHouse release v22.1, 2022-01-18
Upgrade Notes
- The functions `left` and `right` were previously implemented in the parser and are now full-featured. Distributed queries with `left` or `right` functions without aliases may throw an exception if the cluster contains different versions of clickhouse-server. If you are upgrading your cluster and encounter this error, you should finish upgrading your cluster to ensure all nodes have the same version. Also you can add aliases (`AS something`) to the columns in your queries to avoid this issue (see the example after this list). #33407 (alexey-milovidov).
- Resource usage by scalar subqueries is fully accounted since this version. With this change, rows read in scalar subqueries are now reported in the query_log. If the scalar subquery is cached (repeated or called for several rows) the rows read are only counted once. This change allows KILLing queries and reporting progress while they are executing scalar subqueries. #32271 (Raúl Marín).
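A minimal sketch of the aliasing workaround from the first note above; the table name `distributed_hits` is hypothetical.

```sql
-- During a rolling upgrade, give explicit aliases to left()/right() in distributed queries
-- so that servers of different versions agree on the result column names.
SELECT left(url, 10) AS url_prefix, right(url, 5) AS url_suffix
FROM distributed_hits
LIMIT 10;
```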
New Feature
- Implement data schema inference for input formats. Allow to skip structure (or write just `auto`) in table functions `file`, `url`, `s3`, `hdfs` and in parameters of `clickhouse-local`. Allow to skip structure in create query for table engines `File`, `HDFS`, `S3`, `URL`, `Merge`, `Buffer`, `Distributed` and `ReplicatedMergeTree` (if we add new replicas). See the first example after this list. #32455 (Kruglov Pavel).
- Detect format by file extension in `file`/`hdfs`/`s3`/`url` table functions and `HDFS`/`S3`/`URL` table engines and also for `SELECT INTO OUTFILE` and `INSERT FROM INFILE`. #33565 (Kruglov Pavel). Close #30918. #33443 (OnePiece).
- A tool for collecting diagnostics data if you need support. #33175 (Alexander Burmak).
- Automatic cluster discovery via Zoo/Keeper. It allows to add replicas to the cluster without changing configuration on every server. #31442 (vdimir).
- Implement Hive table engine to access Apache Hive from ClickHouse. This implements: #29245. #31104 (taiyang-li).
- Add aggregate functions `cramersV`, `cramersVBiasCorrected`, `theilsU` and `contingency`. These functions calculate dependency (measure of association) between categorical values. All these functions are using cross-tab (histogram on pairs) for implementation. You can imagine it like a correlation coefficient but for any discrete values (not necessarily numbers); see the second example after this list. #33366 (alexey-milovidov). Initial implementation by Vanyok-All-is-OK and antikvist.
- Added table function `hdfsCluster` which allows processing files from HDFS in parallel from many nodes in a specified cluster, similarly to `s3Cluster`. #32400 (Zhichang Yu).
- Adding support for disks backed by Azure Blob Storage, in a similar way it has been done for disks backed by AWS S3. #31505 (Jakub Kuklis).
- Allow `COMMENT` in `CREATE VIEW` (for all VIEW kinds). #31062 (Vasily Nemkov).
- Dynamically reinitialize listening ports and protocols when configuration changes. #30549 (Kevin Michel).
- Added `left`, `right`, `leftUTF8`, `rightUTF8` functions. Fix error in implementation of `substringUTF8` function with negative offset (offset from the end of string). #33407 (alexey-milovidov).
- Add new functions for `H3` coordinate system: `h3HexAreaKm2`, `h3CellAreaM2`, `h3CellAreaRads2`. #33479 (Bharat Nallan).
- Add `MONTHNAME` function. #33436 (usurai).
- Added function `arrayLast`. Closes #33390. #33415 Added function `arrayLastIndex`. #33465 (Maksim Kita).
- Add function `decodeURLFormComponent` slightly different to `decodeURLComponent`. Close #10298. #33451 (SuperDJY).
- Allow to split `GraphiteMergeTree` rollup rules for plain/tagged metrics (optional `rule_type` field). #33494 (Michail Safronov).
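A hedged illustration of the schema inference and format-by-extension detection features above; the file name and the URL are hypothetical.

```sql
-- The structure argument can be omitted (or written as 'auto'); format is detected by extension.
SELECT * FROM file('events.parquet') LIMIT 10;

-- The same works for the url/s3/hdfs table functions.
SELECT count() FROM url('https://example.com/data.csv', CSVWithNames, 'auto');
```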
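A minimal sketch of the categorical-association aggregate functions (`cramersV` and friends) above; the table and column names are hypothetical.

```sql
-- Measures of association between two categorical columns.
SELECT
    cramersV(browser, country)              AS v,
    cramersVBiasCorrected(browser, country) AS v_corrected,
    theilsU(browser, country)               AS u
FROM visits;
```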
Performance Improvement
- Support moving conditions to `PREWHERE` (setting `optimize_move_to_prewhere`) for tables of `Merge` engine if all its underlying tables support `PREWHERE`. See the example after this list. #33300 (Anton Popov).
- More efficient handling of globs for URL storage. Now you can easily query a million URLs in parallel with retries. Closes #32866. #32907 (Kseniia Sumarokova).
- Avoid exponential backtracking in parser. This closes #20158. #33481 (alexey-milovidov).
- Abuse of `untuple` function was leading to exponential complexity of query analysis (found by fuzzer). This closes #33297. #33445 (alexey-milovidov).
- Reduce allocated memory for dictionaries with string attributes. #33466 (Maksim Kita).
- Slight performance improvement of `reinterpret` function. #32587 (alexey-milovidov).
- Non-significant change. In extremely rare cases, when a data part is lost on every replica, after merging of some data parts, the subsequent queries may skip fewer partitions during partition pruning. This hardly affects anything. #32220 (Azat Khuzhin).
- Improve `clickhouse-keeper` writing performance by optimizing the size calculation logic. #32366 (zhanglistar).
- Optimize single part projection materialization. This closes #31669. #31885 (Amos Bird).
- Improve query performance of system tables. #33312 (OnePiece).
- Optimize selecting of MergeTree parts that can be moved between volumes. #33225 (OnePiece).
- Fix `sparse_hashed` dict performance with sequential keys (wrong hash function). #32536 (Azat Khuzhin).
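A hedged sketch of the `Merge`-engine `PREWHERE` improvement above; the table and column names are hypothetical.

```sql
-- With optimize_move_to_prewhere enabled, a filter over a Merge-engine table can be moved
-- to PREWHERE when every underlying MergeTree table supports it.
-- all_hits is a hypothetical table with ENGINE = Merge(currentDatabase(), '^hits_').
SET optimize_move_to_prewhere = 1;
SELECT count()
FROM all_hits
WHERE status = 'ok';
```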
Experimental Feature
- Parallel reading from multiple replicas within a shard during distributed query without using sample key. To enable this, set `allow_experimental_parallel_reading_from_replicas = 1` and `max_parallel_replicas` to any number. This closes #26748. See the example after this list. #29279 (Nikita Mikhaylov).
- Implemented sparse serialization. It can reduce usage of disk space and improve performance of some queries for columns which contain a lot of default (zero) values. It can be enabled by the setting `ratio_of_defaults_for_sparse_serialization`. Sparse serialization will be chosen dynamically for a column if the ratio of the number of default values to the number of all values is above that threshold. Serialization (default or sparse) is fixed for every column in a part, but may vary between parts. #22535 (Anton Popov).
- Add "TABLE OVERRIDE" feature for customizing MaterializedMySQL table schemas. #32325 (Stig Bakken).
- Add `EXPLAIN TABLE OVERRIDE` query. #32836 (Stig Bakken).
- Support TABLE OVERRIDE clause for MaterializedPostgreSQL. RFC: #31480. #32749 (Kseniia Sumarokova).
- Change ZooKeeper path for zero-copy marks for shared data. Note that "zero-copy replication" is a non-production feature (in early stages of development) that you shouldn't use anyway. But if you have used it, keep this change in mind. #32061 (ianton-ru).
- Events clause support for WINDOW VIEW watch query. #32607 (vxider).
- Fix ACL with explicit digit hash in `clickhouse-keeper`: now the behavior is consistent with ZooKeeper and generated digest is always accepted. #33249 (小路). #33246.
- Fix unexpected projection removal when detaching parts. #32067 (Amos Bird).
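A minimal sketch of enabling the experimental parallel reading from replicas described above; the table name is hypothetical.

```sql
-- Experimental: read a single shard from several replicas in parallel.
SET allow_experimental_parallel_reading_from_replicas = 1, max_parallel_replicas = 3;
SELECT count() FROM distributed_hits WHERE event_date >= today() - 7;
```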
Improvement
- Now date time conversion functions that generate time before `1970-01-01 00:00:00` are saturated to zero instead of overflowing. #29953 (Amos Bird). It also fixes a bug in index analysis if a date truncation function would yield a result before the Unix epoch.
- Always display resource usage (total CPU usage, total RAM usage and max RAM usage per host) in client. #33271 (alexey-milovidov).
- Improve `Bool` type serialization and deserialization, check the range of values. #32984 (Kruglov Pavel).
- If an invalid setting is defined using the `SET` query or using the query parameters in the HTTP request, the error message will contain suggestions that are similar to the invalid setting string (if any exist). #32946 (Antonio Andelic).
- Support hints for mistyped setting names for clickhouse-client and clickhouse-local. Closes #32237. #32841 (凌涛).
- Allow to use virtual columns in Materialized Views. Close #11210. #33482 (OnePiece).
- Add config to disable IPv6 in clickhouse-keeper if needed. This closes #33381. #33450 (Wu Xueyang).
- Add more info to `system.build_options` about current git revision. #33431 (taiyang-li).
- `clickhouse-local`: track memory under `--max_memory_usage_in_client` option. #33341 (Azat Khuzhin).
- Allow negative intervals in function `intervalLengthSum`. Their length will be added as well. This closes #33323. #33335 (alexey-milovidov).
- `LineAsString` can be used as output format. This closes #30919. #33331 (Sergei Trifonov).
- Support `<secure/>` in cluster configuration, as an alternative form of `<secure>1</secure>`. Close #33270. #33330 (SuperDJY).
- Pressing Ctrl+C twice will terminate `clickhouse-benchmark` immediately without waiting for in-flight queries. This closes #32586. #33303 (alexey-milovidov).
- Support Unix timestamp with milliseconds in `parseDateTimeBestEffort` function. #33276 (Ben).
- Allow to cancel query while reading data from external table in the formats `Arrow` / `Parquet` / `ORC` - previously it failed to be cancelled in case of big files with the setting `input_format_allow_seeks` set to false. Closes #29678. #33238 (Kseniia Sumarokova).
- If table engine supports `SETTINGS` clause, allow to pass the settings as key-value or via config. Add this support for MySQL. #33231 (Kseniia Sumarokova).
- Correctly prevent Nullable primary keys if necessary. This is for #32780. #33218 (Amos Bird).
- Add retry for `PostgreSQL` connections in case nothing has been fetched yet. Closes #33199. #33209 (Kseniia Sumarokova).
- Validate config keys for external dictionaries. #33095. #33130 (Kseniia Sumarokova).
- Send profile info inside `clickhouse-local`. Closes #33093. #33097 (Kseniia Sumarokova).
- Short circuit evaluation: support for function `throwIf`. Closes #32969. #32973 (Maksim Kita).
- (This only happens in unofficial builds). Fixed segfault when inserting data into compressed Decimal, String, FixedString and Array columns. This closes #32939. #32940 (N. Kolotov).
- Added support for specifying subquery as SQL user defined function. Example: `CREATE FUNCTION test AS () -> (SELECT 1)`; see the sketch after this list. Closes #30755. #32758 (Maksim Kita).
- Improve gRPC compression support for #28671. #32747 (Vitaly Baranov).
- Flush all In-Memory data parts when WAL is not enabled while shutting down the server or detaching a table. #32742 (nauta).
- Allow to control connection timeouts for MySQL (previously it was supported only for dictionary source). Closes #16669. Previously the default `connect_timeout` was rather small, now it is configurable. #32734 (Kseniia Sumarokova).
- Support `authSource` option for storage `MongoDB`. Closes #32594. #32702 (Kseniia Sumarokova).
- Support `Date32` type in `generateRandom` table function. #32643 (nauta).
- Add settings `max_concurrent_select_queries` and `max_concurrent_insert_queries` to control concurrent queries by query kind. Close #3575. #32609 (SuperDJY).
- Improve handling nested structures with missing columns while reading data in `Protobuf` format. Follow-up to https://github.com/ClickHouse/ClickHouse/pull/31988. #32531 (Vitaly Baranov).
- Allow empty credentials for `MongoDB` engine. Closes #26267. #32460 (Kseniia Sumarokova).
- Disable some optimizations for window functions that may lead to exceptions. Closes #31535. Closes #31620. #32453 (Kseniia Sumarokova).
- Allow to connect to MongoDB 5.0. Closes #31483. #32416 (Kseniia Sumarokova).
- Enable comparison between `Decimal` and `Float`. Closes #22626. #31966 (flynn).
- Added settings `command_read_timeout`, `command_write_timeout` for `StorageExecutable`, `StorageExecutablePool`, `ExecutableDictionary`, `ExecutablePoolDictionary`, `ExecutableUserDefinedFunctions`. Setting `command_read_timeout` controls timeout for reading data from command stdout in milliseconds. Setting `command_write_timeout` controls timeout for writing data to command stdin in milliseconds. Added setting `command_termination_timeout` for `ExecutableUserDefinedFunction`, `ExecutableDictionary`, `StorageExecutable`. Added setting `execute_direct` for `ExecutableUserDefinedFunction`, by default true. Added setting `execute_direct` for `ExecutableDictionary`, `ExecutablePoolDictionary`, by default false. #30957 (Maksim Kita).
- Bitmap aggregate functions will give correct result for out of range argument instead of wraparound. #33127 (DR).
- Fix parsing incorrect queries with `FROM INFILE` statement. #33521 (Kruglov Pavel).
- Don't allow to write into `S3` if path contains globs. #33142 (Kruglov Pavel).
- `--echo` option was not used by `clickhouse-client` in batch mode with single query. #32843 (N. Kolotov).
- Use `--database` option for clickhouse-local. #32797 (Kseniia Sumarokova).
- Fix surprisingly bad code in SQL ordinary function `file`. Now it supports symlinks. #32640 (alexey-milovidov).
- Updating `modification_time` for data part in `system.parts` after part movement #32964. #32965 (save-my-heart).
- Potential issue, cannot be exploited: integer overflow may happen in array resize. #33024 (varadarajkumar).
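A minimal sketch of the subquery-based SQL user defined function mentioned above, reusing the example from that entry.

```sql
-- SQL UDF whose body is a scalar subquery.
CREATE FUNCTION test AS () -> (SELECT 1);
SELECT test();   -- returns 1
DROP FUNCTION test;
```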
Build/Testing/Packaging Improvement
- Add packages, functional tests and Docker builds for AArch64 (ARM) version of ClickHouse. #32911 (Mikhail f. Shiryaev). #32415
- Prepare ClickHouse to be built with musl-libc. It is not enabled by default. #33134 (alexey-milovidov).
- Make installation script working on FreeBSD. This closes #33384. #33418 (alexey-milovidov).
- Add `actionlint` for GitHub Actions workflows and verify workflow files via `act --list` to check the correct workflow syntax. #33612 (Mikhail f. Shiryaev).
- Add more tests for the nullable primary key feature. Add more tests with different types and merge tree kinds, plus randomly generated data. #33228 (Amos Bird).
- Add a simple tool to visualize flaky tests in web browser. #33185 (alexey-milovidov).
- Enable hermetic build for shared builds. This is mainly for developers. #32968 (Amos Bird).
- Update `libc++` and `libc++abi` to the latest. #32484 (Raúl Marín).
- Added integration test for external .NET client (ClickHouse.Client). #23230 (Oleg V. Kozlyuk).
- Inject git information into clickhouse binary file. So we can get source code revision easily from clickhouse binary file. #33124 (taiyang-li).
- Remove obsolete code from ConfigProcessor. Yandex specific code is not used anymore. The code contained one minor defect. This defect was reported by Mallik Hassan in #33032. This closes #33032. #33026 (alexey-milovidov).
Bug Fix (user-visible misbehavior in official stable or prestable release)
- Several fixes for format parsing. This is relevant if `clickhouse-server` is open for write access to an adversary. Specifically crafted input data for `Native` format may lead to reading uninitialized memory or crash. #33050 (Heena Bansal). Fixed Apache Avro Union type index out of boundary issue in Apache Avro binary format. #33022 (Harry Lee). Fix null pointer dereference in `LowCardinality` data when deserializing `LowCardinality` data in the Native format. #33021 (Harry Lee).
- ClickHouse Keeper handler will correctly remove operation when response sent. #32988 (JackyWoo).
- Potential off-by-one miscalculation of quotas: quota limit was not reached, but the limit was exceeded. This fixes #31174. #31656 (sunny).
- Fixed CASTing from String to IPv4 or IPv6 and back. Fixed error message in case of failed conversion. #29224 (Dmitry Novik) #27914 (Vasily Nemkov).
- Fixed an exception like `Unknown aggregate function nothing` during an execution on a remote server. This fixes #16689. #26074 (hexiaoting).
- Fix wrong database for JOIN without explicit database in distributed queries (Fixes: #10471). #33611 (Azat Khuzhin).
- Fix segfault in Apache `Avro` format that appears after the second insert into file. #33566 (Kruglov Pavel).
- Fix segfault in Apache `Arrow` format if schema contains `Dictionary` type. Closes #33507. #33529 (Kruglov Pavel).
- Out of band `offset` and `limit` settings may be applied incorrectly for views. Close #33289 #33518 (hexiaoting).
- Fix an exception `Block structure mismatch` which may happen during insertion into table with default nested `LowCardinality` column. Fixes #33028. #33504 (Nikolai Kochetov).
- Fix dictionary expressions for `range_hashed` range min and range max attributes when created using DDL. Closes #30809. #33478 (Maksim Kita).
- Fix possible use-after-free for INSERT into Materialized View with concurrent DROP (Azat Khuzhin).
- Do not try to read past EOF (to work around a bug in the Linux kernel), this bug can be reproduced on kernels (3.14..5.9), and requires `index_granularity_bytes=0` (i.e. turn off adaptive index granularity). #33372 (Azat Khuzhin).
- The commands `SYSTEM SUSPEND` and `SYSTEM ... THREAD FUZZER` missed access control. It is fixed. Author: Kevin Michel. #33333 (alexey-milovidov).
- Fix when `COMMENT` for dictionaries does not appear in `system.tables`, `system.dictionaries`. Allow to modify the comment for `Dictionary` engine. Closes #33251. #33261 (Maksim Kita).
- Add asynchronous inserts (with enabled setting `async_insert`) to query log. Previously such queries didn't appear in the query log. #33239 (Anton Popov).
- Fix sending `WHERE 1 = 0` expressions for external databases query. Closes #33152. #33214 (Kseniia Sumarokova).
- Fix DDL validation for MaterializedPostgreSQL. Fix setting `materialized_postgresql_allow_automatic_update`. Closes #29535. #33200 (Kseniia Sumarokova). Make sure unused replication slots are always removed. Found in #26952. #33187 (Kseniia Sumarokova). Fix MaterializedPostgreSQL detach/attach (removing / adding to replication) tables with non-default schema. Found in #29535. #33179 (Kseniia Sumarokova). Fix DROP MaterializedPostgreSQL database. #33468 (Kseniia Sumarokova).
- The metric `StorageBufferBytes` sometimes was miscalculated. #33159 (xuyatian).
- Fix error `Invalid version for SerializationLowCardinality key column` in case of reading from `LowCardinality` column with `local_filesystem_read_prefetch` or `remote_filesystem_read_prefetch` enabled. #33046 (Nikolai Kochetov).
- Fix `s3` table function reading empty file. Closes #33008. #33037 (Kseniia Sumarokova).
- Fix Context leak in case of cancel_http_readonly_queries_on_client_close (i.e. leaking of external tables that had been uploaded to the server and other resources). #32982 (Azat Khuzhin).
- Fix wrong tuple output in `CSV` format in case of custom csv delimiter. #32981 (Kruglov Pavel).
- Fix HDFS URL check that didn't allow using HA namenode address. Bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/31042. #32976 (Kruglov Pavel).
- Fix throwing exception like positional argument out of bounds for non-positional arguments. Closes #31173#event-5789668239. #32961 (Kseniia Sumarokova).
- Fix UB in case of unexpected EOF during filling a set from HTTP query (i.e. if the client was interrupted in the middle, e.g. `timeout 0.15s curl -Ss -F 's=@t.csv;' 'http://127.0.0.1:8123/?s_structure=key+Int&query=SELECT+dummy+IN+s'` with a large enough `t.csv`). #32955 (Azat Khuzhin).
- Fix a regression in `replaceRegexpAll` function. The function worked incorrectly when matched substring was empty. This closes #32777. This closes #30245. #32945 (alexey-milovidov).
- Fix `ORC` format stripe reading. #32929 (kreuzerkrieg).
- `topKWeightedState` failed for some input types. #32487. #32914 (vdimir).
- Fix exception `Single chunk is expected from view inner query (LOGICAL_ERROR)` in materialized view. Fixes #31419. #32862 (Nikolai Kochetov).
- Fix optimization with lazy seek for async reads from remote filesystems. Closes #32803. #32835 (Kseniia Sumarokova).
- `MergeTree` table engine might silently skip some mutations if there are too many running mutations or in case of high memory consumption, it's fixed. Fixes #17882. #32814 (tavplubix).
- Avoid reusing the scalar subquery cache when processing MV blocks. This fixes a bug when the scalar query references the source table but it means that all subscalar queries in the MV definition will be calculated for each block. #32811 (Raúl Marín).
- Server might fail to start if database with `MySQL` engine cannot connect to MySQL server, it's fixed. Fixes #14441. #32802 (tavplubix).
- Fix crash when using the `fuzzBits` function, close #32737. #32755 (SuperDJY).
- Fix error `Column is not under aggregate function` in case of MV with `GROUP BY (list of columns)` (which is parsed as `GROUP BY tuple(...)`) over `Kafka` / `RabbitMQ`. Fixes #32668 and #32744. #32751 (Nikolai Kochetov).
- Fix `ALTER TABLE ... MATERIALIZE TTL` query with `TTL ... DELETE WHERE ...` and `TTL ... GROUP BY ...` modes. #32695 (Anton Popov).
- Fix `optimize_read_in_order` optimization in case when table engine is `Distributed` or `Merge` and its underlying `MergeTree` tables have monotonous function in prefix of sorting key. #32670 (Anton Popov).
- Fix LOGICAL_ERROR exception when the target of a materialized view is a JOIN or a SET table. #32669 (Raúl Marín).
- Inserting into S3 with multipart upload to Google Cloud Storage may trigger abort. #32504. #32649 (vdimir).
- Fix possible exception at `RabbitMQ` storage startup by delaying channel creation. #32584 (Kseniia Sumarokova).
- Fix table lifetime (i.e. possible use-after-free) in case of parallel DROP TABLE and INSERT. #32572 (Azat Khuzhin).
- Fix async inserts with formats `CustomSeparated`, `Template`, `Regexp`, `MsgPack` and `JSONAsString`. Previously the async inserts with these formats didn't read any data. #32530 (Kruglov Pavel).
- Fix `groupBitmapAnd` function on distributed table. #32529 (minhthucdao).
- Fix crash in JOIN found by fuzzer, close #32458. #32508 (vdimir).
- Proper handling of the case with Apache Arrow column duplication. #32507 (Dmitriy Mokhnatkin).
- Fix issue with ambiguous query formatting in distributed queries that led to errors when some table columns were named `ALL` or `DISTINCT`. This closes #32391. #32490 (alexey-milovidov).
- Fix failures in queries that are trying to use skipping indices, which are not materialized yet. Fixes #32292 and #30343. #32359 (Anton Popov).
- Fix broken SELECT query when there are more than 2 row policies on the same column, beginning with the second query in the same session. #31606. #32291 (SuperDJY).
- Fix fractional unix timestamp conversion to `DateTime64`, fractional part was reversed for negative unix timestamps (before 1970-01-01). #32240 (Ben).
- Some entries of replication queue might hang for `temporary_directories_lifetime` (1 day by default) with `Directory tmp_merge_<part_name>` or `Part ... (state Deleting) already exists, but it will be deleted soon` or similar error. It's fixed. Fixes #29616. #32201 (tavplubix).
- Fix parsing of `APPLY lambda` column transformer which could lead to client/server crash. #32138 (Kruglov Pavel).
- Fix `base64Encode` adding trailing bytes on small strings. #31797 (Kevin Michel).
- Fix possible crash (or incorrect result) in case of `LowCardinality` arguments of window function. Fixes #31114. #31888 (Nikolai Kochetov).
- Fix hang up with command `DROP TABLE system.query_log sync`. #33293 (zhanghuajie).