---
sidebar_position: 1
sidebar_label: 2022
---

2022 Changelog

ClickHouse release v22.4.1.2305-prestable FIXME as compared to v22.3.1.1262-prestable

Backward Incompatible Change

  • Function yandexConsistentHash (consistent hashing algorithm by Konstantin "kostik" Oblakov) is renamed to kostikConsistentHash. The old name is left as an alias for compatibility. Although this change is backward compatible, we may remove the alias in subsequent releases, that's why it's recommended to update the usages of this function in your apps. #35553 (Alexey Milovidov).
  • Do not allow SETTINGS after FORMAT for INSERT queries (there is a compatibility setting parser_settings_after_format_compact to accept such queries, but it is turned OFF by default); see the example at the end of this section. #35883 (Azat Khuzhin).
  • Changed hashed path for cache files. #36079 (Kseniia Sumarokova).
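
A minimal sketch of the new rule, using a hypothetical table t and the real setting max_insert_block_size purely as an example:

```sql
-- Hypothetical table, for illustration only.
CREATE TABLE t (x UInt64) ENGINE = MergeTree ORDER BY x;

-- Rejected by default starting with this release (SETTINGS placed after FORMAT):
--   INSERT INTO t FORMAT CSV SETTINGS max_insert_block_size = 65536

-- Accepted: put SETTINGS before FORMAT.
INSERT INTO t SETTINGS max_insert_block_size = 65536 FORMAT CSV
1
2
```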

New Feature

  • Added support for transactions for simple MergeTree tables. This feature is highly experimental and not recommended for production. Part of #22086. #24258 (Alexander Tokmakov).
  • Added load balancing setting for [Zoo]Keeper client. Closes #29617. #30325 (小路).
  • New aggregation function groupSortedArray to obtain an array of first N values. #34055 (palegre-tiny).
  • New functions minSampleSizeContinous and minSampleSizeConversion. #34354 (achimbab).
  • Profiling on the Processors level (under the log_processors_profiles setting, ClickHouse will write the time each processor spent during execution / waiting for data to the system.processors_profile_log table). #34355 (Azat Khuzhin).
  • Add toEndOfMonth function which rounds up a date or date with time to the last day of the month. #33501. #34394 (Habibullah Oladepo).
  • Add h3PointDistM, h3PointDistKm, h3PointDistRads, h3GetRes0Indexes, h3GetPentagonIndexes functions. #34568 (Bharat Nallan).
  • Introduce format ProtobufList. Fixes #16436. #35152 (Nikolai Kochetov).
  • A dedicated small package for clickhouse-keeper. #35308 (Mikhail f. Shiryaev).
  • Added the INTERPOLATE extension to ORDER BY ... WITH FILL (see the example after this list). Closes #34903. #35349 (Yakov Olkhovskiy).
  • Added functions minSampleSizeContinous and minSampleSizeConversion. Author @achimbab. #35360 (Maksim Kita).
  • Added functions arrayFirstOrNull, arrayLastOrNull. Closes #35238. #35414 (Maksim Kita).
  • Allow to write remote fs cache on all write operations. Add system.remote_filesystem_cache table. Add drop remote filesystem cache query. Add introspection for s3 metadata with system.remote_data_paths table. Closes #34021. Add cache option for merges by adding mode read_from_filesystem_cache_if_exists_otherwise_bypass_cache (turned on by default for merges and can also be turned on by query setting with the same name). Rename cache related settings (remote_fs_enable_cache -> enable_filesystem_cache, etc). #35475 (Kseniia Sumarokova).
  • Added functions makeDate(year, month, day), makeDate32(year, month, day). #35628 (Alexander Gololobov).
  • Added function flattenTuple. It receives a nested named Tuple as an argument and returns a flattened Tuple whose elements are the paths from the original Tuple. E.g.: Tuple(a Int, Tuple(b Int, c Int)) -> Tuple(a Int, b Int, c Int). flattenTuple can be used to select all paths from type Object as separate columns. #35690 (Anton Popov).
  • Support new type of quota WRITTEN BYTES to limit amount of written bytes during insert queries. #35736 (Anton Popov).
  • Implementation of makeDateTime() and makeDateTime64(); see the example at the end of this section. #35934 (Alexander Gololobov).
  • Support '\G;' at the end of query for FORMAT Vertical. Closes #36111. #36130 (yuuch).
  • Add a random salt and append it to the password to generate the password hash. #36172 (Rajkumar Varada).
  • Add setting throw_if_no_data_to_insert. Closes #36336. #36345 (flynn).
  • Implement type inference for INSERT INTO function null(). Closes #36334. #36353 (flynn).
  • ... #36436 (Rich Raposa).
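
A sketch of ORDER BY ... WITH FILL with the new INTERPOLATE clause; the column names are invented, and the fill behaviour assumed here (the expression is applied to the previous row's value for generated rows) follows the feature's documentation rather than this entry:

```sql
SELECT n, v
FROM
(
    SELECT number AS n, number * 10 AS v
    FROM numbers(6)
    WHERE number % 3 = 0               -- keep only n = 0 and n = 3
)
ORDER BY n WITH FILL FROM 0 TO 6       -- generate the missing n values
INTERPOLATE (v AS v + 1);              -- fill v in generated rows from the previous v, plus 1
```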
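
A quick sketch of the new date/time construction functions from the entries above; the argument values are arbitrary, and the optional precision/timezone arguments are omitted:

```sql
SELECT
    makeDate(2022, 4, 1)                       AS d,      -- Date
    makeDate32(2022, 4, 1)                     AS d32,    -- Date32
    makeDateTime(2022, 4, 1, 12, 30, 0)        AS dt,     -- DateTime
    makeDateTime64(2022, 4, 1, 12, 30, 0, 123) AS dt64;   -- DateTime64 with a fractional part
```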

Performance Improvement

  • Speed up the parts loading process of MergeTree to accelerate starting up of clickhouse-server. With this improvement, clickhouse-server was able to decrease its startup time from 75 minutes to 20 seconds with 700k MergeTree parts. #32928 (李扬).
  • Sizes of hash tables used during aggregation are now collected and used in later queries to avoid hash table resizes. #33439 (Nikita Taranov).
  • Multiple changes to improve ASOF join performance (1.2 - 1.6x as fast). It also adds support for big integers. #34733 (Raúl Marín).
  • URL storage engine now downloads multiple chunks in parallel if the endpoint supports HTTP Range. Two additional settings were added, max_download_threads and max_download_buffer_size, which control the maximum number of threads a single query can use to download the file and the maximum number of bytes each thread can process; see the example at the end of this section. #35150 (Antonio Andelic).
  • Parallelize multipart upload to S3 storage. #35343 (Sergei Trifonov).
  • Improve performance of ASOF JOIN if key is native integer. #35525 (Maksim Kita).
  • A new query plan optimization. Evaluate functions after ORDER BY when possible. As an example, for a query SELECT sipHash64(number) FROM numbers(1e8) ORDER BY number LIMIT 5, function sipHash64 would be evaluated after ORDER BY and LIMIT, which gives ~20x speed up. #35623 (Nikita Taranov).
  • Narrow the mutex scope when calling setenv for LIBHDFS3_CONF. Related issue: #35292. #35646 (shuchaome).
  • Improve performance of hasAll function using specializations for SSE and AVX2. Author @youennL-cs. #35723 (Maksim Kita).
  • The EXPLAIN statement for a GLOBAL JOIN of two distributed tables can be up to 100x faster, e.g. explain plan select ... from t1_dist global join t2_dist on ... and explain pipeline select ... from t1_dist global join t2_dist on .... #36055 (何李夫).
  • Two optimizations for Hive queries: optimize trivial count queries, and speed up queries by caching metadata of Hive files. #36082 (李扬).
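
A hedged sketch of tuning the new download settings with the url table function; the URL, schema and values are placeholders, and parallel download only applies when the endpoint supports HTTP Range requests:

```sql
SELECT count()
FROM url('https://example.com/data.csv', 'CSV', 'x UInt64, y String')
SETTINGS
    max_download_threads = 4,             -- threads a single query may use for the download
    max_download_buffer_size = 10485760;  -- bytes processed by each thread (10 MiB here)
```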

Improvement

  • ... #21474 (nvartolomei).
  • As discussed in issue #27025, the hasAll function was improved using SIMD instructions (SSE and AVX2). GTest tests have also been added. #27653 (youennL-cs).
  • Proper support of the setting max_rows_to_read in case of reading in order of the sorting key with a specified limit. Previously the exception Limit for rows or bytes to read exceeded could be thrown even if the query actually required reading fewer rows. #33230 (Anton Popov).
  • INTERVAL improvement - can be used with [MILLI|MICRO|NANO]SECOND. Added toStartOf[Milli|Micro|Nano]second() functions. Added [add|subtract][Milli|Micro|Nano]second(). #34353 (Andrey Zvonov).
  • System log tables allow to specify COMMENT in ENGINE declaration. Closes #33768. #34536 (Maksim Kita).
  • Added sanity checks on server startup (available memory and disk space, max thread count, etc.). #34566 (Sergei Trifonov).
  • Use minmax index for orc/parquet file in Hive Engine. Related pr: https://github.com/ClickHouse-Extras/arrow/pull/10. #34631 (李扬).
  • If port is not specified in cluster configuration, default server port will be used. This closes #34769. #34772 (Alexey Milovidov).
  • Added better error messages in case of a failed connection to MySQL. Closes #35128. #35234 (zzsmdfj).
  • Add function getTypeSerializationStreams. For a specified type (which is detected from column), it returns an array with all the serialization substream paths. This function is useful mainly for developers. #35290 (李扬).
  • wchc operation is expensive and should not be in the four_letter_word_white_list defaults. #35320 (zhangyuli1).
  • Added an ability to specify cluster secret in replicated database. #35333 (Nikita Mikhaylov).
  • Add a new kind of row policy named simple. Before this PR we had two kinds of row policies: permissive and restrictive. A simple row policy adds a new filter on a table without any side effects, unlike permissive and restrictive policies. #35345 (Vitaly Baranov).
  • Remove testmode option, enable it unconditionally. #35354 (Kseniia Sumarokova).
  • For the table functions s3Cluster, HDFSCluster, and hive, the correct AccessType could not be obtained by StorageFactory::instance().getSourceAccessType(getStorageTypeName()). This PR fixes it. #35365 (李扬).
  • For LTS releases, packages will be pushed to both the lts and stable repositories. #35382 (Mikhail f. Shiryaev).
  • Support uuid for postgres engines. Closes #35384. #35403 (Kseniia Sumarokova).
  • Add arguments --user, --password, --host, --port for clickhouse-diagnostics. #35422 (李扬).
  • Fix INSERT INTO table FROM INFILE not displaying a progress bar. #35429 (xiedeyantu).
  • Allow server to bind to low-numbered ports (e.g. 443). ClickHouse installation script will set cap_net_bind_service to the binary file. #35451 (Alexey Milovidov).
  • Add settings input_format_orc_case_insensitive_column_matching, input_format_arrow_case_insensitive_column_matching, and input_format_parquet_case_insensitive_column_matching which allow ClickHouse to use case-insensitive matching of columns while reading data from ORC, Arrow or Parquet files. #35459 (Antonio Andelic).
  • Add explicit table info to the scan node of query plan and pipeline. #35460 (何李夫).
  • Propagate query and session settings for distributed DDL queries. Setting distributed_ddl_entry_format_version is set to 2 by default now. #35463 (Alexander Tokmakov).
  • Add sizes of subcolumns to system.parts_columns table. #35488 (Anton Popov).
  • It was possible to get stack overflow in distributed queries if one of the settings async_socket_for_remote and use_hedged_requests is enabled while parsing very deeply nested data type (at least in debug build). Closes #35509. #35524 (Kruglov Pavel).
  • Improve pasting performance and compatibility of clickhouse-client. This helps #35501. #35541 (Amos Bird).
  • Added a support for automatic schema inference to s3Cluster table function. Synced the signatures of s3 and s3Cluster. #35544 (Nikita Mikhaylov).
  • Use multiple threads to download objects from S3. Downloading is controllable using max_download_threads and max_download_buffer_size settings. #35571 (Antonio Andelic).
  • Deduce absolute hdfs config path. #35572 (李扬).
  • Multiple improvements to schema inference. #35582 (Kruglov Pavel):
    • Use some tweaks and heuristics to determine numbers, strings, arrays, tuples and maps in CSV, TSV and TSVRaw data formats. Add setting input_format_csv_use_best_effort_in_schema_inference for the CSV format that enables/disables using these heuristics; if it's disabled, everything is treated as a string. Add a similar setting input_format_tsv_use_best_effort_in_schema_inference for the TSV/TSVRaw formats. These settings are enabled by default.
    • Add Maps support for schema inference in the Values format.
    • Fix possible segfault in schema inference in the Values format.
    • Allow to skip columns with unsupported types in Arrow/ORC/Parquet formats. Add corresponding settings for it: input_format_{parquet|orc|arrow}_skip_columns_with_unsupported_types_in_schema_inference. These settings are disabled by default.
    • Allow to convert a column with type Null to a Nullable column with all NULL values in Arrow/Parquet formats.
    • Allow to specify column names in schema inference via setting column_names_for_schema_inference for formats that don't contain column names (like CSV, TSV, JSONCompactEachRow, etc.).
    • Fix schema inference in ORC/Arrow/Parquet formats in terms of working with Nullable columns. Previously all inferred types were not Nullable, which blocked reading Nullable columns from data; now all inferred types are always Nullable (because we cannot tell from the schema whether a column is Nullable or not).
    • Fix schema inference in the Template format with CSV escaping rules.
  • Add parallel parsing and schema inference for format JSONAsObject. #35592 (Anton Popov).
  • Added support for schema inference for hdfsCluster. #35602 (Nikita Mikhaylov).
  • Support schema inference for type Object in format JSONEachRow. Allow to convert columns of type Map to columns of type Object. #35629 (Anton Popov).
  • Add profile event counter AsyncInsertBytes about size of async INSERTs. #35644 (Alexey Milovidov).
  • Added is_secure column to system.query_log which denotes if the client is using a secure connection over TCP or HTTP. #35705 (Antonio Andelic).
  • Allow EPHEMERAL columns without an explicit default expression (see the example at the end of this section). Closes #35641. #35706 (Yakov Olkhovskiy).
  • Fix send_logs_level for clickhouse local. Closes #35653. #35716 (Kseniia Sumarokova).
  • Improve column ordering in schema inference for the TSKV and JSONEachRow formats, closes #35640. Don't stop schema inference when reading an empty row in these formats. #35724 (Kruglov Pavel).
  • Add new setting input_format_json_read_bools_as_numbers that allows inferring and parsing bools as numbers in JSON input formats. It's enabled by default. Suggested by @alexey-milovidov. #35735 (Kruglov Pavel).
  • Respect remote_url_allow_hosts for hive. #35743 (李扬).
  • Support schema inference for INSERT SELECT queries using the input table function. Get the schema from the insertion table instead of inferring it from the data in the case of INSERT SELECT from table functions that support schema inference. Closes #35639. #35760 (Kruglov Pavel).
  • Improve projection analysis to optimize trivial queries such as count(). #35788 (Amos Bird).
  • Support ALTER TABLE t DETACH PARTITION (ALL). #35794 (awakeljw).
  • Added an animation to the hourglass icon to indicate to the user that a query is running. #35860 (peledni).
  • Now some ALTER MODIFY COLUMN queries for Arrays and Nullable types can be done at metadata level without mutations. For example, alter from Array(Enum8('Option1'=1)) to Array(Enum8('Option1'=1, 'Option2'=2)). #35882 (alesapin).
  • Now it's not allowed to ALTER TABLE ... RESET SETTING for non-existing settings for MergeTree engines family. Fixes #35816. #35884 (alesapin).
  • Improve settings configuration for s3 storage / table function. #35915 (Kseniia Sumarokova).
  • Add some basic metrics to monitor engine=Kafka tables. #35916 (filimonov).
  • Now kafka_num_consumers can be bigger than the number of physical cores on low-resource machines (fewer than 16 cores). #35926 (alesapin).
  • Update unixodbc to mitigate CVE-2018-7485. #35943 (Mikhail f. Shiryaev).
  • Require mutations for per-table TTL only when it has been changed. #35953 (Azat Khuzhin).
  • Add dns_max_consecutive_failures setting to stop re-resolving cached DNS entries after a number of consecutive failures (5 by default). #35956 (Raúl Marín).
  • ASTPartition::formatImpl should output ALL while executing ALTER TABLE t DETACH PARTITION ALL. #35987 (awakeljw).
  • clickhouse-keeper starts answering 4-letter commands before getting the quorum. #35992 (Antonio Andelic).
  • Fix wrong assertion in replxx which happens when navigating back the history when the first line of input is a newline. Mark as improvement because it only affects debug build. This fixes #34511. #36007 (Amos Bird).
  • If someone writes DEFAULT NULL in a table definition, make the data type Nullable. #35887. #36058 (xiedeyantu).
  • Added thread_id and query_id columns to system.zookeeper_log table. #36074 (Alexander Tokmakov).
  • Automatically assign numbers to Enum elements (see the example at the end of this section). #36101 (awakeljw).
  • Reset thread name in ThreadPool to ThreadPoolIdle after job is done. This is to avoid displaying the old thread name for idle threads. This closes #36114. #36115 (Alexey Milovidov).
  • Support UNSIGNED modifier with unused parameters of INT. #36126 (awakeljw).
  • Add support for atomic exchange in OSX. #36133 (Raúl Marín).
  • Update the progress bar after receiving every ProfileEvents packet. This change must fix the showing of outdated profiling data in client. #36202 (Dmitry Novik).
  • Check ORC/Parquet/Arrow format magic bytes before loading file in memory to prevent high memory usage in case of wrong file format. #36209 (Kruglov Pavel).
  • Allow queries of the form insert into function file(...) select ... for files with formats that don't support schema inference. For example: insert into function file(data.json) select 42 - such a query didn't work previously. #36211 (Kruglov Pavel).
  • Send both stdin data and data from query/data from infile in the client. Previously the client ignored stdin data when both sources were present. Closes #36100. #36254 (Kruglov Pavel).
  • Allow missing columns for mongo storage. Closes #36119. Closes #26490. #36272 (Kseniia Sumarokova).
  • Input format parsers can synchronize after wrong value of Bool or Map data types (see the input_format_allow_errors_* settings). #36333 (Alexey Milovidov).
  • Check for harmful environment variables like LD_PRELOAD at startup. It makes sense in Google Colab. This closes #36340. #36342 (Alexey Milovidov).
  • Fix possible range issues in automatically assigned enums, and also fix the error message. Fixes #36307, #35891. #36352 (awakeljw).
  • Support Int128/Int256/UInt128/UInt256 in the hex function. #36386 (Memo).
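
A sketch of an EPHEMERAL column declared without an explicit default expression, as now allowed; the table and column names are made up:

```sql
CREATE TABLE events
(
    raw    String EPHEMERAL,             -- no default expression required anymore
    parsed String DEFAULT upper(raw)     -- computed from the ephemeral input column
)
ENGINE = MergeTree ORDER BY tuple();

-- The ephemeral column is supplied explicitly on INSERT but is not stored.
INSERT INTO events (raw) VALUES ('hello');
```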
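
A sketch of the automatic Enum numbering, assuming the feature lets you omit explicit element values; the table is hypothetical:

```sql
CREATE TABLE checks
(
    status Enum('ok', 'warn', 'fail')    -- values 1, 2, 3 assigned automatically
)
ENGINE = MergeTree ORDER BY tuple();
```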

Bug Fix

  • Add type checking when creating a materialized view. Tries to close #23684. #24896 (hexiaoting).
  • Avoid erasing columns from a block if they don't exist while reading data from Hive. #35393 (lgbo).
  • Added settings input_format_ipv4_default_on_conversion_error, input_format_ipv6_default_on_conversion_error to allow inserting invalid IP address values as defaults into tables (see the sketch after this list). Closes #35726. #35733 (Maksim Kita).
  • In FileSegmentsHolder::~FileSegmentsHolder(), when a segment is set to detach, it asserts that its state is empty. However, in FileSegment::completeImpl(), when detach is set to true, its state may be PARTIALLY_DOWNLOADED_NO_CONTINUATION, SKIP_CACHE or PARTIALLY_DOWNLOADED, thus causing an error in FileSegmentsHolder::~FileSegmentsHolder(): if (file_segment->detached) { /// This file segment is not owned by cache, so it will be destructed /// at this point, therefore no completion required. assert(file_segment->state() == FileSegment::State::EMPTY); file_segment_it = file_segments.erase(current_file_segment_it); continue; }. #36452 (Han Shukai).
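
A hedged sketch of the new IP conversion settings from the entry above; the table and data are made up, and the assumption is that an unparsable address falls back to the type's default value (0.0.0.0 for IPv4) instead of failing the INSERT:

```sql
CREATE TABLE ip_log (addr IPv4) ENGINE = MergeTree ORDER BY addr;

SET input_format_ipv4_default_on_conversion_error = 1;

-- With the setting enabled, the bad value below should be stored as the default
-- address rather than aborting the whole INSERT.
INSERT INTO ip_log FORMAT CSV
"not-an-ip"
```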

Build/Testing/Packaging Improvement

Bug Fix (prestable release)

  • Call RemoteQueryExecutor with original_query instead of a rewritten query, eliminating the AMBIGUOUS_COLUMN_NAME exception. #35748 (lgbo).

Bug Fix (user-visible misbehaviour in official stable or prestable release)

  • Disallow ALTER TTL for engines that do not support it, to avoid breaking ATTACH TABLE (closes #33344). #33391 (zhongyuankai).
  • Do not delay final part writing by default (fixes possible Memory limit exceeded during INSERT by adding max_insert_delayed_streams_for_parallel_write with a default of 1000 for writes to S3, and disabled, as before, otherwise). #34780 (Azat Khuzhin).
  • Fix issue: input_format_null_as_default does not work for DEFAULT expressions. Closes #34890. #35039 (zzsmdfj).
  • Fix mutations in tables with enabled sparse columns. #35284 (Anton Popov).
  • Fix schema inference for TSKV format while using small max_read_buffer_size. #35332 (Kruglov Pavel).
  • Fix partition pruning in case of comparison with constant in WHERE. If column and constant had different types, overflow was possible. Query could return an incorrect empty result. This fixes #35304. #35334 (Amos Bird).
  • Fix issue with non-existing directory https://github.com/ClickHouse/ClickHouse/runs/5588046879?check_suite_focus=true. #35376 (Mikhail f. Shiryaev).
  • Fix possible deadlock in cache. #35378 (Kseniia Sumarokova).
  • Fix wrong assets path in release workflow. #35379 (Mikhail f. Shiryaev).
  • Cache fixes for high concurrency on corner cases. #35381 (Kseniia Sumarokova).
  • Fix working with columns that are not needed in query in Arrow/Parquet/ORC formats, it prevents possible errors like Unsupported <format> type <type> of an input column <column_name> when file contains column with unsupported type and we don't use it in query. #35406 (Kruglov Pavel).
  • Skip empty chunks in GroupingAggregatedTransform. #35417 (Nikita Taranov).
  • Now merges executed with zero copy replication will not spam logs with message Found parts with the same min block and with the same max block as the missing part _ on replica _. Hoping that it will eventually appear as a result of a merge.. #35430 (alesapin).
  • Fix excessive logging when using S3 as backend for MergeTree or as separate table engine/function. Fixes #30559. #35434 (alesapin).
  • Fix wrong result of datetime64 when negative. Close #34831. #35440 (李扬).
  • Fix bug in function if when resulting column type differs with resulting data type that led to logical errors like Logical error: 'Bad cast from type DB::ColumnVector<int> to DB::ColumnVector<long>'.. Closes #35367. #35476 (Kruglov Pavel).
  • Fix bug in Keeper which can lead to unstable client connections. Introduced in #35031. #35498 (alesapin).
  • Fix crash for function throwIf with constant arguments. #35500 (Maksim Kita).
  • Fix crash during short circuit function evaluation when one of arguments is nullable constant. Closes #35497. Closes #35496. #35502 (Maksim Kita).
  • Fix cast into IPv4, IPv6 address in IN section. Fixes #35528. #35534 (Maksim Kita).
  • Fix parsing of IPv6 addresses longer than 39 characters. Closes #34022. #35539 (Maksim Kita).
  • Fixed return type deduction for caseWithExpression. The type of the ELSE branch is now correctly taken into account. #35576 (Antonio Andelic).
  • Fix s3 engine getting virtual columns. Closes #35411. #35586 (Kseniia Sumarokova).
  • Fix version string setting in version_helper.py. #35589 (Mikhail f. Shiryaev).
  • Fix headers with named collections, add compression_method. Closes #35273. Closes #35269. #35593 (Kseniia Sumarokova).
  • Setting database_atomic_wait_for_drop_and_detach_synchronously worked incorrectly for the ATTACH TABLE query when a previously detached table was still in use. It's fixed. #35594 (Alexander Tokmakov).
  • Fix possible segfault in materialised postgresql which happened if exception occurred when data, collected in memory, was synced into underlying tables. Closes #35611. #35614 (Kseniia Sumarokova).
  • Fix HashJoin when columns with LowCardinality type are used. This closes #35548. #35616 (Antonio Andelic).
  • Check remote_url_allow_hosts before schema inference in URL engine. Closes #35064. #35619 (Kruglov Pavel).
  • Fix positional arguments with aliases. Closes #35600. #35620 (Kseniia Sumarokova).
  • Fix projection analysis which might lead to wrong query result when IN subquery is used. This fixes #35336. #35631 (Amos Bird).
  • Fix usage of quota with asynchronous inserts. #35645 (Anton Popov).
  • Fix server crash when a large number of arguments is passed into the format function. Please refer to the test file to see how to reproduce the crash. #35651 (Amos Bird).
  • Fix part checking logic for parts with projections. Error happened when projection and main part had different types. This is similar to https://github.com/ClickHouse/ClickHouse/pull/33774 . The bug is addressed by @caoyang10. #35667 (Amos Bird).
  • Fix the ASOF JOIN key nullability check, close #35565. #35674 (Vladimir C).
  • Fix possible loss of subcolumns in type Object. #35682 (Anton Popov).
  • Enable build with JIT compilation by default. #35683 (Maksim Kita).
  • Fix possible Can't adjust last granule exception while reading subcolumns of type Object. #35687 (Anton Popov).
  • Fix bug in creating materialized view with subquery after server restart. Materialized view was not getting updated after inserts into underlying table after server restart. Closes #35511. #35691 (Kruglov Pavel).
  • Fix dropping non-empty database in clickhouse local. Closes #35692. #35711 (Kseniia Sumarokova).
  • Fix any/all(subquery) implementation. Closes #35489. #35727 (Kseniia Sumarokova).
  • Fix bug in conversion from custom types to string that could lead to segfault or unexpected error messages. Closes #35752. #35755 (Kruglov Pavel).
  • Now metadata for broken parts will be removed from metadata cache (introduced in #32928) on server start. #35759 (chen9t).
  • Fix file buffer pos in RemoteReadBuffer. When RemoteReadBuffer is consumed, its pos will increase, for example in HadoopSnappyReadBuffer::nextImpl. #35771 (shuchaome).
  • Fixes parsing of the arguments of the function extract. Fixes #35751. #35799 (Nikolay Degterinsky).
  • Fix bug in indexes of columns that are not present in -WithNames formats that led to the error INCORRECT_NUMBER_OF_COLUMNS when the number of columns is more than 256. Closes #35793. #35803 (Kruglov Pavel).
  • Fix inserts to columns of type Object in case when there is data related to several partitions in insert query. #35806 (Anton Popov).
  • Respect only quota & period from groups, ignore shares (which do not really limit the number of cores that can be used). #35815 (filimonov).
  • Avoid processing per-column TTL multiple times. #35820 (Azat Khuzhin).
  • Fix issue #34966. #35840 (zzsmdfj).
  • Disable session_log because memory safety issue has been found by fuzzing. See #35714. #35873 (Alexey Milovidov).
  • Fix formatting of INSERT INFILE queries (missing quotes). #35886 (Azat Khuzhin).
  • Fixed GA not reporting events. #35935 (peledni).
  • Fix reading from Kafka tables when kafka_num_consumers > 1 and kafka_thread_per_consumer = 0. Returns parallel & multithreaded reading, accidentally broken in 21.11. Closes #35153. #35973 (filimonov).
  • Fix performance regression of scalar query optimization. #35986 (Amos Bird).
  • Fix error while moving table with JOIN engine from Ordinary database to Atomic, close #35686. #35995 (Vladimir C).
  • Fix error Empty list of columns in SELECT query in CROSS JOIN close #35672. #36033 (Vladimir C).
  • Fix possible incorrect result of WINDOW functions in queries with LIMIT which was caused by wrong limit-push-down query plan optimization. Fixes #36071 and #23125. #36075 (Nikolai Kochetov).
  • Throw an exception when ClickHouse cannot execute a file instead of displaying success and silently failing. #36088 (Julian Gilyadov).
  • Fix window view when using processing time and the window kind is larger than a day; see the code comment. #36109 (flynn).
  • Fix bug in the read buffer from HDFS: ReadBufferFromHDFSImpl::offset was misused as the offset of working_buffer, but it is the file offset. cc @kssenii. #36153 (李扬).
  • Fix crash in ParallelReadBuffer. #36169 (Kruglov Pavel).
  • Allow to convert empty strings to empty values of type Objects. #36179 (Anton Popov).
  • Fix possible segfault in schema inference for JSON formats. #36195 (Kruglov Pavel).
  • CREATE TABLE ... AS might fail with Replica ... already exists even if ReplicatedMergeTree table was created with default arguments. It's fixed. Now {uuid} macro is not unfolded when saving table metadata. Therefore, it's not allowed to move ReplicatedMergeTree table from Atomic to Ordinary database if zookeeper_path contains {uuid} macro (or table was created with default engine arguments). Fixes #35577. #36200 (Alexander Tokmakov).
  • Fix reading of empty arrays in reverse order (in queries with descending sorting by prefix of primary key). #36215 (Anton Popov).
  • Play UI was not able to display some resultsets, for example SELECT * FROM dish. #36283 (Alexey Milovidov).
  • Fix crash in ReadBufferFromHDFS in debug mode. #36287 (zhanglistar).
  • Fix "Cannot find column" error for distributed queries with LIMIT BY. #36454 (Azat Khuzhin).

NO CL ENTRY

NOT FOR CHANGELOG / INSIGNIFICANT