---
sidebar_position: 1
sidebar_label: 2024
---

### 2024 Changelog

### ClickHouse release v24.10.1.2812-stable (9cd0a3738d) FIXME as compared to v24.10.1.1-new (b12a367741)

#### Backward Incompatible Change

  • Allow writing SETTINGS before FORMAT in a chain of queries with UNION when subqueries are inside parentheses. This closes #39712. Change the behavior when a query has the SETTINGS clause specified twice in a sequence: the closest SETTINGS clause now takes preference for the corresponding subquery, whereas in previous versions the outermost SETTINGS clause could take preference over the inner one (see the sketch after this list). #68614 (Alexey Milovidov).
  • Reordering of filter conditions from the [PRE]WHERE clause is now allowed by default. It can be disabled by setting allow_reorder_prewhere_conditions to false. #70657 (Nikita Taranov).
  • Fix optimize_functions_to_subcolumns optimization (previously could lead to Invalid column type for ColumnUnique::insertRangeFrom. Expected String, got LowCardinality(String) error), by preserving LowCardinality type in mapKeys/mapValues. #70716 (Azat Khuzhin).
  • Remove the idxd-config library, which has an incompatible license. This also removes the experimental Intel DeflateQPL codec. #70987 (Alexey Milovidov).
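
A minimal sketch of the new SETTINGS placement and precedence described above; the setting and values are illustrative, not taken from the entry:

```sql
-- SETTINGS may now precede FORMAT in a parenthesized UNION chain,
-- and the SETTINGS clause closest to a subquery wins for that subquery.
(SELECT 1 SETTINGS max_threads = 1)
UNION ALL
(SELECT 2)
SETTINGS max_threads = 4
FORMAT TSV
```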

#### New Feature

  • MongoDB integration refactored: migration to the new mongocxx driver from the deprecated Poco::MongoDB, removal of support for the deprecated old protocol, support for connection by URI, support for all MongoDB types, support for WHERE and ORDER BY statements on the MongoDB side, and restriction of expressions unsupported by MongoDB. #63279 (Kirill Nikiforov).
  • A new --progress-table option in clickhouse-client prints a table with metrics changing during query execution; a new --enable-progress-table-toggle is associated with the --progress-table option, and toggles the rendering of the progress table by pressing the control key (Space). #63689 (Maria Khristenko).
  • Allow granting access to wildcard prefixes, e.g. GRANT SELECT ON db.table_prefix_* TO user. #65311 (pufit).
  • Add system.query_metric_log, which contains a history of memory and metric values from the system.events table for individual queries, periodically flushed to disk. #66532 (Pablo Marcos).
  • A simple SELECT query can be written with implicit SELECT to enable calculator-style expressions, e.g., ch "1 + 2". This is controlled by a new setting, implicit_select. #68502 (Alexey Milovidov).
  • Support --copy mode for clickhouse local as a shortcut for format conversion #68503. #68583 (Denis Hananein).
  • Add support for the arrayUnion function (see the sketch after this list). #68989 (Peter Nguyen).
  • Support the aggregate function quantileExactWeightedInterpolated, an interpolated version based on quantileExactWeighted. One may wonder why a new quantileExactWeightedInterpolated is needed when quantileExactInterpolatedWeighted already exists: the new one is more accurate. It is also needed for Spark compatibility in Apache Gluten. #69619 (李扬).
  • Support the function arrayElementOrNull. It returns NULL if the array index is out of range or the map key is not found (see the sketch after this list). #69646 (李扬).
  • Allow users to specify regular expressions through the new message_regexp and message_regexp_negative fields in the config.xml file to filter out logging. The filtering is applied to the formatted, un-colored text for the most intuitive developer experience. #69657 (Peter Nguyen).
  • Support Dynamic type in most functions by executing them on internal types inside Dynamic. #69691 (Pavel Kruglov).
  • Re-added RIPEMD160 function, which computes the RIPEMD-160 cryptographic hash of a string. Example: SELECT HEX(RIPEMD160('The quick brown fox jumps over the lazy dog')) returns 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B. #70087 (Dergousov Maxim).
  • Allow to cache read files for object storage table engines and data lakes using hash from ETag + file path as cache key. #70135 (Kseniia Sumarokova).
  • Support reading Iceberg tables on HDFS. #70268 (flynn).
  • Allow to read/write JSON type as binary string in RowBinary format under settings input_format_binary_read_json_as_string/output_format_binary_write_json_as_string. #70288 (Pavel Kruglov).
  • Allow to serialize/deserialize JSON column as single String column in Native format. For output use setting output_format_native_write_json_as_string. For input, use serialization version 1 before the column data. #70312 (Pavel Kruglov).
  • Support the standard CTE syntax WITH ... INSERT, as previously only INSERT ... WITH ... was supported (see the sketch after this list). #70593 (Shichao Jin).
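
A brief sketch of the two new array functions mentioned above; the exact result ordering of arrayUnion is an assumption, since the entry does not specify it:

```sql
-- arrayUnion: elements of both arrays with duplicates removed
-- (element order in the result is not guaranteed).
SELECT arrayUnion([1, 2], [2, 3]);            -- e.g. [1, 2, 3], in some order

-- arrayElementOrNull: like arrayElement, but returns NULL instead of a
-- default value when the index is out of range or the map key is missing.
SELECT arrayElementOrNull([1, 2, 3], 5);      -- NULL
SELECT arrayElementOrNull(map('a', 1), 'b');  -- NULL
```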
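A minimal sketch of the standard CTE placement for INSERT, assuming a hypothetical target table t with a single UInt64 column n:

```sql
-- Now supported: WITH before INSERT.
WITH src AS (SELECT number AS n FROM numbers(10))
INSERT INTO t SELECT n FROM src;

-- Previously, only this form was accepted:
INSERT INTO t WITH src AS (SELECT number AS n FROM numbers(10)) SELECT n FROM src;
```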

#### Performance Improvement

  • Support the minmax index for pointInPolygon (see the sketch after this list). #62085 (JackyWoo).
  • Add support for Parquet bloom filters. #62966 (Arthur Passos).
  • Lock-free renaming of parts, to avoid INSERTs affecting SELECTs (due to the parts lock). Under normal circumstances with fsync_part_directory, the QPS of SELECT with INSERT in parallel increased 2x; under heavy load the effect is even bigger. Note that this only covers ReplicatedMergeTree for now. #64955 (Azat Khuzhin).
  • Respect ttl_only_drop_parts on MATERIALIZE TTL; only read the necessary columns to recalculate TTL, and drop parts by replacing them with an empty one. #65488 (Andrey Zvonov).
  • Refactor IDisk and IObjectStorage for better performance. Tables from plain and plain_rewritable object storages will initialize faster. #68146 (Alexey Milovidov).
  • Optimized thread creation in the ThreadPool to minimize lock contention. Thread creation is now performed outside of the critical section to avoid delays in job scheduling and thread management under high load conditions. This leads to a much more responsive ClickHouse under heavy concurrent load. #68694 (filimonov).
  • Enable reading LowCardinality string columns from ORC. #69481 (李扬).
  • Added an ability to parse data directly into sparse columns. #69828 (Anton Popov).
  • Support parallel reading of Parquet row groups and prefetching of row groups in single-threaded mode. #69862 (LiuNeng).
  • Improved performance of parsing formats with a high number of missing values (e.g. JSONEachRow). #69875 (Anton Popov).
  • Use LowCardinality for ProfileEvents in system logs such as part_log, query_views_log, filesystem_cache_log. #70152 (Alexey Milovidov).
  • Improve performance of FromUnixTimestamp/ToUnixTimestamp functions. #71042 (kevinyhzou).
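
A sketch of how a minmax skip index could be combined with pointInPolygon; the table, index definition, and polygon are hypothetical illustrations, not taken from the entry:

```sql
-- A minmax skip index over the coordinate columns lets the optimizer
-- prune granules whose x/y ranges fall outside the polygon's bounding box.
CREATE TABLE points
(
    x Float64,
    y Float64,
    INDEX xy_minmax (x, y) TYPE minmax GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY (x, y);

SELECT count()
FROM points
WHERE pointInPolygon((x, y), [(0., 0.), (10., 0.), (10., 10.), (0., 10.)]);
```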

#### Improvement

  • Allow parametrised SQL aliases. #50665 (Anton Kozlov).
  • Fixed #57616. The problem occurred because all positive number arguments were automatically identified as UInt64, leading to an inability to match Int-typed data in sumMapFiltered. The non-matching was indeed confusing, as the UInt64 parameters were not specified by the user. Additionally, if the arguments were [1, 2, 3, toInt8(-3)], then due to getLeastSupertype() these parameters would be uniformly treated as Int, causing 1, 2, 3 to also fail to match UInt-typed data in sumMapFiltered. #58408 (Chen768959).
  • ALTER TABLE .. REPLACE PARTITION doesn't wait anymore for mutations/merges that happen in other partitions. #59138 (Vasily Nemkov).
  • Refreshable materialized views are now supported in Replicated databases. #60669 (Michael Kolupaev).
  • Symbolic links for tables in the data/database_name/ directory are created for the actual paths to the table's data, depending on the storage policy, instead of the store/... directory on the default disk. #61777 (Kirill).
  • Apply configuration updates in global context object. It fixes issues like #62308. #62944 (Amos Bird).
  • Reworked the settings that control the behavior of parallel replicas algorithms. A quick recap: ClickHouse has four different algorithms for parallel reading involving multiple replicas, reflected in the setting parallel_replicas_mode; its default value is read_tasks. Additionally, the toggle-switch setting enable_parallel_replicas has been added. #63151 (Alexey Milovidov).
  • Fix ReadSettings not using user-set values; previously only the defaults were used. #65625 (Kseniia Sumarokova).
  • While parsing an Enum field from JSON, a string containing an integer will be interpreted as the corresponding Enum element. This closes #65119. #66801 (scanhex12).
  • Allow TRIM-ing a LEADING or TRAILING empty string as a no-op. Closes #67792. #68455 (Peter Nguyen).
  • Support creating a table with the query CREATE TABLE ... CLONE AS .... It clones the source table's schema and then attaches all partitions to the newly created table. This feature is only supported for tables of the MergeTree family (see the sketch after this list). Closes #65015. #69091 (tuanpach).
  • In Gluten ClickHouse, Spark's timestamp type is mapped to ClickHouse's datetime64(6) type. When casting timestamp '2012-01-01 00:11:22' as a string, Spark returns '2012-01-01 00:11:22', while Gluten ClickHouse returns '2012-01-01 00:11:22.000000'. #69179 (Wenzheng Liu).
  • Always use the new analyzer to calculate constant expressions when enable_analyzer is set to true. Support calculation of executable() table function arguments without using SELECT query for constant expression. #69292 (Dmitry Novik).
  • Add enable_secure_identifiers to disallow insecure identifiers. #69411 (tuanpach).
  • Add show_create_query_identifier_quoting_rule to define the identifier quoting behavior of the SHOW CREATE query result. Possible values: user_display: quote when the identifier is a keyword; when_necessary: quote when the identifier is one of {"distinct", "all", "table"} or could cause ambiguity (column names, dictionary attribute names); always: always quote identifiers. #69448 (tuanpach).
  • Follow-up to https://github.com/ClickHouse/ClickHouse/pull/69346: point 4 described there now works as well. #69563 (Vitaly Baranov).
  • Implement generic SerDe between Avro Union and ClickHouse Variant type. Resolves #69713. #69712 (Jiří Kozlovský).
  • 1. CREATE TABLE AS now copies PRIMARY KEY, ORDER BY, and similar clauses; this is currently supported only for the MergeTree family of table engines. 2. Statements that previously triggered an exception now work: if the destination table does not provide an ORDER BY or PRIMARY KEY expression in the table definition, it is copied from the source table. #69739 (sakulali).
  • Added user-level settings min_free_disk_bytes_to_throw_insert and min_free_disk_ratio_to_throw_insert to prevent insertions on disks that are almost full. #69755 (Marco Vilas Boas).
  • If you run clickhouse-client or another CLI application against an overloaded server that starts up slowly and begin typing your query, such as SELECT, previous versions would display the remainder of the terminal echo contents before printing the greeting message, e.g. SELECTClickHouse local version 24.10.1.1. instead of ClickHouse local version 24.10.1.1.. Now this is fixed. This closes #31696. #69856 (Alexey Milovidov).
  • Add new column readonly_duration to the system.replicas table. Needed to be able to distinguish actual readonly replicas from sentinel ones in alerts. #69871 (Miсhael Stetsyuk).
  • Change the type of the join-to-sort settings to unsigned int. #69886 (kevinyhzou).
  • Support 64-bit XID in Keeper. It can be enabled with use_xid_64 config. #69908 (Antonio Andelic).
  • New function getSettingOrDefault() added to return a default value and avoid an exception if a custom setting is not found in the current profile (see the sketch after this list). #69917 (Shankar).
  • Allow an empty needle in the function replace, matching PostgreSQL behavior (see the sketch after this list). #69918 (zhanglistar).
  • Enhance OpenTelemetry span logging to include query settings. #70011 (sharathks118).
  • Allow an empty needle in the replaceRegexp* functions, similar to https://github.com/ClickHouse/ClickHouse/pull/69918. #70053 (zhanglistar).
  • Add info to higher-order array functions if lambda result type is unexpected. #70093 (ttanay).
  • Keeper improvement: less blocking during cluster changes. #70275 (Antonio Andelic).
  • Embedded documentation for settings will be strictly more detailed and complete than the documentation on the website. This is the first step before making the website documentation always auto-generated from the source code. This has long-standing implications: - it will be guaranteed to have every setting; - there is no chance of having default values obsolete; - we can generate this documentation for each ClickHouse version; - the documentation can be displayed by the server itself even without Internet access. Generate the docs on the website from the source code. #70289 (Alexey Milovidov).
  • Add WITH IMPLICIT and FINAL keywords to the SHOW GRANTS command. Fix a minor bug with implicit grants: #70094. #70293 (pufit).
  • Don't disable nonblocking read from page cache for the entire server when reading from a blocking I/O. #70299 (Antonio Andelic).
  • Respect compatibility for MergeTree settings. The compatibility value is taken from the default profile on server startup, and default MergeTree settings are changed accordingly. Further changes of the compatibility setting do not affect MergeTree settings. #70322 (Nikolai Kochetov).
  • clickhouse-client real-time metrics follow-up: restore the cursor when Ctrl-C cancels a query; immediately stop intercepting keystrokes when the query is canceled; display the metrics table when --progress-table is on and toggling is disabled. #70423 (Julia Kartseva).
  • Command-line arguments for Bool settings are set to true when no value is provided for the argument (e.g. clickhouse-client --optimize_aggregation_in_order --query "SELECT 1"). #70459 (davidtsuk).
  • Avoid spamming the logs with large HTTP response bodies in case of errors during inter-server communication. #70487 (Vladimir Cherkasov).
  • Added a new setting max_parts_to_move to control the maximum number of parts that can be moved at once. #70520 (Vladimir Cherkasov).
  • Limit the frequency of certain log messages. #70601 (Alexey Milovidov).
  • Don't do validation when synchronizing user_directories from keeper. #70644 (Raúl Marín).
  • Introduced a special (experimental) mode of a merge selector for MergeTree tables which makes it more aggressive for the partitions that are close to the limit by the number of parts. It is controlled by the merge_selector_use_blurry_base MergeTree-level setting. #70645 (Nikita Mikhaylov).
  • CHECK TABLE with PART qualifier was incorrectly formatted in the client. #70660 (Alexey Milovidov).
  • Support writing the column index and offset index using the native Parquet writer. #70669 (LiuNeng).
  • Support parsing DateTime64 with microseconds and timezone in Joda syntax. #70737 (kevinyhzou).
  • Changed the approach used to determine whether a cloud storage supports batch delete. #70786 (Vitaly Baranov).
  • Support for Parquet page V2 in the native reader. #70807 (Arthur Passos).
  • Add an HTML page for visualizing merges. #70821 (Alexey Milovidov).
  • Backported in #71234: Do not call the object storage API when listing directories, as this may be cost-inefficient. Instead, store the list of filenames in memory. The trade-offs are increased initial load time and the memory required to store the filenames. #70823 (Julia Kartseva).
  • Added a check for whether a table has both storage_policy and disk set after an ALTER query, and a check for whether a new storage policy is compatible with the old one when the disk setting is used. #70839 (Kirill).
  • Add system.s3_queue_settings and system.azure_queue_settings. #70841 (Kseniia Sumarokova).
  • Functions base58Encode and base58Decode now accept arguments of type FixedString. Example: SELECT base58Encode(toFixedString('plaintext', 9));. #70846 (Faizan Patel).
  • Add the partition column to every entry type of the part log. Previously, it was set only for some entries. This closes #70819. #70848 (Alexey Milovidov).
  • Add merge start and mutate start events into system.part_log which helps with merges analysis and visualization. #70850 (Alexey Milovidov).
  • Do not call the LIST object storage API when determining if a file or directory exists on the plain rewritable disk, as it can be cost-inefficient. #70852 (Julia Kartseva).
  • Add a profile event about the number of merged source parts. It allows the monitoring of the fanout of the merge tree in production. #70908 (Alexey Milovidov).
  • Reduce the number of object storage HEAD API requests in the plain_rewritable disk. #70915 (Julia Kartseva).
  • Background downloads to the filesystem cache were enabled again. #70929 (Nikita Taranov).
  • Add a new merge selector algorithm, named Trivial, for professional usage only. It is worse than the Simple merge selector. #70969 (Alexey Milovidov).
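
A minimal sketch of CREATE TABLE ... CLONE AS; the table names are hypothetical, and the source must be a MergeTree-family table:

```sql
-- Clone the schema of `events` and attach all of its partitions
-- to the newly created `events_copy`.
CREATE TABLE events_copy CLONE AS events;
```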
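A short sketch of getSettingOrDefault(), assuming the custom_ prefix is allowed via custom_settings_prefixes in the server configuration:

```sql
SELECT getSettingOrDefault('custom_a', 100);  -- 100: the setting is not defined yet
SET custom_a = 5;
SELECT getSettingOrDefault('custom_a', 100);  -- 5: the setting's value wins
```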
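A sketch of the empty-needle behavior of replace; the shown result assumes it matches PostgreSQL, where replacing an empty substring is a no-op:

```sql
SELECT replace('abc', '', 'x');  -- 'abc' (unchanged, as in PostgreSQL)
```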

#### Bug Fix (user-visible misbehavior in an official stable release)

  • Fix toHour-like conversion functions' monotonicity when optional time zone argument is passed. #60264 (Amos Bird).
  • Relax supportsPrewhere check for StorageMerge. This fixes #61064. It was hardened unnecessarily in #60082. #61091 (Amos Bird).
  • Fix use_concurrency_control setting handling for proper concurrent_threads_soft_limit_num limit enforcing. This enables concurrency control by default because previously it was broken. #61473 (Sergei Trifonov).
  • Fix incorrect JOIN ON section optimization in case of IS NULL check under any other function (like NOT) that may lead to wrong results. Closes #67915. #68049 (Vladimir Cherkasov).
  • Prevent ALTER queries that would make the CREATE query of tables invalid. #68574 (János Benjamin Antal).
  • Fix inconsistent AST formatting for negate (-) and NOT functions with tuples and arrays. #68600 (Vladimir Cherkasov).
  • Fix insertion of incomplete type into Dynamic during deserialization. It could lead to Parameter out of bound errors. #69291 (Pavel Kruglov).
  • Fix an infinite loop after RESTORE REPLICA in the replicated merge tree with zero-copy replication. #69293 (MikhailBurdukov).
  • Restore the default value of processing_threads_num to the number of CPU cores in the S3Queue storage. #69384 (Kseniia Sumarokova).
  • Bypass the try/catch flow when de/serializing nested repeated Protobuf to nested columns (fixes #41971). #69556 (Eliot Hautefeuille).
  • Fix crash during insertion into a FixedString column in the PostgreSQL engine. #69584 (Pavel Kruglov).
  • Fix crash when executing create view t as (with recursive 42 as ttt select ttt);. #69676 (Han Fei).
  • Added strict_once mode to the aggregate function windowFunnel to avoid counting one event several times when it matches multiple conditions. Closes #21835. #69738 (Vladimir Cherkasov).
  • Fixed maxMapState throwing 'Bad get' if value type is DateTime64. #69787 (Michael Kolupaev).
  • Fix getSubcolumn with LowCardinality columns by overriding useDefaultImplementationForLowCardinalityColumns to return true. #69831 (Miсhael Stetsyuk).
  • Fix permanent blocked distributed sends if DROP of distributed table fails. #69843 (Azat Khuzhin).
  • Fix non-cancellable queries containing WITH FILL with NaN keys. This closes #69261. #69845 (Alexey Milovidov).
  • Fix analyzer default with old compatibility value. #69895 (Raúl Marín).
  • Don't check dependencies during CREATE OR REPLACE VIEW while dropping the old table. Previously, a CREATE OR REPLACE query failed when there were tables dependent on the recreated view. #69907 (Pavel Kruglov).
  • Implement missing decimal cases for zeroField. Fixes #69730. #69978 (Arthur Passos).
  • Now SQL security will work with parameterized views correctly. #69984 (pufit).
  • Closes #69752. #69985 (pufit).
  • Fixed a bug where the timezone could change the result of a query with Date or Date32 arguments. #70036 (Yarik Briukhovetskyi).
  • Fixes Block structure mismatch for queries with nested views and WHERE condition. Fixes #66209. #70054 (Nikolai Kochetov).
  • Avoid reusing columns among different named tuples when evaluating tuple functions. This fixes #70022. #70103 (Amos Bird).
  • Fix wrong LOGICAL_ERROR when replacing literals in ranges. #70122 (Pablo Marcos).
  • Check for Nullable(Nothing) type during ALTER TABLE MODIFY COLUMN/QUERY to prevent tables with such data type. #70123 (Pavel Kruglov).
  • Proper error message for illegal query JOIN ... ON * , close #68650. #70124 (Vladimir Cherkasov).
  • Fix wrong result with skipping index. #70127 (Raúl Marín).
  • Fix data race in ColumnObject/ColumnTuple decompress method that could lead to heap use after free. #70137 (Pavel Kruglov).
  • Fix possible hung in ALTER COLUMN with Dynamic type. #70144 (Pavel Kruglov).
  • Now ClickHouse will consider more errors as retriable and will not mark data parts as broken in case of such errors. #70145 (alesapin).
  • Use correct max_types parameter during Dynamic type creation for JSON subcolumn. #70147 (Pavel Kruglov).
  • Fix the password being displayed in system.query_log for users with bcrypt password authentication method. #70148 (Nikolay Degterinsky).
  • Fix event counter for native interface (InterfaceNativeSendBytes). #70153 (Yakov Olkhovskiy).
  • Fix possible crash in JSON column. #70172 (Pavel Kruglov).
  • Fix multiple issues with arrayMin and arrayMax. #70207 (Raúl Marín).
  • Respect setting allow_simdjson in JSON type parser. #70218 (Pavel Kruglov).
  • Fix server segfault on creating a materialized view with two selects and an INTERSECT, e.g. CREATE MATERIALIZED VIEW v0 AS (SELECT 1) INTERSECT (SELECT 1);. #70264 (Konstantin Bogdanov).
  • Don't modify global settings with startup scripts. Previously, changing a setting in a startup script would change it globally. #70310 (Antonio Andelic).
  • Fix ALTER of Dynamic type with reducing max_types parameter that could lead to server crash. #70328 (Pavel Kruglov).
  • Fix crash when using WITH FILL incorrectly. #70338 (Raúl Marín).
  • Fix possible use-after-free in SYSTEM DROP FORMAT SCHEMA CACHE FOR Protobuf. #70358 (Azat Khuzhin).
  • Fix crash during GROUP BY JSON sub-object subcolumn. #70374 (Pavel Kruglov).
  • Don't prefetch parts for vertical merges if part has no rows. #70452 (Antonio Andelic).
  • Fix crash in WHERE with lambda functions. #70464 (Raúl Marín).
  • Fix table creation with CREATE ... AS table_function() with database Replicated and unavailable table function source on secondary replica. #70511 (Kseniia Sumarokova).
  • Ignore all output on async insert with wait_for_async_insert=1. Closes #62644. #70530 (Konstantin Bogdanov).
  • Ignore frozen_metadata.txt while traversing shadow directory from system.remote_data_paths. #70590 (Aleksei Filatov).
  • Fix creation of stateful window functions on misaligned memory. #70631 (Raúl Marín).
  • Fixed rare crashes in SELECT-s and merges after adding a column of Array type with non-empty default expression. #70695 (Anton Popov).
  • INSERT into the s3 table function now respects query settings. #70696 (Vladimir Cherkasov).
  • Fix infinite recursion when inferring a Protobuf schema with skipping of unsupported fields enabled. #70697 (Raúl Marín).
  • Backported in #71122: GroupArraySortedData uses a PODArray with non-POD elements, manually calling constructors and destructors for the elements as needed. But it wasn't careful enough: in two places it forgot to call the destructor, and in one place it left elements uninitialized when an exception was thrown while deserializing previous elements. GroupArraySortedData's destructor then called destructors on uninitialized elements and crashed with a segmentation fault in DB::Field::~Field() (observed during a merge in version 24.6.1.4609; the full fatal log and stack trace are in the PR). #70820 (Michael Kolupaev).
  • Disable enable_named_columns_in_function_tuple by default. #70833 (Raúl Marín).
  • Fix the S3Queue table engine setting processing_threads_num not being effective when it was deduced from the number of CPU cores on the server. #70837 (Kseniia Sumarokova).
  • Normalize named tuple arguments in aggregation states. This fixes #69732 . #70853 (Amos Bird).
  • Fix a logical error due to negative zeros in the two-level hash table. This closes #70973. #70979 (Alexey Milovidov).
  • Backported in #71214: Fix logical error in StorageS3Queue "Cannot create a persistent node in /processed since it already exists". #70984 (Kseniia Sumarokova).
  • Backported in #71243: Fixed named sessions not being closed and hanging on forever under certain circumstances. #70998 (Márcio Martins).
  • Backported in #71157: Fix the bug that didn't consider _row_exists column in rebuild option of projection lightweight delete. #71089 (Shichao Jin).
  • Backported in #71265: Fix wrong value in system.query_metric_log due to unexpected race condition. #71124 (Pablo Marcos).
  • Backported in #71331: Fix async inserts with empty blocks via native protocol. #71312 (Anton Popov).

#### Build/Testing/Packaging Improvement

  • Docker in the integration tests runner is updated to the latest version. It was previously pinned until patch release 24.0.3 was out (https://github.com/moby/moby/issues/45770#issuecomment-1618255130). The HDFS image was deprecated and did not run with the current Docker version; switched to a newer version of a derivative image based on Ubuntu. HDFS tests were hardened to allow them to run with python-repeat. #66867 (Ilya Yatsishin).
  • Alpine Docker images now use Ubuntu 22.04 as the glibc donor, resulting in an upgrade of the glibc version delivered with the Alpine images from 2.31 to 2.35. #69033 (filimonov).
  • Make dbms independent from clickhouse_functions. #69914 (Raúl Marín).
  • Fix FreeBSD compilation of the MariaDB connector. #70007 (Raúl Marín).
  • Building on Apple Mac OS X Darwin does not produce strange warnings anymore. #70411 (Alexey Milovidov).
  • Fix building with ARCH_NATIVE CMake flag. #70585 (Daniil Gentili).
  • The universal installer will download the musl build on Alpine Linux. Some Docker containers use Alpine Linux, but it was not possible to install ClickHouse there with curl https://clickhouse.com/ | sh. #70767 (Alexey Milovidov).
