---
sidebar_position: 1
sidebar_label: 2024
---

# 2024 Changelog

### ClickHouse release v24.8.6.70-lts (ddb8c21977) as compared to v24.8.5.115-lts (8c4cb00a38)

#### Backward Incompatible Change

  • Backported in #71359: Fix a possible No such file or directory error caused by unescaped special symbols in the file names used for JSON subcolumns (see the illustrative sketch after this item). #71182 (Pavel Kruglov).
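
A minimal sketch of the situation described above, assuming the experimental JSON type; the table name and the path containing special characters are hypothetical, and the exact characters that triggered the unescaped file names are detailed in #71182:

```sql
-- Hypothetical illustration: a JSON path whose name contains characters that
-- must be escaped when mapped to on-disk file names for subcolumns.
SET allow_experimental_json_type = 1;

CREATE TABLE json_paths (data JSON) ENGINE = MergeTree ORDER BY tuple();

INSERT INTO json_paths VALUES ('{"a b/c": 1}');

-- Reading the subcolumn could previously fail with "No such file or directory".
SELECT data.`a b/c` FROM json_paths;
```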

#### Improvement

  • Backported in #70680: Don't do validation when synchronizing user_directories from keeper. #70644 (Raúl Marín).
  • Backported in #71395: Do not call the object storage API when listing directories, as this may be cost-inefficient. Instead, the list of file names is kept in memory. The trade-offs are an increased initial load time and the memory required to store the file names. #70823 (Julia Kartseva).
  • Backported in #71287: Reduce the number of object storage HEAD API requests in the plain_rewritable disk. #70915 (Julia Kartseva).

#### Bug Fix (user-visible misbehavior in an official stable release)

  • Backported in #70934: Fix incorrect optimization of the JOIN ON section when an IS NULL check appears under another function (such as NOT), which could lead to wrong results (see the illustrative query after this list). Closes #67915. #68049 (Vladimir Cherkasov).
  • Backported in #70735: Fix an unexpected exception when passing an empty tuple inside an array (see the example after this list). This fixes #68618. #68848 (Amos Bird).
  • Backported in #71138: Fix propagating the structure argument in s3Cluster. Previously, the DEFAULT expression of a column could be lost when the query was sent to the replicas in s3Cluster. #69147 (Pavel Kruglov).
  • Backported in #70561: Fix getSubcolumn with LowCardinality columns by overriding useDefaultImplementationForLowCardinalityColumns to return true. #69831 (Miсhael Stetsyuk).
  • Backported in #70903: Avoid reusing columns among different named tuples when evaluating tuple functions. This fixes #70022. #70103 (Amos Bird).
  • Backported in #70623: Fix server segfault on creating a materialized view with two selects and an INTERSECT, e.g. CREATE MATERIALIZED VIEW v0 AS (SELECT 1) INTERSECT (SELECT 1);. #70264 (Konstantin Bogdanov).
  • Backported in #70688: Fix possible use-after-free in SYSTEM DROP FORMAT SCHEMA CACHE FOR Protobuf. #70358 (Azat Khuzhin).
  • Backported in #70494: Fix crash during GROUP BY JSON sub-object subcolumn. #70374 (Pavel Kruglov).
  • Backported in #70482: Don't prefetch parts for vertical merges if part has no rows. #70452 (Antonio Andelic).
  • Backported in #70556: Fix crash in WHERE with lambda functions. #70464 (Raúl Marín).
  • Backported in #70878: Fix table creation with CREATE ... AS table_function() with database Replicated and unavailable table function source on secondary replica. #70511 (Kseniia Sumarokova).
  • Backported in #70575: Ignore all output on async insert with wait_for_async_insert=1 (see the example after this list). Closes #62644. #70530 (Konstantin Bogdanov).
  • Backported in #71052: Ignore frozen_metadata.txt while traversing shadow directory from system.remote_data_paths. #70590 (Aleksei Filatov).
  • Backported in #70651: Fix creation of stateful window functions on misaligned memory. #70631 (Raúl Marín).
  • Backported in #70757: Fixed rare crashes in SELECT-s and merges after adding a column of Array type with non-empty default expression. #70695 (Anton Popov).
  • Backported in #70763: Fix infinite recursion when inferring a proto schema with skipping of unsupported fields enabled. #70697 (Raúl Marín).
  • Backported in #71118: GroupArraySortedData uses a PODArray with non-POD elements and manually calls constructors and destructors for the elements as needed, but it wasn't careful enough: in two places it forgot to call the destructor, and in one place it left elements uninitialized when an exception was thrown while deserializing previous elements. GroupArraySortedData's destructor then called destructors on the uninitialized elements and crashed with a segmentation fault (observed as a SIGSEGV in DB::Field::~Field under DB::SerializationAggregateFunction::deserializeBinaryBulk while a background merge was reading an AggregateFunction column; the full stack trace and changed settings are recorded in the pull request description). #70820 (Michael Kolupaev).
  • Backported in #70896: Disable enable_named_columns_in_function_tuple by default. #70833 (Raúl Marín).
  • Backported in #70994: Fix a logical error due to negative zeros in the two-level hash table. This closes #70973. #70979 (Alexey Milovidov).
  • Backported in #71210: Fix logical error in StorageS3Queue "Cannot create a persistent node in /processed since it already exists". #70984 (Kseniia Sumarokova).
  • Backported in #71248: Fixed named sessions not being closed and hanging on forever under certain circumstances. #70998 (Márcio Martins).
  • Backported in #71375: Add try/catch to data parts destructors to avoid terminate. #71364 (alesapin).
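
For the JOIN ON fix in #68049 above, an illustrative query of the general shape involved; the tables and the exact condition are hypothetical, and whether such mixed conditions are accepted also depends on the analyzer and join settings in use:

```sql
-- Hypothetical tables; the point is the IS NULL check wrapped in another
-- function (here NOT) inside the JOIN ON section, which could previously
-- be optimized incorrectly and produce wrong results.
SELECT t1.id, t2.id
FROM t1
JOIN t2
    ON t1.id = t2.id OR NOT (t1.id IS NULL);
```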
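A minimal example of the construct from #68848 above, assuming that tuple() with no arguments produces an empty tuple; before the fix, wrapping it in an array could throw an unexpected exception:

```sql
-- An array whose single element is the empty tuple.
SELECT [tuple()];
```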
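The asynchronous-insert fix from #70530 above concerns queries of roughly this form; the table is hypothetical, while async_insert and wait_for_async_insert are the settings named in the entry:

```sql
-- Hypothetical table; the insert is queued asynchronously and the client
-- waits until the buffered data is actually flushed to the table.
INSERT INTO async_target
SETTINGS async_insert = 1, wait_for_async_insert = 1
VALUES (1);
```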

#### NOT FOR CHANGELOG / INSIGNIFICANT