* save all merge params to zookeeper
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* calculate hash for graphite merge params
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* add graphite params hash to zookeeper + fix tests
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* install new graphite for testing
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* fix backward incompatibility
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* minor fix test
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* Update src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp
Co-authored-by: Alexander Tokmakov <tavplubix@gmail.com>
* remove peekString and add more comments
- peekString doesn't always work even for ReadBufferFromString
- more comment re. backward compatibility
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
---------
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
Co-authored-by: Alexander Tokmakov <tavplubix@gmail.com>
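For illustration, a minimal sketch of the hashing idea behind these commits (the serialization and the hash function are assumptions for illustration; the actual code in ReplicatedMergeTreeTableMetadata.cpp differs):

```cpp
#include <functional>
#include <string>

// Minimal sketch, not the actual ClickHouse code: store a hash of the
// serialized Graphite merge parameters in ZooKeeper so replicas can cheaply
// verify that their merge params match. std::hash stands in for whatever
// hash ClickHouse really uses (an assumption for illustration).
std::string graphiteParamsHash(const std::string & serialized_graphite_params)
{
    return std::to_string(std::hash<std::string>{}(serialized_graphite_params));
}
```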
Randomly modify structural characters of valid JSON ('{', '}', '[', ']',
':', '"', ',') to generate output that cannot be parsed as JSON.
Follow-up to https://github.com/ClickHouse/ClickHouse/pull/56490
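A minimal sketch of the mutation described above (hypothetical helper, not the actual fuzzer code):

```cpp
#include <cstddef>
#include <random>
#include <string>

// Minimal sketch, not the actual ClickHouse fuzzer: pick random positions in
// a valid JSON string and overwrite structural characters, so the result is
// very likely unparseable.
std::string breakJSON(std::string json, size_t num_mutations, std::mt19937 & rng)
{
    if (json.empty())
        return json;

    const std::string structural = "{}[]:\",";
    std::uniform_int_distribution<size_t> pos_dist(0, json.size() - 1);

    for (size_t i = 0; i < num_mutations; ++i)
    {
        size_t pos = pos_dist(rng);
        if (structural.find(json[pos]) != std::string::npos)
            json[pos] = '#';  /// any non-structural byte breaks the grammar
    }
    return json;
}
```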
It is not safe to use statistics because of how KafkaEngine works: it
pre-creates consumers, which leads to a situation where statistics
entries are generated (RD_KAFKA_OP_STATS) but never consumed.
This creates a live memory leak on a server with Kafka tables but no
materialized view attached to them (and no SELECTs).
Another problem is that this makes shutdown very slow, because of how
pending queue entries are handled in librdkafka: it uses
TAILQ_INSERT_SORTED, a sorted insert into a linked list, which is
incredibly slow (most likely you will never wait until it finishes and
will kill the server instead).
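As a generic illustration of why such sorted inserts degrade (this is not librdkafka's TAILQ_INSERT_SORTED itself):

```cpp
#include <list>

// Generic illustration, not librdkafka code: each sorted insert scans the
// list from the head to find its position, so inserting n entries costs
// O(n^2) comparisons overall -- with ~2 million queued entries this is
// effectively never finishing.
void insertSorted(std::list<long> & queue, long value)
{
    auto it = queue.begin();
    while (it != queue.end() && *it < value)  /// linear scan on every insert
        ++it;
    queue.insert(it, value);
}
```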
For instance, in my production setup the server had been running for ~67
days with such a table, and it accumulated 1'942'233 `TAILQ_INSERT_SORTED`
entries (which matches perfectly, by the way, assuming one entry every 3
seconds: `67*86400/3` = 1'929'600), yet it moved only 289'806 entries in a
few hours, though I'm not sure how much of that time the process spent in
the running state, since most of the time it had a debugger attached.
So for now let's disable it, to make this patch easy to backport, and I
will think about a long-term fix: do not pre-create consumers in the
Kafka engine.
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
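For reference, a minimal sketch of how statistics stay disabled via librdkafka's public API (the general mechanism, not the actual ClickHouse patch):

```cpp
#include <librdkafka/rdkafka.h>
#include <cstdio>

// Minimal sketch, not the actual ClickHouse patch: with statistics.interval.ms
// set to 0 (librdkafka's default) no RD_KAFKA_OP_STATS entries are produced at
// all, so nothing can pile up in an unpolled consumer's queue.
int main()
{
    char errstr[512];
    rd_kafka_conf_t * conf = rd_kafka_conf_new();

    if (rd_kafka_conf_set(conf, "statistics.interval.ms", "0",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)
    {
        std::fprintf(stderr, "%s\n", errstr);
        rd_kafka_conf_destroy(conf);
        return 1;
    }

    rd_kafka_conf_destroy(conf);
    return 0;
}
```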
First of all, the problem is that even a simple 'SELECT 1' cannot be run
without system.one, which makes --no-system-tables almost useless:
$ ./clickhouse-debug local --no-system-tables -q "select 1"
Code: 81. DB::Exception: Database system does not exist. (UNKNOWN_DATABASE)
Secondly, there are just too many flags, and this one
(--no-system-tables) is too damn specific.
This patch should improve the startup time of clickhouse-local almost
3x in debug builds.
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
This setting, when enabled (it is disabled by default), allows ClickHouse
to silently skip unavailable shards of a Distributed table during query
execution, instead of throwing an exception to the client.
Before the rewrite of system.stack_trace to handle max_block_size (in #54946),
parallel reading from system.stack_trace was prohibited, because it
could lead to a hang of the system.stack_trace table.
But that rewrite broke this guarantee, so let's restore it to avoid
possible hangs.
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
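A minimal sketch of the restored guarantee (hypothetical helper, not the actual ClickHouse code):

```cpp
#include <algorithm>
#include <cstddef>

// Minimal sketch, not the actual ClickHouse code: a source that is unsafe to
// read in parallel clamps the number of reading streams to one, whatever the
// query planner requested.
size_t effectiveNumStreams(size_t requested, bool parallel_reading_safe)
{
    return parallel_reading_safe ? requested : std::min<size_t>(requested, 1);
}
```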
When restoring from current_batch.txt, it is possible that some file
from the batch no longer exists, and the fix submitted in #49884
was not complete, since processing would fail later in markAsSend()
(because it tries to obtain the file size there):
2023.12.04 05:43:12.676658 [ 5006 ] {} <Error> dist.DirectoryMonitor.work4: std::exception. Code: 1001, type: std::__1::__fs::filesystem::filesystem_error, e.what() = filesystem error: in file_size: No such file or directory ["/work4/clickhouse/data/dist/shard8_all_replicas//150426396.bin"], Stack trace (when copying this message, always include the lines below):
0. ./.build/./contrib/llvm-project/libcxx/include/exception:134: std::runtime_error::runtime_error(String const&) @ 0x00000000177e83f4 in /usr/lib/debug/usr/bin/clickhouse.debug
1. ./.build/./contrib/llvm-project/libcxx/include/string:1499: std::system_error::system_error(std::error_code, String const&) @ 0x00000000177f0fd5 in /usr/lib/debug/usr/bin/clickhouse.debug
2. ./.build/./contrib/llvm-project/libcxx/include/__filesystem/filesystem_error.h:42: std::__fs::filesystem::filesystem_error::filesystem_error[abi:v15000](String const&, std::__fs::filesystem::path const&, std::error_code) @ 0x000000000b844ca1 in /usr/lib/debug/usr/bin/clickhouse.debug
3. ./.build/./contrib/llvm-project/libcxx/include/__filesystem/filesystem_error.h:90: void std::__fs::filesystem::__throw_filesystem_error[abi:v15000]<String&, std::__fs::filesystem::path const&, std::error_code const&>(String&, std::__fs::filesystem::path const&, std::error_code const&) @ 0x000000001778f953 in /usr/lib/debug/usr/bin/clickhouse.debug
4. ./.build/./contrib/llvm-project/libcxx/src/filesystem/filesystem_common.h:0: std::__fs::filesystem::detail::(anonymous namespace)::ErrorHandler<unsigned long>::report(std::error_code const&) const @ 0x0000000017793ef7 in /usr/lib/debug/usr/bin/clickhouse.debug
5. ./.build/./contrib/llvm-project/libcxx/src/filesystem/operations.cpp:0: std::__fs::filesystem::__file_size(std::__fs::filesystem::path const&, std::error_code*) @ 0x0000000017793e26 in /usr/lib/debug/usr/bin/clickhouse.debug
6. ./.build/./src/Storages/Distributed/DistributedAsyncInsertDirectoryQueue.cpp:707: DB::DistributedAsyncInsertDirectoryQueue::markAsSend(String const&) @ 0x0000000011cd92c5 in /usr/lib/debug/usr/bin/clickhouse.debug
7. ./.build/./contrib/llvm-project/libcxx/include/__iterator/wrap_iter.h:100: DB::DistributedAsyncInsertBatch::send() @ 0x0000000011cdd81c in /usr/lib/debug/usr/bin/clickhouse.debug
8. ./.build/./src/Storages/Distributed/DistributedAsyncInsertDirectoryQueue.cpp:0: DB::DistributedAsyncInsertDirectoryQueue::processFilesWithBatching() @ 0x0000000011cd5054 in /usr/lib/debug/usr/bin/clickhouse.debug
9. ./.build/./src/Storages/Distributed/DistributedAsyncInsertDirectoryQueue.cpp:417: DB::DistributedAsyncInsertDirectoryQueue::processFiles() @ 0x0000000011cd3440 in /usr/lib/debug/usr/bin/clickhouse.debug
10. ./.build/./src/Storages/Distributed/DistributedAsyncInsertDirectoryQueue.cpp:0: DB::DistributedAsyncInsertDirectoryQueue::run() @ 0x0000000011cd3878 in /usr/lib/debug/usr/bin/clickhouse.debug
11. ./.build/./contrib/llvm-project/libcxx/include/__functional/function.h:0: DB::BackgroundSchedulePoolTaskInfo::execute() @ 0x00000000103dbc34 in /usr/lib/debug/usr/bin/clickhouse.debug
12. ./.build/./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:701: DB::BackgroundSchedulePool::threadFunction() @ 0x00000000103de1b6 in /usr/lib/debug/usr/bin/clickhouse.debug
13. ./.build/./src/Core/BackgroundSchedulePool.cpp:0: void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0>(DB::BackgroundSchedulePool::BackgroundSchedulePool(unsigned long, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, StrongTypedef<unsigned long, CurrentMetrics::MetricTag>, char const*)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000103de7d1 in /usr/lib/debug/usr/bin/clickhouse.debug
14. ./.build/./base/base/../base/wide_integer_impl.h:809: ThreadPoolImpl<std::thread>::worker(std::__list_iterator<std::thread, void*>) @ 0x000000000b8c5502 in /usr/lib/debug/usr/bin/clickhouse.debug
15. ./.build/./contrib/llvm-project/libcxx/include/__memory/unique_ptr.h:302: void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000b8c936e in /usr/lib/debug/usr/bin/clickhouse.debug
16. ? @ 0x00007f1be8b30fd4 in ?
17. ? @ 0x00007f1be8bb15bc in ?
Instead of ignoring errors, DistributedAsyncInsertBatch::valid() has
been added; it should be called after the files have been read from
current_batch.txt. If the batch is not valid (some files from it no
longer exist), there is no sense in trying to send the same batch, so
current_batch.txt will simply be ignored and the files will be
processed in the regular order.
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
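A minimal sketch of such a validity check (member names are assumptions, not the actual ClickHouse code):

```cpp
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Minimal sketch, not the actual ClickHouse code: a batch restored from
// current_batch.txt is only worth sending if every file it references
// still exists on disk.
struct BatchSketch
{
    std::vector<std::string> files;  /// paths recovered from current_batch.txt

    bool valid() const
    {
        for (const auto & file : files)
            if (!fs::exists(file))
                return false;  /// the batch references a missing .bin file
        return true;
    }
};
```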