Commit Graph

15847 Commits

Alexey Milovidov
8fc05e25fe
Merge pull request #58310 from azat/kafka-fix-stat-leak-resubmit
Create consumers for Kafka tables on the fly with TTL (resubmit)
2023-12-30 13:03:16 +01:00
Alexey Milovidov
e1812f3b58
Merge pull request #58266 from ClickHouse/vdimir/simple_fix_tuple_elimination
Analyzer: fix tuple comparison when result is always null
2023-12-30 13:02:38 +01:00
Alexey Milovidov
aa6ecd2d59
Merge pull request #58343 from azat/s3/optional-gcs-compose
Avoid sending ComposeObject requests after upload to GCS
2023-12-30 12:40:04 +01:00
Alexey Milovidov
40ca9c202d
Merge pull request #58346 from ClickHouse/check-what-would-be-ifremove-array-joined-columns-from-key-condition
Check what happens if array-joined columns are removed from KeyCondition
2023-12-30 12:38:57 +01:00
Alexey Milovidov
f058394d92
Merge pull request #58351 from ClickHouse/fix_00002
Keep exception format string in retries ctl
2023-12-30 12:37:36 +01:00
Alexey Milovidov
39b239683c Attach all system tables in clickhouse-local 2023-12-29 21:25:22 +01:00
Nikolai Kochetov
b95bdef09e Update StorageS3 and StorageS3Cluster 2023-12-29 17:41:11 +00:00
Kruglov Pavel
f57939096c
Merge branch 'master' into ignore-mv-with-dropped-target-table 2023-12-29 17:02:23 +01:00
Nikolai Kochetov
5521e5d9b1 Refactor StorageHDFS and StorageFile virtual columns filtering 2023-12-29 15:58:01 +00:00
robot-ch-test-poll3
07ba672e37
Merge pull request #58142 from canhld94/final_less_compare
MergeTree FINAL to not compare rows from same non-L0 part
2023-12-29 16:47:14 +01:00
Azat Khuzhin
a12df35be4 Eliminate possible race between ALTER_METADATA and MERGE_PARTS
v2: move the metadata version check after checking that the part is not a covering part
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-29 16:46:10 +01:00
Azat Khuzhin
c7fa93d704 Add infrastructure for testing replicated MergeTree queue
- replicated_queue_fail_next_entry - to fail the next queue entry
- replicated_queue_unfail_entries - to "unfail" all queue entries (if
  any)

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-29 16:43:01 +01:00
Alexander Tokmakov
1013f6b23f
Merge branch 'master' into reintroduce_is_deleted 2023-12-29 15:46:24 +01:00
Alexander Tokmakov
72a0797b88 keep exception format string in retries ctl 2023-12-29 15:21:46 +01:00
Alexey Milovidov
ea03cc82aa
Merge pull request #58320 from ClickHouse/mv3
Refreshable materialized views again
2023-12-29 14:44:50 +01:00
Azat Khuzhin
853fdfe775 Clean cached messages on destroying the Kafka consumer
The call chain of the Kafka consumer is very tricky, so for the sake of
common sense let's just clean the messages when moving out the consumer (and
in the dtor, but this is just to keep those two code paths in sync).

(Also reported by @filimonov)

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-29 14:30:21 +01:00
Azat Khuzhin
b3d6caf37f Unsubscribe kafka consumer before cleaning it by TTL
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-29 14:03:53 +01:00
Nikolai Kochetov
0e8232a8c3 Check what happens if array-joined columns are removed from KeyCondition 2023-12-29 12:24:19 +00:00
Azat Khuzhin
f578541ded Fix destructing the Kafka consumer via member order
We've discussed this with @filimonov and he pointed out that everything
else (except for rdkafka_stat/rdkafka_stat_mutex) is already handled via
member order, so let's do it in the same style.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-29 13:19:11 +01:00
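For context, a minimal C++ sketch of the language rule this fix relies on (illustrative only, not ClickHouse code; Handle and Consumer are made-up names): non-static data members are destroyed in reverse declaration order, so a member that a callback may still touch during shutdown has to be declared before the member whose destructor can fire that callback.

    #include <functional>
    #include <string>

    // Stand-in for a handle whose destructor may still deliver a statistics
    // callback while it is being closed (as librdkafka does on consumer close).
    struct Handle
    {
        std::function<void(const std::string &)> on_stats;
        ~Handle()
        {
            if (on_stats)
                on_stats("{...}");  // callback can fire during destruction
        }
    };

    struct Consumer
    {
        // Declared first => destroyed last, so it is still alive when ~Handle()
        // runs. Declaring it after `handle` would reproduce the
        // heap-use-after-free shown in the sanitizer report further down.
        std::string stat;
        Handle handle;  // declared last => destroyed first

        Consumer() { handle.on_stats = [this](const std::string & s) { stat = s; }; }
    };

    int main() { Consumer c; }  // safe: `stat` outlives `handle`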
Azat Khuzhin
8c54380d80 Avoid sending ComposeObject requests after upload to GCS
This should not be required anymore, but keep it as an option, since
it is likely still required for old files.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-29 11:53:49 +01:00
Azat Khuzhin
f4a7789cd4 Convert various S3::Client settings into separate ClientSettings struct
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-29 11:53:49 +01:00
Duc Canh Le
91a87d6b6c better implementation
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
2023-12-29 07:27:10 +00:00
Igor Nikonov
208a9193f6 Merge remote-tracking branch 'origin/master' into pr-custom-key-failover 2023-12-28 21:28:36 +00:00
Alexander Tokmakov
852f397a97 fix lost blobs after dropping a replica with broken detached parts 2023-12-28 21:47:19 +01:00
Kruglov Pavel
fbd3f7cd59
Merge pull request #56132 from Avogar/flatten-only-true-nested
Flatten only true Nested type if flatten_nested=1, not all Array(Tuple)
2023-12-28 20:58:28 +01:00
Michael Kolupaev
c4f4516a37 Fix WriteBuffer assert if refresh is cancelled at the wrong moment 2023-12-28 18:34:28 +00:00
Nikolai Kochetov
490a8bce9e Remove commented code. 2023-12-28 18:01:08 +00:00
Michael Kolupaev
ea138fe8c9 space 2023-12-28 17:56:06 +00:00
Michael Kolupaev
96c68e5aae Remove pausing, enable multithreading, kick off refresh on table creation unless the query says EMPTY 2023-12-28 17:56:06 +00:00
Michael Kolupaev
4d732cdf1e Add to system.process, improve test slightly 2023-12-28 17:56:05 +00:00
Michael Kolupaev
f0417d0ec3 Things 2023-12-28 17:56:05 +00:00
Michael Kolupaev
0fc7535eba Fixes 2023-12-28 17:56:05 +00:00
Michael Kolupaev
609b2c216c Fix some of the CI 2023-12-28 17:56:05 +00:00
Michael Kolupaev
8b8ef41407 Documentation 2023-12-28 17:56:05 +00:00
Michael Kolupaev
64e6deb197 Slightly more things 2023-12-28 17:56:05 +00:00
Michael Kolupaev
dda0606f67 Things 2023-12-28 17:56:05 +00:00
Michael Kolupaev
98dbd105ad Overhaul timestamp arithmetic 2023-12-28 17:56:04 +00:00
Michael Kolupaev
a524e8c51e Overhaul dependencies 2023-12-28 17:56:04 +00:00
Michael Kolupaev
bd18522cad Overhaul RefreshTask 2023-12-28 17:56:04 +00:00
Michael Kolupaev
29a8edb40e Simple review comments 2023-12-28 17:56:04 +00:00
koloshmet
49367186e3 fix fix fix 2023-12-28 17:56:04 +00:00
koloshmet
fb420a160b proper tmp table cleanup 2023-12-28 17:56:04 +00:00
koloshmet
0999a6d98e proper tmp table cleanup 2023-12-28 17:56:04 +00:00
koloshmet
238741dafe fixed style 2023-12-28 17:56:04 +00:00
koloshmet
c52aa984ee refreshable materialized views 2023-12-28 17:56:04 +00:00
Nikolai Kochetov
4c68716df7 Fix another test. 2023-12-28 17:51:11 +00:00
Nikolai Kochetov
d7a473e386 Fix some test. 2023-12-28 17:34:28 +00:00
avogar
e66701dd10 Add setting ignore_materialized_views_with_dropped_target_table 2023-12-28 15:00:39 +00:00
Nikolai Kochetov
50e9c9bb4e Fixing tests. 2023-12-28 14:59:33 +00:00
Azat Khuzhin
ecf7188d52 Fix use-after-free in KafkaConsumer due to statistics callback
CI found [1]:

    Exception: Sanitizer assert found for instance =================================================================
    ==1==ERROR: AddressSanitizer: heap-use-after-free on address 0x5250006a4100 at pc 0x55d4ed46d2e2 bp 0x7f7e33b40190 sp 0x7f7e33b3f950
    WRITE of size 5390 at 0x5250006a4100 thread T2 (TCPHandler)
       8 0x55d50eba9497 in DB::KafkaConsumer::setRDKafkaStat(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/./src/Storages/Kafka/KafkaConsumer.h:117:22
       12 0x55d51e0eebfe in cppkafka::stats_callback_proxy(rd_kafka_s*, char*, unsigned long, void*) build_docker/./contrib/cppkafka/src/configuration.cpp:92:5
       13 0x55d51e151e3d in rd_kafka_poll_cb build_docker/./contrib/librdkafka/src/rdkafka.c:3790:7
       14 0x55d51e15531b in rd_kafka_consumer_close build_docker/./contrib/librdkafka/src/rdkafka.c:3200:31
       15 0x55d51e0f3241 in cppkafka::Consumer::close() build_docker/./contrib/cppkafka/src/consumer.cpp:293:33
       16 0x55d51e0f3241 in cppkafka::Consumer::~Consumer() build_docker/./contrib/cppkafka/src/consumer.cpp:82:9
       20 0x55d50eb8d12e in DB::KafkaConsumer::~KafkaConsumer() build_docker/./src/Storages/Kafka/KafkaConsumer.cpp:179:1

    0x5250006a4100 is located 0 bytes inside of 8736-byte region [0x5250006a4100,0x5250006a6320)
    freed by thread T2 (TCPHandler) here:
       0 0x55d4ed4a26b2 in operator delete(void*, unsigned long) (/usr/bin/clickhouse+0xa94b6b2) (BuildId: 74ec4a14a5109c41de109e82d56d8d863845144d)
       1 0x55d50eb8ca55 in void std::__1::__libcpp_operator_delete[abi:v15000]<void*, unsigned long>(void*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:256:3
       2 0x55d50eb8ca55 in void std::__1::__do_deallocate_handle_size[abi:v15000]<>(void*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:282:10
       3 0x55d50eb8ca55 in std::__1::__libcpp_deallocate[abi:v15000](void*, unsigned long, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:296:14
       4 0x55d50eb8ca55 in std::__1::allocator<char>::deallocate[abi:v15000](char*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/__memory/allocator.h:128:13
       5 0x55d50eb8ca55 in std::__1::allocator_traits<std::__1::allocator<char>>::deallocate[abi:v15000](std::__1::allocator<char>&, char*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:282:13
       6 0x55d50eb8ca55 in std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::~basic_string() build_docker/./contrib/llvm-project/libcxx/include/string:2334:9
       7 0x55d50eb8ca55 in DB::KafkaConsumer::~KafkaConsumer() build_docker/./src/Storages/Kafka/KafkaConsumer.cpp:179:1

  [1]: https://s3.amazonaws.com/clickhouse-test-reports/0/745d9bb47f3425e28e5660ed7c730038ffece4ee/integration_tests__asan__analyzer__%5B6_6%5D/integration_run_parallel4_0.log

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-28 15:48:43 +01:00
Azat Khuzhin
4a14112af1 Move StorageKafka::createConsumer() into KafkaConsumer
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit ebad1bf4f3)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
87f3f6619a Fix data-race between StorageKafka::startup() and cleanConsumers()
Actually, now we can create the consumer object in the ctor; there is no need
to do this in startup(), since the consumer no longer connects to Kafka.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 03218202d3)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
3c139d7135 Update comment for statistics.interval.ms librdkafka option
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 1f03a21033)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
6f85306510 Use separate thread for kafka consumers cleanup
The pool may not have a free thread available, while this cleanup needs to
run unconditionally to avoid leaking memory.

And this should not be a problem, since librdkafka has multiple threads
(5!) for each consumer anyway.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 06a9e9a9ca)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
7d2b82c37c Add ability to configure TTL for kafka consumers
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit b19b70b8fc)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
bea1610219 Preserve KafkaConsumer objects
This makes system.kafka_consumers more useful: prior to this patch the
consumer object was removed after the TTL expired, but now all of its
information is preserved.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 2ff0bfb0a1)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
71fdde76c2 Enable stats for system.kafka_consumers by default again
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit db74549940)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
d66be02dc3 Create consumers for Kafka tables on the fly (but keep them for 1 minute after last use)
The pool of consumers created a problem for librdkafka's internal statistics:
you always need to read from the statistics queue, while in ClickHouse
consumers were created regardless of whether there were any readers (attached
materialized views or direct SELECTs).

Otherwise, these statistics messages get queued and never released, which:
- creates a live memory leak
- and also makes destruction very slow, due to librdkafka internals (it
  moves entries from this queue into another linked list, but with sorting,
  which is incredibly slow for linked lists)

So the idea is simple: create consumers only when they are required, and
destroy them after some timeout (right now it is 60 seconds) if nobody uses
them; that way this problem should be gone.

This should also reduce the number of internal librdkafka threads when
nobody reads from the Kafka tables.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit e7592c140e)
2023-12-28 15:32:39 +01:00
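A rough sketch of the create-on-demand-with-idle-TTL scheme described above (illustrative only; KafkaHandle, ConsumerPool and the hard-coded 60-second TTL are stand-ins, not the actual StorageKafka types or settings):

    #include <chrono>
    #include <list>
    #include <memory>
    #include <mutex>

    struct KafkaHandle { /* would wrap a librdkafka consumer */ };

    class ConsumerPool
    {
    public:
        // Hand out an idle consumer, or create one only when a reader needs it.
        std::shared_ptr<KafkaHandle> acquire()
        {
            std::lock_guard lock(mutex);
            if (!idle.empty())
            {
                auto consumer = idle.front().consumer;
                idle.pop_front();
                return consumer;
            }
            return std::make_shared<KafkaHandle>();
        }

        // Return the consumer and remember when it was last used.
        void release(std::shared_ptr<KafkaHandle> consumer)
        {
            std::lock_guard lock(mutex);
            idle.push_back({std::move(consumer), std::chrono::steady_clock::now()});
        }

        // Called periodically (from a dedicated cleanup thread, per the commits
        // above): drop consumers that nobody has used for longer than the TTL.
        void cleanup()
        {
            std::lock_guard lock(mutex);
            auto now = std::chrono::steady_clock::now();
            idle.remove_if([&](const Idle & e) { return now - e.last_used > ttl; });
        }

    private:
        struct Idle
        {
            std::shared_ptr<KafkaHandle> consumer;
            std::chrono::steady_clock::time_point last_used;
        };

        std::chrono::seconds ttl{60};  // "right now it is 60 seconds"
        std::mutex mutex;
        std::list<Idle> idle;
    };

The real change also unsubscribes and drains the consumer before destroying it (see the related commits in this log); the sketch leaves that out.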
Azat Khuzhin
a6841c8915 Properly set shutdown_called in StorageKafka::shutdown()
Fixes: https://github.com/ClickHouse/ClickHouse/pull/42777
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 51d4f583e6)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
3541d9a05f Remove StorageKafka::num_created_consumers (in favor of all_consumers.size())
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 123d63e824)
2023-12-28 15:32:39 +01:00
avogar
e1a9baa5b0 Fix 2023-12-28 13:51:37 +00:00
Nikolai Kochetov
737563296b
Merge branch 'master' into filter-virtual-columns-storage-merge 2023-12-28 14:47:41 +01:00
Alexander Tokmakov
bdada351c8 Revert "Merge pull request #58274 from ClickHouse/revert-58267"
This reverts commit 583b9637c2, reversing
changes made to 224e937620.
2023-12-28 14:07:59 +01:00
Alexander Tokmakov
5fcbf9cfb0 Revert "Merge pull request #58251 from ClickHouse/reintroduce-compatibility-with-a-misfeature"
This reverts commit a811d5b761, reversing
changes made to 583b9637c2.
2023-12-28 14:06:56 +01:00
Alexander Tokmakov
38fe70c68a
Revert "Refreshable materialized views (takeover)" 2023-12-28 13:12:20 +01:00
Duc Canh Le
238c5e66d5 use ChunkInfo to carry part level
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
2023-12-28 11:01:18 +00:00
Alexey Milovidov
4bb8592434 Update autogenerated version to 23.13.1.1 and contributors 2023-12-28 11:22:16 +01:00
Alexey Milovidov
524d53199d
Merge branch 'master' into mv 2023-12-28 04:11:48 +01:00
Alexey Milovidov
a811d5b761
Merge pull request #58251 from ClickHouse/reintroduce-compatibility-with-a-misfeature
Reintroduce compatibility with `is_deleted` on a syntax level
2023-12-28 04:11:04 +01:00
Alexey Milovidov
c7efd2afea Revert #58267 2023-12-28 04:09:33 +01:00
Alexey Milovidov
40a5dbdeba
Merge branch 'master' into mv 2023-12-28 03:16:27 +01:00
Alexey Milovidov
c52886eb81
Revert "Create consumers for Kafka tables on fly (but keep them for some period since last used)" 2023-12-28 03:35:57 +03:00
Alexey Milovidov
1d9dbfd18b
Merge pull request #49103 from ClickHouse/check-about-global-sorting
Fixed a sorting order breakage in TTL GROUP BY
2023-12-28 01:35:14 +01:00
Alexey Milovidov
d7a35773c1
Merge pull request #58252 from Algunenano/i51543
Avoid throwing ABORTED on normal situations
2023-12-28 00:28:51 +01:00
Alexey Milovidov
8d984df135
Merge pull request #58237 from azat/build/fwd-decl-exception
Some code refactoring (was an attempt to improve build time, but failed)
2023-12-28 00:21:09 +01:00
Alexey Milovidov
c024dc9c3d
Merge pull request #58265 from ClickHouse/remove-mayBenefitFromIndexForIn
Remove mayBenefitFromIndexForIn
2023-12-28 00:15:04 +01:00
Alexey Milovidov
dcbd3b9c26
Merge pull request #58267 from ClickHouse/fix_is_deleted_compatibility
Re-introduce `is_deleted` column for ReplacingMergeTree
2023-12-28 00:13:01 +01:00
Michael Kolupaev
4d4d8e0545 space 2023-12-27 20:25:35 +00:00
Michael Kolupaev
b9cbecb0df Remove pausing, enable multithreading, kick off refresh on table creation unless the query says EMPTY 2023-12-27 20:24:56 +00:00
Michael Kolupaev
de8567660c Add to system.process, improve test slightly 2023-12-27 20:24:55 +00:00
Michael Kolupaev
538b23d862 Things 2023-12-27 20:24:55 +00:00
Michael Kolupaev
802961f0a2 Fixes 2023-12-27 20:24:55 +00:00
Michael Kolupaev
673743e2ac Fix some of the CI 2023-12-27 20:24:55 +00:00
Michael Kolupaev
7786b12a89 Documentation 2023-12-27 20:24:55 +00:00
Michael Kolupaev
418423a304 Slightly more things 2023-12-27 20:24:55 +00:00
Michael Kolupaev
ef4cc5ec7f Things 2023-12-27 20:24:55 +00:00
Michael Kolupaev
a7c369e14f Overhaul timestamp arithmetic 2023-12-27 20:24:55 +00:00
Michael Kolupaev
01369a0a8a Overhaul dependencies 2023-12-27 20:24:54 +00:00
Michael Kolupaev
01345981e2 Overhaul RefreshTask 2023-12-27 20:24:54 +00:00
Michael Kolupaev
5dc04a13a7 Simple review comments 2023-12-27 20:24:54 +00:00
koloshmet
808cb0fa05 fix fix fix 2023-12-27 20:24:54 +00:00
koloshmet
f1161566b4 proper tmp table cleanup 2023-12-27 20:24:54 +00:00
koloshmet
f14114dafc proper tmp table cleanup 2023-12-27 20:24:54 +00:00
koloshmet
d1932763f3 fixed style 2023-12-27 20:24:54 +00:00
koloshmet
c762898adb refreshable materialized views 2023-12-27 20:24:54 +00:00
Alexander Tokmakov
a3cba8e06f
Update StorageReplicatedMergeTree.cpp 2023-12-27 20:27:15 +01:00
Alexander Tokmakov
f5bcfaffa5 disable vertical merges with cleanup 2023-12-27 19:28:50 +01:00
vdimir
1137461aaf
Analyzer: fix tuple comparison when result is always null 2023-12-27 18:19:39 +00:00
Nikolai Kochetov
e493789bf3 Remove from indexes as well. 2023-12-27 17:51:23 +00:00
Nikolai Kochetov
bcd34b25b2 Remove mayBenefitFromIndexForIn 2023-12-27 17:42:40 +00:00
Nikolai Kochetov
9f9b080b00
Update StorageMerge.cpp 2023-12-27 18:33:00 +01:00
Alexander Tokmakov
f924848347 partially revert #54368 (f28ad1e136) 2023-12-27 18:17:59 +01:00
Raúl Marín
dfe7b0e973 Keep message 2023-12-27 18:13:22 +01:00
Nikolai Kochetov
2f50d3da50 Filter virtual columns for StorageMerge from plan filter condition. 2023-12-27 17:05:23 +00:00
Raúl Marín
5f183649b2 Avoid throwing ABORTED on normal situations 2023-12-27 17:44:46 +01:00
Alexey Milovidov
64b4e1a66f Reintroduce compatibility with is_deleted on a syntax level 2023-12-27 17:42:51 +01:00
Nikolai Kochetov
3ec1b2a852 Refactor StorageMerge. 2023-12-27 16:32:21 +00:00
avogar
9ef8de21b2 Read column once while reading more than one subcolumn from it in Compact parts 2023-12-27 16:30:04 +00:00
Nikita Mikhaylov
3dbd3b3e61 Better 2023-12-27 15:50:20 +00:00
Nikita Mikhaylov
b60109d43e Better 2023-12-27 15:50:20 +00:00
Alexey Milovidov
f00337e2ba
Merge pull request #57872 from CurtizJ/optimize-aggregation-consecutive-keys
Better optimization of consecutive keys in aggregation
2023-12-27 15:44:22 +01:00
Azat Khuzhin
b9233f6d4f Move Allocator code into module part
This should reduce the amount of code that has to be recompiled on
Exception.h changes (and on changes to everything else included there).

This will actually not help a lot, because it is also included in
PODArray.h and ThreadPool.h at least... Sigh.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 15:42:08 +01:00
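A tiny sketch of the header/implementation split this commit is about (illustrative only, not the actual ClickHouse sources): with only declarations in the header, translation units that include it no longer recompile when the heavy headers used by the implementation change.

    // Allocator.h (sketch): declarations only, cheap to include.
    #pragma once
    #include <cstddef>

    struct Allocator
    {
        void * alloc(std::size_t size);             // defined in the .cpp below
        void free(void * ptr, std::size_t size);
    };

    // Allocator.cpp (sketch): definitions and the heavy includes live here, so a
    // change to something like Exception.h is paid only by this translation unit,
    // not by every includer of Allocator.h.
    #include "Allocator.h"
    #include <cstdlib>
    // #include <Common/Exception.h>  // heavy header stays out of Allocator.h

    void * Allocator::alloc(std::size_t size) { return std::malloc(size); }
    void Allocator::free(void * ptr, std::size_t /*size*/) { std::free(ptr); }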
Alexander Tokmakov
01d042c490 Revert "Merge pull request #57932 from ClickHouse/remove-shit-cleanup"
This reverts commit 2d58dc512c, reversing
changes made to 41873dc4a3.
2023-12-27 13:46:06 +01:00
Alexander Tokmakov
eeadeaa89d Revert "Merge pull request #58104 from ClickHouse/cleanup-replication-compatibility"
This reverts commit 34fd555ee6, reversing
changes made to cb53ee63be.
2023-12-27 13:03:38 +01:00
Azat Khuzhin
ebad1bf4f3 Move StorageKafka::createConsumer() into KafkaConsumer
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
03218202d3 Fix data-race between StorageKafka::startup() and cleanConsumers()
Actually, now we can create the consumer object in the ctor; there is no need
to do this in startup(), since the consumer no longer connects to Kafka.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
1f03a21033 Update comment for statistics.interval.ms librdkafka option
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
06a9e9a9ca Use separate thread for kafka consumers cleanup
The pool may not have a free thread available, while this cleanup needs to
run unconditionally to avoid leaking memory.

And this should not be a problem, since librdkafka has multiple threads
(5!) for each consumer anyway.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
b19b70b8fc Add ability to configure TTL for kafka consumers
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
2ff0bfb0a1 Preserve KafkaConsumer objects
This makes system.kafka_consumers more useful: prior to this patch the
consumer object was removed after the TTL expired, but now all of its
information is preserved.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
db74549940 Enable stats for system.kafka_consumers by default again
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
e7592c140e Create consumers for Kafka tables on the fly (but keep them for 1 minute after last use)
The pool of consumers created a problem for librdkafka's internal statistics:
you always need to read from the statistics queue, while in ClickHouse
consumers were created regardless of whether there were any readers (attached
materialized views or direct SELECTs).

Otherwise, these statistics messages get queued and never released, which:
- creates a live memory leak
- and also makes destruction very slow, due to librdkafka internals (it
  moves entries from this queue into another linked list, but with sorting,
  which is incredibly slow for linked lists)

So the idea is simple: create consumers only when they are required, and
destroy them after some timeout (right now it is 60 seconds) if nobody uses
them; that way this problem should be gone.

This should also reduce the number of internal librdkafka threads when
nobody reads from the Kafka tables.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
51d4f583e6 Properly set shutdown_called in StorageKafka::shutdown()
Fixes: https://github.com/ClickHouse/ClickHouse/pull/42777
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Azat Khuzhin
123d63e824 Remove StorageKafka::num_created_consumers (in favor of all_consumers.size())
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-27 09:49:07 +01:00
Igor Nikonov
bee15325fc Merge remote-tracking branch 'origin/master' into pr-custom-key-failover 2023-12-26 21:56:46 +00:00
Alexey Milovidov
a0fccb0498
Merge pull request #58224 from amosbird/part_offset_pk
Primary key analysis for _part_offset
2023-12-26 14:51:57 +01:00
Alexey Milovidov
31a081bd83
Merge pull request #58226 from Algunenano/cleanup_known_short
Cleanup some known short messages
2023-12-26 14:40:58 +01:00
Raúl Marín
e87b9751bd Cleanup some known short messages 2023-12-26 12:58:50 +01:00
Amos Bird
66660ee4e2
Add comment 2023-12-26 17:04:00 +08:00
Amos Bird
bfcccf9fa3
Primary key analysis for _part_offset 2023-12-26 17:03:59 +08:00
santrancisco
a59d874bf9
fix syntax 2023-12-26 16:56:58 +11:00
凌涛
a09bdd4367 Merge branch 'master' into optimization/BF_support_rg 2023-12-26 10:09:58 +08:00
Azat Khuzhin
837f4ea676 Add ability to throttle merges/mutations
The main motivation was to have the ability to throttle background tasks,
to avoid affecting queries.

Two new server settings have been added for this:
- max_mutations_bandwidth_for_server
- max_merges_bandwidth_for_server

Note that they limit only reading, since usually you will not write more
data than you read, but sometimes that is possible in case of ALTER
UPDATE.

But for now, to keep things simple, I decided to limit this with only
2 settings instead of 4.

Note that if write throttling is ever needed, the same settings can be
used, with a separate throttler created for writes.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-24 22:31:49 +01:00
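The two server settings are named above; the sketch below shows one way such a read-bandwidth limiter can work, as a minimal token bucket (illustrative only, not ClickHouse's own Throttler API). The merge/mutation read path would call throttle() with the number of bytes it is about to read, with the rate taken from max_merges_bandwidth_for_server or max_mutations_bandwidth_for_server.

    #include <algorithm>
    #include <chrono>
    #include <cstddef>
    #include <thread>

    // Minimal token-bucket read throttler (sketch).
    class ReadThrottler
    {
    public:
        explicit ReadThrottler(double max_bytes_per_sec) : rate(max_bytes_per_sec) {}

        // Account for `bytes` about to be read; sleep if the budget is exhausted.
        void throttle(std::size_t bytes)
        {
            using namespace std::chrono;
            auto now = steady_clock::now();
            double elapsed = duration<double>(now - last).count();
            last = now;
            // Refill at `rate` bytes/sec, allowing at most one second of burst.
            budget = std::min(rate, budget + elapsed * rate) - static_cast<double>(bytes);
            if (budget < 0)
                std::this_thread::sleep_for(duration<double>(-budget / rate));
        }

    private:
        double rate;
        double budget = 0;
        std::chrono::steady_clock::time_point last = std::chrono::steady_clock::now();
    };

In the real server such a throttler would be shared server-wide and applied on the read path of merges and mutations, matching the note above that only reads are limited.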
Azat Khuzhin
79de5c16c9 Apply all reader settings for merges/mutations
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-24 22:29:43 +01:00
Azat Khuzhin
e71f6893cc Add brief comment for MergeTreeSequentialSource
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-24 22:29:42 +01:00
Azat Khuzhin
3be3b0a280 Fix incorrect Exceptions
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-24 21:26:32 +01:00
Alexey Milovidov
ae51334ba5 Merge branch 'master' into fix-error-in-archive-reader 2023-12-24 05:53:22 +01:00
Alexey Milovidov
e98c49a58f Fix a benign error in archive reader 2023-12-24 05:44:24 +01:00
Alexey Milovidov
3f4c8e4ae8
Merge pull request #58167 from jrdi/part-log-uncompressed-bytes
Add bytes_uncompressed to system.part_log
2023-12-24 04:11:35 +01:00
Alexey Milovidov
b4bf1d1c4c
Merge pull request #58136 from azat/system.stack_trace-rt_tgsigqueueinfo-v2
Fix system.stack_trace for threads with blocked SIGRTMIN (resubmit)
2023-12-24 03:51:13 +01:00
Alexey Milovidov
4f3f69521d
Merge pull request #58173 from ClickHouse/parallel-replicas-used-count
Profile event 'ParallelReplicasUsedCount'
2023-12-24 03:46:09 +01:00
Alexey Milovidov
00fa9085b1
Merge pull request #58178 from chhetripradeep/add-base-backup-name-to-system-tables
Add base backup name to system.backups and system.backup_log tables
2023-12-24 03:38:20 +01:00
Azat Khuzhin
2f6c0487ad Ignore ENOENT for SigBlk check for system.stack_trace
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-23 14:35:38 +01:00
Azat Khuzhin
ac542199c5 Add some comments about racy code for system.stack_trace
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-23 13:42:26 +01:00
Igor Nikonov
d644a208bf Merge remote-tracking branch 'origin/master' into parallel-replicas-used-count 2023-12-23 11:02:28 +00:00
Igor Nikonov
3a485a8bbf Fix: moved request object was used 2023-12-23 11:02:24 +00:00
Alexey Milovidov
dc4b9a1013 Obfuscator: keep settings and timezones 2023-12-23 04:55:55 +01:00
Yakov Olkhovskiy
d7fe86279f StoragesInfoStreamBase refactoring, additional test, style fix 2023-12-23 03:47:43 +00:00
Pradeep Chhetri
b5c8c4050b Add base backup name to system.backups and system.backup_log tables 2023-12-23 11:08:50 +08:00
Jordi Villar
bff0b9c790 Fix uncompressed bytes of new parts created by mutations 2023-12-22 22:33:58 +01:00