Commit Graph

131774 Commits

Author SHA1 Message Date
Amos Bird
6b6e40831c
Move symbols from src/* into namespace DB 2023-12-29 14:37:08 +08:00
Bharat Nallan Chakravarthy
1dce048b58 initial refactor of db registration 2023-12-28 18:59:45 -08:00
凌涛
57c57360db add doc 2023-12-29 10:43:11 +08:00
凌涛
4630398d23 optimize 2023-12-29 10:38:13 +08:00
凌涛
7bd8488db5 Function sparkBar alias sparkbar 2023-12-29 10:08:17 +08:00
robot-ch-test-poll4
ef5837a008
Merge pull request #58318 from ClickHouse/fix-fuzzer-sparse
Fixed logical error in CheckSortedTransform
2023-12-28 23:57:01 +01:00
Kruglov Pavel
fbd3f7cd59
Merge pull request #56132 from Avogar/flatten-only-true-nested
Flatten only true Nested type if flatten_nested=1, not all Array(Tuple)
2023-12-28 20:58:28 +01:00
Alexey Milovidov
a313ee1023
Merge pull request #58298 from ClickHouse/vdimir/fix_01732_race_condition_storage_join_long
Fix timeout in 01732_race_condition_storage_join_long
2023-12-28 19:45:37 +01:00
Alexey Milovidov
b65c87b830
Merge pull request #58309 from ClickHouse/vdimir/fix_00172_hits_joins
Disable max_bytes_before_external* in 00172_hits_joins
2023-12-28 19:44:22 +01:00
Michael Kolupaev
c4f4516a37 Fix WriteBuffer assert if refresh is cancelled at the wrong moment 2023-12-28 18:34:28 +00:00
Michael Kolupaev
ea138fe8c9 space 2023-12-28 17:56:06 +00:00
Michael Kolupaev
96c68e5aae Remove pausing, enable multithreading, kick off refresh on table creation unless the query says EMPTY 2023-12-28 17:56:06 +00:00
Michael Kolupaev
c5f6169fe4 Fix race in the test 2023-12-28 17:56:06 +00:00
Michael Kolupaev
5a3026924d Make the test work with analyzer 2023-12-28 17:56:06 +00:00
Michael Kolupaev
4d732cdf1e Add to system.process, improve test slightly 2023-12-28 17:56:05 +00:00
Michael Kolupaev
bda01ca9db Spelling 2023-12-28 17:56:05 +00:00
Michael Kolupaev
8549dde6ce Fix timezone issue in the test 2023-12-28 17:56:05 +00:00
Michael Kolupaev
edd120e8be Make it experimental 2023-12-28 17:56:05 +00:00
Michael Kolupaev
f0417d0ec3 Things 2023-12-28 17:56:05 +00:00
Michael Kolupaev
0fc7535eba Fixes 2023-12-28 17:56:05 +00:00
Michael Kolupaev
609b2c216c Fix some of the CI 2023-12-28 17:56:05 +00:00
Michael Kolupaev
8b8ef41407 Documentation 2023-12-28 17:56:05 +00:00
Michael Kolupaev
64e6deb197 Slightly more things 2023-12-28 17:56:05 +00:00
Michael Kolupaev
dda0606f67 Things 2023-12-28 17:56:05 +00:00
Michael Kolupaev
98dbd105ad Overhaul timestamp arithmetic 2023-12-28 17:56:04 +00:00
Michael Kolupaev
a524e8c51e Overhaul dependencies 2023-12-28 17:56:04 +00:00
Michael Kolupaev
bd18522cad Overhaul RefreshTask 2023-12-28 17:56:04 +00:00
Michael Kolupaev
29a8edb40e Simple review comments 2023-12-28 17:56:04 +00:00
koloshmet
49367186e3 fix fix fix 2023-12-28 17:56:04 +00:00
koloshmet
fb420a160b proper tmp table cleanup 2023-12-28 17:56:04 +00:00
koloshmet
0999a6d98e proper tmp table cleanup 2023-12-28 17:56:04 +00:00
koloshmet
238741dafe fixed style 2023-12-28 17:56:04 +00:00
koloshmet
4305457883 fixed tests 2023-12-28 17:56:04 +00:00
koloshmet
67e469bee5 refreshable view query test 2023-12-28 17:56:04 +00:00
koloshmet
c52aa984ee refreshable materialized views 2023-12-28 17:56:04 +00:00
Alexey Milovidov
a9ac8dfb74 Update CHANGELOG.md 2023-12-28 18:31:15 +01:00
Nikita Mikhaylov
e15b1c6e5f Fixed 2023-12-28 17:25:27 +00:00
Alexander Tokmakov
95e4b0002f fix a bug in PartsSplitter 2023-12-28 17:25:36 +01:00
Kseniia Sumarokova
8e8fd84cb7
Merge pull request #58293 from ClickHouse/fix-s3-queue-test
Fix test_storage_s3_queue/test.py::test_drop_table
2023-12-28 17:18:11 +01:00
Azat Khuzhin
ecf7188d52 Fix use-after-free in KafkaConsumer due to statistics callback
CI found [1]:

    Exception: Sanitizer assert found for instance
    =================================================================
    ==1==ERROR: AddressSanitizer: heap-use-after-free on address 0x5250006a4100 at pc 0x55d4ed46d2e2 bp 0x7f7e33b40190 sp 0x7f7e33b3f950
    WRITE of size 5390 at 0x5250006a4100 thread T2 (TCPHandler)
       8 0x55d50eba9497 in DB::KafkaConsumer::setRDKafkaStat(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>> const&) build_docker/./src/Storages/Kafka/KafkaConsumer.h:117:22
       12 0x55d51e0eebfe in cppkafka::stats_callback_proxy(rd_kafka_s*, char*, unsigned long, void*) build_docker/./contrib/cppkafka/src/configuration.cpp:92:5
       13 0x55d51e151e3d in rd_kafka_poll_cb build_docker/./contrib/librdkafka/src/rdkafka.c:3790:7
       14 0x55d51e15531b in rd_kafka_consumer_close build_docker/./contrib/librdkafka/src/rdkafka.c:3200:31
       15 0x55d51e0f3241 in cppkafka::Consumer::close() build_docker/./contrib/cppkafka/src/consumer.cpp:293:33
       16 0x55d51e0f3241 in cppkafka::Consumer::~Consumer() build_docker/./contrib/cppkafka/src/consumer.cpp:82:9
       20 0x55d50eb8d12e in DB::KafkaConsumer::~KafkaConsumer() build_docker/./src/Storages/Kafka/KafkaConsumer.cpp:179:1

    0x5250006a4100 is located 0 bytes inside of 8736-byte region [0x5250006a4100,0x5250006a6320)
    freed by thread T2 (TCPHandler) here:
       0 0x55d4ed4a26b2 in operator delete(void*, unsigned long) (/usr/bin/clickhouse+0xa94b6b2) (BuildId: 74ec4a14a5109c41de109e82d56d8d863845144d)
       1 0x55d50eb8ca55 in void std::__1::__libcpp_operator_delete[abi:v15000]<void*, unsigned long>(void*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:256:3
       2 0x55d50eb8ca55 in void std::__1::__do_deallocate_handle_size[abi:v15000]<>(void*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:282:10
       3 0x55d50eb8ca55 in std::__1::__libcpp_deallocate[abi:v15000](void*, unsigned long, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/new:296:14
       4 0x55d50eb8ca55 in std::__1::allocator<char>::deallocate[abi:v15000](char*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/__memory/allocator.h:128:13
       5 0x55d50eb8ca55 in std::__1::allocator_traits<std::__1::allocator<char>>::deallocate[abi:v15000](std::__1::allocator<char>&, char*, unsigned long) build_docker/./contrib/llvm-project/libcxx/include/__memory/allocator_traits.h:282:13
       6 0x55d50eb8ca55 in std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char>>::~basic_string() build_docker/./contrib/llvm-project/libcxx/include/string:2334:9
       7 0x55d50eb8ca55 in DB::KafkaConsumer::~KafkaConsumer() build_docker/./src/Storages/Kafka/KafkaConsumer.cpp:179:1

  [1]: https://s3.amazonaws.com/clickhouse-test-reports/0/745d9bb47f3425e28e5660ed7c730038ffece4ee/integration_tests__asan__analyzer__%5B6_6%5D/integration_run_parallel4_0.log

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
2023-12-28 15:48:43 +01:00
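
The trace above shows the librdkafka statistics callback writing into a std::string member of a KafkaConsumer that has already been destroyed. Below is a minimal standalone sketch of one way to defuse that class of bug; it is not the actual ClickHouse fix, and ConsumerStats, makeStatsCallback and the callback signature are hypothetical. The idea is to hand the C library a callback that reaches the consumer's state only through a weak_ptr, so a report delivered during or after destruction becomes a no-op instead of a heap-use-after-free.

    // Standalone illustrative sketch (not the actual ClickHouse fix).
    #include <functional>
    #include <iostream>
    #include <memory>
    #include <mutex>
    #include <string>

    struct ConsumerStats
    {
        std::mutex mutex;
        std::string last_stat_json;   // what setRDKafkaStat() effectively updates

        void set(std::string json)
        {
            std::lock_guard lock(mutex);
            last_stat_json = std::move(json);
        }
    };

    // Builds a callback suitable for handing to a C library: it holds only a weak
    // reference and never writes into freed memory.
    std::function<void(const std::string &)> makeStatsCallback(std::weak_ptr<ConsumerStats> weak_stats)
    {
        return [weak_stats = std::move(weak_stats)](const std::string & json)
        {
            if (auto stats = weak_stats.lock())   // consumer still alive?
                stats->set(json);
            // else: the consumer is already destroyed, silently drop the report
        };
    }

    int main()
    {
        auto stats = std::make_shared<ConsumerStats>();
        auto callback = makeStatsCallback(stats);

        callback("{\"brokers\": 1}");   // delivered while the consumer is alive
        stats.reset();                  // "consumer" destroyed
        callback("{\"brokers\": 0}");   // late delivery from the poll loop: safely ignored

        std::cout << "no use-after-free\n";
    }
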
Dmitry Novik
50e531bf93 Improve system.errors documentation 2023-12-28 14:47:30 +00:00
Azat Khuzhin
4a14112af1 Move StorageKafka::createConsumer() into KafkaConsumer
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit ebad1bf4f3)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
87f3f6619a Fix data-race between StorageKafka::startup() and cleanConsumers()
Now we can actually create the consumer object in the ctor; there is no need to
do this in startup(), since the consumer no longer connects to Kafka on
construction (see the sketch after this entry).

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 03218202d3)
2023-12-28 15:32:39 +01:00
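
A rough sketch of the pattern behind this fix, under a deliberately simplified model (Storage, Consumer and cleanConsumers are illustrative names, not the real StorageKafka interface): a member created in the constructor is published together with the object itself, so a background cleanup thread can never see it half-initialized, unlike a member that is only filled in later from startup().

    // Illustrative sketch, not the real StorageKafka code.
    #include <memory>
    #include <mutex>

    struct Consumer { /* does not connect to Kafka on construction */ };

    class Storage
    {
    public:
        // Safe: the consumer exists before the storage object is ever shared,
        // so cleanConsumers() cannot race against a later startup() initialization.
        Storage() : consumer(std::make_shared<Consumer>()) {}

        // Periodically called by a background cleanup thread.
        void cleanConsumers()
        {
            std::lock_guard lock(mutex);
            if (consumer && consumer.use_count() == 1)   // nobody is reading: release it
                consumer.reset();
        }

    private:
        std::mutex mutex;
        std::shared_ptr<Consumer> consumer;
    };

    int main()
    {
        Storage storage;
        storage.cleanConsumers();
    }
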
Azat Khuzhin
3c139d7135 Update comment for statistics.interval.ms librdkafka option
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 1f03a21033)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
6f85306510 Use separate thread for kafka consumers cleanup
The shared pool may run out of threads, while this cleanup thread needs to run
always to avoid leaking memory (see the sketch after this entry).

One extra thread should not be a problem, since librdkafka has multiple threads
for each consumer (5!) anyway.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 06a9e9a9ca)
2023-12-28 15:32:39 +01:00
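
A minimal sketch of such a dedicated cleanup thread, assuming a simple period-driven loop (ConsumerCleaner and cleanIdleConsumers are hypothetical names, not the actual ClickHouse scheduler): unlike a task submitted to a shared pool, which can be starved when the pool is saturated, this thread always exists and wakes up on a fixed period or on shutdown.

    // Hypothetical ConsumerCleaner, not the actual ClickHouse scheduler code.
    #include <chrono>
    #include <condition_variable>
    #include <mutex>
    #include <thread>

    class ConsumerCleaner
    {
    public:
        explicit ConsumerCleaner(std::chrono::seconds period_) : period(period_)
        {
            thread = std::thread([this] { run(); });
        }

        ~ConsumerCleaner()
        {
            {
                std::lock_guard lock(mutex);
                stop = true;
            }
            cv.notify_one();
            thread.join();
        }

    private:
        void run()
        {
            std::unique_lock lock(mutex);
            while (!stop)
            {
                cleanIdleConsumers();   // hypothetical hook: release consumers idle past their TTL
                cv.wait_for(lock, period, [this] { return stop; });
            }
        }

        void cleanIdleConsumers() { /* walk the consumer list, release expired ones */ }

        std::chrono::seconds period;
        std::mutex mutex;
        std::condition_variable cv;
        bool stop = false;
        std::thread thread;
    };

    int main()
    {
        ConsumerCleaner cleaner(std::chrono::seconds(60));
        // the destructor stops and joins the thread
    }
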
Azat Khuzhin
8ac68b64d7 Allow setThreadName() to truncate thread name instead of throw an error
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit a7453f7f14)
2023-12-28 15:32:39 +01:00
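
The usual reason a thread name must be truncated is the Linux limit of 15 characters plus the terminating NUL; glibc's pthread_setname_np fails for anything longer. A sketch of the truncating behaviour under an assumed signature (the real ClickHouse setThreadName differs in details):

    // Sketch with an assumed signature, not the real ClickHouse implementation.
    #include <pthread.h>
    #include <stdexcept>
    #include <string>

    void setThreadName(const std::string & name, bool truncate = true)
    {
        constexpr size_t THREAD_NAME_SIZE = 16;   // 15 chars + NUL on Linux

        std::string effective = name;
        if (effective.size() >= THREAD_NAME_SIZE)
        {
            if (!truncate)
                throw std::runtime_error("Thread name is too long: " + name);
            effective.resize(THREAD_NAME_SIZE - 1);
        }

        pthread_setname_np(pthread_self(), effective.c_str());
    }

    int main()
    {
        setThreadName("KafkaCleanupSchedulerThread");   // silently becomes "KafkaCleanupSch"
    }
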
Azat Khuzhin
7d2b82c37c Add ability to configure TTL for kafka consumers
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit b19b70b8fc)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
bea1610219 Preserve KafkaConsumer objects
This makes system.kafka_consumers more useful: prior to this patch the consumer
object was removed once its TTL expired, but now all of its information is
preserved.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit 2ff0bfb0a1)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
71fdde76c2 Enable stats for system.kafka_consumers back by default
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit db74549940)
2023-12-28 15:32:39 +01:00
Azat Khuzhin
d66be02dc3 Create consumers for Kafka tables on the fly (but keep them for 1min since last used)
The pool of consumers created a problem for librdkafka internal statistics: the
statistics queue must always be read, while in ClickHouse consumers were created
regardless of whether there were any readers (attached materialized views or
direct SELECTs).

Otherwise, these statistics messages get queued and never released, which:
- creates a live memory leak
- makes destruction very slow, due to librdkafka internals (it moves entries
  from this queue into another linked list, with sorting, which is incredibly
  slow for linked lists)

So the idea is simple: create consumers only when they are required, and destroy
them after some timeout (currently 60 seconds) if nobody uses them; that way the
problem should be gone (a sketch of this approach follows this entry).

This should also reduce the number of internal librdkafka threads when nobody
reads from Kafka tables.

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
(cherry picked from commit e7592c140e)
2023-12-28 15:32:39 +01:00
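
A minimal sketch of the on-demand pool described above, with a hypothetical ConsumerPool exposing acquire() and cleanExpired() (the real logic lives in StorageKafka and is more involved): consumers are created only when a reader asks for one and are dropped by the cleanup pass once they have sat idle longer than the TTL, 60 seconds in this patch.

    // Hypothetical ConsumerPool, a simplification of the behaviour described above.
    #include <chrono>
    #include <memory>
    #include <mutex>
    #include <vector>

    using Clock = std::chrono::steady_clock;

    struct Consumer
    {
        Clock::time_point last_used = Clock::now();
    };

    class ConsumerPool
    {
    public:
        explicit ConsumerPool(std::chrono::seconds ttl_) : ttl(ttl_) {}

        std::shared_ptr<Consumer> acquire()
        {
            std::lock_guard lock(mutex);
            for (auto & consumer : consumers)
            {
                if (consumer.use_count() == 1)   // idle consumer: reuse it
                {
                    consumer->last_used = Clock::now();
                    return consumer;
                }
            }
            // No idle consumer: create one lazily instead of pre-creating a fixed pool.
            consumers.push_back(std::make_shared<Consumer>());
            return consumers.back();
        }

        // Called by the periodic cleanup thread.
        void cleanExpired()
        {
            std::lock_guard lock(mutex);
            const auto now = Clock::now();
            std::erase_if(consumers, [&](const auto & consumer)
            {
                return consumer.use_count() == 1 && now - consumer->last_used > ttl;
            });
        }

    private:
        std::chrono::seconds ttl;
        std::mutex mutex;
        std::vector<std::shared_ptr<Consumer>> consumers;
    };

    int main()
    {
        ConsumerPool pool(std::chrono::seconds(60));
        auto consumer = pool.acquire();   // created on first use
        consumer.reset();                 // idle now; cleanExpired() will drop it after the TTL
        pool.cleanExpired();
    }
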