Some third-party libraries (e.g. librdkafka) can block the signal, in which
case system.stack_trace returns the stack trace of the main process (in
fact, of an arbitrary thread that has not blocked the signal).
By replacing sigqueue() with the more precise rt_tgsigqueueinfo(), the
signal is delivered only to the target thread, so other threads will not
respond to it.
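A minimal sketch of the targeted delivery (illustrative names, not the
actual patch): rt_tgsigqueueinfo(2) has no glibc wrapper, so it is invoked
via syscall(2).

```cpp
#include <csignal>
#include <sys/syscall.h>
#include <unistd.h>

// Hypothetical helper: deliver a queued signal to one specific thread.
// sigqueue() targets the whole process, so any thread that has not blocked
// `sig` may pick it up; rt_tgsigqueueinfo targets exactly (tgid, tid).
static long sendSignalToThread(pid_t tid, int sig)
{
    siginfo_t info{};
    info.si_signo = sig;
    info.si_code = SI_QUEUE;        // the code sigqueue() would set
    info.si_value.sival_int = 0;    // optional payload

    return syscall(SYS_rt_tgsigqueueinfo, getpid(), tid, sig, &info);
}
```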
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
* save all merge params to zookeeper
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* calculate hash for graphite merge params
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* add graphite params hash to zookeeper + fix tests (see the sketch after this commit list)
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* install new graphite for testing
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* fix backward incompatibility
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* minor test fix
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
* Update src/Storages/MergeTree/ReplicatedMergeTreeTableMetadata.cpp
Co-authored-by: Alexander Tokmakov <tavplubix@gmail.com>
* remove peekString and add more comments
- peekString doesn't always work, even for ReadBufferFromString
- more comments regarding backward compatibility
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
---------
Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
Co-authored-by: Alexander Tokmakov <tavplubix@gmail.com>
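To illustrate the idea behind the hash commits above (hypothetical names
and types; the real code lives in ReplicatedMergeTreeTableMetadata.cpp and
uses ClickHouse's own serialization and hash functions): the Graphite merge
parameters are flattened into a deterministic string, and only a hash of
that string is stored in the ZooKeeper metadata, so replicas can cheaply
detect divergence.

```cpp
#include <functional>
#include <sstream>
#include <string>

// Hypothetical stand-in for the Graphite rollup parameters.
struct GraphiteMergeParams
{
    std::string version_column;
    std::string rules;   // flattened retention/aggregation rules
};

// Serialize deterministically, then hash; only the hash goes to ZooKeeper.
// std::hash is a stand-in for the hash function ClickHouse actually uses.
size_t graphiteParamsHash(const GraphiteMergeParams & params)
{
    std::ostringstream out;
    out << "version column: " << params.version_column << '\n'
        << "rules: " << params.rules << '\n';
    return std::hash<std::string>{}(out.str());
}
```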
Randomly modify structural characters of a valid JSON string ('{', '}', '[', ']',
':', '"', ',') to generate output that can no longer be parsed as JSON.
Follow-up to https://github.com/ClickHouse/ClickHouse/pull/56490
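A sketch of that mutation with illustrative names (not the fuzzer's actual
code): pick positions holding structural characters and overwrite them with
a different structural character, which almost always breaks parsing.

```cpp
#include <random>
#include <string>
#include <string_view>

std::string corruptJSONStructure(std::string json, std::mt19937 & rng, size_t flips = 3)
{
    static constexpr std::string_view structural = "{}[]:\",";
    if (json.empty())
        return json;

    std::uniform_int_distribution<size_t> pos_dist(0, json.size() - 1);
    std::uniform_int_distribution<size_t> chr_dist(0, structural.size() - 1);

    // Bound the attempts in case the input has few structural characters.
    for (size_t done = 0, attempts = 0; done < flips && attempts < 1000; ++attempts)
    {
        size_t pos = pos_dist(rng);
        if (structural.find(json[pos]) == std::string_view::npos)
            continue;   // only structural characters are mutated

        char replacement = structural[chr_dist(rng)];
        if (replacement == json[pos])
            continue;   // ensure the character actually changes

        json[pos] = replacement;
        ++done;
    }
    return json;
}
```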
It is not safe to use statistics because of how KafkaEngine works: it
pre-creates consumers, which leads to a situation where statistics entries
are generated (RD_KAFKA_OP_STATS) but never consumed. This creates a live
memory leak on a server that has Kafka tables but no materialized view
attached to them (and no SELECTs).
Another problem is that this makes shutdown very slow, because of how
pending queue entries are handled in librdkafka: it uses
TAILQ_INSERT_SORTED, a sorted insert into a linked list, which is
incredibly slow (most likely you will kill the server before it ever
finishes).
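To see why this is so slow (a generic illustration, not librdkafka's
TAILQ_INSERT_SORTED macro itself): every sorted insert scans the list from
the head, so queueing n entries costs O(n^2) comparisons overall.

```cpp
#include <list>

// Sorted insert into a linked list, analogous in cost to
// TAILQ_INSERT_SORTED: the linear scan runs on *every* insert,
// so n inserts perform O(n^2) work in total.
template <typename T>
void insertSorted(std::list<T> & queue, const T & value)
{
    auto it = queue.begin();
    while (it != queue.end() && *it < value)   // linear scan from the head
        ++it;
    queue.insert(it, value);
}
```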
For instance, in my production setup the server had been running for ~67
days with such a table and had accumulated 1'942'233 `TAILQ_INSERT_SORTED`
entries (which matches perfectly, by the way: at one statistics entry every
3 seconds, `67*86400/3` = 1'929'600). Yet it drained only 289'806 entries
over a few hours, though I'm not sure how much of that time the process
actually spent running, since a debugger was attached most of the time.
So for now let's disable it, to keep this patch easy to backport; I will
think about a long-term fix - do not pre-create consumers in the Kafka
engine.
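At the librdkafka level, disabling statistics means setting
`statistics.interval.ms` to 0, which is its documented default; a minimal
sketch of that knob (not the exact ClickHouse patch):

```cpp
#include <cstdio>
#include <librdkafka/rdkafka.h>

int main()
{
    rd_kafka_conf_t * conf = rd_kafka_conf_new();
    char errstr[512];

    // "statistics.interval.ms" = "0" disables emission of statistics
    // (RD_KAFKA_OP_STATS) entirely, so nothing can accumulate on the
    // queue of a pre-created but never-polled consumer.
    if (rd_kafka_conf_set(conf, "statistics.interval.ms", "0",
                          errstr, sizeof(errstr)) != RD_KAFKA_CONF_OK)
        std::fprintf(stderr, "conf error: %s\n", errstr);

    rd_kafka_conf_destroy(conf);
    return 0;
}
```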
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>