mirror of
https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-27 01:51:59 +00:00
7210be1534
In case of INSERT into a Distributed table with send_logs_level != none, it is possible to receive tons of Log packets, and without consuming them properly the socket buffer fills up and eventually the query hangs. This happens because the receiver will not read data until it has sent its Log packets, but the sender does not read those Log packets, so the receiver hangs, and hence the sender hangs too, because the receiver no longer consumes Data packets.

In the initial version of this patch I tried to consume Log packets properly, but it is not possible to ensure that all Log packets have been consumed before writing Data blocks. In other words, with the current protocol implementation Log packet consumption cannot be fixed properly to avoid the deadlock, so send_logs_level has simply been disabled. Note that from the user's point of view this is no different from what ClickHouse did before: it simply did not consume those packets, so the client never saw those messages anyway.

<details>

The receiver:

Poco::Net::SocketImpl::poll(Poco::Timespan const&, int)
Poco::Net::SocketImpl::sendBytes(void const*, int, int)
Poco::Net::StreamSocketImpl::sendBytes(void const*, int, int)
DB::WriteBufferFromPocoSocket::nextImpl()
DB::TCPHandler::sendLogData(DB::Block const&)
DB::TCPHandler::sendLogs()
DB::TCPHandler::readDataNext()
DB::TCPHandler::processInsertQuery()

State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
ESTAB 4331792 211637 127.0.0.1:9000 127.0.0.1:24446 users:(("clickhouse-serv",pid=46874,fd=3850))

The sender:

Poco::Net::SocketImpl::poll(Poco::Timespan const&, int)
Poco::Net::SocketImpl::sendBytes(void const*, int, int)
Poco::Net::StreamSocketImpl::sendBytes(void const*, int, int)
DB::WriteBufferFromPocoSocket::nextImpl()
DB::WriteBuffer::write(char const*, unsigned long)
DB::CompressedWriteBuffer::nextImpl()
DB::WriteBuffer::write(char const*, unsigned long)
DB::SerializationString::serializeBinaryBulk(DB::IColumn const&, DB::WriteBuffer&, unsigned long, unsigned long) const
DB::NativeWriter::write(DB::Block const&)
DB::Connection::sendData(DB::Block const&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool)
DB::RemoteInserter::write(DB::Block)
DB::RemoteSink::consume(DB::Chunk)
DB::SinkToStorage::onConsume(DB::Chunk)

State Recv-Q Send-Q Local Address:Port Peer Address:Port Process
ESTAB 67883 3008240 127.0.0.1:24446 127.0.0.1:9000 users:(("clickhouse-serv",pid=41610,fd=25))

</details>

v2: rebase to use clickhouse_client_timeout and add clickhouse_test_wait_queries
v3: use KILL QUERY
v4: adjust the test
v5: disable send_logs_level for INSERT into Distributed
v6: add no-backward-compatibility-check tag

Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
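The write/write deadlock described above is a generic TCP property, not anything ClickHouse-specific: once both peers are blocked in send() and neither drains its receive buffer, the kernel buffers fill and both sides stall forever. A minimal sketch (not ClickHouse code; the 4096-byte chunk size is arbitrary) demonstrates the finite-buffer half of this with a non-blocking socket pair, so the demo terminates where a blocking sender would hang:

```python
import socket

# Connected pair: `a` plays the sender, `b` plays a peer that never reads
# (like the receiver above, stuck trying to send Log packets).
a, b = socket.socketpair()
a.setblocking(False)

sent = 0
try:
    while True:
        sent += a.send(b"x" * 4096)  # peer never drains its Recv-Q
except BlockingIOError:
    # Both the local Send-Q and the peer's Recv-Q are full.  A blocking
    # sender would hang right here -- the state both clickhouse-serv
    # processes are in, per the ss(8) output in the stack traces above.
    pass

print(f"kernel buffers filled after {sent} bytes")
a.close()
b.close()
```

With two such peers each blocked in a send to the other, neither ever reaches a recv(), which is why the patch avoids generating the Log packets in the first place rather than trying to interleave their consumption.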
Directory listing:

- ci
- config
- fuzz
- instructions
- integration
- jepsen.clickhouse-keeper
- perf_drafts
- performance
- queries
- .gitignore
- clickhouse-test
- CMakeLists.txt
- msan_suppressions.txt
- stress
- tsan_suppressions.txt
- ubsan_suppressions.txt