The existing code that looks up marks matching the query has a pathological
case when most of the part does in fact match the query.
The code works by recursively splitting a part into ranges and then discarding
the ranges that definitely do not match the query, based on the primary key.
The problem is that it requires visiting every mark that matches the query,
making the complexity of this kind of lookup O(n).
For queries that match an exact range on the primary key, we can instead find
both the left and right bounds of that range by binary search, i.e. with O(log n) complexity.
This change implements exactly that.
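Here is a minimal sketch of the idea, assuming a plain sorted vector of per-mark
key values standing in for the sparse primary-key index; the single-column
`UInt64` key and the `lookupExactRange`/`MarkRange` names are illustrative,
not the actual MergeTree code:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Half-open range of marks [begin, end) that may contain matching rows.
struct MarkRange { size_t begin; size_t end; };

// Find the contiguous range of marks that may contain keys in
// [key_left, key_right] with two binary searches, instead of a linear
// scan that visits every matching mark.
MarkRange lookupExactRange(const std::vector<uint64_t> & index,
                           uint64_t key_left, uint64_t key_right)
{
    // First mark whose starting key is greater than key_left; the previous
    // granule may still contain key_left, so step one mark back.
    auto left = std::upper_bound(index.begin(), index.end(), key_left);
    size_t begin = (left == index.begin()) ? 0 : static_cast<size_t>(left - index.begin()) - 1;

    // First mark whose starting key is greater than key_right; that granule
    // and everything after it definitely do not match.
    auto right = std::upper_bound(index.begin(), index.end(), key_right);
    size_t end = static_cast<size_t>(right - index.begin());

    return {begin, std::max(begin, end)};
}
```

Each binary search probe roughly corresponds to one of the "steps" reported in
the trace messages below, as opposed to one step per matching mark in the
generic exclusion search.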
To engage this optimization, the query must:
* Constrain a prefix of the primary key columns.
* Use only range or single-element set constraints on those columns.
* Use only AND as a boolean operator.
Consider a table with `(service, timestamp)` as the primary key.
The following conditions will be optimized:
* `service = 'foo'`
* `service = 'foo' and timestamp >= now() - 3600`
* `service in ('foo')`
* `service in ('foo') and timestamp >= now() - 3600 and timestamp <= now`
The following will fall back to the previous lookup algorithm:
* `timestamp >= now() - 3600`
* `service in ('foo', 'bar') and timestamp >= now() - 3600`
* `service = 'foo' or service = 'bar'`
Note that the optimization won't engage when the PK has a range expression
followed by a point expression (e.g. `service >= 'a' and timestamp = now()`),
since in that case the matching rows do not form a contiguous range.
Trace-level query logging provides the following types of messages,
each representing a different kind of PK usage for a part:
```
Used optimized inclusion search over index for part 20200711_5710108_5710108_0 with 9 steps
Used generic exclusion search over index for part 20200711_5710118_5710228_5 with 1495 steps
Not using index on part 20200710_5710473_5710473_0
```
The number of steps translates directly into the computational cost of the lookup.
Here is a before/after comparison for a query over 24h of data:
```
Read 4562944 rows, 148.05 MiB in 45.19249672 sec., 100966 rows/sec., 3.28 MiB/sec.
Read 4183040 rows, 135.78 MiB in 0.196279627 sec., 21311636 rows/sec., 691.75 MiB/sec.
```
This is especially useful for queries that read data in order
and terminate early to return "last X things" matching a query.
See #11564 for more thoughts on this.
* commit 'ceac649c01b0158090cd271776f3219f5e7ff57c': (75 commits)
[docs] split misc statements (#12403)
Update 00405_pretty_formats.reference
Update PrettyCompactBlockOutputFormat.cpp
Update PrettyBlockOutputFormat.cpp
Update DataTypeNullable.cpp
Update 01383_remote_ambiguous_column_shard.sql
add output_format_pretty_grid_charset setting in docs
add setting output_format_pretty_grid_charset
Added a test for #11135
Update index.md
RIGHT and FULL JOIN for MergeJoin (#12118)
Update MergeTreeIndexFullText.cpp
restart the tests
[docs] add syntax highlight (#12398)
query fuzzer
Fix std::bad_typeid when JSON functions called with argument of wrong type.
Allow typeid_cast() to cast nullptr to nullptr.
fix another context-related segfault
[security docs] actually, only admins can create advisories
query fuzzer
...
The Buffer engine is usually used for INSERTs, but right now there is no way
to track the number of INSERTed rows per table, since only summary metrics
exist:
- StorageBufferRows
- StorageBufferBytes
But it can be pretty useful to track the INSERTed rows rate (and it can be
exposed via http_handlers, e.g. for Prometheus).
ReadBufferFromKafkaConsumer does not handle the case when there is a
message with an error at a non-first position in the current batch, since
it goes through the messages in the batch after poll() and stops at the
first valid message.
But later it can still try to use such a message as if it were valid:
- while storing the offset
- while getting the topic name
- ...
And besides, the message itself is also invalid (you can see this in the
gdb traces below).
So just filter out messages with an error right after poll().
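A minimal sketch of that filtering step, assuming a `std::vector<cppkafka::Message>`
as returned by cppkafka's poll; the helper name `filterMessagesWithErrors` is
illustrative, not the actual ReadBufferFromKafkaConsumer code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

#include <cppkafka/message.h>

// Drop every message that carries an error so that later code
// (storing offsets, reading the topic name, parsing the payload)
// only ever sees valid messages. Returns how many were dropped.
size_t filterMessagesWithErrors(std::vector<cppkafka::Message> & messages)
{
    size_t before = messages.size();

    auto new_end = std::remove_if(messages.begin(), messages.end(),
        [](const cppkafka::Message & message)
        {
            // get_error() is truthy for errors such as the
            // RD_KAFKA_RESP_ERR__TRANSPORT seen in the trace below.
            return static_cast<bool>(message.get_error());
        });
    messages.erase(new_end, messages.end());

    return before - messages.size();
}
```

Doing this once per poll keeps the rest of the read path (offset bookkeeping,
payload parsing) unchanged.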
SIGSEGV was with the following stacktrace:
```
(gdb) bt
#3  0x0000000010f05b4d in rd_kafka_offset_store (app_rkt=0x0, partition=0, offset=0) at ../contrib/librdkafka/src/rdkafka_offset.c:656
#4  0x0000000010e69657 in cppkafka::Consumer::store_offset (this=0x7f2015210820, msg=...) at ../contrib/cppkafka/include/cppkafka/message.h:225
#5  0x000000000e68f208 in DB::ReadBufferFromKafkaConsumer::storeLastReadMessageOffset (this=0x7f206a136618) at ../contrib/libcxx/include/iterator:1508
#6  0x000000000e68b207 in DB::KafkaBlockInputStream::readImpl (this=0x7f202c689020) at ../src/Storages/Kafka/KafkaBlockInputStream.cpp:150
#7  0x000000000dd1178d in DB::IBlockInputStream::read (this=this@entry=0x7f202c689020) at ../src/DataStreams/IBlockInputStream.cpp:60
#8  0x000000000dd34c0a in DB::copyDataImpl<> () at ../src/DataStreams/copyData.cpp:21
#9  DB::copyData () at ../src/DataStreams/copyData.cpp:62
#10 0x000000000e67c8f2 in DB::StorageKafka::streamToViews () at ../contrib/libcxx/include/memory:3823
#11 0x000000000e67d218 in DB::StorageKafka::threadFunc () at ../src/Storages/Kafka/StorageKafka.cpp:488
```
And some information from it:
```
(gdb) p this.current.__i
$14 = (std::__1::__wrap_iter<cppkafka::Message const*>::iterator_type) 0x7f1ca8f58660
# current-1
(gdb) p $14-1
$15 = (const cppkafka::Message *) 0x7f1ca8f58600
(gdb) p $16.handle_
$17 = {__ptr_ = {<std::__1::__compressed_pair_elem<rd_kafka_message_s*, 0, false>> = { __value_ = 0x7f203577f938}, ...}
(gdb) p *(rd_kafka_message_s*)0x7f203577f938
$24 = {err = RD_KAFKA_RESP_ERR__TRANSPORT, rkt = 0x0, partition = 0, payload = 0x7f202f0339c0, len = 63, key = 0x0, key_len = 0, offset = 0, _private = 0x7f203577f8c0}
# current
(gdb) p $14-0
$28 = (const cppkafka::Message *) 0x7f1ca8f58660
(gdb) p $28.handle_.__ptr_
$29 = {<std::__1::__compressed_pair_elem<rd_kafka_message_s*, 0, false>> = { __value_ = 0x7f184f129bf0}, ...}
(gdb) p *(rd_kafka_message_s*)0x7f184f129bf0
$30 = {err = RD_KAFKA_RESP_ERR_NO_ERROR, rkt = 0x7f1ed44fe000, partition = 1, payload = 0x7f1fc9bc6036, len = 242, key = 0x0, key_len = 0, offset = 2394853582209,
# current+1
(gdb) p (*($14+1)).handle_.__ptr_
$44 = {<std::__1::__compressed_pair_elem<rd_kafka_message_s*, 0, false>> = { __value_ = 0x7f184f129d30}, ...}
(gdb) p *(rd_kafka_message_s*)0x7f184f129d30
$45 = {err = RD_KAFKA_RESP_ERR_NO_ERROR, rkt = 0x7f1ed44fe000, partition = 1, payload = 0x7f1fc9bc612f, len = 31, key = 0x0, key_len = 0, offset = 2394853582210,
_private = 0x7f184f129cc0}
# distance from the beginning
(gdb) p messages.__end_-messages.__begin_
$34 = 65536
(gdb) p ($14-0)-messages.__begin_
$37 = 8965
(gdb) p ($14-1)-messages.__begin_
$38 = 8964
# parsing info
(gdb) p allowed
$39 = false
(gdb) p new_rows
$40 = 1
(gdb) p total_rows
$41 = 8964
# current buffer is invalid
(gdb) p *buffer.__ptr_
$50 = {<DB::ReadBuffer> = {<DB::BufferBase> = {pos = 0x7f202f0339c0 "FindCoordinator response error: Local: Broker transport failure", bytes = 47904863385, working_buffer = {
begin_pos = 0x7f202f0339c0 "FindCoordinator response error: Local: Broker transport failure",
end_pos = 0x7f202f0339c0 "FindCoordinator response error: Local: Broker transport failure"}, internal_buffer = {
```
v0: check message errors in ReadBufferFromKafkaConsumer::nextImpl() (but
this may still lead to using those messages afterwards, and SIGSEGV again, doh).
v2: skip messages with an error right after poll.
MOTIVATION:
- remove the double-conversion external dependency
- remove flatc (flatbuffers is still required, but arrow ships with the
generated files and that's it)
CHANGED:
- remove pre-generated headers, they are shipped with arrow
- remove flatc (see above)
NOTES (see test changes):
- a snappy error is reported as unsupported compression.
gtest reports:
```
[ RUN ] GoogleTestVerification.UninstantiatedParameterizedTestSuite<CodecTestPerformance>
../src/Compression/tests/gtest_compressionCodec.cpp:590: Failure
Parameterized test suite CodecTestPerformance is defined via TEST_P, but never instantiated. None of the test cases will run. Either no INSTANTIATE_TEST_SUITE_P is provided or the only ones provided expand to nothing.
Ideally, TEST_P definitions should only ever be included as part of binaries that intend to use them. (As opposed to, for example, being placed in a library that may be linked in to get other utilities.)
To suppress this error for this test suite, insert the following line (in a non-header) in the namespace it is defined in:
GTEST_ALLOW_UNINSTANTIATED_PARAMETERIZED_TEST(CodecTestPerformance);
[ FAILED ] GoogleTestVerification.UninstantiatedParameterizedTestSuite<CodecTestPerformance> (0 ms)
```
There are still some issues with memory tracking, now showing up with the
per-user tracker:
```
executeQuery: Code: 241, e.displayText() = DB::Exception: Memory limit (for user) exceeded: would use 437.72 GiB (attempt to allocate chunk of 4200926 bytes), maximum: 437.72 GiB (version 20.6.1.1) (from 10.7.140.7:31318)
```
Although the server is mostly idle:
```
SELECT formatReadableSize(memory_usage)
FROM system.processes
┌─formatReadableSize(memory_usage)─┐
│ 289.28 MiB │
│ 155.75 MiB │
│ 0.00 B │
└──────────────────────────────────┘
```
Refs: https://github.com/ClickHouse/ClickHouse/pull/10496/files#r450206865
Cc: @alexey-milovidov
boost::format does not compile under gcc10:
```
from ../src/Compression/tests/gtest_compressionCodec.cpp:14:
/usr/include/boost/format/alt_sstream_impl.hpp: In instantiation of ‘boost::io::basic_altstringbuf<Ch, Tr, Alloc>::int_type boost::io::basic_altstringbuf<Ch, Tr, Alloc>::overflow(boost::io::basic_altstringbuf<Ch, Tr, Alloc>::int_type) [with Ch = char; Tr = std::char_traits<char>; Alloc = std::allocator<char>; boost::io::basic_altstringbuf<Ch, Tr, Alloc>::int_type = int]’:
/usr/include/boost/format/alt_sstream_impl.hpp:227:9: required from here
/usr/include/boost/format/alt_sstream_impl.hpp:261:45: error: no matching function for call to ‘std::allocator<char>::allocate(std::size_t&, char*)’
261 | newptr = alloc_.allocate(new_size, is_allocated_? oldptr : 0);
```
(although this is the system-wide Boost, it is pretty recent - 1.72; the two-argument `allocate(n, hint)` overload it calls here was removed in C++20)
This patch changes the place where the dictionary is loaded (it now happens
during syntax analysis), but I guess this is fine, since it will be loaded anyway.
Fixes: #10342