Resolves #49445
The query cache buffers query result chunks and eventually squashes
them before inserting them into the cache. Here, squashing failed because not
all chunks were of the same type: chunks with the same
underlying type (e.g. UInt8) in a query result can come in mixed const, sparse
or low-cardinality representations. Fix this by always materializing the data,
regardless of its representation. Strangely, the failing query
in the stress test (*) does not reproduce the bug, and I haven't
managed to trigger the issue otherwise, so no test case is added.
(*) SELECT 1 UNION ALL SELECT 1 INTERSECT SELECT 1
E.g. here: https://s3.amazonaws.com/clickhouse-test-reports/0/18817517ed6f8849e3d979e10fbb273e0edf0eaa/stress_test__debug_/fatal_messages.txt
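For reference, a minimal sketch of the kind of reproduction attempt described above, using the existing 'use_query_cache' setting to route the result through the query cache (illustrative only; as noted, this query did not actually trigger the squashing failure outside the stress test):

```sql
-- Illustrative reproduction attempt (did not trigger the bug locally, see above).
-- Enable the query cache for the session, then run the stress-test query.
SET use_query_cache = 1;
SELECT 1 UNION ALL SELECT 1 INTERSECT SELECT 1;
```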
ClickHouse reads table data in blocks of 'max_block_size' rows. Due to
filtering, aggregation, etc., result blocks are typically much smaller
than 'max_block_size' but there are also cases where they are much
bigger. Setting 'query_cache_squash_partial_results' (enabled by
default) now controls whether result blocks are squashed (if they are tiny)
or split (if they are large) into blocks of 'max_block_size' rows before
insertion into the query cache. This reduces the performance of
writes into the query cache but improves the compressibility of cache
entries and provides a more natural block granularity when query results
are later served from the query cache.
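For example (an illustrative sketch; the query itself is just a placeholder), the squashing/splitting step can be disabled per query so that result blocks are cached in whatever granularity the pipeline produces:

```sql
-- Illustrative: cache result blocks as produced by the pipeline,
-- without squashing tiny blocks or splitting oversized ones.
SELECT number
FROM system.numbers
LIMIT 100
SETTINGS use_query_cache = 1, query_cache_squash_partial_results = 0;
```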
Entries in the query cache are now also compressed by default. This
reduces the overall memory consumption at the cost of slower writes to
and reads from the query cache. To disable compression, turn off setting
'query_cache_compress_entries'.
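For example (illustrative; the query itself is just a placeholder), to store the result of a single query uncompressed in the query cache:

```sql
-- Illustrative: keep this cache entry uncompressed
-- (faster cache writes/reads, higher memory consumption in the cache).
SELECT toString(number)
FROM system.numbers
LIMIT 1000
SETTINGS use_query_cache = 1, query_cache_compress_entries = 0;
```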