This will allow distinguishing really corrupted data, since right now if
you CREATE/DETACH/ATTACH a table with such an engine you get the
following error [1]:
2022.08.05 20:02:20.726398 [ 696405 ] {} <Error> StorageFileLog (file_log): Metadata files of table file_log are lost.
[1]: https://s3.amazonaws.com/clickhouse-test-reports/39926/72961328f68b1ec05300d6dc4411a87618a2f46b/stress_test__debug_.html
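For illustration, a minimal sequence of this shape used to trigger the
error (the table name and path here are hypothetical, a sketch rather
than the exact test):

    CREATE TABLE file_log (data String)
        ENGINE = FileLog('/var/lib/clickhouse/user_files/logs/', 'JSONEachRow');
    DETACH TABLE file_log;
    -- Before this patch the metadata directory had never been created,
    -- so ATTACH failed with "Metadata files of table file_log are lost."
    ATTACH TABLE file_log;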
Likely it was not created previously to avoid creating empty
directories; however, that should not be a problem, I guess.
Refs: #25969 (@ucasfl)
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
max_server_memory_usage is already set to 75%, so OOM should not
happen; the reason is that RSS does not match the memory tracker
statistics:
2022.08.05 12:36:57.869896 [ 82524 ] {} <Trace> AsynchronousMetrics: MemoryTracking: was 64.69 GiB, peak 65.26 GiB, will set to 62.80 GiB (RSS), difference: -1.89 GiB
...
2022.08.05 12:37:00.213440 [ 82334 ] {} <Error> void DB::MergeTreeBackgroundExecutor<DB::MergeMutateRuntimeQueue>::routine(DB::TaskRuntimeDataPtr) [Queue = DB::MergeMutateRuntimeQueue]: Code: 241. DB::Exception: Memory limit (total) exceeded: would use 64.68 GiB (attempt to allocate chunk of 1298794 bytes), maximum: 51.44 GiB. OvercommitTracker decision: Memory overcommit isn't used. Waiting time or orvercommit denominator are set to zero.. (MEMORY_LIMIT_EXCEEDED), Stack trace (when copying this message, always include the lines below):
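A rough way to observe this drift on a live server (a sketch, assuming
the usual 'MemoryTracking' and 'MemoryResident' metric names; not part
of this patch):

    -- 'MemoryTracking' is the tracker's view, 'MemoryResident' is RSS
    SELECT
        (SELECT value FROM system.metrics
         WHERE metric = 'MemoryTracking') AS tracker_bytes,
        (SELECT value FROM system.asynchronous_metrics
         WHERE metric = 'MemoryResident') AS rss_bytes,
        tracker_bytes - rss_bytes AS drift_bytes;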
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Instead of checking that the number of processors differs between
threads, simply always return an empty header from buildChainImpl(), by
adding an explicit conversion.
v2: ignore UNKNOWN_TABLE errors in test
In the case of parallel INSERT (max_insert_threads > 1) it is possible
for a VIEW to be DROP/DETACH'ed while the pipeline is being built for
the various parallel streams, and in this case the headers will not
match, since with a VIEW you get an empty header and a non-empty header
otherwise.
And this leads to a LOGICAL_ERROR later, while checking that the output
headers are the same (in QueryPipelineBuilder::addChains() -> Pipe::addChains()).
However, it also makes the pipeline different for the various parallel
streams, and it looks like it is better to fail in this case, so instead
of always returning an empty header from buildChainImpl(), an explicit
check has been added.
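For illustration, the shape of the race (all names are made up; a
sketch of the scenario, not the exact added test):

    CREATE TABLE src (x UInt64) ENGINE = Null;
    CREATE TABLE dst (x UInt64) ENGINE = MergeTree ORDER BY x;
    CREATE MATERIALIZED VIEW mv TO dst AS SELECT x FROM src;

    -- session 1: builds one chain per insert thread
    SET max_insert_threads = 16;
    INSERT INTO src SELECT number FROM numbers(10000000);

    -- session 2, concurrently, while the INSERT pipeline is being built:
    DETACH TABLE mv;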
Note that I wasn't able to reproduce the issue with the added test,
but CI may have more "luck" (although I've verified it manually).
Fixes: #35902
Cc: @KochetovNicolai
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>