Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-19 06:01:57 +00:00
3a5a39a9df
For async S3 writes, flushing of the final part was deferred until the whole INSERT block had been processed; with too many partitions/columns this can exceed the max_memory_usage limit, since each stream carries its own overhead. Introduce max_insert_delayed_streams_for_parallel_writes (default 1000 for S3, 0 otherwise) to avoid this. This should fix "Memory limit exceeded" errors in performance tests. Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
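As a rough illustration of how such a limit could be applied per query (the setting name comes from the commit above; the table names and the S3 engine setup are hypothetical, and the value shown is just the stated default, not a tuning recommendation):

```sql
-- Hypothetical sketch: an INSERT into an S3-backed table that writes many
-- partitions at once. Capping the number of delayed (not-yet-flushed)
-- streams bounds the per-stream memory overhead that the commit describes.
INSERT INTO s3_destination
SELECT *
FROM wide_source_table
SETTINGS max_insert_delayed_streams_for_parallel_writes = 1000;
```

When the number of delayed streams reaches the limit, pending parts are flushed early instead of all being held in memory until the end of the block.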
..
ci
config
fuzz
instructions
integration
jepsen.clickhouse-keeper
perf_drafts
performance
queries
testflows
.gitignore
clickhouse-test
CMakeLists.txt
msan_suppressions.txt
stress
tsan_suppressions.txt
ubsan_suppressions.txt