Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-28 02:21:59 +00:00), commit 3a5a39a9df
For async S3 writes, flushing of the final part was deferred until the whole INSERT block was processed; however, with too many partitions/columns you may exceed the max_memory_usage limit (since each stream has overhead). Introduce max_insert_delayed_streams_for_parallel_writes (with a default of 1000 for S3, 0 otherwise) to avoid this. This should avoid "Memory limit exceeded" errors in performance tests. Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
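A minimal sketch of how such a setting might be applied, assuming it is a MergeTree-level table setting (the table name and exact scope here are assumptions for illustration, not confirmed by the commit):

```sql
-- Hypothetical example: cap the number of delayed streams kept open
-- during an INSERT so per-stream overhead stays within max_memory_usage.
-- Setting name is taken from the commit message; table definition is assumed.
CREATE TABLE t_s3
(
    key UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY key
SETTINGS max_insert_delayed_streams_for_parallel_writes = 100;
```

Lowering the value trades some write parallelism for a smaller memory footprint when an INSERT touches many partitions or columns.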
- constraints-on-settings.md
- index.md
- merge-tree-settings.md
- permissions-for-queries.md
- query-complexity.md
- settings-profiles.md
- settings-users.md
- settings.md