---
title: 'How to speed up LZ4 decompression in ClickHouse?'
image: 'https://blog-images.clickhouse.tech/en/2019/how-to-speed-up-lz4-decompression-in-clickhouse/main.jpg'
date: '2019-06-25'
tags: ['performance', 'lz4', 'article', 'decompression']
---
When you run queries in [ClickHouse](https://clickhouse.tech/), you might notice that the profiler often shows the `LZ_decompress_fast` function near the top. What is going on? This question made us wonder how to choose the best compression algorithm.
ClickHouse stores data in compressed form. When running queries, ClickHouse tries to do as little as possible in order to conserve CPU resources. In many cases, all the potentially time-consuming computations are already well optimized, and the user has written a well-thought-out query. Then all that's left to do is to perform decompression.
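To give a feel for what that decompression step involves: LZ4 block data is a stream of sequences, each consisting of a run of literal bytes followed by a back-reference (a 2-byte offset and a match length) into the already-decoded output. The sketch below is a minimal, unhardened LZ4 block-format decoder in Python, written purely for illustration; it is not ClickHouse's `LZ_decompress_fast`, which is a heavily optimized C++ routine.

```python
def lz4_block_decompress(src: bytes) -> bytes:
    """Minimal LZ4 block-format decoder (illustrative sketch, no bounds hardening)."""
    out = bytearray()
    i, n = 0, len(src)
    while i < n:
        token = src[i]; i += 1
        # Literal length lives in the high nibble; 15 means "read extension bytes".
        lit_len = token >> 4
        if lit_len == 15:
            while True:
                b = src[i]; i += 1
                lit_len += b
                if b != 255:
                    break
        out += src[i:i + lit_len]
        i += lit_len
        if i >= n:  # the last sequence carries literals only
            break
        # 2-byte little-endian offset pointing back into the decoded output.
        offset = src[i] | (src[i + 1] << 8)
        i += 2
        # Match length is the low nibble plus the 4-byte minimum match.
        match_len = (token & 0x0F) + 4
        if (token & 0x0F) == 15:
            while True:
                b = src[i]; i += 1
                match_len += b
                if b != 255:
                    break
        # Byte-at-a-time copy so overlapping (RLE-style) matches work correctly.
        pos = len(out) - offset
        for _ in range(match_len):
            out.append(out[pos])
            pos += 1
    return bytes(out)

# Hand-built block: 3 literals "abc", then a 6-byte overlapping match at offset 3.
block = bytes([0x32]) + b"abc" + bytes([0x03, 0x00])
print(lz4_block_decompress(block))  # b'abcabcabc'
```

The byte-at-a-time match copy is exactly the kind of inner loop the linked article is about: real decoders replace it with wide, possibly overlapping vector copies, which is where most of the speedup comes from.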
[Read further](https://habr.com/en/company/yandex/blog/457612/)