The SSE 4.2 instruction set must be supported. Modern processors (since 2008) support it.
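You can verify support by looking for the flag in `/proc/cpuinfo`:

```bash
# Prints a confirmation if the sse4_2 flag is present in the CPU flags list.
grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
```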
When choosing a processor, prefer a large number of cores and slightly slower clock rate over fewer cores and a higher clock rate.
For example, 16 cores with 2600 MHz is better than 8 cores with 3600 MHz.
## Hyper-threading
Don't disable hyper-threading. It helps for some queries, but not for others.
## Turbo Boost
Turbo Boost is highly recommended. It significantly improves performance with a typical load.
You can use `turbostat` to view the CPU's actual clock rate under a load.
## CPU scaling governor
Always use the `performance` scaling governor. The `ondemand` scaling governor works much worse under constantly high demand.
```bash
# sudo must wrap tee (not echo) for the write to sysfs to succeed; the shell expands cpu*.
echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```
## CPU limitations
Processors can overheat. Use `dmesg` to see if the CPU's clock rate was limited due to overheating.
The restriction can also be set externally at the datacenter level. You can use `turbostat` to monitor it under a load.
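As a sketch, assuming `turbostat` is available (usually via the `linux-tools` package) and noting that the exact throttling message wording varies by CPU and driver:

```bash
# Look for thermal throttling messages in the kernel log.
dmesg | grep -i throttl
# Watch per-core clock rates while a heavy query is running.
sudo turbostat
```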
## RAM
For small amounts of data (up to ~200 GB compressed), it is best to use as much memory as the volume of data.
For large amounts of data and when processing interactive (online) queries, you should use a reasonable amount of RAM (128 GB or more) so the hot data subset will fit in the cache of pages.
Even for data volumes of ~50 TB per server, using 128 GB of RAM significantly improves query performance compared to 64 GB.
## Swap file
Always disable the swap file. The only reason for not doing this is if you are using ClickHouse on your personal laptop.
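A minimal way to do this on a running system (to make it permanent, also remove or comment out the swap entries in `/etc/fstab`):

```bash
# Disable all active swap devices immediately.
sudo swapoff -a
```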
## Huge pages
Always disable transparent huge pages. It interferes with memory allocators, which leads to significant performance degradation.
```bash
echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```
Use `perf top` to watch the time spent in the kernel for memory management.
Permanent huge pages also do not need to be allocated.
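A quick check via the standard sysfs interface (the expected output is `0`):

```bash
# Number of permanently reserved huge pages; should be zero.
cat /proc/sys/vm/nr_hugepages
```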
## Storage subsystem
If your budget allows you to use SSD, use SSD.
If not, use HDD. SATA HDDs 7200 RPM will do.
Prefer a large number of servers with local hard drives over a smaller number of servers with attached disk shelves.
But for storing archives with rare queries, shelves will work.
## RAID
When using HDDs, you can combine them into RAID-10, RAID-5, RAID-6, or RAID-50.
For Linux, software RAID is better (with `mdadm`). We don't recommend using LVM.
When creating RAID-10, select the `far` layout.
If your budget allows, choose RAID-10.
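As a sketch with hypothetical device names and array size (adjust to your hardware), creating a RAID-10 array with the `far` layout might look like this; `f2` selects the far layout with two copies:

```bash
# Create /dev/md2 as RAID-10 with the far layout (2 copies) and a 1024 KB chunk.
sudo mdadm --create /dev/md2 --level=10 --layout=f2 --chunk=1024 \
    --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
```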
If you have more than 4 disks, use RAID-6 (preferred) or RAID-50, instead of RAID-5.
When using RAID-5, RAID-6, or RAID-50, always increase `stripe_cache_size`, since the default value is usually not the best choice.
```bash
echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
```
Calculate the exact number from the number of devices and the block size, using the formula: `2 * num_devices * chunk_size_in_bytes / 4096`.
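For example, with 8 devices and a 512 KB chunk size (hypothetical numbers), the formula gives:

```bash
# 2 * num_devices * chunk_size_in_bytes / 4096
echo $((2 * 8 * 512 * 1024 / 4096))   # prints 2048
```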
A block size of 1024 KB is sufficient for all RAID configurations.
Never set the block size too small or too large.
You can use RAID-0 on SSD.
Regardless of RAID use, always use replication for data security.
Enable NCQ with a long queue. For HDD, choose the CFQ scheduler, and for SSD, choose noop. Don't reduce the `readahead` setting.
For HDD, enable the write cache.
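A sketch of these settings with hypothetical device names (`sda` as an HDD, `sdb` as an SSD); note that newer multi-queue kernels may name the schedulers differently (e.g. `none` instead of `noop`):

```bash
# Set the I/O scheduler per device.
echo cfq  | sudo tee /sys/block/sda/queue/scheduler
echo noop | sudo tee /sys/block/sdb/queue/scheduler
# Enable the write cache on the HDD.
sudo hdparm -W1 /dev/sda
```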
## File system
Ext4 is the most reliable option. Set the `noatime` and `nobarrier` mount options.
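For example, with a hypothetical `/dev/md2` array mounted at the default ClickHouse data path:

```bash
# Mount with access-time updates and write barriers disabled.
sudo mount -o noatime,nobarrier /dev/md2 /var/lib/clickhouse
```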
XFS is also suitable, but it hasn't been as thoroughly tested with ClickHouse.
Most other file systems should also work fine. File systems with delayed allocation work better.
## Linux kernel
Don't use an outdated Linux kernel. In 2015, 3.18.19 was new enough.
Consider using the kernel build from Yandex: <https://github.com/yandex/smart>. It provides at least a 5% performance increase.
## Network
If you are using IPv6, increase the size of the route cache.
The Linux kernel prior to 3.2 had a multitude of problems with its IPv6 implementation.
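The relevant knob is the `net.ipv6.route.max_size` sysctl; the value below is only an illustrative starting point (the stock default is much lower):

```bash
# Increase the IPv6 route cache size (example value; tune to your workload).
sudo sysctl -w net.ipv6.route.max_size=16384
```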
Use at least a 10 Gbit network, if possible. 1 Gbit will also work, but it will be much worse for restoring replicas with tens of terabytes of data, or for processing distributed queries with a large amount of intermediate data.
## ZooKeeper
You are probably already using ZooKeeper for other purposes. You can use the same installation of ZooKeeper, if it isn't already overloaded.
With the default settings, ZooKeeper is a time bomb:
> The ZooKeeper server won't delete files from old snapshots and logs when using the default configuration (see autopurge), and this is the responsibility of the operator.
This bomb must be defused.
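A minimal sketch of the defusing step, assuming a Debian-style config path (adjust the `zoo.cfg` location for your installation); `autopurge.snapRetainCount` and `autopurge.purgeInterval` are the standard ZooKeeper autopurge settings:

```bash
# Keep the 3 most recent snapshots and purge older ones every hour.
cat <<'EOF' | sudo tee -a /etc/zookeeper/conf/zoo.cfg
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
EOF
```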
The ZooKeeper (3.5.1) configuration below is used in the Yandex.Metrica production environment as of May 20, 2017: