---
slug: /en/operations/tips
sidebar_position: 58
sidebar_label: Usage Recommendations
title: "Usage Recommendations"
---
import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_automated.md';

<SelfManaged />

## CPU Scaling Governor

Always use the `performance` scaling governor. The `ondemand` scaling governor performs much worse under constantly high demand.

``` bash
$ echo 'performance' | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
```
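To verify the change, you can read the governor back for every core (a quick check, not required):

``` bash
# Each core should now report "performance"
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor | sort | uniq -c
```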
## CPU Limitations {#cpu-limitations}

Processors can overheat. Use `dmesg` to see if the CPU’s clock rate was limited due to overheating.
The restriction can also be set externally at the datacenter level. You can use `turbostat` to monitor it under load.
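As a rough sketch of what to look for (the exact log messages and tool output vary by kernel version and CPU vendor):

``` bash
# Look for thermal throttling events in the kernel log
sudo dmesg | grep -i -E 'throttl|temperature above threshold'

# Watch per-core frequency, C-states and package temperature under load, refreshing every 5 seconds
sudo turbostat --interval 5
```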
## RAM {#ram}

For small amounts of data (up to ~200 GB compressed), it is best to use as much memory as the volume of data.
For large amounts of data, and when processing interactive (online) queries, you should use a reasonable amount of RAM (128 GB or more) so the hot data subset fits in the page cache.
Even for data volumes of ~50 TB per server, using 128 GB of RAM significantly improves query performance compared to 64 GB.

Do not disable overcommit. The value of `cat /proc/sys/vm/overcommit_memory` should be 0 or 1. Run

``` bash
$ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
```
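The `echo` above only lasts until the next reboot. To make the setting persistent, you can put it into a `sysctl` drop-in file (the file name below is just an example):

``` bash
# Persist the overcommit setting across reboots
echo 'vm.overcommit_memory = 0' | sudo tee /etc/sysctl.d/99-clickhouse.conf
sudo sysctl --system   # reload all sysctl configuration files
```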
Use `perf top` to watch the time spent in the kernel for memory management.
Permanent huge pages also do not need to be allocated.

### Using less than 16GB of RAM

The recommended amount of RAM is 32 GB or more.

If your system has less than 16 GB of RAM, you may experience various memory exceptions because the default settings are not tuned for this amount of memory. You can use ClickHouse in a system with a small amount of RAM (as low as 2 GB), but these setups require additional tuning and can only ingest at a low rate.

When using ClickHouse with less than 16 GB of RAM, we recommend the following (a configuration sketch is shown after this list):

- Lower the size of the mark cache in the `config.xml`. It can be set as low as 500 MB, but it cannot be set to zero.
- Lower the number of query processing threads down to `1`.
- Lower the `max_block_size` to `8192`. Values as low as `1024` can still be practical.
- Lower `max_download_threads` to `1`.
- Set `input_format_parallel_parsing` and `output_format_parallel_formatting` to `0`.
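As a sketch of how the query-level settings above can be applied (the mark cache is a server-level setting and goes into `config.xml`, e.g. `<mark_cache_size>536870912</mark_cache_size>`; the values and the query below are examples only):

``` bash
# Query-level settings passed on the command line; the same values can be put
# into a users.xml profile instead.
clickhouse-client \
    --max_threads 1 \
    --max_block_size 8192 \
    --max_download_threads 1 \
    --input_format_parallel_parsing 0 \
    --output_format_parallel_formatting 0 \
    --query "SELECT count() FROM system.numbers LIMIT 1000000"
```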
Additional notes:
- To flush the memory cached by the memory allocator, you can run the `SYSTEM JEMALLOC PURGE` command (see the example after this list).
- We do not recommend using S3 or Kafka integrations on low-memory machines because they require significant memory for buffers.
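For example, the allocator purge mentioned above can be triggered from the shell:

``` bash
# Ask ClickHouse's jemalloc allocator to return cached memory to the OS
clickhouse-client --query "SYSTEM JEMALLOC PURGE"
```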
## Storage Subsystem {#storage-subsystem}

If your budget allows you to use SSD, use SSD.
If not, use HDD. SATA HDDs 7200 RPM will do.

Give preference to a lot of servers with local hard drives over a smaller number of servers with attached disk shelves.
But for storing archives with rare queries, shelves will work.

## RAID {#raid}

When using HDDs, you can combine them into RAID-10, RAID-5, RAID-6 or RAID-50.
For Linux, software RAID is better (with `mdadm`).
When creating RAID-10, select the `far` layout.
If your budget allows, choose RAID-10.

LVM by itself (without RAID or `mdadm`) is ok, but making RAID with it or combining it with `mdadm` is a less explored option, and there will be more chances for mistakes (selecting the wrong chunk size; misalignment of chunks; choosing the wrong RAID type; forgetting to clean up disks). If you are confident in using LVM, there is nothing against using it.

If you have more than 4 disks, use RAID-6 (preferred) or RAID-50, instead of RAID-5.
When using RAID-5, RAID-6 or RAID-50, always increase `stripe_cache_size`, since the default value is usually not the best choice.

``` bash
$ echo 4096 | sudo tee /sys/block/md2/md/stripe_cache_size
```

Calculate the exact number from the number of devices and the block size, using the formula: `2 * num_devices * chunk_size_in_bytes / 4096`. For example, with 8 devices and a 1 MiB chunk size: `2 * 8 * 1048576 / 4096 = 4096`.

A block size of 64 KB is sufficient for most RAID configurations. The average clickhouse-server write size is approximately 1 MB (1024 KB), and thus the recommended stripe size is also 1 MB. If needed, the block size can be optimized by setting it to 1 MB divided by the number of non-parity disks in the RAID array, so that each write is parallelized across all available non-parity disks.
Never set the block size too small or too large.

You can use RAID-0 on SSD.
Regardless of RAID use, always use replication for data security.

Enable NCQ with a long queue. For HDD, choose the mq-deadline or CFQ scheduler, and for SSD, choose noop. Don’t reduce the `readahead` setting.
For HDD, enable the write cache.
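A sketch of how these settings are applied (device names are examples; on modern multi-queue kernels the `noop` equivalent is called `none`):

``` bash
# Show available schedulers for a device; the active one is in brackets
cat /sys/block/sda/queue/scheduler

# HDD: mq-deadline;  SSD/NVMe: none (the multi-queue counterpart of noop)
echo mq-deadline | sudo tee /sys/block/sda/queue/scheduler
echo none | sudo tee /sys/block/nvme0n1/queue/scheduler

# Enable the write cache on an HDD (SATA example)
sudo hdparm -W1 /dev/sda
```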
Make sure that [`fstrim`](https://en.wikipedia.org/wiki/Trim_(computing)) is enabled for NVMe and SSD disks in your OS (usually it’s implemented using a cron job or systemd service).
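On systemd-based distributions this is typically the `fstrim.timer` unit:

``` bash
# Enable the periodic TRIM job (weekly on most distributions)
sudo systemctl enable --now fstrim.timer

# Or run a one-off TRIM of all mounted filesystems that support it
sudo fstrim -av
```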
## File System {#file-system}

Ext4 is the most reliable option. Set the `noatime` mount option. XFS works well too.
Most other file systems should also work fine.
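For example, assuming the ClickHouse data directory is its own mount point (device and path are examples), `noatime` can be applied immediately and then persisted in `/etc/fstab`:

``` bash
# Remount with noatime right away; keep the option in /etc/fstab
# (e.g. "defaults,noatime") so it survives reboots.
sudo mount -o remount,noatime /var/lib/clickhouse
```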
FAT-32 and exFAT are not supported due to lack of hard links.

Do not use compressed filesystems, because ClickHouse does compression on its own, and does it better.
It's not recommended to use encrypted filesystems, because you can use the built-in encryption in ClickHouse, which is better.

While ClickHouse can work over NFS, it is not the best idea.

## Linux Kernel {#linux-kernel}

Don't use an outdated Linux kernel.

## Network {#network}

If you are using IPv6, increase the size of the route cache.
Linux kernels prior to 3.2 had a multitude of problems with the IPv6 implementation.
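One way to do this is via `sysctl` (the value below is only an example; size it to your routing table):

``` bash
# Check and raise the IPv6 route cache limit
sysctl net.ipv6.route.max_size
sudo sysctl -w net.ipv6.route.max_size=16384
```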
Use at least a 10 Gbit network, if possible. 1 Gbit will also work, but it will be much worse for restoring replicas with tens of terabytes of data, or for processing distributed queries with a large amount of intermediate data.

## Huge Pages {#huge-pages}

If you are using an old Linux kernel, disable transparent huge pages. They interfere with the memory allocator, which leads to significant performance degradation.
On newer Linux kernels transparent huge pages are alright.

``` bash
$ echo 'madvise' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```

If you want to modify the transparent huge pages setting permanently, edit `/etc/default/grub` and add `transparent_hugepage=madvise` to the `GRUB_CMDLINE_LINUX_DEFAULT` option:

```bash
$ GRUB_CMDLINE_LINUX_DEFAULT="transparent_hugepage=madvise ..."
```

After that, run `sudo update-grub` and then reboot for the change to take effect.
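After rebooting, you can confirm that the setting took effect; the active mode is shown in brackets:

``` bash
# Expected output: always [madvise] never
cat /sys/kernel/mm/transparent_hugepage/enabled
```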
## Hypervisor configuration

If you are using OpenStack, set
```
cpu_mode=host-passthrough
```
in `nova.conf`.

If you are using libvirt, set
```
<cpu mode='host-passthrough'/>
```
in the XML configuration.

This is important for ClickHouse to be able to get correct information with the `cpuid` instruction.
Otherwise you may get `Illegal instruction` crashes when the hypervisor is run on old CPU models.
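From inside the guest you can check that the host CPU model and its feature flags are actually exposed, for example:

``` bash
# The model name should match the physical host CPU, not a generic QEMU/KVM model
lscpu | grep 'Model name'

# Official ClickHouse builds require SSE 4.2; make sure the flag is visible in the guest
grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
```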
## ClickHouse Keeper and ZooKeeper {#zookeeper}

ClickHouse Keeper is recommended to replace ZooKeeper for ClickHouse clusters. See the documentation for [ClickHouse Keeper](../guides/sre/keeper/index.md).

If you would like to continue using ZooKeeper then it is best to use a fresh version of ZooKeeper – 3.4.9 or later. The version in stable Linux distributions may be outdated.

You should never use manually written scripts to transfer data between different ZooKeeper clusters, because the result will be incorrect for sequential nodes. Never use the “zkcopy” utility for the same reason: https://github.com/ksprojects/zkcopy/issues/15

If you want to divide an existing ZooKeeper cluster into two, the correct way is to increase the number of its replicas and then reconfigure it as two independent clusters.

You can run ClickHouse Keeper on the same server as ClickHouse in test environments, or in environments with a low ingestion rate.
For production environments we suggest using separate servers for ClickHouse and ZooKeeper/Keeper, or placing the ClickHouse files and Keeper files on separate disks, because ZooKeeper/Keeper is very sensitive to disk latency and ClickHouse may utilize all available system resources.

You can have ZooKeeper observers in an ensemble, but ClickHouse servers should not interact with observers.

Do not change the `minSessionTimeout` setting; large values may affect ClickHouse restart stability.

With the default settings, ZooKeeper is a time bomb:

> The ZooKeeper server won’t delete files from old snapshots and logs when using the default configuration (see `autopurge`), and this is the responsibility of the operator.

This bomb must be defused.

The ZooKeeper (3.5.1) configuration below is used in a large production environment:
zoo.cfg:

``` bash
# http://hadoop.apache.org/zookeeper/docs/current/zookeeperAdmin.html

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
# This value is not quite motivated
initLimit=300
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=10

maxClientCnxns=2000

# It is the maximum value that client may request and the server will accept.
# It is Ok to have high maxSessionTimeout on server to allow clients to work with high session timeout if they want.
# But we request session timeout of 30 seconds by default (you can change it with session_timeout_ms in ClickHouse config).
maxSessionTimeout=60000000
# the directory where the snapshot is stored.
dataDir=/opt/zookeeper/{{ '{{' }} cluster['name'] {{ '}}' }}/data
# Place the dataLogDir to a separate physical disc for better performance
dataLogDir=/opt/zookeeper/{{ '{{' }} cluster['name'] {{ '}}' }}/logs

autopurge.snapRetainCount=10
autopurge.purgeInterval=1

# To avoid seeks ZooKeeper allocates space in the transaction log file in
# blocks of preAllocSize kilobytes. The default block size is 64M. One reason
# for changing the size of the blocks is to reduce the block size if snapshots
# are taken more often. (Also, see snapCount).
preAllocSize=131072

# Clients can submit requests faster than ZooKeeper can process them,
# especially if there are a lot of clients. To prevent ZooKeeper from running
# out of memory due to queued requests, ZooKeeper will throttle clients so that
# there is no more than globalOutstandingLimit outstanding requests in the
# system. The default limit is 1000.
# globalOutstandingLimit=1000

# ZooKeeper logs transactions to a transaction log. After snapCount transactions
# are written to a log file a snapshot is started and a new transaction log file
# is started. The default snapCount is 100000.
snapCount=3000000

# If this option is defined, requests will be logged to a trace file named
# traceFile.year.month.day.
#traceFile=

# Leader accepts client connections. Default value is "yes". The leader machine
# coordinates updates. For higher update throughput at the slight expense of
# read throughput the leader can be configured to not accept clients and focus
# on coordination.
leaderServes=yes

standaloneEnabled=false
dynamicConfigFile=/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/zoo.cfg.dynamic
```
Java version:

``` text
openjdk 11.0.5-shenandoah 2019-10-15
OpenJDK Runtime Environment (build 11.0.5-shenandoah+10-adhoc.heretic.src)
OpenJDK 64-Bit Server VM (build 11.0.5-shenandoah+10-adhoc.heretic.src, mixed mode)
```

JVM parameters:

``` bash
NAME=zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}
ZOOCFGDIR=/etc/$NAME/conf

# TODO this is really ugly
# How to find out, which jars are needed?
# seems, that log4j requires the log4j.properties file to be in the classpath
CLASSPATH="$ZOOCFGDIR:/usr/build/classes:/usr/build/lib/*.jar:/usr/share/zookeeper-3.6.2/lib/audience-annotations-0.5.0.jar:/usr/share/zookeeper-3.6.2/lib/commons-cli-1.2.jar:/usr/share/zookeeper-3.6.2/lib/commons-lang-2.6.jar:/usr/share/zookeeper-3.6.2/lib/jackson-annotations-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-core-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/jackson-databind-2.10.3.jar:/usr/share/zookeeper-3.6.2/lib/javax.servlet-api-3.1.0.jar:/usr/share/zookeeper-3.6.2/lib/jetty-http-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-io-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-security-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-server-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-servlet-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jetty-util-9.4.24.v20191120.jar:/usr/share/zookeeper-3.6.2/lib/jline-2.14.6.jar:/usr/share/zookeeper-3.6.2/lib/json-simple-1.1.1.jar:/usr/share/zookeeper-3.6.2/lib/log4j-1.2.17.jar:/usr/share/zookeeper-3.6.2/lib/metrics-core-3.2.5.jar:/usr/share/zookeeper-3.6.2/lib/netty-buffer-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-codec-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-handler-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-resolver-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-epoll-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/netty-transport-native-unix-common-4.1.50.Final.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_common-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_hotspot-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/simpleclient_servlet-0.6.0.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-api-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/slf4j-log4j12-1.7.25.jar:/usr/share/zookeeper-3.6.2/lib/snappy-java-1.1.7.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-jute-3.6.2.jar:/usr/share/zookeeper-3.6.2/lib/zookeeper-prometheus-metrics-3.6.2.jar:/usr/share/zookeeper-3.6.2/etc"

ZOOCFG="$ZOOCFGDIR/zoo.cfg"
ZOO_LOG_DIR=/var/log/$NAME
USER=zookeeper
GROUP=zookeeper
PIDDIR=/var/run/$NAME
PIDFILE=$PIDDIR/$NAME.pid
SCRIPTNAME=/etc/init.d/$NAME
JAVA=/usr/local/jdk-11/bin/java
ZOOMAIN="org.apache.zookeeper.server.quorum.QuorumPeerMain"
ZOO_LOG4J_PROP="INFO,ROLLINGFILE"
JMXLOCALONLY=false
JAVA_OPTS="-Xms{{ '{{' }} cluster.get('xms','128M') {{ '}}' }} \
    -Xmx{{ '{{' }} cluster.get('xmx','1G') {{ '}}' }} \
    -Xlog:safepoint,gc*=info,age*=debug:file=/var/log/$NAME/zookeeper-gc.log:time,level,tags:filecount=16,filesize=16M \
    -verbose:gc \
    -XX:+UseG1GC \
    -Djute.maxbuffer=8388608 \
    -XX:MaxGCPauseMillis=50"
```
Salt initialization:

``` text
description "zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }} centralized coordination service"

start on runlevel [2345]
stop on runlevel [!2345]

respawn

limit nofile 8192 8192

pre-start script
    [ -r "/etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment" ] || exit 0
    . /etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment
    [ -d $ZOO_LOG_DIR ] || mkdir -p $ZOO_LOG_DIR
    chown $USER:$GROUP $ZOO_LOG_DIR
end script

script
    . /etc/zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }}/conf/environment
    [ -r /etc/default/zookeeper ] && . /etc/default/zookeeper
    if [ -z "$JMXDISABLE" ]; then
        JAVA_OPTS="$JAVA_OPTS -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.local.only=$JMXLOCALONLY"
    fi
    exec start-stop-daemon --start -c $USER --exec $JAVA --name zookeeper-{{ '{{' }} cluster['name'] {{ '}}' }} \
        -- -cp $CLASSPATH $JAVA_OPTS -Dzookeeper.log.dir=${ZOO_LOG_DIR} \
        -Dzookeeper.root.logger=${ZOO_LOG4J_PROP} $ZOOMAIN $ZOOCFG
end script
```
## Antivirus software {#antivirus-software}

If you use antivirus software, configure it to skip folders with ClickHouse data files (`/var/lib/clickhouse`); otherwise, performance may be reduced and you may experience unexpected errors during data ingestion and background merges.

## Related Content

- [Getting started with ClickHouse? Here are 13 "Deadly Sins" and how to avoid them](https://clickhouse.com/blog/common-getting-started-issues-with-clickhouse)