Merge pull request #38187 from DanRoscigno/update-mergetree-replication-docs
add ClickHouse Keeper to replication doc
Commit a2ed02c920
@@ -27,7 +27,7 @@ Compressed data for `INSERT` and `ALTER` queries is replicated (for more informa
 - The `DROP TABLE` query deletes the replica located on the server where the query is run.
 - The `RENAME` query renames the table on one of the replicas. In other words, replicated tables can have different names on different replicas.
 
-ClickHouse uses [Apache ZooKeeper](https://zookeeper.apache.org) for storing replicas meta information. Use ZooKeeper version 3.4.5 or newer.
+ClickHouse uses [ClickHouse Keeper](../../../guides/sre/keeper/clickhouse-keeper.md) for storing replicas meta information. It is possible to use ZooKeeper version 3.4.5 or newer, but ClickHouse Keeper is recommended.
 
 To use replication, set parameters in the [zookeeper](../../../operations/server-configuration-parameters/settings.md#server-settings_zookeeper) server configuration section.
 
@@ -35,7 +35,7 @@ To use replication, set parameters in the [zookeeper](../../../operations/server
 Don’t neglect the security setting. ClickHouse supports the `digest` [ACL scheme](https://zookeeper.apache.org/doc/current/zookeeperProgrammers.html#sc_ZooKeeperAccessControl) of the ZooKeeper security subsystem.
 :::
 
-Example of setting the addresses of the ZooKeeper cluster:
+Example of setting the addresses of the ClickHouse Keeper cluster:
 
 ``` xml
 <zookeeper>
@@ -54,8 +54,8 @@ Example of setting the addresses of the ZooKeeper cluster:
 </zookeeper>
 ```
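Once the `<zookeeper>` section shown above is in place, a quick sanity check is to query the `system.zookeeper` system table (a hedged sketch, assuming the server can actually reach the configured ClickHouse Keeper or ZooKeeper ensemble; the table requires a `path` condition in `WHERE`):

``` sql
-- Lists the children of the Keeper root node; this only succeeds if the
-- connection configured in the <zookeeper> section is working.
SELECT name, value
FROM system.zookeeper
WHERE path = '/';
```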
 
-ClickHouse also supports to store replicas meta information in the auxiliary ZooKeeper cluster by providing ZooKeeper cluster name and path as engine arguments.
-In other word, it supports to store the metadata of differnt tables in different ZooKeeper clusters.
+ClickHouse also supports storing replicas meta information in an auxiliary ZooKeeper cluster. Do this by providing the ZooKeeper cluster name and path as engine arguments.
+In other words, it supports storing the metadata of different tables in different ZooKeeper clusters.
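For the engine-argument form just described, a minimal hedged sketch (the cluster name `zookeeper2`, the table and the columns are hypothetical; the name must match an auxiliary cluster defined in the server configuration, and `{shard}` and `{replica}` macros are assumed to be defined):

``` sql
-- 'zookeeper2' is a hypothetical auxiliary cluster name from the server config;
-- the part after the colon is the per-table path inside that cluster.
CREATE TABLE table_1
(
    EventDate Date,
    CounterID UInt32,
    UserID UInt32
)
ENGINE = ReplicatedMergeTree('zookeeper2:/clickhouse/tables/{shard}/table_1', '{replica}')
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate);
```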
 
 Example of setting the addresses of the auxiliary ZooKeeper cluster:
 
@@ -122,8 +122,8 @@ The `Replicated` prefix is added to the table engine name. For example:`Replicat
 **Replicated\*MergeTree parameters**
 
-- `zoo_path` — The path to the table in ZooKeeper.
-- `replica_name` — The replica name in ZooKeeper.
+- `zoo_path` — The path to the table in ClickHouse Keeper.
+- `replica_name` — The replica name in ClickHouse Keeper.
 - `other_parameters` — Parameters of an engine which is used for creating the replicated version, for example, version in `ReplacingMergeTree`.
 
 Example:
@@ -168,18 +168,18 @@ Example:
 </macros>
 ```
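The full `CREATE TABLE` statement of this example lies outside the changed hunks. As a hedged sketch of how the parameters and the `macros` section above typically fit together (hypothetical table and columns, assuming `layer`, `shard` and `replica` macros are defined as in the example):

``` sql
-- Hypothetical table; the engine arguments follow the conventions explained below:
-- a common prefix, a shard identifier built from macros, and the table node name.
CREATE TABLE table_name
(
    EventDate Date,
    CounterID UInt32,
    UserID UInt32
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/table_name', '{replica}')
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate);
```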
 
-The path to the table in ZooKeeper should be unique for each replicated table. Tables on different shards should have different paths.
+The path to the table in ClickHouse Keeper should be unique for each replicated table. Tables on different shards should have different paths.
 In this case, the path consists of the following parts:
 
 `/clickhouse/tables/` is the common prefix. We recommend using exactly this one.
 
 `{layer}-{shard}` is the shard identifier. In this example it consists of two parts, since the example cluster uses bi-level sharding. For most tasks, you can leave just the {shard} substitution, which will be expanded to the shard identifier.
 
-`table_name` is the name of the node for the table in ZooKeeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it does not change after a RENAME query.
+`table_name` is the name of the node for the table in ClickHouse Keeper. It is a good idea to make it the same as the table name. It is defined explicitly, because in contrast to the table name, it does not change after a RENAME query.
 *HINT*: you could add a database name in front of `table_name` as well. E.g. `db_name.table_name`
 
 The two built-in substitutions `{database}` and `{table}` can be used, they expand into the table name and the database name respectively (unless these macros are defined in the `macros` section). So the zookeeper path can be specified as `'/clickhouse/tables/{layer}-{shard}/{database}/{table}'`.
-Be careful with table renames when using these built-in substitutions. The path in Zookeeper cannot be changed, and when the table is renamed, the macros will expand into a different path, the table will refer to a path that does not exist in Zookeeper, and will go into read-only mode.
+Be careful with table renames when using these built-in substitutions. The path in ClickHouse Keeper cannot be changed, and when the table is renamed, the macros will expand into a different path, the table will refer to a path that does not exist in ClickHouse Keeper, and will go into read-only mode.
 
 The replica name identifies different replicas of the same table. You can use the server name for this, as in the example. The name only needs to be unique within each shard.
 
@@ -220,21 +220,21 @@ To delete a replica, run `DROP TABLE`. However, only one replica is deleted –
 
 ## Recovery After Failures {#recovery-after-failures}
 
-If ZooKeeper is unavailable when a server starts, replicated tables switch to read-only mode. The system periodically attempts to connect to ZooKeeper.
+If ClickHouse Keeper is unavailable when a server starts, replicated tables switch to read-only mode. The system periodically attempts to connect to ClickHouse Keeper.
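A hedged way to spot replicas that are currently in read-only mode is the `system.replicas` table (a sketch; column set assumed from recent ClickHouse versions):

``` sql
-- Replicated tables that are currently read-only, for example because the
-- connection to ClickHouse Keeper / ZooKeeper has not been established.
SELECT database, table, is_readonly, is_session_expired
FROM system.replicas
WHERE is_readonly;
```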
 
-If ZooKeeper is unavailable during an `INSERT`, or an error occurs when interacting with ZooKeeper, an exception is thrown.
+If ClickHouse Keeper is unavailable during an `INSERT`, or an error occurs when interacting with ClickHouse Keeper, an exception is thrown.
 
-After connecting to ZooKeeper, the system checks whether the set of data in the local file system matches the expected set of data (ZooKeeper stores this information). If there are minor inconsistencies, the system resolves them by syncing data with the replicas.
+After connecting to ClickHouse Keeper, the system checks whether the set of data in the local file system matches the expected set of data (ClickHouse Keeper stores this information). If there are minor inconsistencies, the system resolves them by syncing data with the replicas.
 
-If the system detects broken data parts (with the wrong size of files) or unrecognized parts (parts written to the file system but not recorded in ZooKeeper), it moves them to the `detached` subdirectory (they are not deleted). Any missing parts are copied from the replicas.
+If the system detects broken data parts (with the wrong size of files) or unrecognized parts (parts written to the file system but not recorded in ClickHouse Keeper), it moves them to the `detached` subdirectory (they are not deleted). Any missing parts are copied from the replicas.
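Parts that were moved aside this way can be inspected afterwards; a hedged sketch using the `system.detached_parts` table:

``` sql
-- Parts sitting in the detached subdirectory, together with the reason
-- ClickHouse recorded for detaching them.
SELECT database, table, name, reason
FROM system.detached_parts;
```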
 
 Note that ClickHouse does not perform any destructive actions such as automatically deleting a large amount of data.
 
-When the server starts (or establishes a new session with ZooKeeper), it only checks the quantity and sizes of all files. If the file sizes match but bytes have been changed somewhere in the middle, this is not detected immediately, but only when attempting to read the data for a `SELECT` query. The query throws an exception about a non-matching checksum or size of a compressed block. In this case, data parts are added to the verification queue and copied from the replicas if necessary.
+When the server starts (or establishes a new session with ClickHouse Keeper), it only checks the quantity and sizes of all files. If the file sizes match but bytes have been changed somewhere in the middle, this is not detected immediately, but only when attempting to read the data for a `SELECT` query. The query throws an exception about a non-matching checksum or size of a compressed block. In this case, data parts are added to the verification queue and copied from the replicas if necessary.
 
 If the local set of data differs too much from the expected one, a safety mechanism is triggered. The server enters this in the log and refuses to launch. The reason for this is that this case may indicate a configuration error, such as if a replica on a shard was accidentally configured like a replica on a different shard. However, the thresholds for this mechanism are set fairly low, and this situation might occur during normal failure recovery. In this case, data is restored semi-automatically - by “pushing a button”.
 
-To start recovery, create the node `/path_to_table/replica_name/flags/force_restore_data` in ZooKeeper with any content, or run the command to restore all replicated tables:
+To start recovery, create the node `/path_to_table/replica_name/flags/force_restore_data` in ClickHouse Keeper with any content, or run the command to restore all replicated tables:
 
 ``` bash
 sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data
@@ -249,11 +249,11 @@ If all data and metadata disappeared from one of the servers, follow these steps
 1. Install ClickHouse on the server. Define substitutions correctly in the config file that contains the shard identifier and replicas, if you use them.
 2. If you had unreplicated tables that must be manually duplicated on the servers, copy their data from a replica (in the directory `/var/lib/clickhouse/data/db_name/table_name/`).
 3. Copy table definitions located in `/var/lib/clickhouse/metadata/` from a replica. If a shard or replica identifier is defined explicitly in the table definitions, correct it so that it corresponds to this replica. (Alternatively, start the server and make all the `ATTACH TABLE` queries that should have been in the .sql files in `/var/lib/clickhouse/metadata/`.)
-4. To start recovery, create the ZooKeeper node `/path_to_table/replica_name/flags/force_restore_data` with any content, or run the command to restore all replicated tables: `sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data`
+4. To start recovery, create the ClickHouse Keeper node `/path_to_table/replica_name/flags/force_restore_data` with any content, or run the command to restore all replicated tables: `sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data`
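Step 3 above mentions issuing `ATTACH TABLE` queries by hand; a hedged sketch of one such query, with hypothetical names and a structure that must match the original table definition:

``` sql
-- Hypothetical: re-attach a table whose .sql definition was lost, using the
-- same structure and engine arguments as on the surviving replicas.
ATTACH TABLE db_name.table_name
(
    EventDate Date,
    CounterID UInt32,
    UserID UInt32
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{layer}-{shard}/table_name', '{replica}')
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate);
```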
 
 Then start the server (restart, if it is already running). Data will be downloaded from replicas.
 
-An alternative recovery option is to delete information about the lost replica from ZooKeeper (`/path_to_table/replica_name`), then create the replica again as described in “[Creating replicated tables](#creating-replicated-tables)”.
+An alternative recovery option is to delete information about the lost replica from ClickHouse Keeper (`/path_to_table/replica_name`), then create the replica again as described in “[Creating replicated tables](#creating-replicated-tables)”.
 
 There is no restriction on network bandwidth during recovery. Keep this in mind if you are restoring many replicas at once.
 
@@ -276,13 +276,13 @@ Create a MergeTree table with a different name. Move all the data from the direc
 If you want to get rid of a `ReplicatedMergeTree` table without launching the server:
 
 - Delete the corresponding `.sql` file in the metadata directory (`/var/lib/clickhouse/metadata/`).
-- Delete the corresponding path in ZooKeeper (`/path_to_table/replica_name`).
+- Delete the corresponding path in ClickHouse Keeper (`/path_to_table/replica_name`).
 
 After this, you can launch the server, create a `MergeTree` table, move the data to its directory, and then restart the server.
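For that step, a hedged sketch of creating the unreplicated `MergeTree` table that will receive the moved data (hypothetical names; the structure must match the original replicated table):

``` sql
-- Hypothetical: same columns and sorting key as the old replicated table,
-- but with a plain MergeTree engine and no Keeper/ZooKeeper dependency.
CREATE TABLE db_name.table_name_unreplicated
(
    EventDate Date,
    CounterID UInt32,
    UserID UInt32
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(EventDate)
ORDER BY (CounterID, EventDate);
```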
 
-## Recovery When Metadata in the Zookeeper Cluster Is Lost or Damaged {#recovery-when-metadata-in-the-zookeeper-cluster-is-lost-or-damaged}
+## Recovery When Metadata in the ClickHouse Keeper Cluster Is Lost or Damaged {#recovery-when-metadata-in-the-zookeeper-cluster-is-lost-or-damaged}
 
-If the data in ZooKeeper was lost or damaged, you can save data by moving it to an unreplicated table as described above.
+If the data in ClickHouse Keeper was lost or damaged, you can save data by moving it to an unreplicated table as described above.
 
 **See Also**
 