---
slug: /en/operations/cluster-discovery
sidebar_label: Cluster Discovery
---

# Cluster Discovery

## Overview

ClickHouse's Cluster Discovery feature simplifies cluster configuration by allowing nodes to automatically discover and register themselves without the need for explicit definition in the configuration files. This is especially beneficial in cases where the manual definition of each node becomes cumbersome.

:::note

Cluster Discovery is an experimental feature and can be changed or removed in future versions.

To enable it, include the `allow_experimental_cluster_discovery` setting in your configuration file:

```xml
<clickhouse>
    <!-- ... -->
    <allow_experimental_cluster_discovery>1</allow_experimental_cluster_discovery>
    <!-- ... -->
</clickhouse>
```
:::

## Remote Servers Configuration

### Traditional Manual Configuration

Traditionally, in ClickHouse, each shard and replica in the cluster needed to be manually specified in the configuration:

```xml
<remote_servers>
    <cluster_name>
        <shard>
            <replica>
                <host>node1</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>node2</host>
                <port>9000</port>
            </replica>
        </shard>
        <shard>
            <replica>
                <host>node3</host>
                <port>9000</port>
            </replica>
            <replica>
                <host>node4</host>
                <port>9000</port>
            </replica>
        </shard>
    </cluster_name>
</remote_servers>
```

### Using Cluster Discovery

With Cluster Discovery, rather than defining each node explicitly, you simply specify a path in ZooKeeper. All nodes that register under this path in ZooKeeper will be automatically discovered and added to the cluster.

```xml
<remote_servers>
    <cluster_name>
        <discovery>
            <path>/clickhouse/discovery/cluster_name</path>
        </discovery>
    </cluster_name>
</remote_servers>
```

If you want to specify a shard number for a particular node, you can include the `<shard>` tag within the `<discovery>` section:

for `node1` and `node2`:

```xml
<remote_servers>
    <cluster_name>
        <discovery>
            <path>/clickhouse/discovery/cluster_name</path>
            <shard>1</shard>
        </discovery>
    </cluster_name>
</remote_servers>
```

for `node3` and `node4`:

```xml
<remote_servers>
    <cluster_name>
        <discovery>
            <path>/clickhouse/discovery/cluster_name</path>
            <shard>2</shard>
        </discovery>
    </cluster_name>
</remote_servers>
```

### Observer mode

Nodes configured in observer mode will not register themselves as replicas. They will solely observe and discover other active replicas in the cluster without actively participating. To enable observer mode, include the `<observer/>` tag within the `<discovery>` section:

```xml
<remote_servers>
    <cluster_name>
        <discovery>
            <path>/clickhouse/discovery/cluster_name</path>
            <observer/>
        </discovery>
    </cluster_name>
</remote_servers>
```

## Use-Cases and Limitations

As nodes are added to or removed from the specified ZooKeeper path, they are automatically discovered or removed from the cluster, without the need for configuration changes or server restarts.
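Putting the pieces together, a minimal per-node configuration for discovery might look like the following sketch; the cluster name `my_cluster` and the ZooKeeper path are illustrative, not prescribed:

```xml
<clickhouse>
    <!-- Required while Cluster Discovery is experimental -->
    <allow_experimental_cluster_discovery>1</allow_experimental_cluster_discovery>
    <remote_servers>
        <!-- Illustrative cluster name; every node uses the same entry -->
        <my_cluster>
            <discovery>
                <path>/clickhouse/discovery/my_cluster</path>
            </discovery>
        </my_cluster>
    </remote_servers>
</clickhouse>
```

Because every node carries the same entry, adding a node is just a matter of starting it with this configuration; no other node's configuration needs to change.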
However, these changes affect only the cluster configuration, not the data or existing databases and tables.

Consider the following example with a cluster of 3 nodes:

```xml
<remote_servers>
    <default>
        <discovery>
            <path>/clickhouse/discovery/default_cluster</path>
        </discovery>
    </default>
</remote_servers>
```

```
SELECT * EXCEPT (default_database, errors_count, slowdowns_count, estimated_recovery_time, database_shard_name, database_replica_name)
FROM system.clusters WHERE cluster = 'default';

┌─cluster─┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name────┬─host_address─┬─port─┬─is_local─┬─user─┬─is_active─┐
│ default │         1 │            1 │           1 │ 92d3c04025e8 │ 172.26.0.5   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           2 │ a6a68731c21b │ 172.26.0.4   │ 9000 │        1 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           3 │ 8e62b9cb17a1 │ 172.26.0.2   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
└─────────┴───────────┴──────────────┴─────────────┴──────────────┴──────────────┴──────┴──────────┴──────┴───────────┘
```

```sql
CREATE TABLE event_table ON CLUSTER default (event_time DateTime, value String)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/event_table', '{replica}')
ORDER BY event_time
PARTITION BY toYYYYMM(event_time);

INSERT INTO event_table ...
```

Then, we add a new node to the cluster by starting a new node with the same entry in the `remote_servers` section of its configuration file:

```
┌─cluster─┬─shard_num─┬─shard_weight─┬─replica_num─┬─host_name────┬─host_address─┬─port─┬─is_local─┬─user─┬─is_active─┐
│ default │         1 │            1 │           1 │ 92d3c04025e8 │ 172.26.0.5   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           2 │ a6a68731c21b │ 172.26.0.4   │ 9000 │        1 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           3 │ 8e62b9cb17a1 │ 172.26.0.2   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
│ default │         1 │            1 │           4 │ b0df3669b81f │ 172.26.0.6   │ 9000 │        0 │      │ ᴺᵁᴸᴸ      │
└─────────┴───────────┴──────────────┴─────────────┴──────────────┴──────────────┴──────┴──────────┴──────┴───────────┘
```

The fourth node now participates in the cluster, but the table `event_table` still exists only on the first three nodes:

```sql
SELECT hostname(), database, table FROM clusterAllReplicas(default, system.tables) WHERE table = 'event_table' FORMAT PrettyCompactMonoBlock

┌─hostname()───┬─database─┬─table───────┐
│ a6a68731c21b │ default  │ event_table │
│ 92d3c04025e8 │ default  │ event_table │
│ 8e62b9cb17a1 │ default  │ event_table │
└──────────────┴──────────┴─────────────┘
```

If you need tables to be replicated on all the nodes, you may use the [Replicated](../engines/database-engines/replicated.md) database engine as an alternative to cluster discovery.
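As a sketch of that alternative: a database created with the Replicated engine keeps its table definitions in sync across all of its replicas, so a node that attaches the database also receives the existing tables. The database name `db_repl` and its ZooKeeper path below are illustrative assumptions, not fixed names:

```sql
-- Illustrative: 'db_repl' and the ZooKeeper path are assumed names.
CREATE DATABASE db_repl ENGINE = Replicated('/clickhouse/databases/db_repl', '{shard}', '{replica}');

-- Tables created inside a Replicated database are replicated as part of the
-- database's metadata, so each replica of the database gets them automatically.
CREATE TABLE db_repl.event_table (event_time DateTime, value String)
ENGINE = ReplicatedMergeTree
ORDER BY event_time;
```

With this approach, DDL is propagated by the database engine itself, whereas with cluster discovery a newly joined node only gains cluster membership, not existing tables.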