Merge pull request #65030 from rschu1ze/docs-mt-settings
Docs: Move MergeTree setting docs into MergeTree settings docs page
Commit ee59036a83

---
sidebar_label: MergeTree
---

# MergeTree

The `MergeTree` engine and other engines of the `MergeTree` family (e.g. `ReplacingMergeTree`, `AggregatingMergeTree`) are the most commonly used and most robust table engines in ClickHouse.

`MergeTree`-family table engines are designed for high data ingest rates and huge data volumes.
Insert operations create table parts which are merged by a background process with other table parts.

Main features of `MergeTree`-family table engines:

- The table's primary key determines the sort order within each table part (clustered index). The primary key does not reference individual rows but blocks of 8192 rows called granules. This makes primary keys of huge data sets small enough to remain loaded in main memory, while still providing fast access to on-disk data.

- Tables can be partitioned using an arbitrary partition expression. Partition pruning ensures partitions are omitted from reading when the query allows it.

- Data can be replicated across multiple cluster nodes for high availability, failover, and zero downtime upgrades. See [Data replication](/docs/en/engines/table-engines/mergetree-family/replication.md).

- `MergeTree` table engines support various statistics kinds and sampling methods to help query optimization.

:::note
Despite a similar name, the [Merge](/docs/en/engines/table-engines/special/merge.md/#merge) engine is different from `*MergeTree` engines.
:::

If you need to update rows frequently, we recommend using the [`ReplacingMergeTree`](/docs/en/engines/table-engines/mergetree-family/replacingmergetree.md) table engine. Using `ALTER TABLE my_table UPDATE` to update rows triggers a mutation, which causes parts to be re-written and uses IO/resources. With `ReplacingMergeTree`, you can simply insert the updated rows and the old rows will be replaced according to the table sorting key.
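
A minimal sketch of this pattern (the `users` table and its columns are hypothetical):

``` sql
-- "Update" a row by inserting a new version of it.
-- ReplacingMergeTree keeps the row with the highest updated_at
-- per sorting key; older versions are dropped during background merges.
CREATE TABLE users
(
    user_id    UInt64,
    name       String,
    updated_at DateTime
)
ENGINE = ReplacingMergeTree(updated_at)
ORDER BY user_id;

INSERT INTO users VALUES (1, 'Alice', now());
INSERT INTO users VALUES (1, 'Alice Smith', now()); -- the "update"

-- FINAL collapses versions at query time, returning one row per user_id
SELECT * FROM users FINAL;
```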

## Creating Tables {#table_engine-mergetree-creating-a-table}

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
...
ORDER BY expr
...
[SETTINGS name = value, ...]
```

For a detailed description of the parameters, see the [CREATE TABLE](/docs/en/sql-reference/statements/create/table.md) statement.
### Query Clauses {#mergetree-query-clauses}

#### ENGINE

`ENGINE` — Name and parameters of the engine. `ENGINE = MergeTree()`. The `MergeTree` engine has no parameters.
#### ORDER_BY

`ORDER BY` — The sorting key.

A tuple of column names or arbitrary expressions. Example: `ORDER BY (CounterID + 1, EventDate)`.

If no primary key is defined (i.e. `PRIMARY KEY` was not specified), ClickHouse uses the sorting key as primary key.

If no sorting is required, you can use syntax `ORDER BY tuple()`.
Alternatively, if setting `create_table_empty_primary_key_by_default` is enabled, `ORDER BY tuple()` is implicitly added to `CREATE TABLE` statements. See [Selecting a Primary Key](#selecting-a-primary-key).
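
For illustration, a minimal sketch of a table created without a sorting key (the table name is hypothetical):

``` sql
-- No sorting key: rows are not sorted within parts
CREATE TABLE unsorted_logs
(
    message String
)
ENGINE = MergeTree
ORDER BY tuple();
```
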
#### PARTITION BY

For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the type `Date`.
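
For illustration, a sketch of monthly partitioning (the `visits` table is hypothetical):

``` sql
-- One partition per calendar month; queries filtering on visit_date
-- can skip whole partitions (partition pruning)
CREATE TABLE visits
(
    visit_date Date,
    user_id    UInt64
)
ENGINE = MergeTree
PARTITION BY toYYYYMM(visit_date)
ORDER BY user_id;
```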

#### PRIMARY KEY

`PRIMARY KEY` — The primary key if it [differs from the sorting key](#choosing-a-primary-key-that-differs-from-the-sorting-key). Optional.

Specifying a sorting key (using the `ORDER BY` clause) implicitly specifies a primary key.
It is usually not necessary to specify the primary key in addition to the sorting key.
#### SAMPLE BY

`SAMPLE BY` — A sampling expression. Optional.

If specified, it must be contained in the primary key.
The sampling expression must result in an unsigned integer.
Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.
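
A sketch of how this fits together end to end (the `hits_sampled` table is hypothetical):

``` sql
-- The sampling expression intHash32(UserID) is part of the primary key,
-- as required
CREATE TABLE hits_sampled
(
    CounterID UInt32,
    EventDate Date,
    UserID    UInt64
)
ENGINE = MergeTree
ORDER BY (CounterID, EventDate, intHash32(UserID))
SAMPLE BY intHash32(UserID);

-- Read roughly 10% of the rows and scale the aggregate back up
SELECT count() * 10 AS approx_count FROM hits_sampled SAMPLE 1 / 10;
```
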
#### TTL

`TTL` — A list of rules that specify the storage duration of rows and the logic of automatic parts movement [between disks and volumes](#table_engine-mergetree-multiple-volumes). Optional.

Expression must result in a `Date` or `DateTime`, e.g. `TTL date + INTERVAL 1 DAY`.

Type of the rule `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'|GROUP BY` specifies an action to be done with the part if the expression is satisfied (reaches current time): removal of expired rows, moving a part (if the expression is satisfied for all rows in a part) to the specified disk (`TO DISK 'xxx'`) or volume (`TO VOLUME 'xxx'`), or aggregating values in expired rows. The default rule type is removal (`DELETE`). Multiple rules can be specified in a list, but there should be no more than one `DELETE` rule.
For more details, see [TTL for columns and tables](#table_engine-mergetree-ttl)
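
A sketch combining a move rule and a delete rule (the table, the `cold` volume, and the retention periods are hypothetical; `TO VOLUME` requires a storage policy that defines such a volume):

``` sql
-- Move parts to a slower volume after 30 days; delete rows after 90 days.
-- At most one DELETE rule is allowed.
CREATE TABLE events
(
    event_date Date,
    user_id    UInt64
)
ENGINE = MergeTree
ORDER BY user_id
TTL event_date + INTERVAL 30 DAY TO VOLUME 'cold',
    event_date + INTERVAL 90 DAY DELETE;
```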

#### Settings

See [MergeTree Settings](../../../operations/settings/merge-tree-settings.md).

**Example of Sections Setting**

ClickHouse does not require a unique primary key. You can insert multiple rows with the same value of the primary key.
You can use `Nullable`-typed expressions in the `PRIMARY KEY` and `ORDER BY` clauses but it is strongly discouraged. To allow this feature, turn on the [allow_nullable_key](/docs/en/operations/settings/settings.md/#allow-nullable-key) setting. The [NULLS_LAST](/docs/en/sql-reference/statements/select/order-by.md/#sorting-of-special-values) principle applies for `NULL` values in the `ORDER BY` clause.

### Selecting a Primary Key {#selecting-a-primary-key}
The number of columns in the primary key is not explicitly limited. Depending on the data structure, you can include more or fewer columns in the primary key. This may:

---
slug: /en/operations/settings/merge-tree-settings
title: "MergeTree tables settings"
---

System table `system.merge_tree_settings` shows the globally set MergeTree settings.

MergeTree settings can be set in the `merge_tree` section of the server config file, or specified for each `MergeTree` table individually in the `SETTINGS` clause of the `CREATE TABLE` statement.

An example of customizing the setting `max_suspicious_broken_parts`:
Configure the default for all `MergeTree` tables in the server configuration file:

``` text
<merge_tree>
    <max_suspicious_broken_parts>5</max_suspicious_broken_parts>
</merge_tree>
```

Set for a particular table:

``` sql
CREATE TABLE tab
(
    `A` Int64
)
ENGINE = MergeTree
ORDER BY tuple()
SETTINGS max_suspicious_broken_parts = 500;
```
Change the settings for a particular table using `ALTER TABLE ... MODIFY SETTING`:

``` sql
ALTER TABLE tab MODIFY SETTING max_suspicious_broken_parts = 100;

-- reset to global default (value from system.merge_tree_settings)
ALTER TABLE tab RESET SETTING max_suspicious_broken_parts;
```
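
To check the effective global value, and whether it differs from the default, you can query the system table:

``` sql
-- `changed` is 1 if the global value was overridden in the server config
SELECT name, value, changed
FROM system.merge_tree_settings
WHERE name = 'max_suspicious_broken_parts';
```
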
## index_granularity

Maximum number of data rows between the marks of an index.

Default value: 8192.
## index_granularity_bytes

Maximum size of data granules in bytes.

Default value: 10Mb.

To restrict the granule size only by number of rows, set to 0 (not recommended).
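
For illustration, a sketch that sets both granularity limits explicitly at table creation (the values shown are the defaults):

``` sql
-- Granules are capped at 8192 rows or ~10 MiB, whichever is reached first
CREATE TABLE t
(
    `A` Int64
)
ENGINE = MergeTree
ORDER BY A
SETTINGS index_granularity = 8192,
         index_granularity_bytes = 10485760;
```
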
## min_index_granularity_bytes

Minimum allowed size of data granules in bytes.

Default value: 1024b.

The setting provides a safeguard against accidentally creating tables with a very low `index_granularity_bytes` value.
## enable_mixed_granularity_parts

Enables or disables transitioning to control the granule size with the `index_granularity_bytes` setting. Before version 19.11, there was only the `index_granularity` setting for restricting granule size. The `index_granularity_bytes` setting improves ClickHouse performance when selecting data from tables with big rows (tens and hundreds of megabytes). If you have tables with big rows, you can enable this setting for the tables to improve the efficiency of `SELECT` queries.
## use_minimalistic_part_header_in_zookeeper

Storage method of the data parts headers in ZooKeeper. If enabled, ZooKeeper stores less data. For details, see [here](../server-configuration-parameters/settings.md/#server-settings-use_minimalistic_part_header_in_zookeeper).
## min_merge_bytes_to_use_direct_io

The minimum data volume for a merge operation required for using direct I/O access to the storage disk.
When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged.
If the volume exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (`O_DIRECT` option).
If `min_merge_bytes_to_use_direct_io = 0`, direct I/O is disabled.

Default value: `10 * 1024 * 1024 * 1024` bytes.
## merge_with_ttl_timeout

Minimum delay in seconds before repeating a merge with delete TTL.

Default value: `14400` seconds (4 hours).
## merge_with_recompression_ttl_timeout

Minimum delay in seconds before repeating a merge with recompression TTL.

Default value: `14400` seconds (4 hours).
## write_final_mark

Enables or disables writing the final index mark at the end of data part (after the last byte).

Default value: 1.

Do not change this setting; bad things will happen if you do.
## storage_policy

Storage policy.
## min_bytes_for_wide_part

Minimum number of bytes/rows in a data part that can be stored in `Wide` format.
You can set one, both or none of these settings.
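
For example, a sketch that forces all new parts of the `tab` table from the example above into `Wide` format by zeroing both thresholds:

``` sql
ALTER TABLE tab
    MODIFY SETTING min_bytes_for_wide_part = 0, min_rows_for_wide_part = 0;
```
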
## max_compress_block_size

Maximum size of blocks of uncompressed data before compressing for writing to a table.
You can also specify this setting in the global settings (see the [max_compress_block_size](/docs/en/operations/settings/settings.md/#max-compress-block-size) setting).
The value specified when the table is created overrides the global value for this setting.
## min_compress_block_size

Minimum size of blocks of uncompressed data required for compression when writing the next mark.
You can also specify this setting in the global settings (see the [min_compress_block_size](/docs/en/operations/settings/settings.md/#min-compress-block-size) setting).
The value specified when the table is created overrides the global value for this setting.
## max_partitions_to_read

Limits the maximum number of partitions that can be accessed in one query.
You can also specify the setting [max_partitions_to_read](/docs/en/operations/settings/merge-tree-settings.md/#max-partitions-to-read) globally.
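
A sketch of the per-query form (the `visits` table is hypothetical):

``` sql
-- Fail the query if it would read more than 12 partitions
SELECT count()
FROM visits
SETTINGS max_partitions_to_read = 12;
```
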
## max_suspicious_broken_parts

Possible values:

- Any positive integer.

Default value: 100.
## parts_to_throw_insert {#parts-to-throw-insert}

If the number of active parts in a single partition exceeds the `parts_to_throw_insert` value, `INSERT` is interrupted with the `Too many parts (N). Merges are processing significantly slower than inserts` exception.
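
To see how close a table is to the limit, you can count active parts per partition in `system.parts`; a sketch (the table name is hypothetical):

``` sql
-- Compare the largest per-partition count against parts_to_throw_insert
SELECT partition, count() AS active_parts
FROM system.parts
WHERE database = currentDatabase() AND table = 'tab' AND active
GROUP BY partition
ORDER BY active_parts DESC;
```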

## try_fetch_recompressed_part_timeout

Timeout (in seconds) before starting a merge with recompression. During this time, ClickHouse tries to fetch the recompressed part from the replica that was assigned this merge with recompression.

Recompression is slow in most cases, so we do not start a merge with recompression until this timeout expires and instead try to fetch the recompressed part from the replica that was assigned this merge.

Possible values:

- Any positive integer.

Default value: `7200` seconds (2 hours).