Merge remote-tracking branch 'rschu1ze/master' into clang-17
commit 83a2969f0c
@@ -8,7 +8,7 @@ sidebar_label: EmbeddedRocksDB
 
 This engine allows integrating ClickHouse with [rocksdb](http://rocksdb.org/).
 
-## Creating a Table {#table_engine-EmbeddedRocksDB-creating-a-table}
+## Creating a Table {#creating-a-table}
 
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
@@ -85,7 +85,7 @@ You can also change any [rocksdb options](https://github.com/facebook/rocksdb/wi
 </rocksdb>
 ```
 
-## Supported operations {#table_engine-EmbeddedRocksDB-supported-operations}
+## Supported operations {#supported-operations}
 
 ### Inserts
 
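For orientation, the EmbeddedRocksDB section these anchors live in documents a key-value engine; a minimal sketch of the documented `CREATE TABLE` shape (table and column names here are hypothetical):

```sql
-- Hypothetical key-value table backed by an embedded RocksDB instance.
CREATE TABLE rocksdb_kv
(
    key String,
    value UInt32
)
ENGINE = EmbeddedRocksDB
PRIMARY KEY key;

INSERT INTO rocksdb_kv VALUES ('a', 1), ('b', 2);
SELECT value FROM rocksdb_kv WHERE key = 'a';
```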
@@ -14,7 +14,7 @@ Kafka lets you:
 - Organize fault-tolerant storage.
 - Process streams as they become available.
 
-## Creating a Table {#table_engine-kafka-creating-a-table}
+## Creating a Table {#creating-a-table}
 
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
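Likewise, a minimal sketch of the Kafka engine declaration this section documents (broker, topic, group, and column names are placeholders):

```sql
-- Hypothetical Kafka consumer table; the SETTINGS names are the engine's
-- standard required parameters.
CREATE TABLE kafka_queue
(
    timestamp UInt64,
    message String
)
ENGINE = Kafka
SETTINGS kafka_broker_list = 'localhost:9092',
         kafka_topic_list = 'events',
         kafka_group_name = 'clickhouse_consumer',
         kafka_format = 'JSONEachRow';
```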
@@ -13,7 +13,7 @@ This engine allows integrating ClickHouse with [NATS](https://nats.io/).
 - Publish or subscribe to message subjects.
 - Process new messages as they become available.
 
-## Creating a Table {#table_engine-redisstreams-creating-a-table}
+## Creating a Table {#creating-a-table}
 
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
@@ -13,7 +13,7 @@ This engine allows integrating ClickHouse with [RabbitMQ](https://www.rabbitmq.c
 - Publish or subscribe to data flows.
 - Process streams as they become available.
 
-## Creating a Table {#table_engine-rabbitmq-creating-a-table}
+## Creating a Table {#creating-a-table}
 
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
@@ -63,7 +63,7 @@ SETTINGS
 mode = 'ordered';
 ```
 
-## Settings {#s3queue-settings}
+## Settings {#settings}
 
 ### mode {#mode}
 
@@ -93,7 +93,7 @@ Possible values:
 
 Default value: `/`.
 
-### s3queue_loading_retries {#s3queue_loading_retries}
+### s3queue_loading_retries {#loading_retries}
 
 Retry file loading up to specified number of times. By default, there are no retries.
 Possible values:
@@ -102,7 +102,7 @@ Possible values:
 
 Default value: `0`.
 
-### s3queue_polling_min_timeout_ms {#s3queue_polling_min_timeout_ms}
+### s3queue_polling_min_timeout_ms {#polling_min_timeout_ms}
 
 Minimal timeout before next polling (in milliseconds).
 
@@ -112,7 +112,7 @@ Possible values:
 
 Default value: `1000`.
 
-### s3queue_polling_max_timeout_ms {#s3queue_polling_max_timeout_ms}
+### s3queue_polling_max_timeout_ms {#polling_max_timeout_ms}
 
 Maximum timeout before next polling (in milliseconds).
 
@@ -122,7 +122,7 @@ Possible values:
 
 Default value: `10000`.
 
-### s3queue_polling_backoff_ms {#s3queue_polling_backoff_ms}
+### s3queue_polling_backoff_ms {#polling_backoff_ms}
 
 Polling backoff (in milliseconds).
 
@@ -132,7 +132,7 @@ Possible values:
 
 Default value: `0`.
 
-### s3queue_tracked_files_limit {#s3queue_tracked_files_limit}
+### s3queue_tracked_files_limit {#tracked_files_limit}
 
 Allows to limit the number of Zookeeper nodes if the 'unordered' mode is used, does nothing for 'ordered' mode.
 If limit reached the oldest processed files will be deleted from ZooKeeper node and processed again.
@@ -143,7 +143,7 @@ Possible values:
 
 Default value: `1000`.
 
-### s3queue_tracked_file_ttl_sec {#s3queue_tracked_file_ttl_sec}
+### s3queue_tracked_file_ttl_sec {#tracked_file_ttl_sec}
 
 Maximum number of seconds to store processed files in ZooKeeper node (store forever by default) for 'unordered' mode, does nothing for 'ordered' mode.
 After the specified number of seconds, the file will be re-imported.
@@ -154,7 +154,7 @@ Possible values:
 
 Default value: `0`.
 
-### s3queue_polling_size {#s3queue_polling_size}
+### s3queue_polling_size {#polling_size}
 
 Maximum files to fetch from S3 with SELECT or in background task.
 Engine takes files for processing from S3 in batches.
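Pulling the re-anchored S3Queue settings together, a hedged sketch of a table declaration (the bucket URL and schema are placeholders; in `CREATE TABLE` the settings keep their `s3queue_`-prefixed names, while `mode` is used bare as in the hunk above):

```sql
-- Hypothetical S3Queue table polling CSV files from a bucket.
CREATE TABLE s3_queue
(
    name String,
    value UInt32
)
ENGINE = S3Queue('https://example-bucket.s3.amazonaws.com/data/*.csv', 'CSV')
SETTINGS
    mode = 'unordered',
    s3queue_loading_retries = 3,
    s3queue_polling_min_timeout_ms = 1000,
    s3queue_polling_max_timeout_ms = 10000;
```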
@@ -20,7 +20,7 @@ For example:
 
 where path can be any other valid ZooKeeper path.
 
-## Creating a Table {#table_engine-KeeperMap-creating-a-table}
+## Creating a Table {#creating-a-table}
 
 ``` sql
 CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
@@ -74,7 +74,7 @@ If multiple tables are created on the same ZooKeeper path, the values are persis
 As a result, it is possible to use `ON CLUSTER` clause when creating the table and sharing the data from multiple ClickHouse instances.
 Of course, it's possible to manually run `CREATE TABLE` with same path on unrelated ClickHouse instances to have same data sharing effect.
 
-## Supported operations {#table_engine-KeeperMap-supported-operations}
+## Supported operations {#supported-operations}
 
 ### Inserts
 
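A hedged sketch of the data-sharing behaviour the KeeperMap hunk describes (the ZooKeeper path and table names are invented):

```sql
-- Two tables created with the same ZooKeeper path see the same data,
-- even on unrelated ClickHouse instances.
CREATE TABLE keeper_map_a (key String, value UInt64)
ENGINE = KeeperMap('/shared_kv') PRIMARY KEY key;

-- On another instance:
CREATE TABLE keeper_map_b (key String, value UInt64)
ENGINE = KeeperMap('/shared_kv') PRIMARY KEY key;

INSERT INTO keeper_map_a VALUES ('answer', 42);
SELECT value FROM keeper_map_b WHERE key = 'answer'; -- should return 42
```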
@@ -43,6 +43,12 @@ SETTINGS use_query_cache = true;
 will store the query result in the query cache. Subsequent executions of the same query (also with parameter `use_query_cache = true`) will
 read the computed result from the cache and return it immediately.
 
+:::note
+Setting `use_query_cache` and all other query-cache-related settings only take an effect on stand-alone `SELECT` statements. In particular,
+the results of `SELECT`s to views created by `CREATE VIEW AS SELECT [...] SETTINGS use_query_cache = true` are not cached unless the `SELECT`
+statement runs with `SETTINGS use_query_cache = true`.
+:::
+
 The way the cache is utilized can be configured in more detail using settings [enable_writes_to_query_cache](settings/settings.md#enable-writes-to-query-cache)
 and [enable_reads_from_query_cache](settings/settings.md#enable-reads-from-query-cache) (both `true` by default). The former setting
 controls whether query results are stored in the cache, whereas the latter setting determines if the database should try to retrieve query
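A minimal sketch of the caveat in the added note (the view and query are made up):

```sql
-- The setting attached to the view definition does NOT make later reads cached.
CREATE VIEW v AS SELECT count() FROM numbers(1000000) SETTINGS use_query_cache = true;

SELECT * FROM v;                                 -- not served from the query cache
SELECT * FROM v SETTINGS use_query_cache = true; -- eligible for the query cache
```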
@@ -84,7 +84,7 @@ It is also possible to limit the cache usage of individual users using [settings
 constraints](settings/constraints-on-settings.md). More specifically, you can restrict the maximum amount of memory (in bytes) a user may
 allocate in the query cache and the the maximum number of stored query results. For that, first provide configurations
 [query_cache_max_size_in_bytes](settings/settings.md#query-cache-max-size-in-bytes) and
-[query_cache_max_entries](settings/settings.md#query-cache-size-max-items) in a user profile in `users.xml`, then make both settings
+[query_cache_max_entries](settings/settings.md#query-cache-size-max-entries) in a user profile in `users.xml`, then make both settings
 readonly:
 
 ``` xml
@@ -134,7 +140,7 @@ block granularity when query results are later served from the query cache.
 
 As a result, the query cache stores for each query multiple (partial)
 result blocks. While this behavior is a good default, it can be suppressed using setting
-[query_cache_squash_partial_query_results](settings/settings.md#query-cache-squash-partial-query-results).
+[query_cache_squash_partial_results](settings/settings.md#query-cache-squash-partial-results).
 
 Also, results of queries with non-deterministic functions are not cached. Such functions include
 - functions for accessing dictionaries: [`dictGet()`](../sql-reference/functions/ext-dict-functions.md#dictGet) etc.
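A hedged illustration of the non-determinism rule above (behaviour as described in this doc version; actual cache admission also depends on the other query-cache settings):

```sql
-- now() is non-deterministic, so its result is not stored in the query cache.
SELECT now() SETTINGS use_query_cache = true;

-- A deterministic query is eligible for caching under the same setting.
SELECT count() FROM numbers(1000000) SETTINGS use_query_cache = true;
```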
@@ -835,7 +835,7 @@ List of prefixes for [custom settings](../../operations/settings/index.md#custom
 
 - [Custom settings](../../operations/settings/index.md#custom_settings)
 
-## core_dump {#server_configuration_parameters-core_dump}
+## core_dump {#core_dump}
 
 Configures soft limit for core dump file size.
 
@@ -924,7 +924,7 @@ The path to the table in ZooKeeper.
 <default_replica_name>{replica}</default_replica_name>
 ```
 
-## dictionaries_config {#server_configuration_parameters-dictionaries_config}
+## dictionaries_config {#dictionaries_config}
 
 The path to the config file for dictionaries.
 
@@ -941,7 +941,7 @@ See also “[Dictionaries](../../sql-reference/dictionaries/index.md)”.
 <dictionaries_config>*_dictionary.xml</dictionaries_config>
 ```
 
-## user_defined_executable_functions_config {#server_configuration_parameters-user_defined_executable_functions_config}
+## user_defined_executable_functions_config {#user_defined_executable_functions_config}
 
 The path to the config file for executable user defined functions.
 
@@ -958,7 +958,7 @@ See also “[Executable User Defined Functions](../../sql-reference/functions/in
 <user_defined_executable_functions_config>*_function.xml</user_defined_executable_functions_config>
 ```
 
-## dictionaries_lazy_load {#server_configuration_parameters-dictionaries_lazy_load}
+## dictionaries_lazy_load {#dictionaries_lazy_load}
 
 Lazy loading of dictionaries.
 
@@ -974,7 +974,7 @@ The default is `true`.
 <dictionaries_lazy_load>true</dictionaries_lazy_load>
 ```
 
-## format_schema_path {#server_configuration_parameters-format_schema_path}
+## format_schema_path {#format_schema_path}
 
 The path to the directory with the schemes for the input data, such as schemas for the [CapnProto](../../interfaces/formats.md#capnproto) format.
 
@@ -985,7 +985,7 @@ The path to the directory with the schemes for the input data, such as schemas f
 <format_schema_path>format_schemas/</format_schema_path>
 ```
 
-## graphite {#server_configuration_parameters-graphite}
+## graphite {#graphite}
 
 Sending data to [Graphite](https://github.com/graphite-project).
 
@@ -1019,7 +1019,7 @@ You can configure multiple `<graphite>` clauses. For instance, you can use this
 </graphite>
 ```
 
-## graphite_rollup {#server_configuration_parameters-graphite-rollup}
+## graphite_rollup {#graphite-rollup}
 
 Settings for thinning data for Graphite.
 
@@ -1051,7 +1051,7 @@ For more details, see [GraphiteMergeTree](../../engines/table-engines/mergetree-
 
 The port for connecting to the server over HTTP(s).
 
-If `https_port` is specified, [openSSL](#server_configuration_parameters-openssl) must be configured.
+If `https_port` is specified, [openSSL](#openssl) must be configured.
 
 If `http_port` is specified, the OpenSSL configuration is ignored even if it is set.
 
@@ -1061,7 +1061,7 @@ If `http_port` is specified, the OpenSSL configuration is ignored even if it is
 <https_port>9999</https_port>
 ```
 
-## http_server_default_response {#server_configuration_parameters-http_server_default_response}
+## http_server_default_response {#http_server_default_response}
 
 The page that is shown by default when you access the ClickHouse HTTP(s) server.
 The default value is “Ok.” (with a line feed at the end)
@@ -1086,7 +1086,7 @@ Expired time for HSTS in seconds. The default value is 0 means clickhouse disabl
 <hsts_max_age>600000</hsts_max_age>
 ```
 
-## include_from {#server_configuration_parameters-include_from}
+## include_from {#include_from}
 
 The path to the file with substitutions.
 
@@ -1222,7 +1222,7 @@ The number of seconds that ClickHouse waits for incoming requests before closing
 <keep_alive_timeout>10</keep_alive_timeout>
 ```
 
-## listen_host {#server_configuration_parameters-listen_host}
+## listen_host {#listen_host}
 
 Restriction on hosts that requests can come from. If you want the server to answer all of them, specify `::`.
 
@@ -1233,7 +1233,7 @@ Examples:
 <listen_host>127.0.0.1</listen_host>
 ```
 
-## listen_backlog {#server_configuration_parameters-listen_backlog}
+## listen_backlog {#listen_backlog}
 
 Backlog (queue size of pending connections) of the listen socket.
 
@@ -1253,7 +1253,7 @@ Examples:
 <listen_backlog>4096</listen_backlog>
 ```
 
-## logger {#server_configuration_parameters-logger}
+## logger {#logger}
 
 Logging settings.
 
@@ -1357,7 +1357,7 @@ Keys for syslog:
 Default value: `LOG_USER` if `address` is specified, `LOG_DAEMON` otherwise.
 - format – Message format. Possible values: `bsd` and `syslog.`
 
-## send_crash_reports {#server_configuration_parameters-send_crash_reports}
+## send_crash_reports {#send_crash_reports}
 
 Settings for opt-in sending crash reports to the ClickHouse core developers team via [Sentry](https://sentry.io).
 Enabling it, especially in pre-production environments, is highly appreciated.
@@ -1629,7 +1629,7 @@ Default value: `0.5`.
 
 
 
-## merge_tree {#server_configuration_parameters-merge_tree}
+## merge_tree {#merge_tree}
 
 Fine tuning for tables in the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md).
 
@@ -1676,7 +1676,7 @@ To disable `metric_log` setting, you should create the following file `/etc/clic
 </clickhouse>
 ```
 
-## replicated_merge_tree {#server_configuration_parameters-replicated_merge_tree}
+## replicated_merge_tree {#replicated_merge_tree}
 
 Fine tuning for tables in the [ReplicatedMergeTree](../../engines/table-engines/mergetree-family/mergetree.md).
 
@@ -1692,7 +1692,7 @@ For more information, see the MergeTreeSettings.h header file.
 </replicated_merge_tree>
 ```
 
-## openSSL {#server_configuration_parameters-openssl}
+## openSSL {#openssl}
 
 SSL client/server configuration.
 
@@ -1751,7 +1751,7 @@ Keys for server/client settings:
 </openSSL>
 ```
 
-## part_log {#server_configuration_parameters-part-log}
+## part_log {#part-log}
 
 Logging events that are associated with [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md). For instance, adding or merging data. You can use the log to simulate merge algorithms and compare their characteristics. You can visualize the merge process.
 
@@ -1791,7 +1791,7 @@ Default: false.
 </part_log>
 ```
 
-## path {#server_configuration_parameters-path}
+## path {#path}
 
 The path to the directory containing data.
 
@@ -1805,7 +1805,7 @@ The trailing slash is mandatory.
 <path>/var/lib/clickhouse/</path>
 ```
 
-## Prometheus {#server_configuration_parameters-prometheus}
+## Prometheus {#prometheus}
 
 Exposing metrics data for scraping from [Prometheus](https://prometheus.io).
 
@@ -1841,7 +1841,7 @@ Check (replace `127.0.0.1` with the IP addr or hostname of your ClickHouse serve
 curl 127.0.0.1:9363/metrics
 ```
 
-## query_log {#server_configuration_parameters-query-log}
+## query_log {#query-log}
 
 Setting for logging queries received with the [log_queries=1](../../operations/settings/settings.md) setting.
 
@@ -1911,7 +1911,7 @@ Data for the query cache is allocated in DRAM. If memory is scarce, make sure to
 </query_cache>
 ```
 
-## query_thread_log {#server_configuration_parameters-query_thread_log}
+## query_thread_log {#query_thread_log}
 
 Setting for logging threads of queries received with the [log_query_threads=1](../../operations/settings/settings.md#settings-log-query-threads) setting.
 
@@ -1953,7 +1953,7 @@ If the table does not exist, ClickHouse will create it. If the structure of the
 </query_thread_log>
 ```
 
-## query_views_log {#server_configuration_parameters-query_views_log}
+## query_views_log {#query_views_log}
 
 Setting for logging views (live, materialized etc) dependant of queries received with the [log_query_views=1](../../operations/settings/settings.md#settings-log-query-views) setting.
 
@@ -1995,7 +1995,7 @@ If the table does not exist, ClickHouse will create it. If the structure of the
 </query_views_log>
 ```
 
-## text_log {#server_configuration_parameters-text_log}
+## text_log {#text_log}
 
 Settings for the [text_log](../../operations/system-tables/text_log.md#system_tables-text_log) system table for logging text messages.
 
@@ -2037,7 +2037,7 @@ Default: false.
 </clickhouse>
 ```
 
-## trace_log {#server_configuration_parameters-trace_log}
+## trace_log {#trace_log}
 
 Settings for the [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) system table operation.
 
@@ -2073,7 +2073,7 @@ The default server configuration file `config.xml` contains the following settin
 </trace_log>
 ```
 
-## asynchronous_insert_log {#server_configuration_parameters-asynchronous_insert_log}
+## asynchronous_insert_log {#asynchronous_insert_log}
 
 Settings for the [asynchronous_insert_log](../../operations/system-tables/asynchronous_insert_log.md#system_tables-asynchronous_insert_log) system table for logging async inserts.
 
@@ -2112,7 +2112,7 @@ Default: false.
 </clickhouse>
 ```
 
-## crash_log {#server_configuration_parameters-crash_log}
+## crash_log {#crash_log}
 
 Settings for the [crash_log](../../operations/system-tables/crash-log.md) system table operation.
 
@@ -2150,7 +2150,7 @@ The default server configuration file `config.xml` contains the following settin
 </crash_log>
 ```
 
-## backup_log {#server_configuration_parameters-backup_log}
+## backup_log {#backup_log}
 
 Settings for the [backup_log](../../operations/system-tables/backup_log.md) system table for logging `BACKUP` and `RESTORE` operations.
 
@@ -2239,7 +2239,7 @@ For the value of the `incl` attribute, see the section “[Configuration files](
 - [Cluster Discovery](../../operations/cluster-discovery.md)
 - [Replicated database engine](../../engines/database-engines/replicated.md)
 
-## timezone {#server_configuration_parameters-timezone}
+## timezone {#timezone}
 
 The server’s time zone.
 
@@ -2257,7 +2257,7 @@ The time zone is necessary for conversions between String and DateTime formats w
 
 - [session_timezone](../settings/settings.md#session_timezone)
 
-## tcp_port {#server_configuration_parameters-tcp_port}
+## tcp_port {#tcp_port}
 
 Port for communicating with clients over the TCP protocol.
 
@@ -2267,9 +2267,9 @@ Port for communicating with clients over the TCP protocol.
 <tcp_port>9000</tcp_port>
 ```
 
-## tcp_port_secure {#server_configuration_parameters-tcp_port_secure}
+## tcp_port_secure {#tcp_port_secure}
 
-TCP port for secure communication with clients. Use it with [OpenSSL](#server_configuration_parameters-openssl) settings.
+TCP port for secure communication with clients. Use it with [OpenSSL](#openssl) settings.
 
 **Possible values**
 
@@ -2281,7 +2281,7 @@ Positive integer.
 <tcp_port_secure>9440</tcp_port_secure>
 ```
 
-## mysql_port {#server_configuration_parameters-mysql_port}
+## mysql_port {#mysql_port}
 
 Port for communicating with clients over MySQL protocol.
 
@@ -2295,7 +2295,7 @@ Example
 <mysql_port>9004</mysql_port>
 ```
 
-## postgresql_port {#server_configuration_parameters-postgresql_port}
+## postgresql_port {#postgresql_port}
 
 Port for communicating with clients over PostgreSQL protocol.
 
@@ -2326,7 +2326,7 @@ Path on the local filesystem to store temporary data for processing large querie
 ```
 
 
-## user_files_path {#server_configuration_parameters-user_files_path}
+## user_files_path {#user_files_path}
 
 The directory with user files. Used in the table function [file()](../../sql-reference/table-functions/file.md).
 
@@ -2336,7 +2336,7 @@ The directory with user files. Used in the table function [file()](../../sql-ref
 <user_files_path>/var/lib/clickhouse/user_files/</user_files_path>
 ```
 
-## user_scripts_path {#server_configuration_parameters-user_scripts_path}
+## user_scripts_path {#user_scripts_path}
 
 The directory with user scripts files. Used for Executable user defined functions [Executable User Defined Functions](../../sql-reference/functions/index.md#executable-user-defined-functions).
 
@@ -2346,7 +2346,7 @@ The directory with user scripts files. Used for Executable user defined function
 <user_scripts_path>/var/lib/clickhouse/user_scripts/</user_scripts_path>
 ```
 
-## user_defined_path {#server_configuration_parameters-user_defined_path}
+## user_defined_path {#user_defined_path}
 
 The directory with user defined files. Used for SQL user defined functions [SQL User Defined Functions](../../sql-reference/functions/index.md#user-defined-functions).
 
@@ -2442,7 +2442,7 @@ Storage method for data part headers in ZooKeeper.
 
 This setting only applies to the `MergeTree` family. It can be specified:
 
-- Globally in the [merge_tree](#server_configuration_parameters-merge_tree) section of the `config.xml` file.
+- Globally in the [merge_tree](#merge_tree) section of the `config.xml` file.
 
 ClickHouse uses the setting for all the tables on the server. You can change the setting at any time. Existing tables change their behaviour when the setting changes.
 
@@ -48,7 +48,7 @@ Setting `readonly = 1` prohibits the user from changing settings. There is a way
 :::
 
 
-## allow_ddl {#settings_allow_ddl}
+## allow_ddl {#allow_ddl}
 
 Allows or denies [DDL](https://en.wikipedia.org/wiki/Data_definition_language) queries.
 
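A small sketch of the renamed `allow_ddl` setting in action (hypothetical table name; the exact error text varies by version):

```sql
SET allow_ddl = 0;
-- Any DDL statement is now rejected for this session:
CREATE TABLE t (x UInt8) ENGINE = Memory; -- fails: DDL queries are prohibited
```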
@@ -154,6 +154,13 @@ Result:
 Maximum query execution time in seconds.
 At this time, it is not checked for one of the sorting stages, or when merging and finalizing aggregate functions.
 
+The `max_execution_time` parameter can be a bit tricky to understand.
+It operates based on interpolation relative to the current query execution speed (this behaviour is controlled by [timeout_before_checking_execution_speed](#timeout-before-checking-execution-speed)).
+ClickHouse will interrupt a query if the projected execution time exceeds the specified `max_execution_time`.
+By default, the timeout_before_checking_execution_speed is set to 1 second. This means that after just one second of query execution, ClickHouse will begin estimating the total execution time.
+If, for example, `max_execution_time` is set to 3600 seconds (1 hour), ClickHouse will terminate the query if the estimated time exceeds this 3600-second limit.
+If you set `timeout_before_checking_execution_speed `to 0, ClickHouse will use clock time as the basis for `max_execution_time`.
+
 ## timeout_overflow_mode {#timeout-overflow-mode}
 
 What to do if the query is run longer than ‘max_execution_time’: ‘throw’ or ‘break’. By default, throw.
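A sketch of how the two settings in the added paragraph combine (`big_table` is a hypothetical table):

```sql
-- With the speed check disabled (set to 0), max_execution_time is enforced on
-- plain wall-clock time rather than on the projected total execution time.
SELECT count()
FROM big_table
SETTINGS
    max_execution_time = 3600,
    timeout_before_checking_execution_speed = 0;
```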
@@ -177,7 +177,7 @@ If `enable_optimize_predicate_expression = 1`, then the execution time of these
 
 If `enable_optimize_predicate_expression = 0`, then the execution time of the second query is much longer because the `WHERE` clause applies to all the data after the subquery finishes.
 
-## fallback_to_stale_replicas_for_distributed_queries {#settings-fallback_to_stale_replicas_for_distributed_queries}
+## fallback_to_stale_replicas_for_distributed_queries {#fallback_to_stale_replicas_for_distributed_queries}
 
 Forces a query to an out-of-date replica if updated data is not available. See [Replication](../../engines/table-engines/mergetree-family/replication.md).
 
@@ -187,7 +187,7 @@ Used when performing `SELECT` from a distributed table that points to replicated
 
 By default, 1 (enabled).
 
-## force_index_by_date {#settings-force_index_by_date}
+## force_index_by_date {#force_index_by_date}
 
 Disables query execution if the index can’t be used by date.
 
@@ -203,7 +203,7 @@ Works with tables in the MergeTree family.
 
 If `force_primary_key=1`, ClickHouse checks to see if the query has a primary key condition that can be used for restricting data ranges. If there is no suitable condition, it throws an exception. However, it does not check whether the condition reduces the amount of data to read. For more information about data ranges in MergeTree tables, see [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md).
 
-## use_skip_indexes {#settings-use_skip_indexes}
+## use_skip_indexes {#use_skip_indexes}
 
 Use data skipping indexes during query execution.
 
@@ -214,7 +214,7 @@ Possible values:
 
 Default value: 1.
 
-## force_data_skipping_indices {#settings-force_data_skipping_indices}
+## force_data_skipping_indices {#force_data_skipping_indices}
 
 Disables query execution if passed data skipping indices wasn't used.
 
@@ -241,7 +241,7 @@ SELECT * FROM data_01515 WHERE d1 = 0 SETTINGS force_data_skipping_indices='`d1_
 SELECT * FROM data_01515 WHERE d1 = 0 AND assumeNotNull(d1_null) = 0 SETTINGS force_data_skipping_indices='`d1_idx`, d1_null_idx'; -- Ok.
 ```
 
-## ignore_data_skipping_indices {#settings-ignore_data_skipping_indices}
+## ignore_data_skipping_indices {#ignore_data_skipping_indices}
 
 Ignores the skipping indexes specified if used by the query.
 
@@ -401,7 +401,7 @@ Enables or disables [fsync](http://pubs.opengroup.org/onlinepubs/9699919799/func
 
 It makes sense to disable it if the server has millions of tiny tables that are constantly being created and destroyed.
 
-## function_range_max_elements_in_block {#settings-function_range_max_elements_in_block}
+## function_range_max_elements_in_block {#function_range_max_elements_in_block}
 
 Sets the safety threshold for data volume generated by function [range](../../sql-reference/functions/array-functions.md/#range). Defines the maximum number of values generated by function per block of data (sum of array sizes for every row in a block).
 
@@ -416,7 +416,7 @@ Default value: `500,000,000`.
 - [max_block_size](#setting-max_block_size)
 - [min_insert_block_size_rows](#min-insert-block-size-rows)
 
-## enable_http_compression {#settings-enable_http_compression}
+## enable_http_compression {#enable_http_compression}
 
 Enables or disables data compression in the response to an HTTP request.
 
@@ -429,15 +429,15 @@ Possible values:
 
 Default value: 0.
 
-## http_zlib_compression_level {#settings-http_zlib_compression_level}
+## http_zlib_compression_level {#http_zlib_compression_level}
 
-Sets the level of data compression in the response to an HTTP request if [enable_http_compression = 1](#settings-enable_http_compression).
+Sets the level of data compression in the response to an HTTP request if [enable_http_compression = 1](#enable_http_compression).
 
 Possible values: Numbers from 1 to 9.
 
 Default value: 3.
 
-## http_native_compression_disable_checksumming_on_decompress {#settings-http_native_compression_disable_checksumming_on_decompress}
+## http_native_compression_disable_checksumming_on_decompress {#http_native_compression_disable_checksumming_on_decompress}
 
 Enables or disables checksum verification when decompressing the HTTP POST data from the client. Used only for ClickHouse native compression format (not used with `gzip` or `deflate`).
 
@@ -480,7 +480,7 @@ Possible values:
 
 Default value: `1000`.
 
-## send_progress_in_http_headers {#settings-send_progress_in_http_headers}
+## send_progress_in_http_headers {#send_progress_in_http_headers}
 
 Enables or disables `X-ClickHouse-Progress` HTTP response headers in `clickhouse-server` responses.
 
@@ -518,7 +518,7 @@ Possible values:
 
 Default value: `1`.
 
-## join_default_strictness {#settings-join_default_strictness}
+## join_default_strictness {#join_default_strictness}
 
 Sets default strictness for [JOIN clauses](../../sql-reference/statements/select/join.md/#select-join).
 
@@ -531,7 +531,7 @@ Possible values:
 
 Default value: `ALL`.
 
-## join_algorithm {#settings-join_algorithm}
+## join_algorithm {#join_algorithm}
 
 Specifies which [JOIN](../../sql-reference/statements/select/join.md) algorithm is used.
 
@@ -547,7 +547,7 @@ Possible values:
 
 [Grace hash join](https://en.wikipedia.org/wiki/Hash_join#Grace_hash_join) is used. Grace hash provides an algorithm option that provides performant complex joins while limiting memory use.
 
-The first phase of a grace join reads the right table and splits it into N buckets depending on the hash value of key columns (initially, N is `grace_hash_join_initial_buckets`). This is done in a way to ensure that each bucket can be processed independently. Rows from the first bucket are added to an in-memory hash table while the others are saved to disk. If the hash table grows beyond the memory limit (e.g., as set by [`max_bytes_in_join`](/docs/en/operations/settings/query-complexity.md/#settings-max_bytes_in_join)), the number of buckets is increased and the assigned bucket for each row. Any rows which don’t belong to the current bucket are flushed and reassigned.
+The first phase of a grace join reads the right table and splits it into N buckets depending on the hash value of key columns (initially, N is `grace_hash_join_initial_buckets`). This is done in a way to ensure that each bucket can be processed independently. Rows from the first bucket are added to an in-memory hash table while the others are saved to disk. If the hash table grows beyond the memory limit (e.g., as set by [`max_bytes_in_join`](/docs/en/operations/settings/query-complexity.md/#max_bytes_in_join)), the number of buckets is increased and the assigned bucket for each row. Any rows which don’t belong to the current bucket are flushed and reassigned.
 
 Supports `INNER/LEFT/RIGHT/FULL ALL/ANY JOIN`.
 
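A hedged sketch of opting into the grace hash algorithm described above (`orders` and `customers` are hypothetical tables; `grace_hash_join_initial_buckets` is the initial-bucket knob the hunk mentions):

```sql
SELECT *
FROM orders AS o
INNER JOIN customers AS c ON o.customer_id = c.id
SETTINGS join_algorithm = 'grace_hash',
         grace_hash_join_initial_buckets = 16;
```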
@@ -588,7 +588,7 @@ Possible values:
 ClickHouse always tries to use `partial_merge` join if possible, otherwise, it uses `hash`. *Deprecated*, same as `partial_merge,hash`.
 
 
-## join_any_take_last_row {#settings-join_any_take_last_row}
+## join_any_take_last_row {#join_any_take_last_row}
 
 Changes the behaviour of join operations with `ANY` strictness.
 
@@ -607,7 +607,7 @@ See also:
 
 - [JOIN clause](../../sql-reference/statements/select/join.md/#select-join)
 - [Join table engine](../../engines/table-engines/special/join.md)
-- [join_default_strictness](#settings-join_default_strictness)
+- [join_default_strictness](#join_default_strictness)
 
 ## join_use_nulls {#join_use_nulls}
 
@@ -879,7 +879,7 @@ Possible values:
 
 Default value: 2013265920.
 
-## min_bytes_to_use_direct_io {#settings-min-bytes-to-use-direct-io}
+## min_bytes_to_use_direct_io {#min-bytes-to-use-direct-io}
 
 The minimum data volume required for using direct I/O access to the storage disk.
 
@@ -917,7 +917,7 @@ Possible values:
 
 Default value: `1`.
 
-## log_queries {#settings-log-queries}
+## log_queries {#log-queries}
 
 Setting up query logging.
 
@@ -929,7 +929,7 @@ Example:
 log_queries=1
 ```
 
-## log_queries_min_query_duration_ms {#settings-log-queries-min-query-duration-ms}
+## log_queries_min_query_duration_ms {#log-queries-min-query-duration-ms}
 
 If enabled (non-zero), queries faster than the value of this setting will not be logged (you can think about this as a `long_query_time` for [MySQL Slow Query Log](https://dev.mysql.com/doc/refman/5.7/en/slow-query-log.html)), and this basically means that you will not find them in the following tables:
 
@@ -944,7 +944,7 @@ Only the queries with the following type will get to the log:
 - Type: milliseconds
 - Default value: 0 (any query)
 
-## log_queries_min_type {#settings-log-queries-min-type}
+## log_queries_min_type {#log-queries-min-type}
 
 `query_log` minimal type to log.
 
@@ -962,11 +962,11 @@ Can be used to limit which entities will go to `query_log`, say you are interest
 log_queries_min_type='EXCEPTION_WHILE_PROCESSING'
 ```
 
-## log_query_threads {#settings-log-query-threads}
+## log_query_threads {#log-query-threads}
 
 Setting up query threads logging.
 
-Query threads log into the [system.query_thread_log](../../operations/system-tables/query_thread_log.md) table. This setting has effect only when [log_queries](#settings-log-queries) is true. Queries’ threads run by ClickHouse with this setup are logged according to the rules in the [query_thread_log](../../operations/server-configuration-parameters/settings.md/#server_configuration_parameters-query_thread_log) server configuration parameter.
+Query threads log into the [system.query_thread_log](../../operations/system-tables/query_thread_log.md) table. This setting has effect only when [log_queries](#log-queries) is true. Queries’ threads run by ClickHouse with this setup are logged according to the rules in the [query_thread_log](../../operations/server-configuration-parameters/settings.md/#server_configuration_parameters-query_thread_log) server configuration parameter.
 
 Possible values:
 
@@ -981,7 +981,7 @@ Default value: `1`.
 log_query_threads=1
 ```
 
-## log_query_views {#settings-log-query-views}
+## log_query_views {#log-query-views}
 
 Setting up query views logging.
 
@@ -993,7 +993,7 @@ Example:
 log_query_views=1
 ```
 
-## log_formatted_queries {#settings-log-formatted-queries}
+## log_formatted_queries {#log-formatted-queries}
 
 Allows to log formatted queries to the [system.query_log](../../operations/system-tables/query_log.md) system table (populates `formatted_query` column in the [system.query_log](../../operations/system-tables/query_log.md)).
 
@@ -1004,7 +1004,7 @@ Possible values:
 
 Default value: `0`.
 
-## log_comment {#settings-log-comment}
+## log_comment {#log-comment}
 
 Specifies the value for the `log_comment` field of the [system.query_log](../system-tables/query_log.md) table and comment text for the server log.
 
@@ -1012,7 +1012,7 @@ It can be used to improve the readability of server logs. Additionally, it helps
 
 Possible values:
 
-- Any string no longer than [max_query_size](#settings-max_query_size). If the max_query_size is exceeded, the server throws an exception.
+- Any string no longer than [max_query_size](#max_query_size). If the max_query_size is exceeded, the server throws an exception.
 
 Default value: empty string.
 
@@ -1036,7 +1036,7 @@ Result:
 └─────────────┴───────────┘
 ```
 
-## log_processors_profiles {#settings-log_processors_profiles}
+## log_processors_profiles {#log_processors_profiles}
 
 Write time that processor spent during execution/waiting for data to `system.processors_profile_log` table.
 
@@ -1045,7 +1045,7 @@ See also:
 - [`system.processors_profile_log`](../../operations/system-tables/processors_profile_log.md)
 - [`EXPLAIN PIPELINE`](../../sql-reference/statements/explain.md#explain-pipeline)
 
-## max_insert_block_size {#settings-max_insert_block_size}
+## max_insert_block_size {#max_insert_block_size}
 
 The size of blocks (in a count of rows) to form for insertion into a table.
 This setting only applies in cases when the server forms the blocks.
@@ -1079,7 +1079,7 @@ Possible values:
 
 Default value: 268435456.
 
-## max_replica_delay_for_distributed_queries {#settings-max_replica_delay_for_distributed_queries}
+## max_replica_delay_for_distributed_queries {#max_replica_delay_for_distributed_queries}
 
 Disables lagging replicas for distributed queries. See [Replication](../../engines/table-engines/mergetree-family/replication.md).
 
@@ -1096,7 +1096,7 @@ Default value: 300.
 
 Used when performing `SELECT` from a distributed table that points to replicated tables.
 
-## max_threads {#settings-max_threads}
+## max_threads {#max_threads}
 
 The maximum number of query processing threads, excluding threads for retrieving data from remote servers (see the ‘max_distributed_connections’ parameter).
 
@@ -1109,7 +1109,7 @@ For queries that are completed quickly because of a LIMIT, you can set a lower
 
 The smaller the `max_threads` value, the less memory is consumed.
 
-## max_insert_threads {#settings-max-insert-threads}
+## max_insert_threads {#max-insert-threads}
 
 The maximum number of threads to execute the `INSERT SELECT` query.
 
@@ -1120,7 +1120,7 @@ Possible values:
 
 Default value: 0.
 
-Parallel `INSERT SELECT` has effect only if the `SELECT` part is executed in parallel, see [max_threads](#settings-max_threads) setting.
+Parallel `INSERT SELECT` has effect only if the `SELECT` part is executed in parallel, see [max_threads](#max_threads) setting.
 Higher values will lead to higher memory usage.
 
 ## max_compress_block_size {#max-compress-block-size}
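A sketch of the interplay described in the hunk above: `INSERT SELECT` parallelism only materializes when the `SELECT` side itself runs in parallel (`dst` and `src` are hypothetical tables):

```sql
INSERT INTO dst
SELECT * FROM src
SETTINGS max_threads = 8,        -- parallel SELECT side
         max_insert_threads = 8; -- parallel INSERT side
```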
@ -1149,7 +1149,7 @@ We are writing a URL column with the String type (average size of 60 bytes per v
|
|||||||
This is an expert-level setting, and you shouldn't change it if you're just getting started with ClickHouse.
|
This is an expert-level setting, and you shouldn't change it if you're just getting started with ClickHouse.
|
||||||
:::
|
:::
|
||||||
|
|
||||||
-## max_query_size {#settings-max_query_size}
+## max_query_size {#max_query_size}

The maximum number of bytes of a query string parsed by the SQL parser.
Data in the VALUES clause of INSERT queries is processed by a separate stream parser (that consumes O(1) RAM) and not affected by this restriction.
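A minimal sketch of raising the parser limit for an unusually long query text:

```sql
-- allow query strings of up to 1 MiB in this session
SET max_query_size = 1048576;
```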
@ -1393,7 +1393,7 @@ Default value: 5000.

## stream_flush_interval_ms {#stream-flush-interval-ms}

-Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size](#settings-max_insert_block_size) rows.
+Works for tables with streaming in the case of a timeout, or when a thread generates [max_insert_block_size](#max_insert_block_size) rows.

The default value is 7500.

@ -1405,7 +1405,7 @@ Timeout for polling data from/to streaming storages.

Default value: 500.

-## load_balancing {#settings-load_balancing}
+## load_balancing {#load_balancing}

Specifies the algorithm of replicas selection that is used for distributed query processing.

@ -1419,7 +1419,7 @@ ClickHouse supports the following algorithms of choosing replicas:

See also:

-- [distributed_replica_max_ignored_errors](#settings-distributed_replica_max_ignored_errors)
+- [distributed_replica_max_ignored_errors](#distributed_replica_max_ignored_errors)

### Random (by Default) {#load_balancing-random}

@ -1473,20 +1473,20 @@ load_balancing = round_robin

This algorithm uses a round-robin policy across replicas with the same number of errors (only the queries with `round_robin` policy is accounted).

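A minimal sketch of selecting this policy for the current session:

```sql
SET load_balancing = 'round_robin';
```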
-## prefer_localhost_replica {#settings-prefer-localhost-replica}
+## prefer_localhost_replica {#prefer-localhost-replica}

Enables/disables preferable using the localhost replica when processing distributed queries.

Possible values:

- 1 — ClickHouse always sends a query to the localhost replica if it exists.
-- 0 — ClickHouse uses the balancing strategy specified by the [load_balancing](#settings-load_balancing) setting.
+- 0 — ClickHouse uses the balancing strategy specified by the [load_balancing](#load_balancing) setting.

Default value: 1.

:::note
-Disable this setting if you use [max_parallel_replicas](#settings-max_parallel_replicas) without [parallel_replicas_custom_key](#settings-parallel_replicas_custom_key).
+Disable this setting if you use [max_parallel_replicas](#max_parallel_replicas) without [parallel_replicas_custom_key](#parallel_replicas_custom_key).
-If [parallel_replicas_custom_key](#settings-parallel_replicas_custom_key) is set, disable this setting only if it's used on a cluster with multiple shards containing multiple replicas.
+If [parallel_replicas_custom_key](#parallel_replicas_custom_key) is set, disable this setting only if it's used on a cluster with multiple shards containing multiple replicas.
If it's used on a cluster with a single shard and multiple replicas, disabling this setting will have negative effects.
:::

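As a hedged sketch, deferring to the configured balancing policy instead of always picking the local replica:

```sql
SET prefer_localhost_replica = 0;
```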
@ -1500,7 +1500,7 @@ See the section “WITH TOTALS modifier”.
The threshold for `totals_mode = 'auto'`.
See the section “WITH TOTALS modifier”.

-## max_parallel_replicas {#settings-max_parallel_replicas}
+## max_parallel_replicas {#max_parallel_replicas}

The maximum number of replicas for each shard when executing a query.

@ -1527,23 +1527,23 @@ A query may be processed faster if it is executed on several servers in parallel
- The sampling key is an expression that is expensive to calculate.
- The cluster latency distribution has a long tail, so that querying more servers increases the query overall latency.

-### Parallel processing using [parallel_replicas_custom_key](#settings-parallel_replicas_custom_key)
+### Parallel processing using [parallel_replicas_custom_key](#parallel_replicas_custom_key)

This setting is useful for any replicated table.

-## parallel_replicas_custom_key {#settings-parallel_replicas_custom_key}
+## parallel_replicas_custom_key {#parallel_replicas_custom_key}

An arbitrary integer expression that can be used to split work between replicas for a specific table.
The value can be any integer expression.
-A query may be processed faster if it is executed on several servers in parallel but it depends on the used [parallel_replicas_custom_key](#settings-parallel_replicas_custom_key)
+A query may be processed faster if it is executed on several servers in parallel but it depends on the used [parallel_replicas_custom_key](#parallel_replicas_custom_key)
-and [parallel_replicas_custom_key_filter_type](#settings-parallel_replicas_custom_key_filter_type).
+and [parallel_replicas_custom_key_filter_type](#parallel_replicas_custom_key_filter_type).

Simple expressions using primary keys are preferred.

If the setting is used on a cluster that consists of a single shard with multiple replicas, those replicas will be converted into virtual shards.
Otherwise, it will behave same as for `SAMPLE` key, it will use multiple replicas of each shard.

-## parallel_replicas_custom_key_filter_type {#settings-parallel_replicas_custom_key_filter_type}
+## parallel_replicas_custom_key_filter_type {#parallel_replicas_custom_key_filter_type}

How to use `parallel_replicas_custom_key` expression for splitting work between replicas.

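Putting the three settings together, a hedged sketch (it assumes a replicated table `hits` with a `UInt64` column `user_id`; neither name comes from this patch):

```sql
SELECT count()
FROM hits
SETTINGS
    max_parallel_replicas = 3,
    parallel_replicas_custom_key = 'user_id',
    parallel_replicas_custom_key_filter_type = 'default';
```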
@ -1637,7 +1637,7 @@ Possible values:

Default value: `1`.

-## query_cache_store_results_of_queries_with_nondeterministic_functions {#query--store-results-of-queries-with-nondeterministic-functions}
+## query_cache_store_results_of_queries_with_nondeterministic_functions {#query-cache-store-results-of-queries-with-nondeterministic-functions}

If turned on, then results of `SELECT` queries with non-deterministic functions (e.g. `rand()`, `now()`) can be cached in the [query cache](../query-cache.md).

@ -1732,7 +1732,7 @@ Possible values:

Default value: 0 (no restriction).

-## insert_quorum {#settings-insert_quorum}
+## insert_quorum {#insert_quorum}

Enables the quorum writes.

@ -1746,7 +1746,7 @@ Quorum writes

`INSERT` succeeds only when ClickHouse manages to correctly write data to the `insert_quorum` of replicas during the `insert_quorum_timeout`. If for any reason the number of replicas with successful writes does not reach the `insert_quorum`, the write is considered failed and ClickHouse will delete the inserted block from all the replicas where data has already been written.

-When `insert_quorum_parallel` is disabled, all replicas in the quorum are consistent, i.e. they contain data from all previous `INSERT` queries (the `INSERT` sequence is linearized). When reading data written using `insert_quorum` and `insert_quorum_parallel` is disabled, you can turn on sequential consistency for `SELECT` queries using [select_sequential_consistency](#settings-select_sequential_consistency).
+When `insert_quorum_parallel` is disabled, all replicas in the quorum are consistent, i.e. they contain data from all previous `INSERT` queries (the `INSERT` sequence is linearized). When reading data written using `insert_quorum` and `insert_quorum_parallel` is disabled, you can turn on sequential consistency for `SELECT` queries using [select_sequential_consistency](#select_sequential_consistency).

ClickHouse generates an exception:

@ -1755,11 +1755,11 @@ ClickHouse generates an exception:

See also:

-- [insert_quorum_timeout](#settings-insert_quorum_timeout)
+- [insert_quorum_timeout](#insert_quorum_timeout)
-- [insert_quorum_parallel](#settings-insert_quorum_parallel)
+- [insert_quorum_parallel](#insert_quorum_parallel)
-- [select_sequential_consistency](#settings-select_sequential_consistency)
+- [select_sequential_consistency](#select_sequential_consistency)

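For illustration, a sketch of a quorum write (it assumes a `ReplicatedMergeTree` table `t` with at least two replicas):

```sql
-- the INSERT returns only after two replicas have confirmed the write
INSERT INTO t SETTINGS insert_quorum = 2 VALUES (1);
```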
-## insert_quorum_timeout {#settings-insert_quorum_timeout}
+## insert_quorum_timeout {#insert_quorum_timeout}

Write to a quorum timeout in milliseconds. If the timeout has passed and no write has taken place yet, ClickHouse will generate an exception and the client must repeat the query to write the same block to the same or any other replica.

@ -1767,11 +1767,11 @@ Default value: 600 000 milliseconds (ten minutes).

See also:

-- [insert_quorum](#settings-insert_quorum)
+- [insert_quorum](#insert_quorum)
-- [insert_quorum_parallel](#settings-insert_quorum_parallel)
+- [insert_quorum_parallel](#insert_quorum_parallel)
-- [select_sequential_consistency](#settings-select_sequential_consistency)
+- [select_sequential_consistency](#select_sequential_consistency)

-## insert_quorum_parallel {#settings-insert_quorum_parallel}
+## insert_quorum_parallel {#insert_quorum_parallel}

Enables or disables parallelism for quorum `INSERT` queries. If enabled, additional `INSERT` queries can be sent while previous queries have not yet finished. If disabled, additional writes to the same table will be rejected.

@ -1784,11 +1784,11 @@ Default value: 1.

See also:

-- [insert_quorum](#settings-insert_quorum)
+- [insert_quorum](#insert_quorum)
-- [insert_quorum_timeout](#settings-insert_quorum_timeout)
+- [insert_quorum_timeout](#insert_quorum_timeout)
-- [select_sequential_consistency](#settings-select_sequential_consistency)
+- [select_sequential_consistency](#select_sequential_consistency)

-## select_sequential_consistency {#settings-select_sequential_consistency}
+## select_sequential_consistency {#select_sequential_consistency}

Enables or disables sequential consistency for `SELECT` queries. Requires `insert_quorum_parallel` to be disabled (enabled by default).

@ -1807,11 +1807,11 @@ When `insert_quorum_parallel` is enabled (the default), then `select_sequential_

See also:

-- [insert_quorum](#settings-insert_quorum)
+- [insert_quorum](#insert_quorum)
-- [insert_quorum_timeout](#settings-insert_quorum_timeout)
+- [insert_quorum_timeout](#insert_quorum_timeout)
-- [insert_quorum_parallel](#settings-insert_quorum_parallel)
+- [insert_quorum_parallel](#insert_quorum_parallel)

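A sketch of a sequentially consistent read after such quorum writes (same hypothetical table `t`, with `insert_quorum_parallel` disabled):

```sql
SELECT count() FROM t SETTINGS select_sequential_consistency = 1;
```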
-## insert_deduplicate {#settings-insert-deduplicate}
+## insert_deduplicate {#insert-deduplicate}

Enables or disables block deduplication of `INSERT` (for Replicated\* tables).

@ -1938,7 +1938,7 @@ For the replicated tables, by default, only 10000 of the most recent inserts for
We recommend enabling the [async_block_ids_cache](merge-tree-settings.md/#use-async-block-ids-cache) to increase the efficiency of deduplication.
This function does not work for non-replicated tables.

-## deduplicate_blocks_in_dependent_materialized_views {#settings-deduplicate-blocks-in-dependent-materialized-views}
+## deduplicate_blocks_in_dependent_materialized_views {#deduplicate-blocks-in-dependent-materialized-views}

Enables or disables the deduplication check for materialized views that receive data from Replicated\* tables.

@ -2048,7 +2048,7 @@ Possible values:

Default value: 10000

-## max_network_bytes {#settings-max-network-bytes}
+## max_network_bytes {#max-network-bytes}

Limits the data volume (in bytes) that is received or transmitted over the network when executing a query. This setting applies to every individual query.

@ -2059,7 +2059,7 @@ Possible values:

Default value: 0.

-## max_network_bandwidth {#settings-max-network-bandwidth}
+## max_network_bandwidth {#max-network-bandwidth}

Limits the speed of the data exchange over the network in bytes per second. This setting applies to every query.

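For instance, a sketch of throttling per-query network throughput to roughly 10 MiB/s:

```sql
SET max_network_bandwidth = 10485760;
```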
@ -2070,7 +2070,7 @@ Possible values:

Default value: 0.

-## max_network_bandwidth_for_user {#settings-max-network-bandwidth-for-user}
+## max_network_bandwidth_for_user {#max-network-bandwidth-for-user}

Limits the speed of the data exchange over the network in bytes per second. This setting applies to all concurrently running queries performed by a single user.

@ -2081,7 +2081,7 @@ Possible values:

Default value: 0.

-## max_network_bandwidth_for_all_users {#settings-max-network-bandwidth-for-all-users}
+## max_network_bandwidth_for_all_users {#max-network-bandwidth-for-all-users}

Limits the speed that data is exchanged at over the network in bytes per second. This setting applies to all concurrently running queries on the server.

@ -2092,7 +2092,7 @@ Possible values:

Default value: 0.

-## count_distinct_implementation {#settings-count_distinct_implementation}
+## count_distinct_implementation {#count_distinct_implementation}

Specifies which of the `uniq*` functions should be used to perform the [COUNT(DISTINCT …)](../../sql-reference/aggregate-functions/reference/count.md/#agg_function-count) construction.

@ -2106,7 +2106,7 @@ Possible values:

Default value: `uniqExact`.

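A minimal sketch of trading exactness for speed (the `hits` table and `UserID` column are hypothetical):

```sql
SET count_distinct_implementation = 'uniqCombined';
-- COUNT(DISTINCT ...) is now rewritten to uniqCombined(...)
SELECT count(DISTINCT UserID) FROM hits;
```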
-## skip_unavailable_shards {#settings-skip_unavailable_shards}
+## skip_unavailable_shards {#skip_unavailable_shards}

Enables or disables silently skipping of unavailable shards.

@ -2270,7 +2270,7 @@ Possible values:

Default value: 0

-## force_optimize_skip_unused_shards_nesting {#settings-force_optimize_skip_unused_shards_nesting}
+## force_optimize_skip_unused_shards_nesting {#force_optimize_skip_unused_shards_nesting}

Controls [`force_optimize_skip_unused_shards`](#force-optimize-skip-unused-shards) (hence still requires [`force_optimize_skip_unused_shards`](#force-optimize-skip-unused-shards)) depends on the nesting level of the distributed query (case when you have `Distributed` table that look into another `Distributed` table).

@ -2400,7 +2400,7 @@ Enables caching of rows number during count from files in table functions `file`

Enabled by default.

-## distributed_replica_error_half_life {#settings-distributed_replica_error_half_life}
+## distributed_replica_error_half_life {#distributed_replica_error_half_life}

- Type: seconds
- Default value: 60 seconds
@ -2411,10 +2411,10 @@ See also:

- [load_balancing](#load_balancing-round_robin)
- [Table engine Distributed](../../engines/table-engines/special/distributed.md)
-- [distributed_replica_error_cap](#settings-distributed_replica_error_cap)
+- [distributed_replica_error_cap](#distributed_replica_error_cap)
-- [distributed_replica_max_ignored_errors](#settings-distributed_replica_max_ignored_errors)
+- [distributed_replica_max_ignored_errors](#distributed_replica_max_ignored_errors)

-## distributed_replica_error_cap {#settings-distributed_replica_error_cap}
+## distributed_replica_error_cap {#distributed_replica_error_cap}

- Type: unsigned int
- Default value: 1000
@ -2425,10 +2425,10 @@ See also:

- [load_balancing](#load_balancing-round_robin)
- [Table engine Distributed](../../engines/table-engines/special/distributed.md)
-- [distributed_replica_error_half_life](#settings-distributed_replica_error_half_life)
+- [distributed_replica_error_half_life](#distributed_replica_error_half_life)
-- [distributed_replica_max_ignored_errors](#settings-distributed_replica_max_ignored_errors)
+- [distributed_replica_max_ignored_errors](#distributed_replica_max_ignored_errors)

-## distributed_replica_max_ignored_errors {#settings-distributed_replica_max_ignored_errors}
+## distributed_replica_max_ignored_errors {#distributed_replica_max_ignored_errors}

- Type: unsigned int
- Default value: 0
@ -2439,7 +2439,7 @@ See also:

- [load_balancing](#load_balancing-round_robin)
- [Table engine Distributed](../../engines/table-engines/special/distributed.md)
-- [distributed_replica_error_cap](#settings-distributed_replica_error_cap)
+- [distributed_replica_error_cap](#distributed_replica_error_cap)
- [distributed_replica_error_half_life](#settings-distributed_replica_error_half_life)

## distributed_directory_monitor_sleep_time_ms {#distributed_directory_monitor_sleep_time_ms}
@ -2595,7 +2595,7 @@ Possible values:

Default value: 0.

-## allow_introspection_functions {#settings-allow_introspection_functions}
+## allow_introspection_functions {#allow_introspection_functions}

Enables or disables [introspection functions](../../sql-reference/functions/introspection.md) for query profiling.

@ -3136,7 +3136,7 @@ Do not enable this feature in version `<= 21.8`. It's not properly implemented a
## aggregate_functions_null_for_empty {#aggregate_functions_null_for_empty}

Enables or disables rewriting all aggregate functions in a query, adding [-OrNull](../../sql-reference/aggregate-functions/combinators.md/#agg-functions-combinator-ornull) suffix to them. Enable it for SQL standard compatibility.
-It is implemented via query rewrite (similar to [count_distinct_implementation](#settings-count_distinct_implementation) setting) to get consistent results for distributed queries.
+It is implemented via query rewrite (similar to [count_distinct_implementation](#count_distinct_implementation) setting) to get consistent results for distributed queries.

Possible values:

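For illustration, a hedged sketch of how the rewrite changes behavior on an empty input set:

```sql
SET aggregate_functions_null_for_empty = 1;
-- sum() is rewritten to sumOrNull(), so an empty set yields NULL instead of 0
SELECT sum(number) FROM numbers(0);
```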
@ -4609,7 +4609,7 @@ Default: 0

## rewrite_count_distinct_if_with_count_distinct_implementation

-Allows you to rewrite `countDistinctIf` with [count_distinct_implementation](#settings-count_distinct_implementation) setting.
+Allows you to rewrite `countDistinctIf` with [count_distinct_implementation](#count_distinct_implementation) setting.

Possible values:

@ -123,7 +123,7 @@ LAYOUT(...) -- Memory layout configuration
LIFETIME(...) -- Lifetime of dictionary in memory
```

-## Storing Dictionaries in Memory {#storig-dictionaries-in-memory}
+## Storing Dictionaries in Memory {#storing-dictionaries-in-memory}

There are a variety of ways to store dictionaries in memory.

@ -657,7 +657,7 @@ SELECT arraySlice([1, 2, NULL, 4, 5], 2, 3) AS res;

Array elements set to `NULL` are handled as normal values.

-## arraySort(\[func,\] arr, …) {#array_functions-sort}
+## arraySort(\[func,\] arr, …) {#sort}

Sorts the elements of the `arr` array in ascending order. If the `func` function is specified, sorting order is determined by the result of the `func` function applied to the elements of the array. If `func` accepts multiple arguments, the `arraySort` function is passed several arrays that the arguments of `func` will correspond to. Detailed examples are shown at the end of `arraySort` description.

@ -716,7 +716,7 @@ SELECT arraySort((x) -> -x, [1, 2, 3]) as res;
└─────────┘
```

-For each element of the source array, the lambda function returns the sorting key, that is, \[1 –\> -1, 2 –\> -2, 3 –\> -3\]. Since the `arraySort` function sorts the keys in ascending order, the result is \[3, 2, 1\]. Thus, the `(x) –> -x` lambda function sets the [descending order](#array_functions-reverse-sort) in a sorting.
+For each element of the source array, the lambda function returns the sorting key, that is, \[1 –\> -1, 2 –\> -2, 3 –\> -3\]. Since the `arraySort` function sorts the keys in ascending order, the result is \[3, 2, 1\]. Thus, the `(x) –> -x` lambda function sets the [descending order](#reverse-sort) in a sorting.

The lambda function can accept multiple arguments. In this case, you need to pass the `arraySort` function several arrays of identical length that the arguments of lambda function will correspond to. The resulting array will consist of elements from the first input array; elements from the next input array(s) specify the sorting keys. For example:

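A sketch of that multi-argument form, where the second array supplies the sorting keys:

```sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) AS res;
-- res = ['world','hello']
```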
@ -762,7 +762,7 @@ To improve sorting efficiency, the [Schwartzian transform](https://en.wikipedia.

Same as `arraySort` with additional `limit` argument allowing partial sorting. Returns an array of the same size as the original array where elements in range `[1..limit]` are sorted in ascending order. Remaining elements `(limit..N]` shall contain elements in unspecified order.

-## arrayReverseSort(\[func,\] arr, …) {#array_functions-reverse-sort}
+## arrayReverseSort(\[func,\] arr, …) {#reverse-sort}

Sorts the elements of the `arr` array in descending order. If the `func` function is specified, `arr` is sorted according to the result of the `func` function applied to the elements of the array, and then the sorted array is reversed. If `func` accepts multiple arguments, the `arrayReverseSort` function is passed several arrays that the arguments of `func` will correspond to. Detailed examples are shown at the end of `arrayReverseSort` description.

@ -239,7 +239,7 @@ int32samoa: 1546300800

**See Also**

-- [formatDateTime](#date_time_functions-formatDateTime) - supports non-constant timezone.
+- [formatDateTime](#formatDateTime) - supports non-constant timezone.
- [toString](type-conversion-functions.md#tostring) - supports non-constant timezone.

## timeZoneOf
@ -1628,7 +1628,7 @@ SELECT timeSlots(toDateTime64('1980-12-12 21:01:02.1234', 4, 'UTC'), toDecimal64
└───────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```

-## formatDateTime {#date_time_functions-formatDateTime}
+## formatDateTime {#formatDateTime}

Formats a Time according to the given Format string. Format is a constant expression, so you cannot have multiple formats for a single result column.

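A minimal sketch using MySQL-style substitutions (`%Y`, `%m`, `%d`):

```sql
SELECT formatDateTime(toDate('2010-01-04'), '%Y-%m-%d');
-- 2010-01-04
```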
@ -1753,7 +1753,7 @@ LIMIT 10
- [formatDateTimeInJodaSyntax](##formatDateTimeInJodaSyntax)


-## formatDateTimeInJodaSyntax {#date_time_functions-formatDateTimeInJodaSyntax}
+## formatDateTimeInJodaSyntax {#formatDateTimeInJodaSyntax}

Similar to formatDateTime, except that it formats datetime in Joda style instead of MySQL style. Refer to https://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html.

@ -19,7 +19,7 @@ halfMD5(par1, ...)
```

The function is relatively slow (5 million short strings per second per processor core).
-Consider using the [sipHash64](#hash_functions-siphash64) function instead.
+Consider using the [sipHash64](#siphash64) function instead.

**Arguments**

@ -45,13 +45,13 @@ SELECT halfMD5(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00')

Calculates the MD4 from a string and returns the resulting set of bytes as FixedString(16).

-## MD5 {#hash_functions-md5}
+## MD5 {#md5}

Calculates the MD5 from a string and returns the resulting set of bytes as FixedString(16).
If you do not need MD5 in particular, but you need a decent cryptographic 128-bit hash, use the ‘sipHash128’ function instead.
If you want to get the same result as output by the md5sum utility, use lower(hex(MD5(s))).

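For instance, reproducing the md5sum digest of the string `abc` (a sketch):

```sql
SELECT lower(hex(MD5('abc')));
-- 900150983cd24fb0d6963f7d28e17f72, the same as `echo -n abc | md5sum`
```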
-## sipHash64 {#hash_functions-siphash64}
+## sipHash64 {#siphash64}

Produces a 64-bit [SipHash](https://en.wikipedia.org/wiki/SipHash) hash value.

@ -59,7 +59,7 @@ Produces a 64-bit [SipHash](https://en.wikipedia.org/wiki/SipHash) hash value.
sipHash64(par1,...)
```

-This is a cryptographic hash function. It works at least three times faster than the [MD5](#hash_functions-md5) hash function.
+This is a cryptographic hash function. It works at least three times faster than the [MD5](#md5) hash function.

The function [interprets](/docs/en/sql-reference/functions/type-conversion-functions.md/#type_conversion_functions-reinterpretAsString) all the input parameters as strings and calculates the hash value for each of them. It then combines the hashes by the following algorithm:

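A hedged sketch of mixed-type arguments hashed together by this scheme:

```sql
SELECT sipHash64(10, 'mple', toDate('2019-06-15')) AS h;
```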
@ -91,7 +91,7 @@ SELECT sipHash64(array('e','x','a'), 'mple', 10, toDateTime('2019-06-15 23:00:00

## sipHash64Keyed

-Same as [sipHash64](#hash_functions-siphash64) but additionally takes an explicit key argument instead of using a fixed key.
+Same as [sipHash64](#siphash64) but additionally takes an explicit key argument instead of using a fixed key.

**Syntax**

@ -101,7 +101,7 @@ sipHash64Keyed((k0, k1), par1,...)

**Arguments**

-Same as [sipHash64](#hash_functions-siphash64), but the first argument is a tuple of two UInt64 values representing the key.
+Same as [sipHash64](#siphash64), but the first argument is a tuple of two UInt64 values representing the key.

**Returned value**

@ -123,12 +123,12 @@ SELECT sipHash64Keyed((506097522914230528, 1084818905618843912), array('e','x','

## sipHash128

-Like [sipHash64](#hash_functions-siphash64) but produces a 128-bit hash value, i.e. the final xor-folding state is done up to 128 bits.
+Like [sipHash64](#siphash64) but produces a 128-bit hash value, i.e. the final xor-folding state is done up to 128 bits.

:::note
This 128-bit variant differs from the reference implementation and it's weaker.
This version exists because, when it was written, there was no official 128-bit extension for SipHash.
-New projects should probably use [sipHash128Reference](#hash_functions-siphash128reference).
+New projects should probably use [sipHash128Reference](#siphash128reference).
:::

**Syntax**
@ -139,7 +139,7 @@ sipHash128(par1,...)

**Arguments**

-Same as for [sipHash64](#hash_functions-siphash64).
+Same as for [sipHash64](#siphash64).

**Returned value**

@ -163,12 +163,12 @@ Result:

## sipHash128Keyed

-Same as [sipHash128](#hash_functions-siphash128) but additionally takes an explicit key argument instead of using a fixed key.
+Same as [sipHash128](#siphash128) but additionally takes an explicit key argument instead of using a fixed key.

:::note
This 128-bit variant differs from the reference implementation and it's weaker.
This version exists because, when it was written, there was no official 128-bit extension for SipHash.
-New projects should probably use [sipHash128ReferenceKeyed](#hash_functions-siphash128referencekeyed).
+New projects should probably use [sipHash128ReferenceKeyed](#siphash128referencekeyed).
:::

**Syntax**
@ -179,7 +179,7 @@ sipHash128Keyed((k0, k1), par1,...)

**Arguments**

-Same as [sipHash128](#hash_functions-siphash128), but the first argument is a tuple of two UInt64 values representing the key.
+Same as [sipHash128](#siphash128), but the first argument is a tuple of two UInt64 values representing the key.

**Returned value**

@ -203,7 +203,7 @@ Result:

## sipHash128Reference

-Like [sipHash128](#hash_functions-siphash128) but implements the 128-bit algorithm from the original authors of SipHash.
+Like [sipHash128](#siphash128) but implements the 128-bit algorithm from the original authors of SipHash.

**Syntax**

@ -213,7 +213,7 @@ sipHash128Reference(par1,...)

**Arguments**

-Same as for [sipHash128](#hash_functions-siphash128).
+Same as for [sipHash128](#siphash128).

**Returned value**

@ -237,7 +237,7 @@ Result:

## sipHash128ReferenceKeyed

-Same as [sipHash128Reference](#hash_functions-siphash128reference) but additionally takes an explicit key argument instead of using a fixed key.
+Same as [sipHash128Reference](#siphash128reference) but additionally takes an explicit key argument instead of using a fixed key.

**Syntax**

@ -247,7 +247,7 @@ sipHash128ReferenceKeyed((k0, k1), par1,...)

**Arguments**

-Same as [sipHash128Reference](#hash_functions-siphash128reference), but the first argument is a tuple of two UInt64 values representing the key.
+Same as [sipHash128Reference](#siphash128reference), but the first argument is a tuple of two UInt64 values representing the key.

**Returned value**

@ -536,7 +536,7 @@ Calculates `HiveHash` from a string.
SELECT hiveHash('')
```

-This is just [JavaHash](#hash_functions-javahash) with zeroed out sign bit. This function is used in [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) for versions before 3.0. This hash function is neither fast nor having a good quality. The only reason to use it is when this algorithm is already used in another system and you have to calculate exactly the same result.
+This is just [JavaHash](#javahash) with zeroed out sign bit. This function is used in [Apache Hive](https://en.wikipedia.org/wiki/Apache_Hive) for versions before 3.0. This hash function is neither fast nor having a good quality. The only reason to use it is when this algorithm is already used in another system and you have to calculate exactly the same result.

**Returned value**

@ -340,6 +340,15 @@ After running this statement the `[db.]replicated_merge_tree_family_table_name`
- If a `LIGHTWEIGHT` modifier was specified then the query waits only for `GET_PART`, `ATTACH_PART`, `DROP_RANGE`, `REPLACE_RANGE` and `DROP_PART` entries to be processed.
- If a `PULL` modifier was specified then the query pulls new replication queue entries from ZooKeeper, but does not wait for anything to be processed.

+### SYNC DATABASE REPLICA
+
+Waits until the specified [replicated database](https://clickhouse.com/docs/en/engines/database-engines/replicated) applies all schema changes from the DDL queue of that database.
+
+**Syntax**
+```sql
+SYSTEM SYNC DATABASE REPLICA replicated_database_name;
+```
+

### RESTART REPLICA

Provides possibility to reinitialize Zookeeper session's state for `ReplicatedMergeTree` table, will compare current state with Zookeeper as source of truth and add tasks to Zookeeper queue if needed.

@ -135,7 +135,7 @@ func TestConfigFileFrameCopy(t *testing.T) {
	sizes := map[string]int64{
		"users.xml":            int64(2017),
		"default-password.xml": int64(188),
-		"config.xml":           int64(59506),
+		"config.xml":           int64(59377),
		"server-include.xml":   int64(168),
		"user-include.xml":     int64(559),
	}