mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-27 18:12:02 +00:00

Merge remote-tracking branch 'origin/master' into distinct-in-order-sqlancer-crashes

This commit is contained in: commit 6202d82604
@@ -67,7 +67,7 @@ Substitutions can also be performed from ZooKeeper. To do this, specify the attr

 ## Encrypting Configuration {#encryption}

-You can use symmetric encryption to encrypt a configuration element, for example, a password field. To do so, first configure the [encryption codec](../sql-reference/statements/create/table.md#encryption-codecs), then add attribute `encryption_codec` with the name of the encryption codec as value to the element to encrypt.
+You can use symmetric encryption to encrypt a configuration element, for example, a password field. To do so, first configure the [encryption codec](../sql-reference/statements/create/table.md#encryption-codecs), then add attribute `encrypted_by` with the name of the encryption codec as value to the element to encrypt.

 Unlike attributes `from_zk`, `from_env` and `incl` (or element `include`), no substitution, i.e. decryption of the encrypted value, is performed in the preprocessed file. Decryption happens only at runtime in the server process.

@@ -75,19 +75,22 @@ Example:

 ```xml
 <clickhouse>

     <encryption_codecs>
         <aes_128_gcm_siv>
             <key_hex>00112233445566778899aabbccddeeff</key_hex>
         </aes_128_gcm_siv>
     </encryption_codecs>

     <interserver_http_credentials>
         <user>admin</user>
-        <password encryption_codec="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
+        <password encrypted_by="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
     </interserver_http_credentials>

 </clickhouse>
 ```

-To get the encrypted value `encrypt_decrypt` example application may be used.
+To encrypt a value, you can use the (example) program `encrypt_decrypt`:

 Example:

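The example itself falls outside this hunk. As a hedged sketch of such an invocation (the binary location and flags are assumptions based on the ClickHouse source tree, not taken from this excerpt):

```bash
# Assumed usage: encrypt the plaintext "abcd" with the AES_128_GCM_SIV codec
# configured in config.xml; prints the hex ciphertext to paste into the config.
./encrypt_decrypt /etc/clickhouse-server/config.xml -e AES_128_GCM_SIV abcd
```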
@@ -138,12 +141,17 @@ Here you can see default config written in YAML: [config.yaml.example](https://g

 There are some differences between YAML and XML formats in terms of ClickHouse configurations. Here are some tips for writing a configuration in YAML format.

-You should use a Scalar node to write a key-value pair:
+An XML tag with a text value is represented by a YAML key-value pair:
 ``` yaml
 key: value
 ```

-To create a node, containing other nodes you should use a Map:
+Corresponding XML:
+``` xml
+<key>value</key>
+```
+
+A nested XML node is represented by a YAML map:
 ``` yaml
 map_key:
   key1: val1
@@ -151,7 +159,16 @@ map_key:
   key3: val3
 ```

-To create a list of values or nodes assigned to one tag you should use a Sequence:
+Corresponding XML:
+``` xml
+<map_key>
+    <key1>val1</key1>
+    <key2>val2</key2>
+    <key3>val3</key3>
+</map_key>
+```
+
+To create the same XML tag multiple times, use a YAML sequence:
 ``` yaml
 seq_key:
   - val1
@@ -162,8 +179,22 @@ seq_key:
     key3: val5
 ```

-If you want to write an attribute for a Sequence or Map node, you should use a @ prefix before the attribute key. Note, that @ is reserved by YAML standard, so you should also to wrap it into double quotes:
+Corresponding XML:
+```xml
+<seq_key>val1</seq_key>
+<seq_key>val2</seq_key>
+<seq_key>
+    <key1>val3</key1>
+</seq_key>
+<seq_key>
+    <map>
+        <key2>val4</key2>
+        <key3>val5</key3>
+    </map>
+</seq_key>
+```
+
+To provide an XML attribute, you can use an attribute key with a `@` prefix. Note that `@` is reserved by the YAML standard, so it must be wrapped in double quotes:
 ``` yaml
 map:
   "@attr1": value1
@@ -171,16 +202,14 @@ map:
   key: 123
 ```

-From that Map we will get these XML nodes:
+Corresponding XML:

 ``` xml
 <map attr1="value1" attr2="value2">
     <key>123</key>
 </map>
 ```

-You can also set attributes for Sequence:
+It is also possible to use attributes in a YAML sequence:

 ``` yaml
 seq:
   - "@attr1": value1
@@ -189,13 +218,25 @@ seq:
   - abc
 ```

-So, we can get YAML config equal to this XML one:
+Corresponding XML:

 ``` xml
 <seq attr1="value1" attr2="value2">123</seq>
 <seq attr1="value1" attr2="value2">abc</seq>
 ```

+The aforementioned syntax does not allow expressing XML text nodes with XML attributes in YAML. This special case can be achieved using a
+`#text` attribute key:
+```yaml
+map_key:
+  "@attr1": value1
+  "#text": value2
+```
+
+Corresponding XML:
+```xml
+<map_key attr1="value1">value2</map_key>
+```

 ## Implementation Details {#implementation-details}

 For each config file, the server also generates `file-preprocessed.xml` files when starting. These files contain all the completed substitutions and overrides, and they are intended for informational use. If ZooKeeper substitutions were used in the config files but ZooKeeper is not available on the server start, the server loads the configuration from the preprocessed file.

@@ -61,11 +61,12 @@ use_query_cache = true`) but one should keep in mind that all `SELECT` queries i
 may return cached results then.

 The query cache can be cleared using statement `SYSTEM DROP QUERY CACHE`. The content of the query cache is displayed in system table
-`system.query_cache`. The number of query cache hits and misses are shown as events "QueryCacheHits" and "QueryCacheMisses" in system table
-[system.events](system-tables/events.md). Both counters are only updated for `SELECT` queries which run with setting "use_query_cache =
-true". Other queries do not affect the cache miss counter. Field `query_log_usage` in system table
-[system.query_log](system-tables/query_log.md) shows for each ran query whether the query result was written into or read from the query
-cache.
+`system.query_cache`. The number of query cache hits and misses since database start are shown as events "QueryCacheHits" and
+"QueryCacheMisses" in system table [system.events](system-tables/events.md). Both counters are only updated for `SELECT` queries which run
+with setting `use_query_cache = true`; other queries do not affect "QueryCacheMisses". Field `query_log_usage` in system table
+[system.query_log](system-tables/query_log.md) shows for each executed query whether the query result was written into or read from the
+query cache. Asynchronous metrics "QueryCacheEntries" and "QueryCacheBytes" in system table
+[system.asynchronous_metrics](system-tables/asynchronous_metrics.md) show how many entries / bytes the query cache currently contains.

 The query cache exists once per ClickHouse server process. However, cache results are by default not shared between users. This can be
 changed (see below) but doing so is not recommended for security reasons.
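Putting the pieces above together, a minimal sketch (any `SELECT` works; `system.numbers` is used here only as a stand-in):

```sql
-- Run a SELECT with the query cache enabled for this query only.
SELECT count() FROM system.numbers LIMIT 100 SETTINGS use_query_cache = true;

-- Inspect the cached entry and the hit/miss counters mentioned above.
SELECT * FROM system.query_cache;
SELECT event, value FROM system.events WHERE event LIKE 'QueryCache%';

-- Drop all cached results.
SYSTEM DROP QUERY CACHE;
```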
@@ -512,7 +512,7 @@ Both the cache for `local_disk`, and temporary data will be stored in `/tiny_loc
     <type>cache</type>
     <disk>local_disk</disk>
     <path>/tiny_local_cache/</path>
-    <max_size>10M</max_size>
+    <max_size_rows>10M</max_size_rows>
     <max_file_segment_size>1M</max_file_segment_size>
     <cache_on_write_operations>1</cache_on_write_operations>
     <do_not_evict_index_and_mark_files>0</do_not_evict_index_and_mark_files>
@@ -1592,6 +1592,10 @@ To manually turn on metrics history collection [`system.metric_log`](../../opera
         <table>metric_log</table>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
         <collect_interval_milliseconds>1000</collect_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </metric_log>
 </clickhouse>
 ```
@@ -1695,6 +1699,14 @@ Use the following parameters to configure logging:
 - `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
 - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
 - `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to disk in case of a crash.
+    Default: false.
 - `storage_policy` – Name of storage policy to use for the table (optional)
 - `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).

@@ -1706,6 +1718,10 @@ Use the following parameters to configure logging:
         <table>part_log</table>
         <partition_by>toMonday(event_date)</partition_by>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </part_log>
 ```

@@ -1773,6 +1789,14 @@ Use the following parameters to configure logging:
 - `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
 - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
 - `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to disk in case of a crash.
+    Default: false.
 - `storage_policy` – Name of storage policy to use for the table (optional)
 - `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).

@@ -1786,6 +1810,10 @@ If the table does not exist, ClickHouse will create it. If the structure of the
         <table>query_log</table>
         <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </query_log>
 ```

@@ -1831,6 +1859,14 @@ Use the following parameters to configure logging:
 - `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
 - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
 - `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to disk in case of a crash.
+    Default: false.
 - `storage_policy` – Name of storage policy to use for the table (optional)
 - `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).

@@ -1844,6 +1880,10 @@ If the table does not exist, ClickHouse will create it. If the structure of the
         <table>query_thread_log</table>
         <partition_by>toMonday(event_date)</partition_by>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </query_thread_log>
 ```

@@ -1861,6 +1901,14 @@ Use the following parameters to configure logging:
 - `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
 - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
 - `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to disk in case of a crash.
+    Default: false.
 - `storage_policy` – Name of storage policy to use for the table (optional)
 - `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).

@@ -1874,6 +1922,10 @@ If the table does not exist, ClickHouse will create it. If the structure of the
         <table>query_views_log</table>
         <partition_by>toYYYYMM(event_date)</partition_by>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </query_views_log>
 ```

@@ -1890,6 +1942,14 @@ Parameters:
 - `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
 - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
 - `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to disk in case of a crash.
+    Default: false.
 - `storage_policy` – Name of storage policy to use for the table (optional)
 - `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).

@@ -1901,13 +1961,16 @@ Parameters:
         <database>system</database>
         <table>text_log</table>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
         <!-- <partition_by>event_date</partition_by> -->
         <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine>
     </text_log>
 </clickhouse>
 ```


 ## trace_log {#server_configuration_parameters-trace_log}

 Settings for the [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) system table operation.
@@ -1920,6 +1983,12 @@ Parameters:
 - `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
 - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/index.md) for a system table. Can't be used if `partition_by` or `order_by` defined.
 - `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
 - `storage_policy` – Name of storage policy to use for the table (optional)
 - `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).

@@ -1931,6 +2000,10 @@ The default server configuration file `config.xml` contains the following settin
         <table>trace_log</table>
         <partition_by>toYYYYMM(event_date)</partition_by>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </trace_log>
 ```

@@ -1945,9 +2018,18 @@ Parameters:
 - `partition_by` — [Custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a system table. Can't be used if `engine` defined.
 - `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` defined.
 - `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to disk in case of a crash.
+    Default: false.
 - `storage_policy` – Name of storage policy to use for the table (optional)

 **Example**

 ```xml
 <clickhouse>
     <asynchronous_insert_log>
@@ -1955,11 +2037,53 @@ Parameters:
         <table>asynchronous_insert_log</table>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
         <partition_by>toYYYYMM(event_date)</partition_by>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
         <!-- <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine> -->
     </asynchronous_insert_log>
 </clickhouse>
 ```

+## crash_log {#server_configuration_parameters-crash_log}
+
+Settings for the [crash_log](../../operations/system-tables/crash-log.md) system table operation.
+
+Parameters:
+
+- `database` — Database for storing a table.
+- `table` — Table name.
+- `partition_by` — [Custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a system table. Can't be used if `engine` defined.
+- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
+- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/index.md) for a system table. Can't be used if `partition_by` or `order_by` defined.
+- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximum size of the log buffer in rows. When the number of unflushed log rows reaches `max_size_rows`, the logs are dumped to disk.
+    Default: 1048576.
+- `reserved_size_rows` – Pre-allocated buffer size for the logs, in rows.
+    Default: 8192.
+- `buffer_size_rows_flush_threshold` – Row count threshold at which a background flush of the logs to disk is started.
+    Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to disk in case of a crash.
+    Default: false.
+- `storage_policy` – Name of storage policy to use for the table (optional)
+- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
+
+The default server configuration file `config.xml` contains the following settings section:
+
+``` xml
+<crash_log>
+    <database>system</database>
+    <table>crash_log</table>
+    <partition_by>toYYYYMM(event_date)</partition_by>
+    <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+    <max_size_rows>1024</max_size_rows>
+    <reserved_size_rows>1024</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>512</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
+</crash_log>
+```

 ## query_masking_rules {#query-masking-rules}

 Regexp-based rules, which will be applied to queries as well as all log messages before storing them in server logs,
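The excerpt breaks off here. As a hedged illustration of what such a rule section looks like (the `name`/`regexp`/`replace` field names follow the ClickHouse reference docs and are not part of this diff):

```xml
<query_masking_rules>
    <rule>
        <!-- Illustrative rule: hide anything that looks like a US SSN. -->
        <name>hide SSN</name>
        <regexp>(^|\D)\d{3}-\d{2}-\d{4}($|\D)</regexp>
        <replace>000-00-0000</replace>
    </rule>
</query_masking_rules>
```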
@@ -1164,7 +1164,7 @@ Enabled by default.

 Compression method used in output Arrow format. Supported codecs: `lz4_frame`, `zstd`, `none` (uncompressed)

-Default value: `none`.
+Default value: `lz4_frame`.

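As a sketch of how such a format setting is applied per session (the setting name `output_format_arrow_compression_method` is an assumption; this excerpt does not name it):

```sql
-- Assumed setting name for the codec described above.
SET output_format_arrow_compression_method = 'zstd';
SELECT number FROM system.numbers LIMIT 10 FORMAT Arrow;
```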
 ## ORC format settings {#orc-format-settings}

@@ -32,6 +32,10 @@ SELECT * FROM system.asynchronous_metrics LIMIT 10
 └─────────────────────────────────────────┴────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
 ```

+<!--- Unlike with system.events and system.metrics, the asynchronous metrics are not gathered in a simple list in a source code file - they
+are mixed with logic in src/Interpreters/ServerAsynchronousMetrics.cpp.
+Listing them here explicitly for reader convenience. --->
+
 ## Metric descriptions

@@ -483,6 +487,14 @@ The value is similar to `OSUserTime` but divided to the number of CPU cores to b

 Number of threads in the server of the PostgreSQL compatibility protocol.

+### QueryCacheBytes
+
+Total size of the query cache in bytes.
+
+### QueryCacheEntries
+
+Total number of entries in the query cache.
+
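Both values can be read with a simple query against the table this page documents; a minimal sketch:

```sql
-- Current size and entry count of the query cache.
SELECT metric, value
FROM system.asynchronous_metrics
WHERE metric IN ('QueryCacheBytes', 'QueryCacheEntries');
```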
 ### ReplicasMaxAbsoluteDelay

 Maximum difference in seconds between the most fresh replicated part and the most fresh data part still to be replicated, across Replicated tables. A very high value indicates a replica with no data.

@@ -11,6 +11,8 @@ Columns:
 - `value` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of events occurred.
 - `description` ([String](../../sql-reference/data-types/string.md)) — Event description.

+You can find all supported events in source file [src/Common/ProfileEvents.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/ProfileEvents.cpp).
+
 **Example**

 ``` sql
@@ -47,6 +47,10 @@ An example:
         <engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
         -->
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </query_log>
 </clickhouse>
 ```

@@ -11,7 +11,7 @@ Columns:
 - `value` ([Int64](../../sql-reference/data-types/int-uint.md)) — Metric value.
 - `description` ([String](../../sql-reference/data-types/string.md)) — Metric description.

-The list of supported metrics you can find in the [src/Common/CurrentMetrics.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/CurrentMetrics.cpp) source file of ClickHouse.
+You can find all supported metrics in source file [src/Common/CurrentMetrics.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/CurrentMetrics.cpp).

 **Example**

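The example itself lies outside this hunk; a minimal illustrative query over the columns listed above:

```sql
-- A few current metrics with their values and descriptions.
SELECT metric, value, description
FROM system.metrics
LIMIT 5;
```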
@@ -87,7 +87,7 @@ $ cat /etc/clickhouse-server/users.d/alice.xml

 ## Encryption {#encryption}

-You can use symmetric encryption to encrypt a configuration element, for example, a password field. To do so, first configure the [encryption codec](../sql-reference/statements/create/table.md#encryption-codecs), then add the attribute `encryption_codec` with the name of the encryption codec as the value to the element that needs to be encrypted.
+You can use symmetric encryption to encrypt a configuration element, for example, a password field. To do so, first configure the [encryption codec](../sql-reference/statements/create/table.md#encryption-codecs), then add the attribute `encrypted_by` with the name of the encryption codec as the value to the element that needs to be encrypted.

 Unlike the attributes `from_zk`, `from_env` and `incl` (or the element `include`), no substitution, i.e. decryption of the encrypted value, is performed in the preprocessed file. Decryption happens only at runtime in the server process.

@@ -95,15 +95,18 @@ $ cat /etc/clickhouse-server/users.d/alice.xml

 ```xml
 <clickhouse>

     <encryption_codecs>
         <aes_128_gcm_siv>
             <key_hex>00112233445566778899aabbccddeeff</key_hex>
         </aes_128_gcm_siv>
     </encryption_codecs>

     <interserver_http_credentials>
         <user>admin</user>
-        <password encryption_codec="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
+        <password encrypted_by="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
     </interserver_http_credentials>

 </clickhouse>
 ```

@@ -1058,6 +1058,10 @@ ClickHouse uses threads from the global pool
         <table>metric_log</table>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
         <collect_interval_milliseconds>1000</collect_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </metric_log>
 </clickhouse>
 ```
@@ -1155,12 +1159,19 @@ ClickHouse uses threads from the global pool

 The following parameters are used to configure logging:

 - `database` — database name;
 - `table` — table name;
 - `partition_by` — sets a [custom partitioning key](../../operations/server-configuration-parameters/settings.md). Cannot be used if `engine` is used
 - `engine` - sets a [MergeTree engine definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Cannot be used if `partition_by` is used.
 - `flush_interval_milliseconds` — interval for flushing data from the in-memory buffer to the table.
+- `max_size_rows` – maximum size of the log buffer in rows. When the buffer is completely full, the logs are flushed to disk.
+    Default value: 1048576.
+- `reserved_size_rows` – pre-allocated size of the log buffer, in rows.
+    Default value: 8192.
+- `buffer_size_rows_flush_threshold` – number of log rows at which a non-blocking background flush to disk is started.
+    Default value: `max_size_rows / 2`.
+- `flush_on_crash` - whether the logs should be dumped to disk if the program stops unexpectedly.
+    Default value: false.
 **Example**

 ``` xml
@@ -1169,6 +1180,10 @@ ClickHouse uses threads from the global pool
         <table>part_log</table>
         <partition_by>toMonday(event_date)</partition_by>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </part_log>
 ```

@@ -1218,11 +1233,19 @@ ClickHouse uses threads from the global pool

 The following parameters are used to configure logging:

 - `database` — database name;
-- `table` — name of the table the log will be written to;
+- `table` — table name;
 - `partition_by` — sets a [custom partitioning key](../../operations/server-configuration-parameters/settings.md). Cannot be used if `engine` is used
 - `engine` - sets a [MergeTree engine definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Cannot be used if `partition_by` is used.
 - `flush_interval_milliseconds` — interval for flushing data from the in-memory buffer to the table.
+- `max_size_rows` – maximum size of the log buffer in rows. When the buffer is completely full, the logs are flushed to disk.
+    Default value: 1048576.
+- `reserved_size_rows` – pre-allocated size of the log buffer, in rows.
+    Default value: 8192.
+- `buffer_size_rows_flush_threshold` – number of log rows at which a non-blocking background flush to disk is started.
+    Default value: `max_size_rows / 2`.
+- `flush_on_crash` - whether the logs should be dumped to disk if the program stops unexpectedly.
+    Default value: false.

 If the table does not exist, ClickHouse will create it. If the structure of the query log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically.

@@ -1234,6 +1257,10 @@ ClickHouse uses threads from the global pool
         <table>query_log</table>
         <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </query_log>
 ```

@@ -1245,11 +1272,19 @@ ClickHouse uses threads from the global pool

 The following parameters are used to configure logging:

 - `database` — database name;
-- `table` — name of the table the log will be written to;
+- `table` — table name;
 - `partition_by` — sets a [custom partitioning key](../../operations/server-configuration-parameters/settings.md). Cannot be used if `engine` is used
 - `engine` - sets a [MergeTree engine definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Cannot be used if `partition_by` is used.
 - `flush_interval_milliseconds` — interval for flushing data from the in-memory buffer to the table.
+- `max_size_rows` – maximum size of the log buffer in rows. When the buffer is completely full, the logs are flushed to disk.
+    Default value: 1048576.
+- `reserved_size_rows` – pre-allocated size of the log buffer, in rows.
+    Default value: 8192.
+- `buffer_size_rows_flush_threshold` – number of log rows at which a non-blocking background flush to disk is started.
+    Default value: `max_size_rows / 2`.
+- `flush_on_crash` - whether the logs should be dumped to disk if the program stops unexpectedly.
+    Default value: false.

 If the table does not exist, ClickHouse will create it. If the structure of the query log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically.

@@ -1261,6 +1296,10 @@ ClickHouse uses threads from the global pool
         <table>query_thread_log</table>
         <partition_by>toMonday(event_date)</partition_by>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </query_thread_log>
 ```

@@ -1272,11 +1311,19 @@ ClickHouse uses threads from the global pool

 The following parameters are used to configure logging:

-- `database` – database name.
+- `database` — database name;
-- `table` – name of the system table where queries will be logged.
+- `table` — table name;
-- `partition_by` — sets a [custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Cannot be used if the `engine` parameter is set.
+- `partition_by` — sets a [custom partitioning key](../../operations/server-configuration-parameters/settings.md). Cannot be used if `engine` is used
-- `engine` — sets a [MergeTree engine definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Cannot be used if the `partition_by` parameter is set.
+- `engine` - sets a [MergeTree engine definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Cannot be used if `partition_by` is used.
 - `flush_interval_milliseconds` — interval for flushing data from the in-memory buffer to the table.
+- `max_size_rows` – maximum size of the log buffer in rows. When the buffer is completely full, the logs are flushed to disk.
+    Default value: 1048576.
+- `reserved_size_rows` – pre-allocated size of the log buffer, in rows.
+    Default value: 8192.
+- `buffer_size_rows_flush_threshold` – number of log rows at which a non-blocking background flush to disk is started.
+    Default value: `max_size_rows / 2`.
+- `flush_on_crash` - whether the logs should be dumped to disk if the program stops unexpectedly.
+    Default value: false.

 If the table does not exist, ClickHouse will create it. If the structure of the query log changed when the ClickHouse server was updated, the table with the old structure is renamed, and a new table is created automatically.

@@ -1288,6 +1335,10 @@ ClickHouse uses threads from the global pool
         <table>query_views_log</table>
         <partition_by>toYYYYMM(event_date)</partition_by>
         <flush_interval_milliseconds>7500</flush_interval_milliseconds>
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
     </query_views_log>
 ```

@ -1297,12 +1348,20 @@ ClickHouse использует потоки из глобального пул
|
|||||||
|
|
||||||
Параметры:
|
Параметры:
|
||||||
|
|
||||||
- `level` — Максимальный уровень сообщения (по умолчанию `Trace`) которое будет сохранено в таблице.
|
- `level` — Максимальный уровень сообщения (по умолчанию `Trace`) которое будет сохранено в таблице.
|
||||||
- `database` — имя базы данных для хранения таблицы.
|
- `database` — имя базы данных;
|
||||||
- `table` — имя таблицы, куда будут записываться текстовые сообщения.
|
- `table` — имя таблицы;
|
||||||
- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать если используется `engine`
|
- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать, если задан параметр `engine`.
|
||||||
- `engine` — устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать, если задан параметр `partition_by`.
|
- `engine` — устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать, если задан параметр `partition_by`.
|
||||||
- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
|
- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
|
||||||
|
- `max_size_rows` – максимальный размер буфера с логами, в строках. Когда буфер заполняется полностью, логи сбрасываются на диск.
|
||||||
|
Значение по умолчанию: 1048576.
|
||||||
|
- `reserved_size_rows` – заранее выделенный (преаллоцированный) размер буфера с логами, в строках.
|
||||||
|
Значение по умолчанию: 8192.
|
||||||
|
- `buffer_size_rows_flush_threshold` – количество строк в буфере, при достижении которого логи начинают сбрасываться на диск в неблокирующем режиме.
|
||||||
|
Значение по умолчанию: `max_size_rows / 2`.
|
||||||
|
- `flush_on_crash` — определяет, нужно ли сбрасывать логи на диск при аварийной остановке сервера.
|
||||||
|
Значение по умолчанию: false.
|
||||||
|
|
||||||
**Пример**
|
**Пример**
|
||||||
```xml
|
```xml
|
||||||
@ -1312,6 +1371,10 @@ ClickHouse использует потоки из глобального пул
|
|||||||
<database>system</database>
|
<database>system</database>
|
||||||
<table>text_log</table>
|
<table>text_log</table>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
<!-- <partition_by>event_date</partition_by> -->
|
<!-- <partition_by>event_date</partition_by> -->
|
||||||
<engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine>
|
<engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine>
|
||||||
</text_log>
|
</text_log>
|
||||||
@ -1323,13 +1386,21 @@ ClickHouse использует потоки из глобального пул
|
|||||||
|
|
||||||
Настройки для системной таблицы [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log).
|
Настройки для системной таблицы [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log).
|
||||||
|
|
||||||
Parameters:
|
Параметры:
|
||||||
|
|
||||||
- `database` — Database for storing a table.
|
- `database` — имя базы данных;
|
||||||
- `table` — Table name.
|
- `table` — имя таблицы;
|
||||||
- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать, если задан параметр `engine`.
|
- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать, если задан параметр `engine`.
|
||||||
- `engine` — устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать, если задан параметр `partition_by`.
|
- `engine` — устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать, если задан параметр `partition_by`.
|
||||||
- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
|
- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
|
||||||
|
- `max_size_rows` – максимальный размер буфера с логами, в строках. Когда буфер заполняется полностью, логи сбрасываются на диск.
|
||||||
|
Значение по умолчанию: 1048576.
|
||||||
|
- `reserved_size_rows` – заранее выделенный (преаллоцированный) размер буфера с логами, в строках.
|
||||||
|
Значение по умолчанию: 8192.
|
||||||
|
- `buffer_size_rows_flush_threshold` – количество строк в буфере, при достижении которого логи начинают сбрасываться на диск в неблокирующем режиме.
|
||||||
|
Значение по умолчанию: `max_size_rows / 2`.
|
||||||
|
- `flush_on_crash` — определяет, нужно ли сбрасывать логи на диск при аварийной остановке сервера.
|
||||||
|
Значение по умолчанию: false.
|
||||||
|
|
||||||
По умолчанию файл настроек сервера `config.xml` содержит следующие настройки:
|
По умолчанию файл настроек сервера `config.xml` содержит следующие настройки:
|
||||||
|
|
||||||
@ -1339,9 +1410,84 @@ Parameters:
|
|||||||
<table>trace_log</table>
|
<table>trace_log</table>
|
||||||
<partition_by>toYYYYMM(event_date)</partition_by>
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
</trace_log>
|
</trace_log>
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## asynchronous_insert_log {#server_configuration_parameters-asynchronous_insert_log}
|
||||||
|
|
||||||
|
Настройки для системной таблицы `asynchronous_insert_log`, в которую логируются асинхронные вставки.
|
||||||
|
|
||||||
|
Параметры:
|
||||||
|
|
||||||
|
- `database` — имя базы данных;
|
||||||
|
- `table` — имя таблицы;
|
||||||
|
- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать, если задан параметр `engine`.
|
||||||
|
- `engine` — устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать, если задан параметр `partition_by`.
|
||||||
|
- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
|
||||||
|
- `max_size_rows` – максимальный размер буфера с логами, в строках. Когда буфер заполняется полностью, логи сбрасываются на диск.
|
||||||
|
Значение по умолчанию: 1048576.
|
||||||
|
- `reserved_size_rows` – заранее выделенный (преаллоцированный) размер буфера с логами, в строках.
|
||||||
|
Значение по умолчанию: 8192.
|
||||||
|
- `buffer_size_rows_flush_threshold` – количество строк в буфере, при достижении которого логи начинают сбрасываться на диск в неблокирующем режиме.
|
||||||
|
Значение по умолчанию: `max_size_rows / 2`.
|
||||||
|
- `flush_on_crash` — определяет, нужно ли сбрасывать логи на диск при аварийной остановке сервера.
|
||||||
|
Значение по умолчанию: false.
|
||||||
|
|
||||||
|
**Пример**
|
||||||
|
|
||||||
|
```xml
|
||||||
|
<clickhouse>
|
||||||
|
<asynchronous_insert_log>
|
||||||
|
<database>system</database>
|
||||||
|
<table>asynchronous_insert_log</table>
|
||||||
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<!-- <engine>Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day</engine> -->
|
||||||
|
</asynchronous_insert_log>
|
||||||
|
</clickhouse>
|
||||||
|
```
|
||||||
|
|
||||||
|
## crash_log {#server_configuration_parameters-crash_log}
|
||||||
|
|
||||||
|
Настройки для таблицы [crash_log](../../operations/system-tables/crash-log.md).
|
||||||
|
|
||||||
|
Параметры:
|
||||||
|
|
||||||
|
- `database` — имя базы данных;
|
||||||
|
- `table` — имя таблицы;
|
||||||
|
- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать, если задан параметр `engine`.
|
||||||
|
- `engine` — устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать, если задан параметр `partition_by`.
|
||||||
|
- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
|
||||||
|
- `max_size_rows` – максимальный размер буфера с логами, в строках. Когда буфер заполняется полностью, логи сбрасываются на диск.
|
||||||
|
Значение по умолчанию: 1024.
|
||||||
|
- `reserved_size_rows` – заранее выделенный (преаллоцированный) размер буфера с логами, в строках.
|
||||||
|
Значение по умолчанию: 1024.
|
||||||
|
- `buffer_size_rows_flush_threshold` – количество строк в буфере, при достижении которого логи начинают сбрасываться на диск в неблокирующем режиме.
|
||||||
|
Значение по умолчанию: `max_size_rows / 2`.
|
||||||
|
- `flush_on_crash` — определяет, нужно ли сбрасывать логи на диск при аварийной остановке сервера.
|
||||||
|
Значение по умолчанию: true.
|
||||||
|
|
||||||
|
**Пример**
|
||||||
|
|
||||||
|
``` xml
|
||||||
|
<crash_log>
|
||||||
|
<database>system</database>
|
||||||
|
<table>crash_log</table>
|
||||||
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1024</max_size_rows>
|
||||||
|
<reserved_size_rows>1024</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>512</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>true</flush_on_crash>
|
||||||
|
</crash_log>
|
||||||
|
```
|
||||||
|
|
||||||
## query_masking_rules {#query-masking-rules}
|
## query_masking_rules {#query-masking-rules}
|
||||||
|
|
||||||
Правила, основанные на регулярных выражениях, которые будут применены для всех запросов, а также для всех сообщений перед сохранением их в лог на сервере,
|
Правила, основанные на регулярных выражениях, которые будут применены для всех запросов, а также для всех сообщений перед сохранением их в лог на сервере,
|
||||||
|
@ -45,6 +45,10 @@ sidebar_label: "Системные таблицы"
|
|||||||
<engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
|
<engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
|
||||||
-->
|
-->
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</query_log>
|
</query_log>
|
||||||
</clickhouse>
|
</clickhouse>
|
||||||
```
|
```
|
||||||
|
@ -266,6 +266,10 @@ void LocalServer::tryInitPath()
|
|||||||
|
|
||||||
global_context->setUserFilesPath(""); // user's files are everywhere
|
global_context->setUserFilesPath(""); // user's files are everywhere
|
||||||
|
|
||||||
|
std::string user_scripts_path = config().getString("user_scripts_path", fs::path(path) / "user_scripts/");
|
||||||
|
global_context->setUserScriptsPath(user_scripts_path);
|
||||||
|
fs::create_directories(user_scripts_path);
|
||||||
|
|
||||||
/// top_level_domains_lists
|
/// top_level_domains_lists
|
||||||
const std::string & top_level_domains_path = config().getString("top_level_domains_path", path + "top_level_domains/");
|
const std::string & top_level_domains_path = config().getString("top_level_domains_path", path + "top_level_domains/");
|
||||||
if (!top_level_domains_path.empty())
|
if (!top_level_domains_path.empty())
|
||||||
@ -490,6 +494,17 @@ try
|
|||||||
|
|
||||||
applyCmdSettings(global_context);
|
applyCmdSettings(global_context);
|
||||||
|
|
||||||
|
/// try to load user defined executable functions, throw on error and die
|
||||||
|
try
|
||||||
|
{
|
||||||
|
global_context->loadOrReloadUserDefinedExecutableFunctions(config());
|
||||||
|
}
|
||||||
|
catch (...)
|
||||||
|
{
|
||||||
|
tryLogCurrentException(&logger(), "Caught exception while loading user defined executable functions.");
|
||||||
|
throw;
|
||||||
|
}
|
||||||
|
|
||||||
if (is_interactive)
|
if (is_interactive)
|
||||||
{
|
{
|
||||||
clearTerminal();
|
clearTerminal();
|
||||||
@ -569,7 +584,9 @@ void LocalServer::processConfig()
|
|||||||
}
|
}
|
||||||
|
|
||||||
print_stack_trace = config().getBool("stacktrace", false);
|
print_stack_trace = config().getBool("stacktrace", false);
|
||||||
load_suggestions = (is_interactive || delayed_interactive) && !config().getBool("disable_suggestion", false);
|
const std::string clickhouse_dialect{"clickhouse"};
|
||||||
|
load_suggestions = (is_interactive || delayed_interactive) && !config().getBool("disable_suggestion", false)
|
||||||
|
&& config().getString("dialect", clickhouse_dialect) == clickhouse_dialect;
|
||||||
|
|
||||||
auto logging = (config().has("logger.console")
|
auto logging = (config().has("logger.console")
|
||||||
|| config().has("logger.level")
|
|| config().has("logger.level")
|
||||||
|
@ -1035,6 +1035,11 @@ try
|
|||||||
/// Initialize merge tree metadata cache
|
/// Initialize merge tree metadata cache
|
||||||
if (config().has("merge_tree_metadata_cache"))
|
if (config().has("merge_tree_metadata_cache"))
|
||||||
{
|
{
|
||||||
|
global_context->addWarningMessage("The setting 'merge_tree_metadata_cache' is enabled."
|
||||||
|
" But the feature of 'metadata cache in RocksDB' is experimental and is not ready for production."
|
||||||
|
" The usage of this feature can lead to data corruption and loss. The setting should be disabled in production."
|
||||||
|
" See the corresponding report at https://github.com/ClickHouse/ClickHouse/issues/51182");
|
||||||
|
|
||||||
fs::create_directories(path / "rocksdb/");
|
fs::create_directories(path / "rocksdb/");
|
||||||
size_t size = config().getUInt64("merge_tree_metadata_cache.lru_cache_size", 256 << 20);
|
size_t size = config().getUInt64("merge_tree_metadata_cache.lru_cache_size", 256 << 20);
|
||||||
bool continue_if_corrupted = config().getBool("merge_tree_metadata_cache.continue_if_corrupted", false);
|
bool continue_if_corrupted = config().getBool("merge_tree_metadata_cache.continue_if_corrupted", false);
|
||||||
|
@ -1026,6 +1026,14 @@
|
|||||||
|
|
||||||
<!-- Interval of flushing data. -->
|
<!-- Interval of flushing data. -->
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<!-- Maximal size in lines for the logs. When non-flushed logs amount reaches max_size, logs dumped to the disk. -->
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<!-- Pre-allocated size in lines for the logs. -->
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<!-- Lines amount threshold, reaching it launches flushing logs to the disk in background. -->
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<!-- Indication whether logs should be dumped to the disk in case of a crash -->
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
|
|
||||||
<!-- example of using a different storage policy for a system table -->
|
<!-- example of using a different storage policy for a system table -->
|
||||||
<!-- storage_policy>local_ssd</storage_policy -->
|
<!-- storage_policy>local_ssd</storage_policy -->
|
||||||
@ -1039,6 +1047,11 @@
|
|||||||
|
|
||||||
<partition_by>toYYYYMM(event_date)</partition_by>
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<!-- Indication whether logs should be dumped to the disk in case of a crash -->
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</trace_log>
|
</trace_log>
|
||||||
|
|
||||||
<!-- Query thread log. Has information about all threads participated in query execution.
|
<!-- Query thread log. Has information about all threads participated in query execution.
|
||||||
@ -1048,6 +1061,10 @@
|
|||||||
<table>query_thread_log</table>
|
<table>query_thread_log</table>
|
||||||
<partition_by>toYYYYMM(event_date)</partition_by>
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</query_thread_log>
|
</query_thread_log>
|
||||||
|
|
||||||
<!-- Query views log. Has information about all dependent views associated with a query.
|
<!-- Query views log. Has information about all dependent views associated with a query.
|
||||||
@ -1066,6 +1083,10 @@
|
|||||||
<table>part_log</table>
|
<table>part_log</table>
|
||||||
<partition_by>toYYYYMM(event_date)</partition_by>
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</part_log>
|
</part_log>
|
||||||
|
|
||||||
<!-- Uncomment to write text log into table.
|
<!-- Uncomment to write text log into table.
|
||||||
@ -1075,6 +1096,10 @@
|
|||||||
<database>system</database>
|
<database>system</database>
|
||||||
<table>text_log</table>
|
<table>text_log</table>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
<level></level>
|
<level></level>
|
||||||
</text_log>
|
</text_log>
|
||||||
-->
|
-->
|
||||||
@ -1084,7 +1109,11 @@
|
|||||||
<database>system</database>
|
<database>system</database>
|
||||||
<table>metric_log</table>
|
<table>metric_log</table>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
<collect_interval_milliseconds>1000</collect_interval_milliseconds>
|
<collect_interval_milliseconds>1000</collect_interval_milliseconds>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</metric_log>
|
</metric_log>
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
@ -1095,6 +1124,10 @@
|
|||||||
<database>system</database>
|
<database>system</database>
|
||||||
<table>asynchronous_metric_log</table>
|
<table>asynchronous_metric_log</table>
|
||||||
<flush_interval_milliseconds>7000</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7000</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</asynchronous_metric_log>
|
</asynchronous_metric_log>
|
||||||
|
|
||||||
<!--
|
<!--
|
||||||
@ -1119,6 +1152,10 @@
|
|||||||
<database>system</database>
|
<database>system</database>
|
||||||
<table>opentelemetry_span_log</table>
|
<table>opentelemetry_span_log</table>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</opentelemetry_span_log>
|
</opentelemetry_span_log>
|
||||||
|
|
||||||
|
|
||||||
@ -1130,6 +1167,10 @@
|
|||||||
|
|
||||||
<partition_by />
|
<partition_by />
|
||||||
<flush_interval_milliseconds>1000</flush_interval_milliseconds>
|
<flush_interval_milliseconds>1000</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1024</max_size_rows>
|
||||||
|
<reserved_size_rows>1024</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>512</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>true</flush_on_crash>
|
||||||
</crash_log>
|
</crash_log>
|
||||||
|
|
||||||
<!-- Session log. Stores user log in (successful or not) and log out events.
|
<!-- Session log. Stores user log in (successful or not) and log out events.
|
||||||
@ -1142,6 +1183,10 @@
|
|||||||
|
|
||||||
<partition_by>toYYYYMM(event_date)</partition_by>
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</session_log> -->
|
</session_log> -->
|
||||||
|
|
||||||
<!-- Profiling on Processors level. -->
|
<!-- Profiling on Processors level. -->
|
||||||
@ -1151,6 +1196,10 @@
|
|||||||
|
|
||||||
<partition_by>toYYYYMM(event_date)</partition_by>
|
<partition_by>toYYYYMM(event_date)</partition_by>
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
</processors_profile_log>
|
</processors_profile_log>
|
||||||
|
|
||||||
<!-- Log of asynchronous inserts. It allows to check status
|
<!-- Log of asynchronous inserts. It allows to check status
|
||||||
@ -1161,6 +1210,10 @@
|
|||||||
<table>asynchronous_insert_log</table>
|
<table>asynchronous_insert_log</table>
|
||||||
|
|
||||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||||
|
<max_size_rows>1048576</max_size_rows>
|
||||||
|
<reserved_size_rows>8192</reserved_size_rows>
|
||||||
|
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||||
|
<flush_on_crash>false</flush_on_crash>
|
||||||
<partition_by>event_date</partition_by>
|
<partition_by>event_date</partition_by>
|
||||||
<ttl>event_date + INTERVAL 3 DAY</ttl>
|
<ttl>event_date + INTERVAL 3 DAY</ttl>
|
||||||
</asynchronous_insert_log>
|
</asynchronous_insert_log>
|
||||||
@ -1418,12 +1471,6 @@
|
|||||||
<max_entry_size_in_rows>30000000</max_entry_size_in_rows>
|
<max_entry_size_in_rows>30000000</max_entry_size_in_rows>
|
||||||
</query_cache>
|
</query_cache>
|
||||||
|
|
||||||
<!-- Uncomment if enable merge tree metadata cache -->
|
|
||||||
<!--merge_tree_metadata_cache>
|
|
||||||
<lru_cache_size>268435456</lru_cache_size>
|
|
||||||
<continue_if_corrupted>true</continue_if_corrupted>
|
|
||||||
</merge_tree_metadata_cache-->
|
|
||||||
|
|
||||||
<!-- This allows to disable exposing addresses in stack traces for security reasons.
|
<!-- This allows to disable exposing addresses in stack traces for security reasons.
|
||||||
Please be aware that it does not improve security much, but makes debugging much harder.
|
Please be aware that it does not improve security much, but makes debugging much harder.
|
||||||
The addresses that are small offsets from zero will be displayed nevertheless to show nullptr dereferences.
|
The addresses that are small offsets from zero will be displayed nevertheless to show nullptr dereferences.
|
||||||
|
@ -7,6 +7,7 @@
|
|||||||
|
|
||||||
#include <Analyzer/IQueryTreeNode.h>
|
#include <Analyzer/IQueryTreeNode.h>
|
||||||
#include <Analyzer/QueryNode.h>
|
#include <Analyzer/QueryNode.h>
|
||||||
|
#include <Analyzer/TableFunctionNode.h>
|
||||||
#include <Analyzer/UnionNode.h>
|
#include <Analyzer/UnionNode.h>
|
||||||
|
|
||||||
#include <Interpreters/Context.h>
|
#include <Interpreters/Context.h>
|
||||||
@ -90,26 +91,25 @@ private:
|
|||||||
template <typename Derived>
|
template <typename Derived>
|
||||||
using ConstInDepthQueryTreeVisitor = InDepthQueryTreeVisitor<Derived, true /*const_visitor*/>;
|
using ConstInDepthQueryTreeVisitor = InDepthQueryTreeVisitor<Derived, true /*const_visitor*/>;
|
||||||
|
|
||||||
/** Same as InDepthQueryTreeVisitor and additionally keeps track of current scope context.
|
/** Same as InDepthQueryTreeVisitor (but has a different interface) and additionally keeps track of current scope context.
|
||||||
* This can be useful if your visitor has special logic that depends on current scope context.
|
* This can be useful if your visitor has special logic that depends on current scope context.
|
||||||
|
*
|
||||||
|
* To specify the behavior of the visitor you can implement the following methods in a derived class:
|
||||||
|
* 1. needChildVisit – This method allows skipping a subtree.
|
||||||
|
* 2. enterImpl – This method is called before children are processed.
|
||||||
|
* 3. leaveImpl – This method is called after children are processed.
|
||||||
*/
|
*/
|
||||||
template <typename Derived, bool const_visitor = false>
|
template <typename Derived, bool const_visitor = false>
|
||||||
class InDepthQueryTreeVisitorWithContext
|
class InDepthQueryTreeVisitorWithContext
|
||||||
{
|
{
|
||||||
public:
|
public:
|
||||||
using VisitQueryTreeNodeType = std::conditional_t<const_visitor, const QueryTreeNodePtr, QueryTreeNodePtr>;
|
using VisitQueryTreeNodeType = QueryTreeNodePtr;
|
||||||
|
|
||||||
explicit InDepthQueryTreeVisitorWithContext(ContextPtr context, size_t initial_subquery_depth = 0)
|
explicit InDepthQueryTreeVisitorWithContext(ContextPtr context, size_t initial_subquery_depth = 0)
|
||||||
: current_context(std::move(context))
|
: current_context(std::move(context))
|
||||||
, subquery_depth(initial_subquery_depth)
|
, subquery_depth(initial_subquery_depth)
|
||||||
{}
|
{}
|
||||||
|
|
||||||
/// Return true if visitor should traverse tree top to bottom, false otherwise
|
|
||||||
bool shouldTraverseTopToBottom() const
|
|
||||||
{
|
|
||||||
return true;
|
|
||||||
}
|
|
||||||
|
|
||||||
/// Return true if visitor should visit child, false otherwise
|
/// Return true if visitor should visit child, false otherwise
|
||||||
bool needChildVisit(VisitQueryTreeNodeType & parent [[maybe_unused]], VisitQueryTreeNodeType & child [[maybe_unused]])
|
bool needChildVisit(VisitQueryTreeNodeType & parent [[maybe_unused]], VisitQueryTreeNodeType & child [[maybe_unused]])
|
||||||
{
|
{
|
||||||
@ -146,18 +146,16 @@ public:
|
|||||||
|
|
||||||
++subquery_depth;
|
++subquery_depth;
|
||||||
|
|
||||||
bool traverse_top_to_bottom = getDerived().shouldTraverseTopToBottom();
|
getDerived().enterImpl(query_tree_node);
|
||||||
if (!traverse_top_to_bottom)
|
|
||||||
visitChildren(query_tree_node);
|
|
||||||
|
|
||||||
getDerived().visitImpl(query_tree_node);
|
visitChildren(query_tree_node);
|
||||||
|
|
||||||
if (traverse_top_to_bottom)
|
|
||||||
visitChildren(query_tree_node);
|
|
||||||
|
|
||||||
getDerived().leaveImpl(query_tree_node);
|
getDerived().leaveImpl(query_tree_node);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
void enterImpl(VisitQueryTreeNodeType & node [[maybe_unused]])
|
||||||
|
{}
|
||||||
|
|
||||||
void leaveImpl(VisitQueryTreeNodeType & node [[maybe_unused]])
|
void leaveImpl(VisitQueryTreeNodeType & node [[maybe_unused]])
|
||||||
{}
|
{}
|
||||||
private:
|
private:
|
||||||
@ -171,17 +169,31 @@ private:
|
|||||||
return *static_cast<Derived *>(this);
|
return *static_cast<Derived *>(this);
|
||||||
}
|
}
|
||||||
|
|
||||||
|
bool shouldSkipSubtree(
|
||||||
|
VisitQueryTreeNodeType & parent,
|
||||||
|
VisitQueryTreeNodeType & child,
|
||||||
|
size_t subtree_index)
|
||||||
|
{
|
||||||
|
bool need_visit_child = getDerived().needChildVisit(parent, child);
|
||||||
|
if (!need_visit_child)
|
||||||
|
return true;
|
||||||
|
|
||||||
|
if (auto * table_function_node = parent->as<TableFunctionNode>())
|
||||||
|
{
|
||||||
|
const auto & unresolved_indexes = table_function_node->getUnresolvedArgumentIndexes();
|
||||||
|
return std::find(unresolved_indexes.begin(), unresolved_indexes.end(), subtree_index) != unresolved_indexes.end();
|
||||||
|
}
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
void visitChildren(VisitQueryTreeNodeType & expression)
|
void visitChildren(VisitQueryTreeNodeType & expression)
|
||||||
{
|
{
|
||||||
|
size_t index = 0;
|
||||||
for (auto & child : expression->getChildren())
|
for (auto & child : expression->getChildren())
|
||||||
{
|
{
|
||||||
if (!child)
|
if (child && !shouldSkipSubtree(expression, child, index))
|
||||||
continue;
|
|
||||||
|
|
||||||
bool need_visit_child = getDerived().needChildVisit(expression, child);
|
|
||||||
|
|
||||||
if (need_visit_child)
|
|
||||||
visit(child);
|
visit(child);
|
||||||
|
++index;
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
@ -189,50 +201,4 @@ private:
|
|||||||
size_t subquery_depth = 0;
|
size_t subquery_depth = 0;
|
||||||
};
|
};
|
||||||
|
|
||||||
template <typename Derived>
|
|
||||||
using ConstInDepthQueryTreeVisitorWithContext = InDepthQueryTreeVisitorWithContext<Derived, true /*const_visitor*/>;
|
|
||||||
|
|
||||||
/** Visitor that use another visitor to visit node only if condition for visiting node is true.
|
|
||||||
* For example, your visitor need to visit only query tree nodes or union nodes.
|
|
||||||
*
|
|
||||||
* Condition interface:
|
|
||||||
* struct Condition
|
|
||||||
* {
|
|
||||||
* bool operator()(VisitQueryTreeNodeType & node)
|
|
||||||
* {
|
|
||||||
* return shouldNestedVisitorVisitNode(node);
|
|
||||||
* }
|
|
||||||
* }
|
|
||||||
*/
|
|
||||||
template <typename Visitor, typename Condition, bool const_visitor = false>
|
|
||||||
class InDepthQueryTreeConditionalVisitor : public InDepthQueryTreeVisitor<InDepthQueryTreeConditionalVisitor<Visitor, Condition, const_visitor>, const_visitor>
|
|
||||||
{
|
|
||||||
public:
|
|
||||||
using Base = InDepthQueryTreeVisitor<InDepthQueryTreeConditionalVisitor<Visitor, Condition, const_visitor>, const_visitor>;
|
|
||||||
using VisitQueryTreeNodeType = typename Base::VisitQueryTreeNodeType;
|
|
||||||
|
|
||||||
explicit InDepthQueryTreeConditionalVisitor(Visitor & visitor_, Condition & condition_)
|
|
||||||
: visitor(visitor_)
|
|
||||||
, condition(condition_)
|
|
||||||
{
|
|
||||||
}
|
|
||||||
|
|
||||||
bool shouldTraverseTopToBottom() const
|
|
||||||
{
|
|
||||||
return visitor.shouldTraverseTopToBottom();
|
|
||||||
}
|
|
||||||
|
|
||||||
void visitImpl(VisitQueryTreeNodeType & query_tree_node)
|
|
||||||
{
|
|
||||||
if (condition(query_tree_node))
|
|
||||||
visitor.visit(query_tree_node);
|
|
||||||
}
|
|
||||||
|
|
||||||
Visitor & visitor;
|
|
||||||
Condition & condition;
|
|
||||||
};
|
|
||||||
|
|
||||||
template <typename Visitor, typename Condition>
|
|
||||||
using ConstInDepthQueryTreeConditionalVisitor = InDepthQueryTreeConditionalVisitor<Visitor, Condition, true /*const_visitor*/>;
|
|
||||||
|
|
||||||
}
|
}
|
||||||
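To make the new `enterImpl`/`leaveImpl` contract concrete, here is a minimal sketch of a pass written against the interface documented above. The class name and the counting logic are invented for illustration; only the base-class API comes from this header.

```cpp
// Hypothetical example only: a visitor built on the enterImpl/leaveImpl
// interface. CountFunctionsVisitor is an invented name, not part of the patch.
#include <Analyzer/FunctionNode.h>
#include <Analyzer/InDepthQueryTreeVisitor.h>

namespace DB
{

class CountFunctionsVisitor : public InDepthQueryTreeVisitorWithContext<CountFunctionsVisitor>
{
public:
    using Base = InDepthQueryTreeVisitorWithContext<CountFunctionsVisitor>;
    using Base::Base;

    /// Runs before the children of the node are visited (top-to-bottom work).
    void enterImpl(QueryTreeNodePtr & node)
    {
        if (node->as<FunctionNode>())
            ++function_count;
    }

    /// Runs after the children have been visited (bottom-to-top work).
    void leaveImpl(QueryTreeNodePtr & /*node*/)
    {
    }

    size_t function_count = 0;
};

}

// Usage sketch: CountFunctionsVisitor visitor(context); visitor.visit(query_tree_root);
```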
|
@ -51,13 +51,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<AggregateFunctionsArithmericOperationsVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<AggregateFunctionsArithmericOperationsVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
/// Traverse tree bottom to top
|
void leaveImpl(QueryTreeNodePtr & node)
|
||||||
static bool shouldTraverseTopToBottom()
|
|
||||||
{
|
|
||||||
return false;
|
|
||||||
}
|
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_arithmetic_operations_in_aggregate_functions)
|
if (!getSettings().optimize_arithmetic_operations_in_aggregate_functions)
|
||||||
return;
|
return;
|
||||||
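The same mechanical rewrite recurs in the passes below: a bottom-to-top pass drops `shouldTraverseTopToBottom()` and moves its body from `visitImpl()` into `leaveImpl()`, while top-to-bottom passes simply rename `visitImpl()` to `enterImpl()`. A schematic of the bottom-to-top case (a sketch of the pattern, not the actual pass):

```cpp
// Before: bottom-to-top order had to be requested explicitly.
//     static bool shouldTraverseTopToBottom() { return false; }
//     void visitImpl(QueryTreeNodePtr & node) { /* rewrite the node */ }
//
// After: the body moves into leaveImpl(), which always runs after the
// children have been visited, so no traversal flag is needed.
void leaveImpl(QueryTreeNodePtr & node)
{
    /* rewrite `node` bottom-up here */
}
```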
|
@ -22,7 +22,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<RewriteArrayExistsToHasVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<RewriteArrayExistsToHasVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_rewrite_array_exists_to_has)
|
if (!getSettings().optimize_rewrite_array_exists_to_has)
|
||||||
return;
|
return;
|
||||||
|
@ -20,7 +20,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<AutoFinalOnQueryPassVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<AutoFinalOnQueryPassVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().final)
|
if (!getSettings().final)
|
||||||
return;
|
return;
|
||||||
|
@ -50,7 +50,7 @@ public:
|
|||||||
&& settings.max_hyperscan_regexp_total_length == 0;
|
&& settings.max_hyperscan_regexp_total_length == 0;
|
||||||
}
|
}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
auto * function_node = node->as<FunctionNode>();
|
auto * function_node = node->as<FunctionNode>();
|
||||||
if (!function_node || function_node->getFunctionName() != "or")
|
if (!function_node || function_node->getFunctionName() != "or")
|
||||||
|
@ -688,7 +688,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<ConvertQueryToCNFVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<ConvertQueryToCNFVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
auto * query_node = node->as<QueryNode>();
|
auto * query_node = node->as<QueryNode>();
|
||||||
if (!query_node)
|
if (!query_node)
|
||||||
|
@ -22,7 +22,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<CountDistinctVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<CountDistinctVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().count_distinct_optimization)
|
if (!getSettings().count_distinct_optimization)
|
||||||
return;
|
return;
|
||||||
|
@ -193,7 +193,7 @@ public:
|
|||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!isEnabled())
|
if (!isEnabled())
|
||||||
return;
|
return;
|
||||||
|
@ -29,7 +29,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<FunctionToSubcolumnsVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<FunctionToSubcolumnsVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node) const
|
void enterImpl(QueryTreeNodePtr & node) const
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_functions_to_subcolumns)
|
if (!getSettings().optimize_functions_to_subcolumns)
|
||||||
return;
|
return;
|
||||||
|
@ -37,7 +37,7 @@ public:
|
|||||||
, names_to_collect(names_to_collect_)
|
, names_to_collect(names_to_collect_)
|
||||||
{}
|
{}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_syntax_fuse_functions)
|
if (!getSettings().optimize_syntax_fuse_functions)
|
||||||
return;
|
return;
|
||||||
|
@ -46,7 +46,7 @@ public:
|
|||||||
{
|
{
|
||||||
}
|
}
|
||||||
|
|
||||||
void visitImpl(const QueryTreeNodePtr & node)
|
void enterImpl(const QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
auto * function_node = node->as<FunctionNode>();
|
auto * function_node = node->as<FunctionNode>();
|
||||||
if (!function_node || function_node->getFunctionName() != "grouping")
|
if (!function_node || function_node->getFunctionName() != "grouping")
|
||||||
|
@ -23,7 +23,7 @@ public:
|
|||||||
, multi_if_function_ptr(std::move(multi_if_function_ptr_))
|
, multi_if_function_ptr(std::move(multi_if_function_ptr_))
|
||||||
{}
|
{}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_if_chain_to_multiif)
|
if (!getSettings().optimize_if_chain_to_multiif)
|
||||||
return;
|
return;
|
||||||
|
@ -113,7 +113,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<ConvertStringsToEnumVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<ConvertStringsToEnumVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_if_transform_strings_to_enum)
|
if (!getSettings().optimize_if_transform_strings_to_enum)
|
||||||
return;
|
return;
|
||||||
|
@ -19,7 +19,7 @@ public:
|
|||||||
: Base(std::move(context))
|
: Base(std::move(context))
|
||||||
{}
|
{}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
auto * function_node = node->as<FunctionNode>();
|
auto * function_node = node->as<FunctionNode>();
|
||||||
|
|
||||||
|
@ -21,7 +21,7 @@ public:
|
|||||||
, if_function_ptr(std::move(if_function_ptr_))
|
, if_function_ptr(std::move(if_function_ptr_))
|
||||||
{}
|
{}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_multiif_to_if)
|
if (!getSettings().optimize_multiif_to_if)
|
||||||
return;
|
return;
|
||||||
|
@ -20,7 +20,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<NormalizeCountVariantsVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<NormalizeCountVariantsVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_normalize_count_variants)
|
if (!getSettings().optimize_normalize_count_variants)
|
||||||
return;
|
return;
|
||||||
|
@ -26,7 +26,7 @@ public:
|
|||||||
return !child->as<FunctionNode>();
|
return !child->as<FunctionNode>();
|
||||||
}
|
}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_group_by_function_keys)
|
if (!getSettings().optimize_group_by_function_keys)
|
||||||
return;
|
return;
|
||||||
|
@ -28,7 +28,7 @@ public:
|
|||||||
return true;
|
return true;
|
||||||
}
|
}
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_redundant_functions_in_order_by)
|
if (!getSettings().optimize_redundant_functions_in_order_by)
|
||||||
return;
|
return;
|
||||||
|
@ -116,6 +116,7 @@ namespace ErrorCodes
|
|||||||
extern const int UNKNOWN_TABLE;
|
extern const int UNKNOWN_TABLE;
|
||||||
extern const int ILLEGAL_COLUMN;
|
extern const int ILLEGAL_COLUMN;
|
||||||
extern const int NUMBER_OF_COLUMNS_DOESNT_MATCH;
|
extern const int NUMBER_OF_COLUMNS_DOESNT_MATCH;
|
||||||
|
extern const int FUNCTION_CANNOT_HAVE_PARAMETERS;
|
||||||
}
|
}
|
||||||
|
|
||||||
/** Query analyzer implementation overview. Please check documentation in QueryAnalysisPass.h first.
|
/** Query analyzer implementation overview. Please check documentation in QueryAnalysisPass.h first.
|
||||||
@ -4896,6 +4897,12 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
|
|||||||
lambda_expression_untyped->formatASTForErrorMessage(),
|
lambda_expression_untyped->formatASTForErrorMessage(),
|
||||||
scope.scope_node->formatASTForErrorMessage());
|
scope.scope_node->formatASTForErrorMessage());
|
||||||
|
|
||||||
|
if (!parameters.empty())
|
||||||
|
{
|
||||||
|
throw Exception(
|
||||||
|
ErrorCodes::FUNCTION_CANNOT_HAVE_PARAMETERS, "Function {} is not parametric", function_node.formatASTForErrorMessage());
|
||||||
|
}
|
||||||
|
|
||||||
auto lambda_expression_clone = lambda_expression_untyped->clone();
|
auto lambda_expression_clone = lambda_expression_untyped->clone();
|
||||||
|
|
||||||
IdentifierResolveScope lambda_scope(lambda_expression_clone, &scope /*parent_scope*/);
|
IdentifierResolveScope lambda_scope(lambda_expression_clone, &scope /*parent_scope*/);
|
||||||
@ -5012,9 +5019,13 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
|
|||||||
}
|
}
|
||||||
|
|
||||||
FunctionOverloadResolverPtr function = UserDefinedExecutableFunctionFactory::instance().tryGet(function_name, scope.context, parameters);
|
FunctionOverloadResolverPtr function = UserDefinedExecutableFunctionFactory::instance().tryGet(function_name, scope.context, parameters);
|
||||||
|
bool is_executable_udf = true;
|
||||||
|
|
||||||
if (!function)
|
if (!function)
|
||||||
|
{
|
||||||
function = FunctionFactory::instance().tryGet(function_name, scope.context);
|
function = FunctionFactory::instance().tryGet(function_name, scope.context);
|
||||||
|
is_executable_udf = false;
|
||||||
|
}
|
||||||
|
|
||||||
if (!function)
|
if (!function)
|
||||||
{
|
{
|
||||||
@ -5065,6 +5076,12 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
|
|||||||
return result_projection_names;
|
return result_projection_names;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Executable UDFs may have parameters. They are checked in UserDefinedExecutableFunctionFactory.
|
||||||
|
if (!parameters.empty() && !is_executable_udf)
|
||||||
|
{
|
||||||
|
throw Exception(ErrorCodes::FUNCTION_CANNOT_HAVE_PARAMETERS, "Function {} is not parametric", function_name);
|
||||||
|
}
|
||||||
|
|
||||||
/** For lambda arguments we need to initialize lambda argument types DataTypeFunction using `getLambdaArgumentTypes` function.
|
/** For lambda arguments we need to initialize lambda argument types DataTypeFunction using `getLambdaArgumentTypes` function.
|
||||||
* Then each lambda arguments are initialized with columns, where column source is lambda.
|
* Then each lambda arguments are initialized with columns, where column source is lambda.
|
||||||
* This information is important for later steps of query processing.
|
* This information is important for later steps of query processing.
|
||||||
@ -6434,7 +6451,7 @@ void QueryAnalyzer::resolveTableFunction(QueryTreeNodePtr & table_function_node,
|
|||||||
table_function_ptr->parseArguments(table_function_ast, scope_context);
|
table_function_ptr->parseArguments(table_function_ast, scope_context);
|
||||||
|
|
||||||
auto table_function_storage = scope_context->getQueryContext()->executeTableFunction(table_function_ast, table_function_ptr);
|
auto table_function_storage = scope_context->getQueryContext()->executeTableFunction(table_function_ast, table_function_ptr);
|
||||||
table_function_node_typed.resolve(std::move(table_function_ptr), std::move(table_function_storage), scope_context);
|
table_function_node_typed.resolve(std::move(table_function_ptr), std::move(table_function_storage), scope_context, std::move(skip_analysis_arguments_indexes));
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Resolve array join node in scope
|
/// Resolve array join node in scope
|
||||||
|
@ -26,7 +26,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<RewriteAggregateFunctionWithIfVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<RewriteAggregateFunctionWithIfVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_rewrite_aggregate_function_with_if)
|
if (!getSettings().optimize_rewrite_aggregate_function_with_if)
|
||||||
return;
|
return;
|
||||||
|
@ -24,7 +24,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<ShardNumColumnToFunctionVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<ShardNumColumnToFunctionVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node) const
|
void enterImpl(QueryTreeNodePtr & node) const
|
||||||
{
|
{
|
||||||
auto * column_node = node->as<ColumnNode>();
|
auto * column_node = node->as<ColumnNode>();
|
||||||
if (!column_node)
|
if (!column_node)
|
||||||
|
@ -26,7 +26,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<SumIfToCountIfVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<SumIfToCountIfVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_rewrite_sum_if_to_count_if)
|
if (!getSettings().optimize_rewrite_sum_if_to_count_if)
|
||||||
return;
|
return;
|
||||||
|
@ -31,7 +31,7 @@ public:
|
|||||||
using Base = InDepthQueryTreeVisitorWithContext<UniqInjectiveFunctionsEliminationVisitor>;
|
using Base = InDepthQueryTreeVisitorWithContext<UniqInjectiveFunctionsEliminationVisitor>;
|
||||||
using Base::Base;
|
using Base::Base;
|
||||||
|
|
||||||
void visitImpl(QueryTreeNodePtr & node)
|
void enterImpl(QueryTreeNodePtr & node)
|
||||||
{
|
{
|
||||||
if (!getSettings().optimize_injective_functions_inside_uniq)
|
if (!getSettings().optimize_injective_functions_inside_uniq)
|
||||||
return;
|
return;
|
||||||
|
@ -27,12 +27,13 @@ TableFunctionNode::TableFunctionNode(String table_function_name_)
|
|||||||
children[arguments_child_index] = std::make_shared<ListNode>();
|
children[arguments_child_index] = std::make_shared<ListNode>();
|
||||||
}
|
}
|
||||||
|
|
||||||
void TableFunctionNode::resolve(TableFunctionPtr table_function_value, StoragePtr storage_value, ContextPtr context)
|
void TableFunctionNode::resolve(TableFunctionPtr table_function_value, StoragePtr storage_value, ContextPtr context, std::vector<size_t> unresolved_arguments_indexes_)
|
||||||
{
|
{
|
||||||
table_function = std::move(table_function_value);
|
table_function = std::move(table_function_value);
|
||||||
storage = std::move(storage_value);
|
storage = std::move(storage_value);
|
||||||
storage_id = storage->getStorageID();
|
storage_id = storage->getStorageID();
|
||||||
storage_snapshot = storage->getStorageSnapshot(storage->getInMemoryMetadataPtr(), context);
|
storage_snapshot = storage->getStorageSnapshot(storage->getInMemoryMetadataPtr(), context);
|
||||||
|
unresolved_arguments_indexes = std::move(unresolved_arguments_indexes_);
|
||||||
}
|
}
|
||||||
|
|
||||||
const StorageID & TableFunctionNode::getStorageID() const
|
const StorageID & TableFunctionNode::getStorageID() const
|
||||||
@ -132,6 +133,7 @@ QueryTreeNodePtr TableFunctionNode::cloneImpl() const
|
|||||||
result->storage_snapshot = storage_snapshot;
|
result->storage_snapshot = storage_snapshot;
|
||||||
result->table_expression_modifiers = table_expression_modifiers;
|
result->table_expression_modifiers = table_expression_modifiers;
|
||||||
result->settings_changes = settings_changes;
|
result->settings_changes = settings_changes;
|
||||||
|
result->unresolved_arguments_indexes = unresolved_arguments_indexes;
|
||||||
|
|
||||||
return result;
|
return result;
|
||||||
}
|
}
|
||||||
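For context, the indexes recorded by `resolve()` here are the ones consulted by the visitor's `shouldSkipSubtree()` shown earlier; a self-contained sketch of that lookup (illustrative only, names simplified):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Returns true when the argument at `argument_index` was left unresolved by
// the analyzer and must therefore be skipped by query-tree passes.
bool isUnresolvedArgument(const std::vector<size_t> & unresolved_indexes, size_t argument_index)
{
    return std::find(unresolved_indexes.begin(), unresolved_indexes.end(), argument_index)
        != unresolved_indexes.end();
}
```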
|
@ -98,7 +98,7 @@ public:
|
|||||||
}
|
}
|
||||||
|
|
||||||
/// Resolve table function with table function, storage and context
|
/// Resolve table function with table function, storage and context
|
||||||
void resolve(TableFunctionPtr table_function_value, StoragePtr storage_value, ContextPtr context);
|
void resolve(TableFunctionPtr table_function_value, StoragePtr storage_value, ContextPtr context, std::vector<size_t> unresolved_arguments_indexes_);
|
||||||
|
|
||||||
/// Get storage id, throws exception if function node is not resolved
|
/// Get storage id, throws exception if function node is not resolved
|
||||||
const StorageID & getStorageID() const;
|
const StorageID & getStorageID() const;
|
||||||
@ -106,6 +106,11 @@ public:
|
|||||||
/// Get storage snapshot, throws exception if function node is not resolved
|
/// Get storage snapshot, throws exception if function node is not resolved
|
||||||
const StorageSnapshotPtr & getStorageSnapshot() const;
|
const StorageSnapshotPtr & getStorageSnapshot() const;
|
||||||
|
|
||||||
|
const std::vector<size_t> & getUnresolvedArgumentIndexes() const
|
||||||
|
{
|
||||||
|
return unresolved_arguments_indexes;
|
||||||
|
}
|
||||||
|
|
||||||
/// Return true if table function node has table expression modifiers, false otherwise
|
/// Return true if table function node has table expression modifiers, false otherwise
|
||||||
bool hasTableExpressionModifiers() const
|
bool hasTableExpressionModifiers() const
|
||||||
{
|
{
|
||||||
@ -164,6 +169,7 @@ private:
|
|||||||
StoragePtr storage;
|
StoragePtr storage;
|
||||||
StorageID storage_id;
|
StorageID storage_id;
|
||||||
StorageSnapshotPtr storage_snapshot;
|
StorageSnapshotPtr storage_snapshot;
|
||||||
|
std::vector<size_t> unresolved_arguments_indexes;
|
||||||
std::optional<TableExpressionModifiers> table_expression_modifiers;
|
std::optional<TableExpressionModifiers> table_expression_modifiers;
|
||||||
SettingsChanges settings_changes;
|
SettingsChanges settings_changes;
|
||||||
|
|
||||||
|
@ -2624,9 +2624,8 @@ void ClientBase::parseAndCheckOptions(OptionsDescription & options_description,
|
|||||||
throw Exception(ErrorCodes::UNRECOGNIZED_ARGUMENTS, "Unrecognized option '{}'", unrecognized_options[0]);
|
throw Exception(ErrorCodes::UNRECOGNIZED_ARGUMENTS, "Unrecognized option '{}'", unrecognized_options[0]);
|
||||||
}
|
}
|
||||||
|
|
||||||
/// Check positional options (options after ' -- ', ex: clickhouse-client -- <options>).
|
/// Check positional options.
|
||||||
unrecognized_options = po::collect_unrecognized(parsed.options, po::collect_unrecognized_mode::include_positional);
|
if (std::ranges::count_if(parsed.options, [](const auto & op){ return !op.unregistered && op.string_key.empty() && !op.original_tokens[0].starts_with("--"); }) > 1)
|
||||||
if (unrecognized_options.size() > 1)
|
|
||||||
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Positional options are not supported.");
|
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Positional options are not supported.");
|
||||||
|
|
||||||
po::store(parsed, options);
|
po::store(parsed, options);
|
||||||
|
@ -41,9 +41,25 @@ namespace DB
|
|||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
std::mutex CaresPTRResolver::mutex;
|
struct AresChannelRAII
|
||||||
|
{
|
||||||
|
AresChannelRAII()
|
||||||
|
{
|
||||||
|
if (ares_init(&channel) != ARES_SUCCESS)
|
||||||
|
{
|
||||||
|
throw DB::Exception(DB::ErrorCodes::DNS_ERROR, "Failed to initialize c-ares channel");
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
CaresPTRResolver::CaresPTRResolver(CaresPTRResolver::provider_token) : channel(nullptr)
|
~AresChannelRAII()
|
||||||
|
{
|
||||||
|
ares_destroy(channel);
|
||||||
|
}
|
||||||
|
|
||||||
|
ares_channel channel;
|
||||||
|
};
|
||||||
|
|
||||||
|
CaresPTRResolver::CaresPTRResolver(CaresPTRResolver::provider_token)
|
||||||
{
|
{
|
||||||
/*
|
/*
|
||||||
* ares_library_init is not thread safe. Currently, the only other usage of c-ares seems to be in grpc.
|
* ares_library_init is not thread safe. Currently, the only other usage of c-ares seems to be in grpc.
|
||||||
@ -57,34 +73,22 @@ namespace DB
|
|||||||
* */
|
* */
|
||||||
static const auto library_init_result = ares_library_init(ARES_LIB_INIT_ALL);
|
static const auto library_init_result = ares_library_init(ARES_LIB_INIT_ALL);
|
||||||
|
|
||||||
-    if (library_init_result != ARES_SUCCESS || ares_init(&channel) != ARES_SUCCESS)
+    if (library_init_result != ARES_SUCCESS)
     {
         throw DB::Exception(DB::ErrorCodes::DNS_ERROR, "Failed to initialize c-ares");
     }
 }

-CaresPTRResolver::~CaresPTRResolver()
-{
-    ares_destroy(channel);
-    /*
-     * Library initialization is currently done only once in the constructor. Multiple instances of CaresPTRResolver
-     * will be used in the lifetime of ClickHouse, thus it's problematic to have de-init here.
-     * In a practical view, it makes little to no sense to de-init a DNS library since DNS requests will happen
-     * until the end of the program. Hence, ares_library_cleanup() will not be called.
-     * */
-}
-
 std::unordered_set<std::string> CaresPTRResolver::resolve(const std::string & ip)
 {
-    std::lock_guard guard(mutex);
+    AresChannelRAII channel_raii;

     std::unordered_set<std::string> ptr_records;

-    resolve(ip, ptr_records);
+    resolve(ip, ptr_records, channel_raii.channel);

-    if (!wait_and_process())
+    if (!wait_and_process(channel_raii.channel))
     {
-        cancel_requests();
         throw DB::Exception(DB::ErrorCodes::DNS_ERROR, "Failed to complete reverse DNS query for IP {}", ip);
     }

@@ -93,22 +97,21 @@ namespace DB

 std::unordered_set<std::string> CaresPTRResolver::resolve_v6(const std::string & ip)
 {
-    std::lock_guard guard(mutex);
+    AresChannelRAII channel_raii;

     std::unordered_set<std::string> ptr_records;

-    resolve_v6(ip, ptr_records);
+    resolve_v6(ip, ptr_records, channel_raii.channel);

-    if (!wait_and_process())
+    if (!wait_and_process(channel_raii.channel))
     {
-        cancel_requests();
         throw DB::Exception(DB::ErrorCodes::DNS_ERROR, "Failed to complete reverse DNS query for IP {}", ip);
     }

     return ptr_records;
 }

-void CaresPTRResolver::resolve(const std::string & ip, std::unordered_set<std::string> & response)
+void CaresPTRResolver::resolve(const std::string & ip, std::unordered_set<std::string> & response, ares_channel channel)
 {
     in_addr addr;

@@ -117,7 +120,7 @@ namespace DB
     ares_gethostbyaddr(channel, reinterpret_cast<const void*>(&addr), sizeof(addr), AF_INET, callback, &response);
 }

-void CaresPTRResolver::resolve_v6(const std::string & ip, std::unordered_set<std::string> & response)
+void CaresPTRResolver::resolve_v6(const std::string & ip, std::unordered_set<std::string> & response, ares_channel channel)
 {
     in6_addr addr;
     inet_pton(AF_INET6, ip.c_str(), &addr);

@@ -125,15 +128,15 @@ namespace DB
     ares_gethostbyaddr(channel, reinterpret_cast<const void*>(&addr), sizeof(addr), AF_INET6, callback, &response);
 }

-bool CaresPTRResolver::wait_and_process()
+bool CaresPTRResolver::wait_and_process(ares_channel channel)
 {
     int sockets[ARES_GETSOCK_MAXNUM];
     pollfd pollfd[ARES_GETSOCK_MAXNUM];

     while (true)
     {
-        auto readable_sockets = get_readable_sockets(sockets, pollfd);
-        auto timeout = calculate_timeout();
+        auto readable_sockets = get_readable_sockets(sockets, pollfd, channel);
+        auto timeout = calculate_timeout(channel);

         int number_of_fds_ready = 0;
         if (!readable_sockets.empty())

@@ -158,11 +161,11 @@ namespace DB

         if (number_of_fds_ready > 0)
         {
-            process_readable_sockets(readable_sockets);
+            process_readable_sockets(readable_sockets, channel);
         }
         else
         {
-            process_possible_timeout();
+            process_possible_timeout(channel);
             break;
         }
     }

@@ -170,12 +173,12 @@ namespace DB
     return true;
 }

-void CaresPTRResolver::cancel_requests()
+void CaresPTRResolver::cancel_requests(ares_channel channel)
 {
     ares_cancel(channel);
 }

-std::span<pollfd> CaresPTRResolver::get_readable_sockets(int * sockets, pollfd * pollfd)
+std::span<pollfd> CaresPTRResolver::get_readable_sockets(int * sockets, pollfd * pollfd, ares_channel channel)
 {
     int sockets_bitmask = ares_getsock(channel, sockets, ARES_GETSOCK_MAXNUM);

@@ -205,7 +208,7 @@ namespace DB
     return std::span<struct pollfd>(pollfd, number_of_sockets_to_poll);
 }

-int64_t CaresPTRResolver::calculate_timeout()
+int64_t CaresPTRResolver::calculate_timeout(ares_channel channel)
 {
     timeval tv;
     if (auto * tvp = ares_timeout(channel, nullptr, &tv))

@@ -218,14 +221,14 @@ namespace DB
     return 0;
 }

-void CaresPTRResolver::process_possible_timeout()
+void CaresPTRResolver::process_possible_timeout(ares_channel channel)
 {
     /* Call ares_process() unconditonally here, even if we simply timed out
        above, as otherwise the ares name resolve won't timeout! */
     ares_process_fd(channel, ARES_SOCKET_BAD, ARES_SOCKET_BAD);
 }

-void CaresPTRResolver::process_readable_sockets(std::span<pollfd> readable_sockets)
+void CaresPTRResolver::process_readable_sockets(std::span<pollfd> readable_sockets, ares_channel channel)
 {
     for (auto readable_socket : readable_sockets)
     {
@@ -28,32 +28,35 @@ namespace DB

 public:
     explicit CaresPTRResolver(provider_token);
-    ~CaresPTRResolver() override;
+
+    /*
+     * Library initialization is currently done only once in the constructor. Multiple instances of CaresPTRResolver
+     * will be used in the lifetime of ClickHouse, thus it's problematic to have de-init here.
+     * In a practical view, it makes little to no sense to de-init a DNS library since DNS requests will happen
+     * until the end of the program. Hence, ares_library_cleanup() will not be called.
+     * */
+    ~CaresPTRResolver() override = default;

     std::unordered_set<std::string> resolve(const std::string & ip) override;

     std::unordered_set<std::string> resolve_v6(const std::string & ip) override;

 private:
-    bool wait_and_process();
+    bool wait_and_process(ares_channel channel);

-    void cancel_requests();
+    void cancel_requests(ares_channel channel);

-    void resolve(const std::string & ip, std::unordered_set<std::string> & response);
+    void resolve(const std::string & ip, std::unordered_set<std::string> & response, ares_channel channel);

-    void resolve_v6(const std::string & ip, std::unordered_set<std::string> & response);
+    void resolve_v6(const std::string & ip, std::unordered_set<std::string> & response, ares_channel channel);

-    std::span<pollfd> get_readable_sockets(int * sockets, pollfd * pollfd);
+    std::span<pollfd> get_readable_sockets(int * sockets, pollfd * pollfd, ares_channel channel);

-    int64_t calculate_timeout();
+    int64_t calculate_timeout(ares_channel channel);

-    void process_possible_timeout();
+    void process_possible_timeout(ares_channel channel);

-    void process_readable_sockets(std::span<pollfd> readable_sockets);
+    void process_readable_sockets(std::span<pollfd> readable_sockets, ares_channel channel);

-    ares_channel channel;
-
-    static std::mutex mutex;
 };
 }
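The new code relies on an `AresChannelRAII` helper whose definition does not appear in these hunks. A minimal sketch of what such a per-query wrapper could look like (the name comes from the hunks above, but the body and error handling here are assumptions, not the commit's exact code):

```cpp
#include <ares.h>
#include <stdexcept>

/// Sketch: owns one ares_channel for the duration of a single reverse-DNS query,
/// so no channel shared between instances (and no mutex around it) is needed.
struct AresChannelRAII
{
    AresChannelRAII()
    {
        if (ares_init(&channel) != ARES_SUCCESS)
            throw std::runtime_error("Failed to initialize a c-ares channel");
    }

    ~AresChannelRAII() { ares_destroy(channel); }

    AresChannelRAII(const AresChannelRAII &) = delete;
    AresChannelRAII & operator=(const AresChannelRAII &) = delete;

    ares_channel channel;
};
```

With the channel scoped to one call, the class-wide `mutex` and the `cancel_requests()` calls on the failure path become unnecessary, which is exactly what the hunks above remove.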
@@ -192,13 +192,13 @@ static void mergeAttributes(Element & config_element, Element & with_element)

 std::string ConfigProcessor::encryptValue(const std::string & codec_name, const std::string & value)
 {
-    EncryptionMethod method = getEncryptionMethod(codec_name);
-    CompressionCodecEncrypted codec(method);
+    EncryptionMethod encryption_method = toEncryptionMethod(codec_name);
+    CompressionCodecEncrypted codec(encryption_method);

     Memory<> memory;
     memory.resize(codec.getCompressedReserveSize(static_cast<UInt32>(value.size())));
     auto bytes_written = codec.compress(value.data(), static_cast<UInt32>(value.size()), memory.data());
-    auto encrypted_value = std::string(memory.data(), bytes_written);
+    std::string encrypted_value(memory.data(), bytes_written);
     std::string hex_value;
     boost::algorithm::hex(encrypted_value.begin(), encrypted_value.end(), std::back_inserter(hex_value));
     return hex_value;

@@ -206,8 +206,8 @@ std::string ConfigProcessor::encryptValue(const std::string & codec_name, const

 std::string ConfigProcessor::decryptValue(const std::string & codec_name, const std::string & value)
 {
-    EncryptionMethod method = getEncryptionMethod(codec_name);
-    CompressionCodecEncrypted codec(method);
+    EncryptionMethod encryption_method = toEncryptionMethod(codec_name);
+    CompressionCodecEncrypted codec(encryption_method);

     Memory<> memory;
     std::string encrypted_value;

@@ -223,7 +223,7 @@ std::string ConfigProcessor::decryptValue(const std::string & codec_name, const

     memory.resize(codec.readDecompressedBlockSize(encrypted_value.data()));
     codec.decompress(encrypted_value.data(), static_cast<UInt32>(encrypted_value.size()), memory.data());
-    std::string decrypted_value = std::string(memory.data(), memory.size());
+    std::string decrypted_value(memory.data(), memory.size());
     return decrypted_value;
 }

@@ -234,7 +234,7 @@ void ConfigProcessor::decryptRecursive(Poco::XML::Node * config_root)
         if (node->nodeType() == Node::ELEMENT_NODE)
         {
             Element & element = dynamic_cast<Element &>(*node);
-            if (element.hasAttribute("encryption_codec"))
+            if (element.hasAttribute("encrypted_by"))
             {
                 const NodeListPtr children = element.childNodes();
                 if (children->length() != 1)

@@ -244,8 +244,8 @@ void ConfigProcessor::decryptRecursive(Poco::XML::Node * config_root)
                 if (text_node->nodeType() != Node::TEXT_NODE)
                     throw Exception(ErrorCodes::BAD_ARGUMENTS, "Encrypted node {} should have text node", node->nodeName());

-                auto encryption_codec = element.getAttribute("encryption_codec");
-                text_node->setNodeValue(decryptValue(encryption_codec, text_node->getNodeValue()));
+                auto encrypted_by = element.getAttribute("encrypted_by");
+                text_node->setNodeValue(decryptValue(encrypted_by, text_node->getNodeValue()));
             }
             decryptRecursive(node);
         }

@@ -775,7 +775,7 @@ ConfigProcessor::LoadedConfig ConfigProcessor::loadConfigWithZooKeeperIncludes(

 void ConfigProcessor::decryptEncryptedElements(LoadedConfig & loaded_config)
 {
-    CompressionCodecEncrypted::Configuration::instance().tryLoad(*loaded_config.configuration, "encryption_codecs");
+    CompressionCodecEncrypted::Configuration::instance().load(*loaded_config.configuration, "encryption_codecs");
     Node * config_root = getRootNode(loaded_config.preprocessed_xml.get());
     decryptRecursive(config_root);
     loaded_config.configuration = new Poco::Util::XMLConfiguration(loaded_config.preprocessed_xml);
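For context, a sketch of how the renamed helpers fit together, modeled on the `encrypt_decrypt` example program changed further down in this commit (the config path and codec name here are placeholders):

```cpp
// Assumes a ClickHouse build environment: codec keys must be loaded before use.
DB::ConfigProcessor processor("/etc/clickhouse-server/config.xml", false, true);
auto loaded_config = processor.loadConfig();
DB::CompressionCodecEncrypted::Configuration::instance().load(*loaded_config.configuration, "encryption_codecs");

// encryptValue() compresses-encrypts and hex-encodes; decryptValue() reverses it.
std::string hex = processor.encryptValue("AES_128_GCM_SIV", "abcd");
std::string plain = processor.decryptValue("AES_128_GCM_SIV", hex);
// plain == "abcd"; the hex string is what goes into an element marked encrypted_by="AES_128_GCM_SIV".
```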
@@ -31,30 +31,25 @@ namespace ErrorCodes
     extern const int TIMEOUT_EXCEEDED;
 }

-namespace
-{
-    constexpr size_t DBMS_SYSTEM_LOG_QUEUE_SIZE = 1048576;
-}
-
 ISystemLog::~ISystemLog() = default;


 template <typename LogElement>
-SystemLogQueue<LogElement>::SystemLogQueue(
-    const String & table_name_,
-    size_t flush_interval_milliseconds_,
-    bool turn_off_logger_)
-    : log(&Poco::Logger::get("SystemLogQueue (" + table_name_ + ")"))
-    , flush_interval_milliseconds(flush_interval_milliseconds_)
+SystemLogQueue<LogElement>::SystemLogQueue(const SystemLogQueueSettings & settings_)
+    : log(&Poco::Logger::get("SystemLogQueue (" + settings_.database + "." + settings_.table + ")"))
+    , settings(settings_)
 {
-    if (turn_off_logger_)
+    queue.reserve(settings.reserved_size_rows);
+
+    if (settings.turn_off_logger)
         log->setLevel(0);
 }

 static thread_local bool recursive_push_call = false;

 template <typename LogElement>
-void SystemLogQueue<LogElement>::push(const LogElement & element)
+void SystemLogQueue<LogElement>::push(LogElement && element)
 {
     /// It is possible that the method will be called recursively.
     /// Better to drop these events to avoid complications.

@@ -70,7 +65,7 @@ void SystemLogQueue<LogElement>::push(const LogElement & element)
     MemoryTrackerBlockerInThread temporarily_disable_memory_tracker;

     /// Should not log messages under mutex.
-    bool queue_is_half_full = false;
+    bool buffer_size_rows_flush_threshold_exceeded = false;

     {
         std::unique_lock lock(mutex);

@@ -78,9 +73,9 @@ void SystemLogQueue<LogElement>::push(const LogElement & element)
         if (is_shutdown)
             return;

-        if (queue.size() == DBMS_SYSTEM_LOG_QUEUE_SIZE / 2)
+        if (queue.size() == settings.buffer_size_rows_flush_threshold)
         {
-            queue_is_half_full = true;
+            buffer_size_rows_flush_threshold_exceeded = true;

             // The queue more than half full, time to flush.
             // We only check for strict equality, because messages are added one

@@ -94,7 +89,7 @@ void SystemLogQueue<LogElement>::push(const LogElement & element)
             flush_event.notify_all();
         }

-        if (queue.size() >= DBMS_SYSTEM_LOG_QUEUE_SIZE)
+        if (queue.size() >= settings.max_size_rows)
         {
             // Ignore all further entries until the queue is flushed.
             // Log a message about that. Don't spam it -- this might be especially

@@ -108,27 +103,28 @@ void SystemLogQueue<LogElement>::push(const LogElement & element)
                 // TextLog sets its logger level to 0, so this log is a noop and
                 // there is no recursive logging.
                 lock.unlock();
-                LOG_ERROR(log, "Queue is full for system log '{}' at {}", demangle(typeid(*this).name()), queue_front_index);
+                LOG_ERROR(log, "Queue is full for system log '{}' at {}. max_size_rows {}",
+                    demangle(typeid(*this).name()),
+                    queue_front_index,
+                    settings.max_size_rows);
             }

             return;
         }

-        queue.push_back(element);
+        queue.push_back(std::move(element));
     }

-    if (queue_is_half_full)
-        LOG_INFO(log, "Queue is half full for system log '{}'.", demangle(typeid(*this).name()));
+    if (buffer_size_rows_flush_threshold_exceeded)
+        LOG_INFO(log, "Queue is half full for system log '{}'. buffer_size_rows_flush_threshold {}",
+            demangle(typeid(*this).name()), settings.buffer_size_rows_flush_threshold);
 }

 template <typename LogElement>
-void SystemLogBase<LogElement>::flush(bool force)
+void SystemLogQueue<LogElement>::handleCrash()
 {
-    uint64_t this_thread_requested_offset = queue->notifyFlush(force);
-    if (this_thread_requested_offset == uint64_t(-1))
-        return;
-
-    queue->waitFlush(this_thread_requested_offset);
+    if (settings.notify_flush_on_crash)
+        notifyFlush(/* force */ true);
 }

 template <typename LogElement>

@@ -185,11 +181,13 @@ void SystemLogQueue<LogElement>::confirm(uint64_t to_flush_end)
 }

 template <typename LogElement>
-typename SystemLogQueue<LogElement>::Index SystemLogQueue<LogElement>::pop(std::vector<LogElement>& output, bool& should_prepare_tables_anyway, bool& exit_this_thread)
+typename SystemLogQueue<LogElement>::Index SystemLogQueue<LogElement>::pop(std::vector<LogElement> & output,
+    bool & should_prepare_tables_anyway,
+    bool & exit_this_thread)
 {
     std::unique_lock lock(mutex);
     flush_event.wait_for(lock,
-        std::chrono::milliseconds(flush_interval_milliseconds),
+        std::chrono::milliseconds(settings.flush_interval_milliseconds),
         [&] ()
         {
             return requested_flush_up_to > flushed_up_to || is_shutdown || is_force_prepare_tables;

@@ -219,13 +217,28 @@ void SystemLogQueue<LogElement>::shutdown()

 template <typename LogElement>
 SystemLogBase<LogElement>::SystemLogBase(
-    const String& table_name_,
-    size_t flush_interval_milliseconds_,
+    const SystemLogQueueSettings & settings_,
     std::shared_ptr<SystemLogQueue<LogElement>> queue_)
-    : queue(queue_ ? queue_ : std::make_shared<SystemLogQueue<LogElement>>(table_name_, flush_interval_milliseconds_))
+    : queue(queue_ ? queue_ : std::make_shared<SystemLogQueue<LogElement>>(settings_))
 {
 }

+template <typename LogElement>
+void SystemLogBase<LogElement>::flush(bool force)
+{
+    uint64_t this_thread_requested_offset = queue->notifyFlush(force);
+    if (this_thread_requested_offset == uint64_t(-1))
+        return;
+
+    queue->waitFlush(this_thread_requested_offset);
+}
+
+template <typename LogElement>
+void SystemLogBase<LogElement>::handleCrash()
+{
+    queue->handleCrash();
+}
+
 template <typename LogElement>
 void SystemLogBase<LogElement>::startup()
 {

@@ -234,9 +247,9 @@ void SystemLogBase<LogElement>::startup()
 }

 template <typename LogElement>
-void SystemLogBase<LogElement>::add(const LogElement & element)
+void SystemLogBase<LogElement>::add(LogElement element)
 {
-    queue->push(element);
+    queue->push(std::move(element));
 }

 template <typename LogElement>
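The signature changes above (`add(LogElement)` taken by value, `push(LogElement &&)` taken by rvalue reference) are the usual "sink" idiom. A self-contained illustration of the pattern, not ClickHouse code:

```cpp
#include <string>
#include <utility>
#include <vector>

template <typename T>
class Queue
{
public:
    // Sink: accepts only rvalues and moves straight into storage.
    void push(T && element) { items.push_back(std::move(element)); }

private:
    std::vector<T> items;
};

template <typename T>
void add(Queue<T> & queue, T element)  // by value: the caller decides copy vs. move
{
    queue.push(std::move(element));    // exactly one move from here on
}

int main()
{
    Queue<std::string> q;
    std::string s(1000, 'x');
    add(q, std::move(s));  // no copy of the payload, mirroring the add(std::move(elem)) call sites below
}
```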
@@ -62,6 +62,9 @@ public:

     virtual void stopFlushThread() = 0;

+    /// Handles crash, flushes log without blocking if notify_flush_on_crash is set
+    virtual void handleCrash() = 0;
+
     virtual ~ISystemLog();

     virtual void savingThreadFunction() = 0;

@@ -73,26 +76,38 @@ protected:
     bool is_shutdown = false;
 };

+struct SystemLogQueueSettings
+{
+    String database;
+    String table;
+    size_t reserved_size_rows;
+    size_t max_size_rows;
+    size_t buffer_size_rows_flush_threshold;
+    size_t flush_interval_milliseconds;
+    bool notify_flush_on_crash;
+    bool turn_off_logger;
+};
+
 template <typename LogElement>
 class SystemLogQueue
 {
     using Index = uint64_t;

 public:
-    SystemLogQueue(
-        const String & table_name_,
-        size_t flush_interval_milliseconds_,
-        bool turn_off_logger_ = false);
+    SystemLogQueue(const SystemLogQueueSettings & settings_);

     void shutdown();

     // producer methods
-    void push(const LogElement & element);
+    void push(LogElement && element);
     Index notifyFlush(bool should_prepare_tables_anyway);
     void waitFlush(Index expected_flushed_up_to);

+    /// Handles crash, flushes log without blocking if notify_flush_on_crash is set
+    void handleCrash();
+
     // consumer methods
-    Index pop(std::vector<LogElement>& output, bool& should_prepare_tables_anyway, bool& exit_this_thread);
+    Index pop(std::vector<LogElement>& output, bool & should_prepare_tables_anyway, bool & exit_this_thread);
     void confirm(Index to_flush_end);

 private:

@@ -120,7 +135,8 @@ private:
     bool is_shutdown = false;

     std::condition_variable flush_event;
-    const size_t flush_interval_milliseconds;
+
+    const SystemLogQueueSettings settings;
 };


@@ -131,8 +147,7 @@ public:
     using Self = SystemLogBase;

     SystemLogBase(
-        const String& table_name_,
-        size_t flush_interval_milliseconds_,
+        const SystemLogQueueSettings & settings_,
         std::shared_ptr<SystemLogQueue<LogElement>> queue_ = nullptr);

     void startup() override;

@@ -140,17 +155,25 @@ public:
     /** Append a record into log.
      * Writing to table will be done asynchronously and in case of failure, record could be lost.
      */
-    void add(const LogElement & element);
+    void add(LogElement element);

     /// Flush data in the buffer to disk. Block the thread until the data is stored on disk.
     void flush(bool force) override;

+    /// Handles crash, flushes log without blocking if notify_flush_on_crash is set
+    void handleCrash() override;
+
     /// Non-blocking flush data in the buffer to disk.
     void notifyFlush(bool force);

     String getName() const override { return LogElement::name(); }

     static const char * getDefaultOrderBy() { return "event_date, event_time"; }
+    static consteval size_t getDefaultMaxSize() { return 1048576; }
+    static consteval size_t getDefaultReservedSize() { return 8192; }
+    static consteval size_t getDefaultFlushIntervalMilliseconds() { return 7500; }
+    static consteval bool shouldNotifyFlushOnCrash() { return false; }
+    static consteval bool shouldTurnOffLogger() { return false; }

 protected:
     std::shared_ptr<SystemLogQueue<LogElement>> queue;
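A sketch of how a concrete log class might fill the new settings struct from the `consteval` defaults declared above. The call site that actually does this is not part of these hunks, and `MyLog` is a hypothetical stand-in deriving from `SystemLogBase<MyLogElement>`:

```cpp
// Hypothetical: field order follows the SystemLogQueueSettings declaration above.
SystemLogQueueSettings settings
{
    .database = "system",
    .table = "my_log",
    .reserved_size_rows = MyLog::getDefaultReservedSize(),
    .max_size_rows = MyLog::getDefaultMaxSize(),
    .buffer_size_rows_flush_threshold = MyLog::getDefaultMaxSize() / 2,
    .flush_interval_milliseconds = MyLog::getDefaultFlushIntervalMilliseconds(),
    .notify_flush_on_crash = MyLog::shouldNotifyFlushOnCrash(),
    .turn_off_logger = MyLog::shouldTurnOffLogger(),
};
```

Because the getters are `consteval`, every default is forced to be a compile-time constant, so a subclass such as the crash log can override them (as it does later in this commit) without any runtime configuration plumbing.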
@@ -1440,7 +1440,7 @@ void ZooKeeper::logOperationIfNeeded(const ZooKeeperRequestPtr & request, const
             elem.thread_id = request->thread_id;
             elem.query_id = request->query_id;
         }
-        maybe_zk_log->add(elem);
+        maybe_zk_log->add(std::move(elem));
     }
 }
 #else
@@ -3,7 +3,7 @@
 #include <Compression/CompressionCodecEncrypted.h>
 #include <iostream>

-/** This test program encrypts or decrypts text values using a symmetric encryption codec like AES_128_GCM_SIV or AES_256_GCM_SIV.
+/** This program encrypts or decrypts text values using a symmetric encryption codec like AES_128_GCM_SIV or AES_256_GCM_SIV.
   * Keys for codecs are loaded from <encryption_codecs> section of configuration file.
   *
   * How to use:

@@ -32,7 +32,7 @@ int main(int argc, char ** argv)

     DB::ConfigProcessor processor(argv[1], false, true);
     auto loaded_config = processor.loadConfig();
-    DB::CompressionCodecEncrypted::Configuration::instance().tryLoad(*loaded_config.configuration, "encryption_codecs");
+    DB::CompressionCodecEncrypted::Configuration::instance().load(*loaded_config.configuration, "encryption_codecs");

     if (action == "-e")
         std::cout << processor.encryptValue(codec_name, value) << std::endl;
@@ -9,34 +9,35 @@ namespace DB
 {
 TEST(Common, ReverseDNS)
 {
-    auto addresses = std::vector<std::string>({
-        "8.8.8.8", "2001:4860:4860::8888", // dns.google
-        "142.250.219.35", // google.com
-        "157.240.12.35", // facebook
-        "208.84.244.116", "2600:1419:c400::214:c410", //www.terra.com.br,
-        "127.0.0.1", "::1"
-    });
-
     auto func = [&]()
     {
         // Good random seed, good engine
         auto rnd1 = std::mt19937(std::random_device{}());

-        for (int i = 0; i < 50; ++i)
+        for (int i = 0; i < 10; ++i)
         {
             auto & dns_resolver_instance = DNSResolver::instance();
-            // unfortunately, DNS cache can't be disabled because we might end up causing a DDoS attack
-            // dns_resolver_instance.setDisableCacheFlag();
+            dns_resolver_instance.setDisableCacheFlag();

-            auto addr_index = rnd1() % addresses.size();
+            auto val1 = rnd1() % static_cast<uint32_t>((pow(2, 31) - 1));
+            auto val2 = rnd1() % static_cast<uint32_t>((pow(2, 31) - 1));
+            auto val3 = rnd1() % static_cast<uint32_t>((pow(2, 31) - 1));
+            auto val4 = rnd1() % static_cast<uint32_t>((pow(2, 31) - 1));

-            [[maybe_unused]] auto result = dns_resolver_instance.reverseResolve(Poco::Net::IPAddress{ addresses[addr_index] });
+            uint32_t ipv4_buffer[1] = {
+                static_cast<uint32_t>(val1)
+            };

-            // will not assert either because some of the IP addresses might change in the future and
-            // this test will become flaky
-            // ASSERT_TRUE(!result.empty());
+            uint32_t ipv6_buffer[4] = {
+                static_cast<uint32_t>(val1),
+                static_cast<uint32_t>(val2),
+                static_cast<uint32_t>(val3),
+                static_cast<uint32_t>(val4)
+            };
+
+            dns_resolver_instance.reverseResolve(Poco::Net::IPAddress{ ipv4_buffer, sizeof(ipv4_buffer)});
+            dns_resolver_instance.reverseResolve(Poco::Net::IPAddress{ ipv6_buffer, sizeof(ipv6_buffer)});
         }
     };

     auto number_of_threads = 200u;
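The rewritten test feeds random words into `Poco::Net::IPAddress`'s raw-buffer constructor instead of resolving real hosts (which risked flakiness and hammering public resolvers). A standalone illustration of that constructor; the literal and the printed result assume a little-endian machine:

```cpp
#include <Poco/Net/IPAddress.h>
#include <cstdint>
#include <iostream>

int main()
{
    // The 4 raw bytes are interpreted as an IPv4 address in memory order:
    // on a little-endian host, 0x0100007F is laid out as 7F 00 00 01 -> "127.0.0.1".
    uint32_t ipv4_buffer[1] = {0x0100007F};
    Poco::Net::IPAddress addr{ipv4_buffer, sizeof(ipv4_buffer)};
    std::cout << addr.toString() << '\n';
}
```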
@@ -31,14 +31,14 @@ namespace ErrorCodes
     extern const int BAD_ARGUMENTS;
 }

-EncryptionMethod getEncryptionMethod(const std::string & name)
+EncryptionMethod toEncryptionMethod(const std::string & name)
 {
     if (name == "AES_128_GCM_SIV")
         return AES_128_GCM_SIV;
     else if (name == "AES_256_GCM_SIV")
         return AES_256_GCM_SIV;
     else
-        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Wrong encryption method. Got {}", name);
+        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown encryption method. Got {}", name);
 }

 namespace

@@ -48,34 +48,22 @@ namespace
 String getMethodName(EncryptionMethod Method)
 {
     if (Method == AES_128_GCM_SIV)
-    {
         return "AES_128_GCM_SIV";
-    }
     else if (Method == AES_256_GCM_SIV)
-    {
         return "AES_256_GCM_SIV";
-    }
     else
-    {
         return "";
-    }
 }

 /// Get method code (used for codec, to understand which one we are using)
 uint8_t getMethodCode(EncryptionMethod Method)
 {
     if (Method == AES_128_GCM_SIV)
-    {
         return static_cast<uint8_t>(CompressionMethodByte::AES_128_GCM_SIV);
-    }
     else if (Method == AES_256_GCM_SIV)
-    {
         return static_cast<uint8_t>(CompressionMethodByte::AES_256_GCM_SIV);
-    }
     else
-    {
-        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Wrong encryption method. Got {}", getMethodName(Method));
-    }
+        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown encryption method. Got {}", getMethodName(Method));
 }

 } // end of namespace

@@ -105,17 +93,11 @@ const String empty_nonce = {"\0\0\0\0\0\0\0\0\0\0\0\0", actual_nonce_size};
 UInt64 methodKeySize(EncryptionMethod Method)
 {
     if (Method == AES_128_GCM_SIV)
-    {
         return 16;
-    }
     else if (Method == AES_256_GCM_SIV)
-    {
         return 32;
-    }
     else
-    {
-        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Wrong encryption method. Got {}", getMethodName(Method));
-    }
+        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown encryption method. Got {}", getMethodName(Method));
 }

 std::string lastErrorString()

@@ -130,17 +112,11 @@ std::string lastErrorString()
 auto getMethod(EncryptionMethod Method)
 {
     if (Method == AES_128_GCM_SIV)
-    {
         return EVP_aead_aes_128_gcm_siv;
-    }
     else if (Method == AES_256_GCM_SIV)
-    {
         return EVP_aead_aes_256_gcm_siv;
-    }
     else
-    {
-        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Wrong encryption method. Got {}", getMethodName(Method));
-    }
+        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown encryption method. Got {}", getMethodName(Method));
 }

 /// Encrypt plaintext with particular algorithm and put result into ciphertext_and_tag.

@@ -206,17 +182,11 @@ size_t decrypt(std::string_view ciphertext, char * plaintext, EncryptionMethod m
 auto getMethod(EncryptionMethod Method)
 {
     if (Method == AES_128_GCM_SIV)
-    {
         return EVP_aes_128_gcm;
-    }
     else if (Method == AES_256_GCM_SIV)
-    {
         return EVP_aes_256_gcm;
-    }
     else
-    {
-        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Wrong encryption method. Got {}", getMethodName(Method));
-    }
+        throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unknown encryption method. Got {}", getMethodName(Method));
 }

 /// Encrypt plaintext with particular algorithm and put result into ciphertext_and_tag.
@@ -18,8 +18,8 @@ enum EncryptionMethod
     MAX_ENCRYPTION_METHOD
 };

-/// Get method for string name. Throw exception for wrong name.
-EncryptionMethod getEncryptionMethod(const std::string & name);
+/// Get encryption method for string name. Throw exception for wrong name.
+EncryptionMethod toEncryptionMethod(const std::string & name);

 /** This codec encrypts and decrypts blocks with AES-128 in
   * GCM-SIV mode (RFC-8452), which is the only cipher currently
@@ -805,20 +805,9 @@ protected:
         const String & user_name,
         const String & password,
         Session & session,
-        Messaging::MessageTransport & mt,
         const Poco::Net::SocketAddress & address)
     {
-        try
-        {
-            session.authenticate(user_name, password, address);
-        }
-        catch (const Exception &)
-        {
-            mt.send(
-                Messaging::ErrorOrNoticeResponse(Messaging::ErrorOrNoticeResponse::ERROR, "28P01", "Invalid user or password"),
-                true);
-            throw;
-        }
+        session.authenticate(user_name, password, address);
     }

 public:

@@ -839,10 +828,10 @@ public:
     void authenticate(
         const String & user_name,
         Session & session,
-        Messaging::MessageTransport & mt,
+        [[maybe_unused]] Messaging::MessageTransport & mt,
         const Poco::Net::SocketAddress & address) override
     {
-        return setPassword(user_name, "", session, mt, address);
+        return setPassword(user_name, "", session, address);
     }

     AuthenticationType getType() const override

@@ -866,7 +855,7 @@ public:
         if (type == Messaging::FrontMessageType::PASSWORD_MESSAGE)
         {
             std::unique_ptr<Messaging::PasswordMessage> password = mt.receive<Messaging::PasswordMessage>();
-            return setPassword(user_name, password->password, session, mt, address);
+            return setPassword(user_name, password->password, session, address);
         }
         else
             throw Exception(ErrorCodes::UNEXPECTED_PACKET_FROM_CLIENT,

@@ -901,20 +890,30 @@ public:
         Messaging::MessageTransport & mt,
         const Poco::Net::SocketAddress & address)
     {
-        const AuthenticationType user_auth_type = session.getAuthenticationTypeOrLogInFailure(user_name);
-        if (type_to_method.find(user_auth_type) != type_to_method.end())
+        AuthenticationType user_auth_type;
+        try
         {
-            type_to_method[user_auth_type]->authenticate(user_name, session, mt, address);
-            mt.send(Messaging::AuthenticationOk(), true);
-            LOG_DEBUG(log, "Authentication for user {} was successful.", user_name);
-            return;
+            user_auth_type = session.getAuthenticationTypeOrLogInFailure(user_name);
+            if (type_to_method.find(user_auth_type) != type_to_method.end())
+            {
+                type_to_method[user_auth_type]->authenticate(user_name, session, mt, address);
+                mt.send(Messaging::AuthenticationOk(), true);
+                LOG_DEBUG(log, "Authentication for user {} was successful.", user_name);
+                return;
+            }
+        }
+        catch (const Exception&)
+        {
+            mt.send(Messaging::ErrorOrNoticeResponse(Messaging::ErrorOrNoticeResponse::ERROR, "28P01", "Invalid user or password"),
+                true);
+
+            throw;
         }

-        mt.send(
-            Messaging::ErrorOrNoticeResponse(Messaging::ErrorOrNoticeResponse::ERROR, "0A000", "Authentication method is not supported"),
-            true);
+        mt.send(Messaging::ErrorOrNoticeResponse(Messaging::ErrorOrNoticeResponse::ERROR, "0A000", "Authentication method is not supported"),
+            true);

-        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Authentication type {} is not supported.", user_auth_type);
+        throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Authentication method is not supported: {}", user_auth_type);
     }
 };
 }
@@ -466,6 +466,10 @@ private:
         if (collectCrashLog)
             collectCrashLog(sig, thread_num, query_id, stack_trace);

+#ifndef CLICKHOUSE_PROGRAM_STANDALONE_BUILD
+        Context::getGlobalContextInstance()->handleCrash();
+#endif
+
         /// Send crash report to developers (if configured)
         if (sig != SanitizerTrap)
         {
@@ -147,7 +147,7 @@ void AsynchronousBoundedReadBuffer::appendToPrefetchLog(
     };

     if (prefetches_log)
-        prefetches_log->add(elem);
+        prefetches_log->add(std::move(elem));
 }


@@ -108,7 +108,7 @@ void CachedOnDiskReadBufferFromFile::appendFilesystemCacheLog(
             break;
     }

-    cache_log->add(elem);
+    cache_log->add(std::move(elem));
 }

 void CachedOnDiskReadBufferFromFile::initialize(size_t offset, size_t size)

@@ -171,7 +171,7 @@ void FileSegmentRangeWriter::appendFilesystemCacheLog(const FileSegment & file_s
         .profile_counters = nullptr,
     };

-    cache_log->add(elem);
+    cache_log->add(std::move(elem));
 }

 void FileSegmentRangeWriter::completeFileSegment()

@@ -112,7 +112,7 @@ void ReadBufferFromRemoteFSGather::appendUncachedReadInfo()
         .file_segment_size = current_object.bytes_size,
         .read_from_cache_attempted = false,
     };
-    cache_log->add(elem);
+    cache_log->add(std::move(elem));
 }

 IAsynchronousReader::Result ReadBufferFromRemoteFSGather::readInto(char * data, size_t size, size_t offset, size_t ignore)
@@ -22,7 +22,14 @@ namespace ErrorCodes
 }

 class IMetadataStorage;
-struct UnlinkMetadataFileOperationOutcome;
+
+/// Return the result of operation to the caller.
+/// It is used in `IDiskObjectStorageOperation::finalize` after metadata transaction executed to make decision on blob removal.
+struct UnlinkMetadataFileOperationOutcome
+{
+    UInt32 num_hardlinks = std::numeric_limits<UInt32>::max();
+};
+
 using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;

 /// Tries to provide some "transactions" interface, which allow

@@ -244,15 +244,6 @@ private:
     std::unique_ptr<WriteFileOperation> write_operation;
 };

-/// Return the result of operation to the caller.
-/// It is used in `IDiskObjectStorageOperation::finalize` after metadata transaction executed to make decision on blob removal.
-struct UnlinkMetadataFileOperationOutcome
-{
-    UInt32 num_hardlinks = std::numeric_limits<UInt32>::max();
-};
-
-using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;
-
 struct UnlinkMetadataFileOperation final : public IMetadataOperation
 {
     const UnlinkMetadataFileOperationOutcomePtr outcome = std::make_shared<UnlinkMetadataFileOperationOutcome>();
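These hunks replace a forward declaration with the full struct definition (and delete the duplicate definition further down). The rule this kind of shuffle has to respect, in a minimal generic example with placeholder names:

```cpp
#include <memory>

struct Outcome;                                   // forward declaration

std::shared_ptr<Outcome> declare_only();          // OK: shared_ptr to an incomplete type can be declared

struct Outcome { unsigned num_hardlinks = 0; };   // complete definition

std::shared_ptr<Outcome> create()
{
    return std::make_shared<Outcome>();           // requires the complete type at this point
}
```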
@@ -1919,25 +1919,6 @@ ColumnPtr executeStringInteger(const ColumnsWithTypeAndName & arguments, const A
             return executeAggregateAddition(arguments, result_type, input_rows_count);
         }

-        /// Special case - one or both arguments are IPv4
-        if (isIPv4(arguments[0].type) || isIPv4(arguments[1].type))
-        {
-            ColumnsWithTypeAndName new_arguments {
-                {
-                    isIPv4(arguments[0].type) ? castColumn(arguments[0], std::make_shared<DataTypeUInt32>()) : arguments[0].column,
-                    isIPv4(arguments[0].type) ? std::make_shared<DataTypeUInt32>() : arguments[0].type,
-                    arguments[0].name,
-                },
-                {
-                    isIPv4(arguments[1].type) ? castColumn(arguments[1], std::make_shared<DataTypeUInt32>()) : arguments[1].column,
-                    isIPv4(arguments[1].type) ? std::make_shared<DataTypeUInt32>() : arguments[1].type,
-                    arguments[1].name
-                }
-            };
-
-            return executeImpl(new_arguments, result_type, input_rows_count);
-        }
-
         /// Special case when the function is plus or minus, one of arguments is Date/DateTime and another is Interval.
         if (auto function_builder = getFunctionForIntervalArithmetic(arguments[0].type, arguments[1].type, context))
         {

@@ -1991,6 +1972,25 @@ ColumnPtr executeStringInteger(const ColumnsWithTypeAndName & arguments, const A
             return wrapInNullable(res, arguments, result_type, input_rows_count);
         }

+        /// Special case - one or both arguments are IPv4
+        if (isIPv4(arguments[0].type) || isIPv4(arguments[1].type))
+        {
+            ColumnsWithTypeAndName new_arguments {
+                {
+                    isIPv4(arguments[0].type) ? castColumn(arguments[0], std::make_shared<DataTypeUInt32>()) : arguments[0].column,
+                    isIPv4(arguments[0].type) ? std::make_shared<DataTypeUInt32>() : arguments[0].type,
+                    arguments[0].name,
+                },
+                {
+                    isIPv4(arguments[1].type) ? castColumn(arguments[1], std::make_shared<DataTypeUInt32>()) : arguments[1].column,
+                    isIPv4(arguments[1].type) ? std::make_shared<DataTypeUInt32>() : arguments[1].type,
+                    arguments[1].name
+                }
+            };
+
+            return executeImpl2(new_arguments, result_type, input_rows_count, right_nullmap);
+        }
+
         const auto * const left_generic = left_argument.type.get();
         const auto * const right_generic = right_argument.type.get();
         ColumnPtr res;
@@ -74,6 +74,7 @@ namespace ErrorCodes
     extern const int LOGICAL_ERROR;
     extern const int TOO_FEW_ARGUMENTS_FOR_FUNCTION;
    extern const int TOO_MANY_ARGUMENTS_FOR_FUNCTION;
+    extern const int FUNCTION_CANNOT_HAVE_PARAMETERS;
 }

 static NamesAndTypesList::iterator findColumn(const String & name, NamesAndTypesList & cols)

@@ -1107,6 +1108,10 @@ void ActionsMatcher::visit(const ASTFunction & node, const ASTPtr & ast, Data &
                 e.addMessage("Or unknown aggregate function " + node.name + ". Maybe you meant: " + toString(hints));
             throw;
         }

+        /// Normal functions are not parametric for now.
+        if (node.parameters)
+            throw Exception(ErrorCodes::FUNCTION_CANNOT_HAVE_PARAMETERS, "Function {} is not parametric", node.name);
     }

     Names argument_names;
@@ -438,7 +438,7 @@ try
         elem.flush_query_id = flush_query_id;
         elem.exception = flush_exception;
         elem.status = flush_exception.empty() ? Status::Ok : Status::FlushError;
-        log.add(elem);
+        log.add(std::move(elem));
     }
 }
 catch (...)

@@ -608,7 +608,7 @@ try
     if (!elem.exception.empty())
     {
         elem.status = AsynchronousInsertLogElement::ParsingError;
-        insert_log->add(elem);
+        insert_log->add(std::move(elem));
     }
     else
     {
@@ -496,6 +496,16 @@ void QueryCache::reset()
     cache_size_in_bytes = 0;
 }

+size_t QueryCache::weight() const
+{
+    return cache.weight();
+}
+
+size_t QueryCache::count() const
+{
+    return cache.count();
+}
+
 size_t QueryCache::recordQueryRun(const Key & key)
 {
     std::lock_guard lock(mutex);

@@ -186,6 +186,9 @@ public:

     void reset();

+    size_t weight() const;
+    size_t count() const;
+
     /// Record new execution of query represented by key. Returns number of executions so far.
     size_t recordQueryRun(const Key & key);

@@ -193,7 +196,7 @@ public:
     std::vector<QueryCache::Cache::KeyMapped> dump() const;

 private:
-    Cache cache;
+    Cache cache; /// has its own locking --> not protected by mutex

     mutable std::mutex mutex;
     TimesExecuted times_executed TSA_GUARDED_BY(mutex);
@@ -2903,16 +2903,6 @@ std::map<String, zkutil::ZooKeeperPtr> Context::getAuxiliaryZooKeepers() const
 }

 #if USE_ROCKSDB
-MergeTreeMetadataCachePtr Context::getMergeTreeMetadataCache() const
-{
-    auto cache = tryGetMergeTreeMetadataCache();
-    if (!cache)
-        throw Exception(
-            ErrorCodes::LOGICAL_ERROR,
-            "Merge tree metadata cache is not initialized, please add config merge_tree_metadata_cache in config.xml and restart");
-    return cache;
-}
-
 MergeTreeMetadataCachePtr Context::tryGetMergeTreeMetadataCache() const
 {
     return shared->merge_tree_metadata_cache;

@@ -3210,6 +3200,12 @@ void Context::initializeMergeTreeMetadataCache(const String & dir, size_t size)
 }
 #endif

+/// Call after unexpected crash happen.
+void Context::handleCrash() const
+{
+    shared->system_logs->handleCrash();
+}
+
 bool Context::hasTraceCollector() const
 {
     return shared->hasTraceCollector();

@@ -889,7 +889,6 @@ public:
     void setClientProtocolVersion(UInt64 version);

 #if USE_ROCKSDB
-    MergeTreeMetadataCachePtr getMergeTreeMetadataCache() const;
     MergeTreeMetadataCachePtr tryGetMergeTreeMetadataCache() const;
 #endif

@@ -998,6 +997,9 @@ public:
     void initializeMergeTreeMetadataCache(const String & dir, size_t size);
 #endif

+    /// Call after unexpected crash happen.
+    void handleCrash() const;
+
     bool hasTraceCollector() const;

     /// Nullptr if the query log is not ready for this moment.
@@ -83,9 +83,6 @@ void collectCrashLog(Int32 signal, UInt64 thread_id, const String & query_id, co
         stack_trace.toStringEveryLine([&trace_full](std::string_view line) { trace_full.push_back(line); });

         CrashLogElement element{static_cast<time_t>(time / 1000000000), time, signal, thread_id, query_id, trace, trace_full};
-        crash_log_owned->add(element);
-        /// Notify savingThreadFunction to start flushing crash log
-        /// Crash log is storing in parallel with the signal processing thread.
-        crash_log_owned->notifyFlush(true);
+        crash_log_owned->add(std::move(element));
     }
 }

@@ -45,6 +45,11 @@ public:
     {
         crash_log = crash_log_;
     }

+    static consteval size_t getDefaultMaxSize() { return 1024; }
+    static consteval size_t getDefaultReservedSize() { return 1024; }
+    static consteval size_t getDefaultFlushIntervalMilliseconds() { return 1000; }
+    static consteval size_t shouldNotifyFlushOnCrash() { return true; }
 };

 }
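Pieced together from the hunks above, the crash path no longer calls `notifyFlush` directly from `collectCrashLog`; instead the signal handler fans the notification out through the global context, and each queue consults its own settings. A compilable toy model of the new chain (all names here are simplified stand-ins, not the real classes):

```cpp
#include <iostream>

// Toy model of the crash-flush chain: handler -> context -> system logs -> queue.
struct Settings { bool notify_flush_on_crash = true; };

struct Queue
{
    Settings settings;
    void notifyFlush(bool force) { std::cout << "notifyFlush(force=" << force << ")\n"; }
    void handleCrash()
    {
        if (settings.notify_flush_on_crash)
            notifyFlush(/* force */ true);  // notify only; never block in a signal handler
    }
};

struct SystemLogs { Queue queue; void handleCrash() { queue.handleCrash(); } };
struct Context   { SystemLogs logs; void handleCrash() { logs.handleCrash(); } };

int main()
{
    Context global_context;
    // ...a fatal signal arrives and the crash record is enqueued...
    global_context.handleCrash();  // mirrors Context::getGlobalContextInstance()->handleCrash()
}
```

The crash log opts in by overriding `shouldNotifyFlushOnCrash()` to return `true`, while other logs keep the non-flushing default.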
@@ -113,7 +113,7 @@ void MetricLog::metricThreadFunction()
                 elem.current_metrics[i] = CurrentMetrics::values[i];
             }
 
-            this->add(elem);
+            this->add(std::move(elem));
 
             /// We will record current time into table but align it to regular time intervals to avoid time drift.
             /// We may drop some time points if the server is overloaded and recording took too much time.

@@ -242,7 +242,7 @@ bool PartLog::addNewParts(
 
             elem.profile_counters = part_log_entry.profile_counters;
 
-            part_log->add(elem);
+            part_log->add(std::move(elem));
         }
     }
     catch (...)

@@ -73,12 +73,5 @@ void ProcessorProfileLogElement::appendToBlock(MutableColumns & columns) const
     columns[i++]->insert(output_bytes);
 }
 
-ProcessorsProfileLog::ProcessorsProfileLog(ContextPtr context_, const String & database_name_,
-    const String & table_name_, const String & storage_def_,
-    size_t flush_interval_milliseconds_)
-    : SystemLog<ProcessorProfileLogElement>(context_, database_name_, table_name_,
-        storage_def_, flush_interval_milliseconds_)
-{
-}
 
 }

@@ -45,12 +45,7 @@ struct ProcessorProfileLogElement
 class ProcessorsProfileLog : public SystemLog<ProcessorProfileLogElement>
 {
 public:
-    ProcessorsProfileLog(
-        ContextPtr context_,
-        const String & database_name_,
-        const String & table_name_,
-        const String & storage_def_,
-        size_t flush_interval_milliseconds_);
+    using SystemLog<ProcessorProfileLogElement>::SystemLog;
 };
 
 }

@@ -92,6 +92,12 @@ void ServerAsynchronousMetrics::updateImpl(AsynchronousMetricValues & new_values
           " The files opened with `mmap` are kept in the cache to avoid costly TLB flushes."};
     }
 
+    if (auto query_cache = getContext()->getQueryCache())
+    {
+        new_values["QueryCacheBytes"] = { query_cache->weight(), "Total size of the query cache in bytes." };
+        new_values["QueryCacheEntries"] = { query_cache->count(), "Total number of entries in the query cache." };
+    }
+
     {
         auto caches = FileCacheFactory::instance().getAll();
         size_t total_bytes = 0;
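The two new asynchronous metrics report the weight and entry count of the server-wide query cache. As a hedged sketch only — the `query_cache` key names below are an assumption from the server config of this era, not part of this diff — the cache they observe would be enabled like:

```xml
<clickhouse>
    <!-- assumed server-level section; key names are not shown in this diff -->
    <query_cache>
        <max_size_in_bytes>1073741824</max_size_in_bytes>
        <max_entries>1024</max_entries>
    </query_cache>
</clickhouse>
```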
@@ -240,7 +240,7 @@ private:
 
         if (session != sessions.end() && session->second->close_cycle <= current_cycle)
         {
-            if (!session->second.unique())
+            if (session->second.use_count() != 1)
             {
                 LOG_TEST(log, "Delay closing session with session_id: {}, user_id: {}", key.second, key.first);
 

@@ -227,7 +227,7 @@ void SessionLog::addLoginSuccess(const UUID & auth_id, std::optional<String> ses
     for (const auto & s : settings.allChanged())
         log_entry.settings.emplace_back(s.getName(), s.getValueString());
 
-    add(log_entry);
+    add(std::move(log_entry));
 }
 
 void SessionLog::addLoginFailure(

@@ -243,7 +243,7 @@ void SessionLog::addLoginFailure(
     log_entry.client_info = info;
     log_entry.user_identified_with = AuthenticationType::NO_PASSWORD;
 
-    add(log_entry);
+    add(std::move(log_entry));
 }
 
 void SessionLog::addLogOut(const UUID & auth_id, const UserPtr & login_user, const ClientInfo & client_info)

@@ -257,7 +257,7 @@ void SessionLog::addLogOut(const UUID & auth_id, const UserPtr & login_user, con
     log_entry.external_auth_server = login_user ? login_user->auth_data.getLDAPServerName() : "";
     log_entry.client_info = client_info;
 
-    add(log_entry);
+    add(std::move(log_entry));
 }
 
 }

@@ -101,7 +101,6 @@ namespace
 namespace
 {
 
-constexpr size_t DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS = 7500;
 constexpr size_t DEFAULT_METRIC_LOG_COLLECT_INTERVAL_MILLISECONDS = 1000;
 
 /// Creates a system log with MergeTree engine using parameters from config

@@ -124,18 +123,23 @@ std::shared_ptr<TSystemLog> createSystemLog(
     LOG_DEBUG(&Poco::Logger::get("SystemLog"),
               "Creating {}.{} from {}", default_database_name, default_table_name, config_prefix);
 
-    String database = config.getString(config_prefix + ".database", default_database_name);
-    String table = config.getString(config_prefix + ".table", default_table_name);
+    SystemLogSettings log_settings;
+    log_settings.queue_settings.database = config.getString(config_prefix + ".database", default_database_name);
+    log_settings.queue_settings.table = config.getString(config_prefix + ".table", default_table_name);
 
-    if (database != default_database_name)
+    if (log_settings.queue_settings.database != default_database_name)
     {
         /// System tables must be loaded before other tables, but loading order is undefined for all databases except `system`
-        LOG_ERROR(&Poco::Logger::get("SystemLog"), "Custom database name for a system table specified in config."
-            " Table `{}` will be created in `system` database instead of `{}`", table, database);
-        database = default_database_name;
+        LOG_ERROR(
+            &Poco::Logger::get("SystemLog"),
+            "Custom database name for a system table specified in config."
+            " Table `{}` will be created in `system` database instead of `{}`",
+            log_settings.queue_settings.table,
+            log_settings.queue_settings.database);
+
+        log_settings.queue_settings.database = default_database_name;
     }
 
-    String engine;
     if (config.has(config_prefix + ".engine"))
     {
         if (config.has(config_prefix + ".partition_by"))

@@ -159,26 +163,26 @@ std::shared_ptr<TSystemLog> createSystemLog(
                             "If 'engine' is specified for system table, SETTINGS parameters should "
                             "be specified directly inside 'engine' and 'settings' setting doesn't make sense");
 
-        engine = config.getString(config_prefix + ".engine");
+        log_settings.engine = config.getString(config_prefix + ".engine");
     }
     else
     {
        /// ENGINE expr is necessary.
-        engine = "ENGINE = MergeTree";
+        log_settings.engine = "ENGINE = MergeTree";
 
        /// PARTITION expr is not necessary.
        String partition_by = config.getString(config_prefix + ".partition_by", "toYYYYMM(event_date)");
        if (!partition_by.empty())
-            engine += " PARTITION BY (" + partition_by + ")";
+            log_settings.engine += " PARTITION BY (" + partition_by + ")";
 
        /// TTL expr is not necessary.
        String ttl = config.getString(config_prefix + ".ttl", "");
        if (!ttl.empty())
-            engine += " TTL " + ttl;
+            log_settings.engine += " TTL " + ttl;
 
        /// ORDER BY expr is necessary.
        String order_by = config.getString(config_prefix + ".order_by", TSystemLog::getDefaultOrderBy());
-        engine += " ORDER BY (" + order_by + ")";
+        log_settings.engine += " ORDER BY (" + order_by + ")";
 
        /// SETTINGS expr is not necessary.
        /// https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#settings

@@ -188,24 +192,52 @@ std::shared_ptr<TSystemLog> createSystemLog(
        String settings = config.getString(config_prefix + ".settings", "");
        if (!storage_policy.empty() || !settings.empty())
        {
-            engine += " SETTINGS";
+            log_settings.engine += " SETTINGS";
            /// If 'storage_policy' is repeated, the 'settings' configuration is preferred.
            if (!storage_policy.empty())
-                engine += " storage_policy = " + quoteString(storage_policy);
+                log_settings.engine += " storage_policy = " + quoteString(storage_policy);
            if (!settings.empty())
-                engine += (storage_policy.empty() ? " " : ", ") + settings;
+                log_settings.engine += (storage_policy.empty() ? " " : ", ") + settings;
        }
     }
 
     /// Validate engine definition syntax to prevent some configuration errors.
     ParserStorageWithComment storage_parser;
-    parseQuery(storage_parser, engine.data(), engine.data() + engine.size(),
+    parseQuery(storage_parser, log_settings.engine.data(), log_settings.engine.data() + log_settings.engine.size(),
             "Storage to create table for " + config_prefix, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);
 
-    size_t flush_interval_milliseconds = config.getUInt64(config_prefix + ".flush_interval_milliseconds",
-                                                          DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS);
+    log_settings.queue_settings.flush_interval_milliseconds = config.getUInt64(config_prefix + ".flush_interval_milliseconds",
+                                                                               TSystemLog::getDefaultFlushIntervalMilliseconds());
 
-    return std::make_shared<TSystemLog>(context, database, table, engine, flush_interval_milliseconds);
+    log_settings.queue_settings.max_size_rows = config.getUInt64(config_prefix + ".max_size_rows",
+                                                                 TSystemLog::getDefaultMaxSize());
+
+    if (log_settings.queue_settings.max_size_rows < 1)
+        throw Exception(ErrorCodes::BAD_ARGUMENTS, "{0}.max_size_rows {1} should be 1 at least",
+                        config_prefix,
+                        log_settings.queue_settings.max_size_rows);
+
+    log_settings.queue_settings.reserved_size_rows = config.getUInt64(config_prefix + ".reserved_size_rows",
+                                                                      TSystemLog::getDefaultReservedSize());
+
+    if (log_settings.queue_settings.max_size_rows < log_settings.queue_settings.reserved_size_rows)
+    {
+        throw Exception(ErrorCodes::BAD_ARGUMENTS,
+                        "{0}.max_size_rows {1} should be greater or equal to {0}.reserved_size_rows {2}",
+                        config_prefix,
+                        log_settings.queue_settings.max_size_rows,
+                        log_settings.queue_settings.reserved_size_rows);
+    }
+
+    log_settings.queue_settings.buffer_size_rows_flush_threshold = config.getUInt64(config_prefix + ".buffer_size_rows_flush_threshold",
+                                                                                    log_settings.queue_settings.max_size_rows / 2);
+
+    log_settings.queue_settings.notify_flush_on_crash = config.getBool(config_prefix + ".flush_on_crash",
+                                                                       TSystemLog::shouldNotifyFlushOnCrash());
+
+    log_settings.queue_settings.turn_off_logger = TSystemLog::shouldTurnOffLogger();
+
+    return std::make_shared<TSystemLog>(context, log_settings);
 }
 
 
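To summarize the parsing above: every system log section now accepts queue settings alongside the pre-existing engine keys, with two validity checks (`max_size_rows >= 1` and `max_size_rows >= reserved_size_rows`) and a flush threshold defaulting to `max_size_rows / 2`. A sketch with illustrative values — the key names are exactly the ones read by the `config.getUInt64`/`getBool` calls above; the numbers are arbitrary but satisfy both checks:

```xml
<clickhouse>
    <query_log>
        <database>system</database>
        <table>query_log</table>
        <engine>ENGINE = MergeTree PARTITION BY (event_date) ORDER BY (event_time)</engine>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <!-- new queue settings -->
        <max_size_rows>1048576</max_size_rows>
        <reserved_size_rows>8192</reserved_size_rows>
        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
        <flush_on_crash>false</flush_on_crash>
    </query_log>
</clickhouse>
```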
@@ -325,23 +357,25 @@ void SystemLogs::shutdown()
         log->shutdown();
 }
 
+void SystemLogs::handleCrash()
+{
+    for (auto & log : logs)
+        log->handleCrash();
+}
 
 template <typename LogElement>
 SystemLog<LogElement>::SystemLog(
     ContextPtr context_,
-    const String & database_name_,
-    const String & table_name_,
-    const String & storage_def_,
-    size_t flush_interval_milliseconds_,
+    const SystemLogSettings & settings_,
     std::shared_ptr<SystemLogQueue<LogElement>> queue_)
-    : Base(database_name_ + "." + table_name_, flush_interval_milliseconds_, queue_)
+    : Base(settings_.queue_settings, queue_)
     , WithContext(context_)
-    , log(&Poco::Logger::get("SystemLog (" + database_name_ + "." + table_name_ + ")"))
-    , table_id(database_name_, table_name_)
-    , storage_def(storage_def_)
+    , log(&Poco::Logger::get("SystemLog (" + settings_.queue_settings.database + "." + settings_.queue_settings.table + ")"))
+    , table_id(settings_.queue_settings.database, settings_.queue_settings.table)
+    , storage_def(settings_.engine)
     , create_query(serializeAST(*getCreateTableQuery()))
 {
-    assert(database_name_ == DatabaseCatalog::SYSTEM_DATABASE);
+    assert(settings_.queue_settings.database == DatabaseCatalog::SYSTEM_DATABASE);
 }
 
 template <typename LogElement>

@@ -58,6 +58,7 @@ struct SystemLogs
     ~SystemLogs();
 
     void shutdown();
+    void handleCrash();
 
     std::shared_ptr<QueryLog> query_log; /// Used to log queries.
     std::shared_ptr<QueryThreadLog> query_thread_log; /// Used to log query threads.

@@ -87,6 +88,12 @@ struct SystemLogs
     std::vector<ISystemLog *> logs;
 };
 
+struct SystemLogSettings
+{
+    SystemLogQueueSettings queue_settings;
+
+    String engine;
+};
 
 template <typename LogElement>
 class SystemLog : public SystemLogBase<LogElement>, private boost::noncopyable, WithContext

@@ -103,13 +110,9 @@ public:
       * where N - is a minimal number from 1, for that table with corresponding name doesn't exist yet;
       * and new table get created - as if previous table was not exist.
       */
-    SystemLog(
-        ContextPtr context_,
-        const String & database_name_,
-        const String & table_name_,
-        const String & storage_def_,
-        size_t flush_interval_milliseconds_,
-        std::shared_ptr<SystemLogQueue<LogElement>> queue_ = nullptr);
+    SystemLog(ContextPtr context_,
+              const SystemLogSettings& settings_,
+              std::shared_ptr<SystemLogQueue<LogElement>> queue_ = nullptr);
 
     /** Append a record into log.
       * Writing to table will be done asynchronously and in case of failure, record could be lost.

@@ -80,15 +80,10 @@ void TextLogElement::appendToBlock(MutableColumns & columns) const
     columns[i++]->insert(message_format_string);
 }
 
-TextLog::TextLog(ContextPtr context_, const String & database_name_,
-    const String & table_name_, const String & storage_def_,
-    size_t flush_interval_milliseconds_)
-    : SystemLog<TextLogElement>(context_, database_name_, table_name_,
-        storage_def_, flush_interval_milliseconds_, getLogQueue(flush_interval_milliseconds_))
+TextLog::TextLog(ContextPtr context_,
+                 const SystemLogSettings & settings)
+    : SystemLog<TextLogElement>(context_, settings, getLogQueue(settings.queue_settings))
 {
-    // SystemLog methods may write text logs, so we disable logging for the text
-    // log table to avoid recursion.
-    log->setLevel(0);
 }
 
 }

@@ -42,18 +42,15 @@ class TextLog : public SystemLog<TextLogElement>
 public:
     using Queue = SystemLogQueue<TextLogElement>;
 
-    TextLog(
-        ContextPtr context_,
-        const String & database_name_,
-        const String & table_name_,
-        const String & storage_def_,
-        size_t flush_interval_milliseconds_);
+    explicit TextLog(ContextPtr context_, const SystemLogSettings & settings);
 
-    static std::shared_ptr<Queue> getLogQueue(size_t flush_interval_milliseconds)
+    static std::shared_ptr<Queue> getLogQueue(const SystemLogQueueSettings & settings)
     {
-        static std::shared_ptr<Queue> queue = std::make_shared<Queue>("text_log", flush_interval_milliseconds, true);
+        static std::shared_ptr<Queue> queue = std::make_shared<Queue>(settings);
         return queue;
     }
+
+    static consteval bool shouldTurnOffLogger() { return true; }
 };
 
 }

@@ -513,7 +513,7 @@ void ThreadStatus::logToQueryThreadLog(QueryThreadLog & thread_log, const String
         }
     }
 
-    thread_log.add(elem);
+    thread_log.add(std::move(elem));
 }
 
 static String getCleanQueryAst(const ASTPtr q, ContextPtr context)

@@ -573,7 +573,7 @@ void ThreadStatus::logToQueryViewsLog(const ViewRuntimeData & vinfo)
         element.stack_trace = getExceptionStackTraceString(vinfo.exception);
     }
 
-    views_log->add(element);
+    views_log->add(std::move(element));
 }
 
 void CurrentThread::attachToGroup(const ThreadGroupPtr & thread_group)

@@ -128,7 +128,7 @@ void TraceCollector::run()
                 UInt64 time = static_cast<UInt64>(ts.tv_sec * 1000000000LL + ts.tv_nsec);
                 UInt64 time_in_microseconds = static_cast<UInt64>((ts.tv_sec * 1000000LL) + (ts.tv_nsec / 1000));
                 TraceLogElement element{time_t(time / 1000000000), time_in_microseconds, time, trace_type, thread_id, query_id, trace, size, event, increment};
-                trace_log->add(element);
+                trace_log->add(std::move(element));
             }
         }
     }

@@ -34,7 +34,7 @@ try
         elem.tid = tid;
         elem.csn = csn;
         elem.fillCommonFields(nullptr);
-        system_log->add(elem);
+        system_log->add(std::move(elem));
     }
 catch (...)
 {

@@ -101,7 +101,7 @@ try
     elem.type = type;
     elem.tid = tid;
     elem.fillCommonFields(&context);
-    system_log->add(elem);
+    system_log->add(std::move(elem));
 }
 catch (...)
 {

@@ -21,6 +21,12 @@ namespace fs = std::filesystem;
 namespace DB
 {
     class SensitiveDataMasker;
+
+    namespace ErrorCodes
+    {
+        extern const int BAD_ARGUMENTS;
+    }
+
 }
 
 

@@ -44,10 +50,6 @@ static std::string renderFileNameTemplate(time_t now, const std::string & file_p
     return path.replace_filename(ss.str());
 }
 
-#ifndef WITHOUT_TEXT_LOG
-constexpr size_t DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS = 7500;
-#endif
-
 void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Logger & logger /*_root*/, const std::string & cmd_name)
 {
     auto current_logger = config.getString("logger", "");

@@ -271,9 +273,37 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
     {
         String text_log_level_str = config.getString("text_log.level", "trace");
         int text_log_level = Poco::Logger::parseLevel(text_log_level_str);
-        size_t flush_interval_milliseconds = config.getUInt64("text_log.flush_interval_milliseconds",
-                                                              DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS);
-        split->addTextLog(DB::TextLog::getLogQueue(flush_interval_milliseconds), text_log_level);
+
+        DB::SystemLogQueueSettings log_settings;
+        log_settings.flush_interval_milliseconds = config.getUInt64("text_log.flush_interval_milliseconds",
+                                                                    DB::TextLog::getDefaultFlushIntervalMilliseconds());
+
+        log_settings.max_size_rows = config.getUInt64("text_log.max_size_rows",
+                                                      DB::TextLog::getDefaultMaxSize());
+
+        if (log_settings.max_size_rows < 1)
+            throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "text_log.max_size_rows {} should be 1 at least",
+                                log_settings.max_size_rows);
+
+        log_settings.reserved_size_rows = config.getUInt64("text_log.reserved_size_rows", DB::TextLog::getDefaultReservedSize());
+
+        if (log_settings.max_size_rows < log_settings.reserved_size_rows)
+        {
+            throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS,
+                                "text_log.max_size {0} should be greater or equal to text_log.reserved_size_rows {1}",
+                                log_settings.max_size_rows,
+                                log_settings.reserved_size_rows);
+        }
+
+        log_settings.buffer_size_rows_flush_threshold = config.getUInt64("text_log.buffer_size_rows_flush_threshold",
+                                                                         log_settings.max_size_rows / 2);
+
+        log_settings.notify_flush_on_crash = config.getBool("text_log.flush_on_crash",
+                                                            DB::TextLog::shouldNotifyFlushOnCrash());
+
+        log_settings.turn_off_logger = DB::TextLog::shouldTurnOffLogger();
+
+        split->addTextLog(DB::TextLog::getLogQueue(log_settings), text_log_level);
     }
 #endif
 }
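`buildLoggers` now applies the same validation to the early-created `text_log` queue, before the rest of the `SystemLog` infrastructure is up. A hedged sketch of the corresponding config section — same key names as the `config.getUInt64`/`getBool` calls above, under `text_log`; values are illustrative:

```xml
<clickhouse>
    <text_log>
        <level>trace</level>
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
        <max_size_rows>524288</max_size_rows>
        <reserved_size_rows>8192</reserved_size_rows>
        <buffer_size_rows_flush_threshold>262144</buffer_size_rows_flush_threshold>
        <flush_on_crash>false</flush_on_crash>
    </text_log>
</clickhouse>
```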
@@ -138,7 +138,7 @@ void OwnSplitChannel::logSplit(const Poco::Message & msg)
         std::shared_ptr<SystemLogQueue<TextLogElement>> text_log_locked{};
         text_log_locked = text_log.lock();
         if (text_log_locked)
-            text_log_locked->push(elem);
+            text_log_locked->push(std::move(elem));
     }
 #endif
 }

@@ -1674,8 +1674,8 @@ std::pair<bool, NameSet> IMergeTreeDataPart::canRemovePart() const
 void IMergeTreeDataPart::initializePartMetadataManager()
 {
 #if USE_ROCKSDB
-    if (use_metadata_cache)
-        metadata_manager = std::make_shared<PartMetadataManagerWithCache>(this, storage.getContext()->getMergeTreeMetadataCache());
+    if (auto metadata_cache = storage.getContext()->tryGetMergeTreeMetadataCache(); metadata_cache && use_metadata_cache)
+        metadata_manager = std::make_shared<PartMetadataManagerWithCache>(this, metadata_cache);
     else
         metadata_manager = std::make_shared<PartMetadataManagerOrdinary>(this);
 #else

@@ -7775,7 +7775,7 @@ try
         LOG_WARNING(log, "Profile counters are not set");
     }
 
-    part_log->add(part_log_elem);
+    part_log->add(std::move(part_log_elem));
 }
 catch (...)
 {

@@ -795,6 +795,10 @@ StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables(
     bool filter_by_database_virtual_column /* = false */,
     bool filter_by_table_virtual_column /* = false */) const
 {
+    /// FIXME: filtering does not work with allow_experimental_analyzer due to
+    /// different column names there (it has "table_name._table" not just
+    /// "_table")
+
     assert(!filter_by_database_virtual_column || !filter_by_table_virtual_column || query);
 
     const Settings & settings = query_context->getSettingsRef();

@@ -1,145 +0,0 @@
-#include <Storages/System/StorageSystemMergeTreeMetadataCache.h>
-
-#if USE_ROCKSDB
-#include <DataTypes/DataTypeDateTime.h>
-#include <DataTypes/DataTypeString.h>
-#include <DataTypes/DataTypesNumber.h>
-#include <Interpreters/Context.h>
-#include <Parsers/ASTExpressionList.h>
-#include <Parsers/ASTFunction.h>
-#include <Parsers/ASTIdentifier.h>
-#include <Parsers/ASTLiteral.h>
-#include <Parsers/ASTSelectQuery.h>
-#include <Storages/MergeTree/KeyCondition.h>
-#include <Storages/MergeTree/MergeTreeMetadataCache.h>
-#include <Common/typeid_cast.h>
-
-namespace DB
-{
-namespace ErrorCodes
-{
-    extern const int BAD_ARGUMENTS;
-}
-
-NamesAndTypesList StorageSystemMergeTreeMetadataCache::getNamesAndTypes()
-{
-    return {
-        {"key", std::make_shared<DataTypeString>()},
-        {"value", std::make_shared<DataTypeString>()},
-    };
-}
-
-static bool extractKeyImpl(const IAST & elem, String & res, bool & precise)
-{
-    const auto * function = elem.as<ASTFunction>();
-    if (!function)
-        return false;
-
-    if (function->name == "and")
-    {
-        for (const auto & child : function->arguments->children)
-        {
-            bool tmp_precise = false;
-            if (extractKeyImpl(*child, res, tmp_precise))
-            {
-                precise = tmp_precise;
-                return true;
-            }
-        }
-        return false;
-    }
-
-    if (function->name == "equals" || function->name == "like")
-    {
-        const auto & args = function->arguments->as<ASTExpressionList &>();
-        const IAST * value;
-
-        if (args.children.size() != 2)
-            return false;
-
-        const ASTIdentifier * ident;
-        if ((ident = args.children.at(0)->as<ASTIdentifier>()))
-            value = args.children.at(1).get();
-        else if ((ident = args.children.at(1)->as<ASTIdentifier>()))
-            value = args.children.at(0).get();
-        else
-            return false;
-
-        if (ident->name() != "key")
-            return false;
-
-        const auto * literal = value->as<ASTLiteral>();
-        if (!literal)
-            return false;
-
-        if (literal->value.getType() != Field::Types::String)
-            return false;
-
-        res = literal->value.safeGet<String>();
-        precise = function->name == "equals";
-        return true;
-    }
-    return false;
-}
-
-
-/// Retrieve from the query a condition of the form `key = 'key'`, from conjunctions in the WHERE clause.
-static String extractKey(const ASTPtr & query, bool & precise)
-{
-    const auto & select = query->as<ASTSelectQuery &>();
-    if (!select.where())
-        return "";
-
-    String res;
-    return extractKeyImpl(*select.where(), res, precise) ? res : "";
-}
-
-
-void StorageSystemMergeTreeMetadataCache::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const
-{
-    bool precise = false;
-    String key = extractKey(query_info.query, precise);
-    if (key.empty())
-        throw Exception(ErrorCodes::BAD_ARGUMENTS,
-                "SELECT from system.merge_tree_metadata_cache table must contain condition like key = 'key' "
-                "or key LIKE 'prefix%' in WHERE clause.");
-
-    auto cache = context->getMergeTreeMetadataCache();
-    if (precise)
-    {
-        String value;
-        if (cache->get(key, value) != MergeTreeMetadataCache::Status::OK())
-            return;
-
-        size_t col_num = 0;
-        res_columns[col_num++]->insert(key);
-        res_columns[col_num++]->insert(value);
-    }
-    else
-    {
-        String target = extractFixedPrefixFromLikePattern(key, /*requires_perfect_prefix*/ false);
-        if (target.empty())
-            throw Exception(ErrorCodes::BAD_ARGUMENTS,
-                    "SELECT from system.merge_tree_metadata_cache table must contain condition like key = 'key' "
-                    "or key LIKE 'prefix%' in WHERE clause.");
-
-        Strings keys;
-        Strings values;
-        keys.reserve(4096);
-        values.reserve(4096);
-        cache->getByPrefix(target, keys, values);
-        if (keys.empty())
-            return;
-
-        assert(keys.size() == values.size());
-        for (size_t i = 0; i < keys.size(); ++i)
-        {
-            size_t col_num = 0;
-            res_columns[col_num++]->insert(keys[i]);
-            res_columns[col_num++]->insert(values[i]);
-        }
-    }
-}
-
-}
-#endif

@@ -1,29 +0,0 @@
-#pragma once
-
-#include "config.h"
-
-#if USE_ROCKSDB
-#include <Storages/System/IStorageSystemOneBlock.h>
-
-
-namespace DB
-{
-class Context;
-
-
-/// Implements `merge_tree_metadata_cache` system table, which allows you to view the metadata cache data in rocksdb for testing purposes.
-class StorageSystemMergeTreeMetadataCache : public IStorageSystemOneBlock<StorageSystemMergeTreeMetadataCache>
-{
-public:
-    std::string getName() const override { return "SystemMergeTreeMetadataCache"; }
-
-    static NamesAndTypesList getNamesAndTypes();
-
-protected:
-    using IStorageSystemOneBlock::IStorageSystemOneBlock;
-
-    void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
-};
-
-}
-#endif

@@ -90,7 +90,6 @@
 
 #if USE_ROCKSDB
 #include <Storages/RocksDB/StorageSystemRocksDB.h>
-#include <Storages/System/StorageSystemMergeTreeMetadataCache.h>
 #endif
 
 

@@ -150,7 +149,6 @@ void attachSystemTablesLocal(ContextPtr context, IDatabase & system_database)
 #endif
 #if USE_ROCKSDB
     attach<StorageSystemRocksDB>(context, system_database, "rocksdb");
-    attach<StorageSystemMergeTreeMetadataCache>(context, system_database, "merge_tree_metadata_cache");
 #endif
 }
 

@@ -30,6 +30,7 @@
 #include <Storages/VirtualColumnUtils.h>
 #include <IO/WriteHelpers.h>
 #include <Common/typeid_cast.h>
+#include <Parsers/makeASTForLogicalFunction.h>
 #include <Columns/ColumnSet.h>
 #include <Functions/FunctionHelpers.h>
 #include <Interpreters/ActionsVisitor.h>

@@ -63,14 +64,31 @@ bool isValidFunction(const ASTPtr & expression, const std::function<bool(const A
 bool extractFunctions(const ASTPtr & expression, const std::function<bool(const ASTPtr &)> & is_constant, ASTs & result)
 {
     const auto * function = expression->as<ASTFunction>();
-    if (function && (function->name == "and" || function->name == "indexHint"))
+
+    if (function)
     {
-        bool ret = true;
-        for (const auto & child : function->arguments->children)
-            ret &= extractFunctions(child, is_constant, result);
-        return ret;
+        if (function->name == "and" || function->name == "indexHint")
+        {
+            bool ret = true;
+            for (const auto & child : function->arguments->children)
+                ret &= extractFunctions(child, is_constant, result);
+            return ret;
+        }
+        else if (function->name == "or")
+        {
+            bool ret = true;
+            ASTs or_args;
+            for (const auto & child : function->arguments->children)
+                ret &= extractFunctions(child, is_constant, or_args);
+            /// We can keep condition only if it still OR condition (i.e. we
+            /// have dependent conditions for columns at both sides)
+            if (or_args.size() == 2)
+                result.push_back(makeASTForLogicalOr(std::move(or_args)));
+            return ret;
+        }
     }
-    else if (isValidFunction(expression, is_constant))
+
+    if (isValidFunction(expression, is_constant))
     {
         result.push_back(expression->clone());
         return true;

@@ -80,13 +98,13 @@ bool extractFunctions(const ASTPtr & expression, const std::function<bool(const
 }
 
 /// Construct a conjunction from given functions
-ASTPtr buildWhereExpression(const ASTs & functions)
+ASTPtr buildWhereExpression(ASTs && functions)
 {
     if (functions.empty())
         return nullptr;
     if (functions.size() == 1)
         return functions[0];
-    return makeASTFunction("and", functions);
+    return makeASTForLogicalAnd(std::move(functions));
 }
 
 }

@@ -171,7 +189,7 @@ bool prepareFilterBlockWithQuery(const ASTPtr & query, ContextPtr context, Block
     if (select.prewhere())
         unmodified &= extractFunctions(select.prewhere(), is_constant, functions);
 
-    expression_ast = buildWhereExpression(functions);
+    expression_ast = buildWhereExpression(std::move(functions));
     return unmodified;
 }
 

@@ -130,7 +130,7 @@ public:
         return true;
     }
 
-    void visitImpl(QueryTreeNodePtr & node)
+    void enterImpl(QueryTreeNodePtr & node)
     {
         auto * function_node = node->as<FunctionNode>();
         auto * join_node = node->as<JoinNode>();

@@ -181,7 +181,7 @@ void TableFunctionS3::parseArgumentsImpl(ASTs & args, const ContextPtr & context
         configuration.keys = {configuration.url.key};
 
     if (configuration.format == "auto")
-        configuration.format = FormatFactory::instance().getFormatFromFileName(configuration.url.uri.getPath(), true);
+        configuration.format = FormatFactory::instance().getFormatFromFileName(Poco::URI(configuration.url.uri.getPath()).getPath(), true);
 }
 
 void TableFunctionS3::parseArguments(const ASTPtr & ast_function, ContextPtr context)

@@ -17,7 +17,7 @@
 
     <users>
         <session_log_test_xml_user>
-            <password></password>
+            <no_password></no_password>
            <networks incl="networks" replace="replace">
                <ip>::1</ip>
                <ip>127.0.0.1</ip>

@@ -74,5 +74,12 @@
 
     "test_http_failover/test.py::test_url_destination_host_with_multiple_addrs",
     "test_http_failover/test.py::test_url_invalid_hostname",
-    "test_http_failover/test.py::test_url_ip_change"
+    "test_http_failover/test.py::test_url_ip_change",
+
+    "test_system_logs/test_system_logs.py::test_max_size_0",
+    "test_system_logs/test_system_logs.py::test_reserved_size_greater_max_size",
+    "test_system_flush_logs/test.py::test_log_buffer_size_rows_flush_threshold",
+    "test_system_flush_logs/test.py::test_log_max_size",
+    "test_crash_log/test.py::test_pkill_query_log",
+    "test_crash_log/test.py::test_pkill"
 ]

@@ -133,21 +133,33 @@ def test_concurrent_backups_on_same_node():
     )
     assert status in ["CREATING_BACKUP", "BACKUP_CREATED"]
 
-    try:
-        error = nodes[0].query_and_get_error(
-            f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
-        )
-    except Exception as e:
+    result, error = nodes[0].query_and_get_answer_with_error(
+        f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
+    )
+
+    if not error:
         status = (
             nodes[0]
             .query(f"SELECT status FROM system.backups WHERE id == '{id}'")
            .rstrip("\n")
         )
         # It is possible that the second backup was picked up first, and then the async backup
-        if status == "CREATING_BACKUP" or status == "BACKUP_FAILED":
+        if status == "BACKUP_FAILED":
+            return
+        elif status == "CREATING_BACKUP":
+            assert_eq_with_retry(
+                nodes[0],
+                f"SELECT status FROM system.backups WHERE id = '{id}'",
+                "BACKUP_FAILED",
+                sleep_time=2,
+                retry_count=50,
+            )
             return
         else:
-            raise e
+            raise Exception(
+                "Concurrent backups both passed, when one is expected to fail"
+            )
 
     expected_errors = [
         "Concurrent backups not supported",
         f"Backup {backup_name} already exists",

@@ -191,20 +203,33 @@ def test_concurrent_backups_on_different_nodes():
     )
     assert status in ["CREATING_BACKUP", "BACKUP_CREATED"]
 
-    try:
-        error = nodes[0].query_and_get_error(
-            f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
-        )
-    except Exception as e:
+    result, error = nodes[0].query_and_get_answer_with_error(
+        f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
+    )
+
+    if not error:
         status = (
             nodes[1]
             .query(f"SELECT status FROM system.backups WHERE id == '{id}'")
            .rstrip("\n")
         )
-        if status == "CREATING_BACKUP" or status == "BACKUP_FAILED":
+        # It is possible that the second backup was picked up first, and then the async backup
+        if status == "BACKUP_FAILED":
+            return
+        elif status == "CREATING_BACKUP":
+            assert_eq_with_retry(
+                nodes[1],
+                f"SELECT status FROM system.backups WHERE id = '{id}'",
+                "BACKUP_FAILED",
+                sleep_time=2,
+                retry_count=50,
+            )
             return
         else:
-            raise e
+            raise Exception(
+                "Concurrent backups both passed, when one is expected to fail"
+            )
 
     expected_errors = [
         "Concurrent backups not supported",
         f"Backup {backup_name} already exists",

@@ -247,20 +272,33 @@ def test_concurrent_restores_on_same_node():
     )
     assert status in ["RESTORING", "RESTORED"]
 
-    try:
-        error = nodes[0].query_and_get_error(
-            f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
-        )
-    except Exception as e:
+    result, error = nodes[0].query_and_get_answer_with_error(
+        f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
+    )
+
+    if not error:
         status = (
             nodes[0]
-            .query(f"SELECT status FROM system.backups WHERE id == '{id}'")
+            .query(f"SELECT status FROM system.backups WHERE id == '{restore_id}'")
            .rstrip("\n")
         )
-        if status == "RESTORING" or status == "RESTORE_FAILED":
+        # It is possible that the second backup was picked up first, and then the async backup
+        if status == "RESTORE_FAILED":
+            return
+        elif status == "RESTORING":
+            assert_eq_with_retry(
+                nodes[0],
+                f"SELECT status FROM system.backups WHERE id == '{restore_id}'",
+                "RESTORE_FAILED",
+                sleep_time=2,
+                retry_count=50,
+            )
             return
         else:
-            raise e
+            raise Exception(
+                "Concurrent restores both passed, when one is expected to fail"
+            )
 
     expected_errors = [
         "Concurrent restores not supported",
         "Cannot restore the table default.tbl because it already contains some data",

@@ -303,20 +341,33 @@ def test_concurrent_restores_on_different_node():
     )
     assert status in ["RESTORING", "RESTORED"]
 
-    try:
-        error = nodes[1].query_and_get_error(
-            f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
-        )
-    except Exception as e:
+    result, error = nodes[1].query_and_get_answer_with_error(
+        f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
+    )
+
+    if not error:
         status = (
             nodes[0]
-            .query(f"SELECT status FROM system.backups WHERE id == '{id}'")
+            .query(f"SELECT status FROM system.backups WHERE id == '{restore_id}'")
            .rstrip("\n")
         )
-        if status == "RESTORING" or status == "RESTORE_FAILED":
+        # It is possible that the second backup was picked up first, and then the async backup
+        if status == "RESTORE_FAILED":
+            return
+        elif status == "RESTORING":
+            assert_eq_with_retry(
+                nodes[0],
+                f"SELECT status FROM system.backups WHERE id == '{restore_id}'",
+                "RESTORE_FAILED",
+                sleep_time=2,
+                retry_count=50,
+            )
             return
        else:
-            raise e
+            raise Exception(
+                "Concurrent restores both passed, when one is expected to fail"
+            )
 
     expected_errors = [
         "Concurrent restores not supported",
         "Cannot restore the table default.tbl because it already contains some data",

@@ -1,4 +1,5 @@
 <clickhouse>
+
     <encryption_codecs>
         <aes_128_gcm_siv>
             <key_hex>00112233445566778899aabbccddeeff</key_hex>

@@ -7,6 +8,8 @@
             <key_hex>00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff</key_hex>
         </aes_256_gcm_siv>
     </encryption_codecs>
-    <max_table_size_to_drop encryption_codec="AES_128_GCM_SIV">96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C</max_table_size_to_drop>
-    <max_partition_size_to_drop encryption_codec="AES_256_GCM_SIV">97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14</max_partition_size_to_drop>
+    <max_table_size_to_drop encrypted_by="AES_128_GCM_SIV">96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C</max_table_size_to_drop>
+    <max_partition_size_to_drop encrypted_by="AES_256_GCM_SIV">97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14</max_partition_size_to_drop>
+
 </clickhouse>

@@ -3,9 +3,11 @@ encryption_codecs:
         key_hex: 00112233445566778899aabbccddeeff
     aes_256_gcm_siv:
         key_hex: 00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff
+
 max_table_size_to_drop:
     '#text': 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
-    '@encryption_codec': AES_128_GCM_SIV
+    '@encrypted_by': AES_128_GCM_SIV
+
 max_partition_size_to_drop:
-    '@encryption_codec': AES_256_GCM_SIV
+    '@encrypted_by': AES_256_GCM_SIV
     '#text': 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14

@@ -1,4 +1,5 @@
 <clickhouse>
+
     <encryption_codecs>
         <aes_128_gcm_siv>
             <key_hex>00112233445566778899aabbccddeeff</key_hex>

@@ -7,6 +8,9 @@
             <key_hex>00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff</key_hex>
         </aes_256_gcm_siv>
     </encryption_codecs>
-    <max_table_size_to_drop encryption_codec="AES_128_GCM_SIV">--96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C</max_table_size_to_drop>
-    <max_partition_size_to_drop encryption_codec="AES_256_GCM_SIV">97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14</max_partition_size_to_drop>
+    <!-- Dash prefix leads to invalid hex-encoding -->
+    <max_table_size_to_drop encrypted_by="AES_128_GCM_SIV">--96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C</max_table_size_to_drop>
+    <max_partition_size_to_drop encrypted_by="AES_256_GCM_SIV">97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14</max_partition_size_to_drop>
+
 </clickhouse>

@@ -1,3 +1,7 @@
 <clickhouse>
-    <max_table_size_to_drop encryption_codec="AES_128_GCM_SIV">96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C</max_table_size_to_drop>
+    <!-- section "encryption_codec" is not specified -->
+
+    <max_table_size_to_drop encrypted_by="AES_128_GCM_SIV">96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C</max_table_size_to_drop>
+
 </clickhouse>
Some files were not shown because too many files have changed in this diff.