diff --git a/docs/en/operations/configuration-files.md b/docs/en/operations/configuration-files.md
index d1d9fa542ab..a19c55673ed 100644
--- a/docs/en/operations/configuration-files.md
+++ b/docs/en/operations/configuration-files.md
@@ -67,7 +67,7 @@ Substitutions can also be performed from ZooKeeper. To do this, specify the attr
## Encrypting Configuration {#encryption}
-You can use symmetric encryption to encrypt a configuration element, for example, a password field. To do so, first configure the [encryption codec](../sql-reference/statements/create/table.md#encryption-codecs), then add attribute `encryption_codec` with the name of the encryption codec as value to the element to encrypt.
+You can use symmetric encryption to encrypt a configuration element, for example, a password field. To do so, first configure the [encryption codec](../sql-reference/statements/create/table.md#encryption-codecs), then add attribute `encrypted_by` with the name of the encryption codec as value to the element to encrypt.
Unlike attributes `from_zk`, `from_env` and `incl` (or element `include`), no substitution, i.e. decryption of the encrypted value, is performed in the preprocessed file. Decryption happens only at runtime in the server process.
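+
+For illustration, a minimal sketch of how the attribute is attached to an element (the `password` element name and the `AES_128_GCM_SIV` codec name are assumptions made for this sketch; reference whichever codec you configured). The encrypted value is the same one used in the example below:
+
+```xml
+<password encrypted_by="AES_128_GCM_SIV">961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85</password>
+```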
@@ -75,19 +75,22 @@ Example:
```xml
+
00112233445566778899aabbccddeeff
+
admin
- 961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85
+ 961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85
+
```
-To get the encrypted value `encrypt_decrypt` example application may be used.
+To encrypt a value, you can use the (example) program `encrypt_decrypt`:
Example:
@@ -138,12 +141,17 @@ Here you can see default config written in YAML: [config.yaml.example](https://g
There are some differences between YAML and XML formats in terms of ClickHouse configurations. Here are some tips for writing a configuration in YAML format.
-You should use a Scalar node to write a key-value pair:
+An XML tag with a text value is represented by a YAML key-value pair:
``` yaml
key: value
```
-To create a node, containing other nodes you should use a Map:
+Corresponding XML:
+``` xml
+<key>value</key>
+```
+
+A nested XML node is represented by a YAML map:
``` yaml
map_key:
key1: val1
@@ -151,7 +159,16 @@ map_key:
key3: val3
```
-To create a list of values or nodes assigned to one tag you should use a Sequence:
+Corresponding XML:
+``` xml
+<map_key>
+    <key1>val1</key1>
+    <key2>val2</key2>
+    <key3>val3</key3>
+</map_key>
+```
+
+To create the same XML tag multiple times, use a YAML sequence:
``` yaml
seq_key:
- val1
@@ -162,8 +179,22 @@ seq_key:
key3: val5
```
-If you want to write an attribute for a Sequence or Map node, you should use a @ prefix before the attribute key. Note, that @ is reserved by YAML standard, so you should also to wrap it into double quotes:
+Corresponding XML:
+```xml
+val1
+val2
+
+ val3
+
+
+
+
+```
+To provide an XML attribute, you can use an attribute key with a `@` prefix. Note that `@` is reserved by the YAML standard, so it must be wrapped in double quotes:
``` yaml
map:
"@attr1": value1
@@ -171,16 +202,14 @@ map:
key: 123
```
-From that Map we will get these XML nodes:
-
+Corresponding XML:
``` xml
```
-You can also set attributes for Sequence:
-
+It is also possible to use attributes in a YAML sequence:
``` yaml
seq:
- "@attr1": value1
@@ -189,13 +218,25 @@ seq:
- abc
```
-So, we can get YAML config equal to this XML one:
-
+Corresponding XML:
``` xml
123
abc
```
+The aforementioned syntax does not allow expressing XML text nodes with XML attributes in YAML. This special case can be handled using a
+`#text` attribute key:
+```yaml
+map_key:
+ "@attr1": value1
+ "#text": value2
+```
+
+Corresponding XML:
+```xml
+<map_key attr1="value1">value2</map_key>
+```
+
## Implementation Details {#implementation-details}
For each config file, the server also generates `file-preprocessed.xml` files when starting. These files contain all the completed substitutions and overrides, and they are intended for informational use. If ZooKeeper substitutions were used in the config files but ZooKeeper is not available on the server start, the server loads the configuration from the preprocessed file.
diff --git a/docs/en/operations/query-cache.md b/docs/en/operations/query-cache.md
index 547105c65cc..d0b785d8fda 100644
--- a/docs/en/operations/query-cache.md
+++ b/docs/en/operations/query-cache.md
@@ -61,11 +61,12 @@ use_query_cache = true`) but one should keep in mind that all `SELECT` queries i
may return cached results then.
The query cache can be cleared using statement `SYSTEM DROP QUERY CACHE`. The content of the query cache is displayed in system table
-`system.query_cache`. The number of query cache hits and misses are shown as events "QueryCacheHits" and "QueryCacheMisses" in system table
-[system.events](system-tables/events.md). Both counters are only updated for `SELECT` queries which run with setting "use_query_cache =
-true". Other queries do not affect the cache miss counter. Field `query_log_usage` in system table
-[system.query_log](system-tables/query_log.md) shows for each ran query whether the query result was written into or read from the query
-cache.
+`system.query_cache`. The number of query cache hits and misses since database start are shown as events "QueryCacheHits" and
+"QueryCacheMisses" in system table [system.events](system-tables/events.md). Both counters are only updated for `SELECT` queries which run
+with the setting `use_query_cache = true`; other queries do not affect "QueryCacheMisses". Field `query_log_usage` in system table
+[system.query_log](system-tables/query_log.md) shows for each executed query whether the query result was written into or read from the
+query cache. Asynchronous metrics "QueryCacheEntries" and "QueryCacheBytes" in system table
+[system.asynchronous_metrics](system-tables/asynchronous_metrics.md) show how many entries / bytes the query cache currently contains.
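+
+As an illustration, the counters and metrics can be queried directly (a sketch, assuming the default system table and column names):
+
+```sql
+-- Cache hits and misses accumulated since server start
+SELECT event, value FROM system.events WHERE event IN ('QueryCacheHits', 'QueryCacheMisses');
+
+-- Current number of entries in the query cache and their total size in bytes
+SELECT metric, value FROM system.asynchronous_metrics WHERE metric IN ('QueryCacheEntries', 'QueryCacheBytes');
+```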
The query cache exists once per ClickHouse server process. However, cache results are by default not shared between users. This can be
changed (see below) but doing so is not recommended for security reasons.
diff --git a/docs/en/operations/server-configuration-parameters/settings.md b/docs/en/operations/server-configuration-parameters/settings.md
index a6ae517e401..e9f0f0dae00 100644
--- a/docs/en/operations/server-configuration-parameters/settings.md
+++ b/docs/en/operations/server-configuration-parameters/settings.md
@@ -512,7 +512,7 @@ Both the cache for `local_disk`, and temporary data will be stored in `/tiny_loc
cache
local_disk
/tiny_local_cache/
- 10M
+ 10M
1M
1
0
@@ -1592,6 +1592,10 @@ To manually turn on metrics history collection [`system.metric_log`](../../opera
7500
1000
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
```
@@ -1695,6 +1699,14 @@ Use the following parameters to configure logging:
- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
- `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1048576.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 8192.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to the disk in case of a crash.
+Default: false.
- `storage_policy` – Name of storage policy to use for the table (optional)
- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
@@ -1706,6 +1718,10 @@ Use the following parameters to configure logging:
toMonday(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1773,6 +1789,14 @@ Use the following parameters to configure logging:
- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
- `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1048576.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 8192.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to the disk in case of a crash.
+Default: false.
- `storage_policy` – Name of storage policy to use for the table (optional)
- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
@@ -1786,6 +1810,10 @@ If the table does not exist, ClickHouse will create it. If the structure of the
Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1831,6 +1859,14 @@ Use the following parameters to configure logging:
- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
- `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1048576.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 8192.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to the disk in case of a crash.
+Default: false.
- `storage_policy` – Name of storage policy to use for the table (optional)
- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
@@ -1844,6 +1880,10 @@ If the table does not exist, ClickHouse will create it. If the structure of the
toMonday(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1861,6 +1901,14 @@ Use the following parameters to configure logging:
- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
- `flush_interval_milliseconds` – Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1048576.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 8192.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to the disk in case of a crash.
+Default: false.
- `storage_policy` – Name of storage policy to use for the table (optional)
- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
@@ -1874,6 +1922,10 @@ If the table does not exist, ClickHouse will create it. If the structure of the
toYYYYMM(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1890,6 +1942,14 @@ Parameters:
- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` or `order_by` defined.
- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1048576.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 8192.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to the disk in case of a crash.
+Default: false.
- `storage_policy` – Name of storage policy to use for the table (optional)
- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
@@ -1901,13 +1961,16 @@ Parameters:
system
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day
```
-
## trace_log {#server_configuration_parameters-trace_log}
Settings for the [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) system table operation.
@@ -1920,6 +1983,12 @@ Parameters:
- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/index.md) for a system table. Can't be used if `partition_by` or `order_by` defined.
- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1048576.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 8192.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
- `storage_policy` – Name of storage policy to use for the table (optional)
- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
@@ -1931,6 +2000,10 @@ The default server configuration file `config.xml` contains the following settin
toYYYYMM(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1945,9 +2018,18 @@ Parameters:
- `partition_by` — [Custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a system table. Can't be used if `engine` defined.
- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) for a system table. Can't be used if `partition_by` defined.
- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1048576.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 8192.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to the disk in case of a crash.
+Default: false.
- `storage_policy` – Name of storage policy to use for the table (optional)
**Example**
+
```xml
@@ -1955,11 +2037,53 @@ Parameters:
7500
toYYYYMM(event_date)
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
+## crash_log {#server_configuration_parameters-crash_log}
+
+Settings for the [crash_log](../../operations/system-tables/crash-log.md) system table operation.
+
+Parameters:
+
+- `database` — Database for storing a table.
+- `table` — Table name.
+- `partition_by` — [Custom partitioning key](../../engines/table-engines/mergetree-family/custom-partitioning-key.md) for a system table. Can't be used if `engine` defined.
+- `order_by` - [Custom sorting key](../../engines/table-engines/mergetree-family/mergetree.md#order_by) for a system table. Can't be used if `engine` defined.
+- `engine` - [MergeTree Engine Definition](../../engines/table-engines/mergetree-family/index.md) for a system table. Can't be used if `partition_by` or `order_by` defined.
+- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `max_size_rows` – Maximal size of the log buffer in rows. When the number of unflushed log entries reaches `max_size_rows`, the logs are dumped to the disk.
+Default: 1024.
+- `reserved_size_rows` – Pre-allocated memory size in rows for the log buffer.
+Default: 1024.
+- `buffer_size_rows_flush_threshold` – Number of rows at which flushing the logs to disk is started in the background.
+Default: `max_size_rows / 2`.
+- `flush_on_crash` - Whether the logs should be dumped to the disk in case of a crash.
+Default: true.
+- `storage_policy` – Name of storage policy to use for the table (optional)
+- `settings` - [Additional parameters](../../engines/table-engines/mergetree-family/mergetree.md/#settings) that control the behavior of the MergeTree (optional).
+
+The default server configuration file `config.xml` contains the following settings section:
+
+``` xml
+
+ system
+
+ toYYYYMM(event_date)
+ 7500
+    <max_size_rows>1024</max_size_rows>
+    <reserved_size_rows>1024</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>512</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
+
+```
+
## query_masking_rules {#query-masking-rules}
Regexp-based rules, which will be applied to queries as well as all log messages before storing them in server logs,
diff --git a/docs/en/operations/settings/settings-formats.md b/docs/en/operations/settings/settings-formats.md
index ee8e0d547b8..fb10ff7f61b 100644
--- a/docs/en/operations/settings/settings-formats.md
+++ b/docs/en/operations/settings/settings-formats.md
@@ -1164,7 +1164,7 @@ Enabled by default.
Compression method used in output Arrow format. Supported codecs: `lz4_frame`, `zstd`, `none` (uncompressed)
-Default value: `none`.
+Default value: `lz4_frame`.
## ORC format settings {#orc-format-settings}
diff --git a/docs/en/operations/system-tables/asynchronous_metrics.md b/docs/en/operations/system-tables/asynchronous_metrics.md
index f357341da67..e46b495239c 100644
--- a/docs/en/operations/system-tables/asynchronous_metrics.md
+++ b/docs/en/operations/system-tables/asynchronous_metrics.md
@@ -32,6 +32,10 @@ SELECT * FROM system.asynchronous_metrics LIMIT 10
└─────────────────────────────────────────┴────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
+
+
## Metric descriptions
@@ -483,6 +487,14 @@ The value is similar to `OSUserTime` but divided to the number of CPU cores to b
Number of threads in the server of the PostgreSQL compatibility protocol.
+### QueryCacheBytes
+
+Total size of the query cache in bytes.
+
+### QueryCacheEntries
+
+Total number of entries in the query cache.
+
### ReplicasMaxAbsoluteDelay
Maximum difference in seconds between the most fresh replicated part and the most fresh data part still to be replicated, across Replicated tables. A very high value indicates a replica with no data.
diff --git a/docs/en/operations/system-tables/events.md b/docs/en/operations/system-tables/events.md
index ba5602ee292..7846fe4be5d 100644
--- a/docs/en/operations/system-tables/events.md
+++ b/docs/en/operations/system-tables/events.md
@@ -11,6 +11,8 @@ Columns:
- `value` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of events occurred.
- `description` ([String](../../sql-reference/data-types/string.md)) — Event description.
+You can find all supported events in source file [src/Common/ProfileEvents.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/ProfileEvents.cpp).
+
**Example**
``` sql
diff --git a/docs/en/operations/system-tables/index.md b/docs/en/operations/system-tables/index.md
index 1b720098fc7..a46f306f677 100644
--- a/docs/en/operations/system-tables/index.md
+++ b/docs/en/operations/system-tables/index.md
@@ -47,6 +47,10 @@ An example:
ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024
-->
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
diff --git a/docs/en/operations/system-tables/metrics.md b/docs/en/operations/system-tables/metrics.md
index 5a7dfd03eb4..b1dcea5500f 100644
--- a/docs/en/operations/system-tables/metrics.md
+++ b/docs/en/operations/system-tables/metrics.md
@@ -11,7 +11,7 @@ Columns:
- `value` ([Int64](../../sql-reference/data-types/int-uint.md)) — Metric value.
- `description` ([String](../../sql-reference/data-types/string.md)) — Metric description.
-The list of supported metrics you can find in the [src/Common/CurrentMetrics.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/CurrentMetrics.cpp) source file of ClickHouse.
+You can find all supported metrics in source file [src/Common/CurrentMetrics.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/CurrentMetrics.cpp).
**Example**
diff --git a/docs/ru/operations/configuration-files.md b/docs/ru/operations/configuration-files.md
index 01a91bd41c6..085761d80c7 100644
--- a/docs/ru/operations/configuration-files.md
+++ b/docs/ru/operations/configuration-files.md
@@ -87,7 +87,7 @@ $ cat /etc/clickhouse-server/users.d/alice.xml
## Шифрование {#encryption}
-Вы можете использовать симметричное шифрование для зашифровки элемента конфигурации, например, поля password. Чтобы это сделать, сначала настройте [кодек шифрования](../sql-reference/statements/create/table.md#encryption-codecs), затем добавьте аттибут`encryption_codec` с именем кодека шифрования как значение к элементу, который надо зашифровать.
+Вы можете использовать симметричное шифрование для шифрования элемента конфигурации, например, поля password. Чтобы это сделать, сначала настройте [кодек шифрования](../sql-reference/statements/create/table.md#encryption-codecs), затем добавьте к элементу, который надо зашифровать, атрибут `encrypted_by` со значением, равным имени кодека шифрования.
В отличии от аттрибутов `from_zk`, `from_env` и `incl` (или элемента `include`), подстановка, т.е. расшифровка зашифрованного значения, не выподняется в файле предобработки. Расшифровка происходит только во время исполнения в серверном процессе.
@@ -95,15 +95,18 @@ $ cat /etc/clickhouse-server/users.d/alice.xml
```xml
+
00112233445566778899aabbccddeeff
+
admin
- 961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85
+ 961F000000040000000000EEDDEF4F453CFE6457C4234BD7C09258BD651D85
+
```
diff --git a/docs/ru/operations/server-configuration-parameters/settings.md b/docs/ru/operations/server-configuration-parameters/settings.md
index 421df3fe3eb..81a696bcfc1 100644
--- a/docs/ru/operations/server-configuration-parameters/settings.md
+++ b/docs/ru/operations/server-configuration-parameters/settings.md
@@ -1058,6 +1058,10 @@ ClickHouse использует потоки из глобального пул
7500
1000
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
```
@@ -1155,12 +1159,19 @@ ClickHouse использует потоки из глобального пул
При настройке логирования используются следующие параметры:
-- `database` — имя базы данных;
-- `table` — имя таблицы;
-- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
-- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
-- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
-
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1048576.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 8192.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: false.
**Пример**
``` xml
@@ -1169,6 +1180,10 @@ ClickHouse использует потоки из глобального пул
toMonday(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1218,11 +1233,19 @@ ClickHouse использует потоки из глобального пул
При настройке логирования используются следующие параметры:
-- `database` — имя базы данных;
-- `table` — имя таблицы, куда будет записываться лог;
-- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
-- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
-- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1048576.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 8192.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: false.
Если таблица не существует, то ClickHouse создаст её. Если структура журнала запросов изменилась при обновлении сервера ClickHouse, то таблица со старой структурой переименовывается, а новая таблица создается автоматически.
@@ -1234,6 +1257,10 @@ ClickHouse использует потоки из глобального пул
Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1245,11 +1272,19 @@ ClickHouse использует потоки из глобального пул
При настройке логирования используются следующие параметры:
-- `database` — имя базы данных;
-- `table` — имя таблицы, куда будет записываться лог;
-- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
-- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
-- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1048576.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 8192.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: false.
Если таблица не существует, то ClickHouse создаст её. Если структура журнала запросов изменилась при обновлении сервера ClickHouse, то таблица со старой структурой переименовывается, а новая таблица создается автоматически.
@@ -1261,6 +1296,10 @@ ClickHouse использует потоки из глобального пул
toMonday(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1272,11 +1311,19 @@ ClickHouse использует потоки из глобального пул
При настройке логирования используются следующие параметры:
-- `database` – имя базы данных.
-- `table` – имя системной таблицы, где будут логироваться запросы.
-- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать, если задан параметр `engine`.
-- `engine` — устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать, если задан параметр `partition_by`.
-- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1048576.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 8192.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: false.
Если таблица не существует, то ClickHouse создаст её. Если структура журнала запросов изменилась при обновлении сервера ClickHouse, то таблица со старой структурой переименовывается, а новая таблица создается автоматически.
@@ -1288,6 +1335,10 @@ ClickHouse использует потоки из глобального пул
toYYYYMM(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
@@ -1297,12 +1348,20 @@ ClickHouse использует потоки из глобального пул
Параметры:
-- `level` — Максимальный уровень сообщения (по умолчанию `Trace`) которое будет сохранено в таблице.
-- `database` — имя базы данных для хранения таблицы.
-- `table` — имя таблицы, куда будут записываться текстовые сообщения.
-- `partition_by` — устанавливает [произвольный ключ партиционирования](../../engines/table-engines/mergetree-family/custom-partitioning-key.md). Нельзя использовать если используется `engine`
-- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
-- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `level` — Максимальный уровень сообщения (по умолчанию `Trace`) которое будет сохранено в таблице.
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1048576.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 8192.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: false.
**Пример**
```xml
@@ -1312,6 +1371,10 @@ ClickHouse использует потоки из глобального пул
system
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
Engine = MergeTree PARTITION BY event_date ORDER BY event_time TTL event_date + INTERVAL 30 day
@@ -1323,13 +1386,21 @@ ClickHouse использует потоки из глобального пул
Настройки для [trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) system table operation.
-Parameters:
+Параметры:
-- `database` — Database for storing a table.
-- `table` — Table name.
-- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
-- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
-- `flush_interval_milliseconds` — Interval for flushing data from the buffer in memory to the table.
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1048576.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 8192.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: false.
По умолчанию файл настроек сервера `config.xml` содержит следующие настройки:
@@ -1339,9 +1410,84 @@ Parameters:
toYYYYMM(event_date)
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
```
+## asynchronous_insert_log {#server_configuration_parameters-asynchronous_insert_log}
+
+Настройки для системной таблицы asynchronous_insert_log, предназначенной для логирования асинхронных вставок.
+
+Параметры:
+
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1048576.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 8192.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: false.
+
+**Пример**
+
+```xml
+
+
+ system
+
+ 7500
+ toYYYYMM(event_date)
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+
+
+
+```
+
+## crash_log {#server_configuration_parameters-crash_log}
+
+Настройки для таблицы [crash_log](../../operations/system-tables/crash-log.md).
+
+Параметры:
+
+- `database` — имя базы данных;
+- `table` — имя таблицы;
+- `partition_by` — устанавливает [произвольный ключ партиционирования](../../operations/server-configuration-parameters/settings.md). Нельзя использовать если используется `engine`
+- `engine` - устанавливает [настройки MergeTree Engine](../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-creating-a-table) для системной таблицы. Нельзя использовать если используется `partition_by`.
+- `flush_interval_milliseconds` — период сброса данных из буфера в памяти в таблицу.
+- `max_size_rows` – максимальный размер в строках для буфера с логами. Когда буфер будет заполнен полностью, сбрасывает логи на диск.
+Значение по умолчанию: 1024.
+- `reserved_size_rows` – преаллоцированный размер в строках для буфера с логами.
+Значение по умолчанию: 1024.
+- `buffer_size_rows_flush_threshold` – количество строк в логе, при достижении которого логи начнут скидываться на диск в неблокирующем режиме.
+Значение по умолчанию: `max_size_rows / 2`.
+- `flush_on_crash` - должны ли логи быть сброшены на диск в случае неожиданной остановки программы.
+Значение по умолчанию: true.
+
+**Пример**
+
+``` xml
+
+ system
+
+ toYYYYMM(event_date)
+ 7500
+    <max_size_rows>1024</max_size_rows>
+    <reserved_size_rows>1024</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>512</buffer_size_rows_flush_threshold>
+    <flush_on_crash>true</flush_on_crash>
+
+```
+
## query_masking_rules {#query-masking-rules}
Правила, основанные на регулярных выражениях, которые будут применены для всех запросов, а также для всех сообщений перед сохранением их в лог на сервере,
diff --git a/docs/ru/operations/system-tables/index.md b/docs/ru/operations/system-tables/index.md
index 7ff368b1910..24f79cae212 100644
--- a/docs/ru/operations/system-tables/index.md
+++ b/docs/ru/operations/system-tables/index.md
@@ -45,6 +45,10 @@ sidebar_label: "Системные таблицы"
ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024
-->
7500
+    <max_size_rows>1048576</max_size_rows>
+    <reserved_size_rows>8192</reserved_size_rows>
+    <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+    <flush_on_crash>false</flush_on_crash>
```
diff --git a/programs/local/LocalServer.cpp b/programs/local/LocalServer.cpp
index 3c2a8ae3152..587c88a2745 100644
--- a/programs/local/LocalServer.cpp
+++ b/programs/local/LocalServer.cpp
@@ -266,6 +266,10 @@ void LocalServer::tryInitPath()
global_context->setUserFilesPath(""); // user's files are everywhere
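+    /// Directory for user scripts (e.g. executable user defined functions); defaults to "user_scripts/" under the working path unless configured explicitly.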
+ std::string user_scripts_path = config().getString("user_scripts_path", fs::path(path) / "user_scripts/");
+ global_context->setUserScriptsPath(user_scripts_path);
+ fs::create_directories(user_scripts_path);
+
/// top_level_domains_lists
const std::string & top_level_domains_path = config().getString("top_level_domains_path", path + "top_level_domains/");
if (!top_level_domains_path.empty())
@@ -490,6 +494,17 @@ try
applyCmdSettings(global_context);
+ /// try to load user defined executable functions, throw on error and die
+ try
+ {
+ global_context->loadOrReloadUserDefinedExecutableFunctions(config());
+ }
+ catch (...)
+ {
+ tryLogCurrentException(&logger(), "Caught exception while loading user defined executable functions.");
+ throw;
+ }
+
if (is_interactive)
{
clearTerminal();
@@ -569,7 +584,9 @@ void LocalServer::processConfig()
}
print_stack_trace = config().getBool("stacktrace", false);
- load_suggestions = (is_interactive || delayed_interactive) && !config().getBool("disable_suggestion", false);
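+    /// Only load suggestions when the default ClickHouse dialect is used; other dialects have different syntax.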
+ const std::string clickhouse_dialect{"clickhouse"};
+ load_suggestions = (is_interactive || delayed_interactive) && !config().getBool("disable_suggestion", false)
+ && config().getString("dialect", clickhouse_dialect) == clickhouse_dialect;
auto logging = (config().has("logger.console")
|| config().has("logger.level")
diff --git a/programs/server/Server.cpp b/programs/server/Server.cpp
index dce52ecdb12..405ebf7fb2f 100644
--- a/programs/server/Server.cpp
+++ b/programs/server/Server.cpp
@@ -1035,6 +1035,11 @@ try
/// Initialize merge tree metadata cache
if (config().has("merge_tree_metadata_cache"))
{
+ global_context->addWarningMessage("The setting 'merge_tree_metadata_cache' is enabled."
+ " But the feature of 'metadata cache in RocksDB' is experimental and is not ready for production."
+ " The usage of this feature can lead to data corruption and loss. The setting should be disabled in production."
+ " See the corresponding report at https://github.com/ClickHouse/ClickHouse/issues/51182");
+
fs::create_directories(path / "rocksdb/");
size_t size = config().getUInt64("merge_tree_metadata_cache.lru_cache_size", 256 << 20);
bool continue_if_corrupted = config().getBool("merge_tree_metadata_cache.continue_if_corrupted", false);
diff --git a/programs/server/config.xml b/programs/server/config.xml
index 2a7dc1e576a..14b8954fc39 100644
--- a/programs/server/config.xml
+++ b/programs/server/config.xml
@@ -1026,6 +1026,14 @@
7500
+
+        <max_size_rows>1048576</max_size_rows>
+
+        <reserved_size_rows>8192</reserved_size_rows>
+
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+
+        <flush_on_crash>false</flush_on_crash>
@@ -1039,6 +1047,11 @@
toYYYYMM(event_date)
7500
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+
+        <flush_on_crash>false</flush_on_crash>
@@ -1084,7 +1109,11 @@
system
7500
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
1000
+        <flush_on_crash>false</flush_on_crash>
@@ -1151,6 +1196,10 @@
toYYYYMM(event_date)
7500
+        <max_size_rows>1048576</max_size_rows>
+        <reserved_size_rows>8192</reserved_size_rows>
+        <buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
+        <flush_on_crash>false</flush_on_crash>
-
-
not protected by mutex
mutable std::mutex mutex;
TimesExecuted times_executed TSA_GUARDED_BY(mutex);
diff --git a/src/Interpreters/Context.cpp b/src/Interpreters/Context.cpp
index f83e524ffb9..7c3646a9583 100644
--- a/src/Interpreters/Context.cpp
+++ b/src/Interpreters/Context.cpp
@@ -2903,16 +2903,6 @@ std::map Context::getAuxiliaryZooKeepers() const
}
#if USE_ROCKSDB
-MergeTreeMetadataCachePtr Context::getMergeTreeMetadataCache() const
-{
- auto cache = tryGetMergeTreeMetadataCache();
- if (!cache)
- throw Exception(
- ErrorCodes::LOGICAL_ERROR,
- "Merge tree metadata cache is not initialized, please add config merge_tree_metadata_cache in config.xml and restart");
- return cache;
-}
-
MergeTreeMetadataCachePtr Context::tryGetMergeTreeMetadataCache() const
{
return shared->merge_tree_metadata_cache;
@@ -3210,6 +3200,12 @@ void Context::initializeMergeTreeMetadataCache(const String & dir, size_t size)
}
#endif
+/// Call after unexpected crash happen.
+void Context::handleCrash() const
+{
+ shared->system_logs->handleCrash();
+}
+
bool Context::hasTraceCollector() const
{
return shared->hasTraceCollector();
diff --git a/src/Interpreters/Context.h b/src/Interpreters/Context.h
index 75752774d4c..0d567816ec9 100644
--- a/src/Interpreters/Context.h
+++ b/src/Interpreters/Context.h
@@ -889,7 +889,6 @@ public:
void setClientProtocolVersion(UInt64 version);
#if USE_ROCKSDB
- MergeTreeMetadataCachePtr getMergeTreeMetadataCache() const;
MergeTreeMetadataCachePtr tryGetMergeTreeMetadataCache() const;
#endif
@@ -998,6 +997,9 @@ public:
void initializeMergeTreeMetadataCache(const String & dir, size_t size);
#endif
+ /// Call after unexpected crash happen.
+ void handleCrash() const;
+
bool hasTraceCollector() const;
/// Nullptr if the query log is not ready for this moment.
diff --git a/src/Interpreters/CrashLog.cpp b/src/Interpreters/CrashLog.cpp
index 379c9122cc8..ec693eb7931 100644
--- a/src/Interpreters/CrashLog.cpp
+++ b/src/Interpreters/CrashLog.cpp
@@ -83,9 +83,6 @@ void collectCrashLog(Int32 signal, UInt64 thread_id, const String & query_id, co
stack_trace.toStringEveryLine([&trace_full](std::string_view line) { trace_full.push_back(line); });
CrashLogElement element{static_cast(time / 1000000000), time, signal, thread_id, query_id, trace, trace_full};
- crash_log_owned->add(element);
- /// Notify savingThreadFunction to start flushing crash log
- /// Crash log is storing in parallel with the signal processing thread.
- crash_log_owned->notifyFlush(true);
+ crash_log_owned->add(std::move(element));
}
}
diff --git a/src/Interpreters/CrashLog.h b/src/Interpreters/CrashLog.h
index 78794574c82..65714295be4 100644
--- a/src/Interpreters/CrashLog.h
+++ b/src/Interpreters/CrashLog.h
@@ -45,6 +45,11 @@ public:
{
crash_log = crash_log_;
}
+
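+    /// Defaults specific to the crash log: a small pre-allocated buffer that is flushed to disk while a crash is being handled.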
+ static consteval size_t getDefaultMaxSize() { return 1024; }
+ static consteval size_t getDefaultReservedSize() { return 1024; }
+ static consteval size_t getDefaultFlushIntervalMilliseconds() { return 1000; }
+ static consteval size_t shouldNotifyFlushOnCrash() { return true; }
};
}
diff --git a/src/Interpreters/MetricLog.cpp b/src/Interpreters/MetricLog.cpp
index 24f77f7d0ba..24e9e4487ae 100644
--- a/src/Interpreters/MetricLog.cpp
+++ b/src/Interpreters/MetricLog.cpp
@@ -113,7 +113,7 @@ void MetricLog::metricThreadFunction()
elem.current_metrics[i] = CurrentMetrics::values[i];
}
- this->add(elem);
+ this->add(std::move(elem));
/// We will record current time into table but align it to regular time intervals to avoid time drift.
/// We may drop some time points if the server is overloaded and recording took too much time.
diff --git a/src/Interpreters/PartLog.cpp b/src/Interpreters/PartLog.cpp
index 881fcae4de6..a97f1f405bc 100644
--- a/src/Interpreters/PartLog.cpp
+++ b/src/Interpreters/PartLog.cpp
@@ -242,7 +242,7 @@ bool PartLog::addNewParts(
elem.profile_counters = part_log_entry.profile_counters;
- part_log->add(elem);
+ part_log->add(std::move(elem));
}
}
catch (...)
diff --git a/src/Interpreters/ProcessorsProfileLog.cpp b/src/Interpreters/ProcessorsProfileLog.cpp
index e78a07bb752..14159ad3438 100644
--- a/src/Interpreters/ProcessorsProfileLog.cpp
+++ b/src/Interpreters/ProcessorsProfileLog.cpp
@@ -73,12 +73,5 @@ void ProcessorProfileLogElement::appendToBlock(MutableColumns & columns) const
columns[i++]->insert(output_bytes);
}
-ProcessorsProfileLog::ProcessorsProfileLog(ContextPtr context_, const String & database_name_,
- const String & table_name_, const String & storage_def_,
- size_t flush_interval_milliseconds_)
- : SystemLog(context_, database_name_, table_name_,
- storage_def_, flush_interval_milliseconds_)
-{
-}
}
diff --git a/src/Interpreters/ProcessorsProfileLog.h b/src/Interpreters/ProcessorsProfileLog.h
index 81d58edd913..63791c0374c 100644
--- a/src/Interpreters/ProcessorsProfileLog.h
+++ b/src/Interpreters/ProcessorsProfileLog.h
@@ -45,12 +45,7 @@ struct ProcessorProfileLogElement
class ProcessorsProfileLog : public SystemLog<ProcessorProfileLogElement>
{
public:
- ProcessorsProfileLog(
- ContextPtr context_,
- const String & database_name_,
- const String & table_name_,
- const String & storage_def_,
- size_t flush_interval_milliseconds_);
+ using SystemLog::SystemLog;
};
}
diff --git a/src/Interpreters/ServerAsynchronousMetrics.cpp b/src/Interpreters/ServerAsynchronousMetrics.cpp
index 0fbcfc9e6a1..68411e80755 100644
--- a/src/Interpreters/ServerAsynchronousMetrics.cpp
+++ b/src/Interpreters/ServerAsynchronousMetrics.cpp
@@ -92,6 +92,12 @@ void ServerAsynchronousMetrics::updateImpl(AsynchronousMetricValues & new_values
" The files opened with `mmap` are kept in the cache to avoid costly TLB flushes."};
}
+ if (auto query_cache = getContext()->getQueryCache())
+ {
+ new_values["QueryCacheBytes"] = { query_cache->weight(), "Total size of the query cache in bytes." };
+ new_values["QueryCacheEntries"] = { query_cache->count(), "Total number of entries in the query cache." };
+ }
+
{
auto caches = FileCacheFactory::instance().getAll();
size_t total_bytes = 0;
diff --git a/src/Interpreters/Session.cpp b/src/Interpreters/Session.cpp
index 97b056cfc32..cadf619700c 100644
--- a/src/Interpreters/Session.cpp
+++ b/src/Interpreters/Session.cpp
@@ -240,7 +240,7 @@ private:
if (session != sessions.end() && session->second->close_cycle <= current_cycle)
{
- if (!session->second.unique())
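+            /// std::shared_ptr::unique() is deprecated (removed in C++20); use_count() == 1 is the equivalent check here.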
+ if (session->second.use_count() != 1)
{
LOG_TEST(log, "Delay closing session with session_id: {}, user_id: {}", key.second, key.first);
diff --git a/src/Interpreters/SessionLog.cpp b/src/Interpreters/SessionLog.cpp
index c930013e52b..0a8a7fc18c5 100644
--- a/src/Interpreters/SessionLog.cpp
+++ b/src/Interpreters/SessionLog.cpp
@@ -227,7 +227,7 @@ void SessionLog::addLoginSuccess(const UUID & auth_id, std::optional ses
for (const auto & s : settings.allChanged())
log_entry.settings.emplace_back(s.getName(), s.getValueString());
- add(log_entry);
+ add(std::move(log_entry));
}
void SessionLog::addLoginFailure(
@@ -243,7 +243,7 @@ void SessionLog::addLoginFailure(
log_entry.client_info = info;
log_entry.user_identified_with = AuthenticationType::NO_PASSWORD;
- add(log_entry);
+ add(std::move(log_entry));
}
void SessionLog::addLogOut(const UUID & auth_id, const UserPtr & login_user, const ClientInfo & client_info)
@@ -257,7 +257,7 @@ void SessionLog::addLogOut(const UUID & auth_id, const UserPtr & login_user, con
log_entry.external_auth_server = login_user ? login_user->auth_data.getLDAPServerName() : "";
log_entry.client_info = client_info;
- add(log_entry);
+ add(std::move(log_entry));
}
}
diff --git a/src/Interpreters/SystemLog.cpp b/src/Interpreters/SystemLog.cpp
index 0b89b1dec26..be0468aa876 100644
--- a/src/Interpreters/SystemLog.cpp
+++ b/src/Interpreters/SystemLog.cpp
@@ -101,7 +101,6 @@ namespace
namespace
{
-constexpr size_t DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS = 7500;
constexpr size_t DEFAULT_METRIC_LOG_COLLECT_INTERVAL_MILLISECONDS = 1000;
/// Creates a system log with MergeTree engine using parameters from config
@@ -124,18 +123,23 @@ std::shared_ptr createSystemLog(
LOG_DEBUG(&Poco::Logger::get("SystemLog"),
"Creating {}.{} from {}", default_database_name, default_table_name, config_prefix);
- String database = config.getString(config_prefix + ".database", default_database_name);
- String table = config.getString(config_prefix + ".table", default_table_name);
+ SystemLogSettings log_settings;
+ log_settings.queue_settings.database = config.getString(config_prefix + ".database", default_database_name);
+ log_settings.queue_settings.table = config.getString(config_prefix + ".table", default_table_name);
- if (database != default_database_name)
+ if (log_settings.queue_settings.database != default_database_name)
{
/// System tables must be loaded before other tables, but loading order is undefined for all databases except `system`
- LOG_ERROR(&Poco::Logger::get("SystemLog"), "Custom database name for a system table specified in config."
- " Table `{}` will be created in `system` database instead of `{}`", table, database);
- database = default_database_name;
+ LOG_ERROR(
+ &Poco::Logger::get("SystemLog"),
+ "Custom database name for a system table specified in config."
+ " Table `{}` will be created in `system` database instead of `{}`",
+ log_settings.queue_settings.table,
+ log_settings.queue_settings.database);
+
+ log_settings.queue_settings.database = default_database_name;
}
- String engine;
if (config.has(config_prefix + ".engine"))
{
if (config.has(config_prefix + ".partition_by"))
@@ -159,26 +163,26 @@ std::shared_ptr createSystemLog(
"If 'engine' is specified for system table, SETTINGS parameters should "
"be specified directly inside 'engine' and 'settings' setting doesn't make sense");
- engine = config.getString(config_prefix + ".engine");
+ log_settings.engine = config.getString(config_prefix + ".engine");
}
else
{
/// ENGINE expr is necessary.
- engine = "ENGINE = MergeTree";
+ log_settings.engine = "ENGINE = MergeTree";
/// PARTITION expr is not necessary.
String partition_by = config.getString(config_prefix + ".partition_by", "toYYYYMM(event_date)");
if (!partition_by.empty())
- engine += " PARTITION BY (" + partition_by + ")";
+ log_settings.engine += " PARTITION BY (" + partition_by + ")";
/// TTL expr is not necessary.
String ttl = config.getString(config_prefix + ".ttl", "");
if (!ttl.empty())
- engine += " TTL " + ttl;
+ log_settings.engine += " TTL " + ttl;
/// ORDER BY expr is necessary.
String order_by = config.getString(config_prefix + ".order_by", TSystemLog::getDefaultOrderBy());
- engine += " ORDER BY (" + order_by + ")";
+ log_settings.engine += " ORDER BY (" + order_by + ")";
/// SETTINGS expr is not necessary.
/// https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree#settings
@@ -188,24 +192,52 @@ std::shared_ptr createSystemLog(
String settings = config.getString(config_prefix + ".settings", "");
if (!storage_policy.empty() || !settings.empty())
{
- engine += " SETTINGS";
+ log_settings.engine += " SETTINGS";
/// If 'storage_policy' is repeated, the 'settings' configuration is preferred.
if (!storage_policy.empty())
- engine += " storage_policy = " + quoteString(storage_policy);
+ log_settings.engine += " storage_policy = " + quoteString(storage_policy);
if (!settings.empty())
- engine += (storage_policy.empty() ? " " : ", ") + settings;
+ log_settings.engine += (storage_policy.empty() ? " " : ", ") + settings;
}
}
/// Validate engine definition syntax to prevent some configuration errors.
ParserStorageWithComment storage_parser;
- parseQuery(storage_parser, engine.data(), engine.data() + engine.size(),
+ parseQuery(storage_parser, log_settings.engine.data(), log_settings.engine.data() + log_settings.engine.size(),
"Storage to create table for " + config_prefix, 0, DBMS_DEFAULT_MAX_PARSER_DEPTH);
- size_t flush_interval_milliseconds = config.getUInt64(config_prefix + ".flush_interval_milliseconds",
- DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS);
+ log_settings.queue_settings.flush_interval_milliseconds = config.getUInt64(config_prefix + ".flush_interval_milliseconds",
+ TSystemLog::getDefaultFlushIntervalMilliseconds());
- return std::make_shared(context, database, table, engine, flush_interval_milliseconds);
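+    /// New queue settings: maximal buffer size, pre-allocated size, background-flush threshold and crash-flush behaviour, with per-log defaults taken from TSystemLog.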
+ log_settings.queue_settings.max_size_rows = config.getUInt64(config_prefix + ".max_size_rows",
+ TSystemLog::getDefaultMaxSize());
+
+ if (log_settings.queue_settings.max_size_rows < 1)
+ throw Exception(ErrorCodes::BAD_ARGUMENTS, "{0}.max_size_rows {1} should be 1 at least",
+ config_prefix,
+ log_settings.queue_settings.max_size_rows);
+
+ log_settings.queue_settings.reserved_size_rows = config.getUInt64(config_prefix + ".reserved_size_rows",
+ TSystemLog::getDefaultReservedSize());
+
+ if (log_settings.queue_settings.max_size_rows < log_settings.queue_settings.reserved_size_rows)
+ {
+ throw Exception(ErrorCodes::BAD_ARGUMENTS,
+ "{0}.max_size_rows {1} should be greater or equal to {0}.reserved_size_rows {2}",
+ config_prefix,
+ log_settings.queue_settings.max_size_rows,
+ log_settings.queue_settings.reserved_size_rows);
+ }
+
+ log_settings.queue_settings.buffer_size_rows_flush_threshold = config.getUInt64(config_prefix + ".buffer_size_rows_flush_threshold",
+ log_settings.queue_settings.max_size_rows / 2);
+
+ log_settings.queue_settings.notify_flush_on_crash = config.getBool(config_prefix + ".flush_on_crash",
+ TSystemLog::shouldNotifyFlushOnCrash());
+
+ log_settings.queue_settings.turn_off_logger = TSystemLog::shouldTurnOffLogger();
+
+ return std::make_shared(context, log_settings);
}
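
For readability, the invariants enforced by the checks above can be restated in a standalone sketch: `max_size_rows` must be at least 1, it must be no smaller than `reserved_size_rows`, and the flush threshold defaults to half of `max_size_rows`. This is an illustration only, not the ClickHouse implementation; the use of 0 to mean "not configured" is an assumption of the sketch.

``` cpp
#include <cstdint>
#include <iostream>
#include <stdexcept>
#include <string>

// Mirror of the new queue limits (names follow the patch; illustration only).
struct QueueSettingsSketch
{
    uint64_t max_size_rows = 0;
    uint64_t reserved_size_rows = 0;
    uint64_t buffer_size_rows_flush_threshold = 0; // 0 means "not configured" in this sketch
};

void validate(const std::string & prefix, QueueSettingsSketch & s)
{
    if (s.max_size_rows < 1)
        throw std::invalid_argument(prefix + ".max_size_rows should be at least 1");

    if (s.max_size_rows < s.reserved_size_rows)
        throw std::invalid_argument(prefix + ".max_size_rows should be greater than or equal to "
                                    + prefix + ".reserved_size_rows");

    // If no explicit threshold is given, flushing starts at half of the hard limit.
    if (s.buffer_size_rows_flush_threshold == 0)
        s.buffer_size_rows_flush_threshold = s.max_size_rows / 2;
}

int main()
{
    QueueSettingsSketch s{/*max*/ 1048576, /*reserved*/ 8192, /*threshold*/ 0};
    validate("query_log", s);
    std::cout << s.buffer_size_rows_flush_threshold << '\n'; // prints 524288
}
```
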
@@ -325,23 +357,25 @@ void SystemLogs::shutdown()
log->shutdown();
}
+void SystemLogs::handleCrash()
+{
+ for (auto & log : logs)
+ log->handleCrash();
+}
template
SystemLog::SystemLog(
ContextPtr context_,
- const String & database_name_,
- const String & table_name_,
- const String & storage_def_,
- size_t flush_interval_milliseconds_,
+ const SystemLogSettings & settings_,
std::shared_ptr> queue_)
- : Base(database_name_ + "." + table_name_, flush_interval_milliseconds_, queue_)
+ : Base(settings_.queue_settings, queue_)
, WithContext(context_)
- , log(&Poco::Logger::get("SystemLog (" + database_name_ + "." + table_name_ + ")"))
- , table_id(database_name_, table_name_)
- , storage_def(storage_def_)
+ , log(&Poco::Logger::get("SystemLog (" + settings_.queue_settings.database + "." + settings_.queue_settings.table + ")"))
+ , table_id(settings_.queue_settings.database, settings_.queue_settings.table)
+ , storage_def(settings_.engine)
, create_query(serializeAST(*getCreateTableQuery()))
{
- assert(database_name_ == DatabaseCatalog::SYSTEM_DATABASE);
+ assert(settings_.queue_settings.database == DatabaseCatalog::SYSTEM_DATABASE);
}
template
diff --git a/src/Interpreters/SystemLog.h b/src/Interpreters/SystemLog.h
index 5d8bb30150d..437b1b2a6bb 100644
--- a/src/Interpreters/SystemLog.h
+++ b/src/Interpreters/SystemLog.h
@@ -58,6 +58,7 @@ struct SystemLogs
~SystemLogs();
void shutdown();
+ void handleCrash();
std::shared_ptr query_log; /// Used to log queries.
std::shared_ptr query_thread_log; /// Used to log query threads.
@@ -87,6 +88,12 @@ struct SystemLogs
std::vector logs;
};
+struct SystemLogSettings
+{
+ SystemLogQueueSettings queue_settings;
+
+ String engine;
+};
template
class SystemLog : public SystemLogBase, private boost::noncopyable, WithContext
@@ -103,13 +110,9 @@ public:
* where N is the minimal number, starting from 1, such that a table with the corresponding name doesn't exist yet;
* and a new table gets created, as if the previous table did not exist.
*/
- SystemLog(
- ContextPtr context_,
- const String & database_name_,
- const String & table_name_,
- const String & storage_def_,
- size_t flush_interval_milliseconds_,
- std::shared_ptr> queue_ = nullptr);
+ SystemLog(ContextPtr context_,
+ const SystemLogSettings& settings_,
+ std::shared_ptr> queue_ = nullptr);
/** Append a record into log.
* Writing to table will be done asynchronously and in case of failure, record could be lost.
diff --git a/src/Interpreters/TextLog.cpp b/src/Interpreters/TextLog.cpp
index 108135c78b3..3951a41f0c5 100644
--- a/src/Interpreters/TextLog.cpp
+++ b/src/Interpreters/TextLog.cpp
@@ -80,15 +80,10 @@ void TextLogElement::appendToBlock(MutableColumns & columns) const
columns[i++]->insert(message_format_string);
}
-TextLog::TextLog(ContextPtr context_, const String & database_name_,
- const String & table_name_, const String & storage_def_,
- size_t flush_interval_milliseconds_)
- : SystemLog(context_, database_name_, table_name_,
- storage_def_, flush_interval_milliseconds_, getLogQueue(flush_interval_milliseconds_))
+TextLog::TextLog(ContextPtr context_,
+ const SystemLogSettings & settings)
+ : SystemLog(context_, settings, getLogQueue(settings.queue_settings))
{
- // SystemLog methods may write text logs, so we disable logging for the text
- // log table to avoid recursion.
- log->setLevel(0);
}
}
diff --git a/src/Interpreters/TextLog.h b/src/Interpreters/TextLog.h
index 60ca11632aa..4bfed5327f3 100644
--- a/src/Interpreters/TextLog.h
+++ b/src/Interpreters/TextLog.h
@@ -42,18 +42,15 @@ class TextLog : public SystemLog
public:
using Queue = SystemLogQueue;
- TextLog(
- ContextPtr context_,
- const String & database_name_,
- const String & table_name_,
- const String & storage_def_,
- size_t flush_interval_milliseconds_);
+ explicit TextLog(ContextPtr context_, const SystemLogSettings & settings);
- static std::shared_ptr getLogQueue(size_t flush_interval_milliseconds)
+ static std::shared_ptr getLogQueue(const SystemLogQueueSettings & settings)
{
- static std::shared_ptr queue = std::make_shared("text_log", flush_interval_milliseconds, true);
+ static std::shared_ptr queue = std::make_shared(settings);
return queue;
}
+
+ static consteval bool shouldTurnOffLogger() { return true; }
};
}
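
The removed explicit `log->setLevel(0)` and the new `shouldTurnOffLogger()` express the same intent: the text log must not log about itself, otherwise storing a log row would emit another log message and recurse. A minimal standalone illustration of that guard follows; the toy types here are assumptions and only sketch the idea, they are not the ClickHouse classes.

``` cpp
#include <iostream>
#include <string>

// Toy logger: level 0 means "emit nothing".
struct ToyLogger
{
    int level = 7;
    void setLevel(int l) { level = l; }
    bool enabled() const { return level > 0; }
};

// Toy text-log queue that silences its own logger when asked to.
struct ToyTextLogQueue
{
    ToyLogger logger;
    int stored_rows = 0;

    explicit ToyTextLogQueue(bool turn_off_logger)
    {
        if (turn_off_logger)
            logger.setLevel(0); // same effect as the removed explicit setLevel(0) call
    }

    void log(const std::string & message)
    {
        if (logger.enabled())
            push(message); // every emitted message becomes a text_log row
    }

    void push(const std::string & row)
    {
        ++stored_rows;
        log("stored a row: " + row); // with turn_off_logger = false this would recurse forever
    }
};

int main()
{
    ToyTextLogQueue queue(/*turn_off_logger=*/true);
    queue.push("hello");
    std::cout << "rows stored: " << queue.stored_rows << '\n'; // prints 1
}
```
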
diff --git a/src/Interpreters/ThreadStatusExt.cpp b/src/Interpreters/ThreadStatusExt.cpp
index 398bea26b87..7a6bc45c118 100644
--- a/src/Interpreters/ThreadStatusExt.cpp
+++ b/src/Interpreters/ThreadStatusExt.cpp
@@ -513,7 +513,7 @@ void ThreadStatus::logToQueryThreadLog(QueryThreadLog & thread_log, const String
}
}
- thread_log.add(elem);
+ thread_log.add(std::move(elem));
}
static String getCleanQueryAst(const ASTPtr q, ContextPtr context)
@@ -573,7 +573,7 @@ void ThreadStatus::logToQueryViewsLog(const ViewRuntimeData & vinfo)
element.stack_trace = getExceptionStackTraceString(vinfo.exception);
}
- views_log->add(element);
+ views_log->add(std::move(element));
}
void CurrentThread::attachToGroup(const ThreadGroupPtr & thread_group)
diff --git a/src/Interpreters/TraceCollector.cpp b/src/Interpreters/TraceCollector.cpp
index cb00e37df69..19cc5c4e6bd 100644
--- a/src/Interpreters/TraceCollector.cpp
+++ b/src/Interpreters/TraceCollector.cpp
@@ -128,7 +128,7 @@ void TraceCollector::run()
UInt64 time = static_cast(ts.tv_sec * 1000000000LL + ts.tv_nsec);
UInt64 time_in_microseconds = static_cast((ts.tv_sec * 1000000LL) + (ts.tv_nsec / 1000));
TraceLogElement element{time_t(time / 1000000000), time_in_microseconds, time, trace_type, thread_id, query_id, trace, size, event, increment};
- trace_log->add(element);
+ trace_log->add(std::move(element));
}
}
}
diff --git a/src/Interpreters/TransactionLog.cpp b/src/Interpreters/TransactionLog.cpp
index 2ef4f4d6218..631e7f5c746 100644
--- a/src/Interpreters/TransactionLog.cpp
+++ b/src/Interpreters/TransactionLog.cpp
@@ -34,7 +34,7 @@ try
elem.tid = tid;
elem.csn = csn;
elem.fillCommonFields(nullptr);
- system_log->add(elem);
+ system_log->add(std::move(elem));
}
catch (...)
{
diff --git a/src/Interpreters/TransactionsInfoLog.cpp b/src/Interpreters/TransactionsInfoLog.cpp
index b62cd4672d8..90f5022a444 100644
--- a/src/Interpreters/TransactionsInfoLog.cpp
+++ b/src/Interpreters/TransactionsInfoLog.cpp
@@ -101,7 +101,7 @@ try
elem.type = type;
elem.tid = tid;
elem.fillCommonFields(&context);
- system_log->add(elem);
+ system_log->add(std::move(elem));
}
catch (...)
{
diff --git a/src/Loggers/Loggers.cpp b/src/Loggers/Loggers.cpp
index 271ab39cd88..90b3457b7d8 100644
--- a/src/Loggers/Loggers.cpp
+++ b/src/Loggers/Loggers.cpp
@@ -21,6 +21,12 @@ namespace fs = std::filesystem;
namespace DB
{
class SensitiveDataMasker;
+
+namespace ErrorCodes
+{
+ extern const int BAD_ARGUMENTS;
+}
+
}
@@ -44,10 +50,6 @@ static std::string renderFileNameTemplate(time_t now, const std::string & file_p
return path.replace_filename(ss.str());
}
-#ifndef WITHOUT_TEXT_LOG
-constexpr size_t DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS = 7500;
-#endif
-
void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Logger & logger /*_root*/, const std::string & cmd_name)
{
auto current_logger = config.getString("logger", "");
@@ -271,9 +273,37 @@ void Loggers::buildLoggers(Poco::Util::AbstractConfiguration & config, Poco::Log
{
String text_log_level_str = config.getString("text_log.level", "trace");
int text_log_level = Poco::Logger::parseLevel(text_log_level_str);
- size_t flush_interval_milliseconds = config.getUInt64("text_log.flush_interval_milliseconds",
- DEFAULT_SYSTEM_LOG_FLUSH_INTERVAL_MILLISECONDS);
- split->addTextLog(DB::TextLog::getLogQueue(flush_interval_milliseconds), text_log_level);
+
+ DB::SystemLogQueueSettings log_settings;
+ log_settings.flush_interval_milliseconds = config.getUInt64("text_log.flush_interval_milliseconds",
+ DB::TextLog::getDefaultFlushIntervalMilliseconds());
+
+ log_settings.max_size_rows = config.getUInt64("text_log.max_size_rows",
+ DB::TextLog::getDefaultMaxSize());
+
+ if (log_settings.max_size_rows< 1)
+ throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "text_log.max_size_rows {} should be 1 at least",
+ log_settings.max_size_rows);
+
+ log_settings.reserved_size_rows = config.getUInt64("text_log.reserved_size_rows", DB::TextLog::getDefaultReservedSize());
+
+ if (log_settings.max_size_rows < log_settings.reserved_size_rows)
+ {
+ throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS,
+ "text_log.max_size {0} should be greater or equal to text_log.reserved_size_rows {1}",
+ log_settings.max_size_rows,
+ log_settings.reserved_size_rows);
+ }
+
+ log_settings.buffer_size_rows_flush_threshold = config.getUInt64("text_log.buffer_size_rows_flush_threshold",
+ log_settings.max_size_rows / 2);
+
+ log_settings.notify_flush_on_crash = config.getBool("text_log.flush_on_crash",
+ DB::TextLog::shouldNotifyFlushOnCrash());
+
+ log_settings.turn_off_logger = DB::TextLog::shouldTurnOffLogger();
+
+ split->addTextLog(DB::TextLog::getLogQueue(log_settings), text_log_level);
}
#endif
}
diff --git a/src/Loggers/OwnSplitChannel.cpp b/src/Loggers/OwnSplitChannel.cpp
index b5ac42d6041..cdf8402745f 100644
--- a/src/Loggers/OwnSplitChannel.cpp
+++ b/src/Loggers/OwnSplitChannel.cpp
@@ -138,7 +138,7 @@ void OwnSplitChannel::logSplit(const Poco::Message & msg)
std::shared_ptr> text_log_locked{};
text_log_locked = text_log.lock();
if (text_log_locked)
- text_log_locked->push(elem);
+ text_log_locked->push(std::move(elem));
}
#endif
}
diff --git a/src/Storages/MergeTree/IMergeTreeDataPart.cpp b/src/Storages/MergeTree/IMergeTreeDataPart.cpp
index 7050a98a4bc..6d7b6b39a40 100644
--- a/src/Storages/MergeTree/IMergeTreeDataPart.cpp
+++ b/src/Storages/MergeTree/IMergeTreeDataPart.cpp
@@ -1674,8 +1674,8 @@ std::pair IMergeTreeDataPart::canRemovePart() const
void IMergeTreeDataPart::initializePartMetadataManager()
{
#if USE_ROCKSDB
- if (use_metadata_cache)
- metadata_manager = std::make_shared(this, storage.getContext()->getMergeTreeMetadataCache());
+ if (auto metadata_cache = storage.getContext()->tryGetMergeTreeMetadataCache(); metadata_cache && use_metadata_cache)
+ metadata_manager = std::make_shared(this, metadata_cache);
else
metadata_manager = std::make_shared(this);
#else
diff --git a/src/Storages/MergeTree/MergeTreeData.cpp b/src/Storages/MergeTree/MergeTreeData.cpp
index 6179c70ca57..013a9d6923c 100644
--- a/src/Storages/MergeTree/MergeTreeData.cpp
+++ b/src/Storages/MergeTree/MergeTreeData.cpp
@@ -7775,7 +7775,7 @@ try
LOG_WARNING(log, "Profile counters are not set");
}
- part_log->add(part_log_elem);
+ part_log->add(std::move(part_log_elem));
}
catch (...)
{
diff --git a/src/Storages/StorageMerge.cpp b/src/Storages/StorageMerge.cpp
index b0ed242d14d..272f35303bd 100644
--- a/src/Storages/StorageMerge.cpp
+++ b/src/Storages/StorageMerge.cpp
@@ -795,6 +795,10 @@ StorageMerge::StorageListWithLocks StorageMerge::getSelectedTables(
bool filter_by_database_virtual_column /* = false */,
bool filter_by_table_virtual_column /* = false */) const
{
+ /// FIXME: filtering does not work with allow_experimental_analyzer due to
+ /// different column names there (it has "table_name._table" not just
+ /// "_table")
+
assert(!filter_by_database_virtual_column || !filter_by_table_virtual_column || query);
const Settings & settings = query_context->getSettingsRef();
diff --git a/src/Storages/System/StorageSystemMergeTreeMetadataCache.cpp b/src/Storages/System/StorageSystemMergeTreeMetadataCache.cpp
deleted file mode 100644
index 3bb92814a2f..00000000000
--- a/src/Storages/System/StorageSystemMergeTreeMetadataCache.cpp
+++ /dev/null
@@ -1,145 +0,0 @@
-#include
-
-#if USE_ROCKSDB
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-namespace DB
-{
-namespace ErrorCodes
-{
- extern const int BAD_ARGUMENTS;
-}
-
-NamesAndTypesList StorageSystemMergeTreeMetadataCache::getNamesAndTypes()
-{
- return {
- {"key", std::make_shared()},
- {"value", std::make_shared()},
- };
-}
-
-static bool extractKeyImpl(const IAST & elem, String & res, bool & precise)
-{
- const auto * function = elem.as();
- if (!function)
- return false;
-
- if (function->name == "and")
- {
- for (const auto & child : function->arguments->children)
- {
- bool tmp_precise = false;
- if (extractKeyImpl(*child, res, tmp_precise))
- {
- precise = tmp_precise;
- return true;
- }
- }
- return false;
- }
-
- if (function->name == "equals" || function->name == "like")
- {
- const auto & args = function->arguments->as();
- const IAST * value;
-
- if (args.children.size() != 2)
- return false;
-
- const ASTIdentifier * ident;
- if ((ident = args.children.at(0)->as()))
- value = args.children.at(1).get();
- else if ((ident = args.children.at(1)->as()))
- value = args.children.at(0).get();
- else
- return false;
-
- if (ident->name() != "key")
- return false;
-
- const auto * literal = value->as();
- if (!literal)
- return false;
-
- if (literal->value.getType() != Field::Types::String)
- return false;
-
- res = literal->value.safeGet();
- precise = function->name == "equals";
- return true;
- }
- return false;
-}
-
-
-/// Retrieve from the query a condition of the form `key= 'key'`, from conjunctions in the WHERE clause.
-static String extractKey(const ASTPtr & query, bool& precise)
-{
- const auto & select = query->as();
- if (!select.where())
- return "";
-
- String res;
- return extractKeyImpl(*select.where(), res, precise) ? res : "";
-}
-
-
-void StorageSystemMergeTreeMetadataCache::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const
-{
- bool precise = false;
- String key = extractKey(query_info.query, precise);
- if (key.empty())
- throw Exception(ErrorCodes::BAD_ARGUMENTS,
- "SELECT from system.merge_tree_metadata_cache table must contain condition like key = 'key' "
- "or key LIKE 'prefix%' in WHERE clause.");
-
- auto cache = context->getMergeTreeMetadataCache();
- if (precise)
- {
- String value;
- if (cache->get(key, value) != MergeTreeMetadataCache::Status::OK())
- return;
-
- size_t col_num = 0;
- res_columns[col_num++]->insert(key);
- res_columns[col_num++]->insert(value);
- }
- else
- {
- String target = extractFixedPrefixFromLikePattern(key, /*requires_perfect_prefix*/ false);
- if (target.empty())
- throw Exception(ErrorCodes::BAD_ARGUMENTS,
- "SELECT from system.merge_tree_metadata_cache table must contain condition like key = 'key' "
- "or key LIKE 'prefix%' in WHERE clause.");
-
- Strings keys;
- Strings values;
- keys.reserve(4096);
- values.reserve(4096);
- cache->getByPrefix(target, keys, values);
- if (keys.empty())
- return;
-
- assert(keys.size() == values.size());
- for (size_t i = 0; i < keys.size(); ++i)
- {
- size_t col_num = 0;
- res_columns[col_num++]->insert(keys[i]);
- res_columns[col_num++]->insert(values[i]);
- }
- }
-}
-
-}
-#endif
diff --git a/src/Storages/System/StorageSystemMergeTreeMetadataCache.h b/src/Storages/System/StorageSystemMergeTreeMetadataCache.h
deleted file mode 100644
index 4603583227e..00000000000
--- a/src/Storages/System/StorageSystemMergeTreeMetadataCache.h
+++ /dev/null
@@ -1,29 +0,0 @@
-#pragma once
-
-#include "config.h"
-
-#if USE_ROCKSDB
-#include
-
-
-namespace DB
-{
-class Context;
-
-
-/// Implements `merge_tree_metadata_cache` system table, which allows you to view the metadata cache data in rocksdb for testing purposes.
-class StorageSystemMergeTreeMetadataCache : public IStorageSystemOneBlock
-{
-public:
- std::string getName() const override { return "SystemMergeTreeMetadataCache"; }
-
- static NamesAndTypesList getNamesAndTypes();
-
-protected:
- using IStorageSystemOneBlock::IStorageSystemOneBlock;
-
- void fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo & query_info) const override;
-};
-
-}
-#endif
diff --git a/src/Storages/System/attachSystemTables.cpp b/src/Storages/System/attachSystemTables.cpp
index 84965b3196b..f0c67e0f787 100644
--- a/src/Storages/System/attachSystemTables.cpp
+++ b/src/Storages/System/attachSystemTables.cpp
@@ -90,7 +90,6 @@
#if USE_ROCKSDB
#include
-#include
#endif
@@ -150,7 +149,6 @@ void attachSystemTablesLocal(ContextPtr context, IDatabase & system_database)
#endif
#if USE_ROCKSDB
attach(context, system_database, "rocksdb");
- attach(context, system_database, "merge_tree_metadata_cache");
#endif
}
diff --git a/src/Storages/VirtualColumnUtils.cpp b/src/Storages/VirtualColumnUtils.cpp
index 907fc0cd22c..79be1f98a0f 100644
--- a/src/Storages/VirtualColumnUtils.cpp
+++ b/src/Storages/VirtualColumnUtils.cpp
@@ -30,6 +30,7 @@
#include
#include
#include
+#include
#include
#include
#include
@@ -63,14 +64,31 @@ bool isValidFunction(const ASTPtr & expression, const std::function & is_constant, ASTs & result)
{
const auto * function = expression->as();
- if (function && (function->name == "and" || function->name == "indexHint"))
+
+ if (function)
{
- bool ret = true;
- for (const auto & child : function->arguments->children)
- ret &= extractFunctions(child, is_constant, result);
- return ret;
+ if (function->name == "and" || function->name == "indexHint")
+ {
+ bool ret = true;
+ for (const auto & child : function->arguments->children)
+ ret &= extractFunctions(child, is_constant, result);
+ return ret;
+ }
+ else if (function->name == "or")
+ {
+ bool ret = true;
+ ASTs or_args;
+ for (const auto & child : function->arguments->children)
+ ret &= extractFunctions(child, is_constant, or_args);
+ /// We can keep the condition only if it is still an OR condition (i.e. we
+ /// have dependent conditions for columns on both sides)
+ if (or_args.size() == 2)
+ result.push_back(makeASTForLogicalOr(std::move(or_args)));
+ return ret;
+ }
}
- else if (isValidFunction(expression, is_constant))
+
+ if (isValidFunction(expression, is_constant))
{
result.push_back(expression->clone());
return true;
@@ -80,13 +98,13 @@ bool extractFunctions(const ASTPtr & expression, const std::functionas();
auto * join_node = node->as();
diff --git a/src/TableFunctions/TableFunctionS3.cpp b/src/TableFunctions/TableFunctionS3.cpp
index 0f3078b1ca6..84bd93b0009 100644
--- a/src/TableFunctions/TableFunctionS3.cpp
+++ b/src/TableFunctions/TableFunctionS3.cpp
@@ -181,7 +181,7 @@ void TableFunctionS3::parseArgumentsImpl(ASTs & args, const ContextPtr & context
configuration.keys = {configuration.url.key};
if (configuration.format == "auto")
- configuration.format = FormatFactory::instance().getFormatFromFileName(configuration.url.uri.getPath(), true);
+ configuration.format = FormatFactory::instance().getFormatFromFileName(Poco::URI(configuration.url.uri.getPath()).getPath(), true);
}
void TableFunctionS3::parseArguments(const ASTPtr & ast_function, ContextPtr context)
diff --git a/tests/config/users.d/session_log_test.xml b/tests/config/users.d/session_log_test.xml
index daddaa6e4b9..cc2c2c5fcde 100644
--- a/tests/config/users.d/session_log_test.xml
+++ b/tests/config/users.d/session_log_test.xml
@@ -17,7 +17,7 @@
-
+
::1
127.0.0.1
diff --git a/tests/integration/parallel_skip.json b/tests/integration/parallel_skip.json
index 1075fbaa0f8..6e1604f4eb5 100644
--- a/tests/integration/parallel_skip.json
+++ b/tests/integration/parallel_skip.json
@@ -74,5 +74,12 @@
"test_http_failover/test.py::test_url_destination_host_with_multiple_addrs",
"test_http_failover/test.py::test_url_invalid_hostname",
- "test_http_failover/test.py::test_url_ip_change"
+ "test_http_failover/test.py::test_url_ip_change",
+
+ "test_system_logs/test_system_logs.py::test_max_size_0",
+ "test_system_logs/test_system_logs.py::test_reserved_size_greater_max_size",
+ "test_system_flush_logs/test.py::test_log_buffer_size_rows_flush_threshold",
+ "test_system_flush_logs/test.py::test_log_max_size",
+ "test_crash_log/test.py::test_pkill_query_log",
+ "test_crash_log/test.py::test_pkill"
]
diff --git a/tests/integration/test_backup_restore_on_cluster/test_disallow_concurrency.py b/tests/integration/test_backup_restore_on_cluster/test_disallow_concurrency.py
index a863a6e2047..5c3f06a9d9d 100644
--- a/tests/integration/test_backup_restore_on_cluster/test_disallow_concurrency.py
+++ b/tests/integration/test_backup_restore_on_cluster/test_disallow_concurrency.py
@@ -133,21 +133,33 @@ def test_concurrent_backups_on_same_node():
)
assert status in ["CREATING_BACKUP", "BACKUP_CREATED"]
- try:
- error = nodes[0].query_and_get_error(
- f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
- )
- except Exception as e:
+ result, error = nodes[0].query_and_get_answer_with_error(
+ f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
+ )
+
+ if not error:
status = (
nodes[0]
.query(f"SELECT status FROM system.backups WHERE id == '{id}'")
.rstrip("\n")
)
# It is possible that the second backup was picked up first, and then the async backup
- if status == "CREATING_BACKUP" or status == "BACKUP_FAILED":
+ if status == "BACKUP_FAILED":
+ return
+ elif status == "CREATING_BACKUP":
+ assert_eq_with_retry(
+ nodes[0],
+ f"SELECT status FROM system.backups WHERE id = '{id}'",
+ "BACKUP_FAILED",
+ sleep_time=2,
+ retry_count=50,
+ )
return
else:
- raise e
+ raise Exception(
+ "Concurrent backups both passed, when one is expected to fail"
+ )
+
expected_errors = [
"Concurrent backups not supported",
f"Backup {backup_name} already exists",
@@ -191,20 +203,33 @@ def test_concurrent_backups_on_different_nodes():
)
assert status in ["CREATING_BACKUP", "BACKUP_CREATED"]
- try:
- error = nodes[0].query_and_get_error(
- f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
- )
- except Exception as e:
+ result, error = nodes[0].query_and_get_answer_with_error(
+ f"BACKUP TABLE tbl ON CLUSTER 'cluster' TO {backup_name}"
+ )
+
+ if not error:
status = (
nodes[1]
.query(f"SELECT status FROM system.backups WHERE id == '{id}'")
.rstrip("\n")
)
- if status == "CREATING_BACKUP" or status == "BACKUP_FAILED":
+ # It is possible that the second backup was picked up first, and then the async backup
+ if status == "BACKUP_FAILED":
+ return
+ elif status == "CREATING_BACKUP":
+ assert_eq_with_retry(
+ nodes[1],
+ f"SELECT status FROM system.backups WHERE id = '{id}'",
+ "BACKUP_FAILED",
+ sleep_time=2,
+ retry_count=50,
+ )
return
else:
- raise e
+ raise Exception(
+ "Concurrent backups both passed, when one is expected to fail"
+ )
+
expected_errors = [
"Concurrent backups not supported",
f"Backup {backup_name} already exists",
@@ -247,20 +272,33 @@ def test_concurrent_restores_on_same_node():
)
assert status in ["RESTORING", "RESTORED"]
- try:
- error = nodes[0].query_and_get_error(
- f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
- )
- except Exception as e:
+ result, error = nodes[0].query_and_get_answer_with_error(
+ f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
+ )
+
+ if not error:
status = (
nodes[0]
- .query(f"SELECT status FROM system.backups WHERE id == '{id}'")
+ .query(f"SELECT status FROM system.backups WHERE id == '{restore_id}'")
.rstrip("\n")
)
- if status == "RESTORING" or status == "RESTORE_FAILED":
+ # It is possible that the second restore was picked up first, and then the async restore
+ if status == "RESTORE_FAILED":
+ return
+ elif status == "RESTORING":
+ assert_eq_with_retry(
+ nodes[0],
+ f"SELECT status FROM system.backups WHERE id == '{restore_id}'",
+ "RESTORE_FAILED",
+ sleep_time=2,
+ retry_count=50,
+ )
return
else:
- raise e
+ raise Exception(
+ "Concurrent restores both passed, when one is expected to fail"
+ )
+
expected_errors = [
"Concurrent restores not supported",
"Cannot restore the table default.tbl because it already contains some data",
@@ -303,20 +341,33 @@ def test_concurrent_restores_on_different_node():
)
assert status in ["RESTORING", "RESTORED"]
- try:
- error = nodes[1].query_and_get_error(
- f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
- )
- except Exception as e:
+ result, error = nodes[1].query_and_get_answer_with_error(
+ f"RESTORE TABLE tbl ON CLUSTER 'cluster' FROM {backup_name}"
+ )
+
+ if not error:
status = (
nodes[0]
- .query(f"SELECT status FROM system.backups WHERE id == '{id}'")
+ .query(f"SELECT status FROM system.backups WHERE id == '{restore_id}'")
.rstrip("\n")
)
- if status == "RESTORING" or status == "RESTORE_FAILED":
+ # It is possible that the second restore was picked up first, and then the async restore
+ if status == "RESTORE_FAILED":
+ return
+ elif status == "RESTORING":
+ assert_eq_with_retry(
+ nodes[0],
+ f"SELECT status FROM system.backups WHERE id == '{restore_id}'",
+ "RESTORE_FAILED",
+ sleep_time=2,
+ retry_count=50,
+ )
return
else:
- raise e
+ raise Exception(
+ "Concurrent restores both passed, when one is expected to fail"
+ )
+
expected_errors = [
"Concurrent restores not supported",
"Cannot restore the table default.tbl because it already contains some data",
diff --git a/tests/integration/test_config_decryption/configs/config.xml b/tests/integration/test_config_decryption/configs/config.xml
index 5c274128e39..4b0d3a77659 100644
--- a/tests/integration/test_config_decryption/configs/config.xml
+++ b/tests/integration/test_config_decryption/configs/config.xml
@@ -1,4 +1,5 @@
+
00112233445566778899aabbccddeeff
@@ -7,6 +8,8 @@
00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff
- 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
- 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14
+
+ 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
+ 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14
+
diff --git a/tests/integration/test_config_decryption/configs/config.yaml b/tests/integration/test_config_decryption/configs/config.yaml
index ab4391be3c5..1b20b65b652 100644
--- a/tests/integration/test_config_decryption/configs/config.yaml
+++ b/tests/integration/test_config_decryption/configs/config.yaml
@@ -3,9 +3,11 @@ encryption_codecs:
key_hex: 00112233445566778899aabbccddeeff
aes_256_gcm_siv:
key_hex: 00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff
+
max_table_size_to_drop:
'#text': 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
- '@encryption_codec': AES_128_GCM_SIV
+ '@encrypted_by': AES_128_GCM_SIV
+
max_partition_size_to_drop:
- '@encryption_codec': AES_256_GCM_SIV
+ '@encrypted_by': AES_256_GCM_SIV
'#text': 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14
diff --git a/tests/integration/test_config_decryption/configs/config_invalid_chars.xml b/tests/integration/test_config_decryption/configs/config_invalid_chars.xml
index 49bf51b5bad..53345b897dc 100644
--- a/tests/integration/test_config_decryption/configs/config_invalid_chars.xml
+++ b/tests/integration/test_config_decryption/configs/config_invalid_chars.xml
@@ -1,4 +1,5 @@
+
00112233445566778899aabbccddeeff
@@ -7,6 +8,9 @@
00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff
- --96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
- 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14
+
+
+ --96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
+ 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14
+
diff --git a/tests/integration/test_config_decryption/configs/config_no_encryption_key.xml b/tests/integration/test_config_decryption/configs/config_no_encryption_key.xml
index 5f7769f7403..830c75f7378 100644
--- a/tests/integration/test_config_decryption/configs/config_no_encryption_key.xml
+++ b/tests/integration/test_config_decryption/configs/config_no_encryption_key.xml
@@ -1,3 +1,7 @@
- 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
+
+
+
+ 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
+
diff --git a/tests/integration/test_config_decryption/configs/config_subnodes.xml b/tests/integration/test_config_decryption/configs/config_subnodes.xml
index b0e519ff546..8213270f747 100644
--- a/tests/integration/test_config_decryption/configs/config_subnodes.xml
+++ b/tests/integration/test_config_decryption/configs/config_subnodes.xml
@@ -1,10 +1,14 @@
+
00112233445566778899aabbccddeeff
-
+
+
+
96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
+
diff --git a/tests/integration/test_config_decryption/configs/config_wrong_method.xml b/tests/integration/test_config_decryption/configs/config_wrong_method.xml
index b452ce6374c..b96c13d5105 100644
--- a/tests/integration/test_config_decryption/configs/config_wrong_method.xml
+++ b/tests/integration/test_config_decryption/configs/config_wrong_method.xml
@@ -1,4 +1,5 @@
+
00112233445566778899aabbccddeeff
@@ -7,6 +8,8 @@
00112233445566778899aabbccddeeff00112233445566778899aabbccddeeff
- 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
- 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14
+
+ 96260000000B0000000000E8FE3C087CED2205A5071078B29FD5C3B97F824911DED3217E980C
+ 97260000000B0000000000BFFF70C4DA718754C1DA0E2F25FF9246D4783F7FFEC4089EC1CC14
+
diff --git a/tests/integration/test_config_decryption/test_wrong_settings.py b/tests/integration/test_config_decryption/test_wrong_settings.py
index b148f9a051a..c6987d12324 100644
--- a/tests/integration/test_config_decryption/test_wrong_settings.py
+++ b/tests/integration/test_config_decryption/test_wrong_settings.py
@@ -15,7 +15,7 @@ def start_clickhouse(config, err_msg):
def test_wrong_method():
start_clickhouse(
- "configs/config_wrong_method.xml", "Wrong encryption method. Got WRONG"
+ "configs/config_wrong_method.xml", "Unknown encryption method. Got WRONG"
)
diff --git a/tests/integration/test_crash_log/configs/crash_log.xml b/tests/integration/test_crash_log/configs/crash_log.xml
new file mode 100644
index 00000000000..f4fbfaba08e
--- /dev/null
+++ b/tests/integration/test_crash_log/configs/crash_log.xml
@@ -0,0 +1,16 @@
+
+
+ 1000000
+ 1
+ 1
+ 1
+ true
+
+
+ 1000000
+ 100
+ 100
+ 100
+ true
+
+
diff --git a/tests/integration/test_crash_log/test.py b/tests/integration/test_crash_log/test.py
index 9f6eca794b1..1b7e7f38242 100644
--- a/tests/integration/test_crash_log/test.py
+++ b/tests/integration/test_crash_log/test.py
@@ -12,7 +12,9 @@ SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
def started_node():
cluster = helpers.cluster.ClickHouseCluster(__file__)
try:
- node = cluster.add_instance("node", stay_alive=True)
+ node = cluster.add_instance(
+ "node", main_configs=["configs/crash_log.xml"], stay_alive=True
+ )
cluster.start()
yield node
@@ -55,3 +57,18 @@ def test_pkill(started_node):
started_node.query("SELECT COUNT(*) FROM system.crash_log")
== f"{crashes_count}\n"
)
+
+
+def test_pkill_query_log(started_node):
+ for signal in ["SEGV", "4"]:
+ # force create query_log if it was not created
+ started_node.query("SYSTEM FLUSH LOGS")
+ started_node.query("TRUNCATE TABLE IF EXISTS system.query_log")
+ started_node.query("SELECT COUNT(*) FROM system.query_log")
+ # logs don't flush
+ assert started_node.query("SELECT COUNT(*) FROM system.query_log") == f"{0}\n"
+
+ send_signal(started_node, signal)
+ wait_for_clickhouse_stop(started_node)
+ started_node.restart_clickhouse()
+ assert started_node.query("SELECT COUNT(*) FROM system.query_log") >= f"3\n"
diff --git a/tests/integration/test_multiple_disks/configs/logs_config.xml b/tests/integration/test_multiple_disks/configs/logs_config.xml
index b0643c8bdad..2ee8bb55f38 100644
--- a/tests/integration/test_multiple_disks/configs/logs_config.xml
+++ b/tests/integration/test_multiple_disks/configs/logs_config.xml
@@ -14,8 +14,4 @@
500
-
- 268435456
- true
-
diff --git a/tests/integration/test_multiple_disks/test.py b/tests/integration/test_multiple_disks/test.py
index 0724791c940..17621d09422 100644
--- a/tests/integration/test_multiple_disks/test.py
+++ b/tests/integration/test_multiple_disks/test.py
@@ -889,15 +889,12 @@ def get_paths_for_partition_from_part_log(node, table, partition_id):
@pytest.mark.parametrize(
- "name,engine,use_metadata_cache",
+ "name,engine",
[
- pytest.param("altering_mt", "MergeTree()", "false", id="mt"),
- pytest.param("altering_mt", "MergeTree()", "true", id="mt_use_metadata_cache"),
- # ("altering_replicated_mt","ReplicatedMergeTree('/clickhouse/altering_replicated_mt', '1')",),
- # SYSTEM STOP MERGES doesn't disable merges assignments
+ pytest.param("altering_mt", "MergeTree()", id="mt"),
],
)
-def test_alter_move(start_cluster, name, engine, use_metadata_cache):
+def test_alter_move(start_cluster, name, engine):
try:
node1.query(
"""
@@ -907,9 +904,9 @@ def test_alter_move(start_cluster, name, engine, use_metadata_cache):
) ENGINE = {engine}
ORDER BY tuple()
PARTITION BY toYYYYMM(EventDate)
- SETTINGS storage_policy='jbods_with_external', use_metadata_cache={use_metadata_cache}
+ SETTINGS storage_policy='jbods_with_external'
""".format(
- name=name, engine=engine, use_metadata_cache=use_metadata_cache
+ name=name, engine=engine
)
)
diff --git a/tests/integration/test_storage_kafka/test.py b/tests/integration/test_storage_kafka/test.py
index d0686c7c36f..f196837751b 100644
--- a/tests/integration/test_storage_kafka/test.py
+++ b/tests/integration/test_storage_kafka/test.py
@@ -843,24 +843,7 @@ def test_kafka_formats(kafka_cluster):
extra_settings=format_opts.get("extra_settings") or "",
)
)
-
- instance.wait_for_log_line(
- "kafka.*Committed offset [0-9]+.*format_tests_",
- repetitions=len(all_formats.keys()),
- look_behind_lines=12000,
- )
-
- for format_name, format_opts in list(all_formats.items()):
- logging.debug(("Checking {}".format(format_name)))
- topic_name = f"format_tests_{format_name}"
- # shift offsets by 1 if format supports empty value
- offsets = (
- [1, 2, 3] if format_opts.get("supports_empty_value", False) else [0, 1, 2]
- )
- result = instance.query(
- "SELECT * FROM test.kafka_{format_name}_mv;".format(format_name=format_name)
- )
- expected = """\
+ raw_expected = """\
0 0 AM 0.5 1 {topic_name} 0 {offset_0}
1 0 AM 0.5 1 {topic_name} 0 {offset_1}
2 0 AM 0.5 1 {topic_name} 0 {offset_1}
@@ -878,7 +861,27 @@ def test_kafka_formats(kafka_cluster):
14 0 AM 0.5 1 {topic_name} 0 {offset_1}
15 0 AM 0.5 1 {topic_name} 0 {offset_1}
0 0 AM 0.5 1 {topic_name} 0 {offset_2}
-""".format(
+"""
+
+ expected_rows_count = raw_expected.count("\n")
+ instance.query_with_retry(
+ f"SELECT * FROM test.kafka_{list(all_formats.keys())[-1]}_mv;",
+ retry_count=30,
+ sleep_time=1,
+ check_callback=lambda res: res.count("\n") == expected_rows_count,
+ )
+
+ for format_name, format_opts in list(all_formats.items()):
+ logging.debug(("Checking {}".format(format_name)))
+ topic_name = f"format_tests_{format_name}"
+ # shift offsets by 1 if format supports empty value
+ offsets = (
+ [1, 2, 3] if format_opts.get("supports_empty_value", False) else [0, 1, 2]
+ )
+ result = instance.query(
+ "SELECT * FROM test.kafka_{format_name}_mv;".format(format_name=format_name)
+ )
+ expected = raw_expected.format(
topic_name=topic_name,
offset_0=offsets[0],
offset_1=offsets[1],
@@ -3755,19 +3758,7 @@ def test_kafka_formats_with_broken_message(kafka_cluster):
)
)
- for format_name, format_opts in list(all_formats.items()):
- logging.debug("Checking {format_name}")
- topic_name = f"{topic_name_prefix}{format_name}"
- # shift offsets by 1 if format supports empty value
- offsets = (
- [1, 2, 3] if format_opts.get("supports_empty_value", False) else [0, 1, 2]
- )
- result = instance.query(
- "SELECT * FROM test.kafka_data_{format_name}_mv;".format(
- format_name=format_name
- )
- )
- expected = """\
+ raw_expected = """\
0 0 AM 0.5 1 {topic_name} 0 {offset_0}
1 0 AM 0.5 1 {topic_name} 0 {offset_1}
2 0 AM 0.5 1 {topic_name} 0 {offset_1}
@@ -3785,7 +3776,29 @@ def test_kafka_formats_with_broken_message(kafka_cluster):
14 0 AM 0.5 1 {topic_name} 0 {offset_1}
15 0 AM 0.5 1 {topic_name} 0 {offset_1}
0 0 AM 0.5 1 {topic_name} 0 {offset_2}
-""".format(
+"""
+
+ expected_rows_count = raw_expected.count("\n")
+ instance.query_with_retry(
+ f"SELECT * FROM test.kafka_data_{list(all_formats.keys())[-1]}_mv;",
+ retry_count=30,
+ sleep_time=1,
+ check_callback=lambda res: res.count("\n") == expected_rows_count,
+ )
+
+ for format_name, format_opts in list(all_formats.items()):
+ logging.debug(f"Checking {format_name}")
+ topic_name = f"{topic_name_prefix}{format_name}"
+ # shift offsets by 1 if format supports empty value
+ offsets = (
+ [1, 2, 3] if format_opts.get("supports_empty_value", False) else [0, 1, 2]
+ )
+ result = instance.query(
+ "SELECT * FROM test.kafka_data_{format_name}_mv;".format(
+ format_name=format_name
+ )
+ )
+ expected = raw_expected.format(
topic_name=topic_name,
offset_0=offsets[0],
offset_1=offsets[1],
diff --git a/tests/integration/test_system_flush_logs/test.py b/tests/integration/test_system_flush_logs/test.py
index d9ab76d2d61..bf225ac30f8 100644
--- a/tests/integration/test_system_flush_logs/test.py
+++ b/tests/integration/test_system_flush_logs/test.py
@@ -2,11 +2,16 @@
# pylint: disable=unused-argument
# pylint: disable=redefined-outer-name
+import time
import pytest
from helpers.cluster import ClickHouseCluster
+from helpers.test_tools import assert_eq_with_retry
cluster = ClickHouseCluster(__file__)
-node = cluster.add_instance("node_default")
+node = cluster.add_instance(
+ "node_default",
+ stay_alive=True,
+)
system_logs = [
# disabled by default
@@ -64,3 +69,95 @@ def test_system_suspend():
node.query("SYSTEM SUSPEND FOR 1 SECOND;")
node.query("INSERT INTO t VALUES (now());")
assert "1\n" == node.query("SELECT max(x) - min(x) >= 1 FROM t;")
+
+
+def test_log_max_size(start_cluster):
+ node.exec_in_container(
+ [
+ "bash",
+ "-c",
+ f"""echo "
+
+
+ 1000000
+ 10
+ 10
+
+
+ " > /etc/clickhouse-server/config.d/yyy-override-query_log.xml
+ """,
+ ]
+ )
+ node.restart_clickhouse()
+ for i in range(10):
+ node.query(f"select {i}")
+
+ assert node.query("select count() >= 10 from system.query_log") == "1\n"
+ node.exec_in_container(
+ ["rm", f"/etc/clickhouse-server/config.d/yyy-override-query_log.xml"]
+ )
+
+
+def test_log_buffer_size_rows_flush_threshold(start_cluster):
+ node.exec_in_container(
+ [
+ "bash",
+ "-c",
+ f"""echo "
+
+
+ 1000000
+ 10
+ 10000
+
+
+ " > /etc/clickhouse-server/config.d/yyy-override-query_log.xml
+ """,
+ ]
+ )
+ node.restart_clickhouse()
+ node.query(f"TRUNCATE TABLE IF EXISTS system.query_log")
+ for i in range(10):
+ node.query(f"select {i}")
+
+ assert_eq_with_retry(
+ node,
+ f"select count() >= 11 from system.query_log",
+ "1",
+ sleep_time=0.2,
+ retry_count=100,
+ )
+
+ node.query(f"TRUNCATE TABLE IF EXISTS system.query_log")
+ node.exec_in_container(
+ [
+ "bash",
+ "-c",
+ f"""echo "
+
+
+ 1000000
+ 10000
+ 10000
+
+
+ " > /etc/clickhouse-server/config.d/yyy-override-query_log.xml
+ """,
+ ]
+ )
+ node.restart_clickhouse()
+ for i in range(10):
+ node.query(f"select {i}")
+
+ # Logs aren't flushed
+ assert_eq_with_retry(
+ node,
+ f"select count() < 10 from system.query_log",
+ "1",
+ sleep_time=0.2,
+ retry_count=100,
+ )
+
+ node.exec_in_container(
+ ["rm", f"/etc/clickhouse-server/config.d/yyy-override-query_log.xml"]
+ )
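
The two tests above exercise the new queue limits from opposite directions: rows are flushed automatically once the in-memory buffer reaches `buffer_size_rows_flush_threshold`, while `max_size_rows` is the hard cap of the buffer itself. A standalone sketch of that behaviour follows; the toy queue is an assumption made for illustration (in particular, the real server only notifies a background flush thread instead of flushing inline).

``` cpp
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Toy in-memory queue mimicking the two limits tested above:
//  - reaching buffer_size_rows_flush_threshold triggers a flush,
//  - reaching max_size_rows means further rows are dropped until a flush happens.
struct ToyLogQueue
{
    uint64_t buffer_size_rows_flush_threshold;
    uint64_t max_size_rows;
    std::vector<std::string> buffer;
    uint64_t dropped = 0;

    void push(std::string row)
    {
        if (buffer.size() >= max_size_rows)
        {
            ++dropped; // hard cap reached: the row is lost
            return;
        }
        buffer.push_back(std::move(row));
        if (buffer.size() >= buffer_size_rows_flush_threshold)
            flush(); // the real server only *notifies* the flush thread here
    }

    void flush()
    {
        std::cout << "flushing " << buffer.size() << " rows\n";
        buffer.clear();
    }
};

int main()
{
    ToyLogQueue q{/*threshold*/ 10, /*max*/ 1000000};
    for (int i = 0; i < 25; ++i)
        q.push("row " + std::to_string(i)); // flushes after 10 and 20 rows
    std::cout << "left in buffer: " << q.buffer.size() << ", dropped: " << q.dropped << '\n';
}
```
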
diff --git a/tests/integration/test_system_logs/test_system_logs.py b/tests/integration/test_system_logs/test_system_logs.py
index aac5ee53819..72249cd64ee 100644
--- a/tests/integration/test_system_logs/test_system_logs.py
+++ b/tests/integration/test_system_logs/test_system_logs.py
@@ -88,3 +88,53 @@ def test_system_logs_settings_expr(start_cluster):
assert expected in node3.query(
"SELECT engine_full FROM system.tables WHERE database='system' and name='query_log'"
)
+
+
+def test_max_size_0(start_cluster):
+ node1.exec_in_container(
+ [
+ "bash",
+ "-c",
+ f"""echo "
+
+
+ 0
+ 0
+
+
+ " > /etc/clickhouse-server/config.d/yyy-override-query_log.xml
+ """,
+ ]
+ )
+ with pytest.raises(Exception):
+ node1.restart_clickhouse()
+
+ node1.exec_in_container(
+ ["rm", f"/etc/clickhouse-server/config.d/yyy-override-query_log.xml"]
+ )
+ node1.restart_clickhouse()
+
+
+def test_reserved_size_greater_max_size(start_cluster):
+ node1.exec_in_container(
+ [
+ "bash",
+ "-c",
+ f"""echo "
+
+
+ 10
+ 11
+
+
+ " > /etc/clickhouse-server/config.d/yyy-override-query_log.xml
+ """,
+ ]
+ )
+ with pytest.raises(Exception):
+ node1.restart_clickhouse()
+
+ node1.exec_in_container(
+ ["rm", f"/etc/clickhouse-server/config.d/yyy-override-query_log.xml"]
+ )
+ node1.restart_clickhouse()
diff --git a/tests/queries/0_stateless/01161_all_system_tables.sh b/tests/queries/0_stateless/01161_all_system_tables.sh
index 6a72027478e..47316a6a805 100755
--- a/tests/queries/0_stateless/01161_all_system_tables.sh
+++ b/tests/queries/0_stateless/01161_all_system_tables.sh
@@ -18,7 +18,7 @@ function run_selects()
{
thread_num=$1
readarray -t tables_arr < <(${CLICKHOUSE_CLIENT} -q "SELECT database || '.' || name FROM system.tables
- WHERE database in ('system', 'information_schema', 'INFORMATION_SCHEMA') and name!='zookeeper' and name!='merge_tree_metadata_cache' and name!='models'
+ WHERE database in ('system', 'information_schema', 'INFORMATION_SCHEMA') and name != 'zookeeper' and name != 'models'
AND sipHash64(name || toString($RAND)) % $THREADS = $thread_num")
for t in "${tables_arr[@]}"
diff --git a/tests/queries/0_stateless/01233_check_table_with_metadata_cache.reference b/tests/queries/0_stateless/01233_check_table_with_metadata_cache.reference
deleted file mode 100644
index b773fc49ec3..00000000000
--- a/tests/queries/0_stateless/01233_check_table_with_metadata_cache.reference
+++ /dev/null
@@ -1,672 +0,0 @@
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:false; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:false; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:true; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:true; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:false; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:false; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:true; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Ordinary; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:true; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:false; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:false; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:true; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:false; use projection:true; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:false; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:false; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:true; use_compact_data_part:false
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-database engine:Atomic; table engine:ReplicatedMergeTree; use metadata cache:true; use projection:true; use_compact_data_part:true
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
-TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;
-CHECK TABLE test_metadata_cache.check_part_metadata_cache;
-1
diff --git a/tests/queries/0_stateless/01233_check_table_with_metadata_cache.sh b/tests/queries/0_stateless/01233_check_table_with_metadata_cache.sh
deleted file mode 100755
index 67f11e58a68..00000000000
--- a/tests/queries/0_stateless/01233_check_table_with_metadata_cache.sh
+++ /dev/null
@@ -1,104 +0,0 @@
-#!/usr/bin/env bash
-# Tags: no-fasttest, long, no-s3-storage, no-random-settings, no-parallel
-# Tag no-fasttest: setting use_metadata_cache=true is not supported in fasttest, because clickhouse binary in fasttest is build without RocksDB.
-# Tag no-random-settings: random settings significantly slow down test with debug build (alternative: add no-debug tag)
-# To suppress Warning messages from CHECK TABLE
-CLICKHOUSE_CLIENT_SERVER_LOGS_LEVEL=error
-CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
-# shellcheck source=../shell_config.sh
-. "$CURDIR"/../shell_config.sh
-
-set -e
-
-table_engines=(ReplicatedMergeTree)
-database_engines=(Ordinary Atomic)
-use_metadata_caches=(false true)
-use_projections=(false true)
-use_compact_data_parts=(false true)
-
-for table_engine in "${table_engines[@]}"; do
- for database_engine in "${database_engines[@]}"; do
- for use_metadata_cache in "${use_metadata_caches[@]}"; do
- for use_projection in "${use_projections[@]}"; do
- for use_compact_data_part in "${use_compact_data_parts[@]}"; do
- echo "database engine:${database_engine}; table engine:${table_engine}; use metadata cache:${use_metadata_cache}; use projection:${use_projection}; use_compact_data_part:${use_compact_data_part}"
-
- ${CLICKHOUSE_CLIENT} --query "DROP TABLE IF EXISTS test_metadata_cache.check_part_metadata_cache SYNC;"
- ${CLICKHOUSE_CLIENT} --query "DROP DATABASE IF EXISTS test_metadata_cache;"
- ${CLICKHOUSE_CLIENT} --allow_deprecated_database_ordinary=1 --query "CREATE DATABASE test_metadata_cache ENGINE = ${database_engine};"
-
- table_engine_clause=""
- if [[ "$table_engine" == "ReplicatedMergeTree" ]]; then
- table_engine_clause="ENGINE ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/test_metadata_cache/check_part_metadata_cache', 'r1')"
- elif [[ "$table_engine" == "MergeTree" ]]; then
- table_engine_clause="ENGINE MergeTree()"
- fi
-
- projection_clause=""
- if [[ "$use_projection" == "true" ]]; then
- projection_clause=", projection p1 (select p, sum(k), sum(v1), sum(v2) group by p)"
- fi
-
- compact_data_part_clause=", min_bytes_for_wide_part = 10485760"
- if [[ $use_compact_data_part == "true" ]]; then
- compact_data_part_clause=", min_bytes_for_wide_part = 0"
- fi
- ${CLICKHOUSE_CLIENT} --query "CREATE TABLE test_metadata_cache.check_part_metadata_cache (p Date, k UInt64, v1 UInt64, v2 Int64${projection_clause}) $table_engine_clause PARTITION BY toYYYYMM(p) ORDER BY k settings use_metadata_cache = ${use_metadata_cache} ${compact_data_part_clause}"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Insert first batch of data.
- ${CLICKHOUSE_CLIENT} --echo --query "INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 1, 1000, 2000), ('2018-05-16', 2, 3000, 4000), ('2018-05-17', 3, 5000, 6000), ('2018-05-18', 4, 7000, 8000);"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Insert second batch of data.
- ${CLICKHOUSE_CLIENT} --echo --query "INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-05-15', 5, 1000, 2000), ('2018-05-16', 6, 3000, 4000), ('2018-05-17', 7, 5000, 6000), ('2018-05-18', 8, 7000, 8000);"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # First update.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache update v1 = 2001 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Second update.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache update v2 = 4002 where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # First delete.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 1 settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Second delete.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache delete where k = 8 settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Insert third batch of data.
- ${CLICKHOUSE_CLIENT} --echo --query "INSERT INTO test_metadata_cache.check_part_metadata_cache (p, k, v1, v2) VALUES ('2018-06-15', 5, 1000, 2000), ('2018-06-16', 6, 3000, 4000), ('2018-06-17', 7, 5000, 6000), ('2018-06-18', 8, 7000, 8000);"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Drop one partition.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache drop partition 201805 settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Add column.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache add column v3 UInt64 settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Delete column.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache drop column v3 settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Add TTL.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 10 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Modify TTL.
- ${CLICKHOUSE_CLIENT} --echo --query "ALTER TABLE test_metadata_cache.check_part_metadata_cache modify TTL p + INTERVAL 15 YEAR settings mutations_sync = 1, replication_alter_partitions_sync = 1;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
-
- # Truncate table.
- ${CLICKHOUSE_CLIENT} --echo --query "TRUNCATE TABLE test_metadata_cache.check_part_metadata_cache;"
- ${CLICKHOUSE_CLIENT} --echo --query "CHECK TABLE test_metadata_cache.check_part_metadata_cache;"
- done
- done
- done
- done
-done
diff --git a/tests/queries/bugs/01747_system_session_log_long.reference b/tests/queries/0_stateless/01747_system_session_log_long.reference
similarity index 73%
rename from tests/queries/bugs/01747_system_session_log_long.reference
rename to tests/queries/0_stateless/01747_system_session_log_long.reference
index 9ecf7e05421..e4f0b6f6076 100644
--- a/tests/queries/bugs/01747_system_session_log_long.reference
+++ b/tests/queries/0_stateless/01747_system_session_log_long.reference
@@ -4,215 +4,291 @@ TCP endpoint
TCP 'wrong password' case is skipped for no_password.
HTTP endpoint
HTTP 'wrong password' case is skipped for no_password.
-MySQL endpoint
+HTTP endpoint with named session
+HTTP 'wrong password' case is skipped for no_password.
+MySQL endpoint no_password
+Wrong username
+Wrong password
MySQL 'wrong password' case is skipped for no_password.
+PostrgreSQL endpoint
+PostgreSQL 'wrong password' case is skipped for no_password.
# no_password - No profiles no roles
TCP endpoint
TCP 'wrong password' case is skipped for no_password.
HTTP endpoint
HTTP 'wrong password' case is skipped for no_password.
-MySQL endpoint
+HTTP endpoint with named session
+HTTP 'wrong password' case is skipped for no_password.
+MySQL endpoint no_password
+Wrong username
+Wrong password
MySQL 'wrong password' case is skipped for no_password.
+PostrgreSQL endpoint
+PostgreSQL 'wrong password' case is skipped for no_password.
# no_password - Two profiles, no roles
TCP endpoint
TCP 'wrong password' case is skipped for no_password.
HTTP endpoint
HTTP 'wrong password' case is skipped for no_password.
-MySQL endpoint
+HTTP endpoint with named session
+HTTP 'wrong password' case is skipped for no_password.
+MySQL endpoint no_password
+Wrong username
+Wrong password
MySQL 'wrong password' case is skipped for no_password.
+PostrgreSQL endpoint
+PostgreSQL 'wrong password' case is skipped for no_password.
# no_password - Two profiles and two simple roles
TCP endpoint
TCP 'wrong password' case is skipped for no_password.
HTTP endpoint
HTTP 'wrong password' case is skipped for no_password.
-MySQL endpoint
+HTTP endpoint with named session
+HTTP 'wrong password' case is skipped for no_password.
+MySQL endpoint no_password
+Wrong username
+Wrong password
MySQL 'wrong password' case is skipped for no_password.
+PostrgreSQL endpoint
+PostgreSQL 'wrong password' case is skipped for no_password.
# plaintext_password - No profiles no roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint plaintext_password
+Wrong username
+Wrong password
+PostrgreSQL endpoint
# plaintext_password - Two profiles, no roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint plaintext_password
+Wrong username
+Wrong password
+PostrgreSQL endpoint
# plaintext_password - Two profiles and two simple roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint plaintext_password
+Wrong username
+Wrong password
+PostrgreSQL endpoint
# sha256_password - No profiles no roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint sha256_password
MySQL 'successful login' case is skipped for sha256_password.
+Wrong username
+Wrong password
+PostrgreSQL endpoint
+PostgreSQL tests are skipped for sha256_password
# sha256_password - Two profiles, no roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint sha256_password
MySQL 'successful login' case is skipped for sha256_password.
+Wrong username
+Wrong password
+PostrgreSQL endpoint
+PostgreSQL tests are skipped for sha256_password
# sha256_password - Two profiles and two simple roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint sha256_password
MySQL 'successful login' case is skipped for sha256_password.
+Wrong username
+Wrong password
+PostrgreSQL endpoint
+PostgreSQL tests are skipped for sha256_password
# double_sha1_password - No profiles no roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint double_sha1_password
+Wrong username
+Wrong password
+PostrgreSQL endpoint
+PostgreSQL tests are skipped for double_sha1_password
# double_sha1_password - Two profiles, no roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint double_sha1_password
+Wrong username
+Wrong password
+PostrgreSQL endpoint
+PostgreSQL tests are skipped for double_sha1_password
# double_sha1_password - Two profiles and two simple roles
TCP endpoint
HTTP endpoint
-MySQL endpoint
+HTTP endpoint with named session
+MySQL endpoint double_sha1_password
+Wrong username
+Wrong password
+PostrgreSQL endpoint
+PostgreSQL tests are skipped for double_sha1_password
${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles TCP LoginFailure 1
${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP LoginFailure 1
-${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP LoginFailure many
+${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles MySQL LoginFailure many
${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles MySQL LoginSuccess 1
${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles MySQL Logout 1
${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles TCP LoginFailure 1
${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP LoginFailure 1
-${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP LoginFailure many
+${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles MySQL LoginFailure many
${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles MySQL LoginSuccess 1
${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles MySQL Logout 1
${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles TCP LoginFailure 1
${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles TCP LoginSuccess 1
${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles TCP Logout 1
-${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP LoginFailure 1
-${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP Logout 1
+${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP LoginFailure many
+${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP LoginSuccess many
+${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP Logout many
${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles MySQL LoginFailure many
${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles MySQL LoginSuccess 1
${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles MySQL Logout 1
${BASE_USERNAME}_no_password_no_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_no_password_no_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_no_password_no_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_no_password_no_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_no_password_no_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_no_password_no_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_no_password_no_profiles_no_roles MySQL LoginSuccess 1
${BASE_USERNAME}_no_password_no_profiles_no_roles MySQL Logout 1
${BASE_USERNAME}_no_password_two_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_no_password_two_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_no_password_two_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_no_password_two_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_no_password_two_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_no_password_two_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_no_password_two_profiles_no_roles MySQL LoginSuccess 1
${BASE_USERNAME}_no_password_two_profiles_no_roles MySQL Logout 1
${BASE_USERNAME}_no_password_two_profiles_two_roles TCP LoginSuccess 1
${BASE_USERNAME}_no_password_two_profiles_two_roles TCP Logout 1
-${BASE_USERNAME}_no_password_two_profiles_two_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_no_password_two_profiles_two_roles HTTP Logout 1
+${BASE_USERNAME}_no_password_two_profiles_two_roles HTTP LoginSuccess many
+${BASE_USERNAME}_no_password_two_profiles_two_roles HTTP Logout many
${BASE_USERNAME}_no_password_two_profiles_two_roles MySQL LoginSuccess 1
${BASE_USERNAME}_no_password_two_profiles_two_roles MySQL Logout 1
${BASE_USERNAME}_plaintext_password_no_profiles_no_roles TCP LoginFailure 1
${BASE_USERNAME}_plaintext_password_no_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_plaintext_password_no_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP LoginFailure 1
-${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP LoginFailure many
+${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_plaintext_password_no_profiles_no_roles MySQL LoginFailure many
${BASE_USERNAME}_plaintext_password_no_profiles_no_roles MySQL LoginSuccess 1
${BASE_USERNAME}_plaintext_password_no_profiles_no_roles MySQL Logout 1
+${BASE_USERNAME}_plaintext_password_no_profiles_no_roles PostgreSQL LoginFailure many
${BASE_USERNAME}_plaintext_password_two_profiles_no_roles TCP LoginFailure 1
${BASE_USERNAME}_plaintext_password_two_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_plaintext_password_two_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP LoginFailure 1
-${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP LoginFailure many
+${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_plaintext_password_two_profiles_no_roles MySQL LoginFailure many
${BASE_USERNAME}_plaintext_password_two_profiles_no_roles MySQL LoginSuccess 1
${BASE_USERNAME}_plaintext_password_two_profiles_no_roles MySQL Logout 1
+${BASE_USERNAME}_plaintext_password_two_profiles_no_roles PostgreSQL LoginFailure many
${BASE_USERNAME}_plaintext_password_two_profiles_two_roles TCP LoginFailure 1
${BASE_USERNAME}_plaintext_password_two_profiles_two_roles TCP LoginSuccess 1
${BASE_USERNAME}_plaintext_password_two_profiles_two_roles TCP Logout 1
-${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP LoginFailure 1
-${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP Logout 1
+${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP LoginFailure many
+${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP LoginSuccess many
+${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP Logout many
${BASE_USERNAME}_plaintext_password_two_profiles_two_roles MySQL LoginFailure many
${BASE_USERNAME}_plaintext_password_two_profiles_two_roles MySQL LoginSuccess 1
${BASE_USERNAME}_plaintext_password_two_profiles_two_roles MySQL Logout 1
+${BASE_USERNAME}_plaintext_password_two_profiles_two_roles PostgreSQL LoginFailure many
${BASE_USERNAME}_sha256_password_no_profiles_no_roles TCP LoginFailure 1
${BASE_USERNAME}_sha256_password_no_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_sha256_password_no_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP LoginFailure 1
-${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP LoginFailure many
+${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_sha256_password_no_profiles_no_roles MySQL LoginFailure many
${BASE_USERNAME}_sha256_password_two_profiles_no_roles TCP LoginFailure 1
${BASE_USERNAME}_sha256_password_two_profiles_no_roles TCP LoginSuccess 1
${BASE_USERNAME}_sha256_password_two_profiles_no_roles TCP Logout 1
-${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP LoginFailure 1
-${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP Logout 1
+${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP LoginFailure many
+${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP LoginSuccess many
+${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP Logout many
${BASE_USERNAME}_sha256_password_two_profiles_no_roles MySQL LoginFailure many
${BASE_USERNAME}_sha256_password_two_profiles_two_roles TCP LoginFailure 1
${BASE_USERNAME}_sha256_password_two_profiles_two_roles TCP LoginSuccess 1
${BASE_USERNAME}_sha256_password_two_profiles_two_roles TCP Logout 1
-${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP LoginFailure 1
-${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP LoginSuccess 1
-${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP Logout 1
+${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP LoginFailure many
+${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP LoginSuccess many
+${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP Logout many
${BASE_USERNAME}_sha256_password_two_profiles_two_roles MySQL LoginFailure many
invalid_${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_double_sha1_password_no_profiles_no_roles MySQL LoginFailure many
invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_no_roles MySQL LoginFailure many
invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_double_sha1_password_two_profiles_two_roles MySQL LoginFailure many
invalid_${BASE_USERNAME}_no_password_no_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_no_password_no_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_no_password_no_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_no_password_no_profiles_no_roles MySQL LoginFailure many
+invalid_${BASE_USERNAME}_no_password_no_profiles_no_roles PostgreSQL LoginFailure many
invalid_${BASE_USERNAME}_no_password_two_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_no_password_two_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_no_password_two_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_no_password_two_profiles_no_roles MySQL LoginFailure many
+invalid_${BASE_USERNAME}_no_password_two_profiles_no_roles PostgreSQL LoginFailure many
invalid_${BASE_USERNAME}_no_password_two_profiles_two_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_no_password_two_profiles_two_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_no_password_two_profiles_two_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_no_password_two_profiles_two_roles MySQL LoginFailure many
+invalid_${BASE_USERNAME}_no_password_two_profiles_two_roles PostgreSQL LoginFailure many
invalid_${BASE_USERNAME}_plaintext_password_no_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_plaintext_password_no_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_plaintext_password_no_profiles_no_roles MySQL LoginFailure many
+invalid_${BASE_USERNAME}_plaintext_password_no_profiles_no_roles PostgreSQL LoginFailure many
invalid_${BASE_USERNAME}_plaintext_password_two_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_plaintext_password_two_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_plaintext_password_two_profiles_no_roles MySQL LoginFailure many
+invalid_${BASE_USERNAME}_plaintext_password_two_profiles_no_roles PostgreSQL LoginFailure many
invalid_${BASE_USERNAME}_plaintext_password_two_profiles_two_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_plaintext_password_two_profiles_two_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_plaintext_password_two_profiles_two_roles MySQL LoginFailure many
+invalid_${BASE_USERNAME}_plaintext_password_two_profiles_two_roles PostgreSQL LoginFailure many
invalid_${BASE_USERNAME}_sha256_password_no_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_sha256_password_no_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_sha256_password_no_profiles_no_roles MySQL LoginFailure many
invalid_${BASE_USERNAME}_sha256_password_two_profiles_no_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_sha256_password_two_profiles_no_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_sha256_password_two_profiles_no_roles MySQL LoginFailure many
invalid_${BASE_USERNAME}_sha256_password_two_profiles_two_roles TCP LoginFailure 1
-invalid_${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP LoginFailure 1
+invalid_${BASE_USERNAME}_sha256_password_two_profiles_two_roles HTTP LoginFailure many
invalid_${BASE_USERNAME}_sha256_password_two_profiles_two_roles MySQL LoginFailure many
invalid_session_log_test_xml_user TCP LoginFailure 1
-invalid_session_log_test_xml_user HTTP LoginFailure 1
+invalid_session_log_test_xml_user HTTP LoginFailure many
invalid_session_log_test_xml_user MySQL LoginFailure many
+invalid_session_log_test_xml_user PostgreSQL LoginFailure many
session_log_test_xml_user TCP LoginSuccess 1
session_log_test_xml_user TCP Logout 1
-session_log_test_xml_user HTTP LoginSuccess 1
-session_log_test_xml_user HTTP Logout 1
+session_log_test_xml_user HTTP LoginSuccess many
+session_log_test_xml_user HTTP Logout many
session_log_test_xml_user MySQL LoginSuccess 1
session_log_test_xml_user MySQL Logout 1
diff --git a/tests/queries/bugs/01747_system_session_log_long.sh b/tests/queries/0_stateless/01747_system_session_log_long.sh
similarity index 78%
rename from tests/queries/bugs/01747_system_session_log_long.sh
rename to tests/queries/0_stateless/01747_system_session_log_long.sh
index 9b127e0b48d..c6e93f4abd7 100755
--- a/tests/queries/bugs/01747_system_session_log_long.sh
+++ b/tests/queries/0_stateless/01747_system_session_log_long.sh
@@ -1,6 +1,5 @@
#!/usr/bin/env bash
# Tags: long, no-parallel, no-fasttest
-# Tag no-fasttest: Accesses CH via mysql table function (which is unavailable)
##################################################################################################
# Verify that login, logout, and login failure events are properly stored in system.session_log
@@ -11,9 +10,8 @@
# Using multiple protocols
# * native TCP protocol with CH client
# * HTTP with CURL
-# * MySQL - CH server accesses itself via mysql table function, query typically fails (unrelated)
-# but auth should be performed properly.
-# * PostgreSQL - CH server accesses itself via postgresql table function (currently out of order).
+# * MySQL - CH server accesses itself via mysql table function.
+# * PostgreSQL - CH server accesses itself via postgresql table function, but can't execute the query (no LoginSuccess entry).
# * gRPC - not done yet
#
# There is a way to control how many times a query (e.g. via the mysql table function) is retried
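The expected-results rows earlier in this diff follow a `user interface event count` layout, where any count greater than one is collapsed to `many`. As a rough illustration only (the test's actual query is not part of this excerpt), an aggregation over `system.session_log` producing rows of that shape might look like this sketch:

```sql
-- Illustrative sketch, not the test's real query: group session_log events
-- per user/interface/event and collapse counts above 1 into 'many'.
SELECT
    user,
    interface,
    type AS event,
    if(count() > 1, 'many', toString(count())) AS events
FROM system.session_log
WHERE user LIKE '%session_log_test%'
GROUP BY user, interface, type
ORDER BY user, interface, type;
```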
@@ -53,7 +51,7 @@ function reportError()
function executeQuery()
{
- ## Execute query (provided via heredoc or herestring) and print query in case of error.
+ # Execute query (provided via heredoc or herestring) and print query in case of error.
trap 'rm -f ${TMP_QUERY_FILE}; trap - ERR RETURN' RETURN
# Since we want to report with current values supplied to this function call
# shellcheck disable=SC2064
@@ -82,7 +80,7 @@ trap "cleanup" EXIT
function executeQueryExpectError()
{
cat - > "${TMP_QUERY_FILE}"
- ! ${CLICKHOUSE_CLIENT} "${@}" --multiquery --queries-file "${TMP_QUERY_FILE}" 2>&1 | tee -a ${TMP_QUERY_FILE}
+ ! ${CLICKHOUSE_CLIENT} --multiquery --queries-file "${TMP_QUERY_FILE}" "${@}" 2>&1 | tee -a ${TMP_QUERY_FILE}
}
function createUser()
@@ -121,6 +119,8 @@ function createUser()
executeQuery <&1 | grep -F -q "UNRECOGNIZED_ARGUMENTS"
${CLICKHOUSE_LOCAL} --unknown-option-1 --unknown-option-2 2>&1 | grep -F -q "UNRECOGNIZED_ARGUMENTS" && echo "OK" || echo "FAIL"
-${CLICKHOUSE_LOCAL} -- --unknown-option 2>&1 | grep -F -q "BAD_ARGUMENTS" && echo "OK" || echo "FAIL"
-
${CLICKHOUSE_LOCAL} -- 'positional-argument' 2>&1 | grep -F -q "BAD_ARGUMENTS" && echo "OK" || echo "FAIL"
${CLICKHOUSE_LOCAL} -f 2>&1 | grep -F -q "Bad arguments" && echo "OK" || echo "FAIL"
@@ -22,8 +20,6 @@ ${CLICKHOUSE_CLIENT} --unknown-option 2>&1 | grep -F -q "UNRECOGNIZED_ARGUMENTS"
${CLICKHOUSE_CLIENT} --unknown-option-1 --unknown-option-2 2>&1 | grep -F -q "UNRECOGNIZED_ARGUMENTS" && echo "OK" || echo "FAIL"
-${CLICKHOUSE_CLIENT} -- --unknown-option 2>&1 | grep -F -q "BAD_ARGUMENTS" && echo "OK" || echo "FAIL"
-
${CLICKHOUSE_CLIENT} -- 'positional-argument' 2>&1 | grep -F -q "BAD_ARGUMENTS" && echo "OK" || echo "FAIL"
${CLICKHOUSE_CLIENT} --j 2>&1 | grep -F -q "Bad arguments" && echo "OK" || echo "FAIL"
diff --git a/tests/queries/0_stateless/02531_ipv4_arithmetic.reference b/tests/queries/0_stateless/02531_ipv4_arithmetic.reference
index 6f03e4e6903..28d6f76e9e9 100644
--- a/tests/queries/0_stateless/02531_ipv4_arithmetic.reference
+++ b/tests/queries/0_stateless/02531_ipv4_arithmetic.reference
@@ -1,3 +1,5 @@
+-- { echoOn }
+SELECT number, ip, ip % number FROM (SELECT number, toIPv4('1.2.3.4') as ip FROM numbers(10, 20));
10 1.2.3.4 0
11 1.2.3.4 3
12 1.2.3.4 4
@@ -18,3 +20,24 @@
27 1.2.3.4 13
28 1.2.3.4 0
29 1.2.3.4 1
+SELECT number, ip, number % ip FROM (SELECT number, toIPv4OrNull('0.0.0.3') as ip FROM numbers(10, 20));
+10 0.0.0.3 1
+11 0.0.0.3 2
+12 0.0.0.3 0
+13 0.0.0.3 1
+14 0.0.0.3 2
+15 0.0.0.3 0
+16 0.0.0.3 1
+17 0.0.0.3 2
+18 0.0.0.3 0
+19 0.0.0.3 1
+20 0.0.0.3 2
+21 0.0.0.3 0
+22 0.0.0.3 1
+23 0.0.0.3 2
+24 0.0.0.3 0
+25 0.0.0.3 1
+26 0.0.0.3 2
+27 0.0.0.3 0
+28 0.0.0.3 1
+29 0.0.0.3 2
diff --git a/tests/queries/0_stateless/02531_ipv4_arithmetic.sql b/tests/queries/0_stateless/02531_ipv4_arithmetic.sql
index 59a99842d61..88c8cf936dd 100644
--- a/tests/queries/0_stateless/02531_ipv4_arithmetic.sql
+++ b/tests/queries/0_stateless/02531_ipv4_arithmetic.sql
@@ -1 +1,4 @@
-SELECT number, ip, ip % number FROM (SELECT number, toIPv4('1.2.3.4') as ip FROM numbers(10, 20));
\ No newline at end of file
+-- { echoOn }
+SELECT number, ip, ip % number FROM (SELECT number, toIPv4('1.2.3.4') as ip FROM numbers(10, 20));
+SELECT number, ip, number % ip FROM (SELECT number, toIPv4OrNull('0.0.0.3') as ip FROM numbers(10, 20));
+
diff --git a/tests/queries/0_stateless/02701_non_parametric_function.reference b/tests/queries/0_stateless/02701_non_parametric_function.reference
new file mode 100644
index 00000000000..e69de29bb2d
diff --git a/tests/queries/0_stateless/02701_non_parametric_function.sql b/tests/queries/0_stateless/02701_non_parametric_function.sql
new file mode 100644
index 00000000000..b242bdc72ef
--- /dev/null
+++ b/tests/queries/0_stateless/02701_non_parametric_function.sql
@@ -0,0 +1 @@
+SELECT * FROM system.numbers WHERE number > toUInt64(10)(number) LIMIT 10; -- { serverError 309 }
diff --git a/tests/queries/0_stateless/02833_local_udf_options.reference b/tests/queries/0_stateless/02833_local_udf_options.reference
new file mode 100644
index 00000000000..19f0805d8de
--- /dev/null
+++ b/tests/queries/0_stateless/02833_local_udf_options.reference
@@ -0,0 +1 @@
+qwerty
diff --git a/tests/queries/0_stateless/02833_local_udf_options.sh b/tests/queries/0_stateless/02833_local_udf_options.sh
new file mode 100755
index 00000000000..149b62d7e2c
--- /dev/null
+++ b/tests/queries/0_stateless/02833_local_udf_options.sh
@@ -0,0 +1,11 @@
+#!/usr/bin/env bash
+
+set -e
+
+CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CUR_DIR"/../shell_config.sh
+
+SCRIPTS_DIR=$CUR_DIR/scripts_udf
+
+$CLICKHOUSE_LOCAL -q 'select test_function()' -- --user_scripts_path=$SCRIPTS_DIR --user_defined_executable_functions_config=$SCRIPTS_DIR/function.xml
diff --git a/tests/queries/0_stateless/02833_local_with_dialect.reference b/tests/queries/0_stateless/02833_local_with_dialect.reference
new file mode 100644
index 00000000000..dbb67375997
--- /dev/null
+++ b/tests/queries/0_stateless/02833_local_with_dialect.reference
@@ -0,0 +1,2 @@
+0
+[?2004h[?2004lBye.
diff --git a/tests/queries/0_stateless/02833_local_with_dialect.sh b/tests/queries/0_stateless/02833_local_with_dialect.sh
new file mode 100755
index 00000000000..012a6d91269
--- /dev/null
+++ b/tests/queries/0_stateless/02833_local_with_dialect.sh
@@ -0,0 +1,9 @@
+#!/usr/bin/env bash
+# Tags: no-fasttest, no-random-settings
+
+CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
+# shellcheck source=../shell_config.sh
+. "$CUR_DIR"/../shell_config.sh
+
+
+echo "exit" | ${CLICKHOUSE_LOCAL} --query "from s\"SELECT * FROM numbers(1)\"" --dialect prql --interactive
diff --git a/tests/queries/0_stateless/02840_merge__table_or_filter.reference b/tests/queries/0_stateless/02840_merge__table_or_filter.reference
new file mode 100644
index 00000000000..ff5e0865a22
--- /dev/null
+++ b/tests/queries/0_stateless/02840_merge__table_or_filter.reference
@@ -0,0 +1,38 @@
+-- { echoOn }
+
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v1') settings allow_experimental_analyzer=0, convert_query_to_cnf=0;
+v1 1
+v1 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v2') settings allow_experimental_analyzer=0, convert_query_to_cnf=0;
+v1 1
+v2 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=0, convert_query_to_cnf=0;
+v1 1
+select _table, key from m where (value = 10 and _table = 'v3') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=0, convert_query_to_cnf=0;
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v1') settings allow_experimental_analyzer=0, convert_query_to_cnf=1;
+v1 1
+v1 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v2') settings allow_experimental_analyzer=0, convert_query_to_cnf=1;
+v1 1
+v2 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=0, convert_query_to_cnf=1;
+v1 1
+select _table, key from m where (value = 10 and _table = 'v3') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=0, convert_query_to_cnf=1;
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v1') settings allow_experimental_analyzer=1, convert_query_to_cnf=0;
+v1 1
+v1 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v2') settings allow_experimental_analyzer=1, convert_query_to_cnf=0;
+v1 1
+v2 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=1, convert_query_to_cnf=0;
+v1 1
+select _table, key from m where (value = 10 and _table = 'v3') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=1, convert_query_to_cnf=0;
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v1') settings allow_experimental_analyzer=1, convert_query_to_cnf=1;
+v1 1
+v1 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v2') settings allow_experimental_analyzer=1, convert_query_to_cnf=1;
+v1 1
+v2 2
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=1, convert_query_to_cnf=1;
+v1 1
+select _table, key from m where (value = 10 and _table = 'v3') or (value = 20 and _table = 'v3') settings allow_experimental_analyzer=1, convert_query_to_cnf=1;
diff --git a/tests/queries/0_stateless/02840_merge__table_or_filter.sql.j2 b/tests/queries/0_stateless/02840_merge__table_or_filter.sql.j2
new file mode 100644
index 00000000000..a87ef7302c6
--- /dev/null
+++ b/tests/queries/0_stateless/02840_merge__table_or_filter.sql.j2
@@ -0,0 +1,34 @@
+drop table if exists m;
+drop view if exists v1;
+drop view if exists v2;
+drop table if exists d1;
+drop table if exists d2;
+
+create table d1 (key Int, value Int) engine=Memory();
+create table d2 (key Int, value Int) engine=Memory();
+
+insert into d1 values (1, 10);
+insert into d1 values (2, 20);
+
+insert into d2 values (1, 10);
+insert into d2 values (2, 20);
+
+create view v1 as select * from d1;
+create view v2 as select * from d2;
+
+create table m as v1 engine=Merge(currentDatabase(), '^(v1|v2)$');
+
+-- avoid reorder
+set max_threads=1;
+-- { echoOn }
+{% for settings in [
+ 'allow_experimental_analyzer=0, convert_query_to_cnf=0',
+ 'allow_experimental_analyzer=0, convert_query_to_cnf=1',
+ 'allow_experimental_analyzer=1, convert_query_to_cnf=0',
+ 'allow_experimental_analyzer=1, convert_query_to_cnf=1'
+] %}
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v1') settings {{ settings }};
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v2') settings {{ settings }};
+select _table, key from m where (value = 10 and _table = 'v1') or (value = 20 and _table = 'v3') settings {{ settings }};
+select _table, key from m where (value = 10 and _table = 'v3') or (value = 20 and _table = 'v3') settings {{ settings }};
+{% endfor %}
diff --git a/tests/queries/0_stateless/scripts_udf/function.xml b/tests/queries/0_stateless/scripts_udf/function.xml
new file mode 100644
index 00000000000..69a0abb5cec
--- /dev/null
+++ b/tests/queries/0_stateless/scripts_udf/function.xml
@@ -0,0 +1,9 @@
+<functions>
+    <function>
+        <type>executable</type>
+        <name>test_function</name>
+        <return_type>String</return_type>
+        <format>TabSeparated</format>
+        <command>udf.sh</command>
+    </function>
+</functions>
diff --git a/tests/queries/0_stateless/scripts_udf/udf.sh b/tests/queries/0_stateless/scripts_udf/udf.sh
new file mode 100755
index 00000000000..add85833c3e
--- /dev/null
+++ b/tests/queries/0_stateless/scripts_udf/udf.sh
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+echo qwerty
diff --git a/utils/check-style/aspell-ignore/en/aspell-dict.txt b/utils/check-style/aspell-ignore/en/aspell-dict.txt
index a314815e2c4..80aeadd8738 100644
--- a/utils/check-style/aspell-ignore/en/aspell-dict.txt
+++ b/utils/check-style/aspell-ignore/en/aspell-dict.txt
@@ -211,7 +211,6 @@ Decrypted
Deduplicate
Deduplication
DelayedInserts
-delim
DeliveryTag
DeltaLake
Denormalize
@@ -699,6 +698,8 @@ PyCharm
QEMU
QTCreator
Quantile
+QueryCacheBytes
+QueryCacheEntries
QueryCacheHits
QueryCacheMisses
QueryPreempted
@@ -761,9 +762,9 @@ RoaringBitmap
RocksDB
Rollup
RowBinary
+RowBinaryWithDefaults
RowBinaryWithNames
RowBinaryWithNamesAndTypes
-RowBinaryWithDefaults
Runtime
SATA
SELECTs
@@ -776,7 +777,6 @@ SMALLINT
SPNEGO
SQEs
SQLAlchemy
-SquaredDistance
SQLConsoleDetail
SQLInsert
SQLSTATE
@@ -811,6 +811,7 @@ Smirnov'test
Soundex
SpanKind
Spearman's
+SquaredDistance
StartTLS
StartTime
StartupSystemTables
@@ -838,8 +839,6 @@ Subexpression
Submodules
Subqueries
Substrings
-substringIndex
-substringIndexUTF
SummingMergeTree
SuperSet
Superset
@@ -1272,6 +1271,7 @@ cryptographic
csv
csvwithnames
csvwithnamesandtypes
+curdate
currentDatabase
currentProfiles
currentRoles
@@ -1331,6 +1331,7 @@ defaultProfiles
defaultRoles
defaultValueOfArgumentType
defaultValueOfTypeName
+delim
deltaLake
deltaSum
deltaSumTimestamp
@@ -1542,13 +1543,13 @@ hadoop
halfMD
halfday
hardlinks
+hasAll
+hasAny
+hasColumnInTable
hasSubsequence
hasSubsequenceCaseInsensitive
hasSubsequenceCaseInsensitiveUTF
hasSubsequenceUTF
-hasAll
-hasAny
-hasColumnInTable
hasSubstr
hasToken
hasTokenCaseInsensitive
@@ -1590,10 +1591,10 @@ incrementing
indexHint
indexOf
infi
-initialQueryID
-initializeAggregation
initcap
initcapUTF
+initialQueryID
+initializeAggregation
injective
innogames
inodes
@@ -2131,9 +2132,9 @@ routineley
rowNumberInAllBlocks
rowNumberInBlock
rowbinary
+rowbinarywithdefaults
rowbinarywithnames
rowbinarywithnamesandtypes
-rowbinarywithdefaults
rsync
rsyslog
runnable
@@ -2185,8 +2186,8 @@ sleepEachRow
snowflakeToDateTime
socketcache
soundex
-sparkbar
sparkBar
+sparkbar
sparsehash
speedscope
splitByChar
@@ -2256,6 +2257,8 @@ subreddits
subseconds
subsequence
substring
+substringIndex
+substringIndexUTF
substringUTF
substrings
subtitiles
@@ -2556,4 +2559,3 @@ znode
znodes
zookeeperSessionUptime
zstd
-curdate