Merge remote-tracking branch 'ck/master' into bigo_hive_table

commit c5bc968140
@@ -9,45 +9,57 @@ The `Merge` engine (not to be confused with `MergeTree`) does not store data its
Reading is automatically parallelized. Writing to a table is not supported. When reading, the indexes of tables that are actually being read are used, if they exist.

The `Merge` engine accepts parameters: the database name and a regular expression for tables.

## Examples {#examples}

Example 1:
## Creating a Table {#creating-a-table}

``` sql
Merge(hits, '^WatchLog')
CREATE TABLE ... Engine=Merge(db_name, tables_regexp)
```

Data will be read from the tables in the `hits` database that have names that match the regular expression ‘`^WatchLog`’.
**Engine Parameters**

Instead of the database name, you can use a constant expression that returns a string. For example, `currentDatabase()`.
- `db_name` — Possible values:
    - database name,
    - constant expression that returns a string with a database name, for example, `currentDatabase()`,
    - `REGEXP(expression)`, where `expression` is a regular expression to match the DB names.

- `tables_regexp` — A regular expression to match the table names in the specified DB or DBs.

Regular expressions — [re2](https://github.com/google/re2) (supports a subset of PCRE), case-sensitive.
See the notes about escaping symbols in regular expressions in the “match” section.
See the notes about escaping symbols in regular expressions in the "match" section.

When selecting tables to read, the `Merge` table itself will not be selected, even if it matches the regex. This is to avoid loops.
It is possible to create two `Merge` tables that will endlessly try to read each others’ data, but this is not a good idea.
## Usage {#usage}

When selecting tables to read, the `Merge` table itself is not selected, even if it matches the regex. This is to avoid loops.
It is possible to create two `Merge` tables that will endlessly try to read each others' data, but this is not a good idea.

The typical way to use the `Merge` engine is for working with a large number of `TinyLog` tables as if with a single table.
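For instance, a minimal sketch of that pattern (the `events_*` table names are hypothetical):

``` sql
CREATE TABLE all_events AS events_2021_01 ENGINE = Merge(currentDatabase(), '^events_');
```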

Example 2:
## Examples {#examples}

Let’s say you have a old table (WatchLog_old) and decided to change partitioning without moving data to a new table (WatchLog_new) and you need to see data from both tables.
**Example 1**

Consider two databases `ABC_corporate_site` and `ABC_store`. The `all_visitors` table will contain IDs from the tables `visitors` in both databases.

``` sql
CREATE TABLE all_visitors (id UInt32) ENGINE=Merge(REGEXP('ABC_*'), 'visitors');
```

**Example 2**

Let's say you have an old table `WatchLog_old` and decided to change partitioning without moving data to a new table `WatchLog_new`, and you need to see data from both tables.

``` sql
CREATE TABLE WatchLog_old(date Date, UserId Int64, EventType String, Cnt UInt64)
ENGINE=MergeTree(date, (UserId, EventType), 8192);
INSERT INTO WatchLog_old VALUES ('2018-01-01', 1, 'hit', 3);

CREATE TABLE WatchLog_new(date Date, UserId Int64, EventType String, Cnt UInt64)
ENGINE=MergeTree PARTITION BY date ORDER BY (UserId, EventType) SETTINGS index_granularity=8192;
INSERT INTO WatchLog_new VALUES ('2018-01-02', 2, 'hit', 3);

CREATE TABLE WatchLog as WatchLog_old ENGINE=Merge(currentDatabase(), '^WatchLog');

SELECT * FROM WatchLog;
```

``` text
@@ -68,5 +80,4 @@ FROM WatchLog
**See Also**

- [Virtual columns](../../../engines/table-engines/special/index.md#table_engines-virtual_columns)

[Original article](https://clickhouse.com/docs/en/operations/table_engines/merge/) <!--hide-->
- [merge](../../../sql-reference/table-functions/merge.md) table function

@@ -332,7 +332,7 @@ ORDER BY year, count(*) DESC

The following server was used:

Two Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz, 16 physical kernels total,128 GiB RAM,8x6 TB HD on hardware RAID-5
Two Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz, 16 physical cores total, 128 GiB RAM, 8x6 TB HD on hardware RAID-5

Execution time is the best of three runs. But starting from the second run, queries read data from the file system cache. No further caching occurs: the data is read out and processed in each run.

@@ -21,7 +21,7 @@ By default, ClickHouse Keeper provides the same guarantees as ZooKeeper (lineari
ClickHouse Keeper can be used as a standalone replacement for ZooKeeper or as an internal part of the ClickHouse server, but in both cases configuration is almost the same `.xml` file. The main ClickHouse Keeper configuration tag is `<keeper_server>`. Keeper configuration has the following parameters:

- `tcp_port` — Port for a client to connect (default for ZooKeeper is `2181`).
- `tcp_port_secure` — Secure port for a client to connect.
- `tcp_port_secure` — Secure port for an SSL connection between client and keeper-server.
- `server_id` — Unique server id, each participant of the ClickHouse Keeper cluster must have a unique number (1, 2, 3, and so on).
- `log_storage_path` — Path to coordination logs, better to store logs on the non-busy device (same for ZooKeeper).
- `snapshot_storage_path` — Path to coordination snapshots.
@@ -50,7 +50,11 @@ Internal coordination settings are located in `<keeper_server>.<coordination_set
- `shutdown_timeout` — Wait to finish internal connections and shutdown (ms) (default: 5000).
- `startup_timeout` — If the server does not connect to other quorum participants within the specified timeout, it will terminate (ms) (default: 30000).

Quorum configuration is located in `<keeper_server>.<raft_configuration>` section and contain servers description. The only parameter for the whole quorum is `secure`, which enables encrypted connection for communication between quorum participants. The main parameters for each `<server>` are:
Quorum configuration is located in the `<keeper_server>.<raft_configuration>` section and contains a description of the servers.

The only parameter for the whole quorum is `secure`, which enables encrypted connection for communication between quorum participants. The parameter can be set `true` if SSL connection is required for internal communication between nodes, or left unspecified otherwise.
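A minimal sketch of such a quorum section, assuming a single-node setup; the hostname and the Raft port value are placeholders:

```xml
<keeper_server>
    <raft_configuration>
        <secure>true</secure>
        <server>
            <id>1</id>
            <hostname>keeper1.example.com</hostname>
            <port>9234</port>
        </server>
    </raft_configuration>
</keeper_server>
```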

The main parameters for each `<server>` are:

- `id` — Server identifier in a quorum.
- `hostname` — Hostname where this server is placed.

@@ -370,7 +370,7 @@ Opens `https://tabix.io/` when accessing `http://localhost: http_port`.
<![CDATA[<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>]]>
</http_server_default_response>
```
## hsts_max_age
## hsts_max_age {#hsts-max-age}

Expiry time for HSTS in seconds. The default value is 0, which means HSTS is disabled. If you set a positive number, HSTS will be enabled and the max-age will be the number you set.
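For example, to enable HSTS for 600000 seconds:

```xml
<hsts_max_age>600000</hsts_max_age>
```
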
@@ -992,6 +992,16 @@ Example:
log_query_views=1
```

## log_formatted_queries {#settings-log-formatted-queries}

Allows logging formatted queries to the [system.query_log](../../operations/system-tables/query_log.md) system table.

Possible values:

- 0 — Formatted queries are not logged in the system table.
- 1 — Formatted queries are logged in the system table.

Default value: `0`.
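A quick way to see the effect (a sketch, assuming query logging is enabled on the server):

``` sql
SET log_formatted_queries = 1;
SELECT 1;
SYSTEM FLUSH LOGS;
SELECT query, formatted_query
FROM system.query_log
WHERE type = 'QueryFinish'
ORDER BY event_time DESC
LIMIT 1;
```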

## log_comment {#settings-log-comment}

@@ -26,6 +26,8 @@ Each query creates one or two rows in the `query_log` table, depending on the st

You can use the [log_queries_probability](../../operations/settings/settings.md#log-queries-probability) setting to reduce the number of queries registered in the `query_log` table.
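For example, to register only about half of all queries (a sketch):

``` sql
SET log_queries_probability = 0.5;
```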

You can use the [log_formatted_queries](../../operations/settings/settings.md#settings-log-formatted-queries) setting to log formatted queries to the `formatted_query` column.

Columns:

- `type` ([Enum8](../../sql-reference/data-types/enum.md)) — Type of an event that occurred when executing the query. Values:
@@ -48,6 +50,7 @@ Columns:
- `memory_usage` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Memory consumption by the query.
- `current_database` ([String](../../sql-reference/data-types/string.md)) — Name of the current database.
- `query` ([String](../../sql-reference/data-types/string.md)) — Query string.
- `formatted_query` ([String](../../sql-reference/data-types/string.md)) — Formatted query string.
- `normalized_query_hash` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Identical hash value without the values of literals for similar queries.
- `query_kind` ([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md)) — Type of the query.
- `databases` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — Names of the databases present in the query.
@@ -114,68 +117,68 @@ SELECT * FROM system.query_log WHERE type = 'QueryFinish' ORDER BY query_start_t
Row 1:
──────
type: QueryFinish
event_date: 2021-07-28
event_time: 2021-07-28 13:46:56
event_time_microseconds: 2021-07-28 13:46:56.719791
query_start_time: 2021-07-28 13:46:56
query_start_time_microseconds: 2021-07-28 13:46:56.704542
query_duration_ms: 14
read_rows: 8393
read_bytes: 374325
event_date: 2021-11-03
event_time: 2021-11-03 16:13:54
event_time_microseconds: 2021-11-03 16:13:54.953024
query_start_time: 2021-11-03 16:13:54
query_start_time_microseconds: 2021-11-03 16:13:54.952325
query_duration_ms: 0
read_rows: 69
read_bytes: 6187
written_rows: 0
written_bytes: 0
result_rows: 4201
result_bytes: 153024
memory_usage: 4714038
result_rows: 69
result_bytes: 48256
memory_usage: 0
current_database: default
query: SELECT DISTINCT arrayJoin(extractAll(name, '[\\w_]{2,}')) AS res FROM (SELECT name FROM system.functions UNION ALL SELECT name FROM system.table_engines UNION ALL SELECT name FROM system.formats UNION ALL SELECT name FROM system.table_functions UNION ALL SELECT name FROM system.data_type_families UNION ALL SELECT name FROM system.merge_tree_settings UNION ALL SELECT name FROM system.settings UNION ALL SELECT cluster FROM system.clusters UNION ALL SELECT macro FROM system.macros UNION ALL SELECT policy_name FROM system.storage_policies UNION ALL SELECT concat(func.name, comb.name) FROM system.functions AS func CROSS JOIN system.aggregate_function_combinators AS comb WHERE is_aggregate UNION ALL SELECT name FROM system.databases LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.tables LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.dictionaries LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.columns LIMIT 10000) WHERE notEmpty(res)
normalized_query_hash: 6666026786019643712
query_kind: Select
databases: ['system']
tables: ['system.aggregate_function_combinators','system.clusters','system.columns','system.data_type_families','system.databases','system.dictionaries','system.formats','system.functions','system.macros','system.merge_tree_settings','system.settings','system.storage_policies','system.table_engines','system.table_functions','system.tables']
columns: ['system.aggregate_function_combinators.name','system.clusters.cluster','system.columns.name','system.data_type_families.name','system.databases.name','system.dictionaries.name','system.formats.name','system.functions.is_aggregate','system.functions.name','system.macros.macro','system.merge_tree_settings.name','system.settings.name','system.storage_policies.policy_name','system.table_engines.name','system.table_functions.name','system.tables.name']
query: DESCRIBE TABLE system.query_log
formatted_query:
normalized_query_hash: 8274064835331539124
query_kind:
databases: []
tables: []
columns: []
projections: []
views: []
exception_code: 0
exception:
stack_trace:
is_initial_query: 1
user: default
query_id: a3361f6e-a1fd-4d54-9f6f-f93a08bab0bf
query_id: 7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6
address: ::ffff:127.0.0.1
port: 51006
port: 40452
initial_user: default
initial_query_id: a3361f6e-a1fd-4d54-9f6f-f93a08bab0bf
initial_query_id: 7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6
initial_address: ::ffff:127.0.0.1
initial_port: 51006
initial_query_start_time: 2021-07-28 13:46:56
initial_query_start_time_microseconds: 2021-07-28 13:46:56.704542
initial_port: 40452
initial_query_start_time: 2021-11-03 16:13:54
initial_query_start_time_microseconds: 2021-11-03 16:13:54.952325
interface: 1
os_user:
client_hostname:
client_name: ClickHouse client
os_user: sevirov
client_hostname: clickhouse.ru-central1.internal
client_name: ClickHouse
client_revision: 54449
client_version_major: 21
client_version_minor: 8
client_version_patch: 0
client_version_minor: 10
client_version_patch: 1
http_method: 0
http_user_agent:
http_referer:
forwarded_for:
quota_key:
revision: 54453
revision: 54456
log_comment:
thread_ids: [5058,22097,22110,22094]
ProfileEvents.Names: ['Query','SelectQuery','ArenaAllocChunks','ArenaAllocBytes','FunctionExecute','NetworkSendElapsedMicroseconds','SelectedRows','SelectedBytes','ContextLock','RWLockAcquiredReadLocks','RealTimeMicroseconds','UserTimeMicroseconds','SystemTimeMicroseconds','SoftPageFaults','OSCPUWaitMicroseconds','OSCPUVirtualTimeMicroseconds','OSWriteBytes','OSWriteChars']
ProfileEvents.Values: [1,1,39,352256,64,360,8393,374325,412,440,34480,13108,4723,671,19,17828,8192,10240]
Settings.Names: ['load_balancing','max_memory_usage']
Settings.Values: ['random','10000000000']
thread_ids: [30776,31174]
ProfileEvents: {'Query':1,'NetworkSendElapsedMicroseconds':59,'NetworkSendBytes':2643,'SelectedRows':69,'SelectedBytes':6187,'ContextLock':9,'RWLockAcquiredReadLocks':1,'RealTimeMicroseconds':817,'UserTimeMicroseconds':427,'SystemTimeMicroseconds':212,'OSCPUVirtualTimeMicroseconds':639,'OSReadChars':894,'OSWriteChars':319}
Settings: {'load_balancing':'random','max_memory_usage':'10000000000'}
used_aggregate_functions: []
used_aggregate_function_combinators: []
used_database_engines: []
used_data_type_families: ['UInt64','UInt8','Nullable','String','date']
used_data_type_families: []
used_dictionaries: []
used_formats: []
used_functions: ['concat','notEmpty','extractAll']
used_functions: []
used_storages: []
used_table_functions: []
```
@@ -183,6 +186,3 @@ used_table_functions: []
**See Also**

- [system.query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log) — This table contains information about each query execution thread.
- [system.query_views_log](../../operations/system-tables/query_views_log.md#system_tables-query_views_log) — This table contains information about each view executed during a query.

[Original article](https://clickhouse.com/docs/en/operations/system-tables/query_log) <!--hide-->

@@ -1,3 +1,4 @@
---
toc_priority: 58
toc_title: Usage Recommendations
---
@@ -71,8 +72,8 @@ For HDD, enable the write cache.
## File System {#file-system}

Ext4 is the most reliable option. Set the mount option `noatime`.
XFS is also suitable, but it hasn’t been as thoroughly tested with ClickHouse.
Most other file systems should also work fine. File systems with delayed allocation work better.
XFS should be avoided. It works mostly fine but there are some reports about lower performance.
Most other file systems should also work fine.
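Purely as an illustration (the device and mount point are placeholders), an `/etc/fstab` entry with `noatime` could look like:

``` text
/dev/sdb1  /var/lib/clickhouse  ext4  noatime  0 2
```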

## Linux Kernel {#linux-kernel}

@@ -155,7 +155,7 @@ ALTER TABLE visits CLEAR COLUMN hour in PARTITION 201902
## FREEZE PARTITION {#alter_freeze-partition}

``` sql
ALTER TABLE table_name FREEZE [PARTITION partition_expr]
ALTER TABLE table_name FREEZE [PARTITION partition_expr] [WITH NAME 'backup_name']
```

This query creates a local backup of a specified partition. If the `PARTITION` clause is omitted, the query creates the backup of all partitions at once.
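For example (a sketch; the table name, partition, and backup name are hypothetical):

``` sql
ALTER TABLE visits FREEZE PARTITION 201902 WITH NAME 'backup_201902';
```
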
@@ -169,6 +169,7 @@ At the time of execution, for a data snapshot, the query creates hardlinks to a

- `/var/lib/clickhouse/` is the working ClickHouse directory specified in the config.
- `N` is the incremental number of the backup.
- if the `WITH NAME` parameter is specified, then the value of the `'backup_name'` parameter is used instead of the incremental number.

!!! note "Note"
    If you use [a set of disks for data storage in a table](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-multiple-volumes), the `shadow/N` directory appears on every disk, storing the data parts matched by the `PARTITION` expression.

@@ -5,7 +5,23 @@ toc_title: merge

# merge {#merge}

`merge(db_name, 'tables_regexp')` – Creates a temporary Merge table. For more information, see the section “Table engines, Merge”.
Creates a temporary [Merge](../../engines/table-engines/special/merge.md) table. The table structure is taken from the first table encountered that matches the regular expression.

**Syntax**

```sql
merge('db_name', 'tables_regexp')
```
**Arguments**

- `db_name` — Possible values:
    - database name,
    - constant expression that returns a string with a database name, for example, `currentDatabase()`,
    - `REGEXP(expression)`, where `expression` is a regular expression to match the DB names.

- `tables_regexp` — A regular expression to match the table names in the specified DB or DBs.
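For example, a sketch that reads every table whose name starts with `WatchLog` in the current database:

```sql
SELECT * FROM merge(currentDatabase(), '^WatchLog');
```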

**See Also**

- [Merge](../../engines/table-engines/special/merge.md) table engine

|
@ -30,6 +30,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
|
||||
[rabbitmq_skip_broken_messages = N,]
|
||||
[rabbitmq_max_block_size = N,]
|
||||
[rabbitmq_flush_interval_ms = N]
|
||||
[rabbitmq_queue_settings_list = 'x-dead-letter-exchange=my-dlx,x-max-length=10,x-overflow=reject-publish']
|
||||
```
|
||||
|
||||
Обязательные параметры:
|
||||
@@ -51,6 +52,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
- `rabbitmq_skip_broken_messages` – the maximum number of broken messages per block. If `rabbitmq_skip_broken_messages = N`, the engine skips `N` messages that could not be parsed. One message corresponds to exactly one record (row). Default value: 0.
- `rabbitmq_max_block_size`
- `rabbitmq_flush_interval_ms`
- `rabbitmq_queue_settings_list` - allows setting RabbitMQ queue settings when creating the queue. Available settings: `x-max-length`, `x-max-length-bytes`, `x-message-ttl`, `x-expires`, `x-priority`, `x-max-priority`, `x-overflow`, `x-dead-letter-exchange`, `x-queue-type`. The `durable` setting for the queue is enabled automatically.

Data format settings can also be added to the list of RabbitMQ settings.
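A hedged sketch of a table definition passing queue settings (the host, exchange, and format values are placeholders):

``` sql
CREATE TABLE rabbitmq_queue (key UInt64, value String)
ENGINE = RabbitMQ
SETTINGS rabbitmq_host_port = 'localhost:5672',
         rabbitmq_exchange_name = 'exchange1',
         rabbitmq_format = 'JSONEachRow',
         rabbitmq_queue_settings_list = 'x-dead-letter-exchange=my-dlx,x-max-length=10,x-overflow=reject-publish';
```
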
@@ -7,43 +7,56 @@ toc_title: Merge

The `Merge` engine (not to be confused with `MergeTree`) does not store data itself, but allows reading from any number of other tables simultaneously.
Reading is automatically parallelized. Writing to the table is not supported. When reading, the indexes of the tables that are actually being read are used, if they exist.
The `Merge` engine accepts parameters: the database name and a regular expression for tables.

Example:
## Creating a Table {#creating-a-table}

``` sql
Merge(hits, '^WatchLog')
CREATE TABLE ... Engine=Merge(db_name, tables_regexp)
```

Data will be read from the tables in the `hits` database whose names match the regular expression ‘`^WatchLog`’.
**Engine Parameters**

Instead of the database name, you can use a constant expression that returns a string, for example, `currentDatabase()`.
- `db_name` — Possible values:
    - a database name,
    - an expression that returns a string with a database name, for example, `currentDatabase()`,
    - `REGEXP(expression)`, where `expression` is a regular expression to match database names.

- `tables_regexp` — a regular expression to match table names in the specified database or databases.

## Usage {#usage}

Regular expressions — [re2](https://github.com/google/re2) (supports a subset of PCRE), case-sensitive.
See the notes about escaping symbols in regular expressions in the “match” section.

When selecting tables to read, the `Merge` table itself will not be selected, even if it matches the regular expression, so that loops do not occur.
It is possible to create two `Merge` tables that will endlessly try to read each other's data, but doing so is not recommended.

A typical way to use the `Merge` engine is to work with a large number of `TinyLog` tables as if they were a single table.

Example 2:
**Example 1**

Suppose there are two databases, `ABC_corporate_site` and `ABC_store`. The `all_visitors` table will contain IDs from the `visitors` tables in both databases.

``` sql
CREATE TABLE all_visitors (id UInt32) ENGINE=Merge(REGEXP('ABC_*'), 'visitors');
```

**Example 2**

Suppose you have an old table `WatchLog_old`. You need to change the partitioning without moving data to the new table `WatchLog_new`, and data from both tables must be visible in the result.

``` sql
CREATE TABLE WatchLog_old(date Date, UserId Int64, EventType String, Cnt UInt64)
ENGINE=MergeTree(date, (UserId, EventType), 8192);
INSERT INTO WatchLog_old VALUES ('2018-01-01', 1, 'hit', 3);

CREATE TABLE WatchLog_new(date Date, UserId Int64, EventType String, Cnt UInt64)
ENGINE=MergeTree PARTITION BY date ORDER BY (UserId, EventType) SETTINGS index_granularity=8192;
INSERT INTO WatchLog_new VALUES ('2018-01-02', 2, 'hit', 3);

CREATE TABLE WatchLog as WatchLog_old ENGINE=Merge(currentDatabase(), '^WatchLog');

SELECT * FROM WatchLog;
```

``` text
@@ -61,7 +74,7 @@ FROM WatchLog

In the `WHERE/PREWHERE` clause, you can set a constant condition on the `_table` column (for example, `WHERE _table='xyz'`). In this case, the read operation is performed only for the tables that satisfy the condition on the `_table` value, so the `_table` column acts as an index.

**See Also**

- [Virtual columns](index.md#table_engines-virtual_columns)

- [merge](../../../sql-reference/table-functions/merge.md) table function
@@ -30,11 +30,13 @@ toc_title: "Distinctive Features of ClickHouse"
Almost none of the columnar DBMSs listed above support distributed query processing.
In ClickHouse, data can reside on different shards. Each shard can be a group of replicas used for fault tolerance. A query is executed on all shards in parallel. This is transparent to the user.

## SQL Support {#podderzhka-sql}
## SQL Support {#sql-support}

ClickHouse supports a declarative query language based on SQL that in many cases matches the SQL standard.
GROUP BY, ORDER BY, subqueries in the FROM, IN, and JOIN clauses, and scalar subqueries are supported.
Correlated subqueries and window functions are not supported.
ClickHouse supports a [declarative query language based on SQL](../sql-reference/index.md) that [in many cases](../sql-reference/ansi.md) matches the SQL standard.

[GROUP BY](../sql-reference/statements/select/group-by.md), [ORDER BY](../sql-reference/statements/select/order-by.md), subqueries in the [FROM](../sql-reference/statements/select/from.md), [IN](../sql-reference/operators/in.md), and [JOIN](../sql-reference/statements/select/join.md) clauses, [window functions](../sql-reference/window-functions/index.md), as well as scalar subqueries, are supported.

Correlated subqueries are not supported, but may become available in the future.

## Vector Engine {#vektornyi-dvizhok}

@@ -368,6 +368,16 @@ ClickHouse checks the conditions for `min_part_size` and `min_part
</http_server_default_response>
```

## hsts_max_age {#hsts-max-age}

Expiry time for HSTS in seconds. The default value is `0` (HSTS is disabled). To enable HSTS, set a positive number; the HSTS max-age will be equal to that number.

**Example**

```xml
<hsts_max_age>600000</hsts_max_age>
```

## include_from {#server_configuration_parameters-include_from}

The path to the file with substitutions.

@@ -922,6 +922,17 @@ log_queries_min_type='EXCEPTION_WHILE_PROCESSING'
log_query_threads=1
```

## log_formatted_queries {#settings-log-formatted-queries}

Allows logging formatted queries to the [system.query_log](../../operations/system-tables/query_log.md) system table.

Possible values:

- 0 — Formatted queries are not logged in the system table.
- 1 — Formatted queries are logged in the system table.

Default value: `0`.

## log_comment {#settings-log-comment}

Sets the value of the `log_comment` field of the [system.query_log](../system-tables/query_log.md) table and the comment text in the server log.

@@ -10,6 +10,9 @@
- `type` ([String](../../sql-reference/data-types/string.md)) — the index type.
- `expr` ([String](../../sql-reference/data-types/string.md)) — the expression used to calculate the index.
- `granularity` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the number of granules in a data block.
- `data_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the size of the compressed data, in bytes.
- `data_uncompressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the size of the uncompressed data, in bytes.
- `marks_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the size of the marks, in bytes.
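The example below can be reproduced with a query of roughly this shape (the `LIMIT` and `FORMAT` choices are assumptions):

``` sql
SELECT * FROM system.data_skipping_indices LIMIT 2 FORMAT Vertical;
```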

**Example**

@@ -26,6 +29,9 @@ name: clicks_idx
type: minmax
expr: clicks
granularity: 1
data_compressed_bytes: 58
data_uncompressed_bytes: 6
marks: 48

Row 2:
──────
@@ -35,4 +41,7 @@ name: contacts_null_idx
type: minmax
expr: assumeNotNull(contacts_null)
granularity: 1
data_compressed_bytes: 58
data_uncompressed_bytes: 6
marks: 48
```

@@ -38,6 +38,12 @@

- `marks_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) – the size of the file with marks.

- `secondary_indices_compressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) – the total size of the compressed data of the secondary indices in the data part. Auxiliary files (for example, files with marks) are not included.

- `secondary_indices_uncompressed_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) – the total size of the uncompressed data of the secondary indices in the data part. Auxiliary files (for example, files with marks) are not included.

- `secondary_indices_marks_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md)) – the size of the file with marks for the secondary indices.

- `modification_time` ([DateTime](../../sql-reference/data-types/datetime.md)) – the time when the directory with the data part was modified. Usually corresponds to the time the part was created.

- `remove_time` ([DateTime](../../sql-reference/data-types/datetime.md)) – the time when the data part became inactive.
@@ -119,6 +125,9 @@ rows: 6
bytes_on_disk: 310
data_compressed_bytes: 157
data_uncompressed_bytes: 91
secondary_indices_compressed_bytes: 58
secondary_indices_uncompressed_bytes: 6
secondary_indices_marks_bytes: 48
marks_bytes: 144
modification_time: 2020-06-18 13:01:49
remove_time: 0000-00-00 00:00:00

@@ -26,6 +26,8 @@ ClickHouse does not delete data from the table automati

To reduce the number of queries registered in the `query_log` table, you can use the [log_queries_probability](../../operations/settings/settings.md#log-queries-probability) setting.

To log formatted queries to the `formatted_query` column, you can use the [log_formatted_queries](../../operations/settings/settings.md#settings-log-formatted-queries) setting.

Columns:

- `type` ([Enum8](../../sql-reference/data-types/enum.md)) — the type of event that occurred when executing the query. Values:
@@ -48,6 +50,7 @@ ClickHouse does not delete data from the table automati
- `memory_usage` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — the query's RAM consumption.
- `current_database` ([String](../../sql-reference/data-types/string.md)) — the name of the current database.
- `query` ([String](../../sql-reference/data-types/string.md)) — the query text.
- `formatted_query` ([String](../../sql-reference/data-types/string.md)) — the text of the formatted query.
- `normalized_query_hash` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — a hash value that is identical for similar queries once literal values are removed.
- `query_kind` ([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md)) — the type of the query.
- `databases` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — the names of the databases present in the query.
@@ -113,74 +116,72 @@ SELECT * FROM system.query_log WHERE type = 'QueryFinish' ORDER BY query_start_t
Row 1:
──────
type: QueryFinish
event_date: 2021-07-28
event_time: 2021-07-28 13:46:56
event_time_microseconds: 2021-07-28 13:46:56.719791
query_start_time: 2021-07-28 13:46:56
query_start_time_microseconds: 2021-07-28 13:46:56.704542
query_duration_ms: 14
read_rows: 8393
read_bytes: 374325
event_date: 2021-11-03
event_time: 2021-11-03 16:13:54
event_time_microseconds: 2021-11-03 16:13:54.953024
query_start_time: 2021-11-03 16:13:54
query_start_time_microseconds: 2021-11-03 16:13:54.952325
query_duration_ms: 0
read_rows: 69
read_bytes: 6187
written_rows: 0
written_bytes: 0
result_rows: 4201
result_bytes: 153024
memory_usage: 4714038
result_rows: 69
result_bytes: 48256
memory_usage: 0
current_database: default
query: SELECT DISTINCT arrayJoin(extractAll(name, '[\\w_]{2,}')) AS res FROM (SELECT name FROM system.functions UNION ALL SELECT name FROM system.table_engines UNION ALL SELECT name FROM system.formats UNION ALL SELECT name FROM system.table_functions UNION ALL SELECT name FROM system.data_type_families UNION ALL SELECT name FROM system.merge_tree_settings UNION ALL SELECT name FROM system.settings UNION ALL SELECT cluster FROM system.clusters UNION ALL SELECT macro FROM system.macros UNION ALL SELECT policy_name FROM system.storage_policies UNION ALL SELECT concat(func.name, comb.name) FROM system.functions AS func CROSS JOIN system.aggregate_function_combinators AS comb WHERE is_aggregate UNION ALL SELECT name FROM system.databases LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.tables LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.dictionaries LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.columns LIMIT 10000) WHERE notEmpty(res)
normalized_query_hash: 6666026786019643712
query_kind: Select
databases: ['system']
tables: ['system.aggregate_function_combinators','system.clusters','system.columns','system.data_type_families','system.databases','system.dictionaries','system.formats','system.functions','system.macros','system.merge_tree_settings','system.settings','system.storage_policies','system.table_engines','system.table_functions','system.tables']
columns: ['system.aggregate_function_combinators.name','system.clusters.cluster','system.columns.name','system.data_type_families.name','system.databases.name','system.dictionaries.name','system.formats.name','system.functions.is_aggregate','system.functions.name','system.macros.macro','system.merge_tree_settings.name','system.settings.name','system.storage_policies.policy_name','system.table_engines.name','system.table_functions.name','system.tables.name']
query: DESCRIBE TABLE system.query_log
formatted_query:
normalized_query_hash: 8274064835331539124
query_kind:
databases: []
tables: []
columns: []
projections: []
views: []
exception_code: 0
exception:
stack_trace:
is_initial_query: 1
user: default
query_id: a3361f6e-a1fd-4d54-9f6f-f93a08bab0bf
query_id: 7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6
address: ::ffff:127.0.0.1
port: 51006
port: 40452
initial_user: default
initial_query_id: a3361f6e-a1fd-4d54-9f6f-f93a08bab0bf
initial_query_id: 7c28bbbb-753b-4eba-98b1-efcbe2b9bdf6
initial_address: ::ffff:127.0.0.1
initial_port: 51006
initial_query_start_time: 2021-07-28 13:46:56
initial_query_start_time_microseconds: 2021-07-28 13:46:56.704542
initial_port: 40452
initial_query_start_time: 2021-11-03 16:13:54
initial_query_start_time_microseconds: 2021-11-03 16:13:54.952325
interface: 1
os_user:
client_hostname:
client_name: ClickHouse client
os_user: sevirov
client_hostname: clickhouse.ru-central1.internal
client_name: ClickHouse
client_revision: 54449
client_version_major: 21
client_version_minor: 8
client_version_patch: 0
client_version_minor: 10
client_version_patch: 1
http_method: 0
http_user_agent:
http_referer:
forwarded_for:
quota_key:
revision: 54453
revision: 54456
log_comment:
thread_ids: [5058,22097,22110,22094]
ProfileEvents.Names: ['Query','SelectQuery','ArenaAllocChunks','ArenaAllocBytes','FunctionExecute','NetworkSendElapsedMicroseconds','SelectedRows','SelectedBytes','ContextLock','RWLockAcquiredReadLocks','RealTimeMicroseconds','UserTimeMicroseconds','SystemTimeMicroseconds','SoftPageFaults','OSCPUWaitMicroseconds','OSCPUVirtualTimeMicroseconds','OSWriteBytes','OSWriteChars']
ProfileEvents.Values: [1,1,39,352256,64,360,8393,374325,412,440,34480,13108,4723,671,19,17828,8192,10240]
Settings.Names: ['load_balancing','max_memory_usage']
Settings.Values: ['random','10000000000']
thread_ids: [30776,31174]
ProfileEvents: {'Query':1,'NetworkSendElapsedMicroseconds':59,'NetworkSendBytes':2643,'SelectedRows':69,'SelectedBytes':6187,'ContextLock':9,'RWLockAcquiredReadLocks':1,'RealTimeMicroseconds':817,'UserTimeMicroseconds':427,'SystemTimeMicroseconds':212,'OSCPUVirtualTimeMicroseconds':639,'OSReadChars':894,'OSWriteChars':319}
Settings: {'load_balancing':'random','max_memory_usage':'10000000000'}
used_aggregate_functions: []
used_aggregate_function_combinators: []
used_database_engines: []
used_data_type_families: ['UInt64','UInt8','Nullable','String','date']
used_data_type_families: []
used_dictionaries: []
used_formats: []
used_functions: ['concat','notEmpty','extractAll']
used_functions: []
used_storages: []
used_table_functions: []
```

**See Also**

- [system.query_thread_log](../../operations/system-tables/query_thread_log.md#system_tables-query_thread_log) — this table contains information about each query execution thread.

[Original article](https://clickhouse.com/docs/ru/operations/system_tables/query_log) <!--hide-->

@@ -165,7 +165,7 @@ ALTER TABLE table_name CLEAR INDEX index_name IN PARTITION partition_expr
## FREEZE PARTITION {#alter_freeze-partition}

``` sql
ALTER TABLE table_name FREEZE [PARTITION partition_expr]
ALTER TABLE table_name FREEZE [PARTITION partition_expr] [WITH NAME 'backup_name']
```

Creates a local backup of the specified partition. If the `PARTITION` expression is omitted, backups are created for all partitions at once.
@@ -179,6 +179,7 @@ ALTER TABLE table_name FREEZE [PARTITION partition_expr]

- `/var/lib/clickhouse/` is the working ClickHouse directory specified in the configuration file;
- `N` is the incremental number of the backup.
- if the `WITH NAME` parameter is specified, the value of the `'backup_name'` parameter is used instead of the incremental number.

!!! note "Note"
    If you use [a set of disks for data storage in a table](../../statements/alter/index.md#table_engine-mergetree-multiple-volumes), the `shadow/N` directory appears on every disk that holds data parts matched by the `PARTITION` expression.

@@ -5,7 +5,22 @@ toc_title: merge

# merge {#merge}

`merge(db_name, 'tables_regexp')` – creates a temporary table of the Merge type. For more information, see the section “Table engines, Merge”.
Creates a temporary [Merge](../../engines/table-engines/special/merge.md) table. The table structure is taken from the first table found that matches the regular expression.

**Syntax**

```sql
merge('db_name', 'tables_regexp')
```
**Arguments**

- `db_name` — Possible values:
    - a database name,
    - an expression that returns a string with a database name, for example, `currentDatabase()`,
    - `REGEXP(expression)`, where `expression` is a regular expression to match database names.

- `tables_regexp` — a regular expression to match table names in the specified database or databases.

**See Also**

- [Merge](../../engines/table-engines/special/merge.md) table engine

@@ -117,6 +117,9 @@ def build_for_lang(lang, args):
        )
    )

    # Clean to be safe if last build finished abnormally
    single_page.remove_temporary_files(lang, args)

    raw_config['nav'] = nav.build_docs_nav(lang, args)

    cfg = config.load_config(**raw_config)

@@ -12,6 +12,7 @@ import test
import util
import website

TEMPORARY_FILE_NAME = 'single.md'

def recursive_values(item):
    if isinstance(item, dict):
@@ -101,6 +102,14 @@ def concatenate(lang, docs_path, single_page_file, nav):

    single_page_file.flush()

def get_temporary_file_name(lang, args):
    return os.path.join(args.docs_dir, lang, TEMPORARY_FILE_NAME)

def remove_temporary_files(lang, args):
    single_md_path = get_temporary_file_name(lang, args)
    if os.path.exists(single_md_path):
        os.unlink(single_md_path)


def build_single_page_version(lang, args, nav, cfg):
    logging.info(f'Building single page version for {lang}')
@@ -109,7 +118,7 @@ def build_single_page_version(lang, args, nav, cfg):
    extra['single_page'] = True
    extra['is_amp'] = False

    single_md_path = os.path.join(args.docs_dir, lang, 'single.md')
    single_md_path = get_temporary_file_name(lang, args)
    with open(single_md_path, 'w') as single_md:
        concatenate(lang, args.docs_dir, single_md, nav)

@@ -226,5 +235,4 @@ def build_single_page_version(lang, args, nav, cfg):

    logging.info(f'Finished building single page version for {lang}')

    if os.path.exists(single_md_path):
        os.unlink(single_md_path)
    remove_temporary_files(lang, args)

@@ -403,6 +403,36 @@ void Client::initialize(Poco::Util::Application & self)
}


void Client::prepareForInteractive()
{
    clearTerminal();
    showClientVersion();

    if (delayed_interactive)
        std::cout << std::endl;

    /// Load Warnings at the beginning of connection
    if (!config().has("no-warnings"))
    {
        try
        {
            std::vector<String> messages = loadWarningMessages();
            if (!messages.empty())
            {
                std::cout << "Warnings:" << std::endl;
                for (const auto & message : messages)
                    std::cout << " * " << message << std::endl;
                std::cout << std::endl;
            }
        }
        catch (...)
        {
            /// Ignore exception
        }
    }
}


int Client::main(const std::vector<std::string> & /*args*/)
try
{
@@ -429,36 +459,11 @@
    processConfig();

    if (is_interactive)
    {
        clearTerminal();
        showClientVersion();
    }

    connect();

    if (is_interactive)
    if (is_interactive && !delayed_interactive)
    {
        /// Load Warnings at the beginning of connection
        if (!config().has("no-warnings"))
        {
            try
            {
                std::vector<String> messages = loadWarningMessages();
                if (!messages.empty())
                {
                    std::cout << "Warnings:" << std::endl;
                    for (const auto & message : messages)
                        std::cout << " * " << message << std::endl;
                    std::cout << std::endl;
                }
            }
            catch (...)
            {
                /// Ignore exception
            }
        }

        prepareForInteractive();
        runInteractive();
    }
    else
@@ -482,6 +487,12 @@
            // case so that at least we don't lose an error.
            return -1;
        }

        if (delayed_interactive)
        {
            prepareForInteractive();
            runInteractive();
        }
    }

    return 0;
@@ -555,8 +566,9 @@ void Client::connect()
    if (is_interactive)
    {
        std::cout << "Connected to " << server_name << " server version " << server_version << " revision " << server_revision << "."
            << std::endl
            << std::endl;
        if (!delayed_interactive)
            std::cout << std::endl;

        auto client_version_tuple = std::make_tuple(VERSION_MAJOR, VERSION_MINOR, VERSION_PATCH);
        auto server_version_tuple = std::make_tuple(server_version_major, server_version_minor, server_version_patch);
@@ -1156,11 +1168,11 @@ void Client::processConfig()
    /// - stdin is not a terminal. In this case queries are read from it.
    /// - -qf (--queries-file) command line option is present.
    ///   The value of the option is used as file with query (or of multiple queries) to execute.
    if (stdin_is_a_tty && !config().has("query") && queries_files.empty())
    {
    if (config().has("query") && config().has("queries-file"))
        throw Exception("Specify either `query` or `queries-file` option", ErrorCodes::BAD_ARGUMENTS);

    delayed_interactive = config().has("interactive") && (config().has("query") || config().has("queries-file"));
    if (stdin_is_a_tty
        && (delayed_interactive || (!config().has("query") && queries_files.empty())))
    {
        is_interactive = true;
    }
    else
@@ -20,6 +20,7 @@ protected:
    bool processWithFuzzing(const String & full_query) override;

    void connect() override;
    void prepareForInteractive() override;
    void processError(const String & query) const override;
    String getName() const override { return "client"; }

@@ -396,6 +396,14 @@ void LocalServer::connect()
}


void LocalServer::prepareForInteractive()
{
    clearTerminal();
    showClientVersion();
    std::cerr << std::endl;
}


int LocalServer::main(const std::vector<std::string> & /*args*/)
try
{
@@ -406,7 +414,10 @@
    std::cout << std::fixed << std::setprecision(3);
    std::cerr << std::fixed << std::setprecision(3);

    is_interactive = stdin_is_a_tty && !config().has("query") && !config().has("table-structure") && queries_files.empty();
    is_interactive = stdin_is_a_tty
        && (config().hasOption("interactive")
            || (!config().has("query") && !config().has("table-structure") && queries_files.empty()));

    if (!is_interactive)
    {
        /// We will terminate process on error
@@ -427,17 +438,20 @@
    applyCmdSettings(global_context);
    connect();

    if (is_interactive)
    if (is_interactive && !delayed_interactive)
    {
        clearTerminal();
        showClientVersion();
        std::cerr << std::endl;

        prepareForInteractive();
        runInteractive();
    }
    else
    {
        runNonInteractive();

        if (delayed_interactive)
        {
            prepareForInteractive();
            runInteractive();
        }
    }

    cleanup();
@@ -462,7 +476,8 @@ catch (...)

void LocalServer::processConfig()
{
    if (is_interactive)
    delayed_interactive = config().has("interactive") && (config().has("query") || config().has("queries-file"));
    if (is_interactive && !delayed_interactive)
    {
        if (config().has("query") && config().has("queries-file"))
            throw Exception("Specify either `query` or `queries-file` option", ErrorCodes::BAD_ARGUMENTS);
@@ -474,6 +489,11 @@ void LocalServer::processConfig()
    }
    else
    {
        if (delayed_interactive)
        {
            load_suggestions = true;
        }

        need_render_progress = config().getBool("progress", false);
        echo_queries = config().hasOption("echo") || config().hasOption("verbose");
        ignore_error = config().getBool("ignore-error", false);
@@ -34,6 +34,7 @@ protected:
    bool executeMultiQuery(const String & all_queries_text) override;

    void connect() override;
    void prepareForInteractive() override;
    void processError(const String & query) const override;
    String getName() const override { return "local"; }

@@ -10,10 +10,15 @@
#include <base/LocalDate.h>
#include <base/LineReader.h>
#include <base/scope_guard_safe.h>
#include "Common/Exception.h"
#include "Common/getNumberOfPhysicalCPUCores.h"
#include "Common/tests/gtest_global_context.h"
#include "Common/typeid_cast.h"
#include "Columns/ColumnString.h"
#include "Columns/ColumnsNumber.h"
#include "Core/Block.h"
#include "Core/Protocol.h"
#include "Formats/FormatFactory.h"

#include <Common/config_version.h>
#include <Common/UTF8Helpers.h>
@@ -77,6 +82,7 @@ namespace ErrorCodes
    extern const int INVALID_USAGE_OF_INPUT;
    extern const int CANNOT_SET_SIGNAL_HANDLER;
    extern const int UNRECOGNIZED_ARGUMENTS;
    extern const int LOGICAL_ERROR;
}

}
@@ -842,6 +848,13 @@ void ClientBase::processInsertQuery(const String & query_to_execute, ASTPtr pars

void ClientBase::sendData(Block & sample, const ColumnsDescription & columns_description, ASTPtr parsed_query)
{
    /// Get columns description from variable or (if it was empty) create it from sample.
    auto columns_description_for_query = columns_description.empty() ? ColumnsDescription(sample.getNamesAndTypesList()) : columns_description;
    if (columns_description_for_query.empty())
    {
        throw Exception(ErrorCodes::LOGICAL_ERROR, "Column description is empty and it can't be built from sample from table. Cannot execute query.");
    }

    /// If INSERT data must be sent.
    auto * parsed_insert_query = parsed_query->as<ASTInsertQuery>();
    if (!parsed_insert_query)
@@ -863,7 +876,8 @@ void ClientBase::sendData(Block & sample, const ColumnsDescription & columns_des
        /// Get name of this file (path to file)
        const auto & in_file_node = parsed_insert_query->infile->as<ASTLiteral &>();
        const auto in_file = in_file_node.value.safeGet<std::string>();

        /// Get name of table
        const auto table_name = parsed_insert_query->table_id.getTableName();
        std::string compression_method;
        /// Compression method can be specified in query
        if (parsed_insert_query->compression)
@@ -872,13 +886,35 @@
            compression_method = compression_method_node.value.safeGet<std::string>();
        }

        /// Otherwise, it will be detected from file name automatically (by chooseCompressionMethod)
        /// Buffer for reading from file is created and wrapped with appropriate compression method
        auto in_buffer = wrapReadBufferWithCompressionMethod(std::make_unique<ReadBufferFromFile>(in_file), chooseCompressionMethod(in_file, compression_method));
        /// Create temporary storage file, to support globs and parallel reading
        StorageFile::CommonArguments args{
            WithContext(global_context),
            parsed_insert_query->table_id,
            parsed_insert_query->format,
            getFormatSettings(global_context),
            compression_method,
            columns_description_for_query,
            ConstraintsDescription{},
            String{},
        };
        StoragePtr storage = StorageFile::create(in_file, global_context->getUserFilesPath(), args);
        storage->startup();
        SelectQueryInfo query_info;

        try
        {
            sendDataFrom(*in_buffer, sample, columns_description, parsed_query);
            sendDataFromPipe(
                storage->read(
                    sample.getNames(),
                    storage->getInMemoryMetadataPtr(),
                    query_info,
                    global_context,
                    {},
                    global_context->getSettingsRef().max_block_size,
                    getNumberOfPhysicalCPUCores()
                ),
                parsed_query
            );
        }
        catch (Exception & e)
        {
@@ -892,7 +928,7 @@ void ClientBase::sendData(Block & sample, const ColumnsDescription & columns_des
        ReadBufferFromMemory data_in(parsed_insert_query->data, parsed_insert_query->end - parsed_insert_query->data);
        try
        {
            sendDataFrom(data_in, sample, columns_description, parsed_query);
            sendDataFrom(data_in, sample, columns_description_for_query, parsed_query);
        }
        catch (Exception & e)
        {
@@ -917,7 +953,7 @@ void ClientBase::sendData(Block & sample, const ColumnsDescription & columns_des
        /// Send data read from stdin.
        try
        {
            sendDataFrom(std_in, sample, columns_description, parsed_query);
            sendDataFrom(std_in, sample, columns_description_for_query, parsed_query);
        }
        catch (Exception & e)
        {
@@ -952,6 +988,11 @@ void ClientBase::sendDataFrom(ReadBuffer & buf, Block & sample, const ColumnsDes
        });
    }

    sendDataFromPipe(std::move(pipe), parsed_query);
}

void ClientBase::sendDataFromPipe(Pipe&& pipe, ASTPtr parsed_query)
{
    QueryPipeline pipeline(std::move(pipe));
    PullingAsyncPipelineExecutor executor(pipeline);

@@ -1634,6 +1675,8 @@ void ClientBase::init(int argc, char ** argv)
        ("hardware-utilization", "print hardware utilization information in progress bar")
        ("print-profile-events", po::value(&profile_events.print)->zero_tokens(), "Printing ProfileEvents packets")
        ("profile-events-delay-ms", po::value<UInt64>()->default_value(profile_events.delay_ms), "Delay between printing `ProfileEvents` packets (-1 - print only totals, 0 - print every single packet)")

        ("interactive", "Process queries-file or --query query and start interactive mode")
    ;

    addOptions(options_description);
@@ -1703,6 +1746,9 @@
        config().setString("history_file", options["history_file"].as<std::string>());
    if (options.count("verbose"))
        config().setBool("verbose", true);
    if (options.count("interactive"))
        config().setBool("interactive", true);

    if (options.count("log-level"))
        Poco::Logger::root().setLevel(options["log-level"].as<std::string>());
    if (options.count("server_logs_file"))
@@ -10,6 +10,8 @@
#include <Client/Suggest.h>
#include <Client/QueryFuzzer.h>
#include <boost/program_options.hpp>
#include <Storages/StorageFile.h>
#include <Storages/SelectQueryInfo.h>

namespace po = boost::program_options;

@@ -57,6 +59,7 @@ protected:

    virtual bool executeMultiQuery(const String & all_queries_text) = 0;
    virtual void connect() = 0;
    virtual void prepareForInteractive() = 0;
    virtual void processError(const String & query) const = 0;
    virtual String getName() const = 0;

@@ -120,6 +123,7 @@ private:
    void sendData(Block & sample, const ColumnsDescription & columns_description, ASTPtr parsed_query);
    void sendDataFrom(ReadBuffer & buf, Block & sample,
                      const ColumnsDescription & columns_description, ASTPtr parsed_query);
    void sendDataFromPipe(Pipe && pipe, ASTPtr parsed_query);
    void sendExternalTables(ASTPtr parsed_query);

    void initBlockOutputStream(const Block & block, ASTPtr parsed_query);
@@ -138,6 +142,7 @@ private:
|
||||
protected:
|
||||
bool is_interactive = false; /// Use either interactive line editing interface or batch mode.
|
||||
bool is_multiquery = false;
|
||||
bool delayed_interactive = false;
|
||||
|
||||
bool echo_queries = false; /// Print queries before execution in batch mode.
|
||||
bool ignore_error = false; /// In case of errors, don't print error message, continue to next query. Only applicable for non-interactive mode.
|
||||

@ -3423,7 +3423,6 @@ Pipe MergeTreeData::alterPartition(

        case PartitionCommand::MoveDestinationType::TABLE:
        {
-            checkPartitionCanBeDropped(command.partition);
            String dest_database = query_context->resolveDatabase(command.to_database);
            auto dest_storage = DatabaseCatalog::instance().getTable({dest_database, command.to_table}, query_context);
            movePartitionToTable(dest_storage, command.partition, query_context);
@ -3445,7 +3444,8 @@ Pipe MergeTreeData::alterPartition(

        case PartitionCommand::REPLACE_PARTITION:
        {
-            checkPartitionCanBeDropped(command.partition);
+            if (command.replace)
+                checkPartitionCanBeDropped(command.partition);
            String from_database = query_context->resolveDatabase(command.from_database);
            auto from_storage = DatabaseCatalog::instance().getTable({from_database, command.from_table}, query_context);
            replacePartitionFrom(from_storage, command.partition, command.replace, query_context);
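
The distinction the new guard draws, sketched below with table names borrowed from the integration test later in this commit (not from this hunk):

``` bash
# ATTACH PARTITION ... FROM only adds a copy of the source partition, dropping
# nothing, so it should no longer be subject to max_partition_size_to_drop.
clickhouse-client --query "ALTER TABLE db.destination ATTACH PARTITION 0 FROM db.source_1"

# REPLACE PARTITION ... FROM first discards the destination's partition
# (command.replace is true here), so the drop-size check still applies.
clickhouse-client --query "ALTER TABLE db.destination REPLACE PARTITION 0 FROM db.source_2"
```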

@ -0,0 +1,4 @@
<clickhouse>
    <max_table_size_to_drop>1</max_table_size_to_drop>
    <max_partition_size_to_drop>1</max_partition_size_to_drop>
</clickhouse>
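
This config shrinks both drop-safety thresholds to a single byte, so any operation that would drop a non-empty table or partition is refused. A hedged sketch of the behaviour the integration test below leans on, assuming a server running with this config (the flag path is the one the test itself uses):

``` bash
# With max_partition_size_to_drop=1, dropping a non-empty partition should be
# refused with an exception instead of executing.
clickhouse-client --query "ALTER TABLE db.destination DROP PARTITION 0" || echo "refused"

# Creating the force-drop flag lifts the limit for the next drop.
touch /var/lib/clickhouse/flags/force_drop_table
chmod a=rw /var/lib/clickhouse/flags/force_drop_table
clickhouse-client --query "ALTER TABLE db.destination DROP PARTITION 0"
```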
@ -0,0 +1,50 @@
import pytest

from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)
node = cluster.add_instance('node', main_configs=["configs/config.xml"], with_zookeeper=True)


@pytest.fixture(scope="module")
def started_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        cluster.shutdown()

def create_force_drop_flag(node):
    force_drop_flag_path = "/var/lib/clickhouse/flags/force_drop_table"
    node.exec_in_container(["bash", "-c", "touch {} && chmod a=rw {}".format(force_drop_flag_path, force_drop_flag_path)], user="root")

@pytest.mark.parametrize("engine", ['Ordinary', 'Atomic'])
def test_attach_partition_with_large_destination(started_cluster, engine):
    # Initialize
    node.query("CREATE DATABASE db ENGINE={}".format(engine))
    node.query("CREATE TABLE db.destination (n UInt64) ENGINE=ReplicatedMergeTree('/test/destination', 'r1') ORDER BY n PARTITION BY n % 2")
    node.query("CREATE TABLE db.source_1 (n UInt64) ENGINE=ReplicatedMergeTree('/test/source_1', 'r1') ORDER BY n PARTITION BY n % 2")
    node.query("INSERT INTO db.source_1 VALUES (1), (2), (3), (4)")
    node.query("CREATE TABLE db.source_2 (n UInt64) ENGINE=ReplicatedMergeTree('/test/source_2', 'r1') ORDER BY n PARTITION BY n % 2")
    node.query("INSERT INTO db.source_2 VALUES (5), (6), (7), (8)")

    # Attach partition when destination partition is empty
    node.query("ALTER TABLE db.destination ATTACH PARTITION 0 FROM db.source_1")
    assert node.query("SELECT n FROM db.destination ORDER BY n") == "2\n4\n"

    # REPLACE PARTITION should still respect max_partition_size_to_drop
    assert node.query_and_get_error("ALTER TABLE db.destination REPLACE PARTITION 0 FROM db.source_2")
    assert node.query("SELECT n FROM db.destination ORDER BY n") == "2\n4\n"

    # Attach partition when destination partition is larger than max_partition_size_to_drop
    node.query("ALTER TABLE db.destination ATTACH PARTITION 0 FROM db.source_2")
    assert node.query("SELECT n FROM db.destination ORDER BY n") == "2\n4\n6\n8\n"

    # Cleanup
    create_force_drop_flag(node)
    node.query("DROP TABLE db.source_1 SYNC")
    create_force_drop_flag(node)
    node.query("DROP TABLE db.source_2 SYNC")
    create_force_drop_flag(node)
    node.query("DROP TABLE db.destination SYNC")
    node.query("DROP DATABASE db")
@ -0,0 +1,5 @@
1
2
Correct
1
2
tests/queries/0_stateless/02048_parallel_reading_from_infile.sh (new executable file, 44 lines)
@ -0,0 +1,44 @@
#!/usr/bin/env bash

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh

[ -e "${CLICKHOUSE_TMP}"/test_infile_parallel.gz ] && rm "${CLICKHOUSE_TMP}"/test_infile_parallel.gz
[ -e "${CLICKHOUSE_TMP}"/test_infile_parallel ] && rm "${CLICKHOUSE_TMP}"/test_infile_parallel
[ -e "${CLICKHOUSE_TMP}"/test_infile_parallel_1 ] && rm "${CLICKHOUSE_TMP}"/test_infile_parallel_1
[ -e "${CLICKHOUSE_TMP}"/test_infile_parallel_2 ] && rm "${CLICKHOUSE_TMP}"/test_infile_parallel_2
[ -e "${CLICKHOUSE_TMP}"/test_infile_parallel_3 ] && rm "${CLICKHOUSE_TMP}"/test_infile_parallel_3

echo -e "102\t2" > "${CLICKHOUSE_TMP}"/test_infile_parallel
echo -e "102\tsecond" > "${CLICKHOUSE_TMP}"/test_infile_parallel_1
echo -e "103\tfirst" > "${CLICKHOUSE_TMP}"/test_infile_parallel_2
echo -e "103" > "${CLICKHOUSE_TMP}"/test_infile_parallel_3

gzip "${CLICKHOUSE_TMP}"/test_infile_parallel

${CLICKHOUSE_CLIENT} --multiquery <<EOF
DROP TABLE IF EXISTS test_infile_parallel;
CREATE TABLE test_infile_parallel (Id Int32, Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory();
SET input_format_allow_errors_num=1;
INSERT INTO test_infile_parallel FROM INFILE '${CLICKHOUSE_TMP}/test_infile_parallel*' FORMAT TSV;
SELECT count() FROM test_infile_parallel WHERE Value='first';
SELECT count() FROM test_infile_parallel WHERE Value='second';
EOF

# Error code is 36 (BAD_ARGUMENTS). It is not ignored.
${CLICKHOUSE_CLIENT} --multiquery "
DROP TABLE IF EXISTS test_infile_parallel;
CREATE TABLE test_infile_parallel (Id Int32, Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory();
SET input_format_allow_errors_num=0;
INSERT INTO test_infile_parallel FROM INFILE '${CLICKHOUSE_TMP}/test_infile_parallel*' FORMAT TSV;
" 2>&1 | grep -q "36" && echo "Correct" || echo 'Fail'

${CLICKHOUSE_LOCAL} --multiquery <<EOF
DROP TABLE IF EXISTS test_infile_parallel;
SET input_format_allow_errors_num=1;
CREATE TABLE test_infile_parallel (Id Int32, Value Enum('first' = 1, 'second' = 2)) ENGINE=Memory();
INSERT INTO test_infile_parallel FROM INFILE '${CLICKHOUSE_TMP}/test_infile_parallel*' FORMAT TSV;
SELECT count() FROM test_infile_parallel WHERE Value='first';
SELECT count() FROM test_infile_parallel WHERE Value='second';
EOF
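
The piece of the feature this test exercises is a glob in `INFILE`: the client expands the pattern and reads all matching files in parallel (the test above shows a gzipped file being picked up and decompressed by its extension). A minimal standalone sketch with hypothetical file names, assuming a table `t (Id Int32, Value String)` already exists:

``` bash
# Two demo TSV files matched by one INFILE glob (names are illustrative).
echo -e "1\tfirst"  > /tmp/infile_demo_1.tsv
echo -e "2\tsecond" > /tmp/infile_demo_2.tsv

clickhouse-client --query "INSERT INTO t FROM INFILE '/tmp/infile_demo_*.tsv' FORMAT TSV"
```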
@ -0,0 +1,27 @@
#!/usr/bin/expect -f
# Tags: no-parallel, no-fasttest

log_user 0
set timeout 20
match_max 100000

# A default timeout action is to fail
expect_after {
    timeout {
        exit 1
    }
}


spawn bash -c "\$CLICKHOUSE_TESTS_DIR/helpers/02112_prepare.sh"

set basedir [file dirname $argv0]
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_CLIENT --disable_suggestion --interactive --queries-file \$CURDIR/file_02112"
expect ":) "

send -- "select * from t format TSV\r"
expect "1"
expect ":) "

spawn bash -c "\$CLICKHOUSE_TESTS_DIR/helpers/02112_clean.sh"
tests/queries/0_stateless/02112_delayed_clickhouse_local.expect (new executable file, 24 lines)
@ -0,0 +1,24 @@
#!/usr/bin/expect -f
# Tags: no-unbundled, no-fasttest

log_user 0
set timeout 20
match_max 100000

# A default timeout action is to fail
expect_after {
    timeout {
        exit 1
    }
}

set basedir [file dirname $argv0]
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_LOCAL --disable_suggestion --interactive --query 'create table t(i Int32) engine=Memory; insert into t select 1'"
expect ":) "

send -- "select * from t format TSV\r"
expect "1"
expect ":) "

send -- "exit\r"
expect eof
@ -0,0 +1,27 @@
#!/usr/bin/expect -f
# Tags: no-parallel, no-fasttest

log_user 0
set timeout 20
match_max 100000

# A default timeout action is to fail
expect_after {
    timeout {
        exit 1
    }
}


spawn bash -c "\$CLICKHOUSE_TESTS_DIR/helpers/02112_prepare.sh"

set basedir [file dirname $argv0]
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_LOCAL --disable_suggestion --interactive --queries-file \$CURDIR/file_02112"
expect ":) "

send -- "select * from t format TSV\r"
expect "1"
expect ":) "

spawn bash -c "\$CLICKHOUSE_TESTS_DIR/helpers/02112_clean.sh"
tests/queries/0_stateless/helpers/02112_clean.sh (new executable file, 6 lines)
@ -0,0 +1,6 @@
#!/usr/bin/env bash

FILE=${CURDIR}/file_02112
if [ -f "$FILE" ]; then
    rm "$FILE"
fi
tests/queries/0_stateless/helpers/02112_prepare.sh (new executable file, 7 lines)
@ -0,0 +1,7 @@
#!/usr/bin/env bash

FILE=${CURDIR}/file_02112
if [ -f "$FILE" ]; then
    rm "$FILE"
fi
echo "drop table if exists t;create table t(i Int32) engine=Memory; insert into t select 1" >> "$FILE"
@ -35,7 +35,7 @@
    <div class="row mb-3">
        <div class="col">
            <h3 class="my-3">Full results</h3>
-            <div id="comparison_table"></div>
+            <div id="comparison_table" class="overflow-auto"></div>
        </div>
    </div>

@ -35,7 +35,7 @@
    <div class="row mb-3">
        <div class="col">
            <h3 class="my-3">Full results</h3>
-            <div id="comparison_table"></div>
+            <div id="comparison_table" class="overflow-auto"></div>
        </div>
    </div>

File diff suppressed because one or more lines are too long
website/src/scss/utilities/_overflow.scss (new file, 3 lines)
@ -0,0 +1,3 @@
.overflow-auto {
    overflow: auto;
}