Merge branch 'master' of github.com:ClickHouse/ClickHouse into parallel-parsing-input-format
This commit is contained in:
commit 2b90b4e01d
@ -20,7 +20,8 @@ toc_title: Cloud

## Altinity.Cloud {#altinity.cloud}

[Altinity.Cloud](https://altinity.com/cloud-database/) is a fully managed ClickHouse-as-a-Service for the Amazon public cloud.

- Fast deployment of ClickHouse clusters on Amazon resources
- Easy scale-out/scale-in as well as vertical scaling of nodes
- Isolated per-tenant VPCs with public endpoint or VPC peering
@ -281,8 +281,9 @@ After this, you can launch the server, create a `MergeTree` table, move the data

If the data in ZooKeeper was lost or damaged, you can save data by moving it to an unreplicated table as described above.

-**See also**
+**See Also**

- [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
- [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold)

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replication/) <!--hide-->
@ -80,18 +80,27 @@ List of prefixes for [custom settings](../../operations/settings/index.md#custom

- [Custom settings](../../operations/settings/index.md#custom_settings)

-## core_dump
-
-Configures soft limit for core dump file size, one gigabyte by default.
+## core_dump {#server_configuration_parameters-core_dump}
+
+Configures soft limit for core dump file size.
+
+Possible values:
+
+- Positive integer.
+
+Default value: `1073741824` (1 GB).
+
+!!! info "Note"
+    Hard limit is configured via system tools
+
+**Example**

```xml
<core_dump>
    <size_limit>1073741824</size_limit>
</core_dump>
```

-(Hard limit is configured via system tools)

## default_database {#default-database}

The default database.
@ -431,7 +440,7 @@ Limits total RAM usage by the ClickHouse server.

Possible values:

- Positive integer.
-- 0 (auto).
+- 0 — Auto.

Default value: `0`.
@ -1312,7 +1312,7 @@ See also:

Write to a quorum timeout in milliseconds. If the timeout has passed and no write has taken place yet, ClickHouse will generate an exception and the client must repeat the query to write the same block to the same or any other replica.

-Default value: 600000 milliseconds (ten minutes).
+Default value: 600 000 milliseconds (ten minutes).
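For illustration only (not part of this commit): a sketch of overriding this timeout for one session, assuming the setting described here is `insert_quorum_timeout`, which the hunk itself does not name.

``` sql
-- Hypothetical override: allow quorum writes up to 20 minutes for this session.
SET insert_quorum_timeout = 1200000; -- value is in milliseconds
```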

See also:
@ -2470,4 +2470,34 @@ Possible values:

Default value: `0`.

+## data_type_default_nullable {#data_type_default_nullable}
+
+Allows data types without explicit modifiers [NULL or NOT NULL](../../sql-reference/statements/create/table.md#null-modifiers) in column definition to be [Nullable](../../sql-reference/data-types/nullable.md#data_type-nullable).
+
+Possible values:
+
+- 1 — The data types in column definitions are set to `Nullable` by default.
+- 0 — The data types in column definitions are set to not `Nullable` by default.
+
+Default value: `0`.
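A brief illustrative sketch of the setting's effect (the table name is made up for the example):

``` sql
SET data_type_default_nullable = 1;
-- With the setting enabled, a bare type in a column definition becomes Nullable.
CREATE TABLE t_example (x Int32) ENGINE = Memory; -- x is treated as Nullable(Int32)
INSERT INTO t_example VALUES (NULL); -- accepted, because x is Nullable
```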

+## execute_merges_on_single_replica_time_threshold {#execute-merges-on-single-replica-time-threshold}
+
+Enables special logic to perform merges on replicas.
+
+Possible values:
+
+- Positive integer (in seconds).
+- 0 — Special merges logic is not used. Merges happen in the usual way on all the replicas.
+
+Default value: `0`.
+
+**Usage**
+
+Selects one replica to perform the merge on. Sets the time threshold from the start of the merge. Other replicas wait for the merge to finish, then download the result. If the time threshold passes and the selected replica does not perform the merge, then the merge is performed on other replicas as usual.
+
+High values for that threshold may lead to replication delays.
+
+It can be useful when merges are CPU-bound rather than IO-bound (performing heavy data compression, calculating aggregate functions or default expressions that require a large amount of calculations, or just a very high number of tiny merges).
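As an illustrative sketch (not from this commit), the threshold might be applied to a replicated table like this; the table name is a placeholder, and applying it per table via `MODIFY SETTING` is an assumption:

``` sql
-- Give one chosen replica up to 120 seconds to complete a merge
-- before the other replicas fall back to merging on their own.
ALTER TABLE replicated_events
    MODIFY SETTING execute_merges_on_single_replica_time_threshold = 120;
```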

[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) <!-- hide -->
@ -1,6 +1,6 @@

# system.distribution_queue {#system_tables-distribution_queue}

-Contains information about local files that are in the queue to be sent to the shards. This local files contain new parts that are created by inserting new data into the Distributed table in asynchronous mode.
+Contains information about local files that are in the queue to be sent to the shards. These local files contain new parts that are created by inserting new data into the Distributed table in asynchronous mode.

Columns:
@ -406,6 +406,14 @@ Example:

        <null_value>??</null_value>
    </attribute>
    ...
</structure>
<layout>
    <ip_trie>
        <!-- Key attribute `prefix` can be retrieved via dictGetString. -->
        <!-- This option increases memory usage. -->
        <access_to_key_from_attributes>true</access_to_key_from_attributes>
    </ip_trie>
</layout>
```

or

@ -435,6 +443,6 @@ dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))

Other types are not supported yet. The function returns the attribute for the prefix that corresponds to this IP address. If there are overlapping prefixes, the most specific one is returned.

-Data is stored in a `trie`. It must completely fit into RAM.
+Data must completely fit into RAM.
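With `access_to_key_from_attributes` enabled, the key can be read back like any other attribute. An illustrative sketch, assuming the dictionary from the example above is named `prefix` and the queried address falls inside one of its networks:

``` sql
-- Returns the matched network itself, e.g. '202.79.32.0/20' (value is illustrative).
SELECT dictGetString('prefix', 'prefix', tuple(IPv4StringToNum('202.79.32.10')));
```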

[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_layout/) <!--hide-->
@ -1774,4 +1774,44 @@ Result:

UNSUPPORTED_METHOD
```

## tcpPort {#tcpPort}

Returns the [native interface](../../interfaces/tcp.md) TCP port number that this server listens on.

**Syntax**

``` sql
tcpPort()
```

**Parameters**

- None.

**Returned value**

- The TCP port number.

Type: [UInt16](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT tcpPort();
```

Result:

``` text
┌─tcpPort()─┐
│      9000 │
└───────────┘
```

**See Also**

- [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port)

[Original article](https://clickhouse.tech/docs/en/query_language/functions/other_functions/) <!--hide-->
@ -558,4 +558,46 @@ Result:

└─────┘
```

## encodeXMLComponent {#encode-xml-component}

Escapes characters to place a string into an XML text node or attribute.

The following characters are replaced with the five predefined XML entities: `<`, `&`, `>`, `"`, `'`.

**Syntax**

``` sql
encodeXMLComponent(x)
```

**Parameters**

- `x` — The sequence of characters. [String](../../sql-reference/data-types/string.md).

**Returned value(s)**

- The sequence of characters with escape characters.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

``` sql
SELECT encodeXMLComponent('Hello, "world"!');
SELECT encodeXMLComponent('<123>');
SELECT encodeXMLComponent('&clickhouse');
SELECT encodeXMLComponent('\'foo\'');
```

Result:

``` text
Hello, &quot;world&quot;!
&lt;123&gt;
&amp;clickhouse
&apos;foo&apos;
```

[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_functions/) <!--hide-->
@ -400,7 +400,8 @@ Result:

└──────────────────────────────────────────────────────────────────────────────────────────┘
```

-**See also**
+**See Also**

- [extractAllGroupsVertical](#extractallgroups-vertical)

## extractAllGroupsVertical {#extractallgroups-vertical}

@ -440,7 +441,8 @@ Result:

└────────────────────────────────────────────────────────────────────────────────────────┘
```

-**See also**
+**See Also**

- [extractAllGroupsHorizontal](#extractallgroups-horizontal)

## like(haystack, pattern), haystack LIKE pattern operator {#function-like}

@ -590,8 +592,55 @@ Result:

└───────────────────────────────┘
```

-[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_search_functions/) <!--hide-->
+## countMatches(haystack, pattern) {#countmatcheshaystack-pattern}
+
+Returns the number of regular expression matches for a `pattern` in a `haystack`.
+
+**Syntax**
+
+``` sql
+countMatches(haystack, pattern)
+```
+
+**Parameters**
+
+- `haystack` — The string to search in. [String](../../sql-reference/syntax.md#syntax-string-literal).
+- `pattern` — The regular expression with [re2 syntax](https://github.com/google/re2/wiki/Syntax). [String](../../sql-reference/data-types/string.md).
+
+**Returned value**
+
+- The number of matches.
+
+Type: [UInt64](../../sql-reference/data-types/int-uint.md).
+
+**Examples**
+
+Query:
+
+``` sql
+SELECT countMatches('foobar.com', 'o+');
+```
+
+Result:
+
+``` text
+┌─countMatches('foobar.com', 'o+')─┐
+│                                2 │
+└──────────────────────────────────┘
+```
+
+Query:
+
+``` sql
+SELECT countMatches('aaaa', 'aa');
+```
+
+Result:
+
+``` text
+┌─countMatches('aaaa', 'aa')─┐
+│                          2 │
+└────────────────────────────┘
+```
+
+[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_search_functions/) <!--hide-->
@ -16,8 +16,8 @@ By default, tables are created only on the current server. Distributed DDL queri

``` sql
CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
(
-    name1 [type1] [DEFAULT|MATERIALIZED|ALIAS expr1] [compression_codec] [TTL expr1],
-    name2 [type2] [DEFAULT|MATERIALIZED|ALIAS expr2] [compression_codec] [TTL expr2],
+    name1 [type1] [NULL|NOT NULL] [DEFAULT|MATERIALIZED|ALIAS expr1] [compression_codec] [TTL expr1],
+    name2 [type2] [NULL|NOT NULL] [DEFAULT|MATERIALIZED|ALIAS expr2] [compression_codec] [TTL expr2],
    ...
) ENGINE = engine
```

@ -57,6 +57,14 @@ In all cases, if `IF NOT EXISTS` is specified, the query won’t return an error

There can be other clauses after the `ENGINE` clause in the query. See detailed documentation on how to create tables in the descriptions of [table engines](../../../engines/table-engines/index.md#table_engines).

+## NULL Or NOT NULL Modifiers {#null-modifiers}
+
+`NULL` and `NOT NULL` modifiers after the data type in a column definition allow or disallow it to be [Nullable](../../../sql-reference/data-types/nullable.md#data_type-nullable).
+
+If the type is not `Nullable` and `NULL` is specified, the column is treated as `Nullable`; if `NOT NULL` is specified, it is not. For example, `INT NULL` is the same as `Nullable(INT)`. If the type is already `Nullable` and the `NULL` or `NOT NULL` modifiers are specified, an exception is thrown.
+
+See also the [data_type_default_nullable](../../../operations/settings/settings.md#data_type_default_nullable) setting.
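An illustrative sketch of the modifiers (the table names are made up for the example):

``` sql
-- x becomes Nullable(Int32); y stays plain Int32.
CREATE TABLE t_modifiers
(
    x INT NULL,
    y INT NOT NULL
)
ENGINE = Memory;

-- Throws an exception: the type is already Nullable, so a modifier is not allowed.
-- CREATE TABLE t_bad (x Nullable(INT) NULL) ENGINE = Memory;
```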

## Default Values {#create-default-values}

The column description can specify an expression for a default value, in one of the following ways: `DEFAULT expr`, `MATERIALIZED expr`, `ALIAS expr`.
@ -367,6 +367,12 @@ Example:

        <null_value>??</null_value>
    </attribute>
    ...
</structure>
<layout>
    <ip_trie>
        <access_to_key_from_attributes>true</access_to_key_from_attributes>
    </ip_trie>
</layout>
```

or

@ -396,6 +402,6 @@ dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))

Other types are not yet supported. The function returns the attribute for the prefix corresponding to this IP address. If there are overlapping prefixes, the most specific one is returned.

-Data is stored in a `trie`. It must fit entirely in RAM.
+Data must fit entirely in RAM.

[Original article](https://clickhouse.tech/docs/en/query_language/dicts/external_dicts_dict_layout/) <!--hide-->
@ -362,6 +362,12 @@ LAYOUT(DIRECT())

        <null_value>??</null_value>
    </attribute>
    ...
</structure>
<layout>
    <ip_trie>
        <access_to_key_from_attributes>true</access_to_key_from_attributes>
    </ip_trie>
</layout>
```

or
@ -12,10 +12,21 @@ toc_title: "\u041f\u043e\u0441\u0442\u0430\u0432\u0449\u0438\u043a\u0438\u0020\u

[Yandex Managed Service for ClickHouse](https://cloud.yandex.ru/services/managed-clickhouse?utm_source=referrals&utm_medium=clickhouseofficialsite&utm_campaign=link3) provides the following key capabilities:

- fully managed ZooKeeper service for [ClickHouse replication](../engines/table-engines/mergetree-family/replication.md)
- a choice of storage type
- replicas in different availability zones
- encryption and isolation
- automated maintenance

## Altinity.Cloud {#altinity.cloud}

[Altinity.Cloud](https://altinity.com/cloud-database/) is a fully managed ClickHouse-as-a-Service for the Amazon public cloud.

- fast deployment of ClickHouse clusters on Amazon resources
- easy scale-out and scale-in, as well as vertical scaling of nodes
- isolated per-tenant VPCs with a public endpoint or VPC peering
- configurable storage types and volumes
- cross-AZ scaling for performance and high availability
- built-in monitoring and a SQL query editor

{## [Original article](https://clickhouse.tech/docs/ru/commercial/cloud/) ##}
@ -246,4 +246,9 @@ $ sudo -u clickhouse touch /var/lib/clickhouse/flags/force_restore_data

If the data in ZooKeeper was lost or damaged, you can save the data by moving it to an unreplicated table as described above.

**See also**

- [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
- [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold)

[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/replication/) <!--hide-->
@ -80,6 +80,27 @@ ClickHouse checks the `min_part_size` and `min_part

- [Custom settings](../../operations/settings/index.md#custom_settings)

## core_dump {#server_configuration_parameters-core_dump}

Configures the soft limit for the core dump file size.

Possible values:

- Positive integer.

Default value: `1073741824` (1 GB).

!!! info "Note"
    The hard limit is configured via system tools.

**Example**

```xml
<core_dump>
    <size_limit>1073741824</size_limit>
</core_dump>
```

## default_database {#default-database}

The default database.
@ -420,7 +441,7 @@ ClickHouse checks the `min_part_size` and `min_part

Possible values:

- Positive integer.
-- 0 — the amount of memory used is not limited.
+- 0 — automatic.

Default value: `0`.
@ -1258,7 +1258,7 @@ ClickHouse generates an exception

Write-to-quorum timeout in milliseconds. If the timeout has passed and no write has taken place yet, ClickHouse generates an exception and the client must repeat the query to write the same block to the same or any other replica.

-Default value: 600000 milliseconds (10 minutes).
+Default value: 600 000 milliseconds (10 minutes).

See also:
@ -2324,4 +2324,23 @@ SELECT number FROM numbers(3) FORMAT JSONEachRow;

Default value: `0`.

## execute_merges_on_single_replica_time_threshold {#execute-merges-on-single-replica-time-threshold}

Enables special logic for performing merges on replicas.

Possible values:

- Positive integer (in seconds).
- 0 — the special merge logic is not used. Merges happen in the usual way on all the replicas.

Default value: `0`.

**Usage**

One replica is selected to perform the merge. A time threshold from the start of the merge is set. The other replicas wait for the merge to finish and then download the result. If the threshold passes and the selected replica has not performed the merge, the merge is performed on the other replicas as usual.

High values of this setting may lead to replication delays.

This setting is useful when merge speed is limited by CPU rather than by IO (when performing "heavy" data compression, calculating aggregate functions or default expressions that require a large amount of computation, or simply with a very large number of small merges).

[Original article](https://clickhouse.tech/docs/ru/operations/settings/settings/) <!--hide-->
@ -116,12 +116,14 @@ FROM dt

## See Also {#see-also}

-- [Type conversion functions](../../sql-reference/data-types/datetime.md)
-- [Functions for working with dates and times](../../sql-reference/data-types/datetime.md)
-- [Functions for working with arrays](../../sql-reference/data-types/datetime.md)
-- [The `date_time_input_format` setting](../../operations/settings/settings.md#settings-date_time_input_format)
-- [The `timezone` server configuration parameter](../../sql-reference/data-types/datetime.md#server_configuration_parameters-timezone)
-- [Operators for working with dates and times](../../sql-reference/data-types/datetime.md#operators-datetime)
+- [Type conversion functions](../../sql-reference/functions/type-conversion-functions.md)
+- [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md)
+- [Functions for working with arrays](../../sql-reference/functions/array-functions.md)
+- [The `date_time_input_format` setting](../../operations/settings/settings/#settings-date_time_input_format)
+- [The `date_time_output_format` setting](../../operations/settings/settings/)
+- [The `timezone` server configuration parameter](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
+- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-datetime)
- [The `Date` data type](date.md)
- [The `DateTime64` data type](datetime64.md)

[Original article](https://clickhouse.tech/docs/ru/data_types/datetime/) <!--hide-->
@ -92,11 +92,12 @@ FROM dt

## See Also {#see-also}

-- [Type conversion functions](../../sql-reference/data-types/datetime64.md)
-- [Functions for working with dates and times](../../sql-reference/data-types/datetime64.md)
-- [Functions for working with arrays](../../sql-reference/data-types/datetime64.md)
+- [Type conversion functions](../../sql-reference/functions/type-conversion-functions.md)
+- [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md)
+- [Functions for working with arrays](../../sql-reference/functions/array-functions.md)
- [The `date_time_input_format` setting](../../operations/settings/settings.md#settings-date_time_input_format)
-- [The `timezone` server configuration parameter](../../sql-reference/data-types/datetime64.md#server_configuration_parameters-timezone)
-- [Operators for working with dates and times](../../sql-reference/data-types/datetime64.md#operators-datetime)
+- [The `date_time_output_format` setting](../../operations/settings/settings.md)
+- [The `timezone` server configuration parameter](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
+- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-datetime)
- [The `Date` data type](date.md)
- [The `DateTime` data type](datetime.md)
@ -404,6 +404,14 @@ LAYOUT(DIRECT())

        <null_value>??</null_value>
    </attribute>
    ...
</structure>
<layout>
    <ip_trie>
        <!-- The key attribute `prefix` can be retrieved via dictGetString. -->
        <!-- This option increases memory usage. -->
        <access_to_key_from_attributes>true</access_to_key_from_attributes>
    </ip_trie>
</layout>
```

or
@ -433,6 +441,6 @@ dictGetString('prefix', 'asn', tuple(IPv6StringToNum('2001:db8::1')))

No other types are supported. The function returns the attribute for the prefix corresponding to the given IP address. If there are overlapping prefixes, the most specific one is returned.

-Data is stored in a bitwise tree (`trie`); it must fit entirely in RAM.
+Data must fit entirely in RAM.

[Original article](https://clickhouse.tech/docs/ru/query_language/dicts/external_dicts_dict_layout/) <!--hide-->
@ -1686,4 +1686,44 @@ SELECT countDigits(toDecimal32(1, 9)), countDigits(toDecimal32(-1, 9)),

10 10 19 19 39 39
```

## tcpPort {#tcpPort}

Returns the TCP port number that the server uses for the [native protocol](../../interfaces/tcp.md).

**Syntax**

``` sql
tcpPort()
```

**Parameters**

- None.

**Returned value**

- The TCP port number.

Type: [UInt16](../../sql-reference/data-types/int-uint.md).

**Example**

Query:

``` sql
SELECT tcpPort();
```

Result:

``` text
┌─tcpPort()─┐
│      9000 │
└───────────┘
```

**See Also**

- [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port)

[Original article](https://clickhouse.tech/docs/ru/query_language/functions/other_functions/) <!--hide-->
@ -521,5 +521,56 @@ SELECT * FROM Months WHERE ilike(name, '%j%')

!!! note "Note"
    For the UTF-8 case we use trigram distance. The n-gram distance computation is not entirely fair: we use 2-byte hashes to hash the n-grams and then compute the (non-)symmetric difference between the hash tables, so collisions may occur. For case-insensitive UTF-8 we do not use a fair `tolower` function: we zero out the 5th bit (counting from zero) of each code point byte, and also the first bit of the zeroth byte if there is more than one byte; this works for Latin and almost all Cyrillic letters.

## countMatches(haystack, pattern) {#countmatcheshaystack-pattern}

Returns the number of matches of the regular expression `pattern` found in the string `haystack`.

**Syntax**

``` sql
countMatches(haystack, pattern)
```

**Parameters**

- `haystack` — the string to search in. [String](../../sql-reference/syntax.md#syntax-string-literal).
- `pattern` — a regular expression with [re2 syntax](https://github.com/google/re2/wiki/Syntax). [String](../../sql-reference/data-types/string.md).

**Returned value**

- The number of matches.

Type: [UInt64](../../sql-reference/data-types/int-uint.md).

**Examples**

Query:

``` sql
SELECT countMatches('foobar.com', 'o+');
```

Result:

``` text
┌─countMatches('foobar.com', 'o+')─┐
│                                2 │
└──────────────────────────────────┘
```

Query:

``` sql
SELECT countMatches('aaaa', 'aa');
```

Result:

``` text
┌─countMatches('aaaa', 'aa')─┐
│                          2 │
└────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/ru/query_language/functions/string_search_functions/) <!--hide-->
@ -23,7 +23,7 @@ mapAdd(Tuple(Array, Array), Tuple(Array, Array) [, ...])

**Returned value**

- Returns one [tuple](../../sql-reference/data-types/tuple.md#tuplet1-t2) where the first array contains the sorted keys and the second one contains the values.

**Example**
File diff suppressed because it is too large
@ -1,51 +1,66 @@
|
||||
# 第三方开发的库 {#di-san-fang-kai-fa-de-ku}
|
||||
---
|
||||
toc_priority: 26
|
||||
toc_title: 客户端开发库
|
||||
---
|
||||
|
||||
!!! warning "放弃"
|
||||
Yandex不维护下面列出的库,也没有进行任何广泛的测试以确保其质量。
|
||||
# 第三方开发库 {#client-libraries-from-third-party-developers}
|
||||
|
||||
!!! warning "声明"
|
||||
Yandex**没有**维护下面列出的库,也没有做过任何广泛的测试来确保它们的质量。
|
||||
|
||||
- Python
|
||||
- [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm)
|
||||
- [ツ环板driverョツ嘉ッツ偲](https://github.com/mymarilyn/clickhouse-driver)
|
||||
- [ツ环板clientョツ嘉ッツ偲](https://github.com/yurial/clickhouse-client)
|
||||
- [clickhouse-driver](https://github.com/mymarilyn/clickhouse-driver)
|
||||
- [clickhouse-client](https://github.com/yurial/clickhouse-client)
|
||||
- [aiochclient](https://github.com/maximdanilchenko/aiochclient)
|
||||
- PHP
|
||||
- [smi2/phpclickhouse](https://packagist.org/packages/smi2/phpClickHouse)
|
||||
- [8bitov/clickhouse-php客户端](https://packagist.org/packages/8bitov/clickhouse-php-client)
|
||||
- [ツ暗ェツ氾环催ツ団ツ法ツ人](https://packagist.org/packages/bozerkins/clickhouse-client)
|
||||
- [ツ环板clientョツ嘉ッツ偲](https://packagist.org/packages/simpod/clickhouse-client)
|
||||
- [8bitov/clickhouse-php-client](https://packagist.org/packages/8bitov/clickhouse-php-client)
|
||||
- [bozerkins/clickhouse-client](https://packagist.org/packages/bozerkins/clickhouse-client)
|
||||
- [simpod/clickhouse-client](https://packagist.org/packages/simpod/clickhouse-client)
|
||||
- [seva-code/php-click-house-client](https://packagist.org/packages/seva-code/php-click-house-client)
|
||||
- [ツ环板clientョツ嘉ッツ偲](https://github.com/SeasX/SeasClick)
|
||||
- 走吧
|
||||
- [SeasClick C++ client](https://github.com/SeasX/SeasClick)
|
||||
- [one-ck](https://github.com/lizhichao/one-ck)
|
||||
- [glushkovds/phpclickhouse-laravel](https://packagist.org/packages/glushkovds/phpclickhouse-laravel)
|
||||
- Go
|
||||
- [clickhouse](https://github.com/kshvakov/clickhouse/)
|
||||
- [ツ环板-ョツ嘉ッツ偲](https://github.com/roistat/go-clickhouse)
|
||||
- [ツ暗ェツ氾环催ツ団ツ法ツ人](https://github.com/mailru/go-clickhouse)
|
||||
- [go-clickhouse](https://github.com/roistat/go-clickhouse)
|
||||
- [mailrugo-clickhouse](https://github.com/mailru/go-clickhouse)
|
||||
- [golang-clickhouse](https://github.com/leprosus/golang-clickhouse)
|
||||
- Swift
|
||||
- [ClickHouseNIO](https://github.com/patrick-zippenfenig/ClickHouseNIO)
|
||||
- [ClickHouseVapor ORM](https://github.com/patrick-zippenfenig/ClickHouseVapor)
|
||||
- NodeJs
|
||||
- [ツ暗ェツ氾环催ツ団ツ法ツ人)](https://github.com/TimonKK/clickhouse)
|
||||
- [ツ环板-ョツ嘉ッツ偲](https://github.com/apla/node-clickhouse)
|
||||
- [clickhouse (NodeJs)](https://github.com/TimonKK/clickhouse)
|
||||
- [node-clickhouse](https://github.com/apla/node-clickhouse)
|
||||
- Perl
|
||||
- [perl-DBD-ClickHouse](https://github.com/elcamlost/perl-DBD-ClickHouse)
|
||||
- [HTTP-ClickHouse](https://metacpan.org/release/HTTP-ClickHouse)
|
||||
- [ツ暗ェツ氾环催ツ団ツ法ツ人](https://metacpan.org/release/AnyEvent-ClickHouse)
|
||||
- [AnyEvent-ClickHouse](https://metacpan.org/release/AnyEvent-ClickHouse)
|
||||
- Ruby
|
||||
- [ツ暗ェツ氾环催ツ団)](https://github.com/shlima/click_house)
|
||||
- [ツ暗ェツ氾环催ツ団ツ法ツ人](https://github.com/PNixx/clickhouse-activerecord)
|
||||
- [ClickHouse (Ruby)](https://github.com/shlima/click_house)
|
||||
- [clickhouse-activerecord](https://github.com/PNixx/clickhouse-activerecord)
|
||||
- R
|
||||
- [clickhouse-r](https://github.com/hannesmuehleisen/clickhouse-r)
|
||||
- [RClickhouse](https://github.com/IMSMWU/RClickhouse)
|
||||
- [RClickHouse](https://github.com/IMSMWU/RClickHouse)
|
||||
- Java
|
||||
- [clickhouse-client-java](https://github.com/VirtusAI/clickhouse-client-java)
|
||||
- 斯卡拉
|
||||
- [掳胫client-禄脢鹿脷露胫鲁隆鹿-client酶](https://github.com/crobox/clickhouse-scala-client)
|
||||
- [clickhouse-client](https://github.com/Ecwid/clickhouse-client)
|
||||
- Scala
|
||||
- [clickhouse-scala-client](https://github.com/crobox/clickhouse-scala-client)
|
||||
- Kotlin
|
||||
- [AORM](https://github.com/TanVD/AORM)
|
||||
- C#
|
||||
- [Octonica.ClickHouseClient](https://github.com/Octonica/ClickHouseClient)
|
||||
- [克莱克豪斯Ado](https://github.com/killwort/ClickHouse-Net)
|
||||
- [ClickHouse.Ado](https://github.com/killwort/ClickHouse-Net)
|
||||
- [ClickHouse.Client](https://github.com/DarkWanderer/ClickHouse.Client)
|
||||
- [ClickHouse.Net](https://github.com/ilyabreev/ClickHouse.Net)
|
||||
- [克莱克豪斯客户](https://github.com/DarkWanderer/ClickHouse.Client)
|
||||
- 仙丹
|
||||
- Elixir
|
||||
- [clickhousex](https://github.com/appodeal/clickhousex/)
|
||||
- 尼姆
|
||||
- [pillar](https://github.com/sofakingworld/pillar)
|
||||
- Nim
|
||||
- [nim-clickhouse](https://github.com/leonardoce/nim-clickhouse)
|
||||
- Haskell
|
||||
- [hdbc-clickhouse](https://github.com/zaneli/hdbc-clickhouse)
|
||||
|
||||
[来源文章](https://clickhouse.tech/docs/zh/interfaces/third-party/client_libraries/) <!--hide-->
|
||||
[来源文章](https://clickhouse.tech/docs/en/interfaces/third-party/client_libraries/) <!--hide-->
|
||||
|
docs/zh/interfaces/third-party/index.md (14 changed lines)
@ -1,8 +1,16 @@

---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
-toc_folder_title: "\u7B2C\u4E09\u65B9"
+toc_folder_title: Third-party tools
toc_priority: 24
---

# Third-party tools {#third-party-interfaces}

This is a collection of links to third-party tools that provide some kind of interface to ClickHouse. It can be a visual interface, a command-line interface, or an API:

- [Client libraries](../../interfaces/third-party/client-libraries.md)
- [Integrations](../../interfaces/third-party/integrations.md)
- [GUI](../../interfaces/third-party/gui.md)
- [Proxies](../../interfaces/third-party/proxy.md)

!!! note "Note"
    Generic tools that support a common API, such as [ODBC](../../interfaces/odbc.md) or [JDBC](../../interfaces/jdbc.md), usually work with ClickHouse as well, but are not listed here because there are far too many of them.
docs/zh/interfaces/third-party/integrations.md (94 changed lines)
@ -1,100 +1,108 @@
|
||||
# 第三方集成库 {#di-san-fang-ji-cheng-ku}
|
||||
---
|
||||
toc_priority: 27
|
||||
toc_title: 第三方集成库
|
||||
---
|
||||
|
||||
# 第三方集成库 {#integration-libraries-from-third-party-developers}
|
||||
|
||||
!!! warning "声明"
|
||||
Yandex不维护下面列出的库,也没有进行任何广泛的测试以确保其质量。
|
||||
Yandex**没有**维护下面列出的库,也没有做过任何广泛的测试来确保它们的质量。
|
||||
|
||||
## 基建产品 {#ji-jian-chan-pin}
|
||||
## 基础设施 {#infrastructure-products}
|
||||
|
||||
- 关系数据库管理系统
|
||||
- 关系数据库
|
||||
- [MySQL](https://www.mysql.com)
|
||||
- [mysql2ch](https://github.com/long2ice/mysql2ch)
|
||||
- [ProxySQL](https://github.com/sysown/proxysql/wiki/ClickHouse-Support)
|
||||
- [clickhouse-mysql-data-reader](https://github.com/Altinity/clickhouse-mysql-data-reader)
|
||||
- [horgh-复制器](https://github.com/larsnovikov/horgh-replicator)
|
||||
- [horgh-replicator](https://github.com/larsnovikov/horgh-replicator)
|
||||
- [PostgreSQL](https://www.postgresql.org)
|
||||
- [clickhousedb_fdw](https://github.com/Percona-Lab/clickhousedb_fdw)
|
||||
- [infi.clickhouse_fdw](https://github.com/Infinidat/infi.clickhouse_fdw) (使用 [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [infi.clickhouse_fdw](https://github.com/Infinidat/infi.clickhouse_fdw) (uses [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [pg2ch](https://github.com/mkabilov/pg2ch)
|
||||
- [clickhouse_fdw](https://github.com/adjust/clickhouse_fdw)
|
||||
- [MSSQL](https://en.wikipedia.org/wiki/Microsoft_SQL_Server)
|
||||
- [ClickHouseMightrator](https://github.com/zlzforever/ClickHouseMigrator)
|
||||
- [ClickHouseMigrator](https://github.com/zlzforever/ClickHouseMigrator)
|
||||
- 消息队列
|
||||
- [卡夫卡](https://kafka.apache.org)
|
||||
- [clickhouse_sinker](https://github.com/housepower/clickhouse_sinker) (使用 [去客户](https://github.com/ClickHouse/clickhouse-go/))
|
||||
- [Kafka](https://kafka.apache.org)
|
||||
- [clickhouse_sinker](https://github.com/housepower/clickhouse_sinker) (uses [Go client](https://github.com/ClickHouse/clickhouse-go/))
|
||||
- [stream-loader-clickhouse](https://github.com/adform/stream-loader)
|
||||
- 流处理
|
||||
- [Flink](https://flink.apache.org)
|
||||
- [flink-clickhouse-sink](https://github.com/ivi-ru/flink-clickhouse-sink)
|
||||
- 对象存储
|
||||
- [S3](https://en.wikipedia.org/wiki/Amazon_S3)
|
||||
- [ツ环板backupョツ嘉ッツ偲](https://github.com/AlexAkulov/clickhouse-backup)
|
||||
- [clickhouse-backup](https://github.com/AlexAkulov/clickhouse-backup)
|
||||
- 容器编排
|
||||
- [Kubernetes](https://kubernetes.io)
|
||||
- [clickhouse-操](https://github.com/Altinity/clickhouse-operator)
|
||||
- [clickhouse-operator](https://github.com/Altinity/clickhouse-operator)
|
||||
- 配置管理
|
||||
- [木偶](https://puppet.com)
|
||||
- [ツ环板/ョツ嘉ッツ偲](https://forge.puppet.com/innogames/clickhouse)
|
||||
- [mfedotov/clickhouse](https://forge.puppet.com/mfedotov/clickhouse)
|
||||
- 监控
|
||||
- [石墨](https://graphiteapp.org)
|
||||
- [puppet](https://puppet.com)
|
||||
- [innogames/clickhouse](https://forge.puppet.com/innogames/clickhouse)
|
||||
- [mfedotov/clickhouse](https://forge.puppet.com/mfedotov/clickhouse)
|
||||
- Monitoring
|
||||
- [Graphite](https://graphiteapp.org)
|
||||
- [graphouse](https://github.com/yandex/graphouse)
|
||||
- [ツ暗ェツ氾环催ツ団](https://github.com/lomik/carbon-clickhouse) +
|
||||
- [ツ环板-ョツ嘉ッツ偲](https://github.com/lomik/graphite-clickhouse)
|
||||
- [石墨-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) -优化静态分区 [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md#graphitemergetree) 如果从规则 [汇总配置](../../engines/table-engines/mergetree-family/graphitemergetree.md#rollup-configuration) 可以应用
|
||||
- [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse) +
|
||||
- [graphite-clickhouse](https://github.com/lomik/graphite-clickhouse)
|
||||
- [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) - optimizes staled partitions in [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md#graphitemergetree) if rules from [rollup configuration](../../engines/table-engines/mergetree-family/graphitemergetree.md#rollup-configuration) could be applied
|
||||
- [Grafana](https://grafana.com/)
|
||||
- [clickhouse-grafana](https://github.com/Vertamedia/clickhouse-grafana)
|
||||
- [普罗米修斯号](https://prometheus.io/)
|
||||
- [Prometheus](https://prometheus.io/)
|
||||
- [clickhouse_exporter](https://github.com/f1yegor/clickhouse_exporter)
|
||||
- [PromHouse](https://github.com/Percona-Lab/PromHouse)
|
||||
- [clickhouse_exporter](https://github.com/hot-wifi/clickhouse_exporter) (用途 [去客户](https://github.com/kshvakov/clickhouse/))
|
||||
- [clickhouse_exporter](https://github.com/hot-wifi/clickhouse_exporter) (uses [Go client](https://github.com/kshvakov/clickhouse/))
|
||||
- [Nagios](https://www.nagios.org/)
|
||||
- [check_clickhouse](https://github.com/exogroup/check_clickhouse/)
|
||||
- [check_clickhouse.py](https://github.com/innogames/igmonplugins/blob/master/src/check_clickhouse.py)
|
||||
- [Zabbix](https://www.zabbix.com)
|
||||
- [ツ暗ェツ氾环催ツ団ツ法ツ人](https://github.com/Altinity/clickhouse-zabbix-template)
|
||||
- [clickhouse-zabbix-template](https://github.com/Altinity/clickhouse-zabbix-template)
|
||||
- [Sematext](https://sematext.com/)
|
||||
- [clickhouse积分](https://github.com/sematext/sematext-agent-integrations/tree/master/clickhouse)
|
||||
- 记录
|
||||
- [clickhouse integration](https://github.com/sematext/sematext-agent-integrations/tree/master/clickhouse)
|
||||
- Logging
|
||||
- [rsyslog](https://www.rsyslog.com/)
|
||||
- [鹿茫house omhousee酶](https://www.rsyslog.com/doc/master/configuration/modules/omclickhouse.html)
|
||||
- [omclickhouse](https://www.rsyslog.com/doc/master/configuration/modules/omclickhouse.html)
|
||||
- [fluentd](https://www.fluentd.org)
|
||||
- [loghouse](https://github.com/flant/loghouse) (对于 [Kubernetes](https://kubernetes.io))
|
||||
- [Sematext](https://www.sematext.com/logagent)
|
||||
- [logagent输出-插件-clickhouse](https://sematext.com/docs/logagent/output-plugin-clickhouse/)
|
||||
- 地理
|
||||
- [loghouse](https://github.com/flant/loghouse) (for [Kubernetes](https://kubernetes.io))
|
||||
- [logagent](https://www.sematext.com/logagent)
|
||||
- [logagent output-plugin-clickhouse](https://sematext.com/docs/logagent/output-plugin-clickhouse/)
|
||||
- Geo
|
||||
- [MaxMind](https://dev.maxmind.com/geoip/)
|
||||
- [ツ环板-ョツ嘉ッツ偲青clickシツ氾カツ鉄ツ工ツ渉](https://github.com/AlexeyKupershtokh/clickhouse-maxmind-geoip)
|
||||
- [clickhouse-maxmind-geoip](https://github.com/AlexeyKupershtokh/clickhouse-maxmind-geoip)
|
||||
|
||||
## 编程语言生态系统 {#bian-cheng-yu-yan-sheng-tai-xi-tong}
|
||||
## 编程语言 {#programming-language-ecosystems}
|
||||
|
||||
- Python
|
||||
- [SQLAlchemy](https://www.sqlalchemy.org)
|
||||
- [ツ暗ェツ氾环催ツ団ツ法ツ人](https://github.com/cloudflare/sqlalchemy-clickhouse) (使用 [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [熊猫](https://pandas.pydata.org)
|
||||
- [sqlalchemy-clickhouse](https://github.com/cloudflare/sqlalchemy-clickhouse) (uses [infi.clickhouse_orm](https://github.com/Infinidat/infi.clickhouse_orm))
|
||||
- [pandas](https://pandas.pydata.org)
|
||||
- [pandahouse](https://github.com/kszucs/pandahouse)
|
||||
- PHP
|
||||
- [Doctrine](https://www.doctrine-project.org/)
|
||||
- [dbal-clickhouse](https://packagist.org/packages/friendsofdoctrine/dbal-clickhouse)
|
||||
- R
|
||||
- [dplyr](https://db.rstudio.com/dplyr/)
|
||||
- [RClickhouse](https://github.com/IMSMWU/RClickhouse) (使用 [ツ暗ェツ氾环催ツ団](https://github.com/artpaul/clickhouse-cpp))
|
||||
- [RClickHouse](https://github.com/IMSMWU/RClickHouse) (uses [clickhouse-cpp](https://github.com/artpaul/clickhouse-cpp))
|
||||
- Java
|
||||
- [Hadoop](http://hadoop.apache.org)
|
||||
- [clickhouse-hdfs-装载机](https://github.com/jaykelin/clickhouse-hdfs-loader) (使用 [JDBC](../../sql-reference/table-functions/jdbc.md))
|
||||
- 斯卡拉
|
||||
- [clickhouse-hdfs-loader](https://github.com/jaykelin/clickhouse-hdfs-loader) (uses [JDBC](../../sql-reference/table-functions/jdbc.md))
|
||||
- Scala
|
||||
- [Akka](https://akka.io)
|
||||
- [掳胫client-禄脢鹿脷露胫鲁隆鹿-client酶](https://github.com/crobox/clickhouse-scala-client)
|
||||
- [clickhouse-scala-client](https://github.com/crobox/clickhouse-scala-client)
|
||||
- C#
|
||||
- [ADO.NET](https://docs.microsoft.com/en-us/dotnet/framework/data/adonet/ado-net-overview)
|
||||
- [克莱克豪斯Ado](https://github.com/killwort/ClickHouse-Net)
|
||||
- [ClickHouse.Net](https://github.com/ilyabreev/ClickHouse.Net)
|
||||
- [ClickHouse.Net.Migrations](https://github.com/ilyabreev/ClickHouse.Net.Migrations)
|
||||
- 仙丹
|
||||
- [ClickHouse.Ado](https://github.com/killwort/ClickHouse-Net)
|
||||
- [ClickHouse.Client](https://github.com/DarkWanderer/ClickHouse.Client)
|
||||
- [ClickHouse.Net](https://github.com/ilyabreev/ClickHouse.Net)
|
||||
- [ClickHouse.Net.Migrations](https://github.com/ilyabreev/ClickHouse.Net.Migrations)
|
||||
- Elixir
|
||||
- [Ecto](https://github.com/elixir-ecto/ecto)
|
||||
- [clickhouse_ecto](https://github.com/appodeal/clickhouse_ecto)
|
||||
- [clickhouse_ecto](https://github.com/appodeal/clickhouse_ecto)
|
||||
- Ruby
|
||||
- [Ruby on Rails](https://rubyonrails.org/)
|
||||
- [activecube](https://github.com/bitquery/activecube)
|
||||
- [ActiveRecord](https://github.com/PNixx/clickhouse-activerecord)
|
||||
- [GraphQL](https://github.com/graphql)
|
||||
- [activecube-graphql](https://github.com/bitquery/activecube-graphql)
|
||||
|
||||
[来源文章](https://clickhouse.tech/docs/zh/interfaces/third-party/integrations/) <!--hide-->
|
||||
[源文章](https://clickhouse.tech/docs/en/interfaces/third-party/integrations/) <!--hide-->
|
||||
|
docs/zh/interfaces/third-party/proxy.md (49 changed lines)
@ -1,37 +1,44 @@

---
toc_priority: 29
toc_title: Third-party proxies
---

# Third-party proxies {#proxy-servers-from-third-party-developers}

## chproxy {#chproxy}

[chproxy](https://github.com/Vertamedia/chproxy) is an HTTP proxy and load balancer for the ClickHouse database.

Features:

- Per-user routing and response caching.
- Flexible limits.
- Automatic SSL certificate renewal.

Implemented in Go.

## KittenHouse {#kittenhouse}

[KittenHouse](https://github.com/VKCOM/kittenhouse) is designed to be a local proxy between ClickHouse and application servers in case it is impossible or inconvenient to buffer INSERT data on the application side.

Features:

- In-memory and on-disk data buffering.
- Per-table routing.
- Load balancing and health checks.

Implemented in Go.

## ClickHouse-Bulk {#clickhouse-bulk}

[ClickHouse-Bulk](https://github.com/nikepan/clickhouse-bulk) is a simple ClickHouse insert collector.

Features:

- Groups requests and sends them by threshold or interval.
- Multiple remote servers.
- Basic authentication.

Implemented in Go.

[Original article](https://clickhouse.tech/docs/en/interfaces/third-party/proxy/) <!--hide-->
@ -362,6 +362,12 @@ LAYOUT(DIRECT())

        <null_value>??</null_value>
    </attribute>
    ...
</structure>
<layout>
    <ip_trie>
        <access_to_key_from_attributes>true</access_to_key_from_attributes>
    </ip_trie>
</layout>
```

or
@ -1,8 +1,8 @@
|
||||
---
|
||||
machine_translated: true
|
||||
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
|
||||
toc_folder_title: "\u65B0\u589E\u5185\u5BB9"
|
||||
toc_priority: 72
|
||||
toc_folder_title: ClickHouse事迹
|
||||
toc_priority: 82
|
||||
---
|
||||
|
||||
# ClickHouse变更及改动? {#whats-new-in-clickhouse}
|
||||
|
||||
对于已经发布的版本,有一个[roadmap](../whats-new/roadmap.md)和一个详细的[changelog](../whats-new/changelog/index.md)。
|
||||
|
@ -1,17 +1,10 @@
|
||||
---
|
||||
toc_priority: 74
|
||||
toc_title: 路线图
|
||||
toc_title: Roadmap
|
||||
---
|
||||
|
||||
# 路线图 {#roadmap}
|
||||
# Roadmap {#roadmap}
|
||||
|
||||
## Q2 2020 {#q2-2020}
|
||||
|
||||
- 和外部认证服务集成
|
||||
|
||||
## Q3 2020 {#q3-2020}
|
||||
|
||||
- 资源池,为用户提供更精准的集群资源分配
|
||||
|
||||
{## [原始文档](https://clickhouse.tech/docs/en/roadmap/) ##}
|
||||
`2021年Roadmap`已公布供公开讨论查看[这里](https://github.com/ClickHouse/ClickHouse/issues/17623).
|
||||
|
||||
{## [源文章](https://clickhouse.tech/docs/en/roadmap/) ##}
|
||||
|
@ -1,41 +1,74 @@
|
||||
## 修复于 ClickHouse Release 18.12.13, 2018-09-10 {#xiu-fu-yu-clickhouse-release-18-12-13-2018-09-10}
|
||||
---
|
||||
toc_priority: 76
|
||||
toc_title: 安全更新日志
|
||||
---
|
||||
|
||||
## 修复于ClickHouse Release 19.14.3.3, 2019-09-10 {#fixed-in-clickhouse-release-19-14-3-3-2019-09-10}
|
||||
|
||||
### CVE-2019-15024 {#cve-2019-15024}
|
||||
|
||||
对ZooKeeper具有写访问权限并且可以运行ClickHouse所在网络上可用的自定义服务器的攻击者可以创建一个自定义的恶意服务器,该服务器将充当ClickHouse副本并在ZooKeeper中注册。当另一个副本将从恶意副本获取数据部分时,它可以强制clickhouse服务器写入文件系统上的任意路径。
|
||||
|
||||
作者:Yandex信息安全团队Eldar Zaitov
|
||||
|
||||
### CVE-2019-16535 {#cve-2019-16535}
|
||||
|
||||
解压算法中的OOB-read、OOB-write和整数下溢可以通过本机协议实现RCE或DoS。
|
||||
|
||||
作者: Yandex信息安全团队Eldar Zaitov
|
||||
|
||||
### CVE-2019-16536 {#cve-2019-16536}
|
||||
|
||||
恶意的经过身份验证的客户端可能会触发导致DoS的堆栈溢出。
|
||||
|
||||
作者: Yandex信息安全团队Eldar Zaitov
|
||||
|
||||
## 修复于ClickHouse Release 19.13.6.1, 2019-09-20 {#fixed-in-clickhouse-release-19-13-6-1-2019-09-20}
|
||||
|
||||
### CVE-2019-18657 {#cve-2019-18657}
|
||||
|
||||
表函数`url`存在允许攻击者在请求中插入任意HTTP标头的漏洞。
|
||||
|
||||
作者: [Nikita Tikhomirov](https://github.com/NSTikhomirov)
|
||||
|
||||
## 修复于ClickHouse Release 18.12.13, 2018-09-10 {#fixed-in-clickhouse-release-18-12-13-2018-09-10}
|
||||
|
||||
### CVE-2018-14672 {#cve-2018-14672}
|
||||
|
||||
加载CatBoost模型的功能,允许遍历路径并通过错误消息读取任意文件。
|
||||
加载CatBoost模型的函数允许路径遍历和通过错误消息读取任意文件。
|
||||
|
||||
来源: Yandex信息安全团队的Andrey Krasichkov
|
||||
作者:Yandex信息安全团队Andrey Krasichkov
|
||||
|
||||
## 修复于 ClickHouse Release 18.10.3, 2018-08-13 {#xiu-fu-yu-clickhouse-release-18-10-3-2018-08-13}
|
||||
## 修复于Release 18.10.3, 2018-08-13 {#fixed-in-clickhouse-release-18-10-3-2018-08-13}
|
||||
|
||||
### CVE-2018-14671 {#cve-2018-14671}
|
||||
|
||||
unixODBC允许从文件系统加载任意共享对象,从而导致«远程执行代码»漏洞。
|
||||
unixODBC允许从文件系统加载任意共享对象,从而导致远程代码执行漏洞。
|
||||
|
||||
来源:Yandex信息安全团队的Andrey Krasichkov和Evgeny Sidorov
|
||||
作者:Yandex信息安全团队Andrey Krasichkov和Evgeny Sidorov
|
||||
|
||||
## 修复于 ClickHouse Release 1.1.54388, 2018-06-28 {#xiu-fu-yu-clickhouse-release-1-1-54388-2018-06-28}
|
||||
## 修复于ClickHouse Release 1.1.54388, 2018-06-28 {#fixed-in-clickhouse-release-1-1-54388-2018-06-28}
|
||||
|
||||
### CVE-2018-14668 {#cve-2018-14668}
|
||||
|
||||
远程表函数功能允许在 «user», «password» 及 «default_database» 字段中使用任意符号,从而导致跨协议请求伪造攻击。
|
||||
`remote`表函数允许在`user`,`password`和`default_database`字段中使用任意符号,从而导致跨协议请求伪造攻击。
|
||||
|
||||
来源:Yandex信息安全团队的Andrey Krasichkov
|
||||
者:Yandex信息安全团队Andrey Krasichkov
|
||||
|
||||
## 修复于 ClickHouse Release 1.1.54390, 2018-07-06 {#xiu-fu-yu-clickhouse-release-1-1-54390-2018-07-06}
|
||||
## 修复于ClickHouse Release 1.1.54390, 2018-07-06 {#fixed-in-clickhouse-release-1-1-54390-2018-07-06}
|
||||
|
||||
### CVE-2018-14669 {#cve-2018-14669}
|
||||
|
||||
ClickHouse MySQL客户端启用了 «LOAD DATA LOCAL INFILE» 功能,该功能允许恶意MySQL数据库从连接的ClickHouse服务器读取任意文件。
|
||||
ClickHouse MySQL客户端启用了`LOAD DATA LOCAL INFILE`功能,允许恶意MySQL数据库从连接的ClickHouse服务器读取任意文件。
|
||||
|
||||
来源:Yandex信息安全团队的Andrey Krasichkov和Evgeny Sidorov
|
||||
作者:Yandex信息安全团队Andrey Krasichkov和Evgeny Sidorov
|
||||
|
||||
## 修复于 ClickHouse Release 1.1.54131, 2017-01-10 {#xiu-fu-yu-clickhouse-release-1-1-54131-2017-01-10}
|
||||
## 修复于ClickHouse Release 1.1.54131, 2017-01-10 {#fixed-in-clickhouse-release-1-1-54131-2017-01-10}
|
||||
|
||||
### CVE-2018-14670 {#cve-2018-14670}
|
||||
|
||||
deb软件包中的错误配置可能导致使用未经授权的数据库。
|
||||
deb包中的错误配置可能导致未经授权使用数据库。
|
||||
|
||||
来源:英国国家网络安全中心(NCSC)
|
||||
作者:英国国家网络安全中心(NCSC)
|
||||
|
||||
[来源文章](https://clickhouse.tech/docs/en/security_changelog/) <!--hide-->
|
||||
{## [Original article](https://clickhouse.tech/docs/en/security_changelog/) ##}
|
||||
|
@ -405,8 +405,8 @@ void QueryFuzzer::fuzz(ASTPtr & ast)

    if (fn->is_window_function)
    {
-        fuzzColumnLikeExpressionList(fn->window_partition_by);
-        fuzzOrderByList(fn->window_order_by);
+        fuzzColumnLikeExpressionList(fn->window_partition_by.get());
+        fuzzOrderByList(fn->window_order_by.get());
    }

    fuzz(fn->children);
@ -4,6 +4,7 @@

#include <DataTypes/DataTypeDateTime64.h>
#include <DataTypes/DataTypeNullable.h>
#include <DataTypes/DataTypeString.h>
+#include <DataTypes/DataTypeFixedString.h>
#include <DataTypes/DataTypeUUID.h>
#include <DataTypes/DataTypesDecimal.h>
#include <DataTypes/DataTypesNumber.h>

@ -76,6 +77,8 @@ void ExternalResultDescription::init(const Block & sample_block_)

        types.emplace_back(ValueType::vtDecimal128, is_nullable);
    else if (typeid_cast<const DataTypeDecimal<Decimal256> *>(type))
        types.emplace_back(ValueType::vtDecimal256, is_nullable);
+    else if (typeid_cast<const DataTypeFixedString *>(type))
+        types.emplace_back(ValueType::vtFixedString, is_nullable);
    else
        throw Exception{"Unsupported type " + type->getName(), ErrorCodes::UNKNOWN_TYPE};
}
@ -30,7 +30,8 @@ struct ExternalResultDescription

        vtDecimal32,
        vtDecimal64,
        vtDecimal128,
-        vtDecimal256
+        vtDecimal256,
+        vtFixedString
    };

    Block sample_block;
@ -109,6 +109,18 @@ static std::pair<Poco::Net::IPAddress, UInt8> parseIPFromString(const std::strin

    }
}

+static size_t formatIPWithPrefix(const unsigned char * src, UInt8 prefix_len, bool isv4, char * dst)
+{
+    char * ptr = dst;
+    if (isv4)
+        formatIPv4(src, ptr);
+    else
+        formatIPv6(src, ptr);
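    // Annotation (not part of the commit): formatIPv4/formatIPv6 are assumed to append
    // a terminating zero byte and advance `ptr` one past it, so the byte at ptr - 1 can
    // safely be overwritten with '/' below before the prefix length is appended.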
+    *(ptr - 1) = '/';
+    ptr = itoa(prefix_len, ptr);
+    return ptr - dst;
+}

static void validateKeyTypes(const DataTypes & key_types)
{
    if (key_types.empty() || key_types.size() > 2)
@ -233,14 +245,21 @@ IPAddressDictionary::IPAddressDictionary(

    const DictionaryStructure & dict_struct_,
    DictionarySourcePtr source_ptr_,
    const DictionaryLifetime dict_lifetime_,
-    bool require_nonempty_)
+    bool require_nonempty_,
+    bool access_to_key_from_attributes_)
    : IDictionaryBase(dict_id_)
    , dict_struct(dict_struct_)
    , source_ptr{std::move(source_ptr_)}
    , dict_lifetime(dict_lifetime_)
    , require_nonempty(require_nonempty_)
+    , access_to_key_from_attributes(access_to_key_from_attributes_)
    , logger(&Poco::Logger::get("IPAddressDictionary"))
{
+    if (access_to_key_from_attributes)
+    {
+        dict_struct.attributes.emplace_back(dict_struct.key->front());
+    }

    createAttributes();

    loadData();
@ -455,8 +474,6 @@ void IPAddressDictionary::loadData()

    auto stream = source_ptr->loadAll();
    stream->readPrefix();

-    const auto attributes_size = attributes.size();
-
    std::vector<IPRecord> ip_records;

    bool has_ipv6 = false;

@ -467,14 +484,19 @@ void IPAddressDictionary::loadData()

        element_count += rows;

        const ColumnPtr key_column_ptr = block.safeGetByPosition(0).column;
-        const auto attribute_column_ptrs = ext::map<Columns>(ext::range(0, attributes_size), [&](const size_t attribute_idx)
-        {
-            return block.safeGetByPosition(attribute_idx + 1).column;
-        });
+
+        size_t attributes_size = dict_struct.attributes.size();
+        if (access_to_key_from_attributes)
+        {
+            /// last attribute contains key and will be filled in code below
+            attributes_size--;
+        }
+        const auto attribute_column_ptrs = ext::map<Columns>(ext::range(0, attributes_size),
+            [&](const size_t attribute_idx) { return block.safeGetByPosition(attribute_idx + 1).column; });

        for (const auto row : ext::range(0, rows))
        {
-            for (const auto attribute_idx : ext::range(0, attributes_size))
+            for (const auto attribute_idx : ext::range(0, attribute_column_ptrs.size()))
            {
                const auto & attribute_column = *attribute_column_ptrs[attribute_idx];
                auto & attribute = attributes[attribute_idx];
@ -492,6 +514,33 @@ void IPAddressDictionary::loadData()

    stream->readSuffix();

+    if (access_to_key_from_attributes)
+    {
+        /// We format key attribute values here instead of filling with data from key_column
+        /// because string representation can be normalized if bits beyond mask are set.
+        /// Also all IPv4 will be displayed as mapped IPv6 if there are any IPv6.
+        /// It's consistent with representation in table created with `ENGINE = Dictionary` from this dictionary.
+        char str_buffer[48];
+        if (has_ipv6)
+        {
+            uint8_t ip_buffer[IPV6_BINARY_LENGTH];
+            for (const auto & record : ip_records)
+            {
+                size_t str_len = formatIPWithPrefix(record.asIPv6Binary(ip_buffer), record.prefixIPv6(), false, str_buffer);
+                setAttributeValue(attributes.back(), String(str_buffer, str_len));
+            }
+        }
+        else
+        {
+            for (const auto & record : ip_records)
+            {
+                UInt32 addr = IPv4AsUInt32(record.addr.addr());
+                size_t str_len = formatIPWithPrefix(reinterpret_cast<const unsigned char *>(&addr), record.prefix, true, str_buffer);
+                setAttributeValue(attributes.back(), String(str_buffer, str_len));
+            }
+        }
+    }

    row_idx.reserve(ip_records.size());
    mask_column.reserve(ip_records.size());
@ -681,7 +730,7 @@ void IPAddressDictionary::calculateBytesAllocated()

template <typename T>
void IPAddressDictionary::createAttributeImpl(Attribute & attribute, const Field & null_value)
{
-    attribute.null_values = T(null_value.get<NearestFieldType<T>>());
+    attribute.null_values = null_value.isNull() ? T{} : T(null_value.get<NearestFieldType<T>>());
    attribute.maps.emplace<ContainerType<T>>();
}

@ -737,7 +786,8 @@ IPAddressDictionary::Attribute IPAddressDictionary::createAttributeWithType(cons

        case AttributeUnderlyingType::utString:
        {
-            attr.null_values = null_value.get<String>();
+            attr.null_values = null_value.isNull() ? String() : null_value.get<String>();
            attr.maps.emplace<ContainerType<StringRef>>();
            attr.string_arena = std::make_unique<Arena>();
            break;
@ -981,14 +1031,12 @@ static auto keyViewGetter()

    for (size_t row : ext::range(0, key_ip_column.size()))
    {
        UInt8 mask = key_mask_column.getElement(row);
-        char * ptr = buffer;
+        size_t str_len;
        if constexpr (IsIPv4)
-            formatIPv4(reinterpret_cast<const unsigned char *>(&key_ip_column.getElement(row)), ptr);
+            str_len = formatIPWithPrefix(reinterpret_cast<const unsigned char *>(&key_ip_column.getElement(row)), mask, true, buffer);
        else
-            formatIPv6(reinterpret_cast<const unsigned char *>(key_ip_column.getDataAt(row).data), ptr);
-        *(ptr - 1) = '/';
-        ptr = itoa(mask, ptr);
-        column->insertData(buffer, ptr - buffer);
+            str_len = formatIPWithPrefix(reinterpret_cast<const unsigned char *>(key_ip_column.getDataAt(row).data), mask, false, buffer);
+        column->insertData(buffer, str_len);
    }
    return ColumnsWithTypeAndName{
        ColumnWithTypeAndName(std::move(column), std::make_shared<DataTypeString>(), dict_attributes.front().name)};
@ -1122,8 +1170,12 @@ void registerDictionaryTrie(DictionaryFactory & factory)
|
||||
const auto dict_id = StorageID::fromDictionaryConfig(config, config_prefix);
|
||||
const DictionaryLifetime dict_lifetime{config, config_prefix + ".lifetime"};
|
||||
const bool require_nonempty = config.getBool(config_prefix + ".require_nonempty", false);
|
||||
|
||||
const auto & layout_prefix = config_prefix + ".layout.ip_trie";
|
||||
const bool access_to_key_from_attributes = config.getBool(layout_prefix + ".access_to_key_from_attributes", false);
|
||||
// This is specialised dictionary for storing IPv4 and IPv6 prefixes.
|
||||
return std::make_unique<IPAddressDictionary>(dict_id, dict_struct, std::move(source_ptr), dict_lifetime, require_nonempty);
|
||||
return std::make_unique<IPAddressDictionary>(dict_id, dict_struct, std::move(source_ptr), dict_lifetime,
|
||||
require_nonempty, access_to_key_from_attributes);
|
||||
};
|
||||
factory.registerLayout("ip_trie", create_layout, true);
|
||||
}
|
||||
|
@ -27,7 +27,8 @@ public:
|
||||
const DictionaryStructure & dict_struct_,
|
||||
DictionarySourcePtr source_ptr_,
|
||||
const DictionaryLifetime dict_lifetime_,
|
||||
bool require_nonempty_);
|
||||
bool require_nonempty_,
|
||||
bool access_to_key_from_attributes_);
|
||||
|
||||
std::string getKeyDescription() const { return key_description; }
|
||||
|
||||
@ -45,7 +46,8 @@ public:
|
||||
|
||||
std::shared_ptr<const IExternalLoadable> clone() const override
|
||||
{
|
||||
return std::make_shared<IPAddressDictionary>(getDictionaryID(), dict_struct, source_ptr->clone(), dict_lifetime, require_nonempty);
|
||||
return std::make_shared<IPAddressDictionary>(getDictionaryID(), dict_struct, source_ptr->clone(), dict_lifetime,
|
||||
require_nonempty, access_to_key_from_attributes);
|
||||
}
|
||||
|
||||
const IDictionarySource * getSource() const override { return source_ptr.get(); }
|
||||
@ -238,10 +240,11 @@ private:
|
||||
|
||||
static const uint8_t * getIPv6FromOffset(const IPv6Container & ipv6_col, size_t i);
|
||||
|
||||
const DictionaryStructure dict_struct;
|
||||
DictionaryStructure dict_struct;
|
||||
const DictionarySourcePtr source_ptr;
|
||||
const DictionaryLifetime dict_lifetime;
|
||||
const bool require_nonempty;
|
||||
const bool access_to_key_from_attributes;
|
||||
const std::string key_description{dict_struct.getKeyDescription()};
|
||||
|
||||
/// Contains sorted IP subnetworks. If some addresses equals, subnet with lower mask is placed first.
|
||||
|
@ -8,6 +8,7 @@
|
||||
# include <Columns/ColumnString.h>
|
||||
# include <Columns/ColumnsNumber.h>
|
||||
# include <Columns/ColumnDecimal.h>
|
||||
# include <Columns/ColumnFixedString.h>
|
||||
# include <DataTypes/IDataType.h>
|
||||
# include <DataTypes/DataTypeNullable.h>
|
||||
# include <IO/ReadHelpers.h>
|
||||
@ -110,6 +111,9 @@ namespace
|
||||
data_type.deserializeAsWholeText(column, buffer, FormatSettings{});
|
||||
break;
|
||||
}
|
||||
case ValueType::vtFixedString:
|
||||
assert_cast<ColumnFixedString &>(column).insertData(value.data(), value.size());
|
||||
break;
|
||||
}
|
||||
}
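The new vtFixedString branch inserts raw bytes into a ColumnFixedString. A value shorter than the declared width is padded with zero bytes up to N; a minimal model of that behavior (an illustrative helper, not ClickHouse code) looks like this, and it is also why the MaterializeMySQL test updated later in this commit expects a BINARY(8) value 'binary' to read back as binary\0\0:

```cpp
#include <string>

// Sketch of fixed-width insertion semantics: shorter values are
// zero-padded up to the column width N.
std::string padToFixedWidth(std::string value, size_t n)
{
    value.resize(n, '\0'); // pad with zero bytes, as FixedString storage does
    return value;
}
```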
@ -15,152 +15,54 @@

namespace DB
{

// used by FunctionsStringSimilarity and FunctionsStringHash
// includes extracting ASCII ngram, UTF8 ngram, ASCII word and UTF8 word
template <size_t N, bool CaseInsensitive>
struct ExtractStringImpl
{
/// Padding from ColumnString. It is the number of bytes we can always read starting from pos if pos < end.
static constexpr size_t default_padding = 16;

/// Functions read `default_padding - (N - 1)` bytes into the buffer. A window of size N is used.
/// Read copies the `N - 1` last bytes from the buffer into its beginning, and then reads new bytes.
static constexpr size_t buffer_size = default_padding + N - 1;

// the length of code_points = buffer_size
// pos: the current beginning location that we want to copy data from
// end: the end location of the string
static ALWAYS_INLINE size_t readASCIICodePoints(UInt8 * code_points, const char *& pos, const char * end)
{
/// Offset before which we copy some data.
constexpr size_t padding_offset = default_padding - N + 1;
/// We have an array like this for ASCII (N == 4, other cases are similar)
/// |a0|a1|a2|a3|a4|a5|a6|a7|a8|a9|a10|a11|a12|a13|a14|a15|a16|a17|a18|
/// And we copy ^^^^^^^^^^^^^^^ these bytes to the start
/// Actually it is enough to copy 3 bytes, but memcpy for 4 bytes translates into 1 instruction
memcpy(code_points, code_points + padding_offset, roundUpToPowerOfTwoOrZero(N - 1) * sizeof(UInt8));
/// Now we have an array
/// |a13|a14|a15|a16|a4|a5|a6|a7|a8|a9|a10|a11|a12|a13|a14|a15|a16|a17|a18|
/// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
/// Doing an unaligned read of 16 bytes and copying them like above.
/// 16 is also chosen to do two `movups`.
/// Such copying allows us to have 3 code points from the previous read to produce the 4-grams with them.
memcpy(code_points + (N - 1), pos, default_padding * sizeof(UInt8));

if constexpr (CaseInsensitive)
{
/// We really need template lambdas with C++20 to do it inline
unrollLowering<N - 1>(code_points, std::make_index_sequence<padding_offset>());
}
pos += padding_offset;
if (pos > end)
return default_padding - (pos - end);
return default_padding;
}
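The buffer gymnastics above implement a chunked n-gram window: each read carries the last N - 1 bytes of the previous chunk so that n-grams spanning a chunk boundary are still produced. A toy model of the result (no SIMD, no padding tricks, illustrative only):

```cpp
#include <string>
#include <vector>

// Produces every N-gram of the input once: the invariant that the chunked,
// carry-over reading above preserves at chunk boundaries.
std::vector<std::string> ngrams(const std::string & s, size_t n)
{
    std::vector<std::string> result;
    for (size_t i = 0; i + n <= s.size(); ++i)
        result.push_back(s.substr(i, n));
    return result;
}
```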
// read an ASCII word
static ALWAYS_INLINE inline size_t readOneASCIIWord(PaddedPODArray<UInt8> & word_buf, const char *& pos, const char * end)
static ALWAYS_INLINE inline const UInt8 * readOneASCIIWord(const UInt8 *& pos, const UInt8 * end)
{
// skip separators
while (pos < end && !isAlphaNumericASCII(*pos))
while (pos < end && isUTF8Sep(*pos))
++pos;

// the word starts here
const char * word_start = pos;
while (pos < end && isAlphaNumericASCII(*pos))
const UInt8 * word_start = pos;
while (pos < end && !isUTF8Sep(*pos))
++pos;

word_buf.assign(word_start, pos);
if (CaseInsensitive)
{
std::transform(word_buf.begin(), word_buf.end(), word_buf.begin(), [](UInt8 c) { return std::tolower(c); });
}
return word_buf.size();
}

static ALWAYS_INLINE inline size_t readUTF8CodePoints(UInt32 * code_points, const char *& pos, const char * end)
{
memcpy(code_points, code_points + default_padding - N + 1, roundUpToPowerOfTwoOrZero(N - 1) * sizeof(UInt32));

size_t num = N - 1;
while (num < default_padding && pos < end)
{
code_points[num++] = readOneUTF8Code(pos, end);
}
return num;
return word_start;
}

// read one UTF8 word from pos to word
static ALWAYS_INLINE inline size_t readOneUTF8Word(PaddedPODArray<UInt32> & word_buf, const char *& pos, const char * end)
static ALWAYS_INLINE inline const UInt8 * readOneUTF8Word(const UInt8 *& pos, const UInt8 * end)
{
// skip UTF8 separators
while (pos < end && isUTF8Sep(*pos))
++pos;
word_buf.clear();
// number of characters in the UTF8 word
while (pos < end && !isUTF8Sep(*pos))
{
word_buf.push_back(readOneUTF8Code(pos, end));
}
return word_buf.size();
}

private:
template <size_t Offset, typename Container, size_t... I>
static ALWAYS_INLINE inline void unrollLowering(Container & cont, const std::index_sequence<I...> &)
{
((cont[Offset + I] = std::tolower(cont[Offset + I])), ...);
// number of characters in the UTF8 word
const UInt8 * word_start = pos;

while (pos < end && !isUTF8Sep(*pos))
readOneUTF8Code(pos, end);

return word_start;
}
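Both word readers now share the same separator convention (isUTF8Sep, defined just below): any ASCII non-alphanumeric byte ends a word, and bytes of multi-byte UTF-8 sequences never do. A self-contained sketch of that splitting rule:

```cpp
#include <cctype>
#include <string>
#include <vector>

// Mirrors the isUTF8Sep convention: ASCII non-alphanumeric bytes separate
// words; bytes >= 128 (parts of multi-byte UTF-8) are always word content.
static bool isSeparator(unsigned char c) { return c < 128 && !std::isalnum(c); }

std::vector<std::string> splitWords(const std::string & s)
{
    std::vector<std::string> words;
    size_t i = 0;
    while (i < s.size())
    {
        while (i < s.size() && isSeparator(s[i]))
            ++i; // skip separators
        size_t start = i;
        while (i < s.size() && !isSeparator(s[i]))
            ++i;
        if (i > start)
            words.push_back(s.substr(start, i - start));
    }
    return words;
}
```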
// we use any ASCII non-alphanumeric character as a UTF8 separator
static ALWAYS_INLINE inline bool isUTF8Sep(const UInt8 c) { return c < 128 && !isAlphaNumericASCII(c); }

// read one UTF8 character and return it
static ALWAYS_INLINE inline UInt32 readOneUTF8Code(const char *& pos, const char * end)
// read one UTF8 character
static ALWAYS_INLINE inline void readOneUTF8Code(const UInt8 *& pos, const UInt8 * end)
{
size_t length = UTF8::seqLength(*pos);

if (pos + length > end)
length = end - pos;
UInt32 res;
switch (length)
{
case 1:
res = 0;
memcpy(&res, pos, 1);
break;
case 2:
res = 0;
memcpy(&res, pos, 2);
break;
case 3:
res = 0;
memcpy(&res, pos, 3);
break;
default:
memcpy(&res, pos, 4);
}

if constexpr (CaseInsensitive)
{
switch (length)
{
case 4:
res &= ~(1u << (5 + 3 * CHAR_BIT));
[[fallthrough]];
case 3:
res &= ~(1u << (5 + 2 * CHAR_BIT));
[[fallthrough]];
case 2:
res &= ~(1u);
res &= ~(1u << (5 + CHAR_BIT));
[[fallthrough]];
default:
res &= ~(1u << 5);
}
}
pos += length;
return res;
}
};
}
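The case-insensitive masking above relies on the fact that ASCII upper- and lower-case letters differ only in bit 0x20 (1u << 5), so clearing that bit in each byte slot of the packed code point folds case. A two-line check of the underlying bit trick (the multi-byte application is a heuristic that is exact for ASCII):

```cpp
#include <cassert>

int main()
{
    // ASCII letters differ from their other-case form only in bit 0x20,
    // so clearing it maps both 'a' and 'A' onto the same value.
    assert(('a' & ~0x20) == 'A');
    assert(('A' & ~0x20) == 'A');
    return 0;
}
```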
File diff suppressed because it is too large.
@ -7,6 +7,7 @@

#include <DataTypes/DataTypeFactory.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/DataTypeString.h>
#include <Functions/FunctionHelpers.h>
#include <Functions/IFunctionImpl.h>

@ -15,33 +16,107 @@ namespace DB

namespace ErrorCodes
{
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
extern const int TOO_MANY_ARGUMENTS_FOR_FUNCTION;
extern const int TOO_FEW_ARGUMENTS_FOR_FUNCTION;
}

// FunctionStringHash
// Simhash: String -> UInt64
// Minhash: String -> (UInt64, UInt64)
template <typename Impl, typename Name, bool is_simhash>
template <typename Impl, typename Name, bool is_simhash, bool is_arg = false>
class FunctionsStringHash : public IFunction
{
public:
static constexpr auto name = Name::name;
static constexpr size_t default_shingle_size = 3;
static constexpr size_t default_num_hashes = 6;

static FunctionPtr create(const Context &) { return std::make_shared<FunctionsStringHash>(); }

String getName() const override { return name; }

size_t getNumberOfArguments() const override { return 1; }
size_t getNumberOfArguments() const override { return 0; }
bool isVariadic() const override { return true; }

DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
ColumnNumbers getArgumentsThatAreAlwaysConstant() const override
{
if (!isString(arguments[0]))
if constexpr (is_simhash)
return {1};
else
return {1, 2};
}

DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
{
if (arguments.empty())
throw Exception(ErrorCodes::TOO_FEW_ARGUMENTS_FOR_FUNCTION, "Function {} expects at least one argument", getName());

if (!isString(arguments[0].type))
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
"Function {} expect single String argument, got {}", getName(), arguments[0]->getName());
"First argument of function {} must be String, got {}", getName(), arguments[0].type->getName());

size_t shingle_size = default_shingle_size;

if (arguments.size() > 1)
{
if (!isUnsignedInteger(arguments[1].type))
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
"Second argument (shingle size) of function {} must be unsigned integer, got {}",
getName(), arguments[1].type->getName());

if (!arguments[1].column)
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
"Second argument (shingle size) of function {} must be constant", getName());

shingle_size = arguments[1].column->getUInt(0);
}

size_t num_hashes = default_num_hashes;

if (arguments.size() > 2)
{
if constexpr (is_simhash)
throw Exception(ErrorCodes::TOO_MANY_ARGUMENTS_FOR_FUNCTION,
"Function {} expects no more than two arguments (text, shingle size), got {}",
getName(), arguments.size());

if (!isUnsignedInteger(arguments[2].type))
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
"Third argument (num hashes) of function {} must be unsigned integer, got {}",
getName(), arguments[2].type->getName());

if (!arguments[2].column)
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
"Third argument (num hashes) of function {} must be constant", getName());

num_hashes = arguments[2].column->getUInt(0);
}

if (arguments.size() > 3)
{
throw Exception(ErrorCodes::TOO_MANY_ARGUMENTS_FOR_FUNCTION,
"Function {} expects no more than three arguments (text, shingle size, num hashes), got {}",
getName(), arguments.size());
}

if (shingle_size == 0)
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Second argument (shingle size) of function {} cannot be zero", getName());

if (num_hashes == 0)
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Third argument (num hashes) of function {} cannot be zero", getName());

auto type = std::make_shared<DataTypeUInt64>();
if constexpr (is_simhash)
return type;

if constexpr (is_arg)
{
DataTypePtr string_type = std::make_shared<DataTypeString>();
DataTypes types(num_hashes, string_type);
auto tuple_type = std::make_shared<DataTypeTuple>(types);
return std::make_shared<DataTypeTuple>(DataTypes{tuple_type, tuple_type});
}

return std::make_shared<DataTypeTuple>(DataTypes{type, type});
}

@ -51,19 +126,47 @@ public:

{
const ColumnPtr & column = arguments[0].column;

size_t shingle_size = default_shingle_size;
size_t num_hashes = default_num_hashes;

if (arguments.size() > 1)
shingle_size = arguments[1].column->getUInt(0);

if (arguments.size() > 2)
num_hashes = arguments[2].column->getUInt(0);

if constexpr (is_simhash)
{
// non-const string; the const case is handled by useDefaultImplementationForConstants.
auto col_res = ColumnVector<UInt64>::create();
auto & vec_res = col_res->getData();
vec_res.resize(column->size());
const ColumnString * col_str_vector = checkAndGetColumn<ColumnString>(&*column);
Impl::apply(col_str_vector->getChars(), col_str_vector->getOffsets(), vec_res);
Impl::apply(col_str_vector->getChars(), col_str_vector->getOffsets(), shingle_size, vec_res);
return col_res;
}
else if constexpr (is_arg) // Min hash arg
{
MutableColumns min_columns(num_hashes);
MutableColumns max_columns(num_hashes);
for (size_t i = 0; i < num_hashes; ++i)
{
min_columns[i] = ColumnString::create();
max_columns[i] = ColumnString::create();
}

auto min_tuple = ColumnTuple::create(std::move(min_columns));
auto max_tuple = ColumnTuple::create(std::move(max_columns));

const ColumnString * col_str_vector = checkAndGetColumn<ColumnString>(&*column);
Impl::apply(col_str_vector->getChars(), col_str_vector->getOffsets(), shingle_size, num_hashes, nullptr, nullptr, min_tuple.get(), max_tuple.get());

MutableColumns tuple_columns;
tuple_columns.emplace_back(std::move(min_tuple));
tuple_columns.emplace_back(std::move(max_tuple));
return ColumnTuple::create(std::move(tuple_columns));
}
else // Min hash
{
// non-const string
auto col_h1 = ColumnVector<UInt64>::create();
auto col_h2 = ColumnVector<UInt64>::create();
auto & vec_h1 = col_h1->getData();

@ -71,7 +174,7 @@ public:

vec_h1.resize(column->size());
vec_h2.resize(column->size());
const ColumnString * col_str_vector = checkAndGetColumn<ColumnString>(&*column);
Impl::apply(col_str_vector->getChars(), col_str_vector->getOffsets(), vec_h1, vec_h2);
Impl::apply(col_str_vector->getChars(), col_str_vector->getOffsets(), shingle_size, num_hashes, &vec_h1, &vec_h2, nullptr, nullptr);
MutableColumns tuple_columns;
tuple_columns.emplace_back(std::move(col_h1));
tuple_columns.emplace_back(std::move(col_h2));
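For reference, here is a toy version of what an Impl for these functions computes over a set of shingles: simhash as a per-bit majority vote over shingle hashes, minhash as an extremum per seeded hash function. This is an illustration of the algorithms, not ClickHouse's actual Impl:

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <string>
#include <vector>

// Toy simhash: fold all shingle hashes into one fingerprint by majority
// vote on each of the 64 bits.
uint64_t toySimHash(const std::vector<std::string> & shingles)
{
    int counts[64] = {};
    for (const auto & s : shingles)
    {
        uint64_t h = std::hash<std::string>{}(s);
        for (int bit = 0; bit < 64; ++bit)
            counts[bit] += ((h >> bit) & 1) ? 1 : -1;
    }
    uint64_t fingerprint = 0;
    for (int bit = 0; bit < 64; ++bit)
        if (counts[bit] > 0)
            fingerprint |= uint64_t(1) << bit;
    return fingerprint;
}

// Toy minhash: keep the minimum shingle hash per seeded hash function
// (the "arg" flavor above additionally remembers which shingle won).
// XOR seeding is a crude stand-in for a real hash family.
uint64_t toyMinHash(const std::vector<std::string> & shingles, uint64_t seed)
{
    uint64_t min_hash = UINT64_MAX;
    for (const auto & s : shingles)
        min_hash = std::min(min_hash, std::hash<std::string>{}(s) ^ seed);
    return min_hash;
}
```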
@ -738,15 +738,26 @@ void ActionsMatcher::visit(const ASTFunction & node, const ASTPtr & ast, Data &

if (node.is_window_function)
{
// Also add columns from PARTITION BY and ORDER BY of window functions.
// Requiring a constant reference to a shared pointer to non-const AST
// doesn't really look sane, but the visitor does indeed require it.
if (node.window_partition_by)
{
visit(node.window_partition_by->clone(), data);
visit(node.window_partition_by, data);
}
if (node.window_order_by)
{
visit(node.window_order_by->clone(), data);
visit(node.window_order_by, data);
}

// Also manually add columns for arguments of the window function itself.
// ActionVisitor is written in such a way that this method must itself
// descend into all needed function children. Window functions can't have
// any special functions as arguments, so the code below that handles
// special arguments is not needed. This is analogous to
// appendWindowFunctionsArguments() in SelectQueryExpressionAnalyzer and
// partially duplicates its code. Probably we can remove most of the
// logic from that function, but I don't yet have it all figured out...
for (const auto & arg : node.arguments->children)
{
visit(arg, data);
}

// Don't need to do anything more for window functions here -- the

@ -970,7 +970,9 @@ void SelectQueryExpressionAnalyzer::appendWindowFunctionsArguments(

ExpressionActionsChain::Step & step = chain.lastStep(aggregated_columns);

// 1) Add actions for window functions and their arguments;
// 2) Mark the columns that are really required.
// 2) Mark the columns that are really required. We have to mark them as
// required because we finish the expression chain before processing the
// window functions.
for (const auto & [_, w] : window_descriptions)
{
for (const auto & f : w.window_functions)

@ -981,41 +983,14 @@ void SelectQueryExpressionAnalyzer::appendWindowFunctionsArguments(

getRootActionsNoMakeSet(f.function_node->clone(),
true /* no_subqueries */, step.actions());

// 1.2) result of window function: an empty INPUT.
// It is an aggregate function, so it won't be added by getRootActions.
// This is something of a hack. Other options:
// a] do it like an aggregate function -- break the chain of actions
// and manually add window functions to the starting list of
// input columns. Logically this is similar to what we're doing
// now, but would require splitting the window function processing
// into a full-fledged step after plain functions. This would be
// somewhat cumbersome. With the INPUT hack we can avoid a separate
// step and pretend that window functions are almost "normal"
// select functions. The limitation of both these ways is that
// we can't reference window functions in other SELECT
// expressions.
// b] add a WINDOW action type, then sort, then split the chain on
// each WINDOW action and insert the Window pipeline between the
// Expression pipelines. This is a "proper" way that would allow
// us to depend on window functions in other functions. But it's
// complicated so I avoid doing it for now.
ColumnWithTypeAndName col;
col.type = f.aggregate_function->getReturnType();
col.column = col.type->createColumn();
col.name = f.column_name;

step.actions()->addInput(col);

// 2.1) function arguments;
for (const auto & a : f.function_node->arguments->children)
{
// 2.1) function arguments;
step.required_output.push_back(a->getColumnName());
}
// 2.2) function result;
step.required_output.push_back(f.column_name);
}

// 2.3) PARTITION BY and ORDER BY columns.
// 2.1) PARTITION BY and ORDER BY columns.
for (const auto & c : w.full_sort_description)
{
step.required_output.push_back(c.column_name);

@ -1048,6 +1023,15 @@ void SelectQueryExpressionAnalyzer::appendSelect(ExpressionActionsChain & chain,

for (const auto & child : select_query->select()->children)
{
if (const auto * function = typeid_cast<const ASTFunction *>(child.get());
function
&& function->is_window_function)
{
// Skip window function columns here -- they are calculated after
// other SELECT expressions by a special step.
continue;
}

step.required_output.push_back(child->getColumnName());
}
}

@ -1421,11 +1405,54 @@ ExpressionAnalysisResult::ExpressionAnalysisResult(

/// If there is aggregation, we execute expressions in SELECT and ORDER BY on the initiating server, otherwise on the source servers.
query_analyzer.appendSelect(chain, only_types || (need_aggregate ? !second_stage : !first_stage));

query_analyzer.appendWindowFunctionsArguments(chain, only_types || !first_stage);
// Window functions are processed in a separate expression chain after
// the main SELECT, similar to what we do for aggregate functions.
if (has_window)
{
query_analyzer.appendWindowFunctionsArguments(chain, only_types || !first_stage);

// Build a list of output columns of the window step.
// 1) We need the columns that are the output of ExpressionActions.
for (const auto & x : chain.getLastActions()->getNamesAndTypesList())
{
query_analyzer.columns_after_window.push_back(x);
}
// 2) We also have to manually add the output of the window function
// to the list of the output columns of the window step, because the
// window functions are not in the ExpressionActions.
for (const auto & [_, w] : query_analyzer.window_descriptions)
{
for (const auto & f : w.window_functions)
{
query_analyzer.columns_after_window.push_back(
{f.column_name, f.aggregate_function->getReturnType()});
}
}

before_window = chain.getLastActions();
finalize_chain(chain);

auto & step = chain.lastStep(query_analyzer.columns_after_window);

// The output of this expression chain is the result of
// SELECT (before the "final projection", i.e. renaming the columns), so
// we have to mark the expressions that are required in the output,
// again. We did it for the previous expression chain ("select w/o
// window functions") earlier, in appendSelect(). But that chain also
// produced the expressions required to calculate window functions.
// They are not needed in the final SELECT result. Knowing the correct
// list of columns is important when we apply SELECT DISTINCT later.
const auto * select_query = query_analyzer.getSelectQuery();
for (const auto & child : select_query->select()->children)
{
step.required_output.push_back(child->getColumnName());
}
}

selected_columns = chain.getLastStep().required_output;

has_order_by = query.orderBy() != nullptr;
before_order_and_select = query_analyzer.appendOrderBy(
before_order_by = query_analyzer.appendOrderBy(
chain,
only_types || (need_aggregate ? !second_stage : !first_stage),
optimize_read_in_order,

@ -1572,9 +1599,9 @@ std::string ExpressionAnalysisResult::dump() const

ss << "before_window " << before_window->dumpDAG() << "\n";
}

if (before_order_and_select)
if (before_order_by)
{
ss << "before_order_and_select " << before_order_and_select->dumpDAG() << "\n";
ss << "before_order_by " << before_order_by->dumpDAG() << "\n";
}

if (before_limit_by)

@ -1587,6 +1614,20 @@ std::string ExpressionAnalysisResult::dump() const

ss << "final_projection " << final_projection->dumpDAG() << "\n";
}

if (!selected_columns.empty())
{
ss << "selected_columns ";
for (size_t i = 0; i < selected_columns.size(); i++)
{
if (i > 0)
{
ss << ", ";
}
ss << backQuote(selected_columns[i]);
}
ss << "\n";
}

return ss.str();
}
@ -55,6 +55,8 @@ struct ExpressionAnalyzerData

NamesAndTypesList columns_after_join;
/// Columns after ARRAY JOIN, JOIN, and/or aggregation.
NamesAndTypesList aggregated_columns;
/// Columns after window functions.
NamesAndTypesList columns_after_window;

bool has_aggregation = false;
NamesAndTypesList aggregation_keys;

@ -202,11 +204,12 @@ struct ExpressionAnalysisResult

ActionsDAGPtr before_aggregation;
ActionsDAGPtr before_having;
ActionsDAGPtr before_window;
ActionsDAGPtr before_order_and_select;
ActionsDAGPtr before_order_by;
ActionsDAGPtr before_limit_by;
ActionsDAGPtr final_projection;

/// Columns from the SELECT list, before renaming them to aliases.
/// Columns from the SELECT list, before renaming them to aliases. Used to
/// perform SELECT DISTINCT.
Names selected_columns;

/// Columns will be removed after prewhere actions execution.

@ -22,15 +22,22 @@ void ExpressionInfoMatcher::visit(const ASTFunction & ast_function, const ASTPtr

{
data.is_array_join = true;
}
// "is_aggregate_function" doesn't mean much by itself. Apparently here it is
// used to move filters from HAVING to WHERE, and probably for this purpose
// an aggregate function calculated as a window function is not relevant.
// "is_aggregate_function" is used to determine whether we can move a filter
// (1) from HAVING to WHERE or (2) from WHERE of a parent query to HAVING of
// a subquery.
// For aggregate functions we can't do (1) but can do (2).
// For window functions both don't make sense -- they are not allowed in
// WHERE or HAVING.
else if (!ast_function.is_window_function
&& AggregateFunctionFactory::instance().isAggregateFunctionName(
ast_function.name))
{
data.is_aggregate_function = true;
}
else if (ast_function.is_window_function)
{
data.is_window_function = true;
}
else
{
const auto & function = FunctionFactory::instance().tryGet(ast_function.name, data.context);

@ -75,15 +82,26 @@ bool ExpressionInfoMatcher::needChildVisit(const ASTPtr & node, const ASTPtr &)

return !node->as<ASTSubquery>();
}

bool hasStatefulFunction(const ASTPtr & node, const Context & context)
bool hasNonRewritableFunction(const ASTPtr & node, const Context & context)
{
for (const auto & select_expression : node->children)
{
ExpressionInfoVisitor::Data expression_info{.context = context, .tables = {}};
ExpressionInfoVisitor(expression_info).visit(select_expression);

if (expression_info.is_stateful_function)
if (expression_info.is_stateful_function
|| expression_info.is_window_function)
{
// If an outer query has a WHERE on a window function, we can't move
// it into the subquery, because window functions are not allowed in
// WHERE and HAVING. Example:
// select * from (
// select number,
// count(*) over (partition by intDiv(number, 3)) c
// from numbers(3)
// ) where c > 1;
return true;
}
}

return false;

@ -21,6 +21,7 @@ struct ExpressionInfoMatcher

bool is_array_join = false;
bool is_stateful_function = false;
bool is_aggregate_function = false;
bool is_window_function = false;
bool is_deterministic_function = true;
std::unordered_set<size_t> unique_reference_tables_pos = {};
};

@ -36,6 +37,6 @@ struct ExpressionInfoMatcher

using ExpressionInfoVisitor = ConstInDepthNodeVisitor<ExpressionInfoMatcher, true>;

bool hasStatefulFunction(const ASTPtr & node, const Context & context);
bool hasNonRewritableFunction(const ASTPtr & node, const Context & context);

}

@ -33,11 +33,14 @@ public:

return false;
if (auto * func = node->as<ASTFunction>())
{
if (isAggregateFunction(*func)
|| func->is_window_function)
if (isAggregateFunction(*func))
{
return false;
}

// Window functions can contain aggregation results as arguments
// to the window functions, or columns of PARTITION BY or ORDER BY
// of the window.
}
return true;
}

@ -538,7 +538,10 @@ Block InterpreterSelectQuery::getSampleBlockImpl()

if (options.to_stage == QueryProcessingStage::Enum::WithMergeableState)
{
if (!analysis_result.need_aggregate)
return analysis_result.before_order_and_select->getResultColumns();
{
// What's the difference with selected_columns?
return analysis_result.before_order_by->getResultColumns();
}

Block header = analysis_result.before_aggregation->getResultColumns();

@ -564,7 +567,8 @@ Block InterpreterSelectQuery::getSampleBlockImpl()

if (options.to_stage == QueryProcessingStage::Enum::WithMergeableStateAfterAggregation)
{
return analysis_result.before_order_and_select->getResultColumns();
// What's the difference with selected_columns?
return analysis_result.before_order_by->getResultColumns();
}

return analysis_result.final_projection->getResultColumns();

@ -958,8 +962,9 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu

}
else
{
executeExpression(query_plan, expressions.before_order_and_select, "Before ORDER BY and SELECT");
executeExpression(query_plan, expressions.before_window, "Before window functions");
executeWindow(query_plan);
executeExpression(query_plan, expressions.before_order_by, "Before ORDER BY");
executeDistinct(query_plan, true, expressions.selected_columns, true);
}

@ -1005,8 +1010,10 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu

else if (expressions.hasHaving())
executeHaving(query_plan, expressions.before_having);

executeExpression(query_plan, expressions.before_order_and_select, "Before ORDER BY and SELECT");
executeExpression(query_plan, expressions.before_window,
"Before window functions");
executeWindow(query_plan);
executeExpression(query_plan, expressions.before_order_by, "Before ORDER BY");
executeDistinct(query_plan, true, expressions.selected_columns, true);

}

@ -1029,10 +1036,23 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, const BlockInpu

/** Optimization - if there are several sources and there is LIMIT, then first apply the preliminary LIMIT,
* limiting the number of rows in each up to `offset + limit`.
*/
bool has_withfill = false;
if (query.orderBy())
{
SortDescription order_descr = getSortDescription(query, *context);
for (auto & desc : order_descr)
if (desc.with_fill)
{
has_withfill = true;
break;
}
}

bool has_prelimit = false;
if (!to_aggregation_stage &&
query.limitLength() && !query.limit_with_ties && !hasWithTotalsInAnySubqueryInFromClause(query) &&
!query.arrayJoinExpressionList() && !query.distinct && !expressions.hasLimitBy() && !settings.extremes)
!query.arrayJoinExpressionList() && !query.distinct && !expressions.hasLimitBy() && !settings.extremes &&
!has_withfill)
{
executePreLimit(query_plan, false);
has_prelimit = true;
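The new has_withfill check exists because WITH FILL synthesizes rows after sorting; a preliminary per-stream LIMIT applied before that could drop source rows the fill step still needs. A simplified model of the fill step (integer keys, step 1, illustrative only):

```cpp
#include <vector>

// Generates the rows WITH FILL would add between adjacent sorted keys.
// Cutting the input early with a preliminary LIMIT would remove keys and
// therefore change which gap rows get generated.
std::vector<int> fillGaps(const std::vector<int> & sorted_keys)
{
    std::vector<int> result;
    for (size_t i = 0; i < sorted_keys.size(); ++i)
    {
        result.push_back(sorted_keys[i]);
        if (i + 1 < sorted_keys.size())
            for (int k = sorted_keys[i] + 1; k < sorted_keys[i + 1]; ++k)
                result.push_back(k); // row synthesized by WITH FILL
    }
    return result;
}
```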
@ -1745,6 +1765,11 @@ void InterpreterSelectQuery::executeRollupOrCube(QueryPlan & query_plan, Modific

void InterpreterSelectQuery::executeExpression(QueryPlan & query_plan, const ActionsDAGPtr & expression, const std::string & description)
{
if (!expression)
{
return;
}

auto expression_step = std::make_unique<ExpressionStep>(query_plan.getCurrentDataStream(), expression);

expression_step->setStepDescription(description);

@ -90,8 +90,12 @@ std::vector<ASTs> PredicateExpressionsOptimizer::extractTablesPredicates(const A

ExpressionInfoVisitor::Data expression_info{.context = context, .tables = tables_with_columns};
ExpressionInfoVisitor(expression_info).visit(predicate_expression);

if (expression_info.is_stateful_function || !expression_info.is_deterministic_function)
return {}; /// Not optimized when predicate contains stateful function or indeterministic function
if (expression_info.is_stateful_function
|| !expression_info.is_deterministic_function
|| expression_info.is_window_function)
{
return {}; /// Not optimized when the predicate contains a stateful, non-deterministic or window function
}

if (!expression_info.is_array_join)
{

@ -190,6 +194,12 @@ bool PredicateExpressionsOptimizer::tryMovePredicatesFromHavingToWhere(ASTSelect

if (expression_info.is_stateful_function)
return false;

if (expression_info.is_window_function)
{
// Window functions are not allowed in either HAVING or WHERE.
return false;
}

if (expression_info.is_aggregate_function)
having_predicates.emplace_back(moving_predicate);
else

@ -88,7 +88,7 @@ bool PredicateRewriteVisitorData::rewriteSubquery(ASTSelectQuery & subquery, con

|| (!optimize_with && subquery.with())
|| subquery.withFill()
|| subquery.limitBy() || subquery.limitLength()
|| hasStatefulFunction(subquery.select(), context))
|| hasNonRewritableFunction(subquery.select(), context))
return false;

for (const auto & predicate : predicates)

@ -148,9 +148,9 @@ void QueryNormalizer::visit(ASTSelectQuery & select, const ASTPtr &, Data & data

/// Don't go into select query. It processes children itself.
/// Do not go to the left argument of lambda expressions, so as not to replace the formal parameters
/// with aliases in expressions of the form 123 AS x, arrayMap(x -> 1, [2]).
void QueryNormalizer::visitChildren(const ASTPtr & node, Data & data)
void QueryNormalizer::visitChildren(IAST * node, Data & data)
{
if (const auto * func_node = node->as<ASTFunction>())
if (auto * func_node = node->as<ASTFunction>())
{
if (func_node->tryGetQueryArgument())
{

@ -176,6 +176,16 @@ void QueryNormalizer::visitChildren(const ASTPtr & node, Data & data)

visit(child, data);
}
}

if (func_node->window_partition_by)
{
visitChildren(func_node->window_partition_by.get(), data);
}

if (func_node->window_order_by)
{
visitChildren(func_node->window_order_by.get(), data);
}
}
else if (!node->as<ASTSelectQuery>())
{

@ -221,7 +231,7 @@ void QueryNormalizer::visit(ASTPtr & ast, Data & data)

if (ast.get() != initial_ast.get())
visit(ast, data);
else
visitChildren(ast, data);
visitChildren(ast.get(), data);

current_asts.erase(initial_ast.get());
current_asts.erase(ast.get());

@ -69,7 +69,7 @@ private:

static void visit(ASTTablesInSelectQueryElement &, const ASTPtr &, Data &);
static void visit(ASTSelectQuery &, const ASTPtr &, Data &);

static void visitChildren(const ASTPtr &, Data & data);
static void visitChildren(IAST * node, Data & data);
};

}

@ -29,6 +29,7 @@

#include <DataTypes/DataTypeNullable.h>

#include <IO/WriteHelpers.h>
#include <IO/WriteBufferFromOStream.h>
#include <Storages/IStorage.h>

#include <AggregateFunctions/AggregateFunctionFactory.h>

@ -445,6 +446,8 @@ std::vector<const ASTFunction *> getAggregates(ASTPtr & query, const ASTSelectQu

for (auto & arg : node->arguments->children)
{
assertNoAggregates(arg, "inside another aggregate function");
// We also can't have window functions inside aggregate functions,
// because the window functions are calculated later.
assertNoWindows(arg, "inside an aggregate function");
}
}

@ -454,7 +457,9 @@ std::vector<const ASTFunction *> getAggregates(ASTPtr & query, const ASTSelectQu

std::vector<const ASTFunction *> getWindowFunctions(ASTPtr & query, const ASTSelectQuery & select_query)
{
/// There can not be window functions inside the WHERE and PREWHERE.
/// There cannot be window functions inside WHERE, PREWHERE and HAVING.
if (select_query.having())
assertNoWindows(select_query.having(), "in HAVING");
if (select_query.where())
assertNoWindows(select_query.where(), "in WHERE");
if (select_query.prewhere())

@ -463,17 +468,34 @@ std::vector<const ASTFunction *> getWindowFunctions(ASTPtr & query, const ASTSel

GetAggregatesVisitor::Data data;
GetAggregatesVisitor(data).visit(query);

/// There can not be other window functions within the aggregate functions.
/// Window functions cannot be inside aggregates or other window functions.
/// Aggregate functions can be inside window functions because they are
/// calculated earlier.
for (const ASTFunction * node : data.window_functions)
{
if (node->arguments)
{
for (auto & arg : node->arguments->children)
{
assertNoAggregates(arg, "inside a window function");
assertNoWindows(arg, "inside another window function");
}
}

if (node->window_partition_by)
{
for (auto & arg : node->window_partition_by->children)
{
assertNoWindows(arg, "inside PARTITION BY of a window");
}
}

if (node->window_order_by)
{
for (auto & arg : node->window_order_by->children)
{
assertNoWindows(arg, "inside ORDER BY of a window");
}
}
}

return data.window_functions;

@ -32,9 +32,6 @@ target_link_libraries (string_hash_map_aggregation PRIVATE dbms)

add_executable (string_hash_set string_hash_set.cpp)
target_link_libraries (string_hash_set PRIVATE dbms)

add_executable (context context.cpp)
target_link_libraries (context PRIVATE dbms)

add_executable (two_level_hash_map two_level_hash_map.cpp)
target_include_directories (two_level_hash_map SYSTEM BEFORE PRIVATE ${SPARSEHASH_INCLUDE_DIR})
target_link_libraries (two_level_hash_map PRIVATE dbms)
@ -1,90 +0,0 @@

#include <iostream>
/// #define BOOST_USE_UCONTEXT
#include <Common/Fiber.h>
// #include <boost/context/pooled_fixedsize_stack.hpp>
// #include <boost/context/segmented_stack.hpp>
#include <Common/Exception.h>
#include <Common/FiberStack.h>

void __attribute__((__noinline__)) foo(std::exception_ptr exception)
{
if (exception)
std::rethrow_exception(exception);
}

void __attribute__((__noinline__)) bar(int a)
{
std::cout << StackTrace().toString() << std::endl;

if (a > 0)
throw DB::Exception(0, "hello");
}

void __attribute__((__noinline__)) gar(int a)
{
char buf[1024];
buf[1023] = a & 255;
if (a > 2)
return gar(a - 1);
else
bar(a);
}

int main(int, char **)
try {
namespace ctx=boost::context;
int a;
std::exception_ptr exception;
// ctx::protected_fixedsize allocator
// ctx::pooled_fixedsize_stack(1024 * 64 + 2 * 2 * 1024 * 1024 * 16, 1)
ctx::fiber source{std::allocator_arg_t(), FiberStack(), [&](ctx::fiber&& sink)
{
a=0;
int b=1;
for (size_t i = 0; i < 9; ++i)
{
sink=std::move(sink).resume();
int next=a+b;
a=b;
b=next;
}
try
{
gar(1024);
}
catch (...)
{
std::cout << "Saving exception\n";
exception = std::current_exception();
}
return std::move(sink);
}};

for (int j=0;j<10;++j)
{
try
{
source=std::move(source).resume();
}
catch (DB::Exception & e)
{
std::cout << "Caught exception in resume " << e.getStackTraceString() << std::endl;
}
std::cout << a << " ";
}

std::cout << std::endl;

try
{
foo(exception);
}
catch (const DB::Exception & e)
{
std::cout << e.getStackTraceString() << std::endl;
}
}
catch (...)
{
std::cerr << "Uncaught exception\n";
}
@ -39,6 +39,16 @@ void ASTFunction::appendColumnNameImpl(WriteBuffer & ostr) const

(*it)->appendColumnName(ostr);
}
writeChar(')', ostr);

if (is_window_function)
{
writeCString(" OVER (", ostr);
FormatSettings settings{ostr, true /* one_line */};
FormatState state;
FormatStateStacked frame;
appendWindowDescription(settings, state, frame);
writeCString(")", ostr);
}
}

/** Get the text that identifies this element. */

@ -57,17 +67,20 @@ ASTPtr ASTFunction::clone() const

if (window_name)
{
res->set(res->window_name, window_name->clone());
res->window_name = window_name->clone();
res->children.push_back(res->window_name);
}

if (window_partition_by)
{
res->set(res->window_partition_by, window_partition_by->clone());
res->window_partition_by = window_partition_by->clone();
res->children.push_back(res->window_partition_by);
}

if (window_order_by)
{
res->set(res->window_order_by, window_order_by->clone());
res->window_order_by = window_order_by->clone();
res->children.push_back(res->window_order_by);
}

return res;

@ -21,9 +21,25 @@ public:

ASTPtr parameters;

bool is_window_function = false;
ASTIdentifier * window_name;
ASTExpressionList * window_partition_by;
ASTExpressionList * window_order_by;

// We have to make these fields ASTPtr because this is what the visitors
// expect. Some of them take const ASTPtr & (makes no sense), and some
// take ASTPtr & and modify it. I don't understand how the latter is
// compatible with also having an owning `children` array -- apparently it
// leads to some dangling children that are not referenced by the fields of
// the AST class itself. Some older code hints at the idea of having
// ownership in `children` only, and making the class fields raw
// pointers of the proper type (see e.g. IAST::set), but this is not
// compatible with the visitor interface.

// ASTIdentifier
ASTPtr window_name;

// ASTExpressionList
ASTPtr window_partition_by;

// ASTExpressionList
ASTPtr window_order_by;
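The ownership question raised in the comment above reduces to a generic sketch (illustrative types, not the real IAST): an owning typed field has to be mirrored into the owning children container by hand, which is exactly what the clone() and parser changes in this commit now do explicitly.

```cpp
#include <memory>
#include <vector>

// Minimal model of the two-owners problem: `children` owns the subtree for
// generic traversal, while the typed field gives direct access. Forgetting
// to update one of them is how dangling or orphaned children appear.
struct Ast
{
    std::vector<std::shared_ptr<Ast>> children; // generic owning container
    std::shared_ptr<Ast> window_order_by;       // typed field; must also be pushed into children
};
```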
/// do not print empty parentheses if there are no args - compatibility with new AST for data types and engine names.
bool no_empty_args = false;

@ -419,7 +419,8 @@ bool ParserWindowDefinition::parseImpl(Pos & pos, ASTPtr & node, Expected & expe

ParserIdentifier window_name_parser;
if (window_name_parser.parse(pos, window_name_ast, expected))
{
function->set(function->window_name, window_name_ast);
function->children.push_back(window_name_ast);
function->window_name = window_name_ast;
return true;
}
else

@ -442,7 +443,8 @@ bool ParserWindowDefinition::parseImpl(Pos & pos, ASTPtr & node, Expected & expe

ASTPtr partition_by_ast;
if (columns_partition_by.parse(pos, partition_by_ast, expected))
{
function->set(function->window_partition_by, partition_by_ast);
function->children.push_back(partition_by_ast);
function->window_partition_by = partition_by_ast;
}
else
{

@ -455,7 +457,8 @@ bool ParserWindowDefinition::parseImpl(Pos & pos, ASTPtr & node, Expected & expe

ASTPtr order_by_ast;
if (columns_order_by.parse(pos, order_by_ast, expected))
{
function->set(function->window_order_by, order_by_ast);
function->children.push_back(order_by_ast);
function->window_order_by = order_by_ast;
}
else
{

@ -3,24 +3,37 @@

namespace DB
{

DelayedPortsProcessor::DelayedPortsProcessor(const Block & header, size_t num_ports, const PortNumbers & delayed_ports)
: IProcessor(InputPorts(num_ports, header), OutputPorts(num_ports, header))
namespace ErrorCodes
{
extern const int LOGICAL_ERROR;
}

DelayedPortsProcessor::DelayedPortsProcessor(
const Block & header, size_t num_ports, const PortNumbers & delayed_ports, bool assert_main_ports_empty)
: IProcessor(InputPorts(num_ports, header),
OutputPorts((assert_main_ports_empty ? delayed_ports.size() : num_ports), header))
, num_delayed(delayed_ports.size())
{
port_pairs.resize(num_ports);
output_to_pair.reserve(outputs.size());

for (const auto & delayed : delayed_ports)
port_pairs[delayed].is_delayed = true;

auto input_it = inputs.begin();
auto output_it = outputs.begin();
for (size_t i = 0; i < num_ports; ++i)
{
port_pairs[i].input_port = &*input_it;
port_pairs[i].output_port = &*output_it;
++input_it;
++output_it;
}

for (const auto & delayed : delayed_ports)
port_pairs[delayed].is_delayed = true;
if (port_pairs[i].is_delayed || !assert_main_ports_empty)
{
port_pairs[i].output_port = &*output_it;
output_to_pair.push_back(i);
++output_it;
}
}
}
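With assert_main_ports_empty set, only delayed pairs receive an output port, so output indices no longer match pair indices; output_to_pair is the translation table. A toy model of how it is built and used (simplified, illustrative):

```cpp
#include <cstddef>
#include <vector>

// Output slot j corresponds to pair output_to_pair[j]; prepare() uses this
// table to translate updated output numbers back to port pairs.
std::vector<size_t> buildOutputToPair(const std::vector<bool> & pair_has_output)
{
    std::vector<size_t> output_to_pair;
    for (size_t i = 0; i < pair_has_output.size(); ++i)
        if (pair_has_output[i])
            output_to_pair.push_back(i);
    return output_to_pair;
}
```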
bool DelayedPortsProcessor::processPair(PortsPair & pair)

@ -34,7 +47,7 @@ bool DelayedPortsProcessor::processPair(PortsPair & pair)

}
};

if (pair.output_port->isFinished())
if (pair.output_port && pair.output_port->isFinished())
{
pair.input_port->close();
finish();

@ -43,17 +56,24 @@ bool DelayedPortsProcessor::processPair(PortsPair & pair)

if (pair.input_port->isFinished())
{
pair.output_port->finish();
if (pair.output_port)
pair.output_port->finish();
finish();
return false;
}

if (!pair.output_port->canPush())
if (pair.output_port && !pair.output_port->canPush())
return false;

pair.input_port->setNeeded();
if (pair.input_port->hasData())
{
if (!pair.output_port)
throw Exception(ErrorCodes::LOGICAL_ERROR,
"Input port for DelayedPortsProcessor is assumed to have no data, but it has one");

pair.output_port->pushData(pair.input_port->pullData());
}

return true;
}

@ -63,10 +83,21 @@ IProcessor::Status DelayedPortsProcessor::prepare(const PortNumbers & updated_in

bool skip_delayed = (num_finished + num_delayed) < port_pairs.size();
bool need_data = false;

if (!are_inputs_initialized && !updated_outputs.empty())
{
/// Activate inputs with no output.
for (const auto & pair : port_pairs)
if (!pair.output_port)
pair.input_port->setNeeded();

are_inputs_initialized = true;
}

for (const auto & output_number : updated_outputs)
{
if (!skip_delayed || !port_pairs[output_number].is_delayed)
need_data = processPair(port_pairs[output_number]) || need_data;
auto pair_num = output_to_pair[output_number];
if (!skip_delayed || !port_pairs[pair_num].is_delayed)
need_data = processPair(port_pairs[pair_num]) || need_data;
}

for (const auto & input_number : updated_inputs)

@ -11,7 +11,7 @@ namespace DB

class DelayedPortsProcessor : public IProcessor
{
public:
DelayedPortsProcessor(const Block & header, size_t num_ports, const PortNumbers & delayed_ports);
DelayedPortsProcessor(const Block & header, size_t num_ports, const PortNumbers & delayed_ports, bool assert_main_ports_empty = false);

String getName() const override { return "DelayedPorts"; }

@ -31,6 +31,9 @@ private:

size_t num_delayed;
size_t num_finished = 0;

std::vector<size_t> output_to_pair;
bool are_inputs_initialized = false;

bool processPair(PortsPair & pair);
};

@ -298,7 +298,7 @@ void QueryPipeline::addPipelineBefore(QueryPipeline pipeline)

pipes.emplace_back(QueryPipeline::getPipe(std::move(pipeline)));
pipe = Pipe::unitePipes(std::move(pipes), collected_processors);

auto processor = std::make_shared<DelayedPortsProcessor>(getHeader(), pipe.numOutputPorts(), delayed_streams);
auto processor = std::make_shared<DelayedPortsProcessor>(getHeader(), pipe.numOutputPorts(), delayed_streams, true);
addTransform(std::move(processor));
}

@ -32,6 +32,11 @@ void AddingDelayedSourceStep::transformPipeline(QueryPipeline & pipeline)

{
source->setQueryPlanStep(this);
pipeline.addDelayedStream(source);

/// Now, after adding the delayed stream, it has an implicit dependency on the other port.
/// Here we add a resize processor to remove this dependency.
/// Otherwise, if we add MergeSorting + MergingSorted transforms to the pipeline, we could get `Pipeline stuck`.
pipeline.resize(pipeline.getNumStreams(), true);
}

}

@ -46,6 +46,8 @@ static void doDescribeHeader(const Block & header, size_t count, IQueryPlanStep:

first = false;
elem.dumpNameAndType(settings.out);
settings.out << ": ";
elem.dumpStructure(settings.out);
settings.out << '\n';
}
}

@ -247,6 +247,15 @@ static void explainStep(

step.describeActions(settings);
}

std::string debugExplainStep(const IQueryPlanStep & step)
{
WriteBufferFromOwnString out;
IQueryPlanStep::FormatSettings settings{.out = out};
QueryPlan::ExplainPlanOptions options{.actions = true};
explainStep(step, settings, options);
return out.str();
}
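debugExplainStep returns a plain string, so it is convenient to call from a debugger or a temporary trace point; a hypothetical usage (illustrative, not part of the commit):

```cpp
#include <iostream>

// Dump one plan step while debugging, e.g. from a breakpoint in an
// optimization pass.
void dumpStep(const DB::IQueryPlanStep & step)
{
    std::cerr << DB::debugExplainStep(step) << std::endl;
}
```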
|
||||
|
||||
void QueryPlan::explainPlan(WriteBuffer & buffer, const ExplainPlanOptions & options)
|
||||
{
|
||||
checkInitialized();
|
||||
@ -488,6 +497,7 @@ static bool tryMergeExpressions(QueryPlan::Node * parent_node, QueryPlan::Node *
|
||||
{
|
||||
auto & parent = parent_node->step;
|
||||
auto & child = child_node->step;
|
||||
|
||||
/// TODO: FilterStep
|
||||
auto * parent_expr = typeid_cast<ExpressionStep *>(parent.get());
|
||||
auto * child_expr = typeid_cast<ExpressionStep *>(child.get());
|
||||
|
@ -97,4 +97,6 @@ private:
|
||||
std::vector<std::shared_ptr<Context>> interpreter_context;
|
||||
};
|
||||
|
||||
std::string debugExplainStep(const IQueryPlanStep & step);
|
||||
|
||||
}
|
||||
|
@ -77,6 +77,11 @@ void WindowTransform::transform(Chunk & chunk)
|
||||
ws.argument_columns.clear();
|
||||
for (const auto column_index : ws.argument_column_indices)
|
||||
{
|
||||
// Aggregate functions can't work with constant columns, so we have to
|
||||
// materialize them like the Aggregator does.
|
||||
columns[column_index]
|
||||
= std::move(columns[column_index])->convertToFullColumnIfConst();
|
||||
|
||||
ws.argument_columns.push_back(columns[column_index].get());
|
||||
}
|
||||
|
||||
|
@ -23,11 +23,13 @@ void ReplicatedMergeTreeAltersSequence::addMutationForAlter(int alter_version, s
|
||||
}
|
||||
|
||||
void ReplicatedMergeTreeAltersSequence::addMetadataAlter(
|
||||
int alter_version, bool have_mutation, std::lock_guard<std::mutex> & /*state_lock*/)
|
||||
int alter_version, std::lock_guard<std::mutex> & /*state_lock*/)
|
||||
{
|
||||
/// Data alter (mutation) always added before. See ReplicatedMergeTreeQueue::pullLogsToQueue.
|
||||
/// So mutation alredy added to this sequence or doesn't exist.
|
||||
if (!queue_state.count(alter_version))
|
||||
queue_state.emplace(alter_version, AlterState{.metadata_finished=false, .data_finished=!have_mutation});
|
||||
else /// Data alter can be added before.
|
||||
queue_state.emplace(alter_version, AlterState{.metadata_finished=false, .data_finished=true});
|
||||
else
|
||||
queue_state[alter_version].metadata_finished = false;
|
||||
}
|
||||
|
||||
|
@ -38,9 +38,8 @@ public:
|
||||
/// Add mutation for alter (alter data stage).
|
||||
void addMutationForAlter(int alter_version, std::lock_guard<std::mutex> & /*state_lock*/);
|
||||
|
||||
/// Add metadata for alter (alter metadata stage). If have_mutation=true, than we expect, that
|
||||
/// corresponding mutation will be added.
|
||||
void addMetadataAlter(int alter_version, bool have_mutation, std::lock_guard<std::mutex> & /*state_lock*/);
|
||||
/// Add metadata for alter (alter metadata stage).
|
||||
void addMetadataAlter(int alter_version, std::lock_guard<std::mutex> & /*state_lock*/);
|
||||
|
||||
/// Finish metadata alter. If corresponding data alter finished, than we can remove
|
||||
/// alter from sequence.
|
||||
|
@ -158,7 +158,7 @@ void ReplicatedMergeTreeQueue::insertUnlocked(
|
||||
if (entry->type == LogEntry::ALTER_METADATA)
|
||||
{
|
||||
LOG_TRACE(log, "Adding alter metadata version {} to the queue", entry->alter_version);
|
||||
alter_sequence.addMetadataAlter(entry->alter_version, entry->have_mutation, state_lock);
|
||||
alter_sequence.addMetadataAlter(entry->alter_version, state_lock);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -873,6 +873,7 @@ if __name__ == '__main__':
    parser.add_argument('--use-skip-list', action='store_true', default=False, help="Use skip list to skip tests if found")
    parser.add_argument('--db-engine', help='Database engine name')

    parser.add_argument('--antlr', action='store_true', default=False, dest='antlr', help='Use new ANTLR parser in tests')
    parser.add_argument('--no-stateless', action='store_true', help='Disable all stateless tests')
    parser.add_argument('--no-stateful', action='store_true', help='Disable all stateful tests')
    parser.add_argument('--skip', nargs='+', help="Skip these tests")
@ -886,7 +887,6 @@ if __name__ == '__main__':
    group=parser.add_mutually_exclusive_group(required=False)
    group.add_argument('--shard', action='store_true', default=None, dest='shard', help='Run sharding related tests (required to clickhouse-server listen 127.0.0.2 127.0.0.3)')
    group.add_argument('--no-shard', action='store_false', default=None, dest='shard', help='Do not run shard related tests')
    parser.add_argument('--antlr', action='store_true', default=False, dest='antlr', help='Use new ANTLR parser in tests')

    args = parser.parse_args()
@ -967,7 +967,10 @@ if __name__ == '__main__':
        os.environ['CLICKHOUSE_URL_PARAMS'] += get_additional_client_options_url(args)

    if args.antlr:
        os.environ['CLICKHOUSE_CLIENT_OPT'] += ' --use_antlr_parser=1'
        if 'CLICKHOUSE_CLIENT_OPT' in os.environ:
            os.environ['CLICKHOUSE_CLIENT_OPT'] += ' --use_antlr_parser=1'
        else:
            os.environ['CLICKHOUSE_CLIENT_OPT'] = '--use_antlr_parser=1'

    if args.extract_from_config is None:
        if os.access(args.binary + '-extract-from-config', os.X_OK):
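The test-runner fix above guards against a missing CLICKHOUSE_CLIENT_OPT environment variable (the old `+=` would raise KeyError when it was unset). The same append-or-create pattern in C++ for comparison (POSIX setenv; the helper name is invented):

#include <cstdlib>
#include <string>

// Append a flag to an environment variable, creating the variable first
// if it is not set -- the guard the Python change above introduces.
void appendEnvFlag(const char * name, const std::string & flag)
{
    const char * current = std::getenv(name);
    const std::string value = current ? std::string(current) + " " + flag : flag;
    ::setenv(name, value.c_str(), /*overwrite=*/1);
}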
@ -48,16 +48,15 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam
        "/* Need ClickHouse support read mysql decimal unsigned_decimal DECIMAL(19, 10) UNSIGNED, _decimal DECIMAL(19, 10), */"
        "unsigned_float FLOAT UNSIGNED, _float FLOAT, "
        "unsigned_double DOUBLE UNSIGNED, _double DOUBLE, "
        "_varchar VARCHAR(10), _char CHAR(10), "
        "_varchar VARCHAR(10), _char CHAR(10), binary_col BINARY(8), "
        "/* Need ClickHouse support Enum('a', 'b', 'v') _enum ENUM('a', 'b', 'c'), */"
        "_date Date, _datetime DateTime, _timestamp TIMESTAMP, _bool BOOLEAN) ENGINE = InnoDB;")

    # it already has some data
    mysql_node.query("""
        INSERT INTO test_database.test_table_1 VALUES(1, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char',
        INSERT INTO test_database.test_table_1 VALUES(1, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', 'binary',
        '2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', true);
        """)

    clickhouse_node.query(
        "CREATE DATABASE test_database ENGINE = MaterializeMySQL('{}:3306', 'test_database', 'root', 'clickhouse')".format(
            service_name))
@ -65,51 +64,51 @@ def dml_with_materialize_mysql_database(clickhouse_node, mysql_node, service_nam
assert "test_database" in clickhouse_node.query("SHOW DATABASES")
|
||||
|
||||
check_query(clickhouse_node, "SELECT * FROM test_database.test_table_1 ORDER BY key FORMAT TSV",
|
||||
"1\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t"
|
||||
"1\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\tbinary\\0\\0\t2020-01-01\t"
|
||||
"2020-01-01 00:00:00\t2020-01-01 00:00:00\t1\n")
|
||||
|
||||
mysql_node.query("""
|
||||
INSERT INTO test_database.test_table_1 VALUES(2, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char',
|
||||
INSERT INTO test_database.test_table_1 VALUES(2, 1, -1, 2, -2, 3, -3, 4, -4, 5, -5, 6, -6, 3.2, -3.2, 3.4, -3.4, 'varchar', 'char', 'binary',
|
||||
'2020-01-01', '2020-01-01 00:00:00', '2020-01-01 00:00:00', false);
|
||||
""")
|
||||
|
||||
check_query(clickhouse_node, "SELECT * FROM test_database.test_table_1 ORDER BY key FORMAT TSV",
|
||||
"1\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t"
|
||||
"1\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\tbinary\\0\\0\t2020-01-01\t"
|
||||
"2020-01-01 00:00:00\t2020-01-01 00:00:00\t1\n2\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\t"
|
||||
"varchar\tchar\t2020-01-01\t2020-01-01 00:00:00\t2020-01-01 00:00:00\t0\n")
|
||||
"varchar\tchar\tbinary\\0\\0\t2020-01-01\t2020-01-01 00:00:00\t2020-01-01 00:00:00\t0\n")
|
||||
|
||||
mysql_node.query("UPDATE test_database.test_table_1 SET unsigned_tiny_int = 2 WHERE `key` = 1")
|
||||
|
||||
check_query(clickhouse_node, """
|
||||
SELECT key, unsigned_tiny_int, tiny_int, unsigned_small_int,
|
||||
small_int, unsigned_medium_int, medium_int, unsigned_int, _int, unsigned_integer, _integer,
|
||||
unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char,
|
||||
unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char, binary_col,
|
||||
_date, _datetime, /* exclude it, because ON UPDATE CURRENT_TIMESTAMP _timestamp, */
|
||||
_bool FROM test_database.test_table_1 ORDER BY key FORMAT TSV
|
||||
""",
|
||||
"1\t2\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t"
|
||||
"1\t2\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\tbinary\\0\\0\t2020-01-01\t"
|
||||
"2020-01-01 00:00:00\t1\n2\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\t"
|
||||
"varchar\tchar\t2020-01-01\t2020-01-01 00:00:00\t0\n")
|
||||
"varchar\tchar\tbinary\\0\\0\t2020-01-01\t2020-01-01 00:00:00\t0\n")
|
||||
|
||||
# update primary key
|
||||
mysql_node.query("UPDATE test_database.test_table_1 SET `key` = 3 WHERE `unsigned_tiny_int` = 2")
|
||||
|
||||
check_query(clickhouse_node, "SELECT key, unsigned_tiny_int, tiny_int, unsigned_small_int,"
|
||||
" small_int, unsigned_medium_int, medium_int, unsigned_int, _int, unsigned_integer, _integer, "
|
||||
" unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char, "
|
||||
" unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char, binary_col, "
|
||||
" _date, _datetime, /* exclude it, because ON UPDATE CURRENT_TIMESTAMP _timestamp, */ "
|
||||
" _bool FROM test_database.test_table_1 ORDER BY key FORMAT TSV",
|
||||
"2\t1\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\t"
|
||||
"varchar\tchar\t2020-01-01\t2020-01-01 00:00:00\t0\n3\t2\t-1\t2\t-2\t3\t-3\t"
|
||||
"4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t2020-01-01 00:00:00\t1\n")
|
||||
"varchar\tchar\tbinary\\0\\0\t2020-01-01\t2020-01-01 00:00:00\t0\n3\t2\t-1\t2\t-2\t3\t-3\t"
|
||||
"4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\tbinary\\0\\0\t2020-01-01\t2020-01-01 00:00:00\t1\n")
|
||||
|
||||
mysql_node.query('DELETE FROM test_database.test_table_1 WHERE `key` = 2')
|
||||
check_query(clickhouse_node, "SELECT key, unsigned_tiny_int, tiny_int, unsigned_small_int,"
|
||||
" small_int, unsigned_medium_int, medium_int, unsigned_int, _int, unsigned_integer, _integer, "
|
||||
" unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char, "
|
||||
" unsigned_bigint, _bigint, unsigned_float, _float, unsigned_double, _double, _varchar, _char, binary_col, "
|
||||
" _date, _datetime, /* exclude it, because ON UPDATE CURRENT_TIMESTAMP _timestamp, */ "
|
||||
" _bool FROM test_database.test_table_1 ORDER BY key FORMAT TSV",
|
||||
"3\t2\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\t2020-01-01\t"
|
||||
"3\t2\t-1\t2\t-2\t3\t-3\t4\t-4\t5\t-5\t6\t-6\t3.2\t-3.2\t3.4\t-3.4\tvarchar\tchar\tbinary\\0\\0\t2020-01-01\t"
|
||||
"2020-01-01 00:00:00\t1\n")
|
||||
|
||||
mysql_node.query('DELETE FROM test_database.test_table_1 WHERE `unsigned_tiny_int` = 2')
|
||||
|
@ -148,6 +148,13 @@ def test_table_function(started_cluster):
    assert node1.query("SELECT sum(`money`) FROM {}".format(table_function)).rstrip() == '60000'
    conn.close()

def test_binary_type(started_cluster):
    conn = get_mysql_conn()
    with conn.cursor() as cursor:
        cursor.execute("CREATE TABLE clickhouse.binary_type (id INT PRIMARY KEY, data BINARY(16) NOT NULL)")
    table_function = "mysql('mysql1:3306', 'clickhouse', '{}', 'root', 'clickhouse')".format('binary_type')
    node1.query("INSERT INTO {} VALUES (42, 'clickhouse')".format('TABLE FUNCTION ' + table_function))
    assert node1.query("SELECT * FROM {}".format(table_function)) == '42\tclickhouse\\0\\0\\0\\0\\0\\0\n'

def test_enum_type(started_cluster):
    table_name = 'test_enum_type'
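The expected \0\0\0\0\0\0 tail in the assertion above follows from MySQL's BINARY(N) semantics: shorter values are right-padded with zero bytes, so the 10-byte string 'clickhouse' gains six trailing NULs in a BINARY(16) column. A minimal illustration of the padding rule (the helper name is invented):

#include <cassert>
#include <string>

// Right-pad a value with zero bytes to a fixed width, the way MySQL
// stores values in a BINARY(N) column.
std::string padToBinaryWidth(std::string value, size_t width)
{
    value.resize(width, '\0');
    return value;
}

int main()
{
    // 'clickhouse' is 10 bytes, so BINARY(16) appends six NUL bytes.
    const std::string stored = padToBinaryWidth("clickhouse", 16);
    assert(stored.size() == 16);
    assert(stored.substr(10) == std::string(6, '\0'));
}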
38 tests/performance/window_functions.xml Normal file
@ -0,0 +1,38 @@
<test>
    <preconditions>
        <table_exists>hits_100m_single</table_exists>
    </preconditions>

    <settings>
        <allow_experimental_window_functions>1</allow_experimental_window_functions>
    </settings>

    <!--
        For some counters, find top 10 users by the number of records.
        First with LIMIT BY, next with window functions.
    -->
    <query><![CDATA[
        select CounterID, UserID, count(*) user_hits
        from hits_100m_single
        where CounterID < 10000
        group by CounterID, UserID
        order by user_hits desc
        limit 10 by CounterID
        format Null
    ]]></query>

    <query><![CDATA[
        select *
        from (
            select CounterID, UserID, count(*) user_hits,
                count() over (partition by CounterID order by user_hits desc)
                    user_rank
            from hits_100m_single
            where CounterID < 10000
            group by CounterID, UserID
        )
        where user_rank <= 10
        format Null
    ]]></query>

</test>
@ -1,144 +1,141 @@
0
2718169299
2718169299
1315333491
1099965843
5746351769509927967
5746351769509927967
8347269581771603092
6041373934059725027
(17178276249054052155,8864230932371215121)
(14133097226001036899,7985237721476952807)
(14133097226001036899,7985237721476952807)
(4661257206578284012,15229878657590021759)
(3087743741749030713,11631667950302077749)
(11923981719512934676,1193672187225825732)
(11923981719512934676,1193672187225825732)
(17970606678134635272,3825545538448404526)
(9422952829151664974,568010773615758889)
2548869326
2548869326
401385678
401385710
2652202579
2652235347
2984455347
2984488115
12804820948382413807
12804820948919350245
11651601468065149391
11651600368014488527
18377198011227067677
18233505035951822655
5501050600367972694
5501050600367972692
(8590465925632898311,12699049311112305995)
(8590465925632898311,15828587343885202011)
(8590465925632898311,15824051019631343049)
(8590465925632898311,12699049311222825283)
(217966158370437743,14452995911556652133)
(217966158370437743,14452995911556652133)
(2170210914777151141,5341809779339553313)
(12469866236432988845,5341809779339553313)
(12271157076799061825,5514511977572226426)
(11639913962681153226,2767634094725305612)
(12271157075024394466,17994666970078080114)
(12271157077109587702,13572452308677868240)
(6252006845407214340,13538761942960976531)
(13795977174459370328,6392395597500134035)
(16118993428517222971,13602445809406467)
(16118993428517222971,13602445809406467)
18446744073709551615
130877626
130877626
2414681787
2414681787
3795742796
3795742796
3795742796
3795742796
(10693559443859979498,10693559443859979498)
(12862934800683464900,12912608544812513109)
(12862934800683464900,12912608544812513109)
(5701637312405877447,12912608544812513109)
(5701637312405877447,12912608544812513109)
(17357047205102710216,17357047205102710216)
(17357047205102710216,17357047205102710216)
(17357047205102710216,17357047205102710216)
(17357047205102710216,17357047205102710216)
3562273581
3579050789
3562257197
3562258213
3579050797
3579050757
3562258221
3562258181
3004171816
2584740395
437257770
2651981610
3004171816
2584740395
437257770
2651981610
(17614245890954671019,12771214424940442770)
(17614245890954671019,12771214424940442770)
(7128473921279637957,12771214424940442770)
(7128473921279637957,12771214424940442770)
(17614245890954671019,12771214424940442770)
(17614245890954671019,12771214424940442770)
(7128473921279637957,12771214424940442770)
(7128473921279637957,12771214424940442770)
(14260447771268573594,5578182242585518316)
(14260447771268573594,16377939020851853906)
(4363920713808688881,5013693163726625177)
(14260447771268573594,3863279269132177973)
(14260447771268573594,5578182242585518316)
(14260447771268573594,16377939020851853906)
(4363920713808688881,5013693163726625177)
(14260447771268573594,3863279269132177973)
uniqExact 6
ngramSimhash
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 938403918
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 904817231
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 904849486
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 938469966
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 938404430
ngramSimhashCaseInsensitive
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 938453071
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 938453599
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 938404430
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 636382047
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 938388046
ngramSimhashUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2400625214
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2669060670
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 2671174174
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 2669060798
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 2635506238
ngramSimhashCaseInsensitiveUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2984307934
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 2967514366
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2715855070
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2967529694
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 2984290526
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2984306910
wordShingleSimhash
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2384813566025024242
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 2393820766427040734
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 2421405261516400471
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2384883934767174398
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2384813567165864670
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2384813567098766070
wordShingleSimhashCaseInsensitive
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 11635224793909957342
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 11617192803208139478
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 11617192803208151794
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 11617192803208151766
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 3006891407629799254
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 11617263171950236406
wordShingleSimhashUTF8
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 9097818277104946605
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 9084246141658271116
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 9084247241171471628
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 9088752215857929613
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 9093255814816009484
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 9084247481822285196
wordShingleSimhashCaseInsensitiveUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 14788772559981154978
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 14497164445320454820
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 14500537785782895266
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 14787646625647636642
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 14500016612976573090
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 14787956717160870888
ngramMinhash
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (15568933215262012353,16287411738807860353)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (9473589826959436958,14264235017873782379)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 (261441656340606110,13387826928927239258)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 (261441656340606110,3305790294064680121)
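Unlike the simhash family above, the `ngramMinhash` functions return their signature as a pair of UInt64 values, which is why the hashes in this and the following blocks are printed as tuples. Pairs can be compared component-wise; a minimal sketch, assuming `tupleHammingDistance` is available alongside these functions and using illustrative string literals:

```sql
-- Hedged sketch: component-wise distance between two MinHash tuples.
-- 0 means both components match, i.e. likely near-duplicates.
SELECT tupleHammingDistance(
    ngramMinhash('ClickHouse is simple and works out of the box.'),
    ngramMinhash('ClickHouse is simple and works out-of-the-box.')
) AS distance;
```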
ngramMinhashCaseInsensitive
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (15568933215262012353,16287411738807860353)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (9473589826959436958,14264235017873782379)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 (3051755284325985438,3305790294064680121)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 (3051755284325985438,13387826928927239258)
ngramMinhashUTF8
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 4 (309830857064065611,7476109060377919216)
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (309830856946430871,7521913981442105351)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (309830857559697399,7476109060377919216)
ngramMinhashCaseInsensitiveUTF8
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (13010809262502929096,2266175201446733829)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 5 (16827851229372179144,976408052548769549)
wordShingleMinhash
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 (14343822344862533053,11776483993821900250)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (18417749332128868312,11776483993821900250)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (18417749329907528200,14156831980621923226)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (4600092690178227586,11776483993821900250)
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (12998011837685887081,1565093152297016105)
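A count greater than 1 marks a bucket where several paraphrases collapsed into the same signature, which is the behavior these functions exist to provide. To inspect only such collisions, the grouping query can be filtered; `defaults` and `s` remain assumed names:

```sql
-- Hedged sketch: show only signatures shared by more than one string.
SELECT count() AS c, wordShingleMinhash(s) AS h
FROM defaults
GROUP BY h
HAVING c > 1;
```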
wordShingleMinhashCaseInsensitive
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (12998011837880940480,1565093152297016105)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (1100751419997894255,15225006848401474458)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 (1100751419777226283,12993805708561478711)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (1260401089202135898,12993805709529540523)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (1638964264353944555,12993805708561478711)
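The CaseInsensitive variants ignore letter case, so strings differing only in case collapse into one bucket, while the UTF8 variants read the input as UTF-8 rather than raw bytes. A quick way to see the case-insensitive effect, with illustrative literals:

```sql
-- Hedged sketch: case-sensitive vs case-insensitive word-shingle MinHash.
-- Expected: 0 then 1 (the case-sensitive hashes almost certainly differ).
SELECT
    wordShingleMinhash('ClickHouse is SIMPLE')
        = wordShingleMinhash('clickhouse is simple') AS sensitive_match,
    wordShingleMinhashCaseInsensitive('ClickHouse is SIMPLE')
        = wordShingleMinhashCaseInsensitive('clickhouse is simple') AS insensitive_match;
```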
wordShingleMinhashUTF8
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (742280067319112377,14237963017046410351)
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (7237654052534217600,14400297883226437452)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (742280067319112377,17574811665615962276)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (3458625375707825328,17574811665615962276)
wordShingleMinhashCaseInsensitiveUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (7032848390598450936,5104668712725998486)
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (15582670464629505464,13034678298246801511)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (9935434838523508980,7648038926638343017)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 (7032848390598450936,16870743692447971238)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (7302041809563941951,6856814412450461959)
ngramSimHash
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 3906262823
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 2857686823
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 676648743
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 1012193063
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 3092567843
ngramSimHashCaseInsensitive
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2891240999
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 3092567591
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 2824132391
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 3908359975
ngramSimHashUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 3159676711
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 2 676648743
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 1012193063
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 2924795687
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 3897874215
ngramSimHashCaseInsensitiveUTF8
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 3906262823
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 3092567591
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 2824132391
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 2891241255
wordShingleSimHash
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 857724390
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 404215270
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 991679910
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 425963587
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 404215014
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 563598566
wordShingleSimHashCaseInsensitive
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 959182215
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 429118950
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 421737795
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 964941252
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 965465540
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 420713958
wordShingleSimHashUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 857724390
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 404215270
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 991679910
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 425963587
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 404215014
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 563598566
wordShingleSimHashCaseInsensitiveUTF8
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 959182215
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 429118950
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 421737795
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 964941252
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 965465540
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 420713958
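The remaining blocks switch from the simhash family to the minhash family, whose result is printed as a tuple of two hash values rather than a single number; texts that produce equal tuples are grouped as near-duplicates. A minimal, self-contained illustration, with an arbitrary literal string standing in for the test data:

```sql
-- ngramMinHash returns a Tuple of two UInt64 hashes; in the reference
-- blocks below, strings with equal tuples land in the same group.
SELECT ngramMinHash('ClickHouse is simple and works out of the box.') AS minhash_tuple;
```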
ngramMinHash
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (6021986790841777095,17443426065825246292)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (13225377334870249827,17443426065825246292)
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (4388091710993602029,17613327300639166679)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (7962672159337006560,17443426065825246292)
ngramMinHashCaseInsensitive
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (6021986790841777095,8535005350590298790)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (13225377334870249827,8535005350590298790)
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (4388091710993602029,17613327300639166679)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (7962672159337006560,8535005350590298790)
ngramMinHashUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (6021986790841777095,17443426065825246292)
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (13225377334870249827,17443426065825246292)
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (4388091710993602029,17613327300639166679)
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (7962672159337006560,17443426065825246292)
ngramMinHashCaseInsensitiveUTF8
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (6021986790841777095,8535005350590298790)
|
||||
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (13225377334870249827,8535005350590298790)
|
||||
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (4388091710993602029,17613327300639166679)
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (7962672159337006560,8535005350590298790)
|
||||
wordShingleMinHash
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (18148981179837829400,14581416672396321264)
|
||||
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (16224204290372720939,13975393268888698430)
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (5044918525503962090,12338022931991160906)
|
||||
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (18148981179837829400,6048943706095721476)
|
||||
wordShingleMinHashCaseInsensitive
|
||||
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (15504011608613565061,6048943706095721476)
|
||||
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (16224204290372720939,13975393268888698430)
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (5044918525503962090,3381836163833256482)
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (15504011608613565061,14581416672396321264)
|
||||
wordShingleMinHashUTF8
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (18148981179837829400,14581416672396321264)
|
||||
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (16224204290372720939,13975393268888698430)
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (5044918525503962090,12338022931991160906)
|
||||
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (18148981179837829400,6048943706095721476)
|
||||
wordShingleMinHashCaseInsensitiveUTF8
|
||||
ClickHouse makes full use of all available hardware to process every request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (only used columns after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid single points of failure. Downtime for one site or the entire data center will not affect the system\'s read and write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they immediately become available for building reports. The SQL dialect allows you to express the desired result without resorting to any non-standard APIs that can be found in some alternative systems. 1 (15504011608613565061,6048943706095721476)
|
||||
ClickHouse uses all available hardware to its full potential to process each query as fast as possible. Peak processing performance for a single query stands at more than 2 terabytes per second (after decompression, only used columns). In distributed setup reads are automatically balanced among healthy replicas to avoid increasing latency.\nClickHouse supports multi-master asynchronous replication and can be deployed across multiple datacenters. All nodes are equal, which allows avoiding having single points of failure. Downtime of a single node or the whole datacenter wont affect the systems availability for both reads and writes.\nClickHouse is simple and works out-of-the-box. It streamlines all your data processing: ingest all your structured data into the system and it becomes instantly available for building reports. SQL dialect allows expressing the desired result without involving any custom non-standard API that could be found in some alternative systems. 1 (16224204290372720939,13975393268888698430)
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all of your structured data into the system, and it is immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all your structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems.\n:::::::\nClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (using columns after decompression only). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the read / write availability of the system.\nClickHouse is simple and works out of the box. It simplifies all processing of your data: it loads all structured data into the system and immediately becomes available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 3 (5044918525503962090,3381836163833256482)
|
||||
ClickHouse makes full use of all available hardware to process each request as quickly as possible. Peak performance for a single query is over 2 terabytes per second (used columns only after unpacking). In a distributed setup, reads are automatically balanced across healthy replicas to avoid increased latency.\nClickHouse supports asynchronous multi-master replication and can be deployed across multiple data centers. All nodes are equal to avoid a single point of failure. Downtime for one site or the entire data center will not affect the system\'s read / write availability.\nClickHouse is simple and works out of the box. It simplifies all the processing of your data: it loads all your structured data into the system, and they are immediately available for building reports. The SQL dialect allows you to express the desired result without resorting to any of the non-standard APIs found in some alternative systems. 1 (15504011608613565061,14581416672396321264)
|
||||
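The reference rows above pair each group of near-duplicate texts with its row count and hash tuple. A minimal sketch of the query shape that produces such output, mirroring the `GROUP BY h` pattern used further down in this test (the table name `docs` and column `s` are hypothetical):

```sql
-- Hypothetical table docs(s String); near-duplicate rows collapse
-- onto the same MinHash tuple, so grouping by it finds fuzzy duplicates.
SELECT
    any(s)                AS sample_text,
    count()               AS near_duplicates,
    wordShingleMinHash(s) AS h
FROM docs
GROUP BY h
ORDER BY near_duplicates DESC;
```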
|
@ -1,22 +1,22 @@
|
||||
SELECT ngramSimhash('');
|
||||
SELECT ngramSimhash('what a cute cat.');
|
||||
SELECT ngramSimhashCaseInsensitive('what a cute cat.');
|
||||
SELECT ngramSimhashUTF8('what a cute cat.');
|
||||
SELECT ngramSimhashCaseInsensitiveUTF8('what a cute cat.');
|
||||
SELECT wordShingleSimhash('what a cute cat.');
|
||||
SELECT wordShingleSimhashCaseInsensitive('what a cute cat.');
|
||||
SELECT wordShingleSimhashUTF8('what a cute cat.');
|
||||
SELECT wordShingleSimhashCaseInsensitiveUTF8('what a cute cat.');
|
||||
SELECT ngramSimHash('');
|
||||
SELECT ngramSimHash('what a cute cat.');
|
||||
SELECT ngramSimHashCaseInsensitive('what a cute cat.');
|
||||
SELECT ngramSimHashUTF8('what a cute cat.');
|
||||
SELECT ngramSimHashCaseInsensitiveUTF8('what a cute cat.');
|
||||
SELECT wordShingleSimHash('what a cute cat.');
|
||||
SELECT wordShingleSimHashCaseInsensitive('what a cute cat.');
|
||||
SELECT wordShingleSimHashUTF8('what a cute cat.');
|
||||
SELECT wordShingleSimHashCaseInsensitiveUTF8('what a cute cat.');
|
||||
|
||||
SELECT ngramMinhash('');
|
||||
SELECT ngramMinhash('what a cute cat.');
|
||||
SELECT ngramMinhashCaseInsensitive('what a cute cat.');
|
||||
SELECT ngramMinhashUTF8('what a cute cat.');
|
||||
SELECT ngramMinhashCaseInsensitiveUTF8('what a cute cat.');
|
||||
SELECT wordShingleMinhash('what a cute cat.');
|
||||
SELECT wordShingleMinhashCaseInsensitive('what a cute cat.');
|
||||
SELECT wordShingleMinhashUTF8('what a cute cat.');
|
||||
SELECT wordShingleMinhashCaseInsensitiveUTF8('what a cute cat.');
|
||||
SELECT ngramMinHash('');
|
||||
SELECT ngramMinHash('what a cute cat.');
|
||||
SELECT ngramMinHashCaseInsensitive('what a cute cat.');
|
||||
SELECT ngramMinHashUTF8('what a cute cat.');
|
||||
SELECT ngramMinHashCaseInsensitiveUTF8('what a cute cat.');
|
||||
SELECT wordShingleMinHash('what a cute cat.');
|
||||
SELECT wordShingleMinHashCaseInsensitive('what a cute cat.');
|
||||
SELECT wordShingleMinHashUTF8('what a cute cat.');
|
||||
SELECT wordShingleMinHashCaseInsensitiveUTF8('what a cute cat.');
|
||||
|
||||
DROP TABLE IF EXISTS defaults;
|
||||
CREATE TABLE defaults
|
||||
@ -26,23 +26,23 @@ CREATE TABLE defaults
|
||||
|
||||
INSERT INTO defaults values ('It is the latest occurrence of the Southeast European haze, the issue that occurs in constant intensity during every wet season. It has mainly been caused by forest fires resulting from illegal slash-and-burn clearing performed on behalf of the palm oil industry in Kazakhstan, principally on the islands, which then spread quickly in the dry season.') ('It is the latest occurrence of the Southeast Asian haze, the issue that occurs in constant intensity during every wet season. It has mainly been caused by forest fires resulting from illegal slash-and-burn clearing performed on behalf of the palm oil industry in Kazakhstan, principally on the islands, which then spread quickly in the dry season.');
|
||||
|
||||
SELECT ngramSimhash(s) FROM defaults;
|
||||
SELECT ngramSimhashCaseInsensitive(s) FROM defaults;
|
||||
SELECT ngramSimhashUTF8(s) FROM defaults;
|
||||
SELECT ngramSimhashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
SELECT wordShingleSimhash(s) FROM defaults;
|
||||
SELECT wordShingleSimhashCaseInsensitive(s) FROM defaults;
|
||||
SELECT wordShingleSimhashUTF8(s) FROM defaults;
|
||||
SELECT wordShingleSimhashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
SELECT ngramSimHash(s) FROM defaults;
|
||||
SELECT ngramSimHashCaseInsensitive(s) FROM defaults;
|
||||
SELECT ngramSimHashUTF8(s) FROM defaults;
|
||||
SELECT ngramSimHashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
SELECT wordShingleSimHash(s) FROM defaults;
|
||||
SELECT wordShingleSimHashCaseInsensitive(s) FROM defaults;
|
||||
SELECT wordShingleSimHashUTF8(s) FROM defaults;
|
||||
SELECT wordShingleSimHashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
|
||||
SELECT ngramMinhash(s) FROM defaults;
|
||||
SELECT ngramMinhashCaseInsensitive(s) FROM defaults;
|
||||
SELECT ngramMinhashUTF8(s) FROM defaults;
|
||||
SELECT ngramMinhashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
SELECT wordShingleMinhash(s) FROM defaults;
|
||||
SELECT wordShingleMinhashCaseInsensitive(s) FROM defaults;
|
||||
SELECT wordShingleMinhashUTF8(s) FROM defaults;
|
||||
SELECT wordShingleMinhashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
SELECT ngramMinHash(s) FROM defaults;
|
||||
SELECT ngramMinHashCaseInsensitive(s) FROM defaults;
|
||||
SELECT ngramMinHashUTF8(s) FROM defaults;
|
||||
SELECT ngramMinHashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
SELECT wordShingleMinHash(s) FROM defaults;
|
||||
SELECT wordShingleMinHashCaseInsensitive(s) FROM defaults;
|
||||
SELECT wordShingleMinHashUTF8(s) FROM defaults;
|
||||
SELECT wordShingleMinHashCaseInsensitiveUTF8(s) FROM defaults;
|
||||
|
||||
TRUNCATE TABLE defaults;
|
||||
INSERT INTO defaults SELECT arrayJoin(splitByString('\n\n',
|
||||
@ -74,38 +74,38 @@ ClickHouse is simple and works out of the box. It simplifies all processing of y
|
||||
SELECT 'uniqExact', uniqExact(s) FROM defaults;
|
||||
|
||||
|
||||
SELECT 'ngramSimhash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimhash(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramSimhashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimhashCaseInsensitive(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramSimhashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimhashUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramSimhashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimhashCaseInsensitiveUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimhash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimhash(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimhashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimhashCaseInsensitive(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimhashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimhashUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimhashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimhashCaseInsensitiveUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramSimHash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimHash(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramSimHashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimHashCaseInsensitive(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramSimHashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimHashUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramSimHashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramSimHashCaseInsensitiveUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimHash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimHash(s, 2) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimHashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimHashCaseInsensitive(s, 2) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimHashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimHashUTF8(s, 2) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleSimHashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleSimHashCaseInsensitiveUTF8(s, 2) as h FROM defaults GROUP BY h;
|
||||
|
||||
SELECT 'ngramMinhash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinhash(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramMinhashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinhashCaseInsensitive(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramMinhashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinhashUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramMinhashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinhashCaseInsensitiveUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinhash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinhash(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinhashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinhashCaseInsensitive(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinhashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinhashUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinhashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinhashCaseInsensitiveUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramMinHash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinHash(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramMinHashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinHashCaseInsensitive(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramMinHashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinHashUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'ngramMinHashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), ngramMinHashCaseInsensitiveUTF8(s) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinHash';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinHash(s, 2, 3) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinHashCaseInsensitive';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinHashCaseInsensitive(s, 2, 3) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinHashUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinHashUTF8(s, 2, 3) as h FROM defaults GROUP BY h;
|
||||
SELECT 'wordShingleMinHashCaseInsensitiveUTF8';
|
||||
SELECT arrayStringConcat(groupArray(s), '\n:::::::\n'), count(), wordShingleMinHashCaseInsensitiveUTF8(s, 2, 3) as h FROM defaults GROUP BY h;
|
||||
|
||||
DROP TABLE defaults;
|
||||
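The updated calls above pass explicit tuning arguments, e.g. `wordShingleSimHash(s, 2)` and `wordShingleMinHash(s, 2, 3)`. A small illustrative sketch of those optional parameters (the defaults of shinglesize = 3 and hashnum = 6 are an assumption based on the function documentation, not shown in this diff):

```sql
-- wordShingleSimHash(s[, shinglesize]) and
-- wordShingleMinHash(s[, shinglesize, hashnum]); smaller shingles make
-- the hashes more tolerant of local edits.
SELECT
    wordShingleMinHash('what a cute cat.')       AS with_defaults,
    wordShingleMinHash('what a cute cat.', 2, 3) AS short_shingles;
```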
|
@ -123,6 +123,16 @@
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
***ipv6 trie dict***
|
||||
1
|
||||
1
|
||||
@ -273,6 +283,11 @@
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
1
|
||||
***ipv6 trie dict mask***
|
||||
1
|
||||
1
|
||||
|
@ -207,9 +207,20 @@ INSERT INTO database_for_dict.table_ipv4_trie VALUES ('127.255.255.255/32', 21);
|
||||
CREATE DICTIONARY database_for_dict.dict_ipv4_trie ( prefix String, val UInt32 )
|
||||
PRIMARY KEY prefix
|
||||
SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db 'database_for_dict' table 'table_ipv4_trie'))
|
||||
LAYOUT(IP_TRIE())
|
||||
LAYOUT(IP_TRIE(ACCESS_TO_KEY_FROM_ATTRIBUTES 1))
|
||||
LIFETIME(MIN 10 MAX 100);
|
||||
|
||||
SELECT '127.0.0.0/24' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.0.0.0')));
|
||||
SELECT '127.0.0.1/32' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.0.0.1')));
|
||||
SELECT '127.0.0.0/24' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.0.0.127')));
|
||||
SELECT '127.0.0.0/16' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.0.255.127')));
|
||||
SELECT '127.255.0.0/16' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.255.127.127')));
|
||||
SELECT '127.255.128.0/24' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.255.128.9')));
|
||||
SELECT '127.255.128.0/24' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.255.128.127')));
|
||||
SELECT '127.255.128.10/32' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.255.128.10')));
|
||||
SELECT '127.255.128.128/25' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.255.128.255')));
|
||||
SELECT '127.255.255.128/32' == dictGetString('database_for_dict.dict_ipv4_trie', 'prefix', tuple(IPv4StringToNum('127.255.255.128')));
|
||||
|
||||
SELECT 3 == dictGetUInt32('database_for_dict.dict_ipv4_trie', 'val', tuple(IPv4StringToNum('127.0.0.0')));
|
||||
SELECT 4 == dictGetUInt32('database_for_dict.dict_ipv4_trie', 'val', tuple(IPv4StringToNum('127.0.0.1')));
|
||||
SELECT 3 == dictGetUInt32('database_for_dict.dict_ipv4_trie', 'val', tuple(IPv4StringToNum('127.0.0.127')));
|
||||
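The hunk above switches the layout to `LAYOUT(IP_TRIE(ACCESS_TO_KEY_FROM_ATTRIBUTES 1))`, which is what lets the new assertions read the key column back as an attribute. A minimal sketch of that lookup, taken directly from the same test:

```sql
-- With ACCESS_TO_KEY_FROM_ATTRIBUTES 1 the 'prefix' key is queryable like
-- any other attribute; the trie answers with the longest matching prefix.
SELECT dictGetString('database_for_dict.dict_ipv4_trie', 'prefix',
                     tuple(IPv4StringToNum('127.0.0.127')));  -- '127.0.0.0/24'
```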
@ -274,7 +285,7 @@ CREATE DICTIONARY database_for_dict.dict_ip_trie
|
||||
)
|
||||
PRIMARY KEY prefix
|
||||
SOURCE(CLICKHOUSE(host 'localhost' port 9000 user 'default' db 'database_for_dict' table 'table_ip_trie'))
|
||||
LAYOUT(IP_TRIE())
|
||||
LAYOUT(IP_TRIE(ACCESS_TO_KEY_FROM_ATTRIBUTES 1))
|
||||
LIFETIME(MIN 10 MAX 100);
|
||||
|
||||
SELECT 'US' == dictGetString('database_for_dict.dict_ip_trie', 'val', tuple(IPv6StringToNum('2620:0:870::')));
|
||||
@ -294,6 +305,12 @@ SELECT 'JA' == dictGetString('database_for_dict.dict_ip_trie', 'val', tuple(IPv4
|
||||
SELECT 1 == dictHas('database_for_dict.dict_ip_trie', tuple(IPv4StringToNum('127.0.0.1')));
|
||||
SELECT 1 == dictHas('database_for_dict.dict_ip_trie', tuple(IPv6StringToNum('::ffff:127.0.0.1')));
|
||||
|
||||
SELECT '2620:0:870::/48' == dictGetString('database_for_dict.dict_ip_trie', 'prefix', tuple(IPv6StringToNum('2620:0:870::')));
|
||||
SELECT '2a02:6b8:1::/48' == dictGetString('database_for_dict.dict_ip_trie', 'prefix', tuple(IPv6StringToNum('2a02:6b8:1::1')));
|
||||
SELECT '2001:db8::/32' == dictGetString('database_for_dict.dict_ip_trie', 'prefix', tuple(IPv6StringToNum('2001:db8::1')));
|
||||
SELECT '::ffff:101.79.55.22/128' == dictGetString('database_for_dict.dict_ip_trie', 'prefix', tuple(IPv6StringToNum('::ffff:654f:3716')));
|
||||
SELECT '::ffff:101.79.55.22/128' == dictGetString('database_for_dict.dict_ip_trie', 'prefix', tuple(IPv6StringToNum('::ffff:101.79.55.22')));
|
||||
|
||||
SELECT '0' == dictGetString('database_for_dict.dict_ip_trie', 'val', tuple(IPv6StringToNum('::0')));
|
||||
SELECT '1' == dictGetString('database_for_dict.dict_ip_trie', 'val', tuple(IPv6StringToNum('8000::')));
|
||||
SELECT '2' == dictGetString('database_for_dict.dict_ip_trie', 'val', tuple(IPv6StringToNum('c000::')));
|
||||
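The same trie also serves IPv4 keys: as the assertions above show, a plain IPv4 address and its `::ffff:`-mapped IPv6 form resolve to the same entry. A minimal sketch:

```sql
-- Both spellings of 127.0.0.1 hit the same trie node.
SELECT
    dictHas('database_for_dict.dict_ip_trie', tuple(IPv4StringToNum('127.0.0.1'))),
    dictHas('database_for_dict.dict_ip_trie', tuple(IPv6StringToNum('::ffff:127.0.0.1')));
```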
|
@ -1,4 +1,4 @@
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
Header: x UInt8
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
Header: dummy UInt8
|
||||
|
@ -1,252 +1,252 @@
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
Distinct
|
||||
Union
|
||||
Expression (Projection + Before ORDER BY and SELECT)
|
||||
Expression (Projection + Before ORDER BY)
|
||||
SettingQuotaAndLimits (Set limits and quota after reading from storage)
|
||||
ReadFromStorage (SystemOne)
|
||||
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
 Distinct
 Union
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
 Union
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
 Distinct
 Union
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
 Union
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
 Union
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
 Union
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
-Expression (Projection + Before ORDER BY and SELECT)
+Expression (Projection + Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (SystemOne)
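For orientation (my gloss, not part of the commit): plan trees of this shape are what `EXPLAIN` prints for nested `UNION` selects over constants, where each branch reads `SystemOne` and each `UNION DISTINCT` adds a `Distinct` step above its `Union`. A minimal sketch, not the test's exact query:

```sql
-- Hypothetical example; the test's queries are longer, but the shape is the
-- same: each constant SELECT reads system.one, and DISTINCT sits over Union.
EXPLAIN
SELECT 1
UNION DISTINCT
SELECT 1
UNION DISTINCT
SELECT 1;
```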
@@ -9,7 +9,7 @@ Expression (Projection)
 MergingSorted (Merge sorted streams for ORDER BY)
 MergeSorting (Merge sorted blocks for ORDER BY)
 PartialSorting (Sort each block for ORDER BY)
-Expression (Before ORDER BY and SELECT)
+Expression (Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (MergeTree)
 SELECT
@@ -21,7 +21,7 @@ LIMIT 10
 Expression (Projection)
 Limit (preliminary LIMIT)
 FinishSorting
-Expression (Before ORDER BY and SELECT)
+Expression (Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (MergeTree with order)
 SELECT
@@ -35,7 +35,7 @@ LIMIT 10
 Expression (Projection)
 Limit (preliminary LIMIT)
 FinishSorting
-Expression (Before ORDER BY and SELECT)
+Expression (Before ORDER BY)
 SettingQuotaAndLimits (Set limits and quota after reading from storage)
 ReadFromStorage (MergeTree with order)
 SELECT
@@ -79,17 +79,18 @@ select number, quantileExact(number) over (partition by intDiv(number, 3)) q fro
9 9
select q * 10, quantileExact(number) over (partition by intDiv(number, 3)) q from numbers(10); -- { serverError 47 }
-- should work in ORDER BY though
-- must work in WHERE if you wrap it in a subquery
select * from (select count(*) over () c from numbers(3)) where c > 0;
-- should work in ORDER BY
1
2
3
select number, max(number) over (partition by intDiv(number, 3) order by number desc) m from numbers(10) order by m desc, number;
-- this one doesn't work yet -- looks like the column names clash, and the
-- window count() is overwritten with aggregate count()
-- select number, count(), count() over (partition by intDiv(number, 3)) from numbers(10) group by number order by count() desc;
-- different windows
-- an explain test would also be helpful, but it's too immature now and I don't
-- want to change reference all the time
-- also works in ORDER BY if you wrap it in a subquery
9 9
6 8
@@ -101,6 +102,37 @@ select number, max(number) over (partition by intDiv(number, 3) order by number
0 2
1 2
2 2
select * from (select count(*) over () c from numbers(3)) order by c;
-- Example with window function only in ORDER BY. Here we make a rank of all
-- numbers sorted descending, and then sort by this rank descending, and must get
-- the ascending order.
1
2
3
select * from (select * from numbers(5) order by rand()) order by count() over (order by number desc) desc;
-- Aggregate functions as window function arguments. This query is semantically
-- the same as the above one, only we replace `number` with
-- `any(number) group by number` and so on.
0
1
2
3
4
select * from (select * from numbers(5) order by rand()) group by number order by sum(any(number + 1)) over (order by min(number) desc) desc;
-- different windows
-- an explain test would also be helpful, but it's too immature now and I don't
-- want to change reference all the time
0
1
2
3
4
select number, max(number) over (partition by intDiv(number, 3) order by number desc), count(number) over (partition by intDiv(number, 5) order by number) as m from numbers(31) order by number settings max_block_size = 2;
-- two functions over the same window
@@ -140,6 +172,8 @@ select number, max(number) over (partition by intDiv(number, 3) order by number
30 30 1
select number, max(number) over (partition by intDiv(number, 3) order by number desc), count(number) over (partition by intDiv(number, 3) order by number desc) as m from numbers(7) order by number settings max_block_size = 2;
-- check that we can work with constant columns
0 2 3
1 2 2
2 2 1
@@ -147,3 +181,39 @@ select number, max(number) over (partition by intDiv(number, 3) order by number
4 5 2
5 5 1
6 6 1
select median(x) over (partition by x) from (select 1 x);
-- an empty window definition is valid as well
1
select groupArray(number) over () from numbers(3);
-- This one tests we properly process the window function arguments.
-- Seen errors like 'column `1` not found' from count(1).
[0]
[0,1]
[0,1,2]
select count(1) over (), max(number + 1) over () from numbers(3);
-- Should work in DISTINCT
1 3
select distinct sum(0) over () from numbers(2);
0
select distinct any(number) over () from numbers(2);
-- Various kinds of aliases are properly substituted into various parts of window
-- function definition.
0
with number + 1 as x select intDiv(number, 3) as y, sum(x + y) over (partition by y order by x) from numbers(7);
0 1
0 3
0 6
1 5
1 11
1 18
2 9
@@ -24,12 +24,24 @@ select number, quantileExact(number) over (partition by intDiv(number, 3)) q fro
-- last stage of select, after all other functions.
select q * 10, quantileExact(number) over (partition by intDiv(number, 3)) q from numbers(10); -- { serverError 47 }

-- should work in ORDER BY though
-- must work in WHERE if you wrap it in a subquery
select * from (select count(*) over () c from numbers(3)) where c > 0;

-- should work in ORDER BY
select number, max(number) over (partition by intDiv(number, 3) order by number desc) m from numbers(10) order by m desc, number;

-- this one doesn't work yet -- looks like the column names clash, and the
-- window count() is overwritten with aggregate count()
-- select number, count(), count() over (partition by intDiv(number, 3)) from numbers(10) group by number order by count() desc;
-- also works in ORDER BY if you wrap it in a subquery
select * from (select count(*) over () c from numbers(3)) order by c;

-- Example with window function only in ORDER BY. Here we make a rank of all
-- numbers sorted descending, and then sort by this rank descending, and must get
-- the ascending order.
select * from (select * from numbers(5) order by rand()) order by count() over (order by number desc) desc;
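To see why the double inversion lands in ascending order, it can help to materialize the rank the query sorts by; a sketch (the alias `r` is mine, and in the era of this commit window functions may require `allow_experimental_window_functions = 1`):

```sql
-- With the default cumulative frame, count() over (order by number desc)
-- assigns r = 1 to the largest number and r = 5 to the smallest, so sorting
-- by r descending restores the ascending order of `number`.
select number, count() over (order by number desc) as r
from numbers(5)
order by r desc;
```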

-- Aggregate functions as window function arguments. This query is semantically
-- the same as the above one, only we replace `number` with
-- `any(number) group by number` and so on.
select * from (select * from numbers(5) order by rand()) group by number order by sum(any(number + 1)) over (order by min(number) desc) desc;
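A sketch of the arithmetic behind the aggregate-argument variant (the alias `s` is mine): every `number` forms its own one-row group, so `any(number + 1)` is just `number + 1` and `min(number)` is `number`.

```sql
-- The running sum accumulates 5, 9, 12, 14, 15 as min(number) descends from 4
-- to 0, so ordering by s descending again yields number = 0, 1, 2, 3, 4.
select number, sum(any(number + 1)) over (order by min(number) desc) as s
from numbers(5)
group by number
order by s desc;
```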

-- different windows
-- an explain test would also be helpful, but it's too immature now and I don't
@@ -40,3 +52,21 @@ select number, max(number) over (partition by intDiv(number, 3) order by number
-- an explain test would also be helpful, but it's too immature now and I don't
-- want to change reference all the time
select number, max(number) over (partition by intDiv(number, 3) order by number desc), count(number) over (partition by intDiv(number, 3) order by number desc) as m from numbers(7) order by number settings max_block_size = 2;
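A note on the `settings max_block_size = 2` clause (my reading): it forces `numbers(7)` to arrive in two-row blocks, so the three-row partitions necessarily straddle block boundaries, which is what these tests exercise. Shrinking the block size further should change nothing but the number of blocks the window transform sees; a sketch:

```sql
-- Hypothetical variant: same result as with max_block_size = 2, one row per block.
select number, max(number) over (partition by intDiv(number, 3) order by number desc)
from numbers(7) order by number settings max_block_size = 1;
```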

-- check that we can work with constant columns
select median(x) over (partition by x) from (select 1 x);

-- an empty window definition is valid as well
select groupArray(number) over () from numbers(3);

-- This one tests we properly process the window function arguments.
-- Seen errors like 'column `1` not found' from count(1).
select count(1) over (), max(number + 1) over () from numbers(3);

-- Should work in DISTINCT
select distinct sum(0) over () from numbers(2);
select distinct any(number) over () from numbers(2);

-- Various kinds of aliases are properly substituted into various parts of window
-- function definition.
with number + 1 as x select intDiv(number, 3) as y, sum(x + y) over (partition by y order by x) from numbers(7);
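A quick check of the alias example against the reference output earlier in this diff (my arithmetic): for `y = 0` the terms `x + y` are 1, 2, 3, accumulating to 1, 3, 6; for `y = 1` they are 5, 6, 7, giving 5, 11, 18; for `y = 2` the single term is 9. A sketch that exposes the per-row terms:

```sql
-- Hypothetical helper query showing each term before the running sum.
with number + 1 as x
select intDiv(number, 3) as y, x + y as term
from numbers(7);
```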
@@ -24,6 +24,7 @@ function alter_thread
 function kill_mutation_thread
 {
     while true; do
         # find any in-flight mutation and kill it
         mutation_id=$($CLICKHOUSE_CLIENT --query "SELECT mutation_id FROM system.mutations WHERE is_done=0 and database='${CLICKHOUSE_DATABASE}' and table='concurrent_mutate_kill' LIMIT 1")
         if [ ! -z "$mutation_id" ]; then
             $CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='$mutation_id'" 1> /dev/null
@@ -44,7 +45,23 @@ timeout $TIMEOUT bash -c kill_mutation_thread 2> /dev/null &
 wait

 $CLICKHOUSE_CLIENT --query "SYSTEM SYNC REPLICA concurrent_mutate_kill"
-$CLICKHOUSE_CLIENT --query "ALTER TABLE concurrent_mutate_kill MODIFY COLUMN value Int64 SETTINGS replication_alter_partitions_sync=2"
+# The alter may not have finished within the timeout, so retry the new alter
+# until it goes through.
+counter=0
+while true; do
+    if $CLICKHOUSE_CLIENT --query "ALTER TABLE concurrent_mutate_kill MODIFY COLUMN value Int64 SETTINGS replication_alter_partitions_sync=2" 2> /dev/null ; then
+        break
+    fi
+
+    if [ "$counter" -gt 120 ]
+    then
+        break
+    fi
+    sleep 0.5
+    counter=$(($counter + 1))
+done

 $CLICKHOUSE_CLIENT --query "SHOW CREATE TABLE concurrent_mutate_kill"
 $CLICKHOUSE_CLIENT --query "OPTIMIZE TABLE concurrent_mutate_kill FINAL"
 $CLICKHOUSE_CLIENT --query "SELECT sum(value) FROM concurrent_mutate_kill"
@@ -36,6 +36,7 @@ function alter_thread
 function kill_mutation_thread
 {
     while true; do
         # find any in-flight mutation and kill it
         mutation_id=$($CLICKHOUSE_CLIENT --query "SELECT mutation_id FROM system.mutations WHERE is_done = 0 and table like 'concurrent_kill_%' and database='${CLICKHOUSE_DATABASE}' LIMIT 1")
         if [ ! -z "$mutation_id" ]; then
             $CLICKHOUSE_CLIENT --query "KILL MUTATION WHERE mutation_id='$mutation_id'" 1> /dev/null
@@ -58,7 +59,22 @@ for i in $(seq $REPLICAS); do
     $CLICKHOUSE_CLIENT --query "SYSTEM SYNC REPLICA concurrent_kill_$i"
 done

-$CLICKHOUSE_CLIENT --query "ALTER TABLE concurrent_kill_$i MODIFY COLUMN value Int64 SETTINGS replication_alter_partitions_sync=2"
+# The alter may not have finished within the timeout, so retry the new alter
+# until it goes through.
+counter=0
+while true; do
+    if $CLICKHOUSE_CLIENT --query "ALTER TABLE concurrent_kill_1 MODIFY COLUMN value Int64 SETTINGS replication_alter_partitions_sync=2" 2> /dev/null ; then
+        break
+    fi
+
+    if [ "$counter" -gt 120 ]
+    then
+        break
+    fi
+    sleep 0.5
+    counter=$(($counter + 1))
+done

 metadata_version=$($CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/test_01593_concurrent_kill/replicas/$i/' and name = 'metadata_version'")
 for i in $(seq $REPLICAS); do
@@ -0,0 +1,4 @@
1 original
2
1 original
2
15 tests/queries/0_stateless/01614_with_fill_with_limit.sql Normal file
@@ -0,0 +1,15 @@
SELECT
    toFloat32(number % 10) AS n,
    'original' AS source
FROM numbers(10)
WHERE (number % 3) = 1
ORDER BY n ASC WITH FILL STEP 1
LIMIT 2;

SELECT
    toFloat32(number % 10) AS n,
    'original' AS source
FROM numbers(10)
WHERE (number % 3) = 1
ORDER BY n ASC WITH FILL STEP 1
LIMIT 2 WITH TIES;
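For context (my reading, not part of the commit): the WHERE clause keeps n = 1, 4, 7, `WITH FILL STEP 1` pads the gaps, and `LIMIT 2` cuts the output after the first filled row, n = 2, matching the four-line reference above. Dropping the limit makes the whole fill visible; a sketch:

```sql
-- Hypothetical variant without LIMIT: originals n = 1, 4, 7 interleaved with
-- filled rows n = 2, 3, 5, 6 whose `source` column is empty.
SELECT
    toFloat32(number % 10) AS n,
    'original' AS source
FROM numbers(10)
WHERE (number % 3) = 1
ORDER BY n ASC WITH FILL STEP 1;
```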
@@ -0,0 +1 @@
SELECT k FROM (SELECT NULL, nullIf(number, 3) AS k, '1048575', (65536, -9223372036854775808), toString(number) AS a FROM system.numbers LIMIT 1048577) AS js1 ANY RIGHT JOIN (SELECT 1.000100016593933, nullIf(number, NULL) AS k, toString(number) AS b FROM system.numbers LIMIT 2, 255) AS js2 USING (k) ORDER BY 257 ASC NULLS LAST FORMAT Null;