Merge branch 'master' of github.com:ClickHouse/ClickHouse into keen-wolf-filefuncwithreadbuf
Commit 6ee98ead05
@@ -113,10 +113,10 @@ $ docker run --rm -e CLICKHOUSE_UID=0 -e CLICKHOUSE_GID=0 --name clickhouse-serv

### How to create default database and user on starting

Sometimes you may want to create a user (the user named `default` is used by default) and a database when the container starts. You can do this using the environment variables `CLICKHOUSE_DB`, `CLICKHOUSE_USER`, `CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT` and `CLICKHOUSE_PASSWORD`:

```
$ docker run --rm -e CLICKHOUSE_DB=my_database -e CLICKHOUSE_USER=username -e CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1 -e CLICKHOUSE_PASSWORD=password -p 9000:9000/tcp yandex/clickhouse-server
```
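With `CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT=1`, the user created by the entrypoint is allowed to manage access entities over SQL. A minimal sketch of what such a session could look like, assuming the container above is running and that the created `username` holds enough privileges to grant them (the `analyst` user and its password are illustrative):

```sql
-- connect with clickhouse-client as `username` from the docker run example
CREATE USER IF NOT EXISTS analyst IDENTIFIED WITH plaintext_password BY 'secret';
GRANT SELECT ON my_database.* TO analyst;
```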
## How to extend this image
@@ -54,6 +54,7 @@ FORMAT_SCHEMA_PATH="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_
CLICKHOUSE_USER="${CLICKHOUSE_USER:-default}"
CLICKHOUSE_PASSWORD="${CLICKHOUSE_PASSWORD:-}"
CLICKHOUSE_DB="${CLICKHOUSE_DB:-}"
+CLICKHOUSE_ACCESS_MANAGEMENT="${CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT:-0}"

for dir in "$DATA_DIR" \
    "$ERROR_LOG_DIR" \
@@ -97,6 +98,7 @@ if [ -n "$CLICKHOUSE_USER" ] && [ "$CLICKHOUSE_USER" != "default" ] || [ -n "$CL
            </networks>
            <password>${CLICKHOUSE_PASSWORD}</password>
            <quota>default</quota>
+           <access_management>${CLICKHOUSE_ACCESS_MANAGEMENT}</access_management>
        </${CLICKHOUSE_USER}>
    </users>
</yandex>
docs/_description_templates/template-data-type.md (new file, 29 lines)
@@ -0,0 +1,29 @@
---
toc_priority:
toc_title:
---

# data_type_name {#data_type-name}

Description.

**Parameters** (Optional)

- `x` — Description. [Type name](relative/path/to/type/dscr.md#type).
- `y` — Description. [Type name](relative/path/to/type/dscr.md#type).

**Examples**

```sql

```

## Additional Info {#additional-info} (Optional)

An additional section can have any name, for example, **Usage**.

**See Also** (Optional)

- [link](#)

[Original article](https://clickhouse.tech/docs/en/data_types/<data-type-name>/) <!--hide-->
@@ -2592,6 +2592,18 @@ Possible values:

Default value: `16`.

## opentelemetry_start_trace_probability {#opentelemetry-start-trace-probability}

Sets the probability that ClickHouse starts a trace for an executed query (if no parent [trace context](https://www.w3.org/TR/trace-context/) is supplied).

Possible values:

- 0 — Tracing is disabled for all executed queries (if no parent trace context is supplied).
- A positive floating-point number in the range [0..1]. For example, with a setting value of `0.5`, ClickHouse starts a trace for about half of the queries on average.
- 1 — Tracing is enabled for all executed queries.

Default value: `0`.
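A session-level assignment of the setting could look like the following sketch (the probability value is illustrative):

```sql
-- trace roughly 10% of queries when no parent trace context is supplied
SET opentelemetry_start_trace_probability = 0.1;
SELECT count() FROM system.numbers LIMIT 1000;
```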
## optimize_on_insert {#optimize-on-insert}

Enables or disables data transformation before the insertion, as if merge was done on this block (according to table engine).
docs/en/operations/system-tables/opentelemetry_span_log.md (new file, 53 lines)
@@ -0,0 +1,53 @@
# system.opentelemetry_span_log {#system_tables-opentelemetry_span_log}

Contains information about [trace spans](https://opentracing.io/docs/overview/spans/) for executed queries.

Columns:

- `trace_id` ([UUID](../../sql-reference/data-types/uuid.md)) — ID of the trace for the executed query.

- `span_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — ID of the `trace span`.

- `parent_span_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — ID of the parent `trace span`.

- `operation_name` ([String](../../sql-reference/data-types/string.md)) — The name of the operation.

- `start_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The start time of the `trace span` (in microseconds).

- `finish_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — The finish time of the `trace span` (in microseconds).

- `finish_date` ([Date](../../sql-reference/data-types/date.md)) — The finish date of the `trace span`.

- `attribute.names` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — [Attribute](https://opentelemetry.io/docs/go/instrumentation/#attributes) names depending on the `trace span`. They are filled in according to the recommendations in the [OpenTelemetry](https://opentelemetry.io/) standard.

- `attribute.values` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — Attribute values depending on the `trace span`. They are filled in according to the recommendations in the `OpenTelemetry` standard.

**Example**

Query:

``` sql
SELECT * FROM system.opentelemetry_span_log LIMIT 1 FORMAT Vertical;
```

Result:

``` text
Row 1:
──────
trace_id:         cdab0847-0d62-61d5-4d38-dd65b19a1914
span_id:          701487461015578150
parent_span_id:   2991972114672045096
operation_name:   DB::Block DB::InterpreterSelectQuery::getSampleBlockImpl()
start_time_us:    1612374594529090
finish_time_us:   1612374594529108
finish_date:      2021-02-03
attribute.names:  []
attribute.values: []
```
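As an illustration, the spans of a single trace can be listed and timed with a query along these lines (the `trace_id` value is taken from the example above):

```sql
SELECT
    operation_name,
    finish_time_us - start_time_us AS duration_us
FROM system.opentelemetry_span_log
WHERE trace_id = toUUID('cdab0847-0d62-61d5-4d38-dd65b19a1914')
ORDER BY start_time_us;
```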
**See Also**

- [OpenTelemetry](../../operations/opentelemetry.md)

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/opentelemetry_span_log) <!--hide-->
docs/en/sql-reference/data-types/map.md (new file, 83 lines)
@@ -0,0 +1,83 @@
---
toc_priority: 65
toc_title: Map(key, value)
---

# Map(key, value) {#data_type-map}

The `Map(key, value)` data type stores `key:value` pairs.

**Parameters**

- `key` — The key part of the pair. [String](../../sql-reference/data-types/string.md) or [Integer](../../sql-reference/data-types/int-uint.md).
- `value` — The value part of the pair. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md) or [Array](../../sql-reference/data-types/array.md).

!!! warning "Warning"
    Currently the `Map` data type is an experimental feature. To work with it you must set `allow_experimental_map_type = 1`.
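For the current session, the setting can be enabled like this:

```sql
SET allow_experimental_map_type = 1;
```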
To get the value from an `a Map('key', 'value')` column, use the `a['key']` syntax. This lookup currently works with linear complexity.

**Examples**

Consider the table:

``` sql
CREATE TABLE table_map (a Map(String, UInt64)) ENGINE=Memory;
INSERT INTO table_map VALUES ({'key1':1, 'key2':10}), ({'key1':2,'key2':20}), ({'key1':3,'key2':30});
```

Select all `key2` values:

```sql
SELECT a['key2'] FROM table_map;
```

Result:

```text
┌─arrayElement(a, 'key2')─┐
│                      10 │
│                      20 │
│                      30 │
└─────────────────────────┘
```

If there is no such `key` in the `Map()` column, the query returns zeros for numeric values, empty strings or empty arrays.

```sql
INSERT INTO table_map VALUES ({'key3':100}), ({});
SELECT a['key3'] FROM table_map;
```

Result:

```text
┌─arrayElement(a, 'key3')─┐
│                     100 │
│                       0 │
└─────────────────────────┘
┌─arrayElement(a, 'key3')─┐
│                       0 │
│                       0 │
│                       0 │
└─────────────────────────┘
```

## Convert Tuple to Map Type {#map-and-tuple}

You can cast a `Tuple()` to a `Map()` using the [CAST](../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) function:

``` sql
SELECT CAST(([1, 2, 3], ['Ready', 'Steady', 'Go']), 'Map(UInt8, String)') AS map;
```

``` text
┌─map───────────────────────────┐
│ {1:'Ready',2:'Steady',3:'Go'} │
└───────────────────────────────┘
```

**See Also**

- [map()](../../sql-reference/functions/tuple-map-functions.md#function-map) function
- [CAST()](../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) function

[Original article](https://clickhouse.tech/docs/en/data-types/map/) <!--hide-->
@@ -5,6 +5,68 @@ toc_title: Working with maps

# Functions for maps {#functions-for-working-with-tuple-maps}

## map {#function-map}

Arranges `key:value` pairs into the [Map(key, value)](../../sql-reference/data-types/map.md) data type.

**Syntax**

``` sql
map(key1, value1[, key2, value2, ...])
```

**Parameters**

- `key` — The key part of the pair. [String](../../sql-reference/data-types/string.md) or [Integer](../../sql-reference/data-types/int-uint.md).
- `value` — The value part of the pair. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md) or [Array](../../sql-reference/data-types/array.md).

**Returned value**

- Data structure as `key:value` pairs.

Type: [Map(key, value)](../../sql-reference/data-types/map.md).

**Examples**

Query:

``` sql
SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);
```

Result:

``` text
┌─map('key1', number, 'key2', multiply(number, 2))─┐
│ {'key1':0,'key2':0}                              │
│ {'key1':1,'key2':2}                              │
│ {'key1':2,'key2':4}                              │
└──────────────────────────────────────────────────┘
```

Query:

``` sql
CREATE TABLE table_map (a Map(String, UInt64)) ENGINE = MergeTree() ORDER BY a;
INSERT INTO table_map SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);
SELECT a['key2'] FROM table_map;
```

Result:

``` text
┌─arrayElement(a, 'key2')─┐
│                       0 │
│                       2 │
│                       4 │
└─────────────────────────┘
```

**See Also**

- [Map(key, value)](../../sql-reference/data-types/map.md) data type

## mapAdd {#function-mapadd}

Collect all the keys and sum corresponding values.

@@ -112,4 +174,4 @@ Result:
└──────────────────────────────┴───────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/sql-reference/functions/tuple-map-functions/) <!--hide-->
@@ -133,10 +133,9 @@ For example:

### cutToFirstSignificantSubdomainCustom {#cuttofirstsignificantsubdomaincustom}

Returns the part of the domain that includes top-level subdomains up to the first significant subdomain. Accepts a custom [TLD list](https://en.wikipedia.org/wiki/List_of_Internet_top-level_domains) name.

Can be useful if you need a fresh TLD list or have a custom one.

Configuration example:

@@ -149,21 +148,150 @@ Configuration example:
</top_level_domains_lists>
```

**Syntax**

``` sql
cutToFirstSignificantSubdomainCustom(URL, TLD)
```

**Parameters**

- `URL` — URL. [String](../../sql-reference/data-types/string.md).
- `TLD` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Part of the domain that includes top-level subdomains up to the first significant subdomain.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

```sql
SELECT cutToFirstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list');
```

Result:

```text
┌─cutToFirstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list')─┐
│ foo.there-is-no-such-domain                                                                   │
└───────────────────────────────────────────────────────────────────────────────────────────────┘
```
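Assuming the same `public_suffix_list` from the configuration example is loaded, the custom variant behaves like the inline `cutToFirstSignificantSubdomain` example earlier on this page (a sketch; the expected value comes from that example):

```sql
SELECT cutToFirstSignificantSubdomainCustom('https://news.yandex.com.tr/', 'public_suffix_list');
-- expected result: 'yandex.com.tr'
```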
**See Also**

- [firstSignificantSubdomain](#firstsignificantsubdomain).

### cutToFirstSignificantSubdomainCustomWithWWW {#cuttofirstsignificantsubdomaincustomwithwww}

Returns the part of the domain that includes top-level subdomains up to the first significant subdomain without stripping `www`. Accepts a custom TLD list name.

Can be useful if you need a fresh TLD list or have a custom one.

Configuration example:

```xml
<!-- <top_level_domains_path>/var/lib/clickhouse/top_level_domains/</top_level_domains_path> -->
<top_level_domains_lists>
    <!-- https://publicsuffix.org/list/public_suffix_list.dat -->
    <public_suffix_list>public_suffix_list.dat</public_suffix_list>
    <!-- NOTE: path is under top_level_domains_path -->
</top_level_domains_lists>
```

**Syntax**

```sql
cutToFirstSignificantSubdomainCustomWithWWW(URL, TLD)
```

**Parameters**

- `URL` — URL. [String](../../sql-reference/data-types/string.md).
- `TLD` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Part of the domain that includes top-level subdomains up to the first significant subdomain without stripping `www`.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

```sql
SELECT cutToFirstSignificantSubdomainCustomWithWWW('www.foo', 'public_suffix_list');
```

Result:

```text
┌─cutToFirstSignificantSubdomainCustomWithWWW('www.foo', 'public_suffix_list')─┐
│ www.foo                                                                      │
└──────────────────────────────────────────────────────────────────────────────┘
```

**See Also**

- [firstSignificantSubdomain](#firstsignificantsubdomain).

### firstSignificantSubdomainCustom {#firstsignificantsubdomaincustom}

Returns the first significant subdomain. Accepts a custom TLD list name.

Can be useful if you need a fresh TLD list or have a custom one.

Configuration example:

```xml
<!-- <top_level_domains_path>/var/lib/clickhouse/top_level_domains/</top_level_domains_path> -->
<top_level_domains_lists>
    <!-- https://publicsuffix.org/list/public_suffix_list.dat -->
    <public_suffix_list>public_suffix_list.dat</public_suffix_list>
    <!-- NOTE: path is under top_level_domains_path -->
</top_level_domains_lists>
```

**Syntax**

```sql
firstSignificantSubdomainCustom(URL, TLD)
```

**Parameters**

- `URL` — URL. [String](../../sql-reference/data-types/string.md).
- `TLD` — Custom TLD list name. [String](../../sql-reference/data-types/string.md).

**Returned value**

- First significant subdomain.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

```sql
SELECT firstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list');
```

Result:

```text
┌─firstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list')─┐
│ foo                                                                                      │
└──────────────────────────────────────────────────────────────────────────────────────────┘
```

**See Also**

- [firstSignificantSubdomain](#firstsignificantsubdomain).

### port(URL\[, default_port = 0\]) {#port}
@@ -14,14 +14,16 @@ ClickHouse supports the standard grammar for defining windows and window functio

| Feature | Support or workaround |
| --------| ----------|
| ad hoc window specification (`count(*) over (partition by id order by time desc)`) | supported |
| expressions involving window functions, e.g. `(count(*) over ()) / 2` | not supported, wrap in a subquery ([feature request](https://github.com/ClickHouse/ClickHouse/issues/19857)) |
| `WINDOW` clause (`select ... from table window w as (partition by id)`) | supported |
| `ROWS` frame | supported |
| `RANGE` frame | supported, the default |
| `INTERVAL` syntax for `DateTime` `RANGE OFFSET` frame | not supported, specify the number of seconds instead |
| `GROUPS` frame | not supported |
| Calculating aggregate functions over a frame (`sum(value) over (order by time)`) | all aggregate functions are supported |
| `rank()`, `dense_rank()`, `row_number()` | supported |
| `lag/lead(value, offset)` | not supported, replace with `any(value) over (.... rows between <offset> preceding and <offset> preceding)`, or `following` for `lead` |
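To make the "supported" rows concrete, here is a sketch of an ad hoc window specification and of the suggested `lag` workaround. The `events` table and its columns are illustrative, and in this version window functions may additionally require the experimental setting to be enabled:

```sql
-- ad hoc window specification
SELECT id, count(*) OVER (PARTITION BY id ORDER BY time DESC) AS cnt
FROM events;

-- emulate lag(value, 1) with any(value) over a one-row-back frame
SELECT id, any(value) OVER (PARTITION BY id ORDER BY time ROWS BETWEEN 1 PRECEDING AND 1 PRECEDING) AS prev_value
FROM events;
```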
## References
@@ -2473,6 +2473,18 @@ SELECT SUM(-1), MAX(0) FROM system.one WHERE 0;

Default value: `16`.

## opentelemetry_start_trace_probability {#opentelemetry-start-trace-probability}

Sets the probability that ClickHouse starts a trace for executed queries (if no parent [trace context](https://www.w3.org/TR/trace-context/) is supplied).

Possible values:

- 0 — Tracing is disabled for all executed queries (if no parent trace context is supplied).
- A positive floating-point number in the range [0..1]. For example, with a setting value of `0.5`, ClickHouse starts a trace for about half of the queries on average.
- 1 — Tracing is enabled for all executed queries.

Default value: `0`.

## optimize_on_insert {#optimize-on-insert}

Enables or disables data transformation before the insertion, as if a merge was done on the inserted block (according to the table engine).
docs/ru/operations/system-tables/opentelemetry_span_log.md (new file, 49 lines)
@@ -0,0 +1,49 @@
# system.opentelemetry_span_log {#system_tables-opentelemetry_span_log}

Contains information about [trace spans](https://opentracing.io/docs/overview/spans/) for executed queries.

Columns:

- `trace_id` ([UUID](../../sql-reference/data-types/uuid.md)) — ID of the trace for the executed query.

- `span_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — ID of the `trace span`.

- `parent_span_id` ([UInt64](../../sql-reference/data-types/int-uint.md)) — ID of the parent `trace span`.

- `operation_name` ([String](../../sql-reference/data-types/string.md)) — the name of the operation.

- `start_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the start time of the `trace span` (in microseconds).

- `finish_time_us` ([UInt64](../../sql-reference/data-types/int-uint.md)) — the finish time of the `trace span` (in microseconds).

- `finish_date` ([Date](../../sql-reference/data-types/date.md)) — the finish date of the `trace span`.

- `attribute.names` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — [attribute](https://opentelemetry.io/docs/go/instrumentation/#attributes) names depending on the `trace span`. They are filled in according to the recommendations of the [OpenTelemetry](https://opentelemetry.io/) standard.

- `attribute.values` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — attribute values depending on the `trace span`. They are filled in according to the recommendations of the `OpenTelemetry` standard.

**Example**

Query:

``` sql
SELECT * FROM system.opentelemetry_span_log LIMIT 1 FORMAT Vertical;
```

Result:

``` text
Row 1:
──────
trace_id:         cdab0847-0d62-61d5-4d38-dd65b19a1914
span_id:          701487461015578150
parent_span_id:   2991972114672045096
operation_name:   DB::Block DB::InterpreterSelectQuery::getSampleBlockImpl()
start_time_us:    1612374594529090
finish_time_us:   1612374594529108
finish_date:      2021-02-03
attribute.names:  []
attribute.values: []
```

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/opentelemetry_span_log) <!--hide-->
docs/ru/sql-reference/data-types/map.md (new file, 69 lines)
@@ -0,0 +1,69 @@
---
toc_priority: 65
toc_title: Map(key, value)
---

# Map(key, value) {#data_type-map}

The `Map(key, value)` data type stores `key:value` pairs.

**Parameters**

- `key` — The key. [String](../../sql-reference/data-types/string.md) or [Integer](../../sql-reference/data-types/int-uint.md).
- `value` — The value. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md) or [Array](../../sql-reference/data-types/array.md).

!!! warning "Warning"
    Currently the `Map` data type is an experimental feature. To use it, enable the setting `allow_experimental_map_type = 1`.

To get the value from an `a Map('key', 'value')` column, use the `a['key']` syntax. This lookup currently works with linear complexity.

**Examples**

Consider the table:

``` sql
CREATE TABLE table_map (a Map(String, UInt64)) ENGINE=Memory;
INSERT INTO table_map VALUES ({'key1':1, 'key2':10}), ({'key1':2,'key2':20}), ({'key1':3,'key2':30});
```

Select all `key2` values:

```sql
SELECT a['key2'] FROM table_map;
```

Result:

```text
┌─arrayElement(a, 'key2')─┐
│                      10 │
│                      20 │
│                      30 │
└─────────────────────────┘
```

If there is no value for some `key` in the `Map()` column, the query returns zeros for numeric columns, empty strings or empty arrays.

```sql
INSERT INTO table_map VALUES ({'key3':100}), ({});
SELECT a['key3'] FROM table_map;
```

Result:

```text
┌─arrayElement(a, 'key3')─┐
│                     100 │
│                       0 │
└─────────────────────────┘
┌─arrayElement(a, 'key3')─┐
│                       0 │
│                       0 │
│                       0 │
└─────────────────────────┘
```

**See Also**

- [map()](../../sql-reference/functions/tuple-map-functions.md#function-map) function
- [CAST()](../../sql-reference/functions/type-conversion-functions.md#type_conversion_function-cast) function

[Original article](https://clickhouse.tech/docs/ru/data-types/map/) <!--hide-->
@@ -5,6 +5,66 @@ toc_title: Working with maps

# Functions for working with map containers {#functions-for-working-with-tuple-maps}

## map {#function-map}

Converts `key:value` pairs into the [Map(key, value)](../../sql-reference/data-types/map.md) data type.

**Syntax**

``` sql
map(key1, value1[, key2, value2, ...])
```

**Parameters**

- `key` — The key. [String](../../sql-reference/data-types/string.md) or [Integer](../../sql-reference/data-types/int-uint.md).
- `value` — The value. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md) or [Array](../../sql-reference/data-types/array.md).

**Returned value**

- Data structure as `key:value` pairs.

Type: [Map(key, value)](../../sql-reference/data-types/map.md).

**Examples**

Query:

``` sql
SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);
```

Result:

``` text
┌─map('key1', number, 'key2', multiply(number, 2))─┐
│ {'key1':0,'key2':0}                              │
│ {'key1':1,'key2':2}                              │
│ {'key1':2,'key2':4}                              │
└──────────────────────────────────────────────────┘
```

Query:

``` sql
CREATE TABLE table_map (a Map(String, UInt64)) ENGINE = MergeTree() ORDER BY a;
INSERT INTO table_map SELECT map('key1', number, 'key2', number * 2) FROM numbers(3);
SELECT a['key2'] FROM table_map;
```

Result:

``` text
┌─arrayElement(a, 'key2')─┐
│                       0 │
│                       2 │
│                       4 │
└─────────────────────────┘
```

**See Also**

- [Map(key, value)](../../sql-reference/data-types/map.md) data type

## mapAdd {#function-mapadd}

Collects all keys and sums the corresponding values.
@@ -115,6 +115,168 @@ SELECT topLevelDomain('svn+ssh://www.some.svn-hosting.com:80/repo/trunk')

For example, `cutToFirstSignificantSubdomain('https://news.yandex.com.tr/') = 'yandex.com.tr'`.

### cutToFirstSignificantSubdomainCustom {#cuttofirstsignificantsubdomaincustom}

Returns the part of the domain that includes top-level subdomains up to the first significant subdomain. Accepts the name of a custom [TLD list](https://ru.wikipedia.org/wiki/Список_доменов_верхнего_уровня).

Useful if you need an up-to-date TLD list or if you have a custom one.

Configuration example:

```xml
<!-- <top_level_domains_path>/var/lib/clickhouse/top_level_domains/</top_level_domains_path> -->
<top_level_domains_lists>
    <!-- https://publicsuffix.org/list/public_suffix_list.dat -->
    <public_suffix_list>public_suffix_list.dat</public_suffix_list>
    <!-- NOTE: path is under top_level_domains_path -->
</top_level_domains_lists>
```

**Syntax**

``` sql
cutToFirstSignificantSubdomainCustom(URL, TLD)
```

**Parameters**

- `URL` — URL. [String](../../sql-reference/data-types/string.md).
- `TLD` — Name of the custom TLD list. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Part of the domain that includes top-level subdomains up to the first significant subdomain.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

```sql
SELECT cutToFirstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list');
```

Result:

```text
┌─cutToFirstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list')─┐
│ foo.there-is-no-such-domain                                                                   │
└───────────────────────────────────────────────────────────────────────────────────────────────┘
```

**See Also**

- [firstSignificantSubdomain](#firstsignificantsubdomain).

### cutToFirstSignificantSubdomainCustomWithWWW {#cuttofirstsignificantsubdomaincustomwithwww}

Returns the part of the domain that includes top-level subdomains up to the first significant subdomain, without stripping "www". Accepts the name of a custom TLD list.

Useful if you need an up-to-date TLD list or if you have a custom one.

Configuration example:

```xml
<!-- <top_level_domains_path>/var/lib/clickhouse/top_level_domains/</top_level_domains_path> -->
<top_level_domains_lists>
    <!-- https://publicsuffix.org/list/public_suffix_list.dat -->
    <public_suffix_list>public_suffix_list.dat</public_suffix_list>
    <!-- NOTE: path is under top_level_domains_path -->
</top_level_domains_lists>
```

**Syntax**

```sql
cutToFirstSignificantSubdomainCustomWithWWW(URL, TLD)
```

**Parameters**

- `URL` — URL. [String](../../sql-reference/data-types/string.md).
- `TLD` — Name of the custom TLD list. [String](../../sql-reference/data-types/string.md).

**Returned value**

- Part of the domain that includes top-level subdomains up to the first significant subdomain, without removing `www`.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

```sql
SELECT cutToFirstSignificantSubdomainCustomWithWWW('www.foo', 'public_suffix_list');
```

Result:

```text
┌─cutToFirstSignificantSubdomainCustomWithWWW('www.foo', 'public_suffix_list')─┐
│ www.foo                                                                      │
└──────────────────────────────────────────────────────────────────────────────┘
```

**See Also**

- [firstSignificantSubdomain](#firstsignificantsubdomain).

### firstSignificantSubdomainCustom {#firstsignificantsubdomaincustom}

Returns the first significant subdomain. Accepts the name of a custom TLD list.

Useful if you need an up-to-date TLD list or if you have a custom one.

Configuration example:

```xml
<!-- <top_level_domains_path>/var/lib/clickhouse/top_level_domains/</top_level_domains_path> -->
<top_level_domains_lists>
    <!-- https://publicsuffix.org/list/public_suffix_list.dat -->
    <public_suffix_list>public_suffix_list.dat</public_suffix_list>
    <!-- NOTE: path is under top_level_domains_path -->
</top_level_domains_lists>
```

**Syntax**

```sql
firstSignificantSubdomainCustom(URL, TLD)
```

**Parameters**

- `URL` — URL. [String](../../sql-reference/data-types/string.md).
- `TLD` — Name of the custom TLD list. [String](../../sql-reference/data-types/string.md).

**Returned value**

- First significant subdomain.

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

```sql
SELECT firstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list');
```

Result:

```text
┌─firstSignificantSubdomainCustom('bar.foo.there-is-no-such-domain', 'public_suffix_list')─┐
│ foo                                                                                      │
└──────────────────────────────────────────────────────────────────────────────────────────┘
```

**See Also**

- [firstSignificantSubdomain](#firstsignificantsubdomain).

### port(URL[, default_port = 0]) {#port}

Returns the port, or `default_port` if the URL contains no port (or if an invalid URL is passed).
@@ -756,7 +756,11 @@ std::optional<UInt64> Connection::checkPacket(size_t timeout_microseconds)
Packet Connection::receivePacket(std::function<void(Poco::Net::Socket &)> async_callback)
{
    in->setAsyncCallback(std::move(async_callback));
-   SCOPE_EXIT(in->setAsyncCallback({}));
+   SCOPE_EXIT({
+       /// disconnect() will reset "in".
+       if (in)
+           in->setAsyncCallback({});
+   });

    try
    {
@@ -166,6 +166,8 @@ void NuKeeperStateMachine::create_snapshot(
        }

    }

+   LOG_DEBUG(log, "Created snapshot {}", s.get_last_log_idx());
    nuraft::ptr<std::exception> except(nullptr);
    bool ret = true;
    when_done(ret, except);
@@ -6,6 +6,7 @@ namespace DB
namespace ErrorCodes
{
    extern const int INCORRECT_DATA;
+   extern const int LOGICAL_ERROR;
}

std::pair<bool, size_t> fileSegmentationEngineJSONEachRowImpl(ReadBuffer & in, DB::Memory<> & memory, size_t min_chunk_size)

@@ -28,7 +29,9 @@ std::pair<bool, size_t> fileSegmentationEngineJSONEachRowImpl(ReadBuffer & in, D
        if (quotes)
        {
            pos = find_first_symbols<'\\', '"'>(pos, in.buffer().end());
-           if (pos == in.buffer().end())
+           if (pos > in.buffer().end())
+               throw Exception("Position in buffer is out of bounds. There must be a bug.", ErrorCodes::LOGICAL_ERROR);
+           else if (pos == in.buffer().end())
                continue;
            if (*pos == '\\')
            {

@@ -45,9 +48,11 @@ std::pair<bool, size_t> fileSegmentationEngineJSONEachRowImpl(ReadBuffer & in, D
        else
        {
            pos = find_first_symbols<'{', '}', '\\', '"'>(pos, in.buffer().end());
-           if (pos == in.buffer().end())
+           if (pos > in.buffer().end())
+               throw Exception("Position in buffer is out of bounds. There must be a bug.", ErrorCodes::LOGICAL_ERROR);
+           else if (pos == in.buffer().end())
                continue;
-           if (*pos == '{')
+           else if (*pos == '{')
            {
                ++balance;
                ++pos;
@@ -35,10 +35,10 @@ struct Memory : boost::noncopyable, Allocator
    char * m_data = nullptr;
    size_t alignment = 0;

-   Memory() {}
+   Memory() = default;

    /// If alignment != 0, then allocate memory aligned to specified value.
-   Memory(size_t size_, size_t alignment_ = 0) : m_capacity(size_), m_size(m_capacity), alignment(alignment_)
+   explicit Memory(size_t size_, size_t alignment_ = 0) : m_capacity(size_), m_size(m_capacity), alignment(alignment_)
    {
        alloc();
    }

@@ -140,7 +140,7 @@ protected:
    Memory<> memory;
public:
    /// If non-nullptr 'existing_memory' is passed, then buffer will not create its own memory and will use existing_memory without ownership.
-   BufferWithOwnMemory(size_t size = DBMS_DEFAULT_BUFFER_SIZE, char * existing_memory = nullptr, size_t alignment = 0)
+   explicit BufferWithOwnMemory(size_t size = DBMS_DEFAULT_BUFFER_SIZE, char * existing_memory = nullptr, size_t alignment = 0)
        : Base(nullptr, 0), memory(existing_memory ? 0 : size, alignment)
    {
        Base::set(existing_memory ? existing_memory : memory.data(), size);
@@ -266,7 +266,7 @@ bool ParserFunction::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
    ParserIdentifier id_parser;
    ParserKeyword distinct("DISTINCT");
    ParserKeyword all("ALL");
-   ParserExpressionList contents(false);
+   ParserExpressionList contents(false, is_table_function);
    ParserSelectWithUnionQuery select;
    ParserKeyword over("OVER");

@@ -278,6 +278,12 @@ bool ParserFunction::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
    ASTPtr expr_list_args;
    ASTPtr expr_list_params;

+   if (is_table_function)
+   {
+       if (ParserTableFunctionView().parse(pos, node, expected))
+           return true;
+   }
+
    if (!id_parser.parse(pos, identifier, expected))
        return false;

@@ -312,36 +318,6 @@ bool ParserFunction::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
        }
    }

-   if (!has_distinct && !has_all)
-   {
-       auto old_pos = pos;
-       auto maybe_an_subquery = pos->type == TokenType::OpeningRoundBracket;
-
-       if (select.parse(pos, query, expected))
-       {
-           auto & select_ast = query->as<ASTSelectWithUnionQuery &>();
-           if (select_ast.list_of_selects->children.size() == 1 && maybe_an_subquery)
-           {
-               // It's an subquery. Bail out.
-               pos = old_pos;
-           }
-           else
-           {
-               if (pos->type != TokenType::ClosingRoundBracket)
-                   return false;
-               ++pos;
-               auto function_node = std::make_shared<ASTFunction>();
-               tryGetIdentifierNameInto(identifier, function_node->name);
-               auto expr_list_with_single_query = std::make_shared<ASTExpressionList>();
-               expr_list_with_single_query->children.push_back(query);
-               function_node->arguments = expr_list_with_single_query;
-               function_node->children.push_back(function_node->arguments);
-               node = function_node;
-               return true;
-           }
-       }
-   }
-
    const char * contents_begin = pos->begin;
    if (!contents.parse(pos, expr_list_args, expected))
        return false;

@@ -477,6 +453,49 @@ bool ParserFunction::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
    return true;
}

+bool ParserTableFunctionView::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
+{
+    ParserIdentifier id_parser;
+    ParserKeyword view("VIEW");
+    ParserSelectWithUnionQuery select;
+
+    ASTPtr identifier;
+    ASTPtr query;
+
+    if (!view.ignore(pos, expected))
+        return false;
+
+    if (pos->type != TokenType::OpeningRoundBracket)
+        return false;
+
+    ++pos;
+
+    bool maybe_an_subquery = pos->type == TokenType::OpeningRoundBracket;
+
+    if (!select.parse(pos, query, expected))
+        return false;
+
+    auto & select_ast = query->as<ASTSelectWithUnionQuery &>();
+    if (select_ast.list_of_selects->children.size() == 1 && maybe_an_subquery)
+    {
+        // It's an subquery. Bail out.
+        return false;
+    }
+
+    if (pos->type != TokenType::ClosingRoundBracket)
+        return false;
+    ++pos;
+    auto function_node = std::make_shared<ASTFunction>();
+    tryGetIdentifierNameInto(identifier, function_node->name);
+    auto expr_list_with_single_query = std::make_shared<ASTExpressionList>();
+    expr_list_with_single_query->children.push_back(query);
+    function_node->name = "view";
+    function_node->arguments = expr_list_with_single_query;
+    function_node->children.push_back(function_node->arguments);
+    node = function_node;
+    return true;
+}
+
bool ParserWindowReference::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
{
    ASTFunction * function = dynamic_cast<ASTFunction *>(node.get());
@@ -149,11 +149,25 @@ protected:
class ParserFunction : public IParserBase
{
public:
-   ParserFunction(bool allow_function_parameters_ = true) : allow_function_parameters(allow_function_parameters_) {}
+   ParserFunction(bool allow_function_parameters_ = true, bool is_table_function_ = false)
+       : allow_function_parameters(allow_function_parameters_), is_table_function(is_table_function_)
+   {
+   }
+
protected:
    const char * getName() const override { return "function"; }
    bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;
    bool allow_function_parameters;
+   bool is_table_function;
+};
+
+// A special function parser for view table function.
+// It parses an SELECT query as its argument and doesn't support getColumnName().
+class ParserTableFunctionView : public IParserBase
+{
+protected:
+   const char * getName() const override { return "function"; }
+   bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;
};

// Window reference (the thing that goes after OVER) for window function.
@@ -468,6 +468,14 @@ bool ParserLambdaExpression::parseImpl(Pos & pos, ASTPtr & node, Expected & expe
}


+bool ParserTableFunctionExpression::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
+{
+    if (ParserTableFunctionView().parse(pos, node, expected))
+        return true;
+    return elem_parser.parse(pos, node, expected);
+}
+
+
bool ParserPrefixUnaryOperatorExpression::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
{
    /// try to find any of the valid operators

@@ -570,8 +578,9 @@ bool ParserTupleElementExpression::parseImpl(Pos & pos, ASTPtr & node, Expected
}


-ParserExpressionWithOptionalAlias::ParserExpressionWithOptionalAlias(bool allow_alias_without_as_keyword)
-    : impl(std::make_unique<ParserWithOptionalAlias>(std::make_unique<ParserExpression>(),
+ParserExpressionWithOptionalAlias::ParserExpressionWithOptionalAlias(bool allow_alias_without_as_keyword, bool is_table_function)
+    : impl(std::make_unique<ParserWithOptionalAlias>(
+        is_table_function ? ParserPtr(std::make_unique<ParserTableFunctionExpression>()) : ParserPtr(std::make_unique<ParserExpression>()),
        allow_alias_without_as_keyword))
{
}

@@ -580,7 +589,7 @@ ParserExpressionWithOptionalAlias::ParserExpressionWithOptionalAlias(bool allow_
bool ParserExpressionList::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
{
    return ParserList(
-       std::make_unique<ParserExpressionWithOptionalAlias>(allow_alias_without_as_keyword),
+       std::make_unique<ParserExpressionWithOptionalAlias>(allow_alias_without_as_keyword, is_table_function),
        std::make_unique<ParserToken>(TokenType::Comma))
        .parse(pos, node, expected);
}
@@ -436,13 +436,26 @@ protected:
};


+// It's used to parse expressions in table function.
+class ParserTableFunctionExpression : public IParserBase
+{
+private:
+    ParserLambdaExpression elem_parser;
+
+protected:
+    const char * getName() const override { return "table function expression"; }
+
+    bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;
+};
+
+
using ParserExpression = ParserLambdaExpression;


class ParserExpressionWithOptionalAlias : public IParserBase
{
public:
-   ParserExpressionWithOptionalAlias(bool allow_alias_without_as_keyword);
+   explicit ParserExpressionWithOptionalAlias(bool allow_alias_without_as_keyword, bool is_table_function = false);
protected:
    ParserPtr impl;

@@ -459,11 +472,12 @@ protected:
class ParserExpressionList : public IParserBase
{
public:
-   ParserExpressionList(bool allow_alias_without_as_keyword_)
-       : allow_alias_without_as_keyword(allow_alias_without_as_keyword_) {}
+   explicit ParserExpressionList(bool allow_alias_without_as_keyword_, bool is_table_function_ = false)
+       : allow_alias_without_as_keyword(allow_alias_without_as_keyword_), is_table_function(is_table_function_) {}

protected:
    bool allow_alias_without_as_keyword;
+   bool is_table_function; // This expression list is used by a table function

    const char * getName() const override { return "list of expressions"; }
    bool parseImpl(Pos & pos, ASTPtr & node, Expected & expected) override;

@@ -473,7 +487,7 @@ protected:
class ParserNotEmptyExpressionList : public IParserBase
{
public:
-   ParserNotEmptyExpressionList(bool allow_alias_without_as_keyword)
+   explicit ParserNotEmptyExpressionList(bool allow_alias_without_as_keyword)
        : nested_parser(allow_alias_without_as_keyword) {}
private:
    ParserExpressionList nested_parser;
@@ -22,7 +22,7 @@ bool ParserTableExpression::parseImpl(Pos & pos, ASTPtr & node, Expected & expec
    auto res = std::make_shared<ASTTableExpression>();

    if (!ParserWithOptionalAlias(std::make_unique<ParserSubquery>(), true).parse(pos, res->subquery, expected)
-       && !ParserWithOptionalAlias(std::make_unique<ParserFunction>(), true).parse(pos, res->table_function, expected)
+       && !ParserWithOptionalAlias(std::make_unique<ParserFunction>(true, true), true).parse(pos, res->table_function, expected)
        && !ParserWithOptionalAlias(std::make_unique<ParserCompoundIdentifier>(false, true), true).parse(pos, res->database_and_table_name, expected))
        return false;
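Together, these parser changes let a table function argument be a whole SELECT query, which is what the `view` table function relies on. A hypothetical illustration of the syntax the new parser accepts, assuming the `view` table function is available in this build:

```sql
SELECT *
FROM view(
    SELECT number, number * 2 AS doubled
    FROM numbers(3)
);
```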
@@ -15,6 +15,7 @@ namespace ErrorCodes
 {
     extern const int BAD_ARGUMENTS;
     extern const int INCORRECT_DATA;
+    extern const int LOGICAL_ERROR;
 }


@@ -436,9 +437,11 @@ static std::pair<bool, size_t> fileSegmentationEngineCSVImpl(ReadBuffer & in, DB
         if (quotes)
         {
             pos = find_first_symbols<'"'>(pos, in.buffer().end());
-            if (pos == in.buffer().end())
+            if (pos > in.buffer().end())
+                throw Exception("Position in buffer is out of bounds. There must be a bug.", ErrorCodes::LOGICAL_ERROR);
+            else if (pos == in.buffer().end())
                 continue;
-            if (*pos == '"')
+            else if (*pos == '"')
             {
                 ++pos;
                 if (loadAtPosition(in, memory, pos) && *pos == '"')
@@ -450,9 +453,11 @@ static std::pair<bool, size_t> fileSegmentationEngineCSVImpl(ReadBuffer & in, DB
         else
         {
             pos = find_first_symbols<'"', '\r', '\n'>(pos, in.buffer().end());
-            if (pos == in.buffer().end())
+            if (pos > in.buffer().end())
+                throw Exception("Position in buffer is out of bounds. There must be a bug.", ErrorCodes::LOGICAL_ERROR);
+            else if (pos == in.buffer().end())
                 continue;
-            if (*pos == '"')
+            else if (*pos == '"')
             {
                 quotes = true;
                 ++pos;
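The CSV segmentation engine now distinguishes three cases instead of two: a position strictly past the buffer end is reported as a LOGICAL_ERROR (an internal bug) rather than being treated like "need more data". A simplified standalone sketch of the same guard — `scan_to_quote` and `std::logic_error` are stand-ins for `find_first_symbols` and `Exception(..., ErrorCodes::LOGICAL_ERROR)`, not the real implementation:

```cpp
// Standalone sketch, simplified: the defensive three-way check used by the
// segmentation engines. scan_to_quote() stands in for find_first_symbols(),
// std::logic_error stands in for Exception(..., ErrorCodes::LOGICAL_ERROR).
#include <algorithm>
#include <stdexcept>
#include <string>

static const char * scan_to_quote(const char * pos, const char * end)
{
    return std::find(pos, end, '"');   // never returns a pointer past `end`
}

// Returns true if a '"' was found in [pos, end), false if the buffer ran out.
static bool find_closing_quote(const char * pos, const char * end)
{
    pos = scan_to_quote(pos, end);

    if (pos > end)
        // Purely defensive: scanning past the end would mean a bug in the scanner itself.
        throw std::logic_error("Position in buffer is out of bounds. There must be a bug.");
    else if (pos == end)
        return false;   // buffer exhausted; the real code refills the buffer and continues
    else
        return true;    // *pos == '"'
}

int main()
{
    std::string buf = "abc\"def";
    return find_closing_quote(buf.data(), buf.data() + buf.size()) ? 0 : 1;
}
```

The same error code and the same three-way guard are added to the Regexp and TabSeparated segmentation engines in the hunks that follow.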
@@ -11,6 +11,7 @@ namespace ErrorCodes
 {
     extern const int INCORRECT_DATA;
     extern const int BAD_ARGUMENTS;
+    extern const int LOGICAL_ERROR;
 }

 RegexpRowInputFormat::RegexpRowInputFormat(
@@ -182,7 +183,9 @@ static std::pair<bool, size_t> fileSegmentationEngineRegexpImpl(ReadBuffer & in,
     while (loadAtPosition(in, memory, pos) && need_more_data)
     {
         pos = find_first_symbols<'\n', '\r'>(pos, in.buffer().end());
-        if (pos == in.buffer().end())
+        if (pos > in.buffer().end())
+            throw Exception("Position in buffer is out of bounds. There must be a bug.", ErrorCodes::LOGICAL_ERROR);
+        else if (pos == in.buffer().end())
             continue;

         // Support DOS-style newline ("\r\n")
@@ -15,6 +15,7 @@ namespace DB
 namespace ErrorCodes
 {
     extern const int INCORRECT_DATA;
+    extern const int LOGICAL_ERROR;
 }


@@ -433,10 +434,11 @@ static std::pair<bool, size_t> fileSegmentationEngineTabSeparatedImpl(ReadBuffer
     {
         pos = find_first_symbols<'\\', '\r', '\n'>(pos, in.buffer().end());

-        if (pos == in.buffer().end())
+        if (pos > in.buffer().end())
+            throw Exception("Position in buffer is out of bounds. There must be a bug.", ErrorCodes::LOGICAL_ERROR);
+        else if (pos == in.buffer().end())
             continue;
-
-        if (*pos == '\\')
+        else if (*pos == '\\')
         {
             ++pos;
             if (loadAtPosition(in, memory, pos))
@@ -6,6 +6,8 @@
         <coordination_settings>
             <operation_timeout_ms>10000</operation_timeout_ms>
             <session_timeout_ms>30000</session_timeout_ms>
+            <snapshot_distance>0</snapshot_distance>
+            <reserved_log_items>0</reserved_log_items>
         </coordination_settings>

         <raft_configuration>
@@ -29,8 +29,8 @@ def get_fake_zk():
     def reset_last_zxid_listener(state):
         print("Fake zk callback called for state", state)
         global _fake_zk_instance
-        # reset last_zxid -- fake server doesn't support it
-        _fake_zk_instance.last_zxid = 0
+        if state != KazooState.CONNECTED:
+            _fake_zk_instance._reset()

     _fake_zk_instance.add_listener(reset_last_zxid_listener)
     _fake_zk_instance.start()
@@ -0,0 +1 @@
+SELECT view(SELECT 1); -- { clientError 62 }
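One-line new test: `clientError 62` is ClickHouse's SYNTAX_ERROR code (inferred from the error-code list, not stated in this diff), so the test asserts that `view(SELECT 1)` is rejected when used as an ordinary function in a SELECT list — with the parser changes above, `view(...)` is only accepted where a table function may appear.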
@@ -1,6 +1,6 @@
-6410
+100
 6410
-25323
+100
 25323
-1774655
+100
 1774655
@@ -6,15 +6,15 @@ CREATE DICTIONARY db_dict.cache_hits
 PRIMARY KEY WatchID
 SOURCE(CLICKHOUSE(HOST 'localhost' PORT tcpPort() USER 'default' TABLE 'hits' PASSWORD '' DB 'test'))
 LIFETIME(MIN 300 MAX 600)
-LAYOUT(CACHE(SIZE_IN_CELLS 100000 QUERY_WAIT_TIMEOUT_MILLISECONDS 600000));
+LAYOUT(CACHE(SIZE_IN_CELLS 100 QUERY_WAIT_TIMEOUT_MILLISECONDS 600000));

-SELECT sum(flag) FROM (SELECT dictHas('db_dict.cache_hits', toUInt64(WatchID)) as flag FROM test.hits PREWHERE WatchID % 1400 == 0);
+SELECT sum(flag) FROM (SELECT dictHas('db_dict.cache_hits', toUInt64(WatchID)) as flag FROM test.hits PREWHERE WatchID % 1400 == 0 LIMIT 100);
 SELECT count() from test.hits PREWHERE WatchID % 1400 == 0;

-SELECT sum(flag) FROM (SELECT dictHas('db_dict.cache_hits', toUInt64(WatchID)) as flag FROM test.hits PREWHERE WatchID % 350 == 0);
+SELECT sum(flag) FROM (SELECT dictHas('db_dict.cache_hits', toUInt64(WatchID)) as flag FROM test.hits PREWHERE WatchID % 350 == 0 LIMIT 100);
 SELECT count() from test.hits PREWHERE WatchID % 350 == 0;

-SELECT sum(flag) FROM (SELECT dictHas('db_dict.cache_hits', toUInt64(WatchID)) as flag FROM test.hits PREWHERE WatchID % 5 == 0);
+SELECT sum(flag) FROM (SELECT dictHas('db_dict.cache_hits', toUInt64(WatchID)) as flag FROM test.hits PREWHERE WatchID % 5 == 0 LIMIT 100);
 SELECT count() from test.hits PREWHERE WatchID % 5 == 0;

 DROP DICTIONARY IF EXISTS db_dict.cache_hits;
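Read together with the updated reference output above: the cache layout shrinks from 100000 to 100 cells and each `dictHas` subquery is capped with `LIMIT 100`, so the checked `sum(flag)` values become a constant 100 while the plain `count()` results (6410, 25323, 1774655) are unchanged.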
@@ -10,7 +10,6 @@
         "00152_insert_different_granularity",
         "00151_replace_partition_with_different_granularity",
         "00157_cache_dictionary",
-        "00992_system_parts_race_condition_zookeeper", /// TODO remove me (alesapin)
         "01193_metadata_loading",
         "01473_event_time_microseconds",
         "01526_max_untracked_memory", /// requires TraceCollector, does not available under sanitizers
@@ -26,7 +25,6 @@
         "memory_profiler",
         "odbc_roundtrip",
         "01103_check_cpu_instructions_at_startup",
-        "00992_system_parts_race_condition_zookeeper", /// TODO remove me (alesapin)
         "01473_event_time_microseconds",
         "01526_max_untracked_memory", /// requires TraceCollector, does not available under sanitizers
         "01193_metadata_loading"
@@ -37,7 +35,6 @@
         "memory_profiler",
         "01103_check_cpu_instructions_at_startup",
         "00900_orc_load",
-        "00992_system_parts_race_condition_zookeeper", /// TODO remove me (alesapin)
         "01473_event_time_microseconds",
         "01526_max_untracked_memory", /// requires TraceCollector, does not available under sanitizers
         "01193_metadata_loading"
@@ -49,7 +46,6 @@
         "01103_check_cpu_instructions_at_startup",
         "01086_odbc_roundtrip", /// can't pass because odbc libraries are not instrumented
         "00877_memory_limit_for_new_delete", /// memory limits don't work correctly under msan because it replaces malloc/free
-        "00992_system_parts_race_condition_zookeeper", /// TODO remove me (alesapin)
         "01473_event_time_microseconds",
         "01526_max_untracked_memory", /// requires TraceCollector, does not available under sanitizers
         "01193_metadata_loading"
@@ -61,7 +57,6 @@
         "00980_alter_settings_race",
         "00834_kill_mutation_replicated_zookeeper",
         "00834_kill_mutation",
-        "00992_system_parts_race_condition_zookeeper", /// TODO remove me (alesapin)
         "01200_mutations_memory_consumption",
         "01103_check_cpu_instructions_at_startup",
         "01037_polygon_dicts_",
@@ -87,7 +82,6 @@
         "00505_secure",
         "00505_shard_secure",
         "odbc_roundtrip",
-        "00992_system_parts_race_condition_zookeeper", /// TODO remove me (alesapin)
         "01103_check_cpu_instructions_at_startup",
         "01114_mysql_database_engine_segfault",
         "00834_cancel_http_readonly_queries_on_client_close",
@@ -101,19 +95,16 @@
         "01455_time_zones"
     ],
     "release-build": [
-        "00992_system_parts_race_condition_zookeeper" /// TODO remove me (alesapin)
     ],
     "database-ordinary": [
         "00604_show_create_database",
         "00609_mv_index_in_in",
         "00510_materizlized_view_and_deduplication_zookeeper",
-        "00738_lock_for_inner_table",
-        "00992_system_parts_race_condition_zookeeper" /// TODO remove me (alesapin)
+        "00738_lock_for_inner_table"
     ],
     "polymorphic-parts": [
         "01508_partition_pruning_long", /// bug, shoud be fixed
-        "01482_move_to_prewhere_and_cast", /// bug, shoud be fixed
-        "00992_system_parts_race_condition_zookeeper" /// TODO remove me (alesapin)
+        "01482_move_to_prewhere_and_cast" /// bug, shoud be fixed
     ],
     "antlr": [
         "00186_very_long_arrays",
@@ -153,7 +144,6 @@
         "00982_array_enumerate_uniq_ranked",
         "00984_materialized_view_to_columns",
         "00988_constraints_replication_zookeeper",
-        "00992_system_parts_race_condition_zookeeper", /// TODO remove me (alesapin)
         "00995_order_by_with_fill",
         "01001_enums_in_in_section",
         "01011_group_uniq_array_memsan",