Merge branch 'master' into vdimir/join_select_inner_table
Commit: b03f48807a
@ -74,6 +74,7 @@ elseif (ARCH_AARCH64)
|
||||
# introduced as optional, either in v8.2 [7] or in v8.4 [8].
|
||||
# rcpc: Load-Acquire RCpc Register. Better support of release/acquire of atomics. Good for allocators and high contention code.
|
||||
# Optional in v8.2, mandatory in v8.3 [9]. Supported in Graviton >=2, Azure and GCP instances.
|
||||
# bf16: Bfloat16, a half-precision floating point format developed by Google Brain. Optional in v8.2, mandatory in v8.6.
|
||||
#
|
||||
# [1] https://github.com/aws/aws-graviton-getting-started/blob/main/c-c%2B%2B.md
|
||||
# [2] https://community.arm.com/arm-community-blogs/b/tools-software-ides-blog/posts/making-the-most-of-the-arm-architecture-in-gcc-10
|
||||
|
@ -122,7 +122,7 @@ Default value: `0`.
|
||||
|
||||
### s3queue_polling_min_timeout_ms {#polling_min_timeout_ms}
|
||||
|
||||
Minimal timeout before next polling (in milliseconds).
|
||||
Specifies the minimum time, in milliseconds, that ClickHouse waits before making the next polling attempt.
|
||||
|
||||
Possible values:
|
||||
|
||||
@ -132,7 +132,7 @@ Default value: `1000`.
|
||||
|
||||
### s3queue_polling_max_timeout_ms {#polling_max_timeout_ms}
|
||||
|
||||
Maximum timeout before next polling (in milliseconds).
|
||||
Defines the maximum time, in milliseconds, that ClickHouse waits before initiating the next polling attempt.
|
||||
|
||||
Possible values:
|
||||
|
||||
@ -142,7 +142,7 @@ Default value: `10000`.
|
||||
|
||||
### s3queue_polling_backoff_ms {#polling_backoff_ms}
|
||||
|
||||
Polling backoff (in milliseconds).
|
||||
Determines the additional wait time added to the previous polling interval when no new files are found. The next poll occurs after the sum of the previous interval and this backoff value, or the maximum interval, whichever is lower.
|
||||
|
||||
Possible values:
|
||||
|
||||
|
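Taken together, these polling settings control how often ClickHouse checks the bucket for new files. Below is a minimal sketch of an `S3Queue` table that sets all three; the bucket URL and table name are hypothetical, and the setting names follow this page:

```sql
CREATE TABLE s3queue_example (data String)
ENGINE = S3Queue('https://example-bucket.s3.amazonaws.com/data/*.json', 'JSONEachRow')
SETTINGS
    mode = 'unordered',
    s3queue_polling_min_timeout_ms = 1000,  -- start by polling every 1 s
    s3queue_polling_max_timeout_ms = 10000, -- never wait longer than 10 s between polls
    s3queue_polling_backoff_ms = 500;       -- add 500 ms after each empty poll
```

With these values, consecutive empty polls wait 1.0 s, 1.5 s, 2.0 s, and so on, up to the 10 s cap.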
@ -10,6 +10,11 @@ The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-fa
|
||||
|
||||
You can use `AggregatingMergeTree` tables for incremental data aggregation, including for aggregated materialized views.
|
||||
|
||||
You can see an example of how to use `AggregatingMergeTree` and aggregate functions in the video below:
|
||||
<div class='vimeo-container'>
|
||||
<iframe width="1030" height="579" src="https://www.youtube.com/embed/pryhI4F_zqQ" title="Aggregation States in ClickHouse" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
|
||||
</div>
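As a minimal sketch of the incremental-aggregation pattern mentioned above (all table and column names here are made up for illustration):

```sql
CREATE TABLE events (ts DateTime, user_id UInt64, value Float64)
ENGINE = MergeTree ORDER BY ts;

-- The target table stores aggregate *states*, not final values.
CREATE TABLE events_agg
(
    day   Date,
    users AggregateFunction(uniq, UInt64),
    total AggregateFunction(sum, Float64)
)
ENGINE = AggregatingMergeTree
ORDER BY day;

-- The materialized view rolls raw rows up into states on every insert.
CREATE MATERIALIZED VIEW events_mv TO events_agg AS
SELECT toDate(ts) AS day, uniqState(user_id) AS users, sumState(value) AS total
FROM events
GROUP BY day;

-- Finalize the states with the -Merge combinators when reading.
SELECT day, uniqMerge(users) AS users, sumMerge(total) AS total
FROM events_agg
GROUP BY day;
```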
|
||||
|
||||
The engine processes all columns with the following types:
|
||||
|
||||
## [AggregateFunction](../../../sql-reference/data-types/aggregatefunction.md)
|
||||
|
@ -16,7 +16,7 @@ You have four options for getting up and running with ClickHouse:
|
||||
- **[ClickHouse Cloud](https://clickhouse.com/cloud/):** The official ClickHouse as a service, built, maintained, and supported by the creators of ClickHouse
|
||||
- **[Quick Install](#quick-install):** an easy-to-download binary for testing and developing with ClickHouse
|
||||
- **[Production Deployments](#available-installation-options):** ClickHouse can run on any Linux, FreeBSD, or macOS with x86-64, modern ARM (ARMv8.2-A up), or PowerPC64LE CPU architecture
|
||||
- **[Docker Image](https://hub.docker.com/r/clickhouse/clickhouse-server/):** use the official Docker image in Docker Hub
|
||||
- **[Docker Image](https://hub.docker.com/_/clickhouse):** use the official Docker image in Docker Hub
|
||||
|
||||
## ClickHouse Cloud
|
||||
|
||||
|
@ -211,7 +211,7 @@ Number of threads in the server of the replicas communication protocol (without
|
||||
|
||||
The difference between the time the thread for calculation of the asynchronous metrics was scheduled to wake up and the time it was in fact woken up. A proxy-indicator of overall system latency and responsiveness.
|
||||
|
||||
### LoadAverage_*N*
|
||||
### LoadAverage*N*
|
||||
|
||||
The whole system load, averaged with exponential smoothing over 1 minute. The load represents the number of threads across all the processes (the scheduling entities of the OS kernel) that are currently running on a CPU, waiting for IO, or ready to run but not being scheduled at this point of time. This number includes all processes, not only clickhouse-server. The number can be greater than the number of CPU cores if the system is overloaded and many processes are ready to run but waiting for CPU or IO.
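These metrics can be inspected at runtime from the `system.asynchronous_metrics` table, for example (a sketch; exact metric names can vary between versions):

```sql
SELECT metric, value
FROM system.asynchronous_metrics
WHERE metric LIKE 'LoadAverage%'
ORDER BY metric;
```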
|
||||
|
||||
|
@ -75,7 +75,7 @@ FROM t_null_big
|
||||
└────────────────────┴─────────────────────┘
|
||||
```
|
||||
|
||||
Also you can use [Tuple](/docs/en/sql-reference/data-types/tuple.md) to work around NULL skipping behavior. The a `Tuple` that contains only a `NULL` value is not `NULL`, so the aggregate functions won't skip that row because of that `NULL` value.
|
||||
Also you can use [Tuple](/docs/en/sql-reference/data-types/tuple.md) to work around NULL skipping behavior. A `Tuple` that contains only a `NULL` value is not `NULL`, so the aggregate functions won't skip that row because of that `NULL` value.
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
@ -110,7 +110,7 @@ GROUP BY v
|
||||
└──────┴─────────┴──────────┘
|
||||
```
|
||||
|
||||
And here is an example of of first_value with `RESPECT NULLS` where we can see that NULL inputs are respected and it will return the first value read, whether it's NULL or not:
|
||||
And here is an example of first_value with `RESPECT NULLS` where we can see that NULL inputs are respected and it will return the first value read, whether it's NULL or not:
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
|
@ -5,7 +5,15 @@ sidebar_position: 102
|
||||
|
||||
# any
|
||||
|
||||
Selects the first encountered value of a column, ignoring any `NULL` values.
|
||||
Selects the first encountered value of a column.
|
||||
|
||||
:::warning
|
||||
As a query can be executed in arbitrary order, the result of this function is non-deterministic.
|
||||
If you need an arbitrary but deterministic result, use functions [`min`](../reference/min.md) or [`max`](../reference/max.md).
|
||||
:::
|
||||
|
||||
By default, the function never returns NULL, i.e. ignores NULL values in the input column.
|
||||
However, if the function is used with the `RESPECT NULLS` modifier, it returns the first value it reads, regardless of whether it is `NULL` or not.
|
||||
|
||||
**Syntax**
|
||||
|
||||
@ -13,46 +21,51 @@ Selects the first encountered value of a column, ignoring any `NULL` values.
|
||||
any(column) [RESPECT NULLS]
|
||||
```
|
||||
|
||||
Aliases: `any_value`, [`first_value`](../reference/first_value.md).
|
||||
Aliases for `any(column)` (without `RESPECT NULLS`):
|
||||
- `any_value`
|
||||
- [`first_value`](../reference/first_value.md).
|
||||
|
||||
Aliases for `any(column) RESPECT NULLS`:
|
||||
- `anyRespectNulls`, `any_respect_nulls`
|
||||
- `firstValueRespectNulls`, `first_value_respect_nulls`
|
||||
- `anyValueRespectNulls`, `any_value_respect_nulls`
|
||||
|
||||
**Parameters**
|
||||
- `column`: The column name.
|
||||
- `column`: The column name.
|
||||
|
||||
**Returned value**
|
||||
|
||||
:::note
|
||||
Supports the `RESPECT NULLS` modifier after the function name. Using this modifier will ensure the function selects the first value passed, regardless of whether it is `NULL` or not.
|
||||
:::
|
||||
The first value encountered.
|
||||
|
||||
:::note
|
||||
The return type of the function is the same as the input, except for LowCardinality which is discarded. This means that given no rows as input it will return the default value of that type (0 for integers, or Null for a Nullable() column). You might use the `-OrNull` [combinator](../../../sql-reference/aggregate-functions/combinators.md) ) to modify this behaviour.
|
||||
:::
|
||||
|
||||
:::warning
|
||||
The query can be executed in any order and even in a different order each time, so the result of this function is indeterminate.
|
||||
To get a determinate result, you can use the [`min`](../reference/min.md) or [`max`](../reference/max.md) function instead of `any`.
|
||||
The return type of the function is the same as the input, except for LowCardinality which is discarded.
|
||||
This means that given no rows as input it will return the default value of that type (0 for integers, or Null for a Nullable() column).
|
||||
You might use the `-OrNull` [combinator](../../../sql-reference/aggregate-functions/combinators.md) to modify this behaviour.
|
||||
:::
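A small sketch of the empty-input behaviour and the `-OrNull` combinator described in the note above (the table name is hypothetical):

```sql
CREATE TABLE empty_tab (x UInt32) ENGINE = Memory;

-- With no input rows, any() returns the default value of the type (0 for UInt32),
-- while anyOrNull() returns NULL.
SELECT any(x), anyOrNull(x) FROM empty_tab;
```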
|
||||
|
||||
**Implementation details**
|
||||
|
||||
In some cases, you can rely on the order of execution. This applies to cases when `SELECT` comes from a subquery that uses `ORDER BY`.
|
||||
In some cases, you can rely on the order of execution.
|
||||
This applies to cases when `SELECT` comes from a subquery that uses `ORDER BY`.
|
||||
|
||||
When a `SELECT` query has the `GROUP BY` clause or at least one aggregate function, ClickHouse (in contrast to MySQL) requires that all expressions in the `SELECT`, `HAVING`, and `ORDER BY` clauses be calculated from keys or from aggregate functions. In other words, each column selected from the table must be used either in keys or inside aggregate functions. To get behavior like in MySQL, you can put the other columns in the `any` aggregate function.
|
||||
When a `SELECT` query has the `GROUP BY` clause or at least one aggregate function, ClickHouse (in contrast to MySQL) requires that all expressions in the `SELECT`, `HAVING`, and `ORDER BY` clauses be calculated from keys or from aggregate functions.
|
||||
In other words, each column selected from the table must be used either in keys or inside aggregate functions.
|
||||
To get behavior like in MySQL, you can put the other columns in the `any` aggregate function.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
CREATE TABLE any_nulls (city Nullable(String)) ENGINE=Log;
|
||||
CREATE TABLE tab (city Nullable(String)) ENGINE=Memory;
|
||||
|
||||
INSERT INTO any_nulls (city) VALUES (NULL), ('Amsterdam'), ('New York'), ('Tokyo'), ('Valencia'), (NULL);
|
||||
INSERT INTO tab (city) VALUES (NULL), ('Amsterdam'), ('New York'), ('Tokyo'), ('Valencia'), (NULL);
|
||||
|
||||
SELECT any(city) FROM any_nulls;
|
||||
SELECT any(city), anyRespectNulls(city) FROM tab;
|
||||
```
|
||||
|
||||
```response
|
||||
┌─any(city)─┐
|
||||
│ Amsterdam │
|
||||
└───────────┘
|
||||
┌─any(city)─┬─anyRespectNulls(city)─┐
|
||||
│ Amsterdam │ ᴺᵁᴸᴸ │
|
||||
└───────────┴───────────────────────┘
|
||||
```
|
||||
|
@ -5,7 +5,15 @@ sidebar_position: 105
|
||||
|
||||
# anyLast
|
||||
|
||||
Selects the last value encountered, ignoring any `NULL` values by default. The result is just as indeterminate as for the [any](../../../sql-reference/aggregate-functions/reference/any.md) function.
|
||||
Selects the last encountered value of a column.
|
||||
|
||||
:::warning
|
||||
As a query can be executed in arbitrary order, the result of this function is non-deterministic.
|
||||
If you need an arbitrary but deterministic result, use functions [`min`](../reference/min.md) or [`max`](../reference/max.md).
|
||||
:::
|
||||
|
||||
By default, the function never returns NULL, i.e. ignores NULL values in the input column.
|
||||
However, if the function is used with the `RESPECT NULLS` modifier, it returns the last value it reads, regardless of whether it is `NULL` or not.
|
||||
|
||||
**Syntax**
|
||||
|
||||
@ -13,12 +21,15 @@ Selects the last value encountered, ignoring any `NULL` values by default. The r
|
||||
anyLast(column) [RESPECT NULLS]
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
- `column`: The column name.
|
||||
Alias for `anyLast(column)` (without `RESPECT NULLS`):
|
||||
- [`last_value`](../reference/last_value.md).
|
||||
|
||||
:::note
|
||||
Supports the `RESPECT NULLS` modifier after the function name. Using this modifier will ensure the function selects the last value passed, regardless of whether it is `NULL` or not.
|
||||
:::
|
||||
Aliases for `anyLast(column) RESPECT NULLS`:
|
||||
- `anyLastRespectNulls`, `anyLast_respect_nulls`
|
||||
- `lastValueRespectNulls`, `last_value_respect_nulls`
|
||||
|
||||
**Parameters**
|
||||
- `column`: The column name.
|
||||
|
||||
**Returned value**
|
||||
|
||||
@ -29,15 +40,15 @@ Supports the `RESPECT NULLS` modifier after the function name. Using this modifi
|
||||
Query:
|
||||
|
||||
```sql
|
||||
CREATE TABLE any_last_nulls (city Nullable(String)) ENGINE=Log;
|
||||
CREATE TABLE tab (city Nullable(String)) ENGINE=Memory;
|
||||
|
||||
INSERT INTO any_last_nulls (city) VALUES ('Amsterdam'),(NULL),('New York'),('Tokyo'),('Valencia'),(NULL);
|
||||
INSERT INTO tab (city) VALUES ('Amsterdam'),(NULL),('New York'),('Tokyo'),('Valencia'),(NULL);
|
||||
|
||||
SELECT anyLast(city) FROM any_last_nulls;
|
||||
SELECT anyLast(city), anyLastRespectNulls(city) FROM tab;
|
||||
```
|
||||
|
||||
```response
|
||||
┌─anyLast(city)─┐
|
||||
│ Valencia │
|
||||
└───────────────┘
|
||||
┌─anyLast(city)─┬─anyLastRespectNulls(city)─┐
|
||||
│ Valencia │ ᴺᵁᴸᴸ │
|
||||
└───────────────┴───────────────────────────┘
|
||||
```
|
||||
|
@ -4489,9 +4489,9 @@ Using replacement fields, you can define a pattern for the resulting string.
|
||||
| k | clockhour of day (1~24) | number | 24 |
|
||||
| m | minute of hour | number | 30 |
|
||||
| s | second of minute | number | 55 |
|
||||
| S | fraction of second (not supported yet) | number | 978 |
|
||||
| z | time zone (short name not supported yet) | text | Pacific Standard Time; PST |
|
||||
| Z | time zone offset/id (not supported yet) | zone | -0800; -08:00; America/Los_Angeles |
|
||||
| S | fraction of second | number | 978 |
|
||||
| z | time zone | text | Eastern Standard Time; EST |
|
||||
| Z | time zone offset | zone | -0800; -0812 |
|
||||
| ' | escape for text | delimiter | |
|
||||
| '' | single quote | literal | ' |
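A hedged sketch of the newly supported fields (`S`, `z`, `Z`), assuming this table documents `formatDateTimeInJodaSyntax`:

```sql
-- 'SSS' prints the fraction of second, 'zzz' a short time zone name, 'Z' the offset.
SELECT formatDateTimeInJodaSyntax(
    toDateTime64('2024-11-21 07:01:59.978', 3, 'America/New_York'),
    'yyyy-MM-dd HH:mm:ss.SSS zzz Z');
-- e.g. 2024-11-21 07:01:59.978 EST -0500
```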
|
||||
|
||||
|
@ -6791,7 +6791,7 @@ parseDateTime(str[, format[, timezone]])
|
||||
|
||||
**Returned value(s)**
|
||||
|
||||
Returns DateTime values parsed from input string according to a MySQL style format string.
|
||||
Return a [DateTime](../data-types/datetime.md) value parsed from the input string according to a MySQL-style format string.
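For example (a short sketch using the MySQL-style specifiers listed below):

```sql
SELECT parseDateTime('2024-11-21 07:01:59', '%Y-%m-%d %H:%i:%s', 'UTC');
-- Returns a DateTime value: 2024-11-21 07:01:59
```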
|
||||
|
||||
**Supported format specifiers**
|
||||
|
||||
@ -6840,7 +6840,7 @@ parseDateTimeInJodaSyntax(str[, format[, timezone]])
|
||||
|
||||
**Returned value(s)**
|
||||
|
||||
Returns DateTime values parsed from input string according to a Joda style format.
|
||||
Return a [DateTime](../data-types/datetime.md) value parsed from the input string according to a Joda-style format string.
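For example (a short sketch using Joda-style placeholders):

```sql
SELECT parseDateTimeInJodaSyntax('2024-11-21 07:01:59', 'yyyy-MM-dd HH:mm:ss', 'UTC');
-- Returns a DateTime value: 2024-11-21 07:01:59
```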
|
||||
|
||||
**Supported format specifiers**
|
||||
|
||||
@ -6867,9 +6867,55 @@ Same as for [parseDateTimeInJodaSyntax](#parsedatetimeinjodasyntax) except that
|
||||
|
||||
Same as for [parseDateTimeInJodaSyntax](#parsedatetimeinjodasyntax) except that it returns `NULL` when it encounters a date format that cannot be processed.
|
||||
|
||||
## parseDateTime64
|
||||
|
||||
Converts a [String](../data-types/string.md) to [DateTime64](../data-types/datetime64.md) according to a [MySQL format string](https://dev.mysql.com/doc/refman/8.0/en/date-and-time-functions.html#function_date-format).
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
parseDateTime64(str[, format[, timezone]])
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `str` — The String to be parsed.
|
||||
- `format` — The format string. Optional. `%Y-%m-%d %H:%i:%s.%f` if not specified.
|
||||
- `timezone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). Optional.
|
||||
|
||||
**Returned value(s)**
|
||||
|
||||
Return a [DateTime64](../data-types/datetime64.md) value parsed from the input string according to a MySQL-style format string.
|
||||
The precision of the returned value is 6.
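A short sketch of the new function; the first call relies on the default format shown above:

```sql
SELECT parseDateTime64('2024-11-21 07:01:59.123456');  -- DateTime64(6)

-- With an explicit format string and time zone:
SELECT parseDateTime64('21/11/2024 07:01:59.123456', '%d/%m/%Y %H:%i:%s.%f', 'UTC');
```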
|
||||
|
||||
## parseDateTime64OrZero
|
||||
|
||||
Same as for [parseDateTime64](#parsedatetime64) except that it returns zero date when it encounters a date format that cannot be processed.
|
||||
|
||||
## parseDateTime64OrNull
|
||||
|
||||
Same as for [parseDateTime64](#parsedatetime64) except that it returns `NULL` when it encounters a date format that cannot be processed.
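A sketch of how the two fallback variants behave on unparsable input:

```sql
SELECT
    parseDateTime64OrZero('not a datetime'),  -- zero date: 1970-01-01 00:00:00.000000
    parseDateTime64OrNull('not a datetime');  -- NULL
```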
|
||||
|
||||
## parseDateTime64InJodaSyntax
|
||||
|
||||
Similar to [parseDateTimeInJodaSyntax](#parsedatetimeinjodasyntax). Differently, it returns a value of type [DateTime64](../data-types/datetime64.md).
|
||||
Converts a [String](../data-types/string.md) to [DateTime64](../data-types/datetime64.md) according to a [Joda format string](https://joda-time.sourceforge.net/apidocs/org/joda/time/format/DateTimeFormat.html).
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
parseDateTime64InJodaSyntax(str[, format[, timezone]])
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `str` — The String to be parsed.
|
||||
- `format` — The format string. Optional. `yyyy-MM-dd HH:mm:ss` if not specified.
|
||||
- `timezone` — [Timezone](/docs/en/operations/server-configuration-parameters/settings.md#timezone). Optional.
|
||||
|
||||
**Returned value(s)**
|
||||
|
||||
Return a [DateTime64](../data-types/datetime64.md) value parsed from the input string according to a Joda-style format string.
|
||||
The precision of the returned value is equal to the number of `S` placeholders in the format string (but at most 6).
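For example, three `S` placeholders in the format string yield a result with precision 3 (a sketch):

```sql
SELECT parseDateTime64InJodaSyntax('2024-11-21 07:01:59.123', 'yyyy-MM-dd HH:mm:ss.SSS', 'UTC');
-- Returns a DateTime64(3) value.
```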
|
||||
|
||||
## parseDateTime64InJodaSyntaxOrZero
|
||||
|
||||
|
@ -161,6 +161,8 @@ Settings:
|
||||
- `actions` — Prints detailed information about step actions. Default: 0.
|
||||
- `json` — Prints query plan steps as a row in [JSON](../../interfaces/formats.md#json) format. Default: 0. It is recommended to use [TSVRaw](../../interfaces/formats.md#tabseparatedraw) format to avoid unnecessary escaping.
|
||||
|
||||
When `json=1`, step names will contain an additional suffix with a unique step identifier.
|
||||
|
||||
Example:
|
||||
|
||||
```sql
|
||||
@ -194,30 +196,25 @@ EXPLAIN json = 1, description = 0 SELECT 1 UNION ALL SELECT 2 FORMAT TSVRaw;
|
||||
{
|
||||
"Plan": {
|
||||
"Node Type": "Union",
|
||||
"Node Id": "Union_10",
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "Expression",
|
||||
"Node Id": "Expression_13",
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "SettingQuotaAndLimits",
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "ReadFromStorage"
|
||||
}
|
||||
]
|
||||
"Node Type": "ReadFromStorage",
|
||||
"Node Id": "ReadFromStorage_0"
|
||||
}
|
||||
]
|
||||
},
|
||||
{
|
||||
"Node Type": "Expression",
|
||||
"Node Id": "Expression_16",
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "SettingQuotaAndLimits",
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "ReadFromStorage"
|
||||
}
|
||||
]
|
||||
"Node Type": "ReadFromStorage",
|
||||
"Node Id": "ReadFromStorage_4"
|
||||
}
|
||||
]
|
||||
}
|
||||
@ -249,6 +246,7 @@ EXPLAIN json = 1, description = 0, header = 1 SELECT 1, 2 + dummy;
|
||||
{
|
||||
"Plan": {
|
||||
"Node Type": "Expression",
|
||||
"Node Id": "Expression_5",
|
||||
"Header": [
|
||||
{
|
||||
"Name": "1",
|
||||
@ -261,23 +259,13 @@ EXPLAIN json = 1, description = 0, header = 1 SELECT 1, 2 + dummy;
|
||||
],
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "SettingQuotaAndLimits",
|
||||
"Node Type": "ReadFromStorage",
|
||||
"Node Id": "ReadFromStorage_0",
|
||||
"Header": [
|
||||
{
|
||||
"Name": "dummy",
|
||||
"Type": "UInt8"
|
||||
}
|
||||
],
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "ReadFromStorage",
|
||||
"Header": [
|
||||
{
|
||||
"Name": "dummy",
|
||||
"Type": "UInt8"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
@ -351,17 +339,31 @@ EXPLAIN json = 1, actions = 1, description = 0 SELECT 1 FORMAT TSVRaw;
|
||||
{
|
||||
"Plan": {
|
||||
"Node Type": "Expression",
|
||||
"Node Id": "Expression_5",
|
||||
"Expression": {
|
||||
"Inputs": [],
|
||||
"Inputs": [
|
||||
{
|
||||
"Name": "dummy",
|
||||
"Type": "UInt8"
|
||||
}
|
||||
],
|
||||
"Actions": [
|
||||
{
|
||||
"Node Type": "Column",
|
||||
"Node Type": "INPUT",
|
||||
"Result Type": "UInt8",
|
||||
"Result Type": "Column",
|
||||
"Result Name": "dummy",
|
||||
"Arguments": [0],
|
||||
"Removed Arguments": [0],
|
||||
"Result": 0
|
||||
},
|
||||
{
|
||||
"Node Type": "COLUMN",
|
||||
"Result Type": "UInt8",
|
||||
"Result Name": "1",
|
||||
"Column": "Const(UInt8)",
|
||||
"Arguments": [],
|
||||
"Removed Arguments": [],
|
||||
"Result": 0
|
||||
"Result": 1
|
||||
}
|
||||
],
|
||||
"Outputs": [
|
||||
@ -370,17 +372,12 @@ EXPLAIN json = 1, actions = 1, description = 0 SELECT 1 FORMAT TSVRaw;
|
||||
"Type": "UInt8"
|
||||
}
|
||||
],
|
||||
"Positions": [0],
|
||||
"Project Input": true
|
||||
"Positions": [1]
|
||||
},
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "SettingQuotaAndLimits",
|
||||
"Plans": [
|
||||
{
|
||||
"Node Type": "ReadFromStorage"
|
||||
}
|
||||
]
|
||||
"Node Type": "ReadFromStorage",
|
||||
"Node Id": "ReadFromStorage_0"
|
||||
}
|
||||
]
|
||||
}
|
||||
@ -396,6 +393,8 @@ Settings:
|
||||
- `graph` — Prints a graph described in the [DOT](https://en.wikipedia.org/wiki/DOT_(graph_description_language)) graph description language. Default: 0.
|
||||
- `compact` — Prints graph in compact mode if `graph` setting is enabled. Default: 1.
|
||||
|
||||
When `compact=0` and `graph=1`, processor names will contain an additional suffix with a unique processor identifier.
|
||||
|
||||
Example:
|
||||
|
||||
```sql
|
||||
|
@ -5,9 +5,14 @@ sidebar_label: EXCEPT
|
||||
|
||||
# EXCEPT Clause
|
||||
|
||||
The `EXCEPT` clause returns only those rows that result from the first query without the second. The queries must match the number of columns, order, and type. The result of `EXCEPT` can contain duplicate rows.
|
||||
The `EXCEPT` clause returns only those rows that result from the first query without the second.
|
||||
|
||||
Multiple `EXCEPT` statements are executed left to right if parenthesis are not specified. The `EXCEPT` operator has the same priority as the `UNION` clause and lower priority than the `INTERSECT` clause.
|
||||
- Both queries must have the same number of columns in the same order and data type.
|
||||
- The result of `EXCEPT` can contain duplicate rows. Use `EXCEPT DISTINCT` if this is not desirable.
|
||||
- Multiple `EXCEPT` statements are executed from left to right if parentheses are not specified.
|
||||
- The `EXCEPT` operator has the same priority as the `UNION` clause and lower priority than the `INTERSECT` clause.
|
||||
|
||||
## Syntax
|
||||
|
||||
``` sql
|
||||
SELECT column1 [, column2 ]
|
||||
@ -19,18 +24,33 @@ EXCEPT
|
||||
SELECT column1 [, column2 ]
|
||||
FROM table2
|
||||
[WHERE condition]
|
||||
|
||||
```
|
||||
The condition could be any expression based on your requirements.
|
||||
The condition could be any expression based on your requirements.
|
||||
|
||||
Additionally, `EXCEPT()` can be used to exclude columns from the result of a query on the same table, as is possible in BigQuery (Google Cloud), using the following syntax:
|
||||
|
||||
```sql
|
||||
SELECT column1 [, column2 ] EXCEPT (column3 [, column4])
|
||||
FROM table1
|
||||
[WHERE condition]
|
||||
```
|
||||
|
||||
## Examples
|
||||
|
||||
The examples in this section demonstrate usage of the `EXCEPT` clause.
|
||||
|
||||
### Filtering Numbers Using the `EXCEPT` Clause
|
||||
|
||||
Here is a simple example that returns the numbers 1 to 10 that are _not_ a part of the numbers 3 to 8:
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT number FROM numbers(1,10) EXCEPT SELECT number FROM numbers(3,6);
|
||||
SELECT number
|
||||
FROM numbers(1, 10)
|
||||
EXCEPT
|
||||
SELECT number
|
||||
FROM numbers(3, 6)
|
||||
```
|
||||
|
||||
Result:
|
||||
@ -44,7 +64,53 @@ Result:
|
||||
└────────┘
|
||||
```
|
||||
|
||||
`EXCEPT` and `INTERSECT` can often be used interchangeably with different Boolean logic, and they are both useful if you have two tables that share a common column (or columns). For example, suppose we have a few million rows of historical cryptocurrency data that contains trade prices and volume:
|
||||
### Excluding Specific Columns Using `EXCEPT()`
|
||||
|
||||
`EXCEPT()` can be used to quickly exclude columns from a result. For instance, we may want to select all columns from a table except a few, as shown in the example below:
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SHOW COLUMNS IN system.settings
|
||||
|
||||
SELECT * EXCEPT (default, alias_for, readonly, description)
|
||||
FROM system.settings
|
||||
LIMIT 5
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
┌─field───────┬─type─────────────────────────────────────────────────────────────────────┬─null─┬─key─┬─default─┬─extra─┐
|
||||
1. │ alias_for │ String │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
2. │ changed │ UInt8 │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
3. │ default │ String │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
4. │ description │ String │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
5. │ is_obsolete │ UInt8 │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
6. │ max │ Nullable(String) │ YES │ │ ᴺᵁᴸᴸ │ │
|
||||
7. │ min │ Nullable(String) │ YES │ │ ᴺᵁᴸᴸ │ │
|
||||
8. │ name │ String │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
9. │ readonly │ UInt8 │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
10. │ tier │ Enum8('Production' = 0, 'Obsolete' = 4, 'Experimental' = 8, 'Beta' = 12) │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
11. │ type │ String │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
12. │ value │ String │ NO │ │ ᴺᵁᴸᴸ │ │
|
||||
└─────────────┴──────────────────────────────────────────────────────────────────────────┴──────┴─────┴─────────┴───────┘
|
||||
|
||||
┌─name────────────────────┬─value──────┬─changed─┬─min──┬─max──┬─type────┬─is_obsolete─┬─tier───────┐
|
||||
1. │ dialect │ clickhouse │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ Dialect │ 0 │ Production │
|
||||
2. │ min_compress_block_size │ 65536 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
|
||||
3. │ max_compress_block_size │ 1048576 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
|
||||
4. │ max_block_size │ 65409 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
|
||||
5. │ max_insert_block_size │ 1048449 │ 0 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ UInt64 │ 0 │ Production │
|
||||
└─────────────────────────┴────────────┴─────────┴──────┴──────┴─────────┴─────────────┴────────────┘
|
||||
```
|
||||
|
||||
### Using `EXCEPT` and `INTERSECT` with Cryptocurrency Data
|
||||
|
||||
`EXCEPT` and `INTERSECT` can often be used interchangeably with different Boolean logic, and they are both useful if you have two tables that share a common column (or columns).
|
||||
For example, suppose we have a few million rows of historical cryptocurrency data that contains trade prices and volume:
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
CREATE TABLE crypto_prices
|
||||
@ -72,6 +138,8 @@ ORDER BY trade_date DESC
|
||||
LIMIT 10;
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
┌─trade_date─┬─crypto_name─┬──────volume─┬────price─┬───market_cap─┬──change_1_day─┐
|
||||
│ 2020-11-02 │ Bitcoin │ 30771456000 │ 13550.49 │ 251119860000 │ -0.013585099 │
|
||||
@ -127,7 +195,7 @@ Result:
|
||||
|
||||
This means that, of the four cryptocurrencies we own, only Bitcoin has never dropped below $10 (based on the limited data we have here in this example).
|
||||
|
||||
## EXCEPT DISTINCT
|
||||
### Using `EXCEPT DISTINCT`
|
||||
|
||||
Notice in the previous query we had multiple Bitcoin holdings in the result. You can add `DISTINCT` to `EXCEPT` to eliminate duplicate rows from the result:
|
||||
|
||||
@ -146,7 +214,6 @@ Result:
|
||||
└─────────────┘
|
||||
```
|
||||
|
||||
|
||||
**See Also**
|
||||
|
||||
- [UNION](union.md#union-clause)
|
||||
|
@ -15,7 +15,7 @@ first_value (column_name) [[RESPECT NULLS] | [IGNORE NULLS]]
|
||||
OVER ([[PARTITION BY grouping_column] [ORDER BY sorting_column]
|
||||
[ROWS or RANGE expression_to_bound_rows_withing_the_group]] | [window_name])
|
||||
FROM table_name
|
||||
WINDOW window_name as ([[PARTITION BY grouping_column] [ORDER BY sorting_column])
|
||||
WINDOW window_name as ([PARTITION BY grouping_column] [ORDER BY sorting_column])
|
||||
```
|
||||
|
||||
Alias: `any`.
|
||||
@ -23,6 +23,8 @@ Alias: `any`.
|
||||
:::note
|
||||
Using the optional modifier `RESPECT NULLS` after `first_value(column_name)` will ensure that `NULL` arguments are not skipped.
|
||||
See [NULL processing](../aggregate-functions/index.md/#null-processing) for more information.
|
||||
|
||||
Alias: `firstValueRespectNulls`
|
||||
:::
|
||||
|
||||
For more detail on window function syntax see: [Window Functions - Syntax](./index.md/#syntax).
|
||||
@ -48,7 +50,7 @@ CREATE TABLE salaries
|
||||
)
|
||||
Engine = Memory;
|
||||
|
||||
INSERT INTO salaries FORMAT Values
|
||||
INSERT INTO salaries FORMAT VALUES
|
||||
('Port Elizabeth Barbarians', 'Gary Chen', 196000, 'F'),
|
||||
('New Coreystad Archdukes', 'Charles Juarez', 190000, 'F'),
|
||||
('Port Elizabeth Barbarians', 'Michael Stanley', 100000, 'D'),
|
||||
|
@ -23,6 +23,8 @@ Alias: `anyLast`.
|
||||
:::note
|
||||
Using the optional modifier `RESPECT NULLS` after `last_value(column_name)` will ensure that `NULL` arguments are not skipped.
|
||||
See [NULL processing](../aggregate-functions/index.md/#null-processing) for more information.
|
||||
|
||||
Alias: `lastValueRespectNulls`
|
||||
:::
|
||||
|
||||
For more detail on window function syntax see: [Window Functions - Syntax](./index.md/#syntax).
|
||||
@ -33,7 +35,7 @@ For more detail on window function syntax see: [Window Functions - Syntax](./ind
|
||||
|
||||
**Example**
|
||||
|
||||
In this example the `last_value` function is used to find the highest paid footballer from a fictional dataset of salaries of Premier League football players.
|
||||
In this example the `last_value` function is used to find the lowest paid footballer from a fictional dataset of salaries of Premier League football players.
|
||||
|
||||
Query:
|
||||
|
||||
@ -48,7 +50,7 @@ CREATE TABLE salaries
|
||||
)
|
||||
Engine = Memory;
|
||||
|
||||
INSERT INTO salaries FORMAT Values
|
||||
INSERT INTO salaries FORMAT VALUES
|
||||
('Port Elizabeth Barbarians', 'Gary Chen', 196000, 'F'),
|
||||
('New Coreystad Archdukes', 'Charles Juarez', 190000, 'F'),
|
||||
('Port Elizabeth Barbarians', 'Michael Stanley', 100000, 'D'),
|
||||
|
@ -154,7 +154,7 @@ sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh"
|
||||
|
||||
### From a Docker Image {#from-docker-image}
|
||||
|
||||
To run ClickHouse in Docker, follow the instructions on [Docker Hub](https://hub.docker.com/r/clickhouse/clickhouse-server/). The images use the official `deb` packages.
|
||||
To run ClickHouse in Docker, follow the instructions on [Docker Hub](https://hub.docker.com/_/clickhouse). The images use the official `deb` packages.
|
||||
|
||||
### From a Single Binary {#from-single-binary}
|
||||
|
||||
|
@ -136,7 +136,7 @@ ClickHouse применяет настройку в тех случаях, ко
|
||||
- 0 — disabled.
|
||||
- 1 — enabled.
|
||||
|
||||
Default value: 0.
|
||||
Default value: 1.
|
||||
|
||||
## http_zlib_compression_level {#settings-http_zlib_compression_level}
|
||||
|
||||
|
@ -132,7 +132,7 @@ sudo "clickhouse-client-$LATEST_VERSION/install/doinst.sh"
|
||||
|
||||
### Docker Package {#from-docker-image}
|
||||
|
||||
To run ClickHouse in Docker, follow the guide on [Docker Hub](https://hub.docker.com/r/clickhouse/clickhouse-server/). These are the official `deb` packages.
|
||||
To run ClickHouse in Docker, follow the guide on [Docker Hub](https://hub.docker.com/_/clickhouse). These are the official `deb` packages.
|
||||
|
||||
### Packages for Other Environments {#from-other}
|
||||
|
||||
|
@ -97,7 +97,7 @@ ClickHouse从表的过时副本中选择最相关的副本。
|
||||
- 0 — Disabled.
|
||||
- 1 — Enabled.
|
||||
|
||||
Default value: 0.
|
||||
Default value: 1.
|
||||
|
||||
## http_zlib_compression_level {#settings-http_zlib_compression_level}
|
||||
|
||||
|
@ -221,11 +221,16 @@ void registerAggregateFunctionsAnyRespectNulls(AggregateFunctionFactory & factor
|
||||
= {.returns_default_when_only_null = false, .is_order_dependent = true, .is_window_function = true};
|
||||
|
||||
factory.registerFunction("any_respect_nulls", {createAggregateFunctionAnyRespectNulls, default_properties_for_respect_nulls});
|
||||
factory.registerAlias("any_value_respect_nulls", "any_respect_nulls", AggregateFunctionFactory::Case::Insensitive);
|
||||
factory.registerAlias("anyRespectNulls", "any_respect_nulls", AggregateFunctionFactory::Case::Sensitive);
|
||||
factory.registerAlias("first_value_respect_nulls", "any_respect_nulls", AggregateFunctionFactory::Case::Insensitive);
|
||||
factory.registerAlias("firstValueRespectNulls", "any_respect_nulls", AggregateFunctionFactory::Case::Sensitive);
|
||||
factory.registerAlias("any_value_respect_nulls", "any_respect_nulls", AggregateFunctionFactory::Case::Insensitive);
|
||||
factory.registerAlias("anyValueRespectNulls", "any_respect_nulls", AggregateFunctionFactory::Case::Sensitive);
|
||||
|
||||
factory.registerFunction("anyLast_respect_nulls", {createAggregateFunctionAnyLastRespectNulls, default_properties_for_respect_nulls});
|
||||
factory.registerAlias("anyLastRespectNulls", "anyLast_respect_nulls", AggregateFunctionFactory::Case::Sensitive);
|
||||
factory.registerAlias("last_value_respect_nulls", "anyLast_respect_nulls", AggregateFunctionFactory::Case::Insensitive);
|
||||
factory.registerAlias("lastValueRespectNulls", "anyLast_respect_nulls", AggregateFunctionFactory::Case::Sensitive);
|
||||
|
||||
/// Must happen after registering any and anyLast
|
||||
factory.registerNullsActionTransformation("any", "any_respect_nulls");
|
||||
|
@ -22,6 +22,13 @@ namespace ErrorCodes
|
||||
namespace
|
||||
{
|
||||
|
||||
/** Due to a lack of proper code review, this code was contributed with a multiplication of template instantiations
|
||||
* over all pairs of data types, and we deeply regret that.
|
||||
*
|
||||
* We cannot remove all combinations, because the binary representation of serialized data has to remain the same,
|
||||
* but we can partially heal the wound by treating unsigned and signed data types in the same way.
|
||||
*/
|
||||
|
||||
template <typename ValueType, typename TimestampType>
|
||||
struct AggregationFunctionDeltaSumTimestampData
|
||||
{
|
||||
@ -37,23 +44,22 @@ template <typename ValueType, typename TimestampType>
|
||||
class AggregationFunctionDeltaSumTimestamp final
|
||||
: public IAggregateFunctionDataHelper<
|
||||
AggregationFunctionDeltaSumTimestampData<ValueType, TimestampType>,
|
||||
AggregationFunctionDeltaSumTimestamp<ValueType, TimestampType>
|
||||
>
|
||||
AggregationFunctionDeltaSumTimestamp<ValueType, TimestampType>>
|
||||
{
|
||||
public:
|
||||
AggregationFunctionDeltaSumTimestamp(const DataTypes & arguments, const Array & params)
|
||||
: IAggregateFunctionDataHelper<
|
||||
AggregationFunctionDeltaSumTimestampData<ValueType, TimestampType>,
|
||||
AggregationFunctionDeltaSumTimestamp<ValueType, TimestampType>
|
||||
>{arguments, params, createResultType()}
|
||||
{}
|
||||
AggregationFunctionDeltaSumTimestamp<ValueType, TimestampType>>{arguments, params, createResultType()}
|
||||
{
|
||||
}
|
||||
|
||||
AggregationFunctionDeltaSumTimestamp()
|
||||
: IAggregateFunctionDataHelper<
|
||||
AggregationFunctionDeltaSumTimestampData<ValueType, TimestampType>,
|
||||
AggregationFunctionDeltaSumTimestamp<ValueType, TimestampType>
|
||||
>{}
|
||||
{}
|
||||
AggregationFunctionDeltaSumTimestamp<ValueType, TimestampType>>{}
|
||||
{
|
||||
}
|
||||
|
||||
bool allocatesMemoryInArena() const override { return false; }
|
||||
|
||||
@ -63,8 +69,8 @@ public:
|
||||
|
||||
void NO_SANITIZE_UNDEFINED ALWAYS_INLINE add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
|
||||
{
|
||||
auto value = assert_cast<const ColumnVector<ValueType> &>(*columns[0]).getData()[row_num];
|
||||
auto ts = assert_cast<const ColumnVector<TimestampType> &>(*columns[1]).getData()[row_num];
|
||||
auto value = unalignedLoad<ValueType>(columns[0]->getRawData().data() + row_num * sizeof(ValueType));
|
||||
auto ts = unalignedLoad<TimestampType>(columns[1]->getRawData().data() + row_num * sizeof(TimestampType));
|
||||
|
||||
auto & data = this->data(place);
|
||||
|
||||
@ -172,10 +178,48 @@ public:
|
||||
|
||||
void insertResultInto(AggregateDataPtr __restrict place, IColumn & to, Arena *) const override
|
||||
{
|
||||
assert_cast<ColumnVector<ValueType> &>(to).getData().push_back(this->data(place).sum);
|
||||
static_cast<ColumnFixedSizeHelper &>(to).template insertRawData<sizeof(ValueType)>(
|
||||
reinterpret_cast<const char *>(&this->data(place).sum));
|
||||
}
|
||||
};
|
||||
|
||||
|
||||
template <typename FirstType, template <typename, typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
IAggregateFunction * createWithTwoTypesSecond(const IDataType & second_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(second_type);
|
||||
|
||||
if (which.idx == TypeIndex::UInt32) return new AggregateFunctionTemplate<FirstType, UInt32>(args...);
|
||||
if (which.idx == TypeIndex::UInt64) return new AggregateFunctionTemplate<FirstType, UInt64>(args...);
|
||||
if (which.idx == TypeIndex::Int32) return new AggregateFunctionTemplate<FirstType, UInt32>(args...);
|
||||
if (which.idx == TypeIndex::Int64) return new AggregateFunctionTemplate<FirstType, UInt64>(args...);
|
||||
if (which.idx == TypeIndex::Float32) return new AggregateFunctionTemplate<FirstType, Float32>(args...);
|
||||
if (which.idx == TypeIndex::Float64) return new AggregateFunctionTemplate<FirstType, Float64>(args...);
|
||||
if (which.idx == TypeIndex::Date) return new AggregateFunctionTemplate<FirstType, UInt16>(args...);
|
||||
if (which.idx == TypeIndex::DateTime) return new AggregateFunctionTemplate<FirstType, UInt32>(args...);
|
||||
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <template <typename, typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
IAggregateFunction * createWithTwoTypes(const IDataType & first_type, const IDataType & second_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(first_type);
|
||||
|
||||
if (which.idx == TypeIndex::UInt8) return createWithTwoTypesSecond<UInt8, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::UInt16) return createWithTwoTypesSecond<UInt16, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::UInt32) return createWithTwoTypesSecond<UInt32, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::UInt64) return createWithTwoTypesSecond<UInt64, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Int8) return createWithTwoTypesSecond<UInt8, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Int16) return createWithTwoTypesSecond<UInt16, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Int32) return createWithTwoTypesSecond<UInt32, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Int64) return createWithTwoTypesSecond<UInt64, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Float32) return createWithTwoTypesSecond<Float32, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Float64) return createWithTwoTypesSecond<Float64, AggregateFunctionTemplate>(second_type, args...);
|
||||
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
AggregateFunctionPtr createAggregateFunctionDeltaSumTimestamp(
|
||||
const String & name,
|
||||
const DataTypes & arguments,
|
||||
@ -193,8 +237,14 @@ AggregateFunctionPtr createAggregateFunctionDeltaSumTimestamp(
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument for aggregate function {}, "
|
||||
"must be Int, Float, Date, DateTime", arguments[1]->getName(), name);
|
||||
|
||||
return AggregateFunctionPtr(createWithTwoNumericOrDateTypes<AggregationFunctionDeltaSumTimestamp>(
|
||||
auto res = AggregateFunctionPtr(createWithTwoTypes<AggregationFunctionDeltaSumTimestamp>(
|
||||
*arguments[0], *arguments[1], arguments, params));
|
||||
|
||||
if (!res)
|
||||
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument for aggregate function {}, "
|
||||
"this type is not supported", arguments[0]->getName(), name);
|
||||
|
||||
return res;
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -184,36 +184,8 @@ static IAggregateFunction * createWithDecimalType(const IDataType & argument_typ
|
||||
}
|
||||
|
||||
/** For template with two arguments.
|
||||
* This is an extremely dangerous for code bloat - do not use.
|
||||
*/
|
||||
template <typename FirstType, template <typename, typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
static IAggregateFunction * createWithTwoNumericTypesSecond(const IDataType & second_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(second_type);
|
||||
#define DISPATCH(TYPE) \
|
||||
if (which.idx == TypeIndex::TYPE) return new AggregateFunctionTemplate<FirstType, TYPE>(args...);
|
||||
FOR_NUMERIC_TYPES(DISPATCH)
|
||||
#undef DISPATCH
|
||||
if (which.idx == TypeIndex::Enum8) return new AggregateFunctionTemplate<FirstType, Int8>(args...);
|
||||
if (which.idx == TypeIndex::Enum16) return new AggregateFunctionTemplate<FirstType, Int16>(args...);
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <template <typename, typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
static IAggregateFunction * createWithTwoNumericTypes(const IDataType & first_type, const IDataType & second_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(first_type);
|
||||
#define DISPATCH(TYPE) \
|
||||
if (which.idx == TypeIndex::TYPE) \
|
||||
return createWithTwoNumericTypesSecond<TYPE, AggregateFunctionTemplate>(second_type, args...);
|
||||
FOR_NUMERIC_TYPES(DISPATCH)
|
||||
#undef DISPATCH
|
||||
if (which.idx == TypeIndex::Enum8)
|
||||
return createWithTwoNumericTypesSecond<Int8, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Enum16)
|
||||
return createWithTwoNumericTypesSecond<Int16, AggregateFunctionTemplate>(second_type, args...);
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <typename FirstType, template <typename, typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
static IAggregateFunction * createWithTwoBasicNumericTypesSecond(const IDataType & second_type, TArgs && ... args)
|
||||
{
|
||||
@ -237,46 +209,6 @@ static IAggregateFunction * createWithTwoBasicNumericTypes(const IDataType & fir
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <typename FirstType, template <typename, typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
static IAggregateFunction * createWithTwoNumericOrDateTypesSecond(const IDataType & second_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(second_type);
|
||||
#define DISPATCH(TYPE) \
|
||||
if (which.idx == TypeIndex::TYPE) return new AggregateFunctionTemplate<FirstType, TYPE>(args...);
|
||||
FOR_NUMERIC_TYPES(DISPATCH)
|
||||
#undef DISPATCH
|
||||
if (which.idx == TypeIndex::Enum8) return new AggregateFunctionTemplate<FirstType, Int8>(args...);
|
||||
if (which.idx == TypeIndex::Enum16) return new AggregateFunctionTemplate<FirstType, Int16>(args...);
|
||||
|
||||
/// expects that DataTypeDate based on UInt16, DataTypeDateTime based on UInt32
|
||||
if (which.idx == TypeIndex::Date) return new AggregateFunctionTemplate<FirstType, UInt16>(args...);
|
||||
if (which.idx == TypeIndex::DateTime) return new AggregateFunctionTemplate<FirstType, UInt32>(args...);
|
||||
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <template <typename, typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
static IAggregateFunction * createWithTwoNumericOrDateTypes(const IDataType & first_type, const IDataType & second_type, TArgs && ... args)
|
||||
{
|
||||
WhichDataType which(first_type);
|
||||
#define DISPATCH(TYPE) \
|
||||
if (which.idx == TypeIndex::TYPE) \
|
||||
return createWithTwoNumericOrDateTypesSecond<TYPE, AggregateFunctionTemplate>(second_type, args...);
|
||||
FOR_NUMERIC_TYPES(DISPATCH)
|
||||
#undef DISPATCH
|
||||
if (which.idx == TypeIndex::Enum8)
|
||||
return createWithTwoNumericOrDateTypesSecond<Int8, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::Enum16)
|
||||
return createWithTwoNumericOrDateTypesSecond<Int16, AggregateFunctionTemplate>(second_type, args...);
|
||||
|
||||
/// expects that DataTypeDate based on UInt16, DataTypeDateTime based on UInt32
|
||||
if (which.idx == TypeIndex::Date)
|
||||
return createWithTwoNumericOrDateTypesSecond<UInt16, AggregateFunctionTemplate>(second_type, args...);
|
||||
if (which.idx == TypeIndex::DateTime)
|
||||
return createWithTwoNumericOrDateTypesSecond<UInt32, AggregateFunctionTemplate>(second_type, args...);
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
template <template <typename> class AggregateFunctionTemplate, typename... TArgs>
|
||||
static IAggregateFunction * createWithStringType(const IDataType & argument_type, TArgs && ... args)
|
||||
{
|
||||
|
@ -685,13 +685,13 @@ void BackupCoordinationStageSync::cancelQueryIfError()
|
||||
|
||||
{
|
||||
std::lock_guard lock{mutex};
|
||||
if (!state.host_with_error)
|
||||
return;
|
||||
|
||||
exception = state.hosts.at(*state.host_with_error).exception;
|
||||
if (state.host_with_error)
|
||||
exception = state.hosts.at(*state.host_with_error).exception;
|
||||
}
|
||||
|
||||
chassert(exception);
|
||||
if (!exception)
|
||||
return;
|
||||
|
||||
process_list_element->cancelQuery(false, exception);
|
||||
state_changed.notify_all();
|
||||
}
|
||||
@ -741,6 +741,11 @@ void BackupCoordinationStageSync::cancelQueryIfDisconnectedTooLong()
|
||||
if (!exception)
|
||||
return;
|
||||
|
||||
/// In this function we only pass the new `exception` (about that the connection was lost) to `process_list_element`.
|
||||
/// We don't try to create the 'error' node here (because this function is called from watchingThread() and
|
||||
/// we don't want the watching thread to try waiting here for retries or a reconnection).
|
||||
/// Also we don't set the `state.host_with_error` field here because `state.host_with_error` can only be set
|
||||
/// AFTER creating the 'error' node (see the comment for `State`).
|
||||
process_list_element->cancelQuery(false, exception);
|
||||
state_changed.notify_all();
|
||||
}
|
||||
@ -870,6 +875,9 @@ bool BackupCoordinationStageSync::checkIfHostsReachStage(const Strings & hosts,
|
||||
continue;
|
||||
}
|
||||
|
||||
if (state.host_with_error)
|
||||
std::rethrow_exception(state.hosts.at(*state.host_with_error).exception);
|
||||
|
||||
if (host_info.finished)
|
||||
throw Exception(ErrorCodes::FAILED_TO_SYNC_BACKUP_OR_RESTORE,
|
||||
"{} finished without coming to stage {}", getHostDesc(host), stage_to_wait);
|
||||
@ -1150,6 +1158,9 @@ bool BackupCoordinationStageSync::checkIfOtherHostsFinish(
|
||||
if ((host == current_host) || host_info.finished)
|
||||
continue;
|
||||
|
||||
if (throw_if_error && state.host_with_error)
|
||||
std::rethrow_exception(state.hosts.at(*state.host_with_error).exception);
|
||||
|
||||
String reason_text = reason.empty() ? "" : (" " + reason);
|
||||
|
||||
String host_status;
|
||||
|
@ -197,6 +197,9 @@ private:
|
||||
};
|
||||
|
||||
/// Information about all the hosts participating in the current BACKUP or RESTORE operation.
|
||||
/// This information is read from ZooKeeper.
|
||||
/// To simplify the programming logic `state` can only be updated AFTER changing corresponding nodes in ZooKeeper
|
||||
/// (for example, first we create the 'error' node, and only after that we set or read from ZK the `state.host_with_error` field).
|
||||
struct State
|
||||
{
|
||||
std::map<String /* host */, HostInfo> hosts; /// std::map because we need to compare states
|
||||
|
@ -52,6 +52,7 @@ private:
|
||||
explicit ColumnVector(const size_t n) : data(n) {}
|
||||
ColumnVector(const size_t n, const ValueType x) : data(n, x) {}
|
||||
ColumnVector(const ColumnVector & src) : data(src.data.begin(), src.data.end()) {}
|
||||
ColumnVector(Container::const_iterator begin, Container::const_iterator end) : data(begin, end) { }
|
||||
|
||||
/// Sugar constructor.
|
||||
ColumnVector(std::initializer_list<T> il) : data{il} {}
|
||||
|
@ -49,6 +49,7 @@
|
||||
M(TemporaryFilesForSort, "Number of temporary files created for external sorting") \
|
||||
M(TemporaryFilesForAggregation, "Number of temporary files created for external aggregation") \
|
||||
M(TemporaryFilesForJoin, "Number of temporary files created for JOIN") \
|
||||
M(TemporaryFilesForMerge, "Number of temporary files for vertical merge") \
|
||||
M(TemporaryFilesUnknown, "Number of temporary files created without known purpose") \
|
||||
M(Read, "Number of read (read, pread, io_getevents, etc.) syscalls in fly") \
|
||||
M(RemoteRead, "Number of read with remote reader in fly") \
|
||||
|
@ -7,7 +7,6 @@
|
||||
#include <condition_variable>
|
||||
#include <mutex>
|
||||
|
||||
#include "config.h"
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
@ -10,6 +10,7 @@
|
||||
#include <fcntl.h>
|
||||
#include <algorithm>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
|
@ -204,6 +204,16 @@ bool ThreadStatus::isQueryCanceled() const
|
||||
return false;
|
||||
}
|
||||
|
||||
size_t ThreadStatus::getNextPlanStepIndex() const
|
||||
{
|
||||
return local_data.plan_step_index->fetch_add(1);
|
||||
}
|
||||
|
||||
size_t ThreadStatus::getNextPipelineProcessorIndex() const
|
||||
{
|
||||
return local_data.pipeline_processor_index->fetch_add(1);
|
||||
}
|
||||
|
||||
ThreadStatus::~ThreadStatus()
|
||||
{
|
||||
flushUntrackedMemory();
|
||||
|
@ -11,6 +11,7 @@
|
||||
|
||||
#include <boost/noncopyable.hpp>
|
||||
|
||||
#include <atomic>
|
||||
#include <functional>
|
||||
#include <memory>
|
||||
#include <mutex>
|
||||
@ -90,6 +91,11 @@ public:
|
||||
String query_for_logs;
|
||||
UInt64 normalized_query_hash = 0;
|
||||
|
||||
// Since processors might be added on the fly within the expand() function, we use atomic_size_t.
|
||||
// These two fields are used for EXPLAIN PLAN / PIPELINE.
|
||||
std::shared_ptr<std::atomic_size_t> plan_step_index = std::make_shared<std::atomic_size_t>(0);
|
||||
std::shared_ptr<std::atomic_size_t> pipeline_processor_index = std::make_shared<std::atomic_size_t>(0);
|
||||
|
||||
QueryIsCanceledPredicate query_is_canceled_predicate = {};
|
||||
};
|
||||
|
||||
@ -313,6 +319,9 @@ public:
|
||||
|
||||
void initGlobalProfiler(UInt64 global_profiler_real_time_period, UInt64 global_profiler_cpu_time_period);
|
||||
|
||||
size_t getNextPlanStepIndex() const;
|
||||
size_t getNextPipelineProcessorIndex() const;
|
||||
|
||||
private:
|
||||
void applyGlobalSettings();
|
||||
void applyQuerySettings();
|
||||
|
@ -150,6 +150,9 @@ Squash blocks passed to the external table to a specified size in bytes, if bloc
|
||||
)", 0) \
|
||||
DECLARE(UInt64, max_joined_block_size_rows, DEFAULT_BLOCK_SIZE, R"(
|
||||
Maximum block size for JOIN result (if join algorithm supports it). 0 means unlimited.
|
||||
)", 0) \
|
||||
DECLARE(UInt64, min_joined_block_size_bytes, 524288, R"(
|
||||
Minimum block size for JOIN result (if join algorithm supports it). 0 means unlimited.
|
||||
)", 0) \
|
||||
DECLARE(UInt64, max_insert_threads, 0, R"(
|
||||
The maximum number of threads to execute the `INSERT SELECT` query.
|
||||
@ -1794,7 +1797,7 @@ Possible values:
|
||||
|
||||
- 0 — Disabled.
|
||||
- 1 — Enabled.
|
||||
)", 0) \
|
||||
)", 1) \
|
||||
DECLARE(Int64, http_zlib_compression_level, 3, R"(
|
||||
Sets the level of data compression in the response to an HTTP request if [enable_http_compression = 1](#enable_http_compression).
|
||||
|
||||
@ -4572,7 +4575,7 @@ Possible values:
|
||||
- 0 - Disable
|
||||
- 1 - Enable
|
||||
)", 0) \
|
||||
DECLARE(Bool, query_plan_merge_filters, false, R"(
|
||||
DECLARE(Bool, query_plan_merge_filters, true, R"(
|
||||
Allow to merge filters in the query plan
|
||||
)", 0) \
|
||||
DECLARE(Bool, query_plan_filter_push_down, true, R"(
|
||||
|
@ -64,6 +64,7 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
|
||||
},
|
||||
{"24.11",
|
||||
{
|
||||
{"enable_http_compression", false, true, "Improvement for read-only clients since they can't change settings"},
|
||||
{"validate_mutation_query", false, true, "New setting to validate mutation queries by default."},
|
||||
{"enable_job_stack_trace", false, true, "Enable by default collecting stack traces from job's scheduling."},
|
||||
{"allow_suspicious_types_in_group_by", true, false, "Don't allow Variant/Dynamic types in GROUP BY by default"},
|
||||
@ -80,6 +81,7 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
|
||||
{"query_plan_join_swap_table", "false", "auto", "New setting. Right table was always chosen before."},
|
||||
{"query_plan_merge_filters", false, true, "Allow to merge filters in the query plan. This is required to properly support filter-push-down with a new analyzer."},
|
||||
{"parallel_replicas_local_plan", false, true, "Use local plan for local replica in a query with parallel replicas"},
|
||||
{"min_joined_block_size_bytes", 524288, 524288, "New setting."},
|
||||
{"allow_experimental_bfloat16_type", false, false, "Add new experimental BFloat16 type"},
|
||||
{"filesystem_cache_skip_download_if_exceeds_per_query_cache_write_limit", 1, 1, "Rename of setting skip_download_if_exceeds_query_cache_limit"},
|
||||
{"filesystem_cache_prefer_bigger_buffer_size", true, true, "New setting"},
|
||||
|
@ -69,7 +69,7 @@ static void testCascadeBufferRedability(
|
||||
auto rbuf = wbuf_readable.tryGetReadBuffer();
|
||||
ASSERT_FALSE(!rbuf);
|
||||
|
||||
concat.appendBuffer(wrapReadBufferPointer(std::move(rbuf)));
|
||||
concat.appendBuffer(std::move(rbuf));
|
||||
}
|
||||
|
||||
std::string decoded_data;
|
||||
|
@ -64,6 +64,7 @@ constexpr time_t MAX_DATETIME_DAY_NUM = 49710; // 2106-02-07
|
||||
/// This factor transformation will say that the function is monotone everywhere.
|
||||
struct ZeroTransform
|
||||
{
|
||||
static constexpr auto name = "Zero";
|
||||
static UInt16 execute(Int64, const DateLUTImpl &) { return 0; }
|
||||
static UInt16 execute(UInt32, const DateLUTImpl &) { return 0; }
|
||||
static UInt16 execute(Int32, const DateLUTImpl &) { return 0; }
|
||||
|
@ -56,6 +56,21 @@ public:
|
||||
: is_not_monotonic;
|
||||
}
|
||||
|
||||
if (checkAndGetDataType<DataTypeDateTime64>(&type))
|
||||
{
|
||||
|
||||
const auto & left_date_time = left.safeGet<DateTime64>();
|
||||
TransformDateTime64<typename Transform::FactorTransform> transformer_left(left_date_time.getScale());
|
||||
|
||||
const auto & right_date_time = right.safeGet<DateTime64>();
|
||||
TransformDateTime64<typename Transform::FactorTransform> transformer_right(right_date_time.getScale());
|
||||
|
||||
return transformer_left.execute(left_date_time.getValue(), date_lut)
|
||||
== transformer_right.execute(right_date_time.getValue(), date_lut)
|
||||
? is_monotonic
|
||||
: is_not_monotonic;
|
||||
}
|
||||
|
||||
return Transform::FactorTransform::execute(UInt32(left.safeGet<UInt64>()), date_lut)
|
||||
== Transform::FactorTransform::execute(UInt32(right.safeGet<UInt64>()), date_lut)
|
||||
? is_monotonic
|
||||
|
@ -32,12 +32,12 @@ namespace Setting
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int ILLEGAL_COLUMN;
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
extern const int BAD_ARGUMENTS;
|
||||
extern const int VALUE_IS_OUT_OF_RANGE_OF_DATA_TYPE;
|
||||
extern const int CANNOT_PARSE_DATETIME;
|
||||
extern const int ILLEGAL_COLUMN;
|
||||
extern const int NOT_ENOUGH_SPACE;
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
extern const int VALUE_IS_OUT_OF_RANGE_OF_DATA_TYPE;
|
||||
}
|
||||
|
||||
namespace
|
||||
@ -57,6 +57,12 @@ namespace
|
||||
Null
|
||||
};
|
||||
|
||||
enum class ReturnType: uint8_t
|
||||
{
|
||||
DateTime,
|
||||
DateTime64
|
||||
};
|
||||
|
||||
constexpr Int32 minYear = 1970;
|
||||
constexpr Int32 maxYear = 2106;
|
||||
|
||||
@ -186,12 +192,13 @@ namespace
|
||||
Int32 minute = 0; /// range [0, 59]
|
||||
Int32 second = 0; /// range [0, 59]
|
||||
Int32 microsecond = 0; /// range [0, 999999]
|
||||
UInt32 scale = 0; /// scale of the result DateTime64. Always 6 for ParseSyntax == MySQL, [0, 6] for ParseSyntax == Joda.
|
||||
|
||||
bool is_am = true; /// If is_hour_of_half_day = true and is_am = false (i.e. pm) then add 12 hours to the result DateTime
|
||||
bool hour_starts_at_1 = false; /// Whether the hour is clockhour
|
||||
bool is_hour_of_half_day = false; /// Whether the hour is of half day
|
||||
|
||||
bool has_time_zone_offset = false; /// If true, time zone offset is explicitly specified.
|
||||
bool has_time_zone_offset = false; /// If true, timezone offset is explicitly specified.
|
||||
Int64 time_zone_offset = 0; /// Offset in seconds between current timezone to UTC.
|
||||
|
||||
void reset()
|
||||
@ -214,6 +221,7 @@ namespace
|
||||
minute = 0;
|
||||
second = 0;
|
||||
microsecond = 0;
|
||||
scale = 0;
|
||||
|
||||
is_am = true;
|
||||
hour_starts_at_1 = false;
|
||||
@ -449,6 +457,18 @@ namespace
|
||||
return {};
|
||||
}
|
||||
|
||||
void setScale(UInt32 scale_, ParseSyntax parse_syntax_)
|
||||
{
|
||||
/// Because the scale argument for parseDateTime*() is constant, always throw an exception (don't allow continuing to the
|
||||
/// next row like in other set* functions)
|
||||
if (parse_syntax_ == ParseSyntax::MySQL && scale_ != 6)
|
||||
throw Exception(ErrorCodes::CANNOT_PARSE_DATETIME, "Precision {} is invalid (must be 6)", scale);
|
||||
else if (parse_syntax_ == ParseSyntax::Joda && scale_ > 6)
|
||||
throw Exception(ErrorCodes::CANNOT_PARSE_DATETIME, "Precision {} is invalid (must be [0, 6])", scale);
|
||||
|
||||
scale = scale_;
|
||||
}
|
||||
|
||||
/// For debug
|
||||
[[maybe_unused]] String toString() const
|
||||
{
|
||||
@ -571,7 +591,7 @@ namespace
|
||||
};
|
||||
|
||||
/// _FUNC_(str[, format, timezone])
|
||||
template <typename Name, ParseSyntax parse_syntax, ErrorHandling error_handling, bool parseDateTime64 = false>
|
||||
template <typename Name, ParseSyntax parse_syntax, ReturnType return_type, ErrorHandling error_handling>
|
||||
class FunctionParseDateTimeImpl : public IFunction
|
||||
{
|
||||
public:
|
||||
@ -591,7 +611,6 @@ namespace
|
||||
|
||||
bool useDefaultImplementationForConstants() const override { return true; }
|
||||
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return false; }
|
||||
|
||||
ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {1, 2}; }
|
||||
bool isVariadic() const override { return true; }
|
||||
size_t getNumberOfArguments() const override { return 0; }
|
||||
@ -601,98 +620,106 @@ namespace
|
||||
FunctionArgumentDescriptors mandatory_args{
|
||||
{"time", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isString), nullptr, "String"}
|
||||
};
|
||||
|
||||
FunctionArgumentDescriptors optional_args{
|
||||
{"format", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isString), nullptr, "String"},
|
||||
{"format", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isString), &isColumnConst, "const String"},
|
||||
{"timezone", static_cast<FunctionArgumentDescriptor::TypeValidator>(&isString), &isColumnConst, "const String"}
|
||||
};
|
||||
|
||||
validateFunctionArguments(*this, arguments, mandatory_args, optional_args);
|
||||
|
||||
String time_zone_name = getTimeZone(arguments).getTimeZone();
|
||||
DataTypePtr date_type = nullptr;
|
||||
if constexpr (parseDateTime64)
|
||||
{
|
||||
String format = getFormat(arguments);
|
||||
std::vector<Instruction> instructions = parseFormat(format);
|
||||
UInt32 scale = 0;
|
||||
if (!instructions.empty())
|
||||
{
|
||||
for (const auto & ins : instructions)
|
||||
{
|
||||
if (scale > 0)
|
||||
break;
|
||||
const String fragment = ins.getFragment();
|
||||
for (char ch : fragment)
|
||||
{
|
||||
if (ch != 'S')
|
||||
{
|
||||
scale = 0;
|
||||
break;
|
||||
}
|
||||
else
|
||||
scale++;
|
||||
}
|
||||
}
|
||||
}
|
||||
date_type = std::make_shared<DataTypeDateTime64>(scale, time_zone_name);
|
||||
}
|
||||
DataTypePtr data_type;
|
||||
if constexpr (return_type == ReturnType::DateTime)
|
||||
data_type = std::make_shared<DataTypeDateTime>(time_zone_name);
|
||||
else
|
||||
date_type = std::make_shared<DataTypeDateTime>(time_zone_name);
|
||||
{
|
||||
if constexpr (parse_syntax == ParseSyntax::MySQL)
|
||||
data_type = std::make_shared<DataTypeDateTime64>(6, time_zone_name);
|
||||
else
|
||||
{
|
||||
/// The precision of the return type is the number of 'S' placeholders.
|
||||
String format = getFormat(arguments);
|
||||
std::vector<Instruction> instructions = parseFormat(format);
|
||||
size_t s_count = 0;
|
||||
for (const auto & instruction : instructions)
|
||||
{
|
||||
const String & fragment = instruction.getFragment();
|
||||
for (char c : fragment)
|
||||
{
|
||||
if (c == 'S')
|
||||
++s_count;
|
||||
else
|
||||
break;
|
||||
}
|
||||
if (s_count > 0)
|
||||
break;
|
||||
}
|
||||
data_type = std::make_shared<DataTypeDateTime64>(s_count, time_zone_name);
|
||||
}
|
||||
}
|
||||
|
||||
if (error_handling == ErrorHandling::Null)
|
||||
return std::make_shared<DataTypeNullable>(date_type);
|
||||
return date_type;
|
||||
return std::make_shared<DataTypeNullable>(data_type);
|
||||
return data_type;
|
||||
}
|
||||
|
||||
ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count) const override
|
||||
{
|
||||
ColumnUInt8::MutablePtr col_null_map;
|
||||
DataTypePtr result_type_without_nullable;
|
||||
if constexpr (error_handling == ErrorHandling::Null)
|
||||
col_null_map = ColumnUInt8::create(input_rows_count, 0);
|
||||
if constexpr (parseDateTime64)
|
||||
result_type_without_nullable = removeNullable(result_type); /// Remove Nullable wrapper. It will be added back later.
|
||||
else
|
||||
result_type_without_nullable = result_type;
|
||||
|
||||
if constexpr (return_type == ReturnType::DateTime)
|
||||
{
|
||||
const DataTypeDateTime64 * datatime64_type = checkAndGetDataType<DataTypeDateTime64>(removeNullable(result_type).get());
|
||||
auto col_res = ColumnDateTime64::create(input_rows_count, datatime64_type->getScale());
|
||||
PaddedPODArray<DataTypeDateTime64::FieldType> & res_data = col_res->getData();
|
||||
executeImpl2<DataTypeDateTime64::FieldType>(arguments, result_type, input_rows_count, res_data, col_null_map);
|
||||
if constexpr (error_handling == ErrorHandling::Null)
|
||||
return ColumnNullable::create(std::move(col_res), std::move(col_null_map));
|
||||
else
|
||||
return col_res;
|
||||
MutableColumnPtr col_res = ColumnDateTime::create(input_rows_count);
|
||||
ColumnDateTime * col_datetime = assert_cast<ColumnDateTime *>(col_res.get());
|
||||
return executeImpl2<DataTypeDateTime::FieldType>(arguments, result_type, input_rows_count, col_res, col_datetime->getData());
|
||||
}
|
||||
else
|
||||
{
|
||||
auto col_res = ColumnDateTime::create(input_rows_count);
|
||||
PaddedPODArray<DataTypeDateTime::FieldType> & res_data = col_res->getData();
|
||||
executeImpl2<DataTypeDateTime::FieldType>(arguments, result_type, input_rows_count, res_data, col_null_map);
|
||||
if constexpr (error_handling == ErrorHandling::Null)
|
||||
return ColumnNullable::create(std::move(col_res), std::move(col_null_map));
|
||||
else
|
||||
return col_res;
|
||||
const auto * result_type_without_nullable_casted = checkAndGetDataType<DataTypeDateTime64>(result_type_without_nullable.get());
|
||||
MutableColumnPtr col_res = ColumnDateTime64::create(input_rows_count, result_type_without_nullable_casted->getScale());
|
||||
ColumnDateTime64 * col_datetime64 = assert_cast<ColumnDateTime64 *>(col_res.get());
|
||||
return executeImpl2<DataTypeDateTime64::FieldType>(arguments, result_type, input_rows_count, col_res, col_datetime64->getData());
|
||||
}
|
||||
}
|
||||
|
||||
template<typename T>
|
||||
void executeImpl2(const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count,
|
||||
PaddedPODArray<T> & res_data, ColumnUInt8::MutablePtr & col_null_map) const
|
||||
ColumnPtr executeImpl2(
|
||||
const ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, size_t input_rows_count,
|
||||
MutableColumnPtr & col_res, PaddedPODArray<T> & res_data) const
|
||||
{
|
||||
const auto * col_str = checkAndGetColumn<ColumnString>(arguments[0].column.get());
|
||||
if (!col_str)
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_COLUMN,
|
||||
"Illegal column {} of first ('str') argument of function {}. Must be string.",
|
||||
arguments[0].column->getName(),
|
||||
"Illegal type in 1st ('time') argument of function {}. Must be String.",
|
||||
getName());
|
||||
|
||||
String format = getFormat(arguments);
|
||||
const auto & time_zone = getTimeZone(arguments);
|
||||
std::vector<Instruction> instructions = parseFormat(format);
|
||||
Int64 multiplier = 0;
|
||||
UInt32 scale = 0;
|
||||
if constexpr (return_type == ReturnType::DateTime64)
|
||||
{
|
||||
const DataTypeDateTime64 * result_type_without_nullable_casted = checkAndGetDataType<DataTypeDateTime64>(removeNullable(result_type).get());
|
||||
scale = result_type_without_nullable_casted->getScale();
|
||||
multiplier = DecimalUtils::scaleMultiplier<DateTime64>(scale);
|
||||
}
|
||||
|
||||
/// Make datetime fit in a cache line.
|
||||
alignas(64) DateTime<error_handling> datetime;
|
||||
ColumnUInt8::MutablePtr col_null_map;
|
||||
if constexpr (error_handling == ErrorHandling::Null)
|
||||
col_null_map = ColumnUInt8::create(input_rows_count, 0);
|
||||
|
||||
const String format = getFormat(arguments);
|
||||
const std::vector<Instruction> instructions = parseFormat(format);
|
||||
const auto & time_zone = getTimeZone(arguments);
|
||||
alignas(64) DateTime<error_handling> datetime; /// Make datetime fit in a cache line.
|
||||
for (size_t i = 0; i < input_rows_count; ++i)
|
||||
{
|
||||
datetime.reset();
|
||||
if constexpr (return_type == ReturnType::DateTime64)
|
||||
datetime.setScale(scale, parse_syntax);
|
||||
|
||||
StringRef str_ref = col_str->getDataAt(i);
|
||||
Pos cur = str_ref.data;
|
||||
Pos end = str_ref.data + str_ref.size;
|
||||
@ -732,9 +759,8 @@ namespace
|
||||
continue;
|
||||
|
||||
Int64OrError result = 0;
|
||||
|
||||
/// Ensure all input was consumed
|
||||
if (!parseDateTime64 && cur < end)
|
||||
if (cur < end)
|
||||
{
|
||||
result = tl::unexpected(ErrorCodeAndMessage(
|
||||
ErrorCodes::CANNOT_PARSE_DATETIME,
|
||||
@ -747,14 +773,10 @@ namespace
|
||||
{
|
||||
if (result = datetime.buildDateTime(time_zone); result.has_value())
|
||||
{
|
||||
if constexpr (parseDateTime64)
|
||||
{
|
||||
const DataTypeDateTime64 * datatime64_type = checkAndGetDataType<DataTypeDateTime64>(removeNullable(result_type).get());
|
||||
Int64 multiplier = DecimalUtils::scaleMultiplier<DateTime64>(datatime64_type->getScale());
|
||||
res_data[i] = static_cast<Int64>(*result) * multiplier + datetime.microsecond;
|
||||
}
|
||||
else
|
||||
if constexpr (return_type == ReturnType::DateTime)
|
||||
res_data[i] = static_cast<UInt32>(*result);
|
||||
else
|
||||
res_data[i] = static_cast<Int64>(*result) * multiplier + datetime.microsecond;
|
||||
}
|
||||
}
|
||||
|
||||
@ -777,6 +799,10 @@ namespace
|
||||
}
|
||||
}
|
||||
}
|
||||
if constexpr (error_handling == ErrorHandling::Null)
|
||||
return ColumnNullable::create(std::move(col_res), std::move(col_null_map));
|
||||
else
|
||||
return std::move(col_res);
|
||||
}
|
||||
|
||||
|
||||
@ -808,7 +834,7 @@ namespace
|
||||
explicit Instruction(const String & literal_) : literal(literal_), fragment("LITERAL") { }
|
||||
explicit Instruction(String && literal_) : literal(std::move(literal_)), fragment("LITERAL") { }
|
||||
|
||||
String getFragment() const { return fragment; }
|
||||
const String & getFragment() const { return fragment; }
|
||||
|
||||
/// For debug
|
||||
[[maybe_unused]] String toString() const
|
||||
@ -885,6 +911,28 @@ namespace
|
||||
return cur;
|
||||
}
|
||||
|
||||
template<typename T, NeedCheckSpace need_check_space>
|
||||
[[nodiscard]]
|
||||
static PosOrError readNumber6(Pos cur, Pos end, [[maybe_unused]] const String & fragment, T & res)
|
||||
{
|
||||
if constexpr (need_check_space == NeedCheckSpace::Yes)
|
||||
RETURN_ERROR_IF_FAILED(checkSpace(cur, end, 6, "readNumber6 requires size >= 6", fragment))
|
||||
|
||||
res = (*cur - '0');
|
||||
++cur;
|
||||
res = res * 10 + (*cur - '0');
|
||||
++cur;
|
||||
res = res * 10 + (*cur - '0');
|
||||
++cur;
|
||||
res = res * 10 + (*cur - '0');
|
||||
++cur;
|
||||
res = res * 10 + (*cur - '0');
|
||||
++cur;
|
||||
res = res * 10 + (*cur - '0');
|
||||
++cur;
|
||||
return cur;
|
||||
}
|
||||
|
||||
[[nodiscard]]
|
||||
static VoidOrError checkSpace(Pos cur, Pos end, size_t len, const String & msg, const String & fragment)
|
||||
{
|
||||
@ -1305,13 +1353,28 @@ namespace
|
||||
}
|
||||
|
||||
[[nodiscard]]
|
||||
static PosOrError mysqlMicrosecond(Pos cur, Pos end, const String & fragment, DateTime<error_handling> & /*date*/)
|
||||
static PosOrError mysqlMicrosecond(Pos cur, Pos end, const String & fragment, DateTime<error_handling> & date)
|
||||
{
|
||||
RETURN_ERROR_IF_FAILED(checkSpace(cur, end, 6, "mysqlMicrosecond requires size >= 6", fragment))
|
||||
|
||||
for (size_t i = 0; i < 6; ++i)
|
||||
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (assertNumber<NeedCheckSpace::No>(cur, end, fragment)))
|
||||
if constexpr (return_type == ReturnType::DateTime)
|
||||
{
|
||||
RETURN_ERROR_IF_FAILED(checkSpace(cur, end, 6, "mysqlMicrosecond requires size >= 6", fragment))
|
||||
|
||||
for (size_t i = 0; i < 6; ++i)
|
||||
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (assertNumber<NeedCheckSpace::No>(cur, end, fragment)))
|
||||
}
|
||||
else
|
||||
{
|
||||
if (date.scale != 6)
|
||||
RETURN_ERROR(
|
||||
ErrorCodes::CANNOT_PARSE_DATETIME,
|
||||
"Unable to parse fragment {} from {} because the datetime scale {} is not 6",
|
||||
fragment,
|
||||
std::string_view(cur, end - cur),
|
||||
std::to_string(date.scale))
|
||||
Int32 microsecond = 0;
|
||||
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (readNumber6<Int32, NeedCheckSpace::Yes>(cur, end, fragment, microsecond)))
|
||||
RETURN_ERROR_IF_FAILED(date.setMicrosecond(microsecond))
|
||||
}
|
||||
return cur;
|
||||
}
|
||||
|
||||
@ -1695,7 +1758,7 @@ namespace
|
||||
}
|
||||
|
||||
[[nodiscard]]
|
||||
static PosOrError jodaMicroSecondOfSecond(size_t repetitions, Pos cur, Pos end, const String & fragment, DateTime<error_handling> & date)
|
||||
static PosOrError jodaMicrosecondOfSecond(size_t repetitions, Pos cur, Pos end, const String & fragment, DateTime<error_handling> & date)
|
||||
{
|
||||
Int32 microsecond;
|
||||
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (readNumberWithVariableLength(cur, end, false, false, false, repetitions, std::max(repetitions, 2uz), fragment, microsecond)))
|
||||
@ -1704,31 +1767,32 @@ namespace
|
||||
}
|
||||
|
||||
[[nodiscard]]
|
||||
static PosOrError jodaTimezoneId(size_t, Pos cur, Pos end, const String &, DateTime<error_handling> & date)
|
||||
static PosOrError jodaTimezone(size_t, Pos cur, Pos end, const String &, DateTime<error_handling> & date)
|
||||
{
|
||||
String dateTimeZone;
|
||||
String read_time_zone;
|
||||
while (cur <= end)
|
||||
{
|
||||
dateTimeZone += *cur;
|
||||
read_time_zone += *cur;
|
||||
++cur;
|
||||
}
|
||||
const DateLUTImpl & date_time_zone = DateLUT::instance(dateTimeZone);
|
||||
const DateLUTImpl & date_time_zone = DateLUT::instance(read_time_zone);
|
||||
const auto result = date.buildDateTime(date_time_zone);
|
||||
if (result.has_value())
|
||||
{
|
||||
const auto timezoneOffset = date_time_zone.timezoneOffset(*result);
|
||||
const DateLUTImpl::Time timezone_offset = date_time_zone.timezoneOffset(*result);
|
||||
date.has_time_zone_offset = true;
|
||||
date.time_zone_offset = timezoneOffset;
|
||||
date.time_zone_offset = timezone_offset;
|
||||
return cur;
|
||||
}
|
||||
else
|
||||
RETURN_ERROR(ErrorCodes::CANNOT_PARSE_DATETIME, "Unable to build date time from timezone {}", dateTimeZone)
|
||||
RETURN_ERROR(ErrorCodes::CANNOT_PARSE_DATETIME, "Unable to parse date time from timezone {}", read_time_zone)
|
||||
}
|
||||
|
||||
[[nodiscard]]
|
||||
static PosOrError jodaTimezoneOffset(size_t repetitions, Pos cur, Pos end, const String & fragment, DateTime<error_handling> & date)
|
||||
{
|
||||
RETURN_ERROR_IF_FAILED(checkSpace(cur, end, 5, "jodaTimezoneOffset requires size >= 5", fragment))
|
||||
|
||||
Int32 sign;
|
||||
if (*cur == '-')
|
||||
sign = -1;
|
||||
@ -1737,7 +1801,7 @@ namespace
|
||||
else
|
||||
RETURN_ERROR(
|
||||
ErrorCodes::CANNOT_PARSE_DATETIME,
|
||||
"Unable to parse fragment {} from {} because of unknown sign time zone offset: {}",
|
||||
"Unable to parse fragment {} from {} because of unknown sign in time zone offset: {}",
|
||||
fragment,
|
||||
std::string_view(cur, end - cur),
|
||||
std::string_view(cur, 1))
|
||||
@ -1745,8 +1809,22 @@ namespace
|
||||
|
||||
Int32 hour;
|
||||
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (readNumberWithVariableLength(cur, end, false, false, false, repetitions, std::max(repetitions, 2uz), fragment, hour)))
|
||||
if (hour < 0 || hour > 23)
|
||||
RETURN_ERROR(
|
||||
ErrorCodes::CANNOT_PARSE_DATETIME,
|
||||
"Unable to parse fragment {} from {} because the hour of datetime not in range [0, 23]: {}",
|
||||
fragment,
|
||||
std::string_view(cur, end - cur),
|
||||
std::string_view(cur, 1))
|
||||
Int32 minute;
|
||||
ASSIGN_RESULT_OR_RETURN_ERROR(cur, (readNumberWithVariableLength(cur, end, false, false, false, repetitions, std::max(repetitions, 2uz), fragment, minute)))
|
||||
if (minute < 0 || minute > 59)
|
||||
RETURN_ERROR(
|
||||
ErrorCodes::CANNOT_PARSE_DATETIME,
|
||||
"Unable to parse fragment {} from {} because the minute of datetime not in range [0, 59]: {}",
|
||||
fragment,
|
||||
std::string_view(cur, end - cur),
|
||||
std::string_view(cur, 1))
|
||||
date.has_time_zone_offset = true;
|
||||
date.time_zone_offset = sign * (hour * 3600 + minute * 60);
|
||||
return cur;
|
||||
@ -2133,10 +2211,10 @@ namespace
|
||||
instructions.emplace_back(ACTION_ARGS_WITH_BIND(Instruction::jodaSecondOfMinute, repetitions));
|
||||
break;
|
||||
case 'S':
|
||||
instructions.emplace_back(ACTION_ARGS_WITH_BIND(Instruction::jodaMicroSecondOfSecond, repetitions));
|
||||
instructions.emplace_back(ACTION_ARGS_WITH_BIND(Instruction::jodaMicrosecondOfSecond, repetitions));
|
||||
break;
|
||||
case 'z':
|
||||
instructions.emplace_back(ACTION_ARGS_WITH_BIND(Instruction::jodaTimezoneId, repetitions));
|
||||
instructions.emplace_back(ACTION_ARGS_WITH_BIND(Instruction::jodaTimezone, repetitions));
|
||||
break;
|
||||
case 'Z':
|
||||
instructions.emplace_back(ACTION_ARGS_WITH_BIND(Instruction::jodaTimezoneOffset, repetitions));
|
||||
@ -2161,21 +2239,22 @@ namespace
|
||||
if (arguments.size() == 1)
|
||||
{
|
||||
if constexpr (parse_syntax == ParseSyntax::MySQL)
|
||||
return "%Y-%m-%d %H:%i:%s";
|
||||
{
|
||||
if constexpr (return_type == ReturnType::DateTime)
|
||||
return "%Y-%m-%d %H:%i:%s";
|
||||
else
|
||||
return "%Y-%m-%d %H:%i:%s.%f";
|
||||
}
|
||||
else
|
||||
return "yyyy-MM-dd HH:mm:ss";
|
||||
}
|
||||
else
|
||||
{
|
||||
if (!arguments[1].column || !isColumnConst(*arguments[1].column))
|
||||
throw Exception(ErrorCodes::ILLEGAL_COLUMN, "Argument at index {} for function {} must be constant", 1, getName());
|
||||
|
||||
const auto * col_format = checkAndGetColumnConst<ColumnString>(arguments[1].column.get());
|
||||
if (!col_format)
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_COLUMN,
|
||||
"Illegal column {} of second ('format') argument of function {}. Must be constant string.",
|
||||
arguments[1].column->getName(),
|
||||
"Illegal type in 'format' argument of function {}. Must be constant String.",
|
||||
getName());
|
||||
return col_format->getValue<String>();
|
||||
}
|
||||
@ -2190,8 +2269,7 @@ namespace
|
||||
if (!col)
|
||||
throw Exception(
|
||||
ErrorCodes::ILLEGAL_COLUMN,
|
||||
"Illegal column {} of third ('timezone') argument of function {}. Must be constant String.",
|
||||
arguments[2].column->getName(),
|
||||
"Illegal type in 'timezone' argument of function {}. Must be constant String.",
|
||||
getName());
|
||||
|
||||
String time_zone = col->getValue<String>();
|
||||
@ -2229,6 +2307,21 @@ namespace
|
||||
static constexpr auto name = "parseDateTimeInJodaSyntaxOrNull";
|
||||
};
|
||||
|
||||
struct NameParseDateTime64
|
||||
{
|
||||
static constexpr auto name = "parseDateTime64";
|
||||
};
|
||||
|
||||
struct NameParseDateTime64OrZero
|
||||
{
|
||||
static constexpr auto name = "parseDateTime64OrZero";
|
||||
};
|
||||
|
||||
struct NameParseDateTime64OrNull
|
||||
{
|
||||
static constexpr auto name = "parseDateTime64OrNull";
|
||||
};
|
||||
|
||||
struct NameParseDateTime64InJodaSyntax
|
||||
{
|
||||
static constexpr auto name = "parseDateTime64InJodaSyntax";
|
||||
@ -2244,15 +2337,18 @@ namespace
|
||||
static constexpr auto name = "parseDateTime64InJodaSyntaxOrNull";
|
||||
};
|
||||
|
||||
using FunctionParseDateTime = FunctionParseDateTimeImpl<NameParseDateTime, ParseSyntax::MySQL, ErrorHandling::Exception>;
|
||||
using FunctionParseDateTimeOrZero = FunctionParseDateTimeImpl<NameParseDateTimeOrZero, ParseSyntax::MySQL, ErrorHandling::Zero>;
|
||||
using FunctionParseDateTimeOrNull = FunctionParseDateTimeImpl<NameParseDateTimeOrNull, ParseSyntax::MySQL, ErrorHandling::Null>;
|
||||
using FunctionParseDateTimeInJodaSyntax = FunctionParseDateTimeImpl<NameParseDateTimeInJodaSyntax, ParseSyntax::Joda, ErrorHandling::Exception>;
|
||||
using FunctionParseDateTimeInJodaSyntaxOrZero = FunctionParseDateTimeImpl<NameParseDateTimeInJodaSyntaxOrZero, ParseSyntax::Joda, ErrorHandling::Zero>;
|
||||
using FunctionParseDateTimeInJodaSyntaxOrNull = FunctionParseDateTimeImpl<NameParseDateTimeInJodaSyntaxOrNull, ParseSyntax::Joda, ErrorHandling::Null>;
|
||||
using FunctionParseDateTime64InJodaSyntax = FunctionParseDateTimeImpl<NameParseDateTime64InJodaSyntax, ParseSyntax::Joda, ErrorHandling::Exception, true>;
|
||||
using FunctionParseDateTime64InJodaSyntaxOrZero = FunctionParseDateTimeImpl<NameParseDateTime64InJodaSyntaxOrZero, ParseSyntax::Joda, ErrorHandling::Zero, true>;
|
||||
using FunctionParseDateTime64InJodaSyntaxOrNull = FunctionParseDateTimeImpl<NameParseDateTime64InJodaSyntaxOrNull, ParseSyntax::Joda, ErrorHandling::Null, true>;
|
||||
using FunctionParseDateTime = FunctionParseDateTimeImpl<NameParseDateTime, ParseSyntax::MySQL, ReturnType::DateTime, ErrorHandling::Exception>;
|
||||
using FunctionParseDateTimeOrZero = FunctionParseDateTimeImpl<NameParseDateTimeOrZero, ParseSyntax::MySQL, ReturnType::DateTime, ErrorHandling::Zero>;
|
||||
using FunctionParseDateTimeOrNull = FunctionParseDateTimeImpl<NameParseDateTimeOrNull, ParseSyntax::MySQL, ReturnType::DateTime, ErrorHandling::Null>;
|
||||
using FunctionParseDateTime64 = FunctionParseDateTimeImpl<NameParseDateTime64, ParseSyntax::MySQL, ReturnType::DateTime64, ErrorHandling::Exception>;
|
||||
using FunctionParseDateTime64OrZero = FunctionParseDateTimeImpl<NameParseDateTime64OrZero, ParseSyntax::MySQL, ReturnType::DateTime64, ErrorHandling::Zero>;
|
||||
using FunctionParseDateTime64OrNull = FunctionParseDateTimeImpl<NameParseDateTime64OrNull, ParseSyntax::MySQL, ReturnType::DateTime64, ErrorHandling::Null>;
|
||||
using FunctionParseDateTimeInJodaSyntax = FunctionParseDateTimeImpl<NameParseDateTimeInJodaSyntax, ParseSyntax::Joda, ReturnType::DateTime, ErrorHandling::Exception>;
|
||||
using FunctionParseDateTimeInJodaSyntaxOrZero = FunctionParseDateTimeImpl<NameParseDateTimeInJodaSyntaxOrZero, ParseSyntax::Joda, ReturnType::DateTime, ErrorHandling::Zero>;
|
||||
using FunctionParseDateTimeInJodaSyntaxOrNull = FunctionParseDateTimeImpl<NameParseDateTimeInJodaSyntaxOrNull, ParseSyntax::Joda, ReturnType::DateTime, ErrorHandling::Null>;
|
||||
using FunctionParseDateTime64InJodaSyntax = FunctionParseDateTimeImpl<NameParseDateTime64InJodaSyntax, ParseSyntax::Joda, ReturnType::DateTime64, ErrorHandling::Exception>;
|
||||
using FunctionParseDateTime64InJodaSyntaxOrZero = FunctionParseDateTimeImpl<NameParseDateTime64InJodaSyntaxOrZero, ParseSyntax::Joda, ReturnType::DateTime64, ErrorHandling::Zero>;
|
||||
using FunctionParseDateTime64InJodaSyntaxOrNull = FunctionParseDateTimeImpl<NameParseDateTime64InJodaSyntaxOrNull, ParseSyntax::Joda, ReturnType::DateTime64, ErrorHandling::Null>;
|
||||
}
|
||||
|
||||
REGISTER_FUNCTION(ParseDateTime)
|
||||
@ -2262,13 +2358,16 @@ REGISTER_FUNCTION(ParseDateTime)
|
||||
factory.registerFunction<FunctionParseDateTimeOrZero>();
|
||||
factory.registerFunction<FunctionParseDateTimeOrNull>();
|
||||
factory.registerAlias("str_to_date", FunctionParseDateTimeOrNull::name, FunctionFactory::Case::Insensitive);
|
||||
|
||||
factory.registerFunction<FunctionParseDateTimeInJodaSyntax>();
|
||||
factory.registerFunction<FunctionParseDateTimeInJodaSyntaxOrZero>();
|
||||
factory.registerFunction<FunctionParseDateTimeInJodaSyntaxOrNull>();
|
||||
|
||||
factory.registerFunction<FunctionParseDateTime64InJodaSyntax>();
|
||||
factory.registerFunction<FunctionParseDateTime64InJodaSyntaxOrZero>();
|
||||
factory.registerFunction<FunctionParseDateTime64InJodaSyntaxOrNull>();
|
||||
factory.registerFunction<FunctionParseDateTime64>();
|
||||
factory.registerFunction<FunctionParseDateTime64OrZero>();
|
||||
factory.registerFunction<FunctionParseDateTime64OrNull>();
|
||||
}
|
||||
|
||||
|
||||
|
@ -335,7 +335,7 @@ Aggregator::Aggregator(const Block & header_, const Params & params_)
|
||||
: header(header_)
|
||||
, keys_positions(calculateKeysPositions(header, params_))
|
||||
, params(params_)
|
||||
, tmp_data(params.tmp_data_scope ? std::make_unique<TemporaryDataOnDisk>(params.tmp_data_scope, CurrentMetrics::TemporaryFilesForAggregation) : nullptr)
|
||||
, tmp_data(params.tmp_data_scope ? params.tmp_data_scope->childScope(CurrentMetrics::TemporaryFilesForAggregation) : nullptr)
|
||||
, min_bytes_for_prefetch(getMinBytesForPrefetch())
|
||||
{
|
||||
/// Use query-level memory tracker
|
||||
@ -1519,10 +1519,15 @@ void Aggregator::writeToTemporaryFile(AggregatedDataVariants & data_variants, si
|
||||
Stopwatch watch;
|
||||
size_t rows = data_variants.size();
|
||||
|
||||
auto & out_stream = tmp_data->createStream(getHeader(false), max_temp_file_size);
|
||||
auto & out_stream = [this, max_temp_file_size]() -> TemporaryBlockStreamHolder &
|
||||
{
|
||||
std::lock_guard lk(tmp_files_mutex);
|
||||
return tmp_files.emplace_back(getHeader(false), tmp_data.get(), max_temp_file_size);
|
||||
}();
|
||||
|
||||
ProfileEvents::increment(ProfileEvents::ExternalAggregationWritePart);
|
||||
|
||||
LOG_DEBUG(log, "Writing part of aggregation data into temporary file {}", out_stream.getPath());
|
||||
LOG_DEBUG(log, "Writing part of aggregation data into temporary file {}", out_stream.getHolder()->describeFilePath());
|
||||
|
||||
/// Flush only two-level data and possibly overflow data.
|
||||
|
||||
@ -1639,11 +1644,24 @@ Block Aggregator::convertOneBucketToBlock(AggregatedDataVariants & variants, Are
|
||||
return block;
|
||||
}
|
||||
|
||||
std::list<TemporaryBlockStreamHolder> Aggregator::detachTemporaryData()
|
||||
{
|
||||
std::lock_guard lk(tmp_files_mutex);
|
||||
return std::move(tmp_files);
|
||||
}
|
||||
|
||||
bool Aggregator::hasTemporaryData() const
|
||||
{
|
||||
std::lock_guard lk(tmp_files_mutex);
|
||||
return !tmp_files.empty();
|
||||
}
|
||||
|
||||
|
||||
template <typename Method>
|
||||
void Aggregator::writeToTemporaryFileImpl(
|
||||
AggregatedDataVariants & data_variants,
|
||||
Method & method,
|
||||
TemporaryFileStream & out) const
|
||||
TemporaryBlockStreamHolder & out) const
|
||||
{
|
||||
size_t max_temporary_block_size_rows = 0;
|
||||
size_t max_temporary_block_size_bytes = 0;
|
||||
@ -1660,14 +1678,14 @@ void Aggregator::writeToTemporaryFileImpl(
|
||||
for (UInt32 bucket = 0; bucket < Method::Data::NUM_BUCKETS; ++bucket)
|
||||
{
|
||||
Block block = convertOneBucketToBlock(data_variants, method, data_variants.aggregates_pool, false, bucket);
|
||||
out.write(block);
|
||||
out->write(block);
|
||||
update_max_sizes(block);
|
||||
}
|
||||
|
||||
if (params.overflow_row)
|
||||
{
|
||||
Block block = prepareBlockAndFillWithoutKey(data_variants, false, true);
|
||||
out.write(block);
|
||||
out->write(block);
|
||||
update_max_sizes(block);
|
||||
}
|
||||
|
||||
@ -3369,6 +3387,8 @@ UInt64 calculateCacheKey(const DB::ASTPtr & select_query)
|
||||
|
||||
SipHash hash;
|
||||
hash.update(select.tables()->getTreeHash(/*ignore_aliases=*/true));
|
||||
if (const auto prewhere = select.prewhere())
|
||||
hash.update(prewhere->getTreeHash(/*ignore_aliases=*/true));
|
||||
if (const auto where = select.where())
|
||||
hash.update(where->getTreeHash(/*ignore_aliases=*/true));
|
||||
if (const auto group_by = select.groupBy())
|
||||
|
@ -309,9 +309,9 @@ public:
|
||||
/// For external aggregation.
|
||||
void writeToTemporaryFile(AggregatedDataVariants & data_variants, size_t max_temp_file_size = 0) const;
|
||||
|
||||
bool hasTemporaryData() const { return tmp_data && !tmp_data->empty(); }
|
||||
bool hasTemporaryData() const;
|
||||
|
||||
const TemporaryDataOnDisk & getTemporaryData() const { return *tmp_data; }
|
||||
std::list<TemporaryBlockStreamHolder> detachTemporaryData();
|
||||
|
||||
/// Get data structure of the result.
|
||||
Block getHeader(bool final) const;
|
||||
@ -355,7 +355,9 @@ private:
|
||||
LoggerPtr log = getLogger("Aggregator");
|
||||
|
||||
/// For external aggregation.
|
||||
TemporaryDataOnDiskPtr tmp_data;
|
||||
TemporaryDataOnDiskScopePtr tmp_data;
|
||||
mutable std::mutex tmp_files_mutex;
|
||||
mutable std::list<TemporaryBlockStreamHolder> tmp_files TSA_GUARDED_BY(tmp_files_mutex);
|
||||
|
||||
size_t min_bytes_for_prefetch = 0;
|
||||
|
||||
@ -456,7 +458,7 @@ private:
|
||||
void writeToTemporaryFileImpl(
|
||||
AggregatedDataVariants & data_variants,
|
||||
Method & method,
|
||||
TemporaryFileStream & out) const;
|
||||
TemporaryBlockStreamHolder & out) const;
|
||||
|
||||
/// Merge NULL key data from hash table `src` into `dst`.
|
||||
template <typename Method, typename Table>
|
||||
|
@ -5,16 +5,21 @@
|
||||
#include <Core/Names.h>
|
||||
#include <Core/NamesAndTypes.h>
|
||||
#include <DataTypes/DataTypeLowCardinality.h>
|
||||
#include <DataTypes/IDataType.h>
|
||||
#include <DataTypes/Serializations/ISerialization.h>
|
||||
#include <Interpreters/ConcurrentHashJoin.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/ExpressionActions.h>
|
||||
#include <Interpreters/HashJoin/ScatteredBlock.h>
|
||||
#include <Interpreters/PreparedSets.h>
|
||||
#include <Interpreters/TableJoin.h>
|
||||
#include <Interpreters/createBlockSelector.h>
|
||||
#include <Parsers/ASTSelectQuery.h>
|
||||
#include <Parsers/DumpASTNode.h>
|
||||
#include <Parsers/ExpressionListParsers.h>
|
||||
#include <Parsers/IAST_fwd.h>
|
||||
#include <Parsers/parseQuery.h>
|
||||
#include <Storages/SelectQueryInfo.h>
|
||||
#include <Common/CurrentThread.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/ProfileEvents.h>
|
||||
@ -24,6 +29,12 @@
|
||||
#include <Common/setThreadName.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
|
||||
#include <algorithm>
|
||||
#include <numeric>
|
||||
#include <vector>
|
||||
|
||||
using namespace DB;
|
||||
|
||||
namespace ProfileEvents
|
||||
{
|
||||
extern const Event HashJoinPreallocatedElementsInHashTables;
|
||||
@ -116,9 +127,7 @@ ConcurrentHashJoin::ConcurrentHashJoin(
|
||||
auto inner_hash_join = std::make_shared<InternalHashJoin>();
|
||||
inner_hash_join->data = std::make_unique<HashJoin>(
|
||||
table_join_, right_sample_block, any_take_last_row_, reserve_size, fmt::format("concurrent{}", idx));
|
||||
/// Non zero `max_joined_block_rows` allows to process block partially and return not processed part.
|
||||
/// TODO: It's not handled properly in ConcurrentHashJoin case, so we set it to 0 to disable this feature.
|
||||
inner_hash_join->data->setMaxJoinedBlockRows(0);
|
||||
inner_hash_join->data->setMaxJoinedBlockRows(table_join->maxJoinedBlockRows());
|
||||
hash_joins[idx] = std::move(inner_hash_join);
|
||||
});
|
||||
}
|
||||
@ -165,10 +174,13 @@ ConcurrentHashJoin::~ConcurrentHashJoin()
|
||||
}
|
||||
}
|
||||
|
||||
bool ConcurrentHashJoin::addBlockToJoin(const Block & right_block, bool check_limits)
|
||||
bool ConcurrentHashJoin::addBlockToJoin(const Block & right_block_, bool check_limits)
|
||||
{
|
||||
Blocks dispatched_blocks = dispatchBlock(table_join->getOnlyClause().key_names_right, right_block);
|
||||
/// We materialize columns here to avoid materializing them multiple times on different threads
|
||||
/// (inside different `hash_join`-s) because the block will be shared.
|
||||
Block right_block = hash_joins[0]->data->materializeColumnsFromRightBlock(right_block_);
|
||||
|
||||
auto dispatched_blocks = dispatchBlock(table_join->getOnlyClause().key_names_right, std::move(right_block));
|
||||
size_t blocks_left = 0;
|
||||
for (const auto & block : dispatched_blocks)
|
||||
{
|
||||
@ -211,19 +223,52 @@ bool ConcurrentHashJoin::addBlockToJoin(const Block & right_block, bool check_li
|
||||
|
||||
void ConcurrentHashJoin::joinBlock(Block & block, std::shared_ptr<ExtraBlock> & /*not_processed*/)
|
||||
{
|
||||
Blocks dispatched_blocks = dispatchBlock(table_join->getOnlyClause().key_names_left, block);
|
||||
Blocks res;
|
||||
ExtraScatteredBlocks extra_blocks;
|
||||
joinBlock(block, extra_blocks, res);
|
||||
chassert(!extra_blocks.rows());
|
||||
block = concatenateBlocks(res);
|
||||
}
|
||||
|
||||
void ConcurrentHashJoin::joinBlock(Block & block, ExtraScatteredBlocks & extra_blocks, std::vector<Block> & res)
|
||||
{
|
||||
ScatteredBlocks dispatched_blocks;
|
||||
auto & remaining_blocks = extra_blocks.remaining_blocks;
|
||||
if (extra_blocks.rows())
|
||||
{
|
||||
dispatched_blocks.swap(remaining_blocks);
|
||||
}
|
||||
else
|
||||
{
|
||||
hash_joins[0]->data->materializeColumnsFromLeftBlock(block);
|
||||
dispatched_blocks = dispatchBlock(table_join->getOnlyClause().key_names_left, std::move(block));
|
||||
}
|
||||
|
||||
block = {};
|
||||
|
||||
/// Just in case, should be no-op always
|
||||
remaining_blocks.resize(slots);
|
||||
|
||||
chassert(res.empty());
|
||||
res.clear();
|
||||
res.reserve(dispatched_blocks.size());
|
||||
|
||||
for (size_t i = 0; i < dispatched_blocks.size(); ++i)
|
||||
{
|
||||
std::shared_ptr<ExtraBlock> none_extra_block;
|
||||
auto & hash_join = hash_joins[i];
|
||||
auto & dispatched_block = dispatched_blocks[i];
|
||||
hash_join->data->joinBlock(dispatched_block, none_extra_block);
|
||||
if (dispatched_block && (i == 0 || dispatched_block.rows()))
|
||||
hash_join->data->joinBlock(dispatched_block, remaining_blocks[i]);
|
||||
if (none_extra_block && !none_extra_block->empty())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "not_processed should be empty");
|
||||
}
|
||||
|
||||
block = concatenateBlocks(dispatched_blocks);
|
||||
for (size_t i = 0; i < dispatched_blocks.size(); ++i)
|
||||
{
|
||||
auto & dispatched_block = dispatched_blocks[i];
|
||||
if (dispatched_block && (i == 0 || dispatched_block.rows()))
|
||||
res.emplace_back(std::move(dispatched_block).getSourceBlock());
|
||||
}
|
||||
}
|
||||
|
||||
void ConcurrentHashJoin::checkTypesOfKeys(const Block & block) const
|
||||
@ -302,10 +347,9 @@ static ALWAYS_INLINE IColumn::Selector hashToSelector(const WeakHash32 & hash, s
|
||||
return selector;
|
||||
}
|
||||
|
||||
IColumn::Selector ConcurrentHashJoin::selectDispatchBlock(const Strings & key_columns_names, const Block & from_block)
|
||||
IColumn::Selector selectDispatchBlock(size_t num_shards, const Strings & key_columns_names, const Block & from_block)
|
||||
{
|
||||
size_t num_rows = from_block.rows();
|
||||
size_t num_shards = hash_joins.size();
|
||||
|
||||
WeakHash32 hash(num_rows);
|
||||
for (const auto & key_name : key_columns_names)
|
||||
@ -317,40 +361,101 @@ IColumn::Selector ConcurrentHashJoin::selectDispatchBlock(const Strings & key_co
|
||||
return hashToSelector(hash, num_shards);
|
||||
}
|
||||
|
||||
Blocks ConcurrentHashJoin::dispatchBlock(const Strings & key_columns_names, const Block & from_block)
|
||||
ScatteredBlocks scatterBlocksByCopying(size_t num_shards, const IColumn::Selector & selector, const Block & from_block)
|
||||
{
|
||||
/// TODO: use JoinCommon::scatterBlockByHash
|
||||
size_t num_shards = hash_joins.size();
|
||||
size_t num_cols = from_block.columns();
|
||||
|
||||
IColumn::Selector selector = selectDispatchBlock(key_columns_names, from_block);
|
||||
|
||||
Blocks result(num_shards);
|
||||
Blocks blocks(num_shards);
|
||||
for (size_t i = 0; i < num_shards; ++i)
|
||||
result[i] = from_block.cloneEmpty();
|
||||
blocks[i] = from_block.cloneEmpty();
|
||||
|
||||
for (size_t i = 0; i < num_cols; ++i)
|
||||
for (size_t i = 0; i < from_block.columns(); ++i)
|
||||
{
|
||||
auto dispatched_columns = from_block.getByPosition(i).column->scatter(num_shards, selector);
|
||||
assert(result.size() == dispatched_columns.size());
|
||||
chassert(blocks.size() == dispatched_columns.size());
|
||||
for (size_t block_index = 0; block_index < num_shards; ++block_index)
|
||||
{
|
||||
result[block_index].getByPosition(i).column = std::move(dispatched_columns[block_index]);
|
||||
blocks[block_index].getByPosition(i).column = std::move(dispatched_columns[block_index]);
|
||||
}
|
||||
}
|
||||
|
||||
ScatteredBlocks result;
|
||||
result.reserve(num_shards);
|
||||
for (size_t i = 0; i < num_shards; ++i)
|
||||
result.emplace_back(std::move(blocks[i]));
|
||||
return result;
|
||||
}
|
||||
|
||||
UInt64 calculateCacheKey(std::shared_ptr<TableJoin> & table_join, const QueryTreeNodePtr & right_table_expression)
|
||||
ScatteredBlocks scatterBlocksWithSelector(size_t num_shards, const IColumn::Selector & selector, const Block & from_block)
|
||||
{
|
||||
std::vector<ScatteredBlock::IndexesPtr> selectors(num_shards);
|
||||
for (size_t i = 0; i < num_shards; ++i)
|
||||
{
|
||||
selectors[i] = ScatteredBlock::Indexes::create();
|
||||
selectors[i]->reserve(selector.size() / num_shards + 1);
|
||||
}
|
||||
for (size_t i = 0; i < selector.size(); ++i)
|
||||
{
|
||||
const size_t shard = selector[i];
|
||||
selectors[shard]->getData().push_back(i);
|
||||
}
|
||||
ScatteredBlocks result;
|
||||
result.reserve(num_shards);
|
||||
for (size_t i = 0; i < num_shards; ++i)
|
||||
result.emplace_back(from_block, std::move(selectors[i]));
|
||||
return result;
|
||||
}
|
||||
|
||||
ScatteredBlocks ConcurrentHashJoin::dispatchBlock(const Strings & key_columns_names, Block && from_block)
|
||||
{
|
||||
size_t num_shards = hash_joins.size();
|
||||
if (num_shards == 1)
|
||||
{
|
||||
ScatteredBlocks res;
|
||||
res.emplace_back(std::move(from_block));
|
||||
return res;
|
||||
}
|
||||
|
||||
IColumn::Selector selector = selectDispatchBlock(num_shards, key_columns_names, from_block);
|
||||
|
||||
/// With zero-copy approach we won't copy the source columns, but will create a new one with indices.
|
||||
/// This is not beneficial when the whole set of columns is e.g. a single small column.
|
||||
constexpr auto threshold = sizeof(IColumn::Selector::value_type);
|
||||
const auto & data_types = from_block.getDataTypes();
|
||||
const bool use_zero_copy_approach
|
||||
= std::accumulate(
|
||||
data_types.begin(),
|
||||
data_types.end(),
|
||||
0u,
|
||||
[](size_t sum, const DataTypePtr & type)
|
||||
{ return sum + (type->haveMaximumSizeOfValue() ? type->getMaximumSizeOfValueInMemory() : threshold + 1); })
|
||||
> threshold;
|
||||
|
||||
return use_zero_copy_approach ? scatterBlocksWithSelector(num_shards, selector, from_block)
|
||||
: scatterBlocksByCopying(num_shards, selector, from_block);
|
||||
}
|
||||
|
||||
UInt64 calculateCacheKey(
|
||||
std::shared_ptr<TableJoin> & table_join, const QueryTreeNodePtr & right_table_expression, const SelectQueryInfo & select_query_info)
|
||||
{
|
||||
const auto * select = select_query_info.query->as<DB::ASTSelectQuery>();
|
||||
if (!select)
|
||||
return 0;
|
||||
|
||||
IQueryTreeNode::HashState hash;
|
||||
|
||||
if (const auto prewhere = select->prewhere())
|
||||
hash.update(prewhere->getTreeHash(/*ignore_aliases=*/true));
|
||||
if (const auto where = select->where())
|
||||
hash.update(where->getTreeHash(/*ignore_aliases=*/true));
|
||||
|
||||
chassert(right_table_expression);
|
||||
hash.update(right_table_expression->getTreeHash());
|
||||
|
||||
chassert(table_join && table_join->oneDisjunct());
|
||||
const auto keys
|
||||
= NameOrderedSet{table_join->getClauses().at(0).key_names_right.begin(), table_join->getClauses().at(0).key_names_right.end()};
|
||||
for (const auto & name : keys)
|
||||
hash.update(name);
|
||||
|
||||
return hash.get64();
|
||||
}
|
||||
}
|
||||
|
@ -1,13 +1,11 @@
|
||||
#pragma once
|
||||
|
||||
#include <condition_variable>
|
||||
#include <memory>
|
||||
#include <optional>
|
||||
#include <Analyzer/IQueryTreeNode.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/ExpressionActions.h>
|
||||
#include <Interpreters/HashTablesStatistics.h>
|
||||
#include <Interpreters/HashJoin/HashJoin.h>
|
||||
#include <Interpreters/HashTablesStatistics.h>
|
||||
#include <Interpreters/IJoin.h>
|
||||
#include <base/defines.h>
|
||||
#include <base/types.h>
|
||||
@ -17,6 +15,8 @@
|
||||
namespace DB
|
||||
{
|
||||
|
||||
struct SelectQueryInfo;
|
||||
|
||||
/**
|
||||
* Can run addBlockToJoin() parallelly to speedup the join process. On test, it almose linear speedup by
|
||||
* the degree of parallelism.
|
||||
@ -47,7 +47,7 @@ public:
|
||||
|
||||
std::string getName() const override { return "ConcurrentHashJoin"; }
|
||||
const TableJoin & getTableJoin() const override { return *table_join; }
|
||||
bool addBlockToJoin(const Block & block, bool check_limits) override;
|
||||
bool addBlockToJoin(const Block & right_block_, bool check_limits) override;
|
||||
void checkTypesOfKeys(const Block & block) const override;
|
||||
void joinBlock(Block & block, std::shared_ptr<ExtraBlock> & not_processed) override;
|
||||
void setTotals(const Block & block) override;
|
||||
@ -57,6 +57,9 @@ public:
|
||||
bool alwaysReturnsEmptySet() const override;
|
||||
bool supportParallelJoin() const override { return true; }
|
||||
|
||||
bool isScatteredJoin() const override { return true; }
|
||||
void joinBlock(Block & block, ExtraScatteredBlocks & extra_blocks, std::vector<Block> & res) override;
|
||||
|
||||
IBlocksStreamPtr
|
||||
getNonJoinedBlocks(const Block & left_sample_block, const Block & result_sample_block, UInt64 max_block_size) const override;
|
||||
|
||||
@ -89,9 +92,9 @@ private:
|
||||
std::mutex totals_mutex;
|
||||
Block totals;
|
||||
|
||||
IColumn::Selector selectDispatchBlock(const Strings & key_columns_names, const Block & from_block);
|
||||
Blocks dispatchBlock(const Strings & key_columns_names, const Block & from_block);
|
||||
ScatteredBlocks dispatchBlock(const Strings & key_columns_names, Block && from_block);
|
||||
};
|
||||
|
||||
UInt64 calculateCacheKey(std::shared_ptr<TableJoin> & table_join, const QueryTreeNodePtr & right_table_expression);
|
||||
UInt64 calculateCacheKey(
|
||||
std::shared_ptr<TableJoin> & table_join, const QueryTreeNodePtr & right_table_expression, const SelectQueryInfo & select_query_info);
|
||||
}
|
||||
|
@ -364,6 +364,8 @@ struct ContextSharedPart : boost::noncopyable
|
||||
/// Child scopes for more fine-grained accounting are created per user/query/etc.
|
||||
/// Initialized once during server startup.
|
||||
TemporaryDataOnDiskScopePtr root_temp_data_on_disk TSA_GUARDED_BY(mutex);
|
||||
/// TODO: remove, use only root_temp_data_on_disk
|
||||
VolumePtr temporary_volume_legacy;
|
||||
|
||||
mutable OnceFlag async_loader_initialized;
|
||||
mutable std::unique_ptr<AsyncLoader> async_loader; /// Thread pool for asynchronous initialization of arbitrary DAG of `LoadJob`s (used for tables loading)
|
||||
@ -799,10 +801,9 @@ struct ContextSharedPart : boost::noncopyable
|
||||
}
|
||||
|
||||
/// Special volumes might also use disks that require shutdown.
|
||||
auto & tmp_data = root_temp_data_on_disk;
|
||||
if (tmp_data && tmp_data->getVolume())
|
||||
if (temporary_volume_legacy)
|
||||
{
|
||||
auto & disks = tmp_data->getVolume()->getDisks();
|
||||
auto & disks = temporary_volume_legacy->getDisks();
|
||||
for (auto & disk : disks)
|
||||
disk->shutdown();
|
||||
}
|
||||
@ -1184,8 +1185,8 @@ VolumePtr Context::getGlobalTemporaryVolume() const
|
||||
SharedLockGuard lock(shared->mutex);
|
||||
/// Calling this method we just bypass the `temp_data_on_disk` and write to the file on the volume directly.
|
||||
/// Volume is the same for `root_temp_data_on_disk` (always set) and `temp_data_on_disk` (if it's set).
|
||||
if (shared->root_temp_data_on_disk)
|
||||
return shared->root_temp_data_on_disk->getVolume();
|
||||
if (shared->temporary_volume_legacy)
|
||||
return shared->temporary_volume_legacy;
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
@ -1273,6 +1274,10 @@ try
|
||||
/// We skip directories (for example, 'http_buffers' - it's used for buffering of the results) and all other file types.
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
fs::create_directories(path);
|
||||
}
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
@ -1306,7 +1311,8 @@ void Context::setTemporaryStoragePath(const String & path, size_t max_size)
|
||||
|
||||
TemporaryDataOnDiskSettings temporary_data_on_disk_settings;
|
||||
temporary_data_on_disk_settings.max_size_on_disk = max_size;
|
||||
shared->root_temp_data_on_disk = std::make_shared<TemporaryDataOnDiskScope>(std::move(volume), std::move(temporary_data_on_disk_settings));
|
||||
shared->root_temp_data_on_disk = std::make_shared<TemporaryDataOnDiskScope>(volume, std::move(temporary_data_on_disk_settings));
|
||||
shared->temporary_volume_legacy = volume;
|
||||
}
|
||||
|
||||
void Context::setTemporaryStoragePolicy(const String & policy_name, size_t max_size)
|
||||
@ -1354,7 +1360,8 @@ void Context::setTemporaryStoragePolicy(const String & policy_name, size_t max_s
|
||||
|
||||
TemporaryDataOnDiskSettings temporary_data_on_disk_settings;
|
||||
temporary_data_on_disk_settings.max_size_on_disk = max_size;
|
||||
shared->root_temp_data_on_disk = std::make_shared<TemporaryDataOnDiskScope>(std::move(volume), std::move(temporary_data_on_disk_settings));
|
||||
shared->root_temp_data_on_disk = std::make_shared<TemporaryDataOnDiskScope>(volume, std::move(temporary_data_on_disk_settings));
|
||||
shared->temporary_volume_legacy = volume;
|
||||
}
|
||||
|
||||
void Context::setTemporaryStorageInCache(const String & cache_disk_name, size_t max_size)
|
||||
@ -1378,7 +1385,8 @@ void Context::setTemporaryStorageInCache(const String & cache_disk_name, size_t
|
||||
|
||||
TemporaryDataOnDiskSettings temporary_data_on_disk_settings;
|
||||
temporary_data_on_disk_settings.max_size_on_disk = max_size;
|
||||
shared->root_temp_data_on_disk = std::make_shared<TemporaryDataOnDiskScope>(std::move(volume), file_cache.get(), std::move(temporary_data_on_disk_settings));
|
||||
shared->root_temp_data_on_disk = std::make_shared<TemporaryDataOnDiskScope>(file_cache.get(), std::move(temporary_data_on_disk_settings));
|
||||
shared->temporary_volume_legacy = volume;
|
||||
}
|
||||
|
||||
void Context::setFlagsPath(const String & path)
|
||||
|
@ -41,15 +41,15 @@ namespace
|
||||
class AccumulatedBlockReader
|
||||
{
|
||||
public:
|
||||
AccumulatedBlockReader(TemporaryFileStream & reader_,
|
||||
AccumulatedBlockReader(TemporaryBlockStreamReaderHolder reader_,
|
||||
std::mutex & mutex_,
|
||||
size_t result_block_size_ = 0)
|
||||
: reader(reader_)
|
||||
: reader(std::move(reader_))
|
||||
, mutex(mutex_)
|
||||
, result_block_size(result_block_size_)
|
||||
{
|
||||
if (!reader.isWriteFinished())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Reading not finished file");
|
||||
if (!reader)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Reader is nullptr");
|
||||
}
|
||||
|
||||
Block read()
|
||||
@ -63,7 +63,7 @@ namespace
|
||||
size_t rows_read = 0;
|
||||
do
|
||||
{
|
||||
Block block = reader.read();
|
||||
Block block = reader->read();
|
||||
rows_read += block.rows();
|
||||
if (!block)
|
||||
{
|
||||
@ -81,7 +81,7 @@ namespace
|
||||
}
|
||||
|
||||
private:
|
||||
TemporaryFileStream & reader;
|
||||
TemporaryBlockStreamReaderHolder reader;
|
||||
std::mutex & mutex;
|
||||
|
||||
const size_t result_block_size;
|
||||
@ -124,12 +124,12 @@ class GraceHashJoin::FileBucket : boost::noncopyable
|
||||
public:
|
||||
using BucketLock = std::unique_lock<std::mutex>;
|
||||
|
||||
explicit FileBucket(size_t bucket_index_, TemporaryFileStream & left_file_, TemporaryFileStream & right_file_, LoggerPtr log_)
|
||||
: idx{bucket_index_}
|
||||
, left_file{left_file_}
|
||||
, right_file{right_file_}
|
||||
, state{State::WRITING_BLOCKS}
|
||||
, log{log_}
|
||||
explicit FileBucket(size_t bucket_index_, TemporaryBlockStreamHolder left_file_, TemporaryBlockStreamHolder right_file_, LoggerPtr log_)
|
||||
: idx(bucket_index_)
|
||||
, left_file(std::move(left_file_))
|
||||
, right_file(std::move(right_file_))
|
||||
, state(State::WRITING_BLOCKS)
|
||||
, log(log_)
|
||||
{
|
||||
}
|
||||
|
||||
@ -157,12 +157,6 @@ public:
|
||||
return addBlockImpl(block, right_file, lock);
|
||||
}
|
||||
|
||||
bool finished() const
|
||||
{
|
||||
std::unique_lock<std::mutex> left_lock(left_file_mutex);
|
||||
return left_file.isEof();
|
||||
}
|
||||
|
||||
bool empty() const { return is_empty.load(); }
|
||||
|
||||
AccumulatedBlockReader startJoining()
|
||||
@ -172,24 +166,21 @@ public:
|
||||
std::unique_lock<std::mutex> left_lock(left_file_mutex);
|
||||
std::unique_lock<std::mutex> right_lock(right_file_mutex);
|
||||
|
||||
left_file.finishWriting();
|
||||
right_file.finishWriting();
|
||||
|
||||
state = State::JOINING_BLOCKS;
|
||||
}
|
||||
return AccumulatedBlockReader(right_file, right_file_mutex);
|
||||
return AccumulatedBlockReader(right_file.getReadStream(), right_file_mutex);
|
||||
}
|
||||
|
||||
AccumulatedBlockReader getLeftTableReader()
|
||||
{
|
||||
ensureState(State::JOINING_BLOCKS);
|
||||
return AccumulatedBlockReader(left_file, left_file_mutex);
|
||||
return AccumulatedBlockReader(left_file.getReadStream(), left_file_mutex);
|
||||
}
|
||||
|
||||
const size_t idx;
|
||||
|
||||
private:
|
||||
bool addBlockImpl(const Block & block, TemporaryFileStream & writer, std::unique_lock<std::mutex> & lock)
|
||||
bool addBlockImpl(const Block & block, TemporaryBlockStreamHolder & writer, std::unique_lock<std::mutex> & lock)
|
||||
{
|
||||
ensureState(State::WRITING_BLOCKS);
|
||||
|
||||
@ -199,7 +190,7 @@ private:
|
||||
if (block.rows())
|
||||
is_empty = false;
|
||||
|
||||
writer.write(block);
|
||||
writer->write(block);
|
||||
return true;
|
||||
}
|
||||
|
||||
@ -217,8 +208,8 @@ private:
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Invalid state transition, expected {}, got {}", expected, state.load());
|
||||
}
|
||||
|
||||
TemporaryFileStream & left_file;
|
||||
TemporaryFileStream & right_file;
|
||||
TemporaryBlockStreamHolder left_file;
|
||||
TemporaryBlockStreamHolder right_file;
|
||||
mutable std::mutex left_file_mutex;
|
||||
mutable std::mutex right_file_mutex;
|
||||
|
||||
@ -274,7 +265,7 @@ GraceHashJoin::GraceHashJoin(
|
||||
, max_num_buckets{context->getSettingsRef()[Setting::grace_hash_join_max_buckets]}
|
||||
, left_key_names(table_join->getOnlyClause().key_names_left)
|
||||
, right_key_names(table_join->getOnlyClause().key_names_right)
|
||||
, tmp_data(std::make_unique<TemporaryDataOnDisk>(tmp_data_, CurrentMetrics::TemporaryFilesForJoin))
|
||||
, tmp_data(tmp_data_->childScope(CurrentMetrics::TemporaryFilesForJoin))
|
||||
, hash_join(makeInMemoryJoin("grace0"))
|
||||
, hash_join_sample_block(hash_join->savedBlockSample())
|
||||
{
|
||||
@ -398,10 +389,10 @@ void GraceHashJoin::addBuckets(const size_t bucket_count)
|
||||
for (size_t i = 0; i < bucket_count; ++i)
|
||||
try
|
||||
{
|
||||
auto & left_file = tmp_data->createStream(left_sample_block);
|
||||
auto & right_file = tmp_data->createStream(prepareRightBlock(right_sample_block));
|
||||
TemporaryBlockStreamHolder left_file(left_sample_block, tmp_data.get());
|
||||
TemporaryBlockStreamHolder right_file(prepareRightBlock(right_sample_block), tmp_data.get());
|
||||
|
||||
BucketPtr new_bucket = std::make_shared<FileBucket>(current_size + i, left_file, right_file, log);
|
||||
BucketPtr new_bucket = std::make_shared<FileBucket>(current_size + i, std::move(left_file), std::move(right_file), log);
|
||||
tmp_buckets.emplace_back(std::move(new_bucket));
|
||||
}
|
||||
catch (...)
|
||||
@ -632,12 +623,9 @@ IBlocksStreamPtr GraceHashJoin::getDelayedBlocks()
|
||||
for (bucket_idx = bucket_idx + 1; bucket_idx < buckets.size(); ++bucket_idx)
|
||||
{
|
||||
current_bucket = buckets[bucket_idx].get();
|
||||
if (current_bucket->finished() || current_bucket->empty())
|
||||
if (current_bucket->empty())
|
||||
{
|
||||
LOG_TRACE(log, "Skipping {} {} bucket {}",
|
||||
current_bucket->finished() ? "finished" : "",
|
||||
current_bucket->empty() ? "empty" : "",
|
||||
bucket_idx);
|
||||
LOG_TRACE(log, "Skipping empty bucket {}", bucket_idx);
|
||||
continue;
|
||||
}
|
||||
|
||||
|
@ -132,7 +132,7 @@ private:
|
||||
Names left_key_names;
|
||||
Names right_key_names;
|
||||
|
||||
TemporaryDataOnDiskPtr tmp_data;
|
||||
TemporaryDataOnDiskScopePtr tmp_data;
|
||||
|
||||
Buckets buckets;
mutable SharedMutex rehash_mutex;

@@ -3,14 +3,16 @@

namespace DB
{
JoinOnKeyColumns::JoinOnKeyColumns(const Block & block, const Names & key_names_, const String & cond_column_name, const Sizes & key_sizes_)
: key_names(key_names_)
, materialized_keys_holder(JoinCommon::materializeColumns(
block, key_names)) /// Rare case, when keys are constant or low cardinality. To avoid code bloat, simply materialize them.
JoinOnKeyColumns::JoinOnKeyColumns(
const ScatteredBlock & block_, const Names & key_names_, const String & cond_column_name, const Sizes & key_sizes_)
: block(block_)
, key_names(key_names_)
/// Rare case, when keys are constant or low cardinality. To avoid code bloat, simply materialize them.
, materialized_keys_holder(JoinCommon::materializeColumns(block.getSourceBlock(), key_names))
, key_columns(JoinCommon::getRawPointers(materialized_keys_holder))
, null_map(nullptr)
, null_map_holder(extractNestedColumnsAndNullMap(key_columns, null_map))
, join_mask_column(JoinCommon::getColumnAsMask(block, cond_column_name))
, join_mask_column(JoinCommon::getColumnAsMask(block.getSourceBlock(), cond_column_name))
, key_sizes(key_sizes_)
{
}

@@ -1,4 +1,6 @@
#pragma once

#include <Core/Defines.h>
#include <Interpreters/HashJoin/HashJoin.h>
#include <Interpreters/TableJoin.h>

@@ -14,6 +16,8 @@ using ExpressionActionsPtr = std::shared_ptr<ExpressionActions>;

struct JoinOnKeyColumns
{
const ScatteredBlock & block;

Names key_names;

Columns materialized_keys_holder;
@@ -27,9 +31,13 @@ struct JoinOnKeyColumns

Sizes key_sizes;

explicit JoinOnKeyColumns(const Block & block, const Names & key_names_, const String & cond_column_name, const Sizes & key_sizes_);
JoinOnKeyColumns(
const ScatteredBlock & block, const Names & key_names_, const String & cond_column_name, const Sizes & key_sizes_);

bool isRowFiltered(size_t i) const { return join_mask_column.isRowFiltered(i); }
bool isRowFiltered(size_t i) const
{
return join_mask_column.isRowFiltered(i);
}
};

template <bool lazy>
@@ -54,7 +62,7 @@ public:
};

AddedColumns(
const Block & left_block_,
const ScatteredBlock & left_block_,
const Block & block_with_columns_to_add,
const Block & saved_block_sample,
const HashJoin & join,
@@ -62,10 +70,11 @@ public:
ExpressionActionsPtr additional_filter_expression_,
bool is_asof_join,
bool is_join_get_)
: left_block(left_block_)
: src_block(left_block_)
, left_block(left_block_.getSourceBlock())
, join_on_keys(join_on_keys_)
, additional_filter_expression(additional_filter_expression_)
, rows_to_add(left_block.rows())
, rows_to_add(left_block_.rows())
, join_data_avg_perkey_rows(join.getJoinedData()->avgPerKeyRows())
, output_by_row_list_threshold(join.getTableJoin().outputByRowListPerkeyRowsThreshold())
, join_data_sorted(join.getJoinedData()->sorted)
@@ -139,6 +148,7 @@ public:

static constexpr bool isLazy() { return lazy; }

const ScatteredBlock & src_block;
Block left_block;
std::vector<JoinOnKeyColumns> join_on_keys;
ExpressionActionsPtr additional_filter_expression;
@@ -159,7 +169,7 @@ public:
return;

/// Do not allow big allocations when user set max_joined_block_rows to huge value
size_t reserve_size = std::min<size_t>(max_joined_block_rows, DEFAULT_BLOCK_SIZE * 2);
size_t reserve_size = std::min<size_t>(max_joined_block_rows, rows_to_add * 2);

if (need_replicate)
/// Reserve 10% more space for columns, because some rows can be repeated
@@ -218,7 +228,7 @@ private:
void addColumn(const ColumnWithTypeAndName & src_column, const std::string & qualified_name)
{
columns.push_back(src_column.column->cloneEmpty());
columns.back()->reserve(src_column.column->size());
columns.back()->reserve(rows_to_add);
type_name.emplace_back(src_column.type, src_column.name, qualified_name);
}
@@ -13,48 +13,42 @@
#include <Common/logger_useful.h>

#include <DataTypes/DataTypeLowCardinality.h>
#include <DataTypes/DataTypeNullable.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/DataTypeLowCardinality.h>

#include <Interpreters/ExpressionActions.h>
#include <Interpreters/HashJoin/HashJoin.h>
#include <Interpreters/JoinUtils.h>
#include <Interpreters/TableJoin.h>
#include <Interpreters/joinDispatch.h>
#include <Interpreters/NullableUtils.h>
#include <Interpreters/RowRefs.h>
#include <Interpreters/TableJoin.h>
#include <Interpreters/joinDispatch.h>

#include <Interpreters/TemporaryDataOnDisk.h>
#include <Common/Exception.h>
#include <Common/typeid_cast.h>
#include <Common/assert_cast.h>
#include <Common/formatReadable.h>
#include <Interpreters/TemporaryDataOnDisk.h>

#include <Common/typeid_cast.h>

#include <Interpreters/HashJoin/HashJoinMethods.h>
#include <Interpreters/HashJoin/JoinUsedFlags.h>

namespace CurrentMetrics
{
extern const Metric TemporaryFilesForJoin;
}

namespace DB
{

namespace ErrorCodes
{
extern const int NOT_IMPLEMENTED;
extern const int NO_SUCH_COLUMN_IN_TABLE;
extern const int INCOMPATIBLE_TYPE_OF_JOIN;
extern const int UNSUPPORTED_JOIN_KEYS;
extern const int LOGICAL_ERROR;
extern const int SYNTAX_ERROR;
extern const int SET_SIZE_LIMIT_EXCEEDED;
extern const int TYPE_MISMATCH;
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int INVALID_JOIN_ON_EXPRESSION;
extern const int NOT_IMPLEMENTED;
extern const int NO_SUCH_COLUMN_IN_TABLE;
extern const int INCOMPATIBLE_TYPE_OF_JOIN;
extern const int UNSUPPORTED_JOIN_KEYS;
extern const int LOGICAL_ERROR;
extern const int SYNTAX_ERROR;
extern const int SET_SIZE_LIMIT_EXCEEDED;
extern const int TYPE_MISMATCH;
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int INVALID_JOIN_ON_EXPRESSION;
}

namespace
@@ -64,7 +58,7 @@ struct NotProcessedCrossJoin : public ExtraBlock
{
size_t left_position;
size_t right_block;
std::unique_ptr<TemporaryFileStream::Reader> reader;
std::optional<TemporaryBlockStreamReaderHolder> reader;
};

@@ -77,6 +71,40 @@ Int64 getCurrentQueryMemoryUsage()
return 0;
}

Block filterColumnsPresentInSampleBlock(const Block & block, const Block & sample_block)
{
Block filtered_block;
for (const auto & sample_column : sample_block.getColumnsWithTypeAndName())
filtered_block.insert(block.getByName(sample_column.name));
return filtered_block;
}

ScatteredBlock filterColumnsPresentInSampleBlock(const ScatteredBlock & block, const Block & sample_block)
{
return ScatteredBlock{filterColumnsPresentInSampleBlock(block.getSourceBlock(), sample_block)};
}

Block materializeColumnsFromRightBlock(Block block, const Block & sample_block, const Names &)
{
for (const auto & sample_column : sample_block.getColumnsWithTypeAndName())
{
auto & column = block.getByName(sample_column.name);

/// There's no optimization for right side const columns. Remove constness if any.
column.column = recursiveRemoveSparse(column.column->convertToFullColumnIfConst());

if (column.column->lowCardinality() && !sample_column.column->lowCardinality())
{
column.column = column.column->convertToFullColumnIfLowCardinality();
column.type = removeLowCardinality(column.type);
}

if (sample_column.column->isNullable())
JoinCommon::convertColumnToNullable(column);
}

return block;
}
}

static void correctNullabilityInplace(ColumnWithTypeAndName & column, bool nullable)
@@ -96,8 +124,12 @@ static void correctNullabilityInplace(ColumnWithTypeAndName & column, bool nulla
}
}
HashJoin::HashJoin(std::shared_ptr<TableJoin> table_join_, const Block & right_sample_block_,
bool any_take_last_row_, size_t reserve_num_, const String & instance_id_)
HashJoin::HashJoin(
std::shared_ptr<TableJoin> table_join_,
const Block & right_sample_block_,
bool any_take_last_row_,
size_t reserve_num_,
const String & instance_id_)
: table_join(table_join_)
, kind(table_join->kind())
, strictness(table_join->strictness())
@@ -106,17 +138,21 @@ HashJoin::HashJoin(std::shared_ptr<TableJoin> table_join_, const Block & right_s
, instance_id(instance_id_)
, asof_inequality(table_join->getAsofInequality())
, data(std::make_shared<RightTableData>())
, tmp_data(
table_join_->getTempDataOnDisk()
? std::make_unique<TemporaryDataOnDisk>(table_join_->getTempDataOnDisk(), CurrentMetrics::TemporaryFilesForJoin)
: nullptr)
, tmp_data(table_join_->getTempDataOnDisk())
, right_sample_block(right_sample_block_)
, max_joined_block_rows(table_join->maxJoinedBlockRows())
, instance_log_id(!instance_id_.empty() ? "(" + instance_id_ + ") " : "")
, log(getLogger("HashJoin"))
{
LOG_TRACE(log, "{}Keys: {}, datatype: {}, kind: {}, strictness: {}, right header: {}",
instance_log_id, TableJoin::formatClauses(table_join->getClauses(), true), data->type, kind, strictness, right_sample_block.dumpStructure());
LOG_TRACE(
log,
"{}Keys: {}, datatype: {}, kind: {}, strictness: {}, right header: {}",
instance_log_id,
TableJoin::formatClauses(table_join->getClauses(), true),
data->type,
kind,
strictness,
right_sample_block.dumpStructure());

validateAdditionalFilterExpression(table_join->getMixedJoinExpression());

@@ -260,8 +296,8 @@ HashJoin::Type HashJoin::chooseMethod(JoinKind kind, const ColumnRawPtrs & key_c
};

const auto * key_column = key_columns[0];
if (is_string_column(key_column) ||
(isColumnConst(*key_column) && is_string_column(assert_cast<const ColumnConst *>(key_column)->getDataColumnPtr().get())))
if (is_string_column(key_column)
|| (isColumnConst(*key_column) && is_string_column(assert_cast<const ColumnConst *>(key_column)->getDataColumnPtr().get())))
return Type::key_string;
}

@@ -331,7 +367,8 @@ size_t HashJoin::getTotalRowCount() const
auto prefer_use_maps_all = table_join->getMixedJoinExpression() != nullptr;
for (const auto & map : data->maps)
{
joinDispatch(kind, strictness, map, prefer_use_maps_all, [&](auto, auto, auto & map_) { res += map_.getTotalRowCount(data->type); });
joinDispatch(
kind, strictness, map, prefer_use_maps_all, [&](auto, auto, auto & map_) { res += map_.getTotalRowCount(data->type); });
}
}

@@ -346,16 +383,22 @@ void HashJoin::doDebugAsserts() const
debug_blocks_allocated_size += block.allocatedBytes();

if (data->blocks_allocated_size != debug_blocks_allocated_size)
throw Exception(ErrorCodes::LOGICAL_ERROR, "data->blocks_allocated_size != debug_blocks_allocated_size ({} != {})",
data->blocks_allocated_size, debug_blocks_allocated_size);
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"data->blocks_allocated_size != debug_blocks_allocated_size ({} != {})",
data->blocks_allocated_size,
debug_blocks_allocated_size);

size_t debug_blocks_nullmaps_allocated_size = 0;
for (const auto & nullmap : data->blocks_nullmaps)
debug_blocks_nullmaps_allocated_size += nullmap.second->allocatedBytes();
debug_blocks_nullmaps_allocated_size += nullmap.allocatedBytes();

if (data->blocks_nullmaps_allocated_size != debug_blocks_nullmaps_allocated_size)
throw Exception(ErrorCodes::LOGICAL_ERROR, "data->blocks_nullmaps_allocated_size != debug_blocks_nullmaps_allocated_size ({} != {})",
data->blocks_nullmaps_allocated_size, debug_blocks_nullmaps_allocated_size);
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"data->blocks_nullmaps_allocated_size != debug_blocks_nullmaps_allocated_size ({} != {})",
data->blocks_nullmaps_allocated_size,
debug_blocks_nullmaps_allocated_size);
#endif
}

@@ -377,7 +420,12 @@ size_t HashJoin::getTotalByteCount() const
auto prefer_use_maps_all = table_join->getMixedJoinExpression() != nullptr;
for (const auto & map : data->maps)
{
joinDispatch(kind, strictness, map, prefer_use_maps_all, [&](auto, auto, auto & map_) { res += map_.getTotalByteCountImpl(data->type); });
joinDispatch(
kind,
strictness,
map,
prefer_use_maps_all,
[&](auto, auto, auto & map_) { res += map_.getTotalByteCountImpl(data->type); });
}
}
return res;
@@ -428,29 +476,27 @@ void HashJoin::initRightBlockStructure(Block & saved_block_sample)
}
}
void HashJoin::materializeColumnsFromLeftBlock(Block & block) const
{
/** If you use FULL or RIGHT JOIN, then the columns from the "left" table must be materialized.
* Because if they are constants, then in the "not joined" rows, they may have different values
* - default values, which can differ from the values of these constants.
*/
if (kind == JoinKind::Right || kind == JoinKind::Full)
{
materializeBlockInplace(block);
}
}

Block HashJoin::materializeColumnsFromRightBlock(Block block) const
{
return DB::materializeColumnsFromRightBlock(std::move(block), savedBlockSample(), table_join->getAllNames(JoinTableSide::Right));
}

Block HashJoin::prepareRightBlock(const Block & block, const Block & saved_block_sample_)
{
Block structured_block;
for (const auto & sample_column : saved_block_sample_.getColumnsWithTypeAndName())
{
ColumnWithTypeAndName column = block.getByName(sample_column.name);

/// There's no optimization for right side const columns. Remove constness if any.
column.column = recursiveRemoveSparse(column.column->convertToFullColumnIfConst());

if (column.column->lowCardinality() && !sample_column.column->lowCardinality())
{
column.column = column.column->convertToFullColumnIfLowCardinality();
column.type = removeLowCardinality(column.type);
}

if (sample_column.column->isNullable())
JoinCommon::convertColumnToNullable(column);

structured_block.insert(std::move(column));
}

return structured_block;
Block prepared_block = DB::materializeColumnsFromRightBlock(block, saved_block_sample_, {});
return filterColumnsPresentInSampleBlock(prepared_block, saved_block_sample_);
}

Block HashJoin::prepareRightBlock(const Block & block) const
@@ -458,15 +504,22 @@ Block HashJoin::prepareRightBlock(const Block & block) const
return prepareRightBlock(block, savedBlockSample());
}

bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
bool HashJoin::addBlockToJoin(const Block & source_block, bool check_limits)
{
auto materialized = materializeColumnsFromRightBlock(source_block);
auto scattered_block = ScatteredBlock{materialized};
return addBlockToJoin(scattered_block, check_limits);
}

bool HashJoin::addBlockToJoin(ScatteredBlock & source_block, bool check_limits)
{
if (!data)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Join data was released");

/// RowRef::SizeT is uint32_t (not size_t) for hash table Cell memory efficiency.
/// It's possible to split bigger blocks and insert them by parts here. But it would be a dead code.
if (unlikely(source_block_.rows() > std::numeric_limits<RowRef::SizeT>::max()))
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Too many rows in right table block for HashJoin: {}", source_block_.rows());
if (unlikely(source_block.rows() > std::numeric_limits<RowRef::SizeT>::max()))
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Too many rows in right table block for HashJoin: {}", source_block.rows());

/** We do not allocate memory for stored blocks inside HashJoin, only for hash table.
* In case when we have all the blocks allocated before the first `addBlockToJoin` call, will already be quite high.
@@ -475,7 +528,6 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
if (!memory_usage_before_adding_blocks)
memory_usage_before_adding_blocks = getCurrentQueryMemoryUsage();

Block source_block = source_block_;
if (strictness == JoinStrictness::Asof)
{
chassert(kind == JoinKind::Left || kind == JoinKind::Inner);
@@ -484,7 +536,7 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
/// We support only INNER/LEFT ASOF join, so rows with NULLs never return from the right joined table.
/// So filter them out here not to handle in implementation.
const auto & asof_key_name = table_join->getOnlyClause().key_names_right.back();
auto & asof_column = source_block.getByName(asof_key_name);
const auto & asof_column = source_block.getByName(asof_key_name);

if (asof_column.type->isNullable())
{
@@ -502,13 +554,12 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
for (size_t i = 0; i < asof_column_nullable.size(); ++i)
negative_null_map[i] = !asof_column_nullable[i];

for (auto & column : source_block)
column.column = column.column->filter(negative_null_map, -1);
source_block.filter(negative_null_map);
}
}
}

size_t rows = source_block.rows();
const size_t rows = source_block.rows();
data->rows_to_join += rows;
const auto & right_key_names = table_join->getAllNames(JoinTableSide::Right);
ColumnPtrMap all_key_columns(right_key_names.size());
@@ -518,7 +569,7 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
all_key_columns[column_name] = recursiveRemoveSparse(column->convertToFullColumnIfConst())->convertToFullColumnIfLowCardinality();
}
Block block_to_save = prepareRightBlock(source_block);
ScatteredBlock block_to_save = filterColumnsPresentInSampleBlock(source_block, savedBlockSample());
if (shrink_blocks)
block_to_save = block_to_save.shrinkToFit();

@@ -529,11 +580,11 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
&& (tmp_stream || (max_bytes_in_join && getTotalByteCount() + block_to_save.allocatedBytes() >= max_bytes_in_join)
|| (max_rows_in_join && getTotalRowCount() + block_to_save.rows() >= max_rows_in_join)))
{
if (tmp_stream == nullptr)
{
tmp_stream = &tmp_data->createStream(right_sample_block);
}
tmp_stream->write(block_to_save);
if (!tmp_stream)
tmp_stream.emplace(right_sample_block, tmp_data.get());

chassert(!source_block.wasScattered()); /// We don't run parallel_hash for cross join
tmp_stream.value()->write(block_to_save.getSourceBlock());
return true;
}

@@ -545,7 +596,7 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
if (storage_join_lock)
throw DB::Exception(ErrorCodes::LOGICAL_ERROR, "addBlockToJoin called when HashJoin locked to prevent updates");

assertBlocksHaveEqualStructure(data->sample_block, block_to_save, "joined block");
assertBlocksHaveEqualStructure(data->sample_block, block_to_save.getSourceBlock(), "joined block");

size_t min_bytes_to_compress = table_join->crossJoinMinBytesToCompress();
size_t min_rows_to_compress = table_join->crossJoinMinRowsToCompress();
@@ -554,6 +605,7 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
&& ((min_bytes_to_compress && getTotalByteCount() >= min_bytes_to_compress)
|| (min_rows_to_compress && getTotalRowCount() >= min_rows_to_compress)))
{
chassert(!source_block.wasScattered()); /// We don't run parallel_hash for cross join
block_to_save = block_to_save.compress();
have_compressed = true;
}
@@ -561,7 +613,7 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
doDebugAsserts();
data->blocks_allocated_size += block_to_save.allocatedBytes();
data->blocks.emplace_back(std::move(block_to_save));
Block * stored_block = &data->blocks.back();
const auto * stored_block = &data->blocks.back();
doDebugAsserts();

if (rows)
@@ -588,7 +640,7 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
save_nullmap |= (*null_map)[i];
}

auto join_mask_col = JoinCommon::getColumnAsMask(source_block, onexprs[onexpr_idx].condColumnNames().second);
auto join_mask_col = JoinCommon::getColumnAsMask(source_block.getSourceBlock(), onexprs[onexpr_idx].condColumnNames().second);
/// Save blocks that do not hold conditions in ON section
ColumnUInt8::MutablePtr not_joined_map = nullptr;
if (!flag_per_row && isRightOrFull(kind) && join_mask_col.hasData())
@@ -613,39 +665,45 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
bool is_inserted = false;
if (kind != JoinKind::Cross)
{
joinDispatch(kind, strictness, data->maps[onexpr_idx], prefer_use_maps_all, [&](auto kind_, auto strictness_, auto & map)
{
size_t size = HashJoinMethods<kind_, strictness_, std::decay_t<decltype(map)>>::insertFromBlockImpl(
joinDispatch(
kind,
strictness,
data->maps[onexpr_idx],
prefer_use_maps_all,
[&](auto kind_, auto strictness_, auto & map)
{
size_t size = HashJoinMethods<kind_, strictness_, std::decay_t<decltype(map)>>::insertFromBlockImpl(
*this,
data->type,
map,
rows,
key_columns,
key_sizes[onexpr_idx],
stored_block,
&stored_block->getSourceBlock(),
source_block.getSelector(),
null_map,
join_mask_col.getData(),
data->pool,
is_inserted);

if (flag_per_row)
used_flags->reinit<kind_, strictness_, std::is_same_v<std::decay_t<decltype(map)>, MapsAll>>(stored_block);
else if (is_inserted)
/// Number of buckets + 1 value from zero storage
used_flags->reinit<kind_, strictness_, std::is_same_v<std::decay_t<decltype(map)>, MapsAll>>(size + 1);
});
if (flag_per_row)
used_flags->reinit<kind_, strictness_, std::is_same_v<std::decay_t<decltype(map)>, MapsAll>>(
&stored_block->getSourceBlock());
else if (is_inserted)
/// Number of buckets + 1 value from zero storage
used_flags->reinit<kind_, strictness_, std::is_same_v<std::decay_t<decltype(map)>, MapsAll>>(size + 1);
});
}

if (!flag_per_row && save_nullmap && is_inserted)
{
data->blocks_nullmaps_allocated_size += null_map_holder->allocatedBytes();
data->blocks_nullmaps.emplace_back(stored_block, null_map_holder);
data->blocks_nullmaps_allocated_size += data->blocks_nullmaps.back().allocatedBytes();
}

if (!flag_per_row && not_joined_map && is_inserted)
{
data->blocks_nullmaps_allocated_size += not_joined_map->allocatedBytes();
data->blocks_nullmaps.emplace_back(stored_block, std::move(not_joined_map));
data->blocks_nullmaps_allocated_size += data->blocks_nullmaps.back().allocatedBytes();
}

if (!flag_per_row && !is_inserted)
@@ -672,7 +730,6 @@ bool HashJoin::addBlockToJoin(const Block & source_block_, bool check_limits)
void HashJoin::shrinkStoredBlocksToFit(size_t & total_bytes_in_join, bool force_optimize)
{

Int64 current_memory_usage = getCurrentQueryMemoryUsage();
Int64 query_memory_usage_delta = current_memory_usage - memory_usage_before_adding_blocks;
Int64 max_total_bytes_for_query = memory_usage_before_adding_blocks ? table_join->getMaxMemoryUsage() : 0;
@@ -689,15 +746,19 @@ void HashJoin::shrinkStoredBlocksToFit(size_t & total_bytes_in_join, bool force_
* is bigger than half of all memory available for query,
* then shrink stored blocks to fit.
*/
shrink_blocks = (max_total_bytes_in_join && total_bytes_in_join > max_total_bytes_in_join / 2) ||
(max_total_bytes_for_query && query_memory_usage_delta > max_total_bytes_for_query / 2);
shrink_blocks = (max_total_bytes_in_join && total_bytes_in_join > max_total_bytes_in_join / 2)
|| (max_total_bytes_for_query && query_memory_usage_delta > max_total_bytes_for_query / 2);
if (!shrink_blocks)
return;
}

LOG_DEBUG(log, "Shrinking stored blocks, memory consumption is {} {} calculated by join, {} {} by memory tracker",
ReadableSize(total_bytes_in_join), max_total_bytes_in_join ? fmt::format("/ {}", ReadableSize(max_total_bytes_in_join)) : "",
ReadableSize(query_memory_usage_delta), max_total_bytes_for_query ? fmt::format("/ {}", ReadableSize(max_total_bytes_for_query)) : "");
LOG_DEBUG(
log,
"Shrinking stored blocks, memory consumption is {} {} calculated by join, {} {} by memory tracker",
ReadableSize(total_bytes_in_join),
max_total_bytes_in_join ? fmt::format("/ {}", ReadableSize(max_total_bytes_in_join)) : "",
ReadableSize(query_memory_usage_delta),
max_total_bytes_for_query ? fmt::format("/ {}", ReadableSize(max_total_bytes_for_query)) : "");

for (auto & stored_block : data->blocks)
{
@@ -710,10 +771,13 @@ void HashJoin::shrinkStoredBlocksToFit(size_t & total_bytes_in_join, bool force_
if (old_size >= new_size)
{
if (data->blocks_allocated_size < old_size - new_size)
throw Exception(ErrorCodes::LOGICAL_ERROR,
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"Blocks allocated size value is broken: "
"blocks_allocated_size = {}, old_size = {}, new_size = {}",
data->blocks_allocated_size, old_size, new_size);
data->blocks_allocated_size,
old_size,
new_size);

data->blocks_allocated_size -= old_size - new_size;
}
@@ -728,9 +792,13 @@ void HashJoin::shrinkStoredBlocksToFit(size_t & total_bytes_in_join, bool force_

Int64 new_current_memory_usage = getCurrentQueryMemoryUsage();

LOG_DEBUG(log, "Shrunk stored blocks {} freed ({} by memory tracker), new memory consumption is {} ({} by memory tracker)",
ReadableSize(total_bytes_in_join - new_total_bytes_in_join), ReadableSize(current_memory_usage - new_current_memory_usage),
ReadableSize(new_total_bytes_in_join), ReadableSize(new_current_memory_usage));
LOG_DEBUG(
log,
"Shrunk stored blocks {} freed ({} by memory tracker), new memory consumption is {} ({} by memory tracker)",
ReadableSize(total_bytes_in_join - new_total_bytes_in_join),
ReadableSize(current_memory_usage - new_current_memory_usage),
ReadableSize(new_total_bytes_in_join),
ReadableSize(new_current_memory_usage));

total_bytes_in_join = new_total_bytes_in_join;
}
@@ -739,13 +807,14 @@ void HashJoin::joinBlockImplCross(Block & block, ExtraBlockPtr & not_processed)
{
size_t start_left_row = 0;
size_t start_right_block = 0;
std::unique_ptr<TemporaryFileStream::Reader> reader = nullptr;
std::optional<TemporaryBlockStreamReaderHolder> reader;
if (not_processed)
{
auto & continuation = static_cast<NotProcessedCrossJoin &>(*not_processed);
start_left_row = continuation.left_position;
start_right_block = continuation.right_block;
reader = std::move(continuation.reader);
if (continuation.reader)
reader = std::move(*continuation.reader);
not_processed.reset();
}

@@ -793,7 +862,7 @@ void HashJoin::joinBlockImplCross(Block & block, ExtraBlockPtr & not_processed)
}
};

for (const Block & block_right : data->blocks)
for (const auto & block_right : data->blocks)
{
++block_number;
if (block_number < start_right_block)
@@ -801,9 +870,12 @@ void HashJoin::joinBlockImplCross(Block & block, ExtraBlockPtr & not_processed)
/// The following statement cannot be substituted with `process_right_block(!have_compressed ? block_right : block_right.decompress())`
/// because it will lead to copying of `block_right` even if its branch is taken (because common type of `block_right` and `block_right.decompress()` is `Block`).
if (!have_compressed)
process_right_block(block_right);
process_right_block(block_right.getSourceBlock());
else
process_right_block(block_right.decompress());
{
chassert(!block_right.wasScattered()); /// Compression only happens for cross join and scattering only for concurrent hash
process_right_block(block_right.getSourceBlock().decompress());
}

if (rows_added > max_joined_block_rows)
{
@@ -813,12 +885,10 @@ void HashJoin::joinBlockImplCross(Block & block, ExtraBlockPtr & not_processed)

if (tmp_stream && rows_added <= max_joined_block_rows)
{
if (reader == nullptr)
{
tmp_stream->finishWritingAsyncSafe();
if (!reader)
reader = tmp_stream->getReadStream();
}
while (auto block_right = reader->read())

while (auto block_right = reader.value()->read())
{
++block_number;
process_right_block(block_right);
@@ -856,9 +926,11 @@ DataTypePtr HashJoin::joinGetCheckAndGetReturnType(const DataTypes & data_types,
{
size_t num_keys = data_types.size();
if (right_table_keys.columns() != num_keys)
throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
throw Exception(
ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
"Number of arguments for function joinGet{} doesn't match: passed, should be equal to {}",
toString(or_null ? "OrNull" : ""), toString(num_keys));
toString(or_null ? "OrNull" : ""),
toString(num_keys));

for (size_t i = 0; i < num_keys; ++i)
{
@@ -867,8 +939,13 @@ DataTypePtr HashJoin::joinGetCheckAndGetReturnType(const DataTypes & data_types,
auto left_type = removeNullable(recursiveRemoveLowCardinality(left_type_origin));
auto right_type = removeNullable(recursiveRemoveLowCardinality(right_type_origin));
if (!left_type->equals(*right_type))
throw Exception(ErrorCodes::TYPE_MISMATCH, "Type mismatch in joinGet key {}: "
"found type {}, while the needed type is {}", i, left_type->getName(), right_type->getName());
throw Exception(
ErrorCodes::TYPE_MISMATCH,
"Type mismatch in joinGet key {}: "
"found type {}, while the needed type is {}",
i,
left_type->getName(),
right_type->getName());
}

if (!sample_block_with_columns_to_add.has(column_name))
@@ -884,8 +961,7 @@ DataTypePtr HashJoin::joinGetCheckAndGetReturnType(const DataTypes & data_types,
/// TODO: return array of values when strictness == JoinStrictness::All
ColumnWithTypeAndName HashJoin::joinGet(const Block & block, const Block & block_with_columns_to_add) const
{
bool is_valid = (strictness == JoinStrictness::Any || strictness == JoinStrictness::RightAny)
&& kind == JoinKind::Left;
bool is_valid = (strictness == JoinStrictness::Any || strictness == JoinStrictness::RightAny) && kind == JoinKind::Left;
if (!is_valid)
throw Exception(ErrorCodes::INCOMPATIBLE_TYPE_OF_JOIN, "joinGet only supports StorageJoin of type Left Any");
const auto & key_names_right = table_join->getOnlyClause().key_names_right;
@@ -899,12 +975,14 @@ ColumnWithTypeAndName HashJoin::joinGet(const Block & block, const Block & block
keys.insert(std::move(key));
}

static_assert(!MapGetter<JoinKind::Left, JoinStrictness::Any, false>::flagged,
"joinGet are not protected from hash table changes between block processing");
static_assert(
!MapGetter<JoinKind::Left, JoinStrictness::Any, false>::flagged,
"joinGet are not protected from hash table changes between block processing");

std::vector<const MapsOne *> maps_vector;
maps_vector.push_back(&std::get<MapsOne>(data->maps[0]));
HashJoinMethods<JoinKind::Left, JoinStrictness::Any, MapsOne>::joinBlockImpl(*this, keys, block_with_columns_to_add, maps_vector, /* is_join_get = */ true);
HashJoinMethods<JoinKind::Left, JoinStrictness::Any, MapsOne>::joinBlockImpl(
*this, keys, block_with_columns_to_add, maps_vector, /* is_join_get = */ true);
return keys.getByPosition(keys.columns() - 1);
}

@@ -925,8 +1003,7 @@ void HashJoin::joinBlock(Block & block, ExtraBlockPtr & not_processed)
{
auto cond_column_name = onexpr.condColumnNames();
JoinCommon::checkTypesOfKeys(
block, onexpr.key_names_left, cond_column_name.first,
right_sample_block, onexpr.key_names_right, cond_column_name.second);
block, onexpr.key_names_left, cond_column_name.first, right_sample_block, onexpr.key_names_right, cond_column_name.second);
}

if (kind == JoinKind::Cross)
@@ -935,20 +1012,85 @@ void HashJoin::joinBlock(Block & block, ExtraBlockPtr & not_processed)
return;
}

if (kind == JoinKind::Right || kind == JoinKind::Full)
{
materializeBlockInplace(block);
}
materializeColumnsFromLeftBlock(block);

bool prefer_use_maps_all = table_join->getMixedJoinExpression() != nullptr;
{
std::vector<const std::decay_t<decltype(data->maps[0])> * > maps_vector;
std::vector<const std::decay_t<decltype(data->maps[0])> *> maps_vector;
for (size_t i = 0; i < table_join->getClauses().size(); ++i)
maps_vector.push_back(&data->maps[i]);

if (joinDispatch(kind, strictness, maps_vector, prefer_use_maps_all, [&](auto kind_, auto strictness_, auto & maps_vector_)
if (joinDispatch(
kind,
strictness,
maps_vector,
prefer_use_maps_all,
[&](auto kind_, auto strictness_, auto & maps_vector_)
{
Block remaining_block;
if constexpr (std::is_same_v<std::decay_t<decltype(maps_vector_)>, std::vector<const MapsAll *>>)
{
remaining_block = HashJoinMethods<kind_, strictness_, MapsAll>::joinBlockImpl(
*this, block, sample_block_with_columns_to_add, maps_vector_);
}
else if constexpr (std::is_same_v<std::decay_t<decltype(maps_vector_)>, std::vector<const MapsOne *>>)
{
remaining_block = HashJoinMethods<kind_, strictness_, MapsOne>::joinBlockImpl(
*this, block, sample_block_with_columns_to_add, maps_vector_);
}
else if constexpr (std::is_same_v<std::decay_t<decltype(maps_vector_)>, std::vector<const MapsAsof *>>)
{
remaining_block = HashJoinMethods<kind_, strictness_, MapsAsof>::joinBlockImpl(
*this, block, sample_block_with_columns_to_add, maps_vector_);
}
else
{
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown maps type");
}
if (remaining_block.rows())
not_processed = std::make_shared<ExtraBlock>(ExtraBlock{std::move(remaining_block)});
else
not_processed.reset();
}))
{
/// Joined
}
else
throw Exception(ErrorCodes::LOGICAL_ERROR, "Wrong JOIN combination: {} {}", strictness, kind);
}
}
void HashJoin::joinBlock(ScatteredBlock & block, ScatteredBlock & remaining_block)
{
if (!data)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot join after data has been released");

chassert(kind == JoinKind::Left || kind == JoinKind::Inner);

for (const auto & onexpr : table_join->getClauses())
{
auto cond_column_name = onexpr.condColumnNames();
JoinCommon::checkTypesOfKeys(
block.getSourceBlock(),
onexpr.key_names_left,
cond_column_name.first,
right_sample_block,
onexpr.key_names_right,
cond_column_name.second);
}

std::vector<const std::decay_t<decltype(data->maps[0])> *> maps_vector;
for (size_t i = 0; i < table_join->getClauses().size(); ++i)
maps_vector.push_back(&data->maps[i]);

bool prefer_use_maps_all = table_join->getMixedJoinExpression() != nullptr;
const bool joined = joinDispatch(
kind,
strictness,
maps_vector,
prefer_use_maps_all,
[&](auto kind_, auto strictness_, auto & maps_vector_)
{
Block remaining_block;
if constexpr (std::is_same_v<std::decay_t<decltype(maps_vector_)>, std::vector<const MapsAll *>>)
{
remaining_block = HashJoinMethods<kind_, strictness_, MapsAll>::joinBlockImpl(
@@ -968,17 +1110,9 @@ void HashJoin::joinBlock(Block & block, ExtraBlockPtr & not_processed)
{
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown maps type");
}
if (remaining_block.rows())
not_processed = std::make_shared<ExtraBlock>(ExtraBlock{std::move(remaining_block)});
else
not_processed.reset();
}))
{
/// Joined
}
else
throw Exception(ErrorCodes::LOGICAL_ERROR, "Wrong JOIN combination: {} {}", strictness, kind);
}
});

chassert(joined);
}
HashJoin::~HashJoin()
@@ -1042,10 +1176,7 @@ class NotJoinedHash final : public NotJoinedBlocks::RightColumnsFiller
{
public:
NotJoinedHash(const HashJoin & parent_, UInt64 max_block_size_, bool flag_per_row_)
: parent(parent_)
, max_block_size(max_block_size_)
, flag_per_row(flag_per_row_)
, current_block_start(0)
: parent(parent_), max_block_size(max_block_size_), flag_per_row(flag_per_row_), current_block_start(0)
{
if (parent.data == nullptr)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot join after data has been released");
@@ -1062,14 +1193,12 @@ public:
}
else
{
auto fill_callback = [&](auto, auto, auto & map)
{
rows_added = fillColumnsFromMap(map, columns_right);
};
auto fill_callback = [&](auto, auto, auto & map) { rows_added = fillColumnsFromMap(map, columns_right); };

bool prefer_use_maps_all = parent.table_join->getMixedJoinExpression() != nullptr;
if (!joinDispatch(parent.kind, parent.strictness, parent.data->maps.front(), prefer_use_maps_all, fill_callback))
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown JOIN strictness '{}' (must be on of: ANY, ALL, ASOF)", parent.strictness);
throw Exception(
ErrorCodes::LOGICAL_ERROR, "Unknown JOIN strictness '{}' (must be on of: ANY, ALL, ASOF)", parent.strictness);
}

if (!flag_per_row)
@@ -1089,14 +1218,14 @@ private:

std::any position;
std::optional<HashJoin::BlockNullmapList::const_iterator> nulls_position;
std::optional<BlocksList::const_iterator> used_position;
std::optional<HashJoin::ScatteredBlocksList::const_iterator> used_position;

size_t fillColumnsFromData(const BlocksList & blocks, MutableColumns & columns_right)
size_t fillColumnsFromData(const HashJoin::ScatteredBlocksList & blocks, MutableColumns & columns_right)
{
if (!position.has_value())
position = std::make_any<BlocksList::const_iterator>(blocks.begin());
position = std::make_any<HashJoin::ScatteredBlocksList::const_iterator>(blocks.begin());

auto & block_it = std::any_cast<BlocksList::const_iterator &>(position);
auto & block_it = std::any_cast<HashJoin::ScatteredBlocksList::const_iterator &>(position);
auto end = blocks.end();

size_t rows_added = 0;
@@ -1132,11 +1261,11 @@ private:
{
switch (parent.data->type)
{
#define M(TYPE) \
case HashJoin::Type::TYPE: \
return fillColumns(*maps.TYPE, columns_keys_and_right);
#define M(TYPE) \
case HashJoin::Type::TYPE: \
return fillColumns(*maps.TYPE, columns_keys_and_right);
APPLY_FOR_JOIN_VARIANTS(M)
#undef M
#undef M
default:
throw Exception(ErrorCodes::UNSUPPORTED_JOIN_KEYS, "Unsupported JOIN keys (type: {})", parent.data->type);
}
@@ -1156,11 +1285,11 @@ private:

for (auto & it = *used_position; it != end && rows_added < max_block_size; ++it)
{
const Block & mapped_block = *it;
const auto & mapped_block = *it;

for (size_t row = 0; row < mapped_block.rows(); ++row)
{
if (!parent.isUsed(&mapped_block, row))
if (!parent.isUsed(&mapped_block.getSourceBlock(), row))
{
for (size_t colnum = 0; colnum < columns_keys_and_right.size(); ++colnum)
{
@@ -1213,10 +1342,10 @@ private:

for (auto & it = *nulls_position; it != end && rows_added < max_block_size; ++it)
{
const auto * block = it->first;
const auto * block = it->block;
ConstNullMapPtr nullmap = nullptr;
if (it->second)
nullmap = &assert_cast<const ColumnUInt8 &>(*it->second).getData();
if (it->column)
nullmap = &assert_cast<const ColumnUInt8 &>(*it->column).getData();

for (size_t row = 0; row < block->rows(); ++row)
{
@@ -1231,9 +1360,8 @@ private:
}
};
IBlocksStreamPtr HashJoin::getNonJoinedBlocks(const Block & left_sample_block,
const Block & result_sample_block,
UInt64 max_block_size) const
IBlocksStreamPtr
HashJoin::getNonJoinedBlocks(const Block & left_sample_block, const Block & result_sample_block, UInt64 max_block_size) const
{
if (!JoinCommon::hasNonJoinedBlocks(*table_join))
return {};
@@ -1272,22 +1400,37 @@ void HashJoin::reuseJoinedData(const HashJoin & join)
bool prefer_use_maps_all = join.table_join->getMixedJoinExpression() != nullptr;
for (auto & map : data->maps)
{
joinDispatch(kind, strictness, map, prefer_use_maps_all, [this](auto kind_, auto strictness_, auto & map_)
{
used_flags->reinit<kind_, strictness_, std::is_same_v<std::decay_t<decltype(map_)>, MapsAll>>(map_.getBufferSizeInCells(data->type) + 1);
});
joinDispatch(
kind,
strictness,
map,
prefer_use_maps_all,
[this](auto kind_, auto strictness_, auto & map_)
{
used_flags->reinit<kind_, strictness_, std::is_same_v<std::decay_t<decltype(map_)>, MapsAll>>(
map_.getBufferSizeInCells(data->type) + 1);
});
}
}

BlocksList HashJoin::releaseJoinedBlocks(bool restructure)
BlocksList HashJoin::releaseJoinedBlocks(bool restructure [[maybe_unused]])
{
LOG_TRACE(log, "{}Join data is being released, {} bytes and {} rows in hash table", instance_log_id, getTotalByteCount(), getTotalRowCount());
LOG_TRACE(
log, "{}Join data is being released, {} bytes and {} rows in hash table", instance_log_id, getTotalByteCount(), getTotalRowCount());

BlocksList right_blocks = std::move(data->blocks);
auto extract_source_blocks = [](ScatteredBlocksList && blocks)
{
BlocksList result;
for (auto & block : blocks)
result.emplace_back(std::move(block).getSourceBlock());
return result;
};

ScatteredBlocksList right_blocks = std::move(data->blocks);
if (!restructure)
{
data.reset();
return right_blocks;
return extract_source_blocks(std::move(right_blocks));
}

data->maps.clear();
@@ -1301,7 +1444,7 @@ BlocksList HashJoin::releaseJoinedBlocks(bool restructure)
if (!right_blocks.empty())
{
positions.reserve(right_sample_block.columns());
const Block & tmp_block = *right_blocks.begin();
const Block & tmp_block = right_blocks.begin()->getSourceBlock();
for (const auto & sample_column : right_sample_block)
{
positions.emplace_back(tmp_block.getPositionByName(sample_column.name));
@@ -1309,12 +1452,12 @@ BlocksList HashJoin::releaseJoinedBlocks(bool restructure)
}
}

for (Block & saved_block : right_blocks)
for (ScatteredBlock & saved_block : right_blocks)
{
Block restored_block;
for (size_t i = 0; i < positions.size(); ++i)
{
auto & column = saved_block.getByPosition(positions[i]);
auto & column = saved_block.getSourceBlock().getByPosition(positions[i]);
correctNullabilityInplace(column, is_nullable[i]);
restored_block.insert(column);
}
@@ -1340,7 +1483,8 @@ void HashJoin::validateAdditionalFilterExpression(ExpressionActionsPtr additiona

if (expression_sample_block.columns() != 1)
{
throw Exception(ErrorCodes::LOGICAL_ERROR,
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"Unexpected expression in JOIN ON section. Expected single column, got '{}'",
expression_sample_block.dumpStructure());
}
@@ -1348,7 +1492,8 @@ void HashJoin::validateAdditionalFilterExpression(ExpressionActionsPtr additiona
auto type = removeNullable(expression_sample_block.getByPosition(0).type);
if (!type->equals(*std::make_shared<DataTypeUInt8>()))
{
throw Exception(ErrorCodes::LOGICAL_ERROR,
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"Unexpected expression in JOIN ON section. Expected boolean (UInt8), got '{}'. expression:\n{}",
expression_sample_block.getByPosition(0).type->getName(),
additional_filter_expression->dumpActions());
@@ -1356,10 +1501,12 @@ void HashJoin::validateAdditionalFilterExpression(ExpressionActionsPtr additiona

bool is_supported = ((strictness == JoinStrictness::All) && (isInnerOrLeft(kind) || isRightOrFull(kind)))
|| ((strictness == JoinStrictness::Semi || strictness == JoinStrictness::Any || strictness == JoinStrictness::Anti)
&& (isLeft(kind) || isRight(kind))) || (strictness == JoinStrictness::Any && (isInner(kind)));
&& (isLeft(kind) || isRight(kind)))
|| (strictness == JoinStrictness::Any && (isInner(kind)));
if (!is_supported)
{
throw Exception(ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
throw Exception(
ErrorCodes::INVALID_JOIN_ON_EXPRESSION,
"Non equi condition '{}' from JOIN ON section is supported only for ALL INNER/LEFT/FULL/RIGHT JOINs",
expression_sample_block.getByPosition(0).name);
}
@@ -1375,7 +1522,6 @@ bool HashJoin::isUsed(const Block * block_ptr, size_t row_idx) const
return used_flags->getUsedSafe(block_ptr, row_idx);
}

bool HashJoin::needUsedFlagsForPerRightTableRow(std::shared_ptr<TableJoin> table_join_) const
{
if (!table_join_->oneDisjunct())
@@ -1394,7 +1540,7 @@ void HashJoin::tryRerangeRightTableDataImpl(Map & map [[maybe_unused]])
throw Exception(ErrorCodes::LOGICAL_ERROR, "Only left or inner join table can be reranged.");
else
{
auto merge_rows_into_one_block = [&](BlocksList & blocks, RowRefList & rows_ref)
auto merge_rows_into_one_block = [&](ScatteredBlocksList & blocks, RowRefList & rows_ref)
{
auto it = rows_ref.begin();
if (it.ok())
@@ -1406,7 +1552,7 @@ void HashJoin::tryRerangeRightTableDataImpl(Map & map [[maybe_unused]])
{
return;
}
auto & block = blocks.back();
auto & block = blocks.back().getSourceBlock();
size_t start_row = block.rows();
for (; it.ok(); ++it)
{
@@ -1423,23 +1569,22 @@ void HashJoin::tryRerangeRightTableDataImpl(Map & map [[maybe_unused]])
}
};

auto visit_rows_map = [&](BlocksList & blocks, MapsAll & rows_map)
auto visit_rows_map = [&](ScatteredBlocksList & blocks, MapsAll & rows_map)
{
switch (data->type)
{
#define M(TYPE) \
case Type::TYPE: \
{\
rows_map.TYPE->forEachMapped([&](RowRefList & rows_ref) { merge_rows_into_one_block(blocks, rows_ref); }); \
break; \
}
#define M(TYPE) \
case Type::TYPE: { \
rows_map.TYPE->forEachMapped([&](RowRefList & rows_ref) { merge_rows_into_one_block(blocks, rows_ref); }); \
break; \
}
APPLY_FOR_JOIN_VARIANTS(M)
#undef M
#undef M
default:
break;
}
};
BlocksList sorted_blocks;
ScatteredBlocksList sorted_blocks;
visit_rows_map(sorted_blocks, map);
doDebugAsserts();
data->blocks.swap(sorted_blocks);
@@ -1,9 +1,11 @@
#pragma once

#include <memory>
#include <variant>
#include <optional>
#include <algorithm>
#include <deque>
#include <memory>
#include <optional>
#include <ranges>
#include <variant>
#include <vector>

#include <Parsers/ASTTablesInSelectQuery.h>
@@ -12,22 +14,19 @@
#include <Interpreters/AggregationCommon.h>
#include <Interpreters/RowRefs.h>

#include <Common/Arena.h>
#include <Common/ColumnsHashing.h>
#include <Common/HashTable/HashMap.h>
#include <Common/HashTable/FixedHashMap.h>
#include <Storages/TableLockHolder.h>

#include <Columns/ColumnString.h>
#include <Columns/ColumnFixedString.h>

#include <QueryPipeline/SizeLimits.h>

#include <Columns/ColumnString.h>
#include <Core/Block.h>

#include <Storages/IStorage_fwd.h>
#include <Interpreters/HashJoin/ScatteredBlock.h>
#include <Interpreters/IKeyValueEntity.h>
#include <Interpreters/TemporaryDataOnDisk.h>
#include <QueryPipeline/SizeLimits.h>
#include <Storages/IStorage_fwd.h>
#include <Storages/TableLockHolder.h>
#include <Common/Arena.h>
#include <Common/ColumnsHashing.h>
#include <Common/HashTable/FixedHashMap.h>
#include <Common/HashTable/HashMap.h>

namespace DB
{
@@ -142,13 +141,21 @@ public:
*/
bool addBlockToJoin(const Block & source_block_, bool check_limits) override;

/// Called directly from ConcurrentJoin::addBlockToJoin
bool addBlockToJoin(ScatteredBlock & source_block_, bool check_limits);

void checkTypesOfKeys(const Block & block) const override;

using IJoin::joinBlock;

/** Join data from the map (that was previously built by calls to addBlockToJoin) to the block with data from "left" table.
* Could be called from different threads in parallel.
*/
void joinBlock(Block & block, ExtraBlockPtr & not_processed) override;

/// Called directly from ConcurrentJoin::joinBlock
void joinBlock(ScatteredBlock & block, ScatteredBlock & remaining_block);

/// Check joinGet arguments and infer the return type.
DataTypePtr joinGetCheckAndGetReturnType(const DataTypes & data_types, const String & column_name, bool or_null) const;

@@ -327,8 +334,17 @@ public:

using MapsVariant = std::variant<MapsOne, MapsAll, MapsAsof>;

using RawBlockPtr = const Block *;
using BlockNullmapList = std::deque<std::pair<RawBlockPtr, ColumnPtr>>;
using RawBlockPtr = const ScatteredBlock *;
struct NullMapHolder
{
size_t allocatedBytes() const { return !column->empty() ? column->allocatedBytes() * block->rows() / column->size() : 0; }

RawBlockPtr block;
ColumnPtr column;
};
using BlockNullmapList = std::deque<NullMapHolder>;

using ScatteredBlocksList = std::list<ScatteredBlock>;

struct RightTableData
{
@@ -337,7 +353,7 @@ public:

std::vector<MapsVariant> maps;
Block sample_block; /// Block as it would appear in the BlockList
BlocksList blocks; /// Blocks of "right" table.
ScatteredBlocksList blocks; /// Blocks of "right" table.
BlockNullmapList blocks_nullmaps; /// Nullmaps for blocks of "right" table (if needed)

/// Additional data - strings for string keys and continuation elements of single-linked lists of references to rows.
@@ -389,6 +405,9 @@ public:

void setMaxJoinedBlockRows(size_t value) { max_joined_block_rows = value; }

void materializeColumnsFromLeftBlock(Block & block) const;
Block materializeColumnsFromRightBlock(Block block) const;

private:
friend class NotJoinedHash;

@@ -423,8 +442,9 @@ private:
std::vector<Sizes> key_sizes;

/// Needed to do external cross join
TemporaryDataOnDiskPtr tmp_data;
TemporaryFileStream* tmp_stream{nullptr};
TemporaryDataOnDiskScopePtr tmp_data;
std::optional<TemporaryBlockStreamHolder> tmp_stream;
mutable std::once_flag finish_writing;

/// Block with columns from the right-side table.
Block right_sample_block;
@@ -475,5 +495,4 @@ private:
void tryRerangeRightTableDataImpl(Map & map);
void doDebugAsserts() const;
};

}
@@ -19,7 +19,7 @@ template <typename HashMap, typename KeyGetter>
struct Inserter
{
static ALWAYS_INLINE bool
insertOne(const HashJoin & join, HashMap & map, KeyGetter & key_getter, Block * stored_block, size_t i, Arena & pool)
insertOne(const HashJoin & join, HashMap & map, KeyGetter & key_getter, const Block * stored_block, size_t i, Arena & pool)
{
auto emplace_result = key_getter.emplaceKey(map, i, pool);

@@ -31,7 +31,8 @@ struct Inserter
return false;
}

static ALWAYS_INLINE void insertAll(const HashJoin &, HashMap & map, KeyGetter & key_getter, Block * stored_block, size_t i, Arena & pool)
static ALWAYS_INLINE void
insertAll(const HashJoin &, HashMap & map, KeyGetter & key_getter, const Block * stored_block, size_t i, Arena & pool)
{
auto emplace_result = key_getter.emplaceKey(map, i, pool);

@@ -45,7 +46,13 @@ struct Inserter
}

static ALWAYS_INLINE void insertAsof(
HashJoin & join, HashMap & map, KeyGetter & key_getter, Block * stored_block, size_t i, Arena & pool, const IColumn & asof_column)
HashJoin & join,
HashMap & map,
KeyGetter & key_getter,
const Block * stored_block,
size_t i,
Arena & pool,
const IColumn & asof_column)
{
auto emplace_result = key_getter.emplaceKey(map, i, pool);
typename HashMap::mapped_type * time_series_map = &emplace_result.getMapped();
@@ -66,10 +73,10 @@ public:
HashJoin & join,
HashJoin::Type type,
MapsTemplate & maps,
size_t rows,
const ColumnRawPtrs & key_columns,
const Sizes & key_sizes,
Block * stored_block,
const Block * stored_block,
const ScatteredBlock::Selector & selector,
ConstNullMapPtr null_map,
UInt8ColumnDataPtr join_mask,
Arena & pool,
@@ -83,14 +90,30 @@ public:
const Block & block_with_columns_to_add,
const MapsTemplateVector & maps_,
bool is_join_get = false);

static ScatteredBlock joinBlockImpl(
const HashJoin & join,
ScatteredBlock & block,
const Block & block_with_columns_to_add,
const MapsTemplateVector & maps_,
bool is_join_get = false);

private:
template <typename KeyGetter, bool is_asof_join>
static KeyGetter createKeyGetter(const ColumnRawPtrs & key_columns, const Sizes & key_sizes);

template <typename KeyGetter, typename HashMap>
template <typename KeyGetter, typename HashMap, typename Selector>
static size_t insertFromBlockImplTypeCase(
HashJoin & join, HashMap & map, size_t rows, const ColumnRawPtrs & key_columns,
const Sizes & key_sizes, Block * stored_block, ConstNullMapPtr null_map, UInt8ColumnDataPtr join_mask, Arena & pool, bool & is_inserted);
HashJoin & join,
HashMap & map,
const ColumnRawPtrs & key_columns,
const Sizes & key_sizes,
const Block * stored_block,
const Selector & selector,
ConstNullMapPtr null_map,
UInt8ColumnDataPtr join_mask,
Arena & pool,
bool & is_inserted);

template <typename AddedColumns>
static size_t switchJoinRightColumns(
@@ -115,12 +138,13 @@ private:

/// Joins right table columns which indexes are present in right_indexes using specified map.
/// Makes filter (1 if row presented in right table) and returns offsets to replicate (for ALL JOINS).
template <typename KeyGetter, typename Map, bool need_filter, bool flag_per_row, typename AddedColumns>
template <typename KeyGetter, typename Map, bool need_filter, bool flag_per_row, typename AddedColumns, typename Selector>
static size_t joinRightColumns(
std::vector<KeyGetter> && key_getter_vector,
const std::vector<const Map *> & mapv,
AddedColumns & added_columns,
JoinStuff::JoinUsedFlags & used_flags);
JoinStuff::JoinUsedFlags & used_flags,
const Selector & selector);

template <bool need_filter>
static void setUsed(IColumn::Filter & filter [[maybe_unused]], size_t pos [[maybe_unused]]);
@@ -1,5 +1,8 @@
#pragma once
#include <type_traits>
#include <Interpreters/HashJoin/HashJoinMethods.h>
#include "Columns/IColumn.h"
#include "Interpreters/HashJoin/ScatteredBlock.h"

namespace DB
{
@@ -13,10 +16,10 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::insertFromBlockImpl(
HashJoin & join,
HashJoin::Type type,
MapsTemplate & maps,
size_t rows,
const ColumnRawPtrs & key_columns,
const Sizes & key_sizes,
Block * stored_block,
const Block * stored_block,
const ScatteredBlock::Selector & selector,
ConstNullMapPtr null_map,
UInt8ColumnDataPtr join_mask,
Arena & pool,
@@ -33,9 +36,14 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::insertFromBlockImpl(

#define M(TYPE) \
case HashJoin::Type::TYPE: \
return insertFromBlockImplTypeCase< \
typename KeyGetterForType<HashJoin::Type::TYPE, std::remove_reference_t<decltype(*maps.TYPE)>>::Type>( \
join, *maps.TYPE, rows, key_columns, key_sizes, stored_block, null_map, join_mask, pool, is_inserted); \
if (selector.isContinuousRange()) \
return insertFromBlockImplTypeCase< \
typename KeyGetterForType<HashJoin::Type::TYPE, std::remove_reference_t<decltype(*maps.TYPE)>>::Type>( \
join, *maps.TYPE, key_columns, key_sizes, stored_block, selector.getRange(), null_map, join_mask, pool, is_inserted); \
else \
return insertFromBlockImplTypeCase< \
typename KeyGetterForType<HashJoin::Type::TYPE, std::remove_reference_t<decltype(*maps.TYPE)>>::Type>( \
join, *maps.TYPE, key_columns, key_sizes, stored_block, selector.getIndexes(), null_map, join_mask, pool, is_inserted); \
break;

APPLY_FOR_JOIN_VARIANTS(M)
@@ -46,6 +54,22 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::insertFromBlockImpl(
template <JoinKind KIND, JoinStrictness STRICTNESS, typename MapsTemplate>
Block HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinBlockImpl(
const HashJoin & join, Block & block, const Block & block_with_columns_to_add, const MapsTemplateVector & maps_, bool is_join_get)
{
ScatteredBlock scattered_block{block};
auto ret = joinBlockImpl(join, scattered_block, block_with_columns_to_add, maps_, is_join_get);
ret.filterBySelector();
scattered_block.filterBySelector();
block = std::move(scattered_block.getSourceBlock());
return ret.getSourceBlock();
}

template <JoinKind KIND, JoinStrictness STRICTNESS, typename MapsTemplate>
ScatteredBlock HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinBlockImpl(
const HashJoin & join,
ScatteredBlock & block,
const Block & block_with_columns_to_add,
const MapsTemplateVector & maps_,
bool is_join_get)
{
constexpr JoinFeatures<KIND, STRICTNESS, MapsTemplate> join_features;

@@ -66,6 +90,8 @@ Block HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinBlockImpl(
materializeBlockInplace(block);
}

auto & source_block = block.getSourceBlock();

/** For LEFT/INNER JOIN, the saved blocks do not contain keys.
* For FULL/RIGHT JOIN, the saved blocks contain keys;
* but they will not be used at this stage of joining (and will be in `AdderNonJoined`), and they need to be skipped.
@@ -89,16 +115,24 @@ Block HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinBlockImpl(
else
added_columns.reserve(join_features.need_replication);

size_t num_joined = switchJoinRightColumns(maps_, added_columns, join.data->type, *join.used_flags);
const size_t num_joined = switchJoinRightColumns(maps_, added_columns, join.data->type, *join.used_flags);
/// Do not hold memory for join_on_keys anymore
added_columns.join_on_keys.clear();
Block remaining_block = sliceBlock(block, num_joined);
auto remaining_block = block.cut(num_joined);

if (is_join_get)
added_columns.buildJoinGetOutput();
else
added_columns.buildOutput();

if constexpr (join_features.need_filter)
block.filter(added_columns.filter);

block.filterBySelector();

for (size_t i = 0; i < added_columns.size(); ++i)
|
||||
source_block.insert(added_columns.moveColumn(i));
|
||||
|
||||
const auto & table_join = join.table_join;
|
||||
std::set<size_t> block_columns_to_erase;
|
||||
if (join.canRemoveColumnsFromLeftBlock())
|
||||
@ -106,25 +140,17 @@ Block HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinBlockImpl(
|
||||
std::unordered_set<String> left_output_columns;
|
||||
for (const auto & out_column : table_join->getOutputColumns(JoinTableSide::Left))
|
||||
left_output_columns.insert(out_column.name);
|
||||
for (size_t i = 0; i < block.columns(); ++i)
|
||||
for (size_t i = 0; i < source_block.columns(); ++i)
|
||||
{
|
||||
if (!left_output_columns.contains(block.getByPosition(i).name))
|
||||
if (!left_output_columns.contains(source_block.getByPosition(i).name))
|
||||
block_columns_to_erase.insert(i);
|
||||
}
|
||||
}
|
||||
size_t existing_columns = block.columns();
|
||||
|
||||
for (size_t i = 0; i < added_columns.size(); ++i)
|
||||
block.insert(added_columns.moveColumn(i));
|
||||
|
||||
std::vector<size_t> right_keys_to_replicate [[maybe_unused]];
|
||||
|
||||
if constexpr (join_features.need_filter)
|
||||
{
|
||||
/// If ANY INNER | RIGHT JOIN - filter all the columns except the new ones.
|
||||
for (size_t i = 0; i < existing_columns; ++i)
|
||||
block.safeGetByPosition(i).column = block.safeGetByPosition(i).column->filter(added_columns.filter, -1);
|
||||
|
||||
/// Add join key columns from right block if needed using value from left table because of equality
|
||||
for (size_t i = 0; i < join.required_right_keys.columns(); ++i)
|
||||
{
|
||||
@ -136,7 +162,7 @@ Block HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinBlockImpl(
|
||||
const auto & left_column = block.getByName(join.required_right_keys_sources[i]);
|
||||
const auto & right_col_name = join.getTableJoin().renamedRightColumnName(right_key.name);
|
||||
auto right_col = copyLeftKeyColumnToRight(right_key.type, right_col_name, left_column);
|
||||
block.insert(std::move(right_col));
|
||||
source_block.insert(std::move(right_col));
|
||||
}
|
||||
}
|
||||
else if (has_required_right_keys)
|
||||
@ -152,30 +178,30 @@ Block HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinBlockImpl(
|
||||
|
||||
const auto & left_column = block.getByName(join.required_right_keys_sources[i]);
|
||||
auto right_col = copyLeftKeyColumnToRight(right_key.type, right_col_name, left_column, &added_columns.filter);
|
||||
block.insert(std::move(right_col));
|
||||
source_block.insert(std::move(right_col));
|
||||
|
||||
if constexpr (join_features.need_replication)
|
||||
right_keys_to_replicate.push_back(block.getPositionByName(right_col_name));
|
||||
right_keys_to_replicate.push_back(source_block.getPositionByName(right_col_name));
|
||||
}
|
||||
}
|
||||
|
||||
if constexpr (join_features.need_replication)
|
||||
{
|
||||
std::unique_ptr<IColumn::Offsets> & offsets_to_replicate = added_columns.offsets_to_replicate;
|
||||
IColumn::Offsets & offsets = *added_columns.offsets_to_replicate;
|
||||
|
||||
/// If ALL ... JOIN - we replicate all the columns except the new ones.
|
||||
for (size_t i = 0; i < existing_columns; ++i)
|
||||
{
|
||||
block.safeGetByPosition(i).column = block.safeGetByPosition(i).column->replicate(*offsets_to_replicate);
|
||||
}
|
||||
chassert(block);
|
||||
chassert(offsets.size() == block.rows());
|
||||
|
||||
/// Replicate additional right keys
|
||||
auto && columns = block.getSourceBlock().getColumns();
|
||||
for (size_t i = 0; i < columns.size(); ++i)
|
||||
columns[i] = columns[i]->replicate(offsets);
|
||||
for (size_t pos : right_keys_to_replicate)
|
||||
{
|
||||
block.safeGetByPosition(pos).column = block.safeGetByPosition(pos).column->replicate(*offsets_to_replicate);
|
||||
}
|
||||
columns[pos] = columns[pos]->replicate(offsets);
|
||||
|
||||
block.getSourceBlock().setColumns(columns);
|
||||
block.getSourceBlock().erase(block_columns_to_erase);
|
||||
block = ScatteredBlock(std::move(block).getSourceBlock());
|
||||
}
|
||||
block.erase(block_columns_to_erase);
|
||||
return remaining_block;
|
||||
}
|
||||
|
||||
@ -196,14 +222,14 @@ KeyGetter HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::createKeyGetter(const
|
||||
}
|
||||
|
||||
template <JoinKind KIND, JoinStrictness STRICTNESS, typename MapsTemplate>
|
||||
template <typename KeyGetter, typename HashMap>
|
||||
template <typename KeyGetter, typename HashMap, typename Selector>
|
||||
size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::insertFromBlockImplTypeCase(
|
||||
HashJoin & join,
|
||||
HashMap & map,
|
||||
size_t rows,
|
||||
const ColumnRawPtrs & key_columns,
|
||||
const Sizes & key_sizes,
|
||||
Block * stored_block,
|
||||
const Block * stored_block,
|
||||
const Selector & selector,
|
||||
ConstNullMapPtr null_map,
|
||||
UInt8ColumnDataPtr join_mask,
|
||||
Arena & pool,
|
||||
@ -221,9 +247,22 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::insertFromBlockImplTypeC
|
||||
/// For ALL and ASOF join always insert values
|
||||
is_inserted = !mapped_one || is_asof_join;
|
||||
|
||||
size_t rows = 0;
|
||||
if constexpr (std::is_same_v<std::decay_t<Selector>, ScatteredBlock::Indexes>)
|
||||
rows = selector.getData().size();
|
||||
else
|
||||
rows = selector.second - selector.first;
|
||||
|
||||
for (size_t i = 0; i < rows; ++i)
|
||||
{
|
||||
if (null_map && (*null_map)[i])
|
||||
size_t ind = 0;
|
||||
if constexpr (std::is_same_v<std::decay_t<Selector>, ScatteredBlock::Indexes>)
|
||||
ind = selector.getData()[i];
|
||||
else
|
||||
ind = selector.first + i;
|
||||
|
||||
chassert(!null_map || ind < null_map->size());
|
||||
if (null_map && (*null_map)[ind])
|
||||
{
|
||||
/// nulls are not inserted into hash table,
|
||||
/// keep them for RIGHT and FULL joins
|
||||
@ -232,15 +271,16 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::insertFromBlockImplTypeC
|
||||
}
|
||||
|
||||
/// Check condition for right table from ON section
|
||||
if (join_mask && !(*join_mask)[i])
|
||||
chassert(!join_mask || ind < join_mask->size());
|
||||
if (join_mask && !(*join_mask)[ind])
|
||||
continue;
|
||||
|
||||
if constexpr (is_asof_join)
|
||||
Inserter<HashMap, KeyGetter>::insertAsof(join, map, key_getter, stored_block, i, pool, *asof_column);
|
||||
Inserter<HashMap, KeyGetter>::insertAsof(join, map, key_getter, stored_block, ind, pool, *asof_column);
|
||||
else if constexpr (mapped_one)
|
||||
is_inserted |= Inserter<HashMap, KeyGetter>::insertOne(join, map, key_getter, stored_block, i, pool);
|
||||
is_inserted |= Inserter<HashMap, KeyGetter>::insertOne(join, map, key_getter, stored_block, ind, pool);
|
||||
else
|
||||
Inserter<HashMap, KeyGetter>::insertAll(join, map, key_getter, stored_block, i, pool);
|
||||
Inserter<HashMap, KeyGetter>::insertAll(join, map, key_getter, stored_block, ind, pool);
|
||||
}
|
||||
return map.getBufferSizeInCells();
|
||||
}
|
||||
@ -334,26 +374,43 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinRightColumnsSwitchMu
|
||||
if (added_columns.additional_filter_expression)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Additional filter expression is not supported for this JOIN");
|
||||
|
||||
return mapv.size() > 1 ? joinRightColumns<KeyGetter, Map, need_filter, true>(
|
||||
std::forward<std::vector<KeyGetter>>(key_getter_vector), mapv, added_columns, used_flags)
|
||||
: joinRightColumns<KeyGetter, Map, need_filter, false>(
|
||||
std::forward<std::vector<KeyGetter>>(key_getter_vector), mapv, added_columns, used_flags);
|
||||
auto & block = added_columns.src_block;
|
||||
if (block.getSelector().isContinuousRange())
|
||||
{
|
||||
if (mapv.size() > 1)
|
||||
return joinRightColumns<KeyGetter, Map, need_filter, true>(
|
||||
std::move(key_getter_vector), mapv, added_columns, used_flags, block.getSelector().getRange());
|
||||
else
|
||||
return joinRightColumns<KeyGetter, Map, need_filter, false>(
|
||||
std::move(key_getter_vector), mapv, added_columns, used_flags, block.getSelector().getRange());
|
||||
}
|
||||
else
|
||||
{
|
||||
if (mapv.size() > 1)
|
||||
return joinRightColumns<KeyGetter, Map, need_filter, true>(
|
||||
std::move(key_getter_vector), mapv, added_columns, used_flags, block.getSelector().getIndexes());
|
||||
else
|
||||
return joinRightColumns<KeyGetter, Map, need_filter, false>(
|
||||
std::move(key_getter_vector), mapv, added_columns, used_flags, block.getSelector().getIndexes());
|
||||
}
|
||||
}
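/// Illustrative sketch, not part of this change: the dispatch above passes either
/// ScatteredBlock::Selector::Range (a [begin, end) pair) or ScatteredBlock::Indexes
/// (an explicit ColumnUInt64 of row numbers) into the templated loops. Inside those
/// loops the patch resolves the i-th selected row with the idiom shown below;
/// `resolveSelectedRow` is a hypothetical name, not a function from the patch.
template <typename Selector>
size_t resolveSelectedRow(const Selector & selector, size_t i)
{
    if constexpr (std::is_same_v<std::decay_t<Selector>, DB::ScatteredBlock::Indexes>)
        return selector.getData()[i];   /// explicit list of selected rows
    else
        return selector.first + i;      /// continuous range [first, second)
}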
|
||||
|
||||
|
||||
/// Joins right table columns which indexes are present in right_indexes using specified map.
|
||||
/// Makes filter (1 if row presented in right table) and returns offsets to replicate (for ALL JOINS).
|
||||
template <JoinKind KIND, JoinStrictness STRICTNESS, typename MapsTemplate>
|
||||
template <typename KeyGetter, typename Map, bool need_filter, bool flag_per_row, typename AddedColumns>
|
||||
template <typename KeyGetter, typename Map, bool need_filter, bool flag_per_row, typename AddedColumns, typename Selector>
|
||||
size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinRightColumns(
|
||||
std::vector<KeyGetter> && key_getter_vector,
|
||||
const std::vector<const Map *> & mapv,
|
||||
AddedColumns & added_columns,
|
||||
JoinStuff::JoinUsedFlags & used_flags)
|
||||
JoinStuff::JoinUsedFlags & used_flags,
|
||||
const Selector & selector)
|
||||
{
|
||||
constexpr JoinFeatures<KIND, STRICTNESS, MapsTemplate> join_features;
|
||||
|
||||
size_t rows = added_columns.rows_to_add;
|
||||
auto & block = added_columns.src_block;
|
||||
size_t rows = block.rows();
|
||||
if constexpr (need_filter)
|
||||
added_columns.filter = IColumn::Filter(rows, 0);
|
||||
if constexpr (!flag_per_row && (STRICTNESS == JoinStrictness::All || (STRICTNESS == JoinStrictness::Semi && KIND == JoinKind::Right)))
|
||||
@ -369,6 +426,12 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinRightColumns(
|
||||
size_t i = 0;
|
||||
for (; i < rows; ++i)
|
||||
{
|
||||
size_t ind = 0;
|
||||
if constexpr (std::is_same_v<std::decay_t<Selector>, ScatteredBlock::Indexes>)
|
||||
ind = selector.getData()[i];
|
||||
else
|
||||
ind = selector.first + i;
|
||||
|
||||
if constexpr (join_features.need_replication)
|
||||
{
|
||||
if (unlikely(current_offset >= max_joined_block_rows))
|
||||
@ -384,12 +447,12 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinRightColumns(
|
||||
for (size_t onexpr_idx = 0; onexpr_idx < added_columns.join_on_keys.size(); ++onexpr_idx)
|
||||
{
|
||||
const auto & join_keys = added_columns.join_on_keys[onexpr_idx];
|
||||
if (join_keys.null_map && (*join_keys.null_map)[i])
|
||||
if (join_keys.null_map && (*join_keys.null_map)[ind])
|
||||
continue;
|
||||
|
||||
bool row_acceptable = !join_keys.isRowFiltered(i);
|
||||
bool row_acceptable = !join_keys.isRowFiltered(ind);
|
||||
using FindResult = typename KeyGetter::FindResult;
|
||||
auto find_result = row_acceptable ? key_getter_vector[onexpr_idx].findKey(*(mapv[onexpr_idx]), i, pool) : FindResult();
|
||||
auto find_result = row_acceptable ? key_getter_vector[onexpr_idx].findKey(*(mapv[onexpr_idx]), ind, pool) : FindResult();
|
||||
|
||||
if (find_result.isFound())
|
||||
{
|
||||
@ -399,7 +462,7 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinRightColumns(
|
||||
{
|
||||
const IColumn & left_asof_key = added_columns.leftAsofKey();
|
||||
|
||||
auto row_ref = mapped->findAsof(left_asof_key, i);
|
||||
auto row_ref = mapped->findAsof(left_asof_key, ind);
|
||||
if (row_ref && row_ref->block)
|
||||
{
|
||||
setUsed<need_filter>(added_columns.filter, i);
|
||||
@ -864,23 +927,6 @@ size_t HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::joinRightColumnsWithAddt
|
||||
return left_row_iter;
|
||||
}
|
||||
|
||||
template <JoinKind KIND, JoinStrictness STRICTNESS, typename MapsTemplate>
|
||||
Block HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::sliceBlock(Block & block, size_t num_rows)
|
||||
{
|
||||
size_t total_rows = block.rows();
|
||||
if (num_rows >= total_rows)
|
||||
return {};
|
||||
size_t remaining_rows = total_rows - num_rows;
|
||||
Block remaining_block = block.cloneEmpty();
|
||||
for (size_t i = 0; i < block.columns(); ++i)
|
||||
{
|
||||
auto & col = block.getByPosition(i);
|
||||
remaining_block.getByPosition(i).column = col.column->cut(num_rows, remaining_rows);
|
||||
col.column = col.column->cut(0, num_rows);
|
||||
}
|
||||
return remaining_block;
|
||||
}
|
||||
|
||||
template <JoinKind KIND, JoinStrictness STRICTNESS, typename MapsTemplate>
|
||||
ColumnWithTypeAndName HashJoinMethods<KIND, STRICTNESS, MapsTemplate>::copyLeftKeyColumnToRight(
|
||||
const DataTypePtr & right_key_type,
|
||||
|
337 src/Interpreters/HashJoin/ScatteredBlock.h Normal file
@ -0,0 +1,337 @@
|
||||
#pragma once
|
||||
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
#include <Columns/IColumn.h>
|
||||
#include <Core/Block.h>
|
||||
#include <base/defines.h>
|
||||
#include <Common/PODArray.h>
|
||||
|
||||
#include <Poco/Logger.h>
|
||||
#include <Common/logger_useful.h>
|
||||
|
||||
#include <boost/noncopyable.hpp>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int LOGICAL_ERROR;
|
||||
}
|
||||
|
||||
namespace detail
|
||||
{
|
||||
|
||||
/// Previously ConcurrentHashJoin used the IColumn::scatter method to split input blocks into sub-blocks by hash.
/// To avoid copying columns, we introduce a new class ScatteredBlock that holds a block and a selector.
/// So now each thread gets a copy of the source input block and a selector that tells which rows are meant for the given thread.
/// Selector can be seen as just a list of indexes of rows that belong to the given thread.
/// One optimization is to use a continuous range instead of an explicit list of indexes when the selector contains all indexes from [L, R).
|
||||
class Selector
|
||||
{
|
||||
public:
|
||||
using Range = std::pair<size_t, size_t>;
|
||||
using Indexes = ColumnUInt64;
|
||||
using IndexesPtr = ColumnUInt64::MutablePtr;
|
||||
|
||||
/// [begin, end)
|
||||
Selector(size_t begin, size_t end) : data(Range{begin, end}) { }
|
||||
Selector() : Selector(0, 0) { }
|
||||
explicit Selector(size_t size) : Selector(0, size) { }
|
||||
|
||||
explicit Selector(IndexesPtr && selector_) : data(initializeFromSelector(std::move(selector_))) { }
|
||||
|
||||
class Iterator
|
||||
{
|
||||
public:
|
||||
using iterator_category = std::forward_iterator_tag;
|
||||
using value_type = size_t;
|
||||
using difference_type = std::ptrdiff_t;
|
||||
using pointer = size_t *;
|
||||
using reference = size_t &;
|
||||
|
||||
Iterator(const Selector & selector_, size_t idx_) : selector(selector_), idx(idx_) { }
|
||||
|
||||
size_t ALWAYS_INLINE operator*() const
|
||||
{
|
||||
chassert(idx < selector.size());
|
||||
if (idx >= selector.size())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Index {} out of range size {}", idx, selector.size());
|
||||
return selector[idx];
|
||||
}
|
||||
|
||||
Iterator & ALWAYS_INLINE operator++()
|
||||
{
|
||||
if (idx >= selector.size())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Index {} out of range size {}", idx, selector.size());
|
||||
++idx;
|
||||
return *this;
|
||||
}
|
||||
|
||||
bool ALWAYS_INLINE operator!=(const Iterator & other) const { return idx != other.idx; }
|
||||
|
||||
private:
|
||||
const Selector & selector;
|
||||
size_t idx;
|
||||
};
|
||||
|
||||
Iterator begin() const { return Iterator(*this, 0); }
|
||||
|
||||
Iterator end() const { return Iterator(*this, size()); }
|
||||
|
||||
size_t ALWAYS_INLINE operator[](size_t idx) const
|
||||
{
|
||||
chassert(idx < size());
|
||||
|
||||
if (std::holds_alternative<Range>(data))
|
||||
{
|
||||
const auto range = std::get<Range>(data);
|
||||
return range.first + idx;
|
||||
}
|
||||
else
|
||||
{
|
||||
return std::get<IndexesPtr>(data)->getData()[idx];
|
||||
}
|
||||
}
|
||||
|
||||
size_t size() const
|
||||
{
|
||||
if (std::holds_alternative<Range>(data))
|
||||
{
|
||||
const auto range = std::get<Range>(data);
|
||||
return range.second - range.first;
|
||||
}
|
||||
else
|
||||
{
|
||||
return std::get<IndexesPtr>(data)->size();
|
||||
}
|
||||
}
|
||||
|
||||
/// First selector contains first `num_rows` rows, second selector contains the rest
|
||||
std::pair<Selector, Selector> split(size_t num_rows)
|
||||
{
|
||||
chassert(num_rows <= size());
|
||||
|
||||
if (std::holds_alternative<Range>(data))
|
||||
{
|
||||
const auto range = std::get<Range>(data);
|
||||
|
||||
if (num_rows == 0)
|
||||
return {Selector(), Selector{range.first, range.second}};
|
||||
|
||||
if (num_rows == size())
|
||||
return {Selector{range.first, range.second}, Selector()};
|
||||
|
||||
return {Selector(range.first, range.first + num_rows), Selector(range.first + num_rows, range.second)};
|
||||
}
|
||||
else
|
||||
{
|
||||
const auto & selector = std::get<IndexesPtr>(data)->getData();
|
||||
auto && left = Selector(Indexes::create(selector.begin(), selector.begin() + num_rows));
|
||||
auto && right = Selector(Indexes::create(selector.begin() + num_rows, selector.end()));
|
||||
return {std::move(left), std::move(right)};
|
||||
}
|
||||
}
|
||||
|
||||
bool isContinuousRange() const { return std::holds_alternative<Range>(data); }
|
||||
|
||||
Range getRange() const
|
||||
{
|
||||
chassert(isContinuousRange());
|
||||
return std::get<Range>(data);
|
||||
}
|
||||
|
||||
const Indexes & getIndexes() const
|
||||
{
|
||||
chassert(!isContinuousRange());
|
||||
return *std::get<IndexesPtr>(data);
|
||||
}
|
||||
|
||||
std::string toString() const
|
||||
{
|
||||
if (std::holds_alternative<Range>(data))
|
||||
{
|
||||
const auto range = std::get<Range>(data);
|
||||
return fmt::format("[{}, {})", range.first, range.second);
|
||||
}
|
||||
else
|
||||
{
|
||||
const auto & selector = std::get<IndexesPtr>(data)->getData();
|
||||
return fmt::format("({})", fmt::join(selector, ","));
|
||||
}
|
||||
}
|
||||
|
||||
private:
|
||||
using Data = std::variant<Range, IndexesPtr>;
|
||||
|
||||
Data initializeFromSelector(IndexesPtr && selector_)
|
||||
{
|
||||
const auto & selector = selector_->getData();
|
||||
if (selector.empty())
|
||||
return Range{0, 0};
|
||||
|
||||
/// selector represents continuous range
|
||||
if (selector.back() == selector.front() + selector.size() - 1)
|
||||
return Range{selector.front(), selector.front() + selector.size()};
|
||||
|
||||
return std::move(selector_);
|
||||
}
|
||||
|
||||
Data data;
|
||||
};
|
||||
|
||||
}
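/// Illustrative sketch, not part of this change: how the two representations of
/// detail::Selector above behave. The function and its values are made up for
/// the example.
inline void selectorExample()
{
    /// Continuous range [2, 6): stored as a pair of bounds, no per-row storage.
    detail::Selector range(2, 6);
    chassert(range.isContinuousRange());
    chassert(range.size() == 4);
    chassert(range[1] == 3);

    /// Explicit indexes: stored as a ColumnUInt64. initializeFromSelector would
    /// collapse the column back into a range if the indexes were contiguous.
    auto indexes = detail::Selector::Indexes::create();
    for (UInt64 row : {1, 4, 5, 7})
        indexes->getData().push_back(row);
    detail::Selector sparse(std::move(indexes));
    chassert(!sparse.isContinuousRange());
    chassert(sparse[2] == 5);

    /// split(2): the first selector keeps the first two selected rows, the
    /// second keeps the rest - this is what ScatteredBlock::cut relies on.
    auto [head, tail] = sparse.split(2);
    chassert(head.size() == 2 && tail.size() == 2);
}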
|
||||
|
||||
/// Source block + list of selected rows. See detail::Selector for more details.
|
||||
struct ScatteredBlock : private boost::noncopyable
|
||||
{
|
||||
using Selector = detail::Selector;
|
||||
using Indexes = Selector::Indexes;
|
||||
using IndexesPtr = Selector::IndexesPtr;
|
||||
|
||||
ScatteredBlock() = default;
|
||||
|
||||
explicit ScatteredBlock(Block block_) : block(std::move(block_)), selector(block.rows()) { }
|
||||
|
||||
ScatteredBlock(Block block_, IndexesPtr && selector_) : block(std::move(block_)), selector(std::move(selector_)) { }
|
||||
|
||||
ScatteredBlock(Block block_, Selector selector_) : block(std::move(block_)), selector(std::move(selector_)) { }
|
||||
|
||||
ScatteredBlock(ScatteredBlock && other) noexcept : block(std::move(other.block)), selector(std::move(other.selector))
|
||||
{
|
||||
other.block.clear();
|
||||
other.selector = {};
|
||||
}
|
||||
|
||||
ScatteredBlock & operator=(ScatteredBlock && other) noexcept
|
||||
{
|
||||
if (this != &other)
|
||||
{
|
||||
block = std::move(other.block);
|
||||
selector = std::move(other.selector);
|
||||
|
||||
other.block.clear();
|
||||
other.selector = {};
|
||||
}
|
||||
return *this;
|
||||
}
|
||||
|
||||
Block & getSourceBlock() & { return block; }
|
||||
const Block & getSourceBlock() const & { return block; }
|
||||
|
||||
Block && getSourceBlock() && { return std::move(block); }
|
||||
|
||||
const auto & getSelector() const { return selector; }
|
||||
|
||||
explicit operator bool() const { return !!block; }
|
||||
|
||||
/// Accounts only selected rows
|
||||
size_t rows() const { return selector.size(); }
|
||||
|
||||
/// In case of scattered block we account proportional share of the source block bytes.
|
||||
/// For a block that was not scattered this is a trivial (bytes * N / N) calculation.
|
||||
size_t allocatedBytes() const { return block.rows() ? block.allocatedBytes() * rows() / block.rows() : 0; }
|
||||
|
||||
ScatteredBlock shrinkToFit() const
|
||||
{
|
||||
if (wasScattered())
|
||||
{
|
||||
LOG_TEST(getLogger("HashJoin"), "shrinkToFit() is not supported for ScatteredBlock because blocks are shared");
|
||||
return ScatteredBlock{block};
|
||||
}
|
||||
return ScatteredBlock{block.shrinkToFit()};
|
||||
}
|
||||
|
||||
ScatteredBlock compress() const
|
||||
{
|
||||
if (wasScattered())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot compress ScatteredBlock");
|
||||
return ScatteredBlock{block.compress()};
|
||||
}
|
||||
|
||||
const auto & getByPosition(size_t i) const { return block.getByPosition(i); }
|
||||
|
||||
/// Whether `block` was scattered, i.e. `selector` != [0, block.rows())
|
||||
bool wasScattered() const
|
||||
{
|
||||
return selector.size() != block.rows();
|
||||
}
|
||||
|
||||
const ColumnWithTypeAndName & getByName(const std::string & name) const
|
||||
{
|
||||
return block.getByName(name);
|
||||
}
|
||||
|
||||
/// Filters selector by mask discarding rows for which filter is false
|
||||
void filter(const IColumn::Filter & filter)
|
||||
{
|
||||
chassert(block && block.rows() == filter.size());
|
||||
IndexesPtr new_selector = Indexes::create();
|
||||
new_selector->reserve(selector.size());
|
||||
std::copy_if(
|
||||
selector.begin(), selector.end(), std::back_inserter(new_selector->getData()), [&](size_t idx) { return filter[idx]; });
|
||||
selector = Selector(std::move(new_selector));
|
||||
}
|
||||
|
||||
/// Applies `selector` to the `block` in-place
|
||||
void filterBySelector()
|
||||
{
|
||||
if (!block || !wasScattered())
|
||||
return;
|
||||
|
||||
if (selector.isContinuousRange())
|
||||
{
|
||||
const auto range = selector.getRange();
|
||||
for (size_t i = 0; i < block.columns(); ++i)
|
||||
{
|
||||
auto & col = block.getByPosition(i);
|
||||
col.column = col.column->cut(range.first, range.second - range.first);
|
||||
}
|
||||
selector = Selector(block.rows());
|
||||
return;
|
||||
}
|
||||
|
||||
/// The general case when `selector` is non-trivial (likely the result of applying a filter)
|
||||
auto columns = block.getColumns();
|
||||
for (auto & col : columns)
|
||||
col = col->index(selector.getIndexes(), /*limit*/ 0);
|
||||
block.setColumns(columns);
|
||||
selector = Selector(block.rows());
|
||||
}
|
||||
|
||||
/// Cuts the first `num_rows` rows from `block` in place and returns a block with the remaining rows
|
||||
ScatteredBlock cut(size_t num_rows)
|
||||
{
|
||||
SCOPE_EXIT(filterBySelector());
|
||||
|
||||
if (num_rows >= rows())
|
||||
return ScatteredBlock{Block{}};
|
||||
|
||||
chassert(block);
|
||||
|
||||
auto && [first_num_rows, remaining_selector] = selector.split(num_rows);
|
||||
|
||||
auto remaining = ScatteredBlock{block, std::move(remaining_selector)};
|
||||
|
||||
selector = std::move(first_num_rows);
|
||||
|
||||
return remaining;
|
||||
}
|
||||
|
||||
private:
|
||||
Block block;
|
||||
Selector selector;
|
||||
};
|
||||
|
||||
using ScatteredBlocks = std::vector<ScatteredBlock>;
|
||||
|
||||
struct ExtraScatteredBlocks
|
||||
{
|
||||
ScatteredBlocks remaining_blocks;
|
||||
|
||||
bool rows() const
|
||||
{
|
||||
return std::ranges::any_of(remaining_blocks, [](const auto & bl) { return bl.rows(); });
|
||||
}
|
||||
};
|
||||
}
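/// Illustrative sketch, not part of this change: the intended lifecycle of a
/// ScatteredBlock - share one source block, select a subset of rows, then
/// materialize it. Assumes <DataTypes/DataTypesNumber.h> for DataTypeUInt64;
/// the column name "x" is made up.
inline void scatteredBlockExample()
{
    auto col = DB::ColumnUInt64::create();
    for (UInt64 v : {10, 11, 12, 13, 14, 15})
        col->getData().push_back(v);
    DB::Block source({DB::ColumnWithTypeAndName(std::move(col), std::make_shared<DB::DataTypeUInt64>(), "x")});

    /// Select rows 0, 2 and 4 of the shared source block without copying columns.
    auto indexes = DB::ScatteredBlock::Indexes::create();
    for (UInt64 row : {0, 2, 4})
        indexes->getData().push_back(row);
    DB::ScatteredBlock scattered(source, std::move(indexes));
    chassert(scattered.rows() == 3 && scattered.wasScattered());

    /// cut(1) keeps the first selected row here and returns the remainder;
    /// filterBySelector() (run via SCOPE_EXIT inside cut) materializes it in place.
    DB::ScatteredBlock remaining = scattered.cut(1);
    chassert(scattered.getSourceBlock().rows() == 1);
    chassert(remaining.rows() == 2);
}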
|
@ -1,11 +1,11 @@
|
||||
#pragma once
|
||||
|
||||
#include <memory>
|
||||
#include <vector>
|
||||
|
||||
#include <Core/Names.h>
|
||||
#include <Core/Block.h>
|
||||
#include <Columns/IColumn.h>
|
||||
#include <Core/Block.h>
|
||||
#include <Core/Names.h>
|
||||
#include <Interpreters/HashJoin/ScatteredBlock.h>
|
||||
#include <Common/Exception.h>
|
||||
|
||||
namespace DB
|
||||
@ -90,6 +90,13 @@ public:
|
||||
/// Could be called from different threads in parallel.
|
||||
virtual void joinBlock(Block & block, std::shared_ptr<ExtraBlock> & not_processed) = 0;
|
||||
|
||||
virtual bool isScatteredJoin() const { return false; }
|
||||
virtual void joinBlock(
|
||||
[[maybe_unused]] Block & block, [[maybe_unused]] ExtraScatteredBlocks & extra_blocks, [[maybe_unused]] std::vector<Block> & res)
|
||||
{
|
||||
throw Exception(ErrorCodes::UNSUPPORTED_METHOD, "joinBlock is not supported for {}", getName());
|
||||
}
|
||||
|
||||
/** Set/Get totals for right table
|
||||
* Keep "totals" (separate part of dataset, see WITH TOTALS) to use later.
|
||||
*/
|
||||
|
@ -1887,6 +1887,7 @@ void InterpreterSelectQuery::executeImpl(QueryPlan & query_plan, std::optional<P
|
||||
joined_plan->getCurrentHeader(),
|
||||
expressions.join,
|
||||
settings[Setting::max_block_size],
|
||||
0,
|
||||
max_streams,
|
||||
/* required_output_ = */ NameSet{},
|
||||
analysis_result.optimize_read_in_order,
|
||||
|
@ -48,6 +48,8 @@ ColumnsDescription ProcessorProfileLogElement::getColumnsDescription()
|
||||
{"input_bytes", std::make_shared<DataTypeUInt64>(), "The number of bytes consumed by processor."},
|
||||
{"output_rows", std::make_shared<DataTypeUInt64>(), "The number of rows generated by processor."},
|
||||
{"output_bytes", std::make_shared<DataTypeUInt64>(), "The number of bytes generated by processor."},
|
||||
{"processor_uniq_id", std::make_shared<DataTypeString>(), "The uniq processor id in pipeline."},
|
||||
{"step_uniq_id", std::make_shared<DataTypeString>(), "The uniq step id in plan."},
|
||||
};
|
||||
}
|
||||
|
||||
@ -83,6 +85,8 @@ void ProcessorProfileLogElement::appendToBlock(MutableColumns & columns) const
|
||||
columns[i++]->insert(input_bytes);
|
||||
columns[i++]->insert(output_rows);
|
||||
columns[i++]->insert(output_bytes);
|
||||
columns[i++]->insert(processor_uniq_id);
|
||||
columns[i++]->insert(step_uniq_id);
|
||||
}
|
||||
|
||||
void logProcessorProfile(ContextPtr context, const Processors & processors)
|
||||
@ -120,6 +124,8 @@ void logProcessorProfile(ContextPtr context, const Processors & processors)
|
||||
processor_elem.plan_step_name = processor->getPlanStepName();
|
||||
processor_elem.plan_step_description = processor->getPlanStepDescription();
|
||||
processor_elem.plan_group = processor->getQueryPlanStepGroup();
|
||||
processor_elem.processor_uniq_id = processor->getUniqID();
|
||||
processor_elem.step_uniq_id = processor->getStepUniqID();
|
||||
|
||||
processor_elem.processor_name = processor->getName();
|
||||
|
||||
|
@ -17,7 +17,7 @@ struct ProcessorProfileLogElement
|
||||
UInt64 id{};
|
||||
std::vector<UInt64> parent_ids;
|
||||
|
||||
UInt64 plan_step{};
|
||||
UInt64 plan_step;
|
||||
UInt64 plan_group{};
|
||||
String plan_step_name;
|
||||
String plan_step_description;
|
||||
@ -25,6 +25,8 @@ struct ProcessorProfileLogElement
|
||||
String initial_query_id;
|
||||
String query_id;
|
||||
String processor_name;
|
||||
String processor_uniq_id;
|
||||
String step_uniq_id;
|
||||
|
||||
/// Milliseconds spend in IProcessor::work()
|
||||
UInt64 elapsed_us{};
|
||||
|
@ -20,6 +20,11 @@
|
||||
#include <memory>
|
||||
#include <base/types.h>
|
||||
|
||||
namespace CurrentMetrics
|
||||
{
|
||||
extern const Metric TemporaryFilesForJoin;
|
||||
}
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
@ -270,9 +275,9 @@ public:
|
||||
|
||||
VolumePtr getGlobalTemporaryVolume() { return tmp_volume; }
|
||||
|
||||
TemporaryDataOnDiskScopePtr getTempDataOnDisk() { return tmp_data; }
|
||||
bool enableEnalyzer() const { return enable_analyzer; }
|
||||
void assertEnableEnalyzer() const;
|
||||
TemporaryDataOnDiskScopePtr getTempDataOnDisk() { return tmp_data ? tmp_data->childScope(CurrentMetrics::TemporaryFilesForJoin) : nullptr; }
|
||||
|
||||
ActionsDAG createJoinedBlockActions(ContextPtr context) const;
|
||||
|
||||
|
@ -9,13 +9,16 @@
|
||||
#include <Interpreters/Cache/FileCache.h>
|
||||
#include <Formats/NativeWriter.h>
|
||||
#include <Core/ProtocolDefines.h>
|
||||
#include <Disks/IDisk.h>
|
||||
#include <Disks/SingleDiskVolume.h>
|
||||
#include <Disks/DiskLocal.h>
|
||||
#include <Disks/IO/WriteBufferFromTemporaryFile.h>
|
||||
|
||||
#include <Core/Defines.h>
|
||||
#include <Common/formatReadable.h>
|
||||
#include <Common/NaNUtils.h>
|
||||
#include <Interpreters/Cache/WriteBufferToFileSegment.h>
|
||||
#include "Common/Exception.h"
|
||||
#include <Common/Exception.h>
|
||||
|
||||
namespace ProfileEvents
|
||||
{
|
||||
@ -27,11 +30,293 @@ namespace DB
|
||||
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int TOO_MANY_ROWS_OR_BYTES;
|
||||
extern const int INVALID_STATE;
|
||||
extern const int LOGICAL_ERROR;
|
||||
extern const int NOT_ENOUGH_SPACE;
|
||||
extern const int TOO_MANY_ROWS_OR_BYTES;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
inline CompressionCodecPtr getCodec(const TemporaryDataOnDiskSettings & settings)
|
||||
{
|
||||
if (settings.compression_codec.empty())
|
||||
return CompressionCodecFactory::instance().get("NONE");
|
||||
|
||||
return CompressionCodecFactory::instance().get(settings.compression_codec);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
TemporaryFileHolder::TemporaryFileHolder()
|
||||
{
|
||||
ProfileEvents::increment(ProfileEvents::ExternalProcessingFilesTotal);
|
||||
}
|
||||
|
||||
|
||||
class TemporaryFileInLocalCache : public TemporaryFileHolder
|
||||
{
|
||||
public:
|
||||
explicit TemporaryFileInLocalCache(FileCache & file_cache, size_t reserve_size = 0)
|
||||
{
|
||||
const auto key = FileSegment::Key::random();
|
||||
LOG_TRACE(getLogger("TemporaryFileInLocalCache"), "Creating temporary file in cache with key {}", key);
|
||||
segment_holder = file_cache.set(
|
||||
key, 0, std::max<size_t>(1, reserve_size),
|
||||
CreateFileSegmentSettings(FileSegmentKind::Ephemeral), FileCache::getCommonUser());
|
||||
|
||||
chassert(segment_holder->size() == 1);
|
||||
segment_holder->front().getKeyMetadata()->createBaseDirectory(/* throw_if_failed */true);
|
||||
}
|
||||
|
||||
std::unique_ptr<WriteBuffer> write() override
|
||||
{
|
||||
return std::make_unique<WriteBufferToFileSegment>(&segment_holder->front());
|
||||
}
|
||||
|
||||
std::unique_ptr<ReadBuffer> read(size_t buffer_size) const override
|
||||
{
|
||||
return std::make_unique<ReadBufferFromFile>(segment_holder->front().getPath(), /* buf_size = */ buffer_size);
|
||||
}
|
||||
|
||||
String describeFilePath() const override
|
||||
{
|
||||
return fmt::format("fscache://{}", segment_holder->front().getPath());
|
||||
}
|
||||
|
||||
private:
|
||||
FileSegmentsHolderPtr segment_holder;
|
||||
};
|
||||
|
||||
class TemporaryFileOnLocalDisk : public TemporaryFileHolder
|
||||
{
|
||||
public:
|
||||
explicit TemporaryFileOnLocalDisk(VolumePtr volume, size_t reserve_size = 0)
|
||||
: path_to_file("tmp" + toString(UUIDHelpers::generateV4()))
|
||||
{
|
||||
LOG_TRACE(getLogger("TemporaryFileOnLocalDisk"), "Creating temporary file '{}'", path_to_file);
|
||||
if (reserve_size > 0)
|
||||
{
|
||||
auto reservation = volume->reserve(reserve_size);
|
||||
if (!reservation)
|
||||
{
|
||||
auto disks = volume->getDisks();
|
||||
Strings disks_info;
|
||||
for (const auto & d : disks)
|
||||
{
|
||||
auto to_double = [](auto x) { return static_cast<double>(x); };
|
||||
disks_info.push_back(fmt::format("{}: available: {} unreserved: {}, total: {}, keeping: {}",
|
||||
d->getName(),
|
||||
ReadableSize(d->getAvailableSpace().transform(to_double).value_or(NaNOrZero<double>())),
|
||||
ReadableSize(d->getUnreservedSpace().transform(to_double).value_or(NaNOrZero<double>())),
|
||||
ReadableSize(d->getTotalSpace().transform(to_double).value_or(NaNOrZero<double>())),
|
||||
ReadableSize(d->getKeepingFreeSpace())));
|
||||
}
|
||||
|
||||
throw Exception(ErrorCodes::NOT_ENOUGH_SPACE,
|
||||
"Not enough space on temporary disk, cannot reserve {} bytes on [{}]",
|
||||
reserve_size, fmt::join(disks_info, ", "));
|
||||
}
|
||||
disk = reservation->getDisk();
|
||||
}
|
||||
else
|
||||
{
|
||||
disk = volume->getDisk();
|
||||
}
|
||||
chassert(disk);
|
||||
}
|
||||
|
||||
std::unique_ptr<WriteBuffer> write() override
|
||||
{
|
||||
return disk->writeFile(path_to_file);
|
||||
}
|
||||
|
||||
std::unique_ptr<ReadBuffer> read(size_t buffer_size) const override
|
||||
{
|
||||
ReadSettings settings;
|
||||
settings.local_fs_buffer_size = buffer_size;
|
||||
settings.remote_fs_buffer_size = buffer_size;
|
||||
settings.prefetch_buffer_size = buffer_size;
|
||||
|
||||
return disk->readFile(path_to_file, settings);
|
||||
}
|
||||
|
||||
String describeFilePath() const override
|
||||
{
|
||||
return fmt::format("disk({})://{}/{}", disk->getName(), disk->getPath(), path_to_file);
|
||||
}
|
||||
|
||||
~TemporaryFileOnLocalDisk() override
|
||||
try
|
||||
{
|
||||
if (disk->existsFile(path_to_file))
|
||||
{
|
||||
LOG_TRACE(getLogger("TemporaryFileOnLocalDisk"), "Removing temporary file '{}'", path_to_file);
|
||||
disk->removeRecursive(path_to_file);
|
||||
}
|
||||
else
|
||||
{
|
||||
LOG_WARNING(getLogger("TemporaryFileOnLocalDisk"), "Temporary path '{}' does not exist in '{}' on disk {}", path_to_file, disk->getPath(), disk->getName());
|
||||
}
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
tryLogCurrentException(__PRETTY_FUNCTION__);
|
||||
}
|
||||
|
||||
private:
|
||||
DiskPtr disk;
|
||||
String path_to_file;
|
||||
};
|
||||
|
||||
TemporaryFileProvider createTemporaryFileProvider(VolumePtr volume)
|
||||
{
|
||||
if (!volume)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Volume is not initialized");
|
||||
return [volume](size_t max_size) -> std::unique_ptr<TemporaryFileHolder>
|
||||
{
|
||||
return std::make_unique<TemporaryFileOnLocalDisk>(volume, max_size);
|
||||
};
|
||||
}
|
||||
|
||||
TemporaryFileProvider createTemporaryFileProvider(FileCache * file_cache)
|
||||
{
|
||||
if (!file_cache || !file_cache->isInitialized())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "File cache is not initialized");
|
||||
return [file_cache](size_t max_size) -> std::unique_ptr<TemporaryFileHolder>
|
||||
{
|
||||
return std::make_unique<TemporaryFileInLocalCache>(*file_cache, max_size);
|
||||
};
|
||||
}
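/// Illustrative sketch, not part of this change: how a provider created above might
/// be used. `volume` is assumed to be a valid temporary-data volume; writeString and
/// readStringUntilEOF come from IO/WriteHelpers.h and IO/ReadHelpers.h.
void temporaryFileProviderExample(VolumePtr volume)
{
    TemporaryFileProvider provider = createTemporaryFileProvider(volume);

    /// Ask for a holder, reserving roughly 1 MiB on the volume up front.
    std::unique_ptr<TemporaryFileHolder> holder = provider(1_MiB);

    auto out = holder->write();
    String payload_out = "spilled payload";
    writeString(payload_out, *out);
    out->finalize();

    auto in = holder->read(DBMS_DEFAULT_BUFFER_SIZE);
    String payload_in;
    readStringUntilEOF(payload_in, *in);

    LOG_TEST(getLogger("TemporaryFileProviderExample"), "Read back {} bytes from {}", payload_in.size(), holder->describeFilePath());
}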
|
||||
|
||||
TemporaryDataOnDiskScopePtr TemporaryDataOnDiskScope::childScope(CurrentMetrics::Metric current_metric)
|
||||
{
|
||||
TemporaryDataOnDiskSettings child_settings = settings;
|
||||
child_settings.current_metric = current_metric;
|
||||
return std::make_shared<TemporaryDataOnDiskScope>(shared_from_this(), child_settings);
|
||||
}
|
||||
|
||||
TemporaryDataReadBuffer::TemporaryDataReadBuffer(std::unique_ptr<ReadBuffer> in_)
|
||||
: ReadBuffer(nullptr, 0)
|
||||
, compressed_buf(std::move(in_))
|
||||
{
|
||||
BufferBase::set(compressed_buf->buffer().begin(), compressed_buf->buffer().size(), compressed_buf->offset());
|
||||
}
|
||||
|
||||
bool TemporaryDataReadBuffer::nextImpl()
|
||||
{
|
||||
compressed_buf->position() = position();
|
||||
if (!compressed_buf->next())
|
||||
{
|
||||
set(compressed_buf->position(), 0);
|
||||
return false;
|
||||
}
|
||||
BufferBase::set(compressed_buf->buffer().begin(), compressed_buf->buffer().size(), compressed_buf->offset());
|
||||
return true;
|
||||
}
|
||||
|
||||
TemporaryDataBuffer::TemporaryDataBuffer(TemporaryDataOnDiskScope * parent_, size_t reserve_size)
|
||||
: WriteBuffer(nullptr, 0)
|
||||
, parent(parent_)
|
||||
, file_holder(parent->file_provider(reserve_size))
|
||||
, out_compressed_buf(file_holder->write(), getCodec(parent->getSettings()))
|
||||
{
|
||||
WriteBuffer::set(out_compressed_buf->buffer().begin(), out_compressed_buf->buffer().size());
|
||||
}
|
||||
|
||||
void TemporaryDataBuffer::nextImpl()
|
||||
{
|
||||
if (!out_compressed_buf)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Temporary file buffer writing has been finished");
|
||||
|
||||
out_compressed_buf->position() = position();
|
||||
out_compressed_buf->next();
|
||||
BufferBase::set(out_compressed_buf->buffer().begin(), out_compressed_buf->buffer().size(), out_compressed_buf->offset());
|
||||
updateAllocAndCheck();
|
||||
}
|
||||
|
||||
String TemporaryDataBuffer::describeFilePath() const
|
||||
{
|
||||
return file_holder->describeFilePath();
|
||||
}
|
||||
|
||||
TemporaryDataBuffer::~TemporaryDataBuffer()
|
||||
{
|
||||
if (out_compressed_buf)
|
||||
// neither read() nor finishWriting() was called
|
||||
cancel();
|
||||
}
|
||||
|
||||
void TemporaryDataBuffer::cancelImpl() noexcept
|
||||
{
|
||||
if (out_compressed_buf)
|
||||
{
|
||||
/// CompressedWriteBuffer doesn't call cancel/finalize for wrapped buffer
|
||||
out_compressed_buf->cancel();
|
||||
out_compressed_buf.getHolder()->cancel();
|
||||
out_compressed_buf.reset();
|
||||
}
|
||||
}
|
||||
|
||||
void TemporaryDataBuffer::finalizeImpl()
|
||||
{
|
||||
if (!out_compressed_buf)
|
||||
return;
|
||||
|
||||
/// CompressedWriteBuffer doesn't call cancel/finalize for wrapped buffer
|
||||
out_compressed_buf->finalize();
|
||||
out_compressed_buf.getHolder()->finalize();
|
||||
|
||||
updateAllocAndCheck();
|
||||
out_compressed_buf.reset();
|
||||
}
|
||||
|
||||
TemporaryDataBuffer::Stat TemporaryDataBuffer::finishWriting()
|
||||
{
|
||||
/// TemporaryDataBuffer::read can be called from multiple threads
|
||||
std::call_once(write_finished, [this]
|
||||
{
|
||||
if (canceled)
|
||||
throw Exception(ErrorCodes::INVALID_STATE, "Writing to temporary file buffer was not successful");
|
||||
next();
|
||||
finalize();
|
||||
});
|
||||
return stat;
|
||||
}
|
||||
|
||||
std::unique_ptr<ReadBuffer> TemporaryDataBuffer::read()
|
||||
{
|
||||
finishWriting();
|
||||
|
||||
if (stat.compressed_size == 0 && stat.uncompressed_size == 0)
|
||||
return std::make_unique<TemporaryDataReadBuffer>(std::make_unique<ReadBufferFromEmptyFile>());
|
||||
|
||||
/// Keep the buffer size no larger than the file size, to avoid memory overhead for large numbers of small files
|
||||
size_t buffer_size = std::min<size_t>(stat.compressed_size, DBMS_DEFAULT_BUFFER_SIZE);
|
||||
return std::make_unique<TemporaryDataReadBuffer>(file_holder->read(buffer_size));
|
||||
}
|
||||
|
||||
void TemporaryDataBuffer::updateAllocAndCheck()
|
||||
{
|
||||
if (!out_compressed_buf)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Temporary file buffer writing has been finished");
|
||||
|
||||
size_t new_compressed_size = out_compressed_buf->getCompressedBytes();
|
||||
size_t new_uncompressed_size = out_compressed_buf->getUncompressedBytes();
|
||||
|
||||
if (unlikely(new_compressed_size < stat.compressed_size || new_uncompressed_size < stat.uncompressed_size))
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR,
|
||||
"Temporary file {} size decreased after write: compressed: {} -> {}, uncompressed: {} -> {}",
|
||||
file_holder ? file_holder->describeFilePath() : "NULL",
|
||||
new_compressed_size, stat.compressed_size, new_uncompressed_size, stat.uncompressed_size);
|
||||
}
|
||||
|
||||
parent->deltaAllocAndCheck(new_compressed_size - stat.compressed_size, new_uncompressed_size - stat.uncompressed_size);
|
||||
stat.compressed_size = new_compressed_size;
|
||||
stat.uncompressed_size = new_uncompressed_size;
|
||||
}
|
||||
|
||||
void TemporaryDataOnDiskScope::deltaAllocAndCheck(ssize_t compressed_delta, ssize_t uncompressed_delta)
|
||||
{
|
||||
@ -54,391 +339,25 @@ void TemporaryDataOnDiskScope::deltaAllocAndCheck(ssize_t compressed_delta, ssiz
|
||||
stat.uncompressed_size += uncompressed_delta;
|
||||
}
|
||||
|
||||
TemporaryDataOnDisk::TemporaryDataOnDisk(TemporaryDataOnDiskScopePtr parent_)
|
||||
: TemporaryDataOnDiskScope(parent_, parent_->getSettings())
|
||||
TemporaryBlockStreamHolder::TemporaryBlockStreamHolder(const Block & header_, TemporaryDataOnDiskScope * parent_, size_t reserve_size)
|
||||
: WrapperGuard(std::make_unique<TemporaryDataBuffer>(parent_, reserve_size), DBMS_TCP_PROTOCOL_VERSION, header_)
|
||||
, header(header_)
|
||||
{}
|
||||
|
||||
TemporaryDataOnDisk::TemporaryDataOnDisk(TemporaryDataOnDiskScopePtr parent_, CurrentMetrics::Metric metric_scope)
|
||||
: TemporaryDataOnDiskScope(parent_, parent_->getSettings())
|
||||
, current_metric_scope(metric_scope)
|
||||
{}
|
||||
|
||||
std::unique_ptr<WriteBufferFromFileBase> TemporaryDataOnDisk::createRawStream(size_t max_file_size)
|
||||
TemporaryDataBuffer::Stat TemporaryBlockStreamHolder::finishWriting() const
|
||||
{
|
||||
if (file_cache && file_cache->isInitialized())
|
||||
{
|
||||
auto holder = createCacheFile(max_file_size);
|
||||
return std::make_unique<WriteBufferToFileSegment>(std::move(holder));
|
||||
}
|
||||
if (volume)
|
||||
{
|
||||
auto tmp_file = createRegularFile(max_file_size);
|
||||
return std::make_unique<WriteBufferFromTemporaryFile>(std::move(tmp_file));
|
||||
}
|
||||
if (!holder)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Temporary block stream is not initialized");
|
||||
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryDataOnDiskScope has no cache and no volume");
|
||||
impl->flush();
|
||||
return holder->finishWriting();
|
||||
}
|
||||
|
||||
TemporaryFileStream & TemporaryDataOnDisk::createStream(const Block & header, size_t max_file_size)
|
||||
TemporaryBlockStreamReaderHolder TemporaryBlockStreamHolder::getReadStream() const
|
||||
{
|
||||
if (file_cache && file_cache->isInitialized())
|
||||
{
|
||||
auto holder = createCacheFile(max_file_size);
|
||||
|
||||
std::lock_guard lock(mutex);
|
||||
TemporaryFileStreamPtr & tmp_stream = streams.emplace_back(std::make_unique<TemporaryFileStream>(std::move(holder), header, this));
|
||||
return *tmp_stream;
|
||||
}
|
||||
if (volume)
|
||||
{
|
||||
auto tmp_file = createRegularFile(max_file_size);
|
||||
std::lock_guard lock(mutex);
|
||||
TemporaryFileStreamPtr & tmp_stream
|
||||
= streams.emplace_back(std::make_unique<TemporaryFileStream>(std::move(tmp_file), header, this));
|
||||
return *tmp_stream;
|
||||
}
|
||||
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryDataOnDiskScope has no cache and no volume");
|
||||
}
|
||||
|
||||
FileSegmentsHolderPtr TemporaryDataOnDisk::createCacheFile(size_t max_file_size)
|
||||
{
|
||||
if (!file_cache)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryDataOnDiskScope has no cache");
|
||||
|
||||
ProfileEvents::increment(ProfileEvents::ExternalProcessingFilesTotal);
|
||||
|
||||
const auto key = FileSegment::Key::random();
|
||||
auto holder = file_cache->set(
|
||||
key, 0, std::max(10_MiB, max_file_size),
|
||||
CreateFileSegmentSettings(FileSegmentKind::Ephemeral), FileCache::getCommonUser());
|
||||
|
||||
chassert(holder->size() == 1);
|
||||
holder->back().getKeyMetadata()->createBaseDirectory(/* throw_if_failed */true);
|
||||
|
||||
return holder;
|
||||
}
|
||||
|
||||
TemporaryFileOnDiskHolder TemporaryDataOnDisk::createRegularFile(size_t max_file_size)
|
||||
{
|
||||
if (!volume)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryDataOnDiskScope has no volume");
|
||||
|
||||
DiskPtr disk;
|
||||
if (max_file_size > 0)
|
||||
{
|
||||
auto reservation = volume->reserve(max_file_size);
|
||||
if (!reservation)
|
||||
throw Exception(ErrorCodes::NOT_ENOUGH_SPACE, "Not enough space on temporary disk");
|
||||
disk = reservation->getDisk();
|
||||
}
|
||||
else
|
||||
{
|
||||
disk = volume->getDisk();
|
||||
}
|
||||
/// We do not increment ProfileEvents::ExternalProcessingFilesTotal here because it is incremented in TemporaryFileOnDisk constructor.
|
||||
return std::make_unique<TemporaryFileOnDisk>(disk, current_metric_scope);
|
||||
}
|
||||
|
||||
std::vector<TemporaryFileStream *> TemporaryDataOnDisk::getStreams() const
|
||||
{
|
||||
std::vector<TemporaryFileStream *> res;
|
||||
std::lock_guard lock(mutex);
|
||||
res.reserve(streams.size());
|
||||
for (const auto & stream : streams)
|
||||
res.push_back(stream.get());
|
||||
return res;
|
||||
}
|
||||
|
||||
bool TemporaryDataOnDisk::empty() const
|
||||
{
|
||||
std::lock_guard lock(mutex);
|
||||
return streams.empty();
|
||||
}
|
||||
|
||||
static inline CompressionCodecPtr getCodec(const TemporaryDataOnDiskSettings & settings)
|
||||
{
|
||||
if (settings.compression_codec.empty())
|
||||
return CompressionCodecFactory::instance().get("NONE");
|
||||
|
||||
return CompressionCodecFactory::instance().get(settings.compression_codec);
|
||||
}
|
||||
|
||||
struct TemporaryFileStream::OutputWriter
|
||||
{
|
||||
OutputWriter(std::unique_ptr<WriteBuffer> out_buf_, const Block & header_, const TemporaryDataOnDiskSettings & settings)
|
||||
: out_buf(std::move(out_buf_))
|
||||
, out_compressed_buf(*out_buf, getCodec(settings))
|
||||
, out_writer(out_compressed_buf, DBMS_TCP_PROTOCOL_VERSION, header_)
|
||||
{
|
||||
}
|
||||
|
||||
size_t write(const Block & block)
|
||||
{
|
||||
if (finalized)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot write to finalized stream");
|
||||
size_t written_bytes = out_writer.write(block);
|
||||
num_rows += block.rows();
|
||||
return written_bytes;
|
||||
}
|
||||
|
||||
void flush()
|
||||
{
|
||||
if (finalized)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot flush finalized stream");
|
||||
|
||||
out_compressed_buf.next();
|
||||
out_buf->next();
|
||||
out_writer.flush();
|
||||
}
|
||||
|
||||
void finalize()
|
||||
{
|
||||
if (finalized)
|
||||
return;
|
||||
|
||||
/// if we called finalize() explicitly, and got an exception,
|
||||
/// we don't want to get it again in the destructor, so set finalized flag first
|
||||
finalized = true;
|
||||
|
||||
out_writer.flush();
|
||||
out_compressed_buf.finalize();
|
||||
out_buf->finalize();
|
||||
}
|
||||
|
||||
~OutputWriter()
|
||||
{
|
||||
try
|
||||
{
|
||||
finalize();
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
tryLogCurrentException(__PRETTY_FUNCTION__);
|
||||
}
|
||||
}
|
||||
|
||||
std::unique_ptr<WriteBuffer> out_buf;
|
||||
CompressedWriteBuffer out_compressed_buf;
|
||||
NativeWriter out_writer;
|
||||
|
||||
std::atomic_size_t num_rows = 0;
|
||||
|
||||
bool finalized = false;
|
||||
};
|
||||
|
||||
TemporaryFileStream::Reader::Reader(const String & path_, const Block & header_, size_t size_)
|
||||
: path(path_)
|
||||
, size(size_ ? std::min<size_t>(size_, DBMS_DEFAULT_BUFFER_SIZE) : DBMS_DEFAULT_BUFFER_SIZE)
|
||||
, header(header_)
|
||||
{
|
||||
LOG_TEST(getLogger("TemporaryFileStream"), "Reading {} from {}", header_.dumpStructure(), path);
|
||||
}
|
||||
|
||||
TemporaryFileStream::Reader::Reader(const String & path_, size_t size_)
|
||||
: path(path_)
|
||||
, size(size_ ? std::min<size_t>(size_, DBMS_DEFAULT_BUFFER_SIZE) : DBMS_DEFAULT_BUFFER_SIZE)
|
||||
{
|
||||
LOG_TEST(getLogger("TemporaryFileStream"), "Reading from {}", path);
|
||||
}
|
||||
|
||||
Block TemporaryFileStream::Reader::read()
|
||||
{
|
||||
if (!in_reader)
|
||||
{
|
||||
if (fs::exists(path))
|
||||
in_file_buf = std::make_unique<ReadBufferFromFile>(path, size);
|
||||
else
|
||||
in_file_buf = std::make_unique<ReadBufferFromEmptyFile>();
|
||||
|
||||
in_compressed_buf = std::make_unique<CompressedReadBuffer>(*in_file_buf);
|
||||
if (header.has_value())
|
||||
in_reader = std::make_unique<NativeReader>(*in_compressed_buf, header.value(), DBMS_TCP_PROTOCOL_VERSION);
|
||||
else
|
||||
in_reader = std::make_unique<NativeReader>(*in_compressed_buf, DBMS_TCP_PROTOCOL_VERSION);
|
||||
}
|
||||
return in_reader->read();
|
||||
}
|
||||
|
||||
TemporaryFileStream::TemporaryFileStream(TemporaryFileOnDiskHolder file_, const Block & header_, TemporaryDataOnDisk * parent_)
|
||||
: parent(parent_)
|
||||
, header(header_)
|
||||
, file(std::move(file_))
|
||||
, out_writer(std::make_unique<OutputWriter>(std::make_unique<WriteBufferFromFile>(file->getAbsolutePath()), header, parent->settings))
|
||||
{
|
||||
LOG_TEST(getLogger("TemporaryFileStream"), "Writing to temporary file {}", file->getAbsolutePath());
|
||||
}
|
||||
|
||||
TemporaryFileStream::TemporaryFileStream(FileSegmentsHolderPtr segments_, const Block & header_, TemporaryDataOnDisk * parent_)
|
||||
: parent(parent_)
|
||||
, header(header_)
|
||||
, segment_holder(std::move(segments_))
|
||||
{
|
||||
if (segment_holder->size() != 1)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryFileStream can be created only from single segment");
|
||||
auto out_buf = std::make_unique<WriteBufferToFileSegment>(&segment_holder->front());
|
||||
|
||||
LOG_TEST(getLogger("TemporaryFileStream"), "Writing to temporary file {}", out_buf->getFileName());
|
||||
out_writer = std::make_unique<OutputWriter>(std::move(out_buf), header, parent_->settings);
|
||||
}
|
||||
|
||||
size_t TemporaryFileStream::write(const Block & block)
|
||||
{
|
||||
if (!out_writer)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing has been finished");
|
||||
|
||||
updateAllocAndCheck();
|
||||
size_t bytes_written = out_writer->write(block);
|
||||
return bytes_written;
|
||||
}
|
||||
|
||||
void TemporaryFileStream::flush()
|
||||
{
|
||||
if (!out_writer)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing has been finished");
|
||||
|
||||
out_writer->flush();
|
||||
}
|
||||
|
||||
TemporaryFileStream::Stat TemporaryFileStream::finishWriting()
|
||||
{
|
||||
if (isWriteFinished())
|
||||
return stat;
|
||||
|
||||
if (out_writer)
|
||||
{
|
||||
out_writer->finalize();
|
||||
/// The amount of written data can be changed after finalization, some buffers can be flushed
|
||||
/// Need to update the stat
|
||||
updateAllocAndCheck();
|
||||
out_writer.reset();
|
||||
|
||||
/// reader will be created at the first read call, not to consume memory before it is needed
|
||||
}
|
||||
return stat;
|
||||
}
|
||||
|
||||
TemporaryFileStream::Stat TemporaryFileStream::finishWritingAsyncSafe()
|
||||
{
|
||||
std::call_once(finish_writing, [this]{ finishWriting(); });
|
||||
return stat;
|
||||
}
|
||||
|
||||
bool TemporaryFileStream::isWriteFinished() const
|
||||
{
|
||||
assert(in_reader == nullptr || out_writer == nullptr);
|
||||
return out_writer == nullptr;
|
||||
}
|
||||
|
||||
Block TemporaryFileStream::read()
|
||||
{
|
||||
if (!isWriteFinished())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing has been not finished");
|
||||
|
||||
if (isEof())
|
||||
return {};
|
||||
|
||||
if (!in_reader)
|
||||
{
|
||||
in_reader = std::make_unique<Reader>(getPath(), header, getSize());
|
||||
}
|
||||
|
||||
Block block = in_reader->read();
|
||||
if (!block)
|
||||
{
|
||||
/// finalize earlier to release resources, do not wait for the destructor
|
||||
this->release();
|
||||
}
|
||||
return block;
|
||||
}
|
||||
|
||||
std::unique_ptr<TemporaryFileStream::Reader> TemporaryFileStream::getReadStream()
|
||||
{
|
||||
if (!isWriteFinished())
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Writing has been not finished");
|
||||
|
||||
if (isEof())
|
||||
return nullptr;
|
||||
|
||||
return std::make_unique<Reader>(getPath(), header, getSize());
|
||||
}
|
||||
|
||||
void TemporaryFileStream::updateAllocAndCheck()
|
||||
{
|
||||
assert(out_writer);
|
||||
size_t new_compressed_size = out_writer->out_compressed_buf.getCompressedBytes();
|
||||
size_t new_uncompressed_size = out_writer->out_compressed_buf.getUncompressedBytes();
|
||||
|
||||
if (unlikely(new_compressed_size < stat.compressed_size || new_uncompressed_size < stat.uncompressed_size))
|
||||
{
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR,
|
||||
"Temporary file {} size decreased after write: compressed: {} -> {}, uncompressed: {} -> {}",
|
||||
getPath(), new_compressed_size, stat.compressed_size, new_uncompressed_size, stat.uncompressed_size);
|
||||
}
|
||||
|
||||
parent->deltaAllocAndCheck(new_compressed_size - stat.compressed_size, new_uncompressed_size - stat.uncompressed_size);
|
||||
stat.compressed_size = new_compressed_size;
|
||||
stat.uncompressed_size = new_uncompressed_size;
|
||||
stat.num_rows = out_writer->num_rows;
|
||||
}
|
||||
|
||||
bool TemporaryFileStream::isEof() const
|
||||
{
|
||||
return file == nullptr && !segment_holder;
|
||||
}
|
||||
|
||||
void TemporaryFileStream::release()
|
||||
{
|
||||
if (in_reader)
|
||||
in_reader.reset();
|
||||
|
||||
if (out_writer)
|
||||
{
|
||||
out_writer->finalize();
|
||||
out_writer.reset();
|
||||
}
|
||||
|
||||
if (file)
|
||||
{
|
||||
file.reset();
|
||||
parent->deltaAllocAndCheck(-stat.compressed_size, -stat.uncompressed_size);
|
||||
}
|
||||
|
||||
if (segment_holder)
|
||||
segment_holder.reset();
|
||||
}
|
||||
|
||||
String TemporaryFileStream::getPath() const
|
||||
{
|
||||
if (file)
|
||||
return file->getAbsolutePath();
|
||||
if (segment_holder && !segment_holder->empty())
|
||||
return segment_holder->front().getPath();
|
||||
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryFileStream has no file");
|
||||
}
|
||||
|
||||
size_t TemporaryFileStream::getSize() const
|
||||
{
|
||||
if (file)
|
||||
return file->getDisk()->getFileSize(file->getRelativePath());
|
||||
if (segment_holder && !segment_holder->empty())
|
||||
return segment_holder->front().getReservedSize();
|
||||
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryFileStream has no file");
|
||||
}
|
||||
|
||||
TemporaryFileStream::~TemporaryFileStream()
|
||||
{
|
||||
try
|
||||
{
|
||||
release();
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
tryLogCurrentException(__PRETTY_FUNCTION__);
|
||||
assert(false); /// deltaAllocAndCheck with negative can't throw exception
|
||||
}
|
||||
if (!holder)
|
||||
throw Exception(ErrorCodes::LOGICAL_ERROR, "Temporary block stream is not initialized");
|
||||
return TemporaryBlockStreamReaderHolder(holder->read(), header, DBMS_TCP_PROTOCOL_VERSION);
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -4,15 +4,21 @@
|
||||
#include <mutex>
|
||||
#include <boost/noncopyable.hpp>
|
||||
|
||||
#include <IO/ReadBufferFromFile.h>
|
||||
#include <Common/CurrentMetrics.h>
|
||||
#include <Compression/CompressedReadBuffer.h>
|
||||
#include <Formats/NativeReader.h>
|
||||
#include <Core/Block.h>
|
||||
#include <Compression/CompressedWriteBuffer.h>
|
||||
|
||||
#include <Disks/IVolume.h>
|
||||
#include <Disks/TemporaryFileOnDisk.h>
|
||||
#include <Interpreters/Cache/FileSegment.h>
|
||||
#include <Common/CurrentMetrics.h>
|
||||
|
||||
#include <Formats/NativeReader.h>
|
||||
#include <Formats/NativeWriter.h>
|
||||
|
||||
#include <Interpreters/Cache/FileSegment.h>
|
||||
|
||||
#include <IO/ReadBufferFromFile.h>
|
||||
|
||||
class FileCacheTest_TemporaryDataReadBufferSize_Test;
|
||||
|
||||
namespace CurrentMetrics
|
||||
{
|
||||
@ -25,11 +31,10 @@ namespace DB
|
||||
class TemporaryDataOnDiskScope;
|
||||
using TemporaryDataOnDiskScopePtr = std::shared_ptr<TemporaryDataOnDiskScope>;
|
||||
|
||||
class TemporaryDataOnDisk;
|
||||
using TemporaryDataOnDiskPtr = std::unique_ptr<TemporaryDataOnDisk>;
|
||||
class TemporaryDataBuffer;
|
||||
using TemporaryDataBufferPtr = std::unique_ptr<TemporaryDataBuffer>;
|
||||
|
||||
class TemporaryFileStream;
|
||||
using TemporaryFileStreamPtr = std::unique_ptr<TemporaryFileStream>;
|
||||
class TemporaryFileHolder;
|
||||
|
||||
class FileCache;
|
||||
|
||||
@ -40,15 +45,26 @@ struct TemporaryDataOnDiskSettings
|
||||
|
||||
/// Compression codec for temporary data, if empty no compression will be used. LZ4 by default
|
||||
String compression_codec = "LZ4";
|
||||
|
||||
/// Read/Write internal buffer size
|
||||
size_t buffer_size = DBMS_DEFAULT_BUFFER_SIZE;
|
||||
|
||||
/// Metrics counter to increment when temporary files in the current scope are created
|
||||
CurrentMetrics::Metric current_metric = CurrentMetrics::TemporaryFilesUnknown;
|
||||
};
|
||||
|
||||
/// Creates temporary files located on specified resource (disk, fs_cache, etc.)
|
||||
using TemporaryFileProvider = std::function<std::unique_ptr<TemporaryFileHolder>(size_t)>;
|
||||
TemporaryFileProvider createTemporaryFileProvider(VolumePtr volume);
|
||||
TemporaryFileProvider createTemporaryFileProvider(FileCache * file_cache);
|
||||
|
||||
/*
* Used to account the amount of temporary data written to disk.
* If a limit is set, throws an exception when the limit is exceeded.
* Data can be nested, so a parent scope accounts all data written by its children.
* Scopes are: global -> per-user -> per-query -> per-purpose (sorting, aggregation, etc).
*/
|
||||
class TemporaryDataOnDiskScope : boost::noncopyable
|
||||
class TemporaryDataOnDiskScope : boost::noncopyable, public std::enable_shared_from_this<TemporaryDataOnDiskScope>
|
||||
{
|
||||
public:
|
||||
struct StatAtomic
|
||||
@ -57,164 +73,156 @@ public:
|
||||
std::atomic<size_t> uncompressed_size;
|
||||
};
|
||||
|
||||
explicit TemporaryDataOnDiskScope(VolumePtr volume_, TemporaryDataOnDiskSettings settings_)
|
||||
: volume(std::move(volume_))
|
||||
/// Root scope
|
||||
template <typename T>
|
||||
TemporaryDataOnDiskScope(T && storage, TemporaryDataOnDiskSettings settings_)
|
||||
: file_provider(createTemporaryFileProvider(std::forward<T>(storage)))
|
||||
, settings(std::move(settings_))
|
||||
{}
|
||||
|
||||
explicit TemporaryDataOnDiskScope(VolumePtr volume_, FileCache * file_cache_, TemporaryDataOnDiskSettings settings_)
|
||||
: volume(std::move(volume_))
|
||||
, file_cache(file_cache_)
|
||||
, settings(std::move(settings_))
|
||||
{}
|
||||
|
||||
explicit TemporaryDataOnDiskScope(TemporaryDataOnDiskScopePtr parent_, TemporaryDataOnDiskSettings settings_)
|
||||
TemporaryDataOnDiskScope(TemporaryDataOnDiskScopePtr parent_, TemporaryDataOnDiskSettings settings_)
|
||||
: parent(std::move(parent_))
|
||||
, volume(parent->volume)
|
||||
, file_cache(parent->file_cache)
|
||||
, file_provider(parent->file_provider)
|
||||
, settings(std::move(settings_))
|
||||
{}
|
||||
|
||||
/// TODO: remove
|
||||
/// Refactor all code that uses volume directly to use TemporaryDataOnDisk.
|
||||
VolumePtr getVolume() const { return volume; }
|
||||
TemporaryDataOnDiskScopePtr childScope(CurrentMetrics::Metric current_metric);
|
||||
|
||||
const TemporaryDataOnDiskSettings & getSettings() const { return settings; }
|
||||
|
||||
protected:
|
||||
friend class TemporaryDataBuffer;
|
||||
|
||||
void deltaAllocAndCheck(ssize_t compressed_delta, ssize_t uncompressed_delta);
|
||||
|
||||
TemporaryDataOnDiskScopePtr parent = nullptr;
|
||||
|
||||
VolumePtr volume = nullptr;
|
||||
FileCache * file_cache = nullptr;
|
||||
TemporaryFileProvider file_provider;
|
||||
|
||||
StatAtomic stat;
|
||||
const TemporaryDataOnDiskSettings settings;
|
||||
};
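/// Illustrative sketch (not part of this change; the volume/settings setup is assumed):
/// how the scope tree described above is typically nested using childScope() declared here.
///
///     auto global_scope = std::make_shared<TemporaryDataOnDiskScope>(volume, TemporaryDataOnDiskSettings{});
///     auto query_scope = global_scope->childScope(CurrentMetrics::TemporaryFilesUnknown);
///     auto sort_scope = query_scope->childScope(CurrentMetrics::TemporaryFilesForSort);
///     /// Data written through sort_scope is also accounted in query_scope and global_scope.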
|
||||
|
||||
/*
|
||||
* Holds the set of temporary files.
|
||||
* New file stream is created with `createStream`.
|
||||
* Streams are owned by this object and will be deleted when it is deleted.
|
||||
* It's a leaf node in temporary data scope tree.
|
||||
*/
|
||||
class TemporaryDataOnDisk : private TemporaryDataOnDiskScope
|
||||
/** Used to hold the wrapper and wrapped object together.
* This class provides a convenient way to manage the lifetime of both the wrapper and the wrapped object.
* The wrapper class (Impl) stores a reference to the wrapped object (Holder), and both objects are owned by this class.
* The lifetime of the wrapper and the wrapped object should be the same.
* This pattern is commonly used when the caller only needs to interact with the wrapper and doesn't need to be aware of the wrapped object.
* Examples: CompressedWriteBuffer and WriteBuffer, and NativeReader and ReadBuffer.
*/
|
||||
template <typename Impl, typename Holder>
|
||||
class WrapperGuard
|
||||
{
|
||||
friend class TemporaryFileStream; /// to allow it to call `deltaAllocAndCheck` to account data
|
||||
|
||||
public:
|
||||
using TemporaryDataOnDiskScope::StatAtomic;
|
||||
template <typename ... Args>
|
||||
explicit WrapperGuard(std::unique_ptr<Holder> holder_, Args && ... args)
|
||||
: holder(std::move(holder_))
|
||||
, impl(std::make_unique<Impl>(*holder, std::forward<Args>(args)...))
|
||||
{
|
||||
chassert(holder);
|
||||
chassert(impl);
|
||||
}
|
||||
|
||||
explicit TemporaryDataOnDisk(TemporaryDataOnDiskScopePtr parent_);
|
||||
Impl * operator->() { chassert(impl); chassert(holder); return impl.get(); }
|
||||
const Impl * operator->() const { chassert(impl); chassert(holder); return impl.get(); }
|
||||
Impl & operator*() { chassert(impl); chassert(holder); return *impl; }
|
||||
const Impl & operator*() const { chassert(impl); chassert(holder); return *impl; }
|
||||
operator bool() const { return impl != nullptr; } /// NOLINT
|
||||
|
||||
explicit TemporaryDataOnDisk(TemporaryDataOnDiskScopePtr parent_, CurrentMetrics::Metric metric_scope);
|
||||
const Holder * getHolder() const { return holder.get(); }
|
||||
Holder * getHolder() { return holder.get(); }
|
||||
|
||||
/// If max_file_size > 0, then check that there's enough space on the disk and throw an exception in case of lack of free space
|
||||
TemporaryFileStream & createStream(const Block & header, size_t max_file_size = 0);
|
||||
void reset()
|
||||
{
|
||||
impl.reset();
|
||||
holder.reset();
|
||||
}
|
||||
|
||||
/// Write raw data directly into buffer.
|
||||
/// Differences from `createStream`:
|
||||
/// 1) it doesn't account data in parent scope
|
||||
/// 2) returned buffer owns resources (instead of TemporaryDataOnDisk itself)
|
||||
/// If max_file_size > 0, then check that there's enough space on the disk and throw an exception in case of lack of free space
|
||||
std::unique_ptr<WriteBufferFromFileBase> createRawStream(size_t max_file_size = 0);
|
||||
|
||||
std::vector<TemporaryFileStream *> getStreams() const;
|
||||
bool empty() const;
|
||||
|
||||
const StatAtomic & getStat() const { return stat; }
|
||||
|
||||
private:
|
||||
FileSegmentsHolderPtr createCacheFile(size_t max_file_size);
|
||||
TemporaryFileOnDiskHolder createRegularFile(size_t max_file_size);
|
||||
|
||||
mutable std::mutex mutex;
|
||||
std::vector<TemporaryFileStreamPtr> streams TSA_GUARDED_BY(mutex);
|
||||
|
||||
typename CurrentMetrics::Metric current_metric_scope = CurrentMetrics::TemporaryFilesUnknown;
|
||||
protected:
|
||||
std::unique_ptr<Holder> holder;
|
||||
std::unique_ptr<Impl> impl;
|
||||
};
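/// Illustrative sketch (assumption, not from this patch; `plain_write_buffer` is assumed to be a
/// std::unique_ptr<WriteBuffer>): WrapperGuard keeps the wrapped Holder alive for exactly as long
/// as the Impl that references it.
///
///     WrapperGuard<CompressedWriteBuffer, WriteBuffer> guard(std::move(plain_write_buffer));
///     guard->write(data, size);   /// operator-> forwards to the CompressedWriteBuffer
///     guard.reset();              /// impl is destroyed first, then the holder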
|
||||
|
||||
/*
|
||||
* Data can be written into this stream and then read.
|
||||
* After finish writing, call `finishWriting` and then either call `read` or 'getReadStream'(only one of the two) to read the data.
|
||||
* Account amount of data written to disk in parent scope.
|
||||
*/
|
||||
class TemporaryFileStream : boost::noncopyable
|
||||
/// Owns temporary file and provides access to it.
|
||||
/// On destruction, file is removed and all resources are freed.
|
||||
/// Lifetime of read/write buffers should be less than lifetime of TemporaryFileHolder.
|
||||
class TemporaryFileHolder
|
||||
{
|
||||
public:
|
||||
struct Reader
|
||||
{
|
||||
Reader(const String & path, const Block & header_, size_t size = 0);
|
||||
TemporaryFileHolder();
|
||||
|
||||
explicit Reader(const String & path, size_t size = 0);
|
||||
virtual std::unique_ptr<WriteBuffer> write() = 0;
|
||||
virtual std::unique_ptr<ReadBuffer> read(size_t buffer_size) const = 0;
|
||||
|
||||
Block read();
|
||||
/// Get location for logging
|
||||
virtual String describeFilePath() const = 0;
|
||||
|
||||
const std::string path;
|
||||
const size_t size;
|
||||
const std::optional<Block> header;
|
||||
virtual ~TemporaryFileHolder() = default;
|
||||
};
|
||||
|
||||
std::unique_ptr<ReadBufferFromFileBase> in_file_buf;
|
||||
std::unique_ptr<CompressedReadBuffer> in_compressed_buf;
|
||||
std::unique_ptr<NativeReader> in_reader;
|
||||
};
|
||||
/// Reads raw data from temporary file
|
||||
class TemporaryDataReadBuffer : public ReadBuffer
|
||||
{
|
||||
public:
|
||||
explicit TemporaryDataReadBuffer(std::unique_ptr<ReadBuffer> in_);
|
||||
|
||||
private:
|
||||
friend class ::FileCacheTest_TemporaryDataReadBufferSize_Test;
|
||||
|
||||
bool nextImpl() override;
|
||||
|
||||
WrapperGuard<CompressedReadBuffer, ReadBuffer> compressed_buf;
|
||||
};
|
||||
|
||||
/// Writes raw data to buffer provided by file_holder, and accounts amount of written data in parent scope.
|
||||
class TemporaryDataBuffer : public WriteBuffer
|
||||
{
|
||||
public:
|
||||
struct Stat
|
||||
{
|
||||
/// Statistics for file
|
||||
/// Non-atomic because we don't allow `read` or `write` on a single file from multiple threads
|
||||
size_t compressed_size = 0;
|
||||
size_t uncompressed_size = 0;
|
||||
size_t num_rows = 0;
|
||||
};
|
||||
|
||||
TemporaryFileStream(TemporaryFileOnDiskHolder file_, const Block & header_, TemporaryDataOnDisk * parent_);
|
||||
TemporaryFileStream(FileSegmentsHolderPtr segments_, const Block & header_, TemporaryDataOnDisk * parent_);
|
||||
|
||||
size_t write(const Block & block);
|
||||
void flush();
|
||||
explicit TemporaryDataBuffer(TemporaryDataOnDiskScope * parent_, size_t reserve_size = 0);
|
||||
void nextImpl() override;
|
||||
void finalizeImpl() override;
|
||||
void cancelImpl() noexcept override;
|
||||
|
||||
std::unique_ptr<ReadBuffer> read();
|
||||
Stat finishWriting();
|
||||
Stat finishWritingAsyncSafe();
|
||||
bool isWriteFinished() const;
|
||||
|
||||
std::unique_ptr<Reader> getReadStream();
|
||||
String describeFilePath() const;
|
||||
|
||||
Block read();
|
||||
|
||||
String getPath() const;
|
||||
size_t getSize() const;
|
||||
|
||||
Block getHeader() const { return header; }
|
||||
|
||||
/// Read finished and file released
|
||||
bool isEof() const;
|
||||
|
||||
~TemporaryFileStream();
|
||||
~TemporaryDataBuffer() override;
|
||||
|
||||
private:
|
||||
void updateAllocAndCheck();
|
||||
|
||||
/// Release everything, close reader and writer, delete file
|
||||
void release();
|
||||
|
||||
TemporaryDataOnDisk * parent;
|
||||
|
||||
Block header;
|
||||
|
||||
/// Data can be stored in file directly or in the cache
|
||||
TemporaryFileOnDiskHolder file;
|
||||
FileSegmentsHolderPtr segment_holder;
|
||||
TemporaryDataOnDiskScope * parent;
|
||||
std::unique_ptr<TemporaryFileHolder> file_holder;
|
||||
WrapperGuard<CompressedWriteBuffer, WriteBuffer> out_compressed_buf;
|
||||
std::once_flag write_finished;
|
||||
|
||||
Stat stat;
|
||||
};
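/// Illustrative usage (assumption, mirroring the FileCacheTest unit test below; `scope` is an
/// assumed TemporaryDataOnDiskScopePtr): write raw bytes into the scope-accounted buffer, then
/// obtain a read buffer over the same data.
///
///     auto out = std::make_unique<TemporaryDataBuffer>(scope.get());
///     out->write("1234567890", 10);
///     auto in = out->read();   /// the test below calls read() without an explicit finalize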
|
||||
|
||||
std::once_flag finish_writing;
|
||||
|
||||
struct OutputWriter;
|
||||
std::unique_ptr<OutputWriter> out_writer;
|
||||
/// High level interfaces for reading and writing temporary data by blocks.
|
||||
using TemporaryBlockStreamReaderHolder = WrapperGuard<NativeReader, ReadBuffer>;
|
||||
|
||||
std::unique_ptr<Reader> in_reader;
|
||||
class TemporaryBlockStreamHolder : public WrapperGuard<NativeWriter, TemporaryDataBuffer>
|
||||
{
|
||||
public:
|
||||
TemporaryBlockStreamHolder(const Block & header_, TemporaryDataOnDiskScope * parent_, size_t reserve_size = 0);
|
||||
|
||||
TemporaryBlockStreamReaderHolder getReadStream() const;
|
||||
|
||||
TemporaryDataBuffer::Stat finishWriting() const;
|
||||
const Block & getHeader() const { return header; }
|
||||
|
||||
private:
|
||||
Block header;
|
||||
};
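/// Illustrative usage (assumption, based on the tests below; `header`, `block` and `scope` are
/// assumed): stream blocks out through the NativeWriter wrapper, finish, then read them back.
///
///     TemporaryBlockStreamHolder stream(header, scope.get());
///     stream->write(block);                  /// NativeWriter::write via operator->
///     auto stat = stream.finishWriting();    /// compressed/uncompressed sizes and row count
///     auto reader = stream.getReadStream();
///     while (auto read_block = reader->read()) { /* consume read_block */ }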
|
||||
|
||||
}
|
||||
|
@ -934,7 +934,7 @@ static Block generateBlock(size_t size = 0)
|
||||
return block;
|
||||
}
|
||||
|
||||
static size_t readAllTemporaryData(TemporaryFileStream & stream)
|
||||
static size_t readAllTemporaryData(NativeReader & stream)
|
||||
{
|
||||
Block block;
|
||||
size_t read_rows = 0;
|
||||
@ -947,6 +947,7 @@ static size_t readAllTemporaryData(TemporaryFileStream & stream)
|
||||
}
|
||||
|
||||
TEST_F(FileCacheTest, temporaryData)
|
||||
try
|
||||
{
|
||||
ServerUUID::setRandomForUnitTests();
|
||||
DB::FileCacheSettings settings;
|
||||
@ -959,7 +960,7 @@ TEST_F(FileCacheTest, temporaryData)
|
||||
file_cache.initialize();
|
||||
|
||||
const auto user = FileCache::getCommonUser();
|
||||
auto tmp_data_scope = std::make_shared<TemporaryDataOnDiskScope>(nullptr, &file_cache, TemporaryDataOnDiskSettings{});
|
||||
auto tmp_data_scope = std::make_shared<TemporaryDataOnDiskScope>(&file_cache, TemporaryDataOnDiskSettings{});
|
||||
|
||||
auto some_data_holder = file_cache.getOrSet(FileCacheKey::fromPath("some_data"), 0, 5_KiB, 5_KiB, CreateFileSegmentSettings{}, 0, user);
|
||||
|
||||
@ -982,12 +983,17 @@ TEST_F(FileCacheTest, temporaryData)
|
||||
|
||||
size_t size_used_with_temporary_data;
|
||||
size_t segments_used_with_temporary_data;
|
||||
|
||||
|
||||
{
|
||||
auto tmp_data = std::make_unique<TemporaryDataOnDisk>(tmp_data_scope);
|
||||
TemporaryBlockStreamHolder stream(generateBlock(), tmp_data_scope.get());
|
||||
ASSERT_TRUE(stream);
|
||||
/// Do nothing with stream, just create it and destroy.
|
||||
}
|
||||
|
||||
auto & stream = tmp_data->createStream(generateBlock());
|
||||
|
||||
ASSERT_GT(stream.write(generateBlock(100)), 0);
|
||||
{
|
||||
TemporaryBlockStreamHolder stream(generateBlock(), tmp_data_scope.get());
|
||||
ASSERT_GT(stream->write(generateBlock(100)), 0);
|
||||
|
||||
ASSERT_GT(file_cache.getUsedCacheSize(), 0);
|
||||
ASSERT_GT(file_cache.getFileSegmentsNum(), 0);
|
||||
@ -995,22 +1001,22 @@ TEST_F(FileCacheTest, temporaryData)
|
||||
size_t used_size_before_attempt = file_cache.getUsedCacheSize();
|
||||
/// data can't be evicted because it is still held by `some_data_holder`
|
||||
ASSERT_THROW({
|
||||
stream.write(generateBlock(2000));
|
||||
stream.flush();
|
||||
stream->write(generateBlock(2000));
|
||||
stream.finishWriting();
|
||||
}, DB::Exception);
|
||||
|
||||
ASSERT_THROW(stream.finishWriting(), DB::Exception);
|
||||
|
||||
ASSERT_EQ(file_cache.getUsedCacheSize(), used_size_before_attempt);
|
||||
}
|
||||
|
||||
{
|
||||
size_t before_used_size = file_cache.getUsedCacheSize();
|
||||
auto tmp_data = std::make_unique<TemporaryDataOnDisk>(tmp_data_scope);
|
||||
|
||||
auto write_buf_stream = tmp_data->createRawStream();
|
||||
auto write_buf_stream = std::make_unique<TemporaryDataBuffer>(tmp_data_scope.get());
|
||||
|
||||
write_buf_stream->write("1234567890", 10);
|
||||
write_buf_stream->write("abcde", 5);
|
||||
auto read_buf = dynamic_cast<IReadableWriteBuffer *>(write_buf_stream.get())->tryGetReadBuffer();
|
||||
auto read_buf = write_buf_stream->read();
|
||||
|
||||
ASSERT_GT(file_cache.getUsedCacheSize(), before_used_size + 10);
|
||||
|
||||
@ -1023,22 +1029,22 @@ TEST_F(FileCacheTest, temporaryData)
|
||||
}
|
||||
|
||||
{
|
||||
auto tmp_data = std::make_unique<TemporaryDataOnDisk>(tmp_data_scope);
|
||||
auto & stream = tmp_data->createStream(generateBlock());
|
||||
TemporaryBlockStreamHolder stream(generateBlock(), tmp_data_scope.get());
|
||||
|
||||
ASSERT_GT(stream.write(generateBlock(100)), 0);
|
||||
ASSERT_GT(stream->write(generateBlock(100)), 0);
|
||||
|
||||
some_data_holder.reset();
|
||||
|
||||
stream.write(generateBlock(2000));
|
||||
stream->write(generateBlock(2000));
|
||||
|
||||
auto stat = stream.finishWriting();
|
||||
stream.finishWriting();
|
||||
|
||||
ASSERT_TRUE(fs::exists(stream.getPath()));
|
||||
ASSERT_GT(fs::file_size(stream.getPath()), 100);
|
||||
String file_path = stream.getHolder()->describeFilePath().substr(strlen("fscache://"));
|
||||
|
||||
ASSERT_EQ(stat.num_rows, 2100);
|
||||
ASSERT_EQ(readAllTemporaryData(stream), 2100);
|
||||
ASSERT_TRUE(fs::exists(file_path)) << "File " << file_path << " should exist";
|
||||
ASSERT_GT(fs::file_size(file_path), 100) << "File " << file_path << " should be larger than 100 bytes";
|
||||
|
||||
ASSERT_EQ(readAllTemporaryData(*stream.getReadStream()), 2100);
|
||||
|
||||
size_used_with_temporary_data = file_cache.getUsedCacheSize();
|
||||
segments_used_with_temporary_data = file_cache.getFileSegmentsNum();
|
||||
@ -1054,6 +1060,11 @@ TEST_F(FileCacheTest, temporaryData)
|
||||
ASSERT_LE(file_cache.getUsedCacheSize(), size_used_before_temporary_data);
|
||||
ASSERT_LE(file_cache.getFileSegmentsNum(), segments_used_before_temporary_data);
|
||||
}
|
||||
catch (...)
|
||||
{
|
||||
std::cerr << getCurrentExceptionMessage(true) << std::endl;
|
||||
throw;
|
||||
}
|
||||
|
||||
TEST_F(FileCacheTest, CachedReadBuffer)
|
||||
{
|
||||
@ -1148,18 +1159,22 @@ TEST_F(FileCacheTest, TemporaryDataReadBufferSize)
|
||||
DB::FileCache file_cache("cache", settings);
|
||||
file_cache.initialize();
|
||||
|
||||
auto tmp_data_scope = std::make_shared<TemporaryDataOnDiskScope>(/*volume=*/nullptr, &file_cache, /*settings=*/TemporaryDataOnDiskSettings{});
|
||||
|
||||
auto tmp_data = std::make_unique<TemporaryDataOnDisk>(tmp_data_scope);
|
||||
auto tmp_data_scope = std::make_shared<TemporaryDataOnDiskScope>(&file_cache, TemporaryDataOnDiskSettings{});
|
||||
|
||||
auto block = generateBlock(/*size=*/3);
|
||||
auto & stream = tmp_data->createStream(block);
|
||||
stream.write(block);
|
||||
stream.finishWriting();
|
||||
TemporaryBlockStreamHolder stream(block, tmp_data_scope.get());
|
||||
|
||||
/// We allocate buffer of size min(getSize(), DBMS_DEFAULT_BUFFER_SIZE)
|
||||
stream->write(block);
|
||||
auto stat = stream.finishWriting();
|
||||
|
||||
/// We allocate buffer of size min(stat.compressed_size, DBMS_DEFAULT_BUFFER_SIZE)
|
||||
/// We do care about buffer size because realistic external group by could generate 10^5 temporary files
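/// Rough arithmetic (illustrative; assumes DBMS_DEFAULT_BUFFER_SIZE is 1 MiB): 10^5 files with
/// full-size read buffers would need on the order of 100 GiB for buffers alone, while sizing each
/// buffer by min(compressed_size, DBMS_DEFAULT_BUFFER_SIZE) keeps it at ~62 bytes per file here.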
|
||||
ASSERT_EQ(stream.getSize(), 62);
|
||||
ASSERT_EQ(stat.compressed_size, 62);
|
||||
|
||||
auto reader = stream.getReadStream();
|
||||
auto * read_buf = reader.getHolder();
|
||||
const auto & internal_buffer = static_cast<TemporaryDataReadBuffer *>(read_buf)->compressed_buf.getHolder()->internalBuffer();
|
||||
ASSERT_EQ(internal_buffer.size(), 62);
|
||||
}
|
||||
|
||||
/// Temporary data stored on disk
|
||||
@ -1170,16 +1185,14 @@ TEST_F(FileCacheTest, TemporaryDataReadBufferSize)
|
||||
disk = createDisk("temporary_data_read_buffer_size_test_dir");
|
||||
VolumePtr volume = std::make_shared<SingleDiskVolume>("volume", disk);
|
||||
|
||||
auto tmp_data_scope = std::make_shared<TemporaryDataOnDiskScope>(/*volume=*/volume, /*cache=*/nullptr, /*settings=*/TemporaryDataOnDiskSettings{});
|
||||
|
||||
auto tmp_data = std::make_unique<TemporaryDataOnDisk>(tmp_data_scope);
|
||||
auto tmp_data_scope = std::make_shared<TemporaryDataOnDiskScope>(volume, TemporaryDataOnDiskSettings{});
|
||||
|
||||
auto block = generateBlock(/*size=*/3);
|
||||
auto & stream = tmp_data->createStream(block);
|
||||
stream.write(block);
|
||||
stream.finishWriting();
|
||||
TemporaryBlockStreamHolder stream(block, tmp_data_scope.get());
|
||||
stream->write(block);
|
||||
auto stat = stream.finishWriting();
|
||||
|
||||
ASSERT_EQ(stream.getSize(), 62);
|
||||
ASSERT_EQ(stat.compressed_size, 62);
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -45,6 +45,16 @@ void Expected::highlight(HighlightedRange range)
|
||||
return;
|
||||
|
||||
auto it = highlights.lower_bound(range);
|
||||
|
||||
/// Highlights are sorted by their starting position.
/// lower_bound(range) will find the first highlight where begin >= range.begin.
/// However, this does not ensure that the previous highlight's end <= range.begin.
/// By checking the previous highlight, if it exists, we ensure that
/// for each highlight x and the next one y: x.end <= y.begin, thus preventing any overlap.
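/// Minimal sketch (assumption about intersects()): for half-open ranges [a.begin, a.end) and
/// [b.begin, b.end) the overlap test the comment relies on is
///
///     bool overlap = a.begin < b.end && b.begin < a.end;
///
/// which is why the element preceding lower_bound(range) also has to be checked.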
|
||||
|
||||
if (it != highlights.begin())
|
||||
it = std::prev(it);
|
||||
|
||||
while (it != highlights.end() && range.begin < it->end)
|
||||
{
|
||||
if (intersects(range.begin, range.end, it->begin, it->end))
|
||||
|
@ -274,7 +274,7 @@ FiltersForTableExpressionMap collectFiltersForAnalysis(const QueryTreeNodePtr &
|
||||
return res;
|
||||
}
|
||||
|
||||
FiltersForTableExpressionMap collectFiltersForAnalysis(const QueryTreeNodePtr & query_tree_node, SelectQueryOptions & select_query_options)
|
||||
FiltersForTableExpressionMap collectFiltersForAnalysis(const QueryTreeNodePtr & query_tree_node, const SelectQueryOptions & select_query_options)
|
||||
{
|
||||
if (select_query_options.only_analyze)
|
||||
return {};
|
||||
|
@ -2,8 +2,8 @@
|
||||
|
||||
#include <Core/Settings.h>
|
||||
|
||||
#include <Common/scope_guard_safe.h>
|
||||
#include <Core/ParallelReplicasMode.h>
|
||||
#include <Common/scope_guard_safe.h>
|
||||
|
||||
#include <Columns/ColumnAggregateFunction.h>
|
||||
|
||||
@ -105,6 +105,7 @@ namespace Setting
|
||||
extern const SettingsBool optimize_move_to_prewhere_if_final;
|
||||
extern const SettingsBool use_concurrency_control;
|
||||
extern const SettingsBoolAuto query_plan_join_swap_table;
|
||||
extern const SettingsUInt64 min_joined_block_size_bytes;
|
||||
}
|
||||
|
||||
namespace ErrorCodes
|
||||
@ -660,6 +661,7 @@ std::unique_ptr<ExpressionStep> createComputeAliasColumnsStep(
|
||||
}
|
||||
|
||||
JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expression,
|
||||
const QueryTreeNodePtr & parent_join_tree,
|
||||
const SelectQueryInfo & select_query_info,
|
||||
const SelectQueryOptions & select_query_options,
|
||||
PlannerContextPtr & planner_context,
|
||||
@ -697,8 +699,6 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
|
||||
table_expression_query_info.table_expression = table_expression;
|
||||
if (const auto & filter_actions = table_expression_data.getFilterActions())
|
||||
table_expression_query_info.filter_actions_dag = std::make_shared<const ActionsDAG>(filter_actions->clone());
|
||||
table_expression_query_info.current_table_chosen_for_reading_with_parallel_replicas
|
||||
= table_node == planner_context->getGlobalPlannerContext()->parallel_replicas_table;
|
||||
|
||||
size_t max_streams = settings[Setting::max_threads];
|
||||
size_t max_threads_execute_query = settings[Setting::max_threads];
|
||||
@ -913,21 +913,35 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
|
||||
/// It is just a safety check needed until we have a proper sending plan to replicas.
|
||||
/// If we have a non-trivial storage like View it might create its own Planner inside read(), run findTableForParallelReplicas()
|
||||
/// and find some other table that might be used for reading with parallel replicas. It will lead to errors.
|
||||
const bool other_table_already_chosen_for_reading_with_parallel_replicas
|
||||
= planner_context->getGlobalPlannerContext()->parallel_replicas_table
|
||||
&& !table_expression_query_info.current_table_chosen_for_reading_with_parallel_replicas;
|
||||
if (other_table_already_chosen_for_reading_with_parallel_replicas)
|
||||
planner_context->getMutableQueryContext()->setSetting("allow_experimental_parallel_reading_from_replicas", Field(0));
|
||||
|
||||
storage->read(
|
||||
query_plan,
|
||||
columns_names,
|
||||
storage_snapshot,
|
||||
table_expression_query_info,
|
||||
query_context,
|
||||
from_stage,
|
||||
max_block_size,
|
||||
max_streams);
|
||||
const bool no_tables_or_another_table_chosen_for_reading_with_parallel_replicas_mode
|
||||
= query_context->canUseParallelReplicasOnFollower()
|
||||
&& table_node != planner_context->getGlobalPlannerContext()->parallel_replicas_table;
|
||||
if (no_tables_or_another_table_chosen_for_reading_with_parallel_replicas_mode)
|
||||
{
|
||||
auto mutable_context = Context::createCopy(query_context);
|
||||
mutable_context->setSetting("allow_experimental_parallel_reading_from_replicas", Field(0));
|
||||
storage->read(
|
||||
query_plan,
|
||||
columns_names,
|
||||
storage_snapshot,
|
||||
table_expression_query_info,
|
||||
std::move(mutable_context),
|
||||
from_stage,
|
||||
max_block_size,
|
||||
max_streams);
|
||||
}
|
||||
else
|
||||
{
|
||||
storage->read(
|
||||
query_plan,
|
||||
columns_names,
|
||||
storage_snapshot,
|
||||
table_expression_query_info,
|
||||
query_context,
|
||||
from_stage,
|
||||
max_block_size,
|
||||
max_streams);
|
||||
}
|
||||
|
||||
auto parallel_replicas_enabled_for_storage = [](const StoragePtr & table, const Settings & query_settings)
|
||||
{
|
||||
@ -943,6 +957,19 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
|
||||
/// query_plan can be empty if there is nothing to read
|
||||
if (query_plan.isInitialized() && parallel_replicas_enabled_for_storage(storage, settings))
|
||||
{
|
||||
const bool allow_parallel_replicas_for_table_expression = [](const QueryTreeNodePtr & join_tree_node)
|
||||
{
|
||||
const JoinNode * join_node = join_tree_node->as<JoinNode>();
|
||||
if (!join_node)
|
||||
return true;
|
||||
|
||||
const auto join_kind = join_node->getKind();
|
||||
if (join_kind == JoinKind::Left || join_kind == JoinKind::Right || join_kind == JoinKind::Inner)
|
||||
return true;
|
||||
|
||||
return false;
|
||||
}(parent_join_tree);
|
||||
|
||||
if (query_context->canUseParallelReplicasCustomKey() && query_context->getClientInfo().distributed_depth == 0)
|
||||
{
|
||||
if (auto cluster = query_context->getClusterForParallelReplicas();
|
||||
@ -965,7 +992,7 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
|
||||
query_plan = std::move(query_plan_parallel_replicas);
|
||||
}
|
||||
}
|
||||
else if (ClusterProxy::canUseParallelReplicasOnInitiator(query_context))
|
||||
else if (ClusterProxy::canUseParallelReplicasOnInitiator(query_context) && allow_parallel_replicas_for_table_expression)
|
||||
{
|
||||
// (1) find read step
|
||||
QueryPlan::Node * node = query_plan.getRootNode();
|
||||
@ -1291,11 +1318,13 @@ std::optional<ActionsDAG> createStepToDropColumns(
|
||||
return drop_unused_columns_after_join_actions_dag;
|
||||
}
|
||||
|
||||
JoinTreeQueryPlan buildQueryPlanForJoinNode(const QueryTreeNodePtr & join_table_expression,
|
||||
JoinTreeQueryPlan buildQueryPlanForJoinNode(
|
||||
const QueryTreeNodePtr & join_table_expression,
|
||||
JoinTreeQueryPlan left_join_tree_query_plan,
|
||||
JoinTreeQueryPlan right_join_tree_query_plan,
|
||||
const ColumnIdentifierSet & outer_scope_columns,
|
||||
PlannerContextPtr & planner_context)
|
||||
PlannerContextPtr & planner_context,
|
||||
const SelectQueryInfo & select_query_info)
|
||||
{
|
||||
auto & join_node = join_table_expression->as<JoinNode &>();
|
||||
if (left_join_tree_query_plan.from_stage != QueryProcessingStage::FetchColumns)
|
||||
@ -1604,8 +1633,7 @@ JoinTreeQueryPlan buildQueryPlanForJoinNode(const QueryTreeNodePtr & join_table_
|
||||
set_used_column_with_duplicates(columns_from_right_table, JoinTableSide::Right);
|
||||
}
|
||||
|
||||
auto join_algorithm = chooseJoinAlgorithm(table_join, join_node.getRightTableExpression(), left_header, right_header, planner_context);
|
||||
|
||||
auto join_algorithm = chooseJoinAlgorithm(table_join, join_node.getRightTableExpression(), left_header, right_header, planner_context, select_query_info);
|
||||
auto result_plan = QueryPlan();
|
||||
|
||||
bool is_filled_join = join_algorithm->isFilled();
|
||||
@ -1706,6 +1734,7 @@ JoinTreeQueryPlan buildQueryPlanForJoinNode(const QueryTreeNodePtr & join_table_
|
||||
right_plan.getCurrentHeader(),
|
||||
std::move(join_algorithm),
|
||||
settings[Setting::max_block_size],
|
||||
settings[Setting::min_joined_block_size_bytes],
|
||||
settings[Setting::max_threads],
|
||||
outer_scope_columns.empty() ? outer_scope_columns_nonempty : outer_scope_columns,
|
||||
false /*optimize_read_in_order*/,
|
||||
@ -1855,7 +1884,8 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node,
|
||||
const ColumnIdentifierSet & outer_scope_columns,
|
||||
PlannerContextPtr & planner_context)
|
||||
{
|
||||
auto table_expressions_stack = buildTableExpressionsStack(query_node->as<QueryNode &>().getJoinTree());
|
||||
const QueryTreeNodePtr & join_tree_node = query_node->as<QueryNode &>().getJoinTree();
|
||||
auto table_expressions_stack = buildTableExpressionsStack(join_tree_node);
|
||||
size_t table_expressions_stack_size = table_expressions_stack.size();
|
||||
bool is_single_table_expression = table_expressions_stack_size == 1;
|
||||
|
||||
@ -1890,7 +1920,9 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node,
|
||||
* Examples: Distributed, LiveView, Merge storages.
|
||||
*/
|
||||
auto left_table_expression = table_expressions_stack.front();
|
||||
auto left_table_expression_query_plan = buildQueryPlanForTableExpression(left_table_expression,
|
||||
auto left_table_expression_query_plan = buildQueryPlanForTableExpression(
|
||||
left_table_expression,
|
||||
join_tree_node,
|
||||
select_query_info,
|
||||
select_query_options,
|
||||
planner_context,
|
||||
@ -1944,11 +1976,13 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node,
|
||||
auto left_query_plan = std::move(query_plans_stack.back());
|
||||
query_plans_stack.pop_back();
|
||||
|
||||
query_plans_stack.push_back(buildQueryPlanForJoinNode(table_expression,
|
||||
query_plans_stack.push_back(buildQueryPlanForJoinNode(
|
||||
table_expression,
|
||||
std::move(left_query_plan),
|
||||
std::move(right_query_plan),
|
||||
table_expressions_outer_scope_columns[i],
|
||||
planner_context));
|
||||
planner_context,
|
||||
select_query_info));
|
||||
}
|
||||
else
|
||||
{
|
||||
@ -1963,7 +1997,9 @@ JoinTreeQueryPlan buildJoinTreeQueryPlan(const QueryTreeNodePtr & query_node,
|
||||
* table expression in subquery.
|
||||
*/
|
||||
bool is_remote = planner_context->getTableExpressionDataOrThrow(table_expression).isRemote();
|
||||
query_plans_stack.push_back(buildQueryPlanForTableExpression(table_expression,
|
||||
query_plans_stack.push_back(buildQueryPlanForTableExpression(
|
||||
table_expression,
|
||||
join_tree_node,
|
||||
select_query_info,
|
||||
select_query_options,
|
||||
planner_context,
|
||||
|
@ -789,12 +789,14 @@ std::shared_ptr<DirectKeyValueJoin> tryDirectJoin(const std::shared_ptr<TableJoi
|
||||
}
|
||||
}
|
||||
|
||||
static std::shared_ptr<IJoin> tryCreateJoin(JoinAlgorithm algorithm,
|
||||
static std::shared_ptr<IJoin> tryCreateJoin(
|
||||
JoinAlgorithm algorithm,
|
||||
std::shared_ptr<TableJoin> & table_join,
|
||||
const QueryTreeNodePtr & right_table_expression,
|
||||
const Block & left_table_expression_header,
|
||||
const Block & right_table_expression_header,
|
||||
const PlannerContextPtr & planner_context)
|
||||
const PlannerContextPtr & planner_context,
|
||||
const SelectQueryInfo & select_query_info)
|
||||
{
|
||||
if (table_join->kind() == JoinKind::Paste)
|
||||
return std::make_shared<PasteJoin>(table_join, right_table_expression_header);
|
||||
@ -824,7 +826,7 @@ static std::shared_ptr<IJoin> tryCreateJoin(JoinAlgorithm algorithm,
|
||||
{
|
||||
const auto & settings = query_context->getSettingsRef();
|
||||
StatsCollectingParams params{
|
||||
calculateCacheKey(table_join, right_table_expression),
|
||||
calculateCacheKey(table_join, right_table_expression, select_query_info),
|
||||
settings[Setting::collect_hash_table_stats_during_joins],
|
||||
query_context->getServerSettings()[ServerSetting::max_entries_for_hash_table_stats],
|
||||
settings[Setting::max_size_to_preallocate_for_joins]};
|
||||
@ -866,11 +868,13 @@ static std::shared_ptr<IJoin> tryCreateJoin(JoinAlgorithm algorithm,
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
std::shared_ptr<IJoin> chooseJoinAlgorithm(std::shared_ptr<TableJoin> & table_join,
|
||||
std::shared_ptr<IJoin> chooseJoinAlgorithm(
|
||||
std::shared_ptr<TableJoin> & table_join,
|
||||
const QueryTreeNodePtr & right_table_expression,
|
||||
const Block & left_table_expression_header,
|
||||
const Block & right_table_expression_header,
|
||||
const PlannerContextPtr & planner_context)
|
||||
const PlannerContextPtr & planner_context,
|
||||
const SelectQueryInfo & select_query_info)
|
||||
{
|
||||
if (table_join->getMixedJoinExpression()
|
||||
&& !table_join->isEnabledAlgorithm(JoinAlgorithm::HASH)
|
||||
@ -926,7 +930,14 @@ std::shared_ptr<IJoin> chooseJoinAlgorithm(std::shared_ptr<TableJoin> & table_jo
|
||||
|
||||
for (auto algorithm : table_join->getEnabledJoinAlgorithms())
|
||||
{
|
||||
auto join = tryCreateJoin(algorithm, table_join, right_table_expression, left_table_expression_header, right_table_expression_header, planner_context);
|
||||
auto join = tryCreateJoin(
|
||||
algorithm,
|
||||
table_join,
|
||||
right_table_expression,
|
||||
left_table_expression_header,
|
||||
right_table_expression_header,
|
||||
planner_context,
|
||||
select_query_info);
|
||||
if (join)
|
||||
return join;
|
||||
}
|
||||
|
@ -12,6 +12,8 @@
|
||||
namespace DB
|
||||
{
|
||||
|
||||
struct SelectQueryInfo;
|
||||
|
||||
/** Join clause represent single JOIN ON section clause.
|
||||
* Join clause consists of JOIN keys and conditions.
|
||||
*
|
||||
@ -218,10 +220,11 @@ std::optional<bool> tryExtractConstantFromJoinNode(const QueryTreeNodePtr & join
|
||||
* Table join structure can be modified during JOIN algorithm choosing for special JOIN algorithms.
|
||||
* For example JOIN with Dictionary engine, or JOIN with JOIN engine.
|
||||
*/
|
||||
std::shared_ptr<IJoin> chooseJoinAlgorithm(std::shared_ptr<TableJoin> & table_join,
|
||||
std::shared_ptr<IJoin> chooseJoinAlgorithm(
|
||||
std::shared_ptr<TableJoin> & table_join,
|
||||
const QueryTreeNodePtr & right_table_expression,
|
||||
const Block & left_table_expression_header,
|
||||
const Block & right_table_expression_header,
|
||||
const PlannerContextPtr & planner_context);
|
||||
|
||||
const PlannerContextPtr & planner_context,
|
||||
const SelectQueryInfo & select_query_info);
|
||||
}
|
||||
|
@ -23,6 +23,8 @@
|
||||
#include <Storages/StorageMaterializedView.h>
|
||||
#include <Storages/buildQueryTreeForShard.h>
|
||||
|
||||
#include <ranges>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace Setting
|
||||
@ -38,12 +40,12 @@ namespace ErrorCodes
|
||||
|
||||
/// Returns a list of (sub)queries (candidates) which may support parallel replicas.
|
||||
/// The rule is :
|
||||
/// subquery has only LEFT or ALL INNER JOIN (or none), and left part is MergeTree table or subquery candidate as well.
|
||||
/// subquery has only LEFT / RIGHT / ALL INNER JOIN (or none), and left / right part is MergeTree table or subquery candidate as well.
|
||||
///
|
||||
/// Additional checks are required, so we return many candidates. The innermost subquery is on top.
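/// Illustrative example (assumption): a query like
///     SELECT ... FROM (SELECT ... FROM mt_table ALL INNER JOIN other USING (k)) AS sub
/// yields both the inner SELECT and the outer one as candidates, since the join kind is allowed
/// and the left part (mt_table) is a MergeTree table; a FULL JOIN would stop the traversal.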
|
||||
std::stack<const QueryNode *> getSupportingParallelReplicasQuery(const IQueryTreeNode * query_tree_node)
|
||||
std::vector<const QueryNode *> getSupportingParallelReplicasQuery(const IQueryTreeNode * query_tree_node)
|
||||
{
|
||||
std::stack<const QueryNode *> res;
|
||||
std::vector<const QueryNode *> res;
|
||||
|
||||
while (query_tree_node)
|
||||
{
|
||||
@ -75,7 +77,7 @@ std::stack<const QueryNode *> getSupportingParallelReplicasQuery(const IQueryTre
|
||||
{
|
||||
const auto & query_node_to_process = query_tree_node->as<QueryNode &>();
|
||||
query_tree_node = query_node_to_process.getJoinTree().get();
|
||||
res.push(&query_node_to_process);
|
||||
res.push_back(&query_node_to_process);
|
||||
break;
|
||||
}
|
||||
case QueryTreeNodeType::UNION:
|
||||
@ -98,17 +100,16 @@ std::stack<const QueryNode *> getSupportingParallelReplicasQuery(const IQueryTre
|
||||
case QueryTreeNodeType::JOIN:
|
||||
{
|
||||
const auto & join_node = query_tree_node->as<JoinNode &>();
|
||||
auto join_kind = join_node.getKind();
|
||||
auto join_strictness = join_node.getStrictness();
|
||||
const auto join_kind = join_node.getKind();
|
||||
const auto join_strictness = join_node.getStrictness();
|
||||
|
||||
bool can_parallelize_join =
|
||||
join_kind == JoinKind::Left
|
||||
|| (join_kind == JoinKind::Inner && join_strictness == JoinStrictness::All);
|
||||
|
||||
if (!can_parallelize_join)
|
||||
if (join_kind == JoinKind::Left || (join_kind == JoinKind::Inner && join_strictness == JoinStrictness::All))
|
||||
query_tree_node = join_node.getLeftTableExpression().get();
|
||||
else if (join_kind == JoinKind::Right)
|
||||
query_tree_node = join_node.getRightTableExpression().get();
|
||||
else
|
||||
return {};
|
||||
|
||||
query_tree_node = join_node.getLeftTableExpression().get();
|
||||
break;
|
||||
}
|
||||
default:
|
||||
@ -163,14 +164,27 @@ QueryTreeNodePtr replaceTablesWithDummyTables(QueryTreeNodePtr query, const Cont
|
||||
return query->cloneAndReplace(visitor.replacement_map);
|
||||
}
|
||||
|
||||
#ifdef DUMP_PARALLEL_REPLICAS_QUERY_CANDIDATES
|
||||
static void dumpStack(const std::vector<const QueryNode *> & stack)
|
||||
{
|
||||
std::ranges::reverse_view rv{stack};
|
||||
for (const auto * node : rv)
|
||||
LOG_DEBUG(getLogger(__PRETTY_FUNCTION__), "{}\n{}", CityHash_v1_0_2::Hash128to64(node->getTreeHash()), node->dumpTree());
|
||||
}
|
||||
#endif
|
||||
|
||||
/// Find the best candidate for parallel replicas execution by verifying query plan.
|
||||
/// If query plan has only Expression, Filter of Join steps, we can execute it fully remotely and check the next query.
|
||||
/// If query plan has only Expression, Filter or Join steps, we can execute it fully remotely and check the next query.
|
||||
/// Otherwise we can execute current query up to WithMergableStage only.
|
||||
const QueryNode * findQueryForParallelReplicas(
|
||||
std::stack<const QueryNode *> stack,
|
||||
std::vector<const QueryNode *> stack,
|
||||
const std::unordered_map<const QueryNode *, const QueryPlan::Node *> & mapping,
|
||||
const Settings & settings)
|
||||
{
|
||||
#ifdef DUMP_PARALLEL_REPLICAS_QUERY_CANDIDATES
|
||||
dumpStack(stack);
|
||||
#endif
|
||||
|
||||
struct Frame
|
||||
{
|
||||
const QueryPlan::Node * node = nullptr;
|
||||
@ -189,8 +203,8 @@ const QueryNode * findQueryForParallelReplicas(
|
||||
|
||||
while (!stack.empty())
|
||||
{
|
||||
const QueryNode * const subquery_node = stack.top();
|
||||
stack.pop();
|
||||
const QueryNode * const subquery_node = stack.back();
|
||||
stack.pop_back();
|
||||
|
||||
auto it = mapping.find(subquery_node);
|
||||
/// This should not happen ideally.
|
||||
@ -236,7 +250,7 @@ const QueryNode * findQueryForParallelReplicas(
|
||||
else
|
||||
{
|
||||
const auto * join = typeid_cast<JoinStep *>(step);
|
||||
/// We've checked that JOIN is INNER/LEFT in query tree.
|
||||
/// We've checked that JOIN is INNER/LEFT/RIGHT on query tree level before.
|
||||
/// Don't distribute UNION node.
|
||||
if (!join)
|
||||
return res;
|
||||
@ -263,7 +277,7 @@ const QueryNode * findQueryForParallelReplicas(
|
||||
return res;
|
||||
}
|
||||
|
||||
const QueryNode * findQueryForParallelReplicas(const QueryTreeNodePtr & query_tree_node, SelectQueryOptions & select_query_options)
|
||||
const QueryNode * findQueryForParallelReplicas(const QueryTreeNodePtr & query_tree_node, const SelectQueryOptions & select_query_options)
|
||||
{
|
||||
if (select_query_options.only_analyze)
|
||||
return nullptr;
|
||||
@ -287,7 +301,7 @@ const QueryNode * findQueryForParallelReplicas(const QueryTreeNodePtr & query_tr
|
||||
return nullptr;
|
||||
|
||||
/// We don't have any subquery and storage can process parallel replicas by itself.
|
||||
if (stack.top() == query_tree_node.get())
|
||||
if (stack.back() == query_tree_node.get())
|
||||
return nullptr;
|
||||
|
||||
/// This is needed to avoid infinite recursion.
|
||||
@ -310,31 +324,33 @@ const QueryNode * findQueryForParallelReplicas(const QueryTreeNodePtr & query_tr
|
||||
const auto & mapping = planner.getQueryNodeToPlanStepMapping();
|
||||
const auto * res = findQueryForParallelReplicas(new_stack, mapping, context->getSettingsRef());
|
||||
|
||||
/// Now, return a query from initial stack.
|
||||
if (res)
|
||||
{
|
||||
// find query in initial stack
|
||||
while (!new_stack.empty())
|
||||
{
|
||||
if (res == new_stack.top())
|
||||
return stack.top();
|
||||
if (res == new_stack.back())
|
||||
{
|
||||
res = stack.back();
|
||||
break;
|
||||
}
|
||||
|
||||
stack.pop();
|
||||
new_stack.pop();
|
||||
stack.pop_back();
|
||||
new_stack.pop_back();
|
||||
}
|
||||
}
|
||||
|
||||
return res;
|
||||
}
|
||||
|
||||
static const TableNode * findTableForParallelReplicas(const IQueryTreeNode * query_tree_node)
|
||||
{
|
||||
std::stack<const IQueryTreeNode *> right_join_nodes;
|
||||
while (query_tree_node || !right_join_nodes.empty())
|
||||
std::stack<const IQueryTreeNode *> join_nodes;
|
||||
while (query_tree_node || !join_nodes.empty())
|
||||
{
|
||||
if (!query_tree_node)
|
||||
{
|
||||
query_tree_node = right_join_nodes.top();
|
||||
right_join_nodes.pop();
|
||||
query_tree_node = join_nodes.top();
|
||||
join_nodes.pop();
|
||||
}
|
||||
|
||||
auto join_tree_node_type = query_tree_node->getNodeType();
|
||||
@ -383,8 +399,23 @@ static const TableNode * findTableForParallelReplicas(const IQueryTreeNode * que
|
||||
case QueryTreeNodeType::JOIN:
|
||||
{
|
||||
const auto & join_node = query_tree_node->as<JoinNode &>();
|
||||
query_tree_node = join_node.getLeftTableExpression().get();
|
||||
right_join_nodes.push(join_node.getRightTableExpression().get());
|
||||
const auto join_kind = join_node.getKind();
|
||||
const auto join_strictness = join_node.getStrictness();
|
||||
|
||||
if (join_kind == JoinKind::Left || (join_kind == JoinKind::Inner and join_strictness == JoinStrictness::All))
|
||||
{
|
||||
query_tree_node = join_node.getLeftTableExpression().get();
|
||||
join_nodes.push(join_node.getRightTableExpression().get());
|
||||
}
|
||||
else if (join_kind == JoinKind::Right)
|
||||
{
|
||||
query_tree_node = join_node.getRightTableExpression().get();
|
||||
join_nodes.push(join_node.getLeftTableExpression().get());
|
||||
}
|
||||
else
|
||||
{
|
||||
return nullptr;
|
||||
}
|
||||
break;
|
||||
}
|
||||
default:
|
||||
@ -400,7 +431,7 @@ static const TableNode * findTableForParallelReplicas(const IQueryTreeNode * que
|
||||
return nullptr;
|
||||
}
|
||||
|
||||
const TableNode * findTableForParallelReplicas(const QueryTreeNodePtr & query_tree_node, SelectQueryOptions & select_query_options)
|
||||
const TableNode * findTableForParallelReplicas(const QueryTreeNodePtr & query_tree_node, const SelectQueryOptions & select_query_options)
|
||||
{
|
||||
if (select_query_options.only_analyze)
|
||||
return nullptr;
|
||||
|
@ -15,10 +15,10 @@ struct SelectQueryOptions;
|
||||
|
||||
/// Find a query which can be executed with parallel replicas up to WithMergableStage.
|
||||
/// Returned query will always contain some (>1) subqueries, possibly with joins.
|
||||
const QueryNode * findQueryForParallelReplicas(const QueryTreeNodePtr & query_tree_node, SelectQueryOptions & select_query_options);
|
||||
const QueryNode * findQueryForParallelReplicas(const QueryTreeNodePtr & query_tree_node, const SelectQueryOptions & select_query_options);
|
||||
|
||||
/// Find a table from which we should read on follower replica. It's the left-most table within all JOINs and UNIONs.
|
||||
const TableNode * findTableForParallelReplicas(const QueryTreeNodePtr & query_tree_node, SelectQueryOptions & select_query_options);
|
||||
const TableNode * findTableForParallelReplicas(const QueryTreeNodePtr & query_tree_node, const SelectQueryOptions & select_query_options);
|
||||
|
||||
struct JoinTreeQueryPlan;
|
||||
|
||||
|
@ -79,7 +79,7 @@ bool ExecutionThreadContext::executeTask()
|
||||
|
||||
if (trace_processors)
|
||||
{
|
||||
span = std::make_unique<OpenTelemetry::SpanHolder>(node->processor->getName());
|
||||
span = std::make_unique<OpenTelemetry::SpanHolder>(node->processor->getUniqID());
|
||||
span->addAttribute("thread_number", thread_number);
|
||||
}
|
||||
std::optional<Stopwatch> execution_time_watch;
|
||||
|
@ -10,6 +10,20 @@
|
||||
namespace DB
|
||||
{
|
||||
|
||||
IProcessor::IProcessor()
|
||||
{
|
||||
processor_index = CurrentThread::isInitialized() ? CurrentThread::get().getNextPipelineProcessorIndex() : 0;
|
||||
}
|
||||
|
||||
IProcessor::IProcessor(InputPorts inputs_, OutputPorts outputs_) : inputs(std::move(inputs_)), outputs(std::move(outputs_))
|
||||
{
|
||||
for (auto & port : inputs)
|
||||
port.processor = this;
|
||||
for (auto & port : outputs)
|
||||
port.processor = this;
|
||||
processor_index = CurrentThread::isInitialized() ? CurrentThread::get().getNextPipelineProcessorIndex() : 0;
|
||||
}
|
||||
|
||||
void IProcessor::setQueryPlanStep(IQueryPlanStep * step, size_t group)
|
||||
{
|
||||
query_plan_step = step;
|
||||
@ -18,6 +32,7 @@ void IProcessor::setQueryPlanStep(IQueryPlanStep * step, size_t group)
|
||||
{
|
||||
plan_step_name = step->getName();
|
||||
plan_step_description = step->getStepDescription();
|
||||
step_uniq_id = step->getUniqID();
|
||||
}
|
||||
}
|
||||
|
||||
|
@ -1,9 +1,12 @@
|
||||
#pragma once
|
||||
|
||||
#include <memory>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Processors/Port.h>
|
||||
#include <Processors/QueryPlan/IQueryPlanStep.h>
|
||||
#include <Common/CurrentThread.h>
|
||||
#include <Common/Stopwatch.h>
|
||||
|
||||
#include <memory>
|
||||
|
||||
class EventCounter;
|
||||
|
||||
@ -121,19 +124,14 @@ protected:
|
||||
OutputPorts outputs;
|
||||
|
||||
public:
|
||||
IProcessor() = default;
|
||||
IProcessor();
|
||||
|
||||
IProcessor(InputPorts inputs_, OutputPorts outputs_)
|
||||
: inputs(std::move(inputs_)), outputs(std::move(outputs_))
|
||||
{
|
||||
for (auto & port : inputs)
|
||||
port.processor = this;
|
||||
for (auto & port : outputs)
|
||||
port.processor = this;
|
||||
}
|
||||
IProcessor(InputPorts inputs_, OutputPorts outputs_);
|
||||
|
||||
virtual String getName() const = 0;
|
||||
|
||||
String getUniqID() const { return fmt::format("{}_{}", getName(), processor_index); }
|
||||
|
||||
enum class Status : uint8_t
|
||||
{
|
||||
/// Processor needs some data at its inputs to proceed.
|
||||
@ -314,6 +312,7 @@ public:
|
||||
void setQueryPlanStep(IQueryPlanStep * step, size_t group = 0);
|
||||
|
||||
IQueryPlanStep * getQueryPlanStep() const { return query_plan_step; }
|
||||
const String & getStepUniqID() const { return step_uniq_id; }
|
||||
size_t getQueryPlanStepGroup() const { return query_plan_step_group; }
|
||||
const String & getPlanStepName() const { return plan_step_name; }
|
||||
const String & getPlanStepDescription() const { return plan_step_description; }
|
||||
@ -407,7 +406,10 @@ private:
|
||||
size_t stream_number = NO_STREAM;
|
||||
|
||||
IQueryPlanStep * query_plan_step = nullptr;
|
||||
String step_uniq_id;
|
||||
size_t query_plan_step_group = 0;
|
||||
|
||||
size_t processor_index = 0;
|
||||
String plan_step_name;
|
||||
String plan_step_description;
|
||||
};
|
||||
|
@ -5,6 +5,7 @@
|
||||
#include <Interpreters/ExpressionActions.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <Common/JSONBuilder.h>
|
||||
#include <DataTypes/DataTypeFactory.h>
|
||||
#include <DataTypes/DataTypeLowCardinality.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
#include <Functions/IFunction.h>
|
||||
@ -52,7 +53,7 @@ static ActionsAndName splitSingleAndFilter(ActionsDAG & dag, const ActionsDAG::N
|
||||
auto filter_type = removeLowCardinality(split_filter_node->result_type);
|
||||
if (!filter_type->onlyNull() && !isUInt8(removeNullable(filter_type)))
|
||||
{
|
||||
DataTypePtr cast_type = std::make_shared<DataTypeUInt8>();
|
||||
DataTypePtr cast_type = DataTypeFactory::instance().get("Bool");
|
||||
if (filter_type->isNullable())
|
||||
cast_type = std::make_shared<DataTypeNullable>(std::move(cast_type));
|
||||
|
||||
|
@ -10,6 +10,11 @@ namespace ErrorCodes
|
||||
extern const int LOGICAL_ERROR;
|
||||
}
|
||||
|
||||
IQueryPlanStep::IQueryPlanStep()
|
||||
{
|
||||
step_index = CurrentThread::isInitialized() ? CurrentThread::get().getNextPlanStepIndex() : 0;
|
||||
}
|
||||
|
||||
void IQueryPlanStep::updateInputHeaders(Headers input_headers_)
|
||||
{
|
||||
input_headers = std::move(input_headers_);
|
||||
|
@ -1,8 +1,13 @@
|
||||
#pragma once
|
||||
|
||||
#include <Common/CurrentThread.h>
|
||||
#include <Core/Block.h>
|
||||
#include <Core/SortDescription.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Processors/QueryPlan/BuildQueryPipelineSettings.h>
|
||||
|
||||
#include <fmt/core.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
@ -26,6 +31,8 @@ using Headers = std::vector<Header>;
|
||||
class IQueryPlanStep
|
||||
{
|
||||
public:
|
||||
IQueryPlanStep();
|
||||
|
||||
virtual ~IQueryPlanStep() = default;
|
||||
|
||||
virtual String getName() const = 0;
|
||||
@ -77,6 +84,8 @@ public:
|
||||
|
||||
/// Updates the input streams of the given step. Used during query plan optimizations.
|
||||
/// It won't do any validation of new streams, so it is your responsibility to ensure that this update doesn't break anything
|
||||
String getUniqID() const { return fmt::format("{}_{}", getName(), step_index); }
|
||||
|
||||
/// (e.g. you correctly remove / add columns).
|
||||
void updateInputHeaders(Headers input_headers_);
|
||||
void updateInputHeader(Header input_header, size_t idx = 0);
|
||||
@ -95,6 +104,9 @@ protected:
|
||||
Processors processors;
|
||||
|
||||
static void describePipeline(const Processors & processors, FormatSettings & settings);
|
||||
|
||||
private:
|
||||
size_t step_index = 0;
|
||||
};
|
||||
|
||||
using QueryPlanStepPtr = std::unique_ptr<IQueryPlanStep>;
|
||||
|
@ -1,9 +1,10 @@
|
||||
#include <Processors/QueryPlan/JoinStep.h>
|
||||
#include <QueryPipeline/QueryPipelineBuilder.h>
|
||||
#include <Processors/Transforms/JoiningTransform.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <Interpreters/IJoin.h>
|
||||
#include <Interpreters/TableJoin.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <Processors/QueryPlan/JoinStep.h>
|
||||
#include <Processors/Transforms/JoiningTransform.h>
|
||||
#include <Processors/Transforms/SquashingTransform.h>
|
||||
#include <QueryPipeline/QueryPipelineBuilder.h>
|
||||
#include <Common/JSONBuilder.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
#include <Processors/Transforms/ColumnPermuteTransform.h>
|
||||
@ -75,6 +76,7 @@ JoinStep::JoinStep(
|
||||
const Header & right_header_,
|
||||
JoinPtr join_,
|
||||
size_t max_block_size_,
|
||||
size_t min_block_size_bytes_,
|
||||
size_t max_streams_,
|
||||
NameSet required_output_,
|
||||
bool keep_left_read_in_order_,
|
||||
@ -132,7 +134,13 @@ QueryPipelineBuilderPtr JoinStep::updatePipeline(QueryPipelineBuilders pipelines
|
||||
});
|
||||
}
|
||||
|
||||
return joined_pipeline;
|
||||
if (join->supportParallelJoin())
|
||||
{
|
||||
pipeline->addSimpleTransform([&](const Block & header)
|
||||
{ return std::make_shared<SimpleSquashingChunksTransform>(header, 0, min_block_size_bytes); });
|
||||
}
|
||||
|
||||
return pipeline;
|
||||
}
|
||||
|
||||
bool JoinStep::allowPushDownToRight() const
|
||||
|
@ -19,6 +19,7 @@ public:
|
||||
const Header & right_header_,
|
||||
JoinPtr join_,
|
||||
size_t max_block_size_,
|
||||
size_t min_block_size_bytes_,
|
||||
size_t max_streams_,
|
||||
NameSet required_output_,
|
||||
bool keep_left_read_in_order_,
|
||||
@ -48,6 +49,7 @@ private:
|
||||
|
||||
JoinPtr join;
|
||||
size_t max_block_size;
|
||||
size_t min_block_size_bytes;
|
||||
size_t max_streams;
|
||||
|
||||
const NameSet required_output;
|
||||
|
@ -647,7 +647,7 @@ std::optional<String> optimizeUseAggregateProjections(QueryPlan::Node & node, Qu
|
||||
|
||||
range.begin = exact_ranges[i].end;
|
||||
ordinary_reading_marks -= exact_ranges[i].end - exact_ranges[i].begin;
|
||||
exact_count += part_with_ranges.data_part->index_granularity.getRowsCountInRange(exact_ranges[i]);
|
||||
exact_count += part_with_ranges.data_part->index_granularity->getRowsCountInRange(exact_ranges[i]);
|
||||
++i;
|
||||
}
|
||||
|
||||
|
@ -3,12 +3,15 @@
|
||||
#include <Common/checkStackSize.h>
|
||||
#include <Interpreters/ActionsDAG.h>
|
||||
#include <Interpreters/Context.h>
|
||||
#include <Interpreters/IJoin.h>
|
||||
#include <Interpreters/InterpreterSelectQueryAnalyzer.h>
|
||||
#include <Interpreters/StorageID.h>
|
||||
#include <Interpreters/TableJoin.h>
|
||||
#include <Parsers/ASTFunction.h>
|
||||
#include <Processors/QueryPlan/ConvertingActions.h>
|
||||
#include <Processors/QueryPlan/ExpressionStep.h>
|
||||
#include <Processors/QueryPlan/ISourceStep.h>
|
||||
#include <Processors/QueryPlan/JoinStep.h>
|
||||
#include <Processors/QueryPlan/ReadFromMergeTree.h>
|
||||
#include <Processors/Sources/NullSource.h>
|
||||
#include <Processors/Transforms/ExpressionTransform.h>
|
||||
@ -62,7 +65,14 @@ std::pair<std::unique_ptr<QueryPlan>, bool> createLocalPlanForParallelReplicas(
|
||||
break;
|
||||
|
||||
if (!node->children.empty())
|
||||
node = node->children.at(0);
|
||||
{
|
||||
// In case of RIGHT JOIN, reading from the right table is parallelized among replicas
|
||||
const JoinStep * join = typeid_cast<JoinStep*>(node->step.get());
|
||||
if (join && join->getJoin()->getTableJoin().kind() == JoinKind::Right)
|
||||
node = node->children.at(1);
|
||||
else
|
||||
node = node->children.at(0);
|
||||
}
|
||||
else
|
||||
node = nullptr;
|
||||
}
|
||||
|
@ -201,7 +201,7 @@ public:
|
||||
|
||||
size_t getMarkRows(size_t part_idx, size_t mark) const
|
||||
{
|
||||
return parts[part_idx].data_part->index_granularity.getMarkRows(mark);
|
||||
return parts[part_idx].data_part->index_granularity->getMarkRows(mark);
|
||||
}
|
||||
private:
|
||||
const RangesInDataParts & parts;
|
||||
@ -444,7 +444,7 @@ SplitPartsRangesResult splitPartsRanges(RangesInDataParts ranges_in_data_parts,
|
||||
parts_ranges.push_back(
|
||||
{index_access.getValue(part_index, range.begin), range, part_index, PartsRangesIterator::EventType::RangeStart});
|
||||
|
||||
const bool value_is_defined_at_end_mark = range.end < index_granularity.getMarksCount();
|
||||
const bool value_is_defined_at_end_mark = range.end < index_granularity->getMarksCount();
|
||||
if (!value_is_defined_at_end_mark)
|
||||
continue;
|
||||
|
||||
@ -667,7 +667,7 @@ std::pair<std::vector<RangesInDataParts>, std::vector<Values>> splitIntersecting
|
||||
PartRangeIndex parts_range_start_index(parts_range_start);
|
||||
parts_ranges_queue.push({std::move(parts_range_start), std::move(parts_range_start_index)});
|
||||
|
||||
const bool value_is_defined_at_end_mark = range.end < index_granularity.getMarksCount();
|
||||
const bool value_is_defined_at_end_mark = range.end < index_granularity->getMarksCount();
|
||||
if (!value_is_defined_at_end_mark)
|
||||
continue;
|
||||
|
||||
|
@@ -207,6 +207,7 @@ QueryPipelineBuilderPtr QueryPlan::buildQueryPipeline(
static void explainStep(const IQueryPlanStep & step, JSONBuilder::JSONMap & map, const QueryPlan::ExplainPlanOptions & options)
{
map.add("Node Type", step.getName());
map.add("Node Id", step.getUniqID());

if (options.description)
{

@@ -667,7 +667,7 @@ Pipe ReadFromMergeTree::readInOrder(
part_with_ranges.ranges.size(),
read_type == ReadType::InReverseOrder ? " reverse " : " ",
part_with_ranges.data_part->name, total_rows,
part_with_ranges.data_part->index_granularity.getMarkStartingRow(part_with_ranges.ranges.front().begin));
part_with_ranges.data_part->index_granularity->getMarkStartingRow(part_with_ranges.ranges.front().begin));

MergeTreeSelectAlgorithmPtr algorithm;
if (read_type == ReadType::InReverseOrder)

@@ -1759,7 +1759,7 @@ ReadFromMergeTree::AnalysisResultPtr ReadFromMergeTree::selectRangesToRead(
return std::make_shared<AnalysisResult>(std::move(result));

for (const auto & part : parts)
total_marks_pk += part->index_granularity.getMarksCountWithoutFinal();
total_marks_pk += part->index_granularity->getMarksCountWithoutFinal();
parts_before_pk = parts.size();

auto reader_settings = getMergeTreeReaderSettings(context_, query_info_);

@@ -282,9 +282,9 @@ void SortingStep::mergeSorting(
if (increase_sort_description_compile_attempts)
increase_sort_description_compile_attempts = false;

auto tmp_data_on_disk = sort_settings.tmp_data
? std::make_unique<TemporaryDataOnDisk>(sort_settings.tmp_data, CurrentMetrics::TemporaryFilesForSort)
: std::unique_ptr<TemporaryDataOnDisk>();
TemporaryDataOnDiskScopePtr tmp_data_on_disk = nullptr;
if (sort_settings.tmp_data)
tmp_data_on_disk = sort_settings.tmp_data->childScope(CurrentMetrics::TemporaryFilesForSort);

return std::make_shared<MergeSortingTransform>(
header,

@@ -54,9 +54,9 @@ namespace
class SourceFromNativeStream : public ISource
{
public:
explicit SourceFromNativeStream(TemporaryFileStream * tmp_stream_)
: ISource(tmp_stream_->getHeader())
, tmp_stream(tmp_stream_)
explicit SourceFromNativeStream(const Block & header, TemporaryBlockStreamReaderHolder tmp_stream_)
: ISource(header)
, tmp_stream(std::move(tmp_stream_))
{}

String getName() const override { return "SourceFromNativeStream"; }

@@ -69,7 +69,7 @@ namespace
auto block = tmp_stream->read();
if (!block)
{
tmp_stream = nullptr;
tmp_stream.reset();
return {};
}
return convertToChunk(block);

@@ -78,7 +78,7 @@ namespace
std::optional<ReadProgress> getReadProgress() override { return std::nullopt; }

private:
TemporaryFileStream * tmp_stream;
TemporaryBlockStreamReaderHolder tmp_stream;
};
}

@@ -811,15 +811,18 @@ void AggregatingTransform::initGenerate()
Pipes pipes;
/// Merge external data from all aggregators used in query.
for (const auto & aggregator : *params->aggregator_list_ptr)
for (auto & aggregator : *params->aggregator_list_ptr)
{
const auto & tmp_data = aggregator.getTemporaryData();
for (auto * tmp_stream : tmp_data.getStreams())
pipes.emplace_back(Pipe(std::make_unique<SourceFromNativeStream>(tmp_stream)));
tmp_files = aggregator.detachTemporaryData();
num_streams += tmp_files.size();

num_streams += tmp_data.getStreams().size();
compressed_size += tmp_data.getStat().compressed_size;
uncompressed_size += tmp_data.getStat().uncompressed_size;
for (auto & tmp_stream : tmp_files)
{
auto stat = tmp_stream.finishWriting();
compressed_size += stat.compressed_size;
uncompressed_size += stat.uncompressed_size;
pipes.emplace_back(Pipe(std::make_unique<SourceFromNativeStream>(tmp_stream.getHeader(), tmp_stream.getReadStream())));
}
}

LOG_DEBUG(

@@ -216,6 +216,8 @@ private:
RowsBeforeStepCounterPtr rows_before_aggregation;

std::list<TemporaryBlockStreamHolder> tmp_files;

void initGenerate();
};
@@ -76,8 +76,9 @@ IProcessor::Status JoiningTransform::prepare()
/// Output if has data.
if (has_output)
{
output.push(std::move(output_chunk));
has_output = false;
output.push(std::move(output_chunks.front()));
output_chunks.pop_front();
has_output = !output_chunks.empty();

return Status::PortFull;
}

@@ -123,10 +124,10 @@ void JoiningTransform::work()
{
if (has_input)
{
chassert(output_chunks.empty());
transform(input_chunk);
output_chunk.swap(input_chunk);
has_input = not_processed != nullptr;
has_output = !output_chunk.empty();
has_output = !output_chunks.empty();
}
else
{

@@ -154,8 +155,7 @@ void JoiningTransform::work()
return;
}

auto rows = block.rows();
output_chunk.setColumns(block.getColumns(), rows);
output_chunks.emplace_back(block.getColumns(), block.rows());
has_output = true;
}
}

@@ -174,7 +174,7 @@ void JoiningTransform::transform(Chunk & chunk)
}
}

Block block;
Blocks res;
if (on_totals)
{
const auto & left_totals = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns());

@@ -185,39 +185,58 @@ void JoiningTransform::transform(Chunk & chunk)
if (default_totals && !right_totals)
return;

block = outputs.front().getHeader().cloneEmpty();
JoinCommon::joinTotals(left_totals, right_totals, join->getTableJoin(), block);
res.emplace_back();
res.back() = outputs.front().getHeader().cloneEmpty();
JoinCommon::joinTotals(left_totals, right_totals, join->getTableJoin(), res.back());
}
else
block = readExecute(chunk);
auto num_rows = block.rows();
chunk.setColumns(block.getColumns(), num_rows);
res = readExecute(chunk);

std::ranges::for_each(res, [this](Block & block) { output_chunks.emplace_back(block.getColumns(), block.rows()); });
}

Block JoiningTransform::readExecute(Chunk & chunk)
Blocks JoiningTransform::readExecute(Chunk & chunk)
{
Block res;
Blocks res;
Block block;

auto join_block = [&]()
{
if (join->isScatteredJoin())
{
join->joinBlock(block, remaining_blocks, res);
if (remaining_blocks.rows())
not_processed = std::make_shared<ExtraBlock>();
else
not_processed.reset();
}
else
{
join->joinBlock(block, not_processed);
res.push_back(std::move(block));
}
};

if (!not_processed)
{
if (chunk.hasColumns())
res = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns());
block = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns());

if (res)
join->joinBlock(res, not_processed);
if (block)
join_block();
}
else if (not_processed->empty()) /// There's not processed data inside expression.
{
if (chunk.hasColumns())
res = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns());
block = inputs.front().getHeader().cloneWithColumns(chunk.detachColumns());

not_processed.reset();
join->joinBlock(res, not_processed);
join_block();
}
else
{
res = std::move(not_processed->block);
join->joinBlock(res, not_processed);
block = std::move(not_processed->block);
join_block();
}

return res;
@@ -1,6 +1,10 @@
#pragma once
#include <Processors/IProcessor.h>

#include <Interpreters/HashJoin/ScatteredBlock.h>
#include <Processors/Chunk.h>
#include <Processors/IProcessor.h>

#include <deque>
#include <memory>

namespace DB

@@ -66,7 +70,7 @@ protected:
private:
Chunk input_chunk;
Chunk output_chunk;
std::deque<Chunk> output_chunks;
bool has_input = false;
bool has_output = false;
bool stop_reading = false;

@@ -80,13 +84,16 @@ private:
bool default_totals;
bool initialized = false;

/// Only used with ConcurrentHashJoin
ExtraScatteredBlocks remaining_blocks;

ExtraBlockPtr not_processed;

FinishCounterPtr finish_counter;
IBlocksStreamPtr non_joined_blocks;
size_t max_block_size;

Block readExecute(Chunk & chunk);
Blocks readExecute(Chunk & chunk);
};

/// Fills Join with block from right table.
@@ -27,15 +27,20 @@ namespace ProfileEvents
namespace DB
{

namespace ErrorCodes
{
extern const int LOGICAL_ERROR;
}

class BufferingToFileTransform : public IAccumulatingTransform
{
public:
BufferingToFileTransform(const Block & header, TemporaryFileStream & tmp_stream_, LoggerPtr log_)
BufferingToFileTransform(const Block & header, TemporaryBlockStreamHolder tmp_stream_, LoggerPtr log_)
: IAccumulatingTransform(header, header)
, tmp_stream(tmp_stream_)
, tmp_stream(std::move(tmp_stream_))
, log(log_)
{
LOG_INFO(log, "Sorting and writing part of data into temporary file {}", tmp_stream.getPath());
LOG_INFO(log, "Sorting and writing part of data into temporary file {}", tmp_stream.getHolder()->describeFilePath());
ProfileEvents::increment(ProfileEvents::ExternalSortWritePart);
}

@@ -44,14 +49,15 @@ public:
void consume(Chunk chunk) override
{
Block block = getInputPort().getHeader().cloneWithColumns(chunk.detachColumns());
tmp_stream.write(block);
tmp_stream->write(block);
}

Chunk generate() override
{
if (!tmp_stream.isWriteFinished())
if (!tmp_read_stream)
{
auto stat = tmp_stream.finishWriting();
tmp_read_stream = tmp_stream.getReadStream();

ProfileEvents::increment(ProfileEvents::ExternalProcessingCompressedBytesTotal, stat.compressed_size);
ProfileEvents::increment(ProfileEvents::ExternalProcessingUncompressedBytesTotal, stat.uncompressed_size);

@@ -59,10 +65,11 @@ public:
ProfileEvents::increment(ProfileEvents::ExternalSortUncompressedBytes, stat.uncompressed_size);

LOG_INFO(log, "Done writing part of data into temporary file {}, compressed {}, uncompressed {} ",
tmp_stream.getPath(), ReadableSize(static_cast<double>(stat.compressed_size)), ReadableSize(static_cast<double>(stat.uncompressed_size)));
tmp_stream.getHolder()->describeFilePath(),
ReadableSize(static_cast<double>(stat.compressed_size)), ReadableSize(static_cast<double>(stat.uncompressed_size)));
}

Block block = tmp_stream.read();
Block block = tmp_read_stream.value()->read();
if (!block)
return {};

@@ -71,7 +78,8 @@ public:
}

private:
TemporaryFileStream & tmp_stream;
TemporaryBlockStreamHolder tmp_stream;
std::optional<TemporaryBlockStreamReaderHolder> tmp_read_stream;

LoggerPtr log;
};

@@ -86,7 +94,7 @@ MergeSortingTransform::MergeSortingTransform(
size_t max_bytes_before_remerge_,
double remerge_lowered_memory_bytes_ratio_,
size_t max_bytes_before_external_sort_,
TemporaryDataOnDiskPtr tmp_data_,
TemporaryDataOnDiskScopePtr tmp_data_,
size_t min_free_disk_space_)
: SortingTransform(header, description_, max_merged_block_size_, limit_, increase_sort_description_compile_attempts)
, max_bytes_before_remerge(max_bytes_before_remerge_)

@@ -168,9 +176,13 @@ void MergeSortingTransform::consume(Chunk chunk)
*/
if (max_bytes_before_external_sort && sum_bytes_in_blocks > max_bytes_before_external_sort)
{
if (!tmp_data)
throw Exception(ErrorCodes::LOGICAL_ERROR, "TemporaryDataOnDisk is not set for MergeSortingTransform");
temporary_files_num++;

/// If there's less free disk space than reserve_size, an exception will be thrown
size_t reserve_size = sum_bytes_in_blocks + min_free_disk_space;
auto & tmp_stream = tmp_data->createStream(header_without_constants, reserve_size);
TemporaryBlockStreamHolder tmp_stream(header_without_constants, tmp_data.get(), reserve_size);
size_t max_merged_block_size = this->max_merged_block_size;
if (max_block_bytes > 0 && sum_rows_in_blocks > 0 && sum_bytes_in_blocks > 0)
{

@@ -179,7 +191,7 @@ void MergeSortingTransform::consume(Chunk chunk)
max_merged_block_size = std::max(std::min(max_merged_block_size, max_block_bytes / avg_row_bytes), 128UL);
}
merge_sorter = std::make_unique<MergeSorter>(header_without_constants, std::move(chunks), description, max_merged_block_size, limit);
auto current_processor = std::make_shared<BufferingToFileTransform>(header_without_constants, tmp_stream, log);
auto current_processor = std::make_shared<BufferingToFileTransform>(header_without_constants, std::move(tmp_stream), log);

processors.emplace_back(current_processor);

@@ -223,14 +235,14 @@ void MergeSortingTransform::generate()
{
if (!generated_prefix)
{
size_t num_tmp_files = tmp_data ? tmp_data->getStreams().size() : 0;
if (num_tmp_files == 0)
merge_sorter
= std::make_unique<MergeSorter>(header_without_constants, std::move(chunks), description, max_merged_block_size, limit);
if (temporary_files_num == 0)
{
merge_sorter = std::make_unique<MergeSorter>(header_without_constants, std::move(chunks), description, max_merged_block_size, limit);
}
else
{
ProfileEvents::increment(ProfileEvents::ExternalSortMerge);
LOG_INFO(log, "There are {} temporary sorted parts to merge", num_tmp_files);
LOG_INFO(log, "There are {} temporary sorted parts to merge", temporary_files_num);

processors.emplace_back(std::make_shared<MergeSorterSource>(
header_without_constants, std::move(chunks), description, max_merged_block_size, limit));
@@ -29,7 +29,7 @@ public:
size_t max_bytes_before_remerge_,
double remerge_lowered_memory_bytes_ratio_,
size_t max_bytes_before_external_sort_,
TemporaryDataOnDiskPtr tmp_data_,
TemporaryDataOnDiskScopePtr tmp_data_,
size_t min_free_disk_space_);

String getName() const override { return "MergeSortingTransform"; }

@@ -45,7 +45,8 @@ private:
size_t max_bytes_before_remerge;
double remerge_lowered_memory_bytes_ratio;
size_t max_bytes_before_external_sort;
TemporaryDataOnDiskPtr tmp_data;
TemporaryDataOnDiskScopePtr tmp_data;
size_t temporary_files_num = 0;
size_t min_free_disk_space;
size_t max_block_bytes;

@@ -78,7 +78,7 @@ Chunk SimpleSquashingChunksTransform::generate()
bool SimpleSquashingChunksTransform::canGenerate()
{
return !squashed_chunk.empty();
return squashed_chunk.hasRows();
}

Chunk SimpleSquashingChunksTransform::getRemaining()

@@ -26,6 +26,7 @@
#include <Processors/Transforms/MergingAggregatedMemoryEfficientTransform.h>
#include <Processors/Transforms/PartialSortingTransform.h>
#include <Processors/Transforms/PasteJoinTransform.h>
#include <Processors/Transforms/SquashingTransform.h>
#include <Processors/Transforms/TotalsHavingTransform.h>
#include <QueryPipeline/narrowPipe.h>
#include <Common/CurrentThread.h>

@@ -385,6 +386,7 @@ std::unique_ptr<QueryPipelineBuilder> QueryPipelineBuilder::joinPipelinesRightLe
JoinPtr join,
const Block & output_header,
size_t max_block_size,
size_t min_block_size_bytes,
size_t max_streams,
bool keep_left_read_in_order,
Processors * collected_processors)

@@ -398,10 +400,10 @@ std::unique_ptr<QueryPipelineBuilder> QueryPipelineBuilder::joinPipelinesRightLe
left->pipe.collected_processors = collected_processors;

/// Collect the NEW processors for the right pipeline.
QueryPipelineProcessorsCollector collector(*right);
/// Remember the last step of the right pipeline.
IQueryPlanStep * step = right->pipe.processors->back()->getQueryPlanStep();
/// Collect the NEW processors for the right pipeline.
QueryPipelineProcessorsCollector collector(*right, step);

/// In case joined subquery has totals, and we don't, add default chunk to totals.
bool default_totals = false;

@@ -441,9 +443,12 @@ std::unique_ptr<QueryPipelineBuilder> QueryPipelineBuilder::joinPipelinesRightLe
Processors processors;
for (auto & outport : outports)
{
auto squashing = std::make_shared<SimpleSquashingChunksTransform>(right->getHeader(), 0, min_block_size_bytes);
connect(*outport, squashing->getInputs().front());
processors.emplace_back(squashing);
auto adding_joined = std::make_shared<FillingRightJoinSideTransform>(right->getHeader(), join);
connect(*outport, adding_joined->getInputs().front());
processors.emplace_back(adding_joined);
connect(squashing->getOutputPort(), adding_joined->getInputs().front());
processors.emplace_back(std::move(adding_joined));
}
return processors;
};

@@ -497,10 +502,13 @@ std::unique_ptr<QueryPipelineBuilder> QueryPipelineBuilder::joinPipelinesRightLe
Block left_header = left->getHeader();
for (size_t i = 0; i < num_streams; ++i)
{
auto squashing = std::make_shared<SimpleSquashingChunksTransform>(left->getHeader(), 0, min_block_size_bytes);
connect(**lit, squashing->getInputs().front());

auto joining = std::make_shared<JoiningTransform>(
left_header, output_header, join, max_block_size, false, default_totals, finish_counter);

connect(**lit, joining->getInputs().front());
connect(squashing->getOutputPort(), joining->getInputs().front());
connect(**rit, joining->getInputs().back());
if (delayed_root)
{

@@ -532,6 +540,7 @@ std::unique_ptr<QueryPipelineBuilder> QueryPipelineBuilder::joinPipelinesRightLe
if (collected_processors)
collected_processors->emplace_back(joining);

left->pipe.processors->emplace_back(std::move(squashing));
left->pipe.processors->emplace_back(std::move(joining));
}

@@ -126,6 +126,7 @@ public:
JoinPtr join,
const Block & output_header,
size_t max_block_size,
size_t min_block_size_bytes,
size_t max_streams,
bool keep_left_read_in_order,
Processors * collected_processors = nullptr);
@@ -30,7 +30,7 @@ void printPipeline(const Processors & processors, const Statuses & statuses, Wri
for (const auto & processor : processors)
{
const auto & description = processor->getDescription();
out << " n" << get_proc_id(*processor) << "[label=\"" << processor->getName() << (description.empty() ? "" : ":") << description;
out << " n" << get_proc_id(*processor) << "[label=\"" << processor->getUniqID() << (description.empty() ? "" : ":") << description;

if (statuses_iter != statuses.end())
{

@@ -170,15 +170,16 @@ void HTTPHandler::pushDelayedResults(Output & used_output)
for (auto & write_buf : write_buffers)
{
if (!write_buf)
continue;

IReadableWriteBuffer * write_buf_concrete = dynamic_cast<IReadableWriteBuffer *>(write_buf.get());
if (write_buf_concrete)
if (auto * write_buf_concrete = dynamic_cast<TemporaryDataBuffer *>(write_buf.get()))
{
ReadBufferPtr reread_buf = write_buf_concrete->tryGetReadBuffer();
if (reread_buf)
read_buffers.emplace_back(wrapReadBufferPointer(reread_buf));
if (auto reread_buf = write_buf_concrete->read())
read_buffers.emplace_back(std::move(reread_buf));
}

if (auto * write_buf_concrete = dynamic_cast<IReadableWriteBuffer *>(write_buf.get()))
{
if (auto reread_buf = write_buf_concrete->tryGetReadBuffer())
read_buffers.emplace_back(std::move(reread_buf));
}
}

@@ -321,21 +322,19 @@ void HTTPHandler::processQuery(
if (buffer_size_memory > 0 || buffer_until_eof)
{
CascadeWriteBuffer::WriteBufferPtrs cascade_buffer1;
CascadeWriteBuffer::WriteBufferConstructors cascade_buffer2;
CascadeWriteBuffer::WriteBufferPtrs cascade_buffers;
CascadeWriteBuffer::WriteBufferConstructors cascade_buffers_lazy;

if (buffer_size_memory > 0)
cascade_buffer1.emplace_back(std::make_shared<MemoryWriteBuffer>(buffer_size_memory));
cascade_buffers.emplace_back(std::make_shared<MemoryWriteBuffer>(buffer_size_memory));

if (buffer_until_eof)
{
auto tmp_data = std::make_shared<TemporaryDataOnDisk>(server.context()->getTempDataOnDisk());

auto create_tmp_disk_buffer = [tmp_data] (const WriteBufferPtr &) -> WriteBufferPtr {
return tmp_data->createRawStream();
};

cascade_buffer2.emplace_back(std::move(create_tmp_disk_buffer));
auto tmp_data = server.context()->getTempDataOnDisk();
cascade_buffers_lazy.emplace_back([tmp_data](const WriteBufferPtr &) -> WriteBufferPtr
{
return std::make_unique<TemporaryDataBuffer>(tmp_data.get());
});
}
else
{

@@ -351,10 +350,10 @@ void HTTPHandler::processQuery(
return next_buffer;
};

cascade_buffer2.emplace_back(push_memory_buffer_and_continue);
cascade_buffers_lazy.emplace_back(push_memory_buffer_and_continue);
}

used_output.out_delayed_and_compressed_holder = std::make_unique<CascadeWriteBuffer>(std::move(cascade_buffer1), std::move(cascade_buffer2));
used_output.out_delayed_and_compressed_holder = std::make_unique<CascadeWriteBuffer>(std::move(cascade_buffers), std::move(cascade_buffers_lazy));
used_output.out_maybe_delayed_and_compressed = used_output.out_delayed_and_compressed_holder.get();
}
else
@@ -29,6 +29,7 @@
#include <Storages/MergeTree/checkDataPart.h>
#include <Storages/MergeTree/Backup.h>
#include <Storages/StorageReplicatedMergeTree.h>
#include <Storages/MergeTree/MergeTreeIndexGranularityAdaptive.h>
#include <base/JSON.h>
#include <boost/algorithm/string/join.hpp>
#include <Common/CurrentMetrics.h>

@@ -626,11 +627,12 @@ UInt64 IMergeTreeDataPart::getIndexSizeInAllocatedBytes() const
UInt64 IMergeTreeDataPart::getIndexGranularityBytes() const
{
return index_granularity.getBytesSize();
return index_granularity->getBytesSize();
}

UInt64 IMergeTreeDataPart::getIndexGranularityAllocatedBytes() const
{
return index_granularity.getBytesAllocated();
return index_granularity->getBytesAllocated();
}

void IMergeTreeDataPart::assertState(const std::initializer_list<MergeTreeDataPartState> & affordable_states) const

@@ -661,7 +663,7 @@ void IMergeTreeDataPart::assertOnDisk() const
UInt64 IMergeTreeDataPart::getMarksCount() const
{
return index_granularity.getMarksCount();
return index_granularity->getMarksCount();
}

UInt64 IMergeTreeDataPart::getExistingBytesOnDisk() const

@@ -746,7 +748,6 @@ void IMergeTreeDataPart::loadColumnsChecksumsIndexes(bool require_columns_checks
loadChecksums(require_columns_checksums);

loadIndexGranularity();
index_granularity.shrinkToFitInMemory();

if (!(*storage.getSettings())[MergeTreeSetting::primary_key_lazy_load])
getIndex();

@@ -942,13 +943,13 @@ void IMergeTreeDataPart::loadIndex() const
for (size_t i = 0; i < key_size; ++i)
{
loaded_index[i] = primary_key.data_types[i]->createColumn();
loaded_index[i]->reserve(index_granularity.getMarksCount());
loaded_index[i]->reserve(index_granularity->getMarksCount());
}

String index_name = "primary" + getIndexExtensionFromFilesystem(getDataPartStorage());
String index_path = fs::path(getDataPartStorage().getRelativePath()) / index_name;
auto index_file = metadata_manager->read(index_name);
size_t marks_count = index_granularity.getMarksCount();
size_t marks_count = index_granularity->getMarksCount();

Serializations key_serializations(key_size);
for (size_t j = 0; j < key_size; ++j)

@@ -988,6 +989,8 @@ void IMergeTreeDataPart::loadIndex() const
"{}, read: {})", index_path, marks_count, loaded_index[i]->size());
}

LOG_TEST(storage.log, "Loaded primary key index for part {}, {} columns are kept in memory", name, key_size);

if (!index_file->eof())
throw Exception(ErrorCodes::EXPECTED_END_OF_FILE, "Index file {} is unexpectedly long", index_path);

@@ -1361,7 +1364,7 @@ void IMergeTreeDataPart::loadRowsCount()
assertEOF(*buf);
};

if (index_granularity.empty())
if (index_granularity->empty())
{
rows_count = 0;
}

@@ -1396,9 +1399,9 @@ void IMergeTreeDataPart::loadRowsCount()
backQuote(column.name), rows_in_column, name, rows_count);
}

size_t last_possibly_incomplete_mark_rows = index_granularity.getLastNonFinalMarkRows();
size_t last_possibly_incomplete_mark_rows = index_granularity->getLastNonFinalMarkRows();
/// All this rows have to be written in column
size_t index_granularity_without_last_mark = index_granularity.getTotalRows() - last_possibly_incomplete_mark_rows;
size_t index_granularity_without_last_mark = index_granularity->getTotalRows() - last_possibly_incomplete_mark_rows;
/// We have more rows in column than in index granularity without last possibly incomplete mark
if (rows_in_column < index_granularity_without_last_mark)
{

@@ -1408,7 +1411,7 @@ void IMergeTreeDataPart::loadRowsCount()
"and size of single value, "
"but index granularity in part {} without last mark has {} rows, which "
"is more than in column",
backQuote(column.name), rows_in_column, name, index_granularity.getTotalRows());
backQuote(column.name), rows_in_column, name, index_granularity->getTotalRows());
}

/// In last mark we actually written less or equal rows than stored in last mark of index granularity

@@ -1456,8 +1459,8 @@ void IMergeTreeDataPart::loadRowsCount()
column.name, column_size, sizeof_field);
}

size_t last_mark_index_granularity = index_granularity.getLastNonFinalMarkRows();
size_t rows_approx = index_granularity.getTotalRows();
size_t last_mark_index_granularity = index_granularity->getLastNonFinalMarkRows();
size_t rows_approx = index_granularity->getTotalRows();
if (!(rows_count <= rows_approx && rows_approx < rows_count + last_mark_index_granularity))
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected size of column {}: "
"{} rows, expected {}+-{} rows according to the index",

@@ -1520,7 +1523,7 @@ UInt64 IMergeTreeDataPart::readExistingRowsCount()
while (current_row < rows_count)
{
size_t rows_to_read = index_granularity.getMarkRows(current_mark);
size_t rows_to_read = index_granularity->getMarkRows(current_mark);
continue_reading = (current_mark != 0);

Columns result;

@@ -1968,6 +1971,9 @@ void IMergeTreeDataPart::initializeIndexGranularityInfo()
index_granularity_info = MergeTreeIndexGranularityInfo(storage, *mrk_type);
else
index_granularity_info = MergeTreeIndexGranularityInfo(storage, part_type);

/// It may be converted to constant index granularity after loading it.
index_granularity = std::make_unique<MergeTreeIndexGranularityAdaptive>();
}

void IMergeTreeDataPart::remove()

@@ -2241,9 +2247,9 @@ void IMergeTreeDataPart::checkConsistency(bool require_part_metadata) const
"part_state: [{}]",
columns.toString(),
index_granularity_info.getMarkSizeInBytes(columns.size()),
index_granularity.getMarksCount(),
index_granularity->getMarksCount(),
index_granularity_info.describe(),
index_granularity.describe(),
index_granularity->describe(),
part_state);

e.addMessage(debug_info);

@@ -321,7 +321,7 @@ public:
/// Amount of rows between marks
/// As index always loaded into memory
MergeTreeIndexGranularity index_granularity;
MergeTreeIndexGranularityPtr index_granularity;

/// Index that for each part stores min and max values of a set of columns. This allows quickly excluding
/// parts based on conditions on these columns imposed by a query.
@@ -1,5 +1,6 @@
#include <Storages/MergeTree/IMergeTreeDataPartWriter.h>
#include <Common/MemoryTrackerBlockerInThread.h>
#include <Storages/MergeTree/MergeTreeIndexGranularity.h>
#include <Columns/ColumnSparse.h>

namespace DB

@@ -11,7 +12,6 @@ namespace ErrorCodes
extern const int NO_SUCH_COLUMN_IN_TABLE;
}

Block getIndexBlockAndPermute(const Block & block, const Names & names, const IColumn::Permutation * permutation)
{
Block result;

@@ -57,7 +57,7 @@ IMergeTreeDataPartWriter::IMergeTreeDataPartWriter(
const StorageMetadataPtr & metadata_snapshot_,
const VirtualsDescriptionPtr & virtual_columns_,
const MergeTreeWriterSettings & settings_,
const MergeTreeIndexGranularity & index_granularity_)
MergeTreeIndexGranularityPtr index_granularity_)
: data_part_name(data_part_name_)
, serializations(serializations_)
, index_granularity_info(index_granularity_info_)

@@ -68,7 +68,7 @@ IMergeTreeDataPartWriter::IMergeTreeDataPartWriter(
, settings(settings_)
, with_final_mark(settings.can_use_adaptive_granularity)
, data_part_storage(data_part_storage_)
, index_granularity(index_granularity_)
, index_granularity(std::move(index_granularity_))
{
}

@@ -145,7 +145,7 @@ MergeTreeDataPartWriterPtr createMergeTreeDataPartCompactWriter(
const String & marks_file_extension_,
const CompressionCodecPtr & default_codec_,
const MergeTreeWriterSettings & writer_settings,
const MergeTreeIndexGranularity & computed_index_granularity);
MergeTreeIndexGranularityPtr computed_index_granularity);

MergeTreeDataPartWriterPtr createMergeTreeDataPartWideWriter(
const String & data_part_name_,

@@ -162,8 +162,7 @@ MergeTreeDataPartWriterPtr createMergeTreeDataPartWideWriter(
const String & marks_file_extension_,
const CompressionCodecPtr & default_codec_,
const MergeTreeWriterSettings & writer_settings,
const MergeTreeIndexGranularity & computed_index_granularity);

MergeTreeIndexGranularityPtr computed_index_granularity);

MergeTreeDataPartWriterPtr createMergeTreeDataPartWriter(
MergeTreeDataPartType part_type,

@@ -182,12 +181,26 @@ MergeTreeDataPartWriterPtr createMergeTreeDataPartWriter(
const String & marks_file_extension_,
const CompressionCodecPtr & default_codec_,
const MergeTreeWriterSettings & writer_settings,
const MergeTreeIndexGranularity & computed_index_granularity)
MergeTreeIndexGranularityPtr computed_index_granularity)
{
if (part_type == MergeTreeDataPartType::Compact)
return createMergeTreeDataPartCompactWriter(data_part_name_, logger_name_, serializations_, data_part_storage_,
index_granularity_info_, storage_settings_, columns_list, column_positions, metadata_snapshot, virtual_columns, indices_to_recalc, stats_to_recalc_,
marks_file_extension_, default_codec_, writer_settings, computed_index_granularity);
return createMergeTreeDataPartCompactWriter(
data_part_name_,
logger_name_,
serializations_,
data_part_storage_,
index_granularity_info_,
storage_settings_,
columns_list,
column_positions,
metadata_snapshot,
virtual_columns,
indices_to_recalc,
stats_to_recalc_,
marks_file_extension_,
default_codec_,
writer_settings,
std::move(computed_index_granularity));
if (part_type == MergeTreeDataPartType::Wide)
return createMergeTreeDataPartWideWriter(
data_part_name_,

@@ -204,7 +217,7 @@ MergeTreeDataPartWriterPtr createMergeTreeDataPartWriter(
marks_file_extension_,
default_codec_,
writer_settings,
computed_index_granularity);
std::move(computed_index_granularity));
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unknown part type: {}", part_type.toString());
}
@@ -36,7 +36,7 @@ public:
const StorageMetadataPtr & metadata_snapshot_,
const VirtualsDescriptionPtr & virtual_columns_,
const MergeTreeWriterSettings & settings_,
const MergeTreeIndexGranularity & index_granularity_ = {});
MergeTreeIndexGranularityPtr index_granularity_);

virtual ~IMergeTreeDataPartWriter();

@@ -52,7 +52,7 @@ public:
PlainMarksByName releaseCachedMarks();

const MergeTreeIndexGranularity & getIndexGranularity() const { return index_granularity; }
MergeTreeIndexGranularityPtr getIndexGranularity() const { return index_granularity; }

virtual Block getColumnsSample() const = 0;

@@ -76,7 +76,7 @@ protected:
MutableDataPartStoragePtr data_part_storage;
MutableColumns index_columns;
MergeTreeIndexGranularity index_granularity;
MergeTreeIndexGranularityPtr index_granularity;
/// Marks that will be saved to cache on finish.
PlainMarksByName cached_marks;
};

@@ -101,6 +101,6 @@ MergeTreeDataPartWriterPtr createMergeTreeDataPartWriter(
const String & marks_file_extension,
const CompressionCodecPtr & default_codec_,
const MergeTreeWriterSettings & writer_settings,
const MergeTreeIndexGranularity & computed_index_granularity);
MergeTreeIndexGranularityPtr computed_index_granularity);

}

@@ -29,7 +29,7 @@ public:
virtual void write(const Block & block) = 0;

const MergeTreeIndexGranularity & getIndexGranularity() const
MergeTreeIndexGranularityPtr getIndexGranularity() const
{
return writer->getIndexGranularity();
}

@@ -51,7 +51,7 @@ public:
const MergeTreeIndexGranularityInfo & getIndexGranularityInfo() const override { return data_part->index_granularity_info; }

const MergeTreeIndexGranularity & getIndexGranularity() const override { return data_part->index_granularity; }
const MergeTreeIndexGranularity & getIndexGranularity() const override { return *data_part->index_granularity; }

const SerializationInfoByName & getSerializationInfos() const override { return data_part->getSerializationInfos(); }

@@ -52,7 +52,7 @@ MergeListElement::MergeListElement(const StorageID & table_id_, FutureMergedMuta
total_size_bytes_compressed += source_part->getBytesOnDisk();
total_size_bytes_uncompressed += source_part->getTotalColumnsSize().data_uncompressed;
total_size_marks += source_part->getMarksCount();
total_rows_count += source_part->index_granularity.getTotalRows();
total_rows_count += source_part->index_granularity->getTotalRows();
}

if (!future_part->parts.empty())
@@ -8,6 +8,7 @@
#include <Common/logger_useful.h>
#include <Core/Settings.h>
#include <Common/ProfileEvents.h>
#include <Storages/MergeTree/MergeTreeIndexGranularity.h>
#include <Compression/CompressedWriteBuffer.h>
#include <DataTypes/ObjectUtils.h>
#include <DataTypes/Serializations/SerializationInfo.h>

@@ -65,8 +66,14 @@ namespace ProfileEvents
extern const Event MergeProjectionStageExecuteMilliseconds;
}

namespace CurrentMetrics
{
extern const Metric TemporaryFilesForMerge;
}

namespace DB
{

namespace Setting
{
extern const SettingsBool compile_sort_description;

@@ -94,6 +101,7 @@ namespace MergeTreeSetting
extern const MergeTreeSettingsUInt64 vertical_merge_algorithm_min_rows_to_activate;
extern const MergeTreeSettingsBool vertical_merge_remote_filesystem_prefetch;
extern const MergeTreeSettingsBool prewarm_mark_cache;
extern const MergeTreeSettingsBool use_const_adaptive_granularity;
}

namespace ErrorCodes

@@ -124,6 +132,7 @@ static ColumnsStatistics getStatisticsForColumns(
return all_statistics;
}

/// Manages the "rows_sources" temporary file that is used during vertical merge.
class RowsSourcesTemporaryFile : public ITemporaryFileLookup
{

@@ -132,9 +141,7 @@ public:
static constexpr auto FILE_ID = "rows_sources";

explicit RowsSourcesTemporaryFile(TemporaryDataOnDiskScopePtr temporary_data_on_disk_)
: tmp_disk(std::make_unique<TemporaryDataOnDisk>(temporary_data_on_disk_))
, uncompressed_write_buffer(tmp_disk->createRawStream())
, tmp_file_name_on_disk(uncompressed_write_buffer->getFileName())
: temporary_data_on_disk(temporary_data_on_disk_->childScope(CurrentMetrics::TemporaryFilesForMerge))
{
}

@@ -143,11 +150,11 @@ public:
if (name != FILE_ID)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected temporary file name requested: {}", name);

if (write_buffer)
if (tmp_data_buffer)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Temporary file was already requested for writing, there musto be only one writer");

write_buffer = (std::make_unique<CompressedWriteBuffer>(*uncompressed_write_buffer));
return *write_buffer;
tmp_data_buffer = std::make_unique<TemporaryDataBuffer>(temporary_data_on_disk.get());
return *tmp_data_buffer;
}

std::unique_ptr<ReadBuffer> getTemporaryFileForReading(const String & name) override

@@ -163,25 +170,24 @@ public:
return std::make_unique<ReadBufferFromEmptyFile>();

/// Reopen the file for each read so that multiple reads can be performed in parallel and there is no need to seek to the beginning.
auto raw_file_read_buffer = std::make_unique<ReadBufferFromFile>(tmp_file_name_on_disk);
return std::make_unique<CompressedReadBufferFromFile>(std::move(raw_file_read_buffer));
return tmp_data_buffer->read();
}

/// Returns written data size in bytes
size_t finalizeWriting()
{
write_buffer->finalize();
uncompressed_write_buffer->finalize();
if (!tmp_data_buffer)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Temporary file was not requested for writing");

auto stat = tmp_data_buffer->finishWriting();
finalized = true;
final_size = write_buffer->count();
final_size = stat.uncompressed_size;
return final_size;
}

private:
std::unique_ptr<TemporaryDataOnDisk> tmp_disk;
std::unique_ptr<WriteBufferFromFileBase> uncompressed_write_buffer;
std::unique_ptr<WriteBuffer> write_buffer;
const String tmp_file_name_on_disk;
std::unique_ptr<TemporaryDataBuffer> tmp_data_buffer;
TemporaryDataOnDiskScopePtr temporary_data_on_disk;
bool finalized = false;
size_t final_size = 0;
};

@@ -409,10 +415,11 @@ bool MergeTask::ExecuteAndFinalizeHorizontalPart::prepare() const
};

auto mutations_snapshot = global_ctx->data->getMutationsSnapshot(params);
auto storage_settings = global_ctx->data->getSettings();

SerializationInfo::Settings info_settings =
{
.ratio_of_defaults_for_sparse = (*global_ctx->data->getSettings())[MergeTreeSetting::ratio_of_defaults_for_sparse_serialization],
.ratio_of_defaults_for_sparse = (*storage_settings)[MergeTreeSetting::ratio_of_defaults_for_sparse_serialization],
.choose_kind = true,
};

@@ -461,6 +468,7 @@ bool MergeTask::ExecuteAndFinalizeHorizontalPart::prepare() const
ctx->sum_input_rows_upper_bound = global_ctx->merge_list_element_ptr->total_rows_count;
ctx->sum_compressed_bytes_upper_bound = global_ctx->merge_list_element_ptr->total_size_bytes_compressed;
ctx->sum_uncompressed_bytes_upper_bound = global_ctx->merge_list_element_ptr->total_size_bytes_uncompressed;

global_ctx->chosen_merge_algorithm = chooseMergeAlgorithm();
global_ctx->merge_list_element_ptr->merge_algorithm.store(global_ctx->chosen_merge_algorithm, std::memory_order_relaxed);

@@ -504,8 +512,14 @@ bool MergeTask::ExecuteAndFinalizeHorizontalPart::prepare() const
throw Exception(ErrorCodes::LOGICAL_ERROR, "Merge algorithm must be chosen");
}

/// If merge is vertical we cannot calculate it
ctx->blocks_are_granules_size = (global_ctx->chosen_merge_algorithm == MergeAlgorithm::Vertical);
bool use_adaptive_granularity = global_ctx->new_data_part->index_granularity_info.mark_type.adaptive;
bool use_const_adaptive_granularity = (*storage_settings)[MergeTreeSetting::use_const_adaptive_granularity];

/// If merge is vertical we cannot calculate it.
/// If granularity is constant we don't need to calculate it.
ctx->blocks_are_granules_size = use_adaptive_granularity
&& !use_const_adaptive_granularity
&& global_ctx->chosen_merge_algorithm == MergeAlgorithm::Vertical;

/// Merged stream will be created and available as merged_stream variable
createMergedStream();

@@ -547,7 +561,14 @@ bool MergeTask::ExecuteAndFinalizeHorizontalPart::prepare() const
}
}

bool save_marks_in_cache = (*global_ctx->data->getSettings())[MergeTreeSetting::prewarm_mark_cache] && global_ctx->context->getMarkCache();
auto index_granularity_ptr = createMergeTreeIndexGranularity(
ctx->sum_input_rows_upper_bound,
ctx->sum_uncompressed_bytes_upper_bound,
*storage_settings,
global_ctx->new_data_part->index_granularity_info,
ctx->blocks_are_granules_size);

bool save_marks_in_cache = (*storage_settings)[MergeTreeSetting::prewarm_mark_cache] && global_ctx->context->getMarkCache();

global_ctx->to = std::make_shared<MergedBlockOutputStream>(
global_ctx->new_data_part,

@@ -556,6 +577,7 @@ bool MergeTask::ExecuteAndFinalizeHorizontalPart::prepare() const
MergeTreeIndexFactory::instance().getMany(global_ctx->merging_skip_indexes),
getStatisticsForColumns(global_ctx->merging_columns, global_ctx->metadata_snapshot),
ctx->compression_codec,
std::move(index_granularity_ptr),
global_ctx->txn ? global_ctx->txn->tid : Tx::PrehistoricTID,
/*reset_columns=*/ true,
save_marks_in_cache,

@@ -874,6 +896,7 @@ bool MergeTask::VerticalMergeStage::prepareVerticalMergeForAllColumns() const
/// In special case, when there is only one source part, and no rows were skipped, we may have
/// skipped writing rows_sources file. Otherwise rows_sources_count must be equal to the total
/// number of input rows.
/// Note that only one byte index is written for each row, so number of rows is equals to the number of bytes written.
if ((rows_sources_count > 0 || global_ctx->future_part->parts.size() > 1) && sum_input_rows_exact != rows_sources_count + input_rows_filtered)
throw Exception(
ErrorCodes::LOGICAL_ERROR,

@@ -881,6 +904,7 @@ bool MergeTask::VerticalMergeStage::prepareVerticalMergeForAllColumns() const
"of bytes written to rows_sources file ({}). It is a bug.",
sum_input_rows_exact, input_rows_filtered, rows_sources_count);

ctx->it_name_and_type = global_ctx->gathering_columns.cbegin();

const auto & settings = global_ctx->context->getSettingsRef();

@@ -1095,12 +1119,12 @@ void MergeTask::VerticalMergeStage::prepareVerticalMergeForOneColumn() const
global_ctx->new_data_part,
global_ctx->metadata_snapshot,
columns_list,
ctx->compression_codec,
column_pipepline.indexes_to_recalc,
getStatisticsForColumns(columns_list, global_ctx->metadata_snapshot),
ctx->compression_codec,
global_ctx->to->getIndexGranularity(),
&global_ctx->written_offset_columns,
save_marks_in_cache,
global_ctx->to->getIndexGranularity());
save_marks_in_cache);

ctx->column_elems_written = 0;
}

@@ -1718,7 +1742,7 @@ void MergeTask::ExecuteAndFinalizeHorizontalPart::createMergedStream() const
sort_description,
partition_key_columns,
global_ctx->merging_params,
(is_vertical_merge ? RowsSourcesTemporaryFile::FILE_ID : ""), /// rows_sources' temporary file is used only for vertical merge
(is_vertical_merge ? RowsSourcesTemporaryFile::FILE_ID : ""), /// rows_sources' temporary file is used only for vertical merge
(*data_settings)[MergeTreeSetting::merge_max_block_size],
(*data_settings)[MergeTreeSetting::merge_max_block_size_bytes],
ctx->blocks_are_granules_size,

@@ -243,7 +243,6 @@ private:
bool need_remove_expired_values{false};
bool force_ttl{false};
CompressionCodecPtr compression_codec{nullptr};
size_t sum_input_rows_upper_bound{0};
std::shared_ptr<RowsSourcesTemporaryFile> rows_sources_temporary_file;
std::optional<ColumnSizeEstimator> column_sizes{};

@@ -261,7 +260,9 @@ private:
std::function<bool()> is_cancelled{};

/// Local variables for this stage
size_t sum_input_rows_upper_bound{0};
size_t sum_compressed_bytes_upper_bound{0};
size_t sum_uncompressed_bytes_upper_bound{0};
bool blocks_are_granules_size{false};

LoggerPtr log{getLogger("MergeTask::PrepareStage")};
@@ -83,6 +83,7 @@
#include <Storages/StorageMergeTree.h>
#include <Storages/StorageReplicatedMergeTree.h>
#include <Storages/VirtualColumnUtils.h>
#include <Storages/MergeTree/MergeTreeIndexGranularityAdaptive.h>

#include <boost/range/algorithm_ext/erase.hpp>
#include <boost/algorithm/string/join.hpp>

@@ -7237,7 +7238,7 @@ Block MergeTreeData::getMinMaxCountProjectionBlock(
/// It's extremely rare that some parts have final marks while others don't. To make it
/// straightforward, disable minmax_count projection when `max(pk)' encounters any part with
/// no final mark.
if (need_primary_key_max_column && !part->index_granularity.hasFinalMark())
if (need_primary_key_max_column && !part->index_granularity->hasFinalMark())
return {};

real_parts.push_back(part);

@@ -8960,10 +8961,15 @@ std::pair<MergeTreeData::MutableDataPartPtr, scope_guard> MergeTreeData::createE
auto compression_codec = getContext()->chooseCompressionCodec(0, 0);

const auto & index_factory = MergeTreeIndexFactory::instance();
MergedBlockOutputStream out(new_data_part, metadata_snapshot, columns,
MergedBlockOutputStream out(
new_data_part,
metadata_snapshot,
columns,
index_factory.getMany(metadata_snapshot->getSecondaryIndices()),
ColumnsStatistics{},
compression_codec, txn ? txn->tid : Tx::PrehistoricTID);
compression_codec,
std::make_shared<MergeTreeIndexGranularityAdaptive>(),
txn ? txn->tid : Tx::PrehistoricTID);

bool sync_on_insert = (*settings)[MergeTreeSetting::fsync_after_insert];
@@ -3,6 +3,7 @@
#include <Storages/MergeTree/MergeTreeReaderCompactSingleBuffer.h>
#include <Storages/MergeTree/MergeTreeDataPartWriterCompact.h>
#include <Storages/MergeTree/LoadedMergeTreeDataPartInfoForReader.h>
#include <Storages/MergeTree/MergeTreeSettings.h>

namespace DB

@@ -15,6 +16,11 @@ namespace ErrorCodes
extern const int BAD_SIZE_OF_FILE_IN_DATA_PART;
}

namespace MergeTreeSetting
{
extern MergeTreeSettingsBool enable_index_granularity_compression;
}

MergeTreeDataPartCompact::MergeTreeDataPartCompact(
const MergeTreeData & storage_,
const String & name_,

@@ -62,7 +68,7 @@ MergeTreeDataPartWriterPtr createMergeTreeDataPartCompactWriter(
const String & marks_file_extension_,
const CompressionCodecPtr & default_codec_,
const MergeTreeWriterSettings & writer_settings,
const MergeTreeIndexGranularity & computed_index_granularity)
MergeTreeIndexGranularityPtr computed_index_granularity)
{
NamesAndTypesList ordered_columns_list;
std::copy_if(columns_list.begin(), columns_list.end(), std::back_inserter(ordered_columns_list),

@@ -76,7 +82,7 @@ MergeTreeDataPartWriterPtr createMergeTreeDataPartCompactWriter(
data_part_name_, logger_name_, serializations_, data_part_storage_,
index_granularity_info_, storage_settings_, ordered_columns_list, metadata_snapshot, virtual_columns,
indices_to_recalc, stats_to_recalc_, marks_file_extension_,
default_codec_, writer_settings, computed_index_granularity);
default_codec_, writer_settings, std::move(computed_index_granularity));
}

@@ -95,8 +101,11 @@ void MergeTreeDataPartCompact::calculateEachColumnSizes(ColumnSizeByName & /*eac
}

void MergeTreeDataPartCompact::loadIndexGranularityImpl(
MergeTreeIndexGranularity & index_granularity_, const MergeTreeIndexGranularityInfo & index_granularity_info_,
size_t columns_count, const IDataPartStorage & data_part_storage_)
MergeTreeIndexGranularityPtr & index_granularity_ptr,
const MergeTreeIndexGranularityInfo & index_granularity_info_,
size_t columns_count,
const IDataPartStorage & data_part_storage_,
const MergeTreeSettings & storage_settings)
{
if (!index_granularity_info_.mark_type.adaptive)
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "MergeTreeDataPartCompact cannot be created with non-adaptive granularity.");

@@ -122,10 +131,14 @@ void MergeTreeDataPartCompact::loadIndexGranularityImpl(
marks_reader->ignore(columns_count * sizeof(MarkInCompressedFile));
size_t granularity;
readBinaryLittleEndian(granularity, *marks_reader);
index_granularity_.appendMark(granularity);
index_granularity_ptr->appendMark(granularity);
}

index_granularity_.setInitialized();
if (storage_settings[MergeTreeSetting::enable_index_granularity_compression])
{
if (auto new_granularity_ptr = index_granularity_ptr->optimize())
index_granularity_ptr = std::move(new_granularity_ptr);
}
}

void MergeTreeDataPartCompact::loadIndexGranularity()

@@ -133,7 +146,7 @@ void MergeTreeDataPartCompact::loadIndexGranularity()
if (columns.empty())
throw Exception(ErrorCodes::NO_FILE_IN_DATA_PART, "No columns in part {}", name);

loadIndexGranularityImpl(index_granularity, index_granularity_info, columns.size(), getDataPartStorage());
loadIndexGranularityImpl(index_granularity, index_granularity_info, columns.size(), getDataPartStorage(), *storage.getSettings());
}

void MergeTreeDataPartCompact::loadMarksToCache(const Names & column_names, MarkCache * mark_cache) const

@@ -152,7 +165,7 @@ void MergeTreeDataPartCompact::loadMarksToCache(const Names & column_names, Mark
info_for_read,
mark_cache,
index_granularity_info.getMarksFilePath(DATA_FILE_NAME),
index_granularity.getMarksCount(),
index_granularity->getMarksCount(),
index_granularity_info,
/*save_marks_in_cache=*/ true,
read_settings,

@@ -227,7 +240,7 @@ void MergeTreeDataPartCompact::doCheckConsistency(bool require_part_metadata) co
getDataPartStorage().getRelativePath(),
std::string(fs::path(getDataPartStorage().getFullPath()) / mrk_file_name));

UInt64 expected_file_size = index_granularity_info.getMarkSizeInBytes(columns.size()) * index_granularity.getMarksCount();
UInt64 expected_file_size = index_granularity_info.getMarkSizeInBytes(columns.size()) * index_granularity->getMarksCount();
if (expected_file_size != file_size)
throw Exception(
ErrorCodes::BAD_SIZE_OF_FILE_IN_DATA_PART,