Merge branch 'master' into add_test_for_concurrent_merges

This commit is contained in:
alesapin 2021-06-23 22:26:15 +03:00
commit d744136c0d
26 changed files with 359 additions and 131 deletions

View File

@ -1302,6 +1302,7 @@ The table below shows supported data types and how they match ClickHouse [data t
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `UTF8` |
| `STRING`, `BINARY` | [FixedString](../sql-reference/data-types/fixedstring.md) | `UTF8` |
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
| `DECIMAL256` | [Decimal256](../sql-reference/data-types/decimal.md) | `DECIMAL256` |
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
Arrays can be nested and can have a value of the `Nullable` type as an argument.
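To make the note above concrete, a minimal round trip of a nested `Nullable` array through the `Arrow` format might look like this (the table name `arrow_nested` is illustrative, not from this commit):

```sql
CREATE TABLE arrow_nested (a Array(Array(Nullable(UInt32)))) ENGINE = Memory;

INSERT INTO arrow_nested VALUES ([[1, 2, NULL], []]);

-- Exports the column as a LIST of LIST with nullable values.
SELECT * FROM arrow_nested FORMAT Arrow;
```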

View File

@ -1,37 +0,0 @@
---
toc_priority: 150
---
## initializeAggregation {#initializeaggregation}
Initializes aggregation for the input rows. It is intended for functions with the `-State` suffix.
Use it for tests or to process columns of type `AggregateFunction` and tables with the `AggregatingMergeTree` engine.
**Syntax**
``` sql
initializeAggregation (aggregate_function, column_1, column_2)
```
**Arguments**
- `aggregate_function` — Name of the aggregation function whose state is to be created. [String](../../../sql-reference/data-types/string.md#string).
- `column_n` — The column to pass to the function as an argument. [String](../../../sql-reference/data-types/string.md#string).
**Returned value(s)**
Returns the result of the aggregation for the input rows. The return type is the same as the return type of the function that `initializeAggregation` takes as its first argument.
For example, for functions with the `-State` suffix, the return type is `AggregateFunction`.
**Example**
Query:
```sql
SELECT uniqMerge(state) FROM (SELECT initializeAggregation('uniqState', number % 3) AS state FROM system.numbers LIMIT 10000);
```
Result:

```text
┌─uniqMerge(state)─┐
│                3 │
└──────────────────┘
```

View File

@ -486,6 +486,7 @@ Example of settings:
<table>table_name</table>
<where>id=10</where>
<invalidate_query>SQL_QUERY</invalidate_query>
<fail_on_connection_loss>true</fail_on_connection_loss>
</mysql>
</source>
```
@ -503,6 +504,7 @@ SOURCE(MYSQL(
table 'table_name'
where 'id=10'
invalidate_query 'SQL_QUERY'
fail_on_connection_loss 'true'
))
```
@ -527,6 +529,8 @@ Setting fields:
- `invalidate_query` — Query for checking the dictionary status. Optional parameter. Read more in the section [Updating dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md).
- `fail_on_connection_loss` — The configuration parameter that controls the behavior of the server on connection loss. If `true`, an exception is thrown immediately if the connection between client and server was lost. If `false`, the ClickHouse server retries executing the query three times before throwing an exception. Note that retrying leads to increased response times. Default value: `false`.
MySQL can be connected on a local host via sockets. To do this, set `host` and `socket`.
Example of settings:
@ -542,6 +546,7 @@ Example of settings:
<table>table_name</table>
<where>id=10</where>
<invalidate_query>SQL_QUERY</invalidate_query>
<fail_on_connection_loss>true</fail_on_connection_loss>
</mysql>
</source>
```
@ -558,6 +563,7 @@ SOURCE(MYSQL(
table 'table_name'
where 'id=10'
invalidate_query 'SQL_QUERY'
fail_on_connection_loss 'true'
))
```

View File

@ -831,7 +831,7 @@ Returns 0 for the first row and the difference from the previous row for each su
!!! warning "Warning"
It can reach the previous row only inside the currently processed data block.
The result of the function depends on the affected data blocks and the order of data in the block.
The order of rows used during the calculation of `runningDifference` can differ from the order of rows returned to the user.
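A minimal sketch of this behavior (all five rows fit into a single data block here, so the block caveat does not come into play):

```sql
SELECT number, runningDifference(number) AS diff
FROM numbers(5);
-- diff is 0 for the first row and 1 for every subsequent row.
```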
@ -908,7 +908,7 @@ Same as for [runningDifference](./other-functions.md#other_functions-runningdiff
## runningConcurrency {#runningconcurrency}
Calculates the number of concurrent events.
Each event has a start time and an end time. The start time is included in the event, while the end time is excluded. Columns with a start time and an end time must be of the same data type.
The function calculates the total number of active (concurrent) events for each event start time.
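A minimal sketch of the semantics, assuming a throwaway table `events` (the name and data are illustrative); rows are inserted already sorted by start time, as the function requires:

```sql
CREATE TABLE events (begin Date, end Date) ENGINE = Memory;

INSERT INTO events VALUES
    ('2021-03-03', '2021-03-11'),
    ('2021-03-06', '2021-03-12'),
    ('2021-03-07', '2021-03-08');

-- One output row per event: the number of events active at its start time.
SELECT begin, runningConcurrency(begin, end) FROM events;
```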
@ -1424,11 +1424,83 @@ Result:
└───────────┴────────┘
```
## initializeAggregation {#initializeaggregation}
Calculates the result of an aggregate function based on a single value. This function is intended for initializing aggregate functions with the [-State](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-state) combinator. You can create states of aggregate functions and insert them into columns of type [AggregateFunction](../../sql-reference/data-types/aggregatefunction.md#data-type-aggregatefunction) or use initialized aggregates as default values.
**Syntax**
``` sql
initializeAggregation (aggregate_function, arg1, arg2, ..., argN)
```
**Arguments**
- `aggregate_function` — Name of the aggregation function to initialize. [String](../../sql-reference/data-types/string.md).
- `arg` — Arguments of the aggregate function.
**Returned value(s)**
- Result of aggregation for every row passed to the function.
The return type is the same as the return type of the function that `initializeAggregation` takes as its first argument.
**Example**
Query:
```sql
SELECT uniqMerge(state) FROM (SELECT initializeAggregation('uniqState', number % 3) AS state FROM numbers(10000));
```
Result:
```text
┌─uniqMerge(state)─┐
│                3 │
└──────────────────┘
```
Query:
```sql
SELECT finalizeAggregation(state), toTypeName(state) FROM (SELECT initializeAggregation('sumState', number % 3) AS state FROM numbers(5));
```
Result:
```text
┌─finalizeAggregation(state)─┬─toTypeName(state)─────────────┐
│                          0 │ AggregateFunction(sum, UInt8) │
│                          1 │ AggregateFunction(sum, UInt8) │
│                          2 │ AggregateFunction(sum, UInt8) │
│                          0 │ AggregateFunction(sum, UInt8) │
│                          1 │ AggregateFunction(sum, UInt8) │
└────────────────────────────┴───────────────────────────────┘
```
Example with the `AggregatingMergeTree` table engine and an `AggregateFunction` column:
```sql
CREATE TABLE metrics
(
key UInt64,
value AggregateFunction(sum, UInt64) DEFAULT initializeAggregation('sumState', toUInt64(0))
)
ENGINE = AggregatingMergeTree
ORDER BY key
```
```sql
INSERT INTO metrics VALUES (0, initializeAggregation('sumState', toUInt64(42)))
```
**See Also**
- [arrayReduce](../../sql-reference/functions/array-functions.md#arrayreduce)
## finalizeAggregation {#function-finalizeaggregation}
Takes a state of an aggregate function. Returns the result of aggregation (or the finalized state when using the [-State](../../sql-reference/aggregate-functions/combinators.md#agg-functions-combinator-state) combinator).
**Syntax**
``` sql
finalizeAggregation(state)
@ -1442,7 +1514,7 @@ finalizeAggregation(state)
- Value/values that were aggregated.
Type: Value of any type that was aggregated.
**Examples**
@ -1474,7 +1546,7 @@ Result:
└──────────────────────────────────┘
```
Note that `NULL` values are ignored.
Query:
@ -1520,10 +1592,9 @@ Result:
└────────┴─────────────┴────────────────┘
```
**See Also**
- [arrayReduce](../../sql-reference/functions/array-functions.md#arrayreduce)
- [initializeAggregation](../../sql-reference/aggregate-functions/reference/initializeAggregation.md)
- [initializeAggregation](#initializeaggregation)
## runningAccumulate {#runningaccumulate}

View File

@ -1,40 +0,0 @@
---
toc_priority: 150
---
## initializeAggregation {#initializeaggregation}
Initializes aggregation for the input rows. It is intended for functions with the `-State` suffix.
Use it for tests or to process columns of type `AggregateFunction` and tables with the `AggregatingMergeTree` engine.
**Syntax**
``` sql
initializeAggregation (aggregate_function, column_1, column_2)
```
**Arguments**
- `aggregate_function` — Name of the aggregation function whose state is to be created. [String](../../../sql-reference/data-types/string.md#string).
- `column_n` — The column to pass to the aggregation function as an argument. [String](../../../sql-reference/data-types/string.md#string).
**Returned value**
Returns the result of the aggregation for the input rows. The return type is the same as the return type of the function that `initializeAggregation` takes as its first argument.
For example, for functions with the `-State` suffix, the return type is `AggregateFunction`.
**Example**
Query:
```sql
SELECT uniqMerge(state) FROM (SELECT initializeAggregation('uniqState', number % 3) AS state FROM system.numbers LIMIT 10000);
```
Result:

```text
┌─uniqMerge(state)─┐
│                3 │
└──────────────────┘
```

View File

@ -486,6 +486,7 @@ LIFETIME(MIN 300 MAX 360)
<table>table_name</table>
<where>id=10</where>
<invalidate_query>SQL_QUERY</invalidate_query>
<fail_on_connection_loss>true</fail_on_connection_loss>
</mysql>
</source>
```
@ -503,6 +504,7 @@ SOURCE(MYSQL(
table 'table_name'
where 'id=10'
invalidate_query 'SQL_QUERY'
fail_on_connection_loss 'true'
))
```
@ -527,6 +529,8 @@ SOURCE(MYSQL(
- `invalidate_query` — Query for checking the dictionary status. Optional parameter. Read more in the section [Updating dictionaries](external-dicts-dict-lifetime.md).
- `fail_on_connection_loss` — The configuration parameter that controls the behavior of the server on connection loss. If `true`, an exception is thrown immediately if the connection between client and server was lost. If `false`, the server retries executing the query three times before throwing an exception. Note that retries can increase query execution time. Default value: `false`.
MySQL can be connected to on a local host via sockets. To do this, set `host` and `socket`.
Example of settings:
@ -542,6 +546,7 @@ MySQL можно подключить на локальном хосте чер
<table>table_name</table>
<where>id=10</where>
<invalidate_query>SQL_QUERY</invalidate_query>
<fail_on_connection_loss>true</fail_on_connection_loss>
</mysql>
</source>
```
@ -558,6 +563,7 @@ SOURCE(MYSQL(
table 'table_name'
where 'id=10'
invalidate_query 'SQL_QUERY'
fail_on_connection_loss 'true'
))
```

View File

@ -13,7 +13,7 @@ toc_title: "Прочие функции"
Returns a named value from the [macros](../../operations/server-configuration-parameters/settings.md#macros) section of the server configuration.
**Syntax**
```sql
getMacro(name)
@ -854,8 +854,8 @@ WHERE diff != 1
## runningConcurrency {#runningconcurrency}
Calculates the number of concurrent events.
Each event has a start time and an end time. The start time is included in the event, while the end time is excluded. Columns with a start time and an end time must be of the same data type.
The function calculates the total number of active (concurrent) events at the start time of each event in the selection.
!!! warning "Warning"
    Events must be sorted in ascending order of start time. If this requirement is violated, the function raises an exception.
@ -1371,11 +1371,84 @@ SELECT formatReadableSize(filesystemCapacity()) AS "Capacity", toTypeName(filesy
└───────────┴────────┘
```
## initializeAggregation {#initializeaggregation}
Calculates the result of an aggregate function for each row. This function is intended for initializing aggregate functions with the [-State](../../sql-reference/aggregate-functions/combinators.md#state) combinator. It can be useful for creating states of aggregate functions for later insertion into columns of type [AggregateFunction](../../sql-reference/data-types/aggregatefunction.md#data-type-aggregatefunction) or for using them as default values.
**Syntax**
``` sql
initializeAggregation (aggregate_function, arg1, arg2, ..., argN)
```
**Arguments**
- `aggregate_function` — Name of the aggregate function whose state is to be created. [String](../../sql-reference/data-types/string.md#string).
- `arg` — Arguments that are passed to the aggregate function.
**Returned value**
- For each row, the result of the aggregate function applied to the arguments from that row.
The return type is the same as that of the function passed as the first argument.
**Example**
Query:
```sql
SELECT uniqMerge(state) FROM (SELECT initializeAggregation('uniqState', number % 3) AS state FROM numbers(10000));
```
Result:
```text
┌─uniqMerge(state)─┐
│                3 │
└──────────────────┘
```
Query:
```sql
SELECT finalizeAggregation(state), toTypeName(state) FROM (SELECT initializeAggregation('sumState', number % 3) AS state FROM numbers(5));
```
Result:
```text
┌─finalizeAggregation(state)─┬─toTypeName(state)─────────────┐
│                          0 │ AggregateFunction(sum, UInt8) │
│                          1 │ AggregateFunction(sum, UInt8) │
│                          2 │ AggregateFunction(sum, UInt8) │
│                          0 │ AggregateFunction(sum, UInt8) │
│                          1 │ AggregateFunction(sum, UInt8) │
└────────────────────────────┴───────────────────────────────┘
```
Example with the `AggregatingMergeTree` table engine and an `AggregateFunction` column:
```sql
CREATE TABLE metrics
(
key UInt64,
value AggregateFunction(sum, UInt64) DEFAULT initializeAggregation('sumState', toUInt64(0))
)
ENGINE = AggregatingMergeTree
ORDER BY key
```
```sql
INSERT INTO metrics VALUES (0, initializeAggregation('sumState', toUInt64(42)))
```
**See Also**
- [arrayReduce](../../sql-reference/functions/array-functions.md#arrayreduce)
## finalizeAggregation {#function-finalizeaggregation}
Takes a state of an aggregate function. Returns the result of aggregation (or the finalized state when using the [-State](../../sql-reference/aggregate-functions/combinators.md#state) combinator).
**Syntax**
``` sql
finalizeAggregation(state)
@ -1421,7 +1494,7 @@ SELECT finalizeAggregation(( SELECT sumState(number) FROM numbers(10)));
└──────────────────────────────────┘
```
Note that `NULL` values are ignored.
Query:
@ -1470,7 +1543,7 @@ FROM numbers(10);
**See Also**
- [arrayReduce](../../sql-reference/functions/array-functions.md#arrayreduce)
- [initializeAggregation](../../sql-reference/aggregate-functions/reference/initializeAggregation.md)
- [initializeAggregation](#initializeaggregation)
## runningAccumulate {#runningaccumulate}
@ -1537,13 +1610,13 @@ SELECT k, runningAccumulate(sum_k) AS res FROM (SELECT number as k, sumState(k)
Query:
```sql
SELECT
grouping,
item,
runningAccumulate(state, grouping) AS res
FROM
(
SELECT
toInt8(number / 4) AS grouping,
number AS item,
sumState(number) AS state
@ -1732,7 +1805,7 @@ SELECT number, randomPrintableASCII(30) as str, length(str) FROM system.numbers
randomString(length)
```
**Arguments**
- `length` — Length of the string. A positive integer.
@ -1831,13 +1904,13 @@ randomStringUTF8(length)
Query:
```sql
SELECT randomStringUTF8(13)
```
Result:
```text
┌─randomStringUTF8(13)─┐
│ 𘤗𙉝д兠庇󡅴󱱎󦐪􂕌𔊹𓰛 │
└──────────────────────┘
@ -1848,13 +1921,13 @@ SELECT randomStringUTF8(13)
Returns the current value of a [custom setting](../../operations/settings/index.md#custom_settings).
**Syntax**
```sql
getSetting('custom_setting')
```
**Parameter**
- `custom_setting` — Name of the setting. [String](../../sql-reference/data-types/string.md).
@ -1866,7 +1939,7 @@ getSetting('custom_setting')
```sql
SET custom_a = 123;
SELECT getSetting('custom_a');
```
**Result**
@ -1875,7 +1948,7 @@ SELECT getSetting('custom_a');
123
```
**See Also**
- [Custom settings](../../operations/settings/index.md#custom_settings)
@ -1889,10 +1962,10 @@ SELECT getSetting('custom_a');
isDecimalOverflow(d, [p])
```
**Arguments**
- `d` — Number. [Decimal](../../sql-reference/data-types/decimal.md).
- `p` — Precision. Optional parameter. If omitted, the original precision of the first argument is used. This parameter can be helpful when extracting data to another DBMS or file. [UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges).
**Returned value**
@ -1926,7 +1999,7 @@ SELECT isDecimalOverflow(toDecimal32(1000000000, 0), 9),
countDigits(x)
```
**Arguments**
- `x` — [Integer](../../sql-reference/data-types/int-uint.md#uint8-uint16-uint32-uint64-int8-int16-int32-int64) or [decimal](../../sql-reference/data-types/decimal.md) number.
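A hedged illustration (assuming `countDigits` counts the digits of the underlying scaled integer, so fractional digits are included):

```sql
-- toDecimal32(1, 9) is stored as the integer 1000000000, i.e. 10 digits.
SELECT countDigits(toDecimal32(1, 9)), countDigits(toDecimal64(1, 18));
```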

View File

@ -577,7 +577,18 @@ private:
}
if (!history_file.empty() && !fs::exists(history_file))
FS::createFile(history_file);
{
/// Avoid TOCTOU issue.
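/// Another process may create the file between the fs::exists() check above
/// and createFile() below, so an EEXIST error here is treated as success.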
try
{
FS::createFile(history_file);
}
catch (const ErrnoException & e)
{
if (e.getErrno() != EEXIST)
throw;
}
}
LineReader::Patterns query_extenders = {"\\"};
LineReader::Patterns query_delimiters = {";", "\\G"};

View File

@ -883,6 +883,7 @@ void DiskS3::restoreFileOperations(const RestoreInformation & restore_informatio
to_path /= from_path.parent_path().filename();
else
to_path /= from_path.filename();
fs::create_directories(to_path);
fs::copy(from_path, to_path, fs::copy_options::recursive | fs::copy_options::overwrite_existing);
fs::remove_all(from_path);
}

View File

@ -57,6 +57,8 @@ bool ParserExplainQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected
ParserQuery p(end);
if (p.parse(pos, query, expected))
explain_query->setExplainedQuery(std::move(query));
else
return false;
}
else if (select_p.parse(pos, query, expected) ||
create_p.parse(pos, query, expected))

View File

@ -225,18 +225,19 @@ namespace DB
}
}
template <typename DecimalType, typename DecimalArray>
static void fillColumnWithDecimalData(std::shared_ptr<arrow::ChunkedArray> & arrow_column, IColumn & internal_column)
{
auto & column = assert_cast<ColumnDecimal<Decimal128> &>(internal_column);
auto & column = assert_cast<ColumnDecimal<DecimalType> &>(internal_column);
auto & column_data = column.getData();
column_data.reserve(arrow_column->length());
for (size_t chunk_i = 0, num_chunks = static_cast<size_t>(arrow_column->num_chunks()); chunk_i < num_chunks; ++chunk_i)
{
auto & chunk = static_cast<arrow::DecimalArray &>(*(arrow_column->chunk(chunk_i)));
auto & chunk = static_cast<DecimalArray &>(*(arrow_column->chunk(chunk_i)));
for (size_t value_i = 0, length = static_cast<size_t>(chunk.length()); value_i < length; ++value_i)
{
column_data.emplace_back(chunk.IsNull(value_i) ? Decimal128(0) : *reinterpret_cast<const Decimal128 *>(chunk.Value(value_i))); // TODO: copy column
column_data.emplace_back(chunk.IsNull(value_i) ? DecimalType(0) : *reinterpret_cast<const DecimalType *>(chunk.Value(value_i))); // TODO: copy column
}
}
}
@ -335,8 +336,11 @@ namespace DB
case arrow::Type::TIMESTAMP:
fillColumnWithTimestampData(arrow_column, internal_column);
break;
case arrow::Type::DECIMAL:
fillColumnWithDecimalData(arrow_column, internal_column /*, internal_nested_type*/);
case arrow::Type::DECIMAL128:
fillColumnWithDecimalData<Decimal128, arrow::Decimal128Array>(arrow_column, internal_column /*, internal_nested_type*/);
break;
case arrow::Type::DECIMAL256:
fillColumnWithDecimalData<Decimal256, arrow::Decimal256Array>(arrow_column, internal_column /*, internal_nested_type*/);
break;
case arrow::Type::MAP: [[fallthrough]];
case arrow::Type::LIST:
@ -442,12 +446,18 @@ namespace DB
return makeNullable(getInternalType(arrow_type, nested_type, column_name, format_name));
}
if (arrow_type->id() == arrow::Type::DECIMAL)
if (arrow_type->id() == arrow::Type::DECIMAL128)
{
const auto * decimal_type = static_cast<arrow::DecimalType *>(arrow_type.get());
return std::make_shared<DataTypeDecimal<Decimal128>>(decimal_type->precision(), decimal_type->scale());
}
if (arrow_type->id() == arrow::Type::DECIMAL256)
{
const auto * decimal_type = static_cast<arrow::DecimalType *>(arrow_type.get());
return std::make_shared<DataTypeDecimal<Decimal256>>(decimal_type->precision(), decimal_type->scale());
}
if (arrow_type->id() == arrow::Type::LIST)
{
const auto * list_type = static_cast<arrow::ListType *>(arrow_type.get());

View File

@ -421,11 +421,20 @@ namespace DB
|| std::is_same_v<ToDataType, DataTypeDecimal<Decimal64>>
|| std::is_same_v<ToDataType, DataTypeDecimal<Decimal128>>)
{
fillArrowArrayWithDecimalColumnData<ToDataType>(column, null_bytemap, array_builder, format_name, start, end);
fillArrowArrayWithDecimalColumnData<ToDataType, Int128, arrow::Decimal128, arrow::Decimal128Builder>(column, null_bytemap, array_builder, format_name, start, end);
return true;
}
if constexpr (std::is_same_v<ToDataType, DataTypeDecimal<Decimal256>>)
{
fillArrowArrayWithDecimalColumnData<ToDataType, Int256, arrow::Decimal256, arrow::Decimal256Builder>(column, null_bytemap, array_builder, format_name, start, end);
return true;
}
return false;
};
callOnIndexAndDataType<void>(column_type->getTypeId(), fill_decimal);
if (!callOnIndexAndDataType<void>(column_type->getTypeId(), fill_decimal))
throw Exception{ErrorCodes::LOGICAL_ERROR, "Cannot fill arrow array with decimal data with type {}", column_type_name};
}
#define DISPATCH(CPP_NUMERIC_TYPE, ARROW_BUILDER_TYPE) \
else if (#CPP_NUMERIC_TYPE == column_type_name) \
@ -445,7 +454,7 @@ namespace DB
}
}
template <typename DataType>
template <typename DataType, typename FieldType, typename ArrowDecimalType, typename ArrowBuilder>
static void fillArrowArrayWithDecimalColumnData(
ColumnPtr write_column,
const PaddedPODArray<UInt8> * null_bytemap,
@ -455,7 +464,7 @@ namespace DB
size_t end)
{
const auto & column = assert_cast<const typename DataType::ColumnType &>(*write_column);
arrow::DecimalBuilder & builder = assert_cast<arrow::DecimalBuilder &>(*array_builder);
ArrowBuilder & builder = assert_cast<ArrowBuilder &>(*array_builder);
arrow::Status status;
for (size_t value_i = start; value_i < end; ++value_i)
@ -463,8 +472,10 @@ namespace DB
if (null_bytemap && (*null_bytemap)[value_i])
status = builder.AppendNull();
else
status = builder.Append(
arrow::Decimal128(reinterpret_cast<const uint8_t *>(&column.getElement(value_i).value))); // TODO: try copy column
{
FieldType element = FieldType(column.getElement(value_i).value);
status = builder.Append(ArrowDecimalType(reinterpret_cast<const uint8_t *>(&element))); // TODO: try copy column
}
checkStatus(status, write_column->getName(), format_name);
}
@ -512,15 +523,18 @@ namespace DB
if constexpr (
std::is_same_v<ToDataType, DataTypeDecimal<Decimal32>>
|| std::is_same_v<ToDataType, DataTypeDecimal<Decimal64>>
|| std::is_same_v<ToDataType, DataTypeDecimal<Decimal128>>)
|| std::is_same_v<ToDataType, DataTypeDecimal<Decimal128>>
|| std::is_same_v<ToDataType, DataTypeDecimal<Decimal256>>)
{
const auto & decimal_type = assert_cast<const ToDataType *>(column_type.get());
arrow_type = arrow::decimal(decimal_type->getPrecision(), decimal_type->getScale());
return true;
}
return false;
};
callOnIndexAndDataType<void>(column_type->getTypeId(), create_arrow_type);
if (!callOnIndexAndDataType<void>(column_type->getTypeId(), create_arrow_type))
throw Exception{ErrorCodes::LOGICAL_ERROR, "Cannot convert decimal type {} to arrow type", column_type->getFamilyName()};
return arrow_type;
}

View File

@ -38,9 +38,10 @@ FillingStep::FillingStep(const DataStream & input_stream_, SortDescription sort_
void FillingStep::transformPipeline(QueryPipeline & pipeline, const BuildQueryPipelineSettings &)
{
pipeline.addSimpleTransform([&](const Block & header)
pipeline.addSimpleTransform([&](const Block & header, QueryPipeline::StreamType stream_type) -> ProcessorPtr
{
return std::make_shared<FillingTransform>(header, sort_description);
bool on_totals = stream_type == QueryPipeline::StreamType::Totals;
return std::make_shared<FillingTransform>(header, sort_description, on_totals);
});
}

View File

@ -30,12 +30,16 @@ Block FillingTransform::transformHeader(Block header, const SortDescription & so
}
FillingTransform::FillingTransform(
const Block & header_, const SortDescription & sort_description_)
const Block & header_, const SortDescription & sort_description_, bool on_totals_)
: ISimpleTransform(header_, transformHeader(header_, sort_description_), true)
, sort_description(sort_description_)
, on_totals(on_totals_)
, filling_row(sort_description_)
, next_row(sort_description_)
{
if (on_totals)
return;
auto try_convert_fields = [](auto & descr, const auto & type)
{
auto max_type = Field::Types::Null;
@ -106,7 +110,7 @@ FillingTransform::FillingTransform(
IProcessor::Status FillingTransform::prepare()
{
if (input.isFinished() && !output.isFinished() && !has_input && !generate_suffix)
if (!on_totals && input.isFinished() && !output.isFinished() && !has_input && !generate_suffix)
{
should_insert_first = next_row < filling_row;
@ -126,6 +130,9 @@ IProcessor::Status FillingTransform::prepare()
void FillingTransform::transform(Chunk & chunk)
{
if (on_totals)
return;
Columns old_fill_columns;
Columns old_other_columns;
MutableColumns res_fill_columns;

View File

@ -13,7 +13,7 @@ namespace DB
class FillingTransform : public ISimpleTransform
{
public:
FillingTransform(const Block & header_, const SortDescription & sort_description_);
FillingTransform(const Block & header_, const SortDescription & sort_description_, bool on_totals_);
String getName() const override { return "FillingTransform"; }
@ -28,6 +28,8 @@ private:
void setResultColumns(Chunk & chunk, MutableColumns & fill_columns, MutableColumns & other_columns) const;
const SortDescription sort_description; /// Contains only rows with WITH FILL.
const bool on_totals; /// FillingTransform does nothing on totals.
FillingRow filling_row; /// Current row, which is used to fill gaps.
FillingRow next_row; /// Row to which we need to generate filling rows.

View File

@ -72,3 +72,5 @@ dest from null:
3 [] [] []
[[[1,2,3],[1,2,3]],[[1,2,3]],[[],[1,2,3]]] [[['Some string','Some string'],[]],[['Some string']],[[]]] [[NULL,1,2],[NULL],[1,2],[]] [['Some string',NULL,'Some string'],[NULL],[]]
[[[1,2,3],[1,2,3]],[[1,2,3]],[[],[1,2,3]]] [[['Some string','Some string'],[]],[['Some string']],[[]]] [[NULL,1,2],[NULL],[1,2],[]] [['Some string',NULL,'Some string'],[NULL],[]]
0.1230 0.12312312 0.1231231231230000 0.12312312312312312300000000000000
0.1230 0.12312312 0.1231231231230000 0.12312312312312312300000000000000

View File

@ -166,3 +166,11 @@ ${CLICKHOUSE_CLIENT} --query="INSERT INTO parquet_nested_arrays VALUES ([[[1,2,3
${CLICKHOUSE_CLIENT} --query="SELECT * FROM parquet_nested_arrays FORMAT Parquet" | ${CLICKHOUSE_CLIENT} --query="INSERT INTO parquet_nested_arrays FORMAT Parquet"
${CLICKHOUSE_CLIENT} --query="SELECT * FROM parquet_nested_arrays"
${CLICKHOUSE_CLIENT} --query="DROP TABLE parquet_nested_arrays"
${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS parquet_decimal"
${CLICKHOUSE_CLIENT} --query="CREATE TABLE parquet_decimal (d1 Decimal32(4), d2 Decimal64(8), d3 Decimal128(16), d4 Decimal256(32)) ENGINE = Memory"
${CLICKHOUSE_CLIENT} --query="INSERT INTO TABLE parquet_decimal VALUES (0.123, 0.123123123, 0.123123123123, 0.123123123123123123)"
${CLICKHOUSE_CLIENT} --query="SELECT * FROM parquet_decimal FORMAT Arrow" | ${CLICKHOUSE_CLIENT} --query="INSERT INTO parquet_decimal FORMAT Arrow"
${CLICKHOUSE_CLIENT} --query="SELECT * FROM parquet_decimal"
${CLICKHOUSE_CLIENT} --query="DROP TABLE parquet_decimal"

View File

@ -0,0 +1,2 @@
0.1230 0.12312312 0.1231231231230000 0.12312312312312312300000000000000
0.1230 0.12312312 0.1231231231230000 0.12312312312312312300000000000000

View File

@ -0,0 +1,15 @@
#!/usr/bin/env bash
set -e
CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh
${CLICKHOUSE_CLIENT} --query="DROP TABLE IF EXISTS arrow_decimal"
${CLICKHOUSE_CLIENT} --query="CREATE TABLE arrow_decimal (d1 Decimal32(4), d2 Decimal64(8), d3 Decimal128(16), d4 Decimal256(32)) ENGINE = Memory"
${CLICKHOUSE_CLIENT} --query="INSERT INTO TABLE arrow_decimal VALUES (0.123, 0.123123123, 0.123123123123, 0.123123123123123123)"
${CLICKHOUSE_CLIENT} --query="SELECT * FROM arrow_decimal FORMAT Arrow" | ${CLICKHOUSE_CLIENT} --query="INSERT INTO arrow_decimal FORMAT Arrow"
${CLICKHOUSE_CLIENT} --query="SELECT * FROM arrow_decimal"
${CLICKHOUSE_CLIENT} --query="DROP TABLE arrow_decimal"

View File

@ -1 +1,2 @@
explain ast; -- { clientError 62 }
explain ast alter table t1 delete where date = today()

View File

@ -0,0 +1,19 @@
#!/usr/bin/env python3
import os
import sys
import signal
CURDIR = os.path.dirname(os.path.realpath(__file__))
sys.path.insert(0, os.path.join(CURDIR, 'helpers'))
from client import client, prompt, end_of_block
log = None
# uncomment the line below for debugging
#log=sys.stdout
with client(name='client1>', log=log) as client1:
client1.expect(prompt)
client1.send('SELECT number FROM numbers(100) FORMAT Null')
client1.expect('Progress: 100\.00 rows, 800\.00 B.*' + end_of_block)
client1.expect('0 rows in set. Elapsed: [\\w]{1}\.[\\w]{3} sec.' + end_of_block)

View File

@ -0,0 +1,34 @@
15 0
14 0
13 0
12 0
11 0
10 0
9 0
8 0
7 7
6 0
5 0
4 4
3 0
2 0
1 1
0 12
15 0
14 0
13 0
12 0
11 0
10 0
9 0
8 0
7 7
6 0
5 0
4 4
3 0
2 0
1 1
0 12

View File

@ -0,0 +1,17 @@
SELECT
number,
sum(number)
FROM numbers(10)
WHERE number % 3 = 1
GROUP BY number
WITH TOTALS
ORDER BY number DESC WITH FILL FROM 15;
SELECT
number,
sum(number)
FROM numbers(10)
WHERE number % 3 = 1
GROUP BY number
WITH TOTALS
ORDER BY 10, number DESC WITH FILL FROM 15;

View File

@ -23,14 +23,14 @@ def regression(self, local, clickhouse_binary_path, stress=None, parallel=None):
with Pool(8) as pool:
try:
run_scenario(pool, tasks, Feature(test=load("example.regression", "regression")), args)
run_scenario(pool, tasks, Feature(test=load("ldap.regression", "regression")), args)
run_scenario(pool, tasks, Feature(test=load("rbac.regression", "regression")), args)
run_scenario(pool, tasks, Feature(test=load("aes_encryption.regression", "regression")), args)
run_scenario(pool, tasks, Feature(test=load("map_type.regression", "regression")), args)
run_scenario(pool, tasks, Feature(test=load("window_functions.regression", "regression")), args)
run_scenario(pool, tasks, Feature(test=load("datetime64_extended_range.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("ldap.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("rbac.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("aes_encryption.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("map_type.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("window_functions.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("datetime64_extended_range.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("kerberos.regression", "regression")), args)
run_scenario(pool, tasks, Feature(test=load("extended_precision_data_types.regression", "regression")), args)
#run_scenario(pool, tasks, Feature(test=load("extended_precision_data_types.regression", "regression")), args)
finally:
join(tasks)

View File

@ -1,6 +1,7 @@
v21.6.5.37-stable 2021-06-19
v21.6.4.26-stable 2021-06-11
v21.6.3.14-stable 2021-06-04
v21.5.7.9-stable 2021-06-22
v21.5.6.6-stable 2021-05-29
v21.5.5.12-stable 2021-05-20
v21.4.7.3-stable 2021-05-19
@ -8,6 +9,7 @@ v21.4.6.55-stable 2021-04-30
v21.4.5.46-stable 2021-04-24
v21.4.4.30-stable 2021-04-16
v21.4.3.21-stable 2021-04-12
v21.3.13.9-lts 2021-06-22
v21.3.12.2-lts 2021-05-25
v21.3.11.5-lts 2021-05-14
v21.3.10.1-lts 2021-05-09
