Merge branch 'master' into persistent_nukeeper_snapshot_storage
This commit is contained in: c9c301e10c
@@ -5,74 +5,40 @@ toc_title: OpenTelemetry Support

# [experimental] OpenTelemetry Support

[OpenTelemetry](https://opentelemetry.io/) is an open standard for collecting
traces and metrics from distributed application. ClickHouse has some support
for OpenTelemetry.

[OpenTelemetry](https://opentelemetry.io/) is an open standard for collecting traces and metrics from a distributed application. ClickHouse has some support for OpenTelemetry.

!!! warning "Warning"
    This is an experimental feature that will change in backwards-incompatible ways in the future releases.

    This is an experimental feature that will change in backwards-incompatible ways in future releases.

## Supplying Trace Context to ClickHouse

ClickHouse accepts trace context HTTP headers, as described by
the [W3C recommendation](https://www.w3.org/TR/trace-context/).
It also accepts trace context over native protocol that is used for
communication between ClickHouse servers or between the client and server.
For manual testing, trace context headers conforming to the Trace Context
recommendation can be supplied to `clickhouse-client` using
`--opentelemetry-traceparent` and `--opentelemetry-tracestate` flags.

If no parent trace context is supplied, ClickHouse can start a new trace, with
probability controlled by the `opentelemetry_start_trace_probability` setting.

ClickHouse accepts trace context HTTP headers, as described by the [W3C recommendation](https://www.w3.org/TR/trace-context/). It also accepts trace context over a native protocol that is used for communication between ClickHouse servers or between the client and server. For manual testing, trace context headers conforming to the Trace Context recommendation can be supplied to `clickhouse-client` using the `--opentelemetry-traceparent` and `--opentelemetry-tracestate` flags.

If no parent trace context is supplied, ClickHouse can start a new trace, with probability controlled by the [opentelemetry_start_trace_probability](../operations/settings/settings.md#opentelemetry-start-trace-probability) setting.
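As a minimal sketch (not part of the original page), the setting can be exercised from a plain SQL session; only the documented setting and table names are used:

```sql
-- Force every query in this session to start a new trace,
-- even when no parent trace context is supplied.
SET opentelemetry_start_trace_probability = 1;

-- Any query issued now produces spans that end up in system.opentelemetry_span_log.
SELECT count() FROM numbers(1000);
```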
## Propagating the Trace Context

The trace context is propagated to downstream services in the following cases:

* Queries to remote ClickHouse servers, such as when using `Distributed` table
engine.

* `URL` table function. Trace context information is sent in HTTP headers.

* Queries to remote ClickHouse servers, such as when using [Distributed](../engines/table-engines/special/distributed.md) table engine.

* [url](../sql-reference/table-functions/url.md) table function. Trace context information is sent in HTTP headers.
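As an illustration (not from the original page), a query like the one below makes ClickHouse issue an outgoing HTTP request, and the current trace context travels in its headers; the endpoint and column structure are invented for the example:

```sql
-- Hypothetical endpoint; the outgoing HTTP request carries the traceparent/tracestate headers.
SELECT *
FROM url('http://example.com/data.csv', CSV, 'id UInt32, name String')
LIMIT 10;
```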
## Tracing the ClickHouse Itself

ClickHouse creates _trace spans_ for each query and some of the query execution
stages, such as query planning or distributed queries.

ClickHouse creates `trace spans` for each query and some of the query execution stages, such as query planning or distributed queries.

To be useful, the tracing information has to be exported to a monitoring system
that supports OpenTelemetry, such as Jaeger or Prometheus. ClickHouse avoids
a dependency on a particular monitoring system, instead only providing the
tracing data through a system table. OpenTelemetry trace span information
[required by the standard](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#span)
is stored in the `system.opentelemetry_span_log` table.

To be useful, the tracing information has to be exported to a monitoring system that supports OpenTelemetry, such as [Jaeger](https://jaegertracing.io/) or [Prometheus](https://prometheus.io/). ClickHouse avoids a dependency on a particular monitoring system, instead only providing the tracing data through a system table. OpenTelemetry trace span information [required by the standard](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#span) is stored in the [system.opentelemetry_span_log](../operations/system-tables/opentelemetry_span_log.md) table.

The table must be enabled in the server configuration, see the `opentelemetry_span_log`
element in the default config file `config.xml`. It is enabled by default.

The table must be enabled in the server configuration, see the `opentelemetry_span_log` element in the default config file `config.xml`. It is enabled by default.
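A quick sanity check (an illustrative query, not part of the page) is to count the spans already recorded:

```sql
-- Returns a non-zero count once at least one traced query has been flushed to the log.
SELECT count() FROM system.opentelemetry_span_log;
```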
The table has the following columns:

- `trace_id`
- `span_id`
- `parent_span_id`
- `operation_name`
- `start_time`
- `finish_time`
- `finish_date`
- `attribute.name`
- `attribute.values`

The tags or attributes are saved as two parallel arrays, containing the keys
and values. Use `ARRAY JOIN` to work with them.

The tags or attributes are saved as two parallel arrays, containing the keys and values. Use [ARRAY JOIN](../sql-reference/statements/select/array-join.md) to work with them.
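For instance, a sketch (not from the original text) of unrolling the parallel attribute arrays into key/value rows for a single trace:

```sql
-- Each span row is expanded into one row per attribute key/value pair.
SELECT
    operation_name,
    attr_name,
    attr_value
FROM system.opentelemetry_span_log
ARRAY JOIN
    attribute.name AS attr_name,
    attribute.values AS attr_value
WHERE trace_id = '00000000-0000-0000-0000-000000000000'  -- replace with a real trace_id
```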
## Integration with monitoring systems

At the moment, there is no ready tool that can export the tracing data from
ClickHouse to a monitoring system.

At the moment, there is no ready tool that can export the tracing data from ClickHouse to a monitoring system.

For testing, it is possible to set up the export using a materialized view with the URL engine over the `system.opentelemetry_span_log` table, which would push the arriving log data to an HTTP endpoint of a trace collector. For example, to push the minimal span data to a Zipkin instance running at `http://localhost:9411`, in Zipkin v2 JSON format:

For testing, it is possible to set up the export using a materialized view with the [URL](../engines/table-engines/special/url.md) engine over the [system.opentelemetry_span_log](../operations/system-tables/opentelemetry_span_log.md) table, which would push the arriving log data to an HTTP endpoint of a trace collector. For example, to push the minimal span data to a Zipkin instance running at `http://localhost:9411`, in Zipkin v2 JSON format:

```sql
CREATE MATERIALIZED VIEW default.zipkin_spans

@@ -94,3 +60,5 @@ FROM system.opentelemetry_span_log
```

In case of any errors, the part of the log data for which the error has occurred will be silently lost. Check the server log for error messages if the data does not arrive.

[Original article](https://clickhouse.tech/docs/en/operations/opentelemetry/) <!--hide-->
@@ -19,15 +19,17 @@ Resolution: 1 second.

## Usage Remarks {#usage-remarks}

The point in time is saved as a [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time), regardless of the time zone or daylight saving time. Additionally, the `DateTime` type can store a time zone that is the same for the entire column and affects how the `DateTime` values are displayed in text format and how the values specified as strings are parsed (‘2020-01-01 05:00:01’). The time zone is not stored in the rows of the table (or in the resultset), but is stored in the column metadata.
A list of supported time zones can be found in the [IANA Time Zone Database](https://www.iana.org/time-zones).
The `tzdata` package, containing the [IANA Time Zone Database](https://www.iana.org/time-zones), should be installed in the system. Use the `timedatectl list-timezones` command to list the time zones known by the local system.

The point in time is saved as a [Unix timestamp](https://en.wikipedia.org/wiki/Unix_time), regardless of the time zone or daylight saving time. The time zone affects how the `DateTime` values are displayed in text format and how the values specified as strings are parsed (‘2020-01-01 05:00:01’).

You can explicitly set a time zone for `DateTime`-type columns when creating a table. If the time zone isn’t set, ClickHouse uses the value of the [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) parameter in the server settings or the operating system settings at the moment of the ClickHouse server start.

A timezone-agnostic Unix timestamp is stored in tables, and the time zone is used to transform it to text format or back during data import/export or to make calendar calculations on the values (for example, the `toDate` and `toHour` functions). The time zone is not stored in the rows of the table (or in the resultset), but is stored in the column metadata.

A list of supported time zones can be found in the [IANA Time Zone Database](https://www.iana.org/time-zones) and also can be queried by `SELECT * FROM system.time_zones`.

You can explicitly set a time zone for `DateTime`-type columns when creating a table, for example: `DateTime('UTC')`. If the time zone isn’t set, ClickHouse uses the value of the [timezone](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) parameter in the server settings or the operating system settings at the moment of the ClickHouse server start.

The [clickhouse-client](../../interfaces/cli.md) applies the server time zone by default if a time zone isn’t explicitly set when initializing the data type. To use the client time zone, run `clickhouse-client` with the `--use_client_time_zone` parameter.

ClickHouse outputs values depending on the value of the [date\_time\_output\_format](../../operations/settings/settings.md#settings-date_time_output_format) setting. `YYYY-MM-DD hh:mm:ss` text format by default. Additionally, you can change the output with the [formatDateTime](../../sql-reference/functions/date-time-functions.md#formatdatetime) function.

ClickHouse outputs values depending on the value of the [date_time_output_format](../../operations/settings/settings.md#settings-date_time_output_format) setting. `YYYY-MM-DD hh:mm:ss` text format by default. Additionally, you can change the output with the [formatDateTime](../../sql-reference/functions/date-time-functions.md#formatdatetime) function.

When inserting data into ClickHouse, you can use different formats of date and time strings, depending on the value of the [date_time_input_format](../../operations/settings/settings.md#settings-date_time_input_format) setting.
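A small sketch (not part of the original page) tying these pieces together; the table name is invented for the illustration:

```sql
-- Hypothetical table with a column-level time zone.
CREATE TABLE datetime_example (ts DateTime('UTC')) ENGINE = MergeTree ORDER BY ts;

-- The string literal is parsed according to the column's time zone (UTC here).
INSERT INTO datetime_example VALUES ('2020-01-01 05:00:01');

-- Output uses the column's time zone; formatDateTime gives explicit control over the text form.
SELECT ts, formatDateTime(ts, '%Y-%m-%d %H:%M:%S') AS formatted FROM datetime_example;
```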
@@ -114,6 +116,24 @@ FROM dt
└─────────────────────┴─────────────────────┘
```

As timezone conversion only changes the metadata, the operation has no computation cost.
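An illustrative check (not from the original text) that the conversion leaves the stored value untouched:

```sql
SELECT
    now() AS server_time,
    toTimeZone(now(), 'UTC') AS utc_time,
    -- the underlying Unix timestamp is identical, only the display time zone differs
    toUnixTimestamp(now()) = toUnixTimestamp(toTimeZone(now(), 'UTC')) AS same_timestamp;
```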
## Limitations on timezones support

Some timezones may not be supported completely. There are a few cases:

If the offset from UTC is not a multiple of 15 minutes, the calculation of hours and minutes can be incorrect. For example, the time zone in Monrovia, Liberia had the offset UTC -0:44:30 before 7 Jan 1972. If you are doing calculations on historical time in the Monrovia timezone, the time processing functions may give incorrect results. The results after 7 Jan 1972 will be correct nevertheless.

If the time transition (due to daylight saving time or for other reasons) was performed at a point of time that is not a multiple of 15 minutes, you can also get incorrect results on that specific day.

Non-monotonic calendar dates. For example, in Happy Valley - Goose Bay, the time was transitioned one hour backwards at 00:01:00 7 Nov 2010 (one minute after midnight). So after 6th Nov has ended, people observed a whole one minute of 7th Nov, then time was changed back to 23:01 6th Nov and after another 59 minutes the 7th Nov started again. ClickHouse does not (yet) support this kind of fun. During these days the results of time processing functions may be slightly incorrect.

A similar issue exists for the Casey Antarctic station in 2010. They changed time three hours back at 5 Mar, 02:00. If you are working at an Antarctic station, don't be afraid to use ClickHouse. Just make sure you set the timezone to UTC or are aware of the inaccuracies.

Time shifts for multiple days. Some Pacific islands changed their timezone offset from UTC+14 to UTC-12. That's alright, but some inaccuracies may be present if you do calculations with their timezone for historical time points on the days of conversion.

## See Also {#see-also}

- [Type conversion functions](../../sql-reference/functions/type-conversion-functions.md)

@@ -728,7 +728,7 @@ The result of the function depends on the affected data blocks and the order of
It can reach the neighbor rows only inside the currently processed data block.

The rows order used during the calculation of `neighbor` can differ from the order of rows returned to the user.
To prevent that you can make a subquery with ORDER BY and call the function from outside the subquery.
To prevent that you can make a subquery with [ORDER BY](../../sql-reference/statements/select/order-by.md) and call the function from outside the subquery.
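A minimal sketch (not part of the original text) of that pattern, with a deterministic ordering fixed inside the subquery:

```sql
-- Rows are ordered inside the subquery, so neighbor() sees a stable order;
-- the third argument is the default value returned at the block edge.
SELECT
    number,
    neighbor(number, 1, -1) AS next_number
FROM (SELECT number FROM numbers(10) ORDER BY number);
```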
**Arguments**

@@ -834,12 +834,12 @@ Calculates the difference between successive row values in the data block.
Returns 0 for the first row and the difference from the previous row for each subsequent row.

!!! warning "Warning"
    It can reach the previos row only inside the currently processed data block.
    It can reach the previous row only inside the currently processed data block.

The result of the function depends on the affected data blocks and the order of data in the block.

The rows order used during the calculation of `runningDifference` can differ from the order of rows returned to the user.
To prevent that you can make a subquery with ORDER BY and call the function from outside the subquery.
To prevent that you can make a subquery with [ORDER BY](../../sql-reference/statements/select/order-by.md) and call the function from outside the subquery.
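The same pattern applies here; a short sketch (not from the original text):

```sql
-- The subquery fixes the row order before runningDifference() is applied.
SELECT
    ts,
    runningDifference(ts) AS delta
FROM (SELECT number * 10 AS ts FROM numbers(5) ORDER BY ts);
```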
Example:

@@ -17,7 +17,7 @@ When `OPTIMIZE` is used with the [ReplicatedMergeTree](../../engines/table-engin

- If `OPTIMIZE` doesn’t perform a merge for any reason, it doesn’t notify the client. To enable notifications, use the [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop) setting.
- If you specify a `PARTITION`, only the specified partition is optimized. [How to set partition expression](../../sql-reference/statements/alter/index.md#alter-how-to-specify-part-expr).
- If you specify `FINAL`, optimization is performed even when all the data is already in one part.
- If you specify `FINAL`, optimization is performed even when all the data is already in one part. Also, the merge is forced even if concurrent merges are performed.
- If you specify `DEDUPLICATE`, then completely identical rows will be deduplicated (all columns are compared); this makes sense only for the MergeTree engine.
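Put together, a hedged sketch of the options described in the list above; the table name and partition value are hypothetical (the partition expression assumes a `toYYYYMM` partitioning key):

```sql
-- Merge the named partition down to a single part and drop fully identical rows.
OPTIMIZE TABLE db.example_table PARTITION 202101 FINAL DEDUPLICATE;
```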
!!! warning "Warning"

docs/ru/operations/opentelemetry.md (new file, 37 lines)
@@ -0,0 +1,37 @@
---
toc_priority: 62
toc_title: OpenTelemetry Support
---

# [experimental] OpenTelemetry Support

ClickHouse supports [OpenTelemetry](https://opentelemetry.io/), an open standard for collecting traces and metrics from a distributed application.

!!! warning "Warning"
    Support for the standard is experimental and will change over time.

## Supplying Trace Context to ClickHouse

ClickHouse accepts trace context information via the `tracecontext` HTTP headers, as described in the [W3C recommendation](https://www.w3.org/TR/trace-context/). It also accepts context information over the native protocol that is used for communication between ClickHouse servers or between the client and the server. For manual testing, standard trace context headers can be supplied to `clickhouse-client` via the `--opentelemetry-traceparent` and `--opentelemetry-tracestate` flags.

If no incoming trace context is supplied, ClickHouse can start a trace with a probability set by the [opentelemetry_start_trace_probability](../operations/settings/settings.md#opentelemetry-start-trace-probability) setting.

## Propagating the Trace Context

The trace context is propagated to downstream services in the following cases:

* Queries to remote ClickHouse servers, for example when using the [Distributed](../engines/table-engines/special/distributed.md) table engine.

* The [url](../sql-reference/table-functions/url.md) table function. Trace context information is sent in the HTTP headers.

## How ClickHouse Performs Tracing

ClickHouse creates `trace spans` for each query and for some query execution stages, such as query planning or distributed queries.

To analyze the tracing information, it should be exported to a monitoring system that supports OpenTelemetry, such as [Jaeger](https://jaegertracing.io/) or [Prometheus](https://prometheus.io/). ClickHouse does not depend on a particular monitoring system; instead, it only provides the tracing data through a system table. The OpenTelemetry span information [required by the standard](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#span) is stored in the [system.opentelemetry_span_log](../operations/system-tables/opentelemetry_span_log.md) system table.

The table must be enabled in the server configuration, see the `opentelemetry_span_log` element in the `config.xml` configuration file. It is enabled by default.

The tags or attributes are stored as two parallel arrays containing the keys and values. Use [ARRAY JOIN](../sql-reference/statements/select/array-join.md) to work with them.

[Original article](https://clickhouse.tech/docs/ru/operations/opentelemetry/) <!--hide-->
@@ -23,8 +23,6 @@ SELECT
└─────────────────────┴────────────┴────────────┴─────────────────────┘
```

Only time zones that differ from UTC by a whole number of hours are supported.

## toTimeZone {#totimezone}

Converts a date or a date with time to the specified time zone. The time zone is an attribute of the Date/DateTime types; the internal value (number of seconds) of a table field or result column does not change, only the field type changes, and with it its automatically derived text representation.
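A small illustration (not from the original text) of the type change described above:

```sql
-- toTypeName shows that only the type metadata changes when converting the time zone.
SELECT
    toTypeName(now()) AS server_tz_type,
    toTypeName(toTimeZone(now(), 'UTC')) AS utc_type;
```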
@@ -659,7 +659,7 @@ SELECT

## neighbor {#neighbor}

The function provides access to a value in the column `column` at the offset `offset` relative to the current row. It is a partial implementation of the [window functions](https://en.wikipedia.org/wiki/SQL_window_function) `LEAD()` and `LAG()`.

**Syntax**

@@ -667,7 +667,13 @@ SELECT
neighbor(column, offset[, default_value])
```

The result of the function depends on the affected data blocks and the order of data in the block. If you make a subquery with ORDER BY and call the function from outside the subquery, you can get the expected result.

The result of the function depends on the affected data blocks and the order of data in the block.

!!! warning "Warning"
    The function can access a value in the column of a neighboring row only inside the currently processed data block.

The order of rows used during the calculation of `neighbor` can differ from the order of rows returned to the user.
To prevent that, you can make a subquery with [ORDER BY](../../sql-reference/statements/select/order-by.md) and call the function from outside the subquery.

**Parameters**

@@ -772,8 +778,13 @@ FROM numbers(16)
Calculates the difference between successive row values in the data block.
Returns 0 for the first row and the difference from the previous row for each subsequent row.

!!! warning "Warning"
    The function can take the value of the previous row only inside the currently processed data block.

The result of the function depends on the affected data blocks and the order of data in the block.
If you make a subquery with ORDER BY and call the function from outside the subquery, you can get the expected result.

The order of rows used during the calculation of `runningDifference` can differ from the order of rows returned to the user.
To prevent that, you can make a subquery with [ORDER BY](../../sql-reference/statements/select/order-by.md) and call the function from outside the subquery.

Example:

@@ -15,11 +15,10 @@ OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition | PARTITION I

- If `OPTIMIZE` does not perform a merge for any reason, ClickHouse does not notify the client about it. To enable notifications, use the [optimize_throw_if_noop](../../operations/settings/settings.md#setting-optimize_throw_if_noop) setting.
- If you specify `PARTITION`, only the specified partition is optimized. [How to specify the partition expression](alter/index.md#alter-how-to-specify-part-expr).
- If you specify `FINAL`, optimization is performed even when all the data is already in one part.
- If you specify `FINAL`, optimization is performed even when all the data is already in one part. In addition, the merge is forced even if concurrent merges are in progress.
- If you specify `DEDUPLICATE`, completely identical rows are collapsed (the values in all columns are compared); this makes sense only for the MergeTree engine.

!!! warning "Warning"
    The `OPTIMIZE` query cannot fix the cause of the "Too many parts" error.

[Original article](https://clickhouse.tech/docs/ru/sql-reference/statements/optimize/) <!--hide-->
@@ -458,33 +458,25 @@ ExecutionStatus ExecutionStatus::fromCurrentException(const std::string & start_
    return ExecutionStatus(getCurrentExceptionCode(), msg);
}

ParsingException::ParsingException()
{
    Exception::message(Exception::message() + "{}");
}

ParsingException::ParsingException() = default;
ParsingException::ParsingException(const std::string & msg, int code)
    : Exception(msg, code)
{
    Exception::message(Exception::message() + "{}");
}

ParsingException::ParsingException(int code, const std::string & message)
    : Exception(message, code)
{
    Exception::message(Exception::message() + "{}");
}

/// We use additional field formatted_message_ to make this method const.
std::string ParsingException::displayText() const
{
    try
    {
        if (line_number_ == -1)
            formatted_message_ = fmt::format(message(), "");
            formatted_message_ = message();
        else
            formatted_message_ = fmt::format(message(), fmt::format(": (at row {})\n", line_number_));
            formatted_message_ = message() + fmt::format(": (at row {})\n", line_number_);
    }
    catch (...)
    {}

@@ -115,9 +115,7 @@ public:
    template <typename ...Args>
    ParsingException(int code, const std::string & fmt, Args&&... args)
        : Exception(fmt::format(fmt, std::forward<Args>(args)...), code)
    {
        Exception::message(Exception::message() + "{}");
    }
    {}

    std::string displayText() const
@@ -1100,14 +1100,14 @@ public:
            return executeBitmapData<UInt32>(arguments, input_rows_count);
        else if (which.isUInt64())
            return executeBitmapData<UInt64>(arguments, input_rows_count);
        else if (which.isUInt8())
            return executeBitmapData<UInt8>(arguments, input_rows_count);
        else if (which.isUInt16())
            return executeBitmapData<UInt16>(arguments, input_rows_count);
        else if (which.isUInt32())
            return executeBitmapData<UInt32>(arguments, input_rows_count);
        else if (which.isUInt64())
            return executeBitmapData<UInt64>(arguments, input_rows_count);
        else if (which.isInt8())
            return executeBitmapData<Int8>(arguments, input_rows_count);
        else if (which.isInt16())
            return executeBitmapData<Int16>(arguments, input_rows_count);
        else if (which.isInt32())
            return executeBitmapData<Int32>(arguments, input_rows_count);
        else if (which.isInt64())
            return executeBitmapData<Int64>(arguments, input_rows_count);
        else
            throw Exception(
                "Unexpected type " + from_type->getName() + " of argument of function " + getName(), ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
@@ -15,6 +15,8 @@
#    include <Common/config.h>
#endif

#include <boost/algorithm/string/case_conv.hpp>

namespace DB
{

@@ -55,23 +57,24 @@ CompressionMethod chooseCompressionMethod(const std::string & path, const std::s
        file_extension = path.substr(pos + 1, std::string::npos);
    }

    const std::string * method_str = file_extension.empty() ? &hint : &file_extension;
    std::string method_str = file_extension.empty() ? hint : std::move(file_extension);
    boost::algorithm::to_lower(method_str);

    if (*method_str == "gzip" || *method_str == "gz")
    if (method_str == "gzip" || method_str == "gz")
        return CompressionMethod::Gzip;
    if (*method_str == "deflate")
    if (method_str == "deflate")
        return CompressionMethod::Zlib;
    if (*method_str == "brotli" || *method_str == "br")
    if (method_str == "brotli" || method_str == "br")
        return CompressionMethod::Brotli;
    if (*method_str == "LZMA" || *method_str == "xz")
    if (method_str == "lzma" || method_str == "xz")
        return CompressionMethod::Xz;
    if (*method_str == "zstd" || *method_str == "zst")
    if (method_str == "zstd" || method_str == "zst")
        return CompressionMethod::Zstd;
    if (hint.empty() || hint == "auto" || hint == "none")
        return CompressionMethod::None;

    throw Exception(
        "Unknown compression method " + hint + ". Only 'auto', 'none', 'gzip', 'br', 'xz', 'zstd' are supported as compression methods",
        "Unknown compression method " + hint + ". Only 'auto', 'none', 'gzip', 'deflate', 'br', 'xz', 'zstd' are supported as compression methods",
        ErrorCodes::NOT_IMPLEMENTED);
}
@@ -471,12 +471,14 @@ def select_without_columns(clickhouse_node, mysql_node, service_name):
    mysql_node.query("CREATE DATABASE db")
    mysql_node.query("CREATE TABLE db.t (a INT PRIMARY KEY, b INT)")
    clickhouse_node.query(
        "CREATE DATABASE db ENGINE = MaterializeMySQL('{}:3306', 'db', 'root', 'clickhouse')".format(service_name))
        "CREATE DATABASE db ENGINE = MaterializeMySQL('{}:3306', 'db', 'root', 'clickhouse') SETTINGS max_flush_data_time = 100000".format(service_name))
    check_query(clickhouse_node, "SHOW TABLES FROM db FORMAT TSV", "t\n")
    clickhouse_node.query("SYSTEM STOP MERGES db.t")
    clickhouse_node.query("CREATE VIEW v AS SELECT * FROM db.t")
    mysql_node.query("INSERT INTO db.t VALUES (1, 1), (2, 2)")
    mysql_node.query("DELETE FROM db.t WHERE a=2;")
    mysql_node.query("DELETE FROM db.t WHERE a = 2;")
    # We need to execute a DDL for flush data buffer
    mysql_node.query("CREATE TABLE db.temporary(a INT PRIMARY KEY, b INT)")

    optimize_on_insert = clickhouse_node.query("SELECT value FROM system.settings WHERE name='optimize_on_insert'").strip()
    if optimize_on_insert == "0":
tests/queries/0_stateless/01750_parsing_exception.sh (new executable file, 8 lines)
@@ -0,0 +1,8 @@
#!/usr/bin/env bash

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh

# if it does not match, the exit code of grep will be non-zero and the test will fail
$CLICKHOUSE_CLIENT -q "SELECT toDateTime(format('{}-{}-01 00:00:00', '2021', '1'))" |& grep -F -q 'Cannot parse datetime 2021-1-01 00:00:00: Cannot parse DateTime from String:'