Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-24 08:32:02 +00:00)

Merge branch 'master' into update_keeper_config

This commit is contained in: commit aee446352a

.github/workflows/main.yml (vendored): 13 changes
@@ -17,8 +17,6 @@ jobs:
uses: actions/checkout@v2
- name: Labels check
run: cd $GITHUB_WORKSPACE/tests/ci && python3 run_check.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DockerHubPush:
needs: CheckLabels
runs-on: [self-hosted]

@@ -27,11 +25,6 @@ jobs:
uses: actions/checkout@v2
- name: Images check
run: cd $GITHUB_WORKSPACE/tests/ci && python3 docker_images_check.py
env:
YANDEX_S3_ACCESS_KEY_ID: ${{ secrets.YANDEX_S3_ACCESS_KEY_ID }}
YANDEX_S3_ACCESS_SECRET_KEY: ${{ secrets.YANDEX_S3_ACCESS_SECRET_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
DOCKER_ROBOT_PASSWORD: ${{ secrets.DOCKER_ROBOT_PASSWORD }}
- name: Upload images files to artifacts
uses: actions/upload-artifact@v2
with:

@@ -49,10 +42,6 @@ jobs:
- name: Check out repository code
uses: actions/checkout@v2
- name: Style Check
env:
YANDEX_S3_ACCESS_KEY_ID: ${{ secrets.YANDEX_S3_ACCESS_KEY_ID }}
YANDEX_S3_ACCESS_SECRET_KEY: ${{ secrets.YANDEX_S3_ACCESS_SECRET_KEY }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: cd $GITHUB_WORKSPACE/tests/ci && python3 style_check.py
FinishCheck:
needs: [StyleCheck, DockerHubPush, CheckLabels]

@@ -62,5 +51,3 @@ jobs:
uses: actions/checkout@v2
- name: Finish label
run: cd $GITHUB_WORKSPACE/tests/ci && python3 finish_check.py
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
@@ -5,7 +5,7 @@ toc_title: Distributed

# Distributed Table Engine {#distributed}

Tables with Distributed engine do not store any data by their own, but allow distributed query processing on multiple servers.
Tables with Distributed engine do not store any data of their own, but allow distributed query processing on multiple servers.
Reading is automatically parallelized. During a read, the table indexes on remote servers are used, if there are any.

The Distributed engine accepts parameters:
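The hunk above ends just before the parameter list itself. As an illustrative sketch that is not part of this commit, a Distributed table is typically declared with a cluster name, a database, a target table, and an optional sharding key; the names used here (logs_cluster, default, hits_local) are hypothetical:

    CREATE TABLE hits_all AS hits_local
    ENGINE = Distributed(logs_cluster, default, hits_local, rand());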
@@ -167,20 +167,20 @@ If this parameter is set to `true`, the write operation selects the first health

If it is set to `false` (the default), data is written to all replicas. In essence, this means that the Distributed table replicates data itself. This is worse than using replicated tables, because the consistency of replicas is not checked, and over time they will contain slightly different data.

To select the shard that a row of data is sent to, the sharding expression is analyzed, and its remainder is taken from dividing it by the total weight of the shards. The row is sent to the shard that corresponds to the half-interval of the remainders from `prev_weight` to `prev_weights + weight`, where `prev_weights` is the total weight of the shards with the smallest number, and `weight` is the weight of this shard. For example, if there are two shards, and the first has a weight of 9 while the second has a weight of 10, the row will be sent to the first shard for the remainders from the range \[0, 9), and to the second for the remainders from the range \[9, 19).
To select the shard that a row of data is sent to, the sharding expression is analyzed, and its remainder is taken from dividing it by the total weight of the shards. The row is sent to the shard that corresponds to the half-interval of the remainders from `prev_weights` to `prev_weights + weight`, where `prev_weights` is the total weight of the shards with the smallest number, and `weight` is the weight of this shard. For example, if there are two shards, and the first has a weight of 9 while the second has a weight of 10, the row will be sent to the first shard for the remainders from the range \[0, 9), and to the second for the remainders from the range \[9, 19).

The sharding expression can be any expression from constants and table columns that returns an integer. For example, you can use the expression `rand()` for random distribution of data, or `UserID` for distribution by the remainder from dividing the user’s ID (then the data of a single user will reside on a single shard, which simplifies running IN and JOIN by users). If one of the columns is not distributed evenly enough, you can wrap it in a hash function: intHash64(UserID).

A simple reminder from the division is a limited solution for sharding and isn’t always appropriate. It works for medium and large volumes of data (dozens of servers), but not for very large volumes of data (hundreds of servers or more). In the latter case, use the sharding scheme required by the subject area, rather than using entries in Distributed tables.
A simple remainder from the division is a limited solution for sharding and isn’t always appropriate. It works for medium and large volumes of data (dozens of servers), but not for very large volumes of data (hundreds of servers or more). In the latter case, use the sharding scheme required by the subject area, rather than using entries in Distributed tables.

SELECT queries are sent to all the shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly). When you add a new shard, you do not have to transfer the old data to it. You can write new data with a heavier weight – the data will be distributed slightly unevenly, but queries will work correctly and efficiently.
SELECT queries are sent to all the shards and work regardless of how data is distributed across the shards (they can be distributed completely randomly). When you add a new shard, you do not have to transfer old data into it. Instead, you can write new data to it by using a heavier weight – the data will be distributed slightly unevenly, but queries will work correctly and efficiently.

You should be concerned about the sharding scheme in the following cases:

- Queries are used that require joining data (IN or JOIN) by a specific key. If data is sharded by this key, you can use local IN or JOIN instead of GLOBAL IN or GLOBAL JOIN, which is much more efficient.
- A large number of servers is used (hundreds or more) with a large number of small queries (queries of individual clients - websites, advertisers, or partners). In order for the small queries to not affect the entire cluster, it makes sense to locate data for a single client on a single shard. Alternatively, as we’ve done in Yandex.Metrica, you can set up bi-level sharding: divide the entire cluster into “layers”, where a layer may consist of multiple shards. Data for a single client is located on a single layer, but shards can be added to a layer as necessary, and data is randomly distributed within them. Distributed tables are created for each layer, and a single shared distributed table is created for global queries.

Data is written asynchronously. When inserted in the table, the data block is just written to the local file system. The data is sent to the remote servers in the background as soon as possible. The period for sending data is managed by the [distributed_directory_monitor_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_sleep_time_ms) and [distributed_directory_monitor_max_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_max_sleep_time_ms) settings. The `Distributed` engine sends each file with inserted data separately, but you can enable batch sending of files with the [distributed_directory_monitor_batch_inserts](../../../operations/settings/settings.md#distributed_directory_monitor_batch_inserts) setting. This setting improves cluster performance by better utilizing local server and network resources. You should check whether data is sent successfully by checking the list of files (data waiting to be sent) in the table directory: `/var/lib/clickhouse/data/database/table/`. The number of threads performing background tasks can be set by [background_distributed_schedule_pool_size](../../../operations/settings/settings.md#background_distributed_schedule_pool_size) setting.
Data is written asynchronously. When inserted in the table, the data block is just written to the local file system. The data is sent to the remote servers in the background as soon as possible. The periodicity for sending data is managed by the [distributed_directory_monitor_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_sleep_time_ms) and [distributed_directory_monitor_max_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_max_sleep_time_ms) settings. The `Distributed` engine sends each file with inserted data separately, but you can enable batch sending of files with the [distributed_directory_monitor_batch_inserts](../../../operations/settings/settings.md#distributed_directory_monitor_batch_inserts) setting. This setting improves cluster performance by better utilizing local server and network resources. You should check whether data is sent successfully by checking the list of files (data waiting to be sent) in the table directory: `/var/lib/clickhouse/data/database/table/`. The number of threads performing background tasks can be set by [background_distributed_schedule_pool_size](../../../operations/settings/settings.md#background_distributed_schedule_pool_size) setting.

If the server ceased to exist or had a rough restart (for example, after a device failure) after an INSERT to a Distributed table, the inserted data might be lost. If a damaged data part is detected in the table directory, it is transferred to the `broken` subdirectory and no longer used.
@@ -20,7 +20,7 @@ Subquery is another `SELECT` query that may be specified in parenthesis inside `

When `FINAL` is specified, ClickHouse fully merges the data before returning the result and thus performs all data transformations that happen during merges for the given table engine.

It is applicable when selecting data from tables that use the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md)-engine family (except `GraphiteMergeTree`). Also supported for:
It is applicable when selecting data from tables that use the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md)-engine family. Also supported for:

- [Replicated](../../../engines/table-engines/mergetree-family/replication.md) versions of `MergeTree` engines.
- [View](../../../engines/table-engines/special/view.md), [Buffer](../../../engines/table-engines/special/buffer.md), [Distributed](../../../engines/table-engines/special/distributed.md), and [MaterializedView](../../../engines/table-engines/special/materializedview.md) engines that operate over other engines, provided they were created over `MergeTree`-engine tables.
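As an illustrative sketch that is not part of this commit, the `FINAL` modifier is written directly in the FROM clause; the table and column names here are hypothetical:

    SELECT key, value
    FROM replacing_table FINAL
    WHERE key = 1;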
@@ -20,7 +20,7 @@ toc_title: FROM

If the `FINAL` modifier is used in a query, ClickHouse fully merges the data before returning the result, thereby performing all the data transformations that the table engine performs during merges.

It is applicable when selecting data from tables that use the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) family of engines (except `GraphiteMergeTree`). Also supported for:
It is applicable when selecting data from tables that use the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) family of engines. Also supported for:

- [Replicated](../../../engines/table-engines/mergetree-family/replication.md) versions of `MergeTree` engines.
- [View](../../../engines/table-engines/special/view.md), [Buffer](../../../engines/table-engines/special/buffer.md), [Distributed](../../../engines/table-engines/special/distributed.md), and [MaterializedView](../../../engines/table-engines/special/materializedview.md), which operate over other engines, provided they were created over tables with `MergeTree`-family engines.
@@ -20,7 +20,7 @@ toc_title: FROM

When `FINAL` is specified, ClickHouse fully merges the data before returning the result and thus performs all data transformations that happen during merges for the given table engine.

It is applicable when selecting data from tables that use the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) engine family (except `GraphiteMergeTree`). Also supported for:
It is applicable when selecting data from tables that use the [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) engine family. Also supported for:

- [Replicated](../../../engines/table-engines/mergetree-family/replication.md) versions of `MergeTree` engines
- [View](../../../engines/table-engines/special/view.md), [Buffer](../../../engines/table-engines/special/buffer.md), [Distributed](../../../engines/table-engines/special/distributed.md), and [MaterializedView](../../../engines/table-engines/special/materializedview.md) engines that operate over other engines, as long as they are backed by `MergeTree`-engine tables.
@@ -25,6 +25,8 @@
#include <Storages/StorageFactory.h>
#include <Storages/registerStorages.h>
#include <DataTypes/DataTypeFactory.h>
#include <Formats/FormatFactory.h>
#include <Formats/registerFormats.h>

#pragma GCC diagnostic ignored "-Wunused-function"

@@ -114,6 +116,7 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
registerAggregateFunctions();
registerTableFunctions();
registerStorages();
registerFormats();

std::unordered_set<std::string> additional_names;

@@ -130,6 +133,8 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
return FunctionFactory::instance().has(what)
|| AggregateFunctionFactory::instance().isAggregateFunctionName(what)
|| TableFunctionFactory::instance().isTableFunctionName(what)
|| FormatFactory::instance().isOutputFormat(what)
|| FormatFactory::instance().isInputFormat(what)
|| additional_names.count(what);
};
@@ -5,7 +5,7 @@
#include <Processors/Executors/PushingPipelineExecutor.h>
#include <Processors/Executors/PushingAsyncPipelineExecutor.h>
#include <Storages/IStorage.h>
#include "Core/Protocol.h"
#include <Core/Protocol.h>

namespace DB

@@ -105,6 +105,16 @@ void LocalConnection::sendQuery(
state->pushing_executor->start();
state->block = state->pushing_executor->getHeader();
}

const auto & table_id = query_context->getInsertionTable();
if (query_context->getSettingsRef().input_format_defaults_for_omitted_fields)
{
if (!table_id.empty())
{
auto storage_ptr = DatabaseCatalog::instance().getTable(table_id, query_context);
state->columns_description = storage_ptr->getInMemoryMetadataPtr()->getColumns();
}
}
}
else if (state->io.pipeline.pulling())
{

@@ -117,7 +127,9 @@ void LocalConnection::sendQuery(
executor.execute();
}

if (state->block)
if (state->columns_description)
next_packet_type = Protocol::Server::TableColumns;
else if (state->block)
next_packet_type = Protocol::Server::Data;
}
catch (const Exception & e)

@@ -338,21 +350,41 @@ Packet LocalConnection::receivePacket()
packet.block = std::move(state->block.value());
state->block.reset();
}
next_packet_type.reset();
break;
}
case Protocol::Server::TableColumns:
{
if (state->columns_description)
{
/// Send external table name (empty name is the main table)
/// (see TCPHandler::sendTableColumns)
packet.multistring_message = {"", state->columns_description->toString()};
}

if (state->block)
{
next_packet_type = Protocol::Server::Data;
}

break;
}
case Protocol::Server::Exception:
{
packet.exception = std::make_unique<Exception>(*state->exception);
next_packet_type.reset();
break;
}
case Protocol::Server::Progress:
{
packet.progress = std::move(state->progress);
state->progress.reset();
next_packet_type.reset();
break;
}
case Protocol::Server::EndOfStream:
{
next_packet_type.reset();
break;
}
default:

@@ -360,7 +392,6 @@ Packet LocalConnection::receivePacket()
"Unknown packet {} for {}", toString(packet.type), getDescription());
}

next_packet_type.reset();
return packet;
}
@@ -5,6 +5,7 @@
#include <QueryPipeline/BlockIO.h>
#include <IO/TimeoutSetter.h>
#include <Interpreters/Session.h>
#include <Storages/ColumnsDescription.h>

namespace DB

@@ -33,6 +34,7 @@ struct LocalQueryState

/// Current block to be sent next.
std::optional<Block> block;
std::optional<ColumnsDescription> columns_description;

/// Is request cancelled
bool is_cancelled = false;
@@ -24,11 +24,13 @@ int main(int argc, char **argv)
size_t rows = 0;
char dummy;

while (!read_buffer.eof()) {
while (!read_buffer.eof())
{
readIntText(rows, read_buffer);
readChar(dummy, read_buffer);

for (size_t i = 0; i < rows; ++i) {
for (size_t i = 0; i < rows; ++i)
{
readString(buffer, read_buffer);
readChar(dummy, read_buffer);
@@ -277,17 +277,18 @@ DataTypePtr getLeastSupertype(const DataTypes & types)
/// For Date and DateTime/DateTime64, the common type is DateTime/DateTime64. No other types are compatible.
{
UInt32 have_date = type_ids.count(TypeIndex::Date);
UInt32 have_date32 = type_ids.count(TypeIndex::Date32);
UInt32 have_datetime = type_ids.count(TypeIndex::DateTime);
UInt32 have_datetime64 = type_ids.count(TypeIndex::DateTime64);

if (have_date || have_datetime || have_datetime64)
if (have_date || have_date32 || have_datetime || have_datetime64)
{
bool all_date_or_datetime = type_ids.size() == (have_date + have_datetime + have_datetime64);
bool all_date_or_datetime = type_ids.size() == (have_date + have_date32 + have_datetime + have_datetime64);
if (!all_date_or_datetime)
throw Exception(getExceptionMessagePrefix(types) + " because some of them are Date/DateTime/DateTime64 and some of them are not",
throw Exception(getExceptionMessagePrefix(types) + " because some of them are Date/Date32/DateTime/DateTime64 and some of them are not",
ErrorCodes::NO_COMMON_TYPE);

if (have_datetime64 == 0)
if (have_datetime64 == 0 && have_date32 == 0)
{
for (const auto & type : types)
{

@@ -298,6 +299,22 @@ DataTypePtr getLeastSupertype(const DataTypes & types)
return std::make_shared<DataTypeDateTime>();
}

/// For Date and Date32, the common type is Date32
if (have_datetime == 0 && have_datetime64 == 0)
{
for (const auto & type : types)
{
if (isDate32(type))
return type;
}
}

/// For Datetime and Date32, the common type is Datetime64
if (have_datetime == 1 && have_date32 == 1 && have_datetime64 == 0)
{
return std::make_shared<DataTypeDateTime64>(0);
}

UInt8 max_scale = 0;
size_t max_scale_date_time_index = 0;
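A hedged sketch of queries that exercise the new supertype rules (not taken from the commit; it assumes, as usual in ClickHouse, that array literals and if() resolve their result type through getLeastSupertype). Under the rules added above, Date with Date32 should resolve to Date32, and DateTime with Date32 to DateTime64(0), instead of failing with NO_COMMON_TYPE:

    SELECT toTypeName([toDate('2021-10-01'), toDate32('2021-10-01')]);   -- expected: Array(Date32)
    SELECT toTypeName(if(1, toDate32('2021-10-01'), toDateTime('2021-10-01 00:00:00')));   -- expected: DateTime64(0)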
@@ -50,36 +50,33 @@ public:
throw Exception{"JSONPath functions require at least 2 arguments", ErrorCodes::TOO_FEW_ARGUMENTS_FOR_FUNCTION};
}

const auto & first_column = arguments[0];
const auto & json_column = arguments[0];

/// Check 1 argument: must be of type String (JSONPath)
if (!isString(first_column.type))
if (!isString(json_column.type))
{
throw Exception(
"JSONPath functions require 1 argument to be JSONPath of type string, illegal type: " + first_column.type->getName(),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}
/// Check 1 argument: must be const (JSONPath)
if (!isColumnConst(*first_column.column))
{
throw Exception("1 argument (JSONPath) must be const", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}

const auto & second_column = arguments[1];

/// Check 2 argument: must be of type String (JSON)
if (!isString(second_column.type))
{
throw Exception(
"JSONPath functions require 2 argument to be JSON of string, illegal type: " + second_column.type->getName(),
"JSONPath functions require first argument to be JSON of string, illegal type: " + json_column.type->getName(),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}

const ColumnPtr & arg_jsonpath = first_column.column;
const auto & json_path_column = arguments[1];

if (!isString(json_path_column.type))
{
throw Exception(
"JSONPath functions require second argument to be JSONPath of type string, illegal type: " + json_path_column.type->getName(),
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}
if (!isColumnConst(*json_path_column.column))
{
throw Exception("Second argument (JSONPath) must be constant string", ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
}

const ColumnPtr & arg_jsonpath = json_path_column.column;
const auto * arg_jsonpath_const = typeid_cast<const ColumnConst *>(arg_jsonpath.get());
const auto * arg_jsonpath_string = typeid_cast<const ColumnString *>(arg_jsonpath_const->getDataColumnPtr().get());

const ColumnPtr & arg_json = second_column.column;
const ColumnPtr & arg_json = json_column.column;
const auto * col_json_const = typeid_cast<const ColumnConst *>(arg_json.get());
const auto * col_json_string
= typeid_cast<const ColumnString *>(col_json_const ? col_json_const->getDataColumnPtr().get() : arg_json.get());

@@ -152,7 +149,7 @@ public:
bool isVariadic() const override { return true; }
size_t getNumberOfArguments() const override { return 0; }
bool useDefaultImplementationForConstants() const override { return true; }
ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {0}; }
ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {1}; }
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }

DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
@@ -1088,7 +1088,7 @@ public:
if (!((both_represented_by_number && !has_date) /// Do not allow to compare date and number.
|| (left.isStringOrFixedString() || right.isStringOrFixedString()) /// Everything can be compared with string by conversion.
/// You can compare the date, datetime, or datatime64 and an enumeration with a constant string.
|| ((left.isDate() || left.isDateTime() || left.isDateTime64()) && (right.isDate() || right.isDateTime() || right.isDateTime64()) && left.idx == right.idx) /// only date vs date, or datetime vs datetime
|| ((left.isDate() || left.isDate32() || left.isDateTime() || left.isDateTime64()) && (right.isDate() || right.isDate32() || right.isDateTime() || right.isDateTime64()) && left.idx == right.idx) /// only date vs date, or datetime vs datetime
|| (left.isUUID() && right.isUUID())
|| (left.isEnum() && right.isEnum() && arguments[0]->getName() == arguments[1]->getName()) /// only equivalent enum type values can be compared against
|| (left_tuple && right_tuple && left_tuple->getElements().size() == right_tuple->getElements().size())

@@ -1178,8 +1178,8 @@ public:
const bool left_is_string = isStringOrFixedString(which_left);
const bool right_is_string = isStringOrFixedString(which_right);

bool date_and_datetime = (which_left.idx != which_right.idx) && (which_left.isDate() || which_left.isDateTime() || which_left.isDateTime64())
&& (which_right.isDate() || which_right.isDateTime() || which_right.isDateTime64());
bool date_and_datetime = (which_left.idx != which_right.idx) && (which_left.isDate() || which_left.isDate32() || which_left.isDateTime() || which_left.isDateTime64())
&& (which_right.isDate() || which_right.isDate32() || which_right.isDateTime() || which_right.isDateTime64());

ColumnPtr res;
if (left_is_num && right_is_num && !date_and_datetime)

@@ -1222,8 +1222,8 @@ public:
}
else if ((isColumnedAsDecimal(left_type) || isColumnedAsDecimal(right_type)))
{
// Comparing Date and DateTime64 requires implicit conversion,
if (date_and_datetime && (isDate(left_type) || isDate(right_type)))
// Comparing Date/Date32 and DateTime64 requires implicit conversion,
if (date_and_datetime && (isDateOrDate32(left_type) || isDateOrDate32(right_type)))
{
DataTypePtr common_type = getLeastSupertype({left_type, right_type});
ColumnPtr c0_converted = castColumn(col_with_type_and_name_left, common_type);

@@ -1247,8 +1247,10 @@ public:
ColumnPtr c0_converted = castColumn(col_with_type_and_name_left, common_type);
ColumnPtr c1_converted = castColumn(col_with_type_and_name_right, common_type);
if (!((res = executeNumLeftType<UInt32>(c0_converted.get(), c1_converted.get()))
|| (res = executeNumLeftType<UInt64>(c0_converted.get(), c1_converted.get()))))
throw Exception("Date related common types can only be UInt32 or UInt64", ErrorCodes::LOGICAL_ERROR);
|| (res = executeNumLeftType<UInt64>(c0_converted.get(), c1_converted.get()))
|| (res = executeNumLeftType<Int32>(c0_converted.get(), c1_converted.get()))
|| (res = executeDecimal({c0_converted, common_type, "left"}, {c1_converted, common_type, "right"}))))
throw Exception("Date related common types can only be UInt32/UInt64/Int32/Decimal", ErrorCodes::LOGICAL_ERROR);
return res;
}
else if (left_type->equals(*right_type))
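A sketch of comparisons this change is presumably meant to allow (not taken from the commit); previously, mixing Date32 with Date or DateTime64 in a comparison was rejected, while after the change the operands are cast to a common type first:

    SELECT toDate32('2021-10-01') = toDate('2021-10-01');
    SELECT toDate32('2021-10-01') < toDateTime64('2021-10-01 00:00:01', 3);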
@@ -301,7 +301,7 @@ struct ToDateTimeImpl
return time_zone.fromDayNum(DayNum(d));
}

static inline UInt32 execute(Int32 d, const DateLUTImpl & time_zone)
static inline Int64 execute(Int32 d, const DateLUTImpl & time_zone)
{
return time_zone.fromDayNum(ExtendedDayNum(d));
}

@@ -638,7 +638,7 @@ struct ToDateTime64Transform
inline DateTime64::NativeType execute(Int32 d, const DateLUTImpl & time_zone) const
{
const auto dt = ToDateTimeImpl::execute(d, time_zone);
return execute(dt, time_zone);
return DecimalUtils::decimalFromComponentsWithMultiplier<DateTime64>(dt, 0, scale_multiplier);
}

inline DateTime64::NativeType execute(UInt32 dt, const DateLUTImpl & /*time_zone*/) const
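A small hedged illustration of the Date32 to DateTime64 conversion path that this transform serves (the literal value is arbitrary and the exact output depends on the session time zone):

    SELECT CAST(toDate32('2021-10-01') AS DateTime64(3));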
@@ -40,6 +40,7 @@ public:
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }

bool useDefaultImplementationForConstants() const override { return true; }
bool useDefaultImplementationForNulls() const override { return false; }
ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {0}; }

DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override;
@@ -279,29 +279,39 @@ ReturnType readIntTextImpl(T & x, ReadBuffer & buf)
{
case '+':
{
if (has_sign || has_number)
/// 123+ or +123+, just stop after 123 or +123.
if (has_number)
goto end;

/// No digits read yet, but we already read sign, like ++, -+.
if (has_sign)
{
if constexpr (throw_exception)
throw ParsingException(
"Cannot parse number with multiple sign (+/-) characters or intermediate sign character",
"Cannot parse number with multiple sign (+/-) characters",
ErrorCodes::CANNOT_PARSE_NUMBER);
else
return ReturnType(false);
}

has_sign = true;
break;
}
case '-':
{
if (has_sign || has_number)
if (has_number)
goto end;

if (has_sign)
{
if constexpr (throw_exception)
throw ParsingException(
"Cannot parse number with multiple sign (+/-) characters or intermediate sign character",
"Cannot parse number with multiple sign (+/-) characters",
ErrorCodes::CANNOT_PARSE_NUMBER);
else
return ReturnType(false);
}

if constexpr (is_signed_v<T>)
negative = true;
else
@@ -1895,7 +1895,7 @@ void Context::setSystemZooKeeperLogAfterInitializationIfNeeded()
zk.second->setZooKeeperLog(shared->system_logs->zookeeper_log);
}

void Context::initializeKeeperDispatcher(bool start_async) const
void Context::initializeKeeperDispatcher([[maybe_unused]] bool start_async) const
{
#if USE_NURAFT
std::lock_guard lock(shared->keeper_dispatcher_mutex);
@@ -1,21 +1,23 @@
#include <Interpreters/InterpreterCreateFunctionQuery.h>

#include <stack>

#include <Access/ContextAccess.h>
#include <Parsers/ASTCreateFunctionQuery.h>
#include <Parsers/ASTIdentifier.h>
#include <Interpreters/Context.h>
#include <Interpreters/ExpressionActions.h>
#include <Interpreters/ExpressionAnalyzer.h>
#include <Interpreters/InterpreterCreateFunctionQuery.h>
#include <Interpreters/FunctionNameNormalizer.h>
#include <Interpreters/UserDefinedSQLObjectsLoader.h>
#include <Interpreters/UserDefinedSQLFunctionFactory.h>
#include <stack>

namespace DB
{

namespace ErrorCodes
{
extern const int UNKNOWN_IDENTIFIER;
extern const int CANNOT_CREATE_RECURSIVE_FUNCTION;
extern const int UNSUPPORTED_METHOD;
}

@@ -31,20 +33,32 @@ BlockIO InterpreterCreateFunctionQuery::execute()
if (!create_function_query)
throw Exception(ErrorCodes::UNSUPPORTED_METHOD, "Expected CREATE FUNCTION query");

auto & user_defined_function_factory = UserDefinedSQLFunctionFactory::instance();

auto & function_name = create_function_query->function_name;

bool if_not_exists = create_function_query->if_not_exists;
bool replace = create_function_query->or_replace;

create_function_query->if_not_exists = false;
create_function_query->or_replace = false;

if (if_not_exists && user_defined_function_factory.tryGet(function_name) != nullptr)
return {};

validateFunction(create_function_query->function_core, function_name);

UserDefinedSQLFunctionFactory::instance().registerFunction(function_name, query_ptr);
user_defined_function_factory.registerFunction(function_name, query_ptr, replace);

if (!persist_function)
if (persist_function)
{
try
{
UserDefinedSQLObjectsLoader::instance().storeObject(current_context, UserDefinedSQLObjectType::Function, function_name, *query_ptr);
UserDefinedSQLObjectsLoader::instance().storeObject(current_context, UserDefinedSQLObjectType::Function, function_name, *query_ptr, replace);
}
catch (Exception & exception)
{
UserDefinedSQLFunctionFactory::instance().unregisterFunction(function_name);
user_defined_function_factory.unregisterFunction(function_name);
exception.addMessage(fmt::format("while storing user defined function {} on disk", backQuote(function_name)));
throw;
}

@@ -66,42 +80,9 @@ void InterpreterCreateFunctionQuery::validateFunction(ASTPtr function, const Str
}

ASTPtr function_body = function->as<ASTFunction>()->children.at(0)->children.at(1);
std::unordered_set<String> identifiers_in_body = getIdentifiers(function_body);

for (const auto & identifier : identifiers_in_body)
{
if (!arguments.contains(identifier))
throw Exception(ErrorCodes::UNKNOWN_IDENTIFIER, "Identifier {} does not exist in arguments", backQuote(identifier));
}

validateFunctionRecursiveness(function_body, name);
}

std::unordered_set<String> InterpreterCreateFunctionQuery::getIdentifiers(ASTPtr node)
{
std::unordered_set<String> identifiers;

std::stack<ASTPtr> ast_nodes_to_process;
ast_nodes_to_process.push(node);

while (!ast_nodes_to_process.empty())
{
auto ast_node_to_process = ast_nodes_to_process.top();
ast_nodes_to_process.pop();

for (const auto & child : ast_node_to_process->children)
{
auto identifier_name_opt = tryGetIdentifierName(child);
if (identifier_name_opt)
identifiers.insert(identifier_name_opt.value());

ast_nodes_to_process.push(child);
}
}

return identifiers;
}

void InterpreterCreateFunctionQuery::validateFunctionRecursiveness(ASTPtr node, const String & function_to_create)
{
for (const auto & child : node->children)
@@ -22,7 +22,6 @@ public:

private:
static void validateFunction(ASTPtr function, const String & name);
static std::unordered_set<String> getIdentifiers(ASTPtr node);
static void validateFunctionRecursiveness(ASTPtr node, const String & function_to_create);

ASTPtr query_ptr;
@@ -18,6 +18,11 @@ BlockIO InterpreterDropFunctionQuery::execute()
FunctionNameNormalizer().visit(query_ptr.get());
auto & drop_function_query = query_ptr->as<ASTDropFunctionQuery &>();

auto & user_defined_functions_factory = UserDefinedSQLFunctionFactory::instance();

if (drop_function_query.if_exists && !user_defined_functions_factory.has(drop_function_query.function_name))
return {};

UserDefinedSQLFunctionFactory::instance().unregisterFunction(drop_function_query.function_name);
UserDefinedSQLObjectsLoader::instance().removeObject(current_context, UserDefinedSQLObjectType::Function, drop_function_query.function_name);
@@ -278,7 +278,7 @@ std::unique_ptr<IInterpreter> InterpreterFactory::get(ASTPtr & query, ContextMut
}
else if (query->as<ASTCreateFunctionQuery>())
{
return std::make_unique<InterpreterCreateFunctionQuery>(query, context, false /*is_internal*/);
return std::make_unique<InterpreterCreateFunctionQuery>(query, context, true /*persist_function*/);
}
else if (query->as<ASTDropFunctionQuery>())
{
@@ -19,7 +19,7 @@ UserDefinedSQLFunctionFactory & UserDefinedSQLFunctionFactory::instance()
return result;
}

void UserDefinedSQLFunctionFactory::registerFunction(const String & function_name, ASTPtr create_function_query)
void UserDefinedSQLFunctionFactory::registerFunction(const String & function_name, ASTPtr create_function_query, bool replace)
{
if (FunctionFactory::instance().hasNameOrAlias(function_name))
throw Exception(ErrorCodes::FUNCTION_ALREADY_EXISTS, "The function '{}' already exists", function_name);

@@ -29,11 +29,17 @@ void UserDefinedSQLFunctionFactory::registerFunction(const String & function_nam

std::lock_guard lock(mutex);

auto [_, inserted] = function_name_to_create_query.emplace(function_name, std::move(create_function_query));
auto [it, inserted] = function_name_to_create_query.emplace(function_name, create_function_query);

if (!inserted)
throw Exception(ErrorCodes::FUNCTION_ALREADY_EXISTS,
"The function name '{}' is not unique",
function_name);
{
if (replace)
it->second = std::move(create_function_query);
else
throw Exception(ErrorCodes::FUNCTION_ALREADY_EXISTS,
"The function name '{}' is not unique",
function_name);
}
}

void UserDefinedSQLFunctionFactory::unregisterFunction(const String & function_name)

@@ -77,6 +83,11 @@ ASTPtr UserDefinedSQLFunctionFactory::tryGet(const std::string & function_name)
return it->second;
}

bool UserDefinedSQLFunctionFactory::has(const String & function_name) const
{
return tryGet(function_name) != nullptr;
}

std::vector<std::string> UserDefinedSQLFunctionFactory::getAllRegisteredNames() const
{
std::vector<std::string> registered_names;
@@ -10,19 +10,31 @@
namespace DB
{

/// Factory for SQLUserDefinedFunctions
class UserDefinedSQLFunctionFactory : public IHints<1, UserDefinedSQLFunctionFactory>
{
public:
static UserDefinedSQLFunctionFactory & instance();

void registerFunction(const String & function_name, ASTPtr create_function_query);
/** Register function for function_name in factory for specified create_function_query.
* If replace = true and function with function_name already exists replace it with create_function_query.
* Otherwise throws exception.
*/
void registerFunction(const String & function_name, ASTPtr create_function_query, bool replace);

/// Unregister function for function_name
void unregisterFunction(const String & function_name);

/// Get function create query for function_name. If no function registered with function_name throws exception.
ASTPtr get(const String & function_name) const;

/// Get function create query for function_name. If no function registered with function_name return nullptr.
ASTPtr tryGet(const String & function_name) const;

/// Check if function with function_name registered.
bool has(const String & function_name) const;

/// Get all user defined functions registered names.
std::vector<String> getAllRegisteredNames() const override;

private:
@@ -25,6 +25,7 @@ void UserDefinedSQLFunctionMatcher::visit(ASTPtr & ast, Data &)
return;

auto result = tryToReplaceFunction(*function);

if (result)
ast = result;
}

@@ -83,9 +84,16 @@ ASTPtr UserDefinedSQLFunctionMatcher::tryToReplaceFunction(const ASTFunction & f
if (identifier_name_opt)
{
auto function_argument_it = identifier_name_to_function_argument.find(*identifier_name_opt);
assert(function_argument_it != identifier_name_to_function_argument.end());

if (function_argument_it == identifier_name_to_function_argument.end())
continue;

auto child_alias = child->tryGetAlias();
child = function_argument_it->second->clone();

if (!child_alias.empty())
child->setAlias(child_alias);

continue;
}
@@ -69,7 +69,7 @@ void UserDefinedSQLObjectsLoader::loadUserDefinedObject(ContextPtr context, User
0,
context->getSettingsRef().max_parser_depth);

InterpreterCreateFunctionQuery interpreter(ast, context, true /*is internal*/);
InterpreterCreateFunctionQuery interpreter(ast, context, false /*persist_function*/);
interpreter.execute();
}
}

@@ -111,7 +111,7 @@ void UserDefinedSQLObjectsLoader::loadObjects(ContextPtr context)
}
}

void UserDefinedSQLObjectsLoader::storeObject(ContextPtr context, UserDefinedSQLObjectType object_type, const String & object_name, const IAST & ast)
void UserDefinedSQLObjectsLoader::storeObject(ContextPtr context, UserDefinedSQLObjectType object_type, const String & object_name, const IAST & ast, bool replace)
{
if (unlikely(!enable_persistence))
return;

@@ -127,7 +127,7 @@ void UserDefinedSQLObjectsLoader::storeObject(ContextPtr context, UserDefinedSQL
}
}

if (std::filesystem::exists(file_path))
if (!replace && std::filesystem::exists(file_path))
throw Exception(ErrorCodes::OBJECT_ALREADY_STORED_ON_DISK, "User defined object {} already stored on disk", backQuote(file_path));

LOG_DEBUG(log, "Storing object {} to file {}", backQuote(object_name), file_path);

@@ -135,9 +135,9 @@ void UserDefinedSQLObjectsLoader::storeObject(ContextPtr context, UserDefinedSQL
WriteBufferFromOwnString create_statement_buf;
formatAST(ast, create_statement_buf, false);
writeChar('\n', create_statement_buf);

String create_statement = create_statement_buf.str();
WriteBufferFromFile out(file_path, create_statement.size(), O_WRONLY | O_CREAT | O_EXCL);

WriteBufferFromFile out(file_path, create_statement.size());
writeString(create_statement, out);
out.next();
if (context->getSettingsRef().fsync_metadata)
@@ -21,7 +21,7 @@ public:
UserDefinedSQLObjectsLoader();

void loadObjects(ContextPtr context);
void storeObject(ContextPtr context, UserDefinedSQLObjectType object_type, const String & object_name, const IAST & ast);
void storeObject(ContextPtr context, UserDefinedSQLObjectType object_type, const String & object_name, const IAST & ast, bool replace);
void removeObject(ContextPtr context, UserDefinedSQLObjectType object_type, const String & object_name);

/// For ClickHouse local if path is not set we can disable loader.
@@ -203,6 +203,12 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
return src;
}

if (which_type.isDate32() && src.getType() == Field::Types::Int64)
{
/// We don't need any conversion Int64 is under type of Date32
return src;
}

if (which_type.isDateTime64() && src.getType() == Field::Types::Decimal64)
{
/// Already in needed type.

@@ -210,7 +216,7 @@ Field convertFieldToTypeImpl(const Field & src, const IDataType & type, const ID
}

if (which_type.isDateTime64()
&& (which_from_type.isNativeInt() || which_from_type.isNativeUInt() || which_from_type.isDate() || which_from_type.isDateTime() || which_from_type.isDateTime64()))
&& (which_from_type.isNativeInt() || which_from_type.isNativeUInt() || which_from_type.isDate() || which_from_type.isDate32() || which_from_type.isDateTime() || which_from_type.isDateTime64()))
{
const auto scale = static_cast<const DataTypeDateTime64 &>(type).getScale();
const auto decimal_value = DecimalUtils::decimalFromComponents<DateTime64>(src.reinterpret<Int64>(), 0, scale);
@@ -12,7 +12,18 @@ ASTPtr ASTCreateFunctionQuery::clone() const

void ASTCreateFunctionQuery::formatImpl(const IAST::FormatSettings & settings, IAST::FormatState & state, IAST::FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "CREATE FUNCTION " << (settings.hilite ? hilite_none : "");
settings.ostr << (settings.hilite ? hilite_keyword : "") << "CREATE ";

if (or_replace)
settings.ostr << "OR REPLACE ";

settings.ostr << "FUNCTION ";

if (if_not_exists)
settings.ostr << "IF NOT EXISTS ";

settings.ostr << (settings.hilite ? hilite_none : "");

settings.ostr << (settings.hilite ? hilite_identifier : "") << backQuoteIfNeed(function_name) << (settings.hilite ? hilite_none : "");
settings.ostr << (settings.hilite ? hilite_keyword : "") << " AS " << (settings.hilite ? hilite_none : "");
function_core->formatImpl(settings, state, frame);
@@ -12,6 +12,9 @@ public:
String function_name;
ASTPtr function_core;

bool or_replace = false;
bool if_not_exists = false;

String getID(char) const override { return "CreateFunctionQuery"; }

ASTPtr clone() const override;
@@ -12,7 +12,12 @@ ASTPtr ASTDropFunctionQuery::clone() const

void ASTDropFunctionQuery::formatImpl(const IAST::FormatSettings & settings, IAST::FormatState &, IAST::FormatStateStacked) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "DROP FUNCTION " << (settings.hilite ? hilite_none : "");
settings.ostr << (settings.hilite ? hilite_keyword : "") << "DROP FUNCTION ";

if (if_exists)
settings.ostr << "IF EXISTS ";

settings.ostr << (settings.hilite ? hilite_none : "");
settings.ostr << (settings.hilite ? hilite_identifier : "") << backQuoteIfNeed(function_name) << (settings.hilite ? hilite_none : "");
}
@@ -10,6 +10,8 @@ class ASTDropFunctionQuery : public IAST
public:
String function_name;

bool if_exists = false;

String getID(char) const override { return "DropFunctionQuery"; }

ASTPtr clone() const override;
@@ -1,10 +1,12 @@
#include <Parsers/ParserCreateFunctionQuery.h>

#include <Parsers/ASTCreateFunctionQuery.h>
#include <Parsers/ASTExpressionList.h>
#include <Parsers/ASTIdentifier.h>
#include <Parsers/CommonParsers.h>
#include <Parsers/ExpressionElementParsers.h>
#include <Parsers/ExpressionListParsers.h>
#include <Parsers/ParserCreateFunctionQuery.h>

namespace DB
{

@@ -13,6 +15,8 @@ bool ParserCreateFunctionQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Exp
{
ParserKeyword s_create("CREATE");
ParserKeyword s_function("FUNCTION");
ParserKeyword s_or_replace("OR REPLACE");
ParserKeyword s_if_not_exists("IF NOT EXISTS");
ParserIdentifier function_name_p;
ParserKeyword s_as("AS");
ParserLambdaExpression lambda_p;

@@ -20,12 +24,21 @@ bool ParserCreateFunctionQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Exp
ASTPtr function_name;
ASTPtr function_core;

bool or_replace = false;
bool if_not_exists = false;

if (!s_create.ignore(pos, expected))
return false;

if (s_or_replace.ignore(pos, expected))
or_replace = true;

if (!s_function.ignore(pos, expected))
return false;

if (!or_replace && s_if_not_exists.ignore(pos, expected))
if_not_exists = true;

if (!function_name_p.parse(pos, function_name, expected))
return false;

@@ -40,6 +53,8 @@ bool ParserCreateFunctionQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Exp

create_function_query->function_name = function_name->as<ASTIdentifier &>().name();
create_function_query->function_core = function_core;
create_function_query->or_replace = or_replace;
create_function_query->if_not_exists = if_not_exists;

return true;
}
@@ -11,7 +11,10 @@ bool ParserDropFunctionQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expec
{
ParserKeyword s_drop("DROP");
ParserKeyword s_function("FUNCTION");
ParserKeyword s_if_exists("IF EXISTS");

ParserIdentifier function_name_p;
bool if_exists = false;

ASTPtr function_name;

@@ -21,10 +24,14 @@ bool ParserDropFunctionQuery::parseImpl(IParser::Pos & pos, ASTPtr & node, Expec
if (!s_function.ignore(pos, expected))
return false;

if (s_if_exists.ignore(pos, expected))
if_exists = true;

if (!function_name_p.parse(pos, function_name, expected))
return false;

auto drop_function_query = std::make_shared<ASTDropFunctionQuery>();
drop_function_query->if_exists = if_exists;
node = drop_function_query;

drop_function_query->function_name = function_name->as<ASTIdentifier &>().name();
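Taken together, the parser, AST, and interpreter changes above add IF NOT EXISTS and OR REPLACE support to CREATE FUNCTION and IF EXISTS support to DROP FUNCTION for SQL user defined functions. A hedged sketch of the statements this enables (the function name and body are hypothetical, not taken from the commit):

    CREATE FUNCTION linear_equation AS (x, k, b) -> k * x + b;
    CREATE FUNCTION IF NOT EXISTS linear_equation AS (x, k, b) -> k * x + b;   -- no-op if it already exists
    CREATE OR REPLACE FUNCTION linear_equation AS (x, k, b) -> k * x + 2 * b;  -- overwrites the stored definition
    DROP FUNCTION IF EXISTS linear_equation;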
@@ -38,7 +38,8 @@ const std::unordered_set<std::string_view> keywords
"IN", "KILL", "QUERY", "SYNC", "ASYNC", "TEST", "BETWEEN", "TRUNCATE", "USER", "ROLE",
"PROFILE", "QUOTA", "POLICY", "ROW", "GRANT", "REVOKE", "OPTION", "ADMIN", "EXCEPT", "REPLACE",
"IDENTIFIED", "HOST", "NAME", "READONLY", "WRITABLE", "PERMISSIVE", "FOR", "RESTRICTIVE", "RANDOMIZED",
"INTERVAL", "LIMITS", "ONLY", "TRACKING", "IP", "REGEXP", "ILIKE", "DICTIONARY"
"INTERVAL", "LIMITS", "ONLY", "TRACKING", "IP", "REGEXP", "ILIKE", "DICTIONARY", "OFFSET",
"TRIM", "LTRIM", "RTRIM", "BOTH", "LEADING", "TRAILING"
};

const std::unordered_set<std::string_view> keep_words

@@ -906,7 +907,13 @@ void obfuscateQueries(

/// Write quotes and the obfuscated content inside.
result.write(*token.begin);
obfuscateIdentifier({token.begin + 1, token.size() - 2}, result, obfuscate_map, used_nouns, hash_func);

/// If it is long, just replace it with hash. Long identifiers in queries are usually auto-generated.
if (token.size() > 32)
writeIntText(sipHash64(token.begin + 1, token.size() - 2), result);
else
obfuscateIdentifier({token.begin + 1, token.size() - 2}, result, obfuscate_map, used_nouns, hash_func);

result.write(token.end[-1]);
}
else if (token.type == TokenType::Number)
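The new keywords matter because the obfuscator keeps entries of this set verbatim while renaming other identifiers. As a hedged illustration (table and column names are hypothetical), queries like the following should come out of obfuscation with their TRIM and OFFSET syntax intact:

    SELECT trim(BOTH ' ' FROM comment) FROM feedback LIMIT 10 OFFSET 20;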
@@ -9,6 +9,7 @@
#include <Processors/Sources/NullSource.h>
#include <Processors/Merges/AggregatingSortedTransform.h>
#include <Processors/Merges/CollapsingSortedTransform.h>
#include <Processors/Merges/GraphiteRollupSortedTransform.h>
#include <Processors/Merges/MergingSortedTransform.h>
#include <Processors/Merges/ReplacingSortedTransform.h>
#include <Processors/Merges/SummingSortedTransform.h>

@@ -506,38 +507,39 @@ static void addMergingFinal(
const auto & header = pipe.getHeader();
size_t num_outputs = pipe.numOutputPorts();

auto now = time(nullptr);

auto get_merging_processor = [&]() -> MergingTransformPtr
{
switch (merging_params.mode)
{
case MergeTreeData::MergingParams::Ordinary:
{
return std::make_shared<MergingSortedTransform>(header, num_outputs,
sort_description, max_block_size);
}
sort_description, max_block_size);

case MergeTreeData::MergingParams::Collapsing:
return std::make_shared<CollapsingSortedTransform>(header, num_outputs,
sort_description, merging_params.sign_column, true, max_block_size);
sort_description, merging_params.sign_column, true, max_block_size);

case MergeTreeData::MergingParams::Summing:
return std::make_shared<SummingSortedTransform>(header, num_outputs,
sort_description, merging_params.columns_to_sum, partition_key_columns, max_block_size);
sort_description, merging_params.columns_to_sum, partition_key_columns, max_block_size);

case MergeTreeData::MergingParams::Aggregating:
return std::make_shared<AggregatingSortedTransform>(header, num_outputs,
sort_description, max_block_size);
sort_description, max_block_size);

case MergeTreeData::MergingParams::Replacing:
return std::make_shared<ReplacingSortedTransform>(header, num_outputs,
sort_description, merging_params.version_column, max_block_size);
sort_description, merging_params.version_column, max_block_size);

case MergeTreeData::MergingParams::VersionedCollapsing:
return std::make_shared<VersionedCollapsingTransform>(header, num_outputs,
sort_description, merging_params.sign_column, max_block_size);
sort_description, merging_params.sign_column, max_block_size);

case MergeTreeData::MergingParams::Graphite:
throw Exception("GraphiteMergeTree doesn't support FINAL", ErrorCodes::LOGICAL_ERROR);
return std::make_shared<GraphiteRollupSortedTransform>(header, num_outputs,
sort_description, max_block_size, merging_params.graphite_params, now);
}

__builtin_unreachable();
@@ -4472,16 +4472,6 @@ Block MergeTreeData::getMinMaxCountProjectionBlock(
}

size_t pos = 0;
if (!primary_key_max_column_name.empty())
{
const auto & primary_key_column = *part->index[0];
auto primary_key_column_size = primary_key_column.size();
auto & min_column = assert_cast<ColumnAggregateFunction &>(*minmax_count_columns[pos++]);
auto & max_column = assert_cast<ColumnAggregateFunction &>(*minmax_count_columns[pos++]);
insert(min_column, primary_key_column[0]);
insert(max_column, primary_key_column[primary_key_column_size - 1]);
}

size_t minmax_idx_size = part->minmax_idx->hyperrectangle.size();
for (size_t i = 0; i < minmax_idx_size; ++i)
{

@@ -4492,6 +4482,16 @@ Block MergeTreeData::getMinMaxCountProjectionBlock(
insert(max_column, range.right);
}

if (!primary_key_max_column_name.empty())
{
const auto & primary_key_column = *part->index[0];
auto primary_key_column_size = primary_key_column.size();
auto & min_column = assert_cast<ColumnAggregateFunction &>(*minmax_count_columns[pos++]);
auto & max_column = assert_cast<ColumnAggregateFunction &>(*minmax_count_columns[pos++]);
insert(min_column, primary_key_column[0]);
insert(max_column, primary_key_column[primary_key_column_size - 1]);
}

{
auto & column = assert_cast<ColumnAggregateFunction &>(*minmax_count_columns.back());
auto func = column.getAggregateFunction();
@@ -406,6 +406,7 @@ public:
|| merging_params.mode == MergingParams::Summing
|| merging_params.mode == MergingParams::Aggregating
|| merging_params.mode == MergingParams::Replacing
|| merging_params.mode == MergingParams::Graphite
|| merging_params.mode == MergingParams::VersionedCollapsing;
}
@@ -184,16 +184,16 @@ ProjectionDescription ProjectionDescription::getMinMaxCountProjection(

auto select_query = std::make_shared<ASTProjectionSelectQuery>();
ASTPtr select_expression_list = std::make_shared<ASTExpressionList>();
if (!primary_key_asts.empty())
{
select_expression_list->children.push_back(makeASTFunction("min", primary_key_asts.front()->clone()));
select_expression_list->children.push_back(makeASTFunction("max", primary_key_asts.front()->clone()));
}
for (const auto & column : minmax_columns)
{
select_expression_list->children.push_back(makeASTFunction("min", std::make_shared<ASTIdentifier>(column)));
select_expression_list->children.push_back(makeASTFunction("max", std::make_shared<ASTIdentifier>(column)));
}
if (!primary_key_asts.empty())
{
select_expression_list->children.push_back(makeASTFunction("min", primary_key_asts.front()->clone()));
select_expression_list->children.push_back(makeASTFunction("max", primary_key_asts.front()->clone()));
}
select_expression_list->children.push_back(makeASTFunction("count"));
select_query->setExpression(ASTProjectionSelectQuery::Expression::SELECT, std::move(select_expression_list));

@@ -207,8 +207,14 @@ ProjectionDescription ProjectionDescription::getMinMaxCountProjection(
result.query_ast, query_context, storage, {}, SelectQueryOptions{QueryProcessingStage::WithMergeableState}.modify().ignoreAlias());
result.required_columns = select.getRequiredColumns();
result.sample_block = select.getSampleBlock();
if (!primary_key_asts.empty())
result.primary_key_max_column_name = result.sample_block.getNames()[ProjectionDescription::PRIMARY_KEY_MAX_COLUMN_POS];
/// If we have primary key and it's not in minmax_columns, it will be used as one additional minmax columns.
if (!primary_key_asts.empty() && result.sample_block.columns() == 2 * (minmax_columns.size() + 1) + 1)
{
/// min(p1), max(p1), min(p2), max(p2), ..., min(k1), max(k1), count()
/// ^
/// size - 2
result.primary_key_max_column_name = *(result.sample_block.getNames().cend() - 2);
}
result.type = ProjectionDescription::Type::Aggregate;
StorageInMemoryMetadata metadata;
metadata.setColumns(ColumnsDescription(result.sample_block.getNamesAndTypesList()));
@ -30,10 +30,6 @@ struct ProjectionDescription

static constexpr const char * MINMAX_COUNT_PROJECTION_NAME = "_minmax_count_projection";

/// If minmax_count projection contains a primary key's minmax values. Their positions will be 0 and 1.
static constexpr const size_t PRIMARY_KEY_MIN_COLUMN_POS = 0;
static constexpr const size_t PRIMARY_KEY_MAX_COLUMN_POS = 1;

/// Definition AST of projection
ASTPtr definition_ast;
@ -8,6 +8,7 @@ import os
from pr_info import PRInfo
from github import Github
import shutil
from get_robot_token import get_best_robot_token, get_parameter_from_ssm

NAME = "Push to Dockerhub (actions)"

@ -176,7 +177,7 @@ if __name__ == "__main__":
logging.basicConfig(level=logging.INFO)
repo_path = os.getenv("GITHUB_WORKSPACE", os.path.abspath("../../"))
temp_path = os.path.join(os.getenv("RUNNER_TEMP", os.path.abspath("./temp")), 'docker_images_check')
dockerhub_password = os.getenv('DOCKER_ROBOT_PASSWORD')
dockerhub_password = get_parameter_from_ssm('dockerhub_robot_password')

if os.path.exists(temp_path):
shutil.rmtree(temp_path)
@ -212,17 +213,14 @@ if __name__ == "__main__":
if len(description) >= 140:
description = description[:136] + "..."

aws_secret_key_id = os.getenv("YANDEX_S3_ACCESS_KEY_ID", "")
aws_secret_key = os.getenv("YANDEX_S3_ACCESS_SECRET_KEY", "")

s3_helper = S3Helper('https://storage.yandexcloud.net', aws_access_key_id=aws_secret_key_id, aws_secret_access_key=aws_secret_key)
s3_helper = S3Helper('https://s3.amazonaws.com')

s3_path_prefix = str(pr_info.number) + "/" + pr_info.sha + "/" + NAME.lower().replace(' ', '_')
status, test_results = process_test_results(s3_helper, images_processing_result, s3_path_prefix)

url = upload_results(s3_helper, pr_info.number, pr_info.sha, test_results)

gh = Github(os.getenv("GITHUB_TOKEN"))
gh = Github(get_best_robot_token())
commit = get_commit(gh, pr_info.sha)
commit.create_status(context=NAME, description=description, state=status, target_url=url)
@ -4,6 +4,7 @@ from github import Github
from pr_info import PRInfo
import json
import os
from get_robot_token import get_best_robot_token

NAME = 'Run Check (actions)'

@ -34,7 +35,7 @@ if __name__ == "__main__":
event = json.load(event_file)

pr_info = PRInfo(event, need_orgs=True)
gh = Github(os.getenv("GITHUB_TOKEN"))
gh = Github(get_best_robot_token())
commit = get_commit(gh, pr_info.sha)

url = f"https://github.com/ClickHouse/ClickHouse/actions/runs/{os.getenv('GITHUB_RUN_ID')}"
20
tests/ci/get_robot_token.py
Normal file
@ -0,0 +1,20 @@
#!/usr/bin/env python3
import boto3
from github import Github

def get_parameter_from_ssm(name, decrypt=True, client=None):
    if not client:
        client = boto3.client('ssm', region_name='us-east-1')
    return client.get_parameter(Name=name, WithDecryption=decrypt)['Parameter']['Value']

def get_best_robot_token(token_prefix_env_name="github_robot_token_", total_tokens=4):
    client = boto3.client('ssm', region_name='us-east-1')
    tokens = {}
    for i in range(1, total_tokens + 1):
        token_name = token_prefix_env_name + str(i)
        token = get_parameter_from_ssm(token_name, True, client)
        gh = Github(token)
        rest, _ = gh.rate_limiting
        tokens[token] = rest

    return max(tokens.items(), key=lambda x: x[1])[0]
@ -9,6 +9,7 @@ from s3_helper import S3Helper
from pr_info import PRInfo
import shutil
import sys
from get_robot_token import get_best_robot_token

NAME = 'PVS Studio (actions)'
LICENCE_NAME = 'Free license: ClickHouse, Yandex'
@ -80,10 +81,7 @@ if __name__ == "__main__":
# this check modify repository so copy it to the temp directory
logging.info("Repo copy path %s", repo_path)

aws_secret_key_id = os.getenv("YANDEX_S3_ACCESS_KEY_ID", "")
aws_secret_key = os.getenv("YANDEX_S3_ACCESS_SECRET_KEY", "")

gh = Github(os.getenv("GITHUB_TOKEN"))
gh = Github(get_best_robot_token())

images_path = os.path.join(temp_path, 'changed_images.json')
docker_image = 'clickhouse/pvs-test'
@ -97,10 +95,7 @@ if __name__ == "__main__":

logging.info("Got docker image %s", docker_image)

if not aws_secret_key_id or not aws_secret_key:
logging.info("No secrets, will not upload anything to S3")

s3_helper = S3Helper('https://storage.yandexcloud.net', aws_access_key_id=aws_secret_key_id, aws_secret_access_key=aws_secret_key)
s3_helper = S3Helper('https://s3.amazonaws.com')

licence_key = os.getenv('PVS_STUDIO_KEY')
cmd = f"docker run -u $(id -u ${{USER}}):$(id -g ${{USER}}) --volume={repo_path}:/repo_folder --volume={temp_path}:/test_output -e LICENCE_NAME='{LICENCE_NAME}' -e LICENCE_KEY='{licence_key}' {docker_image}"
@ -6,6 +6,7 @@ from pr_info import PRInfo
import sys
import logging
from github import Github
from get_robot_token import get_best_robot_token

NAME = 'Run Check (actions)'

@ -113,7 +114,7 @@ if __name__ == "__main__":

pr_info = PRInfo(event, need_orgs=True)
can_run, description = should_run_checks_for_pr(pr_info)
gh = Github(os.getenv("GITHUB_TOKEN"))
gh = Github(get_best_robot_token())
commit = get_commit(gh, pr_info.sha)
url = f"https://github.com/ClickHouse/ClickHouse/actions/runs/{os.getenv('GITHUB_RUN_ID')}"
if not can_run:
@ -6,6 +6,7 @@ import boto3
from botocore.exceptions import ClientError, BotoCoreError
from multiprocessing.dummy import Pool
from compress_files import compress_file_fast
from get_robot_token import get_parameter_from_ssm

def _md5(fname):
hash_md5 = hashlib.md5()
@ -27,8 +28,8 @@ def _flatten_list(lst):


class S3Helper(object):
def __init__(self, host, aws_access_key_id, aws_secret_access_key):
self.session = boto3.session.Session(aws_access_key_id=aws_access_key_id, aws_secret_access_key=aws_secret_access_key)
def __init__(self, host):
self.session = boto3.session.Session(region_name='us-east-1')
self.client = self.session.client('s3', endpoint_url=host)

def _upload_file_to_s3(self, bucket_name, file_path, s3_path):
@ -55,7 +56,7 @@ class S3Helper(object):

self.client.upload_file(file_path, bucket_name, s3_path, ExtraArgs=metadata)
logging.info("Upload {} to {}. Meta: {}".format(file_path, s3_path, metadata))
return "https://storage.yandexcloud.net/{bucket}/{path}".format(bucket=bucket_name, path=s3_path)
return "https://s3.amazonaws.com/{bucket}/{path}".format(bucket=bucket_name, path=s3_path)

def upload_test_report_to_s3(self, file_path, s3_path):
return self._upload_file_to_s3('clickhouse-test-reports', file_path, s3_path)
@ -10,6 +10,7 @@ from s3_helper import S3Helper
import time
import json
from pr_info import PRInfo
from get_robot_token import get_best_robot_token

NAME = "Style Check (actions)"

@ -105,10 +106,7 @@ if __name__ == "__main__":
if not os.path.exists(temp_path):
os.makedirs(temp_path)

aws_secret_key_id = os.getenv("YANDEX_S3_ACCESS_KEY_ID", "")
aws_secret_key = os.getenv("YANDEX_S3_ACCESS_SECRET_KEY", "")

gh = Github(os.getenv("GITHUB_TOKEN"))
gh = Github(get_best_robot_token())

images_path = os.path.join(temp_path, 'changed_images.json')
docker_image = 'clickhouse/style-test'
@ -131,10 +129,7 @@ if __name__ == "__main__":
else:
raise Exception(f"Cannot pull dockerhub for image {docker_image}")

if not aws_secret_key_id or not aws_secret_key:
logging.info("No secrets, will not upload anything to S3")

s3_helper = S3Helper('https://storage.yandexcloud.net', aws_access_key_id=aws_secret_key_id, aws_secret_access_key=aws_secret_key)
s3_helper = S3Helper('https://s3.amazonaws.com')

subprocess.check_output(f"docker run -u $(id -u ${{USER}}):$(id -g ${{USER}}) --cap-add=SYS_PTRACE --volume={repo_path}:/ClickHouse --volume={temp_path}:/test_output {docker_image}", shell=True)
state, description, test_results, additional_files = process_result(temp_path)
@ -342,3 +342,347 @@
|
||||
2 sum_2 98950 1 940
|
||||
2 sum_2 108950 1 1040
|
||||
2 sum_2 70170 1 1140
|
||||
1 max_1 9 1 0
|
||||
1 max_1 19 1 10
|
||||
1 max_1 29 1 20
|
||||
1 max_1 39 1 30
|
||||
1 max_1 49 1 40
|
||||
1 max_1 59 1 50
|
||||
1 max_1 69 1 60
|
||||
1 max_1 79 1 70
|
||||
1 max_1 89 1 80
|
||||
1 max_1 99 1 90
|
||||
1 max_1 109 1 100
|
||||
1 max_1 119 1 110
|
||||
1 max_1 129 1 120
|
||||
1 max_1 139 1 130
|
||||
1 max_1 149 1 140
|
||||
1 max_1 159 1 150
|
||||
1 max_1 169 1 160
|
||||
1 max_1 179 1 170
|
||||
1 max_1 189 1 180
|
||||
1 max_1 199 1 190
|
||||
1 max_1 209 1 200
|
||||
1 max_1 219 1 210
|
||||
1 max_1 229 1 220
|
||||
1 max_1 239 1 230
|
||||
1 max_1 249 1 240
|
||||
1 max_1 259 1 250
|
||||
1 max_1 269 1 260
|
||||
1 max_1 279 1 270
|
||||
1 max_1 289 1 280
|
||||
1 max_1 299 1 290
|
||||
1 max_1 39 1 0
|
||||
1 max_1 139 1 40
|
||||
1 max_1 239 1 140
|
||||
1 max_1 339 1 240
|
||||
1 max_1 439 1 340
|
||||
1 max_1 539 1 440
|
||||
1 max_1 639 1 540
|
||||
1 max_1 739 1 640
|
||||
1 max_1 839 1 740
|
||||
1 max_1 939 1 840
|
||||
1 max_1 1039 1 940
|
||||
1 max_1 1139 1 1040
|
||||
1 max_1 1199 1 1140
|
||||
1 max_2 9 1 0
|
||||
1 max_2 19 1 10
|
||||
1 max_2 29 1 20
|
||||
1 max_2 39 1 30
|
||||
1 max_2 49 1 40
|
||||
1 max_2 59 1 50
|
||||
1 max_2 69 1 60
|
||||
1 max_2 79 1 70
|
||||
1 max_2 89 1 80
|
||||
1 max_2 99 1 90
|
||||
1 max_2 109 1 100
|
||||
1 max_2 119 1 110
|
||||
1 max_2 129 1 120
|
||||
1 max_2 139 1 130
|
||||
1 max_2 149 1 140
|
||||
1 max_2 159 1 150
|
||||
1 max_2 169 1 160
|
||||
1 max_2 179 1 170
|
||||
1 max_2 189 1 180
|
||||
1 max_2 199 1 190
|
||||
1 max_2 209 1 200
|
||||
1 max_2 219 1 210
|
||||
1 max_2 229 1 220
|
||||
1 max_2 239 1 230
|
||||
1 max_2 249 1 240
|
||||
1 max_2 259 1 250
|
||||
1 max_2 269 1 260
|
||||
1 max_2 279 1 270
|
||||
1 max_2 289 1 280
|
||||
1 max_2 299 1 290
|
||||
1 max_2 39 1 0
|
||||
1 max_2 139 1 40
|
||||
1 max_2 239 1 140
|
||||
1 max_2 339 1 240
|
||||
1 max_2 439 1 340
|
||||
1 max_2 539 1 440
|
||||
1 max_2 639 1 540
|
||||
1 max_2 739 1 640
|
||||
1 max_2 839 1 740
|
||||
1 max_2 939 1 840
|
||||
1 max_2 1039 1 940
|
||||
1 max_2 1139 1 1040
|
||||
1 max_2 1199 1 1140
|
||||
1 sum_1 45 1 0
|
||||
1 sum_1 145 1 10
|
||||
1 sum_1 245 1 20
|
||||
1 sum_1 345 1 30
|
||||
1 sum_1 445 1 40
|
||||
1 sum_1 545 1 50
|
||||
1 sum_1 645 1 60
|
||||
1 sum_1 745 1 70
|
||||
1 sum_1 845 1 80
|
||||
1 sum_1 945 1 90
|
||||
1 sum_1 1045 1 100
|
||||
1 sum_1 1145 1 110
|
||||
1 sum_1 1245 1 120
|
||||
1 sum_1 1345 1 130
|
||||
1 sum_1 1445 1 140
|
||||
1 sum_1 1545 1 150
|
||||
1 sum_1 1645 1 160
|
||||
1 sum_1 1745 1 170
|
||||
1 sum_1 1845 1 180
|
||||
1 sum_1 1945 1 190
|
||||
1 sum_1 2045 1 200
|
||||
1 sum_1 2145 1 210
|
||||
1 sum_1 2245 1 220
|
||||
1 sum_1 2345 1 230
|
||||
1 sum_1 2445 1 240
|
||||
1 sum_1 2545 1 250
|
||||
1 sum_1 2645 1 260
|
||||
1 sum_1 2745 1 270
|
||||
1 sum_1 2845 1 280
|
||||
1 sum_1 2945 1 290
|
||||
1 sum_1 780 1 0
|
||||
1 sum_1 8950 1 40
|
||||
1 sum_1 18950 1 140
|
||||
1 sum_1 28950 1 240
|
||||
1 sum_1 38950 1 340
|
||||
1 sum_1 48950 1 440
|
||||
1 sum_1 58950 1 540
|
||||
1 sum_1 68950 1 640
|
||||
1 sum_1 78950 1 740
|
||||
1 sum_1 88950 1 840
|
||||
1 sum_1 98950 1 940
|
||||
1 sum_1 108950 1 1040
|
||||
1 sum_1 70170 1 1140
|
||||
1 sum_2 45 1 0
|
||||
1 sum_2 145 1 10
|
||||
1 sum_2 245 1 20
|
||||
1 sum_2 345 1 30
|
||||
1 sum_2 445 1 40
|
||||
1 sum_2 545 1 50
|
||||
1 sum_2 645 1 60
|
||||
1 sum_2 745 1 70
|
||||
1 sum_2 845 1 80
|
||||
1 sum_2 945 1 90
|
||||
1 sum_2 1045 1 100
|
||||
1 sum_2 1145 1 110
|
||||
1 sum_2 1245 1 120
|
||||
1 sum_2 1345 1 130
|
||||
1 sum_2 1445 1 140
|
||||
1 sum_2 1545 1 150
|
||||
1 sum_2 1645 1 160
|
||||
1 sum_2 1745 1 170
|
||||
1 sum_2 1845 1 180
|
||||
1 sum_2 1945 1 190
|
||||
1 sum_2 2045 1 200
|
||||
1 sum_2 2145 1 210
|
||||
1 sum_2 2245 1 220
|
||||
1 sum_2 2345 1 230
|
||||
1 sum_2 2445 1 240
|
||||
1 sum_2 2545 1 250
|
||||
1 sum_2 2645 1 260
|
||||
1 sum_2 2745 1 270
|
||||
1 sum_2 2845 1 280
|
||||
1 sum_2 2945 1 290
|
||||
1 sum_2 780 1 0
|
||||
1 sum_2 8950 1 40
|
||||
1 sum_2 18950 1 140
|
||||
1 sum_2 28950 1 240
|
||||
1 sum_2 38950 1 340
|
||||
1 sum_2 48950 1 440
|
||||
1 sum_2 58950 1 540
|
||||
1 sum_2 68950 1 640
|
||||
1 sum_2 78950 1 740
|
||||
1 sum_2 88950 1 840
|
||||
1 sum_2 98950 1 940
|
||||
1 sum_2 108950 1 1040
|
||||
1 sum_2 70170 1 1140
|
||||
2 max_1 9 1 0
|
||||
2 max_1 19 1 10
|
||||
2 max_1 29 1 20
|
||||
2 max_1 39 1 30
|
||||
2 max_1 49 1 40
|
||||
2 max_1 59 1 50
|
||||
2 max_1 69 1 60
|
||||
2 max_1 79 1 70
|
||||
2 max_1 89 1 80
|
||||
2 max_1 99 1 90
|
||||
2 max_1 109 1 100
|
||||
2 max_1 119 1 110
|
||||
2 max_1 129 1 120
|
||||
2 max_1 139 1 130
|
||||
2 max_1 149 1 140
|
||||
2 max_1 159 1 150
|
||||
2 max_1 169 1 160
|
||||
2 max_1 179 1 170
|
||||
2 max_1 189 1 180
|
||||
2 max_1 199 1 190
|
||||
2 max_1 209 1 200
|
||||
2 max_1 219 1 210
|
||||
2 max_1 229 1 220
|
||||
2 max_1 239 1 230
|
||||
2 max_1 249 1 240
|
||||
2 max_1 259 1 250
|
||||
2 max_1 269 1 260
|
||||
2 max_1 279 1 270
|
||||
2 max_1 289 1 280
|
||||
2 max_1 299 1 290
|
||||
2 max_1 39 1 0
|
||||
2 max_1 139 1 40
|
||||
2 max_1 239 1 140
|
||||
2 max_1 339 1 240
|
||||
2 max_1 439 1 340
|
||||
2 max_1 539 1 440
|
||||
2 max_1 639 1 540
|
||||
2 max_1 739 1 640
|
||||
2 max_1 839 1 740
|
||||
2 max_1 939 1 840
|
||||
2 max_1 1039 1 940
|
||||
2 max_1 1139 1 1040
|
||||
2 max_1 1199 1 1140
|
||||
2 max_2 9 1 0
|
||||
2 max_2 19 1 10
|
||||
2 max_2 29 1 20
|
||||
2 max_2 39 1 30
|
||||
2 max_2 49 1 40
|
||||
2 max_2 59 1 50
|
||||
2 max_2 69 1 60
|
||||
2 max_2 79 1 70
|
||||
2 max_2 89 1 80
|
||||
2 max_2 99 1 90
|
||||
2 max_2 109 1 100
|
||||
2 max_2 119 1 110
|
||||
2 max_2 129 1 120
|
||||
2 max_2 139 1 130
|
||||
2 max_2 149 1 140
|
||||
2 max_2 159 1 150
|
||||
2 max_2 169 1 160
|
||||
2 max_2 179 1 170
|
||||
2 max_2 189 1 180
|
||||
2 max_2 199 1 190
|
||||
2 max_2 209 1 200
|
||||
2 max_2 219 1 210
|
||||
2 max_2 229 1 220
|
||||
2 max_2 239 1 230
|
||||
2 max_2 249 1 240
|
||||
2 max_2 259 1 250
|
||||
2 max_2 269 1 260
|
||||
2 max_2 279 1 270
|
||||
2 max_2 289 1 280
|
||||
2 max_2 299 1 290
|
||||
2 max_2 39 1 0
|
||||
2 max_2 139 1 40
|
||||
2 max_2 239 1 140
|
||||
2 max_2 339 1 240
|
||||
2 max_2 439 1 340
|
||||
2 max_2 539 1 440
|
||||
2 max_2 639 1 540
|
||||
2 max_2 739 1 640
|
||||
2 max_2 839 1 740
|
||||
2 max_2 939 1 840
|
||||
2 max_2 1039 1 940
|
||||
2 max_2 1139 1 1040
|
||||
2 max_2 1199 1 1140
|
||||
2 sum_1 45 1 0
|
||||
2 sum_1 145 1 10
|
||||
2 sum_1 245 1 20
|
||||
2 sum_1 345 1 30
|
||||
2 sum_1 445 1 40
|
||||
2 sum_1 545 1 50
|
||||
2 sum_1 645 1 60
|
||||
2 sum_1 745 1 70
|
||||
2 sum_1 845 1 80
|
||||
2 sum_1 945 1 90
|
||||
2 sum_1 1045 1 100
|
||||
2 sum_1 1145 1 110
|
||||
2 sum_1 1245 1 120
|
||||
2 sum_1 1345 1 130
|
||||
2 sum_1 1445 1 140
|
||||
2 sum_1 1545 1 150
|
||||
2 sum_1 1645 1 160
|
||||
2 sum_1 1745 1 170
|
||||
2 sum_1 1845 1 180
|
||||
2 sum_1 1945 1 190
|
||||
2 sum_1 2045 1 200
|
||||
2 sum_1 2145 1 210
|
||||
2 sum_1 2245 1 220
|
||||
2 sum_1 2345 1 230
|
||||
2 sum_1 2445 1 240
|
||||
2 sum_1 2545 1 250
|
||||
2 sum_1 2645 1 260
|
||||
2 sum_1 2745 1 270
|
||||
2 sum_1 2845 1 280
|
||||
2 sum_1 2945 1 290
|
||||
2 sum_1 780 1 0
|
||||
2 sum_1 8950 1 40
|
||||
2 sum_1 18950 1 140
|
||||
2 sum_1 28950 1 240
|
||||
2 sum_1 38950 1 340
|
||||
2 sum_1 48950 1 440
|
||||
2 sum_1 58950 1 540
|
||||
2 sum_1 68950 1 640
|
||||
2 sum_1 78950 1 740
|
||||
2 sum_1 88950 1 840
|
||||
2 sum_1 98950 1 940
|
||||
2 sum_1 108950 1 1040
|
||||
2 sum_1 70170 1 1140
|
||||
2 sum_2 45 1 0
|
||||
2 sum_2 145 1 10
|
||||
2 sum_2 245 1 20
|
||||
2 sum_2 345 1 30
|
||||
2 sum_2 445 1 40
|
||||
2 sum_2 545 1 50
|
||||
2 sum_2 645 1 60
|
||||
2 sum_2 745 1 70
|
||||
2 sum_2 845 1 80
|
||||
2 sum_2 945 1 90
|
||||
2 sum_2 1045 1 100
|
||||
2 sum_2 1145 1 110
|
||||
2 sum_2 1245 1 120
|
||||
2 sum_2 1345 1 130
|
||||
2 sum_2 1445 1 140
|
||||
2 sum_2 1545 1 150
|
||||
2 sum_2 1645 1 160
|
||||
2 sum_2 1745 1 170
|
||||
2 sum_2 1845 1 180
|
||||
2 sum_2 1945 1 190
|
||||
2 sum_2 2045 1 200
|
||||
2 sum_2 2145 1 210
|
||||
2 sum_2 2245 1 220
|
||||
2 sum_2 2345 1 230
|
||||
2 sum_2 2445 1 240
|
||||
2 sum_2 2545 1 250
|
||||
2 sum_2 2645 1 260
|
||||
2 sum_2 2745 1 270
|
||||
2 sum_2 2845 1 280
|
||||
2 sum_2 2945 1 290
|
||||
2 sum_2 780 1 0
|
||||
2 sum_2 8950 1 40
|
||||
2 sum_2 18950 1 140
|
||||
2 sum_2 28950 1 240
|
||||
2 sum_2 38950 1 340
|
||||
2 sum_2 48950 1 440
|
||||
2 sum_2 58950 1 540
|
||||
2 sum_2 68950 1 640
|
||||
2 sum_2 78950 1 740
|
||||
2 sum_2 88950 1 840
|
||||
2 sum_2 98950 1 940
|
||||
2 sum_2 108950 1 1040
|
||||
2 sum_2 70170 1 1140
|
||||
|
@ -32,6 +32,8 @@ WITH dates AS
select 1, 'max_2', older_date - number * 60 - 30, number, 1, number from dates, numbers(1200) union all
select 2, 'max_2', older_date - number * 60 - 30, number, 1, number from dates, numbers(1200);

select key, Path, Value, Version, col from test_graphite final order by key, Path, Time desc;

optimize table test_graphite final;

select key, Path, Value, Version, col from test_graphite order by key, Path, Time desc;
@ -1,4 +1,4 @@
SELECT '-1E9-1E9-1E9-1E9' AS x, toDecimal32(x, 0); -- { serverError 72 }
SELECT '-1E9-1E9-1E9-1E9' AS x, toDecimal32(x, 0); -- { serverError 69 }
SELECT '-1E9' AS x, toDecimal32(x, 0); -- { serverError 69 }
SELECT '1E-9' AS x, toDecimal32(x, 0);
SELECT '1E-8' AS x, toDecimal32(x, 0);
@ -8,3 +8,4 @@
0
0	9999
0	9999
3
@ -43,3 +43,9 @@ select min(j), max(j) from has_final_mark;

set max_rows_to_read = 5001; -- one normal part 5000 + one minmax_count_projection part 1
select min(j), max(j) from mixed_final_mark;

-- The first primary expr is the same of some partition column
drop table if exists t;
create table t (server_date Date, something String) engine MergeTree partition by (toYYYYMM(server_date), server_date) order by (server_date, something);
insert into t values ('2019-01-01', 'test1'), ('2019-02-01', 'test2'), ('2019-03-01', 'test3');
select count() from t;
@ -4,7 +4,6 @@ CREATE FUNCTION 01856_test_function_0 AS (a, b, c) -> a * b * c;
SELECT 01856_test_function_0(2, 3, 4);
SELECT isConstant(01856_test_function_0(1, 2, 3));
DROP FUNCTION 01856_test_function_0;
CREATE FUNCTION 01856_test_function_1 AS (a, b) -> a || b || c; --{serverError 47}
CREATE FUNCTION 01856_test_function_1 AS (a, b) -> 01856_test_function_1(a, b) + 01856_test_function_1(a, b); --{serverError 611}
CREATE FUNCTION cast AS a -> a + 1; --{serverError 609}
CREATE FUNCTION sum AS (a, b) -> a + b; --{serverError 609}
@ -3,8 +3,8 @@ select toInt64('+-1'); -- { serverError 72; }
select toInt64('++1'); -- { serverError 72; }
select toInt64('++'); -- { serverError 72; }
select toInt64('+'); -- { serverError 72; }
select toInt64('1+1'); -- { serverError 72; }
select toInt64('1-1'); -- { serverError 72; }
select toInt64('1+1'); -- { serverError 6; }
select toInt64('1-1'); -- { serverError 6; }
select toInt64(''); -- { serverError 32; }
select toInt64('1');
select toInt64('-1');
@ -1,46 +1,46 @@
|
||||
-- Tags: no-fasttest
|
||||
|
||||
SELECT '--JSON_VALUE--';
|
||||
SELECT JSON_VALUE('$', '{"hello":1}'); -- root is a complex object => default value (empty string)
|
||||
SELECT JSON_VALUE('$.hello', '{"hello":1}');
|
||||
SELECT JSON_VALUE('$.hello', '{"hello":1.2}');
|
||||
SELECT JSON_VALUE('$.hello', '{"hello":true}');
|
||||
SELECT JSON_VALUE('$.hello', '{"hello":"world"}');
|
||||
SELECT JSON_VALUE('$.hello', '{"hello":null}');
|
||||
SELECT JSON_VALUE('$.hello', '{"hello":["world","world2"]}');
|
||||
SELECT JSON_VALUE('$.hello', '{"hello":{"world":"!"}}');
|
||||
SELECT JSON_VALUE('$.hello', '{hello:world}'); -- invalid json => default value (empty string)
|
||||
SELECT JSON_VALUE('$.hello', '');
|
||||
SELECT JSON_VALUE('{"hello":1}', '$'); -- root is a complex object => default value (empty string)
|
||||
SELECT JSON_VALUE('{"hello":1}', '$.hello');
|
||||
SELECT JSON_VALUE('{"hello":1.2}', '$.hello');
|
||||
SELECT JSON_VALUE('{"hello":true}', '$.hello');
|
||||
SELECT JSON_VALUE('{"hello":"world"}', '$.hello');
|
||||
SELECT JSON_VALUE('{"hello":null}', '$.hello');
|
||||
SELECT JSON_VALUE('{"hello":["world","world2"]}', '$.hello');
|
||||
SELECT JSON_VALUE('{"hello":{"world":"!"}}', '$.hello');
|
||||
SELECT JSON_VALUE('{hello:world}', '$.hello'); -- invalid json => default value (empty string)
|
||||
SELECT JSON_VALUE('', '$.hello');
|
||||
|
||||
SELECT '--JSON_QUERY--';
|
||||
SELECT JSON_QUERY('$', '{"hello":1}');
|
||||
SELECT JSON_QUERY('$.hello', '{"hello":1}');
|
||||
SELECT JSON_QUERY('$.hello', '{"hello":1.2}');
|
||||
SELECT JSON_QUERY('$.hello', '{"hello":true}');
|
||||
SELECT JSON_QUERY('$.hello', '{"hello":"world"}');
|
||||
SELECT JSON_QUERY('$.hello', '{"hello":null}');
|
||||
SELECT JSON_QUERY('$.hello', '{"hello":["world","world2"]}');
|
||||
SELECT JSON_QUERY('$.hello', '{"hello":{"world":"!"}}');
|
||||
SELECT JSON_QUERY('$.hello', '{hello:{"world":"!"}}}'); -- invalid json => default value (empty string)
|
||||
SELECT JSON_QUERY('$.hello', '');
|
||||
SELECT JSON_QUERY('$.array[*][0 to 2, 4]', '{"array":[[0, 1, 2, 3, 4, 5], [0, -1, -2, -3, -4, -5]]}');
|
||||
SELECT JSON_QUERY('{"hello":1}', '$');
|
||||
SELECT JSON_QUERY('{"hello":1}', '$.hello');
|
||||
SELECT JSON_QUERY('{"hello":1.2}', '$.hello');
|
||||
SELECT JSON_QUERY('{"hello":true}', '$.hello');
|
||||
SELECT JSON_QUERY('{"hello":"world"}', '$.hello');
|
||||
SELECT JSON_QUERY('{"hello":null}', '$.hello');
|
||||
SELECT JSON_QUERY('{"hello":["world","world2"]}', '$.hello');
|
||||
SELECT JSON_QUERY('{"hello":{"world":"!"}}', '$.hello');
|
||||
SELECT JSON_QUERY( '{hello:{"world":"!"}}}', '$.hello'); -- invalid json => default value (empty string)
|
||||
SELECT JSON_QUERY('', '$.hello');
|
||||
SELECT JSON_QUERY('{"array":[[0, 1, 2, 3, 4, 5], [0, -1, -2, -3, -4, -5]]}', '$.array[*][0 to 2, 4]');
|
||||
|
||||
SELECT '--JSON_EXISTS--';
|
||||
SELECT JSON_EXISTS('$', '{"hello":1}');
|
||||
SELECT JSON_EXISTS('$', '');
|
||||
SELECT JSON_EXISTS('$', '{}');
|
||||
SELECT JSON_EXISTS('$.hello', '{"hello":1}');
|
||||
SELECT JSON_EXISTS('$.world', '{"hello":1,"world":2}');
|
||||
SELECT JSON_EXISTS('$.world', '{"hello":{"world":1}}');
|
||||
SELECT JSON_EXISTS('$.hello.world', '{"hello":{"world":1}}');
|
||||
SELECT JSON_EXISTS('$.hello', '{hello:world}'); -- invalid json => default value (zero integer)
|
||||
SELECT JSON_EXISTS('$.hello', '');
|
||||
SELECT JSON_EXISTS('$.hello[*]', '{"hello":["world"]}');
|
||||
SELECT JSON_EXISTS('$.hello[0]', '{"hello":["world"]}');
|
||||
SELECT JSON_EXISTS('$.hello[1]', '{"hello":["world"]}');
|
||||
SELECT JSON_EXISTS('$.a[*].b', '{"a":[{"b":1},{"c":2}]}');
|
||||
SELECT JSON_EXISTS('$.a[*].f', '{"a":[{"b":1},{"c":2}]}');
|
||||
SELECT JSON_EXISTS('$.a[*][0].h', '{"a":[[{"b":1}, {"g":1}],[{"h":1},{"y":1}]]}');
|
||||
SELECT JSON_EXISTS('{"hello":1}', '$');
|
||||
SELECT JSON_EXISTS('', '$');
|
||||
SELECT JSON_EXISTS('{}', '$');
|
||||
SELECT JSON_EXISTS('{"hello":1}', '$.hello');
|
||||
SELECT JSON_EXISTS('{"hello":1,"world":2}', '$.world');
|
||||
SELECT JSON_EXISTS('{"hello":{"world":1}}', '$.world');
|
||||
SELECT JSON_EXISTS('{"hello":{"world":1}}', '$.hello.world');
|
||||
SELECT JSON_EXISTS('{hello:world}', '$.hello'); -- invalid json => default value (zero integer)
|
||||
SELECT JSON_EXISTS('', '$.hello');
|
||||
SELECT JSON_EXISTS('{"hello":["world"]}', '$.hello[*]');
|
||||
SELECT JSON_EXISTS('{"hello":["world"]}', '$.hello[0]');
|
||||
SELECT JSON_EXISTS('{"hello":["world"]}', '$.hello[1]');
|
||||
SELECT JSON_EXISTS('{"a":[{"b":1},{"c":2}]}', '$.a[*].b');
|
||||
SELECT JSON_EXISTS('{"a":[{"b":1},{"c":2}]}', '$.a[*].f');
|
||||
SELECT JSON_EXISTS('{"a":[[{"b":1}, {"g":1}],[{"h":1},{"y":1}]]}', '$.a[*][0].h');
|
||||
|
||||
SELECT '--MANY ROWS--';
|
||||
DROP TABLE IF EXISTS 01889_sql_json;
|
||||
@ -48,5 +48,5 @@ CREATE TABLE 01889_sql_json (id UInt8, json String) ENGINE = MergeTree ORDER BY
|
||||
INSERT INTO 01889_sql_json(id, json) VALUES(0, '{"name":"Ivan","surname":"Ivanov","friends":["Vasily","Kostya","Artyom"]}');
|
||||
INSERT INTO 01889_sql_json(id, json) VALUES(1, '{"name":"Katya","surname":"Baltica","friends":["Tihon","Ernest","Innokentiy"]}');
|
||||
INSERT INTO 01889_sql_json(id, json) VALUES(2, '{"name":"Vitali","surname":"Brown","friends":["Katya","Anatoliy","Ivan","Oleg"]}');
|
||||
SELECT id, JSON_QUERY('$.friends[0 to 2]', json) FROM 01889_sql_json ORDER BY id;
|
||||
SELECT id, JSON_QUERY(json, '$.friends[0 to 2]') FROM 01889_sql_json ORDER BY id;
|
||||
DROP TABLE 01889_sql_json;
|
||||
|
@ -0,0 +1,6 @@
1
AggregateFunction(uniqExact, Nullable(String))
1
AggregateFunction(uniqExact, Nullable(UInt8))
1
1
@ -0,0 +1,8 @@
SELECT finalizeAggregation(initializeAggregation('uniqExactState', toNullable('foo')));
SELECT toTypeName(initializeAggregation('uniqExactState', toNullable('foo')));

SELECT finalizeAggregation(initializeAggregation('uniqExactState', toNullable(123)));
SELECT toTypeName(initializeAggregation('uniqExactState', toNullable(123)));

SELECT initializeAggregation('uniqExactState', toNullable('foo')) = arrayReduce('uniqExactState', [toNullable('foo')]);
SELECT initializeAggregation('uniqExactState', toNullable(123)) = arrayReduce('uniqExactState', [toNullable(123)]);
19
tests/queries/0_stateless/02098_date32_comparison.reference
Normal file
@ -0,0 +1,19 @@
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
1
19
tests/queries/0_stateless/02098_date32_comparison.sql
Normal file
@ -0,0 +1,19 @@
select toDate32('1990-01-01') = toDate('1990-01-01');
select toDate('1991-01-02') > toDate32('1990-01-01');
select toDate32('1925-01-01') <= toDate('1990-01-01');
select toDate('1991-01-01') < toDate32('2283-11-11');
select toDate32('1990-01-01') = toDateTime('1990-01-01');
select toDateTime('1991-01-02') > toDate32('1990-01-01');
select toDate32('1925-01-01') <= toDateTime('1990-01-01');
select toDateTime('1991-01-01') < toDate32('2283-11-11');
select toDate32('1990-01-01') = toDateTime64('1990-01-01',2);
select toDateTime64('1991-01-02',2) > toDate32('1990-01-01');
select toDate32('1925-01-01') = toDateTime64('1925-01-01',2);
select toDateTime64('1925-01-02',2) > toDate32('1925-01-01');
select toDate32('2283-11-11') = toDateTime64('2283-11-11',2);
select toDateTime64('2283-11-11',2) > toDate32('1925-01-01');
select toDate32('1990-01-01') = '1990-01-01';
select '1991-01-02' > toDate32('1990-01-01');
select toDate32('1925-01-01') = '1925-01-01';
select '2283-11-11' >= toDate32('2283-11-10');
select '2283-11-11' > toDate32('1925-01-01');
@ -0,0 +1 @@
|
||||
8
|
@ -0,0 +1,4 @@
|
||||
-- Tags: no-parallel
|
||||
CREATE FUNCTION 02098_alias_function AS x -> (((x * 2) AS x_doubled) + x_doubled);
|
||||
SELECT 02098_alias_function(2);
|
||||
DROP FUNCTION 02098_alias_function;
|
@ -0,0 +1 @@
|
||||
[2,4,6]
|
@ -0,0 +1,4 @@
|
||||
-- Tags: no-parallel
|
||||
CREATE FUNCTION 02099_lambda_function AS x -> arrayMap(array_element -> array_element * 2, x);
|
||||
SELECT 02099_lambda_function([1,2,3]);
|
||||
DROP FUNCTION 02099_lambda_function;
|
@ -0,0 +1,4 @@
|
||||
CREATE FUNCTION `02101_test_function` AS x -> (x + 1)
|
||||
2
|
||||
CREATE FUNCTION `02101_test_function` AS x -> (x + 2)
|
||||
3
|
@ -0,0 +1,13 @@
|
||||
-- Tags: no-parallel
|
||||
|
||||
CREATE OR REPLACE FUNCTION 02101_test_function AS x -> x + 1;
|
||||
|
||||
SELECT create_query FROM system.functions WHERE name = '02101_test_function';
|
||||
SELECT 02101_test_function(1);
|
||||
|
||||
CREATE OR REPLACE FUNCTION 02101_test_function AS x -> x + 2;
|
||||
|
||||
SELECT create_query FROM system.functions WHERE name = '02101_test_function';
|
||||
SELECT 02101_test_function(1);
|
||||
|
||||
DROP FUNCTION 02101_test_function;
|
@ -0,0 +1 @@
|
||||
2
|
@ -0,0 +1,9 @@
|
||||
-- Tags: no-parallel
|
||||
|
||||
CREATE FUNCTION 02101_test_function AS x -> x + 1;
|
||||
|
||||
SELECT 02101_test_function(1);
|
||||
|
||||
DROP FUNCTION 02101_test_function;
|
||||
DROP FUNCTION 02101_test_function; --{serverError 46}
|
||||
DROP FUNCTION IF EXISTS 02101_test_function;
|
@ -0,0 +1 @@
|
||||
2
|
@ -0,0 +1,8 @@
|
||||
-- Tags: no-parallel
|
||||
|
||||
CREATE FUNCTION IF NOT EXISTS 02102_test_function AS x -> x + 1;
|
||||
SELECT 02102_test_function(1);
|
||||
|
||||
CREATE FUNCTION 02102_test_function AS x -> x + 1; --{serverError 609}
|
||||
CREATE FUNCTION IF NOT EXISTS 02102_test_function AS x -> x + 1;
|
||||
DROP FUNCTION 02102_test_function;
|
@ -0,0 +1 @@
|
||||
1 42
|
7
tests/queries/0_stateless/03000_clickhouse_local_columns_description.sh
Executable file
@ -0,0 +1,7 @@
#!/usr/bin/env bash

CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh

${CLICKHOUSE_LOCAL} --query "create table t (n int, m int default 42) engine=Memory;insert into t values (1, NULL);select * from t"
@ -15,6 +15,8 @@ Let's highlight some of these exciting new capabilities in 21.10:
* Instead of logging every query (which can be a lot of logs!), you can now log a random sample of your queries. The number of queries logged is determined by specifying a probability between 0.0 (no queries logged) and 1.0 (all queries logged) with the new `log_queries_probability` setting.
* Positional arguments are now available in your GROUP BY, ORDER BY and LIMIT BY clauses. For example, `SELECT foo, bar, baz FROM my_table ORDER BY 2,3` orders the results by the bar and baz columns (no need to spell the column names out twice!). A short example of both features is shown below.
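A minimal sketch of the two features above, assuming a 21.10 or later server; `my_table` and its columns are invented for the example, and positional arguments may additionally require the `enable_positional_arguments` setting depending on your version:

-- Log roughly 10% of queries to the query log instead of all of them.
SET log_queries_probability = 0.1;

-- Positional arguments: sort by the 2nd and 3rd selected columns (bar, baz).
SET enable_positional_arguments = 1;
SELECT foo, bar, baz FROM my_table ORDER BY 2, 3;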
We're also thrilled to announce some new free training available to you in our Learn ClickHouse portal: [https://clickhouse.com/learn/lessons/whatsnew-clickhouse-21.10/](https://clickhouse.com/learn/lessons/whatsnew-clickhouse-21.10/)

We're always listening for new ideas, and we're happy to welcome new contributors to the ClickHouse project. Whether you're submitting code or improving our documentation and examples, please get involved by sending us a pull request or submitting an issue. Our beginner developer contribution guide will help you get started: [https://clickhouse.com/docs/en/development/developer-instruction/](https://clickhouse.com/docs/en/development/developer-instruction/)

@ -22,6 +24,6 @@ We're always listening for new ideas, and we're happy to welcome new contributor

Release 21.10

Release Date: 2021-10-21
Release Date: 2021-10-17

Release Notes: [21.10](https://github.com/ClickHouse/ClickHouse/blob/master/CHANGELOG.md)
File diff suppressed because one or more lines are too long
@ -1,4 +1,4 @@
.page {
    overflow: hidden;
    width: 100vw;
    width: 100%;
}