Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-22 23:52:03 +00:00
Merge branch 'master' into Azure_write_buffer

Commit a41c005a6e
@@ -405,7 +405,7 @@ Returns the name of the current user. In case of a distributed query, the name o

SELECT currentUser();
```

Alias: `user()`, `USER()`.
Aliases: `user()`, `USER()`, `current_user()`. Aliases are case insensitive.

**Returned values**
@@ -35,13 +35,13 @@ ClickHouse is a column-oriented database management

The examples only show the order in which the data is arranged.
That is, values from different columns are stored separately, while the data of a single column is stored together.

Examples of column-oriented DBMSs: Vertica, Paraccel (Actian Matrix, Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise, Actian Vector), LucidDB, SAP HANA, Google Dremel, Google PowerDrill, Druid, kdb+.
Examples of column-oriented DBMSs: Vertica, Paraccel (Actian Matrix, Amazon Redshift), Sybase IQ, Exasol, Infobright, InfiniDB, MonetDB (VectorWise, Actian Vector), LucidDB, SAP HANA and other trash, Google Dremel, Google PowerDrill, Druid, kdb+.
{: .grey }
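To make the storage-order point concrete, here is a small C++ sketch (illustrative only, not from this commit; `HitRow` and `HitColumns` are invented names) contrasting the two layouts:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Row-oriented layout: all fields of one row sit together in memory.
struct HitRow
{
    uint32_t watch_id;
    std::string url;
    uint8_t counter_id;
};
using RowStore = std::vector<HitRow>;

// Column-oriented layout: all values of one column sit together.
struct HitColumns
{
    std::vector<uint32_t> watch_id;
    std::vector<std::string> url;
    std::vector<uint8_t> counter_id;
};

// Scanning one small column touches a dense, cache-friendly array...
uint64_t sumCountersColumnar(const HitColumns & hits)
{
    uint64_t sum = 0;
    for (uint8_t c : hits.counter_id)
        sum += c;
    return sum;
}

// ...while the row store drags every other field through the cache as well.
uint64_t sumCountersRowWise(const RowStore & hits)
{
    uint64_t sum = 0;
    for (const auto & row : hits)
        sum += row.counter_id;
    return sum;
}
```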
Different orders of storing data are better suited to different usage scenarios.
The data access scenario is defined by which queries are run, how often, and in what proportion; how much data each kind of query reads — rows, columns, bytes; how reads relate to updates; the working size of the data and how locally it is accessed; whether transactions are used and with what isolation; the requirements for data replication and logical integrity; and the latency and throughput requirements for each kind of query, and so on.

The higher the load on the system, the more important it becomes to specialize the setup for the usage scenario, and the more specific that specialization becomes. There is no system that is equally well suited to significantly different scenarios. If a system fits a wide range of scenarios, then under a sufficiently high load it will handle all of them poorly, or will handle only one of them well.

## Key properties of the OLAP scenario {#kliuchevye-osobennosti-olap-stsenariia-raboty}

@@ -53,11 +53,11 @@ ClickHouse is a column-oriented database management

- queries are comparatively rare (usually no more than a hundred per second per server);
- for simple queries, latencies of about 50 ms are acceptable;
- column values are fairly small — numbers and short strings (for example, 60 bytes per URL);
- high throughput is required when processing a single query (up to billions of rows per second per node);
- high throughput is required when processing a single query (up to billions of rows per second per server);
- there are no transactions;
- low requirements for data consistency;
- the query uses one large table; all tables except one are small;
- the query uses one large table; all other tables in the query are small;
- the query result is significantly smaller than the source data — that is, the data is filtered or aggregated, and the result fits in the RAM of a single node.
- the query result is significantly smaller than the source data — that is, the data is filtered or aggregated, and the result fits in the RAM of a single server.

It is easy to see that the OLAP scenario differs substantially from other common scenarios (such as OLTP or key-value workloads). So it makes no sense to try to use OLTP systems or key-value stores for processing analytical queries if you want decent performance. For example, if you try to use MongoDB or Redis for analytics, you will get anecdotally poor performance compared to OLAP DBMSs.

@@ -77,11 +77,11 @@ ClickHouse is a column-oriented database management

### Input/output {#po-vvodu-vyvodu}

1. To execute an analytical query, only a small number of table columns needs to be read. In a column-oriented database you can read just the data you need: for example, if you need 5 columns out of 100, you can expect a 20-fold reduction in I/O.
2. Since data is read in batches, it is easier to compress. Data stored column by column also compresses better. This further reduces the I/O volume.
3. Thanks to the reduced I/O, more data fits in the system cache.

For example, the query "count the number of records for each advertising platform" requires reading one column, "advertising platform ID", which takes 1 byte uncompressed. If most of the traffic did not come from advertising platforms, you can expect at least tenfold compression of this column. With a fast compression algorithm, data can be decompressed at a rate of more than several gigabytes of uncompressed data per second. In other words, such a query can be processed at about several billion rows per second on a single server. This speed is actually achieved in practice; a back-of-envelope check follows below.
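The arithmetic behind that claim, with the article's numbers plugged into a few lines of C++ (the 3 GB/s decompression rate is an assumed figure within the range quoted above):

```cpp
#include <cstdio>

int main()
{
    const double bytes_per_row = 1.0;    // uncompressed "advertising platform ID" column
    const double decompress_gbps = 3.0;  // assumed uncompressed output rate of a fast codec
    const double rows_per_sec = decompress_gbps * 1e9 / bytes_per_row;
    std::printf("~%.0f billion rows/s on one server\n", rows_per_sec / 1e9);  // prints ~3
    return 0;
}
```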
### Computation {#po-vychisleniiam}

@@ -96,4 +96,4 @@ ClickHouse is a column-oriented database management

In "ordinary" DBMSs this is not done, because it makes no sense for simple queries. There are exceptions, though. For example, MemSQL uses code generation to reduce latency when executing SQL queries. For comparison: analytical DBMSs need to be optimized for throughput (GB/s), not latency (s).

Note that for CPU efficiency the query language must be declarative (SQL, MDX) or at least vectorized (J, K). That is, the query should contain loops only implicitly, opening up opportunities for optimization.
Note that for CPU efficiency the query language must be declarative (SQL, MDX) or at least vectorized (J, K, APL). That is, the query should contain loops only implicitly, opening up opportunities for optimization.
@@ -1134,13 +1134,6 @@ void Client::processOptions(const OptionsDescription & options_description,
}

static bool checkIfStdoutIsRegularFile()
{
    struct stat file_stat;
    return fstat(STDOUT_FILENO, &file_stat) == 0 && S_ISREG(file_stat.st_mode);
}

void Client::processConfig()
{
    if (!queries.empty() && config().has("queries-file"))

@@ -1176,38 +1169,7 @@ void Client::processConfig()

    pager = config().getString("pager", "");

    is_default_format = !config().has("vertical") && !config().has("output-format") && !config().has("format");
    if (is_default_format && checkIfStdoutIsRegularFile())
    {
        is_default_format = false;
        std::optional<String> format_from_file_name;
        format_from_file_name = FormatFactory::instance().tryGetFormatFromFileDescriptor(STDOUT_FILENO);
        format = format_from_file_name ? *format_from_file_name : "TabSeparated";
    }
    else if (config().has("vertical"))
    {
        format = config().getString("output-format", config().getString("format", "Vertical"));
    }
    else
    {
        format = config().getString("output-format", config().getString("format", is_interactive ? "PrettyCompact" : "TabSeparated"));
    }

    format_max_block_size = config().getUInt64("format_max_block_size",
        global_context->getSettingsRef().max_block_size);

    insert_format = "Values";

    /// Setting value from cmd arg overrides one from config
    if (global_context->getSettingsRef().max_insert_block_size.changed)
    {
        insert_format_max_block_size = global_context->getSettingsRef().max_insert_block_size;
    }
    else
    {
        insert_format_max_block_size = config().getUInt64("insert_format_max_block_size",
            global_context->getSettingsRef().max_insert_block_size);
    }
    setDefaultFormatsFromConfiguration();

    global_context->setClientName(std::string(DEFAULT_CLIENT_NAME));
    global_context->setQueryKindInitial();
@@ -312,47 +312,28 @@ void LocalServer::cleanup()
}

static bool checkIfStdinIsRegularFile()
{
    struct stat file_stat;
    return fstat(STDIN_FILENO, &file_stat) == 0 && S_ISREG(file_stat.st_mode);
}

static bool checkIfStdoutIsRegularFile()
{
    struct stat file_stat;
    return fstat(STDOUT_FILENO, &file_stat) == 0 && S_ISREG(file_stat.st_mode);
}

std::string LocalServer::getInitialCreateTableQuery()
{
    if (!config().has("table-structure") && !config().has("table-file") && !config().has("table-data-format") && (!checkIfStdinIsRegularFile() || queries.empty()))
    if (!config().has("table-structure") && !config().has("table-file") && !config().has("table-data-format") && (!isRegularFile(STDIN_FILENO) || queries.empty()))
        return {};

    auto table_name = backQuoteIfNeed(config().getString("table-name", "table"));
    auto table_structure = config().getString("table-structure", "auto");

    String table_file;
    std::optional<String> format_from_file_name;
    if (!config().has("table-file") || config().getString("table-file") == "-")
    {
        /// Use Unix tools stdin naming convention
        table_file = "stdin";
        format_from_file_name = FormatFactory::instance().tryGetFormatFromFileDescriptor(STDIN_FILENO);
    }
    else
    {
        /// Use regular file
        auto file_name = config().getString("table-file");
        table_file = quoteString(file_name);
        format_from_file_name = FormatFactory::instance().tryGetFormatFromFileName(file_name);
    }

    auto data_format = backQuoteIfNeed(
        config().getString("table-data-format", config().getString("format", format_from_file_name ? *format_from_file_name : "TSV")));

    String data_format = backQuoteIfNeed(default_input_format);

    if (table_structure == "auto")
        table_structure = "";

@@ -618,26 +599,7 @@ void LocalServer::processConfig()
    if (config().has("macros"))
        global_context->setMacros(std::make_unique<Macros>(config(), "macros", log));

    if (!config().has("output-format") && !config().has("format") && checkIfStdoutIsRegularFile())
    {
        std::optional<String> format_from_file_name;
        format_from_file_name = FormatFactory::instance().tryGetFormatFromFileDescriptor(STDOUT_FILENO);
        format = format_from_file_name ? *format_from_file_name : "TSV";
    }
    else
        format = config().getString("output-format", config().getString("format", is_interactive ? "PrettyCompact" : "TSV"));
    insert_format = "Values";

    /// Setting value from cmd arg overrides one from config
    if (global_context->getSettingsRef().max_insert_block_size.changed)
    {
        insert_format_max_block_size = global_context->getSettingsRef().max_insert_block_size;
    }
    else
    {
        insert_format_max_block_size = config().getUInt64("insert_format_max_block_size",
            global_context->getSettingsRef().max_insert_block_size);
    }
    setDefaultFormatsFromConfiguration();

    /// Sets external authenticators config (LDAP, Kerberos).
    global_context->setExternalAuthenticatorsConfig(config());
@@ -5,6 +5,12 @@

namespace DB
{

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
}

struct Settings;

AggregateFunctionPtr AggregateFunctionCount::getOwnNullAdapter(

@@ -19,7 +25,9 @@ namespace
AggregateFunctionPtr createAggregateFunctionCount(const std::string & name, const DataTypes & argument_types, const Array & parameters, const Settings *)
{
    assertNoParameters(name, parameters);
    assertArityAtMost<1>(name, argument_types);

    if (argument_types.size() > 1)
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires zero or one argument", name);

    return std::make_shared<AggregateFunctionCount>(argument_types);
}
@@ -14,6 +14,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
    extern const int NOT_IMPLEMENTED;
}

@@ -116,8 +117,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -91,6 +91,21 @@ public:
        if (!returns_many && levels.size() > 1)
            throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires one level parameter or less", getName());

        if constexpr (has_second_arg)
        {
            assertBinary(Name::name, argument_types_);
            if (!isUInt(argument_types_[1]))
                throw Exception(
                    ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
                    "Second argument (weight) for function {} must be unsigned integer, but it has type {}",
                    Name::name,
                    argument_types_[1]->getName());
        }
        else
        {
            assertUnary(Name::name, argument_types_);
        }

        if constexpr (is_quantile_ddsketch)
        {
            if (params.empty())

@@ -272,22 +287,6 @@ public:
            static_cast<ColVecType &>(to).getData().push_back(data.get(level));
        }
    }

    static void assertSecondArg(const DataTypes & types)
    {
        if constexpr (has_second_arg)
        {
            assertBinary(Name::name, types);
            if (!isUInt(types[1]))
                throw Exception(
                    ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
                    "Second argument (weight) for function {} must be unsigned integer, but it has type {}",
                    Name::name,
                    types[1]->getName());
        }
        else
            assertUnary(Name::name, types);
    }
};

struct NameQuantile { static constexpr auto name = "quantile"; };
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -1,7 +1,6 @@
#include <AggregateFunctions/AggregateFunctionQuantile.h>
#include <AggregateFunctions/QuantileBFloat16Histogram.h>
#include <AggregateFunctions/AggregateFunctionFactory.h>
#include <AggregateFunctions/Helpers.h>
#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDateTime.h>
#include <Core/Field.h>

@@ -13,6 +12,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +26,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -27,8 +28,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -14,6 +14,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
    extern const int NOT_IMPLEMENTED;
}

@@ -116,8 +117,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -27,8 +28,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -27,8 +28,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);

@@ -39,12 +40,12 @@ AggregateFunctionPtr createAggregateFunctionQuantile(
#undef DISPATCH
    if (which.idx == TypeIndex::Date) return std::make_shared<Function<DataTypeDate::FieldType, false>>(argument_types, params);
    if (which.idx == TypeIndex::DateTime) return std::make_shared<Function<DataTypeDateTime::FieldType, false>>(argument_types, params);
    if (which.idx == TypeIndex::DateTime64) return std::make_shared<Function<DateTime64, false>>(argument_types, params);

    if (which.idx == TypeIndex::Decimal32) return std::make_shared<Function<Decimal32, false>>(argument_types, params);
    if (which.idx == TypeIndex::Decimal64) return std::make_shared<Function<Decimal64, false>>(argument_types, params);
    if (which.idx == TypeIndex::Decimal128) return std::make_shared<Function<Decimal128, false>>(argument_types, params);
    if (which.idx == TypeIndex::Decimal256) return std::make_shared<Function<Decimal256, false>>(argument_types, params);
    if (which.idx == TypeIndex::DateTime64) return std::make_shared<Function<DateTime64, false>>(argument_types, params);

    throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument for aggregate function {}",
        argument_type->getName(), name);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -16,6 +16,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
    extern const int NOT_IMPLEMENTED;
}

@@ -216,8 +217,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -16,6 +16,7 @@ namespace DB

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
    extern const int INCORRECT_DATA;
    extern const int LOGICAL_ERROR;

@@ -503,8 +504,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -15,6 +15,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
    extern const int NOT_IMPLEMENTED;
}

@@ -320,8 +321,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -13,6 +13,7 @@ struct Settings;

namespace ErrorCodes
{
    extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
}

@@ -26,8 +27,8 @@ template <template <typename, bool> class Function>
AggregateFunctionPtr createAggregateFunctionQuantile(
    const std::string & name, const DataTypes & argument_types, const Array & params, const Settings *)
{
    /// Second argument type check doesn't depend on the type of the first one.
    Function<void, true>::assertSecondArg(argument_types);
    if (argument_types.empty())
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at least one argument", name);

    const DataTypePtr & argument_type = argument_types[0];
    WhichDataType which(argument_type);
@@ -33,21 +33,4 @@ inline void assertBinary(const std::string & name, const DataTypes & argument_ty
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires two arguments", name);
}

template<std::size_t maximal_arity>
inline void assertArityAtMost(const std::string & name, const DataTypes & argument_types)
{
    if (argument_types.size() <= maximal_arity)
        return;

    if constexpr (maximal_arity == 0)
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} cannot have arguments", name);

    if constexpr (maximal_arity == 1)
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires zero or one argument",
            name);

    throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH, "Aggregate function {} requires at most {} arguments",
        name, maximal_arity);
}

}
@@ -801,6 +801,14 @@ struct IdentifierResolveScope
    /// Node hash to mask id map
    std::shared_ptr<std::map<IQueryTreeNode::Hash, size_t>> projection_mask_map;

    struct ResolvedFunctionsCache
    {
        FunctionOverloadResolverPtr resolver;
        FunctionBasePtr function_base;
    };

    std::map<IQueryTreeNode::Hash, ResolvedFunctionsCache> functions_cache;

    [[maybe_unused]] const IdentifierResolveScope * getNearestQueryScope() const
    {
        const IdentifierResolveScope * scope_to_check = this;

@@ -5586,9 +5594,20 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
    FunctionOverloadResolverPtr function = UserDefinedExecutableFunctionFactory::instance().tryGet(function_name, scope.context, parameters);
    bool is_executable_udf = true;

    IdentifierResolveScope::ResolvedFunctionsCache * function_cache = nullptr;

    if (!function)
    {
        function = FunctionFactory::instance().tryGet(function_name, scope.context);

        /// This is a hack to allow a query like `select randConstant(), randConstant(), randConstant()`.
        /// Function randConstant() would return the same value for the same arguments (in scope).
        auto hash = function_node_ptr->getTreeHash();
        function_cache = &scope.functions_cache[hash];
        if (!function_cache->resolver)
            function_cache->resolver = FunctionFactory::instance().tryGet(function_name, scope.context);

        function = function_cache->resolver;

        is_executable_udf = false;
    }
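The caching idea in the hunk above, reduced to a standalone sketch (types and names here are stand-ins, not the Analyzer's real API): key the expensive resolution by a hash of the expression tree, so that syntactically identical calls in one scope reuse the same resolved object.

```cpp
#include <cstdint>
#include <map>
#include <memory>
#include <utility>

using TreeHash = std::pair<std::uint64_t, std::uint64_t>;  // stand-in for IQueryTreeNode::Hash

struct ResolvedEntry
{
    std::shared_ptr<int> function_base;  // stand-in for FunctionBasePtr
};

std::shared_ptr<int> resolveOnce(std::map<TreeHash, ResolvedEntry> & cache, TreeHash hash)
{
    auto & entry = cache[hash];  // first lookup creates an empty slot
    if (!entry.function_base)
        entry.function_base = std::make_shared<int>(42);  // the expensive build() happens only once
    return entry.function_base;  // identical subtrees get back the identical object
}
```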
@@ -5812,7 +5831,17 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi

    try
    {
        auto function_base = function->build(argument_columns);

        FunctionBasePtr function_base;
        if (function_cache)
        {
            auto & cached_function = function_cache->function_base;
            if (!cached_function)
                cached_function = function->build(argument_columns);

            function_base = cached_function;
        }
        else
            function_base = function->build(argument_columns);

        /// Do not constant fold get scalar functions
        bool disable_constant_folding = function_name == "__getScalar" || function_name == "shardNum" ||
@@ -73,6 +73,7 @@ namespace
        .use_virtual_addressing = s3_uri.is_virtual_hosted_style,
        .disable_checksum = local_settings.s3_disable_checksum,
        .gcs_issue_compose_request = context->getConfigRef().getBool("s3.gcs_issue_compose_request", false),
        .is_s3express_bucket = S3::isS3ExpressEndpoint(s3_uri.endpoint),
    };

    return S3::ClientFactory::instance().create(
@@ -564,7 +564,7 @@ try
        out_buf = &std_out;
    }

    String current_format = format;
    String current_format = default_output_format;

    select_into_file = false;
    select_into_file_and_stdout = false;

@@ -722,6 +722,87 @@ void ClientBase::adjustSettings()
    global_context->setSettings(settings);
}

bool ClientBase::isRegularFile(int fd)
{
    struct stat file_stat;
    return fstat(fd, &file_stat) == 0 && S_ISREG(file_stat.st_mode);
}
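A self-contained sketch of what this helper reports (the same fstat/S_ISREG check, outside of ClickHouse):

```cpp
#include <sys/stat.h>
#include <unistd.h>
#include <cstdio>

static bool isRegularFileFd(int fd)
{
    struct stat file_stat;
    return fstat(fd, &file_stat) == 0 && S_ISREG(file_stat.st_mode);
}

int main()
{
    // On a terminal this prints 0; with `./a.out > out.txt` it prints 1;
    // with `./a.out | cat` it prints 0 (a pipe is not a regular file).
    std::printf("%d\n", isRegularFileFd(STDOUT_FILENO) ? 1 : 0);
    return 0;
}
```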
void ClientBase::setDefaultFormatsFromConfiguration()
{
    if (config().has("output-format"))
    {
        default_output_format = config().getString("output-format");
        is_default_format = false;
    }
    else if (config().has("format"))
    {
        default_output_format = config().getString("format");
        is_default_format = false;
    }
    else if (config().has("vertical"))
    {
        default_output_format = "Vertical";
        is_default_format = false;
    }
    else if (isRegularFile(STDOUT_FILENO))
    {
        std::optional<String> format_from_file_name = FormatFactory::instance().tryGetFormatFromFileDescriptor(STDOUT_FILENO);
        if (format_from_file_name)
            default_output_format = *format_from_file_name;
        else
            default_output_format = "TSV";
    }
    else if (is_interactive || stdout_is_a_tty)
    {
        default_output_format = "PrettyCompact";
    }
    else
    {
        default_output_format = "TSV";
    }

    if (config().has("input-format"))
    {
        default_input_format = config().getString("input-format");
    }
    else if (config().has("format"))
    {
        default_input_format = config().getString("format");
    }
    else if (config().getString("table-file", "-") != "-")
    {
        auto file_name = config().getString("table-file");
        std::optional<String> format_from_file_name = FormatFactory::instance().tryGetFormatFromFileName(file_name);
        if (format_from_file_name)
            default_input_format = *format_from_file_name;
        else
            default_input_format = "TSV";
    }
    else
    {
        std::optional<String> format_from_file_name = FormatFactory::instance().tryGetFormatFromFileDescriptor(STDIN_FILENO);
        if (format_from_file_name)
            default_input_format = *format_from_file_name;
        else
            default_input_format = "TSV";
    }

    format_max_block_size = config().getUInt64("format_max_block_size",
        global_context->getSettingsRef().max_block_size);

    /// Setting value from cmd arg overrides one from config
    if (global_context->getSettingsRef().max_insert_block_size.changed)
    {
        insert_format_max_block_size = global_context->getSettingsRef().max_insert_block_size;
    }
    else
    {
        insert_format_max_block_size = config().getUInt64("insert_format_max_block_size",
            global_context->getSettingsRef().max_insert_block_size);
    }
}
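The output-format branch of the function above boils down to a fixed precedence chain; this standalone restatement (a hypothetical helper, not part of the patch) makes the order explicit:

```cpp
#include <optional>
#include <string>

std::string pickOutputFormat(
    std::optional<std::string> output_format_opt,      // --output-format
    std::optional<std::string> format_opt,             // --format
    bool vertical_flag,                                // --vertical
    bool stdout_is_regular_file,                       // stdout redirected to a file
    std::optional<std::string> format_from_file_name,  // sniffed from the target's extension
    bool is_interactive_or_tty)
{
    if (output_format_opt) return *output_format_opt;        // 1. explicit output format
    if (format_opt) return *format_opt;                      // 2. generic --format
    if (vertical_flag) return "Vertical";                    // 3. --vertical shortcut
    if (stdout_is_regular_file)                              // 4. guess from the redirection target
        return format_from_file_name.value_or("TSV");
    return is_interactive_or_tty ? "PrettyCompact" : "TSV";  // 5. terminal vs. pipe default
}
```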
void ClientBase::initTTYBuffer(ProgressOption progress)
{
    if (tty_buf)

@@ -1605,7 +1686,7 @@ void ClientBase::sendData(Block & sample, const ColumnsDescription & columns_des

void ClientBase::sendDataFrom(ReadBuffer & buf, Block & sample, const ColumnsDescription & columns_description, ASTPtr parsed_query, bool have_more_data)
{
    String current_format = insert_format;
    String current_format = "Values";

    /// Data format can be specified in the INSERT query.
    if (const auto * insert = parsed_query->as<ASTInsertQuery>())
@@ -185,9 +185,13 @@ protected:
    static bool isSyncInsertWithData(const ASTInsertQuery & insert_query, const ContextPtr & context);
    bool processMultiQueryFromFile(const String & file_name);

    static bool isRegularFile(int fd);

    /// Adjust some settings after command line options and config had been processed.
    void adjustSettings();

    void setDefaultFormatsFromConfiguration();

    void initTTYBuffer(ProgressOption progress);

    /// Should be one of the first, to be destroyed the last,

@@ -218,12 +222,13 @@ protected:

    String pager;

    String format; /// Query results output format.
    String default_output_format; /// Query results output format.
    String default_input_format; /// Tables' format for clickhouse-local.

    bool select_into_file = false; /// If writing result INTO OUTFILE. It affects progress rendering.
    bool select_into_file_and_stdout = false; /// If writing result INTO OUTFILE AND STDOUT. It affects progress rendering.
    bool is_default_format = true; /// false, if format is set in the config or command line.
    size_t format_max_block_size = 0; /// Max block size for console output.
    String insert_format; /// Format of INSERT data that is read from stdin in batch mode.
    size_t insert_format_max_block_size = 0; /// Max block size when reading INSERT data.
    size_t max_client_network_bandwidth = 0; /// The maximum speed of data exchange over the network for the client in bytes per second.
@@ -13,8 +13,6 @@ namespace DB
    }
}

// I wanted to make this ALWAYS_INLINE to prevent flappy performance tests,
// but GCC complains it may not be inlined.
static void formatReadable(double size, DB::WriteBuffer & out,
    int precision, const char ** units, size_t units_size, double delimiter)
{

@@ -25,7 +23,12 @@ static void formatReadable(double size, DB::WriteBuffer & out,
    DB::DoubleConverter<false>::BufferType buffer;
    double_conversion::StringBuilder builder{buffer, sizeof(buffer)};

    const auto result = DB::DoubleConverter<false>::instance().ToFixed(size, precision, &builder);
    const auto & converter = DB::DoubleConverter<false>::instance();

    auto result = converter.ToFixed(size, precision, &builder);

    if (!result)
        result = converter.ToShortest(size, &builder);

    if (!result)
        throw DB::Exception(DB::ErrorCodes::CANNOT_PRINT_FLOAT_OR_DOUBLE_NUMBER, "Cannot print float or double number");

@@ -65,7 +68,11 @@ std::string formatReadableSizeWithDecimalSuffix(double value, int precision)

void formatReadableQuantity(double value, DB::WriteBuffer & out, int precision)
{
    const char * units[] = {"", " thousand", " million", " billion", " trillion", " quadrillion"};
    const char * units[] = {"", " thousand", " million", " billion", " trillion", " quadrillion",
        " quintillion", " sextillion", " septillion", " octillion", " nonillion", " decillion",
        " undecillion", " duodecillion", " tredecillion", " quattuordecillion", " quindecillion",
        " sexdecillion", " septendecillion", " octodecillion", " novemdecillion", " vigintillion"};

    formatReadable(value, out, precision, units, sizeof(units) / sizeof(units[0]), 1000);
}
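For reference, a minimal sketch (illustrative, not the patch itself) of how such a units table is consumed: keep dividing by the delimiter until the value fits, then print with the matching suffix.

```cpp
#include <cstddef>
#include <cstdio>

void printReadableQuantity(double value)
{
    static const char * units[] = {"", " thousand", " million", " billion", " trillion", " quadrillion"};
    const size_t units_size = sizeof(units) / sizeof(units[0]);
    size_t i = 0;
    while (value >= 1000.0 && i + 1 < units_size)
    {
        value /= 1000.0;
        ++i;
    }
    std::printf("%.2f%s\n", value, units[i]);  // e.g. 1234567 -> "1.23 million"
}
```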
@@ -105,6 +105,7 @@ void KeeperSnapshotManagerS3::updateS3Configuration(const Poco::Util::AbstractCo
        .use_virtual_addressing = new_uri.is_virtual_hosted_style,
        .disable_checksum = false,
        .gcs_issue_compose_request = false,
        .is_s3express_bucket = S3::isS3ExpressEndpoint(new_uri.endpoint),
    };

    auto client = S3::ClientFactory::instance().create(
@@ -44,12 +44,78 @@ struct ContextSharedPart : boost::noncopyable
        : macros(std::make_unique<Macros>())
    {}

    ~ContextSharedPart()
    {
        if (keeper_dispatcher)
        {
            try
            {
                keeper_dispatcher->shutdown();
            }
            catch (...)
            {
                tryLogCurrentException(__PRETTY_FUNCTION__);
            }
        }

        /// Wait for thread pool for background reads and writes,
        /// since it may use per-user MemoryTracker which will be destroyed here.
        if (asynchronous_remote_fs_reader)
        {
            try
            {
                asynchronous_remote_fs_reader->wait();
                asynchronous_remote_fs_reader.reset();
            }
            catch (...)
            {
                tryLogCurrentException(__PRETTY_FUNCTION__);
            }
        }

        if (asynchronous_local_fs_reader)
        {
            try
            {
                asynchronous_local_fs_reader->wait();
                asynchronous_local_fs_reader.reset();
            }
            catch (...)
            {
                tryLogCurrentException(__PRETTY_FUNCTION__);
            }
        }

        if (synchronous_local_fs_reader)
        {
            try
            {
                synchronous_local_fs_reader->wait();
                synchronous_local_fs_reader.reset();
            }
            catch (...)
            {
                tryLogCurrentException(__PRETTY_FUNCTION__);
            }
        }

        if (threadpool_writer)
        {
            try
            {
                threadpool_writer->wait();
                threadpool_writer.reset();
            }
            catch (...)
            {
                tryLogCurrentException(__PRETTY_FUNCTION__);
            }
        }
    }
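Every branch of that destructor follows the same wait-then-reset pattern with exceptions swallowed; a generic sketch of the idea (simplified; `DemoPool` is a stand-in for the real thread pools):

```cpp
#include <memory>

struct DemoPool
{
    void wait() {}  // stand-in for ThreadPool::wait()
};

template <typename Pool>
void shutdownQuietly(std::unique_ptr<Pool> & pool) noexcept
{
    if (!pool)
        return;
    try
    {
        pool->wait();  // drain in-flight work first: it may still touch shared state
        pool.reset();  // only then destroy the pool itself
    }
    catch (...)
    {
        // the real code logs via tryLogCurrentException and keeps shutting down
    }
}

int main()
{
    auto pool = std::make_unique<DemoPool>();
    shutdownQuietly(pool);
    return 0;
}
```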
    /// For access of most of shared objects.
    mutable SharedMutex mutex;

    mutable std::mutex keeper_dispatcher_mutex;
    mutable std::shared_ptr<KeeperDispatcher> keeper_dispatcher TSA_GUARDED_BY(keeper_dispatcher_mutex);

    ServerSettings server_settings;

    String path; /// Path to the data directory, with a slash at the end.

@@ -77,6 +143,10 @@ struct ContextSharedPart : boost::noncopyable

    mutable ThrottlerPtr local_read_throttler; /// A server-wide throttler for local IO reads
    mutable ThrottlerPtr local_write_throttler; /// A server-wide throttler for local IO writes

    mutable std::mutex keeper_dispatcher_mutex;
    mutable std::shared_ptr<KeeperDispatcher> keeper_dispatcher TSA_GUARDED_BY(keeper_dispatcher_mutex);
};

ContextData::ContextData() = default;
@@ -1145,6 +1145,7 @@ class IColumn;
    M(Bool, output_format_enable_streaming, false, "Enable streaming in output formats that support it.", 0) \
    M(Bool, output_format_write_statistics, true, "Write statistics about read rows, bytes, time elapsed in suitable output formats.", 0) \
    M(Bool, output_format_pretty_row_numbers, false, "Add row numbers before each row for pretty output format", 0) \
    M(Bool, output_format_pretty_highlight_digit_groups, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline.", 0) \
    M(UInt64, output_format_pretty_single_large_number_tip_threshold, 1'000'000, "Print a readable number tip on the right side of the table if the block consists of a single number which exceeds this value (except 0)", 0) \
    M(Bool, insert_distributed_one_random_shard, false, "If setting is enabled, inserting into distributed table will choose a random shard to write when there is no sharding key", 0) \
    \

@@ -111,6 +111,7 @@ static std::map<ClickHouseVersion, SettingsChangesHistory::SettingsChanges> sett
    {"azure_max_upload_part_size", 5ull*1024*1024*1024, 5ull*1024*1024*1024, "The maximum size of part to upload during multipart upload to Azure blob storage."},
    {"azure_upload_part_size_multiply_factor", 2, 2, "Multiply azure_min_upload_part_size by this factor each time azure_multiply_parts_count_threshold parts were uploaded from a single write to Azure blob storage."},
    {"azure_upload_part_size_multiply_parts_count_threshold", 500, 500, "Each time this number of parts was uploaded to Azure blob storage, azure_min_upload_part_size is multiplied by azure_upload_part_size_multiply_factor."},
    {"output_format_pretty_highlight_digit_groups", false, true, "If enabled and if output is a terminal, highlight every digit corresponding to the number of thousands, millions, etc. with underline."},
    }},
    {"24.2", {{"allow_suspicious_variant_types", true, false, "Don't allow creating Variant type with suspicious variants by default"},
    {"validate_experimental_and_suspicious_types_inside_nested_types", false, true, "Validate usage of experimental and suspicious types inside nested types"},
@@ -1,5 +1,6 @@
#include <Disks/ObjectStorages/S3/diskSettings.h>
#include "IO/S3/Client.h"
#include <IO/S3/Client.h>
#include <Common/Exception.h>

#if USE_AWS_S3

@@ -10,7 +11,7 @@
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include <Interpreters/Context.h>
#include "Disks/DiskFactory.h"
#include <Disks/DiskFactory.h>

#include <aws/core/client/DefaultRetryStrategy.h>
#include <base/getFQDNOrHostName.h>

@@ -25,6 +26,11 @@
namespace DB
{

namespace ErrorCodes
{
    extern const int NO_ELEMENTS_IN_CONFIG;
}

std::unique_ptr<S3ObjectStorageSettings> getSettings(const Poco::Util::AbstractConfiguration & config, const String & config_prefix, ContextPtr context)
{
    const Settings & settings = context->getSettingsRef();

@@ -47,11 +53,15 @@ std::unique_ptr<S3::Client> getClient(
    const Settings & global_settings = context->getGlobalContext()->getSettingsRef();
    const Settings & local_settings = context->getSettingsRef();

    String endpoint = context->getMacros()->expand(config.getString(config_prefix + ".endpoint"));
    const String endpoint = context->getMacros()->expand(config.getString(config_prefix + ".endpoint"));
    S3::URI uri(endpoint);
    if (!uri.key.ends_with('/'))
        uri.key.push_back('/');

    if (S3::isS3ExpressEndpoint(endpoint) && !config.has(config_prefix + ".region"))
        throw Exception(
            ErrorCodes::NO_ELEMENTS_IN_CONFIG, "Region should be explicitly specified for directory buckets ({})", config_prefix);

    S3::PocoHTTPClientConfiguration client_configuration = S3::ClientFactory::instance().createClientConfiguration(
        config.getString(config_prefix + ".region", ""),
        context->getRemoteHostFilter(),

@@ -93,6 +103,7 @@ std::unique_ptr<S3::Client> getClient(
        .use_virtual_addressing = uri.is_virtual_hosted_style,
        .disable_checksum = local_settings.s3_disable_checksum,
        .gcs_issue_compose_request = config.getBool("s3.gcs_issue_compose_request", false),
        .is_s3express_bucket = S3::isS3ExpressEndpoint(endpoint),
    };

    return S3::ClientFactory::instance().create(
@@ -167,6 +167,7 @@ FormatSettings getFormatSettings(const ContextPtr & context, const Settings & se
    format_settings.pretty.max_column_pad_width = settings.output_format_pretty_max_column_pad_width;
    format_settings.pretty.max_rows = settings.output_format_pretty_max_rows;
    format_settings.pretty.max_value_width = settings.output_format_pretty_max_value_width;
    format_settings.pretty.highlight_digit_groups = settings.output_format_pretty_highlight_digit_groups;
    format_settings.pretty.output_format_pretty_row_numbers = settings.output_format_pretty_row_numbers;
    format_settings.pretty.output_format_pretty_single_large_number_tip_threshold = settings.output_format_pretty_single_large_number_tip_threshold;
    format_settings.protobuf.input_flatten_google_wrappers = settings.input_format_protobuf_flatten_google_wrappers;

@@ -275,6 +275,7 @@ struct FormatSettings
    UInt64 max_rows = 10000;
    UInt64 max_column_pad_width = 250;
    UInt64 max_value_width = 10000;
    bool highlight_digit_groups = true;
    SettingFieldUInt64Auto color{"auto"};

    bool output_format_pretty_row_numbers = false;
@@ -1921,6 +1921,19 @@ struct NameParseDateTimeBestEffort;
struct NameParseDateTimeBestEffortOrZero;
struct NameParseDateTimeBestEffortOrNull;

template <typename Name, typename ToDataType>
constexpr bool mightBeDateTime()
{
    if constexpr (std::is_same_v<ToDataType, DataTypeDateTime64>)
        return true;
    else if constexpr (
        std::is_same_v<Name, NameToDateTime> || std::is_same_v<Name, NameParseDateTimeBestEffort>
        || std::is_same_v<Name, NameParseDateTimeBestEffortOrZero> || std::is_same_v<Name, NameParseDateTimeBestEffortOrNull>)
        return true;

    return false;
}
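The point of making this a `constexpr` predicate is that it can guard an `if constexpr` branch, so instantiations that can never produce a DateTime do not compile the DateTime path at all. A minimal illustration of the pattern (toy types, not the real ones):

```cpp
#include <type_traits>

struct NameToDateTimeToy {};  // toy stand-ins for the Name tags above
struct NameToUInt64Toy {};

template <typename Name>
constexpr bool mightBeDateTimeToy()
{
    return std::is_same_v<Name, NameToDateTimeToy>;
}

template <typename Name>
int convert()
{
    if constexpr (mightBeDateTimeToy<Name>())
        return 64;  // DateTime-only code lives here and is compiled only when reachable
    else
        return 0;
}

static_assert(mightBeDateTimeToy<NameToDateTimeToy>());
static_assert(!mightBeDateTimeToy<NameToUInt64Toy>());
```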
|
||||
|
||||
template<typename Name, typename ToDataType>
|
||||
inline bool isDateTime64(const ColumnsWithTypeAndName & arguments)
{
@@ -2190,7 +2203,6 @@ private:
result_column = ConvertImpl<LeftDataType, RightDataType, Name, FormatSettings::DateTimeOverflowBehavior::Saturate>::execute(arguments, result_type, input_rows_count, from_string_tag, scale);
break;
}

}
else if constexpr (IsDataTypeDateOrDateTime<RightDataType> && std::is_same_v<LeftDataType, DataTypeDateTime64>)
{
@@ -2208,12 +2220,23 @@ private:
break;
}
}
else if constexpr ((IsDataTypeNumber<LeftDataType>
|| IsDataTypeDateOrDateTime<LeftDataType>) && IsDataTypeDateOrDateTime<RightDataType>)
{
#define GENERATE_OVERFLOW_MODE_CASE(OVERFLOW_MODE) \
case FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE: \
result_column = ConvertImpl<LeftDataType, RightDataType, Name, FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE>::execute( \
arguments, result_type, input_rows_count, from_string_tag); \
break;
switch (date_time_overflow_behavior)
{
GENERATE_OVERFLOW_MODE_CASE(Throw)
GENERATE_OVERFLOW_MODE_CASE(Ignore)
GENERATE_OVERFLOW_MODE_CASE(Saturate)
}

#undef GENERATE_OVERFLOW_MODE_CASE
}
else if constexpr (IsDataTypeDecimalOrNumber<LeftDataType> && IsDataTypeDecimalOrNumber<RightDataType>)
{
using LeftT = typename LeftDataType::FieldType;
@@ -2232,44 +2255,36 @@ private:
}
else
{
switch (date_time_overflow_behavior)
{
GENERATE_OVERFLOW_MODE_CASE(Throw)
GENERATE_OVERFLOW_MODE_CASE(Ignore)
GENERATE_OVERFLOW_MODE_CASE(Saturate)
}
result_column = ConvertImpl<LeftDataType, RightDataType, Name>::execute(
arguments, result_type, input_rows_count, from_string_tag);
}
}
else if constexpr ((IsDataTypeNumber<LeftDataType> || IsDataTypeDateOrDateTime<LeftDataType>)
&& IsDataTypeDateOrDateTime<RightDataType>)
{
switch (date_time_overflow_behavior)
{
GENERATE_OVERFLOW_MODE_CASE(Throw)
GENERATE_OVERFLOW_MODE_CASE(Ignore)
GENERATE_OVERFLOW_MODE_CASE(Saturate)
}
}
#undef GENERATE_OVERFLOW_MODE_CASE
else
result_column = ConvertImpl<LeftDataType, RightDataType, Name>::execute(arguments, result_type, input_rows_count, from_string_tag);

return true;
};

if (isDateTime64<Name, ToDataType>(arguments))
if constexpr (mightBeDateTime<Name, ToDataType>())
{
/// For toDateTime('xxxx-xx-xx xx:xx:xx.00', 2[, 'timezone']) we need to convert it to DateTime64
const ColumnWithTypeAndName & scale_column = arguments[1];
UInt32 scale = extractToDecimalScale(scale_column);

if (to_datetime64 || scale != 0) /// When scale = 0, the data type is DateTime, otherwise the data type is DateTime64
if (isDateTime64<Name, ToDataType>(arguments))
{
if (!callOnIndexAndDataType<DataTypeDateTime64>(from_type->getTypeId(), call, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag))
throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument of function {}",
arguments[0].type->getName(), getName());
/// For toDateTime('xxxx-xx-xx xx:xx:xx.00', 2[, 'timezone']) we need to convert it to DateTime64
const ColumnWithTypeAndName & scale_column = arguments[1];
UInt32 scale = extractToDecimalScale(scale_column);

return result_column;
if (to_datetime64 || scale != 0) /// When scale = 0, the data type is DateTime, otherwise the data type is DateTime64
{
if (!callOnIndexAndDataType<DataTypeDateTime64>(
from_type->getTypeId(), call, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag))
throw Exception(
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
"Illegal type {} of argument of function {}",
arguments[0].type->getName(),
getName());

return result_column;
}
}
}
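Note on the pattern above: GENERATE_OVERFLOW_MODE_CASE expands one `case` per overflow mode, so every branch of the `switch` instantiates the same templated conversion with a different compile-time policy. A minimal self-contained sketch of the idea; all names here are stand-ins, not the real ClickHouse types:

```cpp
#include <cstdio>

// Stand-in for FormatSettings::DateTimeOverflowBehavior.
enum class OverflowBehavior { Throw, Ignore, Saturate };

template <OverflowBehavior behavior>
int convert(int value)
{
    // Each instantiation would apply its own overflow policy here.
    return value;
}

int dispatch(OverflowBehavior behavior, int value)
{
    int result = 0;
// One case per enum value, all forwarding to the templated implementation.
#define GENERATE_OVERFLOW_MODE_CASE(MODE) \
    case OverflowBehavior::MODE: \
        result = convert<OverflowBehavior::MODE>(value); \
        break;
    switch (behavior)
    {
        GENERATE_OVERFLOW_MODE_CASE(Throw)
        GENERATE_OVERFLOW_MODE_CASE(Ignore)
        GENERATE_OVERFLOW_MODE_CASE(Saturate)
    }
#undef GENERATE_OVERFLOW_MODE_CASE
    return result;
}

int main()
{
    std::printf("%d\n", dispatch(OverflowBehavior::Saturate, 42));
}
```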
@@ -2468,19 +2483,27 @@ public:
result_column = executeInternal<ToDataType>(arguments, result_type, input_rows_count,
assert_cast<const ToDataType &>(*removeNullable(result_type)).getScale());
}
else if (isDateTime64<Name, ToDataType>(arguments))
else if constexpr (mightBeDateTime<Name, ToDataType>())
{
UInt64 scale = to_datetime64 ? DataTypeDateTime64::default_scale : 0;
if (arguments.size() > 1)
scale = extractToDecimalScale(arguments[1]);

if (scale == 0)
if (isDateTime64<Name, ToDataType>(arguments))
{
result_column = executeInternal<DataTypeDateTime>(arguments, result_type, input_rows_count, 0);
UInt64 scale = to_datetime64 ? DataTypeDateTime64::default_scale : 0;
if (arguments.size() > 1)
scale = extractToDecimalScale(arguments[1]);

if (scale == 0)
{
result_column = executeInternal<DataTypeDateTime>(arguments, result_type, input_rows_count, 0);
}
else
{
result_column
= executeInternal<DataTypeDateTime64>(arguments, result_type, input_rows_count, static_cast<UInt32>(scale));
}
}
else
{
result_column = executeInternal<DataTypeDateTime64>(arguments, result_type, input_rows_count, static_cast<UInt32>(scale));
result_column = executeInternal<ToDataType>(arguments, result_type, input_rows_count, 0);
}
}
else
@@ -3173,43 +3196,14 @@ private:

if constexpr (IsDataTypeNumber<LeftDataType>)
{
if constexpr (IsDataTypeNumber<RightDataType>)
if constexpr (IsDataTypeDateOrDateTime<RightDataType>)
{
#define GENERATE_OVERFLOW_MODE_CASE(OVERFLOW_MODE, ADDITIONS) \
case FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE: \
result_column = ConvertImpl<LeftDataType, RightDataType, FunctionCastName, FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE>::execute( \
arguments, result_type, input_rows_count, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag, ADDITIONS()); \
break;
if (wrapper_cast_type == CastType::accurate)
{
switch (date_time_overflow_behavior)
{
GENERATE_OVERFLOW_MODE_CASE(Throw, AccurateConvertStrategyAdditions)
GENERATE_OVERFLOW_MODE_CASE(Ignore, AccurateConvertStrategyAdditions)
GENERATE_OVERFLOW_MODE_CASE(Saturate, AccurateConvertStrategyAdditions)
}
}
else
{
switch (date_time_overflow_behavior)
{
GENERATE_OVERFLOW_MODE_CASE(Throw, AccurateOrNullConvertStrategyAdditions)
GENERATE_OVERFLOW_MODE_CASE(Ignore, AccurateOrNullConvertStrategyAdditions)
GENERATE_OVERFLOW_MODE_CASE(Saturate, AccurateOrNullConvertStrategyAdditions)
}
}
#undef GENERATE_OVERFLOW_MODE_CASE

return true;
}

if constexpr (std::is_same_v<RightDataType, DataTypeDate> || std::is_same_v<RightDataType, DataTypeDateTime>)
{
#define GENERATE_OVERFLOW_MODE_CASE(OVERFLOW_MODE, ADDITIONS) \
case FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE: \
result_column = ConvertImpl<LeftDataType, RightDataType, FunctionCastName, FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE>::template execute<ADDITIONS>( \
arguments, result_type, input_rows_count, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag); \
break;
case FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE: \
result_column \
= ConvertImpl<LeftDataType, RightDataType, FunctionCastName, FormatSettings::DateTimeOverflowBehavior::OVERFLOW_MODE>:: \
execute(arguments, result_type, input_rows_count, BehaviourOnErrorFromString::ConvertDefaultBehaviorTag, ADDITIONS()); \
break;
if (wrapper_cast_type == CastType::accurate)
{
switch (date_time_overflow_behavior)
@@ -3229,6 +3223,30 @@ arguments, result_type, input_rows_count, BehaviourOnErrorFromString::ConvertDef
}
}
#undef GENERATE_OVERFLOW_MODE_CASE

return true;
}
else if constexpr (IsDataTypeNumber<RightDataType>)
{
if (wrapper_cast_type == CastType::accurate)
{
result_column = ConvertImpl<LeftDataType, RightDataType, FunctionCastName>::execute(
arguments,
result_type,
input_rows_count,
BehaviourOnErrorFromString::ConvertDefaultBehaviorTag,
AccurateConvertStrategyAdditions());
}
else
{
result_column = ConvertImpl<LeftDataType, RightDataType, FunctionCastName>::execute(
arguments,
result_type,
input_rows_count,
BehaviourOnErrorFromString::ConvertDefaultBehaviorTag,
AccurateOrNullConvertStrategyAdditions());
}

return true;
}
}
@@ -55,6 +55,7 @@ REGISTER_FUNCTION(CurrentUser)
{
factory.registerFunction<FunctionCurrentUser>();
factory.registerAlias("user", FunctionCurrentUser::name, FunctionFactory::CaseInsensitive);
factory.registerAlias("current_user", FunctionCurrentUser::name, FunctionFactory::CaseInsensitive);
}

}
@@ -1,4 +1,5 @@
#include <IO/S3/Client.h>
#include <Common/Exception.h>

#if USE_AWS_S3

@@ -304,6 +305,9 @@ Model::HeadObjectOutcome Client::HeadObject(HeadObjectRequest & request) const

request.setApiMode(api_mode);

if (isS3ExpressBucket())
request.setIsS3ExpressBucket();

addAdditionalAMZHeadersToCanonicalHeadersList(request, client_configuration.extra_headers);

if (auto region = getRegionForBucket(bucket); !region.empty())
@@ -530,7 +534,11 @@ Client::doRequest(RequestType & request, RequestFn request_fn) const
addAdditionalAMZHeadersToCanonicalHeadersList(request, client_configuration.extra_headers);
const auto & bucket = request.GetBucket();
request.setApiMode(api_mode);
if (client_settings.disable_checksum)

/// We have to use checksums for S3Express buckets, so the order of checks should be the following
if (client_settings.is_s3express_bucket)
request.setIsS3ExpressBucket();
else if (client_settings.disable_checksum)
request.disableChecksum();

if (auto region = getRegionForBucket(bucket); !region.empty())
@@ -915,9 +923,9 @@ std::unique_ptr<S3::Client> ClientFactory::create( // NOLINT
std::move(sse_kms_config),
credentials_provider,
client_configuration, // Client configuration.
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
client_settings
);
client_settings.is_s3express_bucket ? Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::RequestDependent
: Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
client_settings);
}

PocoHTTPClientConfiguration ClientFactory::createClientConfiguration( // NOLINT
@@ -956,6 +964,11 @@ PocoHTTPClientConfiguration ClientFactory::createClientConfiguration( // NOLINT
return config;
}

bool isS3ExpressEndpoint(const std::string & endpoint)
{
/// On one hand this check isn't 100% reliable, on the other - all it will change is whether we attach checksums to the requests.
return endpoint.contains("s3express");
}
}

}
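Aside on the ordering above: S3Express requires request checksums, while `disable_checksum` turns them off, so the S3Express flag must win. A hedged sketch of that precedence; the `Request` and `Settings` types here are stand-ins for the SDK request wrappers, not ClickHouse code:

```cpp
#include <string>

struct Request
{
    bool s3express = false;
    bool checksum = true;
};

struct Settings
{
    bool is_s3express_bucket = false;
    bool disable_checksum = false;
};

inline void applyChecksumPolicy(Request & request, const Settings & settings)
{
    if (settings.is_s3express_bucket)
        request.s3express = true;  // checksums stay enabled for S3Express
    else if (settings.disable_checksum)
        request.checksum = false;  // safe to skip the extra body read
}

// Endpoint heuristic mirroring isS3ExpressEndpoint() from the diff.
inline bool looksLikeS3Express(const std::string & endpoint)
{
    return endpoint.find("s3express") != std::string::npos;
}
```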
@@ -92,6 +92,8 @@ private:
std::unordered_map<ClientCache *, std::weak_ptr<ClientCache>> client_caches;
};

bool isS3ExpressEndpoint(const std::string & endpoint);

struct ClientSettings
{
bool use_virtual_addressing;
@@ -107,6 +109,7 @@ struct ClientSettings
/// Ability to enable it preserved since likely it is required for old
/// files.
bool gcs_issue_compose_request;
bool is_s3express_bucket;
};

/// Client that improves the client from the AWS SDK
@@ -208,6 +211,9 @@ public:
const std::shared_ptr<Aws::Http::HttpRequest>& httpRequest) const override;

bool supportsMultiPartCopy() const;

bool isS3ExpressBucket() const { return client_settings.is_s3express_bucket; }

private:
friend struct ::MockS3::Client;
@@ -21,12 +21,44 @@
#include <aws/s3/model/UploadPartCopyRequest.h>
#include <aws/s3/model/DeleteObjectRequest.h>
#include <aws/s3/model/DeleteObjectsRequest.h>
#include <aws/s3/model/ChecksumAlgorithm.h>
#include <aws/s3/model/CompletedPart.h>
#include <aws/core/utils/HashingUtils.h>

#include <base/defines.h>

namespace DB::S3
{

namespace Model = Aws::S3::Model;

/// Used only for S3Express
namespace RequestChecksum
{
inline void setPartChecksum(Model::CompletedPart & part, const std::string & checksum)
{
part.SetChecksumCRC32(checksum);
}

inline void setRequestChecksum(Model::UploadPartRequest & req, const std::string & checksum)
{
req.SetChecksumCRC32(checksum);
}

inline std::string calculateChecksum(Model::UploadPartRequest & req)
{
chassert(req.GetChecksumAlgorithm() == Aws::S3::Model::ChecksumAlgorithm::CRC32);
return Aws::Utils::HashingUtils::Base64Encode(Aws::Utils::HashingUtils::CalculateCRC32(*(req.GetBody())));
}

template <typename R>
inline void setChecksumAlgorithm(R & request)
{
if constexpr (requires { request.SetChecksumAlgorithm(Model::ChecksumAlgorithm::CRC32); })
request.SetChecksumAlgorithm(Model::ChecksumAlgorithm::CRC32);
}
};

template <typename BaseRequest>
class ExtendedRequest : public BaseRequest
{
@@ -49,11 +81,13 @@ public:

Aws::String GetChecksumAlgorithmName() const override
{
chassert(!is_s3express_bucket || checksum);

/// Returning an empty string is enough to disable checksums (see
/// AWSClient::AddChecksumToRequest [1] for more details).
///
/// [1]: https://github.com/aws/aws-sdk-cpp/blob/b0ee1c0d336dbb371c34358b68fba6c56aae2c92/src/aws-cpp-sdk-core/source/client/AWSClient.cpp#L783-L839
if (!checksum)
if (!is_s3express_bucket && !checksum)
return "";
return BaseRequest::GetChecksumAlgorithmName();
}
@@ -84,9 +118,12 @@ public:
}

/// Disable checksum to avoid extra read of the input stream
void disableChecksum() const
void disableChecksum() const { checksum = false; }

void setIsS3ExpressBucket()
{
checksum = false;
is_s3express_bucket = true;
RequestChecksum::setChecksumAlgorithm(*this);
}

protected:
@@ -94,6 +131,7 @@ protected:
mutable std::optional<S3::URI> uri_override;
mutable ApiMode api_mode{ApiMode::AWS};
mutable bool checksum = true;
bool is_s3express_bucket = false;
};

class CopyObjectRequest : public ExtendedRequest<Model::CopyObjectRequest>
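For context, a small sketch of what the checksum helpers above compute: a CRC32 over the part body, Base64-encoded, as carried in the `x-amz-checksum-crc32` header. This mirrors `RequestChecksum::calculateChecksum()` on a plain stream rather than an `UploadPartRequest`, assuming the AWS SDK for C++ is available:

```cpp
#include <aws/core/utils/HashingUtils.h>
#include <sstream>

// CRC32 of the stream contents, Base64-encoded, as S3Express expects.
Aws::String crc32Base64(Aws::IOStream & body)
{
    return Aws::Utils::HashingUtils::Base64Encode(
        Aws::Utils::HashingUtils::CalculateCRC32(body));
}

int main()
{
    std::stringstream body("part payload");
    // In the real code this value is attached via SetChecksumCRC32().
    Aws::String checksum = crc32Base64(body);
    (void)checksum;
}
```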
@@ -35,7 +35,7 @@ URI::URI(const std::string & uri_)
/// Case when bucket name represented in domain name of S3 URL.
/// E.g. (https://bucket-name.s3.Region.amazonaws.com/key)
/// https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html#virtual-hosted-style-access
static const RE2 virtual_hosted_style_pattern(R"((.+)\.(s3|cos|obs|oss|eos)([.\-][a-z0-9\-.:]+))");
static const RE2 virtual_hosted_style_pattern(R"((.+)\.(s3express[\-a-z0-9]+|s3|cos|obs|oss|eos)([.\-][a-z0-9\-.:]+))");

/// Case when bucket name and key represented in path of S3 URL.
/// E.g. (https://s3.Region.amazonaws.com/bucket-name/key)
@@ -43,6 +43,7 @@ URI::URI(const std::string & uri_)
static const RE2 path_style_pattern("^/([^/]*)/(.*)");

static constexpr auto S3 = "S3";
static constexpr auto S3EXPRESS = "S3EXPRESS";
static constexpr auto COSN = "COSN";
static constexpr auto COS = "COS";
static constexpr auto OBS = "OBS";
@@ -115,21 +116,16 @@ URI::URI(const std::string & uri_)
}

boost::to_upper(name);
if (name != S3 && name != COS && name != OBS && name != OSS && name != EOS)
/// For S3Express it will look like s3express-eun1-az1, i.e. contain region and AZ info
if (name != S3 && !name.starts_with(S3EXPRESS) && name != COS && name != OBS && name != OSS && name != EOS)
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Object storage system name is unrecognized in virtual hosted style S3 URI: {}",
quoteString(name));

if (name == S3)
storage_name = name;
else if (name == OBS)
storage_name = OBS;
else if (name == OSS)
storage_name = OSS;
else if (name == EOS)
storage_name = EOS;
else
if (name == COS)
storage_name = COSN;
else
storage_name = name;
}
else if (re2::RE2::PartialMatch(uri.getPath(), path_style_pattern, &bucket, &key))
{
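A quick check of the extended pattern against an S3Express virtual-hosted-style host, capturing the bucket and service name the same way `URI::URI()` does (a sketch assuming RE2 is available):

```cpp
#include <re2/re2.h>
#include <iostream>
#include <string>

int main()
{
    // Pattern copied verbatim from the diff above.
    static const RE2 virtual_hosted_style_pattern(
        R"((.+)\.(s3express[\-a-z0-9]+|s3|cos|obs|oss|eos)([.\-][a-z0-9\-.:]+))");

    std::string host = "test-perf-bucket--eun1-az1--x-s3.s3express-eun1-az1.eu-north-1.amazonaws.com";
    std::string bucket, name, rest;
    if (RE2::FullMatch(host, virtual_hosted_style_pattern, &bucket, &name, &rest))
        std::cout << bucket << " / " << name << '\n';
    // Prints: test-perf-bucket--eun1-az1--x-s3 / s3express-eun1-az1
}
```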
@@ -110,7 +110,8 @@ void testServerSideEncryption(
bool disable_checksum,
String server_side_encryption_customer_key_base64,
DB::S3::ServerSideEncryptionKMSConfig sse_kms_config,
String expected_headers)
String expected_headers,
bool is_s3express_bucket = false)
{
TestPocoHTTPServer http;

@@ -144,6 +145,7 @@ void testServerSideEncryption(
.use_virtual_addressing = uri.is_virtual_hosted_style,
.disable_checksum = disable_checksum,
.gcs_issue_compose_request = false,
.is_s3express_bucket = is_s3express_bucket,
};

std::shared_ptr<DB::S3::Client> client = DB::S3::ClientFactory::instance().create(
@@ -295,4 +297,25 @@ TEST(IOTestAwsS3Client, AppendExtraSSEKMSHeadersWrite)
"x-amz-server-side-encryption-context: arn:aws:s3:::bucket_ARN\n");
}

TEST(IOTestAwsS3Client, ChecksumHeaderIsPresentForS3Express)
{
/// See https://github.com/ClickHouse/ClickHouse/pull/19748
testServerSideEncryption(
doWriteRequest,
/* disable_checksum= */ true,
"",
{},
"authorization: ... SignedHeaders="
"amz-sdk-invocation-id;"
"amz-sdk-request;"
"content-length;"
"content-type;"
"host;"
"x-amz-checksum-crc32;"
"x-amz-content-sha256;"
"x-amz-date;"
"x-amz-sdk-checksum-algorithm, ...\n",
/*is_s3express_bucket=*/true);
}

#endif
@@ -18,8 +18,6 @@
#include <IO/S3/getObjectInfo.h>
#include <IO/S3/BlobStorageLogWriter.h>

#include <aws/s3/model/StorageClass.h>

#include <utility>

@@ -469,6 +467,14 @@ S3::UploadPartRequest WriteBufferFromS3::getUploadRequest(size_t part_number, Pa
/// If we don't do it, AWS SDK can mistakenly set it to application/xml, see https://github.com/aws/aws-sdk-cpp/issues/1840
req.SetContentType("binary/octet-stream");

/// Checksums need to be provided on CompleteMultipartUpload requests, so we calculate them manually and store in multipart_checksums
if (client_ptr->isS3ExpressBucket())
{
auto checksum = S3::RequestChecksum::calculateChecksum(req);
S3::RequestChecksum::setRequestChecksum(req, checksum);
multipart_checksums.push_back(std::move(checksum));
}

return req;
}

@@ -588,7 +594,10 @@ void WriteBufferFromS3::completeMultipartUpload()
for (size_t i = 0; i < multipart_tags.size(); ++i)
{
Aws::S3::Model::CompletedPart part;
multipart_upload.AddParts(part.WithETag(multipart_tags[i]).WithPartNumber(static_cast<int>(i + 1)));
part.WithETag(multipart_tags[i]).WithPartNumber(static_cast<int>(i + 1));
if (!multipart_checksums.empty())
S3::RequestChecksum::setPartChecksum(part, multipart_checksums.at(i));
multipart_upload.AddParts(part);
}

req.SetMultipartUpload(multipart_upload);
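A sketch of the part-assembly loop above in isolation: each part carries its ETag and 1-based part number, plus the recorded CRC32 value when S3Express checksums were collected. The free function is illustrative only, using AWS SDK model types:

```cpp
#include <aws/s3/model/CompletedMultipartUpload.h>
#include <aws/s3/model/CompletedPart.h>
#include <deque>

Aws::S3::Model::CompletedMultipartUpload assembleParts(
    const std::deque<Aws::String> & tags,
    const std::deque<Aws::String> & checksums)
{
    Aws::S3::Model::CompletedMultipartUpload multipart_upload;
    for (size_t i = 0; i < tags.size(); ++i)
    {
        Aws::S3::Model::CompletedPart part;
        // Part numbers are 1-based in the S3 API.
        part.WithETag(tags[i]).WithPartNumber(static_cast<int>(i + 1));
        // Attach the per-part checksum only when one was recorded (S3Express).
        if (!checksums.empty())
            part.SetChecksumCRC32(checksums.at(i));
        multipart_upload.AddParts(part);
    }
    return multipart_upload;
}
```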
@@ -92,6 +92,7 @@ private:
/// We initiate upload, then upload each part and get ETag as a response, and then finalizeImpl() upload with listing all our parts.
String multipart_upload_id;
std::deque<String> multipart_tags;
std::deque<String> multipart_checksums; // if enabled
bool multipart_upload_finished = false;

/// Track that prefinalize() is called only once
@@ -162,6 +162,14 @@ TEST(S3UriTest, validPatterns)
ASSERT_EQ("", uri.version_id);
ASSERT_EQ(false, uri.is_virtual_hosted_style);
}
{
S3::URI uri("https://test-perf-bucket--eun1-az1--x-s3.s3express-eun1-az1.eu-north-1.amazonaws.com/test.csv");
ASSERT_EQ("https://s3express-eun1-az1.eu-north-1.amazonaws.com", uri.endpoint);
ASSERT_EQ("test-perf-bucket--eun1-az1--x-s3", uri.bucket);
ASSERT_EQ("test.csv", uri.key);
ASSERT_EQ("", uri.version_id);
ASSERT_EQ(true, uri.is_virtual_hosted_style);
}
}

TEST_P(S3UriTest, invalidPatterns)
@@ -205,16 +205,17 @@ struct Client : DB::S3::Client
{
explicit Client(std::shared_ptr<S3MemStrore> mock_s3_store)
: DB::S3::Client(
100,
DB::S3::ServerSideEncryptionKMSConfig(),
std::make_shared<Aws::Auth::SimpleAWSCredentialsProvider>("", ""),
GetClientConfiguration(),
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
DB::S3::ClientSettings{
.use_virtual_addressing = true,
.disable_checksum= false,
.gcs_issue_compose_request = false,
})
100,
DB::S3::ServerSideEncryptionKMSConfig(),
std::make_shared<Aws::Auth::SimpleAWSCredentialsProvider>("", ""),
GetClientConfiguration(),
Aws::Client::AWSAuthV4Signer::PayloadSigningPolicy::Never,
DB::S3::ClientSettings{
.use_virtual_addressing = true,
.disable_checksum = false,
.gcs_issue_compose_request = false,
.is_s3express_bucket = false,
})
, store(mock_s3_store)
{}
@@ -221,10 +221,6 @@ struct ContextSharedPart : boost::noncopyable

ConfigurationPtr sensitive_data_masker_config;

#if USE_NURAFT
mutable std::mutex keeper_dispatcher_mutex;
mutable std::shared_ptr<KeeperDispatcher> keeper_dispatcher TSA_GUARDED_BY(keeper_dispatcher_mutex);
#endif
mutable std::mutex auxiliary_zookeepers_mutex;
mutable std::map<String, zkutil::ZooKeeperPtr> auxiliary_zookeepers TSA_GUARDED_BY(auxiliary_zookeepers_mutex); /// Map for auxiliary ZooKeeper clients.
ConfigurationPtr auxiliary_zookeepers_config TSA_GUARDED_BY(auxiliary_zookeepers_mutex); /// Stores auxiliary zookeepers configs
@@ -417,6 +413,11 @@ struct ContextSharedPart : boost::noncopyable

bool is_server_completely_started TSA_GUARDED_BY(mutex) = false;

#if USE_NURAFT
mutable std::mutex keeper_dispatcher_mutex;
mutable std::shared_ptr<KeeperDispatcher> keeper_dispatcher TSA_GUARDED_BY(keeper_dispatcher_mutex);
#endif

ContextSharedPart()
: access_control(std::make_unique<AccessControl>())
, global_overcommit_tracker(&process_list)
@@ -432,9 +433,22 @@ struct ContextSharedPart : boost::noncopyable
}
}

~ContextSharedPart()
{
#if USE_NURAFT
if (keeper_dispatcher)
{
try
{
keeper_dispatcher->shutdown();
}
catch (...)
{
tryLogCurrentException(__PRETTY_FUNCTION__);
}
}
#endif

/// Wait for thread pool for background reads and writes,
/// since it may use per-user MemoryTracker which will be destroyed here.
if (asynchronous_remote_fs_reader)
@@ -1087,8 +1087,9 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
// If this is a stub ATTACH query, read the query definition from the database
if (create.attach && !create.storage && !create.columns_list)
{
auto database = DatabaseCatalog::instance().getDatabase(database_name);
if (database->shouldReplicateQuery(getContext(), query_ptr))
// In case of an ON CLUSTER query, the database may not be present on the initiator node
auto database = DatabaseCatalog::instance().tryGetDatabase(database_name);
if (database && database->shouldReplicateQuery(getContext(), query_ptr))
{
auto guard = DatabaseCatalog::instance().getDDLGuard(database_name, create.getTable());
create.setDatabase(database_name);
@@ -1099,6 +1100,9 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
if (!create.cluster.empty())
return executeQueryOnCluster(create);

if (!database)
throw Exception(ErrorCodes::UNKNOWN_DATABASE, "Database {} does not exist", backQuoteIfNeed(database_name));

/// For short syntax of ATTACH query we have to lock table name here, before reading metadata
/// and hold it until table is attached
if (likely(need_ddl_guard))
@@ -1250,6 +1254,7 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)

DatabasePtr database;
bool need_add_to_database = !create.temporary;
// In case of an ON CLUSTER query, the database may not be present on the initiator node
if (need_add_to_database)
database = DatabaseCatalog::instance().tryGetDatabase(database_name);

@@ -1270,7 +1275,7 @@ BlockIO InterpreterCreateQuery::createTable(ASTCreateQuery & create)
"CREATE AS SELECT is not supported with Replicated databases. Use separate CREATE and INSERT queries");
}

if (need_add_to_database && database && database->shouldReplicateQuery(getContext(), query_ptr))
if (database && database->shouldReplicateQuery(getContext(), query_ptr))
{
chassert(!ddl_guard);
auto guard = DatabaseCatalog::instance().getDDLGuard(create.getDatabase(), create.getTable());
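The shape of the `getDatabase` to `tryGetDatabase` change above, on a toy catalog (illustrative only, not ClickHouse's DatabaseCatalog): try-get returns `nullptr` so the ON CLUSTER path can run before the missing-database error is raised:

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

struct Database { std::string name; };
using DatabasePtr = std::shared_ptr<Database>;

struct Catalog
{
    std::map<std::string, DatabasePtr> databases;

    // Returns nullptr when the database is absent; callers decide what to do.
    DatabasePtr tryGetDatabase(const std::string & name) const
    {
        auto it = databases.find(name);
        return it == databases.end() ? nullptr : it->second;
    }

    // Hard-failing variant: throws instead of returning nullptr.
    DatabasePtr getDatabase(const std::string & name) const
    {
        if (auto database = tryGetDatabase(name))
            return database;
        throw std::runtime_error("Database " + name + " does not exist");
    }
};
```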
@@ -122,6 +122,8 @@ FutureSetFromSubquery::FutureSetFromSubquery(
set_and_key->set = std::make_shared<Set>(size_limits, settings.use_index_for_in_with_subqueries_max_values, settings.transform_null_in);
}

FutureSetFromSubquery::~FutureSetFromSubquery() = default;

SetPtr FutureSetFromSubquery::get() const
{
if (set_and_key->set != nullptr && set_and_key->set->isCreated())

@@ -108,6 +108,8 @@ public:
QueryTreeNodePtr query_tree_,
const Settings & settings);

~FutureSetFromSubquery() override;

SetPtr get() const override;
DataTypes getTypes() const override;
SetPtr buildOrderedSetInplace(const ContextPtr & context) override;

@@ -32,6 +32,9 @@ const ColumnIdentifier & GlobalPlannerContext::createColumnIdentifier(const Name
column_identifier = column.name;

auto [it, inserted] = column_identifiers.emplace(column_identifier);
if (!inserted)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Column identifier {} is already registered", column_identifier);

assert(inserted);

return *it;

@@ -960,8 +960,14 @@ JoinTreeQueryPlan buildQueryPlanForTableExpression(QueryTreeNodePtr table_expres
}
else
{
std::shared_ptr<GlobalPlannerContext> subquery_planner_context;
if (wrap_read_columns_in_subquery)
subquery_planner_context = std::make_shared<GlobalPlannerContext>(nullptr, nullptr, FiltersForTableExpressionMap{});
else
subquery_planner_context = planner_context->getGlobalPlannerContext();

auto subquery_options = select_query_options.subquery();
Planner subquery_planner(table_expression, subquery_options, planner_context->getGlobalPlannerContext());
Planner subquery_planner(table_expression, subquery_options, subquery_planner_context);
/// Propagate storage limits to subquery
subquery_planner.addStorageLimits(*select_query_info.storage_limits);
subquery_planner.buildQueryPlanIfNeeded();
|
||||
const auto & type = *header.getByPosition(j).type;
|
||||
writeValueWithPadding(*columns[j], *serializations[j], i,
|
||||
widths[j].empty() ? max_widths[j] : widths[j][i],
|
||||
max_widths[j], type.shouldAlignRightInPrettyFormats());
|
||||
max_widths[j], type.shouldAlignRightInPrettyFormats(), isNumber(type));
|
||||
}
|
||||
|
||||
writeCString(grid_symbols.bar, out);
|
||||
@ -322,9 +322,75 @@ void PrettyBlockOutputFormat::writeChunk(const Chunk & chunk, PortKind port_kind
|
||||
}
|
||||
|
||||
|
||||
static String highlightDigitGroups(String source)
|
||||
{
|
||||
if (source.size() <= 4)
|
||||
return source;
|
||||
|
||||
bool is_regular_number = true;
|
||||
size_t num_digits_before_decimal = 0;
|
||||
for (auto c : source)
|
||||
{
|
||||
if (c == '-' || c == ' ')
|
||||
continue;
|
||||
if (c == '.')
|
||||
break;
|
||||
if (c >= '0' && c <= '9')
|
||||
{
|
||||
++num_digits_before_decimal;
|
||||
}
|
||||
else
|
||||
{
|
||||
is_regular_number = false;
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
if (!is_regular_number || num_digits_before_decimal <= 4)
|
||||
return source;
|
||||
|
||||
String result;
|
||||
size_t size = source.size();
|
||||
result.reserve(2 * size);
|
||||
|
||||
bool before_decimal = true;
|
||||
size_t digit_num = 0;
|
||||
for (size_t i = 0; i < size; ++i)
|
||||
{
|
||||
auto c = source[i];
|
||||
if (before_decimal && c >= '0' && c <= '9')
|
||||
{
|
||||
++digit_num;
|
||||
size_t offset = num_digits_before_decimal - digit_num;
|
||||
if (offset && offset % 3 == 0)
|
||||
{
|
||||
result += "\033[4m";
|
||||
result += c;
|
||||
result += "\033[0m";
|
||||
}
|
||||
else
|
||||
{
|
||||
result += c;
|
||||
}
|
||||
}
|
||||
else if (c == '.')
|
||||
{
|
||||
before_decimal = false;
|
||||
result += c;
|
||||
}
|
||||
else
|
||||
{
|
||||
result += c;
|
||||
}
|
||||
}
|
||||
|
||||
return result;
|
||||
}
|
||||
|
||||
|
||||
void PrettyBlockOutputFormat::writeValueWithPadding(
|
||||
const IColumn & column, const ISerialization & serialization, size_t row_num,
|
||||
size_t value_width, size_t pad_to_width, bool align_right)
|
||||
size_t value_width, size_t pad_to_width, bool align_right, bool is_number)
|
||||
{
|
||||
String serialized_value = " ";
|
||||
{
|
||||
@ -359,6 +425,10 @@ void PrettyBlockOutputFormat::writeValueWithPadding(
|
||||
writeChar(' ', out);
|
||||
};
|
||||
|
||||
/// Highlight groups of thousands.
|
||||
if (color && is_number && format_settings.pretty.highlight_digit_groups)
|
||||
serialized_value = highlightDigitGroups(serialized_value);
|
||||
|
||||
if (align_right)
|
||||
{
|
||||
write_padding();
|
||||
@ -419,16 +489,19 @@ void PrettyBlockOutputFormat::writeReadableNumberTip(const Chunk & chunk)
|
||||
auto is_single_number = readable_number_tip && chunk.getNumRows() == 1 && chunk.getNumColumns() == 1;
|
||||
if (!is_single_number)
|
||||
return;
|
||||
|
||||
auto value = columns[0]->getFloat64(0);
|
||||
auto threshold = format_settings.pretty.output_format_pretty_single_large_number_tip_threshold;
|
||||
if (threshold == 0 || value <= threshold)
|
||||
return;
|
||||
if (color)
|
||||
writeCString("\033[90m", out);
|
||||
writeCString(" -- ", out);
|
||||
formatReadableQuantity(value, out, 2);
|
||||
if (color)
|
||||
writeCString("\033[0m", out);
|
||||
|
||||
if (threshold && isFinite(value) && abs(value) > threshold)
|
||||
{
|
||||
if (color)
|
||||
writeCString("\033[90m", out);
|
||||
writeCString(" -- ", out);
|
||||
formatReadableQuantity(value, out, 2);
|
||||
if (color)
|
||||
writeCString("\033[0m", out);
|
||||
}
|
||||
}
|
||||
|
||||
void registerOutputFormatPretty(FormatFactory & factory)
|
||||
|
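For reference, the expected effect of `highlightDigitGroups()` on a 7-digit value: the leading digit of each complete thousands group (offsets 6 and 3 from the decimal point) is wrapped in ANSI underline escapes. A tiny check of that expectation:

```cpp
#include <cassert>
#include <string>

int main()
{
    // For "1234567", the digits '1' (offset 6) and '4' (offset 3) get
    // underlined; the final group of three is left untouched.
    std::string expected = "\033[4m1\033[0m23\033[4m4\033[0m567";
    assert(expected.size() == 7 + 2 * 8); // 7 digits + two escape pairs of 8 bytes
    return 0;
}
```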
@@ -48,7 +48,7 @@ protected:

void writeValueWithPadding(
const IColumn & column, const ISerialization & serialization, size_t row_num,
size_t value_width, size_t pad_to_width, bool align_right);
size_t value_width, size_t pad_to_width, bool align_right, bool is_number);

void resetFormatterImpl() override
{

@@ -169,7 +169,7 @@ void PrettyCompactBlockOutputFormat::writeRow(

const auto & type = *header.getByPosition(j).type;
const auto & cur_widths = widths[j].empty() ? max_widths[j] : widths[j][row_num];
writeValueWithPadding(*columns[j], *serializations[j], row_num, cur_widths, max_widths[j], type.shouldAlignRightInPrettyFormats());
writeValueWithPadding(*columns[j], *serializations[j], row_num, cur_widths, max_widths[j], type.shouldAlignRightInPrettyFormats(), isNumber(type));
}

writeCString(grid_symbols.bar, out);

@@ -84,7 +84,7 @@ void PrettySpaceBlockOutputFormat::writeChunk(const Chunk & chunk, PortKind port
const auto & type = *header.getByPosition(column).type;
auto & cur_width = widths[column].empty() ? max_widths[column] : widths[column][row];
writeValueWithPadding(
*columns[column], *serializations[column], row, cur_width, max_widths[column], type.shouldAlignRightInPrettyFormats());
*columns[column], *serializations[column], row, cur_width, max_widths[column], type.shouldAlignRightInPrettyFormats(), isNumber(type));
}

writeReadableNumberTip(chunk);
@@ -192,6 +192,8 @@ MySQLHandler::MySQLHandler(
settings_replacements.emplace("NET_READ_TIMEOUT", "receive_timeout");
}

MySQLHandler::~MySQLHandler() = default;

void MySQLHandler::run()
{
setThreadName("MySQLHandler");

@@ -46,6 +46,8 @@ public:
const ProfileEvents::Event & read_event_ = ProfileEvents::end(),
const ProfileEvents::Event & write_event_ = ProfileEvents::end());

~MySQLHandler() override;

void run() final;

protected:

@@ -624,6 +624,8 @@ HDFSSource::HDFSSource(
initialize();
}

HDFSSource::~HDFSSource() = default;

bool HDFSSource::initialize()
{
bool skip_empty_files = getContext()->getSettingsRef().hdfs_skip_empty_files;

@@ -153,6 +153,8 @@ public:
std::shared_ptr<IteratorWrapper> file_iterator_,
bool need_only_count_);

~HDFSSource() override;

String getName() const override;

Chunk generate() override;

@@ -386,6 +386,8 @@ StorageKafka::StorageKafka(
});
}

StorageKafka::~StorageKafka() = default;

VirtualColumnsDescription StorageKafka::createVirtuals(StreamingHandleErrorMode handle_error_mode)
{
VirtualColumnsDescription desc;

@@ -42,6 +42,8 @@ public:
std::unique_ptr<KafkaSettings> kafka_settings_,
const String & collection_name_);

~StorageKafka() override;

std::string getName() const override { return "Kafka"; }

bool noPushingToViews() const override { return true; }

@@ -197,6 +197,8 @@ StorageEmbeddedRocksDB::StorageEmbeddedRocksDB(const StorageID & table_id_,
initDB();
}

StorageEmbeddedRocksDB::~StorageEmbeddedRocksDB() = default;

void StorageEmbeddedRocksDB::truncate(const ASTPtr &, const StorageMetadataPtr & , ContextPtr, TableExclusiveLockHolder &)
{
std::lock_guard lock(rocksdb_ptr_mx);

@@ -39,6 +39,8 @@ public:
String rocksdb_dir_ = "",
bool read_only_ = false);

~StorageEmbeddedRocksDB() override;

std::string getName() const override { return "EmbeddedRocksDB"; }

void read(

@@ -12,6 +12,7 @@
#include <Interpreters/InterpreterDropQuery.h>
#include <Interpreters/InterpreterInsertQuery.h>
#include <Interpreters/InterpreterRenameQuery.h>
#include <Interpreters/InterpreterSelectWithUnionQuery.h>
#include <Interpreters/getHeaderForProcessingStage.h>
#include <Interpreters/getTableExpressions.h>

@@ -42,6 +43,7 @@ namespace ErrorCodes
extern const int INCORRECT_QUERY;
extern const int QUERY_IS_NOT_SUPPORTED_IN_MATERIALIZED_VIEW;
extern const int TOO_MANY_MATERIALIZED_VIEWS;
extern const int NO_SUCH_COLUMN_IN_TABLE;
}

namespace ActionLocks
@@ -390,12 +392,22 @@ void StorageMaterializedView::alter(
StorageInMemoryMetadata old_metadata = getInMemoryMetadata();
params.apply(new_metadata, local_context);

/// start modify query
const auto & new_select = new_metadata.select;
const auto & old_select = old_metadata.getSelectQuery();

DatabaseCatalog::instance().updateViewDependency(old_select.select_table_id, table_id, new_select.select_table_id, table_id);
/// end modify query

new_metadata.setSelectQuery(new_select);

/// Check the materialized view's inner table structure.
if (has_inner_table)
{
const Block & block = InterpreterSelectWithUnionQuery::getSampleBlock(new_select.select_query, local_context);
const auto & inner_table_metadata = tryGetTargetTable()->getInMemoryMetadata().columns;
for (const auto & name : block.getNames())
if (!inner_table_metadata.has(name))
throw Exception(ErrorCodes::NO_SUCH_COLUMN_IN_TABLE, "Column {} does not exist in the materialized view's inner table", name);
}

DatabaseCatalog::instance().getDatabase(table_id.database_name)->alterTable(local_context, table_id, new_metadata);
setInMemoryMetadata(new_metadata);
@@ -497,7 +509,7 @@ void StorageMaterializedView::renameInMemory(const StorageID & new_table_id)
updateTargetTableId(new_table_id.database_name, std::nullopt);
}
const auto & select_query = metadata_snapshot->getSelectQuery();
// TODO Actually we don't need to update dependency if MV has UUID, but then db and table name will be outdated
/// TODO: Actually, we don't need to update dependency if MV has UUID, but then db and table name will be outdated
DatabaseCatalog::instance().updateViewDependency(select_query.select_table_id, old_table_id, select_query.select_table_id, getStorageID());

if (refresher)

@@ -129,6 +129,7 @@ namespace ErrorCodes
extern const int NOT_IMPLEMENTED;
extern const int CANNOT_COMPILE_REGEXP;
extern const int FILE_DOESNT_EXIST;
extern const int NO_ELEMENTS_IN_CONFIG;
}

@@ -1408,6 +1409,9 @@ void StorageS3::Configuration::connect(const ContextPtr & context)
const Settings & global_settings = context->getGlobalContext()->getSettingsRef();
const Settings & local_settings = context->getSettingsRef();

if (S3::isS3ExpressEndpoint(url.endpoint) && auth_settings.region.empty())
throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG, "Region should be explicitly specified for directory buckets");

S3::PocoHTTPClientConfiguration client_configuration = S3::ClientFactory::instance().createClientConfiguration(
auth_settings.region,
context->getRemoteHostFilter(),
@@ -1434,6 +1438,7 @@ void StorageS3::Configuration::connect(const ContextPtr & context)
.use_virtual_addressing = url.is_virtual_hosted_style,
.disable_checksum = local_settings.s3_disable_checksum,
.gcs_issue_compose_request = context->getConfigRef().getBool("s3.gcs_issue_compose_request", false),
.is_s3express_bucket = S3::isS3ExpressEndpoint(url.endpoint),
};

auto credentials = Aws::Auth::AWSCredentials(auth_settings.access_key_id, auth_settings.secret_access_key, auth_settings.session_token);

@@ -387,6 +387,8 @@ StorageURLSource::StorageURLSource(
};
}

StorageURLSource::~StorageURLSource() = default;

Chunk StorageURLSource::generate()
{
while (true)

@@ -181,6 +181,8 @@ public:
bool glob_url = false,
bool need_only_count_ = false);

~StorageURLSource() override;

String getName() const override { return name; }

void setKeyCondition(const ActionsDAGPtr & filter_actions_dag, ContextPtr context_) override
@@ -1,4 +1,3 @@
00223_shard_distributed_aggregation_memory_efficient
00725_memory_tracking
01155_rename_move_materialized_view
01624_soft_constraints
@@ -6,6 +5,4 @@
# Check after constants refactoring
02901_parallel_replicas_rollup
# Flaky. Please don't delete them without fixing them:
01287_max_execution_speed
02003_WithMergeableStateAfterAggregationAndLimit_LIMIT_BY_LIMIT_OFFSET
02404_memory_bound_merging

@@ -216,11 +216,14 @@ def gen_tags(version: ClickHouseVersion, release_type: str) -> List[str]:
return tags

def buildx_args(urls: Dict[str, str], arch: str, direct_urls: List[str]) -> List[str]:
def buildx_args(
    urls: Dict[str, str], arch: str, direct_urls: List[str], version: str
) -> List[str]:
args = [
f"--platform=linux/{arch}",
f"--label=build-url={GITHUB_RUN_URL}",
f"--label=com.clickhouse.build.githash={git.sha}",
f"--label=com.clickhouse.build.version={version}",
]
if direct_urls:
args.append(f"--build-arg=DIRECT_DOWNLOAD_URLS='{' '.join(direct_urls)}'")
@@ -267,7 +270,9 @@ def build_and_push_image(
urls = [url for url in direct_urls[arch] if ".deb" in url]
else:
urls = [url for url in direct_urls[arch] if ".tgz" in url]
cmd_args.extend(buildx_args(repo_urls, arch, direct_urls=urls))
cmd_args.extend(
    buildx_args(repo_urls, arch, direct_urls=urls, version=version.describe)
)
if not push:
cmd_args.append(f"--tag={image.repo}:{arch_tag}")
cmd_args.extend(

@@ -1387,7 +1387,6 @@ class TestCase:
self.reference_file,
self.stdout_file,
],
encoding="latin-1",
stdout=PIPE,
universal_newlines=True,
) as diff_proc:

@@ -46,6 +46,12 @@ def test_ddl(started_cluster):
control_node.query(
    "ALTER TABLE test_db.test_table ON CLUSTER 'external' add column data String"
)
control_node.query("DETACH TABLE test_db.test_table ON CLUSTER 'external'")

expected = ""
assert_create_query(data_node, "test_db", "test_table", expected)

control_node.query("ATTACH TABLE test_db.test_table ON CLUSTER 'external'")

expected = "CREATE TABLE test_db.test_table (`id` Int64, `data` String) ENGINE = MergeTree ORDER BY id SETTINGS index_granularity = 8192"
assert_create_query(data_node, "test_db", "test_table", expected)

@@ -56,13 +56,13 @@ SELECT
WHERE
length(message_format_string) = 0
AND (message like '%DB::Exception%' or message like '%Coordination::Exception%')
AND message not like '% Received from %' and message not like '%(SYNTAX_ERROR)%'
AND message not like '% Received from %' and message not like '%(SYNTAX_ERROR)%' and message not like '%Fault injection%'
GROUP BY message ORDER BY c LIMIT 10
))
FROM logs
WHERE
(message like '%DB::Exception%' or message like '%Coordination::Exception%')
AND message not like '% Received from %' and message not like '%(SYNTAX_ERROR)%';
AND message not like '% Received from %' and message not like '%(SYNTAX_ERROR)%' and message not like '%Fault injection%';

-- FIXME some of the following messages are not informative and it has to be fixed

@@ -1,4 +1,4 @@
SET output_format_pretty_color=1;
SET output_format_pretty_color=1, output_format_pretty_highlight_digit_groups=0;
SELECT toUInt64(round(exp10(number))) AS x, toString(x) AS s FROM system.numbers LIMIT 10 FORMAT Pretty;
SELECT toUInt64(round(exp10(number))) AS x, toString(x) AS s FROM system.numbers LIMIT 10 FORMAT PrettyCompact;
SELECT toUInt64(round(exp10(number))) AS x, toString(x) AS s FROM system.numbers LIMIT 10 FORMAT PrettySpace;

@@ -1,4 +1,4 @@
1
1
1 1
1 1 1
1

@@ -1,5 +1,5 @@
-- since actual user name is unknown, have to perform just smoke tests
-- Since the actual user name is unknown, have to perform just smoke tests
select currentUser() IS NOT NULL;
select length(currentUser()) > 0;
select currentUser() = user(), currentUser() = USER();
select currentUser() = initial_user from system.processes where query like '%$!@#%';
select currentUser() = user(), currentUser() = USER(), current_user() = currentUser();
select currentUser() = initial_user from system.processes where query like '%$!@#%' AND current_database = currentDatabase();
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,14 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,14 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -40,6 +40,6 @@
|
||||
86.59 quadrillion 86.59 quadrillion 2.15 billion
|
||||
235.39 quadrillion 235.39 quadrillion 2.15 billion
|
||||
639.84 quadrillion 639.84 quadrillion 2.15 billion
|
||||
1739.27 quadrillion 1739.27 quadrillion 2.15 billion
|
||||
4727.84 quadrillion 4727.84 quadrillion 2.15 billion
|
||||
12851.60 quadrillion 12851.60 quadrillion 2.15 billion
|
||||
1.74 quintillion 1.74 quintillion 2.15 billion
|
||||
4.73 quintillion 4.73 quintillion 2.15 billion
|
||||
12.85 quintillion 12.85 quintillion 2.15 billion
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -5,8 +5,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,7 +2,12 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -5,8 +5,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,7 +2,12 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,7 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 20
|
||||
@ -15,8 +21,8 @@ expect_after {
|
||||
-i $any_spawn_id timeout { exit 1 }
|
||||
}
|
||||
|
||||
system "echo \"drop table if exists t; create table t(i String) engine=Memory; insert into t select 'test string'\" > $env(CLICKHOUSE_TMP)/file_02112"
|
||||
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_CLIENT --disable_suggestion --interactive --queries-file $env(CLICKHOUSE_TMP)/file_02112"
|
||||
system "echo \"drop table if exists t; create table t(i String) engine=Memory; insert into t select 'test string'\" > $CLICKHOUSE_TMP/file_02112"
|
||||
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_CLIENT --disable_suggestion --interactive --queries-file $CLICKHOUSE_TMP/file_02112"
|
||||
expect ":) "
|
||||
|
||||
send -- "select i from t format TSV\r"
|
||||
|
@ -2,7 +2,12 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
|
||||
log_user 0
|
||||
set timeout 20
|
||||
|
@ -2,7 +2,12 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
|
||||
log_user 0
|
||||
set timeout 20
|
||||
@ -15,8 +20,8 @@ expect_after {
|
||||
-i $any_spawn_id timeout { exit 1 }
|
||||
}
|
||||
|
||||
system "echo \"drop table if exists t; create table t(i String) engine=Memory; insert into t select 'test string'\" > $env(CLICKHOUSE_TMP)/file_02112"
|
||||
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_LOCAL --disable_suggestion --interactive --queries-file $env(CLICKHOUSE_TMP)/file_02112"
|
||||
system "echo \"drop table if exists t; create table t(i String) engine=Memory; insert into t select 'test string'\" > $CLICKHOUSE_TMP/file_02112"
|
||||
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_LOCAL --disable_suggestion --interactive --queries-file $CLICKHOUSE_TMP/file_02112"
|
||||
expect ":) "
|
||||
|
||||
send -- "select \* from t format TSV\r"
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|
@ -2,8 +2,13 @@
|
||||
|
||||
set basedir [file dirname $argv0]
|
||||
set basename [file tail $argv0]
|
||||
exp_internal -f $env(CLICKHOUSE_TMP)/$basename.debuglog 0
|
||||
set history_file $env(CLICKHOUSE_TMP)/$basename.history
|
||||
if {[info exists env(CLICKHOUSE_TMP)]} {
|
||||
set CLICKHOUSE_TMP $env(CLICKHOUSE_TMP)
|
||||
} else {
|
||||
set CLICKHOUSE_TMP "."
|
||||
}
|
||||
exp_internal -f $CLICKHOUSE_TMP/$basename.debuglog 0
|
||||
set history_file $CLICKHOUSE_TMP/$basename.history
|
||||
|
||||
log_user 0
|
||||
set timeout 60
|
||||
|