Compare commits

...

25 Commits

Author SHA1 Message Date
Konstantin Bogdanov
aab1ee9cc9
Merge 93a3c1d27e into e0f8b8d351 2024-11-21 07:49:27 +01:00
Yakov Olkhovskiy
e0f8b8d351
Merge pull request #70458 from ClickHouse/fix-ephemeral-comment
Fix ephemeral column comment
2024-11-21 05:10:11 +00:00
Alexey Milovidov
da2176d696
Merge pull request #72081 from ClickHouse/add-dashboard-selector
Add advanced dashboard selector
2024-11-21 05:06:51 +00:00
Alexey Milovidov
53e0036593
Merge pull request #72176 from ClickHouse/change-ldf-major-versions
Get rid of `major` tags in official docker images
2024-11-21 05:05:41 +00:00
Alexey Milovidov
25bd73ea5e
Merge pull request #72023 from ClickHouse/fix-bind
Fix comments
2024-11-21 05:03:24 +00:00
Yakov Olkhovskiy
72d5af29e0 Merge branch 'master' into fix-ephemeral-comment 2024-11-20 22:01:54 +00:00
Konstantin Bogdanov
93a3c1d27e
What will break if I do this 2024-11-20 18:48:13 +01:00
Konstantin Bogdanov
f2a2941fa6
Fix style check 2024-11-20 18:26:58 +01:00
Mikhail f. Shiryaev
9a2a664b04
Get rid of major tags in official docker images 2024-11-20 16:36:50 +01:00
Konstantin Bogdanov
5b1f7c258b
Stricter checks 2024-11-20 16:32:04 +01:00
Konstantin Bogdanov
07821a576d
Cleaner test 2024-11-20 16:21:13 +01:00
Konstantin Bogdanov
b47a95bd4f
Remove unnecessary checks 2024-11-20 15:54:42 +01:00
Konstantin Bogdanov
e3e6c79d72
Lint 2024-11-20 14:46:29 +01:00
Konstantin Bogdanov
b1e7847ee8
Prototype a test 2024-11-19 18:37:44 +01:00
Konstantin Bogdanov
2886575776
Prototype a test 2024-11-19 18:28:01 +01:00
Konstantin Bogdanov
3ff22e3fc1
Fix arguments parsing for object storage functions 2024-11-19 16:26:58 +01:00
serxa
ad67608956 Add advanced dashboard selector 2024-11-19 13:18:21 +00:00
Konstantin Bogdanov
03352372e9
Check for isDistributed 2024-11-18 13:50:06 +01:00
Konstantin Bogdanov
e7ce3e1b73
Poke CI 2024-11-18 11:52:07 +01:00
Konstantin Bogdanov
769fe98782
Fix linking issue 2024-11-18 11:52:06 +01:00
Konstantin Bogdanov
47bae80a39
Lint 2024-11-18 11:52:06 +01:00
Konstantin Bogdanov
31eddf6073
Replace table function engines with their -Cluster alternatives 2024-11-18 11:52:04 +01:00
Alexey Milovidov
49589da56e Fix comments 2024-11-18 07:18:46 +01:00
Yakov Olkhovskiy
3827d90bb0 add test 2024-10-08 02:37:41 +00:00
Yakov Olkhovskiy
bf3a3ad607 fix ephemeral comment 2024-10-08 02:27:36 +00:00
28 changed files with 278 additions and 95 deletions

View File

@@ -16,16 +16,18 @@ ClickHouse works 100-1000x faster than traditional database management systems,
For more information and documentation see https://clickhouse.com/.
<!-- This is not related to the docker official library, remove it before commit to https://github.com/docker-library/docs -->
## Versions
- The `latest` tag points to the latest release of the latest stable branch.
- Branch tags like `22.2` point to the latest release of the corresponding branch.
- Full version tags like `22.2.3.5` point to the corresponding release.
- Full version tags like `22.2.3` and `22.2.3.5` point to the corresponding release.
<!-- docker-official-library:off -->
<!-- This is not related to the docker official library, remove it before commit to https://github.com/docker-library/docs -->
- The tag `head` is built from the latest commit to the default branch.
- Each tag has optional `-alpine` suffix to reflect that it's built on top of `alpine`.
<!-- REMOVE UNTIL HERE -->
<!-- docker-official-library:on -->
### Compatibility
- The amd64 image requires support for [SSE3 instructions](https://en.wikipedia.org/wiki/SSE3). Virtually all x86 CPUs after 2005 support SSE3.

View File

@@ -10,16 +10,18 @@ ClickHouse works 100-1000x faster than traditional database management systems,
For more information and documentation see https://clickhouse.com/.
<!-- This is not related to the docker official library, remove it before commit to https://github.com/docker-library/docs -->
## Versions
- The `latest` tag points to the latest release of the latest stable branch.
- Branch tags like `22.2` point to the latest release of the corresponding branch.
- Full version tags like `22.2.3.5` point to the corresponding release.
- Full version tags like `22.2.3` and `22.2.3.5` point to the corresponding release.
<!-- docker-official-library:off -->
<!-- This is not related to the docker official library, remove it before commit to https://github.com/docker-library/docs -->
- The tag `head` is built from the latest commit to the default branch.
- Each tag has optional `-alpine` suffix to reflect that it's built on top of `alpine`.
<!-- REMOVE UNTIL HERE -->
<!-- docker-official-library:on -->
### Compatibility
- The amd64 image requires support for [SSE3 instructions](https://en.wikipedia.org/wiki/SSE3). Virtually all x86 CPUs after 2005 support SSE3.

View File

@@ -522,4 +522,3 @@ sidebar_label: 2024
* Backported in [#68518](https://github.com/ClickHouse/ClickHouse/issues/68518): Minor update in Dynamic/JSON serializations. [#68459](https://github.com/ClickHouse/ClickHouse/pull/68459) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68558](https://github.com/ClickHouse/ClickHouse/issues/68558): CI: Minor release workflow fix. [#68536](https://github.com/ClickHouse/ClickHouse/pull/68536) ([Max K.](https://github.com/maxknv)).
* Backported in [#68576](https://github.com/ClickHouse/ClickHouse/issues/68576): CI: Tidy build timeout from 2h to 3h. [#68567](https://github.com/ClickHouse/ClickHouse/pull/68567) ([Max K.](https://github.com/maxknv)).

View File

@@ -497,4 +497,3 @@ sidebar_label: 2024
* Backported in [#69899](https://github.com/ClickHouse/ClickHouse/issues/69899): Revert "Merge pull request [#69032](https://github.com/ClickHouse/ClickHouse/issues/69032) from alexon1234/include_real_time_execution_in_http_header". [#69885](https://github.com/ClickHouse/ClickHouse/pull/69885) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#69931](https://github.com/ClickHouse/ClickHouse/issues/69931): RIPE is an acronym and thus should be capital. RIPE stands for **R**ACE **I**ntegrity **P**rimitives **E**valuation and RACE stands for **R**esearch and Development in **A**dvanced **C**ommunications **T**echnologies in **E**urope. [#69901](https://github.com/ClickHouse/ClickHouse/pull/69901) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Backported in [#70034](https://github.com/ClickHouse/ClickHouse/issues/70034): Revert "Add RIPEMD160 function". [#70005](https://github.com/ClickHouse/ClickHouse/pull/70005) ([Robert Schulze](https://github.com/rschu1ze)).

View File

@@ -936,4 +936,4 @@ SELECT mapPartialReverseSort((k, v) -> v, 2, map('k1', 3, 'k2', 1, 'k3', 2));
┌─mapPartialReverseSort(lambda(tuple(k, v), v), 2, map('k1', 3, 'k2', 1, 'k3', 2))─┐
│ {'k1':3,'k3':2,'k2':1} │
└──────────────────────────────────────────────────────────────────────────────────┘
```
```

View File

@@ -476,7 +476,7 @@
<input id="edit" type="button" value="✎" style="display: none;">
<input id="add" type="button" value="Add chart" style="display: none;">
<input id="reload" type="button" value="Reload">
<span id="search-span" class="nowrap" style="display: none;"><input id="search" type="button" value="🔎" title="Run query to obtain list of charts from ClickHouse"><input id="search-query" name="search" type="text" spellcheck="false"></span>
<span id="search-span" class="nowrap" style="display: none;"><input id="search" type="button" value="🔎" title="Run query to obtain list of charts from ClickHouse. Either select dashboard name or write your own query"><input id="search-query" name="search" list="search-options" type="text" spellcheck="false"><datalist id="search-options"></datalist></span>
<div id="chart-params"></div>
</div>
</form>
@@ -532,9 +532,15 @@ const errorMessages = [
}
]
/// Dashboard selector
const dashboardSearchQuery = (dashboard_name) => `SELECT title, query FROM system.dashboards WHERE dashboard = '${dashboard_name}'`;
let dashboard_queries = {
"Overview": dashboardSearchQuery("Overview"),
};
const default_dashboard = 'Overview';
/// Query to fill `queries` list for the dashboard
let search_query = `SELECT title, query FROM system.dashboards WHERE dashboard = 'Overview'`;
let search_query = dashboardSearchQuery(default_dashboard);
let customized = false;
let queries = [];
@@ -1439,7 +1445,7 @@ async function reloadAll(do_search) {
try {
updateParams();
if (do_search) {
search_query = document.getElementById('search-query').value;
search_query = toSearchQuery(document.getElementById('search-query').value);
queries = [];
refreshCustomized(false);
}
@@ -1504,7 +1510,7 @@ function updateFromState() {
document.getElementById('url').value = host;
document.getElementById('user').value = user;
document.getElementById('password').value = password;
document.getElementById('search-query').value = search_query;
document.getElementById('search-query').value = fromSearchQuery(search_query);
refreshCustomized();
}
@@ -1543,6 +1549,44 @@ if (window.location.hash) {
} catch {}
}
function fromSearchQuery(query) {
for (const dashboard_name in dashboard_queries) {
if (query == dashboard_queries[dashboard_name])
return dashboard_name;
}
return query;
}
function toSearchQuery(value) {
if (value in dashboard_queries)
return dashboard_queries[value];
else
return value;
}
async function populateSearchOptions() {
let {reply, error} = await doFetch("SELECT dashboard FROM system.dashboards GROUP BY dashboard ORDER BY ALL");
if (error) {
throw new Error(error);
}
let data = reply.data;
if (data.dashboard.length == 0) {
console.log("Unable to fetch dashboards list");
return;
}
dashboard_queries = {};
for (let i = 0; i < data.dashboard.length; i++) {
const dashboard = data.dashboard[i];
dashboard_queries[dashboard] = dashboardSearchQuery(dashboard);
}
const searchOptions = document.getElementById('search-options');
for (const dashboard in dashboard_queries) {
const opt = document.createElement('option');
opt.value = dashboard;
searchOptions.appendChild(opt);
}
}
async function start() {
try {
updateFromState();
@@ -1558,6 +1602,7 @@ async function start() {
} else {
drawAll();
}
await populateSearchOptions();
} catch (e) {
showError(e.message);
}
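
The selector above is driven entirely by the `system.dashboards` table. For reference, these are the two queries the new code issues (both appear verbatim in the diff, via `populateSearchOptions` and `dashboardSearchQuery('Overview')`):

```sql
-- Fills the search box's <datalist> with dashboard names
SELECT dashboard FROM system.dashboards GROUP BY dashboard ORDER BY ALL;

-- Fetches the chart list for the default dashboard
SELECT title, query FROM system.dashboards WHERE dashboard = 'Overview';
```

Typing a known dashboard name in the search box is mapped to the second form by `toSearchQuery`; any other input is sent to the server as-is.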

View File

@@ -528,7 +528,7 @@ QueryTreeNodePtr IdentifierResolver::tryResolveIdentifierFromCompoundExpression(
*
* Resolve strategy:
* 1. Try to bind identifier to scope argument name to node map.
* 2. If identifier is binded but expression context and node type are incompatible return nullptr.
* 2. If identifier is bound but expression context and node type are incompatible return nullptr.
*
* It is important to support edge cases, where we lookup for table or function node, but argument has same name.
* Example: WITH (x -> x + 1) AS func, (func -> func(1) + func) AS lambda SELECT lambda(1);

View File

@@ -362,7 +362,7 @@ ReplxxLineReader::ReplxxLineReader(
if (highlighter)
rx.set_highlighter_callback(highlighter);
/// By default C-p/C-n binded to COMPLETE_NEXT/COMPLETE_PREV,
/// By default C-p/C-n bound to COMPLETE_NEXT/COMPLETE_PREV,
/// bind C-p/C-n to history-previous/history-next like readline.
rx.bind_key(Replxx::KEY::control('N'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::HISTORY_NEXT, code); });
rx.bind_key(Replxx::KEY::control('P'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::HISTORY_PREVIOUS, code); });
@@ -384,9 +384,9 @@ ReplxxLineReader::ReplxxLineReader(
rx.bind_key(Replxx::KEY::control('J'), commit_action);
rx.bind_key(Replxx::KEY::ENTER, commit_action);
/// By default COMPLETE_NEXT/COMPLETE_PREV was binded to C-p/C-n, re-bind
/// By default COMPLETE_NEXT/COMPLETE_PREV was bound to C-p/C-n, re-bind
/// to M-P/M-N (that was used for HISTORY_COMMON_PREFIX_SEARCH before, but
/// it also binded to M-p/M-n).
/// it also bound to M-p/M-n).
rx.bind_key(Replxx::KEY::meta('N'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::COMPLETE_NEXT, code); });
rx.bind_key(Replxx::KEY::meta('P'), [this](char32_t code) { return rx.invoke(Replxx::ACTION::COMPLETE_PREVIOUS, code); });
/// By default M-BACKSPACE is KILL_TO_WHITESPACE_ON_LEFT, while in readline it is backward-kill-word

View File

@@ -5653,6 +5653,9 @@ Parts virtually divided into segments to be distributed between replicas for par
DECLARE(Bool, parallel_replicas_local_plan, true, R"(
Build local plan for local replica
)", BETA) \
DECLARE(Bool, parallel_replicas_for_cluster_engines, true, R"(
Replace table function engines with their -Cluster alternatives
)", EXPERIMENTAL) \
\
DECLARE(Bool, allow_experimental_analyzer, true, R"(
Allow new query analyzer.

View File

@@ -60,6 +60,7 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
{
{"24.12",
{
{"parallel_replicas_for_cluster_engines", true, true, "New setting"},
}
},
{"24.11",

View File

@@ -237,6 +237,7 @@ bool IParserColumnDeclaration<NameParser>::parseImpl(Pos & pos, ASTPtr & node, E
null_modifier.emplace(true);
}
bool is_comment = false;
/// Collate is also allowed after NULL/NOT NULL
if (!collation_expression && s_collate.ignore(pos, expected)
&& !collation_parser.parse(pos, collation_expression, expected))
@@ -254,7 +255,9 @@ bool IParserColumnDeclaration<NameParser>::parseImpl(Pos & pos, ASTPtr & node, E
else if (s_ephemeral.ignore(pos, expected))
{
default_specifier = s_ephemeral.getName();
if (!expr_parser.parse(pos, default_expression, expected) && type)
if (s_comment.ignore(pos, expected))
is_comment = true;
if ((is_comment || !expr_parser.parse(pos, default_expression, expected)) && type)
{
ephemeral_default = true;
@@ -289,19 +292,22 @@ bool IParserColumnDeclaration<NameParser>::parseImpl(Pos & pos, ASTPtr & node, E
if (require_type && !type && !default_expression)
return false; /// reject column name without type
if ((type || default_expression) && allow_null_modifiers && !null_modifier.has_value())
if (!is_comment)
{
if (s_not.ignore(pos, expected))
if ((type || default_expression) && allow_null_modifiers && !null_modifier.has_value())
{
if (!s_null.ignore(pos, expected))
return false;
null_modifier.emplace(false);
if (s_not.ignore(pos, expected))
{
if (!s_null.ignore(pos, expected))
return false;
null_modifier.emplace(false);
}
else if (s_null.ignore(pos, expected))
null_modifier.emplace(true);
}
else if (s_null.ignore(pos, expected))
null_modifier.emplace(true);
}
if (s_comment.ignore(pos, expected))
if (is_comment || s_comment.ignore(pos, expected))
{
/// should be followed by a string literal
if (!string_literal_parser.parse(pos, comment_expression, expected))
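
With this change, `COMMENT` may directly follow `EPHEMERAL`, in which case the column falls back to its type's default value instead of the parser swallowing the comment string as the ephemeral default expression. A minimal statement exercising the new branch (same shape as the test added at the end of this diff):

```sql
CREATE TABLE t
(
    `start_s` UInt32 EPHEMERAL COMMENT 'start UNIX time'
)
ENGINE = Null;
```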

View File

@@ -125,7 +125,7 @@ void IStorageCluster::read(
storage_snapshot->check(column_names);
updateBeforeRead(context);
auto cluster = getCluster(context);
auto cluster = context->getCluster(cluster_name);
/// Calculate the header. This is significant, because some columns could be thrown away in some cases like query with count(*)

View File

@@ -10,7 +10,7 @@
#include <Storages/VirtualColumnUtils.h>
#include <Storages/ObjectStorage/Utils.h>
#include <Storages/ObjectStorage/StorageObjectStorageSource.h>
#include <Storages/extractTableFunctionArgumentsFromSelectQuery.h>
#include <Storages/extractTableFunctionFromSelectQuery.h>
namespace DB
@@ -86,7 +86,16 @@ void StorageObjectStorageCluster::updateQueryToSendIfNeeded(
const DB::StorageSnapshotPtr & storage_snapshot,
const ContextPtr & context)
{
ASTExpressionList * expression_list = extractTableFunctionArgumentsFromSelectQuery(query);
auto * table_function = extractTableFunctionFromSelectQuery(query);
if (!table_function)
{
throw Exception(
ErrorCodes::LOGICAL_ERROR,
"Expected SELECT query from table function {}, got '{}'",
configuration->getEngineName(), queryToString(query));
}
auto * expression_list = table_function->arguments->as<ASTExpressionList>();
if (!expression_list)
{
throw Exception(
@@ -105,10 +114,18 @@ void StorageObjectStorageCluster::updateQueryToSendIfNeeded(
configuration->getEngineName());
}
ASTPtr cluster_name_arg = args.front();
args.erase(args.begin());
configuration->addStructureAndFormatToArgsIfNeeded(args, structure, configuration->format, context);
args.insert(args.begin(), cluster_name_arg);
if (table_function->name == configuration->getTypeName())
configuration->addStructureAndFormatToArgsIfNeeded(args, structure, configuration->format, context);
else if (table_function->name == fmt::format("{}Cluster", configuration->getTypeName()))
{
ASTPtr cluster_name_arg = args.front();
args.erase(args.begin());
configuration->addStructureAndFormatToArgsIfNeeded(args, structure, configuration->format, context);
args.insert(args.begin(), cluster_name_arg);
}
else
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected table function name: {}", table_function->name);
}
RemoteQueryExecutor::Extension StorageObjectStorageCluster::getTaskIteratorExtension(
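
The rewrite now has to accept both spellings of the table function, since with parallel replicas a plain `s3`/`url` call can be served by the `-Cluster` storage. Both of the following are valid inputs (URLs borrowed from the test at the end of this diff; the cluster name `default` is illustrative):

```sql
-- Plain form: format/structure are added to the argument list in place.
SELECT * FROM s3('http://localhost:11111/test/a.tsv', 'TSV');

-- Explicit -Cluster form: the first argument is the cluster name; it is
-- held aside while format/structure are added, then reinserted.
SELECT * FROM s3Cluster('default', 'http://localhost:11111/test/a.tsv', 'TSV');
```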

View File

@@ -9,7 +9,7 @@
#include <Storages/StorageFileCluster.h>
#include <Storages/IStorage.h>
#include <Storages/StorageFile.h>
#include <Storages/extractTableFunctionArgumentsFromSelectQuery.h>
#include <Storages/extractTableFunctionFromSelectQuery.h>
#include <Storages/VirtualColumnUtils.h>
#include <TableFunctions/TableFunctionFileCluster.h>
@@ -19,11 +19,6 @@
namespace DB
{
namespace ErrorCodes
{
extern const int LOGICAL_ERROR;
}
StorageFileCluster::StorageFileCluster(
const ContextPtr & context,
const String & cluster_name_,
@@ -66,12 +61,14 @@ StorageFileCluster::StorageFileCluster(
void StorageFileCluster::updateQueryToSendIfNeeded(DB::ASTPtr & query, const StorageSnapshotPtr & storage_snapshot, const DB::ContextPtr & context)
{
ASTExpressionList * expression_list = extractTableFunctionArgumentsFromSelectQuery(query);
if (!expression_list)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected SELECT query from table function fileCluster, got '{}'", queryToString(query));
auto * table_function = extractTableFunctionFromSelectQuery(query);
TableFunctionFileCluster::updateStructureAndFormatArgumentsIfNeeded(
expression_list->children, storage_snapshot->metadata->getColumns().getAll().toNamesAndTypesDescription(), format_name, context);
table_function,
storage_snapshot->metadata->getColumns().getAll().toNamesAndTypesDescription(),
format_name,
context
);
}
RemoteQueryExecutor::Extension StorageFileCluster::getTaskIteratorExtension(const ActionsDAG::Node * predicate, const ContextPtr & context) const

View File

@@ -20,7 +20,7 @@
#include <Storages/IStorage.h>
#include <Storages/SelectQueryInfo.h>
#include <Storages/StorageURL.h>
#include <Storages/extractTableFunctionArgumentsFromSelectQuery.h>
#include <Storages/extractTableFunctionFromSelectQuery.h>
#include <Storages/VirtualColumnUtils.h>
#include <TableFunctions/TableFunctionURLCluster.h>
@@ -30,16 +30,17 @@
namespace DB
{
namespace Setting
{
extern const SettingsUInt64 glob_expansion_max_elements;
}
namespace ErrorCodes
{
extern const int LOGICAL_ERROR;
}
namespace Setting
{
extern const SettingsUInt64 glob_expansion_max_elements;
}
StorageURLCluster::StorageURLCluster(
const ContextPtr & context,
const String & cluster_name_,
@@ -86,12 +87,20 @@ StorageURLCluster::StorageURLCluster(
void StorageURLCluster::updateQueryToSendIfNeeded(ASTPtr & query, const StorageSnapshotPtr & storage_snapshot, const ContextPtr & context)
{
ASTExpressionList * expression_list = extractTableFunctionArgumentsFromSelectQuery(query);
auto * table_function = extractTableFunctionFromSelectQuery(query);
if (!table_function)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected SELECT query from table function urlCluster, got '{}'", queryToString(query));
auto * expression_list = table_function->arguments->as<ASTExpressionList>();
if (!expression_list)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected SELECT query from table function urlCluster, got '{}'", queryToString(query));
TableFunctionURLCluster::updateStructureAndFormatArgumentsIfNeeded(
expression_list->children, storage_snapshot->metadata->getColumns().getAll().toNamesAndTypesDescription(), format_name, context);
table_function,
storage_snapshot->metadata->getColumns().getAll().toNamesAndTypesDescription(),
format_name,
context
);
}
RemoteQueryExecutor::Extension StorageURLCluster::getTaskIteratorExtension(const ActionsDAG::Node * predicate, const ContextPtr & context) const

View File

@@ -1,7 +1,6 @@
#include <Storages/extractTableFunctionArgumentsFromSelectQuery.h>
#include <Storages/extractTableFunctionFromSelectQuery.h>
#include <Parsers/ASTExpressionList.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTLiteral.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTTablesInSelectQuery.h>
@@ -11,7 +10,7 @@
namespace DB
{
ASTExpressionList * extractTableFunctionArgumentsFromSelectQuery(ASTPtr & query)
ASTFunction * extractTableFunctionFromSelectQuery(ASTPtr & query)
{
auto * select_query = query->as<ASTSelectQuery>();
if (!select_query || !select_query->tables())
@@ -22,8 +21,7 @@ ASTExpressionList * extractTableFunctionArgumentsFromSelectQuery(ASTPtr & query)
if (!table_expression->table_function)
return nullptr;
auto * table_function = table_expression->table_function->as<ASTFunction>();
return table_function->arguments->as<ASTExpressionList>();
return table_expression->table_function->as<ASTFunction>();
}
}

View File

@@ -2,10 +2,11 @@
#include <Parsers/IAST_fwd.h>
#include <Parsers/ASTExpressionList.h>
#include <Parsers/ASTFunction.h>
namespace DB
{
ASTExpressionList * extractTableFunctionArgumentsFromSelectQuery(ASTPtr & query);
ASTFunction * extractTableFunctionFromSelectQuery(ASTPtr & query);
}

View File

@@ -2,6 +2,7 @@
#include <Interpreters/Context.h>
#include <Interpreters/evaluateConstantExpression.h>
#include <Parsers/ASTFunction.h>
#include <Storages/checkAndGetLiteralArgument.h>
#include <TableFunctions/ITableFunction.h>
#include <TableFunctions/TableFunctionObjectStorage.h>
@@ -24,15 +25,25 @@ class ITableFunctionCluster : public Base
public:
String getName() const override = 0;
static void updateStructureAndFormatArgumentsIfNeeded(ASTs & args, const String & structure_, const String & format_, const ContextPtr & context)
static void updateStructureAndFormatArgumentsIfNeeded(ASTFunction * table_function, const String & structure_, const String & format_, const ContextPtr & context)
{
auto * expression_list = table_function->arguments->as<ASTExpressionList>();
ASTs args = expression_list->children;
if (args.empty())
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected empty list of arguments for {}Cluster table function", Base::name);
ASTPtr cluster_name_arg = args.front();
args.erase(args.begin());
Base::updateStructureAndFormatArgumentsIfNeeded(args, structure_, format_, context);
args.insert(args.begin(), cluster_name_arg);
if (table_function->name == Base::name)
Base::updateStructureAndFormatArgumentsIfNeeded(args, structure_, format_, context);
else if (table_function->name == fmt::format("{}Cluster", Base::name))
{
ASTPtr cluster_name_arg = args.front();
args.erase(args.begin());
Base::updateStructureAndFormatArgumentsIfNeeded(args, structure_, format_, context);
args.insert(args.begin(), cluster_name_arg);
}
else
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected table function name: {}", table_function->name);
}
protected:

View File

@@ -1,5 +1,8 @@
#include "config.h"
#include <Core/Settings.h>
#include <Core/SettingsEnums.h>
#include <Access/Common/AccessFlags.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/TableFunctionNode.h>
@@ -19,11 +22,20 @@
#include <Storages/ObjectStorage/Local/Configuration.h>
#include <Storages/ObjectStorage/S3/Configuration.h>
#include <Storages/ObjectStorage/StorageObjectStorage.h>
#include <Storages/ObjectStorage/StorageObjectStorageCluster.h>
namespace DB
{
namespace Setting
{
extern const SettingsUInt64 allow_experimental_parallel_reading_from_replicas;
extern const SettingsBool parallel_replicas_for_cluster_engines;
extern const SettingsString cluster_for_parallel_replicas;
extern const SettingsParallelReplicasMode parallel_replicas_mode;
}
namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
@@ -100,8 +112,9 @@ StoragePtr TableFunctionObjectStorage<Definition, Configuration>::executeImpl(
ColumnsDescription cached_columns,
bool is_insert_query) const
{
ColumnsDescription columns;
chassert(configuration);
ColumnsDescription columns;
if (configuration->structure != "auto")
columns = parseColumnsListFromString(configuration->structure, context);
else if (!structure_hint.empty())
@@ -109,18 +122,43 @@ StoragePtr TableFunctionObjectStorage<Definition, Configuration>::executeImpl(
else if (!cached_columns.empty())
columns = cached_columns;
StoragePtr storage = std::make_shared<StorageObjectStorage>(
StoragePtr storage;
const auto & settings = context->getSettingsRef();
auto parallel_replicas_cluster_name = settings[Setting::cluster_for_parallel_replicas].toString();
auto can_use_parallel_replicas = settings[Setting::allow_experimental_parallel_reading_from_replicas] > 0
&& settings[Setting::parallel_replicas_for_cluster_engines]
&& settings[Setting::parallel_replicas_mode] == ParallelReplicasMode::READ_TASKS
&& !parallel_replicas_cluster_name.empty()
&& !context->isDistributed();
if (can_use_parallel_replicas)
{
storage = std::make_shared<StorageObjectStorageCluster>(
parallel_replicas_cluster_name,
configuration,
getObjectStorage(context, !is_insert_query),
StorageID(getDatabaseName(), table_name),
columns,
ConstraintsDescription{},
context);
storage->startup();
return storage;
}
storage = std::make_shared<StorageObjectStorage>(
configuration,
getObjectStorage(context, !is_insert_query),
context,
StorageID(getDatabaseName(), table_name),
columns,
ConstraintsDescription{},
String{},
/* comment */ String{},
/* format_settings */ std::nullopt,
/* mode */ LoadingStrictnessLevel::CREATE,
/* distributed_processing */ false,
nullptr);
/* partition_by */ nullptr);
storage->startup();
return storage;
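
The `can_use_parallel_replicas` gate mirrors the new test at the bottom of this diff: every condition must hold before a plain object-storage call is promoted to its `-Cluster` storage. A sketch of a session that takes the cluster branch, assuming `parallel_replicas_mode` is left at its `read_tasks` default and the query is not already distributed:

```sql
SET enable_parallel_replicas = 1;                  -- allow_experimental_parallel_reading_from_replicas > 0
SET cluster_for_parallel_replicas = 'default';     -- must be non-empty
SET parallel_replicas_for_cluster_engines = true;  -- the setting added in this PR

-- Per the new .reference file, this now plans ReadFromCluster instead of S3Source:
EXPLAIN SELECT * FROM s3('http://localhost:11111/test/a.tsv', 'TSV');
```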

View File

@@ -6,8 +6,10 @@
#include <Storages/ObjectStorage/StorageObjectStorage.h>
#include <Storages/VirtualColumnUtils.h>
#include <TableFunctions/ITableFunction.h>
#include "config.h"
namespace DB
{

View File

@@ -2,22 +2,34 @@
#include "registerTableFunctions.h"
#include <Access/Common/AccessFlags.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/TableFunctionNode.h>
#include <Core/Settings.h>
#include <Formats/FormatFactory.h>
#include <Interpreters/evaluateConstantExpression.h>
#include <Interpreters/parseColumnsListForTableFunction.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTIdentifier.h>
#include <Storages/ColumnsDescription.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <Storages/StorageURLCluster.h>
#include <TableFunctions/TableFunctionFactory.h>
#include <Analyzer/FunctionNode.h>
#include <Analyzer/TableFunctionNode.h>
#include <Interpreters/parseColumnsListForTableFunction.h>
#include <Formats/FormatFactory.h>
#include <IO/WriteHelpers.h>
#include <IO/WriteBufferFromVector.h>
namespace DB
{
namespace Setting
{
extern const SettingsUInt64 allow_experimental_parallel_reading_from_replicas;
extern const SettingsBool parallel_replicas_for_cluster_engines;
extern const SettingsString cluster_for_parallel_replicas;
extern const SettingsParallelReplicasMode parallel_replicas_mode;
}
std::vector<size_t> TableFunctionURL::skipAnalysisForArguments(const QueryTreeNodePtr & query_node_table_function, ContextPtr) const
{
auto & table_function_node = query_node_table_function->as<TableFunctionNode &>();
@@ -120,6 +132,28 @@ StoragePtr TableFunctionURL::getStorage(
const String & source, const String & format_, const ColumnsDescription & columns, ContextPtr global_context,
const std::string & table_name, const String & compression_method_) const
{
const auto & settings = global_context->getSettingsRef();
auto parallel_replicas_cluster_name = settings[Setting::cluster_for_parallel_replicas].toString();
auto can_use_parallel_replicas = settings[Setting::allow_experimental_parallel_reading_from_replicas] > 0
&& settings[Setting::parallel_replicas_for_cluster_engines]
&& settings[Setting::parallel_replicas_mode] == ParallelReplicasMode::READ_TASKS
&& !parallel_replicas_cluster_name.empty()
&& !global_context->isDistributed();
if (can_use_parallel_replicas)
{
return std::make_shared<StorageURLCluster>(
global_context,
settings[Setting::cluster_for_parallel_replicas],
filename,
format,
compression_method,
StorageID(getDatabaseName(), table_name),
getActualTableStructure(global_context, /* is_insert_query */ true),
ConstraintsDescription{},
configuration);
}
return std::make_shared<StorageURL>(
source,
StorageID(getDatabaseName(), table_name),

View File

@@ -10,11 +10,10 @@ StoragePtr TableFunctionURLCluster::getStorage(
const String & /*source*/, const String & /*format_*/, const ColumnsDescription & columns, ContextPtr context,
const std::string & table_name, const String & /*compression_method_*/) const
{
StoragePtr storage;
if (context->getClientInfo().query_kind == ClientInfo::QueryKind::SECONDARY_QUERY)
{
//On worker node this uri won't contain globs
storage = std::make_shared<StorageURL>(
return std::make_shared<StorageURL>(
filename,
StorageID(getDatabaseName(), table_name),
format,
@@ -29,20 +28,17 @@ StoragePtr TableFunctionURLCluster::getStorage(
nullptr,
/*distributed_processing=*/ true);
}
else
{
storage = std::make_shared<StorageURLCluster>(
context,
cluster_name,
filename,
format,
compression_method,
StorageID(getDatabaseName(), table_name),
getActualTableStructure(context, /* is_insert_query */ true),
ConstraintsDescription{},
configuration);
}
return storage;
return std::make_shared<StorageURLCluster>(
context,
cluster_name,
filename,
format,
compression_method,
StorageID(getDatabaseName(), table_name),
getActualTableStructure(context, /* is_insert_query */ true),
ConstraintsDescription{},
configuration);
}
void registerTableFunctionURLCluster(TableFunctionFactory & factory)

View File

@@ -299,8 +299,6 @@ class TagAttrs:
# Only one latest can exist
latest: ClickHouseVersion
# Only one can be a major one (the most fresh per a year)
majors: Dict[int, ClickHouseVersion]
# Only one lts version can exist
lts: Optional[ClickHouseVersion]
@@ -345,14 +343,6 @@ def ldf_tags(version: ClickHouseVersion, distro: str, tag_attrs: TagAttrs) -> st
tags.append("lts")
tags.append(f"lts-{distro}")
# If the tag `22`, `23`, `24` etc. should be included in the tags
with_major = tag_attrs.majors.get(version.major) in (None, version)
if with_major:
tag_attrs.majors[version.major] = version
if without_distro:
tags.append(f"{version.major}")
tags.append(f"{version.major}-{distro}")
# Add all normal tags
for tag in (
f"{version.major}.{version.minor}",
@@ -384,7 +374,7 @@ def generate_ldf(args: argparse.Namespace) -> None:
args.directory / git_runner(f"git -C {args.directory} rev-parse --show-cdup")
).absolute()
lines = ldf_header(git, directory)
tag_attrs = TagAttrs(versions[-1], {}, None)
tag_attrs = TagAttrs(versions[-1], None)
# We iterate from the most recent to the oldest version
for version in reversed(versions):

View File

@@ -378,7 +378,7 @@ def test_reload_via_client(cluster, zk):
configure_from_zk(zk)
break
except QueryRuntimeException:
logging.exception("The new socket is not binded yet")
logging.exception("The new socket is not bound yet")
time.sleep(0.1)
if exception:

View File

@@ -0,0 +1,11 @@
drop table if exists test;
CREATE TABLE test (
`start_s` UInt32 EPHEMERAL COMMENT 'start UNIX time' ,
`start_us` UInt16 EPHEMERAL COMMENT 'start microseconds',
`finish_s` UInt32 EPHEMERAL COMMENT 'finish UNIX time',
`finish_us` UInt16 EPHEMERAL COMMENT 'finish microseconds',
`captured` DateTime MATERIALIZED fromUnixTimestamp(start_s),
`duration` Decimal32(6) MATERIALIZED finish_s - start_s + (finish_us - start_us)/1000000
)
ENGINE Null;
drop table if exists test;
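
For context (not part of the diff): `EPHEMERAL` columns exist only at INSERT time, so a table like the one above is populated through them while the `MATERIALIZED` expressions are computed from their values. A hypothetical insert, with made-up timestamps:

```sql
INSERT INTO test (start_s, start_us, finish_s, finish_us)
VALUES (1732000000, 500000, 1732000001, 250000);
-- `captured` and `duration` are derived from the ephemeral values;
-- the ephemeral columns themselves are never stored.
```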

View File

@@ -0,0 +1,8 @@
Expression (Project names)
ReadFromCluster
Expression (Project names)
ReadFromCluster
Expression ((Project names + (Projection + Change column names to column identifiers)))
ReadFromURL
Expression ((Project names + (Projection + Change column names to column identifiers)))
S3Source

View File

@@ -0,0 +1,14 @@
-- Tags: no-fasttest
-- Tag no-fasttest: Depends on AWS
SET enable_parallel_replicas=1;
SET cluster_for_parallel_replicas='default';
SET parallel_replicas_for_cluster_engines=true;
EXPLAIN SELECT * FROM url('http://localhost:8123');
EXPLAIN SELECT * FROM s3('http://localhost:11111/test/a.tsv', 'TSV');
SET parallel_replicas_for_cluster_engines=false;
EXPLAIN SELECT * FROM url('http://localhost:8123');
EXPLAIN SELECT * FROM s3('http://localhost:11111/test/a.tsv', 'TSV');