mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-11-19 06:01:57 +00:00

Merge branch 'add-gcs-table-function' of github.com:jkaflik/ClickHouse into add-gcs-table-function

This commit is contained in:
commit 5ee21d8b98
@@ -64,6 +64,6 @@ struct fmt::formatter<wide::integer<Bits, Signed>>
     template <typename FormatContext>
     auto format(const wide::integer<Bits, Signed> & value, FormatContext & ctx)
     {
-        return format_to(ctx.out(), "{}", to_string(value));
+        return fmt::format_to(ctx.out(), "{}", to_string(value));
     }
 };
@@ -77,9 +77,12 @@ It is recommended to use official pre-compiled `deb` packages for Debian or Ubun
 #### Setup the Debian repository
 ``` bash
 sudo apt-get install -y apt-transport-https ca-certificates dirmngr
-sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
+GNUPGHOME=$(mktemp -d)
+sudo GNUPGHOME="$GNUPGHOME" gpg --no-default-keyring --keyring /usr/share/keyrings/clickhouse-keyring.gpg --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 8919F6BD2B48D754
+sudo rm -r "$GNUPGHOME"
+sudo chmod +r /usr/share/keyrings/clickhouse-keyring.gpg
 
-echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
+echo "deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb stable main" | sudo tee \
     /etc/apt/sources.list.d/clickhouse.list
 sudo apt-get update
 ```
@@ -7,11 +7,23 @@ import SelfManaged from '@site/docs/en/_snippets/_self_managed_only_no_roadmap.m
 
 # Sampling Query Profiler
 
 <SelfManaged />
 
 ClickHouse runs a sampling profiler that allows analyzing query execution. Using the profiler you can find source code routines that were used most frequently during query execution. You can trace CPU time and wall-clock time spent, including idle time.
 
-To use profiler:
+Query profiler is automatically enabled in ClickHouse Cloud, and you can run a sample query as follows:
+
+``` sql
+SELECT
+    count(),
+    arrayStringConcat(arrayMap(x -> concat(demangle(addressToSymbol(x)), '\n    ', addressToLine(x)), trace), '\n') AS sym
+FROM system.trace_log
+WHERE (query_id = 'ebca3574-ad0a-400a-9cbc-dca382f5998c') AND (event_date = today())
+GROUP BY trace
+ORDER BY count() DESC
+LIMIT 10
+SETTINGS allow_introspection_functions = 1
+```
+
+In self-managed deployments, to use query profiler:
 
 - Setup the [trace_log](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-trace_log) section of the server configuration.
 
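For the self-managed setup step above, a minimal sketch of what a `trace_log` section can look like in the server configuration; the element names follow the documented `trace_log` server settings, but treat the exact values as illustrative assumptions:

```xml
<clickhouse>
    <trace_log>
        <!-- Destination table for collected stack traces -->
        <database>system</database>
        <table>trace_log</table>
        <partition_by>toYYYYMM(event_date)</partition_by>
        <!-- How often buffered trace samples are flushed to the table -->
        <flush_interval_milliseconds>7500</flush_interval_milliseconds>
    </trace_log>
</clickhouse>
```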
@@ -149,10 +149,10 @@ SETTINGS index_granularity = 8192, index_granularity_bytes = 0;
 
 [//]: # (<details open>)
 <details>
-<summary><font color="black">
+<summary><font color="white">
 DDL details
 </font></summary>
-<p><font color="black">
+<p><font color="white">
 
 To simplify the discussion later in this article, and to make the diagrams and results reproducible, the DDL statement:
 <ul>
@@ -164,7 +164,7 @@ SETTINGS index_granularity = 8192, index_granularity_bytes = 0;
 <li><font face = "monospace">index_granularity</font>: explicitly set to its default value of 8192. This means that for each group of 8192 rows the primary index will have one index entry; for example, if the table contains 16384 rows, the index will have two index entries.
 </li>
 <br/>
-<li><font face = "monospace">index_granularity_bytes</font>: set to 0 in order to disable <a href="https://clickhouse.com/docs/en/whats-new/changelog/2019/#experimental-features-1" target="_blank"><font color="blue">adaptive index granularity</font></a>. Adaptive index granularity means that ClickHouse automatically creates one index entry for a group of n rows
+<li><font face = "monospace">index_granularity_bytes</font>: set to 0 in order to disable <a href="https://clickhouse.com/docs/en/whats-new/changelog/2019/#experimental-features-1" target="_blank"><font color="white">adaptive index granularity</font></a>. Adaptive index granularity means that ClickHouse automatically creates one index entry for a group of n rows
 <ul>
 <li>if n is less than 8192, but the combined row data size of the n rows is greater than or equal to 10 MB (the default value of index_granularity_bytes), or</li>
 <li>if n reaches 8192</li>
@@ -446,10 +446,10 @@ The output of the ClickHouse client shows that no full table scan was performed; only 8.19 thousand rows
 We can see in the trace log above that one of the 1083 existing marks satisfied the query.
 
 <details>
-<summary><font color="black">
+<summary><font color="white">
 Trace log details
 </font></summary>
-<p><font color="black">
+<p><font color="white">
 
 Mark 176 was identified (the 'found left boundary mark' is inclusive, the 'found right boundary mark' is exclusive), and therefore all 8192 rows from granule 176 (which starts at row 1.441.792 - we will see that later on in this article) are then streamed into ClickHouse in order to find the actual rows with a UserID column value of <font face = "monospace">749927693</font>.
 </font></p>
@@ -520,10 +520,10 @@ LIMIT 10;
 As discussed above, mark 176 was identified via a binary search over the index's 1083 UserID marks. Its corresponding granule 176 can therefore possibly contain rows with a UserID column value of 749.927.693.
 
 <details>
-<summary><font color="black">
+<summary><font color="white">
 The granule selection process in detail
 </font></summary>
-<p><font color="black">
+<p><font color="white">
 
 The diagram above shows that mark 176 is the first index entry whose UserID value is smaller than 749.927.693, while the minimum UserID value of granule 177, belonging to the next mark (mark 177), is greater than this value. Therefore only granule 176, corresponding to mark 176, can possibly contain rows with a UserID column value of 749.927.693.
 </font></p>
@@ -671,15 +671,15 @@ Processed 8.81 million rows,
 To illustrate, we describe how the generic exclusion search algorithm works:
 
 <details open>
-<summary><font color="black">
+<summary><font color="white">
 <a name="generic-exclusion-search-algorithm"></a>Generic exclusion search algorithm
 </font></summary>
-<p><font color="black">
+<p><font color="white">
 
 
 
 
-The following demonstrates how the ClickHouse <a href="https://github.com/ClickHouse/ClickHouse/blob/22.3/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp#L14444" target="_blank" ><font color="blue">generic exclusion search algorithm</font></a> works when granules are selected via any column after the first, for the case when the preceding key column has high(er) or low(er) cardinality.
+The following demonstrates how the ClickHouse <a href="https://github.com/ClickHouse/ClickHouse/blob/22.3/src/Storages/MergeTree/MergeTreeDataSelectExecutor.cpp#L14444" target="_blank" ><font color="white">generic exclusion search algorithm</font></a> works when granules are selected via any column after the first, for the case when the preceding key column has high(er) or low(er) cardinality.
 
 As an example for both cases we will assume:
 - a search for rows with URL value "W3".
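The exclusion rule described above can be sketched in code. The following is a simplified, hypothetical model (not the actual `MergeTreeDataSelectExecutor` implementation): the granule behind mark i can only be excluded when the directly preceding key column has the same value at marks i and i+1, because only then do the two mark values bound the URL values inside the granule.

```cpp
#include <cassert>
#include <string>
#include <vector>

struct Mark
{
    std::string user_id; // value of the preceding key column at the mark
    std::string url;     // value of the secondary key column at the mark
};

// Simplified sketch of the generic exclusion search: keep granule i unless the
// predecessor key column is constant across marks i and i+1 AND the searched
// value falls outside the range [url[i], url[i+1]].
std::vector<size_t> selectGranules(const std::vector<Mark> & marks, const std::string & needle)
{
    std::vector<size_t> selected;
    for (size_t i = 0; i + 1 < marks.size(); ++i)
    {
        bool predecessor_constant = marks[i].user_id == marks[i + 1].user_id;
        bool excludable = predecessor_constant
            && (needle < marks[i].url || needle > marks[i + 1].url);
        if (!excludable)
            selected.push_back(i); // granule i may contain matching rows
    }
    return selected;
}
```

With a low-cardinality predecessor (all marks share one UserID), granules can actually be excluded; with a high-cardinality predecessor almost every pair of adjacent marks differs, so nearly all granules must be selected, which is the ineffectiveness discussed in the text.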
@@ -736,9 +736,9 @@ Processed 8.81 million rows,
 In our sample data set both key columns (UserID, URL) have similarly high cardinality, and, as explained earlier, the generic exclusion search algorithm is not very effective when the key column preceding the URL column has high(er) cardinality.
 
 :::note A note about data skipping indexes
-Because UserID and URL have high cardinality, [<font color="blue">filtering data on URL</font>](#query-on-url) is not particularly effective, and creating a [<font color="blue">secondary data skipping index</font>](./skipping-indexes.md) on the URL column likewise would not bring much improvement.
+Because UserID and URL have high cardinality, [<font color="white">filtering data on URL</font>](#query-on-url) is not particularly effective, and creating a [<font color="white">secondary data skipping index</font>](./skipping-indexes.md) on the URL column likewise would not bring much improvement.
 
-For example, these two statements create and populate a <a href="https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree/#primary-keys-and-indexes-in-queries" target="_blank"><font color="blue">minmax</font></a> data skipping index on the URL column of our table.
+For example, these two statements create and populate a <a href="https://clickhouse.com/docs/en/engines/table-engines/mergetree-family/mergetree/#primary-keys-and-indexes-in-queries" target="_blank"><font color="white">minmax</font></a> data skipping index on the URL column of our table.
 ```sql
 ALTER TABLE hits_UserID_URL ADD INDEX url_skipping_index URL TYPE minmax GRANULARITY 4;
 ALTER TABLE hits_UserID_URL MATERIALIZE INDEX url_skipping_index;
@@ -907,10 +907,10 @@ ClickHouse selected only 39 index marks, instead of the number selected when the generic exclusion search was used
 
 Click below for details:
 <details>
-<summary><font color="black">
+<summary><font color="white">
 Query filtering on UserID performs poorly<a name="query-on-userid-slow"></a>
 </font></summary>
-<p><font color="black">
+<p><font color="white">
 
 ```sql
 SELECT URL, count(URL) AS Count
@@ -61,14 +61,25 @@ namespace
                 res.any_database = true;
                 res.any_table = true;
                 res.any_column = true;
+                res.any_parameter = true;
                 break;
             }
             case 1:
             {
-                res.any_database = false;
-                res.database = full_name[0];
-                res.any_table = true;
-                res.any_column = true;
+                if (access_flags.isGlobalWithParameter())
+                {
+                    res.parameter = full_name[0];
+                    res.any_parameter = false;
+                    res.any_database = false;
+                }
+                else
+                {
+                    res.database = full_name[0];
+                    res.any_database = false;
+                    res.any_parameter = false;
+                    res.any_table = true;
+                    res.any_column = true;
+                }
                 break;
             }
             case 2:
@@ -110,10 +121,35 @@ namespace
             size_t count_elements_with_diff_columns = sorted.countElementsWithDifferenceInColumnOnly(i);
             if (count_elements_with_diff_columns == 1)
             {
-                /// Easy case: one Element is converted to one AccessRightsElement.
                 const auto & element = sorted[i];
                 if (element.access_flags)
-                    res.emplace_back(element.getResult());
+                {
+                    const bool all_granted = sorted.size() == 1 && element.access_flags.contains(AccessFlags::allFlags());
+                    if (all_granted)
+                    {
+                        /// Easy case: one Element is converted to one AccessRightsElement.
+                        res.emplace_back(element.getResult());
+                    }
+                    else
+                    {
+                        auto per_parameter = element.access_flags.splitIntoParameterTypes();
+                        if (per_parameter.size() == 1)
+                        {
+                            /// Easy case: one Element is converted to one AccessRightsElement.
+                            res.emplace_back(element.getResult());
+                        }
+                        else
+                        {
+                            /// Difficult case: one element is converted into multiple AccessRightsElements.
+                            for (const auto & [_, parameter_flags] : per_parameter)
+                            {
+                                auto current_element{element};
+                                current_element.access_flags = parameter_flags;
+                                res.emplace_back(current_element.getResult());
+                            }
+                        }
+                    }
+                }
                 ++i;
             }
             else
@@ -137,6 +173,8 @@ namespace
         {
             return (element.full_name.size() != 3) || (element.full_name[0] != start_element.full_name[0])
                 || (element.full_name[1] != start_element.full_name[1]) || (element.grant_option != start_element.grant_option)
+                || (element.access_flags.isGlobalWithParameter() != start_element.access_flags.isGlobalWithParameter())
+                || (element.access_flags.getParameterType() != start_element.access_flags.getParameterType())
                 || (element.is_partial_revoke != start_element.is_partial_revoke);
         });
 
@@ -191,11 +229,19 @@ namespace
         }
     };
 
+    /**
+      * Levels:
+      * 1. GLOBAL
+      * 2. DATABASE_LEVEL / GLOBAL_WITH_PARAMETER (parameter example: named collection)
+      * 3. TABLE_LEVEL
+      * 4. COLUMN_LEVEL
+      */
 
     enum Level
     {
        GLOBAL_LEVEL,
        DATABASE_LEVEL,
+       GLOBAL_WITH_PARAMETER = DATABASE_LEVEL,
        TABLE_LEVEL,
        COLUMN_LEVEL,
    };
@@ -205,7 +251,7 @@ namespace
        switch (level)
        {
            case GLOBAL_LEVEL: return AccessFlags::allFlagsGrantableOnGlobalLevel();
-           case DATABASE_LEVEL: return AccessFlags::allFlagsGrantableOnDatabaseLevel();
+           case DATABASE_LEVEL: return AccessFlags::allFlagsGrantableOnDatabaseLevel() | AccessFlags::allFlagsGrantableOnGlobalWithParameterLevel();
            case TABLE_LEVEL: return AccessFlags::allFlagsGrantableOnTableLevel();
            case COLUMN_LEVEL: return AccessFlags::allFlagsGrantableOnColumnLevel();
        }
@@ -783,7 +829,14 @@ void AccessRights::grantImplHelper(const AccessRightsElement & element)
 {
     assert(!element.is_partial_revoke);
     assert(!element.grant_option || with_grant_option);
-    if (element.any_database)
+    if (element.isGlobalWithParameter())
+    {
+        if (element.any_parameter)
+            grantImpl<with_grant_option>(element.access_flags);
+        else
+            grantImpl<with_grant_option>(element.access_flags, element.parameter);
+    }
+    else if (element.any_database)
         grantImpl<with_grant_option>(element.access_flags);
     else if (element.any_table)
         grantImpl<with_grant_option>(element.access_flags, element.database);
@@ -858,7 +911,14 @@ template <bool grant_option>
 void AccessRights::revokeImplHelper(const AccessRightsElement & element)
 {
     assert(!element.grant_option || grant_option);
-    if (element.any_database)
+    if (element.isGlobalWithParameter())
+    {
+        if (element.any_parameter)
+            revokeImpl<grant_option>(element.access_flags);
+        else
+            revokeImpl<grant_option>(element.access_flags, element.parameter);
+    }
+    else if (element.any_database)
         revokeImpl<grant_option>(element.access_flags);
     else if (element.any_table)
         revokeImpl<grant_option>(element.access_flags, element.database);
@@ -948,7 +1008,14 @@ template <bool grant_option>
 bool AccessRights::isGrantedImplHelper(const AccessRightsElement & element) const
 {
     assert(!element.grant_option || grant_option);
-    if (element.any_database)
+    if (element.isGlobalWithParameter())
+    {
+        if (element.any_parameter)
+            return isGrantedImpl<grant_option>(element.access_flags);
+        else
+            return isGrantedImpl<grant_option>(element.access_flags, element.parameter);
+    }
+    else if (element.any_database)
         return isGrantedImpl<grant_option>(element.access_flags);
     else if (element.any_table)
         return isGrantedImpl<grant_option>(element.access_flags, element.database);
@@ -15,6 +15,7 @@ namespace ErrorCodes
 {
     extern const int UNKNOWN_ACCESS_TYPE;
     extern const int LOGICAL_ERROR;
+    extern const int MIXED_ACCESS_PARAMETER_TYPES;
 }
 
 namespace
@@ -96,11 +97,14 @@ namespace
 
         const Flags & getAllFlags() const { return all_flags; }
         const Flags & getGlobalFlags() const { return all_flags_for_target[GLOBAL]; }
+        const Flags & getGlobalWithParameterFlags() const { return all_flags_grantable_on_global_with_parameter_level; }
         const Flags & getDatabaseFlags() const { return all_flags_for_target[DATABASE]; }
         const Flags & getTableFlags() const { return all_flags_for_target[TABLE]; }
         const Flags & getColumnFlags() const { return all_flags_for_target[COLUMN]; }
         const Flags & getDictionaryFlags() const { return all_flags_for_target[DICTIONARY]; }
+        const Flags & getNamedCollectionFlags() const { return all_flags_for_target[NAMED_COLLECTION]; }
         const Flags & getAllFlagsGrantableOnGlobalLevel() const { return getAllFlags(); }
+        const Flags & getAllFlagsGrantableOnGlobalWithParameterLevel() const { return getGlobalWithParameterFlags(); }
         const Flags & getAllFlagsGrantableOnDatabaseLevel() const { return all_flags_grantable_on_database_level; }
         const Flags & getAllFlagsGrantableOnTableLevel() const { return all_flags_grantable_on_table_level; }
         const Flags & getAllFlagsGrantableOnColumnLevel() const { return getColumnFlags(); }
@@ -116,6 +120,7 @@ namespace
             VIEW = TABLE,
             COLUMN,
             DICTIONARY,
+            NAMED_COLLECTION,
         };
 
         struct Node;
@@ -295,6 +300,7 @@ namespace
                 collectAllFlags(child.get());
 
             all_flags_grantable_on_table_level = all_flags_for_target[TABLE] | all_flags_for_target[DICTIONARY] | all_flags_for_target[COLUMN];
+            all_flags_grantable_on_global_with_parameter_level = all_flags_for_target[NAMED_COLLECTION];
             all_flags_grantable_on_database_level = all_flags_for_target[DATABASE] | all_flags_grantable_on_table_level;
         }
 
@@ -345,12 +351,44 @@ namespace
         std::unordered_map<std::string_view, Flags> keyword_to_flags_map;
         std::vector<Flags> access_type_to_flags_mapping;
         Flags all_flags;
-        Flags all_flags_for_target[static_cast<size_t>(DICTIONARY) + 1];
+        Flags all_flags_for_target[static_cast<size_t>(NAMED_COLLECTION) + 1];
         Flags all_flags_grantable_on_database_level;
         Flags all_flags_grantable_on_table_level;
+        Flags all_flags_grantable_on_global_with_parameter_level;
     };
 }
 
+bool AccessFlags::isGlobalWithParameter() const
+{
+    return getParameterType() != AccessFlags::NONE;
+}
+
+std::unordered_map<AccessFlags::ParameterType, AccessFlags> AccessFlags::splitIntoParameterTypes() const
+{
+    std::unordered_map<ParameterType, AccessFlags> result;
+
+    auto named_collection_flags = AccessFlags::allNamedCollectionFlags() & *this;
+    if (named_collection_flags)
+        result.emplace(ParameterType::NAMED_COLLECTION, named_collection_flags);
+
+    auto other_flags = (~AccessFlags::allNamedCollectionFlags()) & *this;
+    if (other_flags)
+        result.emplace(ParameterType::NONE, other_flags);
+
+    return result;
+}
+
+AccessFlags::ParameterType AccessFlags::getParameterType() const
+{
+    if (isEmpty() || !AccessFlags::allGlobalWithParameterFlags().contains(*this))
+        return AccessFlags::NONE;
+
+    /// All flags refer to NAMED COLLECTION access type.
+    if (AccessFlags::allNamedCollectionFlags().contains(*this))
+        return AccessFlags::NAMED_COLLECTION;
+
+    throw Exception(ErrorCodes::MIXED_ACCESS_PARAMETER_TYPES, "Having mixed parameter types: {}", toString());
+}
+
 AccessFlags::AccessFlags(AccessType type) : flags(Helper::instance().accessTypeToFlags(type)) {}
 AccessFlags::AccessFlags(std::string_view keyword) : flags(Helper::instance().keywordToFlags(keyword)) {}
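The splitting logic in `splitIntoParameterTypes` above amounts to bucketing a flag set by a per-parameter-type bitmask. A standalone sketch with a hypothetical 8-bit flag set and a made-up named-collection mask (the real `AccessFlags` uses a much larger bitset built from the access-type tree):

```cpp
#include <cassert>
#include <bitset>
#include <unordered_map>

enum class ParameterType { NONE, NAMED_COLLECTION };
using Flags = std::bitset<8>;

// Hypothetical mask: pretend bits 1 and 2 are the named-collection access types.
const Flags named_collection_mask{0b00000110};

// Bucket a flag set by the parameter type each bit belongs to, mirroring the
// shape of AccessFlags::splitIntoParameterTypes.
std::unordered_map<ParameterType, Flags> splitByParameterType(const Flags & flags)
{
    std::unordered_map<ParameterType, Flags> result;

    Flags named = flags & named_collection_mask;
    if (named.any())
        result.emplace(ParameterType::NAMED_COLLECTION, named);

    Flags other = flags & ~named_collection_mask;
    if (other.any())
        result.emplace(ParameterType::NONE, other);

    return result;
}
```

A grant whose flags span both buckets is then emitted as multiple elements, which is the "difficult case" handled by the loop added elsewhere in this commit.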
@@ -361,11 +399,14 @@ std::vector<AccessType> AccessFlags::toAccessTypes() const { return Helper::inst
 std::vector<std::string_view> AccessFlags::toKeywords() const { return Helper::instance().flagsToKeywords(flags); }
 AccessFlags AccessFlags::allFlags() { return Helper::instance().getAllFlags(); }
 AccessFlags AccessFlags::allGlobalFlags() { return Helper::instance().getGlobalFlags(); }
+AccessFlags AccessFlags::allGlobalWithParameterFlags() { return Helper::instance().getGlobalWithParameterFlags(); }
 AccessFlags AccessFlags::allDatabaseFlags() { return Helper::instance().getDatabaseFlags(); }
 AccessFlags AccessFlags::allTableFlags() { return Helper::instance().getTableFlags(); }
 AccessFlags AccessFlags::allColumnFlags() { return Helper::instance().getColumnFlags(); }
 AccessFlags AccessFlags::allDictionaryFlags() { return Helper::instance().getDictionaryFlags(); }
+AccessFlags AccessFlags::allNamedCollectionFlags() { return Helper::instance().getNamedCollectionFlags(); }
 AccessFlags AccessFlags::allFlagsGrantableOnGlobalLevel() { return Helper::instance().getAllFlagsGrantableOnGlobalLevel(); }
+AccessFlags AccessFlags::allFlagsGrantableOnGlobalWithParameterLevel() { return Helper::instance().getAllFlagsGrantableOnGlobalWithParameterLevel(); }
 AccessFlags AccessFlags::allFlagsGrantableOnDatabaseLevel() { return Helper::instance().getAllFlagsGrantableOnDatabaseLevel(); }
 AccessFlags AccessFlags::allFlagsGrantableOnTableLevel() { return Helper::instance().getAllFlagsGrantableOnTableLevel(); }
 AccessFlags AccessFlags::allFlagsGrantableOnColumnLevel() { return Helper::instance().getAllFlagsGrantableOnColumnLevel(); }
@@ -48,8 +48,17 @@ public:
     AccessFlags operator ~() const { AccessFlags res; res.flags = ~flags; return res; }
 
     bool isEmpty() const { return flags.none(); }
+    bool isAll() const { return flags.all(); }
     explicit operator bool() const { return !isEmpty(); }
     bool contains(const AccessFlags & other) const { return (flags & other.flags) == other.flags; }
+    bool isGlobalWithParameter() const;
+    enum ParameterType
+    {
+        NONE,
+        NAMED_COLLECTION,
+    };
+    ParameterType getParameterType() const;
+    std::unordered_map<ParameterType, AccessFlags> splitIntoParameterTypes() const;
 
     friend bool operator ==(const AccessFlags & left, const AccessFlags & right) { return left.flags == right.flags; }
     friend bool operator !=(const AccessFlags & left, const AccessFlags & right) { return !(left == right); }
@@ -76,6 +85,8 @@ public:
     /// Returns all the global flags.
     static AccessFlags allGlobalFlags();
 
+    static AccessFlags allGlobalWithParameterFlags();
+
     /// Returns all the flags related to a database.
     static AccessFlags allDatabaseFlags();
 
@@ -88,10 +99,16 @@ public:
     /// Returns all the flags related to a dictionary.
     static AccessFlags allDictionaryFlags();
 
+    /// Returns all the flags related to a named collection.
+    static AccessFlags allNamedCollectionFlags();
+
     /// Returns all the flags which could be granted on the global level.
     /// The same as allFlags().
     static AccessFlags allFlagsGrantableOnGlobalLevel();
 
+    /// Returns all the flags which could be granted on the global with parameter level.
+    static AccessFlags allFlagsGrantableOnGlobalWithParameterLevel();
+
     /// Returns all the flags which could be granted on the database level.
     /// Returns allDatabaseFlags() | allTableFlags() | allDictionaryFlags() | allColumnFlags().
     static AccessFlags allFlagsGrantableOnDatabaseLevel();
@@ -21,24 +21,31 @@ namespace
         result += ")";
     }
 
-    void formatONClause(const String & database, bool any_database, const String & table, bool any_table, String & result)
+    void formatONClause(const AccessRightsElement & element, String & result)
     {
         result += "ON ";
-        if (any_database)
+        if (element.isGlobalWithParameter())
+        {
+            if (element.any_parameter)
+                result += "*";
+            else
+                result += backQuoteIfNeed(element.parameter);
+        }
+        else if (element.any_database)
         {
             result += "*.*";
         }
         else
         {
-            if (!database.empty())
+            if (!element.database.empty())
             {
-                result += backQuoteIfNeed(database);
+                result += backQuoteIfNeed(element.database);
                 result += ".";
             }
-            if (any_table)
+            if (element.any_table)
                 result += "*";
             else
-                result += backQuoteIfNeed(table);
+                result += backQuoteIfNeed(element.table);
         }
     }
 
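The branch order of the patched `formatONClause` can be sketched standalone. This uses simplified, hypothetical types (a plain struct instead of `AccessRightsElement`, and no `backQuoteIfNeed` quoting):

```cpp
#include <cassert>
#include <string>

struct ElementSketch
{
    bool global_with_parameter = false; // stands in for isGlobalWithParameter()
    bool any_parameter = false;
    std::string parameter;
    bool any_database = true;
    bool any_table = true;
    std::string database;
    std::string table;
};

// Mirrors the branch order of the patched formatONClause: parameter target
// first, then the wildcard database case, then database.table.
std::string formatOnClause(const ElementSketch & e)
{
    std::string result = "ON ";
    if (e.global_with_parameter)
        result += e.any_parameter ? std::string("*") : e.parameter; // named collection target
    else if (e.any_database)
        result += "*.*";
    else
    {
        if (!e.database.empty())
            result += e.database + ".";
        result += e.any_table ? std::string("*") : e.table;
    }
    return result;
}
```

Rendering `ON collection1` instead of `ON *.*` or `ON db.table` is presumably what lets `GRANT ... ON <named collection>` round-trip through SHOW GRANTS.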
@@ -96,7 +103,7 @@ namespace
         String result;
         formatAccessFlagsWithColumns(element.access_flags, element.columns, element.any_column, result);
         result += " ";
-        formatONClause(element.database, element.any_database, element.table, element.any_table, result);
+        formatONClause(element, result);
         if (with_options)
             formatOptions(element.grant_option, element.is_partial_revoke, result);
         return result;
@@ -122,14 +129,16 @@ namespace
             if (i != elements.size() - 1)
             {
                 const auto & next_element = elements[i + 1];
-                if (element.sameDatabaseAndTable(next_element) && element.sameOptions(next_element))
+                if (element.sameDatabaseAndTableAndParameter(next_element) && element.sameOptions(next_element))
+                {
                     next_element_uses_same_table_and_options = true;
+                }
             }
 
             if (!next_element_uses_same_table_and_options)
             {
                 part += " ";
-                formatONClause(element.database, element.any_database, element.table, element.any_table, part);
+                formatONClause(element, part);
                 if (with_options)
                     formatOptions(element.grant_option, element.is_partial_revoke, part);
                 if (result.empty())
@@ -164,6 +173,7 @@ AccessRightsElement::AccessRightsElement(
     , any_database(false)
     , any_table(false)
     , any_column(false)
+    , any_parameter(false)
 {
 }
 
@@ -188,12 +198,15 @@ AccessRightsElement::AccessRightsElement(
     , any_database(false)
     , any_table(false)
     , any_column(false)
+    , any_parameter(false)
 {
 }
 
 void AccessRightsElement::eraseNonGrantable()
 {
-    if (!any_column)
+    if (isGlobalWithParameter() && !any_parameter)
+        access_flags &= AccessFlags::allFlagsGrantableOnGlobalWithParameterLevel();
+    else if (!any_column)
         access_flags &= AccessFlags::allFlagsGrantableOnColumnLevel();
     else if (!any_table)
         access_flags &= AccessFlags::allFlagsGrantableOnTableLevel();
@@ -215,6 +228,11 @@ String AccessRightsElement::toStringWithoutOptions() const { return toStringImpl
 
 bool AccessRightsElements::empty() const { return std::all_of(begin(), end(), [](const AccessRightsElement & e) { return e.empty(); }); }
 
+bool AccessRightsElements::sameDatabaseAndTableAndParameter() const
+{
+    return (size() < 2) || std::all_of(std::next(begin()), end(), [this](const AccessRightsElement & e) { return e.sameDatabaseAndTableAndParameter(front()); });
+}
+
 bool AccessRightsElements::sameDatabaseAndTable() const
 {
     return (size() < 2) || std::all_of(std::next(begin()), end(), [this](const AccessRightsElement & e) { return e.sameDatabaseAndTable(front()); });
@@ -11,12 +11,17 @@ namespace DB
 struct AccessRightsElement
 {
     AccessFlags access_flags;
+
     String database;
     String table;
     Strings columns;
+    String parameter;
+
     bool any_database = true;
     bool any_table = true;
     bool any_column = true;
+    bool any_parameter = false;
+
     bool grant_option = false;
     bool is_partial_revoke = false;
 
@@ -44,14 +49,26 @@ struct AccessRightsElement
 
     bool empty() const { return !access_flags || (!any_column && columns.empty()); }
 
-    auto toTuple() const { return std::tie(access_flags, any_database, database, any_table, table, any_column, columns, grant_option, is_partial_revoke); }
+    auto toTuple() const { return std::tie(access_flags, any_database, database, any_table, table, any_column, columns, any_parameter, parameter, grant_option, is_partial_revoke); }
     friend bool operator==(const AccessRightsElement & left, const AccessRightsElement & right) { return left.toTuple() == right.toTuple(); }
     friend bool operator!=(const AccessRightsElement & left, const AccessRightsElement & right) { return !(left == right); }
 
+    bool sameDatabaseAndTableAndParameter(const AccessRightsElement & other) const
+    {
+        return sameDatabaseAndTable(other) && sameParameter(other);
+    }
+
+    bool sameParameter(const AccessRightsElement & other) const
+    {
+        return (parameter == other.parameter) && (any_parameter == other.any_parameter)
+            && (access_flags.getParameterType() == other.access_flags.getParameterType())
+            && (isGlobalWithParameter() == other.isGlobalWithParameter());
+    }
+
     bool sameDatabaseAndTable(const AccessRightsElement & other) const
     {
-        return (database == other.database) && (any_database == other.any_database) && (table == other.table)
-            && (any_table == other.any_table);
+        return (database == other.database) && (any_database == other.any_database)
+            && (table == other.table) && (any_table == other.any_table);
     }
 
     bool sameOptions(const AccessRightsElement & other) const
@@ -67,6 +84,8 @@ struct AccessRightsElement
     /// If the database is empty, replaces it with `current_database`. Otherwise does nothing.
     void replaceEmptyDatabase(const String & current_database);
 
+    bool isGlobalWithParameter() const { return access_flags.isGlobalWithParameter(); }
+
     /// Returns a human-readable representation like "GRANT SELECT, UPDATE(x, y) ON db.table".
     String toString() const;
     String toStringWithoutOptions() const;
@@ -81,6 +100,7 @@ public:
     using Base::Base;
 
     bool empty() const;
+    bool sameDatabaseAndTableAndParameter() const;
     bool sameDatabaseAndTable() const;
     bool sameOptions() const;
 
@@ -12,7 +12,7 @@ enum class AccessType
     /// Macro M should be defined as M(name, aliases, node_type, parent_group_name)
     /// where name is identifier with underscores (instead of spaces);
     /// aliases is a string containing comma-separated list;
-    /// node_type either specifies access type's level (GLOBAL/DATABASE/TABLE/DICTIONARY/VIEW/COLUMNS),
+    /// node_type either specifies access type's level (GLOBAL/NAMED_COLLECTION/DATABASE/TABLE/DICTIONARY/VIEW/COLUMNS),
     /// or specifies that the access type is a GROUP of other access types;
     /// parent_group_name is the name of the group containing this access type (or NONE if there is no such group).
     /// NOTE A parent group must be declared AFTER all its children.
@@ -70,7 +70,7 @@ enum class AccessType
     M(ALTER_FREEZE_PARTITION, "FREEZE PARTITION, UNFREEZE", TABLE, ALTER_TABLE) \
     \
     M(ALTER_DATABASE_SETTINGS, "ALTER DATABASE SETTING, ALTER MODIFY DATABASE SETTING, MODIFY DATABASE SETTING", DATABASE, ALTER_DATABASE) /* allows to execute ALTER MODIFY SETTING */\
-    M(ALTER_NAMED_COLLECTION, "", GROUP, ALTER) /* allows to execute ALTER NAMED COLLECTION */\
+    M(ALTER_NAMED_COLLECTION, "", NAMED_COLLECTION, NAMED_COLLECTION_CONTROL) /* allows to execute ALTER NAMED COLLECTION */\
     \
     M(ALTER_TABLE, "", GROUP, ALTER) \
     M(ALTER_DATABASE, "", GROUP, ALTER) \
@@ -92,7 +92,7 @@ enum class AccessType
     M(CREATE_ARBITRARY_TEMPORARY_TABLE, "", GLOBAL, CREATE) /* allows to create and manipulate temporary tables
                                                                with arbitrary table engine */\
     M(CREATE_FUNCTION, "", GLOBAL, CREATE) /* allows to execute CREATE FUNCTION */ \
-    M(CREATE_NAMED_COLLECTION, "", GLOBAL, CREATE) /* allows to execute CREATE NAMED COLLECTION */ \
+    M(CREATE_NAMED_COLLECTION, "", NAMED_COLLECTION, NAMED_COLLECTION_CONTROL) /* allows to execute CREATE NAMED COLLECTION */ \
     M(CREATE, "", GROUP, ALL) /* allows to execute {CREATE|ATTACH} */ \
     \
     M(DROP_DATABASE, "", DATABASE, DROP) /* allows to execute {DROP|DETACH} DATABASE */\
@@ -101,7 +101,7 @@ enum class AccessType
                                                              implicitly enabled by the grant DROP_TABLE */\
     M(DROP_DICTIONARY, "", DICTIONARY, DROP) /* allows to execute {DROP|DETACH} DICTIONARY */\
     M(DROP_FUNCTION, "", GLOBAL, DROP) /* allows to execute DROP FUNCTION */\
-    M(DROP_NAMED_COLLECTION, "", GLOBAL, DROP) /* allows to execute DROP NAMED COLLECTION */\
+    M(DROP_NAMED_COLLECTION, "", NAMED_COLLECTION, NAMED_COLLECTION_CONTROL) /* allows to execute DROP NAMED COLLECTION */\
     M(DROP, "", GROUP, ALL) /* allows to execute {DROP|DETACH} */\
     \
     M(TRUNCATE, "TRUNCATE TABLE", TABLE, ALL) \
@@ -137,9 +137,10 @@ enum class AccessType
     M(SHOW_QUOTAS, "SHOW CREATE QUOTA", GLOBAL, SHOW_ACCESS) \
     M(SHOW_SETTINGS_PROFILES, "SHOW PROFILES, SHOW CREATE SETTINGS PROFILE, SHOW CREATE PROFILE", GLOBAL, SHOW_ACCESS) \
     M(SHOW_ACCESS, "", GROUP, ACCESS_MANAGEMENT) \
-    M(SHOW_NAMED_COLLECTIONS, "SHOW NAMED COLLECTIONS", GLOBAL, ACCESS_MANAGEMENT) \
-    M(SHOW_NAMED_COLLECTIONS_SECRETS, "SHOW NAMED COLLECTIONS SECRETS", GLOBAL, ACCESS_MANAGEMENT) \
     M(ACCESS_MANAGEMENT, "", GROUP, ALL) \
+    M(SHOW_NAMED_COLLECTIONS, "SHOW NAMED COLLECTIONS", NAMED_COLLECTION, NAMED_COLLECTION_CONTROL) \
+    M(SHOW_NAMED_COLLECTIONS_SECRETS, "SHOW NAMED COLLECTIONS SECRETS", NAMED_COLLECTION, NAMED_COLLECTION_CONTROL) \
+    M(NAMED_COLLECTION_CONTROL, "", NAMED_COLLECTION, ALL) \
     \
     M(SYSTEM_SHUTDOWN, "SYSTEM KILL, SHUTDOWN", GLOBAL, SYSTEM) \
     M(SYSTEM_DROP_DNS_CACHE, "SYSTEM DROP DNS, DROP DNS CACHE, DROP DNS", GLOBAL, SYSTEM_DROP_CACHE) \
@@ -507,13 +507,17 @@ bool ContextAccess::checkAccessImplHelper(AccessFlags flags, const Args &... arg
     if (!flags)
         return true;
 
-    /// Access to temporary tables is controlled in an unusual way, not like normal tables.
-    /// Creating of temporary tables is controlled by AccessType::CREATE_TEMPORARY_TABLES grant,
-    /// and other grants are considered as always given.
-    /// The DatabaseCatalog class won't resolve StorageID for temporary tables
-    /// which shouldn't be accessed.
-    if (getDatabase(args...) == DatabaseCatalog::TEMPORARY_DATABASE)
-        return access_granted();
+    const auto parameter_type = flags.getParameterType();
+    if (parameter_type == AccessFlags::NONE)
+    {
+        /// Access to temporary tables is controlled in an unusual way, not like normal tables.
+        /// Creating of temporary tables is controlled by AccessType::CREATE_TEMPORARY_TABLES grant,
+        /// and other grants are considered as always given.
+        /// The DatabaseCatalog class won't resolve StorageID for temporary tables
+        /// which shouldn't be accessed.
+        if (getDatabase(args...) == DatabaseCatalog::TEMPORARY_DATABASE)
+            return access_granted();
+    }
 
     auto acs = getAccessRightsWithImplicit();
     bool granted;
@ -611,7 +615,14 @@ template <bool throw_if_denied, bool grant_option>
|
||||
bool ContextAccess::checkAccessImplHelper(const AccessRightsElement & element) const
|
||||
{
|
||||
assert(!element.grant_option || grant_option);
|
||||
if (element.any_database)
|
||||
if (element.isGlobalWithParameter())
|
||||
{
|
||||
if (element.any_parameter)
|
||||
return checkAccessImpl<throw_if_denied, grant_option>(element.access_flags);
|
||||
else
|
||||
return checkAccessImpl<throw_if_denied, grant_option>(element.access_flags, element.parameter);
|
||||
}
|
||||
else if (element.any_database)
|
||||
return checkAccessImpl<throw_if_denied, grant_option>(element.access_flags);
|
||||
else if (element.any_table)
|
||||
return checkAccessImpl<throw_if_denied, grant_option>(element.access_flags, element.database);
|
||||
|
@@ -233,10 +233,10 @@ namespace
            user->access.revokeGrantOption(AccessType::ALL);
        }

        bool show_named_collections = config.getBool(user_config + ".show_named_collections", false);
        if (!show_named_collections)
        bool named_collection_control = config.getBool(user_config + ".named_collection_control", false);
        if (!named_collection_control)
        {
            user->access.revoke(AccessType::SHOW_NAMED_COLLECTIONS);
            user->access.revoke(AccessType::NAMED_COLLECTION_CONTROL);
        }

        bool show_named_collections_secrets = config.getBool(user_config + ".show_named_collections_secrets", false);
@@ -53,7 +53,7 @@ TEST(AccessRights, Union)
        "SHOW ROW POLICIES, SYSTEM MERGES, SYSTEM TTL MERGES, SYSTEM FETCHES, "
        "SYSTEM MOVES, SYSTEM SENDS, SYSTEM REPLICATION QUEUES, "
        "SYSTEM DROP REPLICA, SYSTEM SYNC REPLICA, SYSTEM RESTART REPLICA, "
        "SYSTEM RESTORE REPLICA, SYSTEM WAIT LOADING PARTS, SYSTEM SYNC DATABASE REPLICA, SYSTEM FLUSH DISTRIBUTED, dictGet ON db1.*");
        "SYSTEM RESTORE REPLICA, SYSTEM WAIT LOADING PARTS, SYSTEM SYNC DATABASE REPLICA, SYSTEM FLUSH DISTRIBUTED, dictGet ON db1.*, GRANT NAMED COLLECTION CONTROL ON db1");
}
@@ -20,13 +20,11 @@

#include <unordered_set>

#include <pcg_random.hpp>
#include <Common/assert_cast.h>
#include <Common/typeid_cast.h>
#include <Core/Types.h>
#include <IO/Operators.h>
#include <IO/UseSSL.h>
#include <IO/WriteBufferFromOStream.h>
#include <Parsers/ASTExplainQuery.h>
#include <Parsers/ASTExpressionList.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTIdentifier.h>
@@ -34,17 +32,20 @@
#include <Parsers/ASTLiteral.h>
#include <Parsers/ASTOrderByElement.h>
#include <Parsers/ASTQueryWithOutput.h>
#include <Parsers/ASTSelectIntersectExceptQuery.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTSelectWithUnionQuery.h>
#include <Parsers/ASTSetQuery.h>
#include <Parsers/ASTSubquery.h>
#include <Parsers/ASTTablesInSelectQuery.h>
#include <Parsers/ASTSelectIntersectExceptQuery.h>
#include <Parsers/ASTUseQuery.h>
#include <Parsers/ASTWindowDefinition.h>
#include <Parsers/ParserQuery.h>
#include <Parsers/formatAST.h>
#include <Parsers/parseQuery.h>
#include <pcg_random.hpp>
#include <Common/assert_cast.h>
#include <Common/typeid_cast.h>


namespace DB
@@ -681,6 +682,98 @@ void QueryFuzzer::fuzzTableName(ASTTableExpression & table)
    }
}

void QueryFuzzer::fuzzExplainQuery(ASTExplainQuery & explain)
{
    /// Fuzz ExplainKind
    if (fuzz_rand() % 20 == 0)
    {
        /// Do not modify ExplainKind
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::ParsedAST);
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::AnalyzedSyntax);
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::QueryTree);
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::QueryPlan);
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::QueryPipeline);
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::QueryEstimates);
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::TableOverride);
    }
    else if (fuzz_rand() % 11 == 0)
    {
        explain.setExplainKind(ASTExplainQuery::ExplainKind::CurrentTransaction);
    }

    static const std::unordered_map<ASTExplainQuery::ExplainKind, std::vector<String>> settings_by_kind
        = {{ASTExplainQuery::ExplainKind::ParsedAST, {"graph", "optimize"}},
           {ASTExplainQuery::ExplainKind::AnalyzedSyntax, {}},
           {ASTExplainQuery::QueryTree, {"run_passes", "dump_passes", "dump_ast", "passes"}},
           {ASTExplainQuery::ExplainKind::QueryPlan, {"header, description", "actions", "indexes", "optimize", "json", "sorting"}},
           {ASTExplainQuery::ExplainKind::QueryPipeline, {"header", "graph=1", "compact"}},
           {ASTExplainQuery::ExplainKind::QueryEstimates, {}},
           {ASTExplainQuery::ExplainKind::TableOverride, {}},
           {ASTExplainQuery::ExplainKind::CurrentTransaction, {}}};

    const auto & settings = settings_by_kind.at(explain.getKind());
    bool settings_have_fuzzed = false;
    for (auto & child : explain.children)
    {
        if (auto * settings_ast = typeid_cast<ASTSetQuery *>(child.get()))
        {
            fuzzExplainSettings(*settings_ast, settings);
            settings_have_fuzzed = true;
        }
        /// Fuzz other child like Explain Query
        else
        {
            fuzz(child);
        }
    }

    if (!settings_have_fuzzed && !settings.empty())
    {
        auto settings_ast = std::make_shared<ASTSetQuery>();
        fuzzExplainSettings(*settings_ast, settings);
        explain.setSettings(settings_ast);
    }
}

void QueryFuzzer::fuzzExplainSettings(ASTSetQuery & settings, const std::vector<String> & names)
{
    auto & changes = settings.changes;

    if (fuzz_rand() % 50 == 0 && !changes.empty())
    {
        changes.erase(changes.begin() + fuzz_rand() % changes.size());
    }

    for (const auto & name : names)
    {
        if (fuzz_rand() % 5 == 0)
        {
            changes.emplace_back(name, true);
        }
    }
}

static ASTPtr tryParseInsertQuery(const String & full_query)
{
    const char * pos = full_query.data();
@@ -991,6 +1084,10 @@ void QueryFuzzer::fuzz(ASTPtr & ast)
    {
        fuzzCreateQuery(*create_query);
    }
    else if (auto * explain_query = typeid_cast<ASTExplainQuery *>(ast.get()))
    {
        fuzzExplainQuery(*explain_query);
    }
    else
    {
        fuzz(ast->children);
@@ -22,6 +22,8 @@ class ASTCreateQuery;
class ASTInsertQuery;
class ASTColumnDeclaration;
class ASTDropQuery;
class ASTExplainQuery;
class ASTSetQuery;
struct ASTTableExpression;
struct ASTWindowDefinition;

@@ -86,6 +88,8 @@ struct QueryFuzzer
    void fuzzColumnLikeExpressionList(IAST * ast);
    void fuzzWindowFrame(ASTWindowDefinition & def);
    void fuzzCreateQuery(ASTCreateQuery & create);
    void fuzzExplainQuery(ASTExplainQuery & explain);
    void fuzzExplainSettings(ASTSetQuery & settings, const std::vector<String> & names);
    void fuzzColumnDeclaration(ASTColumnDeclaration & column);
    void fuzzTableName(ASTTableExpression & table);
    void fuzz(ASTs & asts);
@@ -648,6 +648,7 @@
    M(677, THREAD_WAS_CANCELED) \
    M(678, IO_URING_INIT_FAILED) \
    M(679, IO_URING_SUBMIT_ERROR) \
    M(690, MIXED_ACCESS_PARAMETER_TYPES) \
    \
    M(999, KEEPER_EXCEPTION) \
    M(1000, POCO_EXCEPTION) \
@@ -17,6 +17,7 @@ namespace ErrorCodes
    extern const int NAMED_COLLECTION_DOESNT_EXIST;
    extern const int NAMED_COLLECTION_ALREADY_EXISTS;
    extern const int NAMED_COLLECTION_IS_IMMUTABLE;
    extern const int BAD_ARGUMENTS;
}

namespace Configuration = NamedCollectionConfiguration;
@@ -200,6 +201,11 @@ public:
        return std::unique_ptr<Impl>(new Impl(collection_config, keys));
    }

    bool has(const Key & key) const
    {
        return Configuration::hasConfigValue(*config, key);
    }

    template <typename T> T get(const Key & key) const
    {
        return Configuration::getConfigValue<T>(*config, key);
@@ -341,6 +347,21 @@ MutableNamedCollectionPtr NamedCollection::create(
        new NamedCollection(std::move(impl), collection_name, source_id, is_mutable));
}

bool NamedCollection::has(const Key & key) const
{
    std::lock_guard lock(mutex);
    return pimpl->has(key);
}

bool NamedCollection::hasAny(const std::initializer_list<Key> & keys) const
{
    std::lock_guard lock(mutex);
    for (const auto & key : keys)
        if (pimpl->has(key))
            return true;
    return false;
}

template <typename T> T NamedCollection::get(const Key & key) const
{
    std::lock_guard lock(mutex);
@@ -353,6 +374,28 @@ template <typename T> T NamedCollection::getOrDefault(const Key & key, const T &
    return pimpl->getOrDefault<T>(key, default_value);
}

template <typename T> T NamedCollection::getAny(const std::initializer_list<Key> & keys) const
{
    std::lock_guard lock(mutex);
    for (const auto & key : keys)
    {
        if (pimpl->has(key))
            return pimpl->get<T>(key);
    }
    throw Exception(ErrorCodes::BAD_ARGUMENTS, "No such keys: {}", fmt::join(keys, ", "));
}

template <typename T> T NamedCollection::getAnyOrDefault(const std::initializer_list<Key> & keys, const T & default_value) const
{
    std::lock_guard lock(mutex);
    for (const auto & key : keys)
    {
        if (pimpl->has(key))
            return pimpl->get<T>(key);
    }
    return default_value;
}

template <typename T, bool Locked> void NamedCollection::set(const Key & key, const T & value)
{
    assertMutable();
@@ -444,6 +487,18 @@ template Int64 NamedCollection::getOrDefault<Int64>(const NamedCollection::Key &
template Float64 NamedCollection::getOrDefault<Float64>(const NamedCollection::Key & key, const Float64 & default_value) const;
template bool NamedCollection::getOrDefault<bool>(const NamedCollection::Key & key, const bool & default_value) const;

template String NamedCollection::getAny<String>(const std::initializer_list<NamedCollection::Key> & key) const;
template UInt64 NamedCollection::getAny<UInt64>(const std::initializer_list<NamedCollection::Key> & key) const;
template Int64 NamedCollection::getAny<Int64>(const std::initializer_list<NamedCollection::Key> & key) const;
template Float64 NamedCollection::getAny<Float64>(const std::initializer_list<NamedCollection::Key> & key) const;
template bool NamedCollection::getAny<bool>(const std::initializer_list<NamedCollection::Key> & key) const;

template String NamedCollection::getAnyOrDefault<String>(const std::initializer_list<NamedCollection::Key> & key, const String & default_value) const;
template UInt64 NamedCollection::getAnyOrDefault<UInt64>(const std::initializer_list<NamedCollection::Key> & key, const UInt64 & default_value) const;
template Int64 NamedCollection::getAnyOrDefault<Int64>(const std::initializer_list<NamedCollection::Key> & key, const Int64 & default_value) const;
template Float64 NamedCollection::getAnyOrDefault<Float64>(const std::initializer_list<NamedCollection::Key> & key, const Float64 & default_value) const;
template bool NamedCollection::getAnyOrDefault<bool>(const std::initializer_list<NamedCollection::Key> & key, const bool & default_value) const;

template void NamedCollection::set<String, true>(const NamedCollection::Key & key, const String & value);
template void NamedCollection::set<String, false>(const NamedCollection::Key & key, const String & value);
template void NamedCollection::set<UInt64, true>(const NamedCollection::Key & key, const UInt64 & value);
@@ -33,10 +33,18 @@ public:
        SourceId source_id_,
        bool is_mutable_);

    bool has(const Key & key) const;

    bool hasAny(const std::initializer_list<Key> & keys) const;

    template <typename T> T get(const Key & key) const;

    template <typename T> T getOrDefault(const Key & key, const T & default_value) const;

    template <typename T> T getAny(const std::initializer_list<Key> & keys) const;

    template <typename T> T getAnyOrDefault(const std::initializer_list<Key> & keys, const T & default_value) const;

    std::unique_lock<std::mutex> lock();

    template <typename T, bool locked = false> void set(const Key & key, const T & value);
@@ -202,8 +202,16 @@ std::string wipeSensitiveDataAndCutToLength(const std::string & str, size_t max_
    if (auto * masker = SensitiveDataMasker::getInstance())
        masker->wipeSensitiveData(res);

    if (max_length && (res.length() > max_length))
    size_t length = res.length();
    if (max_length && (length > max_length))
    {
        constexpr size_t max_extra_msg_len = sizeof("... (truncated 18446744073709551615 characters)");
        if (max_length < max_extra_msg_len)
            return "(removed " + std::to_string(length) + " characters)";
        max_length -= max_extra_msg_len;
        res.resize(max_length);
        res.append("... (truncated " + std::to_string(length - max_length) + " characters)");
    }

    return res;
}
@@ -89,12 +89,12 @@ static DataTypePtr createExact(const ASTPtr & arguments)
{
    if (!arguments || arguments->children.size() != 1)
        throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
            "Decimal data type family must have exactly two arguments: precision and scale");

            "Decimal32 | Decimal64 | Decimal128 | Decimal256 data type family must have exactly one arguments: scale");
    const auto * scale_arg = arguments->children[0]->as<ASTLiteral>();

    if (!scale_arg || !(scale_arg->value.getType() == Field::Types::Int64 || scale_arg->value.getType() == Field::Types::UInt64))
        throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Decimal data type family must have a two numbers as its arguments");
        throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
            "Decimal32 | Decimal64 | Decimal128 | Decimal256 data type family must have a one number as its argument");

    UInt64 precision = DecimalUtils::max_precision<T>;
    UInt64 scale = scale_arg->value.get<UInt64>();
@@ -13,7 +13,6 @@
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTLiteral.h>
#include <Parsers/queryToString.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <Common/NamedCollections/NamedCollections.h>
#include <Common/logger_useful.h>
@@ -24,11 +23,11 @@

#if USE_MYSQL
#    include <Core/MySQL/MySQLClient.h>
#    include <Databases/MySQL/ConnectionMySQLSettings.h>
#    include <Databases/MySQL/DatabaseMySQL.h>
#    include <Databases/MySQL/MaterializedMySQLSettings.h>
#    include <Storages/MySQL/MySQLHelpers.h>
#    include <Storages/MySQL/MySQLSettings.h>
#    include <Storages/StorageMySQL.h>
#    include <Databases/MySQL/DatabaseMaterializedMySQL.h>
#    include <mysqlxx/Pool.h>
#endif
@@ -183,21 +182,13 @@ DatabasePtr DatabaseFactory::getImpl(const ASTCreateQuery & create, const String
        if (!engine->arguments)
            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Engine `{}` must have arguments", engine_name);

        StorageMySQLConfiguration configuration;
        StorageMySQL::Configuration configuration;
        ASTs & arguments = engine->arguments->children;
        auto mysql_settings = std::make_unique<ConnectionMySQLSettings>();
        auto mysql_settings = std::make_unique<MySQLSettings>();

        if (auto named_collection = getExternalDataSourceConfiguration(arguments, context, true, true, *mysql_settings))
        if (auto named_collection = tryGetNamedCollectionWithOverrides(arguments, context))
        {
            auto [common_configuration, storage_specific_args, settings_changes] = named_collection.value();

            configuration.set(common_configuration);
            configuration.addresses = {std::make_pair(configuration.host, configuration.port)};
            mysql_settings->applyChanges(settings_changes);

            if (!storage_specific_args.empty())
                throw Exception(ErrorCodes::BAD_ARGUMENTS,
                    "MySQL database require mysql_hostname, mysql_database_name, mysql_username, mysql_password arguments.");
            configuration = StorageMySQL::processNamedCollectionResult(*named_collection, *mysql_settings, false);
        }
        else
        {
@@ -230,8 +221,9 @@ DatabasePtr DatabaseFactory::getImpl(const ASTCreateQuery & create, const String
        {
            if (engine_name == "MySQL")
            {
                mysql_settings->loadFromQueryContext(context);
                mysql_settings->loadFromQuery(*engine_define); /// higher priority
                mysql_settings->loadFromQueryContext(context, *engine_define);
                if (engine_define->settings)
                    mysql_settings->loadFromQuery(*engine_define);

                auto mysql_pool = createMySQLPoolWithFailover(configuration, *mysql_settings);

@@ -324,21 +316,9 @@ DatabasePtr DatabaseFactory::getImpl(const ASTCreateQuery & create, const String
        auto use_table_cache = false;
        StoragePostgreSQL::Configuration configuration;

        if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args))
        if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, context))
        {
            validateNamedCollection(
                *named_collection,
                {"host", "port", "user", "password", "database"},
                {"schema", "on_conflict", "use_table_cache"});

            configuration.host = named_collection->get<String>("host");
            configuration.port = static_cast<UInt16>(named_collection->get<UInt64>("port"));
            configuration.addresses = {std::make_pair(configuration.host, configuration.port)};
            configuration.username = named_collection->get<String>("user");
            configuration.password = named_collection->get<String>("password");
            configuration.database = named_collection->get<String>("database");
            configuration.schema = named_collection->getOrDefault<String>("schema", "");
            configuration.on_conflict = named_collection->getOrDefault<String>("on_conflict", "");
            configuration = StoragePostgreSQL::processNamedCollectionResult(*named_collection, false);
            use_table_cache = named_collection->getOrDefault<UInt64>("use_tables_cache", 0);
        }
        else
@@ -399,20 +379,9 @@ DatabasePtr DatabaseFactory::getImpl(const ASTCreateQuery & create, const String
        ASTs & engine_args = engine->arguments->children;
        StoragePostgreSQL::Configuration configuration;

        if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args))
        if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, context))
        {
            validateNamedCollection(
                *named_collection,
                {"host", "port", "user", "password", "database"},
                {"schema"});

            configuration.host = named_collection->get<String>("host");
            configuration.port = static_cast<UInt16>(named_collection->get<UInt64>("port"));
            configuration.addresses = {std::make_pair(configuration.host, configuration.port)};
            configuration.username = named_collection->get<String>("user");
            configuration.password = named_collection->get<String>("password");
            configuration.database = named_collection->get<String>("database");
            configuration.schema = named_collection->getOrDefault<String>("schema", "");
            configuration = StoragePostgreSQL::processNamedCollectionResult(*named_collection, false);
        }
        else
        {
@@ -1,65 +0,0 @@
#include <Databases/MySQL/ConnectionMySQLSettings.h>

#include <Core/SettingsFields.h>
#include <Interpreters/Context.h>
#include <Parsers/ASTFunction.h>
#include <Parsers/ASTCreateQuery.h>

namespace DB
{

namespace ErrorCodes
{
    extern const int UNKNOWN_SETTING;
    extern const int BAD_ARGUMENTS;
}

IMPLEMENT_SETTINGS_TRAITS(ConnectionMySQLSettingsTraits, LIST_OF_MYSQL_DATABASE_SETTINGS)

void ConnectionMySQLSettings::loadFromQuery(ASTStorage & storage_def)
{
    if (storage_def.settings)
    {
        try
        {
            applyChanges(storage_def.settings->changes);
        }
        catch (Exception & e)
        {
            if (e.code() == ErrorCodes::UNKNOWN_SETTING)
                throw Exception(ErrorCodes::BAD_ARGUMENTS, "{} for database {}", e.message(), storage_def.engine->name);
            else
                e.rethrow();
        }
    }
    else
    {
        auto settings_ast = std::make_shared<ASTSetQuery>();
        settings_ast->is_standalone = false;
        storage_def.set(storage_def.settings, settings_ast);
    }

    SettingsChanges & changes = storage_def.settings->changes;
#define ADD_IF_ABSENT(NAME) \
    if (std::find_if(changes.begin(), changes.end(), \
                     [](const SettingChange & c) { return c.name == #NAME; }) \
        == changes.end()) \
        changes.push_back(SettingChange{#NAME, static_cast<Field>(NAME)});

    APPLY_FOR_IMMUTABLE_CONNECTION_MYSQL_SETTINGS(ADD_IF_ABSENT)
#undef ADD_IF_ABSENT
}

void ConnectionMySQLSettings::loadFromQueryContext(ContextPtr context)
{
    if (!context->hasQueryContext())
        return;

    const Settings & settings = context->getQueryContext()->getSettingsRef();

    if (settings.mysql_datatypes_support_level.value != mysql_datatypes_support_level.value)
        set("mysql_datatypes_support_level", settings.mysql_datatypes_support_level.toString());
}


}
@@ -1,38 +0,0 @@
#pragma once

#include <Core/BaseSettings.h>
#include <Core/Defines.h>
#include <Core/SettingsEnums.h>
#include <Interpreters/Context_fwd.h>
#include <Storages/MySQL/MySQLSettings.h>

namespace DB
{

class ASTStorage;

#define LIST_OF_CONNECTION_MYSQL_SETTINGS(M, ALIAS) \
    M(MySQLDataTypesSupport, mysql_datatypes_support_level, 0, "Which MySQL types should be converted to corresponding ClickHouse types (rather than being represented as String). Can be empty or any combination of 'decimal' or 'datetime64'. When empty MySQL's DECIMAL and DATETIME/TIMESTAMP with non-zero precision are seen as String on ClickHouse's side.", 0) \

/// Settings that should not change after the creation of a database.
#define APPLY_FOR_IMMUTABLE_CONNECTION_MYSQL_SETTINGS(M) \
    M(mysql_datatypes_support_level)

#define LIST_OF_MYSQL_DATABASE_SETTINGS(M, ALIAS) \
    LIST_OF_CONNECTION_MYSQL_SETTINGS(M, ALIAS) \
    LIST_OF_MYSQL_SETTINGS(M, ALIAS)

DECLARE_SETTINGS_TRAITS(ConnectionMySQLSettingsTraits, LIST_OF_MYSQL_DATABASE_SETTINGS)


/** Settings for the MySQL database engine.
 * Could be loaded from a CREATE DATABASE query (SETTINGS clause) and Query settings.
 */
struct ConnectionMySQLSettings : public BaseSettings<ConnectionMySQLSettingsTraits>
{
    void loadFromQuery(ASTStorage & storage_def);

    void loadFromQueryContext(ContextPtr context);
};

}
@@ -53,7 +53,7 @@ DatabaseMySQL::DatabaseMySQL(
    const String & metadata_path_,
    const ASTStorage * database_engine_define_,
    const String & database_name_in_mysql_,
    std::unique_ptr<ConnectionMySQLSettings> settings_,
    std::unique_ptr<MySQLSettings> settings_,
    mysqlxx::PoolWithFailover && pool,
    bool attach)
    : IDatabase(database_name_)
@@ -61,7 +61,7 @@ DatabaseMySQL::DatabaseMySQL(
    , metadata_path(metadata_path_)
    , database_engine_define(database_engine_define_->clone())
    , database_name_in_mysql(database_name_in_mysql_)
    , database_settings(std::move(settings_))
    , mysql_settings(std::move(settings_))
    , mysql_pool(std::move(pool)) /// NOLINT
{
    try
@@ -309,7 +309,7 @@ DatabaseMySQL::fetchTablesColumnsList(const std::vector<String> & tables_name, C
        database_name_in_mysql,
        tables_name,
        settings,
        database_settings->mysql_datatypes_support_level);
        mysql_settings->mysql_datatypes_support_level);
}

void DatabaseMySQL::shutdown()
@@ -9,8 +9,8 @@
#include <Core/NamesAndTypes.h>
#include <Common/ThreadPool.h>
#include <Storages/ColumnsDescription.h>
#include <Storages/MySQL/MySQLSettings.h>
#include <Databases/DatabasesCommon.h>
#include <Databases/MySQL/ConnectionMySQLSettings.h>
#include <Parsers/ASTCreateQuery.h>
#include <mysqlxx/PoolWithFailover.h>

@@ -44,7 +44,7 @@ public:
        const String & metadata_path,
        const ASTStorage * database_engine_define,
        const String & database_name_in_mysql,
        std::unique_ptr<ConnectionMySQLSettings> settings_,
        std::unique_ptr<MySQLSettings> settings_,
        mysqlxx::PoolWithFailover && pool,
        bool attach);

@@ -93,7 +93,7 @@ private:
    String metadata_path;
    ASTPtr database_engine_define;
    String database_name_in_mysql;
    std::unique_ptr<ConnectionMySQLSettings> database_settings;
    std::unique_ptr<MySQLSettings> mysql_settings;

    std::atomic<bool> quit{false};
    std::condition_variable cond;
@@ -8,7 +8,6 @@
#include <Core/BackgroundSchedulePool.h>
#include <Parsers/ASTCreateQuery.h>
#include <Core/PostgreSQL/PoolWithFailover.h>
#include <Storages/ExternalDataSourceConfiguration.h>

namespace DB
{
@ -13,9 +13,9 @@
|
||||
#include <Interpreters/Context.h>
|
||||
#include <QueryPipeline/Pipe.h>
|
||||
#include <QueryPipeline/QueryPipeline.h>
|
||||
#include <Storages/ExternalDataSourceConfiguration.h>
|
||||
#include <Storages/MySQL/MySQLHelpers.h>
|
||||
#include <Storages/MySQL/MySQLSettings.h>
|
||||
#include <Storages/NamedCollectionsHelpers.h>
|
||||
#include <Columns/ColumnString.h>
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <IO/WriteBufferFromString.h>
|
||||
@ -68,27 +68,21 @@ void registerDictionarySourceMysql(DictionarySourceFactory & factory)
|
||||
auto settings_config_prefix = config_prefix + ".mysql";
|
||||
std::shared_ptr<mysqlxx::PoolWithFailover> pool;
|
||||
MySQLSettings mysql_settings;
|
||||
auto has_config_key = [&](const String & key)
|
||||
{
|
||||
return dictionary_allowed_keys.contains(key) || key.starts_with("replica") || mysql_settings.has(key);
|
||||
};
|
||||
StorageMySQLConfiguration configuration;
|
||||
auto named_collection = created_from_ddl
|
||||
? getExternalDataSourceConfiguration(config, settings_config_prefix, global_context, has_config_key, mysql_settings)
|
||||
: std::nullopt;
|
||||
|
||||
StorageMySQL::Configuration configuration;
|
||||
auto named_collection = created_from_ddl ? tryGetNamedCollectionWithOverrides(config, settings_config_prefix) : nullptr;
|
||||
if (named_collection)
|
||||
{
|
||||
if (created_from_ddl)
|
||||
global_context->getRemoteHostFilter().checkHostAndPort(configuration.host, toString(configuration.port));
|
||||
named_collection->remove("name");
|
||||
configuration = StorageMySQL::processNamedCollectionResult(*named_collection, mysql_settings);
|
||||
global_context->getRemoteHostFilter().checkHostAndPort(configuration.host, toString(configuration.port));
|
||||
|
||||
mysql_settings.applyChanges(named_collection->settings_changes);
|
||||
configuration.set(named_collection->configuration);
|
||||
configuration.addresses = {std::make_pair(configuration.host, configuration.port)};
|
||||
const auto & settings = global_context->getSettingsRef();
|
||||
if (!mysql_settings.isChanged("connect_timeout"))
|
||||
mysql_settings.connect_timeout = settings.external_storage_connect_timeout_sec;
|
||||
if (!mysql_settings.isChanged("read_write_timeout"))
|
||||
mysql_settings.read_write_timeout = settings.external_storage_rw_timeout_sec;
|
||||
|
||||
pool = std::make_shared<mysqlxx::PoolWithFailover>(createMySQLPoolWithFailover(configuration, mysql_settings));
|
||||
}
|
||||
else
|
||||
|
@ -68,6 +68,16 @@ namespace DB
|
||||
factory.registerSource("redis", create_table_source);
|
||||
}
|
||||
|
||||
RedisDictionarySource::Connection::Connection(PoolPtr pool_, ClientPtr client_)
|
||||
: pool(std::move(pool_)), client(std::move(client_))
|
||||
{
|
||||
}
|
||||
|
||||
RedisDictionarySource::Connection::~Connection()
|
||||
{
|
||||
pool->returnObject(std::move(client));
|
||||
}
|
||||
|
||||
static constexpr size_t REDIS_MAX_BLOCK_SIZE = DEFAULT_BLOCK_SIZE;
|
||||
static constexpr size_t REDIS_LOCK_ACQUIRE_TIMEOUT_MS = 5000;
|
||||
|
||||
|
@@ -52,15 +52,8 @@ namespace DB

    struct Connection
    {
        Connection(PoolPtr pool_, ClientPtr client_)
            : pool(std::move(pool_)), client(std::move(client_))
        {
        }

        ~Connection()
        {
            pool->returnObject(std::move(client));
        }
        Connection(PoolPtr pool_, ClientPtr client_);
        ~Connection();

        PoolPtr pool;
        ClientPtr client;

@@ -48,7 +48,7 @@ namespace
        if (current_query)
        {
            const auto & prev_element = current_query->access_rights_elements.back();
            bool continue_with_current_query = element.sameDatabaseAndTable(prev_element) && element.sameOptions(prev_element);
            bool continue_with_current_query = element.sameDatabaseAndTableAndParameter(prev_element) && element.sameOptions(prev_element);
            if (!continue_with_current_query)
                current_query = nullptr;
        }
@@ -467,10 +467,6 @@ SetPtr makeExplicitSet(
    return set;
}

ScopeStack::Level::~Level() = default;
ScopeStack::Level::Level() = default;
ScopeStack::Level::Level(Level &&) noexcept = default;

class ScopeStack::Index
{
    /// Map column name -> Node.

@@ -524,6 +520,10 @@ public:
    }
};

ScopeStack::Level::~Level() = default;
ScopeStack::Level::Level() = default;
ScopeStack::Level::Level(Level &&) noexcept = default;

ActionsMatcher::Data::Data(
    ContextPtr context_,
    SizeLimits set_size_limit_,

@@ -12,9 +12,10 @@ namespace DB
BlockIO InterpreterAlterNamedCollectionQuery::execute()
{
    auto current_context = getContext();
    current_context->checkAccess(AccessType::ALTER_NAMED_COLLECTION);

    const auto & query = query_ptr->as<const ASTAlterNamedCollectionQuery &>();

    current_context->checkAccess(AccessType::ALTER_NAMED_COLLECTION, query.collection_name);

    if (!query.cluster.empty())
    {
        DDLQueryOnClusterParams params;

@@ -13,10 +13,10 @@ namespace DB
BlockIO InterpreterCreateNamedCollectionQuery::execute()
{
    auto current_context = getContext();
    current_context->checkAccess(AccessType::CREATE_NAMED_COLLECTION);

    const auto & query = query_ptr->as<const ASTCreateNamedCollectionQuery &>();

    current_context->checkAccess(AccessType::CREATE_NAMED_COLLECTION, query.collection_name);

    if (!query.cluster.empty())
    {
        DDLQueryOnClusterParams params;

@@ -12,9 +12,10 @@ namespace DB
BlockIO InterpreterDropNamedCollectionQuery::execute()
{
    auto current_context = getContext();
    current_context->checkAccess(AccessType::DROP_NAMED_COLLECTION);

    const auto & query = query_ptr->as<const ASTDropNamedCollectionQuery &>();

    current_context->checkAccess(AccessType::DROP_NAMED_COLLECTION, query.collection_name);

    if (!query.cluster.empty())
    {
        DDLQueryOnClusterParams params;
@@ -70,7 +70,6 @@ std::pair<Field, std::shared_ptr<const IDataType>> evaluateConstantExpression(co
    if (context->getClientInfo().query_kind != ClientInfo::QueryKind::SECONDARY_QUERY && context->getSettingsRef().normalize_function_names)
        FunctionNameNormalizer().visit(ast.get());

    String result_name = ast->getColumnName();
    auto syntax_result = TreeRewriter(context).analyze(ast, source_columns);

    /// AST potentially could be transformed to literal during TreeRewriter analyze.

@@ -82,6 +81,7 @@ std::pair<Field, std::shared_ptr<const IDataType>> evaluateConstantExpression(co

    ColumnPtr result_column;
    DataTypePtr result_type;
    String result_name = ast->getColumnName();
    for (const auto & action_node : actions->getOutputs())
    {
        if ((action_node->result_name == result_name) && action_node->column)

@@ -80,6 +80,8 @@ public:
        return res;
    }

    void setExplainKind(ExplainKind kind_) { kind = kind_; }

    void setExplainedQuery(ASTPtr query_)
    {
        children.emplace_back(query_);
@@ -28,8 +28,8 @@ namespace DB

namespace ErrorCodes
{
    extern const int UNEXPECTED_EXPRESSION;
    extern const int UNEXPECTED_AST_STRUCTURE;
    extern const int UNKNOWN_FUNCTION;
}

@@ -471,8 +471,9 @@ namespace

void ASTFunction::appendColumnNameImpl(WriteBuffer & ostr) const
{
    if (name == "view")
        throw Exception(ErrorCodes::UNEXPECTED_EXPRESSION, "Table function view cannot be used as an expression");
    /// These functions contain some unexpected ASTs in arguments (e.g. SETTINGS or even a SELECT query)
    if (name == "view" || name == "viewIfPermitted" || name == "mysql" || name == "postgresql" || name == "mongodb" || name == "s3")
        throw Exception(ErrorCodes::UNKNOWN_FUNCTION, "Table function '{}' cannot be used as an expression", name);

    /// If function can be converted to literal it will be parsed as literal after formatting.
    /// In distributed query it may lead to mismathed column names.
@@ -27,21 +27,28 @@ namespace
    }


    void formatONClause(const String & database, bool any_database, const String & table, bool any_table, const IAST::FormatSettings & settings)
    void formatONClause(const AccessRightsElement & element, const IAST::FormatSettings & settings)
    {
        settings.ostr << (settings.hilite ? IAST::hilite_keyword : "") << "ON " << (settings.hilite ? IAST::hilite_none : "");
        if (any_database)
        if (element.isGlobalWithParameter())
        {
            if (element.any_parameter)
                settings.ostr << "*";
            else
                settings.ostr << backQuoteIfNeed(element.parameter);
        }
        else if (element.any_database)
        {
            settings.ostr << "*.*";
        }
        else
        {
            if (!database.empty())
                settings.ostr << backQuoteIfNeed(database) << ".";
            if (any_table)
            if (!element.database.empty())
                settings.ostr << backQuoteIfNeed(element.database) << ".";
            if (element.any_table)
                settings.ostr << "*";
            else
                settings.ostr << backQuoteIfNeed(table);
                settings.ostr << backQuoteIfNeed(element.table);
        }
    }

@@ -70,15 +77,16 @@ namespace
        if (i != elements.size() - 1)
        {
            const auto & next_element = elements[i + 1];
            if ((element.database == next_element.database) && (element.any_database == next_element.any_database)
                && (element.table == next_element.table) && (element.any_table == next_element.any_table))
            if (element.sameDatabaseAndTableAndParameter(next_element))
            {
                next_element_on_same_db_and_table = true;
            }
        }

        if (!next_element_on_same_db_and_table)
        {
            settings.ostr << " ";
            formatONClause(element.database, element.any_database, element.table, element.any_table, settings);
            formatONClause(element, settings);
        }
    }
@@ -123,13 +123,40 @@ namespace
        if (!parseAccessFlagsWithColumns(pos, expected, access_and_columns))
            return false;

        String database_name, table_name, parameter;
        bool any_database = false, any_table = false, any_parameter = false;

        size_t is_global_with_parameter = 0;
        for (const auto & elem : access_and_columns)
        {
            if (elem.first.isGlobalWithParameter())
                ++is_global_with_parameter;
        }

        if (!ParserKeyword{"ON"}.ignore(pos, expected))
            return false;

        String database_name, table_name;
        bool any_database = false, any_table = false;
        if (!parseDatabaseAndTableNameOrAsterisks(pos, expected, database_name, any_database, table_name, any_table))
        if (is_global_with_parameter && is_global_with_parameter == access_and_columns.size())
        {
            ASTPtr parameter_ast;
            if (ParserToken{TokenType::Asterisk}.ignore(pos, expected))
            {
                any_parameter = true;
            }
            else if (ParserIdentifier{}.parse(pos, parameter_ast, expected))
            {
                any_parameter = false;
                parameter = getIdentifierName(parameter_ast);
            }
            else
                return false;

            any_database = any_table = true;
        }
        else if (!parseDatabaseAndTableNameOrAsterisks(pos, expected, database_name, any_database, table_name, any_table))
        {
            return false;
        }

        for (auto & [access_flags, columns] : access_and_columns)
        {

@@ -140,7 +167,9 @@ namespace
            element.any_database = any_database;
            element.database = database_name;
            element.any_table = any_table;
            element.any_parameter = any_parameter;
            element.table = table_name;
            element.parameter = parameter;
            res_elements.emplace_back(std::move(element));
        }

@@ -173,6 +202,8 @@ namespace
                throw Exception(ErrorCodes::INVALID_GRANT, "{} cannot be granted on the table level", old_flags.toString());
            else if (!element.any_database)
                throw Exception(ErrorCodes::INVALID_GRANT, "{} cannot be granted on the database level", old_flags.toString());
            else if (!element.any_parameter)
                throw Exception(ErrorCodes::INVALID_GRANT, "{} cannot be granted on the global with parameter level", old_flags.toString());
            else
                throw Exception(ErrorCodes::INVALID_GRANT, "{} cannot be granted", old_flags.toString());
        });
@@ -210,6 +210,10 @@ std::string DistributedSink::getCurrentStateDescription()
}


DistributedSink::JobReplica::JobReplica(size_t shard_index_, size_t replica_index_, bool is_local_job_, const Block & sample_block)
    : shard_index(shard_index_), replica_index(replica_index_), is_local_job(is_local_job_), current_shard_block(sample_block.cloneEmpty()) {}


void DistributedSink::initWritingJobs(const Block & first_block, size_t start, size_t end)
{
    const Settings & settings = context->getSettingsRef();

@@ -118,8 +118,7 @@ private:
    struct JobReplica
    {
        JobReplica() = default;
        JobReplica(size_t shard_index_, size_t replica_index_, bool is_local_job_, const Block & sample_block)
            : shard_index(shard_index_), replica_index(replica_index_), is_local_job(is_local_job_), current_shard_block(sample_block.cloneEmpty()) {}
        JobReplica(size_t shard_index_, size_t replica_index_, bool is_local_job_, const Block & sample_block);

        size_t shard_index = 0;
        size_t replica_index = 0;
@@ -9,20 +9,6 @@
#include <Poco/Util/AbstractConfiguration.h>
#include <IO/WriteBufferFromString.h>

#if USE_AMQPCPP
#include <Storages/RabbitMQ/RabbitMQSettings.h>
#endif
#if USE_RDKAFKA
#include <Storages/Kafka/KafkaSettings.h>
#endif
#if USE_MYSQL
#include <Storages/MySQL/MySQLSettings.h>
#include <Databases/MySQL/ConnectionMySQLSettings.h>
#endif
#if USE_NATSIO
#include <Storages/NATS/NATSSettings.h>
#endif

#include <re2/re2.h>

namespace DB
@@ -94,116 +80,6 @@ void ExternalDataSourceConfiguration::set(const ExternalDataSourceConfiguration
}


template <typename T>
std::optional<ExternalDataSourceInfo> getExternalDataSourceConfiguration(
    const ASTs & args, ContextPtr context, bool is_database_engine, bool throw_on_no_collection, const BaseSettings<T> & storage_settings)
{
    if (args.empty())
        throw Exception(ErrorCodes::BAD_ARGUMENTS, "External data source must have arguments");

    ExternalDataSourceConfiguration configuration;
    StorageSpecificArgs non_common_args;

    if (const auto * collection = typeid_cast<const ASTIdentifier *>(args[0].get()))
    {
        const auto & config = context->getConfigRef();
        const auto & collection_prefix = fmt::format("named_collections.{}", collection->name());

        if (!config.has(collection_prefix))
        {
            /// For table function remote we do not throw on no collection, because then we consider first arg
            /// as cluster definition from config.
            if (!throw_on_no_collection)
                return std::nullopt;

            throw Exception(ErrorCodes::BAD_ARGUMENTS, "There is no collection named `{}` in config", collection->name());
        }

        SettingsChanges config_settings = getSettingsChangesFromConfig(storage_settings, config, collection_prefix);

        configuration.host = config.getString(collection_prefix + ".host", "");
        configuration.port = config.getInt(collection_prefix + ".port", 0);
        configuration.username = config.getString(collection_prefix + ".user", "");
        configuration.password = config.getString(collection_prefix + ".password", "");
        configuration.quota_key = config.getString(collection_prefix + ".quota_key", "");
        configuration.database = config.getString(collection_prefix + ".database", "");
        configuration.table = config.getString(collection_prefix + ".table", config.getString(collection_prefix + ".collection", ""));
        configuration.schema = config.getString(collection_prefix + ".schema", "");
        configuration.addresses_expr = config.getString(collection_prefix + ".addresses_expr", "");

        if (!configuration.addresses_expr.empty() && !configuration.host.empty())
            throw Exception(ErrorCodes::BAD_ARGUMENTS, "Cannot have `addresses_expr` and `host`, `port` in configuration at the same time");

        if ((args.size() == 1) && ((configuration.addresses_expr.empty() && (configuration.host.empty() || configuration.port == 0))
            || configuration.database.empty() || (configuration.table.empty() && !is_database_engine)))
        {
            throw Exception(ErrorCodes::BAD_ARGUMENTS,
                            "Named collection of connection parameters is missing some "
                            "of the parameters and no key-value arguments are added");
        }

        /// Check key-value arguments.
        for (size_t i = 1; i < args.size(); ++i)
        {
            if (const auto * ast_function = typeid_cast<const ASTFunction *>(args[i].get()))
            {
                const auto * args_expr = assert_cast<const ASTExpressionList *>(ast_function->arguments.get());
                auto function_args = args_expr->children;
                if (function_args.size() != 2)
                    throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected key-value defined argument");

                auto arg_name = function_args[0]->as<ASTIdentifier>()->name();
                if (function_args[1]->as<ASTFunction>())
                {
                    non_common_args.emplace_back(std::make_pair(arg_name, function_args[1]));
                    continue;
                }

                auto arg_value_ast = evaluateConstantExpressionOrIdentifierAsLiteral(function_args[1], context);
                auto * arg_value_literal = arg_value_ast->as<ASTLiteral>();
                if (arg_value_literal)
                {
                    auto arg_value = arg_value_literal->value;

                    if (arg_name == "host")
                        configuration.host = arg_value.safeGet<String>();
                    else if (arg_name == "port")
                        configuration.port = arg_value.safeGet<UInt64>();
                    else if (arg_name == "user")
                        configuration.username = arg_value.safeGet<String>();
                    else if (arg_name == "password")
                        configuration.password = arg_value.safeGet<String>();
                    else if (arg_name == "quota_key")
                        configuration.quota_key = arg_value.safeGet<String>();
                    else if (arg_name == "database")
                        configuration.database = arg_value.safeGet<String>();
                    else if (arg_name == "table")
                        configuration.table = arg_value.safeGet<String>();
                    else if (arg_name == "schema")
                        configuration.schema = arg_value.safeGet<String>();
                    else if (arg_name == "addresses_expr")
                        configuration.addresses_expr = arg_value.safeGet<String>();
                    else if (storage_settings.has(arg_name))
                        config_settings.emplace_back(arg_name, arg_value);
                    else
                        non_common_args.emplace_back(std::make_pair(arg_name, arg_value_ast));
                }
                else
                {
                    non_common_args.emplace_back(std::make_pair(arg_name, arg_value_ast));
                }
            }
            else
            {
                throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected key-value defined argument");
            }
        }

        return ExternalDataSourceInfo{ .configuration = configuration, .specific_args = non_common_args, .settings_changes = config_settings };
    }
    return std::nullopt;
}

static void validateConfigKeys(
    const Poco::Util::AbstractConfiguration & dict_config, const String & config_prefix, HasConfigKeyFunc has_config_key_func)
{
@@ -402,68 +278,6 @@ void URLBasedDataSourceConfiguration::set(const URLBasedDataSourceConfiguration
    headers = conf.headers;
}

template<typename T>
bool getExternalDataSourceConfiguration(const ASTs & args, BaseSettings<T> & settings, ContextPtr context)
{
    if (args.empty())
        return false;

    if (const auto * collection = typeid_cast<const ASTIdentifier *>(args[0].get()))
    {
        const auto & config = context->getConfigRef();
        const auto & config_prefix = fmt::format("named_collections.{}", collection->name());

        if (!config.has(config_prefix))
            throw Exception(ErrorCodes::BAD_ARGUMENTS, "There is no collection named `{}` in config", collection->name());

        auto config_settings = getSettingsChangesFromConfig(settings, config, config_prefix);

        /// Check key-value arguments.
        for (size_t i = 1; i < args.size(); ++i)
        {
            if (const auto * ast_function = typeid_cast<const ASTFunction *>(args[i].get()))
            {
                const auto * args_expr = assert_cast<const ASTExpressionList *>(ast_function->arguments.get());
                auto function_args = args_expr->children;
                if (function_args.size() != 2)
                    throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected key-value defined argument");

                auto arg_name = function_args[0]->as<ASTIdentifier>()->name();
                auto arg_value_ast = evaluateConstantExpressionOrIdentifierAsLiteral(function_args[1], context);
                auto arg_value = arg_value_ast->as<ASTLiteral>()->value;
                config_settings.emplace_back(arg_name, arg_value);
            }
            else
            {
                throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected key-value defined argument");
            }
        }

        settings.applyChanges(config_settings);
        return true;
    }
    return false;
}

#if USE_AMQPCPP
template
bool getExternalDataSourceConfiguration(const ASTs & args, BaseSettings<RabbitMQSettingsTraits> & settings, ContextPtr context);
#endif

#if USE_RDKAFKA
template
bool getExternalDataSourceConfiguration(const ASTs & args, BaseSettings<KafkaSettingsTraits> & settings, ContextPtr context);
#endif

#if USE_NATSIO
template
bool getExternalDataSourceConfiguration(const ASTs & args, BaseSettings<NATSSettingsTraits> & settings, ContextPtr context);
#endif

template
std::optional<ExternalDataSourceInfo> getExternalDataSourceConfiguration(
    const ASTs & args, ContextPtr context, bool is_database_engine, bool throw_on_no_collection, const BaseSettings<EmptySettingsTraits> & storage_settings);

template
std::optional<ExternalDataSourceInfo> getExternalDataSourceConfiguration(
    const Poco::Util::AbstractConfiguration & dict_config, const String & dict_config_prefix,

@@ -473,23 +287,4 @@ template
SettingsChanges getSettingsChangesFromConfig(
    const BaseSettings<EmptySettingsTraits> & settings, const Poco::Util::AbstractConfiguration & config, const String & config_prefix);

#if USE_MYSQL
template
std::optional<ExternalDataSourceInfo> getExternalDataSourceConfiguration(
    const ASTs & args, ContextPtr context, bool is_database_engine, bool throw_on_no_collection, const BaseSettings<MySQLSettingsTraits> & storage_settings);

template
std::optional<ExternalDataSourceInfo> getExternalDataSourceConfiguration(
    const Poco::Util::AbstractConfiguration & dict_config, const String & dict_config_prefix,
    ContextPtr context, HasConfigKeyFunc has_config_key, const BaseSettings<MySQLSettingsTraits> & settings);

template
SettingsChanges getSettingsChangesFromConfig(
    const BaseSettings<MySQLSettingsTraits> & settings, const Poco::Util::AbstractConfiguration & config, const String & config_prefix);

#endif
}
@@ -34,18 +34,6 @@ struct ExternalDataSourceConfiguration
};


struct StoragePostgreSQLConfiguration : ExternalDataSourceConfiguration
{
    String on_conflict;
};


struct StorageMySQLConfiguration : ExternalDataSourceConfiguration
{
    bool replace_query = false;
    String on_duplicate_clause;
};

using StorageSpecificArgs = std::vector<std::pair<String, ASTPtr>>;

struct ExternalDataSourceInfo

@@ -55,20 +43,6 @@ struct ExternalDataSourceInfo
    SettingsChanges settings_changes;
};

/* If there is a storage engine's configuration specified in the named_collections,
 * this function returns valid for usage ExternalDataSourceConfiguration struct
 * otherwise std::nullopt is returned.
 *
 * If any configuration options are provided as key-value engine arguments, they will override
 * configuration values, i.e. ENGINE = PostgreSQL(postgresql_configuration, database = 'postgres_database');
 *
 * Any key-value engine argument except common (`host`, `port`, `username`, `password`, `database`)
 * is returned in EngineArgs struct.
 */
template <typename T = EmptySettingsTraits>
std::optional<ExternalDataSourceInfo> getExternalDataSourceConfiguration(
    const ASTs & args, ContextPtr context, bool is_database_engine = false, bool throw_on_no_collection = true, const BaseSettings<T> & storage_settings = {});

using HasConfigKeyFunc = std::function<bool(const String &)>;

template <typename T = EmptySettingsTraits>

@@ -91,7 +65,6 @@ struct ExternalDataSourcesByPriority
ExternalDataSourcesByPriority
getExternalDataSourceConfigurationByPriority(const Poco::Util::AbstractConfiguration & dict_config, const String & dict_config_prefix, ContextPtr context, HasConfigKeyFunc has_config_key);


struct URLBasedDataSourceConfiguration
{
    String url;

@@ -118,7 +91,4 @@ struct URLBasedDataSourceConfig
std::optional<URLBasedDataSourceConfig> getURLBasedDataSourceConfiguration(
    const Poco::Util::AbstractConfiguration & dict_config, const String & dict_config_prefix, ContextPtr context);

template<typename T>
bool getExternalDataSourceConfiguration(const ASTs & args, BaseSettings<T> & settings, ContextPtr context);

}
@@ -29,8 +29,6 @@ namespace ErrorCodes
}


ReadBufferFromHDFS::~ReadBufferFromHDFS() = default;

struct ReadBufferFromHDFS::ReadBufferFromHDFSImpl : public BufferWithOwnMemory<SeekableReadBuffer>
{
    String hdfs_uri;

@@ -166,6 +164,8 @@ ReadBufferFromHDFS::ReadBufferFromHDFS(
{
}

ReadBufferFromHDFS::~ReadBufferFromHDFS() = default;

size_t ReadBufferFromHDFS::getFileSize()
{
    return impl->getFileSize();
@@ -19,13 +19,13 @@
#include <Processors/Executors/CompletedPipelineExecutor.h>
#include <QueryPipeline/QueryPipeline.h>
#include <QueryPipeline/Pipe.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/MessageQueueSink.h>
#include <Storages/Kafka/KafkaProducer.h>
#include <Storages/Kafka/KafkaSettings.h>
#include <Storages/Kafka/KafkaSource.h>
#include <Storages/StorageFactory.h>
#include <Storages/StorageMaterializedView.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <base/getFQDNOrHostName.h>
#include <Common/logger_useful.h>
#include <boost/algorithm/string/replace.hpp>

@@ -834,10 +834,21 @@ void registerStorageKafka(StorageFactory & factory)
    {
        ASTs & engine_args = args.engine_args;
        size_t args_count = engine_args.size();
        bool has_settings = args.storage_def->settings;
        const bool has_settings = args.storage_def->settings;

        auto kafka_settings = std::make_unique<KafkaSettings>();
        auto named_collection = getExternalDataSourceConfiguration(args.engine_args, *kafka_settings, args.getLocalContext());
        String collection_name;
        if (auto named_collection = tryGetNamedCollectionWithOverrides(args.engine_args, args.getLocalContext()))
        {
            for (const auto & setting : kafka_settings->all())
            {
                const auto & setting_name = setting.getName();
                if (named_collection->has(setting_name))
                    kafka_settings->set(setting_name, named_collection->get<String>(setting_name));
            }
            collection_name = assert_cast<const ASTIdentifier *>(args.engine_args[0].get())->name();
        }

        if (has_settings)
        {
            kafka_settings->loadFromQuery(*args.storage_def);

@@ -901,14 +912,10 @@ void registerStorageKafka(StorageFactory & factory)
         * - Do intermediate commits when the batch consumed and handled
         */

        String collection_name;
        if (named_collection)
        /* 0 = raw, 1 = evaluateConstantExpressionAsLiteral, 2=evaluateConstantExpressionOrIdentifierAsLiteral */
        /// In case of named collection we already validated the arguments.
        if (collection_name.empty())
        {
            collection_name = assert_cast<const ASTIdentifier *>(args.engine_args[0].get())->name();
        }
        else
        {
            /* 0 = raw, 1 = evaluateConstantExpressionAsLiteral, 2=evaluateConstantExpressionOrIdentifierAsLiteral */
            CHECK_KAFKA_STORAGE_ARGUMENT(1, kafka_broker_list, 0)
            CHECK_KAFKA_STORAGE_ARGUMENT(2, kafka_topic_list, 1)
            CHECK_KAFKA_STORAGE_ARGUMENT(3, kafka_group_name, 2)
@@ -129,7 +129,7 @@ SinkToStoragePtr StorageMeiliSearch::write(const ASTPtr & /*query*/, const Stora

MeiliSearchConfiguration StorageMeiliSearch::getConfiguration(ASTs engine_args, ContextPtr context)
{
    if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args))
    if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, context))
    {
        validateNamedCollection(*named_collection, {"url", "index"}, {"key"});

@@ -1,6 +1,5 @@
#pragma once

#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/IStorage.h>
#include <Storages/MeiliSearch/MeiliSearchConnection.h>

@@ -13,6 +13,20 @@ namespace ProfileEvents
namespace DB
{

struct MergeTreeSink::DelayedChunk
{
    struct Partition
    {
        MergeTreeDataWriter::TemporaryPart temp_part;
        UInt64 elapsed_ns;
        String block_dedup_token;
        ProfileEvents::Counters part_counters;
    };

    std::vector<Partition> partitions;
};


MergeTreeSink::~MergeTreeSink() = default;

MergeTreeSink::MergeTreeSink(

@@ -41,20 +55,6 @@ void MergeTreeSink::onFinish()
    finishDelayedChunk();
}

struct MergeTreeSink::DelayedChunk
{
    struct Partition
    {
        MergeTreeDataWriter::TemporaryPart temp_part;
        UInt64 elapsed_ns;
        String block_dedup_token;
        ProfileEvents::Counters part_counters;
    };

    std::vector<Partition> partitions;
};


void MergeTreeSink::consume(Chunk chunk)
{
    auto block = getHeader().cloneWithColumns(chunk.detachColumns());
@@ -7,28 +7,6 @@
namespace DB
{

MergeTreeSource::MergeTreeSource(MergeTreeSelectAlgorithmPtr algorithm_)
    : ISource(algorithm_->getHeader())
    , algorithm(std::move(algorithm_))
{
#if defined(OS_LINUX)
    if (algorithm->getSettings().use_asynchronous_read_from_pool)
        async_reading_state = std::make_unique<AsyncReadingState>();
#endif
}

MergeTreeSource::~MergeTreeSource() = default;

std::string MergeTreeSource::getName() const
{
    return algorithm->getName();
}

void MergeTreeSource::onCancel()
{
    algorithm->cancel();
}

#if defined(OS_LINUX)
struct MergeTreeSource::AsyncReadingState
{

@@ -155,6 +133,28 @@ private:
};
#endif

MergeTreeSource::MergeTreeSource(MergeTreeSelectAlgorithmPtr algorithm_)
    : ISource(algorithm_->getHeader())
    , algorithm(std::move(algorithm_))
{
#if defined(OS_LINUX)
    if (algorithm->getSettings().use_asynchronous_read_from_pool)
        async_reading_state = std::make_unique<AsyncReadingState>();
#endif
}

MergeTreeSource::~MergeTreeSource() = default;

std::string MergeTreeSource::getName() const
{
    return algorithm->getName();
}

void MergeTreeSource::onCancel()
{
    algorithm->cancel();
}

ISource::Status MergeTreeSource::prepare()
{
#if defined(OS_LINUX)
@@ -44,6 +44,7 @@ MergeTreeWhereOptimizer::MergeTreeWhereOptimizer(
    , log{log_}
    , column_sizes{std::move(column_sizes_)}
    , move_all_conditions_to_prewhere(context->getSettingsRef().move_all_conditions_to_prewhere)
    , log_queries_cut_to_length(context->getSettingsRef().log_queries_cut_to_length)
{
    for (const auto & name : queried_columns)
    {

@@ -310,7 +311,7 @@ void MergeTreeWhereOptimizer::optimize(ASTSelectQuery & select) const
    select.setExpression(ASTSelectQuery::Expression::WHERE, reconstruct(where_conditions));
    select.setExpression(ASTSelectQuery::Expression::PREWHERE, reconstruct(prewhere_conditions));

    LOG_DEBUG(log, "MergeTreeWhereOptimizer: condition \"{}\" moved to PREWHERE", select.prewhere());
    LOG_DEBUG(log, "MergeTreeWhereOptimizer: condition \"{}\" moved to PREWHERE", select.prewhere()->formatForLogging(log_queries_cut_to_length));
}


@@ -115,6 +115,7 @@ private:
    UInt64 total_size_of_queried_columns = 0;
    NameSet array_joined_names;
    const bool move_all_conditions_to_prewhere = false;
    UInt64 log_queries_cut_to_length = 0;
};

@@ -2,9 +2,7 @@

#if USE_MYSQL
#include <mysqlxx/PoolWithFailover.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/MySQL/MySQLSettings.h>
#include <Databases/MySQL/ConnectionMySQLSettings.h>

namespace DB
{
@@ -14,8 +12,7 @@ namespace ErrorCodes
extern const int BAD_ARGUMENTS;
}

template <typename T> mysqlxx::PoolWithFailover
createMySQLPoolWithFailover(const StorageMySQLConfiguration & configuration, const T & mysql_settings)
mysqlxx::PoolWithFailover createMySQLPoolWithFailover(const StorageMySQL::Configuration & configuration, const MySQLSettings & mysql_settings)
{
if (!mysql_settings.connection_pool_size)
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Connection pool cannot have zero size");
@@ -30,11 +27,6 @@ createMySQLPoolWithFailover(const StorageMySQLConfiguration & configuration, con
mysql_settings.read_write_timeout);
}

template
mysqlxx::PoolWithFailover createMySQLPoolWithFailover(const StorageMySQLConfiguration & configuration, const MySQLSettings & mysql_settings);
template
mysqlxx::PoolWithFailover createMySQLPoolWithFailover(const StorageMySQLConfiguration & configuration, const ConnectionMySQLSettings & mysql_settings);

}

#endif

@@ -3,15 +3,14 @@

#if USE_MYSQL
#include <Interpreters/Context_fwd.h>
#include <Storages/StorageMySQL.h>

namespace mysqlxx { class PoolWithFailover; }

namespace DB
{
struct StorageMySQLConfiguration;

template <typename T> mysqlxx::PoolWithFailover
createMySQLPoolWithFailover(const StorageMySQLConfiguration & configuration, const T & mysql_settings);
mysqlxx::PoolWithFailover createMySQLPoolWithFailover(const StorageMySQL::Configuration & configuration, const MySQLSettings & mysql_settings);

}

@@ -3,6 +3,9 @@
#include <Parsers/ASTSetQuery.h>
#include <Parsers/ASTFunction.h>
#include <Common/Exception.h>
#include <Interpreters/Context.h>
#include <Parsers/formatAST.h>
#include <Core/Field.h>


namespace DB
@@ -43,4 +46,33 @@ void MySQLSettings::loadFromQuery(ASTStorage & storage_def)
}
}

void MySQLSettings::loadFromQueryContext(ContextPtr context, ASTStorage & storage_def)
{
if (!context->hasQueryContext())
return;

const Settings & settings = context->getQueryContext()->getSettingsRef();

if (settings.mysql_datatypes_support_level.value != mysql_datatypes_support_level.value)
{
static constexpr auto setting_name = "mysql_datatypes_support_level";
set(setting_name, settings.mysql_datatypes_support_level.toString());

if (!storage_def.settings)
{
auto settings_ast = std::make_shared<ASTSetQuery>();
settings_ast->is_standalone = false;
storage_def.set(storage_def.settings, settings_ast);
}

auto & changes = storage_def.settings->changes;
if (changes.end() == std::find_if(
changes.begin(), changes.end(),
[](const SettingChange & c) { return c.name == setting_name; }))
{
changes.push_back(SettingChange{setting_name, settings.mysql_datatypes_support_level.toString()});
}
}
}

}

@@ -2,6 +2,8 @@

#include <Core/Defines.h>
#include <Core/BaseSettings.h>
#include <Core/SettingsEnums.h>
#include <Interpreters/Context_fwd.h>


namespace Poco::Util
@@ -22,6 +24,7 @@ class ASTSetQuery;
M(Bool, connection_auto_close, true, "Auto-close connection after query execution, i.e. disable connection reuse.", 0) \
M(UInt64, connect_timeout, DBMS_DEFAULT_CONNECT_TIMEOUT_SEC, "Connect timeout (in seconds)", 0) \
M(UInt64, read_write_timeout, DBMS_DEFAULT_RECEIVE_TIMEOUT_SEC, "Read/write timeout (in seconds)", 0) \
M(MySQLDataTypesSupport, mysql_datatypes_support_level, 0, "Which MySQL types should be converted to corresponding ClickHouse types (rather than being represented as String). Can be empty or any combination of 'decimal' or 'datetime64'. When empty MySQL's DECIMAL and DATETIME/TIMESTAMP with non-zero precision are seen as String on ClickHouse's side.", 0) \

DECLARE_SETTINGS_TRAITS(MySQLSettingsTraits, LIST_OF_MYSQL_SETTINGS)

@@ -34,6 +37,7 @@ struct MySQLSettings : public MySQLBaseSettings
{
void loadFromQuery(ASTStorage & storage_def);
void loadFromQuery(const ASTSetQuery & settings_def);
void loadFromQueryContext(ContextPtr context, ASTStorage & storage_def);
};

@@ -10,13 +10,13 @@
#include <Processors/Transforms/ExpressionTransform.h>
#include <Processors/QueryPlan/ReadFromPreparedSource.h>
#include <Processors/QueryPlan/QueryPlan.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/NATS/NATSSource.h>
#include <Storages/NATS/StorageNATS.h>
#include <Storages/NATS/NATSProducer.h>
#include <Storages/MessageQueueSink.h>
#include <Storages/StorageFactory.h>
#include <Storages/StorageMaterializedView.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <QueryPipeline/Pipe.h>
#include <boost/algorithm/string/split.hpp>
#include <boost/algorithm/string/trim.hpp>
@@ -711,8 +711,16 @@ void registerStorageNATS(StorageFactory & factory)
auto creator_fn = [](const StorageFactory::Arguments & args)
{
auto nats_settings = std::make_unique<NATSSettings>();
bool with_named_collection = getExternalDataSourceConfiguration(args.engine_args, *nats_settings, args.getLocalContext());
if (!with_named_collection && !args.storage_def->settings)
if (auto named_collection = tryGetNamedCollectionWithOverrides(args.engine_args, args.getLocalContext()))
{
for (const auto & setting : nats_settings->all())
{
const auto & setting_name = setting.getName();
if (named_collection->has(setting_name))
nats_settings->set(setting_name, named_collection->get<String>(setting_name));
}
}
else if (!args.storage_def->settings)
throw Exception(ErrorCodes::BAD_ARGUMENTS, "NATS engine must have settings");

nats_settings->loadFromQuery(*args.storage_def);

@@ -15,7 +15,7 @@ namespace ErrorCodes

namespace
{
NamedCollectionPtr tryGetNamedCollectionFromASTs(ASTs asts)
NamedCollectionPtr tryGetNamedCollectionFromASTs(ASTs asts, bool throw_unknown_collection)
{
if (asts.empty())
return nullptr;
@@ -25,10 +25,12 @@ namespace
return nullptr;

const auto & collection_name = identifier->name();
return NamedCollectionFactory::instance().get(collection_name);
if (throw_unknown_collection)
return NamedCollectionFactory::instance().get(collection_name);
return NamedCollectionFactory::instance().tryGet(collection_name);
}

std::optional<std::pair<std::string, Field>> getKeyValueFromAST(ASTPtr ast)
std::optional<std::pair<std::string, std::variant<Field, ASTPtr>>> getKeyValueFromAST(ASTPtr ast, bool fallback_to_ast_value, ContextPtr context)
{
const auto * function = ast->as<ASTFunction>();
if (!function || function->name != "equals")
@@ -40,50 +42,87 @@ namespace
if (function_args.size() != 2)
return std::nullopt;

auto literal_key = evaluateConstantExpressionOrIdentifierAsLiteral(
function_args[0], Context::getGlobalContextInstance());
auto literal_key = evaluateConstantExpressionOrIdentifierAsLiteral(function_args[0], context);
auto key = checkAndGetLiteralArgument<String>(literal_key, "key");

auto literal_value = evaluateConstantExpressionOrIdentifierAsLiteral(
function_args[1], Context::getGlobalContextInstance());
auto value = literal_value->as<ASTLiteral>()->value;
ASTPtr literal_value;
try
{
if (key == "database" || key == "db")
literal_value = evaluateConstantExpressionForDatabaseName(function_args[1], context);
else
literal_value = evaluateConstantExpressionOrIdentifierAsLiteral(function_args[1], context);
}
catch (...)
{
if (fallback_to_ast_value)
return std::pair{key, function_args[1]};
throw;
}

return std::pair{key, value};
auto value = literal_value->as<ASTLiteral>()->value;
return std::pair{key, Field(value)};
}
}


NamedCollectionPtr tryGetNamedCollectionWithOverrides(ASTs asts)
MutableNamedCollectionPtr tryGetNamedCollectionWithOverrides(
ASTs asts, ContextPtr context, bool throw_unknown_collection, std::vector<std::pair<std::string, ASTPtr>> * complex_args)
{
if (asts.empty())
return nullptr;

NamedCollectionUtils::loadIfNot();

auto collection = tryGetNamedCollectionFromASTs(asts);
auto collection = tryGetNamedCollectionFromASTs(asts, throw_unknown_collection);
if (!collection)
return nullptr;

if (asts.size() == 1)
return collection;

auto collection_copy = collection->duplicate();

if (asts.size() == 1)
return collection_copy;

for (auto * it = std::next(asts.begin()); it != asts.end(); ++it)
{
auto value_override = getKeyValueFromAST(*it);
auto value_override = getKeyValueFromAST(*it, /* fallback_to_ast_value */complex_args != nullptr, context);

if (!value_override && !(*it)->as<ASTFunction>())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Expected key-value argument or function");
if (!value_override)
continue;

if (const ASTPtr * value = std::get_if<ASTPtr>(&value_override->second))
{
complex_args->emplace_back(value_override->first, *value);
continue;
}

const auto & [key, value] = *value_override;
collection_copy->setOrUpdate<String>(key, toString(value));
collection_copy->setOrUpdate<String>(key, toString(std::get<Field>(value_override->second)));
}

return collection_copy;
}

MutableNamedCollectionPtr tryGetNamedCollectionWithOverrides(
const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix)
{
auto collection_name = config.getString(config_prefix + ".name", "");
if (collection_name.empty())
return nullptr;

const auto & collection = NamedCollectionFactory::instance().get(collection_name);
auto collection_copy = collection->duplicate();

Poco::Util::AbstractConfiguration::Keys keys;
config.keys(config_prefix, keys);
for (const auto & key : keys)
collection_copy->setOrUpdate<String>(key, config.getString(config_prefix + '.' + key));

return collection_copy;
}

HTTPHeaderEntries getHeadersFromNamedCollection(const NamedCollection & collection)
{
HTTPHeaderEntries headers;

@@ -16,16 +16,82 @@ namespace ErrorCodes
namespace DB
{

NamedCollectionPtr tryGetNamedCollectionWithOverrides(ASTs asts);
/// Helper function to get named collection for table engine.
/// Table engines have collection name as first argument of ast and other arguments are key-value overrides.
MutableNamedCollectionPtr tryGetNamedCollectionWithOverrides(
ASTs asts, ContextPtr context, bool throw_unknown_collection = true, std::vector<std::pair<std::string, ASTPtr>> * complex_args = nullptr);
/// Helper function to get named collection for dictionary source.
/// Dictionaries have collection name as name argument of dict configuration and other arguments are overrides.
MutableNamedCollectionPtr tryGetNamedCollectionWithOverrides(const Poco::Util::AbstractConfiguration & config, const std::string & config_prefix);

HTTPHeaderEntries getHeadersFromNamedCollection(const NamedCollection & collection);

template <typename RequiredKeys = std::unordered_set<std::string>,
typename OptionalKeys = std::unordered_set<std::string>>
struct ExternalDatabaseEqualKeysSet
{
static constexpr std::array<std::pair<std::string_view, std::string_view>, 5> equal_keys{
std::pair{"username", "user"}, std::pair{"database", "db"}, std::pair{"hostname", "host"}, std::pair{"addresses_expr", "host"}, std::pair{"addresses_expr", "hostname"}};
};
struct MongoDBEqualKeysSet
{
static constexpr std::array<std::pair<std::string_view, std::string_view>, 4> equal_keys{
std::pair{"username", "user"}, std::pair{"database", "db"}, std::pair{"hostname", "host"}, std::pair{"table", "collection"}};
};

template <typename EqualKeys> struct NamedCollectionValidateKey
{
NamedCollectionValidateKey() = default;
NamedCollectionValidateKey(const char * value_) : value(value_) {}
NamedCollectionValidateKey(std::string_view value_) : value(value_) {}
NamedCollectionValidateKey(const String & value_) : value(value_) {}

std::string_view value;

bool operator==(const auto & other) const
{
if (value == other.value)
return true;

for (const auto & equal : EqualKeys::equal_keys)
{
if (((equal.first == value) && (equal.second == other.value)) || ((equal.first == other.value) && (equal.second == value)))
{
return true;
}
}
return false;
}

bool operator<(const auto & other) const
{
std::string_view canonical_self = value;
std::string_view canonical_other = other.value;
for (const auto & equal : EqualKeys::equal_keys)
{
if ((equal.first == value) || (equal.second == value))
canonical_self = std::max(canonical_self, std::max(equal.first, equal.second));
if ((equal.first == other.value) || (equal.second == other.value))
canonical_other = std::max(canonical_other, std::max(equal.first, equal.second));
}

return canonical_self < canonical_other;
}
};

template <typename T>
std::ostream & operator << (std::ostream & ostr, const NamedCollectionValidateKey<T> & key)
{
ostr << key.value;
return ostr;
}

template <class keys_cmp> using ValidateKeysMultiset = std::multiset<NamedCollectionValidateKey<keys_cmp>, std::less<NamedCollectionValidateKey<keys_cmp>>>;
using ValidateKeysSet = std::multiset<std::string_view>;

template <typename Keys = ValidateKeysSet>
void validateNamedCollection(
const NamedCollection & collection,
const RequiredKeys & required_keys,
const OptionalKeys & optional_keys,
const Keys & required_keys,
const Keys & optional_keys,
const std::vector<std::regex> & optional_regex_keys = {})
{
NamedCollection::Keys keys = collection.getKeys();
@@ -40,7 +106,12 @@ void validateNamedCollection(
}

if (optional_keys.contains(key))
{
continue;
}

if (required_keys.contains(key))
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Duplicate key {} in named collection", key);

auto match = std::find_if(
optional_regex_keys.begin(), optional_regex_keys.end(),
@@ -49,10 +120,10 @@ void validateNamedCollection(

if (!match)
{
throw Exception(
ErrorCodes::BAD_ARGUMENTS,
"Unexpected key {} in named collection. Required keys: {}, optional keys: {}",
backQuoteIfNeed(key), fmt::join(required_keys, ", "), fmt::join(optional_keys, ", "));
throw Exception(
ErrorCodes::BAD_ARGUMENTS,
"Unexpected key {} in named collection. Required keys: {}, optional keys: {}",
backQuoteIfNeed(key), fmt::join(required_keys, ", "), fmt::join(optional_keys, ", "));
}
}

@@ -66,3 +137,18 @@ void validateNamedCollection(
}

}

template <typename T>
struct fmt::formatter<DB::NamedCollectionValidateKey<T>>
{
constexpr static auto parse(format_parse_context & context)
{
return context.begin();
}

template <typename FormatContext>
auto format(const DB::NamedCollectionValidateKey<T> & elem, FormatContext & context)
{
return fmt::format_to(context.out(), "{}", elem.value);
}
};

@@ -18,7 +18,7 @@
#include <Storages/RabbitMQ/RabbitMQSource.h>
#include <Storages/RabbitMQ/StorageRabbitMQ.h>
#include <Storages/RabbitMQ/RabbitMQProducer.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <Storages/StorageFactory.h>
#include <Storages/StorageMaterializedView.h>
#include <boost/algorithm/string/split.hpp>
@@ -1194,8 +1194,17 @@ void registerStorageRabbitMQ(StorageFactory & factory)
auto creator_fn = [](const StorageFactory::Arguments & args)
{
auto rabbitmq_settings = std::make_unique<RabbitMQSettings>();
bool with_named_collection = getExternalDataSourceConfiguration(args.engine_args, *rabbitmq_settings, args.getLocalContext());
if (!with_named_collection && !args.storage_def->settings)

if (auto named_collection = tryGetNamedCollectionWithOverrides(args.engine_args, args.getLocalContext()))
{
for (const auto & setting : rabbitmq_settings->all())
{
const auto & setting_name = setting.getName();
if (named_collection->has(setting_name))
rabbitmq_settings->set(setting_name, named_collection->get<String>(setting_name));
}
}
else if (!args.storage_def->settings)
throw Exception(ErrorCodes::BAD_ARGUMENTS, "RabbitMQ engine must have settings");

if (args.storage_def->settings)

@@ -13,7 +13,7 @@
#include <Storages/MySQL/MySQLSettings.h>
#include <Storages/StoragePostgreSQL.h>
#include <Storages/StorageURL.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/MySQL/MySQLHelpers.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <Storages/checkAndGetLiteralArgument.h>
#include <Common/logger_useful.h>
@@ -25,160 +25,25 @@ namespace DB

namespace ErrorCodes
{
extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
extern const int BAD_ARGUMENTS;
}

StorageExternalDistributed::StorageExternalDistributed(
const StorageID & table_id_,
ExternalStorageEngine table_engine,
const String & cluster_description,
const ExternalDataSourceConfiguration & configuration,
std::unordered_set<StoragePtr> && shards_,
const ColumnsDescription & columns_,
const ConstraintsDescription & constraints_,
const String & comment,
ContextPtr context)
const String & comment)
: IStorage(table_id_)
, shards(shards_)
{
StorageInMemoryMetadata storage_metadata;
storage_metadata.setColumns(columns_);
storage_metadata.setConstraints(constraints_);
storage_metadata.setComment(comment);
setInMemoryMetadata(storage_metadata);

size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements;
std::vector<String> shards_descriptions = parseRemoteDescription(cluster_description, 0, cluster_description.size(), ',', max_addresses);
std::vector<std::pair<std::string, UInt16>> addresses;

#if USE_MYSQL || USE_LIBPQXX

/// For each shard pass replicas description into storage, replicas are managed by storage's PoolWithFailover.
for (const auto & shard_description : shards_descriptions)
{
StoragePtr shard;

switch (table_engine)
{
#if USE_MYSQL
case ExternalStorageEngine::MySQL:
{
addresses = parseRemoteDescriptionForExternalDatabase(shard_description, max_addresses, 3306);

mysqlxx::PoolWithFailover pool(
configuration.database,
addresses,
configuration.username,
configuration.password);

shard = std::make_shared<StorageMySQL>(
table_id_,
std::move(pool),
configuration.database,
configuration.table,
/* replace_query = */ false,
/* on_duplicate_clause = */ "",
columns_,
constraints_,
String{},
context,
MySQLSettings{});
break;
}
#endif
#if USE_LIBPQXX

case ExternalStorageEngine::PostgreSQL:
{
addresses = parseRemoteDescriptionForExternalDatabase(shard_description, max_addresses, 5432);
StoragePostgreSQL::Configuration postgres_conf;
postgres_conf.addresses = addresses;
postgres_conf.username = configuration.username;
postgres_conf.password = configuration.password;
postgres_conf.database = configuration.database;
postgres_conf.table = configuration.table;
postgres_conf.schema = configuration.schema;

const auto & settings = context->getSettingsRef();
auto pool = std::make_shared<postgres::PoolWithFailover>(
postgres_conf,
settings.postgresql_connection_pool_size,
settings.postgresql_connection_pool_wait_timeout,
POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES,
settings.postgresql_connection_pool_auto_close_connection);

shard = std::make_shared<StoragePostgreSQL>(table_id_, std::move(pool), configuration.table, columns_, constraints_, String{});
break;
}
#endif
default:
{
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Unsupported table engine. Supported engines are: MySQL, PostgreSQL, URL");
}
}

shards.emplace(std::move(shard));
}

#else
(void)configuration;
(void)cluster_description;
(void)addresses;
(void)table_engine;
#endif
}


StorageExternalDistributed::StorageExternalDistributed(
const String & addresses_description,
const StorageID & table_id,
const String & format_name,
const std::optional<FormatSettings> & format_settings,
const String & compression_method,
const ColumnsDescription & columns,
const ConstraintsDescription & constraints,
ContextPtr context)
: IStorage(table_id)
{
StorageInMemoryMetadata storage_metadata;
storage_metadata.setColumns(columns);
storage_metadata.setConstraints(constraints);
setInMemoryMetadata(storage_metadata);

size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements;
/// Generate addresses without splitting for failover options
std::vector<String> url_descriptions = parseRemoteDescription(addresses_description, 0, addresses_description.size(), ',', max_addresses);
std::vector<String> uri_options;

for (const auto & url_description : url_descriptions)
{
/// For each uri (which acts like shard) check if it has failover options
uri_options = parseRemoteDescription(url_description, 0, url_description.size(), '|', max_addresses);
StoragePtr shard;

if (uri_options.size() > 1)
{
shard = std::make_shared<StorageURLWithFailover>(
uri_options,
table_id,
format_name,
format_settings,
columns, constraints, context,
compression_method);
}
else
{
shard = std::make_shared<StorageURL>(
url_description, table_id, format_name, format_settings, columns, constraints, String{}, context, compression_method);

LOG_DEBUG(&Poco::Logger::get("StorageURLDistributed"), "Adding URL: {}", url_description);
}

shards.emplace(std::move(shard));
}
}


void StorageExternalDistributed::read(
QueryPlan & query_plan,
const Names & column_names,
@@ -226,7 +91,6 @@ void StorageExternalDistributed::read(
query_plan.unitePlans(std::move(union_step), std::move(plans));
}


void registerStorageExternalDistributed(StorageFactory & factory)
{
factory.registerStorage("ExternalDistributed", [](const StorageFactory::Arguments & args)
@@ -237,102 +101,94 @@ void registerStorageExternalDistributed(StorageFactory & factory)
"Engine ExternalDistributed must have at least 2 arguments: "
"engine_name, named_collection and/or description");

auto engine_name = checkAndGetLiteralArgument<String>(engine_args[0], "engine_name");
StorageExternalDistributed::ExternalStorageEngine table_engine;
if (engine_name == "URL")
table_engine = StorageExternalDistributed::ExternalStorageEngine::URL;
else if (engine_name == "MySQL")
table_engine = StorageExternalDistributed::ExternalStorageEngine::MySQL;
else if (engine_name == "PostgreSQL")
table_engine = StorageExternalDistributed::ExternalStorageEngine::PostgreSQL;
else
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"External storage engine {} is not supported for StorageExternalDistributed. "
"Supported engines are: MySQL, PostgreSQL, URL",
engine_name);
auto context = args.getLocalContext();
const auto & settings = context->getSettingsRef();
size_t max_addresses = settings.glob_expansion_max_elements;
auto get_addresses = [&](const std::string addresses_expr)
{
return parseRemoteDescription(addresses_expr, 0, addresses_expr.size(), ',', max_addresses);
};

std::unordered_set<StoragePtr> shards;
ASTs inner_engine_args(engine_args.begin() + 1, engine_args.end());
String cluster_description;

auto engine_name = checkAndGetLiteralArgument<String>(engine_args[0], "engine_name");
if (engine_name == "URL")
{
StorageURL::Configuration configuration;
if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args))
{
StorageURL::processNamedCollectionResult(configuration, *named_collection);
StorageURL::collectHeaders(engine_args, configuration.headers, args.getLocalContext());
}
else
{
for (auto & engine_arg : engine_args)
engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, args.getLocalContext());

cluster_description = checkAndGetLiteralArgument<String>(engine_args[1], "cluster_description");
configuration.format = checkAndGetLiteralArgument<String>(engine_args[2], "format");
configuration.compression_method = "auto";
if (engine_args.size() == 4)
configuration.compression_method = checkAndGetLiteralArgument<String>(engine_args[3], "compression_method");
}


auto configuration = StorageURL::getConfiguration(inner_engine_args, context);
auto shards_addresses = get_addresses(configuration.addresses_expr);
auto format_settings = StorageURL::getFormatSettingsFromArgs(args);

return std::make_shared<StorageExternalDistributed>(
cluster_description,
args.table_id,
configuration.format,
format_settings,
configuration.compression_method,
args.columns,
args.constraints,
args.getContext());
for (const auto & shard_address : shards_addresses)
{
auto uri_options = parseRemoteDescription(shard_address, 0, shard_address.size(), '|', max_addresses);
if (uri_options.size() > 1)
{
shards.insert(
std::make_shared<StorageURLWithFailover>(
uri_options, args.table_id, configuration.format, format_settings,
args.columns, args.constraints, context, configuration.compression_method));
}
else
{
shards.insert(std::make_shared<StorageURL>(
shard_address, args.table_id, configuration.format, format_settings,
args.columns, args.constraints, String{}, context, configuration.compression_method));
}
}
}
#if USE_MYSQL
else if (engine_name == "MySQL")
{
MySQLSettings mysql_settings;
auto configuration = StorageMySQL::getConfiguration(inner_engine_args, context, mysql_settings);
auto shards_addresses = get_addresses(configuration.addresses_expr);
for (const auto & shard_address : shards_addresses)
{
auto current_configuration{configuration};
current_configuration.addresses = parseRemoteDescriptionForExternalDatabase(shard_address, max_addresses, 3306);
auto pool = createMySQLPoolWithFailover(current_configuration, mysql_settings);
shards.insert(std::make_shared<StorageMySQL>(
args.table_id, std::move(pool), configuration.database, configuration.table,
/* replace_query = */ false, /* on_duplicate_clause = */ "",
args.columns, args.constraints, String{}, context, mysql_settings));
}
}
#endif
#if USE_LIBPQXX
else if (engine_name == "PostgreSQL")
{
auto configuration = StoragePostgreSQL::getConfiguration(inner_engine_args, context);
auto shards_addresses = get_addresses(configuration.addresses_expr);
for (const auto & shard_address : shards_addresses)
{
auto current_configuration{configuration};
current_configuration.addresses = parseRemoteDescriptionForExternalDatabase(shard_address, max_addresses, 5432);
auto pool = std::make_shared<postgres::PoolWithFailover>(
current_configuration,
settings.postgresql_connection_pool_size,
settings.postgresql_connection_pool_wait_timeout,
POSTGRESQL_POOL_WITH_FAILOVER_DEFAULT_MAX_TRIES,
settings.postgresql_connection_pool_auto_close_connection);
shards.insert(std::make_shared<StoragePostgreSQL>(
args.table_id, std::move(pool), configuration.table, args.columns, args.constraints, String{}));
}
}
#endif
else
{
ExternalDataSourceConfiguration configuration;
if (auto named_collection = getExternalDataSourceConfiguration(inner_engine_args, args.getLocalContext()))
{
auto [common_configuration, storage_specific_args, _] = named_collection.value();
configuration.set(common_configuration);

for (const auto & [name, value] : storage_specific_args)
{
if (name == "description")
cluster_description = checkAndGetLiteralArgument<String>(value, "cluster_description");
else
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Unknown key-value argument {} for table function URL", name);
}

if (cluster_description.empty())
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Engine ExternalDistribued must have `description` key-value argument or named collection parameter");
}
else
{
if (engine_args.size() != 6)
throw Exception(ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH,
"Storage ExternalDistributed requires 5 parameters: "
"ExternalDistributed('engine_name', 'cluster_description', 'database', 'table', 'user', 'password').");

cluster_description = checkAndGetLiteralArgument<String>(engine_args[1], "cluster_description");
configuration.database = checkAndGetLiteralArgument<String>(engine_args[2], "database");
configuration.table = checkAndGetLiteralArgument<String>(engine_args[3], "table");
configuration.username = checkAndGetLiteralArgument<String>(engine_args[4], "username");
configuration.password = checkAndGetLiteralArgument<String>(engine_args[5], "password");
}


return std::make_shared<StorageExternalDistributed>(
args.table_id,
table_engine,
cluster_description,
configuration,
args.columns,
args.constraints,
args.comment,
args.getContext());
throw Exception(
ErrorCodes::BAD_ARGUMENTS,
"External storage engine {} is not supported for StorageExternalDistributed. "
"Supported engines are: MySQL, PostgreSQL, URL",
engine_name);
}

return std::make_shared<StorageExternalDistributed>(
args.table_id,
std::move(shards),
args.columns,
args.constraints,
args.comment);
},
{
.source_access_type = AccessType::SOURCES,

@@ -18,32 +18,12 @@ struct ExternalDataSourceConfiguration;
class StorageExternalDistributed final : public DB::IStorage
{
public:
enum class ExternalStorageEngine
{
MySQL,
PostgreSQL,
URL
};

StorageExternalDistributed(
const StorageID & table_id_,
ExternalStorageEngine table_engine,
const String & cluster_description,
const ExternalDataSourceConfiguration & configuration,
std::unordered_set<StoragePtr> && shards_,
const ColumnsDescription & columns_,
const ConstraintsDescription & constraints_,
const String & comment,
ContextPtr context_);

StorageExternalDistributed(
const String & addresses_description,
const StorageID & table_id,
const String & format_name,
const std::optional<FormatSettings> & format_settings,
const String & compression_method,
const ColumnsDescription & columns,
const ConstraintsDescription & constraints,
ContextPtr context);
const String & comment);

std::string getName() const override { return "ExternalDistributed"; }

@@ -18,6 +18,7 @@
#include <QueryPipeline/Pipe.h>
#include <Processors/Sources/MongoDBSource.h>
#include <Processors/Sinks/SinkToStorage.h>
#include <unordered_set>

namespace DB
{
@@ -171,30 +172,23 @@ SinkToStoragePtr StorageMongoDB::write(const ASTPtr & /* query */, const Storage
return std::make_shared<StorageMongoDBSink>(collection_name, database_name, metadata_snapshot, connection);
}

struct KeysCmp
{
constexpr bool operator()(const auto & lhs, const auto & rhs) const
{
return lhs == rhs || ((lhs == "table") && (rhs == "collection")) || ((rhs == "table") && (lhs == "collection"));
}
};
StorageMongoDB::Configuration StorageMongoDB::getConfiguration(ASTs engine_args, ContextPtr context)
{
Configuration configuration;

if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args))
if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, context))
{
validateNamedCollection(
*named_collection,
std::unordered_multiset<std::string_view, std::hash<std::string_view>, KeysCmp>{"host", "port", "user", "password", "database", "collection", "table"},
ValidateKeysMultiset<MongoDBEqualKeysSet>{"host", "port", "user", "username", "password", "database", "db", "collection", "table"},
{"options"});

configuration.host = named_collection->get<String>("host");
configuration.host = named_collection->getAny<String>({"host", "hostname"});
configuration.port = static_cast<UInt16>(named_collection->get<UInt64>("port"));
configuration.username = named_collection->get<String>("user");
configuration.username = named_collection->getAny<String>({"user", "username"});
configuration.password = named_collection->get<String>("password");
configuration.database = named_collection->get<String>("database");
configuration.table = named_collection->getOrDefault<String>("collection", named_collection->getOrDefault<String>("table", ""));
configuration.database = named_collection->getAny<String>({"database", "db"});
configuration.table = named_collection->getAny<String>({"collection", "table"});
configuration.options = named_collection->getOrDefault<String>("options", "");
}
else
@@ -3,7 +3,6 @@
#include <Poco/MongoDB/Connection.h>

#include <Storages/IStorage.h>
#include <Storages/ExternalDataSourceConfiguration.h>

namespace DB
{
@@ -20,6 +20,7 @@
#include <QueryPipeline/Pipe.h>
#include <Common/parseRemoteDescription.h>
#include <Common/logger_useful.h>
#include <Storages/NamedCollectionsHelpers.h>


namespace DB
@@ -235,31 +236,53 @@ SinkToStoragePtr StorageMySQL::write(const ASTPtr & /*query*/, const StorageMeta
local_context->getSettingsRef().mysql_max_rows_to_insert);
}


StorageMySQLConfiguration StorageMySQL::getConfiguration(ASTs engine_args, ContextPtr context_, MySQLBaseSettings & storage_settings)
StorageMySQL::Configuration StorageMySQL::processNamedCollectionResult(
const NamedCollection & named_collection, MySQLSettings & storage_settings, bool require_table)
{
StorageMySQLConfiguration configuration;
StorageMySQL::Configuration configuration;

if (auto named_collection = getExternalDataSourceConfiguration(
engine_args, context_, /* is_database_engine */false, /* throw_on_no_collection */true, storage_settings))
ValidateKeysMultiset<ExternalDatabaseEqualKeysSet> optional_arguments = {"replace_query", "on_duplicate_clause", "addresses_expr", "host", "hostname", "port"};
auto mysql_settings = storage_settings.all();
for (const auto & setting : mysql_settings)
optional_arguments.insert(setting.getName());

ValidateKeysMultiset<ExternalDatabaseEqualKeysSet> required_arguments = {"user", "username", "password", "database", "db"};
if (require_table)
required_arguments.insert("table");
validateNamedCollection<ValidateKeysMultiset<ExternalDatabaseEqualKeysSet>>(named_collection, required_arguments, optional_arguments);

configuration.addresses_expr = named_collection.getOrDefault<String>("addresses_expr", "");
if (configuration.addresses_expr.empty())
{
auto [common_configuration, storage_specific_args, settings_changes] = named_collection.value();
configuration.set(common_configuration);
configuration.host = named_collection.getOrDefault<String>("host", named_collection.getOrDefault<String>("hostname", ""));
configuration.port = static_cast<UInt16>(named_collection.get<UInt64>("port"));
configuration.addresses = {std::make_pair(configuration.host, configuration.port)};
storage_settings.applyChanges(settings_changes);
}

for (const auto & [arg_name, arg_value] : storage_specific_args)
{
if (arg_name == "replace_query")
configuration.replace_query = checkAndGetLiteralArgument<bool>(arg_value, "replace_query");
else if (arg_name == "on_duplicate_clause")
configuration.on_duplicate_clause = checkAndGetLiteralArgument<String>(arg_value, "on_duplicate_clause");
else
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Unexpected key-value argument."
"Got: {}, but expected one of:"
"host, port, username, password, database, table, replace_query, on_duplicate_clause.", arg_name);
}
configuration.username = named_collection.getAny<String>({"username", "user"});
configuration.password = named_collection.get<String>("password");
configuration.database = named_collection.getAny<String>({"db", "database"});
if (require_table)
configuration.table = named_collection.get<String>("table");
configuration.replace_query = named_collection.getOrDefault<UInt64>("replace_query", false);
configuration.on_duplicate_clause = named_collection.getOrDefault<String>("on_duplicate_clause", "");

for (const auto & setting : mysql_settings)
{
const auto & setting_name = setting.getName();
if (named_collection.has(setting_name))
storage_settings.set(setting_name, named_collection.get<String>(setting_name));
}

return configuration;
}

StorageMySQL::Configuration StorageMySQL::getConfiguration(ASTs engine_args, ContextPtr context_, MySQLSettings & storage_settings)
{
StorageMySQL::Configuration configuration;
if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, context_))
{
configuration = StorageMySQL::processNamedCollectionResult(*named_collection, storage_settings);
}
else
{
@@ -271,10 +294,10 @@ StorageMySQLConfiguration StorageMySQL::getConfiguration(ASTs engine_args, Conte
for (auto & engine_arg : engine_args)
engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, context_);

const auto & host_port = checkAndGetLiteralArgument<String>(engine_args[0], "host:port");
configuration.addresses_expr = checkAndGetLiteralArgument<String>(engine_args[0], "host:port");
size_t max_addresses = context_->getSettingsRef().glob_expansion_max_elements;

configuration.addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 3306);
configuration.addresses = parseRemoteDescriptionForExternalDatabase(configuration.addresses_expr, max_addresses, 3306);
configuration.database = checkAndGetLiteralArgument<String>(engine_args[1], "database");
configuration.table = checkAndGetLiteralArgument<String>(engine_args[2], "table");
configuration.username = checkAndGetLiteralArgument<String>(engine_args[3], "username");
@@ -6,7 +6,6 @@

#include <Storages/IStorage.h>
#include <Storages/MySQL/MySQLSettings.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <mysqlxx/PoolWithFailover.h>

namespace Poco
@@ -17,6 +16,8 @@ class Logger;
namespace DB
{

class NamedCollection;

/** Implements storage in the MySQL database.
* Use ENGINE = mysql(host_port, database_name, table_name, user_name, password)
* Read only.
@@ -50,7 +51,26 @@ public:

SinkToStoragePtr write(const ASTPtr & query, const StorageMetadataPtr & /*metadata_snapshot*/, ContextPtr context) override;

static StorageMySQLConfiguration getConfiguration(ASTs engine_args, ContextPtr context_, MySQLBaseSettings & storage_settings);
struct Configuration
{
String host;
UInt16 port = 0;
String username = "default";
String password;
String database;
String table;

bool replace_query = false;
String on_duplicate_clause;

std::vector<std::pair<String, UInt16>> addresses; /// Failover replicas.
String addresses_expr;
};

static Configuration getConfiguration(ASTs engine_args, ContextPtr context_, MySQLSettings & storage_settings);

static Configuration processNamedCollectionResult(
const NamedCollection & named_collection, MySQLSettings & storage_settings, bool require_table = true);

private:
friend class StorageMySQLSink;
@@ -387,31 +387,41 @@ SinkToStoragePtr StoragePostgreSQL::write(
return std::make_shared<PostgreSQLSink>(metadata_snapshot, pool->get(), remote_table_name, remote_table_schema, on_conflict);
}

StoragePostgreSQL::Configuration StoragePostgreSQL::processNamedCollectionResult(const NamedCollection & named_collection, bool require_table)
{
StoragePostgreSQL::Configuration configuration;
ValidateKeysMultiset<ExternalDatabaseEqualKeysSet> required_arguments = {"user", "username", "password", "database", "db"};
if (require_table)
required_arguments.insert("table");

validateNamedCollection<ValidateKeysMultiset<ExternalDatabaseEqualKeysSet>>(
named_collection, required_arguments, {"schema", "on_conflict", "addresses_expr", "host", "hostname", "port"});

configuration.addresses_expr = named_collection.getOrDefault<String>("addresses_expr", "");
if (configuration.addresses_expr.empty())
{
configuration.host = named_collection.getAny<String>({"host", "hostname"});
configuration.port = static_cast<UInt16>(named_collection.get<UInt64>("port"));
configuration.addresses = {std::make_pair(configuration.host, configuration.port)};
}

configuration.username = named_collection.getAny<String>({"username", "user"});
configuration.password = named_collection.get<String>("password");
configuration.database = named_collection.getAny<String>({"db", "database"});
if (require_table)
configuration.table = named_collection.get<String>("table");
configuration.schema = named_collection.getOrDefault<String>("schema", "");
configuration.on_conflict = named_collection.getOrDefault<String>("on_conflict", "");

return configuration;
}

StoragePostgreSQL::Configuration StoragePostgreSQL::getConfiguration(ASTs engine_args, ContextPtr context)
{
StoragePostgreSQL::Configuration configuration;
if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args))
if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, context))
{
validateNamedCollection(
*named_collection,
{"user", "password", "database", "table"},
{"schema", "on_conflict", "addresses_expr", "host", "port"});

configuration.addresses_expr = named_collection->getOrDefault<String>("addresses_expr", "");
if (configuration.addresses_expr.empty())
{
configuration.host = named_collection->get<String>("host");
configuration.port = static_cast<UInt16>(named_collection->get<UInt64>("port"));
configuration.addresses = {std::make_pair(configuration.host, configuration.port)};
}

configuration.username = named_collection->get<String>("user");
configuration.password = named_collection->get<String>("password");
configuration.database = named_collection->get<String>("database");
configuration.table = named_collection->get<String>("table");
configuration.schema = named_collection->getOrDefault<String>("schema", "");
configuration.on_conflict = named_collection->getOrDefault<String>("on_conflict", "");
configuration = StoragePostgreSQL::processNamedCollectionResult(*named_collection);
}
else
{
@@ -428,10 +438,10 @@ StoragePostgreSQL::Configuration StoragePostgreSQL::getConfiguration(ASTs engine
for (auto & engine_arg : engine_args)
engine_arg = evaluateConstantExpressionOrIdentifierAsLiteral(engine_arg, context);

const auto & host_port = checkAndGetLiteralArgument<String>(engine_args[0], "host:port");
configuration.addresses_expr = checkAndGetLiteralArgument<String>(engine_args[0], "host:port");
size_t max_addresses = context->getSettingsRef().glob_expansion_max_elements;

configuration.addresses = parseRemoteDescriptionForExternalDatabase(host_port, max_addresses, 5432);
configuration.addresses = parseRemoteDescriptionForExternalDatabase(configuration.addresses_expr, max_addresses, 5432);
if (configuration.addresses.size() == 1)
{
configuration.host = configuration.addresses[0].first;
@@ -5,7 +5,6 @@
#if USE_LIBPQXX
#include <Interpreters/Context.h>
#include <Storages/IStorage.h>
#include <Storages/ExternalDataSourceConfiguration.h>

namespace Poco
{
@@ -20,6 +19,7 @@ using PoolWithFailoverPtr = std::shared_ptr<PoolWithFailover>;

namespace DB
{
class NamedCollection;

class StoragePostgreSQL final : public IStorage
{
@@ -64,6 +64,8 @@ public:

static Configuration getConfiguration(ASTs engine_args, ContextPtr context);

static Configuration processNamedCollectionResult(const NamedCollection & named_collection, bool require_table = true);

private:
String remote_table_name;
String remote_table_schema;
@@ -1295,7 +1295,7 @@ StorageS3::Configuration StorageS3::getConfiguration(ASTs & engine_args, Context
{
StorageS3::Configuration configuration;

if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args))
if (auto named_collection = tryGetNamedCollectionWithOverrides(engine_args, local_context))
{
processNamedCollectionResult(configuration, *named_collection);
}
@@ -1121,7 +1121,7 @@ StorageURL::Configuration StorageURL::getConfiguration(ASTs & args, ContextPtr l
{
StorageURL::Configuration configuration;

if (auto named_collection = tryGetNamedCollectionWithOverrides(args))
if (auto named_collection = tryGetNamedCollectionWithOverrides(args, local_context))
{
StorageURL::processNamedCollectionResult(configuration, *named_collection);
collectHeaders(args, configuration.headers, local_context);
@@ -8,7 +8,6 @@
#include <IO/HTTPHeaderEntries.h>
#include <Storages/IStorage.h>
#include <Storages/StorageFactory.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/Cache/SchemaCache.h>
#include <Storages/StorageConfiguration.h>

@@ -188,6 +187,7 @@ public:
std::string url;
std::string http_method;
HTTPHeaderEntries headers;
std::string addresses_expr;
};

static Configuration getConfiguration(ASTs & args, ContextPtr context);
@@ -221,13 +221,6 @@ public:
size_t max_block_size,
size_t num_streams) override;

struct Configuration
{
String url;
String compression_method = "auto";
std::vector<std::pair<String, String>> headers;
};

private:
std::vector<String> uri_options;
};
@@ -7,9 +7,9 @@
#include <Interpreters/ProfileEventsExt.h>
#include <Access/Common/AccessType.h>
#include <Access/Common/AccessFlags.h>
#include <Access/ContextAccess.h>
#include <Columns/ColumnMap.h>
#include <Common/NamedCollections/NamedCollections.h>
#include <Access/ContextAccess.h>


namespace DB
@@ -30,7 +30,6 @@ StorageSystemNamedCollections::StorageSystemNamedCollections(const StorageID & t

void StorageSystemNamedCollections::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
{
context->checkAccess(AccessType::SHOW_NAMED_COLLECTIONS);
const auto & access = context->getAccess();

NamedCollectionUtils::loadIfNot();
@@ -38,6 +37,9 @@ void StorageSystemNamedCollections::fillData(MutableColumns & res_columns, Conte
auto collections = NamedCollectionFactory::instance().getAll();
for (const auto & [name, collection] : collections)
{
if (!access->isGranted(AccessType::SHOW_NAMED_COLLECTIONS, name))
continue;

res_columns[0]->insert(name);

auto * column_map = typeid_cast<ColumnMap *>(res_columns[1].get());
@@ -28,6 +28,7 @@ namespace
DICTIONARY,
VIEW,
COLUMN,
NAMED_COLLECTION,
};

DataTypeEnum8::Values getLevelEnumValues()
@@ -39,6 +40,7 @@ namespace
enum_values.emplace_back("DICTIONARY", static_cast<Int8>(DICTIONARY));
enum_values.emplace_back("VIEW", static_cast<Int8>(VIEW));
enum_values.emplace_back("COLUMN", static_cast<Int8>(COLUMN));
enum_values.emplace_back("NAMED_COLLECTION", static_cast<Int8>(NAMED_COLLECTION));
return enum_values;
}
}
@@ -6,7 +6,6 @@
#include <Interpreters/Context.h>
#include <Interpreters/evaluateConstantExpression.h>
#include <Parsers/ASTFunction.h>
#include <Storages/StorageMySQL.h>
#include <Storages/MySQL/MySQLSettings.h>
#include <Storages/MySQL/MySQLHelpers.h>
#include <TableFunctions/ITableFunction.h>
@@ -3,7 +3,7 @@

#if USE_MYSQL
#include <TableFunctions/ITableFunction.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/StorageMySQL.h>
#include <mysqlxx/Pool.h>


@@ -30,7 +30,7 @@ private:
void parseArguments(const ASTPtr & ast_function, ContextPtr context) override;

mutable std::optional<mysqlxx::PoolWithFailover> pool;
std::optional<StorageMySQLConfiguration> configuration;
std::optional<StorageMySQL::Configuration> configuration;
};

}
@@ -2,8 +2,8 @@

#include <Storages/getStructureOfRemoteTable.h>
#include <Storages/StorageDistributed.h>
#include <Storages/ExternalDataSourceConfiguration.h>
#include <Storages/checkAndGetLiteralArgument.h>
#include <Storages/NamedCollectionsHelpers.h>
#include <Parsers/ASTIdentifier_fwd.h>
#include <Parsers/ASTLiteral.h>
#include <Parsers/ASTFunction.h>
@@ -34,10 +34,10 @@ namespace ErrorCodes
void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr context)
{
ASTs & args_func = ast_function->children;
ExternalDataSourceConfiguration configuration;

String cluster_name;
String cluster_description;
String database, table, username = "default", password;

if (args_func.size() != 1)
throw Exception(help_message, ErrorCodes::NUMBER_OF_ARGUMENTS_DOESNT_MATCH);
@@ -50,47 +50,38 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr
* For now named collection can be used only for remote as cluster does not require credentials.
*/
size_t max_args = is_cluster_function ? 4 : 6;
auto named_collection = getExternalDataSourceConfiguration(args, context, false, false);
if (named_collection)
NamedCollectionPtr named_collection;
std::vector<std::pair<std::string, ASTPtr>> complex_args;
if (!is_cluster_function && (named_collection = tryGetNamedCollectionWithOverrides(args, context, false, &complex_args)))
{
if (is_cluster_function)
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Named collection cannot be used for table function cluster");
validateNamedCollection<ValidateKeysMultiset<ExternalDatabaseEqualKeysSet>>(
*named_collection,
{"addresses_expr", "host", "hostname", "table"},
{"username", "user", "password", "sharding_key", "port", "database", "db"});

/**
* Common arguments: database, table, username, password, addresses_expr.
* Specific args (remote): sharding_key, or database (in case it is not ASTLiteral).
* None of the common arguments is empty at this point, it is checked in getExternalDataSourceConfiguration.
*/
auto [common_configuration, storage_specific_args, _] = named_collection.value();
configuration.set(common_configuration);

for (const auto & [arg_name, arg_value] : storage_specific_args)
if (!complex_args.empty())
{
if (arg_name == "sharding_key")
for (const auto & [arg_name, arg_ast] : complex_args)
{
sharding_key = arg_value;
}
else if (arg_name == "database")
{
const auto * function = arg_value->as<ASTFunction>();
if (function && TableFunctionFactory::instance().isTableFunctionName(function->name))
{
remote_table_function_ptr = arg_value;
}
if (arg_name == "database" || arg_name == "db")
remote_table_function_ptr = arg_ast;
else if (arg_name == "sharding_key")
sharding_key = arg_ast;
else
{
auto database_literal = evaluateConstantExpressionOrIdentifierAsLiteral(arg_value, context);
configuration.database = checkAndGetLiteralArgument<String>(database_literal, "database");
}
throw Exception(ErrorCodes::BAD_ARGUMENTS, "Unexpected argument representation for {}", arg_name);
}
else
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Unexpected key-value argument."
"Got: {}, but expected: sharding_key", arg_name);
}
cluster_description = configuration.addresses_expr;
if (cluster_description.empty())
cluster_description = configuration.port ? configuration.host + ':' + toString(configuration.port) : configuration.host;
else
database = named_collection->getAnyOrDefault<String>({"db", "database"}, "default");

cluster_description = named_collection->getOrDefault<String>("addresses_expr", "");
if (cluster_description.empty() && named_collection->hasAny({"host", "hostname"}))
cluster_description = named_collection->has("port")
? named_collection->getAny<String>({"host", "hostname"}) + ':' + toString(named_collection->get<UInt64>("port"))
: named_collection->getAny<String>({"host", "hostname"});
table = named_collection->get<String>("table");
username = named_collection->getAnyOrDefault<String>({"username", "user"}, "default");
password = named_collection->getOrDefault<String>("password", "");
}
else
{
@@ -159,11 +150,11 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr
else
{
args[arg_num] = evaluateConstantExpressionForDatabaseName(args[arg_num], context);
configuration.database = checkAndGetLiteralArgument<String>(args[arg_num], "database");
database = checkAndGetLiteralArgument<String>(args[arg_num], "database");

++arg_num;

auto qualified_name = QualifiedTableName::parseFromString(configuration.database);
auto qualified_name = QualifiedTableName::parseFromString(database);
if (qualified_name.database.empty())
{
if (arg_num >= args.size())
@@ -179,8 +170,8 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr
}
}

configuration.database = std::move(qualified_name.database);
configuration.table = std::move(qualified_name.table);
database = std::move(qualified_name.database);
table = std::move(qualified_name.table);

/// Cluster function may have sharding key for insert
if (is_cluster_function && arg_num < args.size())
@@ -195,9 +186,9 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr
{
if (arg_num < args.size())
{
if (!get_string_literal(*args[arg_num], configuration.username))
if (!get_string_literal(*args[arg_num], username))
{
configuration.username = "default";
username = "default";
sharding_key = args[arg_num];
}
++arg_num;
@@ -205,7 +196,7 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr

if (arg_num < args.size() && !sharding_key)
{
if (!get_string_literal(*args[arg_num], configuration.password))
if (!get_string_literal(*args[arg_num], password))
{
sharding_key = args[arg_num];
}
@@ -267,19 +258,19 @@ void TableFunctionRemote::parseArguments(const ASTPtr & ast_function, ContextPtr
cluster = std::make_shared<Cluster>(
context->getSettingsRef(),
names,
configuration.username,
configuration.password,
username,
password,
(secure ? (maybe_secure_port ? *maybe_secure_port : DBMS_DEFAULT_SECURE_PORT) : context->getTCPPort()),
treat_local_as_remote,
treat_local_port_as_remote,
secure);
}

if (!remote_table_function_ptr && configuration.table.empty())
if (!remote_table_function_ptr && table.empty())
throw Exception(ErrorCodes::BAD_ARGUMENTS, "The name of remote table cannot be empty");

remote_table_id.database_name = configuration.database;
remote_table_id.table_name = configuration.table;
remote_table_id.database_name = database;
remote_table_id.table_name = table;
}

StoragePtr TableFunctionRemote::executeImpl(const ASTPtr & /*ast_function*/, ContextPtr context, const std::string & table_name, ColumnsDescription cached_columns) const
@@ -32,7 +32,7 @@ namespace ErrorCodes
void TableFunctionS3::parseArgumentsImpl(
const String & error_message, ASTs & args, ContextPtr context, StorageS3::Configuration & s3_configuration, bool get_format_from_file)
{
if (auto named_collection = tryGetNamedCollectionWithOverrides(args))
if (auto named_collection = tryGetNamedCollectionWithOverrides(args, context))
{
StorageS3::processNamedCollectionResult(s3_configuration, *named_collection);
}
@@ -56,7 +56,7 @@ void TableFunctionURL::parseArguments(const ASTPtr & ast, ContextPtr context)

auto & url_function_args = assert_cast<ASTExpressionList *>(args[0].get())->children;

if (auto named_collection = tryGetNamedCollectionWithOverrides(url_function_args))
if (auto named_collection = tryGetNamedCollectionWithOverrides(url_function_args, context))
{
StorageURL::processNamedCollectionResult(configuration, *named_collection);

@@ -59,11 +59,11 @@ def get_scales(runner_type: str) -> Tuple[int, int]:
"returns the multipliers for scaling down and up ASG by types"
# Scaling down is quicker on the lack of running jobs than scaling up on
# queue
scale_down = 3
scale_down = 2
scale_up = 5
if runner_type == "style-checker":
# the style checkers have so many noise, so it scales up too quickly
scale_down = 2
scale_down = 1
scale_up = 10
return scale_down, scale_up

@@ -70,6 +70,9 @@ class TestSetCapacity(unittest.TestCase):
TestCase("w/reserve", 1, 13, 20, [Queue("queued", 17, "w/reserve")], -1),
# Increase capacity
TestCase("increase", 1, 13, 20, [Queue("queued", 23, "increase")], 15),
TestCase(
"style-checker", 1, 13, 20, [Queue("queued", 33, "style-checker")], 15
),
TestCase("increase", 1, 13, 20, [Queue("queued", 18, "increase")], 14),
TestCase("increase", 1, 13, 20, [Queue("queued", 183, "increase")], 20),
TestCase(
@@ -85,10 +88,20 @@ class TestSetCapacity(unittest.TestCase):
),
TestCase("lower-min", 10, 5, 20, [Queue("queued", 5, "lower-min")], 10),
# Decrease capacity
TestCase("w/reserve", 1, 13, 20, [Queue("queued", 5, "w/reserve")], 11),
TestCase("w/reserve", 1, 13, 20, [Queue("queued", 5, "w/reserve")], 9),
TestCase(
"style-checker", 1, 13, 20, [Queue("queued", 5, "style-checker")], 5
),
TestCase("w/reserve", 1, 23, 20, [Queue("queued", 17, "w/reserve")], 20),
TestCase("decrease", 1, 13, 20, [Queue("in_progress", 3, "decrease")], 10),
TestCase("decrease", 1, 13, 20, [Queue("in_progress", 5, "decrease")], 11),
TestCase("decrease", 1, 13, 20, [Queue("in_progress", 3, "decrease")], 8),
TestCase(
"style-checker",
1,
13,
20,
[Queue("in_progress", 5, "style-checker")],
5,
),
)
for t in test_cases:
self.client.data_helper(t.name, t.min_size, t.desired_capacity, t.max_size)
@ -100,55 +100,52 @@ class Reviews:
if review.state == "APPROVED"
}

if approved:
if not approved:
logging.info(
"The following users from %s team approved the PR: %s",
"The PR #%s is not approved by any of %s team member",
self.pr.number,
TEAM_NAME,
", ".join(user.login for user in approved.keys()),
)
# The only reliable place to get the 100% accurate last_modified
# info is when the commit was pushed to GitHub. The info is
# available as a header 'last-modified' of /{org}/{repo}/commits/{sha}.
# Unfortunately, it's formatted as 'Wed, 04 Jan 2023 11:05:13 GMT'

commit = self.pr.head.repo.get_commit(self.pr.head.sha)
if commit.stats.last_modified is None:
logging.warning(
"Unable to get info about the commit %s", self.pr.head.sha
)
return False

last_changed = datetime.strptime(
commit.stats.last_modified, "%a, %d %b %Y %H:%M:%S GMT"
)

approved_at = max(review.submitted_at for review in approved.values())
if approved_at == datetime.fromtimestamp(0):
logging.info(
"Unable to get `datetime.fromtimestamp(0)`, "
"here's debug info about reviews: %s",
"\n".join(pformat(review) for review in self.reviews.values()),
)
else:
logging.info(
"The PR is approved at %s",
approved_at.isoformat(),
)

if approved_at < last_changed:
logging.info(
"There are changes after approve at %s",
approved_at.isoformat(),
)
return False
return True
return False

logging.info(
"The PR #%s is not approved by any of %s team member",
self.pr.number,
"The following users from %s team approved the PR: %s",
TEAM_NAME,
", ".join(user.login for user in approved.keys()),
)
return False

# The only reliable place to get the 100% accurate last_modified
# info is when the commit was pushed to GitHub. The info is
# available as a header 'last-modified' of /{org}/{repo}/commits/{sha}.
# Unfortunately, it's formatted as 'Wed, 04 Jan 2023 11:05:13 GMT'
commit = self.pr.head.repo.get_commit(self.pr.head.sha)
if commit.stats.last_modified is None:
logging.warning("Unable to get info about the commit %s", self.pr.head.sha)
return False

last_changed = datetime.strptime(
commit.stats.last_modified, "%a, %d %b %Y %H:%M:%S GMT"
)
logging.info("The PR is changed at %s", last_changed.isoformat())

approved_at = max(review.submitted_at for review in approved.values())
if approved_at == datetime.fromtimestamp(0):
logging.info(
"Unable to get `datetime.fromtimestamp(0)`, "
"here's debug info about reviews: %s",
"\n".join(pformat(review) for review in self.reviews.values()),
)
else:
logging.info("The PR is approved at %s", approved_at.isoformat())

if approved_at < last_changed:
logging.info(
"There are changes done at %s after approval at %s",
last_changed.isoformat(),
approved_at.isoformat(),
)
return False
return True


def get_workflows_for_head(repo: Repository, head_sha: str) -> List[WorkflowRun]:
@ -2,7 +2,7 @@
<users>
<default>
<access_management>1</access_management>
<show_named_collections>1</show_named_collections>
<named_collection_control>1</named_collection_control>
<show_named_collections_secrets>1</show_named_collections_secrets>
</default>
</users>

@ -4,7 +4,7 @@
<password></password>
<profile>default</profile>
<quota>default</quota>
<show_named_collections>1</show_named_collections>
<named_collection_control>1</named_collection_control>
<show_named_collections_secrets>1</show_named_collections_secrets>
</default>
</users>

@ -4,7 +4,7 @@
<password></password>
<profile>default</profile>
<quota>default</quota>
<show_named_collections>1</show_named_collections>
<named_collection_control>1</named_collection_control>
<show_named_collections_secrets>1</show_named_collections_secrets>
</default>
</users>

@ -4,7 +4,7 @@
<password></password>
<profile>default</profile>
<quota>default</quota>
<show_named_collections>1</show_named_collections>
<named_collection_control>1</named_collection_control>
<show_named_collections_secrets>1</show_named_collections_secrets>
</default>
</users>

@ -4,7 +4,7 @@
<password></password>
<profile>default</profile>
<quota>default</quota>
<show_named_collections>1</show_named_collections>
<named_collection_control>1</named_collection_control>
<show_named_collections_secrets>1</show_named_collections_secrets>
</default>
</users>

@ -402,6 +402,9 @@ def test_introspection():
assert instance.query("SHOW GRANTS FOR B") == TSV(
["GRANT CREATE ON *.* TO B WITH GRANT OPTION"]
)
assert instance.query("SHOW GRANTS FOR default") == TSV(
["GRANT ALL ON *.* TO default WITH GRANT OPTION"]
)
assert instance.query("SHOW GRANTS FOR A,B") == TSV(
[
"GRANT SELECT ON test.table TO A",
@ -2,9 +2,16 @@
<named_collections>
<named_collection_1/>
<named_collection_2/>
<named_collection_3/>
<named_collection_4/>
<named_collection_5/>
<named_collection_3>
<user>user</user>
<password>pass</password>
</named_collection_3>
<named_collection_4>
<host></host>
</named_collection_4>
<named_collection_5>
<host></host>
</named_collection_5>
<named_collection_6/>
</named_collections>
</clickhouse>

@ -126,7 +126,7 @@ def test_create_table():
f"MySQL(named_collection_2, database = 'mysql_db', host = 'mysql57', port = 3306, password = '{password}', table = 'mysql_table', user = 'mysql_user')",
f"MySQL(named_collection_3, database = 'mysql_db', host = 'mysql57', port = 3306, table = 'mysql_table')",
f"PostgreSQL(named_collection_4, host = 'postgres1', port = 5432, database = 'postgres_db', table = 'postgres_table', user = 'postgres_user', password = '{password}')",
f"MongoDB(named_collection_5, host = 'mongo1', port = 5432, database = 'mongo_db', collection = 'mongo_col', user = 'mongo_user', password = '{password}')",
f"MongoDB(named_collection_5, host = 'mongo1', port = 5432, db = 'mongo_db', collection = 'mongo_col', user = 'mongo_user', password = '{password}')",
f"S3(named_collection_6, url = 'http://minio1:9001/root/data/test8.csv', access_key_id = 'minio', secret_access_key = '{password}', format = 'CSV')",
]

@ -163,7 +163,7 @@ def test_create_table():
"CREATE TABLE table9 (`x` int) ENGINE = MySQL(named_collection_2, database = 'mysql_db', host = 'mysql57', port = 3306, password = '[HIDDEN]', table = 'mysql_table', user = 'mysql_user')",
"CREATE TABLE table10 (x int) ENGINE = MySQL(named_collection_3, database = 'mysql_db', host = 'mysql57', port = 3306, table = 'mysql_table')",
"CREATE TABLE table11 (`x` int) ENGINE = PostgreSQL(named_collection_4, host = 'postgres1', port = 5432, database = 'postgres_db', table = 'postgres_table', user = 'postgres_user', password = '[HIDDEN]')",
"CREATE TABLE table12 (`x` int) ENGINE = MongoDB(named_collection_5, host = 'mongo1', port = 5432, database = 'mongo_db', collection = 'mongo_col', user = 'mongo_user', password = '[HIDDEN]'",
"CREATE TABLE table12 (`x` int) ENGINE = MongoDB(named_collection_5, host = 'mongo1', port = 5432, db = 'mongo_db', collection = 'mongo_col', user = 'mongo_user', password = '[HIDDEN]'",
"CREATE TABLE table13 (`x` int) ENGINE = S3(named_collection_6, url = 'http://minio1:9001/root/data/test8.csv', access_key_id = 'minio', secret_access_key = '[HIDDEN]', format = 'CSV')",
],
must_not_contain=[password],

@ -233,9 +233,9 @@ def test_table_functions():
f"remoteSecure('127.{{2..11}}', 'default', 'remote_table', 'remote_user', rand())",
f"mysql(named_collection_1, host = 'mysql57', port = 3306, database = 'mysql_db', table = 'mysql_table', user = 'mysql_user', password = '{password}')",
f"postgresql(named_collection_2, password = '{password}', host = 'postgres1', port = 5432, database = 'postgres_db', table = 'postgres_table', user = 'postgres_user')",
f"s3(named_collection_3, url = 'http://minio1:9001/root/data/test4.csv', access_key_id = 'minio', secret_access_key = '{password}')",
f"remote(named_collection_4, addresses_expr = '127.{{2..11}}', database = 'default', table = 'remote_table', user = 'remote_user', password = '{password}', sharding_key = rand())",
f"remoteSecure(named_collection_5, addresses_expr = '127.{{2..11}}', database = 'default', table = 'remote_table', user = 'remote_user', password = '{password}')",
f"s3(named_collection_2, url = 'http://minio1:9001/root/data/test4.csv', access_key_id = 'minio', secret_access_key = '{password}')",
f"remote(named_collection_6, addresses_expr = '127.{{2..11}}', database = 'default', table = 'remote_table', user = 'remote_user', password = '{password}', sharding_key = rand())",
f"remoteSecure(named_collection_6, addresses_expr = '127.{{2..11}}', database = 'default', table = 'remote_table', user = 'remote_user', password = '{password}')",
]

for i, table_function in enumerate(table_functions):
@ -286,9 +286,9 @@ def test_table_functions():
"CREATE TABLE tablefunc24 (x int) AS remoteSecure('127.{2..11}', 'default', 'remote_table', 'remote_user', rand())",
"CREATE TABLE tablefunc25 (`x` int) AS mysql(named_collection_1, host = 'mysql57', port = 3306, database = 'mysql_db', table = 'mysql_table', user = 'mysql_user', password = '[HIDDEN]')",
"CREATE TABLE tablefunc26 (`x` int) AS postgresql(named_collection_2, password = '[HIDDEN]', host = 'postgres1', port = 5432, database = 'postgres_db', table = 'postgres_table', user = 'postgres_user')",
"CREATE TABLE tablefunc27 (`x` int) AS s3(named_collection_3, url = 'http://minio1:9001/root/data/test4.csv', access_key_id = 'minio', secret_access_key = '[HIDDEN]')",
"CREATE TABLE tablefunc28 (`x` int) AS remote(named_collection_4, addresses_expr = '127.{2..11}', database = 'default', table = 'remote_table', user = 'remote_user', password = '[HIDDEN]', sharding_key = rand())",
"CREATE TABLE tablefunc29 (`x` int) AS remoteSecure(named_collection_5, addresses_expr = '127.{2..11}', database = 'default', table = 'remote_table', user = 'remote_user', password = '[HIDDEN]')",
"CREATE TABLE tablefunc27 (`x` int) AS s3(named_collection_2, url = 'http://minio1:9001/root/data/test4.csv', access_key_id = 'minio', secret_access_key = '[HIDDEN]')",
"CREATE TABLE tablefunc28 (`x` int) AS remote(named_collection_6, addresses_expr = '127.{2..11}', database = 'default', table = 'remote_table', user = 'remote_user', password = '[HIDDEN]', sharding_key = rand())",
"CREATE TABLE tablefunc29 (`x` int) AS remoteSecure(named_collection_6, addresses_expr = '127.{2..11}', database = 'default', table = 'remote_table', user = 'remote_user', password = '[HIDDEN]')",
],
must_not_contain=[password],
)
@ -6,7 +6,6 @@
<host>mysql57</host>
<port>3306</port>
<database>test_database</database>
<table>test_table</table>
</mysql1>
<mysql2>
<user>postgres</user>
@ -19,7 +18,6 @@
<host>mysql57</host>
<port>1111</port>
<database>clickhouse</database>
<table>test_table</table>
</mysql3>
</named_collections>
</clickhouse>

@ -4,6 +4,7 @@
<password></password>
<profile>default</profile>
<quota>default</quota>
<named_collection_control>1</named_collection_control>
<show_named_collections>1</show_named_collections>
<show_named_collections_secrets>1</show_named_collections_secrets>
</default>

@ -4,7 +4,7 @@
<password></password>
<profile>default</profile>
<quota>default</quota>
<access_management>1</access_management>
<named_collection_control>1</named_collection_control>
</default>
</users>
</clickhouse>
@ -24,6 +24,16 @@ def cluster():
],
stay_alive=True,
)
cluster.add_instance(
"node_only_named_collection_control",
main_configs=[
"configs/config.d/named_collections.xml",
],
user_configs=[
"configs/users.d/users_only_named_collection_control.xml",
],
stay_alive=True,
)
cluster.add_instance(
"node_no_default_access",
main_configs=[
@ -34,16 +44,6 @@ def cluster():
],
stay_alive=True,
)
cluster.add_instance(
"node_no_default_access_but_with_access_management",
main_configs=[
"configs/config.d/named_collections.xml",
],
user_configs=[
"configs/users.d/users_no_default_access_with_access_management.xml",
],
stay_alive=True,
)

logging.info("Starting cluster...")
cluster.start()
@ -70,40 +70,39 @@ def replace_in_users_config(node, old, new):
)


def test_access(cluster):
def test_default_access(cluster):
node = cluster.instances["node_no_default_access"]
assert 0 == int(node.query("select count() from system.named_collections"))
node = cluster.instances["node_only_named_collection_control"]
assert 1 == int(node.query("select count() from system.named_collections"))
assert (
"DB::Exception: default: Not enough privileges. To execute this query it's necessary to have grant SHOW NAMED COLLECTIONS ON *.*"
in node.query_and_get_error("select count() from system.named_collections")
)
node = cluster.instances["node_no_default_access_but_with_access_management"]
assert (
"DB::Exception: default: Not enough privileges. To execute this query it's necessary to have grant SHOW NAMED COLLECTIONS ON *.*"
in node.query_and_get_error("select count() from system.named_collections")
node.query("select collection['key1'] from system.named_collections").strip()
== "[HIDDEN]"
)

node = cluster.instances["node"]
assert int(node.query("select count() from system.named_collections")) > 0

replace_in_users_config(
node, "show_named_collections>1", "show_named_collections>0"
node, "named_collection_control>1", "named_collection_control>0"
)
assert "show_named_collections>0" in node.exec_in_container(
assert "named_collection_control>0" in node.exec_in_container(
["bash", "-c", f"cat /etc/clickhouse-server/users.d/users.xml"]
)
node.restart_clickhouse()
assert 0 == int(node.query("select count() from system.named_collections"))

replace_in_users_config(
node, "named_collection_control>0", "named_collection_control>1"
)
assert "named_collection_control>1" in node.exec_in_container(
["bash", "-c", f"cat /etc/clickhouse-server/users.d/users.xml"]
)
node.restart_clickhouse()
assert (
"DB::Exception: default: Not enough privileges. To execute this query it's necessary to have grant SHOW NAMED COLLECTIONS ON *.*"
in node.query_and_get_error("select count() from system.named_collections")
)
replace_in_users_config(
node, "show_named_collections>0", "show_named_collections>1"
)
assert "show_named_collections>1" in node.exec_in_container(
["bash", "-c", f"cat /etc/clickhouse-server/users.d/users.xml"]
)
node.restart_clickhouse()
assert (
node.query("select collection['key1'] from system.named_collections").strip()
node.query(
"select collection['key1'] from system.named_collections where name = 'collection1'"
).strip()
== "value1"
)
replace_in_users_config(
@ -114,7 +113,9 @@ def test_access(cluster):
)
node.restart_clickhouse()
assert (
node.query("select collection['key1'] from system.named_collections").strip()
node.query(
"select collection['key1'] from system.named_collections where name = 'collection1'"
).strip()
== "[HIDDEN]"
)
replace_in_users_config(
@ -125,11 +126,282 @@ def test_access(cluster):
)
node.restart_clickhouse()
assert (
node.query("select collection['key1'] from system.named_collections").strip()
node.query(
"select collection['key1'] from system.named_collections where name = 'collection1'"
).strip()
== "value1"
)

def test_granular_access_show_query(cluster):
node = cluster.instances["node"]
assert (
"GRANT ALL ON *.* TO default WITH GRANT OPTION"
== node.query("SHOW GRANTS FOR default").strip()
)  # includes named collections control
assert 1 == int(node.query("SELECT count() FROM system.named_collections"))
assert (
"collection1" == node.query("SELECT name FROM system.named_collections").strip()
)

node.query("DROP USER IF EXISTS kek")
node.query("CREATE USER kek")
node.query("GRANT select ON *.* TO kek")
assert 0 == int(
node.query("SELECT count() FROM system.named_collections", user="kek")
)

node.query("GRANT show named collections ON collection1 TO kek")
assert 1 == int(
node.query("SELECT count() FROM system.named_collections", user="kek")
)
assert (
"collection1"
== node.query("SELECT name FROM system.named_collections", user="kek").strip()
)

node.query("CREATE NAMED COLLECTION collection2 AS key1=1, key2='value2'")
assert 2 == int(node.query("SELECT count() FROM system.named_collections"))
assert (
"collection1\ncollection2"
== node.query("select name from system.named_collections").strip()
)

assert 1 == int(
node.query("SELECT count() FROM system.named_collections", user="kek")
)
assert (
"collection1"
== node.query("select name from system.named_collections", user="kek").strip()
)

node.query("GRANT show named collections ON collection2 TO kek")
assert 2 == int(
node.query("SELECT count() FROM system.named_collections", user="kek")
)
assert (
"collection1\ncollection2"
== node.query("select name from system.named_collections", user="kek").strip()
)
node.restart_clickhouse()
assert (
"collection1\ncollection2"
== node.query("select name from system.named_collections", user="kek").strip()
)

# check:
# GRANT show named collections ON *
# REVOKE show named collections ON collection

node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT select ON *.* TO koko")
assert 0 == int(
node.query("SELECT count() FROM system.named_collections", user="koko")
)
assert "GRANT SELECT ON *.* TO koko" == node.query("SHOW GRANTS FOR koko;").strip()
node.query("GRANT show named collections ON * TO koko")
assert (
"GRANT SELECT ON *.* TO koko\nGRANT SHOW NAMED COLLECTIONS ON * TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)
assert (
"collection1\ncollection2"
== node.query("select name from system.named_collections", user="koko").strip()
)
node.restart_clickhouse()
assert (
"GRANT SELECT ON *.* TO koko\nGRANT SHOW NAMED COLLECTIONS ON * TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)
assert (
"collection1\ncollection2"
== node.query("select name from system.named_collections", user="koko").strip()
)

node.query("REVOKE show named collections ON collection1 FROM koko;")
assert (
"GRANT SELECT ON *.* TO koko\nGRANT SHOW NAMED COLLECTIONS ON * TO koko\nREVOKE SHOW NAMED COLLECTIONS ON collection1 FROM koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)
assert (
"collection2"
== node.query("select name from system.named_collections", user="koko").strip()
)
node.restart_clickhouse()
assert (
"GRANT SELECT ON *.* TO koko\nGRANT SHOW NAMED COLLECTIONS ON * TO koko\nREVOKE SHOW NAMED COLLECTIONS ON collection1 FROM koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)
assert (
"collection2"
== node.query("select name from system.named_collections", user="koko").strip()
)
node.query("REVOKE show named collections ON collection2 FROM koko;")
assert (
"" == node.query("select * from system.named_collections", user="koko").strip()
)
assert (
"GRANT SELECT ON *.* TO koko\nGRANT SHOW NAMED COLLECTIONS ON * TO koko\nREVOKE SHOW NAMED COLLECTIONS ON collection1 FROM koko\nREVOKE SHOW NAMED COLLECTIONS ON collection2 FROM koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

# check:
# GRANT show named collections ON collection
# REVOKE show named collections ON *

node.query("GRANT show named collections ON collection2 TO koko")
assert (
"GRANT SELECT ON *.* TO koko\nGRANT SHOW NAMED COLLECTIONS ON * TO koko\nREVOKE SHOW NAMED COLLECTIONS ON collection1 FROM koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)
assert (
"collection2"
== node.query("select name from system.named_collections", user="koko").strip()
)
node.query("REVOKE show named collections ON * FROM koko;")
assert "GRANT SELECT ON *.* TO koko" == node.query("SHOW GRANTS FOR koko;").strip()
assert (
"" == node.query("select * from system.named_collections", user="koko").strip()
)

node.query("DROP NAMED COLLECTION collection2")

def test_show_grants(cluster):
node = cluster.instances["node"]
node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT CREATE NAMED COLLECTION ON name1 TO koko")
node.query("GRANT select ON name1.* TO koko")
assert (
"GRANT SELECT ON name1.* TO koko\nGRANT CREATE NAMED COLLECTION ON name1 TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT CREATE NAMED COLLECTION ON name1 TO koko")
node.query("GRANT select ON name1 TO koko")
assert (
"GRANT SELECT ON default.name1 TO koko\nGRANT CREATE NAMED COLLECTION ON name1 TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT select ON name1 TO koko")
node.query("GRANT CREATE NAMED COLLECTION ON name1 TO koko")
assert (
"GRANT SELECT ON default.name1 TO koko\nGRANT CREATE NAMED COLLECTION ON name1 TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT select ON *.* TO koko")
node.query("GRANT CREATE NAMED COLLECTION ON * TO koko")
assert (
"GRANT SELECT ON *.* TO koko\nGRANT CREATE NAMED COLLECTION ON * TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT CREATE NAMED COLLECTION ON * TO koko")
node.query("GRANT select ON *.* TO koko")
assert (
"GRANT SELECT ON *.* TO koko\nGRANT CREATE NAMED COLLECTION ON * TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT CREATE NAMED COLLECTION ON * TO koko")
node.query("GRANT select ON * TO koko")
assert (
"GRANT CREATE NAMED COLLECTION ON * TO koko\nGRANT SELECT ON default.* TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

node.query("DROP USER IF EXISTS koko")
node.query("CREATE USER koko")
node.query("GRANT select ON * TO koko")
node.query("GRANT CREATE NAMED COLLECTION ON * TO koko")
assert (
"GRANT CREATE NAMED COLLECTION ON * TO koko\nGRANT SELECT ON default.* TO koko"
== node.query("SHOW GRANTS FOR koko;").strip()
)

def test_granular_access_create_alter_drop_query(cluster):
node = cluster.instances["node"]
node.query("DROP USER IF EXISTS kek")
node.query("CREATE USER kek")
node.query("GRANT select ON *.* TO kek")
assert 0 == int(
node.query("SELECT count() FROM system.named_collections", user="kek")
)

assert (
"DB::Exception: kek: Not enough privileges. To execute this query it's necessary to have grant CREATE NAMED COLLECTION"
in node.query_and_get_error(
"CREATE NAMED COLLECTION collection2 AS key1=1, key2='value2'", user="kek"
)
)
node.query("GRANT create named collection ON collection2 TO kek")
node.query(
"CREATE NAMED COLLECTION collection2 AS key1=1, key2='value2'", user="kek"
)
assert 0 == int(
node.query("select count() from system.named_collections", user="kek")
)

node.query("GRANT show named collections ON collection2 TO kek")
assert (
"collection2"
== node.query("select name from system.named_collections", user="kek").strip()
)
assert (
"1"
== node.query(
"select collection['key1'] from system.named_collections where name = 'collection2'"
).strip()
)

assert (
"DB::Exception: kek: Not enough privileges. To execute this query it's necessary to have grant ALTER NAMED COLLECTION"
in node.query_and_get_error(
"ALTER NAMED COLLECTION collection2 SET key1=2", user="kek"
)
)
node.query("GRANT alter named collection ON collection2 TO kek")
node.query("ALTER NAMED COLLECTION collection2 SET key1=2", user="kek")
assert (
"2"
== node.query(
"select collection['key1'] from system.named_collections where name = 'collection2'"
).strip()
)
node.query("REVOKE alter named collection ON collection2 FROM kek")
assert (
"DB::Exception: kek: Not enough privileges. To execute this query it's necessary to have grant ALTER NAMED COLLECTION"
in node.query_and_get_error(
"ALTER NAMED COLLECTION collection2 SET key1=3", user="kek"
)
)

assert (
"DB::Exception: kek: Not enough privileges. To execute this query it's necessary to have grant DROP NAMED COLLECTION"
in node.query_and_get_error("DROP NAMED COLLECTION collection2", user="kek")
)
node.query("GRANT drop named collection ON collection2 TO kek")
node.query("DROP NAMED COLLECTION collection2", user="kek")
assert 0 == int(
node.query("select count() from system.named_collections", user="kek")
)


def test_config_reload(cluster):
node = cluster.instances["node"]
assert (
@ -164,6 +436,16 @@ def test_config_reload(cluster):
).strip()
)

replace_in_server_config(node, "value2", "value1")
node.query("SYSTEM RELOAD CONFIG")

assert (
"value1"
== node.query(
"select collection['key1'] from system.named_collections where name = 'collection1'"
).strip()
)


def test_sql_commands(cluster):
node = cluster.instances["node"]
@ -4,7 +4,7 @@
|
||||
<password></password>
|
||||
<profile>default</profile>
|
||||
<quota>default</quota>
|
||||
<show_named_collections>1</show_named_collections>
|
||||
<named_collection_control>1</named_collection_control>
|
||||
<show_named_collections_secrets>1</show_named_collections_secrets>
|
||||
</default>
|
||||
</users>
|
||||
|
@ -4,7 +4,7 @@
|
||||
<password></password>
|
||||
<profile>default</profile>
|
||||
<quota>default</quota>
|
||||
<show_named_collections>1</show_named_collections>
|
||||
<named_collection_control>1</named_collection_control>
|
||||
<show_named_collections_secrets>1</show_named_collections_secrets>
|
||||
</default>
|
||||
</users>
|
||||
|
@ -36,7 +36,6 @@
|
||||
<host>mysql57</host>
|
||||
<port>3306</port>
|
||||
<database>clickhouse</database>
|
||||
<table>test_settings</table>
|
||||
<connection_pool_size>1</connection_pool_size>
|
||||
<read_write_timeout>20123001</read_write_timeout>
|
||||
<connect_timeout>20123002</connect_timeout>
|
||||
|
@ -765,7 +765,7 @@ def test_settings(started_cluster):
|
||||
|
||||
rw_timeout = 20123001
|
||||
connect_timeout = 20123002
|
||||
node1.query(f"SELECT * FROM mysql(mysql_with_settings)")
|
||||
node1.query(f"SELECT * FROM mysql(mysql_with_settings, table='test_settings')")
|
||||
assert node1.contains_in_log(
|
||||
f"with settings: connect_timeout={connect_timeout}, read_write_timeout={rw_timeout}"
|
||||
)
|
||||
|
@ -382,7 +382,7 @@ def test_postgres_distributed(started_cluster):
|
||||
"""
|
||||
CREATE TABLE test_shards2
|
||||
(id UInt32, name String, age UInt32, money UInt32)
|
||||
ENGINE = ExternalDistributed('PostgreSQL', postgres4, description='postgres{1|2}:5432,postgres{3|4}:5432'); """
|
||||
ENGINE = ExternalDistributed('PostgreSQL', postgres4, addresses_expr='postgres{1|2}:5432,postgres{3|4}:5432'); """
|
||||
)
|
||||
|
||||
result = node2.query("SELECT DISTINCT(name) FROM test_shards2 ORDER BY name")
|
||||
|
@ -17,7 +17,7 @@ DROP TABLE IF EXISTS wv;
|
||||
|
||||
CREATE TABLE dst(time DateTime, colA String, colB String) Engine=MergeTree ORDER BY tuple();
|
||||
CREATE TABLE mt(colA String, colB String) ENGINE=MergeTree ORDER BY tuple();
|
||||
CREATE WINDOW VIEW wv TO dst AS SELECT tumbleStart(w_id) AS time, colA, colB FROM mt GROUP BY tumble(now(), INTERVAL '10' SECOND, 'US/Samoa') AS w_id, colA, colB;
|
||||
CREATE WINDOW VIEW wv TO dst AS SELECT tumbleStart(w_id) AS time, colA, colB FROM mt GROUP BY tumble(now('US/Samoa'), INTERVAL '10' SECOND, 'US/Samoa') AS w_id, colA, colB;
|
||||
|
||||
INSERT INTO mt VALUES ('test1', 'test2');
|
||||
EOF
|
||||
|
@ -39,7 +39,7 @@ ALTER MOVE PARTITION ['ALTER MOVE PART','MOVE PARTITION','MOVE PART'] TABLE ALTE
ALTER FETCH PARTITION ['ALTER FETCH PART','FETCH PARTITION'] TABLE ALTER TABLE
ALTER FREEZE PARTITION ['FREEZE PARTITION','UNFREEZE'] TABLE ALTER TABLE
ALTER DATABASE SETTINGS ['ALTER DATABASE SETTING','ALTER MODIFY DATABASE SETTING','MODIFY DATABASE SETTING'] DATABASE ALTER DATABASE
ALTER NAMED COLLECTION [] \N ALTER
ALTER NAMED COLLECTION [] NAMED_COLLECTION NAMED COLLECTION CONTROL
ALTER TABLE [] \N ALTER
ALTER DATABASE [] \N ALTER
ALTER VIEW REFRESH ['ALTER LIVE VIEW REFRESH','REFRESH VIEW'] VIEW ALTER VIEW
@ -53,14 +53,14 @@ CREATE DICTIONARY [] DICTIONARY CREATE
CREATE TEMPORARY TABLE [] GLOBAL CREATE ARBITRARY TEMPORARY TABLE
CREATE ARBITRARY TEMPORARY TABLE [] GLOBAL CREATE
CREATE FUNCTION [] GLOBAL CREATE
CREATE NAMED COLLECTION [] GLOBAL CREATE
CREATE NAMED COLLECTION [] NAMED_COLLECTION NAMED COLLECTION CONTROL
CREATE [] \N ALL
DROP DATABASE [] DATABASE DROP
DROP TABLE [] TABLE DROP
DROP VIEW [] VIEW DROP
DROP DICTIONARY [] DICTIONARY DROP
DROP FUNCTION [] GLOBAL DROP
DROP NAMED COLLECTION [] GLOBAL DROP
DROP NAMED COLLECTION [] NAMED_COLLECTION NAMED COLLECTION CONTROL
DROP [] \N ALL
TRUNCATE ['TRUNCATE TABLE'] TABLE ALL
OPTIMIZE ['OPTIMIZE TABLE'] TABLE ALL
@ -90,9 +90,10 @@ SHOW ROW POLICIES ['SHOW POLICIES','SHOW CREATE ROW POLICY','SHOW CREATE POLICY'
SHOW QUOTAS ['SHOW CREATE QUOTA'] GLOBAL SHOW ACCESS
SHOW SETTINGS PROFILES ['SHOW PROFILES','SHOW CREATE SETTINGS PROFILE','SHOW CREATE PROFILE'] GLOBAL SHOW ACCESS
SHOW ACCESS [] \N ACCESS MANAGEMENT
SHOW NAMED COLLECTIONS ['SHOW NAMED COLLECTIONS'] GLOBAL ACCESS MANAGEMENT
SHOW NAMED COLLECTIONS SECRETS ['SHOW NAMED COLLECTIONS SECRETS'] GLOBAL ACCESS MANAGEMENT
ACCESS MANAGEMENT [] \N ALL
SHOW NAMED COLLECTIONS ['SHOW NAMED COLLECTIONS'] NAMED_COLLECTION NAMED COLLECTION CONTROL
SHOW NAMED COLLECTIONS SECRETS ['SHOW NAMED COLLECTIONS SECRETS'] NAMED_COLLECTION NAMED COLLECTION CONTROL
NAMED COLLECTION CONTROL [] NAMED_COLLECTION ALL
SYSTEM SHUTDOWN ['SYSTEM KILL','SHUTDOWN'] GLOBAL SYSTEM
SYSTEM DROP DNS CACHE ['SYSTEM DROP DNS','DROP DNS CACHE','DROP DNS'] GLOBAL SYSTEM DROP CACHE
SYSTEM DROP MARK CACHE ['SYSTEM DROP MARK','DROP MARK CACHE','DROP MARKS'] GLOBAL SYSTEM DROP CACHE

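The privilege-table hunks above move the named-collection privileges out of the generic `ALTER`/`CREATE`/`DROP`/`ACCESS MANAGEMENT` groups and under a new `NAMED COLLECTION CONTROL` parent at `NAMED_COLLECTION` level. A small sketch of how such a parent tree resolves grants (the mapping below is reconstructed from this diff only, and the helper name is illustrative):

```python
# Parent links for the named-collection branch of the privilege tree,
# as reorganized by this diff: each privilege rolls up to
# NAMED COLLECTION CONTROL, which in turn rolls up to ALL.
PARENT = {
    'ALTER NAMED COLLECTION': 'NAMED COLLECTION CONTROL',
    'CREATE NAMED COLLECTION': 'NAMED COLLECTION CONTROL',
    'DROP NAMED COLLECTION': 'NAMED COLLECTION CONTROL',
    'SHOW NAMED COLLECTIONS': 'NAMED COLLECTION CONTROL',
    'SHOW NAMED COLLECTIONS SECRETS': 'NAMED COLLECTION CONTROL',
    'NAMED COLLECTION CONTROL': 'ALL',
}

def implies(granted, requested):
    """True if the granted privilege covers the requested one by walking
    the requested privilege's ancestor chain."""
    node = requested
    while node is not None:
        if node == granted:
            return True
        node = PARENT.get(node)
    return False

print(implies('NAMED COLLECTION CONTROL', 'DROP NAMED COLLECTION'))  # True
```

Notably, after this change a plain `ALTER` grant no longer implies `ALTER NAMED COLLECTION`, since the latter's ancestor chain now runs through `NAMED COLLECTION CONTROL` to `ALL`.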
@ -1,3 +1,3 @@
SELECT view(SELECT 1); -- { clientError 62 }

SELECT sumIf(dummy, dummy) FROM remote('127.0.0.{1,2}', numbers(2, 100), view(SELECT CAST(NULL, 'Nullable(UInt8)') AS dummy FROM system.one)); -- { serverError 183 }
SELECT sumIf(dummy, dummy) FROM remote('127.0.0.{1,2}', numbers(2, 100), view(SELECT CAST(NULL, 'Nullable(UInt8)') AS dummy FROM system.one)); -- { serverError UNKNOWN_FUNCTION }

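The last hunk swaps the hard-coded numeric code `183` for the symbolic name `UNKNOWN_FUNCTION`, so the test expectation survives any renumbering of error codes. A toy sketch of a checker that accepts either form (the code values in the mapping are placeholders for the demo, not authoritative ClickHouse values):

```python
# Placeholder name-to-code mapping; real ClickHouse maintains this in
# ErrorCodes and the numbers here are assumptions for illustration only.
ERROR_CODES = {'UNKNOWN_FUNCTION': 46, 'SYNTAX_ERROR': 62}

def expect_error(raised_name, expected):
    """Accept a symbolic error name or a bare numeric code as the expectation,
    comparing by name when possible so renumbering does not break tests."""
    if isinstance(expected, int):
        return ERROR_CODES.get(raised_name) == expected
    return raised_name == expected

print(expect_error('UNKNOWN_FUNCTION', 'UNKNOWN_FUNCTION'))  # True
```

Matching by name is the design choice the diff encodes: the symbolic form stays stable across releases while the numeric form is an implementation detail.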
Some files were not shown because too many files have changed in this diff.