Merge branch 'master' into distinct-in-order-sqlancer-crashes
Commit 7e6bc6a326

.github/workflows/master.yml (vendored): 2 changes
@@ -3643,7 +3643,7 @@ jobs:
       cat >> "$GITHUB_ENV" << 'EOF'
       TEMP_PATH=${{runner.temp}}/unit_tests_asan
       REPORTS_PATH=${{runner.temp}}/reports_dir
-      CHECK_NAME=Unit tests (release-clang)
+      CHECK_NAME=Unit tests (release)
       REPO_COPY=${{runner.temp}}/unit_tests_asan/ClickHouse
       EOF
   - name: Download json reports
.github/workflows/pull_request.yml (vendored): 2 changes

@@ -4541,7 +4541,7 @@ jobs:
       cat >> "$GITHUB_ENV" << 'EOF'
       TEMP_PATH=${{runner.temp}}/unit_tests_asan
       REPORTS_PATH=${{runner.temp}}/reports_dir
-      CHECK_NAME=Unit tests (release-clang)
+      CHECK_NAME=Unit tests (release)
       REPO_COPY=${{runner.temp}}/unit_tests_asan/ClickHouse
       EOF
   - name: Download json reports
@@ -23,7 +23,6 @@
* Added `Overlay` database engine to combine multiple databases into one. Added `Filesystem` database engine to represent a directory in the filesystem as a set of implicitly available tables with auto-detected formats and structures. A new `S3` database engine allows read-only interaction with S3 storage by representing a prefix as a set of tables. A new `HDFS` database engine allows interacting with HDFS storage in the same way. [#48821](https://github.com/ClickHouse/ClickHouse/pull/48821) ([alekseygolub](https://github.com/alekseygolub)).
* Add support for external disks in Keeper for storing snapshots and logs. [#50098](https://github.com/ClickHouse/ClickHouse/pull/50098) ([Antonio Andelic](https://github.com/antonio2368)).
* Add support for multi-directory selection (`{}`) globs. [#50559](https://github.com/ClickHouse/ClickHouse/pull/50559) ([Andrey Zvonov](https://github.com/zvonand)).
-* Support ZooKeeper `reconfig` command for ClickHouse Keeper with incremental reconfiguration, which can be enabled via the `keeper_server.enable_reconfiguration` setting. Supports adding servers, removing servers, and changing server priorities. [#49450](https://github.com/ClickHouse/ClickHouse/pull/49450) ([Mike Kot](https://github.com/myrrc)).
* The Kafka connector can fetch the Avro schema from a schema registry with basic authentication using URL-encoded credentials. [#49664](https://github.com/ClickHouse/ClickHouse/pull/49664) ([Ilya Golshtein](https://github.com/ilejn)).
* Add function `arrayJaccardIndex`, which computes the Jaccard similarity between two arrays. [#50076](https://github.com/ClickHouse/ClickHouse/pull/50076) ([FFFFFFFHHHHHHH](https://github.com/FFFFFFFHHHHHHH)).
* Add a column `is_obsolete` to `system.settings` and similar tables. Closes [#50819](https://github.com/ClickHouse/ClickHouse/issues/50819). [#50826](https://github.com/ClickHouse/ClickHouse/pull/50826) ([flynn](https://github.com/ucasfl)).
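As a quick illustration of the new `arrayJaccardIndex` function mentioned above (a sketch, not taken from this changelog): the two sample arrays share one element out of three distinct values, so the expected result is about 0.33.

```sql
-- Hypothetical usage of arrayJaccardIndex: |{2}| / |{1, 2, 3}| = 1/3
SELECT arrayJaccardIndex([1, 2], [2, 3]) AS jaccard_similarity;
```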
@@ -124,6 +123,7 @@
* (experimental MaterializedMySQL) Double-quoted comments are now supported in MaterializedMySQL. [#52355](https://github.com/ClickHouse/ClickHouse/pull/52355) ([Val Doroshchuk](https://github.com/valbok)).
* Upgrade Intel QPL from v1.1.0 to v1.2.0; upgrade Intel accel-config from v3.5 to v4.0; fix an issue where a Device IOTLB miss had a big performance impact for IAA accelerators. [#52180](https://github.com/ClickHouse/ClickHouse/pull/52180) ([jasperzhu](https://github.com/jinjunzh)).
* The `session_timezone` setting (new in version 23.6) is demoted to experimental. [#52445](https://github.com/ClickHouse/ClickHouse/pull/52445) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Support ZooKeeper `reconfig` command for ClickHouse Keeper with incremental reconfiguration, which can be enabled via the `keeper_server.enable_reconfiguration` setting. Supports adding servers, removing servers, and changing server priorities. [#49450](https://github.com/ClickHouse/ClickHouse/pull/49450) ([Mike Kot](https://github.com/myrrc)). It is suspected that this feature is incomplete.

#### Build/Testing/Packaging Improvement
* Add experimental ClickHouse builds for Linux RISC-V 64 to CI. [#31398](https://github.com/ClickHouse/ClickHouse/pull/31398) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
@@ -84,6 +84,7 @@ The BACKUP and RESTORE statements take a list of DATABASE and TABLE names, a des
 - `password` for the file on disk
 - `base_backup`: the destination of the previous backup of this source. For example, `Disk('backups', '1.zip')`
 - `structure_only`: if enabled, allows backing up or restoring only the CREATE statements, without the data of tables
+- `s3_storage_class`: the storage class used for S3 backup. For example, `STANDARD`

 ### Usage examples
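A minimal usage sketch for the new `s3_storage_class` setting, assuming the documented `BACKUP ... TO S3(...)` syntax; the bucket, credentials and table name below are placeholders, not values from this commit.

```sql
BACKUP TABLE test.table
    TO S3('https://<bucket>.s3.amazonaws.com/backups/day1', '<access-key-id>', '<secret-access-key>')
    SETTINGS s3_storage_class = 'STANDARD';
```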
@@ -2288,6 +2288,8 @@ This section contains the following parameters:
 - `session_timeout_ms` — Maximum timeout for the client session in milliseconds.
 - `operation_timeout_ms` — Maximum timeout for one operation in milliseconds.
 - `root` — The [znode](http://zookeeper.apache.org/doc/r3.5.5/zookeeperOver.html#Nodes+and+ephemeral+nodes) that is used as the root for znodes used by the ClickHouse server. Optional.
+- `fallback_session_lifetime.min` - If the first ZooKeeper host resolved by the `zookeeper_load_balancing` strategy is unavailable, limit the lifetime of the ZooKeeper session to the fallback node. This is done for load-balancing purposes, to avoid excessive load on one of the ZooKeeper hosts. This setting sets the minimal duration of the fallback session. Set in seconds. Optional. Default is 3 hours.
+- `fallback_session_lifetime.max` - If the first ZooKeeper host resolved by the `zookeeper_load_balancing` strategy is unavailable, limit the lifetime of the ZooKeeper session to the fallback node. This is done for load-balancing purposes, to avoid excessive load on one of the ZooKeeper hosts. This setting sets the maximum duration of the fallback session. Set in seconds. Optional. Default is 6 hours.
 - `identity` — User and password that may be required by ZooKeeper to give access to requested znodes. Optional.
 - `zookeeper_load_balancing` - Specifies the algorithm of ZooKeeper node selection.
   * `random` - randomly selects one of the ZooKeeper nodes.
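To see which ZooKeeper host the server is actually connected to (useful when reasoning about the fallback-session behaviour described above), recent ClickHouse versions expose a `system.zookeeper_connection` table; a sketch, assuming that table is available:

```sql
SELECT * FROM system.zookeeper_connection;
```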
@@ -327,3 +327,39 @@ The maximum amount of data consumed by temporary files on disk in bytes for all
Zero means unlimited.

Default value: 0.

## max_sessions_for_user {#max-sessions-per-user}

Maximum number of simultaneous sessions per authenticated user to the ClickHouse server.

Example:

``` xml
<profiles>
    <single_session_profile>
        <max_sessions_for_user>1</max_sessions_for_user>
    </single_session_profile>
    <two_sessions_profile>
        <max_sessions_for_user>2</max_sessions_for_user>
    </two_sessions_profile>
    <unlimited_sessions_profile>
        <max_sessions_for_user>0</max_sessions_for_user>
    </unlimited_sessions_profile>
</profiles>
<users>
    <!-- User Alice can connect to a ClickHouse server no more than once at a time. -->
    <Alice>
        <profile>single_session_profile</profile>
    </Alice>
    <!-- User Bob can use 2 simultaneous sessions. -->
    <Bob>
        <profile>two_sessions_profile</profile>
    </Bob>
    <!-- User Charles can use arbitrarily many simultaneous sessions. -->
    <Charles>
        <profile>unlimited_sessions_profile</profile>
    </Charles>
</users>
```

Default value: 0 (unlimited number of simultaneous sessions).
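Note that elsewhere in this commit `max_sessions_for_user` is restricted to the profile source (see the `SETTINGS_SOURCE_RESTRICTIONS` change in `SettingsConstraints.cpp`), so changing it per query or per session is expected to be rejected; a sketch:

```sql
-- Expected to fail with a READONLY-style error such as
-- "Setting max_sessions_for_user is not allowed to be set by query".
SET max_sessions_for_user = 2;
```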
@@ -39,7 +39,7 @@ Example:
         <max_threads>8</max_threads>
     </default>

-    <!-- Settings for quries from the user interface -->
+    <!-- Settings for queries from the user interface -->
     <web>
         <max_rows_to_read>1000000000</max_rows_to_read>
         <max_bytes_to_read>100000000000</max_bytes_to_read>
@@ -67,6 +67,8 @@ Example:
         <max_ast_depth>50</max_ast_depth>
         <max_ast_elements>100</max_ast_elements>

+        <max_sessions_for_user>4</max_sessions_for_user>
+
         <readonly>1</readonly>
     </web>
</profiles>
@@ -10,6 +10,7 @@ Columns:
 - `event` ([String](../../sql-reference/data-types/string.md)) — Event name.
 - `value` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Number of events occurred.
 - `description` ([String](../../sql-reference/data-types/string.md)) — Event description.
+- `name` ([String](../../sql-reference/data-types/string.md)) — Alias for `event`.

 You can find all supported events in source file [src/Common/ProfileEvents.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/ProfileEvents.cpp).
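A small sketch of how the new `name` alias column can be used (assuming the server has already executed at least one query, so the `Query` event exists):

```sql
SELECT name, value, description
FROM system.events
WHERE name = 'Query';
```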
@@ -10,6 +10,7 @@ Columns:
 - `metric` ([String](../../sql-reference/data-types/string.md)) — Metric name.
 - `value` ([Int64](../../sql-reference/data-types/int-uint.md)) — Metric value.
 - `description` ([String](../../sql-reference/data-types/string.md)) — Metric description.
+- `name` ([String](../../sql-reference/data-types/string.md)) — Alias for `metric`.

 You can find all supported metrics in source file [src/Common/CurrentMetrics.cpp](https://github.com/ClickHouse/ClickHouse/blob/master/src/Common/CurrentMetrics.cpp).
@@ -314,3 +314,40 @@ FORMAT Null;
When inserting data, ClickHouse calculates the number of partitions in the inserted block. If the number of partitions is greater than `max_partitions_per_insert_block`, ClickHouse throws an exception with the following text:

> "Too many partitions for single INSERT block (more than " + toString(max_parts) + "). The limit is controlled by the 'max_partitions_per_insert_block' setting. A large number of partitions is a common misconception. It will lead to severe negative performance impact, including slow server startup, slow INSERT queries and slow SELECT queries. The recommended total number of partitions for a table is under 1000..10000. Please note that partitioning is not intended to speed up SELECT queries (the ORDER BY key is sufficient to make range queries fast). Partitions are intended for data manipulation (DROP PARTITION, etc)."
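An illustrative sketch of the behaviour described above; the table name and values are hypothetical, and the INSERT is expected to fail because it touches 10 partitions while the limit is set to 5:

```sql
CREATE TABLE partition_demo (d Date, x UInt8) ENGINE = MergeTree PARTITION BY d ORDER BY x;
SET max_partitions_per_insert_block = 5;
-- Throws "Too many partitions for single INSERT block (more than 5)".
INSERT INTO partition_demo SELECT toDate('2024-01-01') + number, 0 FROM numbers(10);
```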
## max_sessions_for_user {#max-sessions-per-user}

Maximum number of simultaneous sessions per authenticated user.

Example:

``` xml
<profiles>
    <single_session_profile>
        <max_sessions_for_user>1</max_sessions_for_user>
    </single_session_profile>
    <two_sessions_profile>
        <max_sessions_for_user>2</max_sessions_for_user>
    </two_sessions_profile>
    <unlimited_sessions_profile>
        <max_sessions_for_user>0</max_sessions_for_user>
    </unlimited_sessions_profile>
</profiles>
<users>
    <!-- User Alice can connect to the ClickHouse server no more than once at a time. -->
    <Alice>
        <profile>single_session_profile</profile>
    </Alice>
    <!-- User Bob can use 2 simultaneous sessions. -->
    <Bob>
        <profile>two_sessions_profile</profile>
    </Bob>
    <!-- User Charles can have any number of simultaneous sessions. -->
    <Charles>
        <profile>unlimited_sessions_profile</profile>
    </Charles>
</users>
```

Default value: 0 (unlimited number of simultaneous sessions).
@@ -39,7 +39,7 @@ SET profile = 'web'
         <max_threads>8</max_threads>
     </default>

-    <!-- Settings for quries from the user interface -->
+    <!-- Settings for queries from the user interface -->
     <web>
         <max_rows_to_read>1000000000</max_rows_to_read>
         <max_bytes_to_read>100000000000</max_bytes_to_read>
@@ -67,6 +67,7 @@ SET profile = 'web'
         <max_ast_depth>50</max_ast_depth>
         <max_ast_elements>100</max_ast_elements>

+        <max_sessions_for_user>4</max_sessions_for_user>
         <readonly>1</readonly>
     </web>
</profiles>
@@ -1691,17 +1691,26 @@ try
     global_context->initializeTraceCollector();

     /// Set up server-wide memory profiler (for total memory tracker).
-    UInt64 total_memory_profiler_step = config().getUInt64("total_memory_profiler_step", 0);
-    if (total_memory_profiler_step)
+    if (server_settings.total_memory_profiler_step)
     {
-        total_memory_tracker.setProfilerStep(total_memory_profiler_step);
+        total_memory_tracker.setProfilerStep(server_settings.total_memory_profiler_step);
     }

-    double total_memory_tracker_sample_probability = config().getDouble("total_memory_tracker_sample_probability", 0);
-    if (total_memory_tracker_sample_probability > 0.0)
+    if (server_settings.total_memory_tracker_sample_probability > 0.0)
     {
-        total_memory_tracker.setSampleProbability(total_memory_tracker_sample_probability);
+        total_memory_tracker.setSampleProbability(server_settings.total_memory_tracker_sample_probability);
     }

+    if (server_settings.total_memory_profiler_sample_min_allocation_size)
+    {
+        total_memory_tracker.setSampleMinAllocationSize(server_settings.total_memory_profiler_sample_min_allocation_size);
+    }
+
+    if (server_settings.total_memory_profiler_sample_max_allocation_size)
+    {
+        total_memory_tracker.setSampleMaxAllocationSize(server_settings.total_memory_profiler_sample_max_allocation_size);
+    }
+
 }
 #endif
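If the profiler step or sample probability above is non-zero, sampled allocations end up in `system.trace_log`; a sketch for inspecting them, assuming `trace_log` is enabled in the server configuration:

```sql
SELECT event_time, trace_type, size
FROM system.trace_log
WHERE trace_type = 'MemorySample'
ORDER BY event_time DESC
LIMIT 10;
```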
|
||||
|
||||
@ -2036,27 +2045,26 @@ void Server::createServers(
|
||||
|
||||
for (const auto & protocol : protocols)
|
||||
{
|
||||
if (!server_type.shouldStart(ServerType::Type::CUSTOM, protocol))
|
||||
std::string prefix = "protocols." + protocol + ".";
|
||||
std::string port_name = prefix + "port";
|
||||
std::string description {"<undefined> protocol"};
|
||||
if (config.has(prefix + "description"))
|
||||
description = config.getString(prefix + "description");
|
||||
|
||||
if (!config.has(prefix + "port"))
|
||||
continue;
|
||||
|
||||
if (!server_type.shouldStart(ServerType::Type::CUSTOM, port_name))
|
||||
continue;
|
||||
|
||||
std::vector<std::string> hosts;
|
||||
if (config.has("protocols." + protocol + ".host"))
|
||||
hosts.push_back(config.getString("protocols." + protocol + ".host"));
|
||||
if (config.has(prefix + "host"))
|
||||
hosts.push_back(config.getString(prefix + "host"));
|
||||
else
|
||||
hosts = listen_hosts;
|
||||
|
||||
for (const auto & host : hosts)
|
||||
{
|
||||
std::string conf_name = "protocols." + protocol;
|
||||
std::string prefix = conf_name + ".";
|
||||
|
||||
if (!config.has(prefix + "port"))
|
||||
continue;
|
||||
|
||||
std::string description {"<undefined> protocol"};
|
||||
if (config.has(prefix + "description"))
|
||||
description = config.getString(prefix + "description");
|
||||
std::string port_name = prefix + "port";
|
||||
bool is_secure = false;
|
||||
auto stack = buildProtocolStackFromConfig(config, protocol, http_params, async_metrics, is_secure);
|
||||
|
||||
|
@ -328,9 +328,6 @@ void ContextAccess::setRolesInfo(const std::shared_ptr<const EnabledRolesInfo> &
|
||||
|
||||
enabled_row_policies = access_control->getEnabledRowPolicies(*params.user_id, roles_info->enabled_roles);
|
||||
|
||||
enabled_quota = access_control->getEnabledQuota(
|
||||
*params.user_id, user_name, roles_info->enabled_roles, params.address, params.forwarded_address, params.quota_key);
|
||||
|
||||
enabled_settings = access_control->getEnabledSettings(
|
||||
*params.user_id, user->settings, roles_info->enabled_roles, roles_info->settings_from_enabled_roles);
|
||||
|
||||
@ -416,19 +413,32 @@ RowPolicyFilterPtr ContextAccess::getRowPolicyFilter(const String & database, co
|
||||
std::shared_ptr<const EnabledQuota> ContextAccess::getQuota() const
|
||||
{
|
||||
std::lock_guard lock{mutex};
|
||||
if (enabled_quota)
|
||||
return enabled_quota;
|
||||
static const auto unlimited_quota = EnabledQuota::getUnlimitedQuota();
|
||||
return unlimited_quota;
|
||||
|
||||
if (!enabled_quota)
|
||||
{
|
||||
if (roles_info)
|
||||
{
|
||||
enabled_quota = access_control->getEnabledQuota(*params.user_id,
|
||||
user_name,
|
||||
roles_info->enabled_roles,
|
||||
params.address,
|
||||
params.forwarded_address,
|
||||
params.quota_key);
|
||||
}
|
||||
else
|
||||
{
|
||||
static const auto unlimited_quota = EnabledQuota::getUnlimitedQuota();
|
||||
return unlimited_quota;
|
||||
}
|
||||
}
|
||||
|
||||
return enabled_quota;
|
||||
}
|
||||
|
||||
|
||||
std::optional<QuotaUsage> ContextAccess::getQuotaUsage() const
|
||||
{
|
||||
std::lock_guard lock{mutex};
|
||||
if (enabled_quota)
|
||||
return enabled_quota->getUsage();
|
||||
return {};
|
||||
return getQuota()->getUsage();
|
||||
}
|
||||
|
||||
|
||||
|
@ -1,4 +1,5 @@
|
||||
#include <string_view>
|
||||
#include <unordered_map>
|
||||
#include <Access/SettingsConstraints.h>
|
||||
#include <Access/resolveSetting.h>
|
||||
#include <Access/AccessControl.h>
|
||||
@ -6,6 +7,7 @@
|
||||
#include <Storages/MergeTree/MergeTreeSettings.h>
|
||||
#include <Common/FieldVisitorToString.h>
|
||||
#include <Common/FieldVisitorsAccurateComparison.h>
|
||||
#include <Common/SettingSource.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <Poco/Util/AbstractConfiguration.h>
|
||||
#include <boost/range/algorithm_ext/erase.hpp>
|
||||
@ -20,6 +22,39 @@ namespace ErrorCodes
|
||||
extern const int UNKNOWN_SETTING;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
struct SettingSourceRestrictions
|
||||
{
|
||||
constexpr SettingSourceRestrictions() { allowed_sources.set(); }
|
||||
|
||||
constexpr SettingSourceRestrictions(std::initializer_list<SettingSource> allowed_sources_)
|
||||
{
|
||||
for (auto allowed_source : allowed_sources_)
|
||||
setSourceAllowed(allowed_source, true);
|
||||
}
|
||||
|
||||
constexpr bool isSourceAllowed(SettingSource source) { return allowed_sources[source]; }
|
||||
constexpr void setSourceAllowed(SettingSource source, bool allowed) { allowed_sources[source] = allowed; }
|
||||
|
||||
std::bitset<SettingSource::COUNT> allowed_sources;
|
||||
};
|
||||
|
||||
const std::unordered_map<std::string_view, SettingSourceRestrictions> SETTINGS_SOURCE_RESTRICTIONS = {
|
||||
{"max_sessions_for_user", {SettingSource::PROFILE}},
|
||||
};
|
||||
|
||||
SettingSourceRestrictions getSettingSourceRestrictions(std::string_view name)
|
||||
{
|
||||
auto settingConstraintIter = SETTINGS_SOURCE_RESTRICTIONS.find(name);
|
||||
if (settingConstraintIter != SETTINGS_SOURCE_RESTRICTIONS.end())
|
||||
return settingConstraintIter->second;
|
||||
else
|
||||
return SettingSourceRestrictions(); // allows everything
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
SettingsConstraints::SettingsConstraints(const AccessControl & access_control_) : access_control(&access_control_)
|
||||
{
|
||||
}
|
||||
@ -98,7 +133,7 @@ void SettingsConstraints::merge(const SettingsConstraints & other)
|
||||
}
|
||||
|
||||
|
||||
void SettingsConstraints::check(const Settings & current_settings, const SettingsProfileElements & profile_elements) const
|
||||
void SettingsConstraints::check(const Settings & current_settings, const SettingsProfileElements & profile_elements, SettingSource source) const
|
||||
{
|
||||
for (const auto & element : profile_elements)
|
||||
{
|
||||
@ -108,19 +143,19 @@ void SettingsConstraints::check(const Settings & current_settings, const Setting
|
||||
if (element.value)
|
||||
{
|
||||
SettingChange value(element.setting_name, *element.value);
|
||||
check(current_settings, value);
|
||||
check(current_settings, value, source);
|
||||
}
|
||||
|
||||
if (element.min_value)
|
||||
{
|
||||
SettingChange value(element.setting_name, *element.min_value);
|
||||
check(current_settings, value);
|
||||
check(current_settings, value, source);
|
||||
}
|
||||
|
||||
if (element.max_value)
|
||||
{
|
||||
SettingChange value(element.setting_name, *element.max_value);
|
||||
check(current_settings, value);
|
||||
check(current_settings, value, source);
|
||||
}
|
||||
|
||||
SettingConstraintWritability new_value = SettingConstraintWritability::WRITABLE;
|
||||
@ -142,24 +177,24 @@ void SettingsConstraints::check(const Settings & current_settings, const Setting
|
||||
}
|
||||
}
|
||||
|
||||
void SettingsConstraints::check(const Settings & current_settings, const SettingChange & change) const
|
||||
void SettingsConstraints::check(const Settings & current_settings, const SettingChange & change, SettingSource source) const
|
||||
{
|
||||
checkImpl(current_settings, const_cast<SettingChange &>(change), THROW_ON_VIOLATION);
|
||||
checkImpl(current_settings, const_cast<SettingChange &>(change), THROW_ON_VIOLATION, source);
|
||||
}
|
||||
|
||||
void SettingsConstraints::check(const Settings & current_settings, const SettingsChanges & changes) const
|
||||
void SettingsConstraints::check(const Settings & current_settings, const SettingsChanges & changes, SettingSource source) const
|
||||
{
|
||||
for (const auto & change : changes)
|
||||
check(current_settings, change);
|
||||
check(current_settings, change, source);
|
||||
}
|
||||
|
||||
void SettingsConstraints::check(const Settings & current_settings, SettingsChanges & changes) const
|
||||
void SettingsConstraints::check(const Settings & current_settings, SettingsChanges & changes, SettingSource source) const
|
||||
{
|
||||
boost::range::remove_erase_if(
|
||||
changes,
|
||||
[&](SettingChange & change) -> bool
|
||||
{
|
||||
return !checkImpl(current_settings, const_cast<SettingChange &>(change), THROW_ON_VIOLATION);
|
||||
return !checkImpl(current_settings, const_cast<SettingChange &>(change), THROW_ON_VIOLATION, source);
|
||||
});
|
||||
}
|
||||
|
||||
@ -174,13 +209,13 @@ void SettingsConstraints::check(const MergeTreeSettings & current_settings, cons
|
||||
check(current_settings, change);
|
||||
}
|
||||
|
||||
void SettingsConstraints::clamp(const Settings & current_settings, SettingsChanges & changes) const
|
||||
void SettingsConstraints::clamp(const Settings & current_settings, SettingsChanges & changes, SettingSource source) const
|
||||
{
|
||||
boost::range::remove_erase_if(
|
||||
changes,
|
||||
[&](SettingChange & change) -> bool
|
||||
{
|
||||
return !checkImpl(current_settings, change, CLAMP_ON_VIOLATION);
|
||||
return !checkImpl(current_settings, change, CLAMP_ON_VIOLATION, source);
|
||||
});
|
||||
}
|
||||
|
||||
@ -215,7 +250,10 @@ bool getNewValueToCheck(const T & current_settings, SettingChange & change, Fiel
|
||||
return true;
|
||||
}
|
||||
|
||||
bool SettingsConstraints::checkImpl(const Settings & current_settings, SettingChange & change, ReactionOnViolation reaction) const
|
||||
bool SettingsConstraints::checkImpl(const Settings & current_settings,
|
||||
SettingChange & change,
|
||||
ReactionOnViolation reaction,
|
||||
SettingSource source) const
|
||||
{
|
||||
std::string_view setting_name = Settings::Traits::resolveName(change.name);
|
||||
|
||||
@ -247,7 +285,7 @@ bool SettingsConstraints::checkImpl(const Settings & current_settings, SettingCh
|
||||
if (!getNewValueToCheck(current_settings, change, new_value, reaction == THROW_ON_VIOLATION))
|
||||
return false;
|
||||
|
||||
return getChecker(current_settings, setting_name).check(change, new_value, reaction);
|
||||
return getChecker(current_settings, setting_name).check(change, new_value, reaction, source);
|
||||
}
|
||||
|
||||
bool SettingsConstraints::checkImpl(const MergeTreeSettings & current_settings, SettingChange & change, ReactionOnViolation reaction) const
|
||||
@ -255,10 +293,13 @@ bool SettingsConstraints::checkImpl(const MergeTreeSettings & current_settings,
|
||||
Field new_value;
|
||||
if (!getNewValueToCheck(current_settings, change, new_value, reaction == THROW_ON_VIOLATION))
|
||||
return false;
|
||||
return getMergeTreeChecker(change.name).check(change, new_value, reaction);
|
||||
return getMergeTreeChecker(change.name).check(change, new_value, reaction, SettingSource::QUERY);
|
||||
}
|
||||
|
||||
bool SettingsConstraints::Checker::check(SettingChange & change, const Field & new_value, ReactionOnViolation reaction) const
|
||||
bool SettingsConstraints::Checker::check(SettingChange & change,
|
||||
const Field & new_value,
|
||||
ReactionOnViolation reaction,
|
||||
SettingSource source) const
|
||||
{
|
||||
if (!explain.empty())
|
||||
{
|
||||
@ -326,6 +367,14 @@ bool SettingsConstraints::Checker::check(SettingChange & change, const Field & n
|
||||
change.value = max_value;
|
||||
}
|
||||
|
||||
if (!getSettingSourceRestrictions(setting_name).isSourceAllowed(source))
|
||||
{
|
||||
if (reaction == THROW_ON_VIOLATION)
|
||||
throw Exception(ErrorCodes::READONLY, "Setting {} is not allowed to be set by {}", setting_name, toString(source));
|
||||
else
|
||||
return false;
|
||||
}
|
||||
|
||||
return true;
|
||||
}
|
||||
|
||||
|
@ -2,6 +2,7 @@
|
||||
|
||||
#include <Access/SettingsProfileElement.h>
|
||||
#include <Common/SettingsChanges.h>
|
||||
#include <Common/SettingSource.h>
|
||||
#include <unordered_map>
|
||||
|
||||
namespace Poco::Util
|
||||
@ -73,17 +74,18 @@ public:
|
||||
void merge(const SettingsConstraints & other);
|
||||
|
||||
/// Checks whether `change` violates these constraints and throws an exception if so.
|
||||
void check(const Settings & current_settings, const SettingsProfileElements & profile_elements) const;
|
||||
void check(const Settings & current_settings, const SettingChange & change) const;
|
||||
void check(const Settings & current_settings, const SettingsChanges & changes) const;
|
||||
void check(const Settings & current_settings, SettingsChanges & changes) const;
|
||||
void check(const Settings & current_settings, const SettingsProfileElements & profile_elements, SettingSource source) const;
|
||||
void check(const Settings & current_settings, const SettingChange & change, SettingSource source) const;
|
||||
void check(const Settings & current_settings, const SettingsChanges & changes, SettingSource source) const;
|
||||
void check(const Settings & current_settings, SettingsChanges & changes, SettingSource source) const;
|
||||
|
||||
/// Checks whether `change` violates these constraints and throws an exception if so. (setting short name is expected inside `changes`)
|
||||
void check(const MergeTreeSettings & current_settings, const SettingChange & change) const;
|
||||
void check(const MergeTreeSettings & current_settings, const SettingsChanges & changes) const;
|
||||
|
||||
/// Checks whether `change` violates these and clamps the `change` if so.
|
||||
void clamp(const Settings & current_settings, SettingsChanges & changes) const;
|
||||
void clamp(const Settings & current_settings, SettingsChanges & changes, SettingSource source) const;
|
||||
|
||||
|
||||
friend bool operator ==(const SettingsConstraints & left, const SettingsConstraints & right);
|
||||
friend bool operator !=(const SettingsConstraints & left, const SettingsConstraints & right) { return !(left == right); }
|
||||
@ -133,7 +135,10 @@ private:
|
||||
{}
|
||||
|
||||
// Perform checking
|
||||
bool check(SettingChange & change, const Field & new_value, ReactionOnViolation reaction) const;
|
||||
bool check(SettingChange & change,
|
||||
const Field & new_value,
|
||||
ReactionOnViolation reaction,
|
||||
SettingSource source) const;
|
||||
};
|
||||
|
||||
struct StringHash
|
||||
@ -145,7 +150,11 @@ private:
|
||||
}
|
||||
};
|
||||
|
||||
bool checkImpl(const Settings & current_settings, SettingChange & change, ReactionOnViolation reaction) const;
|
||||
bool checkImpl(const Settings & current_settings,
|
||||
SettingChange & change,
|
||||
ReactionOnViolation reaction,
|
||||
SettingSource source) const;
|
||||
|
||||
bool checkImpl(const MergeTreeSettings & current_settings, SettingChange & change, ReactionOnViolation reaction) const;
|
||||
|
||||
Checker getChecker(const Settings & current_settings, std::string_view setting_name) const;
|
||||
|
@ -6494,55 +6494,69 @@ void QueryAnalyzer::resolveArrayJoin(QueryTreeNodePtr & array_join_node, Identif
|
||||
|
||||
resolveExpressionNode(array_join_expression, scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/);
|
||||
|
||||
auto result_type = array_join_expression->getResultType();
|
||||
bool is_array_type = isArray(result_type);
|
||||
bool is_map_type = isMap(result_type);
|
||||
|
||||
if (!is_array_type && !is_map_type)
|
||||
throw Exception(ErrorCodes::TYPE_MISMATCH,
|
||||
"ARRAY JOIN {} requires expression {} with Array or Map type. Actual {}. In scope {}",
|
||||
array_join_node_typed.formatASTForErrorMessage(),
|
||||
array_join_expression->formatASTForErrorMessage(),
|
||||
result_type->getName(),
|
||||
scope.scope_node->formatASTForErrorMessage());
|
||||
|
||||
if (is_map_type)
|
||||
result_type = assert_cast<const DataTypeMap &>(*result_type).getNestedType();
|
||||
|
||||
result_type = assert_cast<const DataTypeArray &>(*result_type).getNestedType();
|
||||
|
||||
String array_join_column_name;
|
||||
|
||||
if (!array_join_expression_alias.empty())
|
||||
auto process_array_join_expression = [&](QueryTreeNodePtr & expression)
|
||||
{
|
||||
array_join_column_name = array_join_expression_alias;
|
||||
}
|
||||
else if (auto * array_join_expression_inner_column = array_join_expression->as<ColumnNode>())
|
||||
auto result_type = expression->getResultType();
|
||||
bool is_array_type = isArray(result_type);
|
||||
bool is_map_type = isMap(result_type);
|
||||
|
||||
if (!is_array_type && !is_map_type)
|
||||
throw Exception(ErrorCodes::TYPE_MISMATCH,
|
||||
"ARRAY JOIN {} requires expression {} with Array or Map type. Actual {}. In scope {}",
|
||||
array_join_node_typed.formatASTForErrorMessage(),
|
||||
expression->formatASTForErrorMessage(),
|
||||
result_type->getName(),
|
||||
scope.scope_node->formatASTForErrorMessage());
|
||||
|
||||
if (is_map_type)
|
||||
result_type = assert_cast<const DataTypeMap &>(*result_type).getNestedType();
|
||||
|
||||
result_type = assert_cast<const DataTypeArray &>(*result_type).getNestedType();
|
||||
|
||||
String array_join_column_name;
|
||||
|
||||
if (!array_join_expression_alias.empty())
|
||||
{
|
||||
array_join_column_name = array_join_expression_alias;
|
||||
}
|
||||
else if (auto * array_join_expression_inner_column = array_join_expression->as<ColumnNode>())
|
||||
{
|
||||
array_join_column_name = array_join_expression_inner_column->getColumnName();
|
||||
}
|
||||
else if (!identifier_full_name.empty())
|
||||
{
|
||||
array_join_column_name = identifier_full_name;
|
||||
}
|
||||
else
|
||||
{
|
||||
array_join_column_name = "__array_join_expression_" + std::to_string(array_join_expressions_counter);
|
||||
++array_join_expressions_counter;
|
||||
}
|
||||
|
||||
if (array_join_column_names.contains(array_join_column_name))
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS,
|
||||
"ARRAY JOIN {} multiple columns with name {}. In scope {}",
|
||||
array_join_node_typed.formatASTForErrorMessage(),
|
||||
array_join_column_name,
|
||||
scope.scope_node->formatASTForErrorMessage());
|
||||
array_join_column_names.emplace(array_join_column_name);
|
||||
|
||||
NameAndTypePair array_join_column(array_join_column_name, result_type);
|
||||
auto array_join_column_node = std::make_shared<ColumnNode>(std::move(array_join_column), expression, array_join_node);
|
||||
array_join_column_node->setAlias(array_join_expression_alias);
|
||||
array_join_column_expressions.push_back(std::move(array_join_column_node));
|
||||
};
|
||||
|
||||
// Support ARRAY JOIN COLUMNS(...). COLUMNS transformer is resolved to list of columns.
|
||||
if (auto * columns_list = array_join_expression->as<ListNode>())
|
||||
{
|
||||
array_join_column_name = array_join_expression_inner_column->getColumnName();
|
||||
}
|
||||
else if (!identifier_full_name.empty())
|
||||
{
|
||||
array_join_column_name = identifier_full_name;
|
||||
for (auto & array_join_subexpression : columns_list->getNodes())
|
||||
process_array_join_expression(array_join_subexpression);
|
||||
}
|
||||
else
|
||||
{
|
||||
array_join_column_name = "__array_join_expression_" + std::to_string(array_join_expressions_counter);
|
||||
++array_join_expressions_counter;
|
||||
process_array_join_expression(array_join_expression);
|
||||
}
|
||||
|
||||
if (array_join_column_names.contains(array_join_column_name))
|
||||
throw Exception(ErrorCodes::BAD_ARGUMENTS,
|
||||
"ARRAY JOIN {} multiple columns with name {}. In scope {}",
|
||||
array_join_node_typed.formatASTForErrorMessage(),
|
||||
array_join_column_name,
|
||||
scope.scope_node->formatASTForErrorMessage());
|
||||
array_join_column_names.emplace(array_join_column_name);
|
||||
|
||||
NameAndTypePair array_join_column(array_join_column_name, result_type);
|
||||
auto array_join_column_node = std::make_shared<ColumnNode>(std::move(array_join_column), array_join_expression, array_join_node);
|
||||
array_join_column_node->setAlias(array_join_expression_alias);
|
||||
array_join_column_expressions.push_back(std::move(array_join_column_node));
|
||||
}
|
||||
|
||||
/** Allow to resolve ARRAY JOIN columns from aliases with types after ARRAY JOIN only after ARRAY JOIN expression list is resolved, because
|
||||
@ -6554,11 +6568,9 @@ void QueryAnalyzer::resolveArrayJoin(QueryTreeNodePtr & array_join_node, Identif
|
||||
* And it is expected that `value_element` inside projection expression list will be resolved as `value_element` expression
|
||||
* with type after ARRAY JOIN.
|
||||
*/
|
||||
for (size_t i = 0; i < array_join_nodes_size; ++i)
|
||||
array_join_nodes = std::move(array_join_column_expressions);
|
||||
for (auto & array_join_column_expression : array_join_nodes)
|
||||
{
|
||||
auto & array_join_column_expression = array_join_nodes[i];
|
||||
array_join_column_expression = std::move(array_join_column_expressions[i]);
|
||||
|
||||
auto it = scope.alias_name_to_expression_node.find(array_join_column_expression->getAlias());
|
||||
if (it != scope.alias_name_to_expression_node.end())
|
||||
{
|
||||
|
@ -30,6 +30,7 @@ public:
|
||||
String compression_method;
|
||||
int compression_level = -1;
|
||||
String password;
|
||||
String s3_storage_class;
|
||||
ContextPtr context;
|
||||
bool is_internal_backup = false;
|
||||
std::shared_ptr<IBackupCoordination> backup_coordination;
|
||||
|
@ -178,7 +178,7 @@ void BackupReaderS3::copyFileToDisk(const String & path_in_backup, size_t file_s
|
||||
|
||||
|
||||
BackupWriterS3::BackupWriterS3(
|
||||
const S3::URI & s3_uri_, const String & access_key_id_, const String & secret_access_key_, bool allow_s3_native_copy, const ContextPtr & context_)
|
||||
const S3::URI & s3_uri_, const String & access_key_id_, const String & secret_access_key_, bool allow_s3_native_copy, const String & storage_class_name, const ContextPtr & context_)
|
||||
: BackupWriterDefault(&Poco::Logger::get("BackupWriterS3"), context_)
|
||||
, s3_uri(s3_uri_)
|
||||
, client(makeS3Client(s3_uri_, access_key_id_, secret_access_key_, context_))
|
||||
@ -188,6 +188,7 @@ BackupWriterS3::BackupWriterS3(
|
||||
request_settings.updateFromSettings(context_->getSettingsRef());
|
||||
request_settings.max_single_read_retries = context_->getSettingsRef().s3_max_single_read_retries; // FIXME: Avoid taking value for endpoint
|
||||
request_settings.allow_native_copy = allow_s3_native_copy;
|
||||
request_settings.setStorageClassName(storage_class_name);
|
||||
}
|
||||
|
||||
void BackupWriterS3::copyFileFromDisk(const String & path_in_backup, DiskPtr src_disk, const String & src_path,
|
||||
|
@ -38,7 +38,7 @@ private:
|
||||
class BackupWriterS3 : public BackupWriterDefault
|
||||
{
|
||||
public:
|
||||
BackupWriterS3(const S3::URI & s3_uri_, const String & access_key_id_, const String & secret_access_key_, bool allow_s3_native_copy, const ContextPtr & context_);
|
||||
BackupWriterS3(const S3::URI & s3_uri_, const String & access_key_id_, const String & secret_access_key_, bool allow_s3_native_copy, const String & storage_class_name, const ContextPtr & context_);
|
||||
~BackupWriterS3() override;
|
||||
|
||||
bool fileExists(const String & file_name) override;
|
||||
|
@ -21,6 +21,7 @@ namespace ErrorCodes
|
||||
M(String, id) \
|
||||
M(String, compression_method) \
|
||||
M(String, password) \
|
||||
M(String, s3_storage_class) \
|
||||
M(Bool, structure_only) \
|
||||
M(Bool, async) \
|
||||
M(Bool, decrypt_files_from_encrypted_disks) \
|
||||
|
@ -25,6 +25,9 @@ struct BackupSettings
|
||||
/// Password used to encrypt the backup.
|
||||
String password;
|
||||
|
||||
/// S3 storage class.
|
||||
String s3_storage_class = "";
|
||||
|
||||
/// If this is set to true then only create queries will be written to backup,
|
||||
/// without the data of tables.
|
||||
bool structure_only = false;
|
||||
|
@ -344,6 +344,7 @@ void BackupsWorker::doBackup(
|
||||
backup_create_params.compression_method = backup_settings.compression_method;
|
||||
backup_create_params.compression_level = backup_settings.compression_level;
|
||||
backup_create_params.password = backup_settings.password;
|
||||
backup_create_params.s3_storage_class = backup_settings.s3_storage_class;
|
||||
backup_create_params.is_internal_backup = backup_settings.internal;
|
||||
backup_create_params.backup_coordination = backup_coordination;
|
||||
backup_create_params.backup_uuid = backup_settings.backup_uuid;
|
||||
|
@ -112,7 +112,7 @@ void registerBackupEngineS3(BackupFactory & factory)
|
||||
}
|
||||
else
|
||||
{
|
||||
auto writer = std::make_shared<BackupWriterS3>(S3::URI{s3_uri}, access_key_id, secret_access_key, params.allow_s3_native_copy, params.context);
|
||||
auto writer = std::make_shared<BackupWriterS3>(S3::URI{s3_uri}, access_key_id, secret_access_key, params.allow_s3_native_copy, params.s3_storage_class, params.context);
|
||||
return std::make_unique<BackupImpl>(
|
||||
backup_name_for_logging,
|
||||
archive_params,
|
||||
|
@ -124,6 +124,9 @@ void Suggest::load(ContextPtr context, const ConnectionParameters & connection_p
|
||||
if (e.code() == ErrorCodes::DEADLOCK_AVOIDED)
|
||||
continue;
|
||||
|
||||
/// Client can successfully connect to the server and
|
||||
/// get ErrorCodes::USER_SESSION_LIMIT_EXCEEDED for suggestion connection.
|
||||
|
||||
/// We should not use std::cerr here, because this method works concurrently with the main thread.
|
||||
/// WriteBufferFromFileDescriptor will write directly to the file descriptor, avoiding data race on std::cerr.
|
||||
|
||||
|
@ -582,6 +582,7 @@
|
||||
M(697, CANNOT_RESTORE_TO_NONENCRYPTED_DISK) \
|
||||
M(698, INVALID_REDIS_STORAGE_TYPE) \
|
||||
M(699, INVALID_REDIS_TABLE_STRUCTURE) \
|
||||
M(700, USER_SESSION_LIMIT_EXCEEDED) \
|
||||
\
|
||||
M(999, KEEPER_EXCEPTION) \
|
||||
M(1000, POCO_EXCEPTION) \
|
||||
|
@ -229,7 +229,7 @@ void MemoryTracker::allocImpl(Int64 size, bool throw_if_memory_exceeded, MemoryT
|
||||
}
|
||||
|
||||
std::bernoulli_distribution sample(sample_probability);
|
||||
if (unlikely(sample_probability > 0.0 && sample(thread_local_rng)))
|
||||
if (unlikely(sample_probability > 0.0 && isSizeOkForSampling(size) && sample(thread_local_rng)))
|
||||
{
|
||||
MemoryTrackerBlockerInThread untrack_lock(VariableContext::Global);
|
||||
DB::TraceSender::send(DB::TraceType::MemorySample, StackTrace(), {.size = size});
|
||||
@ -413,7 +413,7 @@ void MemoryTracker::free(Int64 size)
|
||||
}
|
||||
|
||||
std::bernoulli_distribution sample(sample_probability);
|
||||
if (unlikely(sample_probability > 0.0 && sample(thread_local_rng)))
|
||||
if (unlikely(sample_probability > 0.0 && isSizeOkForSampling(size) && sample(thread_local_rng)))
|
||||
{
|
||||
MemoryTrackerBlockerInThread untrack_lock(VariableContext::Global);
|
||||
DB::TraceSender::send(DB::TraceType::MemorySample, StackTrace(), {.size = -size});
|
||||
@ -534,6 +534,12 @@ void MemoryTracker::setOrRaiseProfilerLimit(Int64 value)
|
||||
;
|
||||
}
|
||||
|
||||
bool MemoryTracker::isSizeOkForSampling(UInt64 size) const
|
||||
{
|
||||
/// We can avoid comparing min_allocation_size_bytes with zero, because we cannot have a 0-byte allocation/deallocation
|
||||
return ((max_allocation_size_bytes == 0 || size <= max_allocation_size_bytes) && size >= min_allocation_size_bytes);
|
||||
}
|
||||
|
||||
bool canEnqueueBackgroundTask()
|
||||
{
|
||||
auto limit = background_memory_tracker.getSoftLimit();
|
||||
|
@ -67,6 +67,12 @@ private:
|
||||
/// To randomly sample allocations and deallocations in trace_log.
|
||||
double sample_probability = 0;
|
||||
|
||||
/// Randomly sample allocations only larger or equal to this size
|
||||
UInt64 min_allocation_size_bytes = 0;
|
||||
|
||||
/// Randomly sample allocations only smaller or equal to this size
|
||||
UInt64 max_allocation_size_bytes = 0;
|
||||
|
||||
/// Singly-linked list. All information will be passed to subsequent memory trackers also (it allows to implement trackers hierarchy).
|
||||
/// In terms of tree nodes it is the list of parents. Lifetime of these trackers should "include" lifetime of current tracker.
|
||||
std::atomic<MemoryTracker *> parent {};
|
||||
@ -88,6 +94,8 @@ private:
|
||||
|
||||
void setOrRaiseProfilerLimit(Int64 value);
|
||||
|
||||
bool isSizeOkForSampling(UInt64 size) const;
|
||||
|
||||
/// allocImpl(...) and free(...) should not be used directly
|
||||
friend struct CurrentMemoryTracker;
|
||||
void allocImpl(Int64 size, bool throw_if_memory_exceeded, MemoryTracker * query_tracker = nullptr);
|
||||
@ -166,6 +174,16 @@ public:
|
||||
sample_probability = value;
|
||||
}
|
||||
|
||||
void setSampleMinAllocationSize(UInt64 value)
|
||||
{
|
||||
min_allocation_size_bytes = value;
|
||||
}
|
||||
|
||||
void setSampleMaxAllocationSize(UInt64 value)
|
||||
{
|
||||
max_allocation_size_bytes = value;
|
||||
}
|
||||
|
||||
void setProfilerStep(Int64 value)
|
||||
{
|
||||
profiler_step = value;
|
||||
|
43
src/Common/SettingSource.h
Normal file
43
src/Common/SettingSource.h
Normal file
@ -0,0 +1,43 @@
|
||||
#pragma once
|
||||
|
||||
#include <string_view>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
enum SettingSource
|
||||
{
|
||||
/// Query or session change:
|
||||
/// SET <setting> = <value>
|
||||
/// SELECT ... SETTINGS [<setting> = <value>]
|
||||
QUERY,
|
||||
|
||||
/// Profile creation or altering:
|
||||
/// CREATE SETTINGS PROFILE ... SETTINGS [<setting> = <value>]
|
||||
/// ALTER SETTINGS PROFILE ... SETTINGS [<setting> = <value>]
|
||||
PROFILE,
|
||||
|
||||
/// Role creation or altering:
|
||||
/// CREATE ROLE ... SETTINGS [<setting> = <value>]
|
||||
/// ALTER ROLE ... SETTINGS [<setting> = <value>]
|
||||
ROLE,
|
||||
|
||||
/// User creation or altering:
|
||||
/// CREATE USER ... SETTINGS [<setting> = <value>]
|
||||
/// ALTER USER ... SETTINGS [<setting> = <value>]
|
||||
USER,
|
||||
|
||||
COUNT,
|
||||
};
|
||||
|
||||
constexpr std::string_view toString(SettingSource source)
|
||||
{
|
||||
switch (source)
|
||||
{
|
||||
case SettingSource::QUERY: return "query";
|
||||
case SettingSource::PROFILE: return "profile";
|
||||
case SettingSource::USER: return "user";
|
||||
case SettingSource::ROLE: return "role";
|
||||
default: return "unknown";
|
||||
}
|
||||
}
|
||||
}
|
@ -136,6 +136,8 @@ using ResponseCallback = std::function<void(const Response &)>;
|
||||
struct Response
|
||||
{
|
||||
Error error = Error::ZOK;
|
||||
int64_t zxid = 0;
|
||||
|
||||
Response() = default;
|
||||
Response(const Response &) = default;
|
||||
Response & operator=(const Response &) = default;
|
||||
@ -490,8 +492,6 @@ public:
|
||||
/// Useful to check owner of ephemeral node.
|
||||
virtual int64_t getSessionID() const = 0;
|
||||
|
||||
virtual Poco::Net::SocketAddress getConnectedAddress() const = 0;
|
||||
|
||||
/// If the method will throw an exception, callbacks won't be called.
|
||||
///
|
||||
/// After the method is executed successfully, you must wait for callbacks
|
||||
@ -564,6 +564,10 @@ public:
|
||||
|
||||
virtual const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return nullptr; }
|
||||
|
||||
/// A ZooKeeper session can have an optional deadline set on it.
|
||||
/// After it has been reached, the session needs to be finalized.
|
||||
virtual bool hasReachedDeadline() const = 0;
|
||||
|
||||
/// Expire session and finish all pending requests
|
||||
virtual void finalize(const String & reason) = 0;
|
||||
};
|
||||
|
@ -195,6 +195,7 @@ struct TestKeeperMultiRequest final : MultiRequest, TestKeeperRequest
|
||||
std::pair<ResponsePtr, Undo> TestKeeperCreateRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
CreateResponse response;
|
||||
response.zxid = zxid;
|
||||
Undo undo;
|
||||
|
||||
if (container.contains(path))
|
||||
@ -257,9 +258,10 @@ std::pair<ResponsePtr, Undo> TestKeeperCreateRequest::process(TestKeeper::Contai
|
||||
return { std::make_shared<CreateResponse>(response), undo };
|
||||
}
|
||||
|
||||
std::pair<ResponsePtr, Undo> TestKeeperRemoveRequest::process(TestKeeper::Container & container, int64_t) const
|
||||
std::pair<ResponsePtr, Undo> TestKeeperRemoveRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
RemoveResponse response;
|
||||
response.zxid = zxid;
|
||||
Undo undo;
|
||||
|
||||
auto it = container.find(path);
|
||||
@ -296,9 +298,10 @@ std::pair<ResponsePtr, Undo> TestKeeperRemoveRequest::process(TestKeeper::Contai
|
||||
return { std::make_shared<RemoveResponse>(response), undo };
|
||||
}
|
||||
|
||||
std::pair<ResponsePtr, Undo> TestKeeperExistsRequest::process(TestKeeper::Container & container, int64_t) const
|
||||
std::pair<ResponsePtr, Undo> TestKeeperExistsRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
ExistsResponse response;
|
||||
response.zxid = zxid;
|
||||
|
||||
auto it = container.find(path);
|
||||
if (it != container.end())
|
||||
@ -314,9 +317,10 @@ std::pair<ResponsePtr, Undo> TestKeeperExistsRequest::process(TestKeeper::Contai
|
||||
return { std::make_shared<ExistsResponse>(response), {} };
|
||||
}
|
||||
|
||||
std::pair<ResponsePtr, Undo> TestKeeperGetRequest::process(TestKeeper::Container & container, int64_t) const
|
||||
std::pair<ResponsePtr, Undo> TestKeeperGetRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
GetResponse response;
|
||||
response.zxid = zxid;
|
||||
|
||||
auto it = container.find(path);
|
||||
if (it == container.end())
|
||||
@ -336,6 +340,7 @@ std::pair<ResponsePtr, Undo> TestKeeperGetRequest::process(TestKeeper::Container
|
||||
std::pair<ResponsePtr, Undo> TestKeeperSetRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
SetResponse response;
|
||||
response.zxid = zxid;
|
||||
Undo undo;
|
||||
|
||||
auto it = container.find(path);
|
||||
@ -370,9 +375,10 @@ std::pair<ResponsePtr, Undo> TestKeeperSetRequest::process(TestKeeper::Container
|
||||
return { std::make_shared<SetResponse>(response), undo };
|
||||
}
|
||||
|
||||
std::pair<ResponsePtr, Undo> TestKeeperListRequest::process(TestKeeper::Container & container, int64_t) const
|
||||
std::pair<ResponsePtr, Undo> TestKeeperListRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
ListResponse response;
|
||||
response.zxid = zxid;
|
||||
|
||||
auto it = container.find(path);
|
||||
if (it == container.end())
|
||||
@ -414,9 +420,10 @@ std::pair<ResponsePtr, Undo> TestKeeperListRequest::process(TestKeeper::Containe
|
||||
return { std::make_shared<ListResponse>(response), {} };
|
||||
}
|
||||
|
||||
std::pair<ResponsePtr, Undo> TestKeeperCheckRequest::process(TestKeeper::Container & container, int64_t) const
|
||||
std::pair<ResponsePtr, Undo> TestKeeperCheckRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
CheckResponse response;
|
||||
response.zxid = zxid;
|
||||
auto it = container.find(path);
|
||||
if (it == container.end())
|
||||
{
|
||||
@ -434,10 +441,11 @@ std::pair<ResponsePtr, Undo> TestKeeperCheckRequest::process(TestKeeper::Contain
|
||||
return { std::make_shared<CheckResponse>(response), {} };
|
||||
}
|
||||
|
||||
std::pair<ResponsePtr, Undo> TestKeeperSyncRequest::process(TestKeeper::Container & /*container*/, int64_t) const
|
||||
std::pair<ResponsePtr, Undo> TestKeeperSyncRequest::process(TestKeeper::Container & /*container*/, int64_t zxid) const
|
||||
{
|
||||
SyncResponse response;
|
||||
response.path = path;
|
||||
response.zxid = zxid;
|
||||
|
||||
return { std::make_shared<SyncResponse>(std::move(response)), {} };
|
||||
}
|
||||
@ -456,6 +464,7 @@ std::pair<ResponsePtr, Undo> TestKeeperReconfigRequest::process(TestKeeper::Cont
|
||||
std::pair<ResponsePtr, Undo> TestKeeperMultiRequest::process(TestKeeper::Container & container, int64_t zxid) const
|
||||
{
|
||||
MultiResponse response;
|
||||
response.zxid = zxid;
|
||||
response.responses.reserve(requests.size());
|
||||
std::vector<Undo> undo_actions;
|
||||
|
||||
|
@ -39,8 +39,8 @@ public:
|
||||
~TestKeeper() override;
|
||||
|
||||
bool isExpired() const override { return expired; }
|
||||
bool hasReachedDeadline() const override { return false; }
|
||||
int64_t getSessionID() const override { return 0; }
|
||||
Poco::Net::SocketAddress getConnectedAddress() const override { return connected_zk_address; }
|
||||
|
||||
|
||||
void create(
|
||||
@ -135,8 +135,6 @@ private:
|
||||
|
||||
zkutil::ZooKeeperArgs args;
|
||||
|
||||
Poco::Net::SocketAddress connected_zk_address;
|
||||
|
||||
std::mutex push_request_mutex;
|
||||
std::atomic<bool> expired{false};
|
||||
|
||||
|
@ -112,31 +112,17 @@ void ZooKeeper::init(ZooKeeperArgs args_)
|
||||
throw KeeperException("Cannot use any of provided ZooKeeper nodes", Coordination::Error::ZCONNECTIONLOSS);
|
||||
}
|
||||
|
||||
impl = std::make_unique<Coordination::ZooKeeper>(nodes, args, zk_log);
|
||||
impl = std::make_unique<Coordination::ZooKeeper>(nodes, args, zk_log, [this](size_t node_idx, const Coordination::ZooKeeper::Node & node)
|
||||
{
|
||||
connected_zk_host = node.address.host().toString();
|
||||
connected_zk_port = node.address.port();
|
||||
connected_zk_index = node_idx;
|
||||
});
|
||||
|
||||
if (args.chroot.empty())
|
||||
LOG_TRACE(log, "Initialized, hosts: {}", fmt::join(args.hosts, ","));
|
||||
else
|
||||
LOG_TRACE(log, "Initialized, hosts: {}, chroot: {}", fmt::join(args.hosts, ","), args.chroot);
|
||||
|
||||
Poco::Net::SocketAddress address = impl->getConnectedAddress();
|
||||
|
||||
connected_zk_host = address.host().toString();
|
||||
connected_zk_port = address.port();
|
||||
|
||||
connected_zk_index = 0;
|
||||
|
||||
if (args.hosts.size() > 1)
|
||||
{
|
||||
for (size_t i = 0; i < args.hosts.size(); i++)
|
||||
{
|
||||
if (args.hosts[i] == address.toString())
|
||||
{
|
||||
connected_zk_index = i;
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
else if (args.implementation == "testkeeper")
|
||||
{
|
||||
|
@ -521,6 +521,7 @@ public:
|
||||
void setZooKeeperLog(std::shared_ptr<DB::ZooKeeperLog> zk_log_);
|
||||
|
||||
UInt32 getSessionUptime() const { return static_cast<UInt32>(session_uptime.elapsedSeconds()); }
|
||||
bool hasReachedDeadline() const { return impl->hasReachedDeadline(); }
|
||||
|
||||
void setServerCompletelyStarted();
|
||||
|
||||
|
@ -204,6 +204,14 @@ void ZooKeeperArgs::initFromKeeperSection(const Poco::Util::AbstractConfiguratio
|
||||
throw DB::Exception(DB::ErrorCodes::BAD_ARGUMENTS, "Unknown load balancing: {}", load_balancing_str);
|
||||
get_priority_load_balancing.load_balancing = *load_balancing;
|
||||
}
|
||||
else if (key == "fallback_session_lifetime")
|
||||
{
|
||||
fallback_session_lifetime = SessionLifetimeConfiguration
|
||||
{
|
||||
.min_sec = config.getUInt(config_name + "." + key + ".min"),
|
||||
.max_sec = config.getUInt(config_name + "." + key + ".max"),
|
||||
};
|
||||
}
|
||||
else
|
||||
throw KeeperException(std::string("Unknown key ") + key + " in config file", Coordination::Error::ZBADARGUMENTS);
|
||||
}
|
||||
|
@ -11,8 +11,17 @@ namespace Poco::Util
|
||||
namespace zkutil
|
||||
{
|
||||
|
||||
constexpr UInt32 ZK_MIN_FALLBACK_SESSION_DEADLINE_SEC = 3 * 60 * 60;
|
||||
constexpr UInt32 ZK_MAX_FALLBACK_SESSION_DEADLINE_SEC = 6 * 60 * 60;
|
||||
|
||||
struct ZooKeeperArgs
|
||||
{
|
||||
struct SessionLifetimeConfiguration
|
||||
{
|
||||
UInt32 min_sec = ZK_MIN_FALLBACK_SESSION_DEADLINE_SEC;
|
||||
UInt32 max_sec = ZK_MAX_FALLBACK_SESSION_DEADLINE_SEC;
|
||||
bool operator == (const SessionLifetimeConfiguration &) const = default;
|
||||
};
|
||||
ZooKeeperArgs(const Poco::Util::AbstractConfiguration & config, const String & config_name);
|
||||
|
||||
/// hosts_string -- comma separated [secure://]host:port list
|
||||
@ -36,6 +45,7 @@ struct ZooKeeperArgs
|
||||
UInt64 send_sleep_ms = 0;
|
||||
UInt64 recv_sleep_ms = 0;
|
||||
|
||||
SessionLifetimeConfiguration fallback_session_lifetime = {};
|
||||
DB::GetPriorityForLoadBalancing get_priority_load_balancing;
|
||||
|
||||
private:
|
||||
|
@ -642,6 +642,8 @@ void ZooKeeperMultiResponse::readImpl(ReadBuffer & in)
|
||||
|
||||
if (op_error == Error::ZOK || op_num == OpNum::Error)
|
||||
dynamic_cast<ZooKeeperResponse &>(*response).readImpl(in);
|
||||
|
||||
response->zxid = zxid;
|
||||
}
|
||||
|
||||
/// Footer.
|
||||
|
@ -28,7 +28,6 @@ using LogElements = std::vector<ZooKeeperLogElement>;
|
||||
struct ZooKeeperResponse : virtual Response
|
||||
{
|
||||
XID xid = 0;
|
||||
int64_t zxid = 0;
|
||||
|
||||
UInt64 response_created_time_ns = 0;
|
||||
|
||||
|
@ -313,8 +313,8 @@ ZooKeeper::~ZooKeeper()
|
||||
ZooKeeper::ZooKeeper(
|
||||
const Nodes & nodes,
|
||||
const zkutil::ZooKeeperArgs & args_,
|
||||
std::shared_ptr<ZooKeeperLog> zk_log_)
|
||||
: args(args_)
|
||||
std::shared_ptr<ZooKeeperLog> zk_log_, std::optional<ConnectedCallback> && connected_callback_)
|
||||
: args(args_), connected_callback(std::move(connected_callback_))
|
||||
{
|
||||
log = &Poco::Logger::get("ZooKeeperClient");
|
||||
std::atomic_store(&zk_log, std::move(zk_log_));
|
||||
@ -395,8 +395,9 @@ void ZooKeeper::connect(
|
||||
WriteBufferFromOwnString fail_reasons;
|
||||
for (size_t try_no = 0; try_no < num_tries; ++try_no)
|
||||
{
|
||||
for (const auto & node : nodes)
|
||||
for (size_t i = 0; i < nodes.size(); ++i)
|
||||
{
|
||||
const auto & node = nodes[i];
|
||||
try
|
||||
{
|
||||
/// Reset the state of previous attempt.
|
||||
@ -443,9 +444,25 @@ void ZooKeeper::connect(
|
||||
e.addMessage("while receiving handshake from ZooKeeper");
|
||||
throw;
|
||||
}
|
||||
|
||||
connected = true;
|
||||
connected_zk_address = node.address;
|
||||
|
||||
if (connected_callback.has_value())
|
||||
(*connected_callback)(i, node);
|
||||
|
||||
if (i != 0)
|
||||
{
|
||||
std::uniform_int_distribution<UInt32> fallback_session_lifetime_distribution
|
||||
{
|
||||
args.fallback_session_lifetime.min_sec,
|
||||
args.fallback_session_lifetime.max_sec,
|
||||
};
|
||||
UInt32 session_lifetime_seconds = fallback_session_lifetime_distribution(thread_local_rng);
|
||||
client_session_deadline = clock::now() + std::chrono::seconds(session_lifetime_seconds);
|
||||
|
||||
LOG_DEBUG(log, "Connected to a suboptimal ZooKeeper host ({}, index {})."
|
||||
" To preserve balance in ZooKeeper usage, this ZooKeeper session will expire in {} seconds",
|
||||
node.address.toString(), i, session_lifetime_seconds);
|
||||
}
|
||||
|
||||
break;
|
||||
}
|
||||
@ -462,7 +479,6 @@ void ZooKeeper::connect(

if (!connected)
{
WriteBufferFromOwnString message;
connected_zk_address = Poco::Net::SocketAddress();

message << "All connection tries failed while connecting to ZooKeeper. nodes: ";
bool first = true;

@ -1060,6 +1076,7 @@ void ZooKeeper::pushRequest(RequestInfo && info)

{
try
{
checkSessionDeadline();
info.time = clock::now();
if (zk_log)
{

@ -1482,6 +1499,17 @@ void ZooKeeper::setupFaultDistributions()

inject_setup.test_and_set();
}

void ZooKeeper::checkSessionDeadline() const
{
if (unlikely(hasReachedDeadline()))
throw Exception(Error::ZSESSIONEXPIRED, "Session expired (force expiry client-side)");
}

bool ZooKeeper::hasReachedDeadline() const
{
return client_session_deadline.has_value() && clock::now() >= client_session_deadline.value();
}

void ZooKeeper::maybeInjectSendFault()
{
if (unlikely(inject_setup.test() && send_inject_fault && send_inject_fault.value()(thread_local_rng)))
@ -107,6 +107,7 @@ public:

};

using Nodes = std::vector<Node>;
using ConnectedCallback = std::function<void(size_t, const Node&)>;

/** Connection to nodes is performed in order. If you want, shuffle them manually.
 * Operation timeout couldn't be greater than session timeout.

@ -115,7 +116,8 @@ public:

ZooKeeper(
const Nodes & nodes,
const zkutil::ZooKeeperArgs & args_,
std::shared_ptr<ZooKeeperLog> zk_log_);
std::shared_ptr<ZooKeeperLog> zk_log_,
std::optional<ConnectedCallback> && connected_callback_ = {});

~ZooKeeper() override;
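The new ConnectedCallback parameter lets the caller observe which node of the configured list the client actually connected to. A hedged, in-context usage sketch (assumes the surrounding nodes/args/zk_log variables; names are illustrative, not an existing call site):

auto on_connected = [](size_t node_index, const Coordination::ZooKeeper::Node & node)
{
    /// e.g. record the index so the caller knows whether a fallback host was used
    LOG_INFO(&Poco::Logger::get("example"), "Connected to ZooKeeper node #{} at {}",
             node_index, node.address.toString());
};

auto client = std::make_unique<Coordination::ZooKeeper>(nodes, args, zk_log, std::move(on_connected));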
@ -123,11 +125,13 @@ public:

/// If expired, you can only destroy the object. All other methods will throw exception.
bool isExpired() const override { return requests_queue.isFinished(); }

/// A ZooKeeper session can have an optional deadline set on it.
/// After it has been reached, the session needs to be finalized.
bool hasReachedDeadline() const override;

/// Useful to check owner of ephemeral node.
int64_t getSessionID() const override { return session_id; }

Poco::Net::SocketAddress getConnectedAddress() const override { return connected_zk_address; }

void executeGenericRequest(
const ZooKeeperRequestPtr & request,
ResponseCallback callback);

@ -213,9 +217,9 @@ public:

private:
ACLs default_acls;
Poco::Net::SocketAddress connected_zk_address;

zkutil::ZooKeeperArgs args;
std::optional<ConnectedCallback> connected_callback = {};

/// Fault injection
void maybeInjectSendFault();

@ -252,6 +256,7 @@ private:

clock::time_point time;
};

std::optional<clock::time_point> client_session_deadline {};
using RequestsQueue = ConcurrentBoundedQueue<RequestInfo>;

RequestsQueue requests_queue{1024};

@ -324,6 +329,8 @@ private:

void initFeatureFlags();

void checkSessionDeadline() const;

CurrentMetrics::Increment active_session_metric_increment{CurrentMetrics::ZooKeeperSession};
std::shared_ptr<ZooKeeperLog> zk_log;
@ -794,8 +794,14 @@ bool KeeperServer::applyConfigUpdate(const ClusterUpdateAction & action)

std::lock_guard _{server_write_mutex};

if (const auto * add = std::get_if<AddRaftServer>(&action))
return raft_instance->get_srv_config(add->id) != nullptr
|| raft_instance->add_srv(static_cast<nuraft::srv_config>(*add))->get_accepted();
{
if (raft_instance->get_srv_config(add->id) != nullptr)
return true;

auto resp = raft_instance->add_srv(static_cast<nuraft::srv_config>(*add));
resp->get();
return resp->get_accepted();
}
else if (const auto * remove = std::get_if<RemoveRaftServer>(&action))
{
if (remove->id == raft_instance->get_leader())

@ -807,8 +813,12 @@ bool KeeperServer::applyConfigUpdate(const ClusterUpdateAction & action)

return false;
}

return raft_instance->get_srv_config(remove->id) == nullptr
|| raft_instance->remove_srv(remove->id)->get_accepted();
if (raft_instance->get_srv_config(remove->id) == nullptr)
return true;

auto resp = raft_instance->remove_srv(remove->id);
resp->get();
return resp->get_accepted();
}
else if (const auto * update = std::get_if<UpdateRaftServerPriority>(&action))
{
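The change above replaces "check get_accepted() on a possibly still-pending command" with "wait for the command, then check acceptance". A generic sketch of that pattern (illustrative names, not the NuRaft API):

// Wait for the asynchronous cluster-config command to complete before reading its
// acceptance flag; checking the flag on a pending handle can report a false negative.
template <typename AsyncResultPtr>
bool waitAndCheckAccepted(AsyncResultPtr resp)
{
    resp->get();                 /// block until the command has been processed
    return resp->get_accepted(); /// only now is the acceptance flag meaningful
}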
@ -83,8 +83,12 @@ namespace DB

M(UInt64, background_schedule_pool_size, 128, "The maximum number of threads that will be used for constantly executing some lightweight periodic operations.", 0) \
M(UInt64, background_message_broker_schedule_pool_size, 16, "The maximum number of threads that will be used for executing background operations for message streaming.", 0) \
M(UInt64, background_distributed_schedule_pool_size, 16, "The maximum number of threads that will be used for executing distributed sends.", 0) \
M(Bool, display_secrets_in_show_and_select, false, "Allow showing secrets in SHOW and SELECT queries via a format setting and a grant", 0)

M(Bool, display_secrets_in_show_and_select, false, "Allow showing secrets in SHOW and SELECT queries via a format setting and a grant", 0) \
\
M(UInt64, total_memory_profiler_step, 0, "Whenever server memory usage becomes larger than every next step in number of bytes the memory profiler will collect the allocating stack trace. Zero means disabled memory profiler. Values lower than a few megabytes will slow down server.", 0) \
M(Double, total_memory_tracker_sample_probability, 0, "Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type. The probability is for every alloc/free regardless to the size of the allocation (can be changed with `memory_profiler_sample_min_allocation_size` and `memory_profiler_sample_max_allocation_size`). Note that sampling happens only when the amount of untracked memory exceeds 'max_untracked_memory'. You may want to set 'max_untracked_memory' to 0 for extra fine grained sampling.", 0) \
M(UInt64, total_memory_profiler_sample_min_allocation_size, 0, "Collect random allocations of size greater or equal than specified value with probability equal to `total_memory_profiler_sample_probability`. 0 means disabled. You may want to set 'max_untracked_memory' to 0 to make this threshold to work as expected.", 0) \
M(UInt64, total_memory_profiler_sample_max_allocation_size, 0, "Collect random allocations of size less or equal than specified value with probability equal to `total_memory_profiler_sample_probability`. 0 means disabled. You may want to set 'max_untracked_memory' to 0 to make this threshold to work as expected.", 0)

DECLARE_SETTINGS_TRAITS(ServerSettingsTraits, SERVER_SETTINGS)
@ -386,6 +386,8 @@ class IColumn;

M(UInt64, max_temporary_columns, 0, "If a query generates more than the specified number of temporary columns in memory as a result of intermediate calculation, exception is thrown. Zero value means unlimited. This setting is useful to prevent too complex queries.", 0) \
M(UInt64, max_temporary_non_const_columns, 0, "Similar to the 'max_temporary_columns' setting but applies only to non-constant columns. This makes sense, because constant columns are cheap and it is reasonable to allow more of them.", 0) \
\
M(UInt64, max_sessions_for_user, 0, "Maximum number of simultaneous sessions for a user.", 0) \
\
M(UInt64, max_subquery_depth, 100, "If a query has more than specified number of nested subqueries, throw an exception. This allows you to have a sanity check to protect the users of your cluster from going insane with their queries.", 0) \
M(UInt64, max_analyze_depth, 5000, "Maximum number of analyses performed by interpreter.", 0) \
M(UInt64, max_ast_depth, 1000, "Maximum depth of query syntax tree. Checked after parsing.", 0) \

@ -427,7 +429,9 @@ class IColumn;

M(UInt64, memory_overcommit_ratio_denominator_for_user, 1_GiB, "It represents soft memory limit on the global level. This value is used to compute query overcommit ratio.", 0) \
M(UInt64, max_untracked_memory, (4 * 1024 * 1024), "Small allocations and deallocations are grouped in thread local variable and tracked or profiled only when amount (in absolute value) becomes larger than specified value. If the value is higher than 'memory_profiler_step' it will be effectively lowered to 'memory_profiler_step'.", 0) \
M(UInt64, memory_profiler_step, (4 * 1024 * 1024), "Whenever query memory usage becomes larger than every next step in number of bytes the memory profiler will collect the allocating stack trace. Zero means disabled memory profiler. Values lower than a few megabytes will slow down query processing.", 0) \
M(Float, memory_profiler_sample_probability, 0., "Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type. The probability is for every alloc/free regardless to the size of the allocation. Note that sampling happens only when the amount of untracked memory exceeds 'max_untracked_memory'. You may want to set 'max_untracked_memory' to 0 for extra fine grained sampling.", 0) \
M(Float, memory_profiler_sample_probability, 0., "Collect random allocations and deallocations and write them into system.trace_log with 'MemorySample' trace_type. The probability is for every alloc/free regardless to the size of the allocation (can be changed with `memory_profiler_sample_min_allocation_size` and `memory_profiler_sample_max_allocation_size`). Note that sampling happens only when the amount of untracked memory exceeds 'max_untracked_memory'. You may want to set 'max_untracked_memory' to 0 for extra fine grained sampling.", 0) \
M(UInt64, memory_profiler_sample_min_allocation_size, 0, "Collect random allocations of size greater or equal than specified value with probability equal to `memory_profiler_sample_probability`. 0 means disabled. You may want to set 'max_untracked_memory' to 0 to make this threshold to work as expected.", 0) \
M(UInt64, memory_profiler_sample_max_allocation_size, 0, "Collect random allocations of size less or equal than specified value with probability equal to `memory_profiler_sample_probability`. 0 means disabled. You may want to set 'max_untracked_memory' to 0 to make this threshold to work as expected.", 0) \
M(Bool, trace_profile_events, false, "Send to system.trace_log profile event and value of increment on each increment with 'ProfileEvent' trace_type", 0) \
\
M(UInt64, memory_usage_overcommit_max_wait_microseconds, 5'000'000, "Maximum time thread will wait for memory to be freed in the case of memory overcommit. If timeout is reached and memory is not freed, exception is thrown.", 0) \
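A hedged sketch of how the new sampling size bounds are meant to interact with the probability setting (simplified and self-contained; the real logic lives in the memory tracker, not here):

#include <cstdint>
#include <random>

// Sample an allocation only if it falls inside the optional [min_size, max_size] window,
// then apply the configured probability. 0 for either bound means "no bound".
bool shouldSampleAllocation(uint64_t size, double sample_probability,
                            uint64_t min_size, uint64_t max_size, std::mt19937_64 & rng)
{
    if (sample_probability <= 0.0)
        return false;
    if (min_size && size < min_size)
        return false;
    if (max_size && size > max_size)
        return false;
    return std::bernoulli_distribution(sample_probability)(rng);
}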
@ -17,13 +17,13 @@ namespace ErrorCodes

extern const int UNKNOWN_ELEMENT_IN_CONFIG;
}

void DictionaryFactory::registerLayout(const std::string & layout_type, LayoutCreateFunction create_layout, bool is_layout_complex)
void DictionaryFactory::registerLayout(const std::string & layout_type, LayoutCreateFunction create_layout, bool is_layout_complex, bool has_layout_complex)
{
auto it = registered_layouts.find(layout_type);
if (it != registered_layouts.end())
throw Exception(ErrorCodes::LOGICAL_ERROR, "DictionaryFactory: the layout name '{}' is not unique", layout_type);

RegisteredLayout layout { .layout_create_function = create_layout, .is_layout_complex = is_layout_complex };
RegisteredLayout layout { .layout_create_function = create_layout, .is_layout_complex = is_layout_complex, .has_layout_complex = has_layout_complex };
registered_layouts.emplace(layout_type, std::move(layout));
}
@ -89,6 +89,25 @@ bool DictionaryFactory::isComplex(const std::string & layout_type) const

return it->second.is_layout_complex;
}

bool DictionaryFactory::convertToComplex(std::string & layout_type) const
{
auto it = registered_layouts.find(layout_type);

if (it == registered_layouts.end())
{
throw Exception(ErrorCodes::UNKNOWN_ELEMENT_IN_CONFIG,
"Unknown dictionary layout type: {}",
layout_type);
}

if (!it->second.is_layout_complex && it->second.has_layout_complex)
{
layout_type = "complex_key_" + layout_type;
return true;
}
return false;
}

DictionaryFactory & DictionaryFactory::instance()
{
@ -55,13 +55,18 @@ public:

bool isComplex(const std::string & layout_type) const;

void registerLayout(const std::string & layout_type, LayoutCreateFunction create_layout, bool is_layout_complex);
/// If the argument `layout_type` is not a complex layout but has a corresponding complex layout,
/// change `layout_type` to the corresponding complex one and return true; otherwise do nothing and return false.
bool convertToComplex(std::string & layout_type) const;

void registerLayout(const std::string & layout_type, LayoutCreateFunction create_layout, bool is_layout_complex, bool has_layout_complex = true);

private:
struct RegisteredLayout
{
LayoutCreateFunction layout_create_function;
bool is_layout_complex;
bool has_layout_complex;
};

using LayoutRegistry = std::unordered_map<std::string, RegisteredLayout>;

@ -683,7 +683,7 @@ void registerDictionaryFlat(DictionaryFactory & factory)

return std::make_unique<FlatDictionary>(dict_id, dict_struct, std::move(source_ptr), configuration);
};

factory.registerLayout("flat", create_layout, false);
factory.registerLayout("flat", create_layout, false, false);
}
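A hedged usage sketch of the new convertToComplex hook (illustrative fragment; the real call site is shown in getDictionaryConfigurationFromAST below):

// If a dictionary is declared with a composite or non-numeric primary key but a "simple"
// layout, the factory can upgrade the layout name on behalf of the caller.
std::string layout_type = "hashed";
if (DictionaryFactory::instance().convertToComplex(layout_type))
{
    /// layout_type is now "complex_key_hashed"; the dictionary will be built with a complex key.
}
/// For "flat" no upgrade is attempted, because it was registered with has_layout_complex = false.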
@ -19,6 +19,7 @@

#include <Functions/FunctionFactory.h>
#include <Common/isLocalAddress.h>
#include <Interpreters/Context.h>
#include <DataTypes/DataTypeFactory.h>

namespace DB

@ -614,6 +615,16 @@ getDictionaryConfigurationFromAST(const ASTCreateQuery & query, ContextPtr conte

checkPrimaryKey(all_attr_names_and_types, pk_attrs);

/// If the pk size is 1 and the pk's DataType is not a number, we should convert to complex.
/// NOTE: the data type of a numeric key (simple layout) is UInt64, so if the type does not fit into UInt64, casting will lose precision.
DataTypePtr first_key_type = DataTypeFactory::instance().get(all_attr_names_and_types.find(pk_attrs[0])->second.type);
if ((pk_attrs.size() > 1 || (pk_attrs.size() == 1 && !isNumber(first_key_type)))
&& !complex
&& DictionaryFactory::instance().convertToComplex(dictionary_layout->layout_type))
{
complex = true;
}

buildPrimaryKeyConfiguration(xml_document, structure_element, complex, pk_attrs, query.dictionary_attributes_list);

buildLayoutConfiguration(xml_document, current_dictionary, query.dictionary->dict_settings, dictionary_layout);
@ -3,6 +3,7 @@

#include <IO/ReadBufferFromString.h>
#include <IO/ReadBufferFromFile.h>
#include <IO/ReadBufferFromEmptyFile.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteBufferFromFile.h>
#include <IO/WriteHelpers.h>

@ -485,8 +486,15 @@ std::unique_ptr<ReadBufferFromFileBase> DiskObjectStorage::readFile(

std::optional<size_t> read_hint,
std::optional<size_t> file_size) const
{
auto storage_objects = metadata_storage->getStorageObjects(path);

const bool file_can_be_empty = !file_size.has_value() || *file_size == 0;

if (storage_objects.empty() && file_can_be_empty)
return std::make_unique<ReadBufferFromEmptyFile>();

return object_storage->readObjects(
metadata_storage->getStorageObjects(path),
storage_objects,
object_storage->getAdjustedSettingsFromMetadataFile(settings, path),
read_hint,
file_size);
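The guard added above short-circuits reads of object-backed files whose metadata lists no stored blobs. A minimal self-contained sketch of the condition in isolation (illustrative):

#include <cstddef>
#include <optional>

// A file with zero blobs may be served as an empty read buffer only when the caller
// either does not know the expected size or expects exactly 0 bytes.
bool canServeAsEmpty(size_t blob_count, std::optional<size_t> expected_size)
{
    const bool file_can_be_empty = !expected_size.has_value() || *expected_size == 0;
    return blob_count == 0 && file_can_be_empty;
}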
@ -13,6 +13,8 @@

#include <aws/core/utils/HashingUtils.h>
#include <aws/core/utils/logging/ErrorMacros.h>

#include <Poco/Net/NetException.h>

#include <IO/S3Common.h>
#include <IO/S3/Requests.h>
#include <IO/S3/PocoHTTPClientFactory.h>

@ -23,6 +25,15 @@

#include <Common/logger_useful.h>

namespace ProfileEvents
{
extern const Event S3WriteRequestsErrors;
extern const Event S3ReadRequestsErrors;

extern const Event DiskS3WriteRequestsErrors;
extern const Event DiskS3ReadRequestsErrors;
}

namespace DB
{

@ -346,12 +357,14 @@ Model::HeadObjectOutcome Client::HeadObject(const HeadObjectRequest & request) c

Model::ListObjectsV2Outcome Client::ListObjectsV2(const ListObjectsV2Request & request) const
{
return doRequest(request, [this](const Model::ListObjectsV2Request & req) { return ListObjectsV2(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ true>(
request, [this](const Model::ListObjectsV2Request & req) { return ListObjectsV2(req); });
}

Model::ListObjectsOutcome Client::ListObjects(const ListObjectsRequest & request) const
{
return doRequest(request, [this](const Model::ListObjectsRequest & req) { return ListObjects(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ true>(
request, [this](const Model::ListObjectsRequest & req) { return ListObjects(req); });
}

Model::GetObjectOutcome Client::GetObject(const GetObjectRequest & request) const
@ -361,19 +374,19 @@ Model::GetObjectOutcome Client::GetObject(const GetObjectRequest & request) cons

Model::AbortMultipartUploadOutcome Client::AbortMultipartUpload(const AbortMultipartUploadRequest & request) const
{
return doRequest(
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::AbortMultipartUploadRequest & req) { return AbortMultipartUpload(req); });
}

Model::CreateMultipartUploadOutcome Client::CreateMultipartUpload(const CreateMultipartUploadRequest & request) const
{
return doRequest(
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::CreateMultipartUploadRequest & req) { return CreateMultipartUpload(req); });
}

Model::CompleteMultipartUploadOutcome Client::CompleteMultipartUpload(const CompleteMultipartUploadRequest & request) const
{
auto outcome = doRequest(
auto outcome = doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::CompleteMultipartUploadRequest & req) { return CompleteMultipartUpload(req); });

if (!outcome.IsSuccess() || provider_type != ProviderType::GCS)

@ -403,32 +416,38 @@ Model::CompleteMultipartUploadOutcome Client::CompleteMultipartUpload(const Comp

Model::CopyObjectOutcome Client::CopyObject(const CopyObjectRequest & request) const
{
return doRequest(request, [this](const Model::CopyObjectRequest & req) { return CopyObject(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::CopyObjectRequest & req) { return CopyObject(req); });
}

Model::PutObjectOutcome Client::PutObject(const PutObjectRequest & request) const
{
return doRequest(request, [this](const Model::PutObjectRequest & req) { return PutObject(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::PutObjectRequest & req) { return PutObject(req); });
}

Model::UploadPartOutcome Client::UploadPart(const UploadPartRequest & request) const
{
return doRequest(request, [this](const Model::UploadPartRequest & req) { return UploadPart(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::UploadPartRequest & req) { return UploadPart(req); });
}

Model::UploadPartCopyOutcome Client::UploadPartCopy(const UploadPartCopyRequest & request) const
{
return doRequest(request, [this](const Model::UploadPartCopyRequest & req) { return UploadPartCopy(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::UploadPartCopyRequest & req) { return UploadPartCopy(req); });
}

Model::DeleteObjectOutcome Client::DeleteObject(const DeleteObjectRequest & request) const
{
return doRequest(request, [this](const Model::DeleteObjectRequest & req) { return DeleteObject(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::DeleteObjectRequest & req) { return DeleteObject(req); });
}

Model::DeleteObjectsOutcome Client::DeleteObjects(const DeleteObjectsRequest & request) const
{
return doRequest(request, [this](const Model::DeleteObjectsRequest & req) { return DeleteObjects(req); });
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, [this](const Model::DeleteObjectsRequest & req) { return DeleteObjects(req); });
}

Client::ComposeObjectOutcome Client::ComposeObject(const ComposeObjectRequest & request) const

@ -457,7 +476,8 @@ Client::ComposeObjectOutcome Client::ComposeObject(const ComposeObjectRequest &

return ComposeObjectOutcome(MakeRequest(req, endpointResolutionOutcome.GetResult(), Aws::Http::HttpMethod::HTTP_PUT));
};

return doRequest(request, request_fn);
return doRequestWithRetryNetworkErrors</*IsReadMethod*/ false>(
request, request_fn);
}

template <typename RequestType, typename RequestFn>

@ -538,6 +558,65 @@ Client::doRequest(const RequestType & request, RequestFn request_fn) const

throw Exception(ErrorCodes::TOO_MANY_REDIRECTS, "Too many redirects");
}
template <bool IsReadMethod, typename RequestType, typename RequestFn>
std::invoke_result_t<RequestFn, RequestType>
Client::doRequestWithRetryNetworkErrors(const RequestType & request, RequestFn request_fn) const
{
auto with_retries = [this, request_fn_ = std::move(request_fn)] (const RequestType & request_)
{
chassert(client_configuration.retryStrategy);
const Int64 max_attempts = client_configuration.retryStrategy->GetMaxAttempts();
std::exception_ptr last_exception = nullptr;
for (Int64 attempt_no = 0; attempt_no < max_attempts; ++attempt_no)
{
try
{
/// The S3 client does retry network errors, but it matters when the error occurs.
/// This code retries the specific case when a network error happens while
/// the XML document is being read from the response body.
/// The response body is a stream, so network errors are possible while reading it,
/// and S3 doesn't retry them.

/// Not all requests can be retried in this way.
/// Requests that read out the response body to build the result can be retried.
/// Requests that expose the response stream as the answer are not retried with this code. E.g. GetObject.
return request_fn_(request_);
}
catch (Poco::Net::ConnectionResetException &)
{

if constexpr (IsReadMethod)
{
if (client_configuration.for_disk_s3)
ProfileEvents::increment(ProfileEvents::DiskS3ReadRequestsErrors);
else
ProfileEvents::increment(ProfileEvents::S3ReadRequestsErrors);
}
else
{
if (client_configuration.for_disk_s3)
ProfileEvents::increment(ProfileEvents::DiskS3WriteRequestsErrors);
else
ProfileEvents::increment(ProfileEvents::S3WriteRequestsErrors);
}

tryLogCurrentException(log, "Will retry");
last_exception = std::current_exception();

auto error = Aws::Client::AWSError<Aws::Client::CoreErrors>(Aws::Client::CoreErrors::NETWORK_CONNECTION, /*retry*/ true);
client_configuration.retryStrategy->CalculateDelayBeforeNextRetry(error, attempt_no);
continue;
}
}

chassert(last_exception);
std::rethrow_exception(last_exception);
};

return doRequest(request, with_retries);
}

bool Client::supportsMultiPartCopy() const
{
return provider_type != ProviderType::GCS;
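A self-contained sketch of the retry shape introduced above, and of why it only helps requests whose response body is fully consumed inside the request function (illustrative names, not the ClickHouse/AWS types):

#include <exception>
#include <stdexcept>

// Retry a callable on connection-reset errors. This only works when fn() reads the whole
// response body itself; for streaming responses (the GetObject case) the error surfaces
// later, when the caller reads the stream, so a wrapper like this never sees it.
template <typename Fn>
auto retryOnConnectionReset(Fn && fn, int max_attempts) -> decltype(fn())
{
    std::exception_ptr last_exception;
    for (int attempt = 0; attempt < max_attempts; ++attempt)
    {
        try
        {
            return fn();
        }
        catch (const std::runtime_error &)   /// stand-in for Poco::Net::ConnectionResetException
        {
            last_exception = std::current_exception();
            /// a real implementation would also wait here according to the retry strategy
        }
    }
    std::rethrow_exception(last_exception);
}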
@ -250,6 +250,10 @@ private:

std::invoke_result_t<RequestFn, RequestType>
doRequest(const RequestType & request, RequestFn request_fn) const;

template <bool IsReadMethod, typename RequestType, typename RequestFn>
std::invoke_result_t<RequestFn, RequestType>
doRequestWithRetryNetworkErrors(const RequestType & request, RequestFn request_fn) const;

void updateURIForBucket(const std::string & bucket, S3::URI new_uri) const;
std::optional<S3::URI> getURIFromError(const Aws::S3::S3Error & error) const;
std::optional<Aws::S3::S3Error> updateURIForBucketForHead(const std::string & bucket) const;
|
||||
settings_from_query = SettingsProfileElements{*query.settings, access_control};
|
||||
|
||||
if (!query.attach)
|
||||
getContext()->checkSettingsConstraints(*settings_from_query);
|
||||
getContext()->checkSettingsConstraints(*settings_from_query, SettingSource::ROLE);
|
||||
}
|
||||
|
||||
if (!query.cluster.empty())
|
||||
|
@ -54,7 +54,7 @@ BlockIO InterpreterCreateSettingsProfileQuery::execute()
|
||||
settings_from_query = SettingsProfileElements{*query.settings, access_control};
|
||||
|
||||
if (!query.attach)
|
||||
getContext()->checkSettingsConstraints(*settings_from_query);
|
||||
getContext()->checkSettingsConstraints(*settings_from_query, SettingSource::PROFILE);
|
||||
}
|
||||
|
||||
if (!query.cluster.empty())
|
||||
|
@ -133,7 +133,7 @@ BlockIO InterpreterCreateUserQuery::execute()
|
||||
settings_from_query = SettingsProfileElements{*query.settings, access_control};
|
||||
|
||||
if (!query.attach)
|
||||
getContext()->checkSettingsConstraints(*settings_from_query);
|
||||
getContext()->checkSettingsConstraints(*settings_from_query, SettingSource::USER);
|
||||
}
|
||||
|
||||
if (!query.cluster.empty())
|
||||
|
@ -45,6 +45,7 @@

#include <Interpreters/Cache/QueryCache.h>
#include <Interpreters/Cache/FileCacheFactory.h>
#include <Interpreters/Cache/FileCache.h>
#include <Interpreters/SessionTracker.h>
#include <Core/ServerSettings.h>
#include <Interpreters/PreparedSets.h>
#include <Core/Settings.h>

@ -158,6 +159,7 @@ namespace CurrentMetrics

extern const Metric IOWriterThreadsActive;
}

namespace DB
{

@ -276,6 +278,7 @@ struct ContextSharedPart : boost::noncopyable

mutable QueryCachePtr query_cache; /// Cache of query results.
mutable MMappedFileCachePtr mmap_cache; /// Cache of mmapped files to avoid frequent open/map/unmap/close and to reuse from several threads.
ProcessList process_list; /// Executing queries at the moment.
SessionTracker session_tracker;
GlobalOvercommitTracker global_overcommit_tracker;
MergeList merge_list; /// The list of executable merge (for (Replicated)?MergeTree)
MovesList moves_list; /// The list of executing moves (for (Replicated)?MergeTree)

@ -739,6 +742,9 @@ std::unique_lock<std::recursive_mutex> Context::getLock() const

ProcessList & Context::getProcessList() { return shared->process_list; }
const ProcessList & Context::getProcessList() const { return shared->process_list; }
OvercommitTracker * Context::getGlobalOvercommitTracker() const { return &shared->global_overcommit_tracker; }

SessionTracker & Context::getSessionTracker() { return shared->session_tracker; }

MergeList & Context::getMergeList() { return shared->merge_list; }
const MergeList & Context::getMergeList() const { return shared->merge_list; }
MovesList & Context::getMovesList() { return shared->moves_list; }
@ -1094,7 +1100,7 @@ void Context::setUser(const UUID & user_id_, bool set_current_profiles_, bool se

std::optional<ContextAccessParams> params;
{
auto lock = getLock();
params.emplace(ContextAccessParams{user_id_, /* full_access= */ false, /* use_default_roles = */ true, {}, settings, current_database, client_info});
params.emplace(ContextAccessParams{user_id_, /* full_access= */ false, /* use_default_roles = */ true, {}, settings, current_database, client_info });
}
/// `temp_access` is used here only to extract information about the user, not to actually check access.
/// NOTE: AccessControl::getContextAccess() may require some IO work, so Context::getLock() must be unlocked while we're doing this.

@ -1157,13 +1163,6 @@ std::optional<UUID> Context::getUserID() const

}

void Context::setQuotaKey(String quota_key_)
{
auto lock = getLock();
client_info.quota_key = std::move(quota_key_);
}

void Context::setCurrentRoles(const std::vector<UUID> & current_roles_)
{
auto lock = getLock();

@ -1303,7 +1302,7 @@ void Context::setCurrentProfiles(const SettingsProfilesInfo & profiles_info, boo

{
auto lock = getLock();
if (check_constraints)
checkSettingsConstraints(profiles_info.settings);
checkSettingsConstraints(profiles_info.settings, SettingSource::PROFILE);
applySettingsChanges(profiles_info.settings);
settings_constraints_and_current_profiles = profiles_info.getConstraintsAndProfileIDs(settings_constraints_and_current_profiles);
}

@ -1857,29 +1856,29 @@ void Context::applySettingsChanges(const SettingsChanges & changes)

}

void Context::checkSettingsConstraints(const SettingsProfileElements & profile_elements) const
void Context::checkSettingsConstraints(const SettingsProfileElements & profile_elements, SettingSource source) const
{
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, profile_elements);
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, profile_elements, source);
}

void Context::checkSettingsConstraints(const SettingChange & change) const
void Context::checkSettingsConstraints(const SettingChange & change, SettingSource source) const
{
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, change);
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, change, source);
}

void Context::checkSettingsConstraints(const SettingsChanges & changes) const
void Context::checkSettingsConstraints(const SettingsChanges & changes, SettingSource source) const
{
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, changes);
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, changes, source);
}

void Context::checkSettingsConstraints(SettingsChanges & changes) const
void Context::checkSettingsConstraints(SettingsChanges & changes, SettingSource source) const
{
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, changes);
getSettingsConstraintsAndCurrentProfiles()->constraints.check(settings, changes, source);
}

void Context::clampToSettingsConstraints(SettingsChanges & changes) const
void Context::clampToSettingsConstraints(SettingsChanges & changes, SettingSource source) const
{
getSettingsConstraintsAndCurrentProfiles()->constraints.clamp(settings, changes);
getSettingsConstraintsAndCurrentProfiles()->constraints.clamp(settings, changes, source);
}

void Context::checkMergeTreeSettingsConstraints(const MergeTreeSettings & merge_tree_settings, const SettingsChanges & changes) const
@ -2711,7 +2710,10 @@ zkutil::ZooKeeperPtr Context::getZooKeeper() const

const auto & config = shared->zookeeper_config ? *shared->zookeeper_config : getConfigRef();
if (!shared->zookeeper)
shared->zookeeper = std::make_shared<zkutil::ZooKeeper>(config, zkutil::getZooKeeperConfigName(config), getZooKeeperLog());
else if (shared->zookeeper->expired())
else if (shared->zookeeper->hasReachedDeadline())
shared->zookeeper->finalize("ZooKeeper session has reached its deadline");

if (shared->zookeeper->expired())
{
Stopwatch watch;
LOG_DEBUG(shared->log, "Trying to establish a new connection with ZooKeeper");
@ -9,6 +9,7 @@

#include <Common/HTTPHeaderFilter.h>
#include <Common/ThreadPool_fwd.h>
#include <Common/Throttler_fwd.h>
#include <Common/SettingSource.h>
#include <Core/NamesAndTypes.h>
#include <Core/Settings.h>
#include <Core/UUID.h>

@ -202,6 +203,8 @@ using MergeTreeMetadataCachePtr = std::shared_ptr<MergeTreeMetadataCache>;

class PreparedSetsCache;
using PreparedSetsCachePtr = std::shared_ptr<PreparedSetsCache>;

class SessionTracker;

/// An empty interface for an arbitrary object that may be attached by a shared pointer
/// to query context, when using ClickHouse as a library.
struct IHostContext

@ -539,8 +542,6 @@ public:

String getUserName() const;

void setQuotaKey(String quota_key_);

void setCurrentRoles(const std::vector<UUID> & current_roles_);
void setCurrentRolesDefault();
boost::container::flat_set<UUID> getCurrentRoles() const;

@ -735,11 +736,11 @@ public:

void applySettingsChanges(const SettingsChanges & changes);

/// Checks the constraints.
void checkSettingsConstraints(const SettingsProfileElements & profile_elements) const;
void checkSettingsConstraints(const SettingChange & change) const;
void checkSettingsConstraints(const SettingsChanges & changes) const;
void checkSettingsConstraints(SettingsChanges & changes) const;
void clampToSettingsConstraints(SettingsChanges & changes) const;
void checkSettingsConstraints(const SettingsProfileElements & profile_elements, SettingSource source) const;
void checkSettingsConstraints(const SettingChange & change, SettingSource source) const;
void checkSettingsConstraints(const SettingsChanges & changes, SettingSource source) const;
void checkSettingsConstraints(SettingsChanges & changes, SettingSource source) const;
void clampToSettingsConstraints(SettingsChanges & changes, SettingSource source) const;
void checkMergeTreeSettingsConstraints(const MergeTreeSettings & merge_tree_settings, const SettingsChanges & changes) const;

/// Reset settings to default value

@ -861,6 +862,8 @@ public:

OvercommitTracker * getGlobalOvercommitTracker() const;

SessionTracker & getSessionTracker();

MergeList & getMergeList();
const MergeList & getMergeList() const;
@ -15,7 +15,7 @@ namespace DB

BlockIO InterpreterSetQuery::execute()
{
const auto & ast = query_ptr->as<ASTSetQuery &>();
getContext()->checkSettingsConstraints(ast.changes);
getContext()->checkSettingsConstraints(ast.changes, SettingSource::QUERY);
auto session_context = getContext()->getSessionContext();
session_context->applySettingsChanges(ast.changes);
session_context->addQueryParameters(ast.query_parameters);

@ -28,7 +28,7 @@ void InterpreterSetQuery::executeForCurrentContext(bool ignore_setting_constrain

{
const auto & ast = query_ptr->as<ASTSetQuery &>();
if (!ignore_setting_constraints)
getContext()->checkSettingsConstraints(ast.changes);
getContext()->checkSettingsConstraints(ast.changes, SettingSource::QUERY);
getContext()->applySettingsChanges(ast.changes);
getContext()->resetSettingsToDefaultValue(ast.default_settings);
}
@ -223,7 +223,10 @@ ProcessList::insert(const String & query_, const IAST * ast, ContextMutablePtr q

{
/// Set up memory profiling
thread_group->memory_tracker.setProfilerStep(settings.memory_profiler_step);

thread_group->memory_tracker.setSampleProbability(settings.memory_profiler_sample_probability);
thread_group->memory_tracker.setSampleMinAllocationSize(settings.memory_profiler_sample_min_allocation_size);
thread_group->memory_tracker.setSampleMaxAllocationSize(settings.memory_profiler_sample_max_allocation_size);
thread_group->performance_counters.setTraceProfileEvents(settings.trace_profile_events);
}
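A hedged sketch of why the setting descriptions above keep recommending max_untracked_memory = 0 for fine-grained sampling: small allocations are batched thread-locally and reach the tracker (and therefore the sampler) only when the batch crosses the threshold. Simplified and with an assumed tracker interface, not the real MemoryTracker:

#include <cstdint>

struct ThreadLocalAllocBatcher
{
    int64_t untracked = 0;
    int64_t max_untracked_memory = 4 * 1024 * 1024;

    template <typename Tracker>
    void onAlloc(int64_t size, Tracker & tracker)
    {
        untracked += size;
        if (untracked >= max_untracked_memory)   /// with 0, every allocation reaches the tracker
        {
            tracker.allocNoThrow(untracked);     /// assumed tracker method, for illustration only
            untracked = 0;
        }
    }
};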
@ -3,11 +3,13 @@

#include <Access/AccessControl.h>
#include <Access/Credentials.h>
#include <Access/ContextAccess.h>
#include <Access/SettingsProfilesInfo.h>
#include <Access/User.h>
#include <Common/logger_useful.h>
#include <Common/Exception.h>
#include <Common/ThreadPool.h>
#include <Common/setThreadName.h>
#include <Interpreters/SessionTracker.h>
#include <Interpreters/Context.h>
#include <Interpreters/SessionLog.h>
#include <Interpreters/Cluster.h>

@ -200,7 +202,6 @@ private:

LOG_TEST(log, "Schedule closing session with session_id: {}, user_id: {}",
session.key.second, session.key.first);

}

void cleanThread()

@ -336,6 +337,9 @@ void Session::authenticate(const Credentials & credentials_, const Poco::Net::So

if (session_context)
throw Exception(ErrorCodes::LOGICAL_ERROR, "If there is a session context it must be created after authentication");

if (session_tracker_handle)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Session tracker handle was created before authentication finish");

auto address = address_;
if ((address == Poco::Net::SocketAddress{}) && (prepared_client_info->interface == ClientInfo::Interface::LOCAL))
address = Poco::Net::SocketAddress{"127.0.0.1", 0};
@ -490,6 +494,8 @@ ContextMutablePtr Session::makeSessionContext()

throw Exception(ErrorCodes::LOGICAL_ERROR, "Session context must be created before any query context");
if (!user_id)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Session context must be created after authentication");
if (session_tracker_handle)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Session tracker handle was created before making session");

LOG_DEBUG(log, "{} Creating session context with user_id: {}",
toString(auth_id), toString(*user_id));

@ -503,13 +509,17 @@ ContextMutablePtr Session::makeSessionContext()

prepared_client_info.reset();

/// Set user information for the new context: current profiles, roles, access rights.
if (user_id)
new_session_context->setUser(*user_id);
new_session_context->setUser(*user_id);

/// Session context is ready.
session_context = new_session_context;
user = session_context->getUser();

session_tracker_handle = session_context->getSessionTracker().trackSession(
*user_id,
{},
session_context->getSettingsRef().max_sessions_for_user);

return session_context;
}

@ -521,6 +531,8 @@ ContextMutablePtr Session::makeSessionContext(const String & session_name_, std:

throw Exception(ErrorCodes::LOGICAL_ERROR, "Session context must be created before any query context");
if (!user_id)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Session context must be created after authentication");
if (session_tracker_handle)
throw Exception(ErrorCodes::LOGICAL_ERROR, "Session tracker handle was created before making session");

LOG_DEBUG(log, "{} Creating named session context with name: {}, user_id: {}",
toString(auth_id), session_name_, toString(*user_id));

@ -541,9 +553,23 @@ ContextMutablePtr Session::makeSessionContext(const String & session_name_, std:

new_session_context->setClientInfo(*prepared_client_info);
prepared_client_info.reset();

auto access = new_session_context->getAccess();
UInt64 max_sessions_for_user = 0;
/// Set user information for the new context: current profiles, roles, access rights.
if (user_id && !new_session_context->getAccess()->tryGetUser())
if (!access->tryGetUser())
{
new_session_context->setUser(*user_id);
max_sessions_for_user = new_session_context->getSettingsRef().max_sessions_for_user;
}
else
{
// Always get setting from profile
// profile can be changed by ALTER PROFILE during single session
auto settings = access->getDefaultSettings();
const Field * max_session_for_user_field = settings.tryGet("max_sessions_for_user");
if (max_session_for_user_field)
max_sessions_for_user = max_session_for_user_field->safeGet<UInt64>();
}

/// Session context is ready.
session_context = std::move(new_session_context);

@ -551,6 +577,11 @@ ContextMutablePtr Session::makeSessionContext(const String & session_name_, std:

named_session_created = new_named_session_created;
user = session_context->getUser();

session_tracker_handle = session_context->getSessionTracker().trackSession(
*user_id,
{ session_name_ },
max_sessions_for_user);

return session_context;
}
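The limit resolution above, in isolation (hedged, simplified): a first-time user takes max_sessions_for_user from the freshly applied default profile, while a re-used named session re-reads it from the profile, because ALTER PROFILE can change the value during the session's lifetime.

UInt64 max_sessions_for_user = 0;
if (!access->tryGetUser())
{
    session_context->setUser(user_id);                                       /// applies the default profile
    max_sessions_for_user = session_context->getSettingsRef().max_sessions_for_user;
}
else
{
    auto default_settings = access->getDefaultSettings();                    /// re-read, may have been ALTERed
    if (const Field * field = default_settings.tryGet("max_sessions_for_user"))
        max_sessions_for_user = field->safeGet<UInt64>();
}
/// 0 means "no limit" for the tracker below.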
@ -4,6 +4,7 @@

#include <Access/AuthenticationData.h>
#include <Interpreters/ClientInfo.h>
#include <Interpreters/Context_fwd.h>
#include <Interpreters/SessionTracker.h>

#include <chrono>
#include <memory>

@ -113,6 +114,8 @@ private:

std::shared_ptr<NamedSessionData> named_session;
bool named_session_created = false;

SessionTracker::SessionTrackerHandle session_tracker_handle;

Poco::Logger * log = nullptr;
};
62
src/Interpreters/SessionTracker.cpp
Normal file

@ -0,0 +1,62 @@

#include "SessionTracker.h"

#include <Common/Exception.h>

namespace DB
{

namespace ErrorCodes
{
extern const int USER_SESSION_LIMIT_EXCEEDED;
}

SessionTracker::Session::Session(SessionTracker & tracker_,
const UUID& user_id_,
SessionInfos::const_iterator session_info_iter_) noexcept
: tracker(tracker_), user_id(user_id_), session_info_iter(session_info_iter_)
{
}

SessionTracker::Session::~Session()
{
tracker.stopTracking(user_id, session_info_iter);
}

SessionTracker::SessionTrackerHandle
SessionTracker::trackSession(const UUID & user_id,
const SessionInfo & session_info,
size_t max_sessions_for_user)
{
std::lock_guard lock(mutex);

auto sessions_for_user_iter = sessions_for_user.find(user_id);
if (sessions_for_user_iter == sessions_for_user.end())
sessions_for_user_iter = sessions_for_user.emplace(user_id, SessionInfos()).first;

SessionInfos & session_infos = sessions_for_user_iter->second;
if (max_sessions_for_user && session_infos.size() >= max_sessions_for_user)
{
throw Exception(ErrorCodes::USER_SESSION_LIMIT_EXCEEDED,
"User {} has overflown session count {}",
toString(user_id),
max_sessions_for_user);
}

session_infos.emplace_front(session_info);

return std::make_unique<SessionTracker::Session>(*this, user_id, session_infos.begin());
}

void SessionTracker::stopTracking(const UUID& user_id, SessionInfos::const_iterator session_info_iter)
{
std::lock_guard lock(mutex);

auto sessions_for_user_iter = sessions_for_user.find(user_id);
chassert(sessions_for_user_iter != sessions_for_user.end());

sessions_for_user_iter->second.erase(session_info_iter);
if (sessions_for_user_iter->second.empty())
sessions_for_user.erase(sessions_for_user_iter);
}

}
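Hedged usage sketch of the new tracker (illustrative fragment): the RAII handle keeps the session counted for exactly as long as it lives, and the limit is enforced at creation time.

auto handle = session_tracker.trackSession(user_id, SessionInfo{"my_session"}, /*max_sessions_for_user=*/2);
/// A third concurrent trackSession() call for the same user_id would throw USER_SESSION_LIMIT_EXCEEDED.
/// When `handle` is destroyed (the session ends), the slot is released automatically.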
60
src/Interpreters/SessionTracker.h
Normal file

@ -0,0 +1,60 @@

#pragma once

#include "ClientInfo.h"

#include <list>
#include <map>
#include <memory>
#include <mutex>

namespace DB
{

struct SessionInfo
{
const String session_id;
};

using SessionInfos = std::list<SessionInfo>;

using SessionsForUser = std::unordered_map<UUID, SessionInfos>;

class SessionTracker;

class SessionTracker
{
public:
class Session : boost::noncopyable
{
public:
explicit Session(SessionTracker & tracker_,
const UUID & user_id_,
SessionInfos::const_iterator session_info_iter_) noexcept;

~Session();

private:
friend class SessionTracker;

SessionTracker & tracker;
const UUID user_id;
const SessionInfos::const_iterator session_info_iter;
};

using SessionTrackerHandle = std::unique_ptr<SessionTracker::Session>;

SessionTrackerHandle trackSession(const UUID & user_id,
const SessionInfo & session_info,
size_t max_sessions_for_user);

private:
/// disallow manual messing with session tracking
friend class Session;

std::mutex mutex;
SessionsForUser sessions_for_user TSA_GUARDED_BY(mutex);

void stopTracking(const UUID& user_id, SessionInfos::const_iterator session_info_iter);
};

}
@ -83,6 +83,8 @@ ThreadGroupPtr ThreadGroup::createForBackgroundProcess(ContextPtr storage_contex

const Settings & settings = storage_context->getSettingsRef();
group->memory_tracker.setProfilerStep(settings.memory_profiler_step);
group->memory_tracker.setSampleProbability(settings.memory_profiler_sample_probability);
group->memory_tracker.setSampleMinAllocationSize(settings.memory_profiler_sample_min_allocation_size);
group->memory_tracker.setSampleMaxAllocationSize(settings.memory_profiler_sample_max_allocation_size);
group->memory_tracker.setSoftLimit(settings.memory_overcommit_ratio_denominator);
group->memory_tracker.setParent(&background_memory_tracker);
if (settings.memory_tracker_fault_probability > 0.0)
@ -833,7 +833,7 @@ namespace

{
settings_changes.push_back({key, value});
}
query_context->checkSettingsConstraints(settings_changes);
query_context->checkSettingsConstraints(settings_changes, SettingSource::QUERY);
query_context->applySettingsChanges(settings_changes);

query_context->setCurrentQueryId(query_info.query_id());

@ -1118,7 +1118,7 @@ namespace

SettingsChanges settings_changes;
for (const auto & [key, value] : external_table.settings())
settings_changes.push_back({key, value});
external_table_context->checkSettingsConstraints(settings_changes);
external_table_context->checkSettingsConstraints(settings_changes, SettingSource::QUERY);
external_table_context->applySettingsChanges(settings_changes);
}
auto in = external_table_context->getInputFormat(

@ -764,7 +764,7 @@ void HTTPHandler::processQuery(

context->setDefaultFormat(default_format);

/// For external data we also want settings
context->checkSettingsConstraints(settings_changes);
context->checkSettingsConstraints(settings_changes, SettingSource::QUERY);
context->applySettingsChanges(settings_changes);

/// Set the query id supplied by the user, if any, and also update the OpenTelemetry fields.
@ -40,7 +40,7 @@ const char * ServerType::serverTypeToString(ServerType::Type type)

return type_name.data();
}

bool ServerType::shouldStart(Type server_type, const std::string & custom_name_) const
bool ServerType::shouldStart(Type server_type, const std::string & server_custom_name) const
{
if (type == Type::QUERIES_ALL)
return true;

@ -77,13 +77,15 @@ bool ServerType::shouldStart(Type server_type, const std::string & custom_name_)

}
}

return type == server_type && custom_name == custom_name_;
if (type == Type::CUSTOM)
return server_type == type && server_custom_name == "protocols." + custom_name + ".port";

return server_type == type;
}

bool ServerType::shouldStop(const std::string & port_name) const
{
Type port_type;
std::string port_custom_name;

if (port_name == "http_port")
port_type = Type::HTTP;

@ -119,20 +121,12 @@ bool ServerType::shouldStop(const std::string & port_name) const

port_type = Type::INTERSERVER_HTTPS;

else if (port_name.starts_with("protocols.") && port_name.ends_with(".port"))
{
constexpr size_t protocols_size = std::string_view("protocols.").size();
constexpr size_t port_size = std::string_view("protocols.").size();

port_type = Type::CUSTOM;
port_custom_name = port_name.substr(protocols_size, port_name.size() - port_size);
}
else
port_type = Type::UNKNOWN;

if (port_type == Type::UNKNOWN)
else
return false;

return shouldStart(type, port_custom_name);
return shouldStart(port_type, port_name);
}

}
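Hedged sketch of the mapping shouldStop() now relies on (illustrative; the constructor shape below is an assumption): a custom protocol named <name> corresponds to the port key "protocols.<name>.port", so stop decisions compare full port keys instead of extracting and re-comparing the bare name.

ServerType stop_custom{ServerType::Type::CUSTOM, "mysql_forward"};       // assumed constructor shape
bool stops = stop_custom.shouldStop("protocols.mysql_forward.port");     // expected: true
bool keeps = stop_custom.shouldStop("protocols.other_proto.port");       // expected: false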
@ -10,7 +10,6 @@ public:

enum Type
{
UNKNOWN,
TCP,
TCP_WITH_PROXY,
TCP_SECURE,

@ -34,7 +33,8 @@ public:

static const char * serverTypeToString(Type type);

bool shouldStart(Type server_type, const std::string & custom_name_ = "") const;
/// Checks whether the type provided in the arguments should be started or stopped based on the current server type.
bool shouldStart(Type server_type, const std::string & server_custom_name = "") const;
bool shouldStop(const std::string & port_name) const;

Type type;
@ -184,14 +184,17 @@ void TCPHandler::runImpl()

try
{
receiveHello();

/// In interserver mode queries are executed without a session context.
if (!is_interserver_mode)
session->makeSessionContext();

sendHello();
if (client_tcp_protocol_version >= DBMS_MIN_PROTOCOL_VERSION_WITH_ADDENDUM)
receiveAddendum();

if (!is_interserver_mode) /// In interserver mode queries are executed without a session context.
if (!is_interserver_mode)
{
session->makeSessionContext();

/// If session created, then settings in session context has been updated.
/// So it's better to update the connection settings for flexibility.
extractConnectionSettingsFromContext(session->sessionContext());

@ -1181,7 +1184,6 @@ std::unique_ptr<Session> TCPHandler::makeSession()

res->setClientName(client_name);
res->setClientVersion(client_version_major, client_version_minor, client_version_patch, client_tcp_protocol_version);
res->setConnectionClientVersion(client_version_major, client_version_minor, client_version_patch, client_tcp_protocol_version);
res->setQuotaClientKey(quota_key);
res->setClientInterface(interface);

return res;

@ -1274,11 +1276,10 @@ void TCPHandler::receiveHello()

void TCPHandler::receiveAddendum()
{
if (client_tcp_protocol_version >= DBMS_MIN_PROTOCOL_VERSION_WITH_QUOTA_KEY)
{
readStringBinary(quota_key, *in);
if (!is_interserver_mode)
session->setQuotaClientKey(quota_key);
}

if (!is_interserver_mode)
session->setQuotaClientKey(quota_key);
}

@ -1591,12 +1592,12 @@ void TCPHandler::receiveQuery()

if (query_kind == ClientInfo::QueryKind::INITIAL_QUERY)
{
/// Throw an exception if the passed settings violate the constraints.
query_context->checkSettingsConstraints(settings_changes);
query_context->checkSettingsConstraints(settings_changes, SettingSource::QUERY);
}
else
{
/// Quietly clamp to the constraints if it's not an initial query.
query_context->clampToSettingsConstraints(settings_changes);
query_context->clampToSettingsConstraints(settings_changes, SettingSource::QUERY);
}
query_context->applySettingsChanges(settings_changes);
@ -457,8 +457,11 @@ const ActionsDAG::Node * MergeTreeIndexConditionSet::operatorFromDAG(const Actio

if (arguments_size != 1)
return nullptr;

auto bit_wrapper_function = FunctionFactory::instance().get("__bitWrapperFunc", context);
const auto & bit_wrapper_func_node = result_dag->addFunction(bit_wrapper_function, {arguments[0]}, {});

auto bit_swap_last_two_function = FunctionFactory::instance().get("__bitSwapLastTwo", context);
return &result_dag->addFunction(bit_swap_last_two_function, {arguments[0]}, {});
return &result_dag->addFunction(bit_swap_last_two_function, {&bit_wrapper_func_node}, {});
}
else if (function_name == "and" || function_name == "indexHint" || function_name == "or")
{
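Hedged illustration of the fix above (the exact encoding is an assumption here): the set index evaluates conditions in a two-bit truth encoding, so the raw UInt8 argument of NOT has to be converted by __bitWrapperFunc first, and only then can __bitSwapLastTwo negate it.

#include <cstdint>

uint8_t bitWrapper(uint8_t b)      { return b ? 0b10 : 0b01; }  // bool -> assumed two-bit form
uint8_t bitSwapLastTwo(uint8_t v)  { return static_cast<uint8_t>(((v & 0b01) << 1) | ((v & 0b10) >> 1)); }  // NOT in that form

// Composed as in the patched code: NOT(true) == bitSwapLastTwo(bitWrapper(1)) == 0b01 ("false");
// applying bitSwapLastTwo directly to the raw UInt8, as the old code did, gives a wrong value.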
@ -77,6 +77,8 @@ struct S3Settings

const PartUploadSettings & getUploadSettings() const { return upload_settings; }

void setStorageClassName(const String & storage_class_name) { upload_settings.storage_class_name = storage_class_name; }

RequestSettings() = default;
explicit RequestSettings(const Settings & settings);
explicit RequestSettings(const NamedCollection & collection);
@ -16,6 +16,13 @@ NamesAndTypesList StorageSystemEvents::getNamesAndTypes()

};
}

NamesAndAliases StorageSystemEvents::getNamesAndAliases()
{
return {
{"name", std::make_shared<DataTypeString>(), "event"}
};
}

void StorageSystemEvents::fillData(MutableColumns & res_columns, ContextPtr context, const SelectQueryInfo &) const
{
for (ProfileEvents::Event i = ProfileEvents::Event(0), end = ProfileEvents::end(); i < end; ++i)

@ -17,6 +17,8 @@ public:

static NamesAndTypesList getNamesAndTypes();

static NamesAndAliases getNamesAndAliases();

protected:
using IStorageSystemOneBlock::IStorageSystemOneBlock;

@ -17,6 +17,13 @@ NamesAndTypesList StorageSystemMetrics::getNamesAndTypes()

};
}

NamesAndAliases StorageSystemMetrics::getNamesAndAliases()
{
return {
{"name", std::make_shared<DataTypeString>(), "metric"}
};
}

void StorageSystemMetrics::fillData(MutableColumns & res_columns, ContextPtr, const SelectQueryInfo &) const
{
for (size_t i = 0, end = CurrentMetrics::end(); i < end; ++i)

@ -18,6 +18,8 @@ public:

static NamesAndTypesList getNamesAndTypes();

static NamesAndAliases getNamesAndAliases();

protected:
using IStorageSystemOneBlock::IStorageSystemOneBlock;
@ -11,7 +11,6 @@

00927_asof_joins
00940_order_by_read_in_order_query_plan
00945_bloom_filter_index
00979_set_index_not
00981_in_subquery_with_tuple
01049_join_low_card_bug_long
01062_pm_all_join_with_block_continuation
@ -35,6 +35,11 @@ from version_helper import (

get_version_from_repo,
update_version_local,
)
from clickhouse_helper import (
ClickHouseHelper,
prepare_tests_results_for_clickhouse,
)
from stopwatch import Stopwatch

IMAGE_NAME = "clickhouse/binary-builder"
BUILD_LOG_NAME = "build_log.log"

@ -268,6 +273,7 @@ def mark_failed_reports_pending(build_name: str, pr_info: PRInfo) -> None:

def main():
logging.basicConfig(level=logging.INFO)

stopwatch = Stopwatch()
build_name = sys.argv[1]

build_config = CI_CONFIG["build_config"][build_name]

@ -394,7 +400,20 @@ def main():

)

upload_master_static_binaries(pr_info, build_config, s3_helper, build_output_path)
# Fail build job if not successeded

ch_helper = ClickHouseHelper()
prepared_events = prepare_tests_results_for_clickhouse(
pr_info,
[],
"success" if success else "failure",
stopwatch.duration_seconds,
stopwatch.start_time_str,
log_url,
f"Build ({build_name})",
)
ch_helper.insert_events_into(db="default", table="checks", events=prepared_events)

# Fail the build job if it didn't succeed
if not success:
sys.exit(1)
@ -346,7 +346,7 @@ CI_CONFIG = {
"Compatibility check (aarch64)": {
"required_build": "package_aarch64",
},
"Unit tests (release-clang)": {
"Unit tests (release)": {
"required_build": "binary_release",
},
"Unit tests (asan)": {

@ -509,7 +509,7 @@ REQUIRED_CHECKS = [
"Style Check",
"Unit tests (asan)",
"Unit tests (msan)",
"Unit tests (release-clang)",
"Unit tests (release)",
"Unit tests (tsan)",
"Unit tests (ubsan)",
]
@ -7,11 +7,18 @@ import urllib.parse
|
||||
import http.server
|
||||
import socketserver
|
||||
import string
|
||||
import socket
|
||||
import struct
|
||||
|
||||
|
||||
INF_COUNT = 100000000
|
||||
|
||||
|
||||
def _and_then(value, func):
|
||||
assert callable(func)
|
||||
return None if value is None else func(value)
|
||||
|
||||
|
||||
class MockControl:
|
||||
def __init__(self, cluster, container, port):
|
||||
self._cluster = cluster
|
||||
@ -30,8 +37,8 @@ class MockControl:
|
||||
)
|
||||
assert response == "OK", response
|
||||
|
||||
def setup_error_at_object_upload(self, count=None, after=None):
|
||||
url = f"http://localhost:{self._port}/mock_settings/error_at_object_upload?nothing=1"
|
||||
def setup_action(self, when, count=None, after=None, action=None, action_args=None):
|
||||
url = f"http://localhost:{self._port}/mock_settings/{when}?nothing=1"
|
||||
|
||||
if count is not None:
|
||||
url += f"&count={count}"
|
||||
@ -39,25 +46,12 @@ class MockControl:
|
||||
if after is not None:
|
||||
url += f"&after={after}"
|
||||
|
||||
response = self._cluster.exec_in_container(
|
||||
self._cluster.get_container_id(self._container),
|
||||
[
|
||||
"curl",
|
||||
"-s",
|
||||
url,
|
||||
],
|
||||
nothrow=True,
|
||||
)
|
||||
assert response == "OK", response
|
||||
if action is not None:
|
||||
url += f"&action={action}"
|
||||
|
||||
def setup_error_at_part_upload(self, count=None, after=None):
|
||||
url = f"http://localhost:{self._port}/mock_settings/error_at_part_upload?nothing=1"
|
||||
|
||||
if count is not None:
|
||||
url += f"&count={count}"
|
||||
|
||||
if after is not None:
|
||||
url += f"&after={after}"
|
||||
if action_args is not None:
|
||||
for x in action_args:
|
||||
url += f"&action_args={x}"
|
||||
|
||||
response = self._cluster.exec_in_container(
|
||||
self._cluster.get_container_id(self._container),
|
||||
@ -70,22 +64,14 @@ class MockControl:
|
||||
)
|
||||
assert response == "OK", response
|
||||
|
||||
def setup_error_at_create_multi_part_upload(self, count=None):
|
||||
url = f"http://localhost:{self._port}/mock_settings/error_at_create_multi_part_upload"
|
||||
def setup_at_object_upload(self, **kwargs):
|
||||
self.setup_action("at_object_upload", **kwargs)
|
||||
|
||||
if count is not None:
|
||||
url += f"?count={count}"
|
||||
def setup_at_part_upload(self, **kwargs):
|
||||
self.setup_action("at_part_upload", **kwargs)
|
||||
|
||||
response = self._cluster.exec_in_container(
|
||||
self._cluster.get_container_id(self._container),
|
||||
[
|
||||
"curl",
|
||||
"-s",
|
||||
url,
|
||||
],
|
||||
nothrow=True,
|
||||
)
|
||||
assert response == "OK", response
|
||||
def setup_at_create_multi_part_upload(self, **kwargs):
|
||||
self.setup_action("at_create_multi_part_upload", **kwargs)
|
||||
|
||||
def setup_fake_puts(self, part_length):
|
||||
response = self._cluster.exec_in_container(
|
||||
@ -140,8 +126,14 @@ class MockControl:
|
||||
class _ServerRuntime:
|
||||
class SlowPut:
|
||||
def __init__(
|
||||
self, probability_=None, timeout_=None, minimal_length_=None, count_=None
|
||||
self,
|
||||
lock,
|
||||
probability_=None,
|
||||
timeout_=None,
|
||||
minimal_length_=None,
|
||||
count_=None,
|
||||
):
|
||||
self.lock = lock
|
||||
self.probability = probability_ if probability_ is not None else 1
|
||||
self.timeout = timeout_ if timeout_ is not None else 0.1
|
||||
self.minimal_length = minimal_length_ if minimal_length_ is not None else 0
|
||||
@ -156,42 +148,135 @@ class _ServerRuntime:
|
||||
)
|
||||
|
||||
def get_timeout(self, content_length):
|
||||
if content_length > self.minimal_length:
|
||||
if self.count > 0:
|
||||
if (
|
||||
_runtime.slow_put.probability == 1
|
||||
or random.random() <= _runtime.slow_put.probability
|
||||
):
|
||||
self.count -= 1
|
||||
return _runtime.slow_put.timeout
|
||||
with self.lock:
|
||||
if content_length > self.minimal_length:
|
||||
if self.count > 0:
|
||||
if (
|
||||
_runtime.slow_put.probability == 1
|
||||
or random.random() <= _runtime.slow_put.probability
|
||||
):
|
||||
self.count -= 1
|
||||
return _runtime.slow_put.timeout
|
||||
return None
|
||||
|
||||
class Expected500ErrorAction:
|
||||
def inject_error(self, request_handler):
|
||||
data = (
|
||||
'<?xml version="1.0" encoding="UTF-8"?>'
|
||||
"<Error>"
|
||||
"<Code>ExpectedError</Code>"
|
||||
"<Message>mock s3 injected error</Message>"
|
||||
"<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
|
||||
"</Error>"
|
||||
)
|
||||
request_handler.write_error(data)
|
||||
|
||||
class RedirectAction:
|
||||
def __init__(self, host="localhost", port=1):
|
||||
self.dst_host = _and_then(host, str)
|
||||
self.dst_port = _and_then(port, int)
|
||||
|
||||
def inject_error(self, request_handler):
|
||||
request_handler.redirect(host=self.dst_host, port=self.dst_port)
|
||||
|
||||
class ConnectionResetByPeerAction:
|
||||
def __init__(self, with_partial_data=None):
|
||||
self.partial_data = ""
|
||||
if with_partial_data is not None and with_partial_data == "1":
|
||||
self.partial_data = (
|
||||
'<?xml version="1.0" encoding="UTF-8"?>\n'
|
||||
"<InitiateMultipartUploadResult>\n"
|
||||
)
|
||||
|
||||
def inject_error(self, request_handler):
|
||||
request_handler.read_all_input()
|
||||
|
||||
if self.partial_data:
|
||||
request_handler.send_response(200)
|
||||
request_handler.send_header("Content-Type", "text/xml")
|
||||
request_handler.send_header("Content-Length", 10000)
|
||||
request_handler.end_headers()
|
||||
request_handler.wfile.write(bytes(self.partial_data, "UTF-8"))
|
||||
|
||||
time.sleep(1)
|
||||
request_handler.connection.setsockopt(
|
||||
socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0)
|
||||
)
|
||||
request_handler.connection.close()
|
||||
|
||||
class BrokenPipeAction:
|
||||
def inject_error(self, request_handler):
|
||||
# partial read
|
||||
self.rfile.read(50)
|
||||
|
||||
time.sleep(1)
|
||||
request_handler.connection.setsockopt(
|
||||
socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0)
|
||||
)
|
||||
request_handler.connection.close()
|
||||
|
||||
class ConnectionRefusedAction(RedirectAction):
|
||||
pass
|
||||
|
||||
class CountAfter:
|
||||
def __init__(self, count_=None, after_=None):
|
||||
def __init__(
|
||||
self, lock, count_=None, after_=None, action_=None, action_args_=[]
|
||||
):
|
||||
self.lock = lock
|
||||
|
||||
self.count = count_ if count_ is not None else INF_COUNT
|
||||
self.after = after_ if after_ is not None else 0
|
||||
self.action = action_
|
||||
self.action_args = action_args_
|
||||
|
||||
if self.action == "connection_refused":
|
||||
self.error_handler = _ServerRuntime.ConnectionRefusedAction()
|
||||
elif self.action == "connection_reset_by_peer":
|
||||
self.error_handler = _ServerRuntime.ConnectionResetByPeerAction(
|
||||
*self.action_args
|
||||
)
|
||||
elif self.action == "broken_pipe":
|
||||
self.error_handler = _ServerRuntime.BrokenPipeAction()
|
||||
elif self.action == "redirect_to":
|
||||
self.error_handler = _ServerRuntime.RedirectAction(*self.action_args)
|
||||
else:
|
||||
self.error_handler = _ServerRuntime.Expected500ErrorAction()
|
||||
|
||||
@staticmethod
|
||||
def from_cgi_params(lock, params):
|
||||
return _ServerRuntime.CountAfter(
|
||||
lock=lock,
|
||||
count_=_and_then(params.get("count", [None])[0], int),
|
||||
after_=_and_then(params.get("after", [None])[0], int),
|
||||
action_=params.get("action", [None])[0],
|
||||
action_args_=params.get("action_args", []),
|
||||
)
|
||||
|
||||
def __str__(self):
|
||||
return f"count:{self.count} after:{self.after}"
|
||||
return f"count:{self.count} after:{self.after} action:{self.action} action_args:{self.action_args}"
|
||||
|
||||
def has_effect(self):
|
||||
if self.after:
|
||||
self.after -= 1
|
||||
if self.after == 0:
|
||||
if self.count:
|
||||
self.count -= 1
|
||||
return True
|
||||
return False
|
||||
with self.lock:
|
||||
if self.after:
|
||||
self.after -= 1
|
||||
if self.after == 0:
|
||||
if self.count:
|
||||
self.count -= 1
|
||||
return True
|
||||
return False
|
||||
|
||||
def inject_error(self, request_handler):
|
||||
self.error_handler.inject_error(request_handler)
|
||||
|
||||
def __init__(self):
|
||||
self.lock = threading.Lock()
|
||||
self.error_at_part_upload = None
|
||||
self.error_at_object_upload = None
|
||||
self.at_part_upload = None
|
||||
self.at_object_upload = None
|
||||
self.fake_put_when_length_bigger = None
|
||||
self.fake_uploads = dict()
|
||||
self.slow_put = None
|
||||
self.fake_multipart_upload = None
|
||||
self.error_at_create_multi_part_upload = None
|
||||
self.at_create_multi_part_upload = None
|
||||
|
||||
def register_fake_upload(self, upload_id, key):
|
||||
with self.lock:
|
||||
@ -205,23 +290,18 @@ class _ServerRuntime:
|
||||
|
||||
def reset(self):
|
||||
with self.lock:
|
||||
self.error_at_part_upload = None
|
||||
self.error_at_object_upload = None
|
||||
self.at_part_upload = None
|
||||
self.at_object_upload = None
|
||||
self.fake_put_when_length_bigger = None
|
||||
self.fake_uploads = dict()
|
||||
self.slow_put = None
|
||||
self.fake_multipart_upload = None
|
||||
self.error_at_create_multi_part_upload = None
|
||||
self.at_create_multi_part_upload = None
|
||||
|
||||
|
||||
_runtime = _ServerRuntime()
|
||||
|
||||
|
||||
def _and_then(value, func):
|
||||
assert callable(func)
|
||||
return None if value is None else func(value)
|
||||
|
||||
|
||||
def get_random_string(length):
|
||||
# choose from all lowercase letter
|
||||
letters = string.ascii_lowercase
|
||||
@ -239,7 +319,7 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
def _ping(self):
|
||||
self._ok()
|
||||
|
||||
def _read_out(self):
|
||||
def read_all_input(self):
|
||||
content_length = int(self.headers.get("Content-Length", 0))
|
||||
to_read = content_length
|
||||
while to_read > 0:
|
||||
@ -250,36 +330,36 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
str(self.rfile.read(size))
|
||||
to_read -= size
|
||||
|
||||
def _redirect(self):
|
||||
self._read_out()
|
||||
def redirect(self, host=None, port=None):
|
||||
if host is None and port is None:
|
||||
host = self.server.upstream_host
|
||||
port = self.server.upstream_port
|
||||
|
||||
self.read_all_input()
|
||||
|
||||
self.send_response(307)
|
||||
url = (
|
||||
f"http://{self.server.upstream_host}:{self.server.upstream_port}{self.path}"
|
||||
)
|
||||
url = f"http://{host}:{port}{self.path}"
|
||||
self.log_message("redirect to %s", url)
|
||||
self.send_header("Location", url)
|
||||
self.end_headers()
|
||||
self.wfile.write(b"Redirected")
|
||||
|
||||
def _error(self, data):
|
||||
self._read_out()
|
||||
def write_error(self, data, content_length=None):
|
||||
if content_length is None:
|
||||
content_length = len(data)
|
||||
self.log_message("write_error %s", data)
|
||||
self.read_all_input()
|
||||
self.send_response(500)
|
||||
self.send_header("Content-Type", "text/xml")
|
||||
self.send_header("Content-Length", str(content_length))
|
||||
self.end_headers()
|
||||
self.wfile.write(bytes(data, "UTF-8"))
|
||||
|
||||
def _error_expected_500(self):
|
||||
self._error(
|
||||
'<?xml version="1.0" encoding="UTF-8"?>'
|
||||
"<Error>"
|
||||
"<Code>ExpectedError</Code>"
|
||||
"<Message>mock s3 injected error</Message>"
|
||||
"<RequestId>txfbd566d03042474888193-00608d7537</RequestId>"
|
||||
"</Error>"
|
||||
)
|
||||
if data:
|
||||
self.wfile.write(bytes(data, "UTF-8"))
|
||||
|
||||
def _fake_put_ok(self):
|
||||
self._read_out()
|
||||
self.log_message("fake put")
|
||||
|
||||
self.read_all_input()
|
||||
|
||||
self.send_response(200)
|
||||
self.send_header("Content-Type", "text/xml")
|
||||
@ -288,7 +368,7 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
self.end_headers()
|
||||
|
||||
def _fake_uploads(self, path, upload_id):
|
||||
self._read_out()
|
||||
self.read_all_input()
|
||||
|
||||
parts = [x for x in path.split("/") if x]
|
||||
bucket = parts[0]
|
||||
@ -310,7 +390,7 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
self.wfile.write(bytes(data, "UTF-8"))
|
||||
|
||||
def _fake_post_ok(self, path):
|
||||
self._read_out()
|
||||
self.read_all_input()
|
||||
|
||||
parts = [x for x in path.split("/") if x]
|
||||
bucket = parts[0]
|
||||
@ -338,22 +418,22 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
path = [x for x in parts.path.split("/") if x]
|
||||
assert path[0] == "mock_settings", path
|
||||
if len(path) < 2:
|
||||
return self._error("_mock_settings: wrong command")
|
||||
return self.write_error("_mock_settings: wrong command")
|
||||
|
||||
if path[1] == "error_at_part_upload":
|
||||
if path[1] == "at_part_upload":
|
||||
params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
|
||||
_runtime.error_at_part_upload = _ServerRuntime.CountAfter(
|
||||
count_=_and_then(params.get("count", [None])[0], int),
|
||||
after_=_and_then(params.get("after", [None])[0], int),
|
||||
_runtime.at_part_upload = _ServerRuntime.CountAfter.from_cgi_params(
|
||||
_runtime.lock, params
|
||||
)
|
||||
self.log_message("set at_part_upload %s", _runtime.at_part_upload)
|
||||
return self._ok()
|
||||
|
||||
if path[1] == "error_at_object_upload":
|
||||
if path[1] == "at_object_upload":
|
||||
params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
|
||||
_runtime.error_at_object_upload = _ServerRuntime.CountAfter(
|
||||
count_=_and_then(params.get("count", [None])[0], int),
|
||||
after_=_and_then(params.get("after", [None])[0], int),
|
||||
_runtime.at_object_upload = _ServerRuntime.CountAfter.from_cgi_params(
|
||||
_runtime.lock, params
|
||||
)
|
||||
self.log_message("set at_object_upload %s", _runtime.at_object_upload)
|
||||
return self._ok()
|
||||
|
||||
if path[1] == "fake_puts":
|
||||
@ -361,11 +441,13 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
_runtime.fake_put_when_length_bigger = int(
|
||||
params.get("when_length_bigger", [1024 * 1024])[0]
|
||||
)
|
||||
self.log_message("set fake_puts %s", _runtime.fake_put_when_length_bigger)
|
||||
return self._ok()
|
||||
|
||||
if path[1] == "slow_put":
|
||||
params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
|
||||
_runtime.slow_put = _ServerRuntime.SlowPut(
|
||||
lock=_runtime.lock,
|
||||
minimal_length_=_and_then(params.get("minimal_length", [None])[0], int),
|
||||
probability_=_and_then(params.get("probability", [None])[0], float),
|
||||
timeout_=_and_then(params.get("timeout", [None])[0], float),
|
||||
@ -376,20 +458,26 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
|
||||
if path[1] == "setup_fake_multpartuploads":
|
||||
_runtime.fake_multipart_upload = True
|
||||
self.log_message("set setup_fake_multpartuploads")
|
||||
return self._ok()
|
||||
|
||||
if path[1] == "error_at_create_multi_part_upload":
|
||||
if path[1] == "at_create_multi_part_upload":
|
||||
params = urllib.parse.parse_qs(parts.query, keep_blank_values=False)
|
||||
_runtime.error_at_create_multi_part_upload = int(
|
||||
params.get("count", [INF_COUNT])[0]
|
||||
_runtime.at_create_multi_part_upload = (
|
||||
_ServerRuntime.CountAfter.from_cgi_params(_runtime.lock, params)
|
||||
)
|
||||
self.log_message(
|
||||
"set at_create_multi_part_upload %s",
|
||||
_runtime.at_create_multi_part_upload,
|
||||
)
|
||||
return self._ok()
|
||||
|
||||
if path[1] == "reset":
|
||||
_runtime.reset()
|
||||
self.log_message("reset")
|
||||
return self._ok()
|
||||
|
||||
return self._error("_mock_settings: wrong command")
|
||||
return self.write_error("_mock_settings: wrong command")
|
||||
|
||||
def do_GET(self):
|
||||
if self.path == "/":
|
||||
@ -398,7 +486,8 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
if self.path.startswith("/mock_settings"):
|
||||
return self._mock_settings()
|
||||
|
||||
return self._redirect()
|
||||
self.log_message("get redirect")
|
||||
return self.redirect()
|
||||
|
||||
def do_PUT(self):
|
||||
content_length = int(self.headers.get("Content-Length", 0))
|
||||
@ -414,30 +503,52 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
upload_id = params.get("uploadId", [None])[0]
|
||||
|
||||
if upload_id is not None:
|
||||
if _runtime.error_at_part_upload is not None:
|
||||
if _runtime.error_at_part_upload.has_effect():
|
||||
return self._error_expected_500()
|
||||
if _runtime.at_part_upload is not None:
|
||||
self.log_message(
|
||||
"put at_part_upload %s, %s, %s",
|
||||
_runtime.at_part_upload,
|
||||
upload_id,
|
||||
parts,
|
||||
)
|
||||
|
||||
if _runtime.at_part_upload.has_effect():
|
||||
return _runtime.at_part_upload.inject_error(self)
|
||||
if _runtime.fake_multipart_upload:
|
||||
if _runtime.is_fake_upload(upload_id, parts.path):
|
||||
return self._fake_put_ok()
|
||||
else:
|
||||
if _runtime.error_at_object_upload is not None:
|
||||
if _runtime.error_at_object_upload.has_effect():
|
||||
return self._error_expected_500()
|
||||
if _runtime.at_object_upload is not None:
|
||||
if _runtime.at_object_upload.has_effect():
|
||||
self.log_message(
|
||||
"put error_at_object_upload %s, %s",
|
||||
_runtime.at_object_upload,
|
||||
parts,
|
||||
)
|
||||
return _runtime.at_object_upload.inject_error(self)
|
||||
if _runtime.fake_put_when_length_bigger is not None:
|
||||
if content_length > _runtime.fake_put_when_length_bigger:
|
||||
self.log_message(
|
||||
"put fake_put_when_length_bigger %s, %s, %s",
|
||||
_runtime.fake_put_when_length_bigger,
|
||||
content_length,
|
||||
parts,
|
||||
)
|
||||
return self._fake_put_ok()
|
||||
|
||||
return self._redirect()
|
||||
self.log_message(
|
||||
"put redirect %s",
|
||||
parts,
|
||||
)
|
||||
return self.redirect()
|
||||
|
||||
def do_POST(self):
|
||||
parts = urllib.parse.urlsplit(self.path)
|
||||
params = urllib.parse.parse_qs(parts.query, keep_blank_values=True)
|
||||
uploads = params.get("uploads", [None])[0]
|
||||
if uploads is not None:
|
||||
if _runtime.error_at_create_multi_part_upload:
|
||||
_runtime.error_at_create_multi_part_upload -= 1
|
||||
return self._error_expected_500()
|
||||
if _runtime.at_create_multi_part_upload is not None:
|
||||
if _runtime.at_create_multi_part_upload.has_effect():
|
||||
return _runtime.at_create_multi_part_upload.inject_error(self)
|
||||
|
||||
if _runtime.fake_multipart_upload:
|
||||
upload_id = get_random_string(5)
|
||||
@ -448,13 +559,13 @@ class RequestHandler(http.server.BaseHTTPRequestHandler):
|
||||
if _runtime.is_fake_upload(upload_id, parts.path):
|
||||
return self._fake_post_ok(parts.path)
|
||||
|
||||
return self._redirect()
|
||||
return self.redirect()
|
||||
|
||||
def do_HEAD(self):
|
||||
self._redirect()
|
||||
self.redirect()
|
||||
|
||||
def do_DELETE(self):
|
||||
self._redirect()
|
||||
self.redirect()
|
||||
|
||||
|
||||
class _ThreadedHTTPServer(socketserver.ThreadingMixIn, http.server.HTTPServer):
|
||||
|
@ -48,6 +48,7 @@
|
||||
"test_system_metrics/test.py::test_readonly_metrics",
|
||||
"test_system_replicated_fetches/test.py::test_system_replicated_fetches",
|
||||
"test_zookeeper_config_load_balancing/test.py::test_round_robin",
|
||||
"test_zookeeper_fallback_session/test.py::test_fallback_session",
|
||||
|
||||
"test_global_overcommit_tracker/test.py::test_global_overcommit",
|
||||
|
||||
@ -81,5 +82,15 @@
|
||||
"test_system_flush_logs/test.py::test_log_buffer_size_rows_flush_threshold",
|
||||
"test_system_flush_logs/test.py::test_log_max_size",
|
||||
"test_crash_log/test.py::test_pkill_query_log",
|
||||
"test_crash_log/test.py::test_pkill"
|
||||
"test_crash_log/test.py::test_pkill",
|
||||
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_tcp",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_postgres",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_mysql",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_http",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_http_named_session",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_grpc",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_tcp_and_others",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_tcp",
|
||||
"test_profile_max_sessions_for_user/test.py::test_profile_max_sessions_for_user_end_session"
|
||||
]
|
||||
|
@ -218,22 +218,32 @@ def test_delete_race_leftovers(cluster):
|
||||
time.sleep(5)
|
||||
|
||||
# Check that we correctly deleted all outdated parts and no leftovers on s3
|
||||
known_remote_paths = set(
|
||||
node.query(
|
||||
f"SELECT remote_path FROM system.remote_data_paths WHERE disk_name = 's32'"
|
||||
).splitlines()
|
||||
)
|
||||
|
||||
all_remote_paths = set(
|
||||
obj.object_name
|
||||
for obj in cluster.minio_client.list_objects(
|
||||
cluster.minio_bucket, "data2/", recursive=True
|
||||
# Do it with retries because we delete blobs in the background
|
||||
# and it can be race condition between removing from remote_data_paths and deleting blobs
|
||||
all_remote_paths = set()
|
||||
known_remote_paths = set()
|
||||
for i in range(3):
|
||||
known_remote_paths = set(
|
||||
node.query(
|
||||
f"SELECT remote_path FROM system.remote_data_paths WHERE disk_name = 's32'"
|
||||
).splitlines()
|
||||
)
|
||||
)
|
||||
|
||||
# Some blobs can be deleted after we listed remote_data_paths
|
||||
# It's alright, thus we check only that all remote paths are known
|
||||
# (in other words, all remote paths is subset of known paths)
|
||||
all_remote_paths = set(
|
||||
obj.object_name
|
||||
for obj in cluster.minio_client.list_objects(
|
||||
cluster.minio_bucket, "data2/", recursive=True
|
||||
)
|
||||
)
|
||||
|
||||
# Some blobs can be deleted after we listed remote_data_paths
|
||||
# It's alright, thus we check only that all remote paths are known
|
||||
# (in other words, all remote paths is subset of known paths)
|
||||
if all_remote_paths == {p for p in known_remote_paths if p in all_remote_paths}:
|
||||
break
|
||||
|
||||
time.sleep(1)
|
||||
|
||||
assert all_remote_paths == {p for p in known_remote_paths if p in all_remote_paths}
|
||||
|
||||
# Check that we have all data
|
||||
|
tests/integration/test_backup_s3_storage_class/test.py (new file, 47 lines)
@ -0,0 +1,47 @@
|
||||
import pytest
|
||||
from helpers.cluster import ClickHouseCluster
|
||||
|
||||
cluster = ClickHouseCluster(__file__)
|
||||
node = cluster.add_instance(
|
||||
"node",
|
||||
stay_alive=True,
|
||||
with_minio=True,
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def started_cluster():
|
||||
try:
|
||||
cluster.start()
|
||||
yield cluster
|
||||
finally:
|
||||
cluster.shutdown()
|
||||
|
||||
|
||||
def test_backup_s3_storage_class(started_cluster):
|
||||
node.query(
|
||||
"""
|
||||
CREATE TABLE test_s3_storage_class
|
||||
(
|
||||
`id` UInt64,
|
||||
`value` String
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id;
|
||||
""",
|
||||
)
|
||||
node.query(
|
||||
"""
|
||||
INSERT INTO test_s3_storage_class VALUES (1, 'a');
|
||||
""",
|
||||
)
|
||||
result = node.query(
|
||||
"""
|
||||
BACKUP TABLE test_s3_storage_class TO S3('http://minio1:9001/root/data', 'minio', 'minio123')
|
||||
SETTINGS s3_storage_class='STANDARD';
|
||||
"""
|
||||
)
|
||||
|
||||
minio = cluster.minio_client
|
||||
lst = list(minio.list_objects(cluster.minio_bucket, "data/.backup"))
|
||||
assert lst[0].storage_class == "STANDARD"
|
@ -91,7 +91,7 @@ def get_counters(node, query_id, log_type="ExceptionWhileProcessing"):
|
||||
def test_upload_s3_fail_create_multi_part_upload(cluster, broken_s3, compression):
|
||||
node = cluster.instances["node"]
|
||||
|
||||
broken_s3.setup_error_at_create_multi_part_upload()
|
||||
broken_s3.setup_at_create_multi_part_upload()
|
||||
|
||||
insert_query_id = f"INSERT_INTO_TABLE_FUNCTION_FAIL_CREATE_MPU_{compression}"
|
||||
error = node.query_and_get_error(
|
||||
@ -134,7 +134,7 @@ def test_upload_s3_fail_upload_part_when_multi_part_upload(
|
||||
node = cluster.instances["node"]
|
||||
|
||||
broken_s3.setup_fake_multpartuploads()
|
||||
broken_s3.setup_error_at_part_upload(count=1, after=2)
|
||||
broken_s3.setup_at_part_upload(count=1, after=2)
|
||||
|
||||
insert_query_id = f"INSERT_INTO_TABLE_FUNCTION_FAIL_UPLOAD_PART_{compression}"
|
||||
error = node.query_and_get_error(
|
||||
@ -165,3 +165,302 @@ def test_upload_s3_fail_upload_part_when_multi_part_upload(
|
||||
assert count_create_multi_part_uploads == 1
|
||||
assert count_upload_parts >= 2
|
||||
assert count_s3_errors >= 2
|
||||
|
||||
|
||||
def test_when_s3_connection_refused_is_retried(cluster, broken_s3):
|
||||
node = cluster.instances["node"]
|
||||
|
||||
broken_s3.setup_fake_multpartuploads()
|
||||
broken_s3.setup_at_part_upload(count=3, after=2, action="connection_refused")
|
||||
|
||||
insert_query_id = f"INSERT_INTO_TABLE_FUNCTION_CONNECTION_REFUSED_RETRIED"
|
||||
node.query(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_connection_refused_at_write_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=100,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
count_create_multi_part_uploads, count_upload_parts, count_s3_errors = get_counters(
|
||||
node, insert_query_id, log_type="QueryFinish"
|
||||
)
|
||||
assert count_create_multi_part_uploads == 1
|
||||
assert count_upload_parts == 39
|
||||
assert count_s3_errors == 3
|
||||
|
||||
broken_s3.setup_at_part_upload(count=1000, after=2, action="connection_refused")
|
||||
insert_query_id = f"INSERT_INTO_TABLE_FUNCTION_CONNECTION_REFUSED_RETRIED_1"
|
||||
error = node.query_and_get_error(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_connection_refused_at_write_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=100,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
assert "Code: 499" in error, error
|
||||
assert (
|
||||
"Poco::Exception. Code: 1000, e.code() = 111, Connection refused" in error
|
||||
), error
|
||||
|
||||
|
||||
@pytest.mark.parametrize("send_something", [True, False])
|
||||
def test_when_s3_connection_reset_by_peer_at_upload_is_retried(
|
||||
cluster, broken_s3, send_something
|
||||
):
|
||||
node = cluster.instances["node"]
|
||||
|
||||
broken_s3.setup_fake_multpartuploads()
|
||||
broken_s3.setup_at_part_upload(
|
||||
count=3,
|
||||
after=2,
|
||||
action="connection_reset_by_peer",
|
||||
action_args=["1"] if send_something else ["0"],
|
||||
)
|
||||
|
||||
insert_query_id = (
|
||||
f"TEST_WHEN_S3_CONNECTION_RESET_BY_PEER_AT_UPLOAD_{send_something}"
|
||||
)
|
||||
node.query(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_connection_reset_by_peer_at_upload_is_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=100,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
count_create_multi_part_uploads, count_upload_parts, count_s3_errors = get_counters(
|
||||
node, insert_query_id, log_type="QueryFinish"
|
||||
)
|
||||
|
||||
assert count_create_multi_part_uploads == 1
|
||||
assert count_upload_parts == 39
|
||||
assert count_s3_errors == 3
|
||||
|
||||
broken_s3.setup_at_part_upload(
|
||||
count=1000,
|
||||
after=2,
|
||||
action="connection_reset_by_peer",
|
||||
action_args=["1"] if send_something else ["0"],
|
||||
)
|
||||
insert_query_id = (
|
||||
f"TEST_WHEN_S3_CONNECTION_RESET_BY_PEER_AT_UPLOAD_{send_something}_1"
|
||||
)
|
||||
error = node.query_and_get_error(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_connection_reset_by_peer_at_upload_is_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=100,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
assert "Code: 1000" in error, error
|
||||
assert (
|
||||
"DB::Exception: Connection reset by peer." in error
|
||||
or "DB::Exception: Poco::Exception. Code: 1000, e.code() = 104, Connection reset by peer"
|
||||
in error
|
||||
), error
|
||||
|
||||
|
||||
@pytest.mark.parametrize("send_something", [True, False])
|
||||
def test_when_s3_connection_reset_by_peer_at_create_mpu_retried(
|
||||
cluster, broken_s3, send_something
|
||||
):
|
||||
node = cluster.instances["node"]
|
||||
|
||||
broken_s3.setup_fake_multpartuploads()
|
||||
broken_s3.setup_at_create_multi_part_upload(
|
||||
count=3,
|
||||
after=0,
|
||||
action="connection_reset_by_peer",
|
||||
action_args=["1"] if send_something else ["0"],
|
||||
)
|
||||
|
||||
insert_query_id = (
|
||||
f"TEST_WHEN_S3_CONNECTION_RESET_BY_PEER_AT_MULTIPARTUPLOAD_{send_something}"
|
||||
)
|
||||
node.query(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_connection_reset_by_peer_at_create_mpu_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=100,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
count_create_multi_part_uploads, count_upload_parts, count_s3_errors = get_counters(
|
||||
node, insert_query_id, log_type="QueryFinish"
|
||||
)
|
||||
|
||||
assert count_create_multi_part_uploads == 1
|
||||
assert count_upload_parts == 39
|
||||
assert count_s3_errors == 3
|
||||
|
||||
broken_s3.setup_at_create_multi_part_upload(
|
||||
count=1000,
|
||||
after=0,
|
||||
action="connection_reset_by_peer",
|
||||
action_args=["1"] if send_something else ["0"],
|
||||
)
|
||||
|
||||
insert_query_id = (
|
||||
f"TEST_WHEN_S3_CONNECTION_RESET_BY_PEER_AT_MULTIPARTUPLOAD_{send_something}_1"
|
||||
)
|
||||
error = node.query_and_get_error(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_connection_reset_by_peer_at_create_mpu_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=100,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
assert "Code: 1000" in error, error
|
||||
assert (
|
||||
"DB::Exception: Connection reset by peer." in error
|
||||
or "DB::Exception: Poco::Exception. Code: 1000, e.code() = 104, Connection reset by peer"
|
||||
in error
|
||||
), error
|
||||
|
||||
|
||||
def test_when_s3_broken_pipe_at_upload_is_retried(cluster, broken_s3):
|
||||
node = cluster.instances["node"]
|
||||
|
||||
broken_s3.setup_fake_multpartuploads()
|
||||
broken_s3.setup_at_part_upload(
|
||||
count=3,
|
||||
after=2,
|
||||
action="broken_pipe",
|
||||
)
|
||||
|
||||
insert_query_id = f"TEST_WHEN_S3_BROKEN_PIPE_AT_UPLOAD"
|
||||
node.query(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_broken_pipe_at_upload_is_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=1000000,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
count_create_multi_part_uploads, count_upload_parts, count_s3_errors = get_counters(
|
||||
node, insert_query_id, log_type="QueryFinish"
|
||||
)
|
||||
|
||||
assert count_create_multi_part_uploads == 1
|
||||
assert count_upload_parts == 7
|
||||
assert count_s3_errors == 3
|
||||
|
||||
broken_s3.setup_at_part_upload(
|
||||
count=1000,
|
||||
after=2,
|
||||
action="broken_pipe",
|
||||
)
|
||||
insert_query_id = f"TEST_WHEN_S3_BROKEN_PIPE_AT_UPLOAD_1"
|
||||
error = node.query_and_get_error(
|
||||
f"""
|
||||
INSERT INTO
|
||||
TABLE FUNCTION s3(
|
||||
'http://resolver:8083/root/data/test_when_s3_broken_pipe_at_upload_is_retried',
|
||||
'minio', 'minio123',
|
||||
'CSV', auto, 'none'
|
||||
)
|
||||
SELECT
|
||||
*
|
||||
FROM system.numbers
|
||||
LIMIT 1000000
|
||||
SETTINGS
|
||||
s3_max_single_part_upload_size=100,
|
||||
s3_min_upload_part_size=1000000,
|
||||
s3_check_objects_after_upload=0
|
||||
""",
|
||||
query_id=insert_query_id,
|
||||
)
|
||||
|
||||
assert "Code: 1000" in error, error
|
||||
assert (
|
||||
"DB::Exception: Poco::Exception. Code: 1000, e.code() = 32, I/O error: Broken pipe"
|
||||
in error
|
||||
), error
|
||||
|
@ -83,6 +83,8 @@ def test_reconfig_replace_leader(started_cluster):
assert "node3" in config
assert "node4" not in config

ku.wait_configs_equal(config, zk2)

with pytest.raises(Exception):
zk1.stop()
zk1.close()
@ -0,0 +1 @@
#!/usr/bin/env python3

@ -0,0 +1,7 @@
<clickhouse>
<profiles>
<default>
<max_untracked_memory>1</max_untracked_memory>
</default>
</profiles>
</clickhouse>

@ -0,0 +1,5 @@
<clickhouse>
<total_memory_tracker_sample_probability>1</total_memory_tracker_sample_probability>
<total_memory_profiler_sample_min_allocation_size>4096</total_memory_profiler_sample_min_allocation_size>
<total_memory_profiler_sample_max_allocation_size>8192</total_memory_profiler_sample_max_allocation_size>
</clickhouse>
@ -0,0 +1,40 @@
|
||||
from helpers.cluster import ClickHouseCluster
|
||||
import pytest
|
||||
|
||||
cluster = ClickHouseCluster(__file__)
|
||||
node = cluster.add_instance(
|
||||
"node",
|
||||
main_configs=["configs/memory_profiler.xml"],
|
||||
user_configs=["configs/max_untracked_memory.xml"],
|
||||
)
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def started_cluster():
|
||||
try:
|
||||
cluster.start()
|
||||
yield cluster
|
||||
|
||||
finally:
|
||||
cluster.shutdown()
|
||||
|
||||
|
||||
def test_trace_boundaries_work(started_cluster):
|
||||
if node.is_built_with_sanitizer():
|
||||
pytest.skip("Disabled for sanitizers")
|
||||
|
||||
node.query("select randomPrintableASCII(number) from numbers(1000) FORMAT Null")
|
||||
node.query("SYSTEM FLUSH LOGS")
|
||||
|
||||
assert (
|
||||
node.query(
|
||||
"SELECT countDistinct(abs(size)) > 0 FROM system.trace_log where trace_type = 'MemorySample'"
|
||||
)
|
||||
== "1\n"
|
||||
)
|
||||
assert (
|
||||
node.query(
|
||||
"SELECT count() FROM system.trace_log where trace_type = 'MemorySample' and (abs(size) > 8192 or abs(size) < 4096)"
|
||||
)
|
||||
== "0\n"
|
||||
)
|
@ -783,9 +783,9 @@ def test_merge_canceled_by_s3_errors(cluster, broken_s3, node_name, storage_poli
|
||||
min_key = node.query("SELECT min(key) FROM test_merge_canceled_by_s3_errors")
|
||||
assert int(min_key) == 0, min_key
|
||||
|
||||
broken_s3.setup_error_at_object_upload()
|
||||
broken_s3.setup_at_object_upload()
|
||||
broken_s3.setup_fake_multpartuploads()
|
||||
broken_s3.setup_error_at_part_upload()
|
||||
broken_s3.setup_at_part_upload()
|
||||
|
||||
node.query("SYSTEM START MERGES test_merge_canceled_by_s3_errors")
|
||||
|
||||
@ -828,7 +828,7 @@ def test_merge_canceled_by_s3_errors_when_move(cluster, broken_s3, node_name):
|
||||
settings={"materialize_ttl_after_modify": 0},
|
||||
)
|
||||
|
||||
broken_s3.setup_error_at_object_upload(count=1, after=1)
|
||||
broken_s3.setup_at_object_upload(count=1, after=1)
|
||||
|
||||
node.query("SYSTEM START MERGES merge_canceled_by_s3_errors_when_move")
|
||||
|
||||
|
tests/integration/test_profile_max_sessions_for_user/__init__.py (new executable file, 0 lines)
tests/integration/test_profile_max_sessions_for_user/configs/dhparam.pem (new executable file, 8 lines)
@ -0,0 +1,8 @@
|
||||
-----BEGIN DH PARAMETERS-----
|
||||
MIIBCAKCAQEAua92DDli13gJ+//ZXyGaggjIuidqB0crXfhUlsrBk9BV1hH3i7fR
|
||||
XGP9rUdk2ubnB3k2ejBStL5oBrkHm9SzUFSQHqfDjLZjKoUpOEmuDc4cHvX1XTR5
|
||||
Pr1vf5cd0yEncJWG5W4zyUB8k++SUdL2qaeslSs+f491HBLDYn/h8zCgRbBvxhxb
|
||||
9qeho1xcbnWeqkN6Kc9bgGozA16P9NLuuLttNnOblkH+lMBf42BSne/TWt3AlGZf
|
||||
slKmmZcySUhF8aKfJnLKbkBCFqOtFRh8zBA9a7g+BT/lSANATCDPaAk1YVih2EKb
|
||||
dpc3briTDbRsiqg2JKMI7+VdULY9bh3EawIBAg==
|
||||
-----END DH PARAMETERS-----
|
@ -0,0 +1,9 @@
|
||||
<clickhouse>
|
||||
<logger>
|
||||
<level>trace</level>
|
||||
<log>/var/log/clickhouse-server/clickhouse-server.log</log>
|
||||
<errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
|
||||
<size>1000M</size>
|
||||
<count>10</count>
|
||||
</logger>
|
||||
</clickhouse>
|
@ -0,0 +1,9 @@
|
||||
<clickhouse>
|
||||
<postgresql_port>5433</postgresql_port>
|
||||
<mysql_port>9001</mysql_port>
|
||||
<grpc_port>9100</grpc_port>
|
||||
<grpc replace="replace">
|
||||
<!-- Enable if you want very detailed logs -->
|
||||
<verbose_logs>false</verbose_logs>
|
||||
</grpc>
|
||||
</clickhouse>
|
tests/integration/test_profile_max_sessions_for_user/configs/server.crt (new executable file, 18 lines)
@ -0,0 +1,18 @@
|
||||
-----BEGIN CERTIFICATE-----
|
||||
MIIC+zCCAeOgAwIBAgIJANhP897Se2gmMA0GCSqGSIb3DQEBCwUAMBQxEjAQBgNV
|
||||
BAMMCWxvY2FsaG9zdDAeFw0yMDA0MTgyMTE2NDBaFw0yMTA0MTgyMTE2NDBaMBQx
|
||||
EjAQBgNVBAMMCWxvY2FsaG9zdDCCASIwDQYJKoZIhvcNAQEBBQADggEPADCCAQoC
|
||||
ggEBAM92kcojQoMsjZ9YGhPMY6h/fDUsZeSKHLxgqE6wbmfU1oZKCPWqnvl+4n0J
|
||||
pnT5h1ETxxYZLepimKq0DEVPUTmCl0xmcKbtUNiaTUKYKsdita6b2vZCX9wUPN9p
|
||||
2Kjnm41l+aZNqIEBhIgHNWg9qowi20y0EIXR79jQLwwaInHAaJLZxVsqY2zjQ/D7
|
||||
1Zh82MXud7iqxBQiEfw9Cz35UFA239R8QTlPkVQfsN1gfLxnLk24QUX3o+hbUI1g
|
||||
nlSpyYDHYQlOmwz8doDs6THHAZNJ4bPE9xHNFpw6dGZdbtH+IKQ/qRZIiOaiNuzJ
|
||||
IOHl6XQDRDkW2LMTiCQ6fjC7Pz8CAwEAAaNQME4wHQYDVR0OBBYEFFvhaA/Eguyf
|
||||
BXkMj8BkNLBqMnz2MB8GA1UdIwQYMBaAFFvhaA/EguyfBXkMj8BkNLBqMnz2MAwG
|
||||
A1UdEwQFMAMBAf8wDQYJKoZIhvcNAQELBQADggEBACeU/oL48eVAKH7NQntHhRaJ
|
||||
ZGeQzKIjrSBjFo8BGXD1nJZhUeFsylLrhCkC8/5/3grE3BNVX9bxcGjO81C9Mn4U
|
||||
t0z13d6ovJjCZSQArtLwgeJGlpH7gNdD3DyT8DQmrqYVnmnB7UmBu45XH1LWGQZr
|
||||
FAOhGRVs6s6mNj8QlLMgdmsOeOQnsGCMdoss8zV9vO2dc4A5SDSSL2mqGGY4Yjtt
|
||||
X+XlEhXXnksGyx8NGVOZX4wcj8WeCAj/lihQ7Zh6XYwZH9i+E46ompUwoziZnNPu
|
||||
2RH63tLNCxkOY2HF5VMlbMmzer3FkhlM6TAZZRPcvSphKPwXK4A33yqc6wnWvpc=
|
||||
-----END CERTIFICATE-----
|
tests/integration/test_profile_max_sessions_for_user/configs/server.key (new executable file, 28 lines)
@ -0,0 +1,28 @@
|
||||
-----BEGIN PRIVATE KEY-----
|
||||
MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDPdpHKI0KDLI2f
|
||||
WBoTzGOof3w1LGXkihy8YKhOsG5n1NaGSgj1qp75fuJ9CaZ0+YdRE8cWGS3qYpiq
|
||||
tAxFT1E5gpdMZnCm7VDYmk1CmCrHYrWum9r2Ql/cFDzfadio55uNZfmmTaiBAYSI
|
||||
BzVoPaqMIttMtBCF0e/Y0C8MGiJxwGiS2cVbKmNs40Pw+9WYfNjF7ne4qsQUIhH8
|
||||
PQs9+VBQNt/UfEE5T5FUH7DdYHy8Zy5NuEFF96PoW1CNYJ5UqcmAx2EJTpsM/HaA
|
||||
7OkxxwGTSeGzxPcRzRacOnRmXW7R/iCkP6kWSIjmojbsySDh5el0A0Q5FtizE4gk
|
||||
On4wuz8/AgMBAAECggEAJ54J2yL+mZQRe2NUn4FBarTloDXZQ1pIgISov1Ybz0Iq
|
||||
sTxEF728XAKp95y3J9Fa0NXJB+RJC2BGrRpy2W17IlNY1yMc0hOxg5t7s4LhcG/e
|
||||
J/jlSG+GZL2MnlFVKXQJFWhq0yIzUmdayqstvLlB7z7cx/n+yb88YRfoVBRNjZEL
|
||||
Tdrsw+087igDjrIxZJ3eMN5Wi434n9s4yAoRQC1bP5wcWx0gD4MzdmL8ip6suiRc
|
||||
LRuBAhV/Op812xlxUhrF5dInUM9OLlGTXpUzexAS8Cyy7S4bfkW2BaCxTF7I7TFw
|
||||
Whx28CKn/G49tIuU0m6AlxWbXpLVePTFyMb7RJz5cQKBgQD7VQd2u3HM6eE3PcXD
|
||||
p6ObdLTUk8OAJ5BMmADFc71W0Epyo26/e8KXKGYGxE2W3fr13y+9b0fl5fxZPuhS
|
||||
MgvXEO7rItAVsLcp0IzaqY0WUee2b4XWPAU0XuPqvjYMpx8H5OEHqFK6lhZysAqM
|
||||
X7Ot3/Hux9X0MC4v5a/HNbDUOQKBgQDTUPaP3ADRrmpmE2sWuzWEnCSEz5f0tCLO
|
||||
wTqhV/UraWUNlAbgK5NB790IjH/gotBSqqNPLJwJh0LUfClKM4LiaHsEag0OArOF
|
||||
GhPMK1Ohps8c2RRsiG8+hxX2HEHeAVbkouEDPDiHdIW/92pBViDoETXL6qxDKbm9
|
||||
LkOcVeDfNwKBgQChh1xsqrvQ/t+IKWNZA/zahH9TwEP9sW/ESkz0mhYuHWA7nV4o
|
||||
ItpFW+l2n+Nd+vy32OFN1p9W2iD9GrklWpTRfEiRRqaFyjVt4mMkhaPvnGRXlAVo
|
||||
Utrldbb1v5ntN9txr2ARE9VXpe53dzzQSxGnxi4vUK/paK3GitAWMCOdwQKBgQCi
|
||||
hmGsUXQb0P6qVYMGr6PAw2re7t8baLRguoMCdqjs45nCMLh9D2apzvb8TTtJJU/+
|
||||
VJlYGqJEPdDrpjcHh8jBo8QBqCM0RGWYGG9jl2syKB6hPGCV/PU6bSE58Y/DVNpk
|
||||
7NUM7PM5UyhPddY2PC0A78Ole29UFLJzSzLa+b4DTwKBgH9Wh2k4YPnPcRrX89UL
|
||||
eSwWa1CGq6HWX8Kd5qyz256aeHWuG5nv15+rBt+D7nwajUsqeVkAXz5H/dHuG1xz
|
||||
jb7RW+pEjx0GVAmIbkM9vOLqEUfHHHPuk4AXCGGZ5sarPiKg4BHKBBsY1dpoO5UH
|
||||
0j71fRA6zurHnTXDaCLWlUpZ
|
||||
-----END PRIVATE KEY-----
|
@ -0,0 +1,17 @@
|
||||
<clickhouse>
|
||||
<!-- Used with https_port and tcp_port_secure. Full ssl options list: https://github.com/ClickHouse-Extras/poco/blob/master/NetSSL_OpenSSL/include/Poco/Net/SSLManager.h#L71 -->
|
||||
<openSSL>
|
||||
<server> <!-- Used for https server AND secure tcp port -->
|
||||
<!-- openssl req -subj "/CN=localhost" -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout /etc/clickhouse-server/server.key -out /etc/clickhouse-server/server.crt -->
|
||||
<certificateFile>/etc/clickhouse-server/config.d/server.crt</certificateFile>
|
||||
<privateKeyFile>/etc/clickhouse-server/config.d/server.key</privateKeyFile>
|
||||
<!-- openssl dhparam -out /etc/clickhouse-server/dhparam.pem 4096 -->
|
||||
<dhParamsFile>/etc/clickhouse-server/config.d/dhparam.pem</dhParamsFile>
|
||||
<verificationMode>none</verificationMode>
|
||||
<loadDefaultCAFile>true</loadDefaultCAFile>
|
||||
<cacheSessions>true</cacheSessions>
|
||||
<disableProtocols>sslv2,sslv3</disableProtocols>
|
||||
<preferServerCiphers>true</preferServerCiphers>
|
||||
</server>
|
||||
</openSSL>
|
||||
</clickhouse>
|
@ -0,0 +1,16 @@
|
||||
<clickhouse>
|
||||
<profiles>
|
||||
<default>
|
||||
<max_sessions_for_user>2</max_sessions_for_user>
|
||||
<function_sleep_max_microseconds_per_block>0</function_sleep_max_microseconds_per_block>
|
||||
</default>
|
||||
</profiles>
|
||||
|
||||
<users>
|
||||
<default>
|
||||
</default>
|
||||
<test_user>
|
||||
<password>123</password>
|
||||
</test_user>
|
||||
</users>
|
||||
</clickhouse>
|
@ -0,0 +1 @@
|
||||
../../../../src/Server/grpc_protos/clickhouse_grpc.proto
|
tests/integration/test_profile_max_sessions_for_user/test.py (new executable file, 222 lines)
@ -0,0 +1,222 @@
|
||||
import os
|
||||
|
||||
import grpc
|
||||
import pymysql.connections
|
||||
import psycopg2 as py_psql
|
||||
import pytest
|
||||
import sys
|
||||
import threading
|
||||
|
||||
from helpers.cluster import ClickHouseCluster, run_and_check
|
||||
|
||||
MAX_SESSIONS_FOR_USER = 2
|
||||
POSTGRES_SERVER_PORT = 5433
|
||||
MYSQL_SERVER_PORT = 9001
|
||||
GRPC_PORT = 9100
|
||||
|
||||
TEST_USER = "test_user"
|
||||
TEST_PASSWORD = "123"
|
||||
|
||||
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
|
||||
DEFAULT_ENCODING = "utf-8"
|
||||
|
||||
# Use grpcio-tools to generate *pb2.py files from *.proto.
|
||||
proto_dir = os.path.join(SCRIPT_DIR, "./protos")
|
||||
gen_dir = os.path.join(SCRIPT_DIR, "./_gen")
|
||||
os.makedirs(gen_dir, exist_ok=True)
|
||||
run_and_check(
|
||||
"python3 -m grpc_tools.protoc -I{proto_dir} --python_out={gen_dir} --grpc_python_out={gen_dir} \
|
||||
{proto_dir}/clickhouse_grpc.proto".format(
|
||||
proto_dir=proto_dir, gen_dir=gen_dir
|
||||
),
|
||||
shell=True,
|
||||
)
|
||||
|
||||
sys.path.append(gen_dir)
|
||||
|
||||
import clickhouse_grpc_pb2
|
||||
import clickhouse_grpc_pb2_grpc
|
||||
|
||||
cluster = ClickHouseCluster(__file__)
|
||||
instance = cluster.add_instance(
|
||||
"node",
|
||||
main_configs=[
|
||||
"configs/ports.xml",
|
||||
"configs/log.xml",
|
||||
"configs/ssl_conf.xml",
|
||||
"configs/dhparam.pem",
|
||||
"configs/server.crt",
|
||||
"configs/server.key",
|
||||
],
|
||||
user_configs=["configs/users.xml"],
|
||||
env_variables={"UBSAN_OPTIONS": "print_stacktrace=1"},
|
||||
)
|
||||
|
||||
|
||||
def get_query(name, id):
|
||||
return f"SElECT '{name}', {id}, sleep(1)"
|
||||
|
||||
|
||||
def grpc_get_url():
|
||||
return f"{instance.ip_address}:{GRPC_PORT}"
|
||||
|
||||
|
||||
def grpc_create_insecure_channel():
|
||||
channel = grpc.insecure_channel(grpc_get_url())
|
||||
grpc.channel_ready_future(channel).result(timeout=2)
|
||||
return channel
|
||||
|
||||
|
||||
def grpc_query(query_text, channel, session_id_):
|
||||
query_info = clickhouse_grpc_pb2.QueryInfo(
|
||||
query=query_text,
|
||||
session_id=session_id_,
|
||||
user_name=TEST_USER,
|
||||
password=TEST_PASSWORD,
|
||||
)
|
||||
|
||||
stub = clickhouse_grpc_pb2_grpc.ClickHouseStub(channel)
|
||||
result = stub.ExecuteQuery(query_info)
|
||||
if result and result.HasField("exception"):
|
||||
raise Exception(result.exception.display_text)
|
||||
return result.output.decode(DEFAULT_ENCODING)
|
||||
|
||||
|
||||
def threaded_run_test(sessions):
|
||||
thread_list = []
|
||||
for i in range(len(sessions)):
|
||||
thread = ThreadWithException(target=sessions[i], args=(i,))
|
||||
thread_list.append(thread)
|
||||
thread.start()
|
||||
|
||||
for thread in thread_list:
|
||||
thread.join()
|
||||
|
||||
exception_count = 0
|
||||
for i in range(len(sessions)):
|
||||
if thread_list[i].run_exception != None:
|
||||
exception_count += 1
|
||||
|
||||
assert exception_count == 1
|
||||
|
||||
|
||||
@pytest.fixture(scope="module")
|
||||
def started_cluster():
|
||||
try:
|
||||
cluster.start()
|
||||
yield cluster
|
||||
finally:
|
||||
cluster.shutdown()
|
||||
|
||||
|
||||
class ThreadWithException(threading.Thread):
|
||||
run_exception = None
|
||||
|
||||
def run(self):
|
||||
try:
|
||||
super().run()
|
||||
except:
|
||||
self.run_exception = sys.exc_info()
|
||||
|
||||
def join(self):
|
||||
super().join()
|
||||
|
||||
|
||||
def postgres_session(id):
|
||||
ch = py_psql.connect(
|
||||
host=instance.ip_address,
|
||||
port=POSTGRES_SERVER_PORT,
|
||||
user=TEST_USER,
|
||||
password=TEST_PASSWORD,
|
||||
database="default",
|
||||
)
|
||||
cur = ch.cursor()
|
||||
cur.execute(get_query("postgres_session", id))
|
||||
cur.fetchall()
|
||||
|
||||
|
||||
def mysql_session(id):
|
||||
client = pymysql.connections.Connection(
|
||||
host=instance.ip_address,
|
||||
user=TEST_USER,
|
||||
password=TEST_PASSWORD,
|
||||
database="default",
|
||||
port=MYSQL_SERVER_PORT,
|
||||
)
|
||||
cursor = client.cursor(pymysql.cursors.DictCursor)
|
||||
cursor.execute(get_query("mysql_session", id))
|
||||
cursor.fetchall()
|
||||
|
||||
|
||||
def tcp_session(id):
|
||||
instance.query(get_query("tcp_session", id), user=TEST_USER, password=TEST_PASSWORD)
|
||||
|
||||
|
||||
def http_session(id):
|
||||
instance.http_query(
|
||||
get_query("http_session", id), user=TEST_USER, password=TEST_PASSWORD
|
||||
)
|
||||
|
||||
|
||||
def http_named_session(id):
|
||||
instance.http_query(
|
||||
get_query("http_named_session", id),
|
||||
user=TEST_USER,
|
||||
password=TEST_PASSWORD,
|
||||
params={"session_id": id},
|
||||
)
|
||||
|
||||
|
||||
def grpc_session(id):
|
||||
grpc_query(
|
||||
get_query("grpc_session", id), grpc_create_insecure_channel(), f"session_{id}"
|
||||
)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_tcp(started_cluster):
|
||||
threaded_run_test([tcp_session] * 3)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_postgres(started_cluster):
|
||||
threaded_run_test([postgres_session] * 3)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_mysql(started_cluster):
|
||||
threaded_run_test([mysql_session] * 3)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_http(started_cluster):
|
||||
threaded_run_test([http_session] * 3)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_http_named_session(started_cluster):
|
||||
threaded_run_test([http_named_session] * 3)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_grpc(started_cluster):
|
||||
threaded_run_test([grpc_session] * 3)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_tcp_and_others(started_cluster):
|
||||
threaded_run_test([tcp_session, grpc_session, grpc_session])
|
||||
threaded_run_test([tcp_session, http_session, http_session])
|
||||
threaded_run_test([tcp_session, mysql_session, mysql_session])
|
||||
threaded_run_test([tcp_session, postgres_session, postgres_session])
|
||||
threaded_run_test([tcp_session, http_session, postgres_session])
|
||||
threaded_run_test([tcp_session, postgres_session, http_session])
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_end_session(started_cluster):
|
||||
for conection_func in [
|
||||
tcp_session,
|
||||
http_session,
|
||||
grpc_session,
|
||||
mysql_session,
|
||||
postgres_session,
|
||||
]:
|
||||
threaded_run_test([conection_func] * MAX_SESSIONS_FOR_USER)
|
||||
threaded_run_test([conection_func] * MAX_SESSIONS_FOR_USER)
|
||||
|
||||
|
||||
def test_profile_max_sessions_for_user_end_session(started_cluster):
|
||||
instance.query_and_get_error("SET max_sessions_for_user = 10")
|
@ -208,7 +208,9 @@ def test_https_wrong_cert():
|
||||
with pytest.raises(Exception) as err:
|
||||
execute_query_https("SELECT currentUser()", user="john", cert_name="wrong")
|
||||
err_str = str(err.value)
|
||||
if count < MAX_RETRY and "Broken pipe" in err_str:
|
||||
if count < MAX_RETRY and (
|
||||
("Broken pipe" in err_str) or ("EOF occurred" in err_str)
|
||||
):
|
||||
count = count + 1
|
||||
logging.warning(f"Failed attempt with wrong cert, err: {err_str}")
|
||||
continue
|
||||
@ -314,7 +316,9 @@ def test_https_non_ssl_auth():
|
||||
cert_name="wrong",
|
||||
)
|
||||
err_str = str(err.value)
|
||||
if count < MAX_RETRY and "Broken pipe" in err_str:
|
||||
if count < MAX_RETRY and (
|
||||
("Broken pipe" in err_str) or ("EOF occurred" in err_str)
|
||||
):
|
||||
count = count + 1
|
||||
logging.warning(
|
||||
f"Failed attempt with wrong cert, user: peter, err: {err_str}"
|
||||
@ -334,7 +338,9 @@ def test_https_non_ssl_auth():
|
||||
cert_name="wrong",
|
||||
)
|
||||
err_str = str(err.value)
|
||||
if count < MAX_RETRY and "Broken pipe" in err_str:
|
||||
if count < MAX_RETRY and (
|
||||
("Broken pipe" in err_str) or ("EOF occurred" in err_str)
|
||||
):
|
||||
count = count + 1
|
||||
logging.warning(
|
||||
f"Failed attempt with wrong cert, user: jane, err: {err_str}"
|
||||
|
@ -3,11 +3,11 @@
<default>
<shard>
<replica>
<host>node1</host>
<host>main_node</host>
<port>9000</port>
</replica>
<replica>
<host>node2</host>
<host>backup_node</host>
<port>9000</port>
</replica>
</shard>
@ -0,0 +1,23 @@
<clickhouse>
<listen_host>0.0.0.0</listen_host>

<!-- Default protocols -->
<tcp_port>9000</tcp_port>
<http_port>8123</http_port>
<mysql_port>9004</mysql_port>

<!-- Custom protocols -->
<protocols>
<tcp>
<type>tcp</type>
<host>0.0.0.0</host>
<port>9001</port>
<description>native protocol (tcp)</description>
</tcp>
<http>
<type>http</type>
<port>8124</port>
<description>http protocol</description>
</http>
</protocols>
</clickhouse>
Some files were not shown because too many files have changed in this diff