Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-24 00:22:29 +00:00

Merge branch 'master' into add_mysql_killed_test_for_materializemysql

Commit 08d36f862d

CHANGELOG.md
@@ -16,6 +16,7 @@
 * Remove `ANALYZE` and `AST` queries, and make the setting `enable_debug_queries` obsolete since now it is part of the full-featured `EXPLAIN` query. [#16536](https://github.com/ClickHouse/ClickHouse/pull/16536) ([Ivan](https://github.com/abyss7)).
 * Aggregate functions `boundingRatio`, `rankCorr`, `retention`, `timeSeriesGroupSum`, `timeSeriesGroupRateSum`, `windowFunnel` were erroneously made case-insensitive. Now their names are made case-sensitive as designed. Only functions that are specified in the SQL standard, made for compatibility with other DBMS, or functions similar to those should be case-insensitive. [#16407](https://github.com/ClickHouse/ClickHouse/pull/16407) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Make `rankCorr` function return nan on insufficient data https://github.com/ClickHouse/ClickHouse/issues/16124. [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
+* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5 or greater and versions less than 20.5, and ClickHouse nodes with the old version are restarted while newer versions are present, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install the newer clickhouse-server packages on all cluster nodes and then do the restarts (so, when clickhouse-server is restarted, it will start up with the new version).

 #### New Feature

@@ -154,6 +155,7 @@
 * Change default value of `format_regexp_escaping_rule` setting (it's related to `Regexp` format) to `Raw` (it means - read whole subpattern as a value) to make the behaviour more like what users expect. [#15426](https://github.com/ClickHouse/ClickHouse/pull/15426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Add support for nested multiline comments `/* comment /* comment */ */` in SQL. This conforms to the SQL standard. [#14655](https://github.com/ClickHouse/ClickHouse/pull/14655) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Added MergeTree settings (`max_replicated_merges_with_ttl_in_queue` and `max_number_of_merges_with_ttl_in_pool`) to control the number of merges with TTL in the background pool and replicated queue. This change breaks compatibility with older versions only if you use delete TTL. Otherwise, replication will stay compatible. You can avoid incompatibility issues if you update all shard replicas at once or execute `SYSTEM STOP TTL MERGES` until you finish the update of all replicas. If you get an incompatible entry in the replication queue, first execute `SYSTEM STOP TTL MERGES` and then `ALTER TABLE ... DETACH PARTITION ...` for the partition where the incompatible TTL merge was assigned. Attach it back on a single replica. [#14490](https://github.com/ClickHouse/ClickHouse/pull/14490) ([alesapin](https://github.com/alesapin)).
+* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5 or greater and versions less than 20.5, and ClickHouse nodes with the old version are restarted while newer versions are present, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install the newer clickhouse-server packages on all cluster nodes and then do the restarts (so, when clickhouse-server is restarted, it will start up with the new version).

 #### New Feature

@@ -438,6 +440,10 @@

 ### ClickHouse release v20.9.2.20, 2020-09-22

+#### Backward Incompatible Change
+
+* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5 or greater and versions less than 20.5, and ClickHouse nodes with the old version are restarted while newer versions are present, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install the newer clickhouse-server packages on all cluster nodes and then do the restarts (so, when clickhouse-server is restarted, it will start up with the new version).
+
 #### New Feature

 * Added column transformers `EXCEPT`, `REPLACE`, `APPLY`, which can be applied to the list of selected columns (after `*` or `COLUMNS(...)`). For example, you can write `SELECT * EXCEPT(URL) REPLACE(number + 1 AS number)`. Another example: `select * apply(length) apply(max) from wide_string_table` to find out the maximum length of all string columns. [#14233](https://github.com/ClickHouse/ClickHouse/pull/14233) ([Amos Bird](https://github.com/amosbird)).

@@ -621,6 +627,7 @@
 * Now `OPTIMIZE FINAL` query doesn't recalculate TTL for parts that were added before TTL was created. Use `ALTER TABLE ... MATERIALIZE TTL` once to calculate them, after that `OPTIMIZE FINAL` will evaluate TTL's properly. This behavior never worked for replicated tables. [#14220](https://github.com/ClickHouse/ClickHouse/pull/14220) ([alesapin](https://github.com/alesapin)).
 * Extend `parallel_distributed_insert_select` setting, adding an option to run `INSERT` into local table. The setting changes type from `Bool` to `UInt64`, so the values `false` and `true` are no longer supported. If you have these values in server configuration, the server will not start. Please replace them with `0` and `1`, respectively. [#14060](https://github.com/ClickHouse/ClickHouse/pull/14060) ([Azat Khuzhin](https://github.com/azat)).
 * Remove support for the `ODBCDriver` input/output format. This was a deprecated format once used for communication with the ClickHouse ODBC driver, now long superseded by the `ODBCDriver2` format. Resolves [#13629](https://github.com/ClickHouse/ClickHouse/issues/13629). [#13847](https://github.com/ClickHouse/ClickHouse/pull/13847) ([hexiaoting](https://github.com/hexiaoting)).
+* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5 or greater and versions less than 20.5, and ClickHouse nodes with the old version are restarted while newer versions are present, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install the newer clickhouse-server packages on all cluster nodes and then do the restarts (so, when clickhouse-server is restarted, it will start up with the new version).

 #### New Feature

@@ -765,6 +772,7 @@
 * The function `groupArrayMoving*` was not working for distributed queries. Its result was calculated within an incorrect data type (without promotion to the largest type). The function `groupArrayMovingAvg` was returning an integer number that was inconsistent with the `avg` function. This fixes [#12568](https://github.com/ClickHouse/ClickHouse/issues/12568). [#12622](https://github.com/ClickHouse/ClickHouse/pull/12622) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Add sanity check for MergeTree settings. If the settings are incorrect, the server will refuse to start or to create a table, printing a detailed explanation to the user. [#13153](https://github.com/ClickHouse/ClickHouse/pull/13153) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Protect from the cases when user may set `background_pool_size` to a value lower than `number_of_free_entries_in_pool_to_execute_mutation` or `number_of_free_entries_in_pool_to_lower_max_size_of_merge`. In these cases ALTERs won't work or the maximum size of merge will be too limited. It will throw an exception explaining what to do. This closes [#10897](https://github.com/ClickHouse/ClickHouse/issues/10897). [#12728](https://github.com/ClickHouse/ClickHouse/pull/12728) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5 or greater and versions less than 20.5, and ClickHouse nodes with the old version are restarted while newer versions are present, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install the newer clickhouse-server packages on all cluster nodes and then do the restarts (so, when clickhouse-server is restarted, it will start up with the new version).

 #### New Feature

@@ -951,6 +959,10 @@

 ### ClickHouse release v20.6.3.28-stable

+#### Backward Incompatible Change
+
+* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5 or greater and versions less than 20.5, and ClickHouse nodes with the old version are restarted while newer versions are present, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install the newer clickhouse-server packages on all cluster nodes and then do the restarts (so, when clickhouse-server is restarted, it will start up with the new version).
+
 #### New Feature

 * Added an initial implementation of `EXPLAIN` query. Syntax: `EXPLAIN SELECT ...`. This fixes [#1118](https://github.com/ClickHouse/ClickHouse/issues/1118). [#11873](https://github.com/ClickHouse/ClickHouse/pull/11873) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).

@@ -1139,6 +1151,7 @@
 * Update `zstd` to 1.4.4. It has some minor improvements in performance and compression ratio. If you run replicas with different versions of ClickHouse you may see reasonable error messages `Data after merge is not byte-identical to data on another replicas.` with explanation. These messages are OK and you should not worry. This change is backward compatible but we list it here in the changelog in case you wonder about these messages. [#10663](https://github.com/ClickHouse/ClickHouse/pull/10663) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Added a check for meaningless codecs and a setting `allow_suspicious_codecs` to control this check. This closes [#4966](https://github.com/ClickHouse/ClickHouse/issues/4966). [#10645](https://github.com/ClickHouse/ClickHouse/pull/10645) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Several Kafka settings changed their defaults. See [#11388](https://github.com/ClickHouse/ClickHouse/pull/11388).
+* When upgrading from versions older than 20.5: if a rolling update is performed and the cluster contains both versions 20.5 or greater and versions less than 20.5, and ClickHouse nodes with the old version are restarted while newer versions are present, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install the newer clickhouse-server packages on all cluster nodes and then do the restarts (so, when clickhouse-server is restarted, it will start up with the new version).

 #### New Feature

@@ -1,100 +1,28 @@
 #include <stdexcept>
 #include "common/getMemoryAmount.h"

-// http://nadeausoftware.com/articles/2012/09/c_c_tip_how_get_physical_memory_size_system
-/*
- * Author:  David Robert Nadeau
- * Site:    http://NadeauSoftware.com/
- * License: Creative Commons Attribution 3.0 Unported License
- *          http://creativecommons.org/licenses/by/3.0/deed.en_US
- */
-#if defined(WIN32) || defined(_WIN32)
-#include <Windows.h>
-#else
 #include <unistd.h>
-#include <sys/types.h>
-#include <sys/param.h>
-#if defined(BSD)
-#include <sys/sysctl.h>
-#endif
-#endif

-/**
- * Returns the size of physical memory (RAM) in bytes.
- * Returns 0 on unsupported platform
- */
+/** Returns the size of physical memory (RAM) in bytes.
+  * Returns 0 on unsupported platform
+  */
 uint64_t getMemoryAmountOrZero()
 {
-#if defined(_WIN32) && (defined(__CYGWIN__) || defined(__CYGWIN32__))
-    /* Cygwin under Windows. ------------------------------------ */
-    /* New 64-bit MEMORYSTATUSEX isn't available. Use old 32-bit. */
-    MEMORYSTATUS status;
-    status.dwLength = sizeof(status);
-    GlobalMemoryStatus(&status);
-    return status.dwTotalPhys;
+    int64_t num_pages = sysconf(_SC_PHYS_PAGES);
+    if (num_pages <= 0)
+        return 0;

-#elif defined(WIN32) || defined(_WIN32)
-    /* Windows. ------------------------------------------------- */
-    /* Use new 64-bit MEMORYSTATUSEX, not old 32-bit MEMORYSTATUS */
-    MEMORYSTATUSEX status;
-    status.dwLength = sizeof(status);
-    GlobalMemoryStatusEx(&status);
-    return status.ullTotalPhys;
+    int64_t page_size = sysconf(_SC_PAGESIZE);
+    if (page_size <= 0)
+        return 0;

-#else
-    /* UNIX variants. ------------------------------------------- */
-    /* Prefer sysctl() over sysconf() except sysctl() HW_REALMEM and HW_PHYSMEM */
-
-#if defined(CTL_HW) && (defined(HW_MEMSIZE) || defined(HW_PHYSMEM64))
-    int mib[2];
-    mib[0] = CTL_HW;
-#if defined(HW_MEMSIZE)
-    mib[1] = HW_MEMSIZE;   /* OSX. --------------------- */
-#elif defined(HW_PHYSMEM64)
-    mib[1] = HW_PHYSMEM64; /* NetBSD, OpenBSD. --------- */
-#endif
-    uint64_t size = 0;     /* 64-bit */
-    size_t len = sizeof(size);
-    if (sysctl(mib, 2, &size, &len, nullptr, 0) == 0)
-        return size;
-
-    return 0; /* Failed? */
-
-#elif defined(_SC_AIX_REALMEM)
-    /* AIX. ----------------------------------------------------- */
-    return sysconf(_SC_AIX_REALMEM) * 1024;
-
-#elif defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
-    /* FreeBSD, Linux, OpenBSD, and Solaris. -------------------- */
-    return uint64_t(sysconf(_SC_PHYS_PAGES))
-        * uint64_t(sysconf(_SC_PAGESIZE));
-
-#elif defined(_SC_PHYS_PAGES) && defined(_SC_PAGE_SIZE)
-    /* Legacy. -------------------------------------------------- */
-    return uint64_t(sysconf(_SC_PHYS_PAGES))
-        * uint64_t(sysconf(_SC_PAGE_SIZE));
-
-#elif defined(CTL_HW) && (defined(HW_PHYSMEM) || defined(HW_REALMEM))
-    /* DragonFly BSD, FreeBSD, NetBSD, OpenBSD, and OSX. -------- */
-    int mib[2];
-    mib[0] = CTL_HW;
-#if defined(HW_REALMEM)
-    mib[1] = HW_REALMEM;   /* FreeBSD. ----------------- */
-#elif defined(HW_PHYSMEM)
-    mib[1] = HW_PHYSMEM;   /* Others. ------------------ */
-#endif
-    unsigned int size = 0; /* 32-bit */
-    size_t len = sizeof(size);
-    if (sysctl(mib, 2, &size, &len, nullptr, 0) == 0)
-        return size;
-
-    return 0; /* Failed? */
-#endif /* sysctl and sysconf variants */
-
-#endif
+    return num_pages * page_size;
 }
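For reference, the simplified code path above is small enough to compile on its own. The sketch below mirrors the new sysconf-based logic (the `main` wrapper is only for illustration and is not part of the commit); `_SC_PHYS_PAGES` is available on Linux and most Unixes:

```cpp
#include <unistd.h>
#include <cstdint>
#include <cstdio>

// Physical RAM = number of physical pages * page size, both via sysconf().
uint64_t getMemoryAmountOrZero()
{
    int64_t num_pages = sysconf(_SC_PHYS_PAGES);
    if (num_pages <= 0)
        return 0;  // unsupported platform or error

    int64_t page_size = sysconf(_SC_PAGESIZE);
    if (page_size <= 0)
        return 0;

    return static_cast<uint64_t>(num_pages) * static_cast<uint64_t>(page_size);
}

int main()
{
    std::printf("RAM: %llu bytes\n",
                static_cast<unsigned long long>(getMemoryAmountOrZero()));
}
```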
@@ -8,7 +8,7 @@ using Int16 = int16_t;
 using Int32 = int32_t;
 using Int64 = int64_t;

-#if __cplusplus <= 201703L
+#ifndef __cpp_char8_t
 using char8_t = unsigned char;
 #endif
@@ -6,10 +6,12 @@
 #include <common/defines.h>
 #include <common/getFQDNOrHostName.h>
+#include <common/getMemoryAmount.h>
 #include <common/logger_useful.h>

 #include <Common/SymbolIndex.h>
 #include <Common/StackTrace.h>
+#include <Common/getNumberOfPhysicalCPUCores.h>

 #if !defined(ARCADIA_BUILD)
 #    include "Common/config_version.h"
@@ -28,14 +30,13 @@ namespace
 bool initialized = false;
 bool anonymize = false;
 std::string server_data_path;

 void setExtras()
 {
     if (!anonymize)
     {
         sentry_set_extra("server_name", sentry_value_new_string(getFQDNOrHostName().c_str()));
     }

     sentry_set_tag("version", VERSION_STRING);
     sentry_set_extra("version_githash", sentry_value_new_string(VERSION_GITHASH));
     sentry_set_extra("version_describe", sentry_value_new_string(VERSION_DESCRIBE));
@@ -44,6 +45,15 @@ void setExtras()
     sentry_set_extra("version_major", sentry_value_new_int32(VERSION_MAJOR));
     sentry_set_extra("version_minor", sentry_value_new_int32(VERSION_MINOR));
     sentry_set_extra("version_patch", sentry_value_new_int32(VERSION_PATCH));
+    sentry_set_extra("version_official", sentry_value_new_string(VERSION_OFFICIAL));
+
+    /// Sentry does not support 64-bit integers.
+    sentry_set_extra("total_ram", sentry_value_new_string(formatReadableSizeWithBinarySuffix(getMemoryAmountOrZero()).c_str()));
+    sentry_set_extra("physical_cpu_cores", sentry_value_new_int32(getNumberOfPhysicalCPUCores()));
+
+    if (!server_data_path.empty())
+        sentry_set_extra("disk_free_space", sentry_value_new_string(formatReadableSizeWithBinarySuffix(
+            Poco::File(server_data_path).freeSpace()).c_str()));
 }

 void sentry_logger(sentry_level_e level, const char * message, va_list args, void *)
@@ -98,6 +108,7 @@ void SentryWriter::initialize(Poco::Util::LayeredConfiguration & config)
     }
     if (enabled)
     {
+        server_data_path = config.getString("path", "");
         const std::filesystem::path & default_tmp_path = std::filesystem::path(config.getString("tmp_path", Poco::Path::temp())) / "sentry";
         const std::string & endpoint
             = config.getString("send_crash_reports.endpoint");
@@ -5,7 +5,7 @@ toc_title: ReplacingMergeTree

 # ReplacingMergeTree {#replacingmergetree}

-The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value.
+The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value (`ORDER BY` table section, not `PRIMARY KEY`).

 Data deduplication occurs only during a merge. Merging occurs in the background at an unknown time, so you can’t plan for it. Some of the data may remain unprocessed. Although you can run an unscheduled merge using the `OPTIMIZE` query, don’t count on using it, because the `OPTIMIZE` query will read and write a large amount of data.

@@ -29,13 +29,16 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

 For a description of request parameters, see [statement description](../../../sql-reference/statements/create/table.md).

+!!! note "Attention"
+    Uniqueness of rows is determined by the `ORDER BY` table section, not `PRIMARY KEY`.
+
 **ReplacingMergeTree Parameters**

 - `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.

     When merging, `ReplacingMergeTree` from all the rows with the same sorting key leaves only one:

-    - Last in the selection, if `ver` not set.
+    - The last in the selection, if `ver` not set. A selection is a set of rows in a set of parts participating in the merge. The most recently created part (the last insert) will be the last one in the selection. Thus, after deduplication, the very last row from the most recent insert will remain for each unique sorting key.
     - With the maximum version, if `ver` specified.

 **Query clauses**
@@ -139,7 +139,7 @@ Lazy loading of dictionaries.

 If `true`, then each dictionary is created on first use. If dictionary creation failed, the function that was using the dictionary throws an exception.

-If `false`, all dictionaries are created when the server starts, and if there is an error, the server shuts down.
+If `false`, all dictionaries are created when the server starts. If a dictionary takes too long to create or is created with errors, the server boots without these dictionaries and keeps trying to create them.

 The default is `true`.
@@ -25,7 +25,37 @@ SELECT

 ## toTimeZone {#totimezone}

-Convert time or date and time to the specified time zone.
+Convert time or date and time to the specified time zone. The time zone is an attribute of the Date/DateTime types. The internal value (number of seconds) of the table field or of the result set's column does not change; the column's type changes, and its string representation changes accordingly.

```sql
SELECT
    toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc,
    toTypeName(time_utc) AS type_utc,
    toInt32(time_utc) AS int32utc,
    toTimeZone(time_utc, 'Asia/Yekaterinburg') AS time_yekat,
    toTypeName(time_yekat) AS type_yekat,
    toInt32(time_yekat) AS int32yekat,
    toTimeZone(time_utc, 'US/Samoa') AS time_samoa,
    toTypeName(time_samoa) AS type_samoa,
    toInt32(time_samoa) AS int32samoa
FORMAT Vertical;
```

```text
Row 1:
──────
time_utc:   2019-01-01 00:00:00
type_utc:   DateTime('UTC')
int32utc:   1546300800
time_yekat: 2019-01-01 05:00:00
type_yekat: DateTime('Asia/Yekaterinburg')
int32yekat: 1546300800
time_samoa: 2018-12-31 13:00:00
type_samoa: DateTime('US/Samoa')
int32samoa: 1546300800
```

`toTimeZone(time_utc, 'Asia/Yekaterinburg')` changes the `DateTime('UTC')` type to `DateTime('Asia/Yekaterinburg')`. The value (Unix timestamp) 1546300800 stays the same, but the string representation (the result of the toString() function) changes from `time_utc: 2019-01-01 00:00:00` to `time_yekat: 2019-01-01 05:00:00`.

 ## toYear {#toyear}
@@ -14,7 +14,7 @@ The following operations are available:

 - `ALTER TABLE [db.]table MATERIALIZE INDEX name IN PARTITION partition_name` - The query rebuilds the secondary index `name` in the partition `partition_name`. Implemented as a [mutation](../../../../sql-reference/statements/alter/index.md#mutations).

-The first two commands areare lightweight in a sense that they only change metadata or remove files.
+The first two commands are lightweight in a sense that they only change metadata or remove files.

 Also, they are replicated, syncing indices metadata via ZooKeeper.
@@ -29,6 +29,8 @@ A column description is `name type` in the simplest case. Example: `RegionID UIn

 Expressions can also be defined for default values (see below).

+If necessary, primary key can be specified, with one or more key expressions.
+
 ### With a Schema Similar to Other Table {#with-a-schema-similar-to-other-table}

 ``` sql
@@ -97,6 +99,34 @@ If you add a new column to a table but later change its default expression, the

 It is not possible to set default values for elements in nested data structures.

 ## Primary Key {#primary-key}

 You can define a [primary key](../../../engines/table-engines/mergetree-family/mergetree.md#primary-keys-and-indexes-in-queries) when creating a table. Primary key can be specified in two ways:

 - inside the column list

 ``` sql
 CREATE TABLE db.table_name
 (
     name1 type1, name2 type2, ...,
     PRIMARY KEY(expr1[, expr2,...])
 )
 ENGINE = engine;
 ```

 - outside the column list

 ``` sql
 CREATE TABLE db.table_name
 (
     name1 type1, name2 type2, ...
 )
 ENGINE = engine
 PRIMARY KEY(expr1[, expr2,...]);
 ```

 You can't combine both ways in one query.

 ## Constraints {#constraints}

 Along with column descriptions, constraints can be defined:
@@ -183,18 +183,18 @@ ClickHouse does not require a unique primary key

 - Improve index efficiency.

     Suppose the primary key is `(a, b)`; adding another column `c` will improve efficiency if the following conditions are met:

     - There are queries with a condition on column `c`.
     - Long data ranges (several times longer than `index_granularity`) with identical `(a, b)` values are common. In other words, adding another column lets you skip sufficiently long data ranges.

 - Improve data compression.

     ClickHouse sorts data by primary key, so the higher the homogeneity, the better the compression.

 - Provide additional logic when merging data parts in the [CollapsingMergeTree](collapsingmergetree.md#table_engine-collapsingmergetree) and [SummingMergeTree](summingmergetree.md) engines.

     In this case it makes sense to specify a separate *sorting key* that differs from the primary key.

 A long primary key negatively affects insert performance and memory consumption, but extra columns in the primary key do not affect ClickHouse performance for `SELECT` queries.

@@ -309,11 +309,11 @@ SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234

 - `bloom_filter([false_positive])` — a [Bloom filter](https://en.wikipedia.org/wiki/Bloom_filter) for the specified columns.

     The optional `false_positive` parameter is the probability of a false positive. Possible values: (0, 1). Default value: 0.025.

     Supported data types: `Int*`, `UInt*`, `Float*`, `Enum`, `Date`, `DateTime`, `String`, `FixedString`.

-    The following functions can use the filter: [equals](../../../engines/table_engines/mergetree_family/mergetree.md), [notEquals](../../../engines/table_engines/mergetree_family/mergetree.md), [in](../../../engines/table_engines/mergetree_family/mergetree.md), [notIn](../../../engines/table_engines/mergetree_family/mergetree.md).
+    The following functions can use the filter: [equals](../../../engines/table-engines/mergetree-family/mergetree.md), [notEquals](../../../engines/table-engines/mergetree-family/mergetree.md), [in](../../../engines/table-engines/mergetree-family/mergetree.md), [notIn](../../../engines/table-engines/mergetree-family/mergetree.md).

 **Examples**

@@ -645,4 +645,4 @@ SETTINGS storage_policy = 'moving_from_ssd_to_hdd'

 After background merges or mutations, old parts are not removed immediately; they are deleted after some time (the `old_parts_lifetime` table setting). They are not moved to other volumes or disks either, so until they are removed they are still counted when calculating occupied disk space.

-[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/mergetree/) <!--hide-->
+[Original article](https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/mergetree/) <!--hide-->
@@ -5,7 +5,7 @@ toc_title: ReplacingMergeTree

 # ReplacingMergeTree {#replacingmergetree}

-The engine differs from [MergeTree](mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](mergetree.md) value.
+The engine differs from [MergeTree](mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](mergetree.md) value (`ORDER BY` section, not `PRIMARY KEY`).

 Data is deduplicated only during merges. Merges run in the background at an unknown moment that you cannot rely on. Some of the data may remain unprocessed. Although you can trigger an out-of-schedule merge with the `OPTIMIZE` query, do not count on it, because `OPTIMIZE` reads and writes a large amount of data.

@@ -28,14 +28,17 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

 For a description of query parameters, see the [query description](../../../engines/table-engines/mergetree-family/replacingmergetree.md).

+!!! note "Attention"
+    Uniqueness of rows is determined by the `ORDER BY` table section, not `PRIMARY KEY`.
+
 **ReplacingMergeTree Parameters**

 - `ver` — the column with the version. Type `UInt*`, `Date`, or `DateTime`. Optional parameter.

-    When merging, from all rows with the same sorting key value `ReplacingMergeTree` leaves only one:
+    When merging, `ReplacingMergeTree` leaves only one row for each unique sorting key:

-    - The last in the selection, if `ver` is not set.
+    - The last in the selection, if `ver` is not set. A selection here is the set of rows in the set of parts participating in the merge. The most recently created part (the last insert) will be the last one in the selection. Thus, after deduplication, the very last row from the most recent insert remains for each unique sorting key.
     - The one with the maximum version, if `ver` is set.

 **Query clauses**
@@ -127,7 +127,8 @@ ClickHouse checks the conditions for `min_part_size` and `min_part_size_ratio`

 If `true`, each dictionary is created on first use. If dictionary creation fails, the function that uses the dictionary throws an exception.

-If `false`, all dictionaries are created when the server starts, and in case of an error the server shuts down.
+If `false`, all dictionaries are created when the server starts. If a dictionary takes too long to create or is created with errors, the server boots without these dictionaries and keeps trying to create them.

 The default is `true`.
@@ -25,6 +25,40 @@ SELECT

 Only time zones that differ from UTC by a whole number of hours are supported.

 ## toTimeZone {#totimezone}

 Converts a date or a date with time to the specified time zone. The time zone is an attribute of the Date/DateTime types. The internal value (the number of seconds) of the table field or result column does not change; the column's type changes, and its string representation changes accordingly.

```sql
SELECT
    toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc,
    toTypeName(time_utc) AS type_utc,
    toInt32(time_utc) AS int32utc,
    toTimeZone(time_utc, 'Asia/Yekaterinburg') AS time_yekat,
    toTypeName(time_yekat) AS type_yekat,
    toInt32(time_yekat) AS int32yekat,
    toTimeZone(time_utc, 'US/Samoa') AS time_samoa,
    toTypeName(time_samoa) AS type_samoa,
    toInt32(time_samoa) AS int32samoa
FORMAT Vertical;
```

```text
Row 1:
──────
time_utc:   2019-01-01 00:00:00
type_utc:   DateTime('UTC')
int32utc:   1546300800
time_yekat: 2019-01-01 05:00:00
type_yekat: DateTime('Asia/Yekaterinburg')
int32yekat: 1546300800
time_samoa: 2018-12-31 13:00:00
type_samoa: DateTime('US/Samoa')
int32samoa: 1546300800
```

`toTimeZone(time_utc, 'Asia/Yekaterinburg')` changes the `DateTime('UTC')` type to `DateTime('Asia/Yekaterinburg')`. The value (Unix time) 1546300800 stays the same, but the string representation (the result of the toString() function) changes from `time_utc: 2019-01-01 00:00:00` to `time_yekat: 2019-01-01 05:00:00`.

 ## toYear {#toyear}

 Converts a date or a date with time to a UInt16 number containing the year number (AD).
@@ -581,7 +581,7 @@
         <database>system</database>
         <table>query_log</table>
         <!--
-            PARTITION BY expr https://clickhouse.yandex/docs/en/table_engines/custom_partitioning_key/
+            PARTITION BY expr: https://clickhouse.yandex/docs/en/table_engines/custom_partitioning_key/
             Example:
                 event_date
                 toMonday(event_date)
@@ -589,6 +589,15 @@
                 toStartOfHour(event_time)
         -->
         <partition_by>toYYYYMM(event_date)</partition_by>
+        <!--
+            Table TTL specification: https://clickhouse.tech/docs/en/engines/table-engines/mergetree-family/mergetree/#mergetree-table-ttl
+            Example:
+                event_date + INTERVAL 1 WEEK
+                event_date + INTERVAL 7 DAY DELETE
+                event_date + INTERVAL 2 WEEK TO DISK 'bbb'
+
+        <ttl>event_date + INTERVAL 30 DAY DELETE</ttl>
+        -->

         <!-- Instead of partition_by, you can provide full engine expression (starting with ENGINE = ) with parameters,
              Example: <engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
@@ -30,6 +30,22 @@ MemoryTracker * getMemoryTracker()
     return nullptr;
 }

+/// MemoryTracker cannot throw MEMORY_LIMIT_EXCEEDED (either the configured memory
+/// limit was reached or a fault was injected) in the following cases:
+///
+/// - when it is explicitly blocked with LockExceptionInThread;
+///
+/// - to avoid std::terminate(), when stack unwinding is currently in progress in
+///   this thread.
+///
+/// NOTE: since C++11, destructors are marked noexcept by default, which means
+/// that any throw from a destructor (that is not marked noexcept(false))
+/// will cause std::terminate().
+bool inline memoryTrackerCanThrow()
+{
+    return !MemoryTracker::LockExceptionInThread::isBlocked() && !std::uncaught_exceptions();
+}
+
 }

 namespace DB
@@ -48,7 +64,8 @@ namespace ProfileEvents

 static constexpr size_t log_peak_memory_usage_every = 1ULL << 30;

-thread_local bool MemoryTracker::BlockerInThread::is_blocked = false;
+thread_local uint64_t MemoryTracker::BlockerInThread::counter = 0;
+thread_local uint64_t MemoryTracker::LockExceptionInThread::counter = 0;

 MemoryTracker total_memory_tracker(nullptr, VariableContext::Global);

@@ -127,7 +144,7 @@ void MemoryTracker::alloc(Int64 size)
     }

     std::bernoulli_distribution fault(fault_probability);
-    if (unlikely(fault_probability && fault(thread_local_rng)))
+    if (unlikely(fault_probability && fault(thread_local_rng)) && memoryTrackerCanThrow())
     {
         /// Prevent recursion. Exception::ctor -> std::string -> new[] -> MemoryTracker::alloc
         BlockerInThread untrack_lock;
@@ -156,7 +173,7 @@ void MemoryTracker::alloc(Int64 size)
         DB::TraceCollector::collect(DB::TraceType::MemorySample, StackTrace(), size);
     }

-    if (unlikely(current_hard_limit && will_be > current_hard_limit))
+    if (unlikely(current_hard_limit && will_be > current_hard_limit) && memoryTrackerCanThrow())
     {
         /// Prevent recursion. Exception::ctor -> std::string -> new[] -> MemoryTracker::alloc
         BlockerInThread untrack_lock;
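The `std::uncaught_exceptions()` check in `memoryTrackerCanThrow()` is what keeps a throw out of stack unwinding. A minimal self-contained sketch of that idea (the names here are illustrative, not ClickHouse's):

```cpp
#include <exception>
#include <cstdio>

// Throwing while stack unwinding is already in progress would call
// std::terminate(), so a destructor must check before throwing.
bool canThrowSafely()
{
    return std::uncaught_exceptions() == 0;
}

struct Guard
{
    ~Guard()
    {
        if (canThrowSafely())
            std::puts("no unwinding in progress: throwing would be safe");
        else
            std::puts("unwinding in progress: must not throw");
    }
};

int main()
{
    try
    {
        Guard g;   // destroyed during unwinding -> "must not throw"
        throw 42;
    }
    catch (...) {}
    Guard g2;      // destroyed normally -> "throwing would be safe"
}
```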
@@ -136,11 +136,35 @@ public:
     private:
         BlockerInThread(const BlockerInThread &) = delete;
         BlockerInThread & operator=(const BlockerInThread &) = delete;
-        static thread_local bool is_blocked;
+        static thread_local uint64_t counter;
     public:
-        BlockerInThread() { is_blocked = true; }
-        ~BlockerInThread() { is_blocked = false; }
-        static bool isBlocked() { return is_blocked; }
+        BlockerInThread() { ++counter; }
+        ~BlockerInThread() { --counter; }
+        static bool isBlocked() { return counter > 0; }
     };

+    /// To be able to avoid MEMORY_LIMIT_EXCEEDED Exception in destructors:
+    /// - either the configured memory limit was reached
+    /// - or a fault was injected
+    ///
+    /// So this will simply ignore the configured memory limit (and avoid fault injection).
+    ///
+    /// NOTE: the exception will be silently ignored, with no message in the log
+    /// (since logging from MemoryTracker::alloc() is tricky).
+    ///
+    /// NOTE: the MEMORY_LIMIT_EXCEEDED Exception is implicitly blocked if
+    /// stack unwinding is currently in progress in this thread (to avoid
+    /// std::terminate()), so you don't need to use it explicitly in that case.
+    struct LockExceptionInThread
+    {
+    private:
+        LockExceptionInThread(const LockExceptionInThread &) = delete;
+        LockExceptionInThread & operator=(const LockExceptionInThread &) = delete;
+        static thread_local uint64_t counter;
+    public:
+        LockExceptionInThread() { ++counter; }
+        ~LockExceptionInThread() { --counter; }
+        static bool isBlocked() { return counter > 0; }
+    };
 };
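The switch from a `thread_local bool` to a `thread_local uint64_t` counter matters as soon as blockers nest: with a bool, the inner scope's destructor clears the flag while the outer scope still needs it. A standalone sketch (the class mirrors the diff; the `main` demo is hypothetical):

```cpp
#include <cstdint>
#include <cassert>

struct BlockerInThread
{
    static thread_local uint64_t counter;
    BlockerInThread() { ++counter; }
    ~BlockerInThread() { --counter; }
    static bool isBlocked() { return counter > 0; }
};
thread_local uint64_t BlockerInThread::counter = 0;

int main()
{
    {
        BlockerInThread outer;
        {
            BlockerInThread inner;
        } // with a bool this would already unblock; the counter keeps it blocked
        assert(BlockerInThread::isBlocked());
    }
    assert(!BlockerInThread::isBlocked());
}
```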
@@ -165,7 +165,7 @@ private:
 template <> struct NearestFieldTypeImpl<char> { using Type = std::conditional_t<is_signed_v<char>, Int64, UInt64>; };
 template <> struct NearestFieldTypeImpl<signed char> { using Type = Int64; };
 template <> struct NearestFieldTypeImpl<unsigned char> { using Type = UInt64; };
-#if __cplusplus > 201703L
+#ifdef __cpp_char8_t
 template <> struct NearestFieldTypeImpl<char8_t> { using Type = UInt64; };
 #endif
@@ -124,12 +124,39 @@ struct RepeatImpl
     }

 private:
+    // A very fast repeat implementation that invokes memcpy only O(log(n)) times.
+    // As the number of calls decreases, more data is copied by each memcpy, so
+    // SIMD optimization becomes more efficient.
     static void process(const UInt8 * src, UInt8 * dst, UInt64 size, UInt64 repeat_time)
     {
-        for (UInt64 i = 0; i < repeat_time; ++i)
+        if (unlikely(repeat_time <= 0))
         {
             memcpy(dst, src, size - 1);
             dst += size - 1;
+            *dst = 0;
+            return;
         }
+
+        size -= 1;
+        UInt64 k = 0;
+        UInt64 last_bit = repeat_time & 1;
+        repeat_time >>= 1;
+
+        const UInt8 * dst_hdr = dst;
+        memcpy(dst, src, size);
+        dst += size;
+
+        while (repeat_time > 0)
+        {
+            UInt64 cpy_size = size * (1ULL << k);
+            memcpy(dst, dst_hdr, cpy_size);
+            dst += cpy_size;
+            if (last_bit)
+            {
+                memcpy(dst, dst_hdr, cpy_size);
+                dst += cpy_size;
+            }
+            k += 1;
+            last_bit = repeat_time & 1;
+            repeat_time >>= 1;
+        }
         *dst = 0;
     }
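The doubling idea can be sketched independently of the column machinery above. The following illustrative function (not the actual `RepeatImpl` code) repeats a string while copying already-written output, so the number of append operations is O(log n) rather than n:

```cpp
#include <string>
#include <iostream>

// Repeat s n times by repeatedly doubling the written prefix.
std::string repeatByDoubling(const std::string & s, size_t n)
{
    std::string out;
    out.reserve(s.size() * n);
    if (n == 0)
        return out;

    out = s;                                  // one copy of the unit
    while (out.size() * 2 <= s.size() * n)
        out.append(out);                      // double the prefix: O(log n) appends
    out.append(out, 0, s.size() * n - out.size());  // top up the remainder
    return out;
}

int main()
{
    std::cout << repeatByDoubling("ab", 5) << '\n';  // prints "ababababab"
}
```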
@@ -172,12 +172,16 @@ void PocoHTTPClient::makeRequestInternal(

     auto request_configuration = per_request_configuration(request);
     if (!request_configuration.proxyHost.empty())
     {
+        /// Turn on tunnel mode if proxy scheme is HTTP while endpoint scheme is HTTPS.
+        bool use_tunnel = request_configuration.proxyScheme == Aws::Http::Scheme::HTTP && poco_uri.getScheme() == "https";
+
         session->setProxy(
             request_configuration.proxyHost,
             request_configuration.proxyPort,
             Aws::Http::SchemeMapper::ToString(request_configuration.proxyScheme),
-            false /// Disable proxy tunneling by default
+            use_tunnel
         );
     }

     Poco::Net::HTTPRequest poco_request(Poco::Net::HTTPRequest::HTTP_1_1);
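The tunnel decision itself is a one-line predicate: a CONNECT tunnel is needed exactly when an HTTP proxy has to carry HTTPS traffic. A tiny sketch with hypothetical names:

```cpp
#include <string>
#include <cassert>

// Illustrative only: schemes are passed as plain strings here.
bool shouldUseTunnel(const std::string & proxy_scheme, const std::string & endpoint_scheme)
{
    return proxy_scheme == "http" && endpoint_scheme == "https";
}

int main()
{
    assert(shouldUseTunnel("http", "https"));    // HTTPS through an HTTP proxy: tunnel
    assert(!shouldUseTunnel("http", "http"));    // plain HTTP: no tunnel needed
    assert(!shouldUseTunnel("https", "https"));  // HTTPS proxy: no tunnel needed
}
```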
@@ -26,6 +26,8 @@ public:
         TCP = 1,
         HTTP = 2,
         GRPC = 3,
+        MYSQL = 4,
+        POSTGRESQL = 5,
     };

     enum class HTTPMethod : uint8_t
@@ -186,6 +186,7 @@ struct DDLTask
     Cluster::Address address_in_cluster;
     size_t host_shard_num;
     size_t host_replica_num;
+    bool is_circular_replicated = false;

     /// Stage 3.3: execute query
     ExecutionStatus execution_status;
@@ -594,7 +595,7 @@ void DDLWorker::parseQueryAndResolveHost(DDLTask & task)
          * To distinguish one replica from another on the same node,
          * every shard is placed into separate database.
          * */
-        is_circular_replicated = true;
+        task.is_circular_replicated = true;
         auto * query_with_table = dynamic_cast<ASTQueryWithTableAndOutput *>(task.query.get());
         if (!query_with_table || query_with_table->database.empty())
         {
@@ -770,7 +771,6 @@ void DDLWorker::processTask(DDLTask & task)
 {
     try
     {
-        is_circular_replicated = false;
         parseQueryAndResolveHost(task);

         ASTPtr rewritten_ast = task.query_on_cluster->getRewrittenASTWithoutOnCluster(task.address_in_cluster.default_database);
@@ -787,7 +787,7 @@ void DDLWorker::processTask(DDLTask & task)
         storage = DatabaseCatalog::instance().tryGetTable(table_id, context);
     }

-    if (storage && taskShouldBeExecutedOnLeader(rewritten_ast, storage) && !is_circular_replicated)
+    if (storage && taskShouldBeExecutedOnLeader(rewritten_ast, storage) && !task.is_circular_replicated)
         tryExecuteQueryOnLeaderReplica(task, storage, rewritten_query, task.entry_path, zookeeper);
     else
         tryExecuteQuery(rewritten_query, task, task.execution_status);

@@ -104,7 +104,6 @@ private:
     void attachToThreadGroup();

 private:
-    std::atomic<bool> is_circular_replicated = false;
     Context context;
     Poco::Logger * log;
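The refactoring above moves per-query state from the worker onto the task object, so one task's value cannot leak into the next task processed by the same worker and no manual reset is needed. A minimal sketch of the pattern (hypothetical names, not the actual DDLWorker API):

```cpp
#include <cassert>

struct Task
{
    bool is_circular_replicated = false;  // fresh default for every task
};

struct Worker
{
    void process(Task & task, bool detected_circular)
    {
        if (detected_circular)
            task.is_circular_replicated = true;  // no reset step required
    }
};

int main()
{
    Worker worker;
    Task a, b;
    worker.process(a, /*detected_circular=*/true);
    worker.process(b, /*detected_circular=*/false);
    assert(a.is_circular_replicated && !b.is_circular_replicated);
}
```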
@@ -90,11 +90,6 @@ OpenTelemetrySpanHolder::OpenTelemetrySpanHolder(const std::string & _operation_
     start_time_us = std::chrono::duration_cast<std::chrono::microseconds>(
         std::chrono::system_clock::now().time_since_epoch()).count();

-#ifndef NDEBUG
-    attribute_names.push_back("clickhouse.start.stacktrace");
-    attribute_values.push_back(StackTrace().toString());
-#endif
-
     thread.thread_trace_context.span_id = span_id;
 }

@@ -130,11 +125,6 @@ OpenTelemetrySpanHolder::~OpenTelemetrySpanHolder()
         return;
     }

-#ifndef NDEBUG
-    attribute_names.push_back("clickhouse.end.stacktrace");
-    attribute_values.push_back(StackTrace().toString());
-#endif
-
     auto log = context->getOpenTelemetrySpanLog();
     if (!log)
     {
@@ -57,6 +57,10 @@ std::shared_ptr<TSystemLog> createSystemLog(
             throw Exception("If 'engine' is specified for system table, "
                 "PARTITION BY parameters should be specified directly inside 'engine' and 'partition_by' setting doesn't make sense",
                 ErrorCodes::BAD_ARGUMENTS);
+        if (config.has(config_prefix + ".ttl"))
+            throw Exception("If 'engine' is specified for system table, "
+                "TTL parameters should be specified directly inside 'engine' and 'ttl' setting doesn't make sense",
+                ErrorCodes::BAD_ARGUMENTS);
         engine = config.getString(config_prefix + ".engine");
     }
     else
@@ -65,6 +69,9 @@ std::shared_ptr<TSystemLog> createSystemLog(
         engine = "ENGINE = MergeTree";
         if (!partition_by.empty())
             engine += " PARTITION BY (" + partition_by + ")";
+        String ttl = config.getString(config_prefix + ".ttl", "");
+        if (!ttl.empty())
+            engine += " TTL " + ttl;
         engine += " ORDER BY (event_date, event_time)";
     }
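The `else` branch assembles the engine definition as a plain string, and the new `ttl` setting slots in between the partition clause and the ordering clause. A compilable sketch of the concatenation (the config values here are made up):

```cpp
#include <string>
#include <cassert>

int main()
{
    std::string partition_by = "toYYYYMM(event_date)";
    std::string ttl = "event_date + INTERVAL 30 DAY DELETE";

    std::string engine = "ENGINE = MergeTree";
    if (!partition_by.empty())
        engine += " PARTITION BY (" + partition_by + ")";
    if (!ttl.empty())
        engine += " TTL " + ttl;   // the part added by this commit
    engine += " ORDER BY (event_date, event_time)";

    assert(engine == "ENGINE = MergeTree PARTITION BY (toYYYYMM(event_date))"
                     " TTL event_date + INTERVAL 30 DAY DELETE"
                     " ORDER BY (event_date, event_time)");
}
```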
@@ -356,11 +356,6 @@ void ThreadStatus::detachQuery(bool exit_if_already_detached, bool thread_exits)
         span.attribute_names.push_back("clickhouse.thread_id");
         span.attribute_values.push_back(thread_id);

-#ifndef NDEBUG
-        span.attribute_names.push_back("clickhouse.end.stacktrace");
-        span.attribute_values.push_back(StackTrace().toString());
-#endif
-
         opentelemetry_span_log->add(span);
     }
@@ -64,18 +64,18 @@ const Block & PullingAsyncPipelineExecutor::getHeader() const

 static void threadFunction(PullingAsyncPipelineExecutor::Data & data, ThreadGroupStatusPtr thread_group, size_t num_threads)
 {
-    if (thread_group)
-        CurrentThread::attachTo(thread_group);
-
-    SCOPE_EXIT(
-        if (thread_group)
-            CurrentThread::detachQueryIfNotDetached();
-    );
-
     setThreadName("QueryPipelineEx");

     try
     {
+        if (thread_group)
+            CurrentThread::attachTo(thread_group);
+
+        SCOPE_EXIT(
+            if (thread_group)
+                CurrentThread::detachQueryIfNotDetached();
+        );
+
         data.executor->execute(num_threads);
     }
     catch (...)
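Moving the `SCOPE_EXIT` inside the `try` changes when the cleanup runs relative to the `catch` handler: a guard declared inside the `try` scope is destroyed during unwinding, before the handler executes. A minimal scope-guard sketch (not ClickHouse's actual SCOPE_EXIT macro) demonstrates the ordering:

```cpp
#include <cstdio>
#include <utility>

template <typename F>
struct ScopeGuard
{
    F fn;
    ~ScopeGuard() { fn(); }  // runs when the enclosing scope is left, even via throw
};

template <typename F>
ScopeGuard<F> makeGuard(F fn) { return ScopeGuard<F>{std::move(fn)}; }

int main()
{
    try
    {
        auto guard = makeGuard([] { std::puts("detach (runs before catch)"); });
        throw 1;
    }
    catch (...)
    {
        std::puts("catch handler");
    }
}
```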
@@ -195,7 +195,14 @@ AggregatingSortedAlgorithm::AggregatingMergedData::AggregatingMergedData(
     MutableColumns columns_, UInt64 max_block_size_, ColumnsDefinition & def_)
     : MergedData(std::move(columns_), false, max_block_size_), def(def_)
 {
     initAggregateDescription();
+
+    /// Just to make startGroup() simpler.
+    if (def.allocates_memory_in_arena)
+    {
+        arena = std::make_unique<Arena>();
+        arena_size = arena->size();
+    }
 }

 void AggregatingSortedAlgorithm::AggregatingMergedData::startGroup(const ColumnRawPtrs & raw_columns, size_t row)
@@ -212,8 +219,19 @@ void AggregatingSortedAlgorithm::AggregatingMergedData::startGroup(const ColumnR
     for (auto & desc : def.columns_to_simple_aggregate)
         desc.createState();

-    if (def.allocates_memory_in_arena)
+    /// Frequent Arena creation may be too costly, because we have to increment the atomic
+    /// ProfileEvents counters when creating the first Chunk -- e.g. SELECT with
+    /// SimpleAggregateFunction(String) in PK and lots of groups may produce ~1.5M of
+    /// ArenaAllocChunks atomic increments, while LOCK is too costly for CPU
+    /// (~10% overhead here).
+    /// To avoid this, reset the arena if and only if:
+    /// - the arena is required (i.e. SimpleAggregateFunction(any, String) in PK),
+    /// - the arena was used in the previous groups.
+    if (def.allocates_memory_in_arena && arena->size() > arena_size)
+    {
         arena = std::make_unique<Arena>();
+        arena_size = arena->size();
+    }

     is_group_started = true;
 }

@@ -73,6 +73,7 @@ private:
     /// Memory pool for SimpleAggregateFunction
     /// (only when allocates_memory_in_arena == true).
     std::unique_ptr<Arena> arena;
+    size_t arena_size = 0;

     bool is_group_started = false;
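The trick above is to record the arena's size right after (re)creation and skip the reset whenever it has not grown since, avoiding the per-arena allocation and profiling cost. A standalone sketch of the pattern (the `Arena` here is a stand-in, not ClickHouse's arena):

```cpp
#include <memory>
#include <cstddef>

struct Arena
{
    size_t used = 4096;                       // a fresh arena has one initial chunk
    size_t size() const { return used; }
    void alloc(size_t n) { used += n; }
};

struct MergedData
{
    std::unique_ptr<Arena> arena = std::make_unique<Arena>();
    size_t arena_size = arena->size();

    void startGroup()
    {
        if (arena->size() > arena_size)       // reset only if the last group used it
        {
            arena = std::make_unique<Arena>();
            arena_size = arena->size();
        }
    }
};

int main()
{
    MergedData data;
    data.startGroup();       // nothing was allocated yet: the arena is kept
    data.arena->alloc(128);  // some group allocated from the arena
    data.startGroup();       // now the arena is replaced
}
```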
@@ -87,6 +87,7 @@ MySQLHandler::MySQLHandler(IServer & server_, const Poco::Net::StreamSocket & so
 void MySQLHandler::run()
 {
     connection_context.makeSessionContext();
+    connection_context.getClientInfo().interface = ClientInfo::Interface::MYSQL;
     connection_context.setDefaultFormat("MySQLWire");

     in = std::make_shared<ReadBufferFromPocoSocket>(socket());
@@ -50,6 +50,7 @@ void PostgreSQLHandler::changeIO(Poco::Net::StreamSocket & socket)
 void PostgreSQLHandler::run()
 {
     connection_context.makeSessionContext();
+    connection_context.getClientInfo().interface = ClientInfo::Interface::POSTGRESQL;
     connection_context.setDefaultFormat("PostgreSQLWire");

     try
@@ -1140,11 +1140,12 @@ bool ReplicatedMergeTreeQueue::shouldExecuteLogEntry(
     {
         const char * format_str = "Not executing log entry {} for part {}"
             " because {} merges with TTL already executing, maximum {}.";
-        LOG_DEBUG(log, format_str, entry.znode_name,
-            entry.new_part_name, total_merges_with_ttl,
+        LOG_DEBUG(log, format_str,
+            entry.znode_name, entry.new_part_name, total_merges_with_ttl,
             data_settings->max_number_of_merges_with_ttl_in_pool);

-        out_postpone_reason = fmt::format(format_str, entry.new_part_name, total_merges_with_ttl,
+        out_postpone_reason = fmt::format(format_str,
+            entry.znode_name, entry.new_part_name, total_merges_with_ttl,
             data_settings->max_number_of_merges_with_ttl_in_pool);
         return false;
     }
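The bug fixed here is an argument-count mismatch: the format string has four `{}` placeholders, but the old `fmt::format` call passed only three arguments (it dropped `entry.znode_name`). Because the format string is a runtime `const char *`, the mismatch is not caught at compile time. A hedged sketch with made-up values, assuming a recent {fmt} (version 8+, which provides `fmt::runtime`):

```cpp
#include <fmt/format.h>
#include <iostream>

int main()
{
    const char * format_str = "Not executing log entry {} for part {}"
                              " because {} merges with TTL already executing, maximum {}.";

    // Four placeholders -> four arguments; the fixed code passes znode_name first.
    std::string msg = fmt::format(fmt::runtime(format_str),
                                  "queue-0000000042", "all_0_5_1", 2, 2);
    std::cout << msg << '\n';

    // With a runtime format string, passing too few arguments throws
    // fmt::format_error instead of failing at compile time -- which is
    // how the original mismatch slipped in.
}
```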
@@ -1,3 +1,3 @@
 <yandex>
-    <grpc_port>9100</grpc_port>
+    <grpc_port replace="replace">9100</grpc_port>
 </yandex>
@@ -138,12 +138,15 @@ def reset_after_test():

 # Actual tests

+@pytest.mark.skip(reason="Flaky")
 def test_select_one():
     assert query("SELECT 1") == "1\n"

+@pytest.mark.skip(reason="Flaky")
 def test_ordinary_query():
     assert query("SELECT count() FROM numbers(100)") == "100\n"

+@pytest.mark.skip(reason="Flaky")
 def test_insert_query():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     query("INSERT INTO t VALUES (1),(2),(3)")
@@ -152,11 +155,13 @@ def test_insert_query():
     query("INSERT INTO t FORMAT TabSeparated", input_data="9\n10\n")
     assert query("SELECT a FROM t ORDER BY a") == "1\n2\n3\n4\n5\n6\n7\n8\n9\n10\n"

+@pytest.mark.skip(reason="Flaky")
 def test_insert_query_streaming():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     query("INSERT INTO t VALUES", input_data=["(1),(2),(3),", "(5),(4),(6),", "(7),(8),(9)"])
     assert query("SELECT a FROM t ORDER BY a") == "1\n2\n3\n4\n5\n6\n7\n8\n9\n"

+@pytest.mark.skip(reason="Flaky")
 def test_insert_query_delimiter():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     query("INSERT INTO t FORMAT CSV 1\n2", input_data=["3", "4\n5"], input_data_delimiter='\n')
@@ -166,6 +171,7 @@ def test_insert_query_delimiter():
     query("INSERT INTO t FORMAT CSV 1\n2", input_data=["3", "4\n5"])
     assert query("SELECT a FROM t ORDER BY a") == "1\n5\n234\n"

+@pytest.mark.skip(reason="Flaky")
 def test_insert_default_column():
     query("CREATE TABLE t (a UInt8, b Int32 DEFAULT 100, c String DEFAULT 'c') ENGINE = Memory")
     query("INSERT INTO t (c, a) VALUES ('x',1),('y',2)")
@@ -175,17 +181,20 @@ def test_insert_default_column():
            "3\t100\tc\n" \
            "4\t100\tc\n"

+@pytest.mark.skip(reason="Flaky")
 def test_insert_splitted_row():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     query("INSERT INTO t VALUES", input_data=["(1),(2),(", "3),(5),(4),(6)"])
     assert query("SELECT a FROM t ORDER BY a") == "1\n2\n3\n4\n5\n6\n"

+@pytest.mark.skip(reason="Flaky")
 def test_output_format():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     query("INSERT INTO t VALUES (1),(2),(3)")
     assert query("SELECT a FROM t ORDER BY a FORMAT JSONEachRow") == '{"a":1}\n{"a":2}\n{"a":3}\n'
     assert query("SELECT a FROM t ORDER BY a", output_format="JSONEachRow") == '{"a":1}\n{"a":2}\n{"a":3}\n'

+@pytest.mark.skip(reason="Flaky")
 def test_totals_and_extremes():
     query("CREATE TABLE t (x UInt8, y UInt8) ENGINE = Memory")
     query("INSERT INTO t VALUES (1, 2), (2, 4), (3, 2), (3, 3), (3, 4)")
@@ -194,6 +203,7 @@ def test_totals_and_extremes():
     assert query("SELECT x, y FROM t") == "1\t2\n2\t4\n3\t2\n3\t3\n3\t4\n"
     assert query_and_get_extremes("SELECT x, y FROM t", settings={"extremes": "1"}) == "1\t2\n3\t4\n"

+@pytest.mark.skip(reason="Flaky")
 def test_errors_handling():
     e = query_and_get_error("")
     #print(e)
@@ -202,16 +212,19 @@ def test_errors_handling():
     e = query_and_get_error("CREATE TABLE t (a UInt8) ENGINE = Memory")
     assert "Table default.t already exists" in e.display_text

+@pytest.mark.skip(reason="Flaky")
 def test_authentication():
     query("CREATE USER john IDENTIFIED BY 'qwe123'")
     assert query("SELECT currentUser()", user_name="john", password="qwe123") == "john\n"

+@pytest.mark.skip(reason="Flaky")
 def test_logs():
     logs = query_and_get_logs("SELECT 1", settings={'send_logs_level':'debug'})
     assert "SELECT 1" in logs
     assert "Read 1 rows" in logs
     assert "Peak memory usage" in logs

+@pytest.mark.skip(reason="Flaky")
 def test_progress():
     results = query_no_errors("SELECT number, sleep(0.31) FROM numbers(8) SETTINGS max_block_size=2, interactive_delay=100000", stream_output=True)
     #print(results)
@@ -246,6 +259,7 @@ def test_progress():
     }
 ]"""

+@pytest.mark.skip(reason="Flaky")
 def test_session():
     session_a = "session A"
     session_b = "session B"
@@ -256,10 +270,12 @@ def test_session():
     assert query("SELECT getSetting('custom_x'), getSetting('custom_y')", session_id=session_a) == "1\t2\n"
     assert query("SELECT getSetting('custom_x'), getSetting('custom_y')", session_id=session_b) == "3\t4\n"

+@pytest.mark.skip(reason="Flaky")
 def test_no_session():
     e = query_and_get_error("SET custom_x=1")
     assert "There is no session" in e.display_text

+@pytest.mark.skip(reason="Flaky")
 def test_input_function():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     query("INSERT INTO t SELECT col1 * col2 FROM input('col1 UInt8, col2 UInt8') FORMAT CSV", input_data=["5,4\n", "8,11\n", "10,12\n"])
@@ -269,6 +285,7 @@ def test_input_function():
     query("INSERT INTO t SELECT col1 * col2 FROM input('col1 UInt8, col2 UInt8') FORMAT CSV 20,10\n", input_data="15,15\n")
     assert query("SELECT a FROM t ORDER BY a") == "20\n88\n120\n143\n200\n225\n"

+@pytest.mark.skip(reason="Flaky")
 def test_external_table():
     columns = [clickhouse_grpc_pb2.NameAndType(name='UserID', type='UInt64'), clickhouse_grpc_pb2.NameAndType(name='UserName', type='String')]
     ext1 = clickhouse_grpc_pb2.ExternalTable(name='ext1', columns=columns, data='1\tAlex\n2\tBen\n3\tCarl\n', format='TabSeparated')
@@ -286,6 +303,7 @@ def test_external_table():
     assert query("SELECT * FROM _data ORDER BY _2", external_tables=[unnamed_table]) == "7\tFred\n"\
                                                                                         "6\tGeorge\n"

+@pytest.mark.skip(reason="Flaky")
 def test_external_table_streaming():
     columns = [clickhouse_grpc_pb2.NameAndType(name='UserID', type='UInt64'), clickhouse_grpc_pb2.NameAndType(name='UserName', type='String')]
     def send_query_info():
@@ -301,6 +319,7 @@ def test_external_table_streaming():
                                               "4\tDaniel\n"\
                                               "5\tEthan\n"

+@pytest.mark.skip(reason="Flaky")
 def test_simultaneous_queries_same_channel():
     threads=[]
     try:
@@ -312,6 +331,7 @@ def test_simultaneous_queries_same_channel():
     for thread in threads:
         thread.join()

+@pytest.mark.skip(reason="Flaky")
 def test_simultaneous_queries_multiple_channels():
     threads=[]
     try:
@@ -323,6 +343,7 @@ def test_simultaneous_queries_multiple_channels():
     for thread in threads:
         thread.join()

+@pytest.mark.skip(reason="Flaky")
 def test_cancel_while_processing_input():
     query("CREATE TABLE t (a UInt8) ENGINE = Memory")
     def send_query_info():
@@ -335,6 +356,7 @@ def test_cancel_while_processing_input():
     assert result.progress.written_rows == 6
     assert query("SELECT a FROM t ORDER BY a") == "1\n2\n3\n4\n5\n6\n"

+@pytest.mark.skip(reason="Flaky")
 def test_cancel_while_generating_output():
     def send_query_info():
         yield clickhouse_grpc_pb2.QueryInfo(query="SELECT number, sleep(0.2) FROM numbers(10) SETTINGS max_block_size=2")
@@ -1,6 +1,6 @@
<yandex>
<grpc_port>9100</grpc_port>
<grpc>
<grpc_port replace="replace">9100</grpc_port>
<grpc replace="replace">
<enable_ssl>true</enable_ssl>

<!-- The following two files are used only if enable_ssl=1 -->

@@ -73,15 +73,18 @@ def start_cluster():

# Actual tests

@pytest.mark.skip(reason="Flaky")
def test_secure_channel():
with create_secure_channel() as channel:
assert query("SELECT 'ok'", channel) == "ok\n"

@pytest.mark.skip(reason="Flaky")
def test_insecure_channel():
with pytest.raises(grpc.FutureTimeoutError):
with create_insecure_channel() as channel:
query("SELECT 'ok'", channel)

@pytest.mark.skip(reason="Flaky")
def test_wrong_client_certificate():
with pytest.raises(grpc.FutureTimeoutError):
with create_insecure_channel() as channel:
@@ -689,3 +689,29 @@ def mysql_killed_while_insert(clickhouse_node, mysql_node, service_name):
mysql_node.query("DROP DATABASE kill_mysql_while_insert")
clickhouse_node.query("DROP DATABASE kill_mysql_while_insert")


def clickhouse_killed_while_insert(clickhouse_node, mysql_node, service_name):
mysql_node.query("CREATE DATABASE kill_clickhouse_while_insert")
mysql_node.query("CREATE TABLE kill_clickhouse_while_insert.test ( `id` int(11) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB;")
clickhouse_node.query("CREATE DATABASE kill_clickhouse_while_insert ENGINE = MaterializeMySQL('{}:3306', 'kill_clickhouse_while_insert', 'root', 'clickhouse')".format(service_name))
check_query(clickhouse_node, "SHOW TABLES FROM kill_clickhouse_while_insert FORMAT TSV", 'test\n')

def insert(num):
for i in range(num):
query = "INSERT INTO kill_clickhouse_while_insert.test VALUES({v});".format( v = i + 1 )
mysql_node.query(query)

t = threading.Thread(target=insert, args=(1000,))
t.start()

# TODO: add clickhouse_node.restart_clickhouse(20, kill=False) test
clickhouse_node.restart_clickhouse(20, kill=True)
t.join()

result = mysql_node.query_and_get_data("SELECT COUNT(1) FROM kill_clickhouse_while_insert.test")
for row in result:
res = str(row[0]) + '\n'
check_query(clickhouse_node, "SELECT count() FROM kill_clickhouse_while_insert.test FORMAT TSV", res)

mysql_node.query("DROP DATABASE kill_clickhouse_while_insert")
clickhouse_node.query("DROP DATABASE kill_clickhouse_while_insert")
@@ -218,3 +218,9 @@ def test_mysql_killed_while_insert_5_7(started_cluster, started_mysql_5_7):
def test_mysql_killed_while_insert_8_0(started_cluster, started_mysql_8_0):
materialize_with_ddl.mysql_killed_while_insert(clickhouse_node, started_mysql_8_0, "mysql8_0")


def test_clickhouse_killed_while_insert_5_7(started_cluster, started_mysql_5_7):
materialize_with_ddl.clickhouse_killed_while_insert(clickhouse_node, started_mysql_5_7, "mysql1")

def test_clickhouse_killed_while_insert_8_0(started_cluster, started_mysql_8_0):
materialize_with_ddl.clickhouse_killed_while_insert(clickhouse_node, started_mysql_8_0, "mysql8_0")

@@ -0,0 +1,24 @@
<test>
<create_query>
CREATE TABLE bench
ENGINE = AggregatingMergeTree()
ORDER BY key
SETTINGS index_granularity = 8192
AS
SELECT CAST(reinterpretAsString(number), 'SimpleAggregateFunction(any, String)') AS key
FROM numbers_mt(toUInt64(5e6))
SETTINGS max_insert_threads = 16
</create_query>

<fill_query>OPTIMIZE TABLE bench</fill_query>

<query>
SELECT *
FROM bench
GROUP BY key
SETTINGS optimize_aggregation_in_order = 1, max_threads = 16
FORMAT Null
</query>

<drop_query>DROP TABLE IF EXISTS bench</drop_query>
</test>
@@ -1 +0,0 @@
Ok.
15
tests/queries/0_stateless/01594_too_low_memory_limits.sh
Executable file
@@ -0,0 +1,15 @@
#!/usr/bin/env bash

CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. "$CURDIR"/../shell_config.sh

# It is not mandatory to use an existing table, since the query fails earlier; the name is just a placeholder.
# This is the INSERT SELECT form that passes these settings to the INSERT query, not to the SELECT.
${CLICKHOUSE_CLIENT} --format Null -q 'insert into placeholder_table_name select * from numbers_mt(65535) format Null settings max_memory_usage=1, max_untracked_memory=1' >& /dev/null
exit_code=$?

# expecting ATTEMPT_TO_READ_AFTER_EOF, error code 32
test $exit_code -eq 32 || exit 1

# check that the server is still alive
${CLICKHOUSE_CLIENT} --format Null -q 'SELECT 1'
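The second comment above is the important detail: in an INSERT ... SELECT statement, a trailing SETTINGS clause binds to the INSERT query, not to the inner SELECT. A minimal sketch of the same shape with a survivable limit (the destination table is a placeholder; against a real table this form would succeed):

# Same INSERT SELECT form; here the memory limit constrains the INSERT pipeline.
${CLICKHOUSE_CLIENT} -q 'insert into placeholder_table_name select * from numbers_mt(65535) settings max_memory_usage=100000000'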
@@ -174,3 +174,5 @@
01584_distributed_buffer_cannot_find_column
01018_ip_dictionary
00976_ttl_with_old_parts
01558_ttest_scipy
01561_mann_whitney_scipy
@@ -70,7 +70,9 @@ export CLICKHOUSE_PORT_INTERSERVER=${CLICKHOUSE_PORT_INTERSERVER:="9009"}
export CLICKHOUSE_URL_INTERSERVER=${CLICKHOUSE_URL_INTERSERVER:="${CLICKHOUSE_PORT_HTTP_PROTO}://${CLICKHOUSE_HOST}:${CLICKHOUSE_PORT_INTERSERVER}/"}

export CLICKHOUSE_CURL_COMMAND=${CLICKHOUSE_CURL_COMMAND:="curl"}
export CLICKHOUSE_CURL_TIMEOUT=${CLICKHOUSE_CURL_TIMEOUT:="10"}
# The queries in CI are prone to sudden delays, and we often don't check for curl
# errors, so it makes sense to set a relatively generous timeout.
export CLICKHOUSE_CURL_TIMEOUT=${CLICKHOUSE_CURL_TIMEOUT:="60"}
export CLICKHOUSE_CURL=${CLICKHOUSE_CURL:="${CLICKHOUSE_CURL_COMMAND} -q -s --max-time ${CLICKHOUSE_CURL_TIMEOUT}"}
export CLICKHOUSE_TMP=${CLICKHOUSE_TMP:="."}
mkdir -p ${CLICKHOUSE_TMP}
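Every variable in this file uses the same ${VAR:=default} expansion, so a caller can pre-seed a value before sourcing shell_config.sh and the default is skipped. A minimal sketch of overriding the new curl timeout (the value and the relative path are illustrative):

# Pre-set before sourcing; the :="60" default then keeps our value.
export CLICKHOUSE_CURL_TIMEOUT=120
. ./shell_config.sh
echo "$CLICKHOUSE_CURL"    # curl -q -s --max-time 120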
@@ -20,6 +20,10 @@ issue_14810 = "https://github.com/ClickHouse/ClickHouse/issues/14810"
issue_15165 = "https://github.com/ClickHouse/ClickHouse/issues/15165"
issue_15980 = "https://github.com/ClickHouse/ClickHouse/issues/15980"
issue_16403 = "https://github.com/ClickHouse/ClickHouse/issues/16403"
issue_17146 = "https://github.com/ClickHouse/ClickHouse/issues/17146"
issue_17147 = "https://github.com/ClickHouse/ClickHouse/issues/17147"
issue_17653 = "https://github.com/ClickHouse/ClickHouse/issues/17653"
issue_17655 = "https://github.com/ClickHouse/ClickHouse/issues/17655"

xfails = {
"syntax/show create quota/I show create quota current":
@@ -97,11 +101,17 @@ xfails = {
"privileges/alter move/:/:/:/:/user with revoked ALTER MOVE PARTITION privilege/":
[(Fail, issue_16403)],
"privileges/create table/create with join query privilege granted directly or via role/:":
[(Fail, issue_14149)],
[(Fail, issue_17653)],
"privileges/create table/create with join union subquery privilege granted directly or via role/:":
[(Fail, issue_14149)],
[(Fail, issue_17653)],
"privileges/create table/create with nested tables privilege granted directly or via role/:":
[(Fail, issue_14149)],
[(Fail, issue_17653)],
"privileges/kill mutation/no privilege/kill mutation on cluster":
[(Fail, issue_17146)],
"privileges/kill query/privilege granted directly or via role/:/":
[(Fail, issue_17147)],
"privileges/show dictionaries/:/check privilege/:/exists/EXISTS with privilege":
[(Fail, issue_17655)],
}

xflags = {
File diff suppressed because it is too large
@@ -10,9 +10,6 @@ import rbac.helper.errors as errors
aliases = {"ALTER DELETE", "DELETE"}

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_AlterDelete_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, table_type, privilege, node=None):
"""Check that the user is only able to execute ALTER DELETE when they have the required privilege, either directly or via role.
"""

@@ -10,9 +10,6 @@ import rbac.helper.errors as errors
aliases = {"ALTER FETCH PARTITION", "FETCH PARTITION"}

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_AlterFetch_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, table_type, privilege, node=None):
"""Check that the user is only able to execute ALTER FETCH PARTITION when they have the required privilege, either directly or via role.
"""

@@ -10,9 +10,6 @@ import rbac.helper.errors as errors
aliases = {"ALTER FREEZE PARTITION", "FREEZE PARTITION"}

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_AlterFreeze_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, table_type, privilege, node=None):
"""Check that the user is only able to execute ALTER FREEZE PARTITION when they have the required privilege, either directly or via role.
"""

@@ -10,9 +10,6 @@ import rbac.helper.errors as errors
aliases = {"ALTER MOVE PARTITION", "ALTER MOVE PART", "MOVE PARTITION", "MOVE PART"}

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_AlterMove_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, table_type, privilege, node=None):
"""Check that the user is only able to execute ALTER MOVE PARTITION when they have the required privilege, either directly or via role.
"""

@@ -10,9 +10,6 @@ import rbac.helper.errors as errors
aliases = {"ALTER UPDATE", "UPDATE"}

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_AlterUpdate_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, table_type, privilege, node=None):
"""Check that the user is only able to execute ALTER UPDATE when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateDatabase_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute ATTACH DATABASE when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateDictionary_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute ATTACH DICTIONARY when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateTable_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute ATTACH TABLE when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateTemporaryTable_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute ATTACH TEMPORARY TABLE when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateDatabase_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute CREATE DATABASE when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateDictionary_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute CREATE DICTIONARY when they have the required privilege, either directly or via role.
"""
@@ -17,13 +17,17 @@ def create_without_create_table_privilege(self, node=None):
node = self.context.node

with user(node, f"{user_name}"):
try:
with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")
with When("I try to create a table without CREATE TABLE privilege as the user"):
node.query(f"CREATE TABLE {table_name} (x Int8) ENGINE = Memory", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)

with When("I try to create a table without CREATE TABLE privilege as the user"):
node.query(f"CREATE TABLE {table_name} (x Int8) ENGINE = Memory", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
finally:
with Finally("I drop the table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

@TestScenario
def create_with_create_table_privilege_granted_directly_or_via_role(self, node=None):
@@ -103,18 +107,23 @@ def create_with_revoked_create_table_privilege(self, grant_target_name, user_nam
if node is None:
node = self.context.node

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")
try:
with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with When("I grant CREATE TABLE privilege"):
node.query(f"GRANT CREATE TABLE ON {table_name} TO {grant_target_name}")
with When("I grant CREATE TABLE privilege"):
node.query(f"GRANT CREATE TABLE ON {table_name} TO {grant_target_name}")

with And("I revoke CREATE TABLE privilege"):
node.query(f"REVOKE CREATE TABLE ON {table_name} FROM {grant_target_name}")
with And("I revoke CREATE TABLE privilege"):
node.query(f"REVOKE CREATE TABLE ON {table_name} FROM {grant_target_name}")

with Then("I try to create a table on the table as the user"):
node.query(f"CREATE TABLE {table_name} (x Int8) ENGINE = Memory", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
with Then("I try to create a table on the table as the user"):
node.query(f"CREATE TABLE {table_name} (x Int8) ENGINE = Memory", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)

finally:
with Finally("I drop the table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

@TestScenario
def create_without_source_table_privilege(self, node=None):
@@ -131,16 +140,21 @@ def create_without_source_table_privilege(self, node=None):

with table(node, f"{source_table_name}"):
with user(node, f"{user_name}"):
try:

with When("I grant CREATE TABLE privilege to a user"):
node.query(f"GRANT CREATE TABLE ON {table_name} TO {user_name}")
with When("I grant CREATE TABLE privilege to a user"):
node.query(f"GRANT CREATE TABLE ON {table_name} TO {user_name}")

with And("I grant INSERT privilege"):
node.query(f"GRANT INSERT ON {table_name} TO {user_name}")
with And("I grant INSERT privilege"):
node.query(f"GRANT INSERT ON {table_name} TO {user_name}")

with Then("I try to create a table without select privilege on the table"):
node.query(f"CREATE TABLE {table_name} ENGINE = Memory AS SELECT * FROM {source_table_name}", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
with Then("I try to create a table without select privilege on the table"):
node.query(f"CREATE TABLE {table_name} ENGINE = Memory AS SELECT * FROM {source_table_name}", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)

finally:
with Finally("I drop the table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

@TestScenario
def create_without_insert_privilege(self, node=None):
@@ -158,15 +172,19 @@ def create_without_insert_privilege(self, node=None):
with table(node, f"{source_table_name}"):
with user(node, f"{user_name}"):

with When("I grant CREATE TABLE privilege to a user"):
node.query(f"GRANT CREATE TABLE ON {table_name} TO {user_name}")
try:
with When("I grant CREATE TABLE privilege to a user"):
node.query(f"GRANT CREATE TABLE ON {table_name} TO {user_name}")

with And("I grant SELECT privilege"):
node.query(f"GRANT SELECT ON {source_table_name} TO {user_name}")
with And("I grant SELECT privilege"):
node.query(f"GRANT SELECT ON {source_table_name} TO {user_name}")

with Then("I try to create a table without insert privilege on the table"):
node.query(f"CREATE TABLE {table_name} ENGINE = Memory AS SELECT * FROM {source_table_name}", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
with Then("I try to create a table without insert privilege on the table"):
node.query(f"CREATE TABLE {table_name} ENGINE = Memory AS SELECT * FROM {source_table_name}", settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)
finally:
with Finally("I drop the table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

@TestScenario
def create_with_source_table_privilege_granted_directly_or_via_role(self, node=None):
@@ -281,20 +299,14 @@ def create_with_subquery(self, user_name, grant_target_name, node=None):

with When(f"permutation={permutation}, tables granted = {tables_granted}"):

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Then("I attempt to create a table as the user"):
with When("I attempt to create a table as the user"):
node.query(create_table_query.format(table_name=table_name, table0_name=table0_name, table1_name=table1_name, table2_name=table2_name), settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)

with When("I grant select on all tables"):
with grant_select_on_table(node, max(permutations(table_count=3))+1, grant_target_name, table0_name, table1_name, table2_name):

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Then("I attempt to create a table as the user"):
with When("I attempt to create a table as the user"):
node.query(create_table_query.format(table_name=table_name, table0_name=table0_name, table1_name=table1_name, table2_name=table2_name), settings = [("user", f"{user_name}")])

finally:
@@ -358,20 +370,14 @@ def create_with_join_query(self, grant_target_name, user_name, node=None):

with When(f"permutation={permutation}, tables granted = {tables_granted}"):

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Then("I attempt to create a table as the user"):
with When("I attempt to create a table as the user"):
node.query(create_table_query.format(table_name=table_name, table0_name=table0_name, table1_name=table1_name), settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)

with When("I grant select on all tables"):
with grant_select_on_table(node, max(permutations(table_count=2))+1, grant_target_name, table0_name, table1_name):

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Then("I attempt to create a table as the user"):
with When("I attempt to create a table as the user"):
node.query(create_table_query.format(table_name=table_name, table0_name=table0_name, table1_name=table1_name), settings = [("user", f"{user_name}")])

finally:
@@ -435,20 +441,14 @@ def create_with_union_query(self, grant_target_name, user_name, node=None):

with When(f"permutation={permutation}, tables granted = {tables_granted}"):

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Then("I attempt to create a table as the user"):
with When("I attempt to create a table as the user"):
node.query(create_table_query.format(table_name=table_name, table0_name=table0_name, table1_name=table1_name), settings = [("user", f"{user_name}")],
exitcode=exitcode, message=message)

with When("I grant select on all tables"):
with grant_select_on_table(node, max(permutations(table_count=2))+1, grant_target_name, table0_name, table1_name):

with Given("I don't have a table"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Then("I attempt to create a table as the user"):
with When("I attempt to create a table as the user"):
node.query(create_table_query.format(table_name=table_name, table0_name=table0_name, table1_name=table1_name), settings = [("user", f"{user_name}")])

finally:
@@ -622,9 +622,18 @@ def create_with_nested_tables(self, grant_target_name, user_name, node=None):
settings = [("user", f"{user_name}")])

finally:
with Finally("I drop the tables"):
with Finally(f"I drop {table_name}"):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with And(f"I drop {table1_name}"):
node.query(f"DROP TABLE IF EXISTS {table1_name}")

with And(f"I drop {table3_name}"):
node.query(f"DROP TABLE IF EXISTS {table3_name}")

with And(f"I drop {table5_name}"):
node.query(f"DROP TABLE IF EXISTS {table5_name}")

@TestScenario
def create_as_another_table(self, node=None):
"""Check that the user is able to create a table as another table with only CREATE TABLE privilege.
@@ -707,7 +716,6 @@ def create_as_merge(self, node=None):
@TestFeature
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateTable("1.0"),
RQ_SRS_006_RBAC_Privileges_CreateTable_Access("1.0"),
)
@Name("create table")
def feature(self, stress=None, parallel=None, node="clickhouse1"):
@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_CreateTemporaryTable_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute CREATE TEMPORARY TABLE when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_DropDatabase_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute DETACH DATABASE when they have the required privilege, either directly or via role.
"""
@@ -46,7 +43,9 @@ def privilege_check(grant_target_name, user_name, node=None):
node.query(f"DETACH DATABASE {db_name}", settings = [("user", user_name)],
exitcode=exitcode, message=message)
finally:
with Finally("I drop the database"):
with Finally("I reattach the database", flags=TE):
node.query(f"ATTACH DATABASE IF NOT EXISTS {db_name}")
with And("I drop the database", flags=TE):
node.query(f"DROP DATABASE IF EXISTS {db_name}")

with Scenario("user with privilege", setup=instrument_clickhouse_server_log):
@@ -63,7 +62,9 @@ def privilege_check(grant_target_name, user_name, node=None):
node.query(f"DETACH DATABASE {db_name}", settings = [("user", user_name)])

finally:
with Finally("I drop the database"):
with Finally("I reattach the database", flags=TE):
node.query(f"ATTACH DATABASE IF NOT EXISTS {db_name}")
with And("I drop the database", flags=TE):
node.query(f"DROP DATABASE IF EXISTS {db_name}")

with Scenario("user with revoked privilege", setup=instrument_clickhouse_server_log):
@@ -84,7 +85,9 @@ def privilege_check(grant_target_name, user_name, node=None):
exitcode=exitcode, message=message)

finally:
with Finally("I drop the database"):
with Finally("I reattach the database", flags=TE):
node.query(f"ATTACH DATABASE IF NOT EXISTS {db_name}")
with And("I drop the database", flags=TE):
node.query(f"DROP DATABASE IF EXISTS {db_name}")

@TestFeature

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_DropDictionary_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute DETACH DICTIONARY when they have the required privilege, either directly or via role.
"""
@@ -46,7 +43,9 @@ def privilege_check(grant_target_name, user_name, node=None):
node.query(f"DETACH DICTIONARY {dict_name}", settings = [("user", user_name)], exitcode=exitcode, message=message)

finally:
with Finally("I drop the dictionary"):
with Finally("I reattach the dictionary", flags=TE):
node.query(f"ATTACH DICTIONARY IF NOT EXISTS {dict_name}")
with And("I drop the dictionary", flags=TE):
node.query(f"DROP DICTIONARY IF EXISTS {dict_name}")

with Scenario("user with privilege", setup=instrument_clickhouse_server_log):
@@ -63,7 +62,9 @@ def privilege_check(grant_target_name, user_name, node=None):
node.query(f"DETACH DICTIONARY {dict_name}", settings = [("user", user_name)])

finally:
with Finally("I drop the dictionary"):
with Finally("I reattach the dictionary", flags=TE):
node.query(f"ATTACH DICTIONARY IF NOT EXISTS {dict_name}")
with And("I drop the dictionary", flags=TE):
node.query(f"DROP DICTIONARY IF EXISTS {dict_name}")

with Scenario("user with revoked privilege", setup=instrument_clickhouse_server_log):
@@ -83,7 +84,9 @@ def privilege_check(grant_target_name, user_name, node=None):
node.query(f"DETACH DICTIONARY {dict_name}", settings = [("user", user_name)], exitcode=exitcode, message=message)

finally:
with Finally("I drop the dictionary"):
with Finally("I reattach the dictionary", flags=TE):
node.query(f"ATTACH DICTIONARY IF NOT EXISTS {dict_name}")
with And("I drop the dictionary", flags=TE):
node.query(f"DROP DICTIONARY IF EXISTS {dict_name}")

@TestFeature

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_DropTable_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute DETACH TABLE when they have the required privilege, either directly or via role.
"""
@@ -47,7 +44,9 @@ def privilege_check(grant_target_name, user_name, node=None):
exitcode=exitcode, message=message)

finally:
with Finally("I drop the table"):
with Finally("I reattach the table", flags=TE):
node.query(f"ATTACH TABLE IF NOT EXISTS {table_name}")
with And("I drop the table", flags=TE):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Scenario("user with privilege", setup=instrument_clickhouse_server_log):
@@ -64,7 +63,9 @@ def privilege_check(grant_target_name, user_name, node=None):
node.query(f"DETACH TABLE {table_name}", settings = [("user", user_name)])

finally:
with Finally("I drop the table"):
with Finally("I reattach the table", flags=TE):
node.query(f"ATTACH TABLE IF NOT EXISTS {table_name}")
with And("I drop the table", flags=TE):
node.query(f"DROP TABLE IF EXISTS {table_name}")

with Scenario("user with revoked privilege", setup=instrument_clickhouse_server_log):
@@ -85,7 +86,9 @@ def privilege_check(grant_target_name, user_name, node=None):
exitcode=exitcode, message=message)

finally:
with Finally("I drop the table"):
with Finally("I reattach the table", flags=TE):
node.query(f"ATTACH TABLE IF NOT EXISTS {table_name}")
with And("I drop the table", flags=TE):
node.query(f"DROP TABLE IF EXISTS {table_name}")

@TestFeature

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_DropView_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute DETACH VIEW when they have the required privilege, either directly or via role.
"""
@@ -47,8 +44,10 @@ def privilege_check(grant_target_name, user_name, node=None):
exitcode=exitcode, message=message)

finally:
with Finally("I drop the view"):
node.query(f"DROP VIEW IF EXISTS {view_name}")
with Finally("I reattach the view as a table", flags=TE):
node.query(f"ATTACH TABLE IF NOT EXISTS {view_name}")
with And("I drop the view", flags=TE):
node.query(f"DROP TABLE IF EXISTS {view_name}")

with Scenario("user with privilege", setup=instrument_clickhouse_server_log):
view_name = f"view_{getuid()}"
@@ -64,8 +63,10 @@ def privilege_check(grant_target_name, user_name, node=None):
node.query(f"DETACH VIEW {view_name}", settings = [("user", user_name)])

finally:
with Finally("I drop the view"):
node.query(f"DROP VIEW IF EXISTS {view_name}")
with Finally("I reattach the view as a table", flags=TE):
node.query(f"ATTACH TABLE IF NOT EXISTS {view_name}")
with And("I drop the table", flags=TE):
node.query(f"DROP TABLE IF EXISTS {view_name}")

with Scenario("user with revoked privilege", setup=instrument_clickhouse_server_log):
view_name = f"view_{getuid()}"
@@ -85,8 +86,10 @@ def privilege_check(grant_target_name, user_name, node=None):
exitcode=exitcode, message=message)

finally:
with Finally("I drop the view"):
node.query(f"DROP VIEW IF EXISTS {view_name}")
with Finally("I reattach the view as a table", flags=TE):
node.query(f"ATTACH TABLE IF NOT EXISTS {view_name}")
with And("I drop the view", flags=TE):
node.query(f"DROP TABLE IF EXISTS {view_name}")

@TestFeature
@Requirements(

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_DropDatabase_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute DROP DATABASE when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_DropDictionary_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute DROP DICTIONARY when they have the required privilege, either directly or via role.
"""

@@ -3,9 +3,6 @@ from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_DropTable_Access("1.0"),
)
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute DROP TABLE when they have the required privilege, either directly or via role.
"""
@@ -13,10 +13,18 @@ def feature(self):
try:
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.insert", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.select", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.show_tables", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.public_tables", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.distributed_table", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.grant_option", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.truncate", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.optimize", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.kill_query", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.kill_mutation", "feature"), flags=TE), {})

run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.show.show_tables", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.show.show_dictionaries", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.show.show_databases", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.show.show_columns", "feature"), flags=TE), {})

run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.alter.alter_column", "feature"), flags=TE), {})
run_scenario(pool, tasks, Feature(test=load("rbac.tests.privileges.alter.alter_index", "feature"), flags=TE), {})
@@ -21,53 +21,67 @@ def without_privilege(self, table_type, node=None):
"""
user_name = f"user_{getuid()}"
table_name = f"table_{getuid()}"

if node is None:
node = self.context.node

with table(node, table_name, table_type):
with user(node, user_name):

with When("I run INSERT without privilege"):
exitcode, message = errors.not_enough_privileges(name=user_name)

node.query(f"INSERT INTO {table_name} (d) VALUES ('2020-01-01')", settings = [("user", user_name)],
exitcode=exitcode, message=message)

@TestScenario
@Requirements(
RQ_SRS_006_RBAC_Privileges_Insert_Grant("1.0"),
RQ_SRS_006_RBAC_Grant_Privilege_Insert("1.0"),
)
def user_with_privilege(self, table_type, node=None):
"""Check that the user can insert into a table on which they have insert privilege.
"""
user_name = f"user_{getuid()}"
table_name = f"table_{getuid()}"

if node is None:
node = self.context.node

with table(node, table_name, table_type):
with user(node, user_name):

with When("I grant insert privilege"):
node.query(f"GRANT INSERT ON {table_name} TO {user_name}")

with And("I use INSERT"):
node.query(f"INSERT INTO {table_name} (d) VALUES ('2020-01-01')", settings=[("user",user_name)])

with Then("I check the insert functioned"):
output = node.query(f"SELECT d FROM {table_name} FORMAT JSONEachRow").output
assert output == '{"d":"2020-01-01"}', error()

@TestScenario
@Requirements(
RQ_SRS_006_RBAC_Privileges_Insert_Revoke("1.0"),
RQ_SRS_006_RBAC_Revoke_Privilege_Insert("1.0"),
)
def user_with_revoked_privilege(self, table_type, node=None):
"""Check that the user is unable to insert into a table after insert privilege on that table has been revoked from the user.
"""
user_name = f"user_{getuid()}"
table_name = f"table_{getuid()}"

if node is None:
node = self.context.node

with table(node, table_name, table_type):
with user(node, user_name):

with When("I grant insert privilege"):
node.query(f"GRANT INSERT ON {table_name} TO {user_name}")

with And("I revoke insert privilege"):
node.query(f"REVOKE INSERT ON {table_name} FROM {user_name}")

with And("I use INSERT"):
exitcode, message = errors.not_enough_privileges(name=user_name)
node.query(f"INSERT INTO {table_name} (d) VALUES ('2020-01-01')",
@@ -124,7 +138,7 @@ def user_column_privileges(self, grant_columns, insert_columns_pass, data_fail,

@TestScenario
@Requirements(
RQ_SRS_006_RBAC_Privileges_Insert_Grant("1.0"),
RQ_SRS_006_RBAC_Grant_Privilege_Insert("1.0"),
)
def role_with_privilege(self, table_type, node=None):
"""Check that the user can insert into a table after being granted a role that
@@ -149,7 +163,7 @@ def role_with_privilege(self, table_type, node=None):

@TestScenario
@Requirements(
RQ_SRS_006_RBAC_Privileges_Insert_Revoke("1.0"),
RQ_SRS_006_RBAC_Revoke_Privilege_Insert("1.0"),
)
def role_with_revoked_privilege(self, table_type, node=None):
"""Check that a user with a role that has insert privilege on a table
256
tests/testflows/rbac/tests/privileges/kill_mutation.py
Normal file
@@ -0,0 +1,256 @@
from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def no_privilege(self, node=None):
"""Check that the user doesn't need privileges to execute `KILL MUTATION` with no mutations.
"""
if node is None:
node = self.context.node

with Scenario("kill mutation on a table"):
user_name = f"user_{getuid()}"
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):
with user(node, user_name):
with When("I attempt to kill mutation on table"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)])

with Scenario("kill mutation on cluster"):
user_name = f"user_{getuid()}"
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):
with user(node, user_name):
with When("I attempt to kill mutation on cluster"):
node.query(f"KILL MUTATION ON CLUSTER sharded_cluster WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)])

@TestSuite
def privileges_granted_directly(self, node=None):
"""Check that a user is able to execute `KILL MUTATION` on a table with a mutation
if and only if the user has the privilege matching the source of the mutation on that table.
For example, to execute `KILL MUTATION` after `ALTER UPDATE`, the user needs the `ALTER UPDATE` privilege.
"""
user_name = f"user_{getuid()}"

if node is None:
node = self.context.node

with user(node, f"{user_name}"):

Suite(test=update, setup=instrument_clickhouse_server_log)(user_name=user_name, grant_target_name=user_name)
Suite(test=delete, setup=instrument_clickhouse_server_log)(user_name=user_name, grant_target_name=user_name)
Suite(test=drop_column, setup=instrument_clickhouse_server_log)(user_name=user_name, grant_target_name=user_name)

@TestSuite
def privileges_granted_via_role(self, node=None):
"""Check that a user is able to execute `KILL MUTATION` on a table with a mutation
if and only if the user has the privilege matching the source of the mutation on that table.
For example, to execute `KILL MUTATION` after `ALTER UPDATE`, the user needs the `ALTER UPDATE` privilege.
"""
user_name = f"user_{getuid()}"
role_name = f"role_{getuid()}"

if node is None:
node = self.context.node

with user(node, f"{user_name}"), role(node, f"{role_name}"):

with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")

Suite(test=update, setup=instrument_clickhouse_server_log)(user_name=user_name, grant_target_name=role_name)
Suite(test=delete, setup=instrument_clickhouse_server_log)(user_name=user_name, grant_target_name=role_name)
Suite(test=drop_column, setup=instrument_clickhouse_server_log)(user_name=user_name, grant_target_name=role_name)

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_KillMutation_AlterUpdate("1.0")
)
def update(self, user_name, grant_target_name, node=None):
"""Check that the user is able to execute `KILL MUTATION` after `ALTER UPDATE`
if and only if the user has the `ALTER UPDATE` privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)

if node is None:
node = self.context.node

with Scenario("KILL ALTER UPDATE without privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER UPDATE mutation"):
node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)],
exitcode=exitcode, message="Exception: Not allowed to kill mutation.")

with Scenario("KILL ALTER UPDATE with privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER UPDATE mutation"):
node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

with When("I grant the ALTER UPDATE privilege"):
node.query(f"GRANT ALTER UPDATE ON {table_name} TO {grant_target_name}")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)])

with Scenario("KILL ALTER UPDATE with revoked privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER UPDATE mutation"):
node.query(f"ALTER TABLE {table_name} UPDATE a = x WHERE 1")

with When("I grant the ALTER UPDATE privilege"):
node.query(f"GRANT ALTER UPDATE ON {table_name} TO {grant_target_name}")

with And("I revoke the ALTER UPDATE privilege"):
node.query(f"REVOKE ALTER UPDATE ON {table_name} FROM {grant_target_name}")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)],
exitcode=exitcode, message="Exception: Not allowed to kill mutation.")

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_KillMutation_AlterDelete("1.0")
)
def delete(self, user_name, grant_target_name, node=None):
"""Check that the user is able to execute `KILL MUTATION` after `ALTER DELETE`
if and only if the user has the `ALTER DELETE` privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)

if node is None:
node = self.context.node

with Scenario("KILL ALTER DELETE without privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER DELETE mutation"):
node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)],
exitcode=exitcode, message="Exception: Not allowed to kill mutation.")

with Scenario("KILL ALTER DELETE with privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER DELETE mutation"):
node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

with When("I grant the ALTER DELETE privilege"):
node.query(f"GRANT ALTER DELETE ON {table_name} TO {grant_target_name}")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)])

with Scenario("KILL ALTER DELETE with revoked privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER DELETE mutation"):
node.query(f"ALTER TABLE {table_name} DELETE WHERE 1")

with When("I grant the ALTER DELETE privilege"):
node.query(f"GRANT ALTER DELETE ON {table_name} TO {grant_target_name}")

with And("I revoke the ALTER DELETE privilege"):
node.query(f"REVOKE ALTER DELETE ON {table_name} FROM {grant_target_name}")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)],
exitcode=exitcode, message="Exception: Not allowed to kill mutation.")

@TestSuite
@Requirements(
RQ_SRS_006_RBAC_Privileges_KillMutation_AlterDropColumn("1.0")
)
def drop_column(self, user_name, grant_target_name, node=None):
"""Check that the user is able to execute `KILL MUTATION` after `ALTER DROP COLUMN`
if and only if the user has the `ALTER DROP COLUMN` privilege.
"""
exitcode, message = errors.not_enough_privileges(name=user_name)

if node is None:
node = self.context.node

with Scenario("KILL ALTER DROP COLUMN without privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER DROP COLUMN mutation"):
node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)],
exitcode=exitcode, message="Exception: Not allowed to kill mutation.")

with Scenario("KILL ALTER DROP COLUMN with privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER DROP COLUMN mutation"):
node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

with When("I grant the ALTER DROP COLUMN privilege"):
node.query(f"GRANT ALTER DROP COLUMN ON {table_name} TO {grant_target_name}")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)])

with Scenario("KILL ALTER DROP COLUMN with revoked privilege"):
table_name = f"merge_tree_{getuid()}"

with table(node, table_name):

with Given("I have an ALTER DROP COLUMN mutation"):
node.query(f"ALTER TABLE {table_name} DROP COLUMN x")

with When("I grant the ALTER DROP COLUMN privilege"):
node.query(f"GRANT ALTER DROP COLUMN ON {table_name} TO {grant_target_name}")

with And("I revoke the ALTER DROP COLUMN privilege"):
node.query(f"REVOKE ALTER DROP COLUMN ON {table_name} FROM {grant_target_name}")

with When("I try to KILL MUTATION"):
node.query(f"KILL MUTATION WHERE database = 'default' AND table = '{table_name}'", settings = [("user", user_name)],
exitcode=exitcode, message="Exception: Not allowed to kill mutation.")

@TestFeature
@Requirements(
RQ_SRS_006_RBAC_Privileges_KillMutation("1.0"),
)
@Name("kill mutation")
def feature(self, node="clickhouse1", stress=None, parallel=None):
"""Check the RBAC functionality of KILL MUTATION.
"""
self.context.node = self.context.cluster.node(node)

if parallel is not None:
self.context.parallel = parallel
if stress is not None:
self.context.stress = stress

Suite(run=no_privilege, setup=instrument_clickhouse_server_log)
Suite(run=privileges_granted_directly, setup=instrument_clickhouse_server_log)
Suite(run=privileges_granted_via_role, setup=instrument_clickhouse_server_log)
84
tests/testflows/rbac/tests/privileges/kill_query.py
Normal file
@@ -0,0 +1,84 @@
from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def privilege_granted_directly_or_via_role(self, node=None):
"""Check that the user is only able to execute KILL QUERY when they have the required privilege, either directly or via role.
"""
role_name = f"role_{getuid()}"
user_name = f"user_{getuid()}"

if node is None:
node = self.context.node

with Suite("user with direct privilege", setup=instrument_clickhouse_server_log):
with user(node, user_name):

with When(f"I run checks that {user_name} is only able to execute KILL QUERY with required privileges"):
privilege_check(grant_target_name=user_name, user_name=user_name, node=node)

with Suite("user with privilege via role", setup=instrument_clickhouse_server_log):
with user(node, user_name), role(node, role_name):

with When("I grant the role to the user"):
node.query(f"GRANT {role_name} TO {user_name}")

with And(f"I run checks that {user_name} with {role_name} is only able to execute KILL QUERY with required privileges"):
privilege_check(grant_target_name=role_name, user_name=user_name, node=node)

def privilege_check(grant_target_name, user_name, node=None):
"""Run scenarios to check the user's access with different privileges.
"""
exitcode, message = errors.not_enough_privileges(name=f"{user_name}")

with Scenario("user without privilege", setup=instrument_clickhouse_server_log):

with When("I attempt to kill a query without privilege"):
node.query(f"KILL QUERY WHERE user ='default'", settings = [("user", user_name)],
exitcode=exitcode, message=message)

with Scenario("user with privilege", setup=instrument_clickhouse_server_log):
with When("I grant kill query privilege"):
node.query(f"GRANT KILL QUERY TO {grant_target_name}")

with Then("I attempt to kill a query"):
node.query(f"KILL QUERY WHERE 1", settings = [("user", user_name)])

with Scenario("user with revoked privilege", setup=instrument_clickhouse_server_log):

with When("I grant the kill query privilege"):
node.query(f"GRANT KILL QUERY TO {grant_target_name}")

with And("I revoke the kill query privilege"):
node.query(f"REVOKE KILL QUERY FROM {grant_target_name}")

with Then("I attempt to kill a query"):
node.query(f"KILL QUERY WHERE 1", settings = [("user", user_name)],
exitcode=exitcode, message=message)

with Scenario("execute on cluster", setup=instrument_clickhouse_server_log):

with When("I grant the kill query privilege"):
node.query(f"GRANT KILL QUERY TO {grant_target_name}")

with Then("I attempt to kill a query"):
node.query(f"KILL QUERY ON CLUSTER WHERE 1", settings = [("user", user_name)])

@TestFeature
@Requirements(
RQ_SRS_006_RBAC_Privileges_KillQuery("1.0"),
)
@Name("kill query")
def feature(self, node="clickhouse1", stress=None, parallel=None):
"""Check the RBAC functionality of KILL QUERY.
"""
self.context.node = self.context.cluster.node(node)

if parallel is not None:
self.context.parallel = parallel
if stress is not None:
self.context.stress = stress

with Suite(test=privilege_granted_directly_or_via_role):
privilege_granted_directly_or_via_role()
113
tests/testflows/rbac/tests/privileges/optimize.py
Normal file
113
tests/testflows/rbac/tests/privileges/optimize.py
Normal file
@ -0,0 +1,113 @@
|
||||
from rbac.requirements import *
|
||||
from rbac.helper.common import *
|
||||
import rbac.helper.errors as errors
|
||||
|
||||
@TestSuite
|
||||
def privilege_granted_directly_or_via_role(self, table_type, node=None):
|
||||
"""Check that user is only able to execute OPTIMIZE when they have required privilege, either directly or via role.
|
||||
"""
|
||||
role_name = f"role_{getuid()}"
|
||||
user_name = f"user_{getuid()}"
|
||||
|
||||
if node is None:
|
||||
node = self.context.node
|
||||
|
||||
with Suite("user with direct privilege", setup=instrument_clickhouse_server_log):
|
||||
with user(node, user_name):
|
||||
|
||||
with When(f"I run checks that {user_name} is only able to execute OPTIMIZE with required privileges"):
|
||||
privilege_check(grant_target_name=user_name, user_name=user_name, table_type=table_type, node=node)
|
||||
|
||||
with Suite("user with privilege via role", setup=instrument_clickhouse_server_log):
|
||||
with user(node, user_name), role(node, role_name):
|
||||
|
||||
with When("I grant the role to the user"):
|
||||
node.query(f"GRANT {role_name} TO {user_name}")
|
||||
|
||||
with And(f"I run checks that {user_name} with {role_name} is only able to execute OPTIMIZE with required privileges"):
|
||||
privilege_check(grant_target_name=role_name, user_name=user_name, table_type=table_type, node=node)
|
||||
|
def privilege_check(grant_target_name, user_name, table_type, node=None):
    """Run scenarios to check the user's access with different privileges.
    """
    exitcode, message = errors.not_enough_privileges(name=f"{user_name}")

    with Scenario("user without privilege", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name, table_type):

            with When("I attempt to optimize a table without privilege"):
                node.query(f"OPTIMIZE TABLE {table_name} FINAL", settings = [("user", user_name)],
                    exitcode=exitcode, message=message)

    with Scenario("user with privilege", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name, table_type):

            with When("I grant the optimize privilege"):
                node.query(f"GRANT OPTIMIZE ON {table_name} TO {grant_target_name}")

            with Then("I attempt to optimize a table"):
                node.query(f"OPTIMIZE TABLE {table_name}", settings = [("user", user_name)])

    with Scenario("user with revoked privilege", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name, table_type):

            with When("I grant the optimize privilege"):
                node.query(f"GRANT OPTIMIZE ON {table_name} TO {grant_target_name}")

            with And("I revoke the optimize privilege"):
                node.query(f"REVOKE OPTIMIZE ON {table_name} FROM {grant_target_name}")

            with Then("I attempt to optimize a table"):
                node.query(f"OPTIMIZE TABLE {table_name}", settings = [("user", user_name)],
                    exitcode=exitcode, message=message)

    with Scenario("execute on cluster", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        try:
            with Given("I have a table on a cluster"):
                node.query(f"CREATE TABLE {table_name} ON CLUSTER sharded_cluster (d DATE, a String, b UInt8, x String, y Int8) ENGINE = MergeTree() PARTITION BY y ORDER BY d")

            with When("I grant the optimize privilege"):
                node.query(f"GRANT OPTIMIZE ON {table_name} TO {grant_target_name}")

            with Then("I attempt to optimize a table"):
                node.query(f"OPTIMIZE TABLE {table_name} ON CLUSTER sharded_cluster", settings = [("user", user_name)])

        finally:
            with Finally("I drop the table from the cluster"):
                node.query(f"DROP TABLE IF EXISTS {table_name} ON CLUSTER sharded_cluster")
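# By default only the MergeTree example below actually runs; the remaining
# engines from table_types are exercised only when stress testing is enabled
# (see the table_type != "MergeTree" check in the loop).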
@TestFeature
@Requirements(
    RQ_SRS_006_RBAC_Privileges_Optimize("1.0"),
)
@Examples("table_type", [
    (key,) for key in table_types.keys()
])
@Name("optimize")
def feature(self, node="clickhouse1", stress=None, parallel=None):
    """Check the RBAC functionality of OPTIMIZE.
    """
    self.context.node = self.context.cluster.node(node)

    if parallel is not None:
        self.context.parallel = parallel
    if stress is not None:
        self.context.stress = stress

    for example in self.examples:
        table_type, = example

        if table_type != "MergeTree" and not self.context.stress:
            continue

        with Example(str(example)):
            with Suite(test=privilege_granted_directly_or_via_role):
                privilege_granted_directly_or_via_role(table_type=table_type)
@@ -25,7 +25,7 @@ def without_privilege(self, table_type, node=None):

@TestScenario
@Requirements(
    RQ_SRS_006_RBAC_Privileges_Select_Grant("1.0"),
    RQ_SRS_006_RBAC_Grant_Privilege_Select("1.0"),
)
def user_with_privilege(self, table_type, node=None):
    """Check that user can select from a table on which they have select privilege.
@@ -47,7 +47,7 @@ def user_with_privilege(self, table_type, node=None):

@TestScenario
@Requirements(
    RQ_SRS_006_RBAC_Privileges_Select_Revoke("1.0"),
    RQ_SRS_006_RBAC_Revoke_Privilege_Select("1.0"),
)
def user_with_revoked_privilege(self, table_type, node=None):
    """Check that user is unable to select from a table after select privilege
@@ -115,7 +115,7 @@ def user_column_privileges(self, grant_columns, select_columns_pass, data_pass,

@TestScenario
@Requirements(
    RQ_SRS_006_RBAC_Privileges_Select_Grant("1.0"),
    RQ_SRS_006_RBAC_Grant_Privilege_Select("1.0"),
)
def role_with_privilege(self, table_type, node=None):
    """Check that user can select from a table after it is granted a role that
@@ -142,7 +142,7 @@ def role_with_privilege(self, table_type, node=None):

@TestScenario
@Requirements(
    RQ_SRS_006_RBAC_Privileges_Select_Revoke("1.0"),
    RQ_SRS_006_RBAC_Revoke_Privilege_Select("1.0"),
)
def role_with_revoked_privilege(self, table_type, node=None):
    """Check that user with a role that has select privilege on a table is unable
163 tests/testflows/rbac/tests/privileges/show/show_columns.py Normal file
@@ -0,0 +1,163 @@
from testflows.core import *
from testflows.asserts import error

from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def describe_with_privilege_granted_directly(self, node=None):
    """Check that user is able to execute DESCRIBE on a table if and only if
    they have SHOW COLUMNS privilege for that table granted directly.
    """
    user_name = f"user_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"):
        table_name = f"table_name_{getuid()}"

        Suite(test=describe, setup=instrument_clickhouse_server_log)(grant_target_name=user_name, user_name=user_name, table_name=table_name)

@TestSuite
def describe_with_privilege_granted_via_role(self, node=None):
    """Check that user is able to execute DESCRIBE on a table if and only if
    they have SHOW COLUMNS privilege for that table granted through a role.
    """
    user_name = f"user_{getuid()}"
    role_name = f"role_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"), role(node, f"{role_name}"):
        table_name = f"table_name_{getuid()}"

        with When("I grant the role to the user"):
            node.query(f"GRANT {role_name} TO {user_name}")

        Suite(test=describe, setup=instrument_clickhouse_server_log)(grant_target_name=role_name, user_name=user_name, table_name=table_name)

@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_DescribeTable("1.0"),
)
def describe(self, grant_target_name, user_name, table_name, node=None):
    """Check that user is able to execute DESCRIBE only when they have SHOW COLUMNS privilege.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    with table(node, table_name):

        with Scenario("DESCRIBE table without privilege"):
            with When(f"I attempt to DESCRIBE {table_name}"):
                node.query(f"DESCRIBE {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("DESCRIBE with privilege"):
            with When(f"I grant SHOW COLUMNS on the table"):
                node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")

            with Then(f"I attempt to DESCRIBE {table_name}"):
                node.query(f"DESCRIBE TABLE {table_name}", settings=[("user",user_name)])

        with Scenario("DESCRIBE with revoked privilege"):
            with When(f"I grant SHOW COLUMNS on the table"):
                node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")

            with And(f"I revoke SHOW COLUMNS on the table"):
                node.query(f"REVOKE SHOW COLUMNS ON {table_name} FROM {grant_target_name}")

            with Then(f"I attempt to DESCRIBE {table_name}"):
                node.query(f"DESCRIBE {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)
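# SHOW CREATE TABLE is gated by the same SHOW COLUMNS privilege as DESCRIBE,
# so the suites below mirror the DESCRIBE suites above.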
@TestSuite
def show_create_with_privilege_granted_directly(self, node=None):
    """Check that user is able to execute SHOW CREATE on a table if and only if
    they have SHOW COLUMNS privilege for that table granted directly.
    """
    user_name = f"user_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"):
        table_name = f"table_name_{getuid()}"

        Suite(test=show_create, setup=instrument_clickhouse_server_log)(grant_target_name=user_name, user_name=user_name, table_name=table_name)

@TestSuite
def show_create_with_privilege_granted_via_role(self, node=None):
    """Check that user is able to execute SHOW CREATE on a table if and only if
    they have SHOW COLUMNS privilege for that table granted through a role.
    """
    user_name = f"user_{getuid()}"
    role_name = f"role_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"), role(node, f"{role_name}"):
        table_name = f"table_name_{getuid()}"

        with When("I grant the role to the user"):
            node.query(f"GRANT {role_name} TO {user_name}")

        Suite(test=show_create, setup=instrument_clickhouse_server_log)(grant_target_name=role_name, user_name=user_name, table_name=table_name)

@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowCreateTable("1.0"),
)
def show_create(self, grant_target_name, user_name, table_name, node=None):
    """Check that user is able to execute SHOW CREATE on a table only when they have SHOW COLUMNS privilege.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    with table(node, table_name):

        with Scenario("SHOW CREATE without privilege"):
            with When(f"I attempt to SHOW CREATE {table_name}"):
                node.query(f"SHOW CREATE TABLE {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("SHOW CREATE with privilege"):
            with When(f"I grant SHOW COLUMNS on the table"):
                node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")

            with Then(f"I attempt to SHOW CREATE {table_name}"):
                node.query(f"SHOW CREATE TABLE {table_name}", settings=[("user",user_name)])

        with Scenario("SHOW CREATE with revoked privilege"):
            with When(f"I grant SHOW COLUMNS on the table"):
                node.query(f"GRANT SHOW COLUMNS ON {table_name} TO {grant_target_name}")

            with And(f"I revoke SHOW COLUMNS on the table"):
                node.query(f"REVOKE SHOW COLUMNS ON {table_name} FROM {grant_target_name}")

            with Then(f"I attempt to SHOW CREATE {table_name}"):
                node.query(f"SHOW CREATE TABLE {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

@TestFeature
@Name("show columns")
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowColumns("1.0")
)
def feature(self, node="clickhouse1"):
    """Check the RBAC functionality of SHOW COLUMNS.
    """
    self.context.node = self.context.cluster.node(node)

    Suite(run=describe_with_privilege_granted_directly, setup=instrument_clickhouse_server_log)
    Suite(run=describe_with_privilege_granted_via_role, setup=instrument_clickhouse_server_log)
    Suite(run=show_create_with_privilege_granted_directly, setup=instrument_clickhouse_server_log)
    Suite(run=show_create_with_privilege_granted_via_role, setup=instrument_clickhouse_server_log)
214 tests/testflows/rbac/tests/privileges/show/show_databases.py Normal file
@@ -0,0 +1,214 @@
from testflows.core import *
from testflows.asserts import error

from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def dict_privileges_granted_directly(self, node=None):
    """Check that a user is able to execute `USE` and `SHOW CREATE`
    commands on a database and see the database when they execute `SHOW DATABASES` command
    if and only if they have any privilege on that database granted directly.
    """
    user_name = f"user_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"):
        db_name = f"db_name_{getuid()}"

        Suite(run=check_privilege, flags=TE,
            examples=Examples("privilege on grant_target_name user_name db_name", [
                tuple(list(row)+[user_name,user_name,db_name]) for row in check_privilege.examples
            ], args=Args(name="check privilege={privilege}", format_name=True)))

@TestSuite
def dict_privileges_granted_via_role(self, node=None):
    """Check that a user is able to execute `USE` and `SHOW CREATE`
    commands on a database and see the database when they execute `SHOW DATABASES` command
    if and only if they have any privilege on that database granted via role.
    """
    user_name = f"user_{getuid()}"
    role_name = f"role_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"), role(node, f"{role_name}"):
        db_name = f"db_name_{getuid()}"

        with When("I grant the role to the user"):
            node.query(f"GRANT {role_name} TO {user_name}")

        Suite(run=check_privilege, flags=TE,
            examples=Examples("privilege on grant_target_name user_name db_name", [
                tuple(list(row)+[role_name,user_name,db_name]) for row in check_privilege.examples
            ], args=Args(name="check privilege={privilege}", format_name=True)))
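# check_privilege.examples refers to the (privilege, on) rows declared on the
# outline below; each row is extended with the grant target, user and database
# names so the same outline runs once per privilege.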
@TestOutline(Suite)
@Examples("privilege on",[
    ("SHOW","*.*"),
    ("SHOW DATABASES","db"),
    ("CREATE DATABASE","db"),
    ("DROP DATABASE","db"),
])
def check_privilege(self, privilege, on, grant_target_name, user_name, db_name, node=None):
    """Run checks for commands that require SHOW DATABASES privilege.
    """
    if node is None:
        node = self.context.node

    on = on.replace("db", f"{db_name}")

    Suite(test=show_db, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, db_name=db_name)
    Suite(test=use, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, db_name=db_name)
    Suite(test=show_create, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, db_name=db_name)
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowDatabases_Query("1.0"),
)
def show_db(self, privilege, on, grant_target_name, user_name, db_name, node=None):
    """Check that user is only able to see a database in SHOW DATABASES when they have a privilege on that database.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    try:
        with Given("I have a database"):
            node.query(f"CREATE DATABASE {db_name}")

        with Scenario("SHOW DATABASES without privilege"):
            with When("I check the user doesn't see the database"):
                output = node.query("SHOW DATABASES", settings = [("user", f"{user_name}")]).output
                assert output == '', error()

        with Scenario("SHOW DATABASES with privilege"):
            with When(f"I grant {privilege} on the database"):
                node.query(f"GRANT {privilege} ON {db_name}.* TO {grant_target_name}")

            with Then("I check the user does see a database"):
                node.query("SHOW DATABASES", settings = [("user", f"{user_name}")], message = f'{db_name}')

        with Scenario("SHOW DATABASES with revoked privilege"):
            with When(f"I grant {privilege} on the database"):
                node.query(f"GRANT {privilege} ON {db_name}.* TO {grant_target_name}")

            with And(f"I revoke {privilege} on the database"):
                node.query(f"REVOKE {privilege} ON {db_name}.* FROM {grant_target_name}")

            with Then("I check the user does not see a database"):
                output = node.query("SHOW DATABASES", settings = [("user", f"{user_name}")]).output
                assert output == '', error()

    finally:
        with Finally("I drop the database"):
            node.query(f"DROP DATABASE IF EXISTS {db_name}")
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_UseDatabase("1.0"),
)
def use(self, privilege, on, grant_target_name, user_name, db_name, node=None):
    """Check that user is able to execute USE on a database if and only if the user has SHOW DATABASES privilege
    on that database.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    try:
        with Given("I have a database"):
            node.query(f"CREATE DATABASE {db_name}")

        with Scenario("USE without privilege"):
            with When(f"I attempt to USE {db_name}"):
                node.query(f"USE {db_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("USE with privilege"):
            with When(f"I grant {privilege} on the database"):
                node.query(f"GRANT {privilege} ON {db_name}.* TO {grant_target_name}")

            with Then(f"I attempt to USE {db_name}"):
                node.query(f"USE {db_name}", settings=[("user",user_name)])

        with Scenario("USE with revoked privilege"):
            with When(f"I grant {privilege} on the database"):
                node.query(f"GRANT {privilege} ON {db_name}.* TO {grant_target_name}")

            with And(f"I revoke {privilege} on the database"):
                node.query(f"REVOKE {privilege} ON {db_name}.* FROM {grant_target_name}")

            with Then(f"I attempt to USE {db_name}"):
                node.query(f"USE {db_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

    finally:
        with Finally("I drop the database"):
            node.query(f"DROP DATABASE IF EXISTS {db_name}")
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowCreateDatabase("1.0"),
)
def show_create(self, privilege, on, grant_target_name, user_name, db_name, node=None):
    """Check that user is able to execute SHOW CREATE on a database if and only if the user has SHOW DATABASES privilege
    on that database.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    try:
        with Given("I have a database"):
            node.query(f"CREATE DATABASE {db_name}")

        with Scenario("SHOW CREATE without privilege"):
            with When(f"I attempt to SHOW CREATE {db_name}"):
                node.query(f"SHOW CREATE DATABASE {db_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("SHOW CREATE with privilege"):
            with When(f"I grant {privilege} on the database"):
                node.query(f"GRANT {privilege} ON {db_name}.* TO {grant_target_name}")

            with Then(f"I attempt to SHOW CREATE {db_name}"):
                node.query(f"SHOW CREATE DATABASE {db_name}", settings=[("user",user_name)])

        with Scenario("SHOW CREATE with revoked privilege"):
            with When(f"I grant {privilege} on the database"):
                node.query(f"GRANT {privilege} ON {db_name}.* TO {grant_target_name}")

            with And(f"I revoke {privilege} on the database"):
                node.query(f"REVOKE {privilege} ON {db_name}.* FROM {grant_target_name}")

            with Then(f"I attempt to SHOW CREATE {db_name}"):
                node.query(f"SHOW CREATE DATABASE {db_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

    finally:
        with Finally("I drop the database"):
            node.query(f"DROP DATABASE IF EXISTS {db_name}")
@TestFeature
@Name("show databases")
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowDatabases("1.0")
)
def feature(self, node="clickhouse1"):
    """Check the RBAC functionality of SHOW DATABASES.
    """
    self.context.node = self.context.cluster.node(node)

    Suite(run=dict_privileges_granted_directly, setup=instrument_clickhouse_server_log)
    Suite(run=dict_privileges_granted_via_role, setup=instrument_clickhouse_server_log)
215 tests/testflows/rbac/tests/privileges/show/show_dictionaries.py Normal file
@@ -0,0 +1,215 @@
from testflows.core import *
from testflows.asserts import error

from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def dict_privileges_granted_directly(self, node=None):
    """Check that a user is able to execute `SHOW CREATE` and `EXISTS`
    commands on a dictionary and see the dictionary when they execute `SHOW DICTIONARIES` command
    if and only if they have any privilege on that dictionary granted directly.
    """
    user_name = f"user_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"):
        dict_name = f"dict_name_{getuid()}"

        Suite(run=check_privilege, flags=TE,
            examples=Examples("privilege on grant_target_name user_name dict_name", [
                tuple(list(row)+[user_name,user_name,dict_name]) for row in check_privilege.examples
            ], args=Args(name="check privilege={privilege}", format_name=True)))

@TestSuite
def dict_privileges_granted_via_role(self, node=None):
    """Check that a user is able to execute `SHOW CREATE` and `EXISTS`
    commands on a dictionary and see the dictionary when they execute `SHOW DICTIONARIES` command
    if and only if they have any privilege on that dictionary granted via role.
    """
    user_name = f"user_{getuid()}"
    role_name = f"role_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"), role(node, f"{role_name}"):
        dict_name = f"dict_name_{getuid()}"

        with When("I grant the role to the user"):
            node.query(f"GRANT {role_name} TO {user_name}")

        Suite(run=check_privilege, flags=TE,
            examples=Examples("privilege on grant_target_name user_name dict_name", [
                tuple(list(row)+[role_name,user_name,dict_name]) for row in check_privilege.examples
            ], args=Args(name="check privilege={privilege}", format_name=True)))
@TestOutline(Suite)
@Examples("privilege on",[
    ("SHOW","*.*"),
    ("SHOW DICTIONARIES","dict"),
    ("CREATE DICTIONARY","dict"),
    ("DROP DICTIONARY","dict"),
])
def check_privilege(self, privilege, on, grant_target_name, user_name, dict_name, node=None):
    """Run checks for commands that require SHOW DICTIONARIES privilege.
    """
    if node is None:
        node = self.context.node

    on = on.replace("dict", f"{dict_name}")

    Suite(test=show_dict, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, dict_name=dict_name)
    Suite(test=exists, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, dict_name=dict_name)
    Suite(test=show_create, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, dict_name=dict_name)
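# The test dictionary created below is deliberately minimal: FLAT layout,
# CLICKHOUSE() source and LIFETIME(0), i.e. no automatic reloads.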
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowDictionaries_Query("1.0"),
)
def show_dict(self, privilege, on, grant_target_name, user_name, dict_name, node=None):
    """Check that user is only able to see a dictionary in SHOW DICTIONARIES
    when they have a privilege on that dictionary.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    try:
        with Given("I have a dictionary"):
            node.query(f"CREATE DICTIONARY {dict_name}(x Int32, y Int32) PRIMARY KEY x LAYOUT(FLAT()) SOURCE(CLICKHOUSE()) LIFETIME(0)")

        with Scenario("SHOW DICTIONARIES without privilege"):
            with When("I check the user doesn't see the dictionary"):
                output = node.query("SHOW DICTIONARIES", settings = [("user", f"{user_name}")]).output
                assert output == '', error()

        with Scenario("SHOW DICTIONARIES with privilege"):
            with When(f"I grant {privilege} on the dictionary"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with Then("I check the user does see a dictionary"):
                node.query("SHOW DICTIONARIES", settings = [("user", f"{user_name}")], message=f"{dict_name}")

        with Scenario("SHOW DICTIONARIES with revoked privilege"):
            with When(f"I grant {privilege} on the dictionary"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with And(f"I revoke {privilege} on the dictionary"):
                node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")

            with Then("I check the user does not see a dictionary"):
                output = node.query("SHOW DICTIONARIES", settings = [("user", f"{user_name}")]).output
                assert output == '', error()

    finally:
        with Finally("I drop the dictionary"):
            node.query(f"DROP DICTIONARY IF EXISTS {dict_name}")
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ExistsDictionary("1.0"),
)
def exists(self, privilege, on, grant_target_name, user_name, dict_name, node=None):
    """Check that user is able to execute EXISTS on a dictionary if and only if the user has SHOW DICTIONARIES privilege
    on that dictionary.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    try:
        with Given("I have a dictionary"):
            node.query(f"CREATE DICTIONARY {dict_name}(x Int32, y Int32) PRIMARY KEY x LAYOUT(FLAT()) SOURCE(CLICKHOUSE()) LIFETIME(0)")

        with Scenario("EXISTS without privilege"):
            with When(f"I check if {dict_name} EXISTS"):
                node.query(f"EXISTS {dict_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("EXISTS with privilege"):
            with When(f"I grant {privilege} on the dictionary"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with Then(f"I check if {dict_name} EXISTS"):
                node.query(f"EXISTS {dict_name}", settings=[("user",user_name)])

        with Scenario("EXISTS with revoked privilege"):
            with When(f"I grant {privilege} on the dictionary"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with And(f"I revoke {privilege} on the dictionary"):
                node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")

            with Then(f"I check if {dict_name} EXISTS"):
                node.query(f"EXISTS {dict_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

    finally:
        with Finally("I drop the dictionary"):
            node.query(f"DROP DICTIONARY IF EXISTS {dict_name}")
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowCreateDictionary("1.0"),
)
def show_create(self, privilege, on, grant_target_name, user_name, dict_name, node=None):
    """Check that user is able to execute SHOW CREATE on a dictionary if and only if the user has SHOW DICTIONARIES privilege
    on that dictionary.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    try:
        with Given("I have a dictionary"):
            node.query(f"CREATE DICTIONARY {dict_name}(x Int32, y Int32) PRIMARY KEY x LAYOUT(FLAT()) SOURCE(CLICKHOUSE()) LIFETIME(0)")

        with Scenario("SHOW CREATE without privilege"):
            with When(f"I attempt to SHOW CREATE {dict_name}"):
                node.query(f"SHOW CREATE DICTIONARY {dict_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("SHOW CREATE with privilege"):
            with When(f"I grant {privilege} on the dictionary"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with Then(f"I attempt to SHOW CREATE {dict_name}"):
                node.query(f"SHOW CREATE DICTIONARY {dict_name}", settings=[("user",user_name)])

        with Scenario("SHOW CREATE with revoked privilege"):
            with When(f"I grant {privilege} on the dictionary"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with And(f"I revoke {privilege} on the dictionary"):
                node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")

            with Then(f"I attempt to SHOW CREATE {dict_name}"):
                node.query(f"SHOW CREATE DICTIONARY {dict_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

    finally:
        with Finally("I drop the dictionary"):
            node.query(f"DROP DICTIONARY IF EXISTS {dict_name}")
@TestFeature
@Name("show dictionaries")
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowDictionaries("1.0"),
)
def feature(self, node="clickhouse1"):
    """Check the RBAC functionality of SHOW DICTIONARIES.
    """
    self.context.node = self.context.cluster.node(node)

    Suite(run=dict_privileges_granted_directly, setup=instrument_clickhouse_server_log)
    Suite(run=dict_privileges_granted_via_role, setup=instrument_clickhouse_server_log)
204 tests/testflows/rbac/tests/privileges/show/show_tables.py Executable file
@@ -0,0 +1,204 @@
from testflows.core import *
from testflows.asserts import error

from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def table_privileges_granted_directly(self, node=None):
    """Check that a user is able to execute `CHECK` and `EXISTS`
    commands on a table and see the table when they execute `SHOW TABLES` command
    if and only if they have any privilege on that table granted directly.
    """
    user_name = f"user_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"):
        table_name = f"table_name_{getuid()}"

        Suite(run=check_privilege, flags=TE,
            examples=Examples("privilege on grant_target_name user_name table_name", [
                tuple(list(row)+[user_name,user_name,table_name]) for row in check_privilege.examples
            ], args=Args(name="check privilege={privilege}", format_name=True)))

@TestSuite
def table_privileges_granted_via_role(self, node=None):
    """Check that a user is able to execute `CHECK` and `EXISTS`
    commands on a table and see the table when they execute `SHOW TABLES` command
    if and only if they have any privilege on that table granted via role.
    """
    user_name = f"user_{getuid()}"
    role_name = f"role_{getuid()}"

    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"), role(node, f"{role_name}"):
        table_name = f"table_name_{getuid()}"

        with When("I grant the role to the user"):
            node.query(f"GRANT {role_name} TO {user_name}")

        Suite(run=check_privilege, flags=TE,
            examples=Examples("privilege on grant_target_name user_name table_name", [
                tuple(list(row)+[role_name,user_name,table_name]) for row in check_privilege.examples
            ], args=Args(name="check privilege={privilege}", format_name=True)))
@TestOutline(Suite)
@Examples("privilege on",[
    ("SHOW", "*.*"),
    ("SHOW TABLES", "table"),
    ("SELECT", "table"),
    ("INSERT", "table"),
    ("ALTER", "table"),
    ("SELECT(a)", "table"),
    ("INSERT(a)", "table"),
    ("ALTER(a)", "table"),
])
def check_privilege(self, privilege, on, grant_target_name, user_name, table_name, node=None):
    """Run checks for commands that require SHOW TABLES privilege.
    """
    if node is None:
        node = self.context.node

    Suite(test=show_tables, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, table_name=table_name)
    Suite(test=exists, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, table_name=table_name)
    Suite(test=check, setup=instrument_clickhouse_server_log)(privilege=privilege, on=on, grant_target_name=grant_target_name, user_name=user_name, table_name=table_name)
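# Column-level grants such as SELECT(a) are included to check that a privilege
# on a single column is already enough for the table-level commands below.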
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowTables_Query("1.0"),
)
def show_tables(self, privilege, on, grant_target_name, user_name, table_name, node=None):
    """Check that user is only able to see a table in SHOW TABLES when they have a privilege on that table.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    on = on.replace("table", f"{table_name}")

    with table(node, table_name):

        with Scenario("SHOW TABLES without privilege"):
            with When("I check the user doesn't see the table"):
                output = node.query("SHOW TABLES", settings = [("user", f"{user_name}")]).output
                assert output == '', error()

        with Scenario("SHOW TABLES with privilege"):
            with When(f"I grant {privilege} on the table"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with Then("I check the user does see a table"):
                node.query("SHOW TABLES", settings = [("user", f"{user_name}")], message=f"{table_name}")

        with Scenario("SHOW TABLES with revoked privilege"):
            with When(f"I grant {privilege} on the table"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with And(f"I revoke {privilege} on the table"):
                node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")

            with Then("I check the user does not see a table"):
                output = node.query("SHOW TABLES", settings = [("user", f"{user_name}")]).output
                assert output == '', error()
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ExistsTable("1.0"),
)
def exists(self, privilege, on, grant_target_name, user_name, table_name, node=None):
    """Check that user is able to execute EXISTS on a table if and only if the user has SHOW TABLES privilege
    on that table.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    if on == "table":
        on = f"{table_name}"

    with table(node, table_name):
        with Scenario("EXISTS without privilege"):
            with When(f"I check if {table_name} EXISTS"):
                node.query(f"EXISTS {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("EXISTS with privilege"):
            with When(f"I grant {privilege} on the table"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with Then(f"I check if {table_name} EXISTS"):
                node.query(f"EXISTS {table_name}", settings=[("user",user_name)])

        with Scenario("EXISTS with revoked privilege"):
            with When(f"I grant {privilege} on the table"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with And(f"I revoke {privilege} on the table"):
                node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")

            with Then(f"I check if {table_name} EXISTS"):
                node.query(f"EXISTS {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)
@TestSuite
@Requirements(
    RQ_SRS_006_RBAC_Privileges_CheckTable("1.0"),
)
def check(self, privilege, on, grant_target_name, user_name, table_name, node=None):
    """Check that user is able to execute CHECK on a table if and only if the user has SHOW TABLES privilege
    on that table.
    """
    exitcode, message = errors.not_enough_privileges(name=user_name)

    if node is None:
        node = self.context.node

    if on == "table":
        on = f"{table_name}"

    with table(node, table_name):
        with Scenario("CHECK without privilege"):
            with When(f"I CHECK {table_name}"):
                node.query(f"CHECK TABLE {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)

        with Scenario("CHECK with privilege"):
            with When(f"I grant {privilege} on the table"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with Then(f"I CHECK {table_name}"):
                node.query(f"CHECK TABLE {table_name}", settings=[("user",user_name)])

        with Scenario("CHECK with revoked privilege"):
            with When(f"I grant {privilege} on the table"):
                node.query(f"GRANT {privilege} ON {on} TO {grant_target_name}")

            with And(f"I revoke {privilege} on the table"):
                node.query(f"REVOKE {privilege} ON {on} FROM {grant_target_name}")

            with Then(f"I CHECK {table_name}"):
                node.query(f"CHECK TABLE {table_name}", settings=[("user",user_name)],
                    exitcode=exitcode, message=message)
@TestFeature
@Name("show tables")
@Requirements(
    RQ_SRS_006_RBAC_Privileges_ShowTables("1.0"),
)
def feature(self, node="clickhouse1"):
    """Check the RBAC functionality of SHOW TABLES.
    """
    self.context.node = self.context.cluster.node(node)

    Suite(run=table_privileges_granted_directly, setup=instrument_clickhouse_server_log)
    Suite(run=table_privileges_granted_via_role, setup=instrument_clickhouse_server_log)
@@ -1,62 +0,0 @@
from contextlib import contextmanager

from testflows.core import *
from testflows.asserts import error

from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestScenario
@Requirements(
    RQ_SRS_006_RBAC_Table_ShowTables("1.0"),
)
def show_tables(self, node=None):
    """Check that a user is able to see a table in `SHOW TABLES` if and only if the user has privilege on that table,
    either granted directly or through a role.
    """
    user_name = f"user_{getuid()}"
    role_name = f"role_{getuid()}"
    if node is None:
        node = self.context.node

    with user(node, f"{user_name}"):
        Scenario(test=show_tables_general, flags=TE,
            name="create with create view and select privilege granted directly")(grant_target_name=user_name, user_name=user_name)

    with user(node, f"{user_name}"), role(node, f"{role_name}"):
        with When("I grant the role to the user"):
            node.query(f"GRANT {role_name} TO {user_name}")
        Scenario(test=show_tables_general, flags=TE,
            name="create with create view and select privilege granted through a role")(grant_target_name=role_name, user_name=user_name)

@TestScenario
def show_tables_general(self, grant_target_name, user_name, node=None):
    table0_name = f"table0_{getuid()}"
    if node is None:
        node = self.context.node
    try:
        with Given("I have a table"):
            node.query(f"DROP TABLE IF EXISTS {table0_name}")
            node.query(f"CREATE TABLE {table0_name} (a String, b Int8, d Date) Engine = Memory")

        with Then("I check user does not see any tables"):
            output = node.query("SHOW TABLES", settings = [("user", f"{user_name}")]).output
            assert output == '', error()

        with When("I grant select privilege on the table"):
            node.query(f"GRANT SELECT(a) ON {table0_name} TO {grant_target_name}")
        with Then("I check the user does see a table"):
            output = node.query("SHOW TABLES", settings = [("user", f"{user_name}")]).output
            assert output == f'{table0_name}', error()

    finally:
        with Finally("I drop the table"):
            node.query(f"DROP TABLE IF EXISTS {table0_name}")

@TestFeature
@Name("show tables")
def feature(self, node="clickhouse1"):
    self.context.node = self.context.cluster.node(node)

    Scenario(run=show_tables, setup=instrument_clickhouse_server_log, flags=TE)
107 tests/testflows/rbac/tests/privileges/truncate.py Normal file
@@ -0,0 +1,107 @@
from rbac.requirements import *
from rbac.helper.common import *
import rbac.helper.errors as errors

@TestSuite
def privilege_granted_directly_or_via_role(self, table_type, node=None):
    """Check that user is only able to execute TRUNCATE when they have required privilege, either directly or via role.
    """
    role_name = f"role_{getuid()}"
    user_name = f"user_{getuid()}"

    if node is None:
        node = self.context.node

    with Suite("user with direct privilege", setup=instrument_clickhouse_server_log):
        with user(node, user_name):

            with When(f"I run checks that {user_name} is only able to execute TRUNCATE with required privileges"):
                privilege_check(grant_target_name=user_name, user_name=user_name, table_type=table_type, node=node)

    with Suite("user with privilege via role", setup=instrument_clickhouse_server_log):
        with user(node, user_name), role(node, role_name):

            with When("I grant the role to the user"):
                node.query(f"GRANT {role_name} TO {user_name}")

            with And(f"I run checks that {user_name} with {role_name} is only able to execute TRUNCATE with required privileges"):
                privilege_check(grant_target_name=role_name, user_name=user_name, table_type=table_type, node=node)
def privilege_check(grant_target_name, user_name, table_type, node=None):
    """Run scenarios to check the user's access with different privileges.
    """
    exitcode, message = errors.not_enough_privileges(name=f"{user_name}")

    with Scenario("user without privilege", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name, table_type):

            with When("I attempt to truncate a table without privilege"):
                node.query(f"TRUNCATE TABLE {table_name}", settings = [("user", user_name)],
                    exitcode=exitcode, message=message)

    with Scenario("user with privilege", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name, table_type):

            with When("I grant the truncate privilege"):
                node.query(f"GRANT TRUNCATE ON {table_name} TO {grant_target_name}")

            with Then("I attempt to truncate a table"):
                node.query(f"TRUNCATE TABLE {table_name}", settings = [("user", user_name)])

    with Scenario("user with revoked privilege", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name, table_type):

            with When("I grant the truncate privilege"):
                node.query(f"GRANT TRUNCATE ON {table_name} TO {grant_target_name}")

            with And("I revoke the truncate privilege"):
                node.query(f"REVOKE TRUNCATE ON {table_name} FROM {grant_target_name}")

            with Then("I attempt to truncate a table"):
                node.query(f"TRUNCATE TABLE {table_name}", settings = [("user", user_name)],
                    exitcode=exitcode, message=message)

    with Scenario("execute on cluster", setup=instrument_clickhouse_server_log):
        table_name = f"merge_tree_{getuid()}"

        with table(node, table_name, table_type):

            with When("I grant the truncate privilege"):
                node.query(f"GRANT TRUNCATE ON {table_name} TO {grant_target_name}")

            with Then("I attempt to truncate a table"):
                node.query(f"TRUNCATE TABLE IF EXISTS {table_name} ON CLUSTER sharded_cluster", settings = [("user", user_name)])
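# The "execute on cluster" scenario uses TRUNCATE TABLE IF EXISTS, presumably
# so the statement does not fail on cluster nodes where the locally created
# test table does not exist.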
@TestFeature
@Requirements(
    RQ_SRS_006_RBAC_Privileges_Truncate("1.0"),
)
@Examples("table_type", [
    (key,) for key in table_types.keys()
])
@Name("truncate")
def feature(self, node="clickhouse1", stress=None, parallel=None):
    """Check the RBAC functionality of TRUNCATE.
    """
    self.context.node = self.context.cluster.node(node)

    if parallel is not None:
        self.context.parallel = parallel
    if stress is not None:
        self.context.stress = stress

    for example in self.examples:
        table_type, = example

        if table_type != "MergeTree" and not self.context.stress:
            continue

        with Example(str(example)):
            with Suite(test=privilege_granted_directly_or_via_role):
                privilege_granted_directly_or_via_role(table_type=table_type)