Merge branch 'master' into optimize_parquet_reader

Antonio Andelic 2023-02-03 10:24:18 +01:00 committed by GitHub
commit e507721557
73 changed files with 367 additions and 143 deletions


@ -2,10 +2,10 @@
slug: /en/engines/table-engines/mergetree-family/invertedindexes
sidebar_label: Inverted Indexes
description: Quickly find search terms in text.
keywords: [full-text search, text search]
keywords: [full-text search, text search, inverted, index, indices]
---
# Inverted indexes [experimental]
# Full-text Search using Inverted Indexes [experimental]
Inverted indexes are an experimental type of [secondary indexes](/docs/en/engines/table-engines/mergetree-family/mergetree.md/#available-types-of-indices) which provide fast text search
capabilities for [String](/docs/en/sql-reference/data-types/string.md) or [FixedString](/docs/en/sql-reference/data-types/fixedstring.md)
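Since the feature is experimental, it must be enabled before use. As a quick sketch (assuming the `allow_experimental_inverted_index` setting and a hypothetical table `tab`), an inverted index can be defined at table creation like this:
```sql
SET allow_experimental_inverted_index = true;

CREATE TABLE tab
(
    `key` UInt64,
    `str` String,
    INDEX inv_idx(str) TYPE inverted(0) GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY key;
```
Here `inverted(0)` tokenizes the column by splitting on non-alphanumeric characters; a value like `inverted(2)` (as in the `ALTER` example below) indexes N-grams of that size instead.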
@ -50,7 +50,7 @@ Being a type of skipping index, inverted indexes can be dropped or added to a co
``` sql
ALTER TABLE tab DROP INDEX inv_idx;
ALTER TABLE tab ADD INDEX inv_idx(s) TYPE inverted(2) GRANULARITY 1;
ALTER TABLE tab ADD INDEX inv_idx(s) TYPE inverted(2);
```
To use the index, no special functions or syntax are required. Typical string search predicates automatically leverage the index. As
@ -74,7 +74,106 @@ controls the amount of data consumed from the underlying column before a new index segment is created. A higher value raises the
intermediate memory consumption for index construction but also improves lookup performance, since fewer segments need to be checked on
average to evaluate a query.
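As noted above, no special syntax is needed; ordinary predicates such as the following (sketched against the hypothetical table `tab`) can be accelerated by the index:
```sql
SELECT * FROM tab WHERE str == 'Hello';
SELECT * FROM tab WHERE str IN ('Hello', 'World');
SELECT * FROM tab WHERE str LIKE '%Hello%';
SELECT * FROM tab WHERE multiSearchAny(str, ['Hello', 'World']);
SELECT * FROM tab WHERE hasToken(str, 'Hello');
```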
## Full-text search of the Hacker News dataset
Let's look at the performance improvements of inverted indexes on a large dataset with lots of text. We will use 28.7M rows of comments on the popular Hacker News website. Here is the table without an inverted index:
```sql
CREATE TABLE hackernews (
id UInt64,
deleted UInt8,
type String,
author String,
timestamp DateTime,
comment String,
dead UInt8,
parent UInt64,
poll UInt64,
children Array(UInt32),
url String,
score UInt32,
title String,
parts Array(UInt32),
descendants UInt32
)
ENGINE = MergeTree
ORDER BY (type, author);
```
The 28.7M rows are in a Parquet file in S3; let's insert them into the `hackernews` table:
```sql
INSERT INTO hackernews
SELECT * FROM s3Cluster(
'default',
'https://datasets-documentation.s3.eu-west-3.amazonaws.com/hackernews/hacknernews.parquet',
'Parquet',
'
id UInt64,
deleted UInt8,
type String,
by String,
time DateTime,
text String,
dead UInt8,
parent UInt64,
poll UInt64,
kids Array(UInt32),
url String,
score UInt32,
title String,
parts Array(UInt32),
descendants UInt32');
```
Consider the following simple search for the term `ClickHouse` (in any mix of upper and lower case) in the `comment` column:
```sql
SELECT count()
FROM hackernews
WHERE hasToken(lower(comment), 'clickhouse');
```
Notice it takes 3 seconds to execute the query:
```response
┌─count()─┐
│ 1145 │
└─────────┘
1 row in set. Elapsed: 3.001 sec. Processed 28.74 million rows, 9.75 GB (9.58 million rows/s., 3.25 GB/s.)
```
We will use `ALTER TABLE` to add an inverted index on the lowercased `comment` column, then materialize it (which can take a while; wait for the materialization to finish):
```sql
ALTER TABLE hackernews
ADD INDEX comment_lowercase(lower(comment)) TYPE inverted;
ALTER TABLE hackernews MATERIALIZE INDEX comment_lowercase;
```
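`MATERIALIZE INDEX` runs asynchronously as a mutation, so one way to check whether it has finished is to query the standard `system.mutations` table (a sketch):
```sql
SELECT command, is_done
FROM system.mutations
WHERE database = currentDatabase() AND table = 'hackernews';
```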
We run the same query...
```sql
SELECT count()
FROM hackernews
WHERE hasToken(lower(comment), 'clickhouse')
```
...and notice the query executes 4x faster:
```response
┌─count()─┐
│ 1145 │
└─────────┘
1 row in set. Elapsed: 0.747 sec. Processed 4.49 million rows, 1.77 GB (6.01 million rows/s., 2.37 GB/s.)
```
:::note
Unlike other secondary indexes, inverted indexes (for now) map to row numbers (row ids) instead of granule ids. The reason for this design
is performance. In practice, users often search for multiple terms at once. For example, the filter predicate `WHERE s LIKE '%little%' OR s LIKE
'%big%'` can be evaluated directly using an inverted index by forming the union of the row id lists for the terms "little" and "big". This also
means that the parameter `GRANULARITY` supplied at index creation has no meaning (it may be removed from the syntax in the future).
:::
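To see the effect described in the note, a two-term search against the `hackernews` table (the terms here are arbitrary examples) is answered by uniting the row id lists for both terms:
```sql
SELECT count()
FROM hackernews
WHERE hasToken(lower(comment), 'postgres') OR hasToken(lower(comment), 'mysql');
```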


@ -283,7 +283,7 @@ SYSTEM START REPLICATION QUEUES [[db.]replicated_merge_tree_family_table_name]
Waits until a `ReplicatedMergeTree` table is synced with the other replicas in a cluster. Runs until `receive_timeout` if fetches are currently disabled for the table.
``` sql
SYSTEM SYNC REPLICA [db.]replicated_merge_tree_family_table_name
SYSTEM SYNC REPLICA [ON CLUSTER cluster_name] [db.]replicated_merge_tree_family_table_name
```
After running this statement, the `[db.]replicated_merge_tree_family_table_name` table fetches commands from the common replicated log into its own replication queue, and the query then waits until the replica has processed all of the fetched commands.
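For example, a hypothetical replicated table could be synced on every host of a cluster named `my_cluster` using the new `ON CLUSTER` form (a sketch):
``` sql
SYSTEM SYNC REPLICA ON CLUSTER my_cluster db.replicated_table
```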


@ -2,11 +2,12 @@
slug: /en/sql-reference/table-functions/s3
sidebar_position: 45
sidebar_label: s3
keywords: [s3, gcs, bucket]
---
# s3 Table Function
Provides table-like interface to select/insert files in [Amazon S3](https://aws.amazon.com/s3/). This table function is similar to [hdfs](../../sql-reference/table-functions/hdfs.md), but provides S3-specific features.
Provides a table-like interface to select/insert files in [Amazon S3](https://aws.amazon.com/s3/) and [Google Cloud Storage](https://cloud.google.com/storage/). This table function is similar to the [hdfs function](../../sql-reference/table-functions/hdfs.md), but provides S3-specific features.
**Syntax**
@ -14,9 +15,24 @@ Provides table-like interface to select/insert files in [Amazon S3](https://aws.
s3(path [,aws_access_key_id, aws_secret_access_key] [,format] [,structure] [,compression])
```
:::tip GCS
The S3 Table Function integrates with Google Cloud Storage by using the GCS XML API and HMAC keys. See the [Google interoperability docs](https://cloud.google.com/storage/docs/interoperability) for more details about the endpoint and HMAC.
For GCS, substitute your HMAC key and HMAC secret where you see `aws_access_key_id` and `aws_secret_access_key`.
:::
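As an illustration, a GCS read might look like the following sketch; the bucket, object, and HMAC credentials are placeholders:
``` sql
SELECT *
FROM s3(
    'https://storage.googleapis.com/my-gcs-bucket/data/file.csv',
    'GCS_HMAC_KEY',
    'GCS_HMAC_SECRET',
    'CSV',
    'column1 String, column2 UInt32'
);
```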
**Arguments**
- `path` — Bucket URL with a path to the file. Supports the following wildcards in read-only mode: `*`, `?`, `{abc,def}` and `{N..M}`, where `N` and `M` are numbers and `'abc'`, `'def'` are strings. For more information see [here](../../engines/table-engines/integrations/s3.md#wildcards-in-path).
:::note GCS
The GCS path uses this format because the endpoint for the Google XML API differs from the JSON API:
```
https://storage.googleapis.com/<bucket>/<folder>/<filename(s)>
```
and not ~~https://storage.cloud.google.com~~.
:::
- `format` — The [format](../../interfaces/formats.md#formats) of the file.
- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
- `compression` — Optional parameter. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. By default, compression is autodetected from the file extension.
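Putting the arguments together, reading a gzip-compressed CSV without credentials might look like this sketch (the URL and structure are placeholders):
``` sql
SELECT *
FROM s3(
    'https://my-bucket.s3.amazonaws.com/logs/events.csv.gz',
    'CSV',
    'ts DateTime, message String',
    'gzip'
);
```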


@ -11,11 +11,26 @@ mod ffi {
struct Item {
text: String,
orig_text: String,
}
impl Item {
fn new(text: String) -> Self {
return Self{
// Text that will be shown should not contain new lines, since in that case skim may
// leave some symbols on the screen, and this looks odd.
text: text.replace("\n", " "),
orig_text: text,
};
}
}
impl SkimItem for Item {
fn text(&self) -> Cow<str> {
return Cow::Borrowed(&self.text);
}
fn output(&self) -> Cow<str> {
return Cow::Borrowed(&self.orig_text);
}
}
fn skim(prefix: &CxxString, words: &CxxVector<CxxString>) -> Result<String, String> {
@ -34,7 +49,7 @@ fn skim(prefix: &CxxString, words: &CxxVector<CxxString>) -> Result<String, Stri
let (tx, rx): (SkimItemSender, SkimItemReceiver) = unbounded();
for word in words {
tx.send(Arc::new(Item{ text: word.to_string() })).unwrap();
tx.send(Arc::new(Item::new(word.to_string()))).unwrap();
}
// so that skim could know when to stop waiting for more items.
drop(tx);


@ -417,6 +417,10 @@ ReplxxLineReader::ReplxxLineReader(
{
rx.print("skim failed: %s (consider using Ctrl-T for a regular non-fuzzy reverse search)\n", e.what());
}
/// REPAINT first to avoid the prompt being overlapped by the query
rx.invoke(Replxx::ACTION::REPAINT, code);
if (!new_query.empty())
rx.set_state(replxx::Replxx::State(new_query.c_str(), static_cast<int>(new_query.size())));


@ -418,14 +418,4 @@ void InterpreterGrantQuery::updateRoleFromQuery(Role & role, const ASTGrantQuery
updateFromQuery(role, query);
}
void InterpreterGrantQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr) const
{
auto & query = query_ptr->as<ASTGrantQuery &>();
if (query.is_revoke)
elem.query_kind = "Revoke";
else
elem.query_kind = "Grant";
}
}


@ -21,7 +21,6 @@ public:
static void updateUserFromQuery(User & user, const ASTGrantQuery & query);
static void updateRoleFromQuery(Role & role, const ASTGrantQuery & query);
void extendQueryLogElemImpl(QueryLogElement &, const ASTPtr &, ContextPtr) const override;
private:
ASTPtr query_ptr;


@ -186,7 +186,10 @@ QueryCache::Writer::Writer(std::mutex & mutex_, Cache & cache_, const Key & key_
, min_query_runtime(min_query_runtime_)
{
if (auto it = cache.find(key); it != cache.end() && !is_stale(it->first))
{
skip_insert = true; /// Key is already in the cache and has not expired yet --> don't replace it
LOG_TRACE(&Poco::Logger::get("QueryResultCache"), "Skipped insert (non-stale entry found), query: {}", key.queryStringFromAst());
}
}
void QueryCache::Writer::buffer(Chunk && partial_query_result)
@ -205,6 +208,7 @@ void QueryCache::Writer::buffer(Chunk && partial_query_result)
{
chunks->clear(); /// eagerly free some space
skip_insert = true;
LOG_TRACE(&Poco::Logger::get("QueryResultCache"), "Skipped insert (query result too big), new_entry_size_in_bytes: {} ({}), new_entry_size_in_rows: {} ({}), query: {}", new_entry_size_in_bytes, max_entry_size_in_bytes, new_entry_size_in_rows, max_entry_size_in_rows, key.queryStringFromAst());
}
}
@ -214,12 +218,19 @@ void QueryCache::Writer::finalizeWrite()
return;
if (std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now() - query_start_time) < min_query_runtime)
{
LOG_TRACE(&Poco::Logger::get("QueryResultCache"), "Skipped insert (query not expensive enough), query: {}", key.queryStringFromAst());
return;
}
std::lock_guard lock(mutex);
if (auto it = cache.find(key); it != cache.end() && !is_stale(it->first))
return; /// same check as in ctor because a parallel Writer could have inserted the current key in the meantime
{
/// same check as in ctor because a parallel Writer could have inserted the current key in the meantime
LOG_TRACE(&Poco::Logger::get("QueryResultCache"), "Skipped insert (non-stale entry found), query: {}", key.queryStringFromAst());
return;
}
auto sufficient_space_in_cache = [this]() TSA_REQUIRES(mutex)
{
@ -242,9 +253,11 @@ void QueryCache::Writer::finalizeWrite()
LOG_TRACE(&Poco::Logger::get("QueryCache"), "Removed {} stale entries", removed_items);
}
/// Insert or replace if enough space
if (sufficient_space_in_cache())
if (!sufficient_space_in_cache())
LOG_TRACE(&Poco::Logger::get("QueryResultCache"), "Skipped insert (cache has insufficient space), query: {}", key.queryStringFromAst());
else
{
/// Insert or replace key
cache_size_in_bytes += query_result.sizeInBytes();
if (auto it = cache.find(key); it != cache.end())
cache_size_in_bytes -= it->second.sizeInBytes(); // key replacement


@ -11,6 +11,7 @@ namespace ErrorCodes
extern const int NOT_IMPLEMENTED;
}
void IInterpreter::extendQueryLogElem(
QueryLogElement & elem, const ASTPtr & ast, ContextPtr context, const String & query_database, const String & query_table) const
{


@ -9,18 +9,12 @@
#include <Interpreters/ActionsDAG.h>
#include <Interpreters/ExpressionAnalyzer.h>
#include <Interpreters/TreeRewriter.h>
#include <Processors/QueryPlan/IQueryPlanStep.h>
#include <Processors/QueryPlan/FilterStep.h>
namespace DB
{
void IInterpreterUnionOrSelectQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr /*context_*/) const
{
elem.query_kind = "Select";
}
QueryPipelineBuilder IInterpreterUnionOrSelectQuery::buildQueryPipeline()
{
QueryPlan query_plan;


@ -44,8 +44,6 @@ public:
size_t getMaxStreams() const { return max_streams; }
void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr context) const override;
/// Returns whether the query uses the view source from the Context
/// The view source is a virtual storage that currently only materialized views use to replace the source table
/// with the incoming block only


@ -446,7 +446,6 @@ void InterpreterAlterQuery::extendQueryLogElemImpl(QueryLogElement & elem, const
{
const auto & alter = ast->as<const ASTAlterQuery &>();
elem.query_kind = "Alter";
if (alter.command_list != nullptr && alter.alter_object != ASTAlterQuery::AlterObjectType::DATABASE)
{
// Alter queries already have their target table inserted into `elem`.


@ -1703,7 +1703,6 @@ AccessRightsElements InterpreterCreateQuery::getRequiredAccess() const
void InterpreterCreateQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const
{
elem.query_kind = "Create";
if (!as_table_saved.empty())
{
String database = backQuoteIfNeed(as_database_saved.empty() ? getContext()->getCurrentDatabase() : as_database_saved);


@ -432,11 +432,6 @@ AccessRightsElements InterpreterDropQuery::getRequiredAccessForDDLOnCluster() co
return required_access;
}
void InterpreterDropQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const
{
elem.query_kind = "Drop";
}
void InterpreterDropQuery::executeDropQuery(ASTDropQuery::Kind kind, ContextPtr global_context, ContextPtr current_context, const StorageID & target_table_id, bool sync)
{
if (DatabaseCatalog::instance().tryGetTable(target_table_id, current_context))


@ -24,8 +24,6 @@ public:
/// Drop table or database.
BlockIO execute() override;
void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const override;
static void executeDropQuery(ASTDropQuery::Kind kind, ContextPtr global_context, ContextPtr current_context, const StorageID & target_table_id, bool sync);
bool supportsTransactions() const override;


@ -560,10 +560,8 @@ StorageID InterpreterInsertQuery::getDatabaseTable() const
return query_ptr->as<ASTInsertQuery &>().table_id;
}
void InterpreterInsertQuery::extendQueryLogElemImpl(QueryLogElement & elem, ContextPtr context_)
{
elem.query_kind = "Insert";
const auto & insert_table = context_->getInsertionTable();
if (!insert_table.empty())
{


@ -44,6 +44,7 @@ public:
std::atomic_uint64_t * elapsed_counter_ms = nullptr);
static void extendQueryLogElemImpl(QueryLogElement & elem, ContextPtr context_);
void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr context_) const override;
StoragePtr getTable(ASTInsertQuery & query);


@ -197,7 +197,6 @@ AccessRightsElements InterpreterRenameQuery::getRequiredAccess(InterpreterRename
void InterpreterRenameQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr) const
{
elem.query_kind = "Rename";
const auto & rename = ast->as<const ASTRenameQuery &>();
for (const auto & element : rename.elements)
{


@ -55,6 +55,7 @@ class InterpreterRenameQuery : public IInterpreter, WithContext
public:
InterpreterRenameQuery(const ASTPtr & query_ptr_, ContextPtr context_);
BlockIO execute() override;
void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & ast, ContextPtr) const override;
bool renamedInsteadOfExchange() const { return renamed_instead_of_exchange; }


@ -193,8 +193,6 @@ void InterpreterSelectIntersectExceptQuery::ignoreWithTotals()
void InterpreterSelectIntersectExceptQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr /*context_*/) const
{
elem.query_kind = "Select";
for (const auto & interpreter : nested_interpreters)
{
if (const auto * select_interpreter = dynamic_cast<const InterpreterSelectQuery *>(interpreter.get()))


@ -1925,8 +1925,6 @@ RowPolicyFilterPtr InterpreterSelectQuery::getRowPolicyFilter() const
void InterpreterSelectQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr /*context_*/) const
{
elem.query_kind = "Select";
for (const auto & row_policy : row_policy_filter->policies)
{
auto name = row_policy->getFullName().toString();


@ -135,11 +135,6 @@ void InterpreterSelectQueryAnalyzer::addStorageLimits(const StorageLimitsList &
planner.addStorageLimits(storage_limits);
}
void InterpreterSelectQueryAnalyzer::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const
{
elem.query_kind = "Select";
}
void InterpreterSelectQueryAnalyzer::setMergeTreeReadTaskCallbackAndClientInfo(MergeTreeReadTaskCallback && callback)
{
context->getClientInfo().collaborate_with_initiator = true;


@ -46,8 +46,6 @@ public:
bool ignoreQuota() const override { return select_query_options.ignore_quota; }
void extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr &, ContextPtr) const override;
/// Set merge tree read task callback in context and set collaborate_with_initiator in client info
void setMergeTreeReadTaskCallbackAndClientInfo(MergeTreeReadTaskCallback && callback);


@ -398,8 +398,6 @@ void InterpreterSelectWithUnionQuery::ignoreWithTotals()
void InterpreterSelectWithUnionQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr /*context_*/) const
{
elem.query_kind = "Select";
for (const auto & interpreter : nested_interpreters)
{
if (const auto * select_interpreter = dynamic_cast<const InterpreterSelectQuery *>(interpreter.get()))


@ -1155,9 +1155,4 @@ AccessRightsElements InterpreterSystemQuery::getRequiredAccessForDDLOnCluster()
return required_access;
}
void InterpreterSystemQuery::extendQueryLogElemImpl(QueryLogElement & elem, const ASTPtr & /*ast*/, ContextPtr) const
{
elem.query_kind = "System";
}
}


@ -73,8 +73,6 @@ private:
AccessRightsElements getRequiredAccessForDDLOnCluster() const;
void startStopAction(StorageActionBlockType action_type, bool start);
void extendQueryLogElemImpl(QueryLogElement &, const ASTPtr &, ContextPtr) const override;
};


@ -3,17 +3,13 @@
#include <Interpreters/Context.h>
#include <Interpreters/DatabaseAndTableWithAlias.h>
#include <Parsers/ASTSelectWithUnionQuery.h>
#include <Parsers/ASTSelectIntersectExceptQuery.h>
#include <Parsers/ASTSelectQuery.h>
#include <Parsers/ASTKillQueryQuery.h>
#include <Parsers/IAST.h>
#include <Parsers/queryNormalization.h>
#include <Parsers/toOneLineQuery.h>
#include <Processors/Executors/PipelineExecutor.h>
#include <Common/typeid_cast.h>
#include <Common/Exception.h>
#include <Common/CurrentThread.h>
#include <IO/WriteHelpers.h>
#include <Common/logger_useful.h>
#include <chrono>
@ -526,6 +522,7 @@ QueryStatusInfo QueryStatus::getInfo(bool get_thread_list, bool get_profile_even
QueryStatusInfo res{};
res.query = query;
res.query_kind = query_kind;
res.client_info = client_info;
res.elapsed_microseconds = watch.elapsedMicroseconds();
res.is_cancelled = is_killed.load(std::memory_order_relaxed);


@ -52,6 +52,7 @@ class ProcessListEntry;
struct QueryStatusInfo
{
String query;
IAST::QueryKind query_kind{};
UInt64 elapsed_microseconds;
size_t read_rows;
size_t read_bytes;
@ -134,7 +135,8 @@ protected:
OvercommitTracker * global_overcommit_tracker = nullptr;
IAST::QueryKind query_kind;
/// This is used to control the maximum number of SELECT or INSERT queries.
IAST::QueryKind query_kind{};
/// This field is unused in this class, but it
/// increments/decrements metric in constructor/destructor.
@ -176,11 +178,6 @@ public:
return &thread_group->memory_tracker;
}
IAST::QueryKind getQueryKind() const
{
return query_kind;
}
bool updateProgressIn(const Progress & value)
{
CurrentThread::updateProgressIn(value);


@ -166,7 +166,9 @@ void QueryLogElement::appendToBlock(MutableColumns & columns) const
columns[i++]->insertData(query.data(), query.size());
columns[i++]->insertData(formatted_query.data(), formatted_query.size());
columns[i++]->insert(normalized_query_hash);
columns[i++]->insertData(query_kind.data(), query_kind.size());
const std::string_view query_kind_str = magic_enum::enum_name(query_kind);
columns[i++]->insertData(query_kind_str.data(), query_kind_str.size());
{
auto & column_databases = typeid_cast<ColumnArray &>(*columns[i++]);


@ -6,6 +6,7 @@
#include <Interpreters/SystemLog.h>
#include <Interpreters/ClientInfo.h>
#include <Interpreters/TransactionVersionMetadata.h>
#include <Parsers/IAST.h>
namespace ProfileEvents
@ -58,7 +59,7 @@ struct QueryLogElement
String formatted_query;
UInt64 normalized_query_hash{};
String query_kind;
IAST::QueryKind query_kind{};
std::set<String> query_databases;
std::set<String> query_tables;
std::set<String> query_columns;


@ -235,10 +235,10 @@ static void onExceptionBeforeStart(
elem.query = query_for_logging;
elem.normalized_query_hash = normalizedQueryHash<false>(query_for_logging);
// Try log query_kind if ast is valid
// Log query_kind if ast is valid
if (ast)
{
elem.query_kind = magic_enum::enum_name(ast->getQueryKind());
elem.query_kind = ast->getQueryKind();
if (settings.log_formatted_queries)
elem.formatted_query = queryToString(ast);
}
@ -807,6 +807,7 @@ static std::tuple<ASTPtr, BlockIO> executeQueryImpl(
if (settings.log_formatted_queries)
elem.formatted_query = queryToString(ast);
elem.normalized_query_hash = normalizedQueryHash<false>(query_for_logging);
elem.query_kind = ast->getQueryKind();
elem.client_info = client_info;


@ -23,6 +23,8 @@ public:
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTAlterNamedCollectionQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Alter; }
};
}


@ -289,4 +289,9 @@ ASTPtr ASTBackupQuery::getRewrittenASTWithoutOnCluster(const WithoutOnClusterAST
return new_query;
}
IAST::QueryKind ASTBackupQuery::getQueryKind() const
{
return kind == Kind::BACKUP ? QueryKind::Backup : QueryKind::Restore;
}
}


@ -93,5 +93,6 @@ public:
ASTPtr clone() const override;
void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override;
QueryKind getQueryKind() const override;
};
}


@ -23,6 +23,8 @@ struct ASTCheckQuery : public ASTQueryWithTableAndOutput
return res;
}
QueryKind getQueryKind() const override { return QueryKind::Check; }
protected:
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override
{


@ -25,6 +25,8 @@ public:
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTCreateFunctionQuery>(clone()); }
String getFunctionName() const;
QueryKind getQueryKind() const override { return QueryKind::Create; }
};
}


@ -22,6 +22,8 @@ public:
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTCreateNamedCollectionQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Create; }
std::string getCollectionName() const;
};


@ -12,6 +12,8 @@ class ASTDeleteQuery : public ASTQueryWithTableAndOutput, public ASTQueryWithOnC
public:
String getID(char delim) const final;
ASTPtr clone() const final;
QueryKind getQueryKind() const override { return QueryKind::Delete; }
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams & params) const override
{
return removeOnCluster<ASTDeleteQuery>(clone(), params.default_database);


@ -21,6 +21,8 @@ public:
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTDropFunctionQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Drop; }
};
}


@ -9,7 +9,7 @@ namespace DB
/** Get the text that identifies this element. */
String ASTDropIndexQuery::getID(char delim) const
{
return "CreateIndexQuery" + (delim + getDatabase()) + delim + getTable();
return "DropIndexQuery" + (delim + getDatabase()) + delim + getTable();
}
ASTPtr ASTDropIndexQuery::clone() const


@ -20,6 +20,8 @@ public:
void formatImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTDropNamedCollectionQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Drop; }
};
}


@ -109,6 +109,8 @@ public:
const ASTPtr & getTableFunction() const { return table_function; }
const ASTPtr & getTableOverride() const { return table_override; }
QueryKind getQueryKind() const override { return QueryKind::Explain; }
protected:
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override
{


@ -4,6 +4,7 @@
#include <Parsers/ASTFunction.h>
#include <Parsers/IAST.h>
namespace DB
{
@ -38,6 +39,8 @@ public:
from->formatImpl(settings, state, stacked);
external_ddl->formatImpl(settings, state, stacked);
}
QueryKind getQueryKind() const override { return QueryKind::ExternalDDL; }
};
}


@ -42,6 +42,8 @@ public:
{
return removeOnCluster<ASTKillQueryQuery>(clone());
}
QueryKind getQueryKind() const override { return QueryKind::KillQuery; }
};
}


@ -54,6 +54,8 @@ public:
{
return removeOnCluster<ASTOptimizeQuery>(clone(), params.default_database);
}
QueryKind getQueryKind() const override { return QueryKind::Optimize; }
};
}


@ -49,6 +49,8 @@ public:
return res;
}
QueryKind getQueryKind() const override { return QueryKind::Show; }
protected:
void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override
{


@ -25,7 +25,7 @@ public:
void formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
QueryKind getQueryKind() const override { return QueryKind::SelectIntersectExcept; }
QueryKind getQueryKind() const override { return QueryKind::Select; }
ASTs getListOfSelects() const;


@ -35,6 +35,8 @@ public:
void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override;
void updateTreeHashImpl(SipHash & hash_state) const override;
QueryKind getQueryKind() const override { return QueryKind::Set; }
};
}


@ -39,6 +39,8 @@ public:
ASTPtr clone() const override;
QueryKind getQueryKind() const override { return QueryKind::Show; }
protected:
void formatLike(const FormatSettings & settings) const;
void formatLimit(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const;


@ -24,6 +24,21 @@ void ASTTransactionControl::formatImpl(const FormatSettings & format /*state*/,
}
}
IAST::QueryKind ASTTransactionControl::getQueryKind() const
{
switch (action)
{
case BEGIN:
return QueryKind::Begin;
case COMMIT:
return QueryKind::Commit;
case ROLLBACK:
return QueryKind::Rollback;
case SET_SNAPSHOT:
return QueryKind::SetTransactionSnapshot;
}
}
void ASTTransactionControl::updateTreeHashImpl(SipHash & hash_state) const
{
hash_state.update(action);


@ -27,6 +27,8 @@ public:
void formatImpl(const FormatSettings & format, FormatState & /*state*/, FormatStateStacked /*frame*/) const override;
void updateTreeHashImpl(SipHash & hash_state) const override;
QueryKind getQueryKind() const override;
};
}


@ -21,6 +21,8 @@ public:
ASTPtr clone() const override { return std::make_shared<ASTUseQuery>(*this); }
QueryKind getQueryKind() const override { return QueryKind::Use; }
protected:
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override
{


@ -37,6 +37,8 @@ public:
return res;
}
QueryKind getQueryKind() const override { return QueryKind::Create; }
protected:
void formatQueryImpl(const FormatSettings & s, FormatState & state, FormatStateStacked frame) const override
{


@ -55,5 +55,7 @@ public:
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
void replaceCurrentUserTag(const String & current_user_name) const;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTCreateQuotaQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Create; }
};
}


@ -35,5 +35,7 @@ public:
ASTPtr clone() const override;
void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTCreateRoleQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Create; }
};
}


@ -51,5 +51,7 @@ public:
void replaceCurrentUserTag(const String & current_user_name) const;
void replaceEmptyDatabase(const String & current_database) const;
QueryKind getQueryKind() const override { return QueryKind::Create; }
};
}


@ -41,5 +41,6 @@ public:
void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override;
void replaceCurrentUserTag(const String & current_user_name) const;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTCreateSettingsProfileQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Create; }
};
}


@ -63,5 +63,7 @@ public:
void formatImpl(const FormatSettings & format, FormatState &, FormatStateStacked) const override;
bool hasSecretParts() const override;
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTCreateUserQuery>(clone()); }
QueryKind getQueryKind() const override { return QueryKind::Create; }
};
}


@ -29,5 +29,7 @@ public:
ASTPtr getRewrittenASTWithoutOnCluster(const WithoutOnClusterASTRewriteParams &) const override { return removeOnCluster<ASTDropAccessEntityQuery>(clone()); }
void replaceEmptyDatabase(const String & current_database) const;
QueryKind getQueryKind() const override { return QueryKind::Drop; }
};
}


@ -27,5 +27,7 @@ public:
String getID(char) const override;
ASTPtr clone() const override;
void formatImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
QueryKind getQueryKind() const override { return QueryKind::Set; }
};
}


@ -31,6 +31,8 @@ public:
void replaceEmptyDatabase(const String & current_database);
QueryKind getQueryKind() const override { return QueryKind::Show; }
protected:
void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;


@ -40,6 +40,8 @@ public:
void replaceEmptyDatabase(const String & current_database);
QueryKind getQueryKind() const override { return QueryKind::Show; }
protected:
String getKeyword() const;
void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;


@ -17,5 +17,7 @@ public:
String getID(char) const override;
ASTPtr clone() const override;
void formatQueryImpl(const FormatSettings & settings, FormatState &, FormatStateStacked) const override;
QueryKind getQueryKind() const override { return QueryKind::Show; }
};
}


@ -253,16 +253,32 @@ public:
enum class QueryKind : uint8_t
{
None = 0,
Alter,
Select,
Insert,
Delete,
Create,
Drop,
Grant,
Insert,
Rename,
Optimize,
Check,
Alter,
Grant,
Revoke,
SelectIntersectExcept,
Select,
System,
Set,
Use,
Show,
Exists,
Describe,
Explain,
Backup,
Restore,
KillQuery,
ExternalDDL,
Begin,
Commit,
Rollback,
SetTransactionSnapshot,
};
/// Return QueryKind of this AST query.
virtual QueryKind getQueryKind() const { return QueryKind::None; }


@ -91,6 +91,8 @@ protected:
settings.ostr << (settings.hilite ? hilite_keyword : "") << ASTExistsDatabaseQueryIDAndQueryNames::Query
<< " " << (settings.hilite ? hilite_none : "") << backQuoteIfNeed(getDatabase());
}
QueryKind getQueryKind() const override { return QueryKind::Exists; }
};
class ASTShowCreateDatabaseQuery : public ASTQueryWithTableAndOutputImpl<ASTShowCreateDatabaseQueryIDAndQueryNames>
@ -123,6 +125,8 @@ public:
return res;
}
QueryKind getQueryKind() const override { return QueryKind::Describe; }
protected:
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override
{


@ -61,6 +61,7 @@ NamesAndTypesList StorageSystemProcesses::getNamesAndTypes()
{"memory_usage", std::make_shared<DataTypeInt64>()},
{"peak_memory_usage", std::make_shared<DataTypeInt64>()},
{"query", std::make_shared<DataTypeString>()},
{"query_kind", std::make_shared<DataTypeString>()},
{"thread_ids", std::make_shared<DataTypeArray>(std::make_shared<DataTypeUInt64>())},
{"ProfileEvents", std::make_shared<DataTypeMap>(std::make_shared<DataTypeString>(), std::make_shared<DataTypeUInt64>())},
@ -130,6 +131,7 @@ void StorageSystemProcesses::fillData(MutableColumns & res_columns, ContextPtr c
res_columns[i++]->insert(process.memory_usage);
res_columns[i++]->insert(process.peak_memory_usage);
res_columns[i++]->insert(process.query);
res_columns[i++]->insert(magic_enum::enum_name(process.query_kind));
{
Array threads_array;


@ -96,6 +96,16 @@ def test_select(started_cluster):
10,
)
# intersect and except are counted
common_pattern(
node_select,
"select",
"select sleep(1) INTERSECT select sleep(1) EXCEPT select sleep(1)",
"insert into test_concurrent_insert values (0)",
2,
10,
)
def test_insert(started_cluster):
common_pattern(


@ -0,0 +1,49 @@
#!/usr/bin/env bash
# Tags: replica
CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
function query_with_retry
{
retry=0
until [ $retry -ge 5 ]
do
result=$($CLICKHOUSE_CLIENT $2 --query="$1" 2>&1)
if [ "$?" == 0 ]; then
echo -n "$result"
return
else
retry=$(($retry + 1))
sleep 3
fi
done
echo "Query '$1' failed with '$result'"
}
$CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS ttl_repl1"
$CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS ttl_repl2"
$CLICKHOUSE_CLIENT --query="CREATE TABLE ttl_repl1(d Date, x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/{database}/test_00933/ttl_repl', '1') PARTITION BY toDayOfMonth(d) ORDER BY x TTL d + INTERVAL 1 DAY;"
$CLICKHOUSE_CLIENT --query="CREATE TABLE ttl_repl2(d Date, x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/{database}/test_00933/ttl_repl', '2') PARTITION BY toDayOfMonth(d) ORDER BY x TTL d + INTERVAL 1 DAY;"
$CLICKHOUSE_CLIENT --query="INSERT INTO TABLE ttl_repl1 VALUES (toDate('2000-10-10 00:00:00'), 100)"
$CLICKHOUSE_CLIENT --query="INSERT INTO TABLE ttl_repl1 VALUES (toDate('2100-10-10 00:00:00'), 200)"
$CLICKHOUSE_CLIENT --query="ALTER TABLE ttl_repl1 MODIFY TTL d + INTERVAL 1 DAY"
$CLICKHOUSE_CLIENT --query="SYSTEM SYNC REPLICA ttl_repl2"
$CLICKHOUSE_CLIENT --query="INSERT INTO TABLE ttl_repl1 VALUES (toDate('2000-10-10 00:00:00'), 300)"
$CLICKHOUSE_CLIENT --query="INSERT INTO TABLE ttl_repl1 VALUES (toDate('2100-10-10 00:00:00'), 400)"
$CLICKHOUSE_CLIENT --query="SYSTEM SYNC REPLICA ttl_repl2"
query_with_retry "OPTIMIZE TABLE ttl_repl2 FINAL SETTINGS optimize_throw_if_noop = 1"
$CLICKHOUSE_CLIENT --query="SELECT x FROM ttl_repl2 ORDER BY x"
$CLICKHOUSE_CLIENT --query="SHOW CREATE TABLE ttl_repl2"
$CLICKHOUSE_CLIENT --query="DROP TABLE ttl_repl1"
$CLICKHOUSE_CLIENT --query="DROP TABLE ttl_repl2"


@ -1,30 +0,0 @@
-- Tags: long, replica
DROP TABLE IF EXISTS ttl_repl1;
DROP TABLE IF EXISTS ttl_repl2;
CREATE TABLE ttl_repl1(d Date, x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/{database}/test_00933/ttl_repl', '1')
PARTITION BY toDayOfMonth(d) ORDER BY x TTL d + INTERVAL 1 DAY;
CREATE TABLE ttl_repl2(d Date, x UInt32) ENGINE ReplicatedMergeTree('/clickhouse/tables/{database}/test_00933/ttl_repl', '2')
PARTITION BY toDayOfMonth(d) ORDER BY x TTL d + INTERVAL 1 DAY;
INSERT INTO TABLE ttl_repl1 VALUES (toDate('2000-10-10 00:00:00'), 100);
INSERT INTO TABLE ttl_repl1 VALUES (toDate('2100-10-10 00:00:00'), 200);
ALTER TABLE ttl_repl1 MODIFY TTL d + INTERVAL 1 DAY;
SYSTEM SYNC REPLICA ttl_repl2;
INSERT INTO TABLE ttl_repl1 VALUES (toDate('2000-10-10 00:00:00'), 300);
INSERT INTO TABLE ttl_repl1 VALUES (toDate('2100-10-10 00:00:00'), 400);
SYSTEM SYNC REPLICA ttl_repl2;
SELECT sleep(1) format Null; -- wait for probable merges after inserts
OPTIMIZE TABLE ttl_repl2 FINAL;
SELECT x FROM ttl_repl2 ORDER BY x;
SHOW CREATE TABLE ttl_repl2;
DROP TABLE ttl_repl1;
DROP TABLE ttl_repl2;


@ -16,7 +16,7 @@ alter table test_log_queries.logtable rename column j to x, rename column k to y
alter table test_log_queries.logtable2 add column x int, add column y int 1199561338572582360 Alter ['test_log_queries'] ['test_log_queries.logtable2'] ['test_log_queries.logtable2.x','test_log_queries.logtable2.y']
alter table test_log_queries.logtable3 drop column i, drop column k 340702370038862784 Alter ['test_log_queries'] ['test_log_queries.logtable3'] ['test_log_queries.logtable3.i','test_log_queries.logtable3.k']
rename table test_log_queries.logtable2 to test_log_queries.logtable4, test_log_queries.logtable3 to test_log_queries.logtable5 17256232154191063008 Rename ['test_log_queries'] ['test_log_queries.logtable2','test_log_queries.logtable3','test_log_queries.logtable4','test_log_queries.logtable5'] []
optimize table test_log_queries.logtable 12932884188099170316 ['test_log_queries'] ['test_log_queries.logtable'] []
optimize table test_log_queries.logtable 12932884188099170316 Optimize ['test_log_queries'] ['test_log_queries.logtable'] []
drop table if exists test_log_queries.logtable 9614905142075064664 Drop ['test_log_queries'] ['test_log_queries.logtable'] []
drop table if exists test_log_queries.logtable2 5276868561533661466 Drop ['test_log_queries'] ['test_log_queries.logtable2'] []
drop table if exists test_log_queries.logtable3 4776768361842582387 Drop ['test_log_queries'] ['test_log_queries.logtable3'] []


@ -10,27 +10,27 @@ Misc queries
ACTUAL LOG CONTENT:
Select SELECT \'DROP queries and also a cleanup before the test\';
Drop DROP DATABASE IF EXISTS sqllt SYNC;
DROP USER IF EXISTS sqllt_user;
DROP ROLE IF EXISTS sqllt_role;
DROP POLICY IF EXISTS sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary;
DROP ROW POLICY IF EXISTS sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary;
DROP QUOTA IF EXISTS sqllt_quota;
DROP SETTINGS PROFILE IF EXISTS sqllt_settings_profile;
Drop DROP USER IF EXISTS sqllt_user;
Drop DROP ROLE IF EXISTS sqllt_role;
Drop DROP POLICY IF EXISTS sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary;
Drop DROP ROW POLICY IF EXISTS sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary;
Drop DROP QUOTA IF EXISTS sqllt_quota;
Drop DROP SETTINGS PROFILE IF EXISTS sqllt_settings_profile;
Select SELECT \'CREATE queries\';
Create CREATE DATABASE sqllt;
Create CREATE TABLE sqllt.table\n(\n i UInt8, s String\n)\nENGINE = MergeTree PARTITION BY tuple() ORDER BY tuple();
Create CREATE VIEW sqllt.view AS SELECT i, s FROM sqllt.table;
Create CREATE DICTIONARY sqllt.dictionary (key UInt64, value UInt64) PRIMARY KEY key SOURCE(CLICKHOUSE(DB \'sqllt\' TABLE \'table\' HOST \'localhost\' PORT 9001)) LIFETIME(0) LAYOUT(FLAT());
CREATE USER sqllt_user IDENTIFIED WITH plaintext_password
CREATE ROLE sqllt_role;
CREATE POLICY sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL;
CREATE POLICY sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL;
CREATE QUOTA sqllt_quota KEYED BY user_name TO sqllt_role;
CREATE SETTINGS PROFILE sqllt_settings_profile SETTINGS interactive_delay = 200000;
Create CREATE USER sqllt_user IDENTIFIED WITH plaintext_password
Create CREATE ROLE sqllt_role;
Create CREATE POLICY sqllt_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL;
Create CREATE POLICY sqllt_row_policy ON sqllt.table, sqllt.view, sqllt.dictionary AS PERMISSIVE TO ALL;
Create CREATE QUOTA sqllt_quota KEYED BY user_name TO sqllt_role;
Create CREATE SETTINGS PROFILE sqllt_settings_profile SETTINGS interactive_delay = 200000;
Grant GRANT sqllt_role TO sqllt_user;
Select SELECT \'SET queries\';
SET log_profile_events=false;
SET DEFAULT ROLE sqllt_role TO sqllt_user;
Set SET log_profile_events=false;
Set SET DEFAULT ROLE sqllt_role TO sqllt_user;
Select -- SET ROLE sqllt_role; -- tests are executed by user `default` which is defined in XML and is impossible to update.\n\nSELECT \'ALTER TABLE queries\';
Alter ALTER TABLE sqllt.table ADD COLUMN new_col UInt32 DEFAULT 123456789;
Alter ALTER TABLE sqllt.table COMMENT COLUMN new_col \'dummy column with a comment\';
@ -54,19 +54,19 @@ System SYSTEM START FETCHES sqllt.table
System SYSTEM STOP REPLICATED SENDS sqllt.table
System SYSTEM START REPLICATED SENDS sqllt.table
Select -- SYSTEM RELOAD DICTIONARY sqllt.dictionary; -- temporary out of order: Code: 210, Connection refused (localhost:9001) (version 21.3.1.1)\n-- DROP REPLICA\n-- haha, no\n-- SYSTEM KILL;\n-- SYSTEM SHUTDOWN;\n\n-- Since we don\'t really care about the actual output, suppress it with `FORMAT Null`.\nSELECT \'SHOW queries\';
SHOW CREATE TABLE sqllt.table FORMAT Null;
SHOW CREATE DICTIONARY sqllt.dictionary FORMAT Null;
SHOW DATABASES LIKE \'sqllt\' FORMAT Null;
SHOW TABLES FROM sqllt FORMAT Null;
SHOW DICTIONARIES FROM sqllt FORMAT Null;
SHOW GRANTS FORMAT Null;
SHOW GRANTS FOR sqllt_user FORMAT Null;
SHOW CREATE USER sqllt_user FORMAT Null;
SHOW CREATE ROLE sqllt_role FORMAT Null;
SHOW CREATE POLICY sqllt_policy FORMAT Null;
SHOW CREATE ROW POLICY sqllt_row_policy FORMAT Null;
SHOW CREATE QUOTA sqllt_quota FORMAT Null;
SHOW CREATE SETTINGS PROFILE sqllt_settings_profile FORMAT Null;
Show SHOW CREATE TABLE sqllt.table FORMAT Null;
Show SHOW CREATE DICTIONARY sqllt.dictionary FORMAT Null;
Show SHOW DATABASES LIKE \'sqllt\' FORMAT Null;
Show SHOW TABLES FROM sqllt FORMAT Null;
Show SHOW DICTIONARIES FROM sqllt FORMAT Null;
Show SHOW GRANTS FORMAT Null;
Show SHOW GRANTS FOR sqllt_user FORMAT Null;
Show SHOW CREATE USER sqllt_user FORMAT Null;
Show SHOW CREATE ROLE sqllt_role FORMAT Null;
Show SHOW CREATE POLICY sqllt_policy FORMAT Null;
Show SHOW CREATE ROW POLICY sqllt_row_policy FORMAT Null;
Show SHOW CREATE QUOTA sqllt_quota FORMAT Null;
Show SHOW CREATE SETTINGS PROFILE sqllt_settings_profile FORMAT Null;
Select SELECT \'GRANT queries\';
Grant GRANT SELECT ON sqllt.table TO sqllt_user;
Grant GRANT DROP ON sqllt.view TO sqllt_user;
@ -74,13 +74,13 @@ Select SELECT \'REVOKE queries\';
Revoke REVOKE SELECT ON sqllt.table FROM sqllt_user;
Revoke REVOKE DROP ON sqllt.view FROM sqllt_user;
Select SELECT \'Misc queries\';
DESCRIBE TABLE sqllt.table FORMAT Null;
CHECK TABLE sqllt.table FORMAT Null;
Describe DESCRIBE TABLE sqllt.table FORMAT Null;
Check CHECK TABLE sqllt.table FORMAT Null;
Drop DETACH TABLE sqllt.table;
Create ATTACH TABLE sqllt.table;
Rename RENAME TABLE sqllt.table TO sqllt.table_new;
Rename RENAME TABLE sqllt.table_new TO sqllt.table;
Drop TRUNCATE TABLE sqllt.table;
Drop DROP TABLE sqllt.table SYNC;
SET log_comment=\'\';
Set SET log_comment=\'\';
DROP queries and also a cleanup after the test


@ -612,6 +612,7 @@ CREATE TABLE system.processes
`memory_usage` Int64,
`peak_memory_usage` Int64,
`query` String,
`query_kind` String,
`thread_ids` Array(UInt64),
`ProfileEvents` Map(String, UInt64),
`Settings` Map(String, String),
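With the new column in place, the kinds of currently running queries can be inspected directly (a sketch):
```sql
SELECT query_kind, count()
FROM system.processes
GROUP BY query_kind;
```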