Merge remote-tracking branch 'upstream/master'

alesapin 2018-07-25 13:59:23 +03:00
commit f04b2c3340
43 changed files with 193 additions and 93 deletions

View File

@ -2,6 +2,8 @@
ClickHouse is an open-source column-oriented database management system that allows generating analytical data reports in real time.
[![Build Status](https://travis-ci.org/yandex/ClickHouse.svg?branch=master)](https://travis-ci.org/yandex/ClickHouse)
## Useful links
* [Official website](https://clickhouse.yandex/) has a quick high-level overview of ClickHouse on its main page.
@ -9,5 +11,3 @@ ClickHouse is an open-source column-oriented database management system that all
* [Documentation](https://clickhouse.yandex/docs/en/) provides more in-depth information.
* [Contacts](https://clickhouse.yandex/#contacts) can help you get your questions answered if there are any.
[![Build Status](https://travis-ci.org/yandex/ClickHouse.svg?branch=master)](https://travis-ci.org/yandex/ClickHouse)

View File

@ -58,13 +58,13 @@ It is designed to retain the following properties of data:
Most of the properties above make the obfuscated data viable for performance testing:
- reading data, filtering, aggregation and sorting will work at almost the same speed
  as on the original data due to preserved cardinalities, magnitudes, compression ratios, etc.
It works in a deterministic fashion: you define a seed value, and the transform is fully determined by the input data and the seed.
Some transforms are one-to-one and can be reversed, so you need a large enough seed and must keep it secret.
It uses some cryptographic primitives to transform the data, but from the cryptographic point of view
it doesn't do anything properly, and you should never consider the result secure unless you have other reasons for it.
It may retain some data you don't want to publish.
@ -74,7 +74,7 @@ So, the user will be able to count exact ratio of mobile traffic.
Another example: suppose you have some private data in your table, like user emails, and you don't want to publish any single email address.
If your table is large enough, contains many different emails, and no email occurs with much higher frequency than all the others,
it will anonymize all the data perfectly. But if the column has a small number of distinct values, it may reproduce some of them.
In that case, study the exact algorithm this tool implements and consider fine-tuning some of its command-line parameters.
This tool works well only with a reasonable amount of data (at least thousands of rows). A deterministic, seed-keyed transform of the kind described above can be sketched in plain SQL, as shown below.
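The following is a minimal illustration of the idea, not the tool's actual algorithm; the `users` table, `email` column, and the seed string are hypothetical:

```sql
-- sipHash64 is keyed by the seed: the same (seed, input) pair always maps to
-- the same output, so cardinalities and join keys are preserved, while a new
-- seed produces a completely different mapping.
SELECT
    email,
    hex(sipHash64('my-secret-seed', email)) AS obfuscated_email
FROM users;
```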

View File

@ -128,7 +128,8 @@ void BackgroundSchedulePool::TaskInfo::execute()
zkutil::WatchCallback BackgroundSchedulePool::TaskInfo::getWatchCallback()
{
return [t=shared_from_this()](const ZooKeeperImpl::ZooKeeper::WatchResponse &) {
return [t = shared_from_this()](const ZooKeeperImpl::ZooKeeper::WatchResponse &)
{
t->schedule();
};
}

View File

@ -41,7 +41,6 @@ void RegionsHierarchy::reload()
RegionID max_region_id = 0;
auto regions_reader = data_source->createReader();
RegionEntry region_entry;

View File

@ -1,5 +1,6 @@
#include <IO/ReadHelpers.h>
#include <Interpreters/evaluateConstantExpression.h>
#include <Interpreters/Context.h>
#include <Interpreters/convertFieldToType.h>
#include <Parsers/TokenIterator.h>
#include <Parsers/ExpressionListParsers.h>
@ -29,7 +30,7 @@ namespace ErrorCodes
ValuesRowInputStream::ValuesRowInputStream(ReadBuffer & istr_, const Block & header_, const Context & context_, const FormatSettings & format_settings)
: istr(istr_), header(header_), context(context_), format_settings(format_settings)
: istr(istr_), header(header_), context(std::make_unique<Context>(context_)), format_settings(format_settings)
{
/// In this format, BOM at beginning of stream cannot be confused with value, so it is safe to skip it.
skipBOMIfExists(istr);
@ -112,7 +113,7 @@ bool ValuesRowInputStream::read(MutableColumns & columns)
istr.position() = const_cast<char *>(token_iterator->begin);
std::pair<Field, DataTypePtr> value_raw = evaluateConstantExpression(ast, context);
std::pair<Field, DataTypePtr> value_raw = evaluateConstantExpression(ast, *context);
Field value = convertFieldToType(value_raw.first, type, value_raw.second.get());
if (value.isNull())

View File

@ -28,7 +28,7 @@ public:
private:
ReadBuffer & istr;
Block header;
const Context & context;
std::unique_ptr<Context> context; /// pimpl
const FormatSettings format_settings;
};

View File

@ -1286,12 +1286,12 @@ DataTypePtr FunctionArrayDistinct::getReturnTypeImpl(const DataTypes & arguments
{
const DataTypeArray * array_type = checkAndGetDataType<DataTypeArray>(arguments[0].get());
if (!array_type)
throw Exception("Argument for function " + getName() + " must be array but it "
throw Exception("Argument for function " + getName() + " must be array but it "
" has type " + arguments[0]->getName() + ".",
ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT);
auto nested_type = removeNullable(array_type->getNestedType());
return std::make_shared<DataTypeArray>(nested_type);
}
@ -1307,7 +1307,7 @@ void FunctionArrayDistinct::executeImpl(Block & block, const ColumnNumbers & arg
const IColumn & src_data = array->getData();
const ColumnArray::Offsets & offsets = array->getOffsets();
ColumnRawPtrs original_data_columns;
original_data_columns.push_back(&src_data);
@ -1416,7 +1416,7 @@ bool FunctionArrayDistinct::executeString(
HashTableAllocatorWithStackMemory<(1ULL << INITIAL_SIZE_DEGREE) * sizeof(StringRef)>>;
const PaddedPODArray<UInt8> * src_null_map = nullptr;
if (nullable_col)
{
src_null_map = &static_cast<const ColumnUInt8 *>(&nullable_col->getNullMapColumn())->getData();
@ -1471,7 +1471,7 @@ void FunctionArrayDistinct::executeHashed(
res_data_col.insertFrom(*columns[0], j);
}
}
res_offsets.emplace_back(set.size() + prev_off);
prev_off = off;
}

View File

@ -1250,7 +1250,7 @@ private:
IColumn & res_data_col,
ColumnArray::Offsets & res_offsets,
const ColumnNullable * nullable_col);
void executeHashed(
const ColumnArray::Offsets & offsets,
const ColumnRawPtrs & columns,

View File

@ -2,6 +2,7 @@
#include <Parsers/ASTKillQueryQuery.h>
#include <Parsers/queryToString.h>
#include <Interpreters/Context.h>
#include <Interpreters/DDLWorker.h>
#include <Interpreters/ProcessList.h>
#include <Interpreters/executeQuery.h>
#include <Columns/ColumnString.h>
@ -172,6 +173,9 @@ BlockIO InterpreterKillQueryQuery::execute()
{
ASTKillQueryQuery & query = typeid_cast<ASTKillQueryQuery &>(*query_ptr);
if (!query.cluster.empty())
return executeDDLQueryOnCluster(query_ptr, context, {"system"});
BlockIO res_io;
Block processes_block = getSelectFromSystemProcessesResult();
if (!processes_block)

View File

@ -1,6 +1,7 @@
#include <Storages/IStorage.h>
#include <Parsers/ASTOptimizeQuery.h>
#include <Interpreters/Context.h>
#include <Interpreters/DDLWorker.h>
#include <Interpreters/InterpreterOptimizeQuery.h>
#include <Common/typeid_cast.h>
@ -18,6 +19,9 @@ BlockIO InterpreterOptimizeQuery::execute()
{
const ASTOptimizeQuery & ast = typeid_cast<const ASTOptimizeQuery &>(*query_ptr);
if (!ast.cluster.empty())
return executeDDLQueryOnCluster(query_ptr, context, {ast.database});
StoragePtr table = context.getTable(ast.database, ast.table);
auto table_lock = table->lockStructure(true, __PRETTY_FUNCTION__);
table->optimize(query_ptr, ast.partition, ast.final, ast.deduplicate, context);

View File

@ -8,9 +8,22 @@ String ASTKillQueryQuery::getID() const
return "KillQueryQuery_" + (where_expression ? where_expression->getID() : "") + "_" + String(sync ? "SYNC" : "ASYNC");
}
ASTPtr ASTKillQueryQuery::getRewrittenASTWithoutOnCluster(const std::string & /*new_database*/) const
{
auto query_ptr = clone();
ASTKillQueryQuery & query = static_cast<ASTKillQueryQuery &>(*query_ptr);
query.cluster.clear();
return query_ptr;
}
void ASTKillQueryQuery::formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "KILL QUERY WHERE " << (settings.hilite ? hilite_none : "");
settings.ostr << (settings.hilite ? hilite_keyword : "") << "KILL QUERY ";
formatOnCluster(settings);
settings.ostr << " WHERE " << (settings.hilite ? hilite_none : "");
if (where_expression)
where_expression->formatImpl(settings, state, frame);

View File

@ -1,10 +1,11 @@
#include <Parsers/IAST.h>
#include <Parsers/ASTQueryWithOutput.h>
#include <Parsers/ASTQueryWithOnCluster.h>
namespace DB
{
class ASTKillQueryQuery : public ASTQueryWithOutput
class ASTKillQueryQuery : public ASTQueryWithOutput, public ASTQueryWithOnCluster
{
public:
ASTPtr where_expression; // expression to filter processes from system.processes table
@ -22,6 +23,8 @@ public:
String getID() const override;
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
ASTPtr getRewrittenASTWithoutOnCluster(const std::string &new_database) const override;
};
}

View File

@ -0,0 +1,39 @@
#include <Parsers/ASTOptimizeQuery.h>
namespace DB
{
ASTPtr ASTOptimizeQuery::getRewrittenASTWithoutOnCluster(const std::string & new_database) const
{
auto query_ptr = clone();
ASTOptimizeQuery & query = static_cast<ASTOptimizeQuery &>(*query_ptr);
query.cluster.clear();
if (query.database.empty())
query.database = new_database;
return query_ptr;
}
void ASTOptimizeQuery::formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "OPTIMIZE TABLE " << (settings.hilite ? hilite_none : "")
<< (!database.empty() ? backQuoteIfNeed(database) + "." : "") << backQuoteIfNeed(table);
formatOnCluster(settings);
if (partition)
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << " PARTITION " << (settings.hilite ? hilite_none : "");
partition->formatImpl(settings, state, frame);
}
if (final)
settings.ostr << (settings.hilite ? hilite_keyword : "") << " FINAL" << (settings.hilite ? hilite_none : "");
if (deduplicate)
settings.ostr << (settings.hilite ? hilite_keyword : "") << " DEDUPLICATE" << (settings.hilite ? hilite_none : "");
}
}

View File

@ -1,7 +1,8 @@
#pragma once
#include <Parsers/IAST.h>
#include <Parsers/ASTQueryWithOutput.h>
#include <Parsers/ASTQueryWithOnCluster.h>
namespace DB
{
@ -9,7 +10,7 @@ namespace DB
/** OPTIMIZE query
*/
class ASTOptimizeQuery : public IAST
class ASTOptimizeQuery : public ASTQueryWithOutput, public ASTQueryWithOnCluster
{
public:
String database;
@ -23,7 +24,8 @@ public:
bool deduplicate;
/** Get the text that identifies this element. */
String getID() const override { return "OptimizeQuery_" + database + "_" + table + (final ? "_final" : "") + (deduplicate ? "_deduplicate" : ""); }
String getID() const override
{ return "OptimizeQuery_" + database + "_" + table + (final ? "_final" : "") + (deduplicate ? "_deduplicate" : ""); }
ASTPtr clone() const override
{
@ -39,24 +41,10 @@ public:
return res;
}
protected:
void formatImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << "OPTIMIZE TABLE " << (settings.hilite ? hilite_none : "")
<< (!database.empty() ? backQuoteIfNeed(database) + "." : "") << backQuoteIfNeed(table);
void formatQueryImpl(const FormatSettings & settings, FormatState & state, FormatStateStacked frame) const override;
if (partition)
{
settings.ostr << (settings.hilite ? hilite_keyword : "") << " PARTITION " << (settings.hilite ? hilite_none : "");
partition->formatImpl(settings, state, frame);
}
ASTPtr getRewrittenASTWithoutOnCluster(const std::string &new_database) const override;
if (final)
settings.ostr << (settings.hilite ? hilite_keyword : "") << " FINAL" << (settings.hilite ? hilite_none : "");
if (deduplicate)
settings.ostr << (settings.hilite ? hilite_keyword : "") << " DEDUPLICATE" << (settings.hilite ? hilite_none : "");
}
};
}

View File

@ -11,29 +11,36 @@ namespace DB
bool ParserKillQueryQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
{
String cluster_str;
auto query = std::make_shared<ASTKillQueryQuery>();
if (!ParserKeyword{"KILL QUERY"}.ignore(pos, expected))
return false;
if (!ParserKeyword{"WHERE"}.ignore(pos, expected))
return false;
ParserKeyword p_on{"ON"};
ParserKeyword p_test{"TEST"};
ParserKeyword p_sync{"SYNC"};
ParserKeyword p_async{"ASYNC"};
ParserKeyword p_where{"WHERE"};
ParserKeyword p_kill_query{"KILL QUERY"};
ParserExpression p_where_expression;
if (!p_where_expression.parse(pos, query->where_expression, expected))
if (!p_kill_query.ignore(pos, expected))
return false;
query->children.emplace_back(query->where_expression);
if (p_on.ignore(pos, expected) && !ASTQueryWithOnCluster::parse(pos, cluster_str, expected))
return false;
if (ParserKeyword{"SYNC"}.ignore(pos))
if (p_where.ignore(pos, expected) && !p_where_expression.parse(pos, query->where_expression, expected))
return false;
if (p_sync.ignore(pos, expected))
query->sync = true;
else if (ParserKeyword{"ASYNC"}.ignore(pos))
else if (p_async.ignore(pos, expected))
query->sync = false;
else if (ParserKeyword{"TEST"}.ignore(pos))
else if (p_test.ignore(pos, expected))
query->test = true;
query->cluster = cluster_str;
query->children.emplace_back(query->where_expression);
node = std::move(query);
return true;
}

View File

@ -28,6 +28,7 @@ bool ParserOptimizeQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expecte
ASTPtr partition;
bool final = false;
bool deduplicate = false;
String cluster_str;
if (!s_optimize_table.ignore(pos, expected))
return false;
@ -42,6 +43,9 @@ bool ParserOptimizeQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expecte
return false;
}
if (ParserKeyword{"ON"}.ignore(pos, expected) && !ASTQueryWithOnCluster::parse(pos, cluster_str, expected))
return false;
if (s_partition.ignore(pos, expected))
{
if (!partition_p.parse(pos, partition, expected))
@ -61,6 +65,8 @@ bool ParserOptimizeQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expecte
query->database = typeid_cast<const ASTIdentifier &>(*database).name;
if (table)
query->table = typeid_cast<const ASTIdentifier &>(*table).name;
query->cluster = cluster_str;
query->partition = partition;
query->final = final;
query->deduplicate = deduplicate;

View File

@ -21,14 +21,12 @@ bool ParserQuery::parseImpl(Pos & pos, ASTPtr & node, Expected & expected)
ParserInsertQuery insert_p(end);
ParserUseQuery use_p;
ParserSetQuery set_p;
ParserOptimizeQuery optimize_p;
ParserSystemQuery system_p;
bool res = query_with_output_p.parse(pos, node, expected)
|| insert_p.parse(pos, node, expected)
|| use_p.parse(pos, node, expected)
|| set_p.parse(pos, node, expected)
|| optimize_p.parse(pos, node, expected)
|| system_p.parse(pos, node, expected);
return res;

View File

@ -10,6 +10,7 @@
#include <Parsers/ParserAlterQuery.h>
#include <Parsers/ParserDropQuery.h>
#include <Parsers/ParserKillQueryQuery.h>
#include <Parsers/ParserOptimizeQuery.h>
namespace DB
@ -27,6 +28,7 @@ bool ParserQueryWithOutput::parseImpl(Pos & pos, ASTPtr & node, Expected & expec
ParserRenameQuery rename_p;
ParserDropQuery drop_p;
ParserCheckQuery check_p;
ParserOptimizeQuery optimize_p;
ParserKillQueryQuery kill_query_p;
ASTPtr query;
@ -41,7 +43,8 @@ bool ParserQueryWithOutput::parseImpl(Pos & pos, ASTPtr & node, Expected & expec
|| rename_p.parse(pos, query, expected)
|| drop_p.parse(pos, query, expected)
|| check_p.parse(pos, query, expected)
|| kill_query_p.parse(pos, query, expected);
|| kill_query_p.parse(pos, query, expected)
|| optimize_p.parse(pos, query, expected);
if (!parsed)
return false;

View File

@ -39,9 +39,9 @@ public:
*/
void check(const NamesAndTypesList & columns, const Names & column_names) const;
/** Check that the data block for the record contains all the columns of the table with the correct types,
/** Check that the data block contains all the columns of the table with the correct types,
* contains only the columns of the table, and all the columns are different.
* If need_all, still checks that all the columns of the table are in the block.
* If need_all, checks that all the columns of the table are in the block.
*/
void check(const Block & block, bool need_all = false) const;

View File

@ -139,7 +139,7 @@ struct MergeTreeSettings
* instead of ordinary ones (dozens of KB). \
* Before enabling, check that all replicas support the new format. \
*/ \
M(SettingBool, use_minimalistic_checksums_in_zookeeper, false)
M(SettingBool, use_minimalistic_checksums_in_zookeeper, true)
/// Settings that should not change after the creation of a table.
#define APPLY_FOR_IMMUTABLE_MERGE_TREE_SETTINGS(M) \
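Since the default above flipped to `true`, a mixed-version cluster may want to opt out per table until every replica understands the compact format. A hedged sketch (table name and ZooKeeper paths are hypothetical; assumes the new-style `CREATE TABLE ... SETTINGS` syntax):

```sql
-- Keep the old, verbose checksum format in ZooKeeper for this table
-- until all replicas are upgraded.
CREATE TABLE hits_replicated (d Date, x UInt64)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{shard}/hits', '{replica}')
PARTITION BY toYYYYMM(d)
ORDER BY x
SETTINGS use_minimalistic_checksums_in_zookeeper = 0;
```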

View File

@ -18,10 +18,7 @@ void StorageSystemAggregateFunctionCombinators::fillData(MutableColumns & res_co
for (const auto & pair : combinators)
{
res_columns[0]->insert(pair.first);
if (pair.second->isForInternalUsageOnly())
res_columns[1]->insert(UInt64(1));
else
res_columns[1]->insert(UInt64(0));
res_columns[1]->insert(UInt64(pair.second->isForInternalUsageOnly()));
}
}

View File

@ -1,6 +1,7 @@
#include <Common/config_build.h>
#include <DataTypes/DataTypeString.h>
#include <Interpreters/Settings.h>
#include <Storages/System/StorageSystemBuildOptions.h>
#include <Common/config_build.h>
namespace DB
{

View File

@ -1,10 +1,13 @@
#include <Common/DNSResolver.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Interpreters/Cluster.h>
#include <Interpreters/Context.h>
#include <Storages/System/StorageSystemClusters.h>
#include <Common/DNSResolver.h>
namespace DB
{
NamesAndTypesList StorageSystemClusters::getNamesAndTypes()
{
return {
@ -23,7 +26,8 @@ NamesAndTypesList StorageSystemClusters::getNamesAndTypes()
void StorageSystemClusters::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
{
auto updateColumns = [&](const std::string & cluster_name, const Cluster::ShardInfo & shard_info, const Cluster::Address & address) {
auto updateColumns = [&](const std::string & cluster_name, const Cluster::ShardInfo & shard_info, const Cluster::Address & address)
{
size_t i = 0;
res_columns[i++]->insert(cluster_name);
res_columns[i++]->insert(static_cast<UInt64>(shard_info.shard_num));

View File

@ -15,7 +15,8 @@
namespace DB
{
NamesAndTypesList StorageSystemColumns::getNamesAndTypes() {
NamesAndTypesList StorageSystemColumns::getNamesAndTypes()
{
return {
{ "database", std::make_shared<DataTypeString>() },
{ "table", std::make_shared<DataTypeString>() },

View File

@ -1,5 +1,7 @@
#include <Core/Field.h>
#include <DataTypes/DataTypeFactory.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Storages/System/StorageSystemDataTypeFamilies.h>
namespace DB
@ -9,8 +11,8 @@ NamesAndTypesList StorageSystemDataTypeFamilies::getNamesAndTypes()
{
return {
{"name", std::make_shared<DataTypeString>()},
{"case_insensivie", std::make_shared<DataTypeNullable>(std::make_shared<DataTypeUInt8>())},
{"alias_to", std::make_shared<DataTypeNullable>(std::make_shared<DataTypeString>())},
{"case_insensivie", std::make_shared<DataTypeUInt8>()},
{"alias_to", std::make_shared<DataTypeString>()},
};
}

View File

@ -1,10 +1,8 @@
#pragma once
#include <DataTypes/DataTypeNullable.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Storages/System/IStorageSystemOneBlock.h>
#include <ext/shared_ptr_helper.h>
#include <Storages/System/IStorageSystemOneBlock.h>
namespace DB
{

View File

@ -1,4 +1,5 @@
#include <Databases/IDatabase.h>
#include <DataTypes/DataTypeString.h>
#include <Interpreters/Context.h>
#include <Storages/System/StorageSystemDatabases.h>

View File

@ -3,6 +3,7 @@
#include <DataTypes/DataTypeString.h>
#include <Storages/System/IStorageSystemOneBlock.h>
#include <ext/shared_ptr_helper.h>
#include <Storages/System/IStorageSystemOneBlock.h>
namespace DB

View File

@ -1,9 +1,14 @@
#include <Storages/System/StorageSystemDictionaries.h>
#include <Interpreters/Context.h>
#include <DataTypes/DataTypeArray.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/DataTypeString.h>
#include <Dictionaries/IDictionary.h>
#include <Dictionaries/IDictionarySource.h>
#include <Dictionaries/DictionaryStructure.h>
#include <Interpreters/Context.h>
#include <Interpreters/ExternalDictionaries.h>
#include <Storages/System/StorageSystemDictionaries.h>
#include <ext/map.h>
#include <mutex>
@ -30,7 +35,8 @@ NamesAndTypesList StorageSystemDictionaries::getNamesAndTypes()
};
}
void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const {
void StorageSystemDictionaries::fillData(MutableColumns & res_columns, const Context & context, const SelectQueryInfo &) const
{
const auto & external_dictionaries = context.getExternalDictionaries();
auto objects_map = external_dictionaries.getObjectsMap();
const auto & dictionaries = objects_map.get();

View File

@ -1,4 +1,6 @@
#include <Common/ProfileEvents.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Storages/System/StorageSystemEvents.h>
namespace DB

View File

@ -1,3 +1,5 @@
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Formats/FormatFactory.h>
#include <Storages/System/StorageSystemFormats.h>

View File

@ -1,7 +1,5 @@
#pragma once
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Storages/System/IStorageSystemOneBlock.h>
#include <ext/shared_ptr_helper.h>

View File

@ -1,7 +1,4 @@
#include <AggregateFunctions/AggregateFunctionFactory.h>
#include <Columns/ColumnString.h>
#include <Columns/ColumnsNumber.h>
#include <DataStreams/OneBlockInputStream.h>
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Functions/FunctionFactory.h>

View File

@ -15,7 +15,7 @@ class Context;
/** Implements the `functions` system table, which allows you to get a list
 * of all normal and aggregate functions.
*/
class StorageSystemFunctions : public ext::shared_ptr_helper<StorageSystemFunctions>, public IStorageSystemOneBlock<StorageSystemFunctions>
{
public:
std::string getName() const override { return "SystemFunctions"; }

View File

@ -1,7 +1,6 @@
#pragma once
#include <DataTypes/DataTypeString.h>
#include <DataTypes/DataTypesNumber.h>
#include <Storages/System/IStorageSystemOneBlock.h>
#include <ext/shared_ptr_helper.h>

View File

@ -1,6 +1,6 @@
#include <Common/Macros.h>
#include <Interpreters/Context.h>
#include <Storages/System/StorageSystemMacros.h>
#include <Common/Macros.h>
namespace DB

View File

@ -16,7 +16,8 @@ namespace DB
{
NamesAndTypesList StorageSystemZooKeeper::getNamesAndTypes() {
NamesAndTypesList StorageSystemZooKeeper::getNamesAndTypes()
{
return {
{ "name", std::make_shared<DataTypeString>() },
{ "value", std::make_shared<DataTypeString>() },

View File

@ -43,7 +43,8 @@ public:
const std::string & name,
const Context & context) const;
const TableFunctions & getAllTableFunctions() const {
const TableFunctions & getAllTableFunctions() const
{
return functions;
}

View File

@ -62,7 +62,8 @@ void MakeColumnsFromVector(DataHolder * ptr)
ptr->ctable.data = ptr->rowHolder.get();
}
extern "C" {
extern "C"
{
void * ClickHouseDictionary_v3_loadIds(void * data_ptr,
ClickHouseLibrary::CStrings * settings,
@ -151,7 +152,8 @@ void * ClickHouseDictionary_v3_loadKeys(void * data_ptr, ClickHouseLibrary::CStr
if (requested_keys)
{
LOG(ptr->lib->log, "requested_keys columns passed: " << requested_keys->size);
for (size_t i = 0; i < requested_keys->size; ++i) {
for (size_t i = 0; i < requested_keys->size; ++i)
{
LOG(ptr->lib->log, "requested_keys at column " << i << " passed: " << requested_keys->data[i].size);
}
}

View File

@ -206,6 +206,7 @@ services:
- server
- --config-file=/etc/clickhouse-server/config.xml
- --log-file=/var/log/clickhouse-server/clickhouse-server.log
- --errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log
depends_on: {depends_on}
'''

View File

@ -332,6 +332,26 @@ def test_allowed_databases(started_cluster):
instance.query("DROP DATABASE db1 ON CLUSTER cluster", settings={"user" : "restricted_user"})
def test_kill_query(started_cluster):
instance = cluster.instances['ch3']
ddl_check_query(instance, "KILL QUERY ON CLUSTER 'cluster' WHERE NOT elapsed FORMAT TSV")
def test_detach_query(started_cluster):
instance = cluster.instances['ch3']
ddl_check_query(instance, "DROP TABLE IF EXISTS test_attach ON CLUSTER cluster FORMAT TSV")
ddl_check_query(instance, "CREATE TABLE test_attach ON CLUSTER cluster (i Int8)ENGINE = Log")
ddl_check_query(instance, "DETACH TABLE test_attach ON CLUSTER cluster FORMAT TSV")
ddl_check_query(instance, "ATTACH TABLE test_attach ON CLUSTER cluster")
def test_optimize_query(started_cluster):
instance = cluster.instances['ch3']
ddl_check_query(instance, "DROP TABLE IF EXISTS test_optimize ON CLUSTER cluster FORMAT TSV")
ddl_check_query(instance, "CREATE TABLE test_optimize ON CLUSTER cluster (p Date, i Int32) ENGINE = MergeTree(p, p, 8192)")
ddl_check_query(instance, "OPTIMIZE TABLE test_optimize ON CLUSTER cluster FORMAT TSV")
if __name__ == '__main__':
with contextmanager(started_cluster)() as cluster:

View File

@ -11,7 +11,7 @@ After executing an ATTACH query, the server will know about the existence of the
If the table was previously detached (``DETACH``), meaning that its structure is known, you can use shorthand without defining the structure.
```sql
ATTACH TABLE [IF NOT EXISTS] [db.]name
ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster]
```
This query is used when starting the server. The server stores table metadata as files with `ATTACH` queries, which it simply runs at launch (with the exception of system tables, which are explicitly created on the server).
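For example (hypothetical database and table names), attaching a previously detached table on every node of a cluster using the new `ON CLUSTER` clause:

```sql
ATTACH TABLE IF NOT EXISTS db.hits ON CLUSTER cluster;
```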
@ -39,7 +39,7 @@ If `IF EXISTS` is specified, it doesn't return an error if the table doesn't exi
Deletes information about the 'name' table from the server. After this, the server no longer knows about the table's existence.
```sql
DETACH TABLE [IF EXISTS] [db.]name
DETACH TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```
This does not delete the table's data or metadata. On the next server launch, the server will read the metadata and find out about the table again.
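The integration test added in this commit exercises exactly this round trip across a cluster:

```sql
-- Data and metadata survive the detach; ATTACH brings the table back.
DETACH TABLE test_attach ON CLUSTER cluster;
ATTACH TABLE test_attach ON CLUSTER cluster;
```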
@ -167,7 +167,7 @@ To make settings that persist after a server restart, you can only use the serve
## OPTIMIZE
```sql
OPTIMIZE TABLE [db.]name [PARTITION partition] [FINAL]
OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition] [FINAL]
```
Asks the table engine to do something for optimization; for MergeTree-family tables this initiates an unscheduled merge of data parts.
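For instance, combining the new `ON CLUSTER` clause with the existing options (the partition value is hypothetical and assumes an old-style monthly-partitioned MergeTree table):

```sql
-- Merge all parts of the July 2018 partition on every node of the cluster.
OPTIMIZE TABLE test_optimize ON CLUSTER cluster PARTITION 201807 FINAL;
```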
@ -181,7 +181,7 @@ If you specify `FINAL`, optimization will be performed even when all the data is
## KILL QUERY
```sql
KILL QUERY
KILL QUERY [ON CLUSTER cluster]
WHERE <where expression to SELECT FROM system.processes query>
[SYNC|ASYNC|TEST]
[FORMAT format]
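-- For example, the integration test added in this commit runs:
--   KILL QUERY ON CLUSTER 'cluster' WHERE NOT elapsed FORMAT TSV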

View File

@ -13,7 +13,7 @@
If the table was previously detached (`DETACH`), i.e. its structure is known, you can use the short form without defining the structure.
```sql
ATTACH TABLE [IF NOT EXISTS] [db.]name
ATTACH TABLE [IF NOT EXISTS] [db.]name [ON CLUSTER cluster]
```
This query is used at server startup. The server stores table metadata as files with `ATTACH` queries, which it simply executes at launch (except for system tables, whose creation is hard-coded into the server).
@ -39,7 +39,7 @@ DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
Deletes information about the table 'name' from the server. After this, the server no longer knows about the table's existence.
```sql
DETACH TABLE [IF EXISTS] [db.]name
DETACH TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```
Neither the table's data nor its metadata is deleted. At the next server launch, the server will read the metadata and learn about the table again.
@ -166,7 +166,7 @@ SET param = value
## OPTIMIZE
```sql
OPTIMIZE TABLE [db.]name [PARTITION partition] [FINAL]
OPTIMIZE TABLE [db.]name [ON CLUSTER cluster] [PARTITION partition] [FINAL]
```
Asks the table engine to do something that may lead to more optimal operation; for MergeTree-family tables this initiates an unscheduled merge of data parts.
@ -180,7 +180,7 @@ OPTIMIZE TABLE [db.]name [PARTITION partition] [FINAL]
## KILL QUERY
```sql
KILL QUERY
KILL QUERY [ON CLUSTER cluster]
WHERE <where expression to SELECT FROM system.processes query>
[SYNC|ASYNC|TEST]
[FORMAT format]