ClickHouse/src/Interpreters/InterpreterSystemQuery.h

#pragma once

#include <Interpreters/IInterpreter.h>
#include <Parsers/IAST_fwd.h>
#include <Storages/IStorage_fwd.h>
#include <Interpreters/StorageID.h>
#include <Common/ActionLock.h>
#include <Disks/IVolume.h>

namespace Poco { class Logger; }

namespace DB
{

class Context;
class AccessRightsElements;
class ASTSystemQuery;

/** Implement various SYSTEM queries.
* Examples: SYSTEM SHUTDOWN, SYSTEM DROP MARK CACHE.
*
* Some commands are intended to stop/start background actions for tables and come in two variants:
*
* 1. SYSTEM STOP MERGES table, SYSTEM START MERGES table
* - start/stop actions for a specific table.
*
* 2. SYSTEM STOP MERGES, SYSTEM START MERGES
* - start/stop actions for all existing tables.
* Note that tables created after this query is executed will not be affected.
*/
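
/** A minimal usage sketch (illustrative only, not lifted from the codebase): the query AST and the
  * mutable context are assumed to be provided by the surrounding query pipeline (e.g. executeQuery):
  *
  *     InterpreterSystemQuery interpreter(system_query_ast, mutable_context);
  *     BlockIO io = interpreter.execute();
  */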
class InterpreterSystemQuery : public IInterpreter, WithMutableContext
{
public:
    InterpreterSystemQuery(const ASTPtr & query_ptr_, ContextMutablePtr context_);

    BlockIO execute() override;

private:
    ASTPtr query_ptr;
    Poco::Logger * log = nullptr;
    StorageID table_id = StorageID::createEmpty();      /// Will be set up if query contains table name
    VolumePtr volume_ptr;

    /// Tries to get a replicated table and restart it
    /// Returns pointer to a newly created table if the restart was successful
    StoragePtr tryRestartReplica(const StorageID & replica, ContextMutablePtr context, bool need_ddl_guard = true);

    void restartReplica(const StorageID & replica, ContextMutablePtr system_context);
    void restartReplicas(ContextMutablePtr system_context);
    void syncReplica(ASTSystemQuery & query);
    void restoreReplica();
    void dropReplica(ASTSystemQuery & query);
    bool dropReplicaImpl(ASTSystemQuery & query, const StoragePtr & table);
    void flushDistributed(ASTSystemQuery & query);
    void restartDisk(String & name);

    AccessRightsElements getRequiredAccessForDDLOnCluster() const;

    void startStopAction(StorageActionBlockType action_type, bool start);

    void extendQueryLogElemImpl(QueryLogElement &, const ASTPtr &, ContextPtr) const override;
};
}