The use case is to alert when the queue contains broken entries. This is
especially important when ClickHouse breaks backwards compatibility between
versions and log entries written by newer versions aren't parseable by
older versions.
```
Code: 27, e.displayText() = DB::Exception: Cannot parse input: expected 'quorum: ' before: 'merge_type: 2\n'
```
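As a rough illustration of the alerting idea, a monitoring query along these lines could be run on each replica. This is only a sketch: it assumes the parse failure surfaces in `system.replication_queue.last_exception` (depending on the version it may only show up in the server log), and the exact mechanism added by this change may differ.
```
-- Sketch: flag replication queue entries whose last attempt failed with a
-- parse error (e.g. an entry written by a newer server version).
SELECT database, table, replica_name, node_name, type, last_exception
FROM system.replication_queue
WHERE last_exception LIKE '%Cannot parse input%';
```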
* initial commit: add setting and stub
* typo
* added test stub
* fix
* wip merging new integration test and code proto
* adding steps interpreters
* adding the first proposed solution (moving parts etc)
* added checking zookeeper path existence
* fixing the include
* fixing and sorting includes
* fixing outdated struct
* fix the name
* added ast ptr as level of indirection
* fix ref
* updating the changes
* working on test stub
* fix iterator -> reference
* revert rocksdb submodule update
* fixed show privileges test
* updated the test stub
* replaced rand() with thread_local_rng(), updated the tests
updated the test
fixed test config path
test fix
removed error messages
fixed the test
updated the test
fixed string literal
fixed literal
typo: =
* fixed the empty replica error message
* updated the test and the code with logs
* updated the possible test cases, updated
* added the code/test milestone comments
* updated the test (added more testcases)
* replaced native assert with CH one
* individual replicas recursive delete fix
* updated the AS db.name AST
* two small logging fixes
* manually generated AST fixes
* Updated the test, added the possible algo change
* Some thoughts about optimizing the solution:
ALTER MOVE PARTITION .. TO TABLE -> move to detached/ + ALTER ... ATTACH
* fix
* Removed the replica sync in test as it's invalid
* Some test tweaks
* tmp
* Rewrote the algo to use executeQuery instead of
hand-crafting the ASTPtr.
Two questions are still open.
* tr: logging active parts
* Extracted the parts moving algo into a separate helper function
* Fixed the test data and the queries slightly
* Replaced the query to system.parts with a direct invocation,
started building the test that breaks on various parts.
* Added the case for tables when at least one replica is alive
* Updated the test to test replicas restoration by detaching/attaching
* Altered the test to check restoration without replica restart
* Added the table swap at startup if the server failed last time
* Hotfix when only /replicas/replica... path was deleted
* Restore ZK paths while creating a replicated MergeTree table
* Updated the docs, fixed the algo for individual replicas restoration case
* Initial parts table storage fix, tests sync fix
* Reverted individual replica restoration to general algo
* Slightly optimised getDataParts
* Trying another solution with parts detaching
* Rewrote algo without any steps, added ON CLUSTER support
* Attaching parts from other replica on restoration
* Getting part checksums from ZK
* Removed ON CLUSTER, finished working solution
* Multiple small changes after review
* Fixing parallel test
* Supporting rewritten form on cluster
* Test fix
* Moar logging
* Using source replica as checksum provider
* improve test, remove some code from parser
* Trying solution with move to detached + forget
* Moving all parts (not only Committed) to detached
* Edited docs for RESTORE REPLICA
* Re-merging
* minor fixes
Co-authored-by: Alexander Tokmakov <avtokmakov@yandex-team.ru>
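For context, the work summarized above implements the `SYSTEM RESTORE REPLICA` statement. A minimal usage sketch, loosely following the documented workflow (the table name `test` is illustrative, and the ON CLUSTER variant mentioned in the commits is omitted):
```
-- A replica whose ZooKeeper metadata was lost ends up read-only.
-- Re-create the metadata from the parts that are still on disk:
SYSTEM RESTART REPLICA test;
SYSTEM RESTORE REPLICA test;
```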
This method is used for removing part metadata from ZooKeeper when executing
queue events like `DROP_RANGE`, which are triggered when a user drops a
part or a partition. There are other uses, but I'll focus only on this
one.
Before this change the method gave up silently if it was unable to
remove parts from ZooKeeper, and this behaviour seems to be problematic.
It could lead to the operation being reported as successful at first, but
the data reappearing later (very rarely), or to "stuck" events in the
replication queue.
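To make the flow concrete, a typical user-level sequence that goes through this path might look like the following (the table name and partition value are illustrative):
```
-- Detaching (or dropping) a partition puts a DROP_RANGE entry into the
-- replication log; each replica must then remove the covered parts and
-- their ZooKeeper metadata.
ALTER TABLE t DETACH PARTITION '2021-06-01';

-- The pending event can be observed while replicas process it:
SELECT replica_name, type, new_part_name, num_tries, last_exception
FROM system.replication_queue
WHERE type = 'DROP_RANGE';
```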
Here is one particular scenario which I think we've hit:
* Execute a DETACH PARTITION
* DROP_RANGE event put in the queue
* Replicas try to execute dropRange but some of them get disconnected
from ZK and 5 retries aren't enough (ZK is misbehaving), the return code
(false) is ignored and the log pointer advances.
* One of the replica where dropRange failed is restarted.
* checkParts is executed and it finds parts that weren't removed from
ZK, logs `Removing locally missing part from ZooKeeper and queueing a
fetch` and puts GET_PART on the queue.
* A few things can happen from here:
* There is a lagging replica that didn't execute DROP_RANGE yet: the part will be
fetched. The other replica will execute DROP_RANGE later and we'll
get a diverging set of parts on the replicas.
* Another replica also silently failed to remove parts from ZK: both
of them are left with GET_PART in the queue and neither of them can
make progress, logging: `No active replica has part ... or covering
part` (a query to spot such stuck entries is sketched after this list).
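A rough way to detect that second, stuck situation, assuming the fetch failures are reflected in the queue's `last_exception` column:
```
-- GET_PART entries that cannot find the part on any replica keep retrying
-- and report "No active replica has part ..." as their last exception.
SELECT database, table, replica_name, new_part_name, num_tries, last_exception
FROM system.replication_queue
WHERE type = 'GET_PART'
  AND last_exception LIKE '%No active replica has part%';
```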
A new entry type, ATTACH_PART, is added to the replicated log.
The LogEntry now also carries the pre-calculated part checksum for this
entry type, which is later used while searching in the detached/ folder.
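For reference, one user-level statement that produces such an entry on a ReplicatedMergeTree table is `ALTER TABLE ... ATTACH PART` (the table and part names below are illustrative); whether other code paths also emit it is not covered here.
```
-- Attaching a previously detached part writes an ATTACH_PART entry to the
-- replicated log; the checksum carried in the entry is later used when
-- searching the detached/ folder for a matching part.
ALTER TABLE t ATTACH PART 'all_1_1_0';
```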