* fix another issue
* use shard and replica name from Replicated database
* fix
Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
1. Dropped support for DatabaseOrdinary for MaterializeMySQL. Since
MaterializeMySQL is marked as experimental, dropping this support makes
the code more maintainable and speeds up integration tests by 50%.
2. Replaced the thread name logic for the StorageMaterializeMySQL
wrapping with setInternalQuery (similar to MaterializedPostgreSQL).
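A self-contained toy model of the design change in item 2: instead of
stashing "this query comes from MySQL replication" in the thread name and
parsing it back, the flag lives on the query context object. All types and
names below are illustrative only and do not mirror the actual ClickHouse
Context API.

    #include <iostream>

    struct QueryContext
    {
        bool is_internal_query = false;   // set once, read wherever needed
        void setInternalQuery(bool value) { is_internal_query = value; }
        bool isInternalQuery() const { return is_internal_query; }
    };

    void executeQuery(const QueryContext & ctx)
    {
        // Downstream code checks the explicit flag instead of inspecting
        // the current thread's name.
        std::cout << (ctx.isInternalQuery() ? "internal replication query\n"
                                            : "regular user query\n");
    }

    int main()
    {
        QueryContext ctx;
        ctx.setInternalQuery(true);   // what the wrapper now does
        executeQuery(ctx);
    }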
There was a race condition when issuing a DDL query on a replica just
after a new replica was added.
If the DDL query is issued after the new replica adds itself to the
list of replicas, but before the new replica has finished its
recovery, then the first replica adds the new replica to the list of
replicas it waits on to confirm the query was replicated.
Meanwhile, the new replica is still in recovery and applies queries
from the /metadata snapshot. When it's done, it bumps its log_ptr
without marking the corresponding log entries (if any) as finished.
The first replica then waits until distributed_ddl_task_timeout
expires and wrongly assumes the query was not replicated.
The issue is fixed by remembering the max_log_ptr at the exact point
where the replica adds itself to the list of replicas, then marking as
finished all queries that happened between that max_log_ptr and the
max_log_ptr of the metadata snapshot used in recovery.
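A minimal, self-contained sketch of that bookkeeping under simplified
assumptions; SharedLog, registered_at and snapshot_log_ptr are illustrative
names and do not mirror the actual DatabaseReplicated code.

    #include <cstdint>
    #include <iostream>
    #include <set>

    // Toy model of the shared DDL log kept in ZooKeeper.
    struct SharedLog
    {
        uint32_t max_log_ptr = 0;     // number of the latest DDL entry
        std::set<uint32_t> finished;  // entries this replica has confirmed
    };

    int main()
    {
        SharedLog log;
        log.max_log_ptr = 5;                       // entries 1..5 exist already

        // 1. Remember max_log_ptr at the exact point where the new replica
        //    adds itself to the list of replicas.
        uint32_t registered_at = log.max_log_ptr;

        // 2. While the new replica is still recovering, the first replica
        //    issues DDL entries 6 and 7 and waits for confirmation.
        log.max_log_ptr = 7;

        // 3. Recovery applies the /metadata snapshot, which already reflects
        //    entry 7, so the new replica's log_ptr jumps straight to 7.
        uint32_t snapshot_log_ptr = 7;

        // 4. The fix: mark every entry between the remembered pointer and the
        //    snapshot's pointer as finished, so the first replica does not
        //    wait until distributed_ddl_task_timeout expires.
        for (uint32_t entry = registered_at + 1; entry <= snapshot_log_ptr; ++entry)
            log.finished.insert(entry);

        std::cout << "confirmed entries: " << log.finished.size() << "\n";  // prints 2
    }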
The bug was randomly observed during a downstream test. It can be
reproduced more easily by inserting a sleep of a few seconds at the
end of createReplicaNodesInZooKeeper, which leaves enough time to
issue a DDL query on the first replica before recovery completes.
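For reference, the kind of artificial delay meant above would look roughly
like this; the real function lives in DatabaseReplicated and takes a
ZooKeeper pointer, its body is elided here, and the three-second value is an
arbitrary choice.

    #include <chrono>
    #include <thread>

    // Sketch only: signature simplified, original body elided.
    void createReplicaNodesInZooKeeper(/* const zkutil::ZooKeeperPtr & current_zookeeper */)
    {
        // ... create the replica's nodes as before ...

        // Added only to reproduce the race: a few seconds is enough to run a
        // DDL query on the first replica while this replica is still recovering.
        std::this_thread::sleep_for(std::chrono::seconds(3));
    }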