Initializing the queues of pending on-disk files for async INSERT cannot be
done after the table has been attached and become visible to users, since
this initialization sets up the per-table counter that is used during
INSERT. Otherwise there is a window during which the counter is not yet
initialized, so it starts from the beginning again, and that can lead to a
CANNOT_LINK error:
Destination file /data/clickhouse/data/urls_v1/urls_in/shard6_replica1/13129817.bin is already exist and have different inode
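
A minimal sketch of the ordering this commit enforces (hypothetical names,
not the actual ClickHouse code): the per-table counter is restored from the
files already on disk before the table becomes visible, so a concurrent
INSERT cannot restart the numbering from scratch.

    #include <algorithm>
    #include <atomic>
    #include <cstdint>
    #include <filesystem>
    #include <stdexcept>
    #include <string>

    class PendingFilesQueue
    {
    public:
        /// Must run before the table is registered and becomes visible:
        /// scan existing "<N>.bin" files so new INSERTs continue the sequence.
        void initializeFromDisk(const std::filesystem::path & dir)
        {
            uint64_t max_seen = 0;
            for (const auto & entry : std::filesystem::directory_iterator(dir))
            {
                if (entry.path().extension() != ".bin")
                    continue;
                try
                {
                    max_seen = std::max<uint64_t>(max_seen, std::stoull(entry.path().stem().string()));
                }
                catch (const std::exception &) {} /// skip files that are not "<number>.bin"
            }
            counter = max_seen;
        }

        /// Called from INSERT: if the counter was not initialized first, this
        /// starts from 1 again and collides with the files already on disk.
        std::string nextFileName() { return std::to_string(++counter) + ".bin"; }

    private:
        std::atomic<uint64_t> counter{0};
    };

With the counter restored before the table is visible, an INSERT that races
with ATTACH continues the existing numbering instead of trying to create a
second file with an already-used name.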
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Implementation:
* Moved the concurrent backup/restore check inside the try-catch block that sets the status, so that other nodes in the cluster are aware of failures.
* Renamed backup_uuid to restore_uuid in RestoreSettings.
Testing:
* Updated the test test_backup_and_restore_on_cluster/test_disallow_concurrency to check for the specific backup/restore ID.
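
A minimal sketch of the first point, with hypothetical names standing in
for the real backup/restore coordination classes: the concurrency check is
raised inside the try block whose catch publishes the status, so the other
nodes observe the failure instead of waiting on a status that never comes.

    #include <stdexcept>

    enum class Status { CREATING, COMPLETED, ERROR };

    /// Hypothetical stand-in for the coordination state shared between nodes.
    struct Coordination
    {
        bool hasConcurrentOperations() const { return concurrent; }
        void setStatus(Status s) { status = s; } /// visible to the other nodes

        bool concurrent = false;
        Status status = Status::CREATING;
    };

    void doRestore() {}

    void runRestore(Coordination & coordination)
    {
        try
        {
            /// The check now lives *inside* the try block ...
            if (coordination.hasConcurrentOperations())
                throw std::runtime_error("Concurrent restores are not allowed");

            doRestore();
            coordination.setStatus(Status::COMPLETED);
        }
        catch (...)
        {
            /// ... so a rejected concurrent restore still sets the error
            /// status that the other nodes poll, instead of leaving them
            /// unaware of the failure.
            coordination.setStatus(Status::ERROR);
            throw;
        }
    }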
Right now, in the case of DETACH/ATTACH, there can be a window during which,
after the table has been DETACH'ed, someone can still use it; the common
example here is MV handling.
This happens because TableExclusiveLockHolder does not guard the shared_ptr
of the IStorage, so anyone who still holds that pointer can keep using the
table. If ATTACH is then done for this table, you can end up with multiple
live instances of it.
This is not possible for DROP, because you must lock a table before using
it, and after the table has been DROP'ed it cannot be locked anymore.
So let's do the same for DETACH.
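
A minimal sketch of the DROP pattern being extended to DETACH (simplified
types, not the actual IStorage interface): readers must lock before use,
and the exclusive lock plus a liveness flag make any shared_ptr that is
held across the DETACH unusable afterwards.

    #include <memory>
    #include <shared_mutex>
    #include <stdexcept>

    struct Storage
    {
        std::shared_mutex rwlock;
        bool is_detached_or_dropped = false;
    };

    /// Reader side (e.g. MV handling): lock first, then check liveness, so a
    /// stale shared_ptr<Storage> cannot be used after DETACH.
    std::shared_lock<std::shared_mutex> lockForShare(Storage & storage)
    {
        std::shared_lock<std::shared_mutex> lock(storage.rwlock);
        if (storage.is_detached_or_dropped)
            throw std::runtime_error("Table is detached or dropped");
        return lock;
    }

    /// DETACH side, same as DROP: the exclusive lock waits out all current
    /// readers, and the flag rejects everyone who comes later, so a
    /// subsequent ATTACH cannot end up racing with a second live instance.
    void detach(Storage & storage)
    {
        std::unique_lock<std::shared_mutex> lock(storage.rwlock);
        storage.is_detached_or_dropped = true;
    }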
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
There are the following problems with this patch:
- Loses files on exception
- An existing current_batch.txt on startup leads to an ENOENT error and
  distributed sends hanging until the table is DETACH'ed and ATTACH'ed again
- A race between creating the queue for sending at table startup and
  INSERT: if the queue had been created from INSERT, it will not be
  initialized from disk (see the sketch below)
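
A minimal sketch of that race, with hypothetical names: in the reverted
patch only the startup path loads pending files from disk, and it skips
queues that already exist, so a queue created first by a concurrent INSERT
is never initialized.

    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>

    struct Queue
    {
        bool initialized_from_disk = false;
    };

    std::mutex queues_mutex;
    std::map<std::string, std::shared_ptr<Queue>> queues;

    /// INSERT path: creates the queue on demand, but never reads the batch
    /// files already on disk for that directory.
    std::shared_ptr<Queue> getOrCreateQueue(const std::string & dir)
    {
        std::lock_guard<std::mutex> lock(queues_mutex);
        auto & slot = queues[dir];
        if (!slot)
            slot = std::make_shared<Queue>();
        return slot;
    }

    /// Startup path: initializes from disk only the queues it creates
    /// itself. If an INSERT won the race above, initialized_from_disk stays
    /// false and the pending files in that directory are silently ignored.
    void startupQueue(const std::string & dir)
    {
        std::lock_guard<std::mutex> lock(queues_mutex);
        auto & slot = queues[dir];
        if (slot)
            return; /// already created by INSERT -- disk state never loaded
        slot = std::make_shared<Queue>();
        slot->initialized_from_disk = true; /// stands in for reading pending files
    }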
They were addressed in #45491, but that makes the code more complex, and,
since the release is likely coming soon, it is better to revert the change.
This reverts commit 94604f71b7, reversing
changes made to 80f6a45376.