Sometimes the credentials with which a backup was made are no longer active, so ClickHouse cannot read the metadata file of the base backup, and the operation fails.
Add a setting that allows ignoring the credentials of the base backup:
`use_same_s3_credentials_for_base_backup` (defaults to true).
The same applies to RESTORE. A hedged usage sketch follows.
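A minimal sketch, assuming placeholder endpoints, keys, and table names (none of these are from this change):

```sql
-- Make an incremental backup whose base backup was written with credentials
-- that have since been rotated: with the setting enabled, the base backup is
-- read using the same (current) credentials as the new backup, and the
-- credentials recorded for the base backup are ignored.
BACKUP TABLE db.table
    TO S3('https://bucket.s3.amazonaws.com/backups/incr', 'new-key-id', 'new-secret')
    SETTINGS base_backup = S3('https://bucket.s3.amazonaws.com/backups/base'),
             use_same_s3_credentials_for_base_backup = 1;

-- RESTORE honors the setting the same way.
RESTORE TABLE db.table
    FROM S3('https://bucket.s3.amazonaws.com/backups/incr', 'new-key-id', 'new-secret')
    SETTINGS use_same_s3_credentials_for_base_backup = 1;
```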
Signed-off-by: Azat Khuzhin <a.khuzhin@semrush.com>
Previously it used `FS::getModificationTime`, which has only one-second precision.
This caused an issue when the configuration was changed more than once within the
same second: both versions of the file carry the same truncated modification time,
so the later change looked identical to the earlier one and was not propagated.
This commit fixes that issue.
Closes #53276
* Optimize the merge if all hashSets are singleLevel
PR https://github.com/ClickHouse/ClickHouse/pull/50748 added a new phase,
`parallelizeMergePrepare`, before the merge for the case where the hash sets are
neither all single-level nor all two-level. It converts all the single-level sets
to two-level sets in parallel, which increases CPU utilization and QPS.
But when all the hash sets are single-level, they can also benefit from the
`parallelizeMergePrepare` optimization in most cases, provided the hash sets are not
too small. By tuning the query `SELECT COUNT(DISTINCT SearchPhrase) FROM hits_v1`
across different thread counts, we arrived at a threshold of 6,000 elements.
We tested the patch with the query `SELECT COUNT(DISTINCT Title) FROM hits_v1` on a
server with 2x80 vCPUs. With fewer than 48 threads, the hash sets are all two-level
or a mix of single-level and two-level. With 56 or more threads, all the hash sets
are single-level, and QPS improves by up to 2.35x (a reproduction sketch follows
the table).
Threads    Opt/Base
  8        100.0%
 16         99.4%
 24        110.3%
 32         99.9%
 40         99.3%
 48         99.8%
 56        183.0%
 64        234.7%
 72        233.1%
 80        229.9%
 88        224.5%
 96        229.6%
104        235.1%
112        229.5%
120        229.1%
128        217.8%
136        222.9%
144        217.8%
152        204.3%
160        203.2%
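A hedged reproduction sketch: the specific `max_threads` value below is an
assumption based on the measurements above, not part of the patch.

```sql
-- With enough threads, each per-thread hash set sees few enough distinct keys
-- to stay single-level, which is the regime the new merge path optimizes.
SELECT COUNT(DISTINCT Title) FROM hits_v1 SETTINGS max_threads = 64;
```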
Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>
* Add a comment and explanation for PR #52973
Signed-off-by: Jiebin Sun <jiebin.sun@intel.com>