Merge branch 'master' into usernam3-sample-clause-links-fix

Nikita Mikhaylov 2023-05-27 14:38:34 +02:00 committed by GitHub
commit 5de6dc87ec
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
91 changed files with 2321 additions and 522 deletions

contrib/boost (vendored) — 2 lines changed

@ -1 +1 @@
Subproject commit 8fe7b3326ef482ee6ecdf5a4f698f2b8c2780f98 Subproject commit aec12eea7fc762721ae16943d1361340c66c9c17


@ -258,4 +258,4 @@ Since [remote](../../../sql-reference/table-functions/remote.md) and [cluster](.
- [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns) description
- [background_distributed_schedule_pool_size](../../../operations/settings/settings.md#background_distributed_schedule_pool_size) setting
- [shardNum()](../../../sql-reference/functions/other-functions.md#shard-num) and [shardCount()](../../../sql-reference/functions/other-functions.md#shard-count) functions - [shardNum()](../../../sql-reference/functions/other-functions.md#shardnum) and [shardCount()](../../../sql-reference/functions/other-functions.md#shardcount) functions


@ -167,9 +167,9 @@ user = 'myuser',
password = 'mypass',
host = '127.0.0.1',
port = 3306,
database = 'test' database = 'test',
connection_pool_size = 8 connection_pool_size = 8,
on_duplicate_clause = 1 on_duplicate_clause = 1,
replace_query = 1 replace_query = 1
```


@ -917,9 +917,9 @@ We recommend using this option in macOS since the `getrlimit()` function returns
Restriction on deleting tables.
If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_table_size_to_drop` (in bytes), you can't delete it using a DROP query. If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_table_size_to_drop` (in bytes), you can't delete it using a [DROP](../../sql-reference/statements/drop.md) query or a [TRUNCATE](../../sql-reference/statements/truncate.md) query.
If you still need to delete the table without restarting the ClickHouse server, create the `<clickhouse-path>/flags/force_drop_table` file and run the DROP query. This setting does not require a restart of the ClickHouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
Default value: 50 GB.
@ -931,6 +931,28 @@ The value 0 means that you can delete all tables without any restrictions.
<max_table_size_to_drop>0</max_table_size_to_drop>
```
## max_partition_size_to_drop {#max-partition-size-to-drop}
Restriction on dropping partitions.
If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_partition_size_to_drop` (in bytes), you can't drop a partition using a [DROP PARTITION](../../sql-reference/statements/alter/partition.md#drop-partitionpart) query.
This setting does not require a restart of the ClickHouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
Default value: 50 GB.
The value 0 means that you can drop partitions without any restrictions.
:::note
This limitation does not restrict dropping or truncating tables; see [max_table_size_to_drop](#max-table-size-to-drop)
:::
**Example**
``` xml
<max_partition_size_to_drop>0</max_partition_size_to_drop>
```
## max_thread_pool_size {#max-thread-pool-size}
ClickHouse uses threads from the Global Thread pool to process queries. If there is no idle thread to process a query, then a new thread is created in the pool. `max_thread_pool_size` limits the maximum number of threads in the pool.


@ -0,0 +1,27 @@
---
slug: /en/operations/system-tables/build_options
---
# build_options
Contains information about the ClickHouse server's build options.
Columns:
- `name` (String) — Name of the build option, e.g. `USE_ODBC`
- `value` (String) — Value of the build option, e.g. `1`
**Example**
``` sql
SELECT * FROM system.build_options LIMIT 5
```
``` text
┌─name─────────────┬─value─┐
│ USE_BROTLI │ 1 │
│ USE_BZIP2 │ 1 │
│ USE_CAPNP │ 1 │
│ USE_CASSANDRA │ 1 │
│ USE_DATASKETCHES │ 1 │
└──────────────────┴───────┘
```


@ -323,11 +323,11 @@ Alias: `REPEAT`
**Arguments**
- `s` — The string to repeat. [String](../../sql-reference/data-types/string.md).
- `n` — The number of times to repeat the string. [UInt or Int](../../sql-reference/data-types/int-uint.md). - `n` — The number of times to repeat the string. [UInt* or Int*](../../sql-reference/data-types/int-uint.md).
**Returned value**
The single string containing string `s` repeated `n` times. If `n` \< 1, the function returns empty string. A string containing string `s` repeated `n` times. If `n` <= 0, the function returns the empty string.
Type: `String`.
@ -345,6 +345,44 @@ Result:
└────────────────────────────────┘
```
## space
Repeats a space (` `) the specified number of times.
**Syntax**
``` sql
space(n)
```
Alias: `SPACE`.
**Arguments**
- `n` — The number of times to repeat the space. [UInt* or Int*](../../sql-reference/data-types/int-uint.md).
**Returned value**
A string consisting of `n` spaces (` `). If `n` <= 0, the function returns the empty string.
Type: `String`.
**Example**
Query:
``` sql
SELECT space(3);
```
Result:
``` text
┌─space(3)─┐
│          │
└──────────┘
```
## reverse
Reverses the sequence of bytes in a string.


@ -544,10 +544,10 @@ Result:
└─────┴──────────┴───────┘
```
##Filling grouped by sorting prefix ## Filling grouped by sorting prefix
It can be useful to fill rows that have the same values in particular columns independently; a good example is filling missing values in time series.
Assume there is the following time series table Assume there is the following time series table:
``` sql
CREATE TABLE timeseries
(
@ -567,7 +567,7 @@ SELECT * FROM timeseries;
└───────────┴─────────────────────────┴───────┘
```
And we'd like to fill missing values for each sensor independently with a 1 second interval.
The way to achieve it is to use `sensor_id` column as sorting prefix for filling column `timestamp` The way to achieve it is to use `sensor_id` column as sorting prefix for filling column `timestamp`:
```
SELECT *
FROM timeseries
@ -589,7 +589,7 @@ INTERPOLATE ( value AS 9999 )
│ 432 │ 2021-12-01 00:00:05.000 │ 5 │
└───────────┴─────────────────────────┴───────┘
```
Here, the `value` column was interpolated with `9999` just to make filled rows more noticeable Here, the `value` column was interpolated with `9999` just to make filled rows more noticeable.
This behavior is controlled by the setting `use_with_fill_by_sorting_prefix` (enabled by default).
## Related content


@ -59,16 +59,31 @@ UInt64 BackupEntryFromImmutableFile::getSize() const
UInt128 BackupEntryFromImmutableFile::getChecksum() const UInt128 BackupEntryFromImmutableFile::getChecksum() const
{ {
std::lock_guard lock{size_and_checksum_mutex};
if (!checksum_adjusted)
{ {
if (!checksum) std::lock_guard lock{size_and_checksum_mutex};
checksum = BackupEntryWithChecksumCalculation<IBackupEntry>::getChecksum(); if (checksum_adjusted)
else if (copy_encrypted) return *checksum;
checksum = combineChecksums(*checksum, disk->getEncryptedFileIV(file_path));
checksum_adjusted = true; if (checksum)
{
if (copy_encrypted)
checksum = combineChecksums(*checksum, disk->getEncryptedFileIV(file_path));
checksum_adjusted = true;
return *checksum;
}
}
auto calculated_checksum = BackupEntryWithChecksumCalculation<IBackupEntry>::getChecksum();
{
std::lock_guard lock{size_and_checksum_mutex};
if (!checksum_adjusted)
{
checksum = calculated_checksum;
checksum_adjusted = true;
}
return *checksum;
} }
return *checksum;
} }
std::optional<UInt128> BackupEntryFromImmutableFile::getPartialChecksum(size_t prefix_length) const std::optional<UInt128> BackupEntryFromImmutableFile::getPartialChecksum(size_t prefix_length) const


@ -44,7 +44,7 @@ private:
const DataSourceDescription data_source_description; const DataSourceDescription data_source_description;
const bool copy_encrypted; const bool copy_encrypted;
mutable std::optional<UInt64> file_size; mutable std::optional<UInt64> file_size;
mutable std::optional<UInt64> checksum; mutable std::optional<UInt128> checksum;
mutable bool file_size_adjusted = false; mutable bool file_size_adjusted = false;
mutable bool checksum_adjusted = false; mutable bool checksum_adjusted = false;
mutable std::mutex size_and_checksum_mutex; mutable std::mutex size_and_checksum_mutex;


@ -8,15 +8,32 @@ namespace DB
template <typename Base> template <typename Base>
UInt128 BackupEntryWithChecksumCalculation<Base>::getChecksum() const UInt128 BackupEntryWithChecksumCalculation<Base>::getChecksum() const
{ {
std::lock_guard lock{checksum_calculation_mutex};
if (!calculated_checksum)
{ {
auto read_buffer = this->getReadBuffer(ReadSettings{}.adjustBufferSize(this->getSize())); std::lock_guard lock{checksum_calculation_mutex};
HashingReadBuffer hashing_read_buffer(*read_buffer); if (calculated_checksum)
hashing_read_buffer.ignoreAll(); return *calculated_checksum;
calculated_checksum = hashing_read_buffer.getHash(); }
size_t size = this->getSize();
{
std::lock_guard lock{checksum_calculation_mutex};
if (!calculated_checksum)
{
if (size == 0)
{
calculated_checksum = 0;
}
else
{
auto read_buffer = this->getReadBuffer(ReadSettings{}.adjustBufferSize(size));
HashingReadBuffer hashing_read_buffer(*read_buffer);
hashing_read_buffer.ignoreAll();
calculated_checksum = hashing_read_buffer.getHash();
}
}
return *calculated_checksum;
} }
return *calculated_checksum;
} }
template <typename Base> template <typename Base>


@ -0,0 +1,350 @@
#include <gtest/gtest.h>
#include <Backups/BackupEntryFromAppendOnlyFile.h>
#include <Backups/BackupEntryFromImmutableFile.h>
#include <Backups/BackupEntryFromSmallFile.h>
#include <Disks/IDisk.h>
#include <Disks/DiskLocal.h>
#include <Disks/DiskEncrypted.h>
#include <IO/FileEncryptionCommon.h>
#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include <Poco/TemporaryFile.h>
using namespace DB;
class BackupEntriesTest : public ::testing::Test
{
protected:
void SetUp() override
{
/// Make local disk.
temp_dir = std::make_unique<Poco::TemporaryFile>();
temp_dir->createDirectories();
local_disk = std::make_shared<DiskLocal>("local_disk", temp_dir->path() + "/", 0);
/// Make encrypted disk.
auto settings = std::make_unique<DiskEncryptedSettings>();
settings->wrapped_disk = local_disk;
settings->current_algorithm = FileEncryption::Algorithm::AES_128_CTR;
settings->keys[0] = "1234567890123456";
settings->current_key_id = 0;
settings->disk_path = "encrypted/";
encrypted_disk = std::make_shared<DiskEncrypted>("encrypted_disk", std::move(settings), true);
}
void TearDown() override
{
encrypted_disk.reset();
local_disk.reset();
}
static void writeFile(DiskPtr disk, const String & filepath)
{
auto buf = disk->writeFile(filepath, DBMS_DEFAULT_BUFFER_SIZE, WriteMode::Rewrite, {});
writeString(std::string_view{"Some text"}, *buf);
buf->finalize();
}
static void writeEmptyFile(DiskPtr disk, const String & filepath)
{
auto buf = disk->writeFile(filepath, DBMS_DEFAULT_BUFFER_SIZE, WriteMode::Rewrite, {});
buf->finalize();
}
static void appendFile(DiskPtr disk, const String & filepath)
{
auto buf = disk->writeFile(filepath, DBMS_DEFAULT_BUFFER_SIZE, WriteMode::Append, {});
writeString(std::string_view{"Appended"}, *buf);
buf->finalize();
}
static String getChecksum(const BackupEntryPtr & backup_entry)
{
return getHexUIntUppercase(backup_entry->getChecksum());
}
static const constexpr std::string_view NO_CHECKSUM = "no checksum";
static String getPartialChecksum(const BackupEntryPtr & backup_entry, size_t prefix_length)
{
auto partial_checksum = backup_entry->getPartialChecksum(prefix_length);
if (!partial_checksum)
return String{NO_CHECKSUM};
return getHexUIntUppercase(*partial_checksum);
}
static String readAll(const BackupEntryPtr & backup_entry)
{
auto in = backup_entry->getReadBuffer({});
String str;
readStringUntilEOF(str, *in);
return str;
}
std::unique_ptr<Poco::TemporaryFile> temp_dir;
std::shared_ptr<DiskLocal> local_disk;
std::shared_ptr<DiskEncrypted> encrypted_disk;
};
static const constexpr std::string_view ZERO_CHECKSUM = "00000000000000000000000000000000";
static const constexpr std::string_view SOME_TEXT_CHECKSUM = "28B5529750AC210952FFD366774363ED";
static const constexpr std::string_view S_CHECKSUM = "C27395C39AFB5557BFE47661CC9EB86C";
static const constexpr std::string_view SOME_TEX_CHECKSUM = "D00D9BE8D87919A165F14EDD31088A0E";
static const constexpr std::string_view SOME_TEXT_APPENDED_CHECKSUM = "5A1F10F638DC7A226231F3FD927D1726";
static const constexpr std::string_view PRECALCULATED_CHECKSUM = "1122334455667788AABBCCDDAABBCCDD";
static const constexpr UInt128 PRECALCULATED_CHECKSUM_UINT128 = (UInt128(0x1122334455667788) << 64) | 0xAABBCCDDAABBCCDD;
static const size_t PRECALCULATED_SIZE = 123;
TEST_F(BackupEntriesTest, BackupEntryFromImmutableFile)
{
writeFile(local_disk, "a.txt");
auto entry = std::make_shared<BackupEntryFromImmutableFile>(local_disk, "a.txt");
EXPECT_EQ(entry->getSize(), 9);
EXPECT_EQ(getChecksum(entry), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1), NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 8), NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 9), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1000), SOME_TEXT_CHECKSUM);
EXPECT_EQ(readAll(entry), "Some text");
writeEmptyFile(local_disk, "empty.txt");
auto empty_entry = std::make_shared<BackupEntryFromImmutableFile>(local_disk, "empty.txt");
EXPECT_EQ(empty_entry->getSize(), 0);
EXPECT_EQ(getChecksum(empty_entry), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 1), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 1000), ZERO_CHECKSUM);
EXPECT_EQ(readAll(empty_entry), "");
auto precalculated_entry = std::make_shared<BackupEntryFromImmutableFile>(local_disk, "a.txt", false, PRECALCULATED_SIZE, PRECALCULATED_CHECKSUM_UINT128);
EXPECT_EQ(precalculated_entry->getSize(), PRECALCULATED_SIZE);
EXPECT_EQ(getChecksum(precalculated_entry), PRECALCULATED_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 1), NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, PRECALCULATED_SIZE - 1), NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, PRECALCULATED_SIZE), PRECALCULATED_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 1000), PRECALCULATED_CHECKSUM);
EXPECT_EQ(readAll(precalculated_entry), "Some text");
}
TEST_F(BackupEntriesTest, BackupEntryFromAppendOnlyFile)
{
writeFile(local_disk, "a.txt");
auto entry = std::make_shared<BackupEntryFromAppendOnlyFile>(local_disk, "a.txt");
EXPECT_EQ(entry->getSize(), 9);
EXPECT_EQ(getChecksum(entry), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1), S_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 8), SOME_TEX_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 9), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1000), SOME_TEXT_CHECKSUM);
EXPECT_EQ(readAll(entry), "Some text");
appendFile(local_disk, "a.txt");
EXPECT_EQ(entry->getSize(), 9);
EXPECT_EQ(getChecksum(entry), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1), S_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 8), SOME_TEX_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 9), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1000), SOME_TEXT_CHECKSUM);
EXPECT_EQ(readAll(entry), "Some text");
auto appended_entry = std::make_shared<BackupEntryFromAppendOnlyFile>(local_disk, "a.txt");
EXPECT_EQ(appended_entry->getSize(), 17);
EXPECT_EQ(getChecksum(appended_entry), SOME_TEXT_APPENDED_CHECKSUM);
EXPECT_EQ(getPartialChecksum(appended_entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(appended_entry, 1), S_CHECKSUM);
EXPECT_EQ(getPartialChecksum(appended_entry, 8), SOME_TEX_CHECKSUM);
EXPECT_EQ(getPartialChecksum(appended_entry, 9), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(appended_entry, 22), SOME_TEXT_APPENDED_CHECKSUM);
EXPECT_EQ(getPartialChecksum(appended_entry, 1000), SOME_TEXT_APPENDED_CHECKSUM);
EXPECT_EQ(readAll(appended_entry), "Some textAppended");
writeEmptyFile(local_disk, "empty_appended.txt");
auto empty_entry = std::make_shared<BackupEntryFromAppendOnlyFile>(local_disk, "empty_appended.txt");
EXPECT_EQ(empty_entry->getSize(), 0);
EXPECT_EQ(getChecksum(empty_entry), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 1), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 1000), ZERO_CHECKSUM);
EXPECT_EQ(readAll(empty_entry), "");
appendFile(local_disk, "empty_appended.txt");
EXPECT_EQ(empty_entry->getSize(), 0);
EXPECT_EQ(getChecksum(empty_entry), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 1), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(empty_entry, 1000), ZERO_CHECKSUM);
EXPECT_EQ(readAll(empty_entry), "");
}
TEST_F(BackupEntriesTest, PartialChecksumBeforeFullChecksum)
{
writeFile(local_disk, "a.txt");
auto entry = std::make_shared<BackupEntryFromAppendOnlyFile>(local_disk, "a.txt");
EXPECT_EQ(entry->getSize(), 9);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getChecksum(entry), SOME_TEXT_CHECKSUM);
EXPECT_EQ(readAll(entry), "Some text");
entry = std::make_shared<BackupEntryFromAppendOnlyFile>(local_disk, "a.txt");
EXPECT_EQ(entry->getSize(), 9);
EXPECT_EQ(getPartialChecksum(entry, 1), S_CHECKSUM);
EXPECT_EQ(getChecksum(entry), SOME_TEXT_CHECKSUM);
EXPECT_EQ(readAll(entry), "Some text");
}
TEST_F(BackupEntriesTest, BackupEntryFromSmallFile)
{
writeFile(local_disk, "a.txt");
auto entry = std::make_shared<BackupEntryFromSmallFile>(local_disk, "a.txt");
local_disk->removeFile("a.txt");
EXPECT_EQ(entry->getSize(), 9);
EXPECT_EQ(getChecksum(entry), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1), S_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 8), SOME_TEX_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 9), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1000), SOME_TEXT_CHECKSUM);
EXPECT_EQ(readAll(entry), "Some text");
}
TEST_F(BackupEntriesTest, DecryptedEntriesFromEncryptedDisk)
{
{
writeFile(encrypted_disk, "a.txt");
std::pair<BackupEntryPtr, bool /* partial_checksum_allowed */> test_cases[]
= {{std::make_shared<BackupEntryFromImmutableFile>(encrypted_disk, "a.txt"), false},
{std::make_shared<BackupEntryFromAppendOnlyFile>(encrypted_disk, "a.txt"), true},
{std::make_shared<BackupEntryFromSmallFile>(encrypted_disk, "a.txt"), true}};
for (const auto & [entry, partial_checksum_allowed] : test_cases)
{
EXPECT_EQ(entry->getSize(), 9);
EXPECT_EQ(getChecksum(entry), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1), partial_checksum_allowed ? S_CHECKSUM : NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 8), partial_checksum_allowed ? SOME_TEX_CHECKSUM : NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 9), SOME_TEXT_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1000), SOME_TEXT_CHECKSUM);
EXPECT_EQ(readAll(entry), "Some text");
}
}
{
writeEmptyFile(encrypted_disk, "empty.txt");
BackupEntryPtr entries[]
= {std::make_shared<BackupEntryFromImmutableFile>(encrypted_disk, "empty.txt"),
std::make_shared<BackupEntryFromAppendOnlyFile>(encrypted_disk, "empty.txt"),
std::make_shared<BackupEntryFromSmallFile>(encrypted_disk, "empty.txt")};
for (const auto & entry : entries)
{
EXPECT_EQ(entry->getSize(), 0);
EXPECT_EQ(getChecksum(entry), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1), ZERO_CHECKSUM);
EXPECT_EQ(readAll(entry), "");
}
}
{
auto precalculated_entry = std::make_shared<BackupEntryFromImmutableFile>(encrypted_disk, "a.txt", false, PRECALCULATED_SIZE, PRECALCULATED_CHECKSUM_UINT128);
EXPECT_EQ(precalculated_entry->getSize(), PRECALCULATED_SIZE);
EXPECT_EQ(getChecksum(precalculated_entry), PRECALCULATED_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 1), NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, PRECALCULATED_SIZE), PRECALCULATED_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 1000), PRECALCULATED_CHECKSUM);
EXPECT_EQ(readAll(precalculated_entry), "Some text");
}
}
TEST_F(BackupEntriesTest, EncryptedEntriesFromEncryptedDisk)
{
{
writeFile(encrypted_disk, "a.txt");
BackupEntryPtr entries[]
= {std::make_shared<BackupEntryFromImmutableFile>(encrypted_disk, "a.txt", /* copy_encrypted= */ true),
std::make_shared<BackupEntryFromAppendOnlyFile>(encrypted_disk, "a.txt", /* copy_encrypted= */ true),
std::make_shared<BackupEntryFromSmallFile>(encrypted_disk, "a.txt", /* copy_encrypted= */ true)};
auto encrypted_checksum = getChecksum(entries[0]);
EXPECT_NE(encrypted_checksum, NO_CHECKSUM);
EXPECT_NE(encrypted_checksum, ZERO_CHECKSUM);
EXPECT_NE(encrypted_checksum, SOME_TEXT_CHECKSUM);
auto partial_checksum = getPartialChecksum(entries[1], 9);
EXPECT_NE(partial_checksum, NO_CHECKSUM);
EXPECT_NE(partial_checksum, ZERO_CHECKSUM);
EXPECT_NE(partial_checksum, SOME_TEXT_CHECKSUM);
EXPECT_NE(partial_checksum, encrypted_checksum);
auto encrypted_data = readAll(entries[0]);
EXPECT_EQ(encrypted_data.size(), 9 + FileEncryption::Header::kSize);
for (const auto & entry : entries)
{
EXPECT_EQ(entry->getSize(), 9 + FileEncryption::Header::kSize);
EXPECT_EQ(getChecksum(entry), encrypted_checksum);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
auto encrypted_checksum_9 = getPartialChecksum(entry, 9);
EXPECT_TRUE(encrypted_checksum_9 == NO_CHECKSUM || encrypted_checksum_9 == partial_checksum);
EXPECT_EQ(getPartialChecksum(entry, 9 + FileEncryption::Header::kSize), encrypted_checksum);
EXPECT_EQ(getPartialChecksum(entry, 1000), encrypted_checksum);
EXPECT_EQ(readAll(entry), encrypted_data);
}
}
{
writeEmptyFile(encrypted_disk, "empty.txt");
BackupEntryPtr entries[]
= {std::make_shared<BackupEntryFromImmutableFile>(encrypted_disk, "empty.txt", /* copy_encrypted= */ true),
std::make_shared<BackupEntryFromAppendOnlyFile>(encrypted_disk, "empty.txt", /* copy_encrypted= */ true),
std::make_shared<BackupEntryFromSmallFile>(encrypted_disk, "empty.txt", /* copy_encrypted= */ true)};
for (const auto & entry : entries)
{
EXPECT_EQ(entry->getSize(), 0);
EXPECT_EQ(getChecksum(entry), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(entry, 1), ZERO_CHECKSUM);
EXPECT_EQ(readAll(entry), "");
}
}
{
auto precalculated_entry = std::make_shared<BackupEntryFromImmutableFile>(encrypted_disk, "a.txt", /* copy_encrypted= */ true, PRECALCULATED_SIZE, PRECALCULATED_CHECKSUM_UINT128);
EXPECT_EQ(precalculated_entry->getSize(), PRECALCULATED_SIZE + FileEncryption::Header::kSize);
auto encrypted_checksum = getChecksum(precalculated_entry);
EXPECT_NE(encrypted_checksum, NO_CHECKSUM);
EXPECT_NE(encrypted_checksum, ZERO_CHECKSUM);
EXPECT_NE(encrypted_checksum, SOME_TEXT_CHECKSUM);
EXPECT_NE(encrypted_checksum, PRECALCULATED_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 0), ZERO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 1), NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, PRECALCULATED_SIZE), NO_CHECKSUM);
EXPECT_EQ(getPartialChecksum(precalculated_entry, PRECALCULATED_SIZE + FileEncryption::Header::kSize), encrypted_checksum);
EXPECT_EQ(getPartialChecksum(precalculated_entry, 1000), encrypted_checksum);
auto encrypted_data = readAll(precalculated_entry);
EXPECT_EQ(encrypted_data.size(), 9 + FileEncryption::Header::kSize);
}
}


@ -6,6 +6,7 @@
#include <Common/noexcept_scope.h> #include <Common/noexcept_scope.h>
#include <Common/setThreadName.h> #include <Common/setThreadName.h>
#include <Common/logger_useful.h> #include <Common/logger_useful.h>
#include <Common/ThreadPool.h>
namespace DB namespace DB
{ {
@ -41,9 +42,14 @@ std::exception_ptr LoadJob::exception() const
return load_exception; return load_exception;
} }
ssize_t LoadJob::priority() const size_t LoadJob::executionPool() const
{ {
return load_priority; return execution_pool_id;
}
size_t LoadJob::pool() const
{
return pool_id;
} }
void LoadJob::wait() const void LoadJob::wait() const
@ -112,8 +118,9 @@ void LoadJob::enqueued()
enqueue_time = std::chrono::system_clock::now(); enqueue_time = std::chrono::system_clock::now();
} }
void LoadJob::execute(const LoadJobPtr & self) void LoadJob::execute(size_t pool, const LoadJobPtr & self)
{ {
execution_pool_id = pool;
start_time = std::chrono::system_clock::now(); start_time = std::chrono::system_clock::now();
func(self); func(self);
} }
@ -148,22 +155,35 @@ void LoadTask::remove()
{ {
loader.remove(jobs); loader.remove(jobs);
jobs.clear(); jobs.clear();
goal_jobs.clear();
} }
} }
void LoadTask::detach() void LoadTask::detach()
{ {
jobs.clear(); jobs.clear();
goal_jobs.clear();
} }
AsyncLoader::AsyncLoader(Metric metric_threads, Metric metric_active_threads, size_t max_threads_, bool log_failures_, bool log_progress_)
AsyncLoader::AsyncLoader(std::vector<PoolInitializer> pool_initializers, bool log_failures_, bool log_progress_)
: log_failures(log_failures_) : log_failures(log_failures_)
, log_progress(log_progress_) , log_progress(log_progress_)
, log(&Poco::Logger::get("AsyncLoader")) , log(&Poco::Logger::get("AsyncLoader"))
, max_threads(max_threads_)
, pool(metric_threads, metric_active_threads, max_threads)
{ {
pools.reserve(pool_initializers.size());
for (auto && init : pool_initializers)
pools.push_back({
.name = init.name,
.priority = init.priority,
.thread_pool = std::make_unique<ThreadPool>(
init.metric_threads,
init.metric_active_threads,
init.max_threads,
/* max_free_threads = */ 0,
init.max_threads),
.max_threads = init.max_threads
});
} }
AsyncLoader::~AsyncLoader() AsyncLoader::~AsyncLoader()
@ -175,13 +195,20 @@ void AsyncLoader::start()
{ {
std::unique_lock lock{mutex}; std::unique_lock lock{mutex};
is_running = true; is_running = true;
for (size_t i = 0; workers < max_threads && i < ready_queue.size(); i++) updateCurrentPriorityAndSpawn(lock);
spawn(lock);
} }
void AsyncLoader::wait() void AsyncLoader::wait()
{ {
pool.wait(); // Because job can create new jobs in other pools we have to recheck in cycle
std::unique_lock lock{mutex};
while (!scheduled_jobs.empty())
{
lock.unlock();
for (auto & p : pools)
p.thread_pool->wait();
lock.lock();
}
} }
void AsyncLoader::stop() void AsyncLoader::stop()
@ -191,7 +218,7 @@ void AsyncLoader::stop()
is_running = false; is_running = false;
// NOTE: there is no need to notify because workers never wait // NOTE: there is no need to notify because workers never wait
} }
pool.wait(); wait();
} }
void AsyncLoader::schedule(LoadTask & task) void AsyncLoader::schedule(LoadTask & task)
@ -229,9 +256,9 @@ void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
old_jobs = finished_jobs.size(); old_jobs = finished_jobs.size();
} }
// Make set of jobs to schedule: // Pass 1. Make set of jobs to schedule:
// 1) exclude already scheduled or finished jobs // 1) exclude already scheduled or finished jobs
// 2) include pending dependencies, that are not yet scheduled // 2) include assigned job dependencies (that are not yet scheduled)
LoadJobSet jobs; LoadJobSet jobs;
for (const auto & job : input_jobs) for (const auto & job : input_jobs)
gatherNotScheduled(job, jobs, lock); gatherNotScheduled(job, jobs, lock);
@ -242,17 +269,18 @@ void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
// We do not want any exception to be throws after this point, because the following code is not exception-safe // We do not want any exception to be throws after this point, because the following code is not exception-safe
DENY_ALLOCATIONS_IN_SCOPE; DENY_ALLOCATIONS_IN_SCOPE;
// Schedule all incoming jobs // Pass 2. Schedule all incoming jobs
for (const auto & job : jobs) for (const auto & job : jobs)
{ {
chassert(job->pool() < pools.size());
NOEXCEPT_SCOPE({ NOEXCEPT_SCOPE({
ALLOW_ALLOCATIONS_IN_SCOPE; ALLOW_ALLOCATIONS_IN_SCOPE;
-            scheduled_jobs.emplace(job, Info{.initial_priority = job->load_priority, .priority = job->load_priority});
+            scheduled_jobs.try_emplace(job);
             job->scheduled();
         });
     }

-    // Process dependencies on scheduled pending jobs
+    // Pass 3. Process dependencies on scheduled jobs, priority inheritance
     for (const auto & job : jobs)
     {
         Info & info = scheduled_jobs.find(job)->second;
@@ -267,17 +295,18 @@ void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
                 });
                 info.dependencies_left++;

-                // Priority inheritance: prioritize deps to have at least given `priority` to avoid priority inversion
-                prioritize(dep, info.priority, lock);
+                // Priority inheritance: prioritize deps to have at least given `pool.priority` to avoid priority inversion
+                prioritize(dep, job->pool_id, lock);
             }
         }

         // Enqueue non-blocked jobs (w/o dependencies) to ready queue
-        if (!info.is_blocked())
+        if (!info.isBlocked())
             enqueue(info, job, lock);
     }

-    // Process dependencies on other jobs. It is done in a separate pass to facilitate propagation of cancel signals (if any).
+    // Pass 4: Process dependencies on other jobs.
+    // It is done in a separate pass to facilitate cancelling due to already failed dependencies.
     for (const auto & job : jobs)
     {
         if (auto info = scheduled_jobs.find(job); info != scheduled_jobs.end())
@@ -285,12 +314,12 @@ void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
             for (const auto & dep : job->dependencies)
             {
                 if (scheduled_jobs.contains(dep))
-                    continue; // Skip dependencies on scheduled pending jobs (already processed)
+                    continue; // Skip dependencies on scheduled jobs (already processed in pass 3)
                 LoadStatus dep_status = dep->status();
                 if (dep_status == LoadStatus::OK)
                     continue; // Dependency on already successfully finished job -- it's okay.

-                // Dependency on not scheduled pending job -- it's bad.
+                // Dependency on assigned job -- it's bad.
                 // Probably, there is an error in `jobs` set, `gatherNotScheduled()` should have fixed it.
                 chassert(dep_status != LoadStatus::PENDING);
@@ -305,7 +334,7 @@ void AsyncLoader::scheduleImpl(const LoadJobSet & input_jobs)
                         job->name,
                         getExceptionMessage(dep->exception(), /* with_stacktrace = */ false)));
                 });
-                finish(lock, job, LoadStatus::CANCELED, e);
+                finish(job, LoadStatus::CANCELED, e, lock);
                 break; // This job is now finished, stop its dependencies processing
             }
         }
@@ -327,13 +356,14 @@ void AsyncLoader::gatherNotScheduled(const LoadJobPtr & job, LoadJobSet & jobs,
     }
 }

-void AsyncLoader::prioritize(const LoadJobPtr & job, ssize_t new_priority)
+void AsyncLoader::prioritize(const LoadJobPtr & job, size_t new_pool)
 {
     if (!job)
         return;
+    chassert(new_pool < pools.size());
     DENY_ALLOCATIONS_IN_SCOPE;
     std::unique_lock lock{mutex};
-    prioritize(job, new_priority, lock);
+    prioritize(job, new_pool, lock);
 }

 void AsyncLoader::remove(const LoadJobSet & jobs)
@@ -347,14 +377,14 @@ void AsyncLoader::remove(const LoadJobSet & jobs)
     {
         if (auto info = scheduled_jobs.find(job); info != scheduled_jobs.end())
         {
-            if (info->second.is_executing())
+            if (info->second.isExecuting())
                 continue; // Skip executing jobs on the first pass
             std::exception_ptr e;
             NOEXCEPT_SCOPE({
                 ALLOW_ALLOCATIONS_IN_SCOPE;
                 e = std::make_exception_ptr(Exception(ErrorCodes::ASYNC_LOAD_CANCELED, "Load job '{}' canceled", job->name));
             });
-            finish(lock, job, LoadStatus::CANCELED, e);
+            finish(job, LoadStatus::CANCELED, e, lock);
         }
     }
     // On the second pass wait for executing jobs to finish
@@ -363,7 +393,7 @@ void AsyncLoader::remove(const LoadJobSet & jobs)
         if (auto info = scheduled_jobs.find(job); info != scheduled_jobs.end())
         {
             // Job is currently executing
-            chassert(info->second.is_executing());
+            chassert(info->second.isExecuting());
             lock.unlock();
             job->waitNoThrow(); // Wait for job to finish
             lock.lock();
@@ -379,25 +409,36 @@ void AsyncLoader::remove(const LoadJobSet & jobs)
         }
     }
-void AsyncLoader::setMaxThreads(size_t value)
+void AsyncLoader::setMaxThreads(size_t pool, size_t value)
 {
     std::unique_lock lock{mutex};
-    pool.setMaxThreads(value);
-    pool.setMaxFreeThreads(value);
-    pool.setQueueSize(value);
-    max_threads = value;
+    auto & p = pools[pool];
+    p.thread_pool->setMaxThreads(value);
+    p.thread_pool->setQueueSize(value); // Keep queue size equal max threads count to avoid blocking during spawning
+    p.max_threads = value;
     if (!is_running)
         return;
-    for (size_t i = 0; workers < max_threads && i < ready_queue.size(); i++)
-        spawn(lock);
+    for (size_t i = 0; canSpawnWorker(p, lock) && i < p.ready_queue.size(); i++)
+        spawn(p, lock);
 }

-size_t AsyncLoader::getMaxThreads() const
+size_t AsyncLoader::getMaxThreads(size_t pool) const
 {
     std::unique_lock lock{mutex};
-    return max_threads;
+    return pools[pool].max_threads;
 }

+const String & AsyncLoader::getPoolName(size_t pool) const
+{
+    return pools[pool].name; // NOTE: lock is not needed because `name` is const and `pools` are immutable
+}
+
+ssize_t AsyncLoader::getPoolPriority(size_t pool) const
+{
+    return pools[pool].priority; // NOTE: lock is not needed because `priority` is const and `pools` are immutable
+}
+
 size_t AsyncLoader::getScheduledJobCount() const
 {
     std::unique_lock lock{mutex};
@@ -412,11 +453,10 @@ std::vector<AsyncLoader::JobState> AsyncLoader::getJobStates() const
         states.emplace(job->name, JobState{
             .job = job,
             .dependencies_left = info.dependencies_left,
-            .is_executing = info.is_executing(),
-            .is_blocked = info.is_blocked(),
-            .is_ready = info.is_ready(),
-            .initial_priority = info.initial_priority,
-            .ready_seqno = last_ready_seqno
+            .ready_seqno = info.ready_seqno,
+            .is_blocked = info.isBlocked(),
+            .is_ready = info.isReady(),
+            .is_executing = info.isExecuting()
         });
     for (const auto & job : finished_jobs)
         states.emplace(job->name, JobState{.job = job});
@@ -462,21 +502,21 @@ String AsyncLoader::checkCycleImpl(const LoadJobPtr & job, LoadJobSet & left, Lo
     return {};
 }

-void AsyncLoader::finish(std::unique_lock<std::mutex> & lock, const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job)
+void AsyncLoader::finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job, std::unique_lock<std::mutex> & lock)
 {
+    chassert(scheduled_jobs.contains(job)); // Job was pending
     if (status == LoadStatus::OK)
     {
         // Notify waiters
         job->ok();

         // Update dependent jobs and enqueue if ready
-        chassert(scheduled_jobs.contains(job)); // Job was pending
         for (const auto & dep : scheduled_jobs[job].dependent_jobs)
         {
             chassert(scheduled_jobs.contains(dep)); // All depended jobs must be pending
             Info & dep_info = scheduled_jobs[dep];
             dep_info.dependencies_left--;
-            if (!dep_info.is_blocked())
+            if (!dep_info.isBlocked())
                 enqueue(dep_info, dep, lock);
         }
     }
@@ -488,11 +528,10 @@ void AsyncLoader::finish(std::unique_lock<std::mutex> & lock, const LoadJobPtr &
     else if (status == LoadStatus::CANCELED)
         job->canceled(exception_from_job);

-    chassert(scheduled_jobs.contains(job)); // Job was pending
     Info & info = scheduled_jobs[job];
-    if (info.is_ready())
+    if (info.isReady())
     {
-        ready_queue.erase(info.key());
+        pools[job->pool_id].ready_queue.erase(info.ready_seqno);
         info.ready_seqno = 0;
     }
@@ -512,7 +551,7 @@ void AsyncLoader::finish(std::unique_lock<std::mutex> & lock, const LoadJobPtr &
                         dep->name,
                         getExceptionMessage(exception_from_job, /* with_stacktrace = */ false)));
                 });
-                finish(lock, dep, LoadStatus::CANCELED, e);
+                finish(dep, LoadStatus::CANCELED, e, lock);
             }

             // Clean dependency graph edges pointing to canceled jobs
@@ -531,87 +570,130 @@ void AsyncLoader::finish(std::unique_lock<std::mutex> & lock, const LoadJobPtr &
     });
 }

-void AsyncLoader::prioritize(const LoadJobPtr & job, ssize_t new_priority, std::unique_lock<std::mutex> & lock)
+void AsyncLoader::prioritize(const LoadJobPtr & job, size_t new_pool_id, std::unique_lock<std::mutex> & lock)
 {
     if (auto info = scheduled_jobs.find(job); info != scheduled_jobs.end())
     {
-        if (info->second.priority >= new_priority)
-            return; // Never lower priority
+        Pool & old_pool = pools[job->pool_id];
+        Pool & new_pool = pools[new_pool_id];
+        if (old_pool.priority >= new_pool.priority)
+            return; // Never lower priority or change pool leaving the same priority

         // Update priority and push job forward through ready queue if needed
-        if (info->second.ready_seqno)
-            ready_queue.erase(info->second.key());
-        info->second.priority = new_priority;
-        job->load_priority.store(new_priority); // Set user-facing priority (may affect executing jobs)
-        if (info->second.ready_seqno)
+        UInt64 ready_seqno = info->second.ready_seqno;
+
+        // Requeue job into the new pool queue without allocations
+        if (ready_seqno)
         {
-            NOEXCEPT_SCOPE({
-                ALLOW_ALLOCATIONS_IN_SCOPE;
-                ready_queue.emplace(info->second.key(), job);
-            });
+            new_pool.ready_queue.insert(old_pool.ready_queue.extract(ready_seqno));
+            if (canSpawnWorker(new_pool, lock))
+                spawn(new_pool, lock);
         }

+        // Set user-facing pool and priority (may affect executing jobs)
+        job->pool_id.store(new_pool_id);
+
         // Recurse into dependencies
         for (const auto & dep : job->dependencies)
-            prioritize(dep, new_priority, lock);
+            prioritize(dep, new_pool_id, lock);
     }
 }

 void AsyncLoader::enqueue(Info & info, const LoadJobPtr & job, std::unique_lock<std::mutex> & lock)
 {
-    chassert(!info.is_blocked());
+    chassert(!info.isBlocked());
     chassert(info.ready_seqno == 0);
     info.ready_seqno = ++last_ready_seqno;
+    Pool & pool = pools[job->pool_id];
     NOEXCEPT_SCOPE({
         ALLOW_ALLOCATIONS_IN_SCOPE;
-        ready_queue.emplace(info.key(), job);
+        pool.ready_queue.emplace(info.ready_seqno, job);
     });

     job->enqueued();

-    if (is_running && workers < max_threads)
-        spawn(lock);
+    if (canSpawnWorker(pool, lock))
+        spawn(pool, lock);
 }

-void AsyncLoader::spawn(std::unique_lock<std::mutex> &)
-{
-    workers++;
+bool AsyncLoader::canSpawnWorker(Pool & pool, std::unique_lock<std::mutex> &)
+{
+    return is_running
+        && !pool.ready_queue.empty()
+        && pool.workers < pool.max_threads
+        && (!current_priority || *current_priority <= pool.priority);
+}
+
+bool AsyncLoader::canWorkerLive(Pool & pool, std::unique_lock<std::mutex> &)
+{
+    return is_running
+        && !pool.ready_queue.empty()
+        && pool.workers <= pool.max_threads
+        && (!current_priority || *current_priority <= pool.priority);
+}
+
+void AsyncLoader::updateCurrentPriorityAndSpawn(std::unique_lock<std::mutex> & lock)
+{
+    // Find current priority.
+    // NOTE: We assume low number of pools, so O(N) scans are fine.
+    std::optional<ssize_t> priority;
+    for (Pool & pool : pools)
+    {
+        if (pool.isActive() && (!priority || *priority < pool.priority))
+            priority = pool.priority;
+    }
+    current_priority = priority;

+    // Spawn workers in all pools with current priority
+    for (Pool & pool : pools)
+    {
+        for (size_t i = 0; canSpawnWorker(pool, lock) && i < pool.ready_queue.size(); i++)
+            spawn(pool, lock);
+    }
+}
+void AsyncLoader::spawn(Pool & pool, std::unique_lock<std::mutex> &)
+{
+    pool.workers++;
+    current_priority = pool.priority; // canSpawnWorker() ensures this would not decrease current_priority
     NOEXCEPT_SCOPE({
         ALLOW_ALLOCATIONS_IN_SCOPE;
-        pool.scheduleOrThrowOnError([this] { worker(); });
+        pool.thread_pool->scheduleOrThrowOnError([this, &pool] { worker(pool); });
     });
 }

-void AsyncLoader::worker()
+void AsyncLoader::worker(Pool & pool)
 {
     DENY_ALLOCATIONS_IN_SCOPE;

+    size_t pool_id = &pool - &*pools.begin();
     LoadJobPtr job;
     std::exception_ptr exception_from_job;
     while (true)
     {
         // This is inside the loop to also reset previous thread names set inside the jobs
-        setThreadName("AsyncLoader");
+        setThreadName(pool.name.c_str());

         {
             std::unique_lock lock{mutex};

             // Handle just executed job
             if (exception_from_job)
-                finish(lock, job, LoadStatus::FAILED, exception_from_job);
+                finish(job, LoadStatus::FAILED, exception_from_job, lock);
             else if (job)
-                finish(lock, job, LoadStatus::OK);
+                finish(job, LoadStatus::OK, {}, lock);

-            if (!is_running || ready_queue.empty() || workers > max_threads)
+            if (!canWorkerLive(pool, lock))
             {
-                workers--;
+                if (--pool.workers == 0)
+                    updateCurrentPriorityAndSpawn(lock); // It will spawn lower priority workers if needed
                 return;
             }

             // Take next job to be executed from the ready queue
-            auto it = ready_queue.begin();
+            auto it = pool.ready_queue.begin();
             job = it->second;
-            ready_queue.erase(it);
+            pool.ready_queue.erase(it);
             scheduled_jobs.find(job)->second.ready_seqno = 0; // This job is no longer in the ready queue
         }
@@ -619,7 +701,7 @@ void AsyncLoader::worker()
         try
         {
-            job->execute(job);
+            job->execute(pool_id, job);
             exception_from_job = {};
         }
         catch (...)


@@ -12,7 +12,7 @@
 #include <base/types.h>
 #include <Common/CurrentMetrics.h>
 #include <Common/Stopwatch.h>
-#include <Common/ThreadPool.h>
+#include <Common/ThreadPool_fwd.h>

 namespace Poco { class Logger; }
@@ -46,22 +46,28 @@ class LoadJob : private boost::noncopyable
 {
 public:
     template <class Func, class LoadJobSetType>
-    LoadJob(LoadJobSetType && dependencies_, String name_, Func && func_, ssize_t priority_ = 0)
+    LoadJob(LoadJobSetType && dependencies_, String name_, size_t pool_id_, Func && func_)
         : dependencies(std::forward<LoadJobSetType>(dependencies_))
         , name(std::move(name_))
+        , pool_id(pool_id_)
         , func(std::forward<Func>(func_))
-        , load_priority(priority_)
     {}

     // Current job status.
     LoadStatus status() const;
     std::exception_ptr exception() const;

-    // Returns current value of a priority of the job. May differ from initial priority.
-    ssize_t priority() const;
+    // Returns pool in which the job is executing (was executed). May differ from initial pool and from current pool.
+    // Value is only valid (and constant) after execution started.
+    size_t executionPool() const;
+
+    // Returns current pool of the job. May differ from initial and execution pool.
+    // This value is intended for creating new jobs during this job execution.
+    // Value may change during job execution by `prioritize()`.
+    size_t pool() const;

     // Sync wait for a pending job to be finished: OK, FAILED or CANCELED status.
-    // Throws if job is FAILED or CANCELED. Returns or throws immediately on non-pending job.
+    // Throws if job is FAILED or CANCELED. Returns or throws immediately if called on non-pending job.
     void wait() const;

     // Wait for a job to reach any non PENDING status.
@@ -90,10 +96,11 @@ private:
     void scheduled();
     void enqueued();
-    void execute(const LoadJobPtr & self);
+    void execute(size_t pool, const LoadJobPtr & self);

+    std::atomic<size_t> execution_pool_id;
+    std::atomic<size_t> pool_id;
     std::function<void(const LoadJobPtr & self)> func;
-    std::atomic<ssize_t> load_priority;

     mutable std::mutex mutex;
     mutable std::condition_variable finished;
@@ -115,25 +122,25 @@ struct EmptyJobFunc
 template <class Func = EmptyJobFunc>
 LoadJobPtr makeLoadJob(LoadJobSet && dependencies, String name, Func && func = EmptyJobFunc())
 {
-    return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), std::forward<Func>(func));
+    return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), 0, std::forward<Func>(func));
 }

 template <class Func = EmptyJobFunc>
 LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, String name, Func && func = EmptyJobFunc())
 {
-    return std::make_shared<LoadJob>(dependencies, std::move(name), std::forward<Func>(func));
+    return std::make_shared<LoadJob>(dependencies, std::move(name), 0, std::forward<Func>(func));
 }

 template <class Func = EmptyJobFunc>
-LoadJobPtr makeLoadJob(LoadJobSet && dependencies, ssize_t priority, String name, Func && func = EmptyJobFunc())
+LoadJobPtr makeLoadJob(LoadJobSet && dependencies, size_t pool_id, String name, Func && func = EmptyJobFunc())
 {
-    return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), std::forward<Func>(func), priority);
+    return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), pool_id, std::forward<Func>(func));
 }

 template <class Func = EmptyJobFunc>
-LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, ssize_t priority, String name, Func && func = EmptyJobFunc())
+LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, size_t pool_id, String name, Func && func = EmptyJobFunc())
 {
-    return std::make_shared<LoadJob>(dependencies, std::move(name), std::forward<Func>(func), priority);
+    return std::make_shared<LoadJob>(dependencies, std::move(name), pool_id, std::forward<Func>(func));
 }

 // Represents a logically connected set of LoadJobs required to achieve some goals (final LoadJob in the set).
@@ -185,7 +192,7 @@ inline void scheduleLoad(const LoadTaskPtrs & tasks)
 }

 template <class... Args>
-inline void scheduleLoad(Args && ... args)
+inline void scheduleLoadAll(Args && ... args)
 {
     (scheduleLoad(std::forward<Args>(args)), ...);
 }
@@ -208,16 +215,16 @@ inline void waitLoad(const LoadTaskPtrs & tasks)
 }

 template <class... Args>
-inline void waitLoad(Args && ... args)
+inline void waitLoadAll(Args && ... args)
 {
     (waitLoad(std::forward<Args>(args)), ...);
 }

 template <class... Args>
-inline void scheduleAndWaitLoad(Args && ... args)
+inline void scheduleAndWaitLoadAll(Args && ... args)
 {
-    scheduleLoad(std::forward<Args>(args)...);
-    waitLoad(std::forward<Args>(args)...);
+    scheduleLoadAll(std::forward<Args>(args)...);
+    waitLoadAll(std::forward<Args>(args)...);
 }

 inline LoadJobSet getGoals(const LoadTaskPtrs & tasks)
@@ -228,6 +235,14 @@ inline LoadJobSet getGoals(const LoadTaskPtrs & tasks)
     return result;
 }

+inline LoadJobSet getGoalsOr(const LoadTaskPtrs & tasks, const LoadJobSet & alternative)
+{
+    LoadJobSet result;
+    for (const auto & task : tasks)
+        result.insert(task->goals().begin(), task->goals().end());
+    return result.empty() ? alternative : result;
+}
 inline LoadJobSet joinJobs(const LoadJobSet & jobs1, const LoadJobSet & jobs2)
 {
     LoadJobSet result;
@@ -251,100 +266,117 @@ inline LoadTaskPtrs joinTasks(const LoadTaskPtrs & tasks1, const LoadTaskPtrs &
     return result;
 }

-// `AsyncLoader` is a scheduler for DAG of `LoadJob`s. It tracks dependencies and priorities of jobs.
+// `AsyncLoader` is a scheduler for DAG of `LoadJob`s. It tracks job dependencies and priorities.
 // Basic usage example:
+//    // Start async_loader with two thread pools (0=bg, 1=fg):
+//    AsyncLoader async_loader({
+//        {"BgPool", CurrentMetrics::AsyncLoaderThreads, CurrentMetrics::AsyncLoaderThreadsActive, .max_threads = 1, .priority = 0}
+//        {"FgPool", CurrentMetrics::AsyncLoaderThreads, CurrentMetrics::AsyncLoaderThreadsActive, .max_threads = 2, .priority = 1}
+//    });
+//
+//    // Create and schedule a task consisting of three jobs. Job1 has no dependencies and is run first.
+//    // Job2 and job3 depend on job1 and are run only after job1 completion.
 //    auto job_func = [&] (const LoadJobPtr & self) {
-//        LOG_TRACE(log, "Executing load job '{}' with priority '{}'", self->name, self->priority());
+//        LOG_TRACE(log, "Executing load job '{}' in pool '{}'", self->name, async_loader->getPoolName(self->pool()));
 //    };
-//    auto job1 = makeLoadJob({}, "job1", job_func);
-//    auto job2 = makeLoadJob({ job1 }, "job2", job_func);
-//    auto job3 = makeLoadJob({ job1 }, "job3", job_func);
+//    auto job1 = makeLoadJob({}, "job1", /* pool_id = */ 0, job_func);
+//    auto job2 = makeLoadJob({ job1 }, "job2", /* pool_id = */ 0, job_func);
+//    auto job3 = makeLoadJob({ job1 }, "job3", /* pool_id = */ 0, job_func);
 //    auto task = makeLoadTask(async_loader, { job1, job2, job3 });
 //    task.schedule();
-// Here we have created and scheduled a task consisting of three jobs. Job1 has no dependencies and is run first.
-// Job2 and job3 depend on job1 and are run only after job1 completion. Another thread may prioritize a job and wait for it:
-//    async_loader->prioritize(job3, /* priority = */ 1); // higher priority jobs are run first, default priority is zero.
+//
+//    // Another thread may prioritize a job by changing its pool and wait for it:
+//    async_loader->prioritize(job3, /* pool_id = */ 1); // higher priority jobs are run first, default priority is zero.
 //    job3->wait(); // blocks until job completion or cancellation and rethrow an exception (if any)
 //
-// AsyncLoader tracks state of all scheduled jobs. Job lifecycle is the following:
-// 1) Job is constructed with PENDING status and initial priority. The job is placed into a task.
-// 2) The task is scheduled with all its jobs and their dependencies. A scheduled job may be ready (i.e. have all its dependencies finished) or blocked.
-// 3a) When all dependencies are successfully executed, the job became ready. A ready job is enqueued into the ready queue.
+// Every job has a pool associated with it. AsyncLoader starts every job in its thread pool.
+// Each pool has a constant priority and a mutable maximum number of threads.
+// Higher priority (greater `pool.priority` value) jobs are run first.
+// No job with lower priority is started while there is at least one higher priority job ready or running.
+//
+// Job priority can be elevated (but cannot be lowered)
+// (a) if either it has a dependent job with higher priority:
+//     in this case the priority and the pool of a dependent job is inherited during `schedule()` call;
+// (b) or job was explicitly prioritized by `prioritize(job, higher_priority_pool)` call:
+//     this also leads to a priority inheritance for all the dependencies.
+// Value stored in load job `pool_id` field is atomic and can be changed even during job execution.
+// Job is, of course, not moved from its initial thread pool, but it should use `self->pool()` for
+// all new jobs it create to avoid priority inversion.
+//
+// === IMPLEMENTATION DETAILS ===
+// All possible states and statuses of a job:
+//               .---------- scheduled ----------.
+// ctor --> assigned --> blocked --> ready --> executing --> finished ------> removed --> dtor
+// STATUS: '------------------ PENDING -----------------' '-- OK|FAILED|CANCELED --'
+//
+// AsyncLoader tracks state of all scheduled and finished jobs. Job lifecycle is the following:
+// 1) A job is constructed with PENDING status and assigned to a pool. The job is placed into a task.
+// 2) The task is scheduled with all its jobs and their dependencies. A scheduled job may be ready, blocked (and later executing).
+// 3a) When all dependencies are successfully finished, the job became ready. A ready job is enqueued into the ready queue of its pool.
 // 3b) If at least one of the job dependencies is failed or canceled, then this job is canceled (with all it's dependent jobs as well).
 //     On cancellation an ASYNC_LOAD_CANCELED exception is generated and saved inside LoadJob object. The job status is changed to CANCELED.
 //     Exception is rethrown by any existing or new `wait()` call. The job is moved to the set of the finished jobs.
-// 4) The scheduled pending ready job starts execution by a worker. The job is dequeued. Callback `job_func` is called.
-//    Status of an executing job is PENDING. And it is still considered as a scheduled job by AsyncLoader.
-//    Note that `job_func` of a CANCELED job is never executed.
+// 4) The ready job starts execution by a worker. The job is dequeued. Callback `job_func` is called.
+//    Status of an executing job is PENDING. Note that `job_func` of a CANCELED job is never executed.
 // 5a) On successful execution the job status is changed to OK and all existing and new `wait()` calls finish w/o exceptions.
 // 5b) Any exception thrown out of `job_func` is wrapped into an ASYNC_LOAD_FAILED exception and saved inside LoadJob.
 //     The job status is changed to FAILED. All the dependent jobs are canceled. The exception is rethrown from all existing and new `wait()` calls.
 // 6) The job is no longer considered as scheduled and is instead moved to the finished jobs set. This is just for introspection of the finished jobs.
 // 7) The task containing this job is destructed or `remove()` is explicitly called. The job is removed from the finished job set.
 // 8) The job is destructed.
-//
-// Every job has a priority associated with it. AsyncLoader runs higher priority (greater `priority` value) jobs first. Job priority can be elevated
-// (a) if either it has a dependent job with higher priority (in this case priority of a dependent job is inherited);
-// (b) or job was explicitly prioritized by `prioritize(job, higher_priority)` call (this also leads to a priority inheritance for all the dependencies).
-// Note that to avoid priority inversion `job_func` should use `self->priority()` to schedule new jobs in AsyncLoader or any other pool.
-// Value stored in load job priority field is atomic and can be increased even during job execution.
-//
 // When a task is scheduled it can contain dependencies on previously scheduled jobs. These jobs can have any status. If job A being scheduled depends on
 // another job B that is not yet scheduled, then job B will also be scheduled (even if the task does not contain it).
 class AsyncLoader : private boost::noncopyable
 {
 private:
-    // Key of a pending job in the ready queue.
-    struct ReadyKey
+    // Thread pool for job execution.
+    // Pools control the following aspects of job execution:
+    // 1) Concurrency: Amount of concurrently executing jobs in a pool is `max_threads`.
+    // 2) Priority: As long as there is executing worker with higher priority, workers with lower priorities are not started
+    //    (although, they can finish last job started before higher priority jobs appeared)
+    struct Pool
     {
-        ssize_t priority; // Ascending order
-        ssize_t initial_priority; // Ascending order
-        UInt64 ready_seqno; // Descending order
+        const String name;
+        const ssize_t priority;
+        std::unique_ptr<ThreadPool> thread_pool; // NOTE: we avoid using a `ThreadPool` queue to be able to move jobs between pools.
+        std::map<UInt64, LoadJobPtr> ready_queue; // FIFO queue of jobs to be executed in this pool. Map is used for faster erasing. Key is `ready_seqno`
+        size_t max_threads; // Max number of workers to be spawn
+        size_t workers = 0; // Number of currently execution workers

-        bool operator<(const ReadyKey & rhs) const
-        {
-            if (priority > rhs.priority)
-                return true;
-            if (priority < rhs.priority)
-                return false;
-            if (initial_priority > rhs.initial_priority)
-                return true;
-            if (initial_priority < rhs.initial_priority)
-                return false;
-            return ready_seqno < rhs.ready_seqno;
-        }
+        bool isActive() const { return workers > 0 || !ready_queue.empty(); }
     };
     // Scheduling information for a pending job.
     struct Info
     {
-        ssize_t initial_priority = 0; // Initial priority passed into schedule().
-        ssize_t priority = 0; // Elevated priority, due to priority inheritance or prioritize().
         size_t dependencies_left = 0; // Current number of dependencies on pending jobs.
         UInt64 ready_seqno = 0; // Zero means that job is not in ready queue.
         LoadJobSet dependent_jobs; // Set of jobs dependent on this job.

-        // Three independent states of a non-finished job.
-        bool is_blocked() const { return dependencies_left > 0; }
-        bool is_ready() const { return dependencies_left == 0 && ready_seqno > 0; }
-        bool is_executing() const { return dependencies_left == 0 && ready_seqno == 0; }
-
-        // Get key of a ready job
-        ReadyKey key() const
-        {
-            return {.priority = priority, .initial_priority = initial_priority, .ready_seqno = ready_seqno};
-        }
+        // Three independent states of a scheduled job.
+        bool isBlocked() const { return dependencies_left > 0; }
+        bool isReady() const { return dependencies_left == 0 && ready_seqno > 0; }
+        bool isExecuting() const { return dependencies_left == 0 && ready_seqno == 0; }
     };
public: public:
using Metric = CurrentMetrics::Metric; using Metric = CurrentMetrics::Metric;
AsyncLoader(Metric metric_threads, Metric metric_active_threads, size_t max_threads_, bool log_failures_, bool log_progress_); // Helper struct for AsyncLoader construction
struct PoolInitializer
{
String name;
Metric metric_threads;
Metric metric_active_threads;
size_t max_threads;
ssize_t priority;
};
AsyncLoader(std::vector<PoolInitializer> pool_initializers, bool log_failures_, bool log_progress_);
// Stops AsyncLoader before destruction
// WARNING: all tasks instances should be destructed before associated AsyncLoader. // WARNING: all tasks instances should be destructed before associated AsyncLoader.
~AsyncLoader(); ~AsyncLoader();
// Start workers to execute scheduled load jobs. // Start workers to execute scheduled load jobs. Note that AsyncLoader is constructed as already started.
void start(); void start();
// Wait for all load jobs to finish, including all new jobs. So at first take care to stop adding new jobs. // Wait for all load jobs to finish, including all new jobs. So at first take care to stop adding new jobs.
@@ -356,28 +388,32 @@ public:
// - or canceled using ~Task() or remove() later. // - or canceled using ~Task() or remove() later.
void stop(); void stop();
// Schedule all jobs of given `task` and their dependencies (if any, not scheduled yet). // Schedule all jobs of given `task` and their dependencies (even if they are not in task).
// Higher priority jobs (with greater `job->priority()` value) are executed earlier. // All dependencies of a scheduled job inherit its pool if it has higher priority. This way higher priority job
// All dependencies of a scheduled job inherit its priority if it is higher. This way higher priority job // never waits for (blocked by) lower priority jobs. No priority inversion is possible.
// never wait for (blocked by) lower priority jobs. No priority inversion is possible. // Idempotent: multiple schedule() calls for the same job are no-op.
// Note that `task` destructor ensures that all its jobs are finished (OK, FAILED or CANCELED) // Note that `task` destructor ensures that all its jobs are finished (OK, FAILED or CANCELED)
// and are removed from AsyncLoader, so it is thread-safe to destroy them. // and are removed from AsyncLoader, so it is thread-safe to destroy them.
void schedule(LoadTask & task); void schedule(LoadTask & task);
void schedule(const LoadTaskPtr & task); void schedule(const LoadTaskPtr & task);
// Schedule all tasks atomically. To ensure only highest priority jobs among all tasks are run first. // Schedule all tasks atomically. To ensure only highest priority jobs among all tasks are run first.
void schedule(const std::vector<LoadTaskPtr> & tasks); void schedule(const LoadTaskPtrs & tasks);
// Increase priority of a job and all its dependencies recursively. // Increase priority of a job and all its dependencies recursively.
void prioritize(const LoadJobPtr & job, ssize_t new_priority); // Jobs from higher (than `new_pool`) priority pools are not changed.
void prioritize(const LoadJobPtr & job, size_t new_pool);
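The recursive pool upgrade that `prioritize()` performs can be sketched with a hypothetical simplified model (names and types here are illustrative, not the real API; in this snapshot a greater pool priority value means higher priority): the job and, transitively, its dependencies move into `new_pool` only when that pool's priority is strictly higher, so a job's effective priority never decreases and cannot be inverted by its dependencies.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical simplified model of prioritize(): each job lives in a pool,
// and pools are ordered by a priority value (greater value = higher priority).
struct SketchJob
{
    size_t pool = 0;
    std::vector<SketchJob *> dependencies;
};

// Moves `job` and, recursively, all of its dependencies into `new_pool`,
// but only if that pool has strictly higher priority. Upgrade-only semantics
// are what rule out priority inversion through dependencies.
inline void prioritizeSketch(SketchJob & job, size_t new_pool, const std::vector<long long> & pool_priority)
{
    if (pool_priority[new_pool] <= pool_priority[job.pool])
        return; // never downgrade
    job.pool = new_pool;
    for (SketchJob * dep : job.dependencies)
        prioritizeSketch(*dep, new_pool, pool_priority);
}
```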
// Remove finished jobs, cancel scheduled jobs, wait for executing jobs to finish and remove them. // Remove finished jobs, cancel scheduled jobs, wait for executing jobs to finish and remove them.
void remove(const LoadJobSet & jobs); void remove(const LoadJobSet & jobs);
// Increase or decrease maximum number of simultaneously executing jobs. // Increase or decrease maximum number of simultaneously executing jobs in `pool`.
void setMaxThreads(size_t value); void setMaxThreads(size_t pool, size_t value);
size_t getMaxThreads(size_t pool) const;
const String & getPoolName(size_t pool) const;
ssize_t getPoolPriority(size_t pool) const;
size_t getMaxThreads() const;
size_t getScheduledJobCount() const; size_t getScheduledJobCount() const;
// Helper class for introspection // Helper class for introspection
@@ -385,11 +421,10 @@ public:
{ {
LoadJobPtr job; LoadJobPtr job;
size_t dependencies_left = 0; size_t dependencies_left = 0;
bool is_executing = false; UInt64 ready_seqno = 0;
bool is_blocked = false; bool is_blocked = false;
bool is_ready = false; bool is_ready = false;
std::optional<ssize_t> initial_priority; bool is_executing = false;
std::optional<UInt64> ready_seqno;
}; };
// For introspection and debug only, see `system.async_loader` table // For introspection and debug only, see `system.async_loader` table
@@ -398,42 +433,32 @@ public:
private: private:
void checkCycle(const LoadJobSet & jobs, std::unique_lock<std::mutex> & lock); void checkCycle(const LoadJobSet & jobs, std::unique_lock<std::mutex> & lock);
String checkCycleImpl(const LoadJobPtr & job, LoadJobSet & left, LoadJobSet & visited, std::unique_lock<std::mutex> & lock); String checkCycleImpl(const LoadJobPtr & job, LoadJobSet & left, LoadJobSet & visited, std::unique_lock<std::mutex> & lock);
void finish(std::unique_lock<std::mutex> & lock, const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job = {}); void finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job, std::unique_lock<std::mutex> & lock);
void scheduleImpl(const LoadJobSet & input_jobs); void scheduleImpl(const LoadJobSet & input_jobs);
void gatherNotScheduled(const LoadJobPtr & job, LoadJobSet & jobs, std::unique_lock<std::mutex> & lock); void gatherNotScheduled(const LoadJobPtr & job, LoadJobSet & jobs, std::unique_lock<std::mutex> & lock);
void prioritize(const LoadJobPtr & job, ssize_t new_priority, std::unique_lock<std::mutex> & lock); void prioritize(const LoadJobPtr & job, size_t new_pool_id, std::unique_lock<std::mutex> & lock);
void enqueue(Info & info, const LoadJobPtr & job, std::unique_lock<std::mutex> & lock); void enqueue(Info & info, const LoadJobPtr & job, std::unique_lock<std::mutex> & lock);
void spawn(std::unique_lock<std::mutex> &); bool canSpawnWorker(Pool & pool, std::unique_lock<std::mutex> &);
void worker(); bool canWorkerLive(Pool & pool, std::unique_lock<std::mutex> &);
void updateCurrentPriorityAndSpawn(std::unique_lock<std::mutex> &);
void spawn(Pool & pool, std::unique_lock<std::mutex> &);
void worker(Pool & pool);
// Logging // Logging
const bool log_failures; // Worker should log all exceptions caught from job functions. const bool log_failures; // Worker should log all exceptions caught from job functions.
const bool log_progress; // Periodically log total progress const bool log_progress; // Periodically log total progress
Poco::Logger * log; Poco::Logger * log;
std::chrono::system_clock::time_point busy_period_start_time;
AtomicStopwatch stopwatch;
size_t old_jobs = 0; // Number of jobs that were finished in previous busy period (for correct progress indication)
mutable std::mutex mutex; // Guards all the fields below. mutable std::mutex mutex; // Guards all the fields below.
bool is_running = false; bool is_running = true;
std::optional<ssize_t> current_priority; // highest priority among active pools
// Full set of scheduled pending jobs along with scheduling info. UInt64 last_ready_seqno = 0; // Increasing counter for ready queue keys.
std::unordered_map<LoadJobPtr, Info> scheduled_jobs; std::unordered_map<LoadJobPtr, Info> scheduled_jobs; // Full set of scheduled pending jobs along with scheduling info.
std::vector<Pool> pools; // Thread pools for job execution and ready queues
// Subset of scheduled pending non-blocked jobs (waiting for a worker to be executed). LoadJobSet finished_jobs; // Set of finished jobs (for introspection only, until jobs are removed).
// Represent a queue of jobs in order of decreasing priority and FIFO for jobs with equal priorities. AtomicStopwatch stopwatch; // For progress indication
std::map<ReadyKey, LoadJobPtr> ready_queue; size_t old_jobs = 0; // Number of jobs that were finished in previous busy period (for correct progress indication)
std::chrono::system_clock::time_point busy_period_start_time;
// Set of finished jobs (for introspection only, until jobs are removed).
LoadJobSet finished_jobs;
// Increasing counter for `ReadyKey` assignment (to preserve FIFO order of the jobs with equal priorities).
UInt64 last_ready_seqno = 0;
// For executing jobs. Note that we avoid using an internal queue of the pool to be able to prioritize jobs.
size_t max_threads;
size_t workers = 0;
ThreadPool pool;
}; };
} }


@@ -1041,18 +1041,16 @@ void AsynchronousMetrics::update(TimePoint update_time)
// It doesn't read the EOL itself. // It doesn't read the EOL itself.
++cpuinfo->position(); ++cpuinfo->position();
if (s.rfind("processor", 0) == 0) static constexpr std::string_view PROCESSOR = "processor";
if (s.starts_with(PROCESSOR))
{ {
/// s390x example: processor 0: version = FF, identification = 039C88, machine = 3906 /// s390x example: processor 0: version = FF, identification = 039C88, machine = 3906
/// non s390x example: processor : 0 /// non s390x example: processor : 0
if (auto colon = s.find_first_of(':')) auto core_id_start = std::ssize(PROCESSOR);
{ while (core_id_start < std::ssize(s) && !std::isdigit(s[core_id_start]))
#ifdef __s390x__ ++core_id_start;
core_id = std::stoi(s.substr(10)); /// 10: length of "processor" plus 1
#else core_id = std::stoi(s.substr(core_id_start));
core_id = std::stoi(s.substr(colon + 2));
#endif
}
} }
else if (s.rfind("cpu MHz", 0) == 0) else if (s.rfind("cpu MHz", 0) == 0)
{ {

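The replacement `/proc/cpuinfo` parsing above can be sketched as a standalone function (`parseCoreId` is a hypothetical name for illustration): skip the `processor` prefix, skip everything up to the first digit, then parse the integer. This covers both the common and the s390x layouts without the old `#ifdef`.

```cpp
#include <cctype>
#include <string>
#include <string_view>

// Sketch of the new core-id extraction (hypothetical free function):
// skip the "processor" prefix and any non-digit separators, then parse
// the first integer. Handles both layouts:
//   common: "processor\t: 0"
//   s390x:  "processor 0: version = FF, identification = 039C88, ..."
inline int parseCoreId(const std::string & s)
{
    static constexpr std::string_view prefix = "processor";
    size_t pos = prefix.size();
    while (pos < s.size() && !std::isdigit(static_cast<unsigned char>(s[pos])))
        ++pos;
    return std::stoi(s.substr(pos));
}
```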
src/Common/Priority.h (new file, 11 lines)

@@ -0,0 +1,11 @@
#pragma once
#include <base/types.h>
/// Common type for priority values.
/// Separate type (rather than `Int64`) is used just to avoid implicit conversion errors and to default-initialize
struct Priority
{
Int64 value = 0; /// Note that lower value means higher priority.
constexpr operator Int64() const { return value; } /// NOLINT
};


@@ -123,7 +123,7 @@ void ThreadPoolImpl<Thread>::setQueueSize(size_t value)
template <typename Thread> template <typename Thread>
template <typename ReturnType> template <typename ReturnType>
ReturnType ThreadPoolImpl<Thread>::scheduleImpl(Job job, ssize_t priority, std::optional<uint64_t> wait_microseconds, bool propagate_opentelemetry_tracing_context) ReturnType ThreadPoolImpl<Thread>::scheduleImpl(Job job, Priority priority, std::optional<uint64_t> wait_microseconds, bool propagate_opentelemetry_tracing_context)
{ {
auto on_error = [&](const std::string & reason) auto on_error = [&](const std::string & reason)
{ {
@@ -231,19 +231,19 @@ void ThreadPoolImpl<Thread>::startNewThreadsNoLock()
} }
template <typename Thread> template <typename Thread>
void ThreadPoolImpl<Thread>::scheduleOrThrowOnError(Job job, ssize_t priority) void ThreadPoolImpl<Thread>::scheduleOrThrowOnError(Job job, Priority priority)
{ {
scheduleImpl<void>(std::move(job), priority, std::nullopt); scheduleImpl<void>(std::move(job), priority, std::nullopt);
} }
template <typename Thread> template <typename Thread>
bool ThreadPoolImpl<Thread>::trySchedule(Job job, ssize_t priority, uint64_t wait_microseconds) noexcept bool ThreadPoolImpl<Thread>::trySchedule(Job job, Priority priority, uint64_t wait_microseconds) noexcept
{ {
return scheduleImpl<bool>(std::move(job), priority, wait_microseconds); return scheduleImpl<bool>(std::move(job), priority, wait_microseconds);
} }
template <typename Thread> template <typename Thread>
void ThreadPoolImpl<Thread>::scheduleOrThrow(Job job, ssize_t priority, uint64_t wait_microseconds, bool propagate_opentelemetry_tracing_context) void ThreadPoolImpl<Thread>::scheduleOrThrow(Job job, Priority priority, uint64_t wait_microseconds, bool propagate_opentelemetry_tracing_context)
{ {
scheduleImpl<void>(std::move(job), priority, wait_microseconds, propagate_opentelemetry_tracing_context); scheduleImpl<void>(std::move(job), priority, wait_microseconds, propagate_opentelemetry_tracing_context);
} }


@@ -18,6 +18,7 @@
#include <Common/OpenTelemetryTraceContext.h> #include <Common/OpenTelemetryTraceContext.h>
#include <Common/CurrentMetrics.h> #include <Common/CurrentMetrics.h>
#include <Common/ThreadPool_fwd.h> #include <Common/ThreadPool_fwd.h>
#include <Common/Priority.h>
#include <base/scope_guard.h> #include <base/scope_guard.h>
/** Very simple thread pool similar to boost::threadpool. /** Very simple thread pool similar to boost::threadpool.
@@ -59,17 +60,17 @@ public:
/// If any thread was throw an exception, first exception will be rethrown from this method, /// If any thread was throw an exception, first exception will be rethrown from this method,
/// and exception will be cleared. /// and exception will be cleared.
/// Also throws an exception if cannot create thread. /// Also throws an exception if cannot create thread.
/// Priority: greater is higher. /// Priority: lower is higher.
/// NOTE: Probably you should call wait() if exception was thrown. If some previously scheduled jobs are using some objects, /// NOTE: Probably you should call wait() if exception was thrown. If some previously scheduled jobs are using some objects,
/// located on stack of current thread, the stack must not be unwinded until all jobs finished. However, /// located on stack of current thread, the stack must not be unwinded until all jobs finished. However,
/// if ThreadPool is a local object, it will wait for all scheduled jobs in own destructor. /// if ThreadPool is a local object, it will wait for all scheduled jobs in own destructor.
void scheduleOrThrowOnError(Job job, ssize_t priority = 0); void scheduleOrThrowOnError(Job job, Priority priority = {});
/// Similar to scheduleOrThrowOnError(...). Wait for specified amount of time and schedule a job or return false. /// Similar to scheduleOrThrowOnError(...). Wait for specified amount of time and schedule a job or return false.
bool trySchedule(Job job, ssize_t priority = 0, uint64_t wait_microseconds = 0) noexcept; bool trySchedule(Job job, Priority priority = {}, uint64_t wait_microseconds = 0) noexcept;
/// Similar to scheduleOrThrowOnError(...). Wait for specified amount of time and schedule a job or throw an exception. /// Similar to scheduleOrThrowOnError(...). Wait for specified amount of time and schedule a job or throw an exception.
void scheduleOrThrow(Job job, ssize_t priority = 0, uint64_t wait_microseconds = 0, bool propagate_opentelemetry_tracing_context = true); void scheduleOrThrow(Job job, Priority priority = {}, uint64_t wait_microseconds = 0, bool propagate_opentelemetry_tracing_context = true);
/// Wait for all currently active jobs to be done. /// Wait for all currently active jobs to be done.
/// You may call schedule and wait many times in arbitrary order. /// You may call schedule and wait many times in arbitrary order.
@@ -123,15 +124,15 @@ private:
struct JobWithPriority struct JobWithPriority
{ {
Job job; Job job;
ssize_t priority; Priority priority;
DB::OpenTelemetry::TracingContextOnThread thread_trace_context; DB::OpenTelemetry::TracingContextOnThread thread_trace_context;
JobWithPriority(Job job_, ssize_t priority_, const DB::OpenTelemetry::TracingContextOnThread& thread_trace_context_) JobWithPriority(Job job_, Priority priority_, const DB::OpenTelemetry::TracingContextOnThread & thread_trace_context_)
: job(job_), priority(priority_), thread_trace_context(thread_trace_context_) {} : job(job_), priority(priority_), thread_trace_context(thread_trace_context_) {}
bool operator< (const JobWithPriority & rhs) const bool operator<(const JobWithPriority & rhs) const
{ {
return priority < rhs.priority; return priority > rhs.priority; // Reversed for `priority_queue` max-heap to yield minimum value (i.e. highest priority) first
} }
}; };
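The reversed comparator above can be checked with a minimal sketch (simplified stand-in types, not the real `JobWithPriority`): since `std::priority_queue` is a max-heap, flipping the comparison makes it pop the smallest `Priority` value, i.e. the highest-priority job, first.

```cpp
#include <queue>
#include <string>
#include <vector>

// Minimal stand-in for JobWithPriority: lower `priority` value means higher
// priority, and operator< is reversed so the max-heap yields the minimum.
struct QueuedJob
{
    std::string name;
    long long priority = 0;

    bool operator<(const QueuedJob & rhs) const { return priority > rhs.priority; }
};

// Drains a priority queue and returns job names in pop (execution) order.
inline std::string executionOrder(const std::vector<QueuedJob> & jobs)
{
    std::priority_queue<QueuedJob> queue;
    for (const auto & job : jobs)
        queue.push(job);
    std::string order;
    while (!queue.empty())
    {
        order += queue.top().name;
        queue.pop();
    }
    return order;
}
```

With priorities 5, -1, and 0, the job with value -1 runs first, matching the "lower is higher" convention of the new `Priority` type.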
@@ -141,7 +142,7 @@ private:
std::stack<OnDestroyCallback> on_destroy_callbacks; std::stack<OnDestroyCallback> on_destroy_callbacks;
template <typename ReturnType> template <typename ReturnType>
ReturnType scheduleImpl(Job job, ssize_t priority, std::optional<uint64_t> wait_microseconds, bool propagate_opentelemetry_tracing_context = true); ReturnType scheduleImpl(Job job, Priority priority, std::optional<uint64_t> wait_microseconds, bool propagate_opentelemetry_tracing_context = true);
void worker(typename std::list<Thread>::iterator thread_it); void worker(typename std::list<Thread>::iterator thread_it);
@@ -227,7 +228,7 @@ public:
DB::ThreadStatus thread_status; DB::ThreadStatus thread_status;
std::apply(function, arguments); std::apply(function, arguments);
}, },
0, // default priority {}, // default priority
0, // default wait_microseconds 0, // default wait_microseconds
propagate_opentelemetry_context propagate_opentelemetry_context
); );


@@ -30,6 +30,11 @@ namespace DB::ErrorCodes
extern const int ASYNC_LOAD_CANCELED; extern const int ASYNC_LOAD_CANCELED;
} }
struct Initializer {
size_t max_threads = 1;
ssize_t priority = 0;
};
struct AsyncLoaderTest struct AsyncLoaderTest
{ {
AsyncLoader loader; AsyncLoader loader;
@@ -37,10 +42,34 @@ struct AsyncLoaderTest
std::mutex rng_mutex; std::mutex rng_mutex;
pcg64 rng{randomSeed()}; pcg64 rng{randomSeed()};
explicit AsyncLoaderTest(std::vector<Initializer> initializers)
: loader(getPoolInitializers(initializers), /* log_failures = */ false, /* log_progress = */ false)
{
loader.stop(); // All tests call `start()` manually to better control ordering
}
explicit AsyncLoaderTest(size_t max_threads = 1) explicit AsyncLoaderTest(size_t max_threads = 1)
: loader(CurrentMetrics::TablesLoaderThreads, CurrentMetrics::TablesLoaderThreadsActive, max_threads, /* log_failures = */ false, /* log_progress = */ false) : AsyncLoaderTest({{.max_threads = max_threads}})
{} {}
std::vector<AsyncLoader::PoolInitializer> getPoolInitializers(std::vector<Initializer> initializers)
{
std::vector<AsyncLoader::PoolInitializer> result;
size_t pool_id = 0;
for (auto & desc : initializers)
{
result.push_back({
.name = fmt::format("Pool{}", pool_id),
.metric_threads = CurrentMetrics::TablesLoaderThreads,
.metric_active_threads = CurrentMetrics::TablesLoaderThreadsActive,
.max_threads = desc.max_threads,
.priority = desc.priority
});
pool_id++;
}
return result;
}
template <typename T> template <typename T>
T randomInt(T from, T to) T randomInt(T from, T to)
{ {
@@ -114,16 +143,19 @@ struct AsyncLoaderTest
TEST(AsyncLoader, Smoke) TEST(AsyncLoader, Smoke)
{ {
AsyncLoaderTest t(2); AsyncLoaderTest t({
{.max_threads = 2, .priority = 0},
{.max_threads = 2, .priority = -1},
});
static constexpr ssize_t low_priority = -1; static constexpr ssize_t low_priority_pool = 1;
std::atomic<size_t> jobs_done{0}; std::atomic<size_t> jobs_done{0};
std::atomic<size_t> low_priority_jobs_done{0}; std::atomic<size_t> low_priority_jobs_done{0};
auto job_func = [&] (const LoadJobPtr & self) { auto job_func = [&] (const LoadJobPtr & self) {
jobs_done++; jobs_done++;
if (self->priority() == low_priority) if (self->pool() == low_priority_pool)
low_priority_jobs_done++; low_priority_jobs_done++;
}; };
@@ -135,7 +167,7 @@ TEST(AsyncLoader, Smoke)
auto job3 = makeLoadJob({ job2 }, "job3", job_func); auto job3 = makeLoadJob({ job2 }, "job3", job_func);
auto job4 = makeLoadJob({ job2 }, "job4", job_func); auto job4 = makeLoadJob({ job2 }, "job4", job_func);
auto task2 = t.schedule({ job3, job4 }); auto task2 = t.schedule({ job3, job4 });
auto job5 = makeLoadJob({ job3, job4 }, low_priority, "job5", job_func); auto job5 = makeLoadJob({ job3, job4 }, low_priority_pool, "job5", job_func);
task2->merge(t.schedule({ job5 })); task2->merge(t.schedule({ job5 }));
std::thread waiter_thread([=] { job5->wait(); }); std::thread waiter_thread([=] { job5->wait(); });
@@ -536,7 +568,7 @@ TEST(AsyncLoader, TestOverload)
AsyncLoaderTest t(3); AsyncLoaderTest t(3);
t.loader.start(); t.loader.start();
size_t max_threads = t.loader.getMaxThreads(); size_t max_threads = t.loader.getMaxThreads(/* pool = */ 0);
std::atomic<int> executing{0}; std::atomic<int> executing{0};
for (int concurrency = 4; concurrency <= 8; concurrency++) for (int concurrency = 4; concurrency <= 8; concurrency++)
@@ -562,13 +594,24 @@ TEST(AsyncLoader, TestOverload)
TEST(AsyncLoader, StaticPriorities) TEST(AsyncLoader, StaticPriorities)
{ {
AsyncLoaderTest t(1); AsyncLoaderTest t({
{.max_threads = 1, .priority = 0},
{.max_threads = 1, .priority = 1},
{.max_threads = 1, .priority = 2},
{.max_threads = 1, .priority = 3},
{.max_threads = 1, .priority = 4},
{.max_threads = 1, .priority = 5},
{.max_threads = 1, .priority = 6},
{.max_threads = 1, .priority = 7},
{.max_threads = 1, .priority = 8},
{.max_threads = 1, .priority = 9},
});
std::string schedule; std::string schedule;
auto job_func = [&] (const LoadJobPtr & self) auto job_func = [&] (const LoadJobPtr & self)
{ {
schedule += fmt::format("{}{}", self->name, self->priority()); schedule += fmt::format("{}{}", self->name, self->pool());
}; };
std::vector<LoadJobPtr> jobs; std::vector<LoadJobPtr> jobs;
@@ -588,21 +631,110 @@ TEST(AsyncLoader, StaticPriorities)
ASSERT_EQ(schedule, "A9E9D9F9G9H9C4B3"); ASSERT_EQ(schedule, "A9E9D9F9G9H9C4B3");
} }
TEST(AsyncLoader, SimplePrioritization)
{
AsyncLoaderTest t({
{.max_threads = 1, .priority = 0},
{.max_threads = 1, .priority = 1},
{.max_threads = 1, .priority = 2},
});
t.loader.start();
std::atomic<int> executed{0}; // Number of previously executed jobs (to test execution order)
LoadJobPtr job_to_prioritize;
auto job_func_A_booster = [&] (const LoadJobPtr &)
{
ASSERT_EQ(executed++, 0);
t.loader.prioritize(job_to_prioritize, 2);
};
auto job_func_B_tester = [&] (const LoadJobPtr &)
{
ASSERT_EQ(executed++, 2);
};
auto job_func_C_boosted = [&] (const LoadJobPtr &)
{
ASSERT_EQ(executed++, 1);
};
std::vector<LoadJobPtr> jobs;
jobs.push_back(makeLoadJob({}, 1, "A", job_func_A_booster)); // 0
jobs.push_back(makeLoadJob({jobs[0]}, 1, "B", job_func_B_tester)); // 1
jobs.push_back(makeLoadJob({}, 0, "C", job_func_C_boosted)); // 2
auto task = makeLoadTask(t.loader, { jobs.begin(), jobs.end() });
job_to_prioritize = jobs[2]; // C
scheduleAndWaitLoadAll(task);
}
TEST(AsyncLoader, DynamicPriorities) TEST(AsyncLoader, DynamicPriorities)
{ {
AsyncLoaderTest t(1); AsyncLoaderTest t({
{.max_threads = 1, .priority = 0},
{.max_threads = 1, .priority = 1},
{.max_threads = 1, .priority = 2},
{.max_threads = 1, .priority = 3},
{.max_threads = 1, .priority = 4},
{.max_threads = 1, .priority = 5},
{.max_threads = 1, .priority = 6},
{.max_threads = 1, .priority = 7},
{.max_threads = 1, .priority = 8},
{.max_threads = 1, .priority = 9},
});
for (bool prioritize : {false, true}) for (bool prioritize : {false, true})
{ {
// Although all pools have max_threads=1, workers from different pools can run simultaneously just after `prioritize()` call
std::barrier sync(2);
bool wait_sync = prioritize;
std::mutex schedule_mutex;
std::string schedule; std::string schedule;
LoadJobPtr job_to_prioritize; LoadJobPtr job_to_prioritize;
// Order of execution of jobs D and E after prioritization is undefined, because it depend on `ready_seqno`
// (Which depends on initial `schedule()` order, which in turn depend on `std::unordered_map` order)
// So we have to obtain `ready_seqno` to be sure.
UInt64 ready_seqno_D = 0;
UInt64 ready_seqno_E = 0;
auto job_func = [&] (const LoadJobPtr & self) auto job_func = [&] (const LoadJobPtr & self)
{ {
{
std::unique_lock lock{schedule_mutex};
schedule += fmt::format("{}{}", self->name, self->executionPool());
}
if (prioritize && self->name == "C") if (prioritize && self->name == "C")
t.loader.prioritize(job_to_prioritize, 9); // dynamic prioritization {
schedule += fmt::format("{}{}", self->name, self->priority()); for (const auto & state : t.loader.getJobStates())
{
if (state.job->name == "D")
ready_seqno_D = state.ready_seqno;
if (state.job->name == "E")
ready_seqno_E = state.ready_seqno;
}
// Jobs D and E should be enqueued at the moment
ASSERT_LT(0, ready_seqno_D);
ASSERT_LT(0, ready_seqno_E);
// Dynamic prioritization G0 -> G9
// Note that it will spawn concurrent worker in higher priority pool
t.loader.prioritize(job_to_prioritize, 9);
sync.arrive_and_wait(); // (A) wait for higher priority worker (B) to test they can be concurrent
}
if (wait_sync && (self->name == "D" || self->name == "E"))
{
wait_sync = false;
sync.arrive_and_wait(); // (B)
}
}; };
// Job DAG with initial priorities. During execution of C4, job G0 priority is increased to G9, postponing B3 job executing. // Job DAG with initial priorities. During execution of C4, job G0 priority is increased to G9, postponing B3 job executing.
@@ -624,14 +756,19 @@ TEST(AsyncLoader, DynamicPriorities)
jobs.push_back(makeLoadJob({ jobs[6] }, 0, "H", job_func)); // 7 jobs.push_back(makeLoadJob({ jobs[6] }, 0, "H", job_func)); // 7
auto task = t.schedule({ jobs.begin(), jobs.end() }); auto task = t.schedule({ jobs.begin(), jobs.end() });
job_to_prioritize = jobs[6]; job_to_prioritize = jobs[6]; // G
t.loader.start(); t.loader.start();
t.loader.wait(); t.loader.wait();
t.loader.stop(); t.loader.stop();
if (prioritize) if (prioritize)
ASSERT_EQ(schedule, "A4C4E9D9F9G9B3H0"); {
if (ready_seqno_D < ready_seqno_E)
ASSERT_EQ(schedule, "A4C4D9E9F9G9B3H0");
else
ASSERT_EQ(schedule, "A4C4E9D9F9G9B3H0");
}
else else
ASSERT_EQ(schedule, "A4C4B3E2D1F0G0H0"); ASSERT_EQ(schedule, "A4C4B3E2D1F0G0H0");
} }
@@ -742,8 +879,64 @@ TEST(AsyncLoader, SetMaxThreads)
syncs[idx]->arrive_and_wait(); // (A) syncs[idx]->arrive_and_wait(); // (A)
sync_index++; sync_index++;
if (sync_index < syncs.size()) if (sync_index < syncs.size())
t.loader.setMaxThreads(max_threads_values[sync_index]); t.loader.setMaxThreads(/* pool = */ 0, max_threads_values[sync_index]);
syncs[idx]->arrive_and_wait(); // (B) this sync point is required to allow `executing` value to go back down to zero after we change number of workers syncs[idx]->arrive_and_wait(); // (B) this sync point is required to allow `executing` value to go back down to zero after we change number of workers
} }
t.loader.wait(); t.loader.wait();
} }
TEST(AsyncLoader, DynamicPools)
{
const size_t max_threads[] { 2, 10 };
const int jobs_in_chain = 16;
AsyncLoaderTest t({
{.max_threads = max_threads[0], .priority = 0},
{.max_threads = max_threads[1], .priority = 1},
});
t.loader.start();
std::atomic<size_t> executing[2] { 0, 0 }; // Number of currently executing jobs per pool
for (int concurrency = 1; concurrency <= 12; concurrency++)
{
std::atomic<bool> boosted{false}; // Visible concurrency was increased
std::atomic<int> left{concurrency * jobs_in_chain / 2}; // Number of jobs to start before `prioritize()` call
LoadJobSet jobs_to_prioritize;
auto job_func = [&] (const LoadJobPtr & self)
{
auto pool_id = self->executionPool();
executing[pool_id]++;
if (executing[pool_id] > max_threads[0])
boosted = true;
ASSERT_LE(executing[pool_id], max_threads[pool_id]);
// Dynamic prioritization
if (--left == 0)
{
for (const auto & job : jobs_to_prioritize)
t.loader.prioritize(job, 1);
}
t.randomSleepUs(100, 200, 100);
ASSERT_LE(executing[pool_id], max_threads[pool_id]);
executing[pool_id]--;
};
std::vector<LoadTaskPtr> tasks;
tasks.reserve(concurrency);
for (int i = 0; i < concurrency; i++)
tasks.push_back(makeLoadTask(t.loader, t.chainJobSet(jobs_in_chain, job_func)));
jobs_to_prioritize = getGoals(tasks); // All jobs
scheduleAndWaitLoadAll(tasks);
ASSERT_EQ(executing[0], 0);
ASSERT_EQ(executing[1], 0);
ASSERT_EQ(boosted, concurrency > 2);
boosted = false;
}
}


@@ -28,7 +28,7 @@ void CachedCompressedReadBuffer::initInput()
} }
void CachedCompressedReadBuffer::prefetch(int64_t priority) void CachedCompressedReadBuffer::prefetch(Priority priority)
{ {
initInput(); initInput();
file_in->prefetch(priority); file_in->prefetch(priority);


@@ -36,7 +36,7 @@ private:
bool nextImpl() override; bool nextImpl() override;
void prefetch(int64_t priority) override; void prefetch(Priority priority) override;
/// Passed into file_in. /// Passed into file_in.
ReadBufferFromFileBase::ProfileCallback profile_callback; ReadBufferFromFileBase::ProfileCallback profile_callback;


@@ -51,7 +51,7 @@ CompressedReadBufferFromFile::CompressedReadBufferFromFile(std::unique_ptr<ReadB
} }
void CompressedReadBufferFromFile::prefetch(int64_t priority) void CompressedReadBufferFromFile::prefetch(Priority priority)
{ {
file_in.prefetch(priority); file_in.prefetch(priority);
} }


@@ -43,7 +43,7 @@ private:
bool nextImpl() override; bool nextImpl() override;
void prefetch(int64_t priority) override; void prefetch(Priority priority) override;
public: public:
explicit CompressedReadBufferFromFile(std::unique_ptr<ReadBufferFromFileBase> buf, bool allow_different_codecs_ = false); explicit CompressedReadBufferFromFile(std::unique_ptr<ReadBufferFromFileBase> buf, bool allow_different_codecs_ = false);


@@ -63,7 +63,7 @@ namespace DB
\ \
M(Bool, disable_internal_dns_cache, false, "Disable internal DNS caching at all.", 0) \ M(Bool, disable_internal_dns_cache, false, "Disable internal DNS caching at all.", 0) \
M(Int32, dns_cache_update_period, 15, "Internal DNS cache update period in seconds.", 0) \ M(Int32, dns_cache_update_period, 15, "Internal DNS cache update period in seconds.", 0) \
M(UInt32, dns_max_consecutive_failures, 1024, "Max connection failures before dropping host from ClickHouse DNS cache.", 0) \ M(UInt32, dns_max_consecutive_failures, 1024, "Max DNS resolve failures of a hostname before dropping the hostname from ClickHouse DNS cache.", 0) \
\ \
M(UInt64, max_table_size_to_drop, 50000000000lu, "If size of a table is greater than this value (in bytes) than table could not be dropped with any DROP query.", 0) \ M(UInt64, max_table_size_to_drop, 50000000000lu, "If size of a table is greater than this value (in bytes) than table could not be dropped with any DROP query.", 0) \
M(UInt64, max_partition_size_to_drop, 50000000000lu, "Same as max_table_size_to_drop, but for the partitions.", 0) \ M(UInt64, max_partition_size_to_drop, 50000000000lu, "Same as max_table_size_to_drop, but for the partitions.", 0) \


@@ -138,19 +138,6 @@ namespace
} }
} }
String getCurrentKey(const String & path, const DiskEncryptedSettings & settings)
{
auto it = settings.keys.find(settings.current_key_id);
if (it == settings.keys.end())
throw Exception(
ErrorCodes::DATA_ENCRYPTION_ERROR,
"Not found a key with the current ID {} required to cipher file {}",
settings.current_key_id,
quoteString(path));
return it->second;
}
String getKey(const String & path, const FileEncryption::Header & header, const DiskEncryptedSettings & settings) String getKey(const String & path, const FileEncryption::Header & header, const DiskEncryptedSettings & settings)
{ {
auto it = settings.keys.find(header.key_id); auto it = settings.keys.find(header.key_id);
@@ -203,18 +190,19 @@ private:
}; };
DiskEncrypted::DiskEncrypted( DiskEncrypted::DiskEncrypted(
const String & name_, const Poco::Util::AbstractConfiguration & config_, const String & config_prefix_, const DisksMap & map_) const String & name_, const Poco::Util::AbstractConfiguration & config_, const String & config_prefix_, const DisksMap & map_, bool use_fake_transaction_)
: DiskEncrypted(name_, parseDiskEncryptedSettings(name_, config_, config_prefix_, map_)) : DiskEncrypted(name_, parseDiskEncryptedSettings(name_, config_, config_prefix_, map_), use_fake_transaction_)
{ {
} }
DiskEncrypted::DiskEncrypted(const String & name_, std::unique_ptr<const DiskEncryptedSettings> settings_) DiskEncrypted::DiskEncrypted(const String & name_, std::unique_ptr<const DiskEncryptedSettings> settings_, bool use_fake_transaction_)
: IDisk(name_) : IDisk(name_)
, delegate(settings_->wrapped_disk) , delegate(settings_->wrapped_disk)
, encrypted_name(name_) , encrypted_name(name_)
, disk_path(settings_->disk_path) , disk_path(settings_->disk_path)
, disk_absolute_path(settings_->wrapped_disk->getPath() + settings_->disk_path) , disk_absolute_path(settings_->wrapped_disk->getPath() + settings_->disk_path)
, current_settings(std::move(settings_)) , current_settings(std::move(settings_))
, use_fake_transaction(use_fake_transaction_)
{ {
delegate->createDirectories(disk_path); delegate->createDirectories(disk_path);
} }
@@ -309,38 +297,6 @@ std::unique_ptr<ReadBufferFromFileBase> DiskEncrypted::readFile(
return std::make_unique<ReadBufferFromEncryptedFile>(settings.local_fs_buffer_size, std::move(buffer), key, header); return std::make_unique<ReadBufferFromEncryptedFile>(settings.local_fs_buffer_size, std::move(buffer), key, header);
} }
std::unique_ptr<WriteBufferFromFileBase> DiskEncrypted::writeFile(const String & path, size_t buf_size, WriteMode mode, const WriteSettings &)
{
auto wrapped_path = wrappedPath(path);
FileEncryption::Header header;
String key;
UInt64 old_file_size = 0;
auto settings = current_settings.get();
if (mode == WriteMode::Append && exists(path))
{
old_file_size = getFileSize(path);
if (old_file_size)
{
/// Append mode: we continue to use the same header.
auto read_buffer = delegate->readFile(wrapped_path, ReadSettings().adjustBufferSize(FileEncryption::Header::kSize));
header = readHeader(*read_buffer);
key = getKey(path, header, *settings);
}
}
if (!old_file_size)
{
/// Rewrite mode: we generate a new header.
key = getCurrentKey(path, *settings);
header.algorithm = settings->current_algorithm;
header.key_id = settings->current_key_id;
header.key_hash = calculateKeyHash(key);
header.init_vector = InitVector::random();
}
auto buffer = delegate->writeFile(wrapped_path, buf_size, mode);
return std::make_unique<WriteBufferFromEncryptedFile>(buf_size, std::move(buffer), key, header, old_file_size);
}
 size_t DiskEncrypted::getFileSize(const String & path) const
 {
     auto wrapped_path = wrappedPath(path);
@@ -416,7 +372,7 @@ void registerDiskEncrypted(DiskFactory & factory, bool global_skip_access_check)
         const DisksMap & map) -> DiskPtr
     {
         bool skip_access_check = global_skip_access_check || config.getBool(config_prefix + ".skip_access_check", false);
-        DiskPtr disk = std::make_shared<DiskEncrypted>(name, config, config_prefix, map);
+        DiskPtr disk = std::make_shared<DiskEncrypted>(name, config, config_prefix, map, config.getBool(config_prefix + ".use_fake_transaction", true));
         disk->startup(context, skip_access_check);
         return disk;
     };
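The `use_fake_transaction` config key read in `registerDiskEncrypted` decides which transaction type the disk will hand out. A minimal standalone sketch of that dispatch, using hypothetical stand-in types rather than the real ClickHouse interfaces:

```cpp
#include <cassert>
#include <memory>

// Stand-in hierarchy: FakeTransaction mimics FakeDiskTransaction (applies
// operations immediately through the disk), EncryptedTransaction mimics
// DiskEncryptedTransaction (wraps the delegate disk's transaction).
struct Transaction { virtual ~Transaction() = default; virtual bool isFake() const = 0; };
struct FakeTransaction : Transaction { bool isFake() const override { return true; } };
struct EncryptedTransaction : Transaction { bool isFake() const override { return false; } };

struct Disk
{
    bool use_fake_transaction = true;  // mirrors the ".use_fake_transaction" config default

    std::shared_ptr<Transaction> createTransaction() const
    {
        if (use_fake_transaction)
            return std::make_shared<FakeTransaction>();
        return std::make_shared<EncryptedTransaction>();
    }
};
```

With the default (`true`) the behavior is unchanged from before the patch; flipping the flag routes every operation through a real delegated transaction.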

@@ -6,22 +6,14 @@
 #include <Disks/IDisk.h>
 #include <Common/MultiVersion.h>
 #include <Disks/FakeDiskTransaction.h>
+#include <Disks/DiskEncryptedTransaction.h>
 namespace DB
 {

 class ReadBufferFromFileBase;
 class WriteBufferFromFileBase;

-namespace FileEncryption { enum class Algorithm; }
-
-struct DiskEncryptedSettings
-{
-    DiskPtr wrapped_disk;
-    String disk_path;
-    std::unordered_map<UInt64, String> keys;
-    UInt64 current_key_id;
-    FileEncryption::Algorithm current_algorithm;
-};
-
 /// Encrypted disk ciphers all written files on the fly and writes the encrypted files to an underlying (normal) disk.
 /// And when we read files from an encrypted disk it deciphers them automatically,
@@ -29,8 +21,8 @@ struct DiskEncryptedSettings
 class DiskEncrypted : public IDisk
 {
 public:
-    DiskEncrypted(const String & name_, const Poco::Util::AbstractConfiguration & config_, const String & config_prefix_, const DisksMap & map_);
-    DiskEncrypted(const String & name_, std::unique_ptr<const DiskEncryptedSettings> settings_);
+    DiskEncrypted(const String & name_, const Poco::Util::AbstractConfiguration & config_, const String & config_prefix_, const DisksMap & map_, bool use_fake_transaction_);
+    DiskEncrypted(const String & name_, std::unique_ptr<const DiskEncryptedSettings> settings_, bool use_fake_transaction_);

     const String & getName() const override { return encrypted_name; }
     const String & getPath() const override { return disk_absolute_path; }
@@ -59,28 +51,30 @@ public:
     void createDirectory(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->createDirectory(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->createDirectory(path);
+        tx->commit();
     }

     void createDirectories(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->createDirectories(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->createDirectories(path);
+        tx->commit();
     }

     void clearDirectory(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->clearDirectory(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->clearDirectory(path);
+        tx->commit();
     }

     void moveDirectory(const String & from_path, const String & to_path) override
     {
-        auto wrapped_from_path = wrappedPath(from_path);
-        auto wrapped_to_path = wrappedPath(to_path);
-        delegate->moveDirectory(wrapped_from_path, wrapped_to_path);
+        auto tx = createEncryptedTransaction();
+        tx->moveDirectory(from_path, to_path);
+        tx->commit();
     }

     DirectoryIteratorPtr iterateDirectory(const String & path) const override
@@ -91,22 +85,23 @@ public:
     void createFile(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->createFile(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->createFile(path);
+        tx->commit();
     }

     void moveFile(const String & from_path, const String & to_path) override
     {
-        auto wrapped_from_path = wrappedPath(from_path);
-        auto wrapped_to_path = wrappedPath(to_path);
-        delegate->moveFile(wrapped_from_path, wrapped_to_path);
+        auto tx = createEncryptedTransaction();
+        tx->moveFile(from_path, to_path);
+        tx->commit();
     }

     void replaceFile(const String & from_path, const String & to_path) override
     {
-        auto wrapped_from_path = wrappedPath(from_path);
-        auto wrapped_to_path = wrappedPath(to_path);
-        delegate->replaceFile(wrapped_from_path, wrapped_to_path);
+        auto tx = createEncryptedTransaction();
+        tx->replaceFile(from_path, to_path);
+        tx->commit();
     }

     void listFiles(const String & path, std::vector<String> & file_names) const override
@@ -129,61 +124,67 @@ public:
         const String & path,
         size_t buf_size,
         WriteMode mode,
-        const WriteSettings & settings) override;
+        const WriteSettings & settings) override
+    {
+        auto tx = createEncryptedTransaction();
+        auto result = tx->writeFile(path, buf_size, mode, settings);
+        return result;
+    }

     void removeFile(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->removeFile(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->removeFile(path);
+        tx->commit();
     }

     void removeFileIfExists(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->removeFileIfExists(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->removeFileIfExists(path);
+        tx->commit();
     }

     void removeDirectory(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->removeDirectory(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->removeDirectory(path);
+        tx->commit();
     }

     void removeRecursive(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->removeRecursive(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->removeRecursive(path);
+        tx->commit();
     }

     void removeSharedFile(const String & path, bool flag) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->removeSharedFile(wrapped_path, flag);
+        auto tx = createEncryptedTransaction();
+        tx->removeSharedFile(path, flag);
+        tx->commit();
     }

     void removeSharedRecursive(const String & path, bool keep_all_batch_data, const NameSet & file_names_remove_metadata_only) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->removeSharedRecursive(wrapped_path, keep_all_batch_data, file_names_remove_metadata_only);
+        auto tx = createEncryptedTransaction();
+        tx->removeSharedRecursive(path, keep_all_batch_data, file_names_remove_metadata_only);
+        tx->commit();
     }

     void removeSharedFiles(const RemoveBatchRequest & files, bool keep_all_batch_data, const NameSet & file_names_remove_metadata_only) override
     {
-        for (const auto & file : files)
-        {
-            auto wrapped_path = wrappedPath(file.path);
-            bool keep = keep_all_batch_data || file_names_remove_metadata_only.contains(fs::path(file.path).filename());
-            if (file.if_exists)
-                delegate->removeSharedFileIfExists(wrapped_path, keep);
-            else
-                delegate->removeSharedFile(wrapped_path, keep);
-        }
+        auto tx = createEncryptedTransaction();
+        tx->removeSharedFiles(files, keep_all_batch_data, file_names_remove_metadata_only);
+        tx->commit();
     }

     void removeSharedFileIfExists(const String & path, bool flag) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->removeSharedFileIfExists(wrapped_path, flag);
+        auto tx = createEncryptedTransaction();
+        tx->removeSharedFileIfExists(path, flag);
+        tx->commit();
     }

     Strings getBlobPath(const String & path) const override
@@ -194,8 +195,9 @@ public:
     void writeFileUsingBlobWritingFunction(const String & path, WriteMode mode, WriteBlobFunction && write_blob_function) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->writeFileUsingBlobWritingFunction(wrapped_path, mode, std::move(write_blob_function));
+        auto tx = createEncryptedTransaction();
+        tx->writeFileUsingBlobWritingFunction(path, mode, std::move(write_blob_function));
+        tx->commit();
     }

     std::unique_ptr<ReadBufferFromFileBase> readEncryptedFile(const String & path, const ReadSettings & settings) const override
@@ -210,8 +212,9 @@ public:
         WriteMode mode,
         const WriteSettings & settings) const override
     {
-        auto wrapped_path = wrappedPath(path);
-        return delegate->writeFile(wrapped_path, buf_size, mode, settings);
+        auto tx = createEncryptedTransaction();
+        auto buf = tx->writeEncryptedFile(path, buf_size, mode, settings);
+        return buf;
     }

     size_t getEncryptedFileSize(const String & path) const override
@@ -228,8 +231,9 @@ public:
     void setLastModified(const String & path, const Poco::Timestamp & timestamp) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->setLastModified(wrapped_path, timestamp);
+        auto tx = createEncryptedTransaction();
+        tx->setLastModified(path, timestamp);
+        tx->commit();
     }

     Poco::Timestamp getLastModified(const String & path) const override
@@ -246,15 +250,16 @@ public:
     void setReadOnly(const String & path) override
     {
-        auto wrapped_path = wrappedPath(path);
-        delegate->setReadOnly(wrapped_path);
+        auto tx = createEncryptedTransaction();
+        tx->setReadOnly(path);
+        tx->commit();
     }

     void createHardLink(const String & src_path, const String & dst_path) override
     {
-        auto wrapped_src_path = wrappedPath(src_path);
-        auto wrapped_dst_path = wrappedPath(dst_path);
-        delegate->createHardLink(wrapped_src_path, wrapped_dst_path);
+        auto tx = createEncryptedTransaction();
+        tx->createHardLink(src_path, dst_path);
+        tx->commit();
     }

     void truncateFile(const String & path, size_t size) override;
@@ -289,11 +294,22 @@ public:
     SyncGuardPtr getDirectorySyncGuard(const String & path) const override;

+    std::shared_ptr<DiskEncryptedTransaction> createEncryptedTransaction() const
+    {
+        auto delegate_transaction = delegate->createTransaction();
+        return std::make_shared<DiskEncryptedTransaction>(delegate_transaction, disk_path, *current_settings.get(), delegate.get());
+    }
+
     DiskTransactionPtr createTransaction() override
     {
-        /// Need to overwrite explicetly because this disk change
-        /// a lot of "delegate" methods.
-        return std::make_shared<FakeDiskTransaction>(*this);
+        if (use_fake_transaction)
+        {
+            return std::make_shared<FakeDiskTransaction>(*this);
+        }
+        else
+        {
+            return createEncryptedTransaction();
+        }
     }

     UInt64 getTotalSpace() const override
@@ -331,10 +347,7 @@ public:
 private:
     String wrappedPath(const String & path) const
     {
-        // if path starts_with disk_path -> got already wrapped path
-        if (!disk_path.empty() && path.starts_with(disk_path))
-            return path;
-        return disk_path + path;
+        return DiskEncryptedTransaction::wrappedPath(disk_path, path);
     }

     DiskPtr delegate;
@@ -342,6 +355,7 @@ private:
     const String disk_path;
     const String disk_absolute_path;
     MultiVersion<DiskEncryptedSettings> current_settings;
+    bool use_fake_transaction;
 };

 }

@@ -0,0 +1,120 @@
#include <Disks/DiskEncryptedTransaction.h>
#if USE_SSL
#include <IO/FileEncryptionCommon.h>
#include <Common/Exception.h>
#include <boost/algorithm/hex.hpp>
#include <IO/ReadBufferFromEncryptedFile.h>
#include <IO/ReadBufferFromFileDecorator.h>
#include <IO/ReadBufferFromString.h>
#include <IO/WriteBufferFromEncryptedFile.h>
#include <Common/quoteString.h>
namespace DB
{
namespace ErrorCodes
{
extern const int DATA_ENCRYPTION_ERROR;
}
namespace
{
FileEncryption::Header readHeader(ReadBufferFromFileBase & read_buffer)
{
try
{
FileEncryption::Header header;
header.read(read_buffer);
return header;
}
catch (Exception & e)
{
e.addMessage("While reading the header of encrypted file " + quoteString(read_buffer.getFileName()));
throw;
}
}
String getCurrentKey(const String & path, const DiskEncryptedSettings & settings)
{
auto it = settings.keys.find(settings.current_key_id);
if (it == settings.keys.end())
throw Exception(
ErrorCodes::DATA_ENCRYPTION_ERROR,
"Not found a key with the current ID {} required to cipher file {}",
settings.current_key_id,
quoteString(path));
return it->second;
}
String getKey(const String & path, const FileEncryption::Header & header, const DiskEncryptedSettings & settings)
{
auto it = settings.keys.find(header.key_id);
if (it == settings.keys.end())
throw Exception(
ErrorCodes::DATA_ENCRYPTION_ERROR,
"Not found a key with ID {} required to decipher file {}",
header.key_id,
quoteString(path));
String key = it->second;
if (FileEncryption::calculateKeyHash(key) != header.key_hash)
throw Exception(
ErrorCodes::DATA_ENCRYPTION_ERROR, "Wrong key with ID {}, could not decipher file {}", header.key_id, quoteString(path));
return key;
}
}
void DiskEncryptedTransaction::copyFile(const std::string & from_file_path, const std::string & to_file_path)
{
auto wrapped_from_path = wrappedPath(from_file_path);
auto wrapped_to_path = wrappedPath(to_file_path);
delegate_transaction->copyFile(wrapped_from_path, wrapped_to_path);
}
std::unique_ptr<WriteBufferFromFileBase> DiskEncryptedTransaction::writeFile( // NOLINT
const std::string & path,
size_t buf_size,
WriteMode mode,
const WriteSettings & settings,
bool autocommit)
{
auto wrapped_path = wrappedPath(path);
FileEncryption::Header header;
String key;
UInt64 old_file_size = 0;
if (mode == WriteMode::Append && delegate_disk->exists(wrapped_path))
{
size_t size = delegate_disk->getFileSize(wrapped_path);
old_file_size = size > FileEncryption::Header::kSize ? (size - FileEncryption::Header::kSize) : 0;
if (old_file_size)
{
/// Append mode: we continue to use the same header.
auto read_buffer = delegate_disk->readFile(wrapped_path, ReadSettings().adjustBufferSize(FileEncryption::Header::kSize));
header = readHeader(*read_buffer);
key = getKey(path, header, current_settings);
}
}
if (!old_file_size)
{
/// Rewrite mode: we generate a new header.
key = getCurrentKey(path, current_settings);
header.algorithm = current_settings.current_algorithm;
header.key_id = current_settings.current_key_id;
header.key_hash = FileEncryption::calculateKeyHash(key);
header.init_vector = FileEncryption::InitVector::random();
}
auto buffer = delegate_transaction->writeFile(wrapped_path, buf_size, mode, settings, autocommit);
return std::make_unique<WriteBufferFromEncryptedFile>(buf_size, std::move(buffer), key, header, old_file_size);
}
}
#endif
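The Append branch of `DiskEncryptedTransaction::writeFile` above derives the logical (plaintext) size from the on-disk size by stripping the fixed encryption header, clamped at zero; a zero result means "treat as Rewrite" and generate a fresh header. A hedged sketch of that arithmetic — the 64-byte constant here is a stand-in, not necessarily the real `FileEncryption::Header::kSize`:

```cpp
#include <cstdint>

// Assumed encrypted file layout: [header | encrypted payload].
constexpr uint64_t kHeaderSize = 64;  // stand-in for FileEncryption::Header::kSize

// On-disk size minus the header, clamped at zero (unsigned underflow guard).
uint64_t logicalFileSize(uint64_t on_disk_size)
{
    return on_disk_size > kHeaderSize ? on_disk_size - kHeaderSize : 0;
}
```

When the result is non-zero the existing header (key id, algorithm, init vector) is reused so appended data stays decryptable with the rest of the file.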

@@ -0,0 +1,259 @@
#pragma once
#include "config.h"
#if USE_SSL
#include <Disks/IDiskTransaction.h>
#include <Disks/IDisk.h>
#include <IO/ReadBufferFromFile.h>
#include <IO/WriteBufferFromFile.h>
namespace DB
{
namespace FileEncryption { enum class Algorithm; }
struct DiskEncryptedSettings
{
DiskPtr wrapped_disk;
String disk_path;
std::unordered_map<UInt64, String> keys;
UInt64 current_key_id;
FileEncryption::Algorithm current_algorithm;
};
class DiskEncryptedTransaction : public IDiskTransaction
{
public:
static String wrappedPath(const String disk_path, const String & path)
{
// if path starts_with disk_path -> got already wrapped path
if (!disk_path.empty() && path.starts_with(disk_path))
return path;
return disk_path + path;
}
DiskEncryptedTransaction(DiskTransactionPtr delegate_transaction_, const std::string & disk_path_, DiskEncryptedSettings current_settings_, IDisk * delegate_disk_)
: delegate_transaction(delegate_transaction_)
, disk_path(disk_path_)
, current_settings(current_settings_)
, delegate_disk(delegate_disk_)
{}
/// Tries to commit all accumulated operations simultaneously.
/// If something fails rollback and throw exception.
void commit() override // NOLINT
{
delegate_transaction->commit();
}
void undo() override
{
delegate_transaction->undo();
}
~DiskEncryptedTransaction() override = default;
/// Create directory.
void createDirectory(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->createDirectory(wrapped_path);
}
/// Create directory and all parent directories if necessary.
void createDirectories(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->createDirectories(wrapped_path);
}
/// Remove all files from the directory. Directories are not removed.
void clearDirectory(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->clearDirectory(wrapped_path);
}
/// Move directory from `from_path` to `to_path`.
void moveDirectory(const std::string & from_path, const std::string & to_path) override
{
auto wrapped_from_path = wrappedPath(from_path);
auto wrapped_to_path = wrappedPath(to_path);
delegate_transaction->moveDirectory(wrapped_from_path, wrapped_to_path);
}
void moveFile(const std::string & from_path, const std::string & to_path) override
{
auto wrapped_from_path = wrappedPath(from_path);
auto wrapped_to_path = wrappedPath(to_path);
delegate_transaction->moveFile(wrapped_from_path, wrapped_to_path);
}
void createFile(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->createFile(wrapped_path);
}
/// Move the file from `from_path` to `to_path`.
/// If a file with `to_path` path already exists, it will be replaced.
void replaceFile(const std::string & from_path, const std::string & to_path) override
{
auto wrapped_from_path = wrappedPath(from_path);
auto wrapped_to_path = wrappedPath(to_path);
delegate_transaction->replaceFile(wrapped_from_path, wrapped_to_path);
}
/// Only copy of several files supported now. Disk interface support copy to another disk
/// but it's impossible to implement correctly in transactions because other disk can
/// use different metadata storage.
/// TODO: maybe remove it at all, we don't want copies
void copyFile(const std::string & from_file_path, const std::string & to_file_path) override;
/// Open the file for write and return WriteBufferFromFileBase object.
std::unique_ptr<WriteBufferFromFileBase> writeFile( /// NOLINT
const std::string & path,
size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE,
WriteMode mode = WriteMode::Rewrite,
const WriteSettings & settings = {},
bool autocommit = true) override;
/// Remove file. Throws exception if file doesn't exists or it's a directory.
void removeFile(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->removeFile(wrapped_path);
}
/// Remove file if it exists.
void removeFileIfExists(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->removeFileIfExists(wrapped_path);
}
/// Remove directory. Throws exception if it's not a directory or if directory is not empty.
void removeDirectory(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->removeDirectory(wrapped_path);
}
/// Remove file or directory with all children. Use with extra caution. Throws exception if file doesn't exists.
void removeRecursive(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->removeRecursive(wrapped_path);
}
/// Remove file. Throws exception if file doesn't exists or if directory is not empty.
/// Differs from removeFile for S3/HDFS disks
/// Second bool param is a flag to remove (true) or keep (false) shared data on S3
void removeSharedFile(const std::string & path, bool keep_shared_data) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->removeSharedFile(wrapped_path, keep_shared_data);
}
/// Remove file or directory with all children. Use with extra caution. Throws exception if file doesn't exists.
/// Differs from removeRecursive for S3/HDFS disks
/// Second bool param is a flag to remove (false) or keep (true) shared data on S3.
/// Third param determines which files cannot be removed even if second is true.
void removeSharedRecursive(const std::string & path, bool keep_all_shared_data, const NameSet & file_names_remove_metadata_only) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->removeSharedRecursive(wrapped_path, keep_all_shared_data, file_names_remove_metadata_only);
}
/// Remove file or directory if it exists.
/// Differs from removeFileIfExists for S3/HDFS disks
/// Second bool param is a flag to remove (true) or keep (false) shared data on S3
void removeSharedFileIfExists(const std::string & path, bool keep_shared_data) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->removeSharedFileIfExists(wrapped_path, keep_shared_data);
}
/// Batch request to remove multiple files.
/// May be much faster for blob storage.
/// Second bool param is a flag to remove (true) or keep (false) shared data on S3.
/// Third param determines which files cannot be removed even if second is true.
void removeSharedFiles(const RemoveBatchRequest & files, bool keep_all_batch_data, const NameSet & file_names_remove_metadata_only) override
{
for (const auto & file : files)
{
auto wrapped_path = wrappedPath(file.path);
bool keep = keep_all_batch_data || file_names_remove_metadata_only.contains(fs::path(file.path).filename());
if (file.if_exists)
delegate_transaction->removeSharedFileIfExists(wrapped_path, keep);
else
delegate_transaction->removeSharedFile(wrapped_path, keep);
}
}
/// Set last modified time to file or directory at `path`.
void setLastModified(const std::string & path, const Poco::Timestamp & timestamp) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->setLastModified(wrapped_path, timestamp);
}
/// Just chmod.
void chmod(const String & path, mode_t mode) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->chmod(wrapped_path, mode);
}
/// Set file at `path` as read-only.
void setReadOnly(const std::string & path) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->setReadOnly(wrapped_path);
}
/// Create hardlink from `src_path` to `dst_path`.
void createHardLink(const std::string & src_path, const std::string & dst_path) override
{
auto wrapped_src_path = wrappedPath(src_path);
auto wrapped_dst_path = wrappedPath(dst_path);
delegate_transaction->createHardLink(wrapped_src_path, wrapped_dst_path);
}
void writeFileUsingBlobWritingFunction(const String & path, WriteMode mode, WriteBlobFunction && write_blob_function) override
{
auto wrapped_path = wrappedPath(path);
delegate_transaction->writeFileUsingBlobWritingFunction(wrapped_path, mode, std::move(write_blob_function));
}
std::unique_ptr<WriteBufferFromFileBase> writeEncryptedFile(
const String & path,
size_t buf_size,
WriteMode mode,
const WriteSettings & settings) const
{
auto wrapped_path = wrappedPath(path);
return delegate_transaction->writeFile(wrapped_path, buf_size, mode, settings);
}
private:
String wrappedPath(const String & path) const
{
return wrappedPath(disk_path, path);
}
DiskTransactionPtr delegate_transaction;
std::string disk_path;
DiskEncryptedSettings current_settings;
IDisk * delegate_disk;
};
}
#endif
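The static `wrappedPath` in `DiskEncryptedTransaction` is deliberately idempotent: wrapping an already-wrapped path must not double the prefix, since callers may pass paths that were wrapped once before. A self-contained sketch of the same check over `std::string` (`rfind` at position 0 stands in for `starts_with`, which requires C++20):

```cpp
#include <string>

// Prefix `path` with `disk_path` unless it already carries that prefix.
std::string wrappedPath(const std::string & disk_path, const std::string & path)
{
    // rfind(prefix, 0) == 0  <=>  path starts with prefix
    if (!disk_path.empty() && path.rfind(disk_path, 0) == 0)
        return path;  // already wrapped
    return disk_path + path;
}
```

Making the helper static lets both `DiskEncrypted` and the transaction share one definition instead of keeping two copies in sync.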

View File

@@ -83,19 +83,19 @@ bool AsynchronousBoundedReadBuffer::hasPendingDataToRead()
 }

 std::future<IAsynchronousReader::Result>
-AsynchronousBoundedReadBuffer::asyncReadInto(char * data, size_t size, int64_t priority)
+AsynchronousBoundedReadBuffer::asyncReadInto(char * data, size_t size, Priority priority)
 {
     IAsynchronousReader::Request request;
     request.descriptor = std::make_shared<RemoteFSFileDescriptor>(*impl, async_read_counters);
     request.buf = data;
     request.size = size;
     request.offset = file_offset_of_buffer_end;
-    request.priority = read_settings.priority + priority;
+    request.priority = Priority{read_settings.priority.value + priority.value};
     request.ignore = bytes_to_ignore;

     return reader.submit(request);
 }

-void AsynchronousBoundedReadBuffer::prefetch(int64_t priority)
+void AsynchronousBoundedReadBuffer::prefetch(Priority priority)
 {
     if (prefetch_future.valid())
         return;
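The hunk above replaces a raw `int64_t` with a `Priority` value type, so combining the base (per-read-settings) priority with a per-request one must be spelled out explicitly instead of happening through accidental integer arithmetic. A minimal sketch — this `Priority` mirrors only the shape used here, not the full ClickHouse definition:

```cpp
#include <cstdint>

// Tiny strong-typedef-style wrapper: no implicit conversions to/from int64_t,
// so every combination of priorities is an explicit, searchable operation.
struct Priority
{
    int64_t value = 0;
};

// Explicit analogue of the old `read_settings.priority + priority`.
Priority combine(Priority base, Priority request)
{
    return Priority{base.value + request.value};
}
```

The payoff is that a stray `priority + 1` on a raw integer no longer compiles, which is exactly the class of bug the refactoring targets.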

@@ -39,7 +39,7 @@ public:
     off_t seek(off_t offset_, int whence) override;

-    void prefetch(int64_t priority) override;
+    void prefetch(Priority priority) override;

     void setReadUntilPosition(size_t position) override; /// [..., position).
@@ -72,7 +72,7 @@ private:
     struct LastPrefetchInfo
     {
         UInt64 submit_time = 0;
-        size_t priority = 0;
+        Priority priority;
     };
     LastPrefetchInfo last_prefetch_info;
@@ -87,7 +87,7 @@ private:
         int64_t size,
         const std::unique_ptr<Stopwatch> & execution_watch);

-    std::future<IAsynchronousReader::Result> asyncReadInto(char * data, size_t size, int64_t priority);
+    std::future<IAsynchronousReader::Result> asyncReadInto(char * data, size_t size, Priority priority);

     void resetPrefetch(FilesystemPrefetchState state);

@@ -40,7 +40,7 @@ protected:
         settings->keys[0] = key;
         settings->current_key_id = 0;
         settings->disk_path = path;
-        encrypted_disk = std::make_shared<DiskEncrypted>("encrypted_disk", std::move(settings));
+        encrypted_disk = std::make_shared<DiskEncrypted>("encrypted_disk", std::move(settings), true);
     }

     String getFileNames()


@@ -13,7 +13,6 @@ namespace DB
 namespace ErrorCodes
 {
     extern const int ILLEGAL_COLUMN;
-    extern const int ILLEGAL_TYPE_OF_ARGUMENT;
     extern const int TOO_LARGE_STRING_SIZE;
 }
@@ -25,18 +24,16 @@ struct RepeatImpl
     /// Safety threshold against DoS.
     static inline void checkRepeatTime(UInt64 repeat_time)
     {
-        static constexpr UInt64 max_repeat_times = 1000000;
+        static constexpr UInt64 max_repeat_times = 1'000'000;
         if (repeat_time > max_repeat_times)
-            throw Exception(ErrorCodes::TOO_LARGE_STRING_SIZE, "Too many times to repeat ({}), maximum is: {}",
-                std::to_string(repeat_time), std::to_string(max_repeat_times));
+            throw Exception(ErrorCodes::TOO_LARGE_STRING_SIZE, "Too many times to repeat ({}), maximum is: {}", repeat_time, max_repeat_times);
     }

     static inline void checkStringSize(UInt64 size)
     {
         static constexpr UInt64 max_string_size = 1 << 30;
         if (size > max_string_size)
-            throw Exception(ErrorCodes::TOO_LARGE_STRING_SIZE, "Too large string size ({}) in function repeat, maximum is: {}",
-                size, max_string_size);
+            throw Exception(ErrorCodes::TOO_LARGE_STRING_SIZE, "Too large string size ({}) in function repeat, maximum is: {}", size, max_string_size);
     }
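The two guards above bound the output of `repeat` before any allocation happens. The same thresholds can be expressed as standalone predicates for a quick sanity check — a sketch only; the real code throws a `TOO_LARGE_STRING_SIZE` exception rather than returning a bool:

```cpp
#include <cstdint>

// Thresholds copied from the checks above.
constexpr uint64_t max_repeat_times = 1'000'000;
constexpr uint64_t max_string_size = uint64_t(1) << 30;  // 1 GiB

// True when the argument would pass the corresponding check.
bool repeatTimeOk(uint64_t repeat_time) { return repeat_time <= max_repeat_times; }
bool stringSizeOk(uint64_t size) { return size <= max_string_size; }
```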
     template <typename T>
@@ -186,36 +183,37 @@ public:
     bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }

-    DataTypePtr getReturnTypeImpl(const DataTypes & arguments) const override
+    DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
     {
-        if (!isString(arguments[0]))
-            throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument of function {}",
-                arguments[0]->getName(), getName());
-        if (!isInteger(arguments[1]))
-            throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT, "Illegal type {} of argument of function {}",
-                arguments[1]->getName(), getName());
-        return arguments[0];
+        FunctionArgumentDescriptors args{
+            {"s", &isString<IDataType>, nullptr, "String"},
+            {"n", &isInteger<IDataType>, nullptr, "Integer"},
+        };
+
+        validateFunctionArgumentTypes(*this, arguments, args);
+
+        return std::make_shared<DataTypeString>();
     }

     bool useDefaultImplementationForConstants() const override { return true; }

-    ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr &, size_t) const override
+    ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr & /*result_type*/, size_t /*input_rows_count*/) const override
     {
-        const auto & strcolumn = arguments[0].column;
-        const auto & numcolumn = arguments[1].column;
+        const auto & col_str = arguments[0].column;
+        const auto & col_num = arguments[1].column;
         ColumnPtr res;

-        if (const ColumnString * col = checkAndGetColumn<ColumnString>(strcolumn.get()))
+        if (const ColumnString * col = checkAndGetColumn<ColumnString>(col_str.get()))
         {
-            if (const ColumnConst * scale_column_num = checkAndGetColumn<ColumnConst>(numcolumn.get()))
+            if (const ColumnConst * col_num_const = checkAndGetColumn<ColumnConst>(col_num.get()))
             {
                 auto col_res = ColumnString::create();
                 castType(arguments[1].type.get(), [&](const auto & type)
                 {
                     using DataType = std::decay_t<decltype(type)>;
                     using T = typename DataType::FieldType;
-                    T repeat_time = scale_column_num->getValue<T>();
-                    RepeatImpl::vectorStrConstRepeat(col->getChars(), col->getOffsets(), col_res->getChars(), col_res->getOffsets(), repeat_time);
+                    T times = col_num_const->getValue<T>();
+                    RepeatImpl::vectorStrConstRepeat(col->getChars(), col->getOffsets(), col_res->getChars(), col_res->getOffsets(), times);
                     return true;
                 });
                 return col_res;
@@ -224,9 +222,9 @@ public:
             {
                 using DataType = std::decay_t<decltype(type)>;
                 using T = typename DataType::FieldType;
-                const ColumnVector<T> * colnum = checkAndGetColumn<ColumnVector<T>>(numcolumn.get());
+                const ColumnVector<T> * column = checkAndGetColumn<ColumnVector<T>>(col_num.get());
                 auto col_res = ColumnString::create();
-                RepeatImpl::vectorStrVectorRepeat(col->getChars(), col->getOffsets(), col_res->getChars(), col_res->getOffsets(), colnum->getData());
+                RepeatImpl::vectorStrVectorRepeat(col->getChars(), col->getOffsets(), col_res->getChars(), col_res->getOffsets(), column->getData());
                 res = std::move(col_res);
return true; return true;
})) }))
@ -234,7 +232,7 @@ public:
return res; return res;
} }
} }
else if (const ColumnConst * col_const = checkAndGetColumn<ColumnConst>(strcolumn.get())) else if (const ColumnConst * col_const = checkAndGetColumn<ColumnConst>(col_str.get()))
{ {
/// Note that const-const case is handled by useDefaultImplementationForConstants. /// Note that const-const case is handled by useDefaultImplementationForConstants.
@ -244,9 +242,9 @@ public:
{ {
using DataType = std::decay_t<decltype(type)>; using DataType = std::decay_t<decltype(type)>;
using T = typename DataType::FieldType; using T = typename DataType::FieldType;
const ColumnVector<T> * colnum = checkAndGetColumn<ColumnVector<T>>(numcolumn.get()); const ColumnVector<T> * column = checkAndGetColumn<ColumnVector<T>>(col_num.get());
auto col_res = ColumnString::create(); auto col_res = ColumnString::create();
RepeatImpl::constStrVectorRepeat(copy_str, col_res->getChars(), col_res->getOffsets(), colnum->getData()); RepeatImpl::constStrVectorRepeat(copy_str, col_res->getChars(), col_res->getOffsets(), column->getData());
res = std::move(col_res); res = std::move(col_res);
return true; return true;
})) }))
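The switch above from hand-written `isString`/`isInteger` checks to `FunctionArgumentDescriptors` plus `validateFunctionArgumentTypes` is a move to declarative argument validation. A minimal standalone sketch of that pattern — all names and types here are illustrative stand-ins, not ClickHouse's actual API:

```cpp
#include <cassert>
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

/// Illustrative stand-in for ClickHouse's IDataType.
struct TypeInfo { std::string name; bool is_string = false; bool is_integer = false; };

/// Illustrative stand-in for FunctionArgumentDescriptor: an argument name,
/// a type predicate, and the human-readable type name used in error messages.
struct ArgumentDescriptor
{
    std::string argument_name;
    std::function<bool(const TypeInfo &)> type_check;
    std::string expected_type_name;
};

/// Declarative validation: each argument is checked against its descriptor,
/// and the error message is derived from the descriptor rather than hand-written
/// per call site, as in the old code.
void validateArguments(const std::vector<TypeInfo> & args, const std::vector<ArgumentDescriptor> & descriptors)
{
    if (args.size() != descriptors.size())
        throw std::invalid_argument("wrong number of arguments");
    for (size_t i = 0; i < args.size(); ++i)
        if (!descriptors[i].type_check(args[i]))
            throw std::invalid_argument("argument '" + descriptors[i].argument_name
                + "' must be " + descriptors[i].expected_type_name + ", got " + args[i].name);
}
```

The payoff is that every function declares its signature as data, so error text stays uniform across functions.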

src/Functions/space.cpp (new file, 179 lines)

@ -0,0 +1,179 @@
#include <Columns/ColumnString.h>
#include <Columns/ColumnsNumber.h>
#include <DataTypes/DataTypesNumber.h>
#include <DataTypes/DataTypeString.h>
#include <Functions/FunctionFactory.h>
#include <Functions/FunctionHelpers.h>
#include <Functions/IFunction.h>
#include <cstring>
namespace DB
{
namespace ErrorCodes
{
extern const int ILLEGAL_COLUMN;
extern const int TOO_LARGE_STRING_SIZE;
}
namespace
{
/// Returns a string of n whitespace characters. space() could also be pushed down to repeat(), but a standalone
/// implementation was chosen because it can use memset() whereas repeat() does memcpy().
class FunctionSpace : public IFunction
{
private:
static constexpr auto space = ' ';
/// Safety threshold against DoS.
static inline void checkRepeatTime(size_t repeat_time)
{
static constexpr auto max_repeat_times = 1'000'000uz;
if (repeat_time > max_repeat_times)
throw Exception(ErrorCodes::TOO_LARGE_STRING_SIZE, "Too many times to repeat ({}), maximum is: {}", repeat_time, max_repeat_times);
}
public:
static constexpr auto name = "space";
static FunctionPtr create(ContextPtr) { return std::make_shared<FunctionSpace>(); }
String getName() const override { return name; }
size_t getNumberOfArguments() const override { return 1; }
bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return true; }
DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override
{
FunctionArgumentDescriptors args{
{"n", &isInteger<IDataType>, nullptr, "Integer"}
};
validateFunctionArgumentTypes(*this, arguments, args);
return std::make_shared<DataTypeString>();
}
template <typename DataType>
bool executeConstant(ColumnPtr col_times, ColumnString::Offsets & res_offsets, ColumnString::Chars & res_chars) const
{
const ColumnConst * col_times_const = checkAndGetColumn<ColumnConst>(col_times.get());
const ColumnPtr & col_times_const_internal = col_times_const->getDataColumnPtr();
if (!checkAndGetColumn<typename DataType::ColumnType>(col_times_const_internal.get()))
return false;
using T = typename DataType::FieldType;
T times = col_times_const->getValue<T>();
if (times < 1)
times = 0;
checkRepeatTime(times);
res_offsets.resize(col_times->size());
res_chars.resize(col_times->size() * (times + 1));
size_t pos = 0;
for (size_t i = 0; i < col_times->size(); ++i)
{
memset(res_chars.begin() + pos, space, times);
pos += times;
*(res_chars.begin() + pos) = '\0';
pos += 1;
res_offsets[i] = pos;
}
return true;
}
template <typename DataType>
bool executeVector(ColumnPtr col_times_, ColumnString::Offsets & res_offsets, ColumnString::Chars & res_chars) const
{
auto * col_times = checkAndGetColumn<typename DataType::ColumnType>(col_times_.get());
if (!col_times)
return false;
res_offsets.resize(col_times->size());
res_chars.resize(col_times->size() * 10); /// heuristic: initial guess of 10 bytes per row, grown on demand below
const PaddedPODArray<typename DataType::FieldType> & times_data = col_times->getData();
size_t pos = 0;
for (size_t i = 0; i < col_times->size(); ++i)
{
typename DataType::FieldType times = times_data[i];
if (times < 1)
times = 0;
checkRepeatTime(times);
if (pos + times + 1 > res_chars.size())
res_chars.resize(std::max(2 * res_chars.size(), static_cast<size_t>(pos + times + 1)));
memset(res_chars.begin() + pos, space, times);
pos += times;
*(res_chars.begin() + pos) = '\0';
pos += 1;
res_offsets[i] = pos;
}
res_chars.resize(pos);
return true;
}
ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr &, size_t /*input_rows_count*/) const override
{
const auto & col_num = arguments[0].column;
auto col_res = ColumnString::create();
ColumnString::Offsets & res_offsets = col_res->getOffsets();
ColumnString::Chars & res_chars = col_res->getChars();
if (const ColumnConst * col_num_const = checkAndGetColumn<ColumnConst>(col_num.get()))
{
if ((executeConstant<DataTypeUInt8>(col_num, res_offsets, res_chars))
|| (executeConstant<DataTypeUInt16>(col_num, res_offsets, res_chars))
|| (executeConstant<DataTypeUInt32>(col_num, res_offsets, res_chars))
|| (executeConstant<DataTypeUInt64>(col_num, res_offsets, res_chars))
|| (executeConstant<DataTypeInt8>(col_num, res_offsets, res_chars))
|| (executeConstant<DataTypeInt16>(col_num, res_offsets, res_chars))
|| (executeConstant<DataTypeInt32>(col_num, res_offsets, res_chars))
|| (executeConstant<DataTypeInt64>(col_num, res_offsets, res_chars)))
return col_res;
}
else
{
if ((executeVector<DataTypeUInt8>(col_num, res_offsets, res_chars))
|| (executeVector<DataTypeUInt16>(col_num, res_offsets, res_chars))
|| (executeVector<DataTypeUInt32>(col_num, res_offsets, res_chars))
|| (executeVector<DataTypeUInt64>(col_num, res_offsets, res_chars))
|| (executeVector<DataTypeInt8>(col_num, res_offsets, res_chars))
|| (executeVector<DataTypeInt16>(col_num, res_offsets, res_chars))
|| (executeVector<DataTypeInt32>(col_num, res_offsets, res_chars))
|| (executeVector<DataTypeInt64>(col_num, res_offsets, res_chars)))
return col_res;
}
throw Exception(ErrorCodes::ILLEGAL_COLUMN, "Illegal column {} of argument of function {}", arguments[0].column->getName(), getName());
}
};
}
REGISTER_FUNCTION(Space)
{
factory.registerFunction<FunctionSpace>({}, FunctionFactory::CaseInsensitive);
}
}
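The executeVector loop above builds ClickHouse's ColumnString layout directly: one shared char buffer, each row terminated by `'\0'`, and an offsets array where entry i points one past row i's terminator. A self-contained sketch of that layout and loop, with simplified stand-in types (`StringColumn` here is not ClickHouse's ColumnString):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

/// Simplified stand-in for ColumnString storage: all rows in one char buffer,
/// offsets[i] points one past row i's '\0' terminator.
struct StringColumn
{
    std::vector<char> chars;
    std::vector<size_t> offsets;
};

StringColumn spaceColumn(const std::vector<int64_t> & times_data)
{
    StringColumn res;
    res.offsets.resize(times_data.size());
    res.chars.resize(times_data.size() * 10); /// same initial heuristic as the real code
    size_t pos = 0;
    for (size_t i = 0; i < times_data.size(); ++i)
    {
        /// Negative counts are clamped to zero, as in the function above.
        size_t times = times_data[i] < 1 ? 0 : static_cast<size_t>(times_data[i]);
        if (pos + times + 1 > res.chars.size())
            res.chars.resize(std::max(2 * res.chars.size(), pos + times + 1));
        memset(res.chars.data() + pos, ' ', times); /// memset, not memcpy as repeat() does
        pos += times;
        res.chars[pos] = '\0';
        ++pos;
        res.offsets[i] = pos;
    }
    res.chars.resize(pos);
    return res;
}
```

The DoS guard (`checkRepeatTime`) is omitted here; in the real code it rejects any per-row count above one million before the buffer is grown.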


@ -26,7 +26,7 @@ namespace ErrorCodes
AsynchronousReadBufferFromFile::AsynchronousReadBufferFromFile( AsynchronousReadBufferFromFile::AsynchronousReadBufferFromFile(
IAsynchronousReader & reader_, IAsynchronousReader & reader_,
Int32 priority_, Priority priority_,
const std::string & file_name_, const std::string & file_name_,
size_t buf_size, size_t buf_size,
int flags, int flags,
@ -60,7 +60,7 @@ AsynchronousReadBufferFromFile::AsynchronousReadBufferFromFile(
AsynchronousReadBufferFromFile::AsynchronousReadBufferFromFile( AsynchronousReadBufferFromFile::AsynchronousReadBufferFromFile(
IAsynchronousReader & reader_, IAsynchronousReader & reader_,
Int32 priority_, Priority priority_,
int & fd_, int & fd_,
const std::string & original_file_name, const std::string & original_file_name,
size_t buf_size, size_t buf_size,


@ -17,7 +17,7 @@ protected:
public: public:
explicit AsynchronousReadBufferFromFile( explicit AsynchronousReadBufferFromFile(
IAsynchronousReader & reader_, IAsynchronousReader & reader_,
Int32 priority_, Priority priority_,
const std::string & file_name_, const std::string & file_name_,
size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE, size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE,
int flags = -1, int flags = -1,
@ -28,7 +28,7 @@ public:
/// Use pre-opened file descriptor. /// Use pre-opened file descriptor.
explicit AsynchronousReadBufferFromFile( explicit AsynchronousReadBufferFromFile(
IAsynchronousReader & reader_, IAsynchronousReader & reader_,
Int32 priority_, Priority priority_,
int & fd, /// Will be set to -1 if constructor didn't throw and ownership of file descriptor is passed to the object. int & fd, /// Will be set to -1 if constructor didn't throw and ownership of file descriptor is passed to the object.
const std::string & original_file_name = {}, const std::string & original_file_name = {},
size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE, size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE,
@ -58,7 +58,7 @@ private:
public: public:
AsynchronousReadBufferFromFileWithDescriptorsCache( AsynchronousReadBufferFromFileWithDescriptorsCache(
IAsynchronousReader & reader_, IAsynchronousReader & reader_,
Int32 priority_, Priority priority_,
const std::string & file_name_, const std::string & file_name_,
size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE, size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE,
int flags = -1, int flags = -1,


@ -40,14 +40,14 @@ std::string AsynchronousReadBufferFromFileDescriptor::getFileName() const
} }
std::future<IAsynchronousReader::Result> AsynchronousReadBufferFromFileDescriptor::asyncReadInto(char * data, size_t size, int64_t priority) std::future<IAsynchronousReader::Result> AsynchronousReadBufferFromFileDescriptor::asyncReadInto(char * data, size_t size, Priority priority)
{ {
IAsynchronousReader::Request request; IAsynchronousReader::Request request;
request.descriptor = std::make_shared<IAsynchronousReader::LocalFileDescriptor>(fd); request.descriptor = std::make_shared<IAsynchronousReader::LocalFileDescriptor>(fd);
request.buf = data; request.buf = data;
request.size = size; request.size = size;
request.offset = file_offset_of_buffer_end; request.offset = file_offset_of_buffer_end;
request.priority = base_priority + priority; request.priority = Priority{base_priority.value + priority.value};
request.ignore = bytes_to_ignore; request.ignore = bytes_to_ignore;
bytes_to_ignore = 0; bytes_to_ignore = 0;
@ -61,7 +61,7 @@ std::future<IAsynchronousReader::Result> AsynchronousReadBufferFromFileDescripto
} }
void AsynchronousReadBufferFromFileDescriptor::prefetch(int64_t priority) void AsynchronousReadBufferFromFileDescriptor::prefetch(Priority priority)
{ {
if (prefetch_future.valid()) if (prefetch_future.valid())
return; return;
@ -151,7 +151,7 @@ void AsynchronousReadBufferFromFileDescriptor::finalize()
AsynchronousReadBufferFromFileDescriptor::AsynchronousReadBufferFromFileDescriptor( AsynchronousReadBufferFromFileDescriptor::AsynchronousReadBufferFromFileDescriptor(
IAsynchronousReader & reader_, IAsynchronousReader & reader_,
Int32 priority_, Priority priority_,
int fd_, int fd_,
size_t buf_size, size_t buf_size,
char * existing_memory, char * existing_memory,


@ -4,6 +4,7 @@
#include <IO/AsynchronousReader.h> #include <IO/AsynchronousReader.h>
#include <Interpreters/Context.h> #include <Interpreters/Context.h>
#include <Common/Throttler_fwd.h> #include <Common/Throttler_fwd.h>
#include <Common/Priority.h>
#include <optional> #include <optional>
#include <unistd.h> #include <unistd.h>
@ -18,7 +19,7 @@ class AsynchronousReadBufferFromFileDescriptor : public ReadBufferFromFileBase
{ {
protected: protected:
IAsynchronousReader & reader; IAsynchronousReader & reader;
int64_t base_priority; Priority base_priority;
Memory<> prefetch_buffer; Memory<> prefetch_buffer;
std::future<IAsynchronousReader::Result> prefetch_future; std::future<IAsynchronousReader::Result> prefetch_future;
@ -39,7 +40,7 @@ protected:
public: public:
AsynchronousReadBufferFromFileDescriptor( AsynchronousReadBufferFromFileDescriptor(
IAsynchronousReader & reader_, IAsynchronousReader & reader_,
Int32 priority_, Priority priority_,
int fd_, int fd_,
size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE, size_t buf_size = DBMS_DEFAULT_BUFFER_SIZE,
char * existing_memory = nullptr, char * existing_memory = nullptr,
@ -49,7 +50,7 @@ public:
~AsynchronousReadBufferFromFileDescriptor() override; ~AsynchronousReadBufferFromFileDescriptor() override;
void prefetch(int64_t priority) override; void prefetch(Priority priority) override;
int getFD() const int getFD() const
{ {
@ -70,7 +71,7 @@ public:
size_t getFileSize() override; size_t getFileSize() override;
private: private:
std::future<IAsynchronousReader::Result> asyncReadInto(char * data, size_t size, int64_t priority); std::future<IAsynchronousReader::Result> asyncReadInto(char * data, size_t size, Priority priority);
}; };
} }


@ -6,6 +6,7 @@
#include <future> #include <future>
#include <boost/noncopyable.hpp> #include <boost/noncopyable.hpp>
#include <Common/Stopwatch.h> #include <Common/Stopwatch.h>
#include <Common/Priority.h>
namespace DB namespace DB
@ -47,7 +48,7 @@ public:
size_t offset = 0; size_t offset = 0;
size_t size = 0; size_t size = 0;
char * buf = nullptr; char * buf = nullptr;
int64_t priority = 0; Priority priority;
size_t ignore = 0; size_t ignore = 0;
}; };
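Throughout this commit, raw `int64_t`/`Int32` priorities are replaced by a `Priority` type from `Common/Priority.h`, whose definition is not part of this diff. A minimal sketch consistent with the usages visible above (`Priority{0}`, `.value`, and "lower value is higher priority") — the actual header may differ:

```cpp
#include <cassert>
#include <cstdint>

/// Assumed shape of the strong-typedef wrapper: an aggregate holding a signed
/// value, where a LOWER value means a HIGHER priority. The strong type prevents
/// accidentally mixing priorities with ordinary integers (e.g. the old code had
/// to negate priorities because ThreadPool used the opposite convention).
struct Priority
{
    int64_t value = 0;
    bool operator<(const Priority & other) const { return value < other.value; }
};

/// Combining a base priority with a per-request one, mirroring
/// request.priority = Priority{base_priority.value + priority.value};
Priority combine(Priority base, Priority extra)
{
    return Priority{base.value + extra.value};
}
```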


@ -19,7 +19,7 @@ public:
const ReadBuffer & getWrappedReadBuffer() const { return *in; } const ReadBuffer & getWrappedReadBuffer() const { return *in; }
ReadBuffer & getWrappedReadBuffer() { return *in; } ReadBuffer & getWrappedReadBuffer() { return *in; }
void prefetch(int64_t priority) override { in->prefetch(priority); } void prefetch(Priority priority) override { in->prefetch(priority); }
protected: protected:
std::unique_ptr<ReadBuffer> in; std::unique_ptr<ReadBuffer> in;


@ -87,7 +87,7 @@ bool ParallelReadBuffer::addReaderToPool()
auto worker = read_workers.emplace_back(std::make_shared<ReadWorker>(std::move(reader), range_start, size)); auto worker = read_workers.emplace_back(std::make_shared<ReadWorker>(std::move(reader), range_start, size));
++active_working_reader; ++active_working_reader;
schedule([this, my_worker = std::move(worker)]() mutable { readerThreadFunction(std::move(my_worker)); }, 0); schedule([this, my_worker = std::move(worker)]() mutable { readerThreadFunction(std::move(my_worker)); }, Priority{});
return true; return true;
} }


@ -20,7 +20,7 @@ public:
~PeekableReadBuffer() override; ~PeekableReadBuffer() override;
void prefetch(int64_t priority) override { sub_buf->prefetch(priority); } void prefetch(Priority priority) override { sub_buf->prefetch(priority); }
/// Sets checkpoint at current position /// Sets checkpoint at current position
ALWAYS_INLINE inline void setCheckpoint() ALWAYS_INLINE inline void setCheckpoint()


@ -6,6 +6,7 @@
#include <memory> #include <memory>
#include <Common/Exception.h> #include <Common/Exception.h>
#include <Common/Priority.h>
#include <IO/BufferBase.h> #include <IO/BufferBase.h>
#include <IO/AsynchronousReader.h> #include <IO/AsynchronousReader.h>
@ -20,7 +21,7 @@ namespace ErrorCodes
extern const int NOT_IMPLEMENTED; extern const int NOT_IMPLEMENTED;
} }
static constexpr auto DEFAULT_PREFETCH_PRIORITY = 0; static constexpr auto DEFAULT_PREFETCH_PRIORITY = Priority{0};
/** A simple abstract class for buffered data reading (char sequences) from somewhere. /** A simple abstract class for buffered data reading (char sequences) from somewhere.
* Unlike std::istream, it provides access to the internal buffer, * Unlike std::istream, it provides access to the internal buffer,
@ -208,10 +209,10 @@ public:
/** Do something to allow faster subsequent call to 'nextImpl' if possible. /** Do something to allow faster subsequent call to 'nextImpl' if possible.
* It's used for asynchronous readers with double-buffering. * It's used for asynchronous readers with double-buffering.
* `priority` is the Threadpool priority, with which the prefetch task will be schedules. * `priority` is the `ThreadPool` priority, with which the prefetch task will be scheduled.
* Smaller is more priority. * Lower value means higher priority.
*/ */
virtual void prefetch(int64_t /* priority */) {} virtual void prefetch(Priority) {}
/** /**
* Set upper bound for read range [..., position). * Set upper bound for read range [..., position).


@ -124,7 +124,7 @@ bool ReadBufferFromFileDescriptor::nextImpl()
} }
void ReadBufferFromFileDescriptor::prefetch(int64_t) void ReadBufferFromFileDescriptor::prefetch(Priority)
{ {
#if defined(POSIX_FADV_WILLNEED) #if defined(POSIX_FADV_WILLNEED)
/// For direct IO, loading data into page cache is pointless. /// For direct IO, loading data into page cache is pointless.


@ -25,7 +25,7 @@ protected:
ThrottlerPtr throttler; ThrottlerPtr throttler;
bool nextImpl() override; bool nextImpl() override;
void prefetch(int64_t priority) override; void prefetch(Priority priority) override;
/// Name or some description of file. /// Name or some description of file.
std::string getFileName() const override; std::string getFileName() const override;


@ -12,7 +12,7 @@ off_t ReadBufferFromMemory::seek(off_t offset, int whence)
{ {
if (whence == SEEK_SET) if (whence == SEEK_SET)
{ {
if (offset >= 0 && internal_buffer.begin() + offset < internal_buffer.end()) if (offset >= 0 && internal_buffer.begin() + offset <= internal_buffer.end())
{ {
pos = internal_buffer.begin() + offset; pos = internal_buffer.begin() + offset;
working_buffer = internal_buffer; /// We need to restore `working_buffer` in case the position was at EOF before this seek(). working_buffer = internal_buffer; /// We need to restore `working_buffer` in case the position was at EOF before this seek().
@ -25,7 +25,7 @@ off_t ReadBufferFromMemory::seek(off_t offset, int whence)
else if (whence == SEEK_CUR) else if (whence == SEEK_CUR)
{ {
Position new_pos = pos + offset; Position new_pos = pos + offset;
if (new_pos >= internal_buffer.begin() && new_pos < internal_buffer.end()) if (new_pos >= internal_buffer.begin() && new_pos <= internal_buffer.end())
{ {
pos = new_pos; pos = new_pos;
working_buffer = internal_buffer; /// We need to restore `working_buffer` in case the position was at EOF before this seek(). working_buffer = internal_buffer; /// We need to restore `working_buffer` in case the position was at EOF before this seek().
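The one-character change in both branches above (`<` becomes `<=`) fixes an off-by-one: seeking exactly to the end of the buffer is a valid seek that lands on EOF, but the old strict comparison rejected it. Expressed over sizes rather than pointers, the corrected bound check is:

```cpp
#include <cassert>
#include <cstddef>

/// A seek target is valid if it lands anywhere in [0, buffer_size] -- including
/// buffer_size itself, which represents the EOF position. The old check used a
/// strict `<` and wrongly rejected seeking exactly to EOF.
bool seekTargetValid(size_t buffer_size, long long offset)
{
    return offset >= 0 && static_cast<size_t>(offset) <= buffer_size;
}
```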


@ -5,6 +5,7 @@
#include <Core/Defines.h> #include <Core/Defines.h>
#include <Interpreters/Cache/FileCache_fwd.h> #include <Interpreters/Cache/FileCache_fwd.h>
#include <Common/Throttler_fwd.h> #include <Common/Throttler_fwd.h>
#include <Common/Priority.h>
#include <IO/ResourceLink.h> #include <IO/ResourceLink.h>
namespace DB namespace DB
@ -84,8 +85,8 @@ struct ReadSettings
size_t mmap_threshold = 0; size_t mmap_threshold = 0;
MMappedFileCache * mmap_cache = nullptr; MMappedFileCache * mmap_cache = nullptr;
/// For 'pread_threadpool'/'io_uring' method. Lower is more priority. /// For 'pread_threadpool'/'io_uring' method. Lower value is higher priority.
size_t priority = 0; Priority priority;
bool load_marks_asynchronously = true; bool load_marks_asynchronously = true;


@ -361,7 +361,7 @@ namespace
task->exception = std::current_exception(); task->exception = std::current_exception();
} }
task_finish_notify(); task_finish_notify();
}, 0); }, Priority{});
} }
catch (...) catch (...)
{ {


@ -17,7 +17,7 @@ public:
off_t seek(off_t off, int whence) override; off_t seek(off_t off, int whence) override;
void prefetch(int64_t priority) override { impl->prefetch(priority); } void prefetch(Priority priority) override { impl->prefetch(priority); }
private: private:
UInt64 min_bytes_for_seek; /// Minimum positive seek offset which shall be executed using seek operation. UInt64 min_bytes_for_seek; /// Minimum positive seek offset which shall be executed using seek operation.


@ -113,7 +113,7 @@ void WriteBufferFromS3::TaskTracker::add(Callback && func)
{ {
LOG_TEST(log, "add, in queue {}", futures.size()); LOG_TEST(log, "add, in queue {}", futures.size());
auto future = scheduler(std::move(func), 0); auto future = scheduler(std::move(func), Priority{});
auto exit_scope = scope_guard( auto exit_scope = scope_guard(
[&future]() [&future]()
{ {


@ -4269,7 +4269,7 @@ ReadSettings Context::getReadSettings() const
res.prefetch_buffer_size = settings.prefetch_buffer_size; res.prefetch_buffer_size = settings.prefetch_buffer_size;
res.direct_io_threshold = settings.min_bytes_to_use_direct_io; res.direct_io_threshold = settings.min_bytes_to_use_direct_io;
res.mmap_threshold = settings.min_bytes_to_use_mmap_io; res.mmap_threshold = settings.min_bytes_to_use_mmap_io;
res.priority = settings.read_priority; res.priority = Priority{settings.read_priority};
res.remote_throttler = getRemoteReadThrottler(); res.remote_throttler = getRemoteReadThrottler();
res.local_throttler = getLocalReadThrottler(); res.local_throttler = getLocalReadThrottler();


@ -969,6 +969,15 @@ const ASTSelectQuery * ExpressionAnalyzer::getSelectQuery() const
return select_query; return select_query;
} }
bool ExpressionAnalyzer::isRemoteStorage() const
{
const Settings & csettings = getContext()->getSettingsRef();
// Consider any storage used in parallel replicas as remote, so the query is executed on multiple servers
const bool enable_parallel_processing_of_joins
= csettings.max_parallel_replicas > 1 && csettings.allow_experimental_parallel_reading_from_replicas > 0;
return syntax->is_remote_storage || enable_parallel_processing_of_joins;
}
const ASTSelectQuery * SelectQueryExpressionAnalyzer::getAggregatingQuery() const const ASTSelectQuery * SelectQueryExpressionAnalyzer::getAggregatingQuery() const
{ {
if (!has_aggregation) if (!has_aggregation)


@ -201,7 +201,7 @@ protected:
const ASTSelectQuery * getSelectQuery() const; const ASTSelectQuery * getSelectQuery() const;
bool isRemoteStorage() const { return syntax->is_remote_storage; } bool isRemoteStorage() const;
NamesAndTypesList getColumnsAfterArrayJoin(ActionsDAGPtr & actions, const NamesAndTypesList & src_columns); NamesAndTypesList getColumnsAfterArrayJoin(ActionsDAGPtr & actions, const NamesAndTypesList & src_columns);
NamesAndTypesList analyzeJoin(ActionsDAGPtr & actions, const NamesAndTypesList & src_columns); NamesAndTypesList analyzeJoin(ActionsDAGPtr & actions, const NamesAndTypesList & src_columns);


@ -19,7 +19,7 @@ NamesAndTypesList FilesystemReadPrefetchesLogElement::getNamesAndTypes()
{"offset", std::make_shared<DataTypeUInt64>()}, {"offset", std::make_shared<DataTypeUInt64>()},
{"size", std::make_shared<DataTypeInt64>()}, {"size", std::make_shared<DataTypeInt64>()},
{"prefetch_submit_time", std::make_shared<DataTypeDateTime64>(6)}, {"prefetch_submit_time", std::make_shared<DataTypeDateTime64>(6)},
{"priority", std::make_shared<DataTypeUInt64>()}, {"priority", std::make_shared<DataTypeInt64>()},
{"prefetch_execution_start_time", std::make_shared<DataTypeDateTime64>(6)}, {"prefetch_execution_start_time", std::make_shared<DataTypeDateTime64>(6)},
{"prefetch_execution_end_time", std::make_shared<DataTypeDateTime64>(6)}, {"prefetch_execution_end_time", std::make_shared<DataTypeDateTime64>(6)},
{"prefetch_execution_time_us", std::make_shared<DataTypeUInt64>()}, {"prefetch_execution_time_us", std::make_shared<DataTypeUInt64>()},
@ -40,7 +40,7 @@ void FilesystemReadPrefetchesLogElement::appendToBlock(MutableColumns & columns)
columns[i++]->insert(offset); columns[i++]->insert(offset);
columns[i++]->insert(size); columns[i++]->insert(size);
columns[i++]->insert(prefetch_submit_time); columns[i++]->insert(prefetch_submit_time);
columns[i++]->insert(priority); columns[i++]->insert(priority.value);
if (execution_watch) if (execution_watch)
{ {
columns[i++]->insert(execution_watch->getStart()); columns[i++]->insert(execution_watch->getStart());


@ -4,6 +4,7 @@
#include <Core/NamesAndTypes.h> #include <Core/NamesAndTypes.h>
#include <Interpreters/SystemLog.h> #include <Interpreters/SystemLog.h>
#include <Common/Stopwatch.h> #include <Common/Stopwatch.h>
#include <Common/Priority.h>
namespace DB namespace DB
{ {
@ -25,7 +26,7 @@ struct FilesystemReadPrefetchesLogElement
Int64 size; /// -1 means unknown Int64 size; /// -1 means unknown
Decimal64 prefetch_submit_time{}; Decimal64 prefetch_submit_time{};
std::optional<Stopwatch> execution_watch; std::optional<Stopwatch> execution_watch;
size_t priority; Priority priority;
FilesystemPrefetchState state; FilesystemPrefetchState state;
UInt64 thread_id; UInt64 thread_id;
String reader_id; String reader_id;


@ -205,10 +205,19 @@ public:
} }
private: private:
static bool shouldBeExecutedGlobally(const Data & data)
{
const Settings & settings = data.getContext()->getSettingsRef();
/// For parallel replicas we reinterpret JOIN as GLOBAL JOIN as a way to broadcast data
const bool enable_parallel_processing_of_joins = data.getContext()->canUseParallelReplicasOnInitiator();
return settings.prefer_global_in_and_join || enable_parallel_processing_of_joins;
}
/// GLOBAL IN /// GLOBAL IN
static void visit(ASTFunction & func, ASTPtr &, Data & data) static void visit(ASTFunction & func, ASTPtr &, Data & data)
{ {
if ((data.getContext()->getSettingsRef().prefer_global_in_and_join if ((shouldBeExecutedGlobally(data)
&& (func.name == "in" || func.name == "notIn" || func.name == "nullIn" || func.name == "notNullIn")) && (func.name == "in" || func.name == "notIn" || func.name == "nullIn" || func.name == "notNullIn"))
|| func.name == "globalIn" || func.name == "globalNotIn" || func.name == "globalNullIn" || func.name == "globalNotNullIn") || func.name == "globalIn" || func.name == "globalNotIn" || func.name == "globalNullIn" || func.name == "globalNotNullIn")
{ {
@ -238,8 +247,7 @@ private:
static void visit(ASTTablesInSelectQueryElement & table_elem, ASTPtr &, Data & data) static void visit(ASTTablesInSelectQueryElement & table_elem, ASTPtr &, Data & data)
{ {
if (table_elem.table_join if (table_elem.table_join
&& (table_elem.table_join->as<ASTTableJoin &>().locality == JoinLocality::Global && (table_elem.table_join->as<ASTTableJoin &>().locality == JoinLocality::Global || shouldBeExecutedGlobally(data)))
|| data.getContext()->getSettingsRef().prefer_global_in_and_join))
{ {
data.addExternalStorage(table_elem.table_expression, true); data.addExternalStorage(table_elem.table_expression, true);
data.has_global_subqueries = true; data.has_global_subqueries = true;


@ -458,19 +458,11 @@ InterpreterSelectQuery::InterpreterSelectQuery(
} }
} }
/// Check support for JOINs for parallel replicas /// Check support for JOIN for parallel replicas with custom key
if (joined_tables.tablesCount() > 1 && (!settings.parallel_replicas_custom_key.value.empty() || settings.allow_experimental_parallel_reading_from_replicas > 0)) if (joined_tables.tablesCount() > 1 && !settings.parallel_replicas_custom_key.value.empty())
{ {
if (settings.allow_experimental_parallel_reading_from_replicas == 1) LOG_WARNING(log, "JOINs are not supported with parallel_replicas_custom_key. Query will be executed without using them.");
{ context->setSetting("parallel_replicas_custom_key", String{""});
LOG_WARNING(log, "JOINs are not supported with parallel replicas. Query will be executed without using them.");
context->setSetting("allow_experimental_parallel_reading_from_replicas", Field(0));
context->setSetting("parallel_replicas_custom_key", String{""});
}
else if (settings.allow_experimental_parallel_reading_from_replicas == 2)
{
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "JOINs are not supported with parallel replicas");
}
} }
/// Check support for FINAL for parallel replicas /// Check support for FINAL for parallel replicas
@ -489,6 +481,21 @@ InterpreterSelectQuery::InterpreterSelectQuery(
} }
} }
/// Check support for parallel replicas for non-replicated storage (plain MergeTree)
bool is_plain_merge_tree = storage && storage->isMergeTree() && !storage->supportsReplication();
if (is_plain_merge_tree && settings.allow_experimental_parallel_reading_from_replicas > 0 && !settings.parallel_replicas_for_non_replicated_merge_tree)
{
if (settings.allow_experimental_parallel_reading_from_replicas == 1)
{
LOG_WARNING(log, "To use parallel replicas with plain MergeTree tables please enable setting `parallel_replicas_for_non_replicated_merge_tree`. For now query will be executed without using them.");
context->setSetting("allow_experimental_parallel_reading_from_replicas", Field(0));
}
else if (settings.allow_experimental_parallel_reading_from_replicas == 2)
{
throw Exception(ErrorCodes::SUPPORT_IS_DISABLED, "To use parallel replicas with plain MergeTree tables please enable setting `parallel_replicas_for_non_replicated_merge_tree`");
}
}
/// Rewrite JOINs /// Rewrite JOINs
if (!has_input && joined_tables.tablesCount() > 1) if (!has_input && joined_tables.tablesCount() > 1)
{ {


@ -112,8 +112,6 @@ std::shared_ptr<InterpreterSelectWithUnionQuery> interpretSubquery(
subquery_options.removeDuplicates(); subquery_options.removeDuplicates();
} }
/// We don't want to execute reading for subqueries in parallel
subquery_context->setSetting("allow_experimental_parallel_reading_from_replicas", Field(0));
return std::make_shared<InterpreterSelectWithUnionQuery>(query, subquery_context, subquery_options, required_source_columns); return std::make_shared<InterpreterSelectWithUnionQuery>(query, subquery_context, subquery_options, required_source_columns);
} }


@ -11,13 +11,13 @@ namespace DB
/// High-order function to run callbacks (functions with 'void()' signature) somewhere asynchronously. /// High-order function to run callbacks (functions with 'void()' signature) somewhere asynchronously.
template <typename Result, typename Callback = std::function<Result()>> template <typename Result, typename Callback = std::function<Result()>>
using ThreadPoolCallbackRunner = std::function<std::future<Result>(Callback &&, int64_t priority)>; using ThreadPoolCallbackRunner = std::function<std::future<Result>(Callback &&, Priority)>;
/// Creates CallbackRunner that runs every callback with 'pool->scheduleOrThrow()'. /// Creates CallbackRunner that runs every callback with 'pool->scheduleOrThrow()'.
template <typename Result, typename Callback = std::function<Result()>> template <typename Result, typename Callback = std::function<Result()>>
ThreadPoolCallbackRunner<Result, Callback> threadPoolCallbackRunner(ThreadPool & pool, const std::string & thread_name) ThreadPoolCallbackRunner<Result, Callback> threadPoolCallbackRunner(ThreadPool & pool, const std::string & thread_name)
{ {
return [my_pool = &pool, thread_group = CurrentThread::getGroup(), thread_name](Callback && callback, int64_t priority) mutable -> std::future<Result> return [my_pool = &pool, thread_group = CurrentThread::getGroup(), thread_name](Callback && callback, Priority priority) mutable -> std::future<Result>
{ {
auto task = std::make_shared<std::packaged_task<Result()>>([thread_group, thread_name, my_callback = std::move(callback)]() mutable -> Result auto task = std::make_shared<std::packaged_task<Result()>>([thread_group, thread_name, my_callback = std::move(callback)]() mutable -> Result
{ {
@ -44,15 +44,14 @@ ThreadPoolCallbackRunner<Result, Callback> threadPoolCallbackRunner(ThreadPool &
auto future = task->get_future(); auto future = task->get_future();
/// ThreadPool is using "bigger is higher priority" instead of "smaller is more priority". my_pool->scheduleOrThrow([my_task = std::move(task)]{ (*my_task)(); }, priority);
my_pool->scheduleOrThrow([my_task = std::move(task)]{ (*my_task)(); }, -priority);
return future; return future;
}; };
} }
template <typename Result, typename T> template <typename Result, typename T>
std::future<Result> scheduleFromThreadPool(T && task, ThreadPool & pool, const std::string & thread_name, int64_t priority = 0) std::future<Result> scheduleFromThreadPool(T && task, ThreadPool & pool, const std::string & thread_name, Priority priority = {})
{ {
auto schedule = threadPoolCallbackRunner<Result, T>(pool, thread_name); auto schedule = threadPoolCallbackRunner<Result, T>(pool, thread_name);
return schedule(std::move(task), priority); return schedule(std::move(task), priority);

View File

@ -16,7 +16,7 @@ public:
std::optional<bool> null_modifier; std::optional<bool> null_modifier;
String default_specifier; String default_specifier;
ASTPtr default_expression; ASTPtr default_expression;
bool ephemeral_default; bool ephemeral_default = false;
ASTPtr comment; ASTPtr comment;
ASTPtr codec; ASTPtr codec;
ASTPtr ttl; ASTPtr ttl;

View File

@ -19,13 +19,13 @@ public:
/// Attribute expression /// Attribute expression
ASTPtr expression; ASTPtr expression;
/// Is attribute mirrored to the parent identifier /// Is attribute mirrored to the parent identifier
bool hierarchical; bool hierarchical = false;
/// Is hierarchical attribute bidirectional /// Is hierarchical attribute bidirectional
bool bidirectional; bool bidirectional = false;
/// Flag that shows whether the id->attribute image is injective /// Flag that shows whether the id->attribute image is injective
bool injective; bool injective = false;
/// MongoDB object ID /// MongoDB object ID
bool is_object_id; bool is_object_id = false;
String getID(char delim) const override { return "DictionaryAttributeDeclaration" + (delim + name); } String getID(char delim) const override { return "DictionaryAttributeDeclaration" + (delim + name); }

View File

@ -11,14 +11,14 @@ namespace DB
class ASTOrderByElement : public IAST class ASTOrderByElement : public IAST
{ {
public: public:
int direction; /// 1 for ASC, -1 for DESC int direction = 0; /// 1 for ASC, -1 for DESC
int nulls_direction; /// Same as direction for NULLS LAST, opposite for NULLS FIRST. int nulls_direction = 0; /// Same as direction for NULLS LAST, opposite for NULLS FIRST.
bool nulls_direction_was_explicitly_specified; bool nulls_direction_was_explicitly_specified = false;
/** Collation for locale-specific string comparison. If empty, then sorting done by bytes. */ /** Collation for locale-specific string comparison. If empty, then sorting done by bytes. */
ASTPtr collation; ASTPtr collation;
bool with_fill; bool with_fill = false;
ASTPtr fill_from; ASTPtr fill_from;
ASTPtr fill_to; ASTPtr fill_to;
ASTPtr fill_step; ASTPtr fill_step;

View File

@ -35,6 +35,13 @@ void ASTQueryWithOutput::formatImpl(const FormatSettings & s, FormatState & stat
{ {
s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "INTO OUTFILE " << (s.hilite ? hilite_none : ""); s.ostr << (s.hilite ? hilite_keyword : "") << s.nl_or_ws << indent_str << "INTO OUTFILE " << (s.hilite ? hilite_none : "");
out_file->formatImpl(s, state, frame); out_file->formatImpl(s, state, frame);
s.ostr << (s.hilite ? hilite_keyword : "");
if (is_outfile_append)
s.ostr << " APPEND";
if (is_into_outfile_with_stdout)
s.ostr << " AND STDOUT";
s.ostr << (s.hilite ? hilite_none : "");
} }
if (format) if (format)

View File

@ -15,8 +15,8 @@ class ASTQueryWithOutput : public IAST
{ {
public: public:
ASTPtr out_file; ASTPtr out_file;
bool is_into_outfile_with_stdout; bool is_into_outfile_with_stdout = false;
bool is_outfile_append; bool is_outfile_append = false;
ASTPtr format; ASTPtr format;
ASTPtr settings_ast; ASTPtr settings_ast;
ASTPtr compression; ASTPtr compression;

View File

@ -23,7 +23,7 @@ class ASTWatchQuery : public ASTQueryWithTableAndOutput
public: public:
ASTPtr limit_length; ASTPtr limit_length;
bool is_watch_events; bool is_watch_events = false;
ASTWatchQuery() = default; ASTWatchQuery() = default;
String getID(char) const override { return "WatchQuery_" + getDatabase() + "_" + getTable(); } String getID(char) const override { return "WatchQuery_" + getDatabase() + "_" + getTable(); }

View File

@ -64,19 +64,19 @@ bool AsynchronousReadBufferFromHDFS::hasPendingDataToRead()
return true; return true;
} }
std::future<IAsynchronousReader::Result> AsynchronousReadBufferFromHDFS::asyncReadInto(char * data, size_t size, int64_t priority) std::future<IAsynchronousReader::Result> AsynchronousReadBufferFromHDFS::asyncReadInto(char * data, size_t size, Priority priority)
{ {
IAsynchronousReader::Request request; IAsynchronousReader::Request request;
request.descriptor = std::make_shared<RemoteFSFileDescriptor>(*impl, nullptr); request.descriptor = std::make_shared<RemoteFSFileDescriptor>(*impl, nullptr);
request.buf = data; request.buf = data;
request.size = size; request.size = size;
request.offset = file_offset_of_buffer_end; request.offset = file_offset_of_buffer_end;
request.priority = base_priority + priority; request.priority = Priority{base_priority.value + priority.value};
request.ignore = 0; request.ignore = 0;
return reader.submit(request); return reader.submit(request);
} }
void AsynchronousReadBufferFromHDFS::prefetch(int64_t priority) void AsynchronousReadBufferFromHDFS::prefetch(Priority priority)
{ {
interval_watch.restart(); interval_watch.restart();

View File

@ -33,7 +33,7 @@ public:
off_t seek(off_t offset_, int whence) override; off_t seek(off_t offset_, int whence) override;
void prefetch(int64_t priority) override; void prefetch(Priority priority) override;
size_t getFileSize() override; size_t getFileSize() override;
@ -50,10 +50,10 @@ private:
bool hasPendingDataToRead(); bool hasPendingDataToRead();
std::future<IAsynchronousReader::Result> asyncReadInto(char * data, size_t size, int64_t priority); std::future<IAsynchronousReader::Result> asyncReadInto(char * data, size_t size, Priority priority);
IAsynchronousReader & reader; IAsynchronousReader & reader;
int64_t base_priority; Priority base_priority;
std::shared_ptr<ReadBufferFromHDFS> impl; std::shared_ptr<ReadBufferFromHDFS> impl;
std::future<IAsynchronousReader::Result> prefetch_future; std::future<IAsynchronousReader::Result> prefetch_future;
Memory<> prefetch_buffer; Memory<> prefetch_buffer;

View File

@ -61,7 +61,7 @@ public:
MergeTreeDataPartInfoForReaderPtr data_part_info_for_read; MergeTreeDataPartInfoForReaderPtr data_part_info_for_read;
virtual void prefetchBeginOfRange(int64_t /* priority */) {} virtual void prefetchBeginOfRange(Priority) {}
protected: protected:
/// Returns actual column name in part, which can differ from table metadata. /// Returns actual column name in part, which can differ from table metadata.

View File

@ -142,7 +142,7 @@ MergeTreeReadTask::MergeTreeReadTask(
const NameSet & column_name_set_, const NameSet & column_name_set_,
const MergeTreeReadTaskColumns & task_columns_, const MergeTreeReadTaskColumns & task_columns_,
MergeTreeBlockSizePredictorPtr size_predictor_, MergeTreeBlockSizePredictorPtr size_predictor_,
int64_t priority_, Priority priority_,
std::future<MergeTreeReaderPtr> reader_, std::future<MergeTreeReaderPtr> reader_,
std::vector<std::future<MergeTreeReaderPtr>> && pre_reader_for_step_) std::vector<std::future<MergeTreeReaderPtr>> && pre_reader_for_step_)
: data_part{data_part_} : data_part{data_part_}

View File

@ -71,11 +71,7 @@ struct MergeTreeReadTask
std::future<MergeTreeReaderPtr> reader; std::future<MergeTreeReaderPtr> reader;
std::vector<std::future<MergeTreeReaderPtr>> pre_reader_for_step; std::vector<std::future<MergeTreeReaderPtr>> pre_reader_for_step;
int64_t priority = 0; /// Priority of the task. Bigger value, bigger priority. Priority priority;
bool operator <(const MergeTreeReadTask & rhs) const
{
return priority < rhs.priority;
}
bool isFinished() const { return mark_ranges.empty() && range_reader.isCurrentRangeFinished(); } bool isFinished() const { return mark_ranges.empty() && range_reader.isCurrentRangeFinished(); }
@ -86,7 +82,7 @@ struct MergeTreeReadTask
const NameSet & column_name_set_, const NameSet & column_name_set_,
const MergeTreeReadTaskColumns & task_columns_, const MergeTreeReadTaskColumns & task_columns_,
MergeTreeBlockSizePredictorPtr size_predictor_, MergeTreeBlockSizePredictorPtr size_predictor_,
int64_t priority_ = 0, Priority priority_ = {},
std::future<MergeTreeReaderPtr> reader_ = {}, std::future<MergeTreeReaderPtr> reader_ = {},
std::vector<std::future<MergeTreeReaderPtr>> && pre_reader_for_step_ = {}); std::vector<std::future<MergeTreeReaderPtr>> && pre_reader_for_step_ = {});

View File

@ -1967,7 +1967,7 @@ try
res.part->remove(); res.part->remove();
else else
preparePartForRemoval(res.part); preparePartForRemoval(res.part);
}, 0)); }, Priority{}));
} }
/// Wait for every scheduled task /// Wait for every scheduled task

View File

@ -90,7 +90,7 @@ std::future<MergeTreeReaderPtr> MergeTreePrefetchedReadPool::createPrefetchedRea
const IMergeTreeDataPart & data_part, const IMergeTreeDataPart & data_part,
const NamesAndTypesList & columns, const NamesAndTypesList & columns,
const MarkRanges & required_ranges, const MarkRanges & required_ranges,
int64_t priority) const Priority priority) const
{ {
auto reader = data_part.getReader( auto reader = data_part.getReader(
columns, storage_snapshot->metadata, required_ranges, columns, storage_snapshot->metadata, required_ranges,
@ -142,7 +142,7 @@ bool MergeTreePrefetchedReadPool::TaskHolder::operator <(const TaskHolder & othe
{ {
chassert(task->priority >= 0); chassert(task->priority >= 0);
chassert(other.task->priority >= 0); chassert(other.task->priority >= 0);
return -task->priority < -other.task->priority; /// Less is better. return task->priority > other.task->priority; /// Less is better.
/// With default std::priority_queue, top() returns largest element. /// With default std::priority_queue, top() returns largest element.
/// So closest to 0 will be on top with this comparator. /// So closest to 0 will be on top with this comparator.
} }
@ -153,7 +153,7 @@ void MergeTreePrefetchedReadPool::startPrefetches() const
return; return;
[[maybe_unused]] TaskHolder prev(nullptr, 0); [[maybe_unused]] TaskHolder prev(nullptr, 0);
[[maybe_unused]] const int64_t highest_priority = reader_settings.read_settings.priority + 1; [[maybe_unused]] const Priority highest_priority{reader_settings.read_settings.priority.value + 1};
assert(prefetch_queue.top().task->priority == highest_priority); assert(prefetch_queue.top().task->priority == highest_priority);
while (!prefetch_queue.empty()) while (!prefetch_queue.empty())
{ {
@ -495,11 +495,11 @@ MergeTreePrefetchedReadPool::ThreadsTasks MergeTreePrefetchedReadPool::createThr
auto need_marks = min_marks_per_thread; auto need_marks = min_marks_per_thread;
/// Priority is given according to the prefetch number for each thread, /// Priority is given according to the prefetch number for each thread,
/// e.g. the first task of each thread has the same priority and is bigger /// e.g. the first task of each thread has the same priority and is greater
/// than second task of each thread, and so on. /// than the second task of each thread, and so on.
/// Add 1 to query read priority because higher priority should be given to /// Add 1 to query read priority because higher priority should be given to
/// reads from pool which are from reader. /// reads from pool which are from reader.
int64_t priority = reader_settings.read_settings.priority + 1; Priority priority{reader_settings.read_settings.priority.value + 1};
while (need_marks > 0 && part_idx < parts_infos.size()) while (need_marks > 0 && part_idx < parts_infos.size())
{ {
@ -597,7 +597,7 @@ MergeTreePrefetchedReadPool::ThreadsTasks MergeTreePrefetchedReadPool::createThr
{ {
prefetch_queue.emplace(TaskHolder(read_task.get(), i)); prefetch_queue.emplace(TaskHolder(read_task.get(), i));
} }
++priority; ++priority.value;
result_threads_tasks[i].push_back(std::move(read_task)); result_threads_tasks[i].push_back(std::move(read_task));
} }

View File

@ -53,12 +53,11 @@ private:
using ThreadTasks = std::deque<MergeTreeReadTaskPtr>; using ThreadTasks = std::deque<MergeTreeReadTaskPtr>;
using ThreadsTasks = std::map<size_t, ThreadTasks>; using ThreadsTasks = std::map<size_t, ThreadTasks>;
/// smaller `priority` means more priority
std::future<MergeTreeReaderPtr> createPrefetchedReader( std::future<MergeTreeReaderPtr> createPrefetchedReader(
const IMergeTreeDataPart & data_part, const IMergeTreeDataPart & data_part,
const NamesAndTypesList & columns, const NamesAndTypesList & columns,
const MarkRanges & required_ranges, const MarkRanges & required_ranges,
int64_t priority) const; Priority priority) const;
void createPrefetchedReaderForTask(MergeTreeReadTask & task) const; void createPrefetchedReaderForTask(MergeTreeReadTask & task) const;

View File

@ -314,7 +314,7 @@ void MergeTreeReaderCompact::readData(
last_read_granule.emplace(from_mark, column_position); last_read_granule.emplace(from_mark, column_position);
} }
void MergeTreeReaderCompact::prefetchBeginOfRange(int64_t priority) void MergeTreeReaderCompact::prefetchBeginOfRange(Priority priority)
{ {
if (!initialized) if (!initialized)
{ {

View File

@ -38,7 +38,7 @@ public:
bool canReadIncompleteGranules() const override { return false; } bool canReadIncompleteGranules() const override { return false; }
void prefetchBeginOfRange(int64_t priority) override; void prefetchBeginOfRange(Priority priority) override;
private: private:
bool isContinuousReading(size_t mark, size_t column_position); bool isContinuousReading(size_t mark, size_t column_position);

View File

@ -58,7 +58,7 @@ MergeTreeReaderWide::MergeTreeReaderWide(
} }
} }
void MergeTreeReaderWide::prefetchBeginOfRange(int64_t priority) void MergeTreeReaderWide::prefetchBeginOfRange(Priority priority)
{ {
prefetched_streams.clear(); prefetched_streams.clear();
@ -90,7 +90,7 @@ void MergeTreeReaderWide::prefetchBeginOfRange(int64_t priority)
} }
void MergeTreeReaderWide::prefetchForAllColumns( void MergeTreeReaderWide::prefetchForAllColumns(
int64_t priority, size_t num_columns, size_t from_mark, size_t current_task_last_mark, bool continue_reading) Priority priority, size_t num_columns, size_t from_mark, size_t current_task_last_mark, bool continue_reading)
{ {
bool do_prefetch = data_part_info_for_read->getDataPartStorage()->isStoredOnRemoteDisk() bool do_prefetch = data_part_info_for_read->getDataPartStorage()->isStoredOnRemoteDisk()
? settings.read_settings.remote_fs_prefetch ? settings.read_settings.remote_fs_prefetch
@ -137,7 +137,7 @@ size_t MergeTreeReaderWide::readRows(
if (num_columns == 0) if (num_columns == 0)
return max_rows_to_read; return max_rows_to_read;
prefetchForAllColumns(/* priority */0, num_columns, from_mark, current_task_last_mark, continue_reading); prefetchForAllColumns(Priority{}, num_columns, from_mark, current_task_last_mark, continue_reading);
for (size_t pos = 0; pos < num_columns; ++pos) for (size_t pos = 0; pos < num_columns; ++pos)
{ {
@ -305,7 +305,7 @@ void MergeTreeReaderWide::deserializePrefix(
} }
void MergeTreeReaderWide::prefetchForColumn( void MergeTreeReaderWide::prefetchForColumn(
int64_t priority, Priority priority,
const NameAndTypePair & name_and_type, const NameAndTypePair & name_and_type,
const SerializationPtr & serialization, const SerializationPtr & serialization,
size_t from_mark, size_t from_mark,

View File

@ -33,14 +33,14 @@ public:
bool canReadIncompleteGranules() const override { return true; } bool canReadIncompleteGranules() const override { return true; }
void prefetchBeginOfRange(int64_t priority) override; void prefetchBeginOfRange(Priority priority) override;
using FileStreams = std::map<std::string, std::unique_ptr<MergeTreeReaderStream>>; using FileStreams = std::map<std::string, std::unique_ptr<MergeTreeReaderStream>>;
private: private:
FileStreams streams; FileStreams streams;
void prefetchForAllColumns(int64_t priority, size_t num_columns, size_t from_mark, size_t current_task_last_mark, bool continue_reading); void prefetchForAllColumns(Priority priority, size_t num_columns, size_t from_mark, size_t current_task_last_mark, bool continue_reading);
void addStreams( void addStreams(
const NameAndTypePair & name_and_type, const NameAndTypePair & name_and_type,
@ -55,7 +55,7 @@ private:
/// Make next readData more simple by calling 'prefetch' of all related ReadBuffers (column streams). /// Make next readData more simple by calling 'prefetch' of all related ReadBuffers (column streams).
void prefetchForColumn( void prefetchForColumn(
int64_t priority, Priority priority,
const NameAndTypePair & name_and_type, const NameAndTypePair & name_and_type,
const SerializationPtr & serialization, const SerializationPtr & serialization,
size_t from_mark, size_t from_mark,

View File

@ -84,7 +84,7 @@ struct MergeTreeSource::AsyncReadingState
{ {
try try
{ {
callback_runner(std::move(job), 0); callback_runner(std::move(job), Priority{});
} }
catch (...) catch (...)
{ {

View File

@ -356,7 +356,7 @@ private:
request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken()); request.SetContinuationToken(outcome.GetResult().GetNextContinuationToken());
return outcome; return outcome;
}, 0); }, Priority{});
} }
std::mutex mutex; std::mutex mutex;
@ -619,7 +619,7 @@ StorageS3Source::ReaderHolder StorageS3Source::createReader()
std::future<StorageS3Source::ReaderHolder> StorageS3Source::createReaderAsync() std::future<StorageS3Source::ReaderHolder> StorageS3Source::createReaderAsync()
{ {
return create_reader_scheduler([this] { return createReader(); }, 0); return create_reader_scheduler([this] { return createReader(); }, Priority{});
} }
StorageS3Source::ReadBufferOrFactory StorageS3Source::createS3ReadBuffer(const String & key, size_t object_size) StorageS3Source::ReadBufferOrFactory StorageS3Source::createS3ReadBuffer(const String & key, size_t object_size)

View File

@ -655,6 +655,7 @@ sleep
sleepEachRow sleepEachRow
snowflakeToDateTime snowflakeToDateTime
snowflakeToDateTime64 snowflakeToDateTime64
space
splitByChar splitByChar
splitByNonAlpha splitByNonAlpha
splitByRegexp splitByRegexp

View File

@ -41,6 +41,6 @@ run_count_with_custom_key "y"
run_count_with_custom_key "cityHash64(y)" run_count_with_custom_key "cityHash64(y)"
run_count_with_custom_key "cityHash64(y) + 1" run_count_with_custom_key "cityHash64(y) + 1"
$CLICKHOUSE_CLIENT --query="SELECT count() FROM cluster(test_cluster_one_shard_three_replicas_localhost, currentDatabase(), 02535_custom_key) as t1 JOIN 02535_custom_key USING y" --parallel_replicas_custom_key="y" --send_logs_level="trace" 2>&1 | grep -Fac "JOINs are not supported with parallel replicas" $CLICKHOUSE_CLIENT --query="SELECT count() FROM cluster(test_cluster_one_shard_three_replicas_localhost, currentDatabase(), 02535_custom_key) as t1 JOIN 02535_custom_key USING y" --parallel_replicas_custom_key="y" --send_logs_level="trace" 2>&1 | grep -Fac "JOINs are not supported with"
$CLICKHOUSE_CLIENT --query="DROP TABLE 02535_custom_key" $CLICKHOUSE_CLIENT --query="DROP TABLE 02535_custom_key"

View File

@ -1,3 +1,4 @@
CREATE TABLE IF NOT EXISTS t_02708(x DateTime) ENGINE = MergeTree ORDER BY tuple(); CREATE TABLE IF NOT EXISTS t_02708(x DateTime) ENGINE = MergeTree ORDER BY tuple();
SET send_logs_level='error';
SELECT count() FROM t_02708 SETTINGS allow_experimental_parallel_reading_from_replicas=1; SELECT count() FROM t_02708 SETTINGS allow_experimental_parallel_reading_from_replicas=1;
DROP TABLE t_02708; DROP TABLE t_02708;

View File

@ -0,0 +1,44 @@
=============== INNER QUERY (NO PARALLEL) ===============
0 PJFiUe#J2O _s\' 14427935816175499794
1 >T%O ,z< 17537932797009027240
12 D[6,P #}Lmb[ ZzU 6394957109822140795
18 $_N- 24422838680427462
2 bX?}ix [ Ny]2 G 16242612901291874718
20 VE] Y 15120036904703536841
22 Ti~3)N)< A!( 3 18361093572663329113
23 Sx>b:^UG XpedE)Q: 7433019734386307503
29 2j&S)ba?XG QuQj 17163829389637435056
3 UlI+1 14144472852965836438
=============== INNER QUERY (PARALLEL) ===============
0 PJFiUe#J2O _s\' 14427935816175499794
1 >T%O ,z< 17537932797009027240
12 D[6,P #}Lmb[ ZzU 6394957109822140795
18 $_N- 24422838680427462
2 bX?}ix [ Ny]2 G 16242612901291874718
20 VE] Y 15120036904703536841
22 Ti~3)N)< A!( 3 18361093572663329113
23 Sx>b:^UG XpedE)Q: 7433019734386307503
29 2j&S)ba?XG QuQj 17163829389637435056
3 UlI+1 14144472852965836438
=============== QUERIES EXECUTED BY PARALLEL INNER QUERY ALONE ===============
0 3 SELECT `key`, `value1`, `value2`, toUInt64(min(`time`)) AS `start_ts` FROM `default`.`join_inner_table` PREWHERE (`id` = \'833c9e22-c245-4eb5-8745-117a9a1f26b1\') AND (`number` > toUInt64(\'1610517366120\')) GROUP BY `key`, `value1`, `value2` ORDER BY `key` ASC, `value1` ASC, `value2` ASC LIMIT 10
1 1 -- Parallel inner query alone\nSELECT\n key,\n value1,\n value2,\n toUInt64(min(time)) AS start_ts\nFROM join_inner_table\nPREWHERE (id = \'833c9e22-c245-4eb5-8745-117a9a1f26b1\') AND (number > toUInt64(\'1610517366120\'))\nGROUP BY key, value1, value2\nORDER BY key, value1, value2\nLIMIT 10\nSETTINGS allow_experimental_parallel_reading_from_replicas = 1;
=============== OUTER QUERY (NO PARALLEL) ===============
>T%O ,z< 10
NQTpY# W\\Xx4 10
PJFiUe#J2O _s\' 10
U c 10
UlI+1 10
bX?}ix [ Ny]2 G 10
t<iT X48q:Z]t0 10
=============== OUTER QUERY (PARALLEL) ===============
>T%O ,z< 10
NQTpY# W\\Xx4 10
PJFiUe#J2O _s\' 10
U c 10
UlI+1 10
bX?}ix [ Ny]2 G 10
t<iT X48q:Z]t0 10
0 3 SELECT `key`, `value1`, `value2`, toUInt64(min(`time`)) AS `start_ts` FROM `default`.`join_inner_table` PREWHERE (`id` = \'833c9e22-c245-4eb5-8745-117a9a1f26b1\') AND (`number` > toUInt64(\'1610517366120\')) GROUP BY `key`, `value1`, `value2`
0 3 SELECT `value1`, `value2`, count() AS `count` FROM `default`.`join_outer_table` ALL INNER JOIN `_data_11888098645495698704_17868075224240210014` USING (`key`) GROUP BY `key`, `value1`, `value2`
1 1 -- Parallel full query\nSELECT\n value1,\n value2,\n avg(count) AS avg\nFROM\n (\n SELECT\n key,\n value1,\n value2,\n count() AS count\n FROM join_outer_table\n INNER JOIN\n (\n SELECT\n key,\n value1,\n value2,\n toUInt64(min(time)) AS start_ts\n FROM join_inner_table\n PREWHERE (id = \'833c9e22-c245-4eb5-8745-117a9a1f26b1\') AND (number > toUInt64(\'1610517366120\'))\n GROUP BY key, value1, value2\n ) USING (key)\n GROUP BY key, value1, value2\n )\nGROUP BY value1, value2\nORDER BY value1, value2\nSETTINGS allow_experimental_parallel_reading_from_replicas = 1;

View File

@ -0,0 +1,182 @@
-- Tags: zookeeper
CREATE TABLE join_inner_table
(
id UUID,
key String,
number Int64,
value1 String,
value2 String,
time Int64
)
ENGINE=ReplicatedMergeTree('/clickhouse/tables/{database}/join_inner_table', 'r1')
ORDER BY (id, number, key);
INSERT INTO join_inner_table
SELECT
'833c9e22-c245-4eb5-8745-117a9a1f26b1'::UUID as id,
rowNumberInAllBlocks()::String as key,
* FROM generateRandom('number Int64, value1 String, value2 String, time Int64', 1, 10, 2)
LIMIT 100;
SET allow_experimental_analyzer = 0;
SET max_parallel_replicas = 3;
SET prefer_localhost_replica = 1;
SET cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost';
SET use_hedged_requests = 0;
SET joined_subquery_requires_alias = 0;
SELECT '=============== INNER QUERY (NO PARALLEL) ===============';
SELECT
key,
value1,
value2,
toUInt64(min(time)) AS start_ts
FROM join_inner_table
PREWHERE (id = '833c9e22-c245-4eb5-8745-117a9a1f26b1') AND (number > toUInt64('1610517366120'))
GROUP BY key, value1, value2
ORDER BY key, value1, value2
LIMIT 10;
SELECT '=============== INNER QUERY (PARALLEL) ===============';
-- Parallel inner query alone
SELECT
key,
value1,
value2,
toUInt64(min(time)) AS start_ts
FROM join_inner_table
PREWHERE (id = '833c9e22-c245-4eb5-8745-117a9a1f26b1') AND (number > toUInt64('1610517366120'))
GROUP BY key, value1, value2
ORDER BY key, value1, value2
LIMIT 10
SETTINGS allow_experimental_parallel_reading_from_replicas = 1;
SELECT '=============== QUERIES EXECUTED BY PARALLEL INNER QUERY ALONE ===============';
SYSTEM FLUSH LOGS;
-- There should be 4 queries. The main query as received by the initiator and the 3 equal queries sent to each replica
SELECT is_initial_query, count() as c, query,
FROM system.query_log
WHERE
event_date >= yesterday()
AND type = 'QueryFinish'
AND initial_query_id =
(
SELECT query_id
FROM system.query_log
WHERE
current_database = currentDatabase()
AND event_date >= yesterday()
AND type = 'QueryFinish'
AND query LIKE '-- Parallel inner query alone%'
)
GROUP BY is_initial_query, query
ORDER BY is_initial_query, c, query;
---- Query with JOIN
CREATE TABLE join_outer_table
(
id UUID,
key String,
otherValue1 String,
otherValue2 String,
time Int64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{database}/join_outer_table', 'r1')
ORDER BY (id, time, key);
INSERT INTO join_outer_table
SELECT
'833c9e22-c245-4eb5-8745-117a9a1f26b1'::UUID as id,
(rowNumberInAllBlocks() % 10)::String as key,
* FROM generateRandom('otherValue1 String, otherValue2 String, time Int64', 1, 10, 2)
LIMIT 100;
SELECT '=============== OUTER QUERY (NO PARALLEL) ===============';
SELECT
value1,
value2,
avg(count) AS avg
FROM
(
SELECT
key,
value1,
value2,
count() AS count
FROM join_outer_table
INNER JOIN
(
SELECT
key,
value1,
value2,
toUInt64(min(time)) AS start_ts
FROM join_inner_table
PREWHERE (id = '833c9e22-c245-4eb5-8745-117a9a1f26b1') AND (number > toUInt64('1610517366120'))
GROUP BY key, value1, value2
) USING (key)
GROUP BY key, value1, value2
)
GROUP BY value1, value2
ORDER BY value1, value2;
SELECT '=============== OUTER QUERY (PARALLEL) ===============';
-- Parallel full query
SELECT
value1,
value2,
avg(count) AS avg
FROM
(
SELECT
key,
value1,
value2,
count() AS count
FROM join_outer_table
INNER JOIN
(
SELECT
key,
value1,
value2,
toUInt64(min(time)) AS start_ts
FROM join_inner_table
PREWHERE (id = '833c9e22-c245-4eb5-8745-117a9a1f26b1') AND (number > toUInt64('1610517366120'))
GROUP BY key, value1, value2
) USING (key)
GROUP BY key, value1, value2
)
GROUP BY value1, value2
ORDER BY value1, value2
SETTINGS allow_experimental_parallel_reading_from_replicas = 1;
SYSTEM FLUSH LOGS;
-- There should be 7 queries. The main query as received by the initiator, the 3 equal queries to execute the subquery
-- in the inner join and the 3 queries executing the whole query (but replacing the subquery with a temp table)
SELECT is_initial_query, count() as c, query,
FROM system.query_log
WHERE
event_date >= yesterday()
AND type = 'QueryFinish'
AND initial_query_id =
(
SELECT query_id
FROM system.query_log
WHERE
current_database = currentDatabase()
AND event_date >= yesterday()
AND type = 'QueryFinish'
AND query LIKE '-- Parallel full query%'
)
GROUP BY is_initial_query, query
ORDER BY is_initial_query, c, query;

View File

@ -0,0 +1,43 @@
CREATE TABLE join_inner_table__fuzz_1
(
`id` UUID,
`key` Nullable(Date),
`number` Int64,
`value1` LowCardinality(String),
`value2` LowCardinality(String),
`time` Int128
)
ENGINE = MergeTree
ORDER BY (id, number, key)
SETTINGS allow_nullable_key = 1;
INSERT INTO join_inner_table__fuzz_1 SELECT
CAST('833c9e22-c245-4eb5-8745-117a9a1f26b1', 'UUID') AS id,
CAST(rowNumberInAllBlocks(), 'String') AS key,
*
FROM generateRandom('number Int64, value1 String, value2 String, time Int64', 1, 10, 2)
LIMIT 100;
SET max_parallel_replicas = 3, prefer_localhost_replica = 1, use_hedged_requests = 0, cluster_for_parallel_replicas = 'test_cluster_one_shard_three_replicas_localhost', allow_experimental_parallel_reading_from_replicas = 1;
-- SELECT query will write a Warning to the logs
SET send_logs_level='error';
SELECT
key,
value1,
value2,
toUInt64(min(time)) AS start_ts
FROM join_inner_table__fuzz_1
PREWHERE (id = '833c9e22-c245-4eb5-8745-117a9a1f26b1') AND (number > toUInt64('1610517366120'))
GROUP BY
key,
value1,
value2
WITH ROLLUP
ORDER BY
key ASC,
value1 ASC,
value2 ASC NULLS LAST
LIMIT 10
FORMAT Null;

View File

@ -0,0 +1,86 @@
const, uint
3
3
3
3
const, int
3
3
3
3
const, int, negative
0
0
0
0
negative tests
null
\N
const, uint, multiple
const int, multiple
non-const, uint
3
2
1
0
12
10
4
5
4
21
9
7
56
20
5
7
non-const, int
3
2
1
0
12
10
4
5
0
0
0
0
56
20
5
7

View File

@ -0,0 +1,64 @@
SELECT 'const, uint';
SELECT space(3::UInt8), length(space(3::UInt8));
SELECT space(3::UInt16), length(space(3::UInt16));
SELECT space(3::UInt32), length(space(3::UInt32));
SELECT space(3::UInt64), length(space(3::UInt64));
SELECT 'const, int';
SELECT space(3::Int8), length(space(3::Int8));
SELECT space(3::Int16), length(space(3::Int16));
SELECT space(3::Int32), length(space(3::Int32));
SELECT space(3::Int64), length(space(3::Int64));
SELECT 'const, int, negative';
SELECT space(-3::Int8), length(space(-3::Int8));
SELECT space(-3::Int16), length(space(-3::Int16));
SELECT space(-3::Int32), length(space(-3::Int32));
SELECT space(-3::Int64), length(space(-3::Int64));
SELECT 'negative tests';
SELECT space('abc'); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
SELECT space(['abc']); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
SELECT space(('abc')); -- { serverError ILLEGAL_TYPE_OF_ARGUMENT }
SELECT space(30303030303030303030303030303030::UInt64); -- { serverError TOO_LARGE_STRING_SIZE }
SELECT 'null';
SELECT space(NULL);
DROP TABLE IF EXISTS defaults;
CREATE TABLE defaults
(
u8 UInt8,
u16 UInt16,
u32 UInt32,
u64 UInt64,
i8 Int8,
i16 Int16,
i32 Int32,
i64 Int64
) ENGINE = Memory();
INSERT INTO defaults values (3, 12, 4, 56, 3, 12, -4, 56) (2, 10, 21, 20, 2, 10, -21, 20) (1, 4, 9, 5, 1, 4, -9, 5) (0, 5, 7, 7, 0, 5, -7, 7);
SELECT 'const, uint, multiple';
SELECT space(30::UInt8) FROM defaults;
SELECT space(30::UInt16) FROM defaults;
SELECT space(30::UInt32) FROM defaults;
SELECT space(30::UInt64) FROM defaults;
SELECT 'const int, multiple';
SELECT space(30::Int8) FROM defaults;
SELECT space(30::Int16) FROM defaults;
SELECT space(30::Int32) FROM defaults;
SELECT space(30::Int64) FROM defaults;
SELECT 'non-const, uint';
SELECT space(u8), length(space(u8)) FROM defaults;
SELECT space(u16), length(space(u16)) FROM defaults;
SELECT space(u32), length(space(u32)) from defaults;
SELECT space(u64), length(space(u64)) FROM defaults;
SELECT 'non-const, int';
SELECT space(i8), length(space(i8)) FROM defaults;
SELECT space(i16), length(space(i16)) FROM defaults;
SELECT space(i32), length(space(i32)) from defaults;
SELECT space(i64), length(space(i64)) FROM defaults;
DROP TABLE defaults;

View File

@ -2,7 +2,7 @@ CREATE TABLE IF NOT EXISTS parallel_replicas_plain (x String) ENGINE=MergeTree()
INSERT INTO parallel_replicas_plain SELECT toString(number) FROM numbers(10); INSERT INTO parallel_replicas_plain SELECT toString(number) FROM numbers(10);
SET max_parallel_replicas=3, allow_experimental_parallel_reading_from_replicas=1, use_hedged_requests=0, cluster_for_parallel_replicas='parallel_replicas'; SET max_parallel_replicas=3, allow_experimental_parallel_reading_from_replicas=1, use_hedged_requests=0, cluster_for_parallel_replicas='parallel_replicas';
SET send_logs_level='error';
SET parallel_replicas_for_non_replicated_merge_tree = 0; SET parallel_replicas_for_non_replicated_merge_tree = 0;
SELECT x FROM parallel_replicas_plain LIMIT 1 FORMAT Null; SELECT x FROM parallel_replicas_plain LIMIT 1 FORMAT Null;

View File

@ -0,0 +1,2 @@
Expression ((Projection + Before ORDER BY))
ReadFromStorage (SystemNumbers)

View File

@ -0,0 +1,11 @@
#!/usr/bin/env bash
CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh
out="explain1.$CLICKHOUSE_TEST_UNIQUE_NAME.out"
# only EXPLAIN triggers the problem under MSan
$CLICKHOUSE_CLIENT -q "explain select * from numbers(1) into outfile '$out'"
cat "$out"
rm -f "$out"

View File

@ -0,0 +1,20 @@
SELECT *
FROM numbers(1)
INTO OUTFILE '/dev/null'
;
SELECT *
FROM numbers(1)
INTO OUTFILE '/dev/null' AND STDOUT
;
SELECT *
FROM numbers(1)
INTO OUTFILE '/dev/null' APPEND
;
SELECT *
FROM numbers(1)
INTO OUTFILE '/dev/null' APPEND AND STDOUT
;

View File

@ -0,0 +1,12 @@
#!/usr/bin/env bash
CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh
echo "
select * from numbers(1) into outfile '/dev/null';
select * from numbers(1) into outfile '/dev/null' and stdout;
select * from numbers(1) into outfile '/dev/null' append;
select * from numbers(1) into outfile '/dev/null' append and stdout;
" | clickhouse-format -n