Merge branch 'master' into docs-organize-user-profiles

Dan Roscigno 2023-07-11 17:59:48 -04:00 committed by GitHub
commit 7756620f15
50 changed files with 727 additions and 138 deletions


@ -471,6 +471,7 @@ The CSV format supports the output of totals and extremes the same way as `TabSe
- [input_format_csv_skip_trailing_empty_lines](/docs/en/operations/settings/settings-formats.md/#input_format_csv_skip_trailing_empty_lines) - skip trailing empty lines at the end of data. Default value - `false`.
- [input_format_csv_trim_whitespaces](/docs/en/operations/settings/settings-formats.md/#input_format_csv_trim_whitespaces) - trim spaces and tabs in non-quoted CSV strings. Default value - `true`.
- [input_format_csv_allow_whitespace_or_tab_as_delimiter](/docs/en/operations/settings/settings-formats.md/#input_format_csv_allow_whitespace_or_tab_as_delimiter) - allow whitespace or tab to be used as the field delimiter in CSV strings. Default value - `false`.
- [input_format_csv_allow_variable_number_of_columns](/docs/en/operations/settings/settings-formats.md/#input_format_csv_allow_variable_number_of_columns) - ignore extra columns in CSV input (if the file has more columns than expected) and treat missing fields in CSV input as default values. Default value - `false`.
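A minimal sketch of the behaviour (the table and values are illustrative and mirror the tests added in this change; run the `INSERT` via `clickhouse-client`, where the CSV rows follow the query):

```sql
CREATE TABLE csv_demo (s String, n UInt64 DEFAULT 3, d String DEFAULT 'unknown') ENGINE = Memory;

SET input_format_csv_allow_variable_number_of_columns = 1,
    input_format_defaults_for_omitted_fields = 1;

-- The extra fourth field of the first row is ignored;
-- the missing fields of the second row fall back to the column defaults (n = 3, d = 'unknown').
INSERT INTO csv_demo FORMAT CSV
"Hello", 1, "String1", "extra"
"World"
```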
## CSVWithNames {#csvwithnames}


@ -931,6 +931,11 @@ Result
```text
" string "
```
### input_format_csv_allow_variable_number_of_columns {#input_format_csv_allow_variable_number_of_columns}
Ignore extra columns in CSV input (if the file has more columns than expected) and treat missing fields in CSV input as default values.
Disabled by default.
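A small illustration (the file name and contents are hypothetical); the setting also applies when reading CSV through table functions:

```sql
-- rows.csv (hypothetical contents):
--   1,one,extra
--   2
SELECT *
FROM file('rows.csv', 'CSV', 'id UInt64, name String')
SETTINGS input_format_csv_allow_variable_number_of_columns = 1;
-- The extra third field of the first line is ignored;
-- the missing `name` of the second line becomes the empty-string default.
```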
### input_format_csv_allow_whitespace_or_tab_as_delimiter {#input_format_csv_allow_whitespace_or_tab_as_delimiter}


@ -718,7 +718,7 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d
## age
Returns the `unit` component of the difference between `startdate` and `enddate`. The difference is calculated using a precision of 1 second.
Returns the `unit` component of the difference between `startdate` and `enddate`. The difference is calculated using a precision of 1 microsecond.
E.g. the difference between `2021-12-29` and `2022-01-01` is 3 days for the `day` unit, 0 months for the `month` unit, and 0 years for the `year` unit.
For an alternative to `age`, see function `date_diff`.
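For example, with the new sub-second units (the results below match the tests added in this change):

```sql
SELECT age('microsecond',
           toDateTime64('2015-08-18 20:30:36.100200005', 9, 'UTC'),
           toDateTime64('2015-08-18 20:30:41.200400005', 9, 'UTC'));
-- 5100200

SELECT age('millisecond',
           toDateTime64('2015-08-18 20:30:36.450299', 6, 'UTC'),
           toDateTime64('2015-08-18 20:30:41.550299', 6, 'UTC'));
-- 5100
```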
@ -734,6 +734,8 @@ age('unit', startdate, enddate, [timezone])
- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
Possible values:
- `microsecond` (possible abbreviations: `us`, `u`)
- `millisecond` (possible abbreviations: `ms`)
- `second` (possible abbreviations: `ss`, `s`)
- `minute` (possible abbreviations: `mi`, `n`)
- `hour` (possible abbreviations: `hh`, `h`)
@ -809,6 +811,8 @@ Aliases: `dateDiff`, `DATE_DIFF`, `timestampDiff`, `timestamp_diff`, `TIMESTAMP_
- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
Possible values:
- `microsecond` (possible abbreviations: `us`, `u`)
- `millisecond` (possible abbreviations: `ms`)
- `second` (possible abbreviations: `ss`, `s`)
- `minute` (possible abbreviations: `mi`, `n`)
- `hour` (possible abbreviations: `hh`, `h`)
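For example (the results below match the tests added in this change), `dateDiff` now also accepts the sub-second units:

```sql
SELECT dateDiff('millisecond', '2021-01-01'::Date, '2021-01-02 00:00:01.299'::DateTime64);
-- 86401299

SELECT dateDiff('microsecond', '1969-12-31 23:59:59.999950'::DateTime64(6),
                               '1970-01-01 00:00:00.000010'::DateTime64(6));
-- 60
```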


@ -401,8 +401,8 @@ $ clickhouse-client --format_csv_delimiter="|" --query="INSERT INTO test.csv FOR
- [output_format_csv_crlf_end_of_line](../operations/settings/settings.md#output_format_csv_crlf_end_of_line) - if set to true, the line ending in the CSV output format will be `\r\n` instead of `\n`. Default value - `false`.
- [input_format_csv_skip_first_lines](../operations/settings/settings.md#input_format_csv_skip_first_lines) - skip the specified number of lines at the beginning of the data. Default value - `0`.
- [input_format_csv_detect_header](../operations/settings/settings.md#input_format_csv_detect_header) - detect the header with names and types in the CSV format. Default value - `true`.
- [input_format_csv_trim_whitespaces](../operations/settings/settings.md#input_format_csv_trim_whitespaces) - trim spaces and tabs in non-quoted CSV strings.
Default value - `true`.
- [input_format_csv_trim_whitespaces](../operations/settings/settings.md#input_format_csv_trim_whitespaces) - trim spaces and tabs in non-quoted CSV strings. Default value - `true`.
- [input_format_csv_allow_variable_number_of_columns](../operations/settings/settings.md/#input_format_csv_allow_variable_number_of_columns) - ignore extra columns (if the file contains more columns than expected) and treat missing fields in CSV input as default values. Default value - `false`.
## CSVWithNames {#csvwithnames}


@ -1686,7 +1686,7 @@ SELECT * FROM table_with_enum_column_for_csv_insert;
## input_format_csv_detect_header {#input_format_csv_detect_header}
Detect the header with names and types in the CSV format.
Default value - `true`.
## input_format_csv_skip_first_lines {#input_format_csv_skip_first_lines}
@ -1727,6 +1727,12 @@ echo ' string ' | ./clickhouse local -q "select * from table FORMAT CSV" --in
" string "
```
## input_format_csv_allow_variable_number_of_columns {#input_format_csv_allow_variable_number_of_columns}
Ignore extra columns (if the file contains more columns than expected) and treat missing fields in CSV input as default values.
Disabled by default.
## output_format_tsv_crlf_end_of_line {#settings-output-format-tsv-crlf-end-of-line}
Use CRLF (DOS/Windows style) instead of LF (Unix style) as the line separator for the TSV format.


@ -621,7 +621,7 @@ SELECT toDate('2016-12-27') AS date, toYearWeek(date) AS yearWeek0, toYearWeek(d
## age
Computes the `unit` component of the difference between `startdate` and `enddate`. The difference is calculated with a precision of 1 second.
Computes the `unit` component of the difference between `startdate` and `enddate`. The difference is calculated with a precision of 1 microsecond.
For example, the difference between `2021-12-29` and `2022-01-01` is 3 days for the `day` unit, 0 months for the `month` unit, and 0 years for the `year` unit.
**Syntax**
@ -635,6 +635,8 @@ age('unit', startdate, enddate, [timezone])
- `unit` — the unit of time in which the return value of the function is expressed. [String](../../sql-reference/data-types/string.md).
Possible values:
- `microsecond` (possible abbreviations: `us`, `u`)
- `millisecond` (possible abbreviations: `ms`)
- `second` (possible abbreviations: `ss`, `s`)
- `minute` (possible abbreviations: `mi`, `n`)
- `hour` (possible abbreviations: `hh`, `h`)
@ -708,6 +710,8 @@ date_diff('unit', startdate, enddate, [timezone])
- `unit` — the unit of time in which the return value of the function is expressed. [String](../../sql-reference/data-types/string.md).
Possible values:
- `microsecond` (possible abbreviations: `us`, `u`)
- `millisecond` (possible abbreviations: `ms`)
- `second` (possible abbreviations: `ss`, `s`)
- `minute` (possible abbreviations: `mi`, `n`)
- `hour` (possible abbreviations: `hh`, `h`)


@ -643,6 +643,8 @@ date_diff('unit', startdate, enddate, [timezone])
- `unit` — the unit of time that `value` is expressed in. Type: [String](../../sql-reference/data-types/string.md).
Possible values:
- `microsecond`
- `millisecond`
- `second`
- `minute`
- `hour`


@ -73,6 +73,7 @@ if (BUILD_STANDALONE_KEEPER)
${CMAKE_CURRENT_SOURCE_DIR}/../../src/Server/TCPServer.cpp
${CMAKE_CURRENT_SOURCE_DIR}/../../src/Server/NotFoundHandler.cpp
${CMAKE_CURRENT_SOURCE_DIR}/../../src/Server/ProtocolServerAdapter.cpp
${CMAKE_CURRENT_SOURCE_DIR}/../../src/Server/CertificateReloader.cpp
${CMAKE_CURRENT_SOURCE_DIR}/../../src/Server/PrometheusRequestHandler.cpp
${CMAKE_CURRENT_SOURCE_DIR}/../../src/Server/PrometheusMetricsWriter.cpp
${CMAKE_CURRENT_SOURCE_DIR}/../../src/Server/waitServersToFinish.cpp


@ -42,6 +42,7 @@
#if USE_SSL
# include <Poco/Net/Context.h>
# include <Poco/Net/SecureServerSocket.h>
# include <Server/CertificateReloader.h>
#endif
#include <Server/ProtocolServerAdapter.h>
@ -451,10 +452,18 @@ try
zkutil::EventPtr unused_event = std::make_shared<Poco::Event>();
zkutil::ZooKeeperNodeCache unused_cache([] { return nullptr; });
const std::string cert_path = config().getString("openSSL.server.certificateFile", "");
const std::string key_path = config().getString("openSSL.server.privateKeyFile", "");
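/// Also watch the TLS certificate and key files: listing them in extra_paths makes ConfigReloader
/// re-run the update callback below (which calls CertificateReloader::tryLoad) when they change on disk.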
std::vector<std::string> extra_paths = {include_from_path};
if (!cert_path.empty()) extra_paths.emplace_back(cert_path);
if (!key_path.empty()) extra_paths.emplace_back(key_path);
/// ConfigReloader requires parameters that are redundant in our case
auto main_config_reloader = std::make_unique<ConfigReloader>(
config_path,
include_from_path,
extra_paths,
config().getString("path", ""),
std::move(unused_cache),
unused_event,
@ -462,6 +471,10 @@ try
{
if (config->has("keeper_server"))
global_context->updateKeeperConfiguration(*config);
#if USE_SSL
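/// Reload the server TLS certificate/key from the (possibly changed) configuration.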
CertificateReloader::instance().tryLoad(*config);
#endif
},
/* already_loaded = */ false); /// Reload it right now (initial loading)


@ -88,7 +88,6 @@
#include <Server/PostgreSQLHandlerFactory.h>
#include <Server/ProxyV1HandlerFactory.h>
#include <Server/TLSHandlerFactory.h>
#include <Server/CertificateReloader.h>
#include <Server/ProtocolServerAdapter.h>
#include <Server/HTTP/HTTPServer.h>
#include <Interpreters/AsynchronousInsertQueue.h>
@ -109,6 +108,7 @@
#if USE_SSL
# include <Poco/Net/SecureServerSocket.h>
# include <Server/CertificateReloader.h>
#endif
#if USE_GRPC
@ -1100,9 +1100,16 @@ try
SensitiveDataMasker::setInstance(std::make_unique<SensitiveDataMasker>(config(), "query_masking_rules"));
}
const std::string cert_path = config().getString("openSSL.server.certificateFile", "");
const std::string key_path = config().getString("openSSL.server.privateKeyFile", "");
std::vector<std::string> extra_paths = {include_from_path};
if (!cert_path.empty()) extra_paths.emplace_back(cert_path);
if (!key_path.empty()) extra_paths.emplace_back(key_path);
auto main_config_reloader = std::make_unique<ConfigReloader>(
config_path,
include_from_path,
extra_paths,
config().getString("path", ""),
std::move(main_config_zk_node_cache),
main_config_zk_changed_event,


@ -807,7 +807,7 @@ void UsersConfigAccessStorage::load(
config_reloader.reset();
config_reloader = std::make_unique<ConfigReloader>(
users_config_path,
include_from_path,
std::vector{{include_from_path}},
preprocessed_dir,
zkutil::ZooKeeperNodeCache(get_zookeeper_function),
std::make_shared<Poco::Event>(),


@ -14,14 +14,15 @@ namespace DB
{
ConfigReloader::ConfigReloader(
const std::string & path_,
const std::string & include_from_path_,
std::string_view config_path_,
const std::vector<std::string>& extra_paths_,
const std::string & preprocessed_dir_,
zkutil::ZooKeeperNodeCache && zk_node_cache_,
const zkutil::EventPtr & zk_changed_event_,
Updater && updater_,
bool already_loaded)
: path(path_), include_from_path(include_from_path_)
: config_path(config_path_)
, extra_paths(extra_paths_)
, preprocessed_dir(preprocessed_dir_)
, zk_node_cache(std::move(zk_node_cache_))
, zk_changed_event(zk_changed_event_)
@ -98,10 +99,10 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
FilesChangesTracker new_files = getNewFileList();
if (force || need_reload_from_zk || new_files.isDifferOrNewerThan(files))
{
ConfigProcessor config_processor(path);
ConfigProcessor config_processor(config_path);
ConfigProcessor::LoadedConfig loaded_config;
LOG_DEBUG(log, "Loading config '{}'", path);
LOG_DEBUG(log, "Loading config '{}'", config_path);
try
{
@ -118,7 +119,7 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
if (throw_on_error)
throw;
tryLogCurrentException(log, "ZooKeeper error when loading config from '" + path + "'");
tryLogCurrentException(log, "ZooKeeper error when loading config from '" + config_path + "'");
return;
}
catch (...)
@ -126,7 +127,7 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
if (throw_on_error)
throw;
tryLogCurrentException(log, "Error loading config from '" + path + "'");
tryLogCurrentException(log, "Error loading config from '" + config_path + "'");
return;
}
config_processor.savePreprocessedConfig(loaded_config, preprocessed_dir);
@ -142,7 +143,7 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
need_reload_from_zk = false;
}
LOG_DEBUG(log, "Loaded config '{}', performing update on configuration", path);
LOG_DEBUG(log, "Loaded config '{}', performing update on configuration", config_path);
try
{
@ -152,11 +153,11 @@ void ConfigReloader::reloadIfNewer(bool force, bool throw_on_error, bool fallbac
{
if (throw_on_error)
throw;
tryLogCurrentException(log, "Error updating configuration from '" + path + "' config.");
tryLogCurrentException(log, "Error updating configuration from '" + config_path + "' config.");
return;
}
LOG_DEBUG(log, "Loaded config '{}', performed update on configuration", path);
LOG_DEBUG(log, "Loaded config '{}', performed update on configuration", config_path);
}
}
@ -196,10 +197,11 @@ ConfigReloader::FilesChangesTracker ConfigReloader::getNewFileList() const
{
FilesChangesTracker file_list;
file_list.addIfExists(path);
file_list.addIfExists(include_from_path);
file_list.addIfExists(config_path);
for (const std::string& path : extra_paths)
file_list.addIfExists(path);
for (const auto & merge_path : ConfigProcessor::getConfigMergeFiles(path))
for (const auto & merge_path : ConfigProcessor::getConfigMergeFiles(config_path))
file_list.addIfExists(merge_path);
return file_list;


@ -22,23 +22,21 @@ class Context;
/** Every two seconds checks configuration files for update.
* If configuration is changed, then config will be reloaded by ConfigProcessor
* and the reloaded config will be applied via Updater functor.
* It doesn't take into account changes of --config-file, <users_config> and <include_from> parameters.
* It doesn't take into account changes of --config-file and <users_config>.
*/
class ConfigReloader
{
public:
using Updater = std::function<void(ConfigurationPtr, bool)>;
/** include_from_path is usually /etc/metrika.xml (i.e. value of <include_from> tag)
*/
ConfigReloader(
const std::string & path,
const std::string & include_from_path,
const std::string & preprocessed_dir,
zkutil::ZooKeeperNodeCache && zk_node_cache,
const zkutil::EventPtr & zk_changed_event,
Updater && updater,
bool already_loaded);
std::string_view path_,
const std::vector<std::string>& extra_paths_,
const std::string & preprocessed_dir,
zkutil::ZooKeeperNodeCache && zk_node_cache,
const zkutil::EventPtr & zk_changed_event,
Updater && updater,
bool already_loaded);
~ConfigReloader();
@ -73,8 +71,9 @@ private:
Poco::Logger * log = &Poco::Logger::get("ConfigReloader");
std::string path;
std::string include_from_path;
std::string config_path;
std::vector<std::string> extra_paths;
std::string preprocessed_dir;
FilesChangesTracker files;
zkutil::ZooKeeperNodeCache zk_node_cache;


@ -48,7 +48,11 @@ inline auto scaleMultiplier(UInt32 scale)
/** Components of DecimalX value:
* whole - represents whole part of decimal, can be negative or positive.
* fractional - for fractional part of decimal, always positive.
* fractional - for fractional part of decimal.
*
* 0.123 represents 0 / 0.123
* -0.123 represents 0 / -0.123
* -1.123 represents -1 / 0.123
*/
template <typename DecimalType>
struct DecimalComponents


@ -275,7 +275,7 @@ class IColumn;
\
M(UInt64, http_headers_progress_interval_ms, 100, "Do not send HTTP headers X-ClickHouse-Progress more frequently than at each specified interval.", 0) \
M(Bool, http_wait_end_of_query, false, "Enable HTTP response buffering on the server-side.", 0) \
M(UInt64, http_response_buffer_size, false, "The number of bytes to buffer in the server memory before sending a HTTP response to the client or flushing to disk (when http_wait_end_of_query is enabled).", 0) \
M(UInt64, http_response_buffer_size, 0, "The number of bytes to buffer in the server memory before sending a HTTP response to the client or flushing to disk (when http_wait_end_of_query is enabled).", 0) \
\
M(Bool, fsync_metadata, true, "Do fsync after changing metadata for tables and databases (.sql files). Could be disabled in case of poor latency on server with high load of DDL queries and high load of disk subsystem.", 0) \
\
@ -1011,6 +1011,7 @@ class IColumn;
M(Bool, regexp_dict_allow_hyperscan, true, "Allow regexp_tree dictionary using Hyperscan library.", 0) \
\
M(Bool, dictionary_use_async_executor, false, "Execute a pipeline for reading from a dictionary with several threads. It's supported only by DIRECT dictionary with CLICKHOUSE source.", 0) \
M(Bool, input_format_csv_allow_variable_number_of_columns, false, "Ignore extra columns in CSV input (if file has more columns than expected) and treat missing fields in CSV input as default values", 0) \
// End of FORMAT_FACTORY_SETTINGS
// Please add settings non-related to formats into the COMMON_SETTINGS above.


@ -138,7 +138,7 @@ Columns CacheDictionary<dictionary_key_type>::getColumns(
const Columns & default_values_columns) const
{
/**
* Flow of getColumsImpl
* Flow of getColumnsImpl
* 1. Get fetch result from storage
* 2. If all keys are found in storage and not expired
* 2.1. If storage returns fetched columns in order of keys then result is returned to client.


@ -6,6 +6,8 @@
#include <Common/logger_useful.h>
#include <Common/Exception.h>
#include <Disks/ObjectStorages/MetadataStorageFromDisk.h>
namespace DB
{
@ -63,11 +65,18 @@ struct PureMetadataObjectStorageOperation final : public IDiskObjectStorageOpera
std::string getInfoForLog() const override { return fmt::format("PureMetadataObjectStorageOperation"); }
};
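/// Remote blobs of a removed file together with the outcome of unlinking its metadata;
/// finalize() only removes the blobs from remote storage when the remaining hardlink count is zero.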
struct ObjectsToRemove
{
StoredObjects objects;
UnlinkMetadataFileOperationOutcomePtr unlink_outcome;
};
struct RemoveObjectStorageOperation final : public IDiskObjectStorageOperation
{
std::string path;
bool delete_metadata_only;
StoredObjects objects_to_remove;
ObjectsToRemove objects_to_remove;
bool if_exists;
bool remove_from_cache = false;
@ -103,15 +112,12 @@ struct RemoveObjectStorageOperation final : public IDiskObjectStorageOperation
try
{
uint32_t hardlink_count = metadata_storage.getHardlinkCount(path);
auto objects = metadata_storage.getStorageObjects(path);
tx->unlinkMetadata(path);
auto unlink_outcome = tx->unlinkMetadata(path);
if (hardlink_count == 0)
{
objects_to_remove = std::move(objects);
}
if (unlink_outcome)
objects_to_remove = ObjectsToRemove{std::move(objects), std::move(unlink_outcome)};
}
catch (const Exception & e)
{
@ -140,8 +146,11 @@ struct RemoveObjectStorageOperation final : public IDiskObjectStorageOperation
/// due to a network error or similar. When it retries the operation it may receive
/// a 404 HTTP code. We don't want to treat this code as a real error for the deletion process
/// (e.g. by throwing an exception) and thus we just use the method `removeObjectsIfExist`
if (!delete_metadata_only && !objects_to_remove.empty())
object_storage.removeObjectsIfExist(objects_to_remove);
if (!delete_metadata_only && !objects_to_remove.objects.empty()
&& objects_to_remove.unlink_outcome->num_hardlinks == 0)
{
object_storage.removeObjectsIfExist(objects_to_remove.objects);
}
}
};
@ -150,7 +159,9 @@ struct RemoveManyObjectStorageOperation final : public IDiskObjectStorageOperati
RemoveBatchRequest remove_paths;
bool keep_all_batch_data;
NameSet file_names_remove_metadata_only;
StoredObjects objects_to_remove;
std::vector<ObjectsToRemove> objects_to_remove;
bool remove_from_cache = false;
RemoveManyObjectStorageOperation(
@ -174,7 +185,6 @@ struct RemoveManyObjectStorageOperation final : public IDiskObjectStorageOperati
{
for (const auto & [path, if_exists] : remove_paths)
{
if (!metadata_storage.exists(path))
{
if (if_exists)
@ -188,14 +198,12 @@ struct RemoveManyObjectStorageOperation final : public IDiskObjectStorageOperati
try
{
uint32_t hardlink_count = metadata_storage.getHardlinkCount(path);
auto objects = metadata_storage.getStorageObjects(path);
tx->unlinkMetadata(path);
/// File is really redundant
if (hardlink_count == 0 && !keep_all_batch_data && !file_names_remove_metadata_only.contains(fs::path(path).filename()))
std::move(objects.begin(), objects.end(), std::back_inserter(objects_to_remove));
auto unlink_outcome = tx->unlinkMetadata(path);
if (unlink_outcome && !keep_all_batch_data && !file_names_remove_metadata_only.contains(fs::path(path).filename()))
{
objects_to_remove.emplace_back(ObjectsToRemove{std::move(objects), std::move(unlink_outcome)});
}
}
catch (const Exception & e)
{
@ -215,15 +223,21 @@ struct RemoveManyObjectStorageOperation final : public IDiskObjectStorageOperati
void undo() override
{
}
void finalize() override
{
StoredObjects remove_from_remote;
for (auto && [objects, unlink_outcome] : objects_to_remove)
{
if (unlink_outcome->num_hardlinks == 0)
std::move(objects.begin(), objects.end(), std::back_inserter(remove_from_remote));
}
/// Read comment inside RemoveObjectStorageOperation class
/// TL;DR Don't pay any attention to 404 status code
if (!objects_to_remove.empty())
object_storage.removeObjectsIfExist(objects_to_remove);
if (!remove_from_remote.empty())
object_storage.removeObjectsIfExist(remove_from_remote);
}
};
@ -231,10 +245,9 @@ struct RemoveManyObjectStorageOperation final : public IDiskObjectStorageOperati
struct RemoveRecursiveObjectStorageOperation final : public IDiskObjectStorageOperation
{
std::string path;
std::unordered_map<std::string, StoredObjects> objects_to_remove;
std::unordered_map<std::string, ObjectsToRemove> objects_to_remove_by_path;
bool keep_all_batch_data;
NameSet file_names_remove_metadata_only;
StoredObjects objects_to_remove_from_cache;
RemoveRecursiveObjectStorageOperation(
IObjectStorage & object_storage_,
@ -261,14 +274,11 @@ struct RemoveRecursiveObjectStorageOperation final : public IDiskObjectStorageOp
{
try
{
uint32_t hardlink_count = metadata_storage.getHardlinkCount(path_to_remove);
auto objects_paths = metadata_storage.getStorageObjects(path_to_remove);
tx->unlinkMetadata(path_to_remove);
if (hardlink_count == 0)
auto unlink_outcome = tx->unlinkMetadata(path_to_remove);
if (unlink_outcome)
{
objects_to_remove[path_to_remove] = std::move(objects_paths);
objects_to_remove_by_path[path_to_remove] = ObjectsToRemove{std::move(objects_paths), std::move(unlink_outcome)};
}
}
catch (const Exception & e)
@ -318,11 +328,12 @@ struct RemoveRecursiveObjectStorageOperation final : public IDiskObjectStorageOp
if (!keep_all_batch_data)
{
StoredObjects remove_from_remote;
for (auto && [local_path, remote_paths] : objects_to_remove)
for (auto && [local_path, objects_to_remove] : objects_to_remove_by_path)
{
if (!file_names_remove_metadata_only.contains(fs::path(local_path).filename()))
{
std::move(remote_paths.begin(), remote_paths.end(), std::back_inserter(remove_from_remote));
if (objects_to_remove.unlink_outcome->num_hardlinks == 0)
std::move(objects_to_remove.objects.begin(), objects_to_remove.objects.end(), std::back_inserter(remove_from_remote));
}
}
/// Read comment inside RemoveObjectStorageOperation class


@ -22,6 +22,8 @@ namespace ErrorCodes
}
class IMetadataStorage;
struct UnlinkMetadataFileOperationOutcome;
using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;
/// Tries to provide some "transactions" interface, which allow
/// to execute (commit) operations simultaneously. We don't provide
@ -127,9 +129,10 @@ public:
/// Unlink metadata file and do something special if required
/// By default just remove file (unlink file).
virtual void unlinkMetadata(const std::string & path)
virtual UnlinkMetadataFileOperationOutcomePtr unlinkMetadata(const std::string & path)
{
unlinkFile(path);
return nullptr;
}
virtual ~IMetadataTransaction() = default;


@ -340,9 +340,12 @@ void MetadataStorageFromDiskTransaction::addBlobToMetadata(const std::string & p
addOperation(std::make_unique<AddBlobOperation>(path, blob_name, metadata_storage.object_storage_root_path, size_in_bytes, *metadata_storage.disk, metadata_storage));
}
void MetadataStorageFromDiskTransaction::unlinkMetadata(const std::string & path)
UnlinkMetadataFileOperationOutcomePtr MetadataStorageFromDiskTransaction::unlinkMetadata(const std::string & path)
{
addOperation(std::make_unique<UnlinkMetadataFileOperation>(path, *metadata_storage.disk, metadata_storage));
auto operation = std::make_unique<UnlinkMetadataFileOperation>(path, *metadata_storage.getDisk(), metadata_storage);
auto result = operation->outcome;
addOperation(std::move(operation));
return result;
}
}


@ -11,6 +11,9 @@
namespace DB
{
struct UnlinkMetadataFileOperationOutcome;
using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;
/// Store metadata on a separate disk
/// (used for object storages, like S3 and related).
class MetadataStorageFromDisk final : public IMetadataStorage
@ -131,7 +134,7 @@ public:
void replaceFile(const std::string & path_from, const std::string & path_to) override;
void unlinkMetadata(const std::string & path) override;
UnlinkMetadataFileOperationOutcomePtr unlinkMetadata(const std::string & path) override;
};


@ -319,6 +319,8 @@ void UnlinkMetadataFileOperation::execute(std::unique_lock<SharedMutex> & metada
write_operation = std::make_unique<WriteFileOperation>(path, disk, metadata->serializeToString());
write_operation->execute(metadata_lock);
}
outcome->num_hardlinks = ref_count;
unlink_operation = std::make_unique<UnlinkFileOperation>(path, disk);
unlink_operation->execute(metadata_lock);
}
@ -334,6 +336,9 @@ void UnlinkMetadataFileOperation::undo()
if (write_operation)
write_operation->undo();
/// Update outcome to reflect the fact that we have restored the file.
outcome->num_hardlinks++;
}
void SetReadonlyFileOperation::execute(std::unique_lock<SharedMutex> & metadata_lock)


@ -3,6 +3,8 @@
#include <Common/SharedMutex.h>
#include <Disks/ObjectStorages/IMetadataStorage.h>
#include <numeric>
namespace DB
{
class MetadataStorageFromDisk;
@ -242,9 +244,19 @@ private:
std::unique_ptr<WriteFileOperation> write_operation;
};
/// Return the result of the operation to the caller.
/// It is used in `IDiskObjectStorageOperation::finalize` after the metadata transaction is executed, to decide on blob removal.
struct UnlinkMetadataFileOperationOutcome
{
UInt32 num_hardlinks = std::numeric_limits<UInt32>::max();
};
using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;
struct UnlinkMetadataFileOperation final : public IMetadataOperation
{
const UnlinkMetadataFileOperationOutcomePtr outcome = std::make_shared<UnlinkMetadataFileOperationOutcome>();
UnlinkMetadataFileOperation(
const std::string & path_,
IDisk & disk_,


@ -135,9 +135,11 @@ void MetadataStorageFromPlainObjectStorageTransaction::addBlobToMetadata(
{
/// Noop, the local metadata is just one file: the metadata file itself.
}
void MetadataStorageFromPlainObjectStorageTransaction::unlinkMetadata(const std::string &)
UnlinkMetadataFileOperationOutcomePtr MetadataStorageFromPlainObjectStorageTransaction::unlinkMetadata(const std::string &)
{
/// Noop, no separate metadata.
/// There are no hardlinks, so the file will always be removed.
return std::make_shared<UnlinkMetadataFileOperationOutcome>(UnlinkMetadataFileOperationOutcome{0});
}
}


@ -9,6 +9,9 @@
namespace DB
{
struct UnlinkMetadataFileOperationOutcome;
using UnlinkMetadataFileOperationOutcomePtr = std::shared_ptr<UnlinkMetadataFileOperationOutcome>;
/// Object storage is used as a filesystem, in a limited form:
/// - no directory concept, files only
/// - no stat/chmod/...
@ -104,7 +107,7 @@ public:
void unlinkFile(const std::string & path) override;
void unlinkMetadata(const std::string & path) override;
UnlinkMetadataFileOperationOutcomePtr unlinkMetadata(const std::string & path) override;
void commit() override
{


@ -72,6 +72,7 @@ FormatSettings getFormatSettings(ContextPtr context, const Settings & settings)
format_settings.csv.skip_trailing_empty_lines = settings.input_format_csv_skip_trailing_empty_lines;
format_settings.csv.trim_whitespaces = settings.input_format_csv_trim_whitespaces;
format_settings.csv.allow_whitespace_or_tab_as_delimiter = settings.input_format_csv_allow_whitespace_or_tab_as_delimiter;
format_settings.csv.allow_variable_number_of_columns = settings.input_format_csv_allow_variable_number_of_columns;
format_settings.hive_text.fields_delimiter = settings.input_format_hive_text_fields_delimiter;
format_settings.hive_text.collection_items_delimiter = settings.input_format_hive_text_collection_items_delimiter;
format_settings.hive_text.map_keys_delimiter = settings.input_format_hive_text_map_keys_delimiter;


@ -140,6 +140,7 @@ struct FormatSettings
bool skip_trailing_empty_lines = false;
bool trim_whitespaces = true;
bool allow_whitespace_or_tab_as_delimiter = false;
bool allow_variable_number_of_columns = false;
} csv;
struct HiveText


@ -19,6 +19,9 @@
namespace DB
{
static constexpr auto microsecond_multiplier = 1000000;
static constexpr auto millisecond_multiplier = 1000;
namespace ErrorCodes
{
extern const int ILLEGAL_TYPE_OF_ARGUMENT;
@ -1377,6 +1380,36 @@ struct ToRelativeSecondNumImpl
using FactorTransform = ZeroTransform;
};
template <Int64 scale_multiplier>
struct ToRelativeSubsecondNumImpl
{
static constexpr auto name = "toRelativeSubsecondNumImpl";
static inline Int64 execute(const DateTime64 & t, DateTime64::NativeType scale, const DateLUTImpl &)
{
static_assert(scale_multiplier == 1000 || scale_multiplier == 1000000);
if (scale == scale_multiplier)
return t.value;
if (scale > scale_multiplier)
return t.value / (scale / scale_multiplier);
return t.value * (scale_multiplier / scale);
}
static inline Int64 execute(UInt32 t, const DateLUTImpl &)
{
return t * scale_multiplier;
}
static inline Int64 execute(Int32 d, const DateLUTImpl & time_zone)
{
return static_cast<Int64>(time_zone.fromDayNum(ExtendedDayNum(d))) * scale_multiplier;
}
static inline Int64 execute(UInt16 d, const DateLUTImpl & time_zone)
{
return static_cast<Int64>(time_zone.fromDayNum(DayNum(d)) * scale_multiplier);
}
using FactorTransform = ZeroTransform;
};
struct ToYYYYMMImpl
{
static constexpr auto name = "toYYYYMM";
@ -1476,25 +1509,47 @@ struct ToYYYYMMDDhhmmssImpl
using FactorTransform = ZeroTransform;
};
struct DateTimeComponentsWithFractionalPart : public DateLUTImpl::DateTimeComponents
{
UInt16 millisecond;
UInt16 microsecond;
};
struct ToDateTimeComponentsImpl
{
static constexpr auto name = "toDateTimeComponents";
static inline DateLUTImpl::DateTimeComponents execute(Int64 t, const DateLUTImpl & time_zone)
static inline DateTimeComponentsWithFractionalPart execute(const DateTime64 & t, DateTime64::NativeType scale_multiplier, const DateLUTImpl & time_zone)
{
return time_zone.toDateTimeComponents(t);
auto components = DecimalUtils::splitWithScaleMultiplier(t, scale_multiplier);
if (t.value < 0 && components.fractional)
{
components.fractional = scale_multiplier + (components.whole ? Int64(-1) : Int64(1)) * components.fractional;
--components.whole;
}
Int64 fractional = components.fractional;
if (scale_multiplier > microsecond_multiplier)
fractional = fractional / (scale_multiplier / microsecond_multiplier);
else if (scale_multiplier < microsecond_multiplier)
fractional = fractional * (microsecond_multiplier / scale_multiplier);
constexpr Int64 divider = microsecond_multiplier / millisecond_multiplier;
UInt16 millisecond = static_cast<UInt16>(fractional / divider);
UInt16 microsecond = static_cast<UInt16>(fractional % divider);
return DateTimeComponentsWithFractionalPart{time_zone.toDateTimeComponents(components.whole), millisecond, microsecond};
}
static inline DateLUTImpl::DateTimeComponents execute(UInt32 t, const DateLUTImpl & time_zone)
static inline DateTimeComponentsWithFractionalPart execute(UInt32 t, const DateLUTImpl & time_zone)
{
return time_zone.toDateTimeComponents(static_cast<DateLUTImpl::Time>(t));
return DateTimeComponentsWithFractionalPart{time_zone.toDateTimeComponents(static_cast<DateLUTImpl::Time>(t)), 0, 0};
}
static inline DateLUTImpl::DateTimeComponents execute(Int32 d, const DateLUTImpl & time_zone)
static inline DateTimeComponentsWithFractionalPart execute(Int32 d, const DateLUTImpl & time_zone)
{
return time_zone.toDateTimeComponents(ExtendedDayNum(d));
return DateTimeComponentsWithFractionalPart{time_zone.toDateTimeComponents(ExtendedDayNum(d)), 0, 0};
}
static inline DateLUTImpl::DateTimeComponents execute(UInt16 d, const DateLUTImpl & time_zone)
static inline DateTimeComponentsWithFractionalPart execute(UInt16 d, const DateLUTImpl & time_zone)
{
return time_zone.toDateTimeComponents(DayNum(d));
return DateTimeComponentsWithFractionalPart{time_zone.toDateTimeComponents(DayNum(d)), 0, 0};
}
using FactorTransform = ZeroTransform;
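The millisecond/microsecond fields carried in DateTimeComponentsWithFractionalPart are what let `age` detect an incomplete unit at sub-second resolution; for example (matching the tests added in this change):

```sql
SELECT age('millisecond',
           toDateTime64('2015-08-18 20:30:36.450299', 6, 'UTC'),
           toDateTime64('2015-08-18 20:30:41.550298', 6, 'UTC'));
-- 5099: the interval is one microsecond short of a full 5100 ms, so the last millisecond is not counted
```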


@ -5,7 +5,7 @@
namespace DB
{
/** Tansform-type wrapper for DateTime64, simplifies DateTime64 support for given Transform.
/** Transform-type wrapper for DateTime64, simplifies DateTime64 support for given Transform.
*
* Depending on what overloads of Transform::execute() are available, when called with DateTime64 value,
* invokes Transform::execute() with either:
@ -80,7 +80,10 @@ public:
}
else
{
const auto components = DecimalUtils::splitWithScaleMultiplier(t, scale_multiplier);
auto components = DecimalUtils::splitWithScaleMultiplier(t, scale_multiplier);
if (t.value < 0 && components.fractional)
--components.whole;
return wrapped_transform.execute(static_cast<Int64>(components.whole), std::forward<Args>(args)...);
}
}
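The adjustment above makes the whole-seconds part round toward minus infinity instead of toward zero for DateTime64 values before the epoch; this is observable, for example, as (matching the test added in this change):

```sql
SELECT toYYYYMMDDhhmmss(toDateTime64('1969-12-31 23:59:59.900', 3));
-- 19691231235959
```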


@ -174,12 +174,13 @@ public:
{
auto res = static_cast<Int64>(transform_y.execute(y, timezone_y))
- static_cast<Int64>(transform_x.execute(x, timezone_x));
DateLUTImpl::DateTimeComponents a_comp;
DateLUTImpl::DateTimeComponents b_comp;
DateTimeComponentsWithFractionalPart a_comp;
DateTimeComponentsWithFractionalPart b_comp;
Int64 adjust_value;
auto x_seconds = TransformDateTime64<ToRelativeSecondNumImpl<ResultPrecision::Extended>>(transform_x.getScaleMultiplier()).execute(x, timezone_x);
auto y_seconds = TransformDateTime64<ToRelativeSecondNumImpl<ResultPrecision::Extended>>(transform_y.getScaleMultiplier()).execute(y, timezone_y);
if (x_seconds <= y_seconds)
auto x_microseconds = TransformDateTime64<ToRelativeSubsecondNumImpl<microsecond_multiplier>>(transform_x.getScaleMultiplier()).execute(x, timezone_x);
auto y_microseconds = TransformDateTime64<ToRelativeSubsecondNumImpl<microsecond_multiplier>>(transform_y.getScaleMultiplier()).execute(y, timezone_y);
if (x_microseconds <= y_microseconds)
{
a_comp = TransformDateTime64<ToDateTimeComponentsImpl>(transform_x.getScaleMultiplier()).execute(x, timezone_x);
b_comp = TransformDateTime64<ToDateTimeComponentsImpl>(transform_y.getScaleMultiplier()).execute(y, timezone_y);
@ -192,14 +193,16 @@ public:
adjust_value = 1;
}
if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeYearNumImpl<ResultPrecision::Extended>>>)
{
if ((a_comp.date.month > b_comp.date.month)
|| ((a_comp.date.month == b_comp.date.month) && ((a_comp.date.day > b_comp.date.day)
|| ((a_comp.date.day == b_comp.date.day) && ((a_comp.time.hour > b_comp.time.hour)
|| ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
|| ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second))))
)))))
|| ((a_comp.time.minute == b_comp.time.minute) && ((a_comp.time.second > b_comp.time.second)
|| ((a_comp.time.second == b_comp.time.second) && ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))))))))))))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeQuarterNumImpl<ResultPrecision::Extended>>>)
@ -210,8 +213,9 @@ public:
|| ((x_month_in_quarter == y_month_in_quarter) && ((a_comp.date.day > b_comp.date.day)
|| ((a_comp.date.day == b_comp.date.day) && ((a_comp.time.hour > b_comp.time.hour)
|| ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
|| ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second))))
)))))
|| ((a_comp.time.minute == b_comp.time.minute) && ((a_comp.time.second > b_comp.time.second)
|| ((a_comp.time.second == b_comp.time.second) && ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))))))))))))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeMonthNumImpl<ResultPrecision::Extended>>>)
@ -219,8 +223,9 @@ public:
if ((a_comp.date.day > b_comp.date.day)
|| ((a_comp.date.day == b_comp.date.day) && ((a_comp.time.hour > b_comp.time.hour)
|| ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
|| ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second))))
)))
|| ((a_comp.time.minute == b_comp.time.minute) && ((a_comp.time.second > b_comp.time.second)
|| ((a_comp.time.second == b_comp.time.second) && ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))))))))))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeWeekNumImpl<ResultPrecision::Extended>>>)
@ -230,25 +235,44 @@ public:
if ((x_day_of_week > y_day_of_week)
|| ((x_day_of_week == y_day_of_week) && (a_comp.time.hour > b_comp.time.hour))
|| ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
|| ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second)))))
|| ((a_comp.time.minute == b_comp.time.minute) && ((a_comp.time.second > b_comp.time.second)
|| ((a_comp.time.second == b_comp.time.second) && ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))))))))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeDayNumImpl<ResultPrecision::Extended>>>)
{
if ((a_comp.time.hour > b_comp.time.hour)
|| ((a_comp.time.hour == b_comp.time.hour) && ((a_comp.time.minute > b_comp.time.minute)
|| ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second)))))
|| ((a_comp.time.minute == b_comp.time.minute) && ((a_comp.time.second > b_comp.time.second)
|| ((a_comp.time.second == b_comp.time.second) && ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))))))))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeHourNumImpl<ResultPrecision::Extended>>>)
{
if ((a_comp.time.minute > b_comp.time.minute)
|| ((a_comp.time.minute == b_comp.time.minute) && (a_comp.time.second > b_comp.time.second)))
|| ((a_comp.time.minute == b_comp.time.minute) && ((a_comp.time.second > b_comp.time.second)
|| ((a_comp.time.second == b_comp.time.second) && ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))))))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeMinuteNumImpl<ResultPrecision::Extended>>>)
{
if (a_comp.time.second > b_comp.time.second)
if ((a_comp.time.second > b_comp.time.second)
|| ((a_comp.time.second == b_comp.time.second) && ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeSecondNumImpl<ResultPrecision::Extended>>>)
{
if ((a_comp.millisecond > b_comp.millisecond)
|| ((a_comp.millisecond == b_comp.millisecond) && (a_comp.microsecond > b_comp.microsecond)))
res += adjust_value;
}
else if constexpr (std::is_same_v<TransformX, TransformDateTime64<ToRelativeSubsecondNumImpl<1000>>>)
{
if (a_comp.microsecond > b_comp.microsecond)
res += adjust_value;
}
return res;
@ -373,6 +397,10 @@ public:
impl.template dispatchForColumns<ToRelativeMinuteNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
else if (unit == "second" || unit == "ss" || unit == "s")
impl.template dispatchForColumns<ToRelativeSecondNumImpl<ResultPrecision::Extended>>(x, y, timezone_x, timezone_y, res->getData());
else if (unit == "millisecond" || unit == "ms")
impl.template dispatchForColumns<ToRelativeSubsecondNumImpl<millisecond_multiplier>>(x, y, timezone_x, timezone_y, res->getData());
else if (unit == "microsecond" || unit == "us" || unit == "u")
impl.template dispatchForColumns<ToRelativeSubsecondNumImpl<microsecond_multiplier>>(x, y, timezone_x, timezone_y, res->getData());
else
throw Exception(ErrorCodes::BAD_ARGUMENTS,
"Function {} does not support '{}' unit", getName(), unit);


@ -10,7 +10,6 @@
#include <Functions/DateTimeTransforms.h>
#include <Functions/FunctionFactory.h>
#include <Functions/IFunction.h>
#include <Functions/TransformDateTime64.h>
#include <IO/WriteHelpers.h>


@ -283,6 +283,11 @@ bool CSVFormatReader::parseRowEndWithDiagnosticInfo(WriteBuffer & out)
return true;
}
bool CSVFormatReader::allowVariableNumberOfColumns()
{
return format_settings.csv.allow_variable_number_of_columns;
}
bool CSVFormatReader::readField(
IColumn & column,
const DataTypePtr & type,
@ -347,6 +352,12 @@ bool CSVFormatReader::checkForSuffix()
return false;
}
bool CSVFormatReader::checkForEndOfRow()
{
skipWhitespacesAndTabs(*buf, format_settings.csv.allow_whitespace_or_tab_as_delimiter);
return buf->eof() || *buf->position() == '\n' || *buf->position() == '\r';
}
CSVSchemaReader::CSVSchemaReader(ReadBuffer & in_, bool with_names_, bool with_types_, const FormatSettings & format_settings_)
: FormatWithNamesAndTypesSchemaReader(
buf,


@ -69,6 +69,9 @@ public:
void skipRowEndDelimiter() override;
void skipPrefixBeforeHeader() override;
bool checkForEndOfRow() override;
bool allowVariableNumberOfColumns() override;
std::vector<String> readNames() override { return readHeaderRow(); }
std::vector<String> readTypes() override { return readHeaderRow(); }
std::vector<String> readHeaderRow() { return readRowImpl<true>(); }


@ -227,7 +227,30 @@ bool RowInputFormatWithNamesAndTypes::readRow(MutableColumns & columns, RowReadE
format_reader->skipField(file_column);
if (!is_last_file_column)
{
if (format_reader->allowVariableNumberOfColumns() && format_reader->checkForEndOfRow())
{
++file_column;
while (file_column < column_mapping->column_indexes_for_input_fields.size())
{
const auto & rem_column_index = column_mapping->column_indexes_for_input_fields[file_column];
columns[*rem_column_index]->insertDefault();
++file_column;
}
}
else
format_reader->skipFieldDelimiter();
}
}
if (format_reader->allowVariableNumberOfColumns() && !format_reader->checkForEndOfRow())
{
do
{
format_reader->skipFieldDelimiter();
format_reader->skipField(1);
}
while (!format_reader->checkForEndOfRow());
}
format_reader->skipRowEndDelimiter();


@ -119,6 +119,10 @@ public:
/// Check suffix.
virtual bool checkForSuffix() { return in->eof(); }
virtual bool checkForEndOfRow() { throw Exception(ErrorCodes::NOT_IMPLEMENTED, "Method checkForEndOfRow is not implemented"); }
virtual bool allowVariableNumberOfColumns() { return false; }
const FormatSettings & getFormatSettings() const { return format_settings; }
virtual void setReadBuffer(ReadBuffer & in_) { in = &in_; }


@ -904,10 +904,9 @@ try
/// Destroy CascadeBuffer to actualize buffers' positions and reset extra references
if (used_output.hasDelayed())
{
if (used_output.out_maybe_delayed_and_compressed)
{
used_output.out_maybe_delayed_and_compressed->finalize();
}
/// Do not call finalize here on the CascadeWriteBuffer used_output.out_maybe_delayed_and_compressed;
/// the exception is written into used_output.out_maybe_compressed later.
/// HTTPHandler::trySendExceptionToClient is called with the exception context, so it is OK to destroy the buffers.
used_output.out_maybe_delayed_and_compressed.reset();
}


@ -1,12 +1,18 @@
<clickhouse>
<storage_configuration>
<disks>
<s3>
<s31>
<type>s3</type>
<endpoint>http://minio1:9001/root/data/</endpoint>
<access_key_id>minio</access_key_id>
<secret_access_key>minio123</secret_access_key>
</s3>
</s31>
<s32>
<type>s3</type>
<endpoint>http://minio1:9001/root/data2/</endpoint>
<access_key_id>minio</access_key_id>
<secret_access_key>minio123</secret_access_key>
</s32>
</disks>
<policies>
<two_disks>
@ -15,10 +21,17 @@
<disk>default</disk>
</default>
<external>
<disk>s3</disk>
<disk>s31</disk>
</external>
</volumes>
</two_disks>
<one_disk>
<volumes>
<external>
<disk>s32</disk>
</external>
</volumes>
</one_disk>
</policies>
</storage_configuration>


@ -39,7 +39,7 @@ def cluster():
def create_table(node, table_name, replicated, additional_settings):
settings = {
"storage_policy": "two_disks",
"old_parts_lifetime": 1,
"old_parts_lifetime": 0,
"index_granularity": 512,
"temporary_directories_lifetime": 0,
"merge_tree_clear_old_temporary_directories_interval_seconds": 1,
@ -73,9 +73,13 @@ def create_table(node, table_name, replicated, additional_settings):
"allow_remote_fs_zero_copy_replication,replicated_engine",
[(False, False), (False, True), (True, True)],
)
def test_create_table(
def test_alter_moving(
cluster, allow_remote_fs_zero_copy_replication, replicated_engine
):
"""
Test that we correctly move parts during ALTER TABLE
"""
if replicated_engine:
nodes = list(cluster.instances.values())
else:
@ -126,7 +130,7 @@ def test_create_table(
partition = f"2021-01-{i:02d}"
try:
random.choice(nodes).query(
f"ALTER TABLE {table_name} MOVE PARTITION '{partition}' TO DISK 's3'",
f"ALTER TABLE {table_name} MOVE PARTITION '{partition}' TO DISK 's31'",
)
except QueryRuntimeException as e:
if "PART_IS_TEMPORARILY_LOCKED" in str(e):
@ -153,3 +157,84 @@ def test_create_table(
)
assert data_digest == "1000\n"
def test_delete_race_leftovers(cluster):
"""
Test that we correctly delete outdated parts and do not leave any leftovers on s3
"""
node = cluster.instances["node1"]
table_name = "test_delete_race_leftovers"
additional_settings = {
# use another disk so as not to interfere with other tests
"storage_policy": "one_disk",
# always remove parts in parallel
"concurrent_part_removal_threshold": 1,
}
create_table(
node, table_name, replicated=True, additional_settings=additional_settings
)
# Stop merges to have several small parts in active set
node.query(f"SYSTEM STOP MERGES {table_name}")
# Create several small parts in one partition
for i in range(1, 11):
node.query(
f"INSERT INTO {table_name} SELECT toDate('2021-01-01'), number as id, toString(sipHash64(number, {i})) FROM numbers(10_000)"
)
table_digest_query = f"SELECT count(), sum(sipHash64(id, data)) FROM {table_name}"
table_digest = node.query(table_digest_query)
# Execute several no-op deletes to get parts with an updated mutation id but no changes in data
# New parts will have hardlinks to the old parts
node.query(f"SYSTEM START MERGES {table_name}")
for i in range(10):
node.query(f"DELETE FROM {table_name} WHERE data = ''")
# Make existing parts outdated
# Also, we don't want the set of parts to keep changing,
# because it would be difficult to match objects on s3 with remote_data_paths to check correctness
node.query(f"OPTIMIZE TABLE {table_name} FINAL")
inactive_parts_query = (
f"SELECT count() FROM system.parts "
f"WHERE not active AND table = '{table_name}' AND database = 'default'"
)
# Try to wait for deletion of outdated parts
# However, we do not want to wait too long
# If some parts are not deleted after several iterations, we will just continue
for i in range(20):
inactive_parts_count = int(node.query(inactive_parts_query).strip())
if inactive_parts_count == 0:
print(f"Inactive parts are deleted after {i} iterations")
break
print(f"Inactive parts count: {inactive_parts_count}")
time.sleep(5)
# Check that we correctly deleted all outdated parts and no leftovers on s3
known_remote_paths = set(
node.query(
f"SELECT remote_path FROM system.remote_data_paths WHERE disk_name = 's32'"
).splitlines()
)
all_remote_paths = set(
obj.object_name
for obj in cluster.minio_client.list_objects(
cluster.minio_bucket, "data2/", recursive=True
)
)
# Some blobs can be deleted after we listed remote_data_paths
# That's alright, so we only check that all remote paths are known
# (in other words, all remote paths are a subset of the known paths)
assert all_remote_paths == {p for p in known_remote_paths if p in all_remote_paths}
# Check that we have all data
assert table_digest == node.query(table_digest_query)


@ -1,13 +1,32 @@
=== Test input_format_csv_empty_as_default
Hello, world 123 2016-01-01
Hello, "world" 456 2016-01-02
Hello "world" 789 2016-01-03
Hello\n world 100 2016-01-04
default 1 2019-06-19
default-eof 1 2019-06-19
=== Test datetime
2016-01-01 01:02:03 1
2016-01-02 01:02:03 2
2017-08-15 13:15:01 3
1970-01-02 05:46:39 4
=== Test nullable datetime
2016-01-01 01:02:03 NUL
2016-01-02 01:02:03 Nhello
\N \N
=== Test ignore extra columns
Hello 1 String1
Hello 2 String2
Hello 3 String3
Hello 4 String4
Hello 5 String5
Hello 6 String6
=== Test missing as default
0 0 33 \N 55 Default
0 0 33 \N 55 Default
Hello 0 0 33 \N 55 Default
Hello 0 0 33 \N 55 Default
Hello 1 3 2 \N 55 Default
Hello 1 4 2 3 4 String
Hello 1 4 2 3 4 String
Hello 1 5 2 3 4 String


@ -4,6 +4,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
echo === Test input_format_csv_empty_as_default
$CLICKHOUSE_CLIENT --query="DROP TABLE IF EXISTS csv";
$CLICKHOUSE_CLIENT --query="CREATE TABLE csv (s String, n UInt64 DEFAULT 1, d Date DEFAULT '2019-06-19') ENGINE = Memory";
@ -18,6 +19,7 @@ Hello "world", 789 ,2016-01-03
$CLICKHOUSE_CLIENT --query="SELECT * FROM csv ORDER BY d, s";
$CLICKHOUSE_CLIENT --query="DROP TABLE csv";
echo === Test datetime
$CLICKHOUSE_CLIENT --query="CREATE TABLE csv (t DateTime('Asia/Istanbul'), s String) ENGINE = Memory";
echo '"2016-01-01 01:02:03","1"
@ -28,7 +30,7 @@ echo '"2016-01-01 01:02:03","1"
$CLICKHOUSE_CLIENT --query="SELECT * FROM csv ORDER BY s";
$CLICKHOUSE_CLIENT --query="DROP TABLE csv";
echo === Test nullable datetime
$CLICKHOUSE_CLIENT --query="CREATE TABLE csv (t Nullable(DateTime('Asia/Istanbul')), s Nullable(String)) ENGINE = Memory";
echo 'NULL, NULL
@ -37,3 +39,32 @@ echo 'NULL, NULL
$CLICKHOUSE_CLIENT --query="SELECT * FROM csv ORDER BY s NULLS LAST";
$CLICKHOUSE_CLIENT --query="DROP TABLE csv";
echo === Test ignore extra columns
$CLICKHOUSE_CLIENT --query="CREATE TABLE csv (s String, n UInt64 DEFAULT 3, d String DEFAULT 'String4') ENGINE = Memory";
echo '"Hello", 1, "String1"
"Hello", 2, "String2",
"Hello", 3, "String3", "2016-01-13"
"Hello", 4, , "2016-01-14"
"Hello", 5, "String5", "2016-01-15", "2016-01-16"
"Hello", 6, "String6" , "line with a
break"' | $CLICKHOUSE_CLIENT --input_format_defaults_for_omitted_fields=1 --input_format_csv_empty_as_default=1 --input_format_csv_allow_variable_number_of_columns=1 --query="INSERT INTO csv FORMAT CSV";
$CLICKHOUSE_CLIENT --query="SELECT * FROM csv ORDER BY s, n";
$CLICKHOUSE_CLIENT --query="DROP TABLE csv";
echo === Test missing as default
$CLICKHOUSE_CLIENT --query="CREATE TABLE csv (f1 String, f2 UInt64, f3 UInt256, f4 UInt64 Default 33, f5 Nullable(UInt64), f6 Nullable(UInt64) Default 55, f7 String DEFAULT 'Default') ENGINE = Memory";
echo '
,
"Hello"
"Hello",
"Hello", 1, 3, 2
"Hello",1,4,2,3,4,"String"
"Hello", 1, 4, 2, 3, 4, "String"
"Hello", 1, 5, 2, 3, 4, "String",'| $CLICKHOUSE_CLIENT --input_format_defaults_for_omitted_fields=1 --input_format_csv_allow_variable_number_of_columns=1 --query="INSERT INTO csv FORMAT CSV";
$CLICKHOUSE_CLIENT --query="SELECT * FROM csv ORDER BY f1, f2, f3, f4, f5 NULLS FIRST, f6, f7";
$CLICKHOUSE_CLIENT --query="DROP TABLE csv";


@ -7,13 +7,17 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
format="RowBinary"
function query {
# bash isn't able to store \0 bytes, so use [1; 255] random range
echo "SELECT greatest(toUInt8(1), toUInt8(intHash64(number))) FROM system.numbers LIMIT $1 FORMAT RowBinary"
echo "SELECT greatest(toUInt8(1), toUInt8(intHash64(number))) FROM system.numbers LIMIT $1 FORMAT $format"
}
function ch_url() {
${CLICKHOUSE_CURL_COMMAND} -q -sS "${CLICKHOUSE_URL}&max_block_size=$max_block_size&$1" -d "$(query "$2")"
${CLICKHOUSE_CURL_COMMAND} -q -sS \
"${CLICKHOUSE_URL}${max_block_size:+"&max_block_size=$max_block_size"}&$1" \
-d "$(query "$2")"
}
@ -24,9 +28,9 @@ exception_pattern="DB::Exception:[[:print:]]*"
function check_only_exception() {
local res
res=$(ch_url "$1" "$2")
#(echo "$res")
#(echo "$res" | wc -l)
#(echo "$res" | grep -c "$exception_pattern")
# echo "$res"
# echo "$res" | wc -l
# echo "$res" | grep -c "$exception_pattern"
[[ $(echo "$res" | wc -l) -eq 1 ]] || echo FAIL 1 "$@"
[[ $(echo "$res" | grep -c "$exception_pattern") -eq 1 ]] || echo FAIL 2 "$@"
}
@ -34,19 +38,23 @@ function check_only_exception() {
function check_last_line_exception() {
local res
res=$(ch_url "$1" "$2")
#echo "$res" > res
#echo "$res" | wc -c
#echo "$res" | tail -n -2
# echo "$res" > res
# echo "$res" | wc -c
# echo "$res" | tail -n -2
[[ $(echo "$res" | tail -n -1 | grep -c "$exception_pattern") -eq 1 ]] || echo FAIL 3 "$@"
[[ $(echo "$res" | head -n -1 | grep -c "$exception_pattern") -eq 0 ]] || echo FAIL 4 "$@"
}
function check_exception_handling() {
format=TSV \
check_last_line_exception \
"max_block_size=30000&max_result_rows=400000&buffer_size=1048577&wait_end_of_query=0" 111222333444
check_only_exception "max_result_bytes=1000" 1001
check_only_exception "max_result_bytes=1000&wait_end_of_query=1" 1001
check_only_exception "max_result_bytes=1048576&buffer_size=1048576&wait_end_of_query=0" 1048577
check_only_exception "max_result_bytes=1048576&buffer_size=1048576&wait_end_of_query=1" 1048577
check_last_line_exception "max_result_bytes=1048576&buffer_size=1048576&wait_end_of_query=0" 1048577
check_only_exception "max_result_bytes=1048576&buffer_size=1048576&wait_end_of_query=1" 1048577
check_only_exception "max_result_bytes=1500000&buffer_size=2500000&wait_end_of_query=0" 1500001
check_only_exception "max_result_bytes=1500000&buffer_size=1500000&wait_end_of_query=1" 1500001


@ -4,3 +4,4 @@
201707
20170721
20170721112233
19691231235959


@ -4,3 +4,4 @@ SELECT toYYYYMMDDhhmmss(toDate('2017-07-21'));
SELECT toYYYYMM(toDateTime('2017-07-21T11:22:33'));
SELECT toYYYYMMDD(toDateTime('2017-07-21T11:22:33'));
SELECT toYYYYMMDDhhmmss(toDateTime('2017-07-21T11:22:33'));
SELECT toYYYYMMDDhhmmss(toDateTime64('1969-12-31 23:59:59.900', 3));


@ -33,4 +33,14 @@ Hello
2021-01-01
1
1
86400000
172800000
86461000
86401299
701
701
800
60200201
60
10
1


@ -41,4 +41,16 @@ SELECT TIMESTAMPSUB(DATE '2022-01-01', INTERVAL 1 YEAR);
SELECT DATE_DIFF(YEAR, DATE '2021-01-01', DATE '2022-01-01');
SELECT DATEDIFF(YEAR, DATE '2021-01-01', DATE '2022-01-01');
SELECT DATEDIFF(millisecond, '2021-01-01'::Date, '2021-01-02'::Date);
SELECT DATEDIFF(millisecond, '2021-01-01'::Date, '2021-01-03'::Date32);
SELECT DATEDIFF(millisecond, '2021-01-01'::Date, '2021-01-02 00:01:01'::DateTime);
SELECT DATEDIFF(millisecond, '2021-01-01'::Date, '2021-01-02 00:00:01.299'::DateTime64);
SELECT DATEDIFF(millisecond, '2021-01-01 23:59:59.299'::DateTime64, '2021-01-02'::Date);
SELECT DATEDIFF(millisecond, '2021-01-01 23:59:59.299999'::DateTime64(6), '2021-01-02'::Date);
SELECT DATEDIFF(millisecond, '2021-01-01 23:59:59.2'::DateTime64(1), '2021-01-02'::Date);
SELECT DATEDIFF(microsecond, '2021-01-01 23:59:59.899999'::DateTime64(6), '2021-01-02 00:01:00.100200300'::DateTime64(9));
SELECT DATEDIFF(microsecond, '1969-12-31 23:59:59.999950'::DateTime64(6), '1970-01-01 00:00:00.000010'::DateTime64(6));
SELECT DATEDIFF(second, '1969-12-31 23:59:59.123'::DateTime64(6), '1970-01-01 00:00:09.123'::DateTime64(6));
SELECT EXISTS (SELECT 1);


@ -22,18 +22,27 @@ $CLICKHOUSE_CLIENT -nm -q "
function delete_user()
{
$CLICKHOUSE_CLIENT -q "DROP USER IF EXISTS test_user_02242" ||:
while true; do
$CLICKHOUSE_CLIENT -q "DROP USER IF EXISTS test_user_02242" ||:
sleep 0.$RANDOM;
done
}
function create_and_login_user()
{
$CLICKHOUSE_CLIENT -q "CREATE USER IF NOT EXISTS test_user_02242" ||:
$CLICKHOUSE_CLIENT -u "test_user_02242" -q "SELECT COUNT(*) FROM system.session_log WHERE user == 'test_user_02242'" > /dev/null ||:
while true; do
$CLICKHOUSE_CLIENT -q "CREATE USER IF NOT EXISTS test_user_02242" ||:
$CLICKHOUSE_CLIENT -u "test_user_02242" -q "SELECT COUNT(*) FROM system.session_log WHERE user == 'test_user_02242'" > /dev/null ||:
sleep 0.$RANDOM;
done
}
function set_role()
{
$CLICKHOUSE_CLIENT -q "SET ROLE test_role_02242 TO test_user_02242" ||:
while true; do
$CLICKHOUSE_CLIENT -q "SET DEFAULT ROLE test_role_02242 TO test_user_02242" ||:
sleep 0.$RANDOM;
done
}
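# The helpers above now loop until killed; each background invocation below is wrapped in
# `timeout`, which bounds how long the stress functions keep running.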
export -f delete_user
@@ -42,12 +51,10 @@ export -f set_role
TIMEOUT=10
for (( i = 0 ; i < 100; ++i ))
do
clickhouse_client_loop_timeout $TIMEOUT create_and_login_user 2> /dev/null &
clickhouse_client_loop_timeout $TIMEOUT delete_user 2> /dev/null &
clickhouse_client_loop_timeout $TIMEOUT set_role 2> /dev/null &
done
timeout $TIMEOUT bash -c create_and_login_user 2> /dev/null &
timeout $TIMEOUT bash -c delete_user 2> /dev/null &
timeout $TIMEOUT bash -c set_role 2> /dev/null &
wait

View File

@@ -19,17 +19,18 @@ $CLICKHOUSE_CLIENT -nm -q "
function create_drop_grant()
{
$CLICKHOUSE_CLIENT -q "CREATE USER IF NOT EXISTS test_user_02243 GRANTEES NONE" ||:
$CLICKHOUSE_CLIENT -q "GRANT ALL ON *.* TO test_user_02243 WITH GRANT OPTION" ||:
$CLICKHOUSE_CLIENT -q "DROP USER IF EXISTS test_user_02243" &
$CLICKHOUSE_CLIENT --user test_user_02243 -q "GRANT ALL ON *.* TO kek_02243" &
wait
while true; do
$CLICKHOUSE_CLIENT -q "CREATE USER IF NOT EXISTS test_user_02243 GRANTEES NONE" ||:
$CLICKHOUSE_CLIENT -q "GRANT ALL ON *.* TO test_user_02243 WITH GRANT OPTION" ||:
$CLICKHOUSE_CLIENT -q "DROP USER IF EXISTS test_user_02243" &
$CLICKHOUSE_CLIENT --user test_user_02243 -q "GRANT ALL ON *.* TO kek_02243" &
done
}
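# The loop keeps re-creating test_user_02243 and racing DROP USER against a GRANT issued
# by that same user; the outer `timeout` below bounds the whole stress run.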
export -f create_drop_grant
TIMEOUT=10
clickhouse_client_loop_timeout $TIMEOUT create_drop_grant 2> /dev/null &
timeout $TIMEOUT bash -c create_drop_grant 2> /dev/null &
wait
$CLICKHOUSE_CLIENT --user kek_02243 -q "SELECT * FROM test" 2>&1| grep -Fa "Exception: " | grep -Eo ACCESS_DENIED | uniq

View File

@@ -111,3 +111,109 @@ SELECT age('day', materialize(toDateTime64('2015-08-18 00:00:00', 0, 'UTC')), ma
1
SELECT age('day', materialize(toDate('2015-08-18', 'UTC')), materialize(toDateTime64('2015-08-19 00:00:00', 3, 'UTC')));
1
-- DateTime64 vs DateTime64 with fractional part
SELECT age('microsecond', toDateTime64('2015-08-18 20:30:36.100200005', 9, 'UTC'), toDateTime64('2015-08-18 20:30:41.200400005', 9, 'UTC'));
5100200
SELECT age('microsecond', toDateTime64('2015-08-18 20:30:36.100200005', 9, 'UTC'), toDateTime64('2015-08-18 20:30:41.200400004', 9, 'UTC'));
5100200
SELECT age('millisecond', toDateTime64('2015-08-18 20:30:36.450299', 6, 'UTC'), toDateTime64('2015-08-18 20:30:41.550299', 6, 'UTC'));
5100
SELECT age('millisecond', toDateTime64('2015-08-18 20:30:36.450299', 6, 'UTC'), toDateTime64('2015-08-18 20:30:41.550298', 6, 'UTC'));
5099
SELECT age('second', toDateTime64('2023-03-01 19:18:36.999003', 6, 'UTC'), toDateTime64('2023-03-01 19:18:41.999002', 6, 'UTC'));
4
SELECT age('second', toDateTime64('2023-03-01 19:18:36.999', 3, 'UTC'), toDateTime64('2023-03-01 19:18:41.001', 3, 'UTC'));
4
SELECT age('minute', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-01 20:35:36.300', 3, 'UTC'));
5
SELECT age('minute', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-01 20:35:36.100', 3, 'UTC'));
4
SELECT age('minute', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-01 20:35:36.200100', 6, 'UTC'));
4
SELECT age('hour', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
3
SELECT age('hour', toDateTime64('2015-01-01 20:31:36.200', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
2
SELECT age('hour', toDateTime64('2015-01-01 20:30:37.200', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
2
SELECT age('hour', toDateTime64('2015-01-01 20:30:36.300', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
2
SELECT age('hour', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-01 23:30:36.200100', 6, 'UTC'));
2
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:30:36.200', 3, 'UTC'));
3
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 19:30:36.200', 3, 'UTC'));
2
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:28:36.200', 3, 'UTC'));
2
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:30:35.200', 3, 'UTC'));
2
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:30:36.199', 3, 'UTC'));
2
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-04 20:30:36.200100', 6, 'UTC'));
2
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:30:36.200', 3, 'UTC'));
2
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 19:30:36.200', 3, 'UTC'));
1
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:29:36.200', 3, 'UTC'));
1
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:30:35.200', 3, 'UTC'));
1
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:30:36.100', 3, 'UTC'));
1
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-15 20:30:36.200100', 6, 'UTC'));
1
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:30:36.200', 3, 'UTC'));
16
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-01 20:30:36.200', 3, 'UTC'));
15
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 19:30:36.200', 3, 'UTC'));
15
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:29:36.200', 3, 'UTC'));
15
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:30:35.200', 3, 'UTC'));
15
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:30:36.100', 3, 'UTC'));
15
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200101', 6, 'UTC'), toDateTime64('2016-05-02 20:30:36.200100', 6, 'UTC'));
15
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:30:36.200', 3, 'UTC'));
5
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-01 20:30:36.200', 3, 'UTC'));
4
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 19:30:36.200', 3, 'UTC'));
4
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:29:36.200', 3, 'UTC'));
4
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:30:35.200', 3, 'UTC'));
4
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:30:36.100', 3, 'UTC'));
4
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200101', 6, 'UTC'), toDateTime64('2016-04-02 20:30:36.200100', 6, 'UTC'));
4
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:30:36.200', 3, 'UTC'));
8
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-01-02 20:30:36.200', 3, 'UTC'));
7
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-01 20:30:36.200', 3, 'UTC'));
7
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 19:30:36.200', 3, 'UTC'));
7
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:29:36.200', 3, 'UTC'));
7
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:30:35.200', 3, 'UTC'));
7
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:30:36.100', 3, 'UTC'));
7
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200101', 6, 'UTC'), toDateTime64('2023-02-02 20:30:36.200100', 6, 'UTC'));
7
-- DateTime64 vs DateTime64 with negative time
SELECT age('millisecond', toDateTime64('1969-12-31 23:59:58.001', 3, 'UTC'), toDateTime64('1970-01-01 00:00:00.350', 3, 'UTC'));
2349
SELECT age('second', toDateTime64('1969-12-31 23:59:58.001', 3, 'UTC'), toDateTime64('1970-01-01 00:00:00.35', 3, 'UTC'));
2
SELECT age('second', toDateTime64('1969-12-31 23:59:50.001', 3, 'UTC'), toDateTime64('1969-12-31 23:59:55.002', 3, 'UTC'));
5
SELECT age('second', toDateTime64('1969-12-31 23:59:50.003', 3, 'UTC'), toDateTime64('1969-12-31 23:59:55.002', 3, 'UTC'));
4

View File

@@ -75,3 +75,68 @@ SELECT age('second', materialize(toDateTime64('2015-08-18 00:00:00', 0, 'UTC')),
SELECT age('second', materialize(toDateTime('2015-08-18 00:00:00', 'UTC')), materialize(toDateTime64('2015-08-18 00:00:10', 3, 'UTC')));
SELECT age('day', materialize(toDateTime64('2015-08-18 00:00:00', 0, 'UTC')), materialize(toDate('2015-08-19', 'UTC')));
SELECT age('day', materialize(toDate('2015-08-18', 'UTC')), materialize(toDateTime64('2015-08-19 00:00:00', 3, 'UTC')));
-- DateTime64 vs DateTime64 with fractional part
SELECT age('microsecond', toDateTime64('2015-08-18 20:30:36.100200005', 9, 'UTC'), toDateTime64('2015-08-18 20:30:41.200400005', 9, 'UTC'));
SELECT age('microsecond', toDateTime64('2015-08-18 20:30:36.100200005', 9, 'UTC'), toDateTime64('2015-08-18 20:30:41.200400004', 9, 'UTC'));
SELECT age('millisecond', toDateTime64('2015-08-18 20:30:36.450299', 6, 'UTC'), toDateTime64('2015-08-18 20:30:41.550299', 6, 'UTC'));
SELECT age('millisecond', toDateTime64('2015-08-18 20:30:36.450299', 6, 'UTC'), toDateTime64('2015-08-18 20:30:41.550298', 6, 'UTC'));
SELECT age('second', toDateTime64('2023-03-01 19:18:36.999003', 6, 'UTC'), toDateTime64('2023-03-01 19:18:41.999002', 6, 'UTC'));
SELECT age('second', toDateTime64('2023-03-01 19:18:36.999', 3, 'UTC'), toDateTime64('2023-03-01 19:18:41.001', 3, 'UTC'));
SELECT age('minute', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-01 20:35:36.300', 3, 'UTC'));
SELECT age('minute', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-01 20:35:36.100', 3, 'UTC'));
SELECT age('minute', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-01 20:35:36.200100', 6, 'UTC'));
SELECT age('hour', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
SELECT age('hour', toDateTime64('2015-01-01 20:31:36.200', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
SELECT age('hour', toDateTime64('2015-01-01 20:30:37.200', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
SELECT age('hour', toDateTime64('2015-01-01 20:30:36.300', 3, 'UTC'), toDateTime64('2015-01-01 23:30:36.200', 3, 'UTC'));
SELECT age('hour', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-01 23:30:36.200100', 6, 'UTC'));
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:30:36.200', 3, 'UTC'));
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 19:30:36.200', 3, 'UTC'));
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:28:36.200', 3, 'UTC'));
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:30:35.200', 3, 'UTC'));
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-04 20:30:36.199', 3, 'UTC'));
SELECT age('day', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-04 20:30:36.200100', 6, 'UTC'));
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:30:36.200', 3, 'UTC'));
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 19:30:36.200', 3, 'UTC'));
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:29:36.200', 3, 'UTC'));
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:30:35.200', 3, 'UTC'));
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200', 3, 'UTC'), toDateTime64('2015-01-15 20:30:36.100', 3, 'UTC'));
SELECT age('week', toDateTime64('2015-01-01 20:30:36.200101', 6, 'UTC'), toDateTime64('2015-01-15 20:30:36.200100', 6, 'UTC'));
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:30:36.200', 3, 'UTC'));
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-01 20:30:36.200', 3, 'UTC'));
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 19:30:36.200', 3, 'UTC'));
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:29:36.200', 3, 'UTC'));
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:30:35.200', 3, 'UTC'));
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-05-02 20:30:36.100', 3, 'UTC'));
SELECT age('month', toDateTime64('2015-01-02 20:30:36.200101', 6, 'UTC'), toDateTime64('2016-05-02 20:30:36.200100', 6, 'UTC'));
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:30:36.200', 3, 'UTC'));
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-01 20:30:36.200', 3, 'UTC'));
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 19:30:36.200', 3, 'UTC'));
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:29:36.200', 3, 'UTC'));
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:30:35.200', 3, 'UTC'));
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200', 3, 'UTC'), toDateTime64('2016-04-02 20:30:36.100', 3, 'UTC'));
SELECT age('quarter', toDateTime64('2015-01-02 20:30:36.200101', 6, 'UTC'), toDateTime64('2016-04-02 20:30:36.200100', 6, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:30:36.200', 3, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-01-02 20:30:36.200', 3, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-01 20:30:36.200', 3, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 19:30:36.200', 3, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:29:36.200', 3, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:30:35.200', 3, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200', 3, 'UTC'), toDateTime64('2023-02-02 20:30:36.100', 3, 'UTC'));
SELECT age('year', toDateTime64('2015-02-02 20:30:36.200101', 6, 'UTC'), toDateTime64('2023-02-02 20:30:36.200100', 6, 'UTC'));
-- DateTime64 vs DateTime64 with negative time
SELECT age('millisecond', toDateTime64('1969-12-31 23:59:58.001', 3, 'UTC'), toDateTime64('1970-01-01 00:00:00.350', 3, 'UTC'));
SELECT age('second', toDateTime64('1969-12-31 23:59:58.001', 3, 'UTC'), toDateTime64('1970-01-01 00:00:00.35', 3, 'UTC'));
SELECT age('second', toDateTime64('1969-12-31 23:59:50.001', 3, 'UTC'), toDateTime64('1969-12-31 23:59:55.002', 3, 'UTC'));
SELECT age('second', toDateTime64('1969-12-31 23:59:50.003', 3, 'UTC'), toDateTime64('1969-12-31 23:59:55.002', 3, 'UTC'));
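-- Sanity check of the expected values: 23:59:58.001 to 00:00:00.350 spans 2.349 s, i.e. 2349
-- whole milliseconds but only 2 whole seconds; 23:59:50.001 to 23:59:55.002 is 5.001 s (5 whole
-- seconds), while 23:59:50.003 to 23:59:55.002 is 4.999 s (4 whole seconds).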

View File

@@ -0,0 +1,2 @@
1 1
2 0.5

View File

@@ -0,0 +1,10 @@
# We support specifying aliases anywhere in the query, including inside a CASE expression:
with arrayJoin([1,2]) as arg
select arg,
(case
when arg = 1
then 1 as one
when arg = 2
then one / 2
end) as impossible;
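-- Expected output (see the reference file above): 1 1 and 2 0.5; the alias `one` defined in
-- the first branch is reused in the second branch's expression.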