Merge branch 'ClickHouse:master' into fix_test_00080_show_tables_and_system_tables

Yarik Briukhovetskyi 2024-08-23 15:29:52 +02:00 committed by GitHub
commit 15f04fa313
67 changed files with 437 additions and 161 deletions

View File

@ -130,6 +130,7 @@ jobs:
    with:
      build_name: package_debug
      data: ${{ needs.RunConfig.outputs.data }}
      force: true
  BuilderBinDarwin:
    needs: [RunConfig, BuildDockers]
    if: ${{ !failure() && !cancelled() }}

View File

@ -44,7 +44,7 @@ The following upcoming meetups are featuring creator of ClickHouse & CTO, Alexey
* [ClickHouse Guangzhou User Group Meetup](https://mp.weixin.qq.com/s/GSvo-7xUoVzCsuUvlLTpCw) - August 25
* [San Francisco Meetup (Cloudflare)](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/302540575) - September 5
* [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/clickhouse-nc-meetup-group/events/302557230) - September 9
* [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/triangletechtalks/events/302723486/) - September 9
* [New York Meetup (Rokt)](https://www.meetup.com/clickhouse-new-york-user-group/events/302575342) - September 10
* [Chicago Meetup (Jump Capital)](https://lu.ma/43tvmrfw) - September 12

View File

@ -40,6 +40,3 @@ RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
ARG sqllogic_test_repo="https://github.com/gregrahn/sqllogictest.git"
RUN git clone --recursive ${sqllogic_test_repo}
COPY run.sh /
CMD ["/bin/bash", "/run.sh"]

View File

@ -0,0 +1,33 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.5.6.45-stable (bdca8604c29) FIXME as compared to v24.5.5.78-stable (0138248cb62)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#67902](https://github.com/ClickHouse/ClickHouse/issues/67902): Fixing the `Not-ready Set` error after the `PREWHERE` optimization for StorageMerge. [#65057](https://github.com/ClickHouse/ClickHouse/pull/65057) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68252](https://github.com/ClickHouse/ClickHouse/issues/68252): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#68064](https://github.com/ClickHouse/ClickHouse/issues/68064): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68158](https://github.com/ClickHouse/ClickHouse/issues/68158): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68115](https://github.com/ClickHouse/ClickHouse/issues/68115): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67886](https://github.com/ClickHouse/ClickHouse/issues/67886): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#68272](https://github.com/ClickHouse/ClickHouse/issues/68272): Fix inserting into stream-like engines (Kafka, RabbitMQ, NATS) through the HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67807](https://github.com/ClickHouse/ClickHouse/issues/67807): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make the UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67836](https://github.com/ClickHouse/ClickHouse/issues/67836): Fix a potential stack overflow in the `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong; it is kept as an alias for compatibility. Improved diagnostics of errors in the function (see the example after this list). This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67991](https://github.com/ClickHouse/ClickHouse/issues/67991): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68207](https://github.com/ClickHouse/ClickHouse/issues/68207): Fix wrong `count()` result when there is a non-deterministic function in the predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68091](https://github.com/ClickHouse/ClickHouse/issues/68091): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68122](https://github.com/ClickHouse/ClickHouse/issues/68122): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to its predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68171](https://github.com/ClickHouse/ClickHouse/issues/68171): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68337](https://github.com/ClickHouse/ClickHouse/issues/68337): Fix a possible PostgreSQL crash when a query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68667](https://github.com/ClickHouse/ClickHouse/issues/68667): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
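
For illustration, a minimal example of the renamed function (a sketch based on RFC 7386 merge-patch semantics; the expected output is an assumption derived from the standard, not taken from this commit):

```sql
-- jsonMergePatch remains as a compatibility alias for JSONMergePatch
SELECT JSONMergePatch('{"a":"b","c":{"d":"e"}}', '{"a":"z","c":{"f":"g"}}') AS patched;
-- expected: {"a":"z","c":{"d":"e","f":"g"}}
```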
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Update version after release. [#67862](https://github.com/ClickHouse/ClickHouse/pull/67862) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68077](https://github.com/ClickHouse/ClickHouse/issues/68077): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables (see the sketch after this list). [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Backported in [#68756](https://github.com/ClickHouse/ClickHouse/issues/68756): To make a patch release possible from every commit on a release branch, the package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).
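
A hedged sketch of the explicit-error behavior mentioned above (table and view names here are hypothetical, and the exact error text is not taken from this commit):

```sql
CREATE VIEW v_sec AS SELECT 1;
ALTER TABLE v_sec MODIFY SQL SECURITY INVOKER;              -- views: allowed

CREATE TABLE t_sec (x UInt8) ENGINE = MergeTree ORDER BY x;
ALTER TABLE t_sec MODIFY SQL SECURITY INVOKER;              -- non-view table: now fails with an explicit error
```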

View File

@ -0,0 +1,33 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.6.4.42-stable (c534bb4b4dd) FIXME as compared to v24.6.3.95-stable (8325c920d11)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68066](https://github.com/ClickHouse/ClickHouse/issues/68066): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68566](https://github.com/ClickHouse/ClickHouse/issues/68566): Fix indexHint function case found by fuzzer. [#66286](https://github.com/ClickHouse/ClickHouse/pull/66286) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68159](https://github.com/ClickHouse/ClickHouse/issues/68159): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68116](https://github.com/ClickHouse/ClickHouse/issues/68116): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67887](https://github.com/ClickHouse/ClickHouse/issues/67887): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#68611](https://github.com/ClickHouse/ClickHouse/issues/68611): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid traversing unresolved table-function arguments in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68275](https://github.com/ClickHouse/ClickHouse/issues/68275): Fix inserting into stream-like engines (Kafka, RabbitMQ, NATS) through the HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67993](https://github.com/ClickHouse/ClickHouse/issues/67993): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68208](https://github.com/ClickHouse/ClickHouse/issues/68208): Fix wrong `count()` result when there is a non-deterministic function in the predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68093](https://github.com/ClickHouse/ClickHouse/issues/68093): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68124](https://github.com/ClickHouse/ClickHouse/issues/68124): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to its predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68221](https://github.com/ClickHouse/ClickHouse/issues/68221): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via `hopEnd`, `hopStart`, `tumbleEnd`, and `tumbleStart` (see the sketch after this list). [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68173](https://github.com/ClickHouse/ClickHouse/issues/68173): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68339](https://github.com/ClickHouse/ClickHouse/issues/68339): Fix a possible PostgreSQL crash when a query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68396](https://github.com/ClickHouse/ClickHouse/issues/68396): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68668](https://github.com/ClickHouse/ClickHouse/issues/68668): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
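
As a rough illustration of the window-helper functions involved in the NULL-dereference fix above (a sketch with well-formed arguments; the crashing queries are intentionally not reproduced here):

```sql
SELECT
    tumbleStart(toDateTime('2024-08-23 15:29:52'), INTERVAL 10 MINUTE) AS window_start,
    tumbleEnd(toDateTime('2024-08-23 15:29:52'), INTERVAL 10 MINUTE) AS window_end;
```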
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Update version after release. [#67909](https://github.com/ClickHouse/ClickHouse/pull/67909) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68079](https://github.com/ClickHouse/ClickHouse/issues/68079): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Backported in [#68758](https://github.com/ClickHouse/ClickHouse/issues/68758): To make a patch release possible from every commit on a release branch, the package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).

View File

@ -0,0 +1,36 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.7.4.51-stable (70fe2f6fa52) FIXME as compared to v24.7.3.42-stable (63730bc4293)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68232](https://github.com/ClickHouse/ClickHouse/issues/68232): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#68068](https://github.com/ClickHouse/ClickHouse/issues/68068): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68613](https://github.com/ClickHouse/ClickHouse/issues/68613): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid traversing unresolved table-function arguments in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68278](https://github.com/ClickHouse/ClickHouse/issues/68278): Fix inserting into stream-like engines (Kafka, RabbitMQ, NATS) through the HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68040](https://github.com/ClickHouse/ClickHouse/issues/68040): Fix creation of view with recursive CTE. [#67587](https://github.com/ClickHouse/ClickHouse/pull/67587) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Backported in [#68038](https://github.com/ClickHouse/ClickHouse/issues/68038): Fix a crash in `percent_rank`. Its default frame type is changed to `range unbounded preceding and unbounded following`; `IWindowFunction`'s default window frame is now taken into account, so window functions without a window-frame definition in SQL are placed into the proper `WindowTransfomer`s (see the example after this list). [#67661](https://github.com/ClickHouse/ClickHouse/pull/67661) ([lgbo](https://github.com/lgbo-ustc)).
* Backported in [#68224](https://github.com/ClickHouse/ClickHouse/issues/68224): Fix wrong `count()` result when there is a non-deterministic function in the predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68095](https://github.com/ClickHouse/ClickHouse/issues/68095): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68126](https://github.com/ClickHouse/ClickHouse/issues/68126): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to its predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68223](https://github.com/ClickHouse/ClickHouse/issues/68223): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via hopEnd, hopStart, tumbleEnd, and tumbleStart. [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68175](https://github.com/ClickHouse/ClickHouse/issues/68175): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68341](https://github.com/ClickHouse/ClickHouse/issues/68341): Fix a possible PostgreSQL crash when a query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68398](https://github.com/ClickHouse/ClickHouse/issues/68398): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68669](https://github.com/ClickHouse/ClickHouse/issues/68669): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
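
A small example of the `percent_rank` behavior referenced above (assuming the function is available under this name, as the entry states; the expected output is an assumption from the standard definition of percent rank):

```sql
SELECT
    number,
    percent_rank() OVER (ORDER BY number) AS pr
FROM numbers(5);
-- expected pr: 0, 0.25, 0.5, 0.75, 1
```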
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#67803](https://github.com/ClickHouse/ClickHouse/issues/67803): Disable some Dynamic tests under sanitizers, rewrite 03202_dynamic_null_map_subcolumn to sql. [#67359](https://github.com/ClickHouse/ClickHouse/pull/67359) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68081](https://github.com/ClickHouse/ClickHouse/issues/68081): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Update version after release. [#68044](https://github.com/ClickHouse/ClickHouse/pull/68044) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68269](https://github.com/ClickHouse/ClickHouse/issues/68269): [Green CI] Fix test 01903_correct_block_size_prediction_with_default. [#68203](https://github.com/ClickHouse/ClickHouse/pull/68203) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#68432](https://github.com/ClickHouse/ClickHouse/issues/68432): tests: make 01600_parts_states_metrics_long better. [#68265](https://github.com/ClickHouse/ClickHouse/pull/68265) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68538](https://github.com/ClickHouse/ClickHouse/issues/68538): CI: Native build for package_aarch64. [#68457](https://github.com/ClickHouse/ClickHouse/pull/68457) ([Max K.](https://github.com/maxknv)).
* Backported in [#68555](https://github.com/ClickHouse/ClickHouse/issues/68555): CI: Minor release workflow fix. [#68536](https://github.com/ClickHouse/ClickHouse/pull/68536) ([Max K.](https://github.com/maxknv)).
* Backported in [#68760](https://github.com/ClickHouse/ClickHouse/issues/68760): To make a patch release possible from every commit on a release branch, the package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).

View File

@ -273,6 +273,25 @@ void SystemLogBase<LogElement>::startup()
    saving_thread = std::make_unique<ThreadFromGlobalPool>([this] { savingThreadFunction(); });
}

template <typename LogElement>
void SystemLogBase<LogElement>::stopFlushThread()
{
    {
        std::lock_guard lock(thread_mutex);

        if (!saving_thread || !saving_thread->joinable())
            return;

        if (is_shutdown)
            return;

        is_shutdown = true;
        queue->shutdown();
    }

    saving_thread->join();
}

template <typename LogElement>
void SystemLogBase<LogElement>::add(LogElement element)
{

View File

@ -216,6 +216,8 @@ public:
    static consteval bool shouldTurnOffLogger() { return false; }

protected:
    void stopFlushThread() final;

    std::shared_ptr<SystemLogQueue<LogElement>> queue;
};

}

View File

@ -72,11 +72,13 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
{"24.9",
{
{"input_format_try_infer_variants", false, false, "Try to infer Variant type in text formats when there is more than one possible type for column/array elements"},
{"join_output_by_rowlist_perkey_rows_threshold", 0, 5, "The lower limit of per-key average rows in the right table to determine whether to output by row list in hash join."},
{"create_if_not_exists", false, false, "New setting."},
{"allow_materialized_view_with_bad_select", true, true, "Support (but not enable yet) stricter validation in CREATE MATERIALIZED VIEW"},
}
},
{"24.8",
{
{"create_if_not_exists", false, false, "New setting."},
{"rows_before_aggregation", true, true, "Provide exact value for rows_before_aggregation statistic, represents the number of rows read before aggregation"},
{"restore_replace_external_table_functions_to_null", false, false, "New setting."},
{"restore_replace_external_engines_to_null", false, false, "New setting."},
@ -85,7 +87,6 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
{"use_hive_partitioning", false, false, "Allows to use hive partitioning for File, URL, S3, AzureBlobStorage and HDFS engines."},
{"allow_experimental_kafka_offsets_storage_in_keeper", false, false, "Allow the usage of experimental Kafka storage engine that stores the committed offsets in ClickHouse Keeper"},
{"allow_archive_path_syntax", true, true, "Added new setting to allow disabling archive path syntax."},
{"allow_materialized_view_with_bad_select", true, true, "Support (but not enable yet) stricter validation in CREATE MATERIALIZED VIEW"},
{"query_cache_tag", "", "", "New setting for labeling query cache settings."},
{"allow_experimental_time_series_table", false, false, "Added new setting to allow the TimeSeries table engine"},
{"enable_analyzer", 1, 1, "Added an alias to a setting `allow_experimental_analyzer`."},
@ -93,7 +94,6 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
{"allow_experimental_json_type", false, false, "Add new experimental JSON type"},
{"use_json_alias_for_old_object_type", true, false, "Use JSON type alias to create new JSON type"},
{"type_json_skip_duplicated_paths", false, false, "Allow to skip duplicated paths during JSON parsing"},
{"join_output_by_rowlist_perkey_rows_threshold", 0, 5, "The lower limit of per-key average rows in the right table to determine whether to output by row list in hash join."},
{"allow_experimental_vector_similarity_index", false, false, "Added new setting to allow experimental vector similarity indexes"},
{"input_format_try_infer_datetimes_only_datetime64", true, false, "Allow to infer DateTime instead of DateTime64 in data formats"}
}

View File

@ -93,9 +93,9 @@ namespace impl
    if (is_const)
        i = 0;
    assert(key0->size() == key1->size());
    if (offsets != nullptr)
    if (offsets != nullptr && i > 0)
    {
        const auto * const begin = offsets->begin();
        const auto * const begin = std::upper_bound(offsets->begin(), offsets->end(), i - 1);
        const auto * upper = std::upper_bound(begin, offsets->end(), i);
        if (upper != offsets->end())
            i = upper - begin;

View File

@ -10,7 +10,7 @@ void PeriodicLog<LogElement>::startCollect(size_t collect_interval_milliseconds_
{
    collect_interval_milliseconds = collect_interval_milliseconds_;
    is_shutdown_metric_thread = false;
    flush_thread = std::make_unique<ThreadFromGlobalPool>([this] { threadFunction(); });
    collecting_thread = std::make_unique<ThreadFromGlobalPool>([this] { threadFunction(); });
}

template <typename LogElement>
@ -19,15 +19,15 @@ void PeriodicLog<LogElement>::stopCollect()
    bool old_val = false;
    if (!is_shutdown_metric_thread.compare_exchange_strong(old_val, true))
        return;
    if (flush_thread)
        flush_thread->join();
    if (collecting_thread)
        collecting_thread->join();
}

template <typename LogElement>
void PeriodicLog<LogElement>::shutdown()
{
    stopCollect();
    this->stopFlushThread();
    Base::shutdown();
}

template <typename LogElement>

View File

@ -17,6 +17,7 @@ template <typename LogElement>
class PeriodicLog : public SystemLog<LogElement>
{
    using SystemLog<LogElement>::SystemLog;
    using Base = SystemLog<LogElement>;

public:
    using TimePoint = std::chrono::system_clock::time_point;

@ -24,18 +25,18 @@ public:
    /// Launches a background thread to collect metrics with interval
    void startCollect(size_t collect_interval_milliseconds_);

    /// Stop background thread
    void stopCollect();

    void shutdown() final;

protected:
    /// Stop background thread
    void stopCollect();

    virtual void stepFunction(TimePoint current_time) = 0;

private:
    void threadFunction();

    std::unique_ptr<ThreadFromGlobalPool> flush_thread;
    std::unique_ptr<ThreadFromGlobalPool> collecting_thread;
    size_t collect_interval_milliseconds;
    std::atomic<bool> is_shutdown_metric_thread{false};
};

View File

@ -402,32 +402,13 @@ SystemLog<LogElement>::SystemLog(
template <typename LogElement>
void SystemLog<LogElement>::shutdown()
{
    stopFlushThread();
    Base::stopFlushThread();

    auto table = DatabaseCatalog::instance().tryGetTable(table_id, getContext());
    if (table)
        table->flushAndShutdown();
}

template <typename LogElement>
void SystemLog<LogElement>::stopFlushThread()
{
    {
        std::lock_guard lock(thread_mutex);

        if (!saving_thread || !saving_thread->joinable())
            return;

        if (is_shutdown)
            return;

        is_shutdown = true;
        queue->shutdown();
    }

    saving_thread->join();
}

template <typename LogElement>
void SystemLog<LogElement>::savingThreadFunction()

View File

@ -125,8 +125,6 @@ public:
    void shutdown() override;

    void stopFlushThread() override;

    /** Creates new table if it does not exist.
      * Renames old table if its structure is not suitable.
      * This cannot be done in constructor to avoid deadlock while renaming a table under locked Context when SystemLog object is created.
@ -136,9 +134,6 @@ public:
protected:
    LoggerPtr log;

    using ISystemLog::is_shutdown;
    using ISystemLog::saving_thread;
    using ISystemLog::thread_mutex;
    using Base::queue;

    StoragePtr getStorage() const;

View File

@ -113,7 +113,15 @@ bool ColumnDescription::operator==(const ColumnDescription & other) const
        && ast_to_str(ttl) == ast_to_str(other.ttl);
}

void ColumnDescription::writeText(WriteBuffer & buf) const
String formatASTStateAware(IAST & ast, IAST::FormatState & state)
{
    WriteBufferFromOwnString buf;
    IAST::FormatSettings settings(buf, true, false);
    ast.formatImpl(settings, state, IAST::FormatStateStacked());
    return buf.str();
}

void ColumnDescription::writeText(WriteBuffer & buf, IAST::FormatState & state, bool include_comment) const
{
    /// NOTE: Serialization format is insane.
@ -126,20 +134,21 @@ void ColumnDescription::writeText(WriteBuffer & buf) const
        writeChar('\t', buf);
        DB::writeText(DB::toString(default_desc.kind), buf);
        writeChar('\t', buf);
        writeEscapedString(queryToString(default_desc.expression), buf);
        writeEscapedString(formatASTStateAware(*default_desc.expression, state), buf);
    }

    if (!comment.empty())
    if (!comment.empty() && include_comment)
    {
        writeChar('\t', buf);
        DB::writeText("COMMENT ", buf);
        writeEscapedString(queryToString(ASTLiteral(Field(comment))), buf);
        auto ast = ASTLiteral(Field(comment));
        writeEscapedString(formatASTStateAware(ast, state), buf);
    }

    if (codec)
    {
        writeChar('\t', buf);
        writeEscapedString(queryToString(codec), buf);
        writeEscapedString(formatASTStateAware(*codec, state), buf);
    }

    if (!settings.empty())
@ -150,21 +159,21 @@ void ColumnDescription::writeText(WriteBuffer & buf) const
        ASTSetQuery ast;
        ast.is_standalone = false;
        ast.changes = settings;
        writeEscapedString(queryToString(ast), buf);
        writeEscapedString(formatASTStateAware(ast, state), buf);
        DB::writeText(")", buf);
    }

    if (!statistics.empty())
    {
        writeChar('\t', buf);
        writeEscapedString(queryToString(statistics.getAST()), buf);
        writeEscapedString(formatASTStateAware(*statistics.getAST(), state), buf);
    }

    if (ttl)
    {
        writeChar('\t', buf);
        DB::writeText("TTL ", buf);
        writeEscapedString(queryToString(ttl), buf);
        writeEscapedString(formatASTStateAware(*ttl, state), buf);
    }

    writeChar('\n', buf);
@ -895,16 +904,17 @@ void ColumnsDescription::resetColumnTTLs()
}

String ColumnsDescription::toString() const
String ColumnsDescription::toString(bool include_comments) const
{
    WriteBufferFromOwnString buf;
    IAST::FormatState ast_format_state;

    writeCString("columns format version: 1\n", buf);
    DB::writeText(columns.size(), buf);
    writeCString(" columns:\n", buf);

    for (const ColumnDescription & column : columns)
        column.writeText(buf);
        column.writeText(buf, ast_format_state, include_comments);

    return buf.str();
}

View File

@ -104,7 +104,7 @@ struct ColumnDescription
    bool operator==(const ColumnDescription & other) const;
    bool operator!=(const ColumnDescription & other) const { return !(*this == other); }

    void writeText(WriteBuffer & buf) const;
    void writeText(WriteBuffer & buf, IAST::FormatState & state, bool include_comment) const;
    void readText(ReadBuffer & buf);
};

@ -137,7 +137,7 @@ public:
    /// NOTE Must correspond with Nested::flatten function.
    void flattenNested(); /// TODO: remove, insert already flattened Nested columns.

    bool operator==(const ColumnsDescription & other) const { return columns == other.columns; }
    bool operator==(const ColumnsDescription & other) const { return toString(false) == other.toString(false); }
    bool operator!=(const ColumnsDescription & other) const { return !(*this == other); }

    auto begin() const { return columns.begin(); }

@ -221,7 +221,7 @@ public:
    /// Does column has non default specified compression codec
    bool hasCompressionCodec(const String & column_name) const;

    String toString() const;
    String toString(bool include_comments = true) const;
    static ColumnsDescription parse(const String & str);

    size_t size() const

View File

@ -290,6 +290,10 @@ VirtualColumnsDescription StorageDistributed::createVirtuals()
desc.addEphemeral("_shard_num", std::make_shared<DataTypeUInt32>(), "Deprecated. Use function shardNum instead");
/// Add virtual columns from table with Merge engine.
desc.addEphemeral("_database", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>()), "The name of database which the row comes from");
desc.addEphemeral("_table", std::make_shared<DataTypeLowCardinality>(std::make_shared<DataTypeString>()), "The name of table which the row comes from");
return desc;
}

View File

@ -642,10 +642,6 @@ std::vector<ReadFromMerge::ChildPlan> ReadFromMerge::createChildrenPlans(SelectQ
            column_names_as_aliases.push_back(ExpressionActions::getSmallestColumn(storage_metadata_snapshot->getColumns().getAllPhysical()).name);
        }
    }
    else
    {
    }

    auto child = createPlanForTable(
        nested_storage_snaphsot,
@ -657,6 +653,7 @@ std::vector<ReadFromMerge::ChildPlan> ReadFromMerge::createChildrenPlans(SelectQ
        row_policy_data_opt,
        modified_context,
        current_streams);

    child.plan.addInterpreterContext(modified_context);

    if (child.plan.isInitialized())
@ -914,12 +911,14 @@ SelectQueryInfo ReadFromMerge::getModifiedQueryInfo(const ContextMutablePtr & mo
    modified_query_info.table_expression = replacement_table_expression;
    modified_query_info.planner_context->getOrCreateTableExpressionData(replacement_table_expression);

    auto get_column_options = GetColumnsOptions(GetColumnsOptions::All).withExtendedObjects().withVirtuals();
    if (storage_snapshot_->storage.supportsSubcolumns())
        get_column_options.withSubcolumns();
    auto get_column_options = GetColumnsOptions(GetColumnsOptions::All)
        .withExtendedObjects()
        .withSubcolumns(storage_snapshot_->storage.supportsSubcolumns());

    std::unordered_map<std::string, QueryTreeNodePtr> column_name_to_node;

    /// Consider only non-virtual columns of storage while checking for _table and _database columns.
    /// I.e. always override virtual columns with these names from underlying table (if any).
    if (!storage_snapshot_->tryGetColumn(get_column_options, "_table"))
    {
        auto table_name_node = std::make_shared<ConstantNode>(current_storage_id.table_name);
@ -946,6 +945,7 @@ SelectQueryInfo ReadFromMerge::getModifiedQueryInfo(const ContextMutablePtr & mo
        column_name_to_node.emplace("_database", function_node);
    }

    get_column_options.withVirtuals();
    auto storage_columns = storage_snapshot_->metadata->getColumns();

    bool with_aliases = /* common_processed_stage == QueryProcessingStage::FetchColumns && */ !storage_columns.getAliases().empty();

View File

@ -333,7 +333,10 @@ def _pre_action(s3, job_name, batch, indata, pr_info):
        CI.JobNames.BUILD_CHECK,
    ):  # we might want to rerun build report job
        rerun_helper = RerunHelper(commit, _get_ext_check_name(job_name))
        if rerun_helper.is_already_finished_by_status():
        if (
            rerun_helper.is_already_finished_by_status()
            and not Utils.is_job_triggered_manually()
        ):
            print("WARNING: Rerunning job with GH status ")
            status = rerun_helper.get_finished_status()
            assert status
@ -344,7 +347,7 @@ def _pre_action(s3, job_name, batch, indata, pr_info):
            skip_status = status.state

    # ci cache check
    if not to_be_skipped and not no_cache:
    if not to_be_skipped and not no_cache and not Utils.is_job_triggered_manually():
        ci_cache = CiCache(s3, indata["jobs_data"]["digests"]).update()
        job_config = CI.get_job_config(job_name)
        if ci_cache.is_successful(

View File

@ -501,9 +501,10 @@ class CI:
        JobNames.SQLANCER_DEBUG: CommonJobConfigs.SQLLANCER_TEST.with_properties(
            required_builds=[BuildNames.PACKAGE_DEBUG],
        ),
        JobNames.SQL_LOGIC_TEST: CommonJobConfigs.SQLLOGIC_TEST.with_properties(
            required_builds=[BuildNames.PACKAGE_RELEASE],
        ),
        # TODO: job does not work at all, uncomment and fix
        # JobNames.SQL_LOGIC_TEST: CommonJobConfigs.SQLLOGIC_TEST.with_properties(
        #     required_builds=[BuildNames.PACKAGE_RELEASE],
        # ),
        JobNames.SQLTEST: CommonJobConfigs.SQL_TEST.with_properties(
            required_builds=[BuildNames.PACKAGE_RELEASE],
        ),

View File

@ -204,7 +204,7 @@ class JobNames(metaclass=WithIter):
    PERFORMANCE_TEST_AMD64 = "Performance Comparison (release)"
    PERFORMANCE_TEST_ARM64 = "Performance Comparison (aarch64)"

    SQL_LOGIC_TEST = "Sqllogic test (release)"
    # SQL_LOGIC_TEST = "Sqllogic test (release)"

    SQLANCER = "SQLancer (release)"
    SQLANCER_DEBUG = "SQLancer (debug)"

View File

@ -18,6 +18,7 @@ class Envs:
    )
    S3_BUILDS_BUCKET = os.getenv("S3_BUILDS_BUCKET", "clickhouse-builds")
    GITHUB_WORKFLOW = os.getenv("GITHUB_WORKFLOW", "")
    GITHUB_ACTOR = os.getenv("GITHUB_ACTOR", "")


class WithIter(type):
@ -282,3 +283,7 @@ class Utils:
        ):
            res = res.replace(*r)
        return res

    @staticmethod
    def is_job_triggered_manually():
        return "robot" not in Envs.GITHUB_ACTOR

View File

@ -31,7 +31,7 @@ IMAGE_NAME = "clickhouse/sqllogic-test"
def get_run_command(
    builds_path: Path,
    repo_tests_path: Path,
    repo_path: Path,
    result_path: Path,
    server_log_path: Path,
    image: DockerImage,
@ -39,11 +39,11 @@ def get_run_command(
    return (
        f"docker run "
        f"--volume={builds_path}:/package_folder "
        f"--volume={repo_tests_path}:/clickhouse-tests "
        f"--volume={repo_path}:/repo "
        f"--volume={result_path}:/test_output "
        f"--volume={server_log_path}:/var/log/clickhouse-server "
        "--security-opt seccomp=unconfined "  # required to issue io_uring sys-calls
        f"--cap-add=SYS_PTRACE {image}"
        f"--cap-add=SYS_PTRACE {image} /repo/tests/docker_scripts/sqllogic_runner.sh"
    )

@ -94,8 +94,6 @@ def main():
    docker_image = pull_image(get_docker_image(IMAGE_NAME))

    repo_tests_path = repo_path / "tests"

    packages_path = temp_path / "packages"
    packages_path.mkdir(parents=True, exist_ok=True)
@ -111,7 +109,7 @@ def main():
    run_command = get_run_command(  # run script inside docker
        packages_path,
        repo_tests_path,
        repo_path,
        result_path,
        server_log_path,
        docker_image,

View File

@ -3567,7 +3567,7 @@ if __name__ == "__main__":
f"Cannot access the specified directory with queries ({args.queries})",
file=sys.stderr,
)
sys.exit(1)
assert False, "No --queries provided"
CAPTURE_CLIENT_STACKTRACE = args.capture_client_stacktrace

View File

@ -15,10 +15,10 @@ echo "Files in current directory"
ls -la ./
echo "Files in root directory"
ls -la /
echo "Files in /clickhouse-tests directory"
ls -la /clickhouse-tests
echo "Files in /clickhouse-tests/sqllogic directory"
ls -la /clickhouse-tests/sqllogic
echo "Files in /repo/tests directory"
ls -la /repo/tests
echo "Files in /repo/tests/sqllogic directory"
ls -la /repo/tests/sqllogic
echo "Files in /package_folder directory"
ls -la /package_folder
echo "Files in /test_output"
@ -45,13 +45,13 @@ function run_tests()
    cd /test_output

    /clickhouse-tests/sqllogic/runner.py --help 2>&1 \
    /repo/tests/sqllogic/runner.py --help 2>&1 \
        | ts '%Y-%m-%d %H:%M:%S'

    mkdir -p /test_output/self-test
    /clickhouse-tests/sqllogic/runner.py --log-file /test_output/runner-self-test.log \
    /repo/tests/sqllogic/runner.py --log-file /test_output/runner-self-test.log \
        self-test \
        --self-test-dir /clickhouse-tests/sqllogic/self-test \
        --self-test-dir /repo/tests/sqllogic/self-test \
        --out-dir /test_output/self-test \
        2>&1 \
        | ts '%Y-%m-%d %H:%M:%S'
@ -63,7 +63,7 @@ function run_tests()
    if [ -d /sqllogictest ]
    then
        mkdir -p /test_output/statements-test
        /clickhouse-tests/sqllogic/runner.py \
        /repo/tests/sqllogic/runner.py \
            --log-file /test_output/runner-statements-test.log \
            --log-level info \
            statements-test \
@ -77,7 +77,7 @@ function run_tests()
        tar -zcvf statements-check.tar.gz statements-test 1>/dev/null

        mkdir -p /test_output/complete-test
        /clickhouse-tests/sqllogic/runner.py \
        /repo/tests/sqllogic/runner.py \
            --log-file /test_output/runner-complete-test.log \
            --log-level info \
            complete-test \

View File

@ -10,8 +10,7 @@ dmesg --clear
# shellcheck disable=SC1091
source /setup_export_logs.sh
ln -s /repo/tests/clickhouse-test/ci/stress.py /usr/bin/stress
ln -s /repo/tests/clickhouse-test/clickhouse-test /usr/bin/clickhouse-test
ln -s /repo/tests/clickhouse-test /usr/bin/clickhouse-test
# Stress tests and the upgrade check use similar code that is placed
# in a separate bash library. See tests/ci/stress_tests.lib
@ -266,6 +265,7 @@ fi
start_server
cd /repo/tests/ || exit 1 # clickhouse-test can find queries dir from there
python3 /repo/tests/ci/stress.py --hung-check --drop-databases --output-folder /test_output --skip-func-tests "$SKIP_TESTS_OPTION" --global-time-limit 1200 \
    && echo -e "Test script exit code$OK" >> /test_output/test_results.tsv \
    || echo -e "Test script failed$FAIL script exit code: $?" >> /test_output/test_results.tsv

View File

@ -0,0 +1,26 @@
<clickhouse>
    <keeper_server>
        <tcp_port>2181</tcp_port>
        <server_id>1</server_id>
        <log_storage_path>/var/lib/clickhouse/coordination/log</log_storage_path>
        <snapshot_storage_path>/var/lib/clickhouse/coordination/snapshots</snapshot_storage_path>
        <coordination_settings>
            <session_timeout_ms>20000</session_timeout_ms>
        </coordination_settings>
        <raft_configuration>
            <server>
                <id>1</id>
                <hostname>localhost</hostname>
                <port>9444</port>
            </server>
        </raft_configuration>
    </keeper_server>
    <zookeeper>
        <node index="1">
            <host>localhost</host>
            <port>2181</port>
        </node>
        <session_timeout_ms>20000</session_timeout_ms>
    </zookeeper>
</clickhouse>

View File

@ -0,0 +1,8 @@
<clickhouse>
    <users>
        <default>
            <profile>default</profile>
            <no_password></no_password>
        </default>
    </users>
</clickhouse>

View File

@ -0,0 +1,71 @@
import pytest
import random
import string

from helpers.cluster import ClickHouseCluster

cluster = ClickHouseCluster(__file__)

node = cluster.add_instance(
    "node",
    main_configs=[
        "config/enable_keeper.xml",
        "config/users.xml",
    ],
    stay_alive=True,
    with_minio=True,
    macros={"shard": 1, "replica": 1},
)


@pytest.fixture(scope="module")
def start_cluster():
    try:
        cluster.start()
        yield cluster
    finally:
        cluster.shutdown()


def randomize_table_name(table_name, random_suffix_length=10):
    letters = string.ascii_letters + string.digits
    return f"{table_name}_{''.join(random.choice(letters) for _ in range(random_suffix_length))}"


@pytest.mark.parametrize("engine", ["ReplicatedMergeTree"])
def test_aliases_in_default_expr_not_break_table_structure(start_cluster, engine):
    """
    Making sure that using aliases in columns' default expressions does not lead to
    having different columns metadata in ZooKeeper and on disk.
    Issue: https://github.com/ClickHouse/clickhouse-private/issues/5150
    """
    data = '{"event": {"col1-key": "col1-val", "col2-key": "col2-val"}}'

    table_name = randomize_table_name("t")

    node.query(
        f"""
        DROP TABLE IF EXISTS {table_name};
        CREATE TABLE {table_name}
        (
            `data` String,
            `col1` String DEFAULT JSONExtractString(JSONExtractString(data, 'event') AS event, 'col1-key'),
            `col2` String MATERIALIZED JSONExtractString(JSONExtractString(data, 'event') AS event, 'col2-key')
        )
        ENGINE = {engine}('/test/{table_name}', '{{replica}}')
        ORDER BY col1
        """
    )

    node.restart_clickhouse()

    node.query(
        f"""
        INSERT INTO {table_name} (data) VALUES ('{data}');
        """
    )

    assert node.query(f"SELECT data FROM {table_name}").strip() == data
    assert node.query(f"SELECT col1 FROM {table_name}").strip() == "col1-val"
    assert node.query(f"SELECT col2 FROM {table_name}").strip() == "col2-val"

    node.query(f"DROP TABLE {table_name}")

View File

@ -8,7 +8,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
set -o errexit
set -o pipefail
$CLICKHOUSE_CLIENT -n --query="
$CLICKHOUSE_CLIENT --query="
DROP TABLE IF EXISTS users;
CREATE TABLE users (UserID UInt64) ENGINE = Log;
INSERT INTO users VALUES (1468013291393583084);

View File

@ -4,7 +4,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
$CLICKHOUSE_CLIENT -n --ignore-error --query="
$CLICKHOUSE_CLIENT --ignore-error --query="
DROP TABLE IF EXISTS test1_00550;
DROP TABLE IF EXISTS test2_00550;
DROP TABLE IF EXISTS test3_00550;

View File

@ -1,3 +1,5 @@
-- Tags: no-random-settings, no-random-merge-tree-settings
-- small number of insert threads can make insert terribly slow, especially with some build like msan
DROP TABLE IF EXISTS mt;
CREATE TABLE mt (x UInt64) ENGINE = MergeTree ORDER BY x SETTINGS parts_to_delay_insert = 100000, parts_to_throw_insert = 100000;

View File

@ -13,7 +13,8 @@ opts=(
DATABASE_ORDINARY="${CLICKHOUSE_DATABASE}_ordinary"
$CLICKHOUSE_CLIENT "${opts[@]}" --allow_deprecated_database_ordinary=1 --multiquery "
$CLICKHOUSE_CLIENT "${opts[@]}" --query "
SET allow_deprecated_database_ordinary = 1;
SET allow_experimental_window_view = 1;
SET window_view_clean_interval = 1;
@ -28,8 +29,7 @@ $CLICKHOUSE_CLIENT "${opts[@]}" --allow_deprecated_database_ordinary=1 --multiqu
INSERT INTO ${DATABASE_ORDINARY}.mt VALUES (1, 2, toDateTime('1990/01/01 12:00:01', 'US/Samoa'));
INSERT INTO ${DATABASE_ORDINARY}.mt VALUES (1, 3, toDateTime('1990/01/01 12:00:02', 'US/Samoa'));
INSERT INTO ${DATABASE_ORDINARY}.mt VALUES (1, 4, toDateTime('1990/01/01 12:00:05', 'US/Samoa'));
INSERT INTO ${DATABASE_ORDINARY}.mt VALUES (1, 5, toDateTime('1990/01/01 12:00:06', 'US/Samoa'));
"
INSERT INTO ${DATABASE_ORDINARY}.mt VALUES (1, 5, toDateTime('1990/01/01 12:00:06', 'US/Samoa'));"
while true; do
$CLICKHOUSE_CLIENT "${opts[@]}" --query="SELECT count(*) FROM ${DATABASE_ORDINARY}.\`.inner.wv\`" | grep -q "5" && break || sleep .5 ||:

View File

@ -13,7 +13,7 @@ REPLICA=$($CLICKHOUSE_CLIENT --query "Select getMacro('replica')")
SCALE=1000
$CLICKHOUSE_CLIENT -n --query "
$CLICKHOUSE_CLIENT --query "
DROP TABLE IF EXISTS r1;
DROP TABLE IF EXISTS r2;
CREATE TABLE r1 (x UInt64) ENGINE = ReplicatedMergeTree('/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/{shard}', '1{replica}') ORDER BY x
@ -46,7 +46,7 @@ $CLICKHOUSE_CLIENT --receive_timeout 600 --query "SYSTEM SYNC REPLICA r2" # Need
$CLICKHOUSE_CLIENT --query "SELECT value FROM system.zookeeper WHERE path = '/clickhouse/tables/$CLICKHOUSE_TEST_ZOOKEEPER_PREFIX/$SHARD/replicas/2$REPLICA' AND name = 'is_lost'";
$CLICKHOUSE_CLIENT -n --query "
$CLICKHOUSE_CLIENT --query "
DROP TABLE IF EXISTS r1;
DROP TABLE IF EXISTS r2;
"

View File

@ -9,7 +9,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
function check_log
{
${CLICKHOUSE_CLIENT} --format=JSONEachRow -nq "
${CLICKHOUSE_CLIENT} --format=JSONEachRow -q "
set enable_analyzer = 1;
system flush logs;

View File

@ -1,2 +1,2 @@
9fd46251e5574c633cbfbb9293671888 -
9fd46251e5574c633cbfbb9293671888 -
48fad37bc89fc3bcc29c4750897b6709 -
48fad37bc89fc3bcc29c4750897b6709 -

View File

@ -4,7 +4,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
${CLICKHOUSE_CLIENT} -n --query "
${CLICKHOUSE_CLIENT} --query "
DROP TABLE IF EXISTS t;
CREATE TABLE t (a LowCardinality(Nullable(String))) ENGINE = Memory;
"
@ -12,7 +12,7 @@ CREATE TABLE t (a LowCardinality(Nullable(String))) ENGINE = Memory;
${CLICKHOUSE_CLIENT} --query "INSERT INTO t FORMAT RawBLOB" < ${BASH_SOURCE[0]}
cat ${BASH_SOURCE[0]} | md5sum
${CLICKHOUSE_CLIENT} -n --query "SELECT * FROM t FORMAT RawBLOB" | md5sum
${CLICKHOUSE_CLIENT} --query "SELECT * FROM t FORMAT RawBLOB" | md5sum
${CLICKHOUSE_CLIENT} --query "
DROP TABLE t;

View File

@ -24,7 +24,7 @@ expect_after {
-i $any_spawn_id timeout { exit 1 }
}
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_CLIENT_BINARY \$CLICKHOUSE_CLIENT_OPT --disable_suggestion -mn --history_file=$history_file --highlight 0"
spawn bash -c "source $basedir/../shell_config.sh ; \$CLICKHOUSE_CLIENT_BINARY \$CLICKHOUSE_CLIENT_OPT --disable_suggestion -m --history_file=$history_file --highlight 0"
expect "\n:) "
send -- "DROP TABLE IF EXISTS t01565;\r"

View File

@ -17,7 +17,7 @@ function wait_with_limit()
done
}
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
drop table if exists data_01811;
drop table if exists buffer_01811;
@ -39,9 +39,9 @@ $CLICKHOUSE_CLIENT -nm -q "
# wait for background buffer flush
wait_with_limit 30 '[[ $($CLICKHOUSE_CLIENT -q "select count() from data_01811") -gt 0 ]]'
$CLICKHOUSE_CLIENT -nm -q "select count() from data_01811"
$CLICKHOUSE_CLIENT -m -q "select count() from data_01811"
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
drop table buffer_01811;
drop table data_01811;
"

View File

@ -8,7 +8,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
sql="toUInt16OrNull(arrayFirst((v, k) -> (k = '4Id'), arr[2], arr[1]))"
# Create the table and fill it
$CLICKHOUSE_CLIENT -n --query="
$CLICKHOUSE_CLIENT --query="
CREATE TABLE test_extract(str String, arr Array(Array(String)) ALIAS extractAllGroupsHorizontal(str, '\\W(\\w+)=(\"[^\"]*?\"|[^\",}]*)')) ENGINE=MergeTree() PARTITION BY tuple() ORDER BY tuple();
INSERT INTO test_extract (str) WITH range(8) as range_arr, arrayMap(x-> concat(toString(x),'Id'), range_arr) as key, arrayMap(x -> rand() % 8, range_arr) as val, arrayStringConcat(arrayMap((x,y) -> concat(x,'=',toString(y)), key, val),',') as str SELECT str FROM numbers(500000);
ALTER TABLE test_extract ADD COLUMN 15Id Nullable(UInt16) DEFAULT $sql;"
@ -24,7 +24,7 @@ function test()
$CLICKHOUSE_CLIENT --query="SELECT uniq(15Id) FROM test_extract $where SETTINGS max_threads=1" --query_id=$uuid_1
uuid_2=$(cat /proc/sys/kernel/random/uuid)
$CLICKHOUSE_CLIENT --query="SELECT uniq($sql) FROM test_extract $where SETTINGS max_threads=1" --query_id=$uuid_2
$CLICKHOUSE_CLIENT -n --query="
$CLICKHOUSE_CLIENT --query="
SYSTEM FLUSH LOGS;
WITH memory_1 AS (SELECT memory_usage FROM system.query_log WHERE current_database = currentDatabase() AND query_id='$uuid_1' AND type = 'QueryFinish' as memory_1),
memory_2 AS (SELECT memory_usage FROM system.query_log WHERE current_database = currentDatabase() AND query_id='$uuid_2' AND type = 'QueryFinish' as memory_2)

View File

@ -16,7 +16,7 @@ function test_table_comments()
local ENGINE_NAME="$1"
echo "engine : ${ENGINE_NAME}"
$CLICKHOUSE_CLIENT -nm <<EOF
$CLICKHOUSE_CLIENT -m <<EOF
DROP TABLE IF EXISTS comment_test_table;
CREATE TABLE comment_test_table

View File

@ -20,9 +20,9 @@ function explain_sorting {
function explain_sortmode {
echo "-- QUERY: "$1
$CLICKHOUSE_CLIENT --enable_analyzer=0 --merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability=0.0 -nq "$1" | eval $FIND_SORTMODE
$CLICKHOUSE_CLIENT --enable_analyzer=0 --merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability=0.0 -q "$1" | eval $FIND_SORTMODE
echo "-- QUERY (analyzer): "$1
$CLICKHOUSE_CLIENT --enable_analyzer=1 --merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability=0.0 -nq "$1" | eval $FIND_SORTMODE
$CLICKHOUSE_CLIENT --enable_analyzer=1 --merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_injection_probability=0.0 -q "$1" | eval $FIND_SORTMODE
}
$CLICKHOUSE_CLIENT -q "drop table if exists optimize_sorting sync"

View File

@ -9,7 +9,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CURDIR"/../shell_config.sh
$CLICKHOUSE_CLIENT -mn -q """
$CLICKHOUSE_CLIENT -m -q """
DROP TABLE IF EXISTS t1;
DROP TABLE IF EXISTS t2;

View File

@ -39,7 +39,7 @@ function check_span()
extra_condition=""
fi
ret=$(${CLICKHOUSE_CLIENT} -nq "
ret=$(${CLICKHOUSE_CLIENT} -q "
SYSTEM FLUSH LOGS;
SELECT count()

View File

@ -239,10 +239,15 @@ Check bug found fuzzing
Test arrays and maps
608E1FF030C9E206185B112C2A25F1A7
ABB65AE97711A2E053E324ED88B1D08B
Test emtpy arrays and maps
Test empty arrays and maps
4761183170873013810
0AD04BFD000000000000000000000000
4761183170873013810
0AD04BFD000000000000000000000000
Test maps with arrays as keys
16734549324845627102
D675BB3D687973A238AB891DD99C7047
1D03941D808D04810D2363A6C107D622
16734549324845627102
16734549324845627102
1D03941D808D04810D2363A6C107D622
1D03941D808D04810D2363A6C107D622

View File

@ -346,10 +346,13 @@ INSERT INTO sipHashKeyed_keys FORMAT VALUES ({'a':'b', 'c':'d'}), ({'e':'f', 'g'
SELECT hex(sipHash128ReferenceKeyed((0::UInt64, materialize(0::UInt64)), a)) FROM sipHashKeyed_keys ORDER BY a;
DROP TABLE sipHashKeyed_keys;
SELECT 'Test emtpy arrays and maps';
SELECT 'Test empty arrays and maps';
SELECT sipHash64Keyed((1::UInt64, 2::UInt64), []);
SELECT hex(sipHash128Keyed((1::UInt64, 2::UInt64), []));
SELECT sipHash64Keyed((1::UInt64, 2::UInt64), mapFromArrays([], []));
SELECT hex(sipHash128Keyed((1::UInt64, 2::UInt64), mapFromArrays([], [])));
SELECT 'Test maps with arrays as keys';
SELECT sipHash64Keyed((1::UInt64, 2::UInt64), map([0], 1, [2], 3));
SELECT hex(sipHash128Keyed((0::UInt64, 0::UInt64), map([0], 1, [2], 3)));
SELECT hex(sipHash128Keyed((1::UInt64, 2::UInt64), map([0], 1, [2], 3)));
SELECT sipHash64Keyed((materialize(1::UInt64), 2::UInt64), map([0], 1, [2], 3)) FROM numbers(2);
SELECT hex(sipHash128Keyed((materialize(1::UInt64), 2::UInt64), map([0], 1, [2], 3))) FROM numbers(2);

View File

@ -10,7 +10,7 @@ for check_query in "SELECT value FROM system.settings WHERE name = 'alter_sync';
echo "Checking setting value with '$check_query'"
echo 'Using SET'
$CLICKHOUSE_CLIENT -mn -q """
$CLICKHOUSE_CLIENT -m -q """
SET replication_alter_partitions_sync = 0;
$check_query
@ -28,7 +28,7 @@ for check_query in "SELECT value FROM system.settings WHERE name = 'alter_sync';
done
$CLICKHOUSE_CLIENT -mn -q """
$CLICKHOUSE_CLIENT -m -q """
DROP VIEW IF EXISTS 02539_settings_alias_view;
CREATE VIEW 02539_settings_alias_view AS SELECT 1 SETTINGS replication_alter_partitions_sync = 2;
SHOW CREATE TABLE 02539_settings_alias_view;

View File

@ -5,7 +5,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
QUERY_ID="${CLICKHOUSE_DATABASE}_read_with_cancel"
$CLICKHOUSE_CLIENT --max_rows_to_read 0 -n --query_id="$QUERY_ID" --query="SELECT sum(number * 0) FROM numbers(10000000000) SETTINGS partial_result_on_first_cancel=true;" &
$CLICKHOUSE_CLIENT --max_rows_to_read 0 --query_id="$QUERY_ID" --query="SELECT sum(number * 0) FROM numbers(10000000000) SETTINGS partial_result_on_first_cancel=true;" &
pid=$!
for _ in {0..60}

View File

@ -3,7 +3,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh
$CLICKHOUSE_CLIENT --multiquery "
$CLICKHOUSE_CLIENT "
SELECT 'Policy for table \`*\` does not affect other tables in the database';
CREATE ROW POLICY 02703_asterisk_${CLICKHOUSE_DATABASE}_policy ON ${CLICKHOUSE_DATABASE}.\`*\` USING x=1 AS permissive TO ALL;
CREATE TABLE ${CLICKHOUSE_DATABASE}.\`*\` (x UInt8, y UInt8) ENGINE = MergeTree ORDER BY x AS SELECT 100, 20;

View File

@ -3,7 +3,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh
$CLICKHOUSE_CLIENT --multiquery "
$CLICKHOUSE_CLIENT "
DROP TABLE IF EXISTS 02703_rptable;
DROP TABLE IF EXISTS 02703_rptable_another;

View File

@ -5,7 +5,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
CLICKHOUSE_USER="user_$CLICKHOUSE_DATABASE"
$CLICKHOUSE_CLIENT --multiquery "
$CLICKHOUSE_CLIENT "
DROP USER IF EXISTS ${CLICKHOUSE_USER};
CREATE USER ${CLICKHOUSE_USER};
@ -28,7 +28,7 @@ DROP POLICY ${CLICKHOUSE_DATABASE}_tb_policy ON ${CLICKHOUSE_DATABASE}.table;
$CLICKHOUSE_CLIENT --query "CREATE ROW POLICY any_02703 ON *.some_table USING 1 AS PERMISSIVE TO ALL;" 2>&1 | grep -q "SYNTAX_ERROR"
$CLICKHOUSE_CLIENT --multiquery "
$CLICKHOUSE_CLIENT "
CREATE TABLE 02703_rqtable_default (x UInt8) ENGINE = MergeTree ORDER BY x;
CREATE ROW POLICY ${CLICKHOUSE_DATABASE}_filter_11_db_policy ON * USING x=1 AS permissive TO ALL;

View File

@ -7,7 +7,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=./mergetree_mutations.lib
. "$CURDIR"/mergetree_mutations.lib
${CLICKHOUSE_CLIENT} -n --query "
${CLICKHOUSE_CLIENT} --query "
DROP TABLE IF EXISTS t_delay_mutations SYNC;
CREATE TABLE t_delay_mutations (id UInt64, v UInt64)
@ -36,14 +36,14 @@ SELECT count() FROM system.mutations WHERE database = currentDatabase() AND tabl
${CLICKHOUSE_CLIENT} --query "SYSTEM START MERGES t_delay_mutations"
wait_for_mutation "t_delay_mutations" "mutation_5.txt"
${CLICKHOUSE_CLIENT} -n --query "
${CLICKHOUSE_CLIENT} --query "
SELECT * FROM t_delay_mutations ORDER BY id;
SELECT count() FROM system.mutations WHERE database = currentDatabase() AND table = 't_delay_mutations' AND NOT is_done;
DROP TABLE IF EXISTS t_delay_mutations SYNC;
"
${CLICKHOUSE_CLIENT} -n --query "
${CLICKHOUSE_CLIENT} --query "
SYSTEM FLUSH LOGS;
SELECT

View File

@ -4,7 +4,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
# shellcheck source=../shell_config.sh
. "$CUR_DIR"/../shell_config.sh
$CLICKHOUSE_CLIENT -n -q "
$CLICKHOUSE_CLIENT -q "
DROP TABLE IF EXISTS mv;
DROP TABLE IF EXISTS output;
DROP TABLE IF EXISTS input;
@ -17,7 +17,7 @@ $CLICKHOUSE_CLIENT -n -q "
for enable_analyzer in 0 1; do
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --enable_analyzer "$enable_analyzer" --query_id "$query_id" -q "INSERT INTO input SELECT * FROM numbers(1)"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
1 view,
@ -35,7 +35,7 @@ for enable_analyzer in 0 1; do
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --enable_analyzer "$enable_analyzer" --query_id "$query_id" -q "SELECT * FROM system.one WHERE dummy IN (SELECT * FROM system.one) FORMAT Null"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
1 subquery,
@ -52,7 +52,7 @@ for enable_analyzer in 0 1; do
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --enable_analyzer "$enable_analyzer" --query_id "$query_id" -q "WITH (SELECT * FROM system.one) AS x SELECT x FORMAT Null"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
1 CSE,
@ -69,7 +69,7 @@ for enable_analyzer in 0 1; do
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --enable_analyzer "$enable_analyzer" --query_id "$query_id" -q "WITH (SELECT * FROM system.one) AS x SELECT x, x FORMAT Null"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
1 CSE_Multi,
@ -86,7 +86,7 @@ for enable_analyzer in 0 1; do
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --enable_analyzer "$enable_analyzer" --query_id "$query_id" -q "WITH x AS (SELECT * FROM system.one) SELECT * FROM x FORMAT Null"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
1 CTE,
@ -103,7 +103,7 @@ for enable_analyzer in 0 1; do
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --enable_analyzer "$enable_analyzer" --query_id "$query_id" -q "WITH x AS (SELECT * FROM system.one) SELECT * FROM x UNION ALL SELECT * FROM x FORMAT Null"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
1 CTE_Multi,

View File

@ -13,7 +13,7 @@ $CLICKHOUSE_CLIENT -q "SELECT * FROM system.tables WHERE 1 in (SELECT number fro
$CLICKHOUSE_CLIENT -q "SELECT xor(1, 0) FROM system.parts WHERE 1 IN (SELECT 1) FORMAT Null"
# (Not all of these tests are effective because some of these tables are empty.)
$CLICKHOUSE_CLIENT -nq "
$CLICKHOUSE_CLIENT -q "
select * from system.columns where table in (select '123');
select * from system.replicas where database in (select '123');
select * from system.data_skipping_indices where database in (select '123');
@ -23,7 +23,7 @@ $CLICKHOUSE_CLIENT -nq "
select * from system.replication_queue where database in (select '123');
select * from system.distribution_queue where database in (select '123');
"
$CLICKHOUSE_CLIENT -nq "
$CLICKHOUSE_CLIENT -q "
create table a (x Int8) engine MergeTree order by x;
insert into a values (1);
select * from mergeTreeIndex(currentDatabase(), 'a') where part_name in (select '123');

View File

@ -26,7 +26,7 @@ ${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_6" --query='SELECT * FROM nu
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_7" --query='SELECT * FROM numbers_mt(5000), numbers(5000) SETTINGS max_threads = 1, joined_subquery_requires_alias=0' "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_8" --query='SELECT * FROM numbers_mt(5000), numbers(5000) SETTINGS max_threads = 4, joined_subquery_requires_alias=0' "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_9" -mn --query="""
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_9" -m --query="""
SELECT count() FROM
(SELECT number FROM numbers_mt(1,100000)
UNION ALL SELECT number FROM numbers_mt(10000, 200000)
@ -38,7 +38,7 @@ SELECT count() FROM
UNION ALL SELECT number FROM numbers_mt(300000, 4000000)
) SETTINGS max_threads = 1""" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_10" -mn --query="""
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_10" -m --query="""
SELECT count() FROM
(SELECT number FROM numbers_mt(1,100000)
UNION ALL SELECT number FROM numbers_mt(10000, 2000)
@ -50,7 +50,7 @@ SELECT count() FROM
UNION ALL SELECT number FROM numbers_mt(300000, 4000000)
) SETTINGS max_threads = 4""" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_11" -mn --query="""
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_11" -m --query="""
SELECT count() FROM
(SELECT number FROM numbers_mt(1,100000)
UNION ALL SELECT number FROM numbers_mt(1, 1)
@ -62,20 +62,20 @@ SELECT count() FROM
UNION ALL SELECT number FROM numbers_mt(1, 4000000)
) SETTINGS max_threads = 4""" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_12" -mn --query="""
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_12" -m --query="""
SELECT sum(number) FROM numbers_mt(100000)
GROUP BY number % 2
WITH TOTALS ORDER BY number % 2
SETTINGS max_threads = 4""" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_13" -mn --query="SELECT * FROM numbers(100000) SETTINGS max_threads = 1" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_13" -m --query="SELECT * FROM numbers(100000) SETTINGS max_threads = 1" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_14" -mn --query="SELECT * FROM numbers(100000) SETTINGS max_threads = 4" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} --query_id="${UNIQUE_QUERY_ID}_14" -m --query="SELECT * FROM numbers(100000) SETTINGS max_threads = 4" "${QUERY_OPTIONS[@]}"
${CLICKHOUSE_CLIENT} -q "SYSTEM FLUSH LOGS"
for i in {1..14}
do
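# for each query id, compare the recorded peak_threads_usage with the set of thread_ids attributed to it in system.query_thread_log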
${CLICKHOUSE_CLIENT} -mn --query="""
${CLICKHOUSE_CLIENT} -m --query="""
SELECT '${i}',
peak_threads_usage,
(select count() from system.query_thread_log WHERE system.query_thread_log.query_id = '${UNIQUE_QUERY_ID}_${i}' AND current_database = currentDatabase()) = length(thread_ids),

View File

@ -54,6 +54,8 @@ _row_exists UInt8 Persisted mask created by lightweight delete that show wheth
_block_number UInt64 Persisted original number of block that was assigned at insert Delta, LZ4 1
_block_offset UInt64 Persisted original number of row in block that was assigned at insert Delta, LZ4 1
_shard_num UInt32 Deprecated. Use function shardNum instead 1
_database LowCardinality(String) The name of database which the row comes from 1
_table LowCardinality(String) The name of table which the row comes from 1
SET describe_compact_output = 0, describe_include_virtual_columns = 1, describe_include_subcolumns = 1;
DESCRIBE TABLE t_describe_options;
id UInt64 index column 0 0
@ -87,6 +89,8 @@ _row_exists UInt8 Persisted mask created by lightweight delete that show wheth
_block_number UInt64 Persisted original number of block that was assigned at insert Delta, LZ4 0 1
_block_offset UInt64 Persisted original number of row in block that was assigned at insert Delta, LZ4 0 1
_shard_num UInt32 Deprecated. Use function shardNum instead 0 1
_database LowCardinality(String) The name of database which the row comes from 0 1
_table LowCardinality(String) The name of table which the row comes from 0 1
arr.size0 UInt64 1 0
t.a String ZSTD(1) 1 0
t.b UInt64 ZSTD(1) 1 0
@ -144,6 +148,8 @@ _row_exists UInt8 1
_block_number UInt64 1
_block_offset UInt64 1
_shard_num UInt32 1
_database LowCardinality(String) 1
_table LowCardinality(String) 1
SET describe_compact_output = 1, describe_include_virtual_columns = 1, describe_include_subcolumns = 1;
DESCRIBE TABLE t_describe_options;
id UInt64 0 0
@ -177,6 +183,8 @@ _row_exists UInt8 0 1
_block_number UInt64 0 1
_block_offset UInt64 0 1
_shard_num UInt32 0 1
_database LowCardinality(String) 0 1
_table LowCardinality(String) 0 1
arr.size0 UInt64 1 0
t.a String 1 0
t.b UInt64 1 0

View File

@ -5,7 +5,7 @@ CUR_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
. "$CUR_DIR"/../shell_config.sh
database_name="$CLICKHOUSE_DATABASE"_02911_keeper_map
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
DROP DATABASE IF EXISTS $database_name;
CREATE DATABASE $database_name;
CREATE TABLE $database_name.02911_backup_restore_keeper_map1 (key UInt64, value String) Engine=KeeperMap('/' || currentDatabase() || '/test02911') PRIMARY KEY key;
@ -13,9 +13,9 @@ $CLICKHOUSE_CLIENT -nm -q "
CREATE TABLE $database_name.02911_backup_restore_keeper_map3 (key UInt64, value String) Engine=KeeperMap('/' || currentDatabase() || '/test02911_different') PRIMARY KEY key;
"
$CLICKHOUSE_CLIENT -nm -q "INSERT INTO $database_name.02911_backup_restore_keeper_map2 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 5000;"
$CLICKHOUSE_CLIENT -m -q "INSERT INTO $database_name.02911_backup_restore_keeper_map2 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 5000;"
$CLICKHOUSE_CLIENT -nm -q "INSERT INTO $database_name.02911_backup_restore_keeper_map3 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 3000;"
$CLICKHOUSE_CLIENT -m -q "INSERT INTO $database_name.02911_backup_restore_keeper_map3 SELECT number, 'test' || toString(number) FROM system.numbers LIMIT 3000;"
backup_path="$database_name"
for i in $(seq 1 3); do

View File

@ -15,7 +15,7 @@ do
echo $i >> ${logs_dir}/a.txt
done
${CLICKHOUSE_CLIENT} -n --query="
${CLICKHOUSE_CLIENT} --query="
DROP TABLE IF EXISTS file_log;
DROP TABLE IF EXISTS table_to_store_data;
DROP TABLE IF EXISTS file_log_mv;
@ -69,7 +69,7 @@ done
${CLICKHOUSE_CLIENT} --query "SELECT * FROM table_to_store_data ORDER BY id;"
${CLICKHOUSE_CLIENT} -n --query="
${CLICKHOUSE_CLIENT} --query="
DROP TABLE file_log;
DROP TABLE table_to_store_data;
DROP TABLE file_log_mv;

View File

@ -11,13 +11,13 @@ set -e
function wait_until()
{
local q=$1 && shift
while [ "$($CLICKHOUSE_CLIENT -nm -q "$q")" != "1" ]; do
while [ "$($CLICKHOUSE_CLIENT -m -q "$q")" != "1" ]; do
# running FLUSH LOGS too frequently is too costly
sleep 2
done
}
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
drop table if exists rmt_master;
drop table if exists rmt_slave;
@ -33,7 +33,7 @@ $CLICKHOUSE_CLIENT -nm -q "
optimize table rmt_master final settings alter_sync=1, optimize_throw_if_noop=1;
"
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
system flush logs;
select 'before';
select table, event_type, error>0, countIf(error=0) from system.part_log where database = currentDatabase() group by 1, 2, 3 order by 1, 2, 3;
@ -42,7 +42,7 @@ $CLICKHOUSE_CLIENT -nm -q "
"
# wait until rmt_slave fetches the part and reflects this error in system.part_log
wait_until "system flush logs; select count()>0 from system.part_log where table = 'rmt_slave' and database = '$CLICKHOUSE_DATABASE' and error > 0"
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
system sync replica rmt_slave;
system flush logs;

View File

@ -14,13 +14,13 @@ set -e
function wait_until()
{
local q=$1 && shift
while [ "$($CLICKHOUSE_CLIENT -nm -q "$q")" != "1" ]; do
while [ "$($CLICKHOUSE_CLIENT -m -q "$q")" != "1" ]; do
# running FLUSH LOGS too frequently is too costly
sleep 2
done
}
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
drop table if exists rmt_master;
drop table if exists rmt_slave;
@ -41,10 +41,10 @@ $CLICKHOUSE_CLIENT -nm -q "
# the part, and rmt_slave will consider it instead of performing the mutation on
# its own; otherwise prefer_fetch_merged_part_*_threshold would simply be ignored
wait_for_mutation rmt_master 0000000000
$CLICKHOUSE_CLIENT -nm -q "system start pulling replication log rmt_slave"
$CLICKHOUSE_CLIENT -m -q "system start pulling replication log rmt_slave"
# and wait for rmt_slave to fetch the part and reflect this error in system.part_log
wait_until "system flush logs; select count()>0 from system.part_log where table = 'rmt_slave' and database = '$CLICKHOUSE_DATABASE' and error > 0"
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
system flush logs;
select 'before';
select table, event_type, error>0, countIf(error=0) from system.part_log where database = currentDatabase() group by 1, 2, 3 order by 1, 2, 3;
@ -52,7 +52,7 @@ $CLICKHOUSE_CLIENT -nm -q "
system start replicated sends rmt_master;
"
wait_for_mutation rmt_slave 0000000000
$CLICKHOUSE_CLIENT -nm -q "
$CLICKHOUSE_CLIENT -m -q "
system sync replica rmt_slave;
system flush logs;

View File

@ -33,7 +33,7 @@ $CLICKHOUSE_CLIENT -q "
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --query_id "$query_id" -q "
SELECT count(*) FROM table_$table_id FORMAT Null;"
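# flush the logs and read the SelectQueriesWithPrimaryKeyUsage profile event recorded for this query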
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
ProfileEvents['SelectQueriesWithPrimaryKeyUsage'] AS selects_with_pk_usage
@ -50,7 +50,7 @@ $CLICKHOUSE_CLIENT -mn -q "
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --query_id "$query_id" -q "
SELECT count(*) FROM table_$table_id WHERE col2 >= 50000 FORMAT Null;"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
ProfileEvents['SelectQueriesWithPrimaryKeyUsage'] AS selects_with_pk_usage
@ -67,7 +67,7 @@ $CLICKHOUSE_CLIENT -mn -q "
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --query_id "$query_id" -q "
SELECT count(*) FROM table_$table_id WHERE pk >= 50000 FORMAT Null;"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
ProfileEvents['SelectQueriesWithPrimaryKeyUsage'] AS selects_with_pk_usage
@ -84,7 +84,7 @@ $CLICKHOUSE_CLIENT -mn -q "
query_id="$(random_str 10)"
$CLICKHOUSE_CLIENT --query_id "$query_id" -q "
SELECT count(*) FROM table_$table_id WHERE col1 >= 50000 FORMAT Null;"
$CLICKHOUSE_CLIENT -mn -q "
$CLICKHOUSE_CLIENT -m -q "
SYSTEM FLUSH LOGS;
SELECT
ProfileEvents['SelectQueriesWithPrimaryKeyUsage'] AS selects_with_pk_usage

View File

@ -7,7 +7,7 @@ CURDIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)
DATABASE_ATOMIC="${CLICKHOUSE_DATABASE}_atomic"
DATABASE_LAZY="${CLICKHOUSE_DATABASE}_lazy"
$CLICKHOUSE_CLIENT --multiquery "
$CLICKHOUSE_CLIENT "
SELECT 'database atomic tests';
DROP DATABASE IF EXISTS ${DATABASE_ATOMIC};
@ -36,7 +36,7 @@ DROP DATABASE ${DATABASE_ATOMIC} SYNC;
"
$CLICKHOUSE_CLIENT --multiquery "
$CLICKHOUSE_CLIENT "
SELECT '-----------------------';
SELECT 'database lazy tests';

View File

@ -16,7 +16,7 @@ $CLICKHOUSE_CLIENT -q "
INSERT INTO data2 VALUES ('a1451105-722e-4fe7-bfaa-65ad2ae249c2', '2000-01-02', 'CREATED');
"
$CLICKHOUSE_CLIENT -nq "
$CLICKHOUSE_CLIENT -q "
SET enable_analyzer = 1, cluster_for_parallel_replicas = 'parallel_replicas', max_parallel_replicas = 10, allow_experimental_parallel_reading_from_replicas = 2, parallel_replicas_for_non_replicated_merge_tree = 1, max_threads = 1;
SELECT

View File

@ -0,0 +1,8 @@
1 t_local_1
2 t_local_2
1 t_local_1
2 t_local_2
1 1
2 1
1 1
2 1

View File

@ -0,0 +1,24 @@
DROP TABLE IF EXISTS t_local_1;
DROP TABLE IF EXISTS t_local_2;
DROP TABLE IF EXISTS t_merge;
DROP TABLE IF EXISTS t_distr;
CREATE TABLE t_local_1 (a UInt32) ENGINE = MergeTree ORDER BY a;
CREATE TABLE t_local_2 (a UInt32) ENGINE = MergeTree ORDER BY a;
INSERT INTO t_local_1 VALUES (1);
INSERT INTO t_local_2 VALUES (2);
CREATE TABLE t_merge AS t_local_1 ENGINE = Merge(currentDatabase(), '^(t_local_1|t_local_2)$');
CREATE TABLE t_distr AS t_local_1 engine=Distributed('test_shard_localhost', currentDatabase(), t_merge, rand());
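-- _table and _database are virtual columns with the name of the table and database
-- each row comes from; here they are read through Merge and Distributed tables.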
SELECT a, _table FROM t_merge ORDER BY a;
SELECT a, _table FROM t_distr ORDER BY a;
SELECT a, _database = currentDatabase() FROM t_merge ORDER BY a;
SELECT a, _database = currentDatabase() FROM t_distr ORDER BY a;
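-- The reference output attributes each row to t_local_1 or t_local_2, and the
-- _database comparison with currentDatabase() returns 1 for every row.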
DROP TABLE IF EXISTS t_local_1;
DROP TABLE IF EXISTS t_local_2;
DROP TABLE IF EXISTS t_merge;
DROP TABLE IF EXISTS t_distr;

View File

@ -1,13 +1,15 @@
v24.8.2.3-lts 2024-08-22
v24.8.1.2684-lts 2024-08-21
v24.7.4.51-stable 2024-08-23
v24.7.3.42-stable 2024-08-08
v24.7.2.13-stable 2024-08-01
v24.7.1.2915-stable 2024-07-30
v24.6.4.42-stable 2024-08-23
v24.6.3.95-stable 2024-08-06
v24.6.2.17-stable 2024-07-05
v24.6.1.4423-stable 2024-07-01
v24.5.6.45-stable 2024-08-23
v24.5.5.78-stable 2024-08-05
v24.5.5.41-stable 2024-08-22
v24.5.4.49-stable 2024-07-01
v24.5.3.5-stable 2024-06-13
v24.5.2.34-stable 2024-06-13
