Merge remote-tracking branch 'upstream/master' into more-flexible-drop-cache

kssenii 2023-06-30 18:22:45 +02:00
commit 87d2447570
169 changed files with 2722 additions and 725 deletions


@ -1,4 +1,5 @@
### Table of Contents
**[ClickHouse release v23.6, 2023-06-30](#236)**<br/>
**[ClickHouse release v23.5, 2023-06-08](#235)**<br/>
**[ClickHouse release v23.4, 2023-04-26](#234)**<br/>
**[ClickHouse release v23.3 LTS, 2023-03-30](#233)**<br/>
@ -8,6 +9,106 @@
# 2023 Changelog
### <a id="236"></a> ClickHouse release 23.6, 2023-06-29
#### Backward Incompatible Change
* Delete feature `do_not_evict_index_and_mark_files` in the fs cache. This feature was only making things worse. [#51253](https://github.com/ClickHouse/ClickHouse/pull/51253) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Remove ALTER support for experimental LIVE VIEW. [#51287](https://github.com/ClickHouse/ClickHouse/pull/51287) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Decrease the default values for `http_max_field_value_size` and `http_max_field_name_size` to 128 KiB. [#51163](https://github.com/ClickHouse/ClickHouse/pull/51163) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* CGroups metrics related to CPU are replaced with a single metric, `CGroupMaxCPU`, for better usability. The `Normalized` CPU usage metrics will be normalized to CGroups limits instead of the total number of CPUs when limits are set. This closes [#50836](https://github.com/ClickHouse/ClickHouse/issues/50836). [#50835](https://github.com/ClickHouse/ClickHouse/pull/50835) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### New Feature
* The function `transform` as well as `CASE` with value matching started to support all data types. This closes [#29730](https://github.com/ClickHouse/ClickHouse/issues/29730). This closes [#32387](https://github.com/ClickHouse/ClickHouse/issues/32387). This closes [#50827](https://github.com/ClickHouse/ClickHouse/issues/50827). This closes [#31336](https://github.com/ClickHouse/ClickHouse/issues/31336). This closes [#40493](https://github.com/ClickHouse/ClickHouse/issues/40493). [#51351](https://github.com/ClickHouse/ClickHouse/pull/51351) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Added option `--rename_files_after_processing <pattern>`. This closes [#34207](https://github.com/ClickHouse/ClickHouse/issues/34207). [#49626](https://github.com/ClickHouse/ClickHouse/pull/49626) ([alekseygolub](https://github.com/alekseygolub)).
* Add support for the `TRUNCATE` modifier in the `INTO OUTFILE` clause. Suggest using `APPEND` or `TRUNCATE` for `INTO OUTFILE` when the file already exists. [#50950](https://github.com/ClickHouse/ClickHouse/pull/50950) ([alekar](https://github.com/alekar)).
* Add table engine `Redis` and table function `redis`. It allows querying external Redis servers. [#50150](https://github.com/ClickHouse/ClickHouse/pull/50150) ([JackyWoo](https://github.com/JackyWoo)).
* Allow to skip empty files in file/s3/url/hdfs table functions using settings `s3_skip_empty_files`, `hdfs_skip_empty_files`, `engine_file_skip_empty_files`, `engine_url_skip_empty_files`. [#50364](https://github.com/ClickHouse/ClickHouse/pull/50364) ([Kruglov Pavel](https://github.com/Avogar)).
* Add a new setting named `use_mysql_types_in_show_columns` to alter the `SHOW COLUMNS` SQL statement to display MySQL equivalent types when a client is connected via the MySQL compatibility port. [#49577](https://github.com/ClickHouse/ClickHouse/pull/49577) ([Thomas Panetti](https://github.com/tpanetti)).
* clickhouse-client can now be called with a connection string instead of `--host`, `--port`, `--user`, etc. [#50689](https://github.com/ClickHouse/ClickHouse/pull/50689) ([Alexey Gerasimchuck](https://github.com/Demilivor)).
* Add setting `session_timezone`; it is used as the default timezone for a session when not explicitly specified. [#44149](https://github.com/ClickHouse/ClickHouse/pull/44149) ([Andrey Zvonov](https://github.com/zvonand)).
* Codec DEFLATE_QPL is now controlled via server setting "enable_deflate_qpl_codec" (default: false) instead of setting "allow_experimental_codecs". This marks DEFLATE_QPL non-experimental. [#50775](https://github.com/ClickHouse/ClickHouse/pull/50775) ([Robert Schulze](https://github.com/rschu1ze)).
#### Performance Improvement
* Improved scheduling of merge selecting and cleanup tasks in `ReplicatedMergeTree`. The tasks will not be executed too frequently when there's nothing to merge or cleanup. Added settings `max_merge_selecting_sleep_ms`, `merge_selecting_sleep_slowdown_factor`, `max_cleanup_delay_period` and `cleanup_thread_preferred_points_per_iteration`. It should close [#31919](https://github.com/ClickHouse/ClickHouse/issues/31919). [#50107](https://github.com/ClickHouse/ClickHouse/pull/50107) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Allow filter push-down through cross join. [#50605](https://github.com/ClickHouse/ClickHouse/pull/50605) ([Han Fei](https://github.com/hanfei1991)).
* Improve performance when QueryProfiler is enabled by using a thread-local timer_id instead of a global object. [#48778](https://github.com/ClickHouse/ClickHouse/pull/48778) ([Jiebin Sun](https://github.com/jiebinn)).
* Rewrite the CapnProto input/output format to improve its performance. Match column names and CapnProto fields case-insensitively; fix reading/writing of nested structure fields. [#49752](https://github.com/ClickHouse/ClickHouse/pull/49752) ([Kruglov Pavel](https://github.com/Avogar)).
* Optimize parquet write performance for parallel threads. [#50102](https://github.com/ClickHouse/ClickHouse/pull/50102) ([Hongbin Ma](https://github.com/binmahone)).
* Disable `parallelize_output_from_storages` for processing MATERIALIZED VIEWs and storages with one block only. [#50214](https://github.com/ClickHouse/ClickHouse/pull/50214) ([Azat Khuzhin](https://github.com/azat)).
* Merge PR [#46558](https://github.com/ClickHouse/ClickHouse/pull/46558). Avoid block permutation during sort if the block is already sorted. [#50697](https://github.com/ClickHouse/ClickHouse/pull/50697) ([Alexey Milovidov](https://github.com/alexey-milovidov), [Maksim Kita](https://github.com/kitaisreal)).
* Make multiple list requests to ZooKeeper in parallel to speed up reading from system.zookeeper table. [#51042](https://github.com/ClickHouse/ClickHouse/pull/51042) ([Alexander Gololobov](https://github.com/davenger)).
* Speedup initialization of DateTime lookup tables for time zones. This should reduce startup/connect time of clickhouse-client especially in debug build as it is rather heavy. [#51347](https://github.com/ClickHouse/ClickHouse/pull/51347) ([Alexander Gololobov](https://github.com/davenger)).
* Fix slowness in data lakes caused by synchronous HEAD requests (Iceberg/Delta Lake/Hudi were slow with a lot of files). [#50976](https://github.com/ClickHouse/ClickHouse/pull/50976) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Do not read all columns from the right table of a GLOBAL JOIN. [#50721](https://github.com/ClickHouse/ClickHouse/pull/50721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
#### Experimental Feature
* Support parallel replicas with the analyzer. [#50441](https://github.com/ClickHouse/ClickHouse/pull/50441) ([Raúl Marín](https://github.com/Algunenano)).
* Add random sleep before large merges/mutations execution to split load more evenly between replicas in case of zero-copy replication. [#51282](https://github.com/ClickHouse/ClickHouse/pull/51282) ([alesapin](https://github.com/alesapin)).
* Do not replicate `ALTER PARTITION` queries and mutations through `Replicated` database if it has only one shard and the underlying table is `ReplicatedMergeTree`. [#51049](https://github.com/ClickHouse/ClickHouse/pull/51049) ([Alexander Tokmakov](https://github.com/tavplubix)).
#### Improvement
* Relax the thresholds for "too many parts" to match modern hardware. Reintroduce backpressure during long-running insert queries. [#50856](https://github.com/ClickHouse/ClickHouse/pull/50856) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Allow to cast an IPv6 address to an IPv4 address when it lies in the CIDR block ::ffff:0:0/96 (IPv4-mapped addresses). [#49759](https://github.com/ClickHouse/ClickHouse/pull/49759) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Update MongoDB protocol to support MongoDB 5.1 version and newer. Support for the versions with the old protocol (<3.6) is preserved. Closes [#45621](https://github.com/ClickHouse/ClickHouse/issues/45621), [#49879](https://github.com/ClickHouse/ClickHouse/issues/49879). [#50061](https://github.com/ClickHouse/ClickHouse/pull/50061) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add setting `input_format_max_bytes_to_read_for_schema_inference` to limit the number of bytes to read in schema inference. Closes [#50577](https://github.com/ClickHouse/ClickHouse/issues/50577). [#50592](https://github.com/ClickHouse/ClickHouse/pull/50592) ([Kruglov Pavel](https://github.com/Avogar)).
* Respect setting `input_format_null_as_default` in schema inference. [#50602](https://github.com/ClickHouse/ClickHouse/pull/50602) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow to skip trailing empty lines in CSV/TSV/CustomSeparated formats via settings `input_format_csv_skip_trailing_empty_lines`, `input_format_tsv_skip_trailing_empty_lines` and `input_format_custom_skip_trailing_empty_lines` (disabled by default). Closes [#49315](https://github.com/ClickHouse/ClickHouse/issues/49315). [#50635](https://github.com/ClickHouse/ClickHouse/pull/50635) ([Kruglov Pavel](https://github.com/Avogar)).
* Functions "toDateOrDefault|OrNull" and "accuateCast[OrDefault|OrNull]" now correctly parse numeric arguments. [#50709](https://github.com/ClickHouse/ClickHouse/pull/50709) ([Dmitry Kardymon](https://github.com/kardymonds)).
* Support whitespace and `\t` as CSV field delimiters; these delimiters are supported in Spark. [#50712](https://github.com/ClickHouse/ClickHouse/pull/50712) ([KevinyhZou](https://github.com/KevinyhZou)).
* Settings `number_of_mutations_to_delay` and `number_of_mutations_to_throw` are enabled by default now with values 500 and 1000 respectively. [#50726](https://github.com/ClickHouse/ClickHouse/pull/50726) ([Anton Popov](https://github.com/CurtizJ)).
* The dashboard correctly shows missing values. This closes [#50831](https://github.com/ClickHouse/ClickHouse/issues/50831). [#50832](https://github.com/ClickHouse/ClickHouse/pull/50832) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Added the possibility to use date and time arguments in the syslog timestamp format in functions `parseDateTimeBestEffort*` and `parseDateTime64BestEffort*`. [#50925](https://github.com/ClickHouse/ClickHouse/pull/50925) ([Victor Krasnov](https://github.com/sirvickr)).
* Command line parameter "--password" in clickhouse-client can now be specified only once. [#50966](https://github.com/ClickHouse/ClickHouse/pull/50966) ([Alexey Gerasimchuck](https://github.com/Demilivor)).
* Use `hash_of_all_files` from `system.parts` to check identity of parts during on-cluster backups. [#50997](https://github.com/ClickHouse/ClickHouse/pull/50997) ([Vitaly Baranov](https://github.com/vitlibar)).
* In the system table `zookeeper_connection`, the `connected_time` column now shows when the connection was established (in a standard format), and a new column `session_uptime_elapsed_seconds` shows the duration of the established connection session (in seconds). [#51026](https://github.com/ClickHouse/ClickHouse/pull/51026) ([郭小龙](https://github.com/guoxiaolongzte)).
* Improve the progress bar for file/s3/hdfs/url table functions by using chunk size from source data and using incremental total size counting in each thread. Fix the progress bar for *Cluster functions. This closes [#47250](https://github.com/ClickHouse/ClickHouse/issues/47250). [#51088](https://github.com/ClickHouse/ClickHouse/pull/51088) ([Kruglov Pavel](https://github.com/Avogar)).
* Add total_bytes_to_read to the Progress packet in TCP protocol for better Progress bar. [#51158](https://github.com/ClickHouse/ClickHouse/pull/51158) ([Kruglov Pavel](https://github.com/Avogar)).
* Better checking of data parts on disks with filesystem cache. [#51164](https://github.com/ClickHouse/ClickHouse/pull/51164) ([Anton Popov](https://github.com/CurtizJ)).
* Fix occasionally incorrect `current_elements_num` in the filesystem cache. [#51242](https://github.com/ClickHouse/ClickHouse/pull/51242) ([Kseniia Sumarokova](https://github.com/kssenii)).
#### Build/Testing/Packaging Improvement
* Add embedded keeper-client to standalone keeper binary. [#50964](https://github.com/ClickHouse/ClickHouse/pull/50964) ([pufit](https://github.com/pufit)).
* The latest LZ4 version is now used. [#50621](https://github.com/ClickHouse/ClickHouse/pull/50621) ([Nikita Taranov](https://github.com/nickitat)).
* ClickHouse server will print the list of changed settings on fatal errors. This closes [#51137](https://github.com/ClickHouse/ClickHouse/issues/51137). [#51138](https://github.com/ClickHouse/ClickHouse/pull/51138) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Allow building ClickHouse with clang-17. [#51300](https://github.com/ClickHouse/ClickHouse/pull/51300) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* [SQLancer](https://github.com/sqlancer/sqlancer) check is considered stable as bugs that were triggered by it are fixed. Now failures of SQLancer check will be reported as failed check status. [#51340](https://github.com/ClickHouse/ClickHouse/pull/51340) ([Ilya Yatsishin](https://github.com/qoega)).
* Split the huge `RUN` in the Dockerfile into smaller conditional steps. Install the necessary tools on demand in the same `RUN` layer, and remove them after that. Upgrade the OS only once at the beginning. Use a modern way to check the signed repository. Downgrade the base repo to ubuntu:20.04 to address the issues on older docker versions. Upgrade golang version to address golang vulnerabilities. [#51504](https://github.com/ClickHouse/ClickHouse/pull/51504) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Report loading status for executable dictionaries correctly [#48775](https://github.com/ClickHouse/ClickHouse/pull/48775) ([Anton Kozlov](https://github.com/tonickkozlov)).
* Proper mutation of skip indices and projections [#50104](https://github.com/ClickHouse/ClickHouse/pull/50104) ([Amos Bird](https://github.com/amosbird)).
* Cleanup moving parts [#50489](https://github.com/ClickHouse/ClickHouse/pull/50489) ([vdimir](https://github.com/vdimir)).
* Fix backward compatibility for IP types hashing in aggregate functions [#50551](https://github.com/ClickHouse/ClickHouse/pull/50551) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix Log-family tables returning a wrong row count after truncate [#50585](https://github.com/ClickHouse/ClickHouse/pull/50585) ([flynn](https://github.com/ucasfl)).
* Fix bug in `uniqExact` parallel merging [#50590](https://github.com/ClickHouse/ClickHouse/pull/50590) ([Nikita Taranov](https://github.com/nickitat)).
* Revert recent grace hash join changes [#50699](https://github.com/ClickHouse/ClickHouse/pull/50699) ([vdimir](https://github.com/vdimir)).
* Query Cache: Try to fix bad cast from `ColumnConst` to `ColumnVector<char8_t>` [#50704](https://github.com/ClickHouse/ClickHouse/pull/50704) ([Robert Schulze](https://github.com/rschu1ze)).
* Avoid storing logs in Keeper containing unknown operation [#50751](https://github.com/ClickHouse/ClickHouse/pull/50751) ([Antonio Andelic](https://github.com/antonio2368)).
* SummingMergeTree support for DateTime64 [#50797](https://github.com/ClickHouse/ClickHouse/pull/50797) ([Jordi Villar](https://github.com/jrdi)).
* Add compatibility setting for non-const timezones [#50834](https://github.com/ClickHouse/ClickHouse/pull/50834) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix hashing of LDAP params in the cache entries [#50865](https://github.com/ClickHouse/ClickHouse/pull/50865) ([Julian Maicher](https://github.com/jmaicher)).
* Fallback to parsing big integer from String instead of exception in Parquet format [#50873](https://github.com/ClickHouse/ClickHouse/pull/50873) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix checking the lock file too often while writing a backup [#50889](https://github.com/ClickHouse/ClickHouse/pull/50889) ([Vitaly Baranov](https://github.com/vitlibar)).
* Do not apply projection if read-in-order was enabled. [#50923](https://github.com/ClickHouse/ClickHouse/pull/50923) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix race in the Azure blob storage iterator [#50936](https://github.com/ClickHouse/ClickHouse/pull/50936) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix erroneous `sort_description` propagation in `CreatingSets` [#50955](https://github.com/ClickHouse/ClickHouse/pull/50955) ([Nikita Taranov](https://github.com/nickitat)).
* Fix Iceberg v2 optional metadata parsing [#50974](https://github.com/ClickHouse/ClickHouse/pull/50974) ([Kseniia Sumarokova](https://github.com/kssenii)).
* MaterializedMySQL: Keep parentheses for empty table overrides [#50977](https://github.com/ClickHouse/ClickHouse/pull/50977) ([Val Doroshchuk](https://github.com/valbok)).
* Fix crash in BackupCoordinationStageSync::setError() [#51012](https://github.com/ClickHouse/ClickHouse/pull/51012) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix subtly broken copy-on-write of ColumnLowCardinality dictionary [#51064](https://github.com/ClickHouse/ClickHouse/pull/51064) ([Michael Kolupaev](https://github.com/al13n321)).
* Generate safe IVs [#51086](https://github.com/ClickHouse/ClickHouse/pull/51086) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Fix ineffective query cache for SELECTs with subqueries [#51132](https://github.com/ClickHouse/ClickHouse/pull/51132) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix Set index with constant nullable comparison. [#51205](https://github.com/ClickHouse/ClickHouse/pull/51205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a crash in s3 and s3Cluster functions [#51209](https://github.com/ClickHouse/ClickHouse/pull/51209) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix a crash with compiled expressions [#51231](https://github.com/ClickHouse/ClickHouse/pull/51231) ([LiuNeng](https://github.com/liuneng1994)).
* Fix use-after-free in StorageURL when switching URLs [#51260](https://github.com/ClickHouse/ClickHouse/pull/51260) ([Michael Kolupaev](https://github.com/al13n321)).
* Updated check for parameterized view [#51272](https://github.com/ClickHouse/ClickHouse/pull/51272) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix multiple writing of same file to backup [#51299](https://github.com/ClickHouse/ClickHouse/pull/51299) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix fuzzer failure in ActionsDAG [#51301](https://github.com/ClickHouse/ClickHouse/pull/51301) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove garbage from function `transform` [#51350](https://github.com/ClickHouse/ClickHouse/pull/51350) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
### <a id="235"></a> ClickHouse release 23.5, 2023-06-08 ### <a id="235"></a> ClickHouse release 23.5, 2023-06-08
#### Upgrade Notes #### Upgrade Notes


@ -13,6 +13,7 @@ The following versions of ClickHouse server are currently being supported with s
| Version | Supported |
|:-|:-|
| 23.6 | ✔️ |
| 23.5 | ✔️ |
| 23.4 | ✔️ |
| 23.3 | ✔️ |


@ -2,21 +2,23 @@
 #include <base/strong_typedef.h>
 #include <base/extended_types.h>
+#include <Common/formatIPv6.h>
 #include <Common/memcmpSmall.h>

 namespace DB
 {

-using IPv4 = StrongTypedef<UInt32, struct IPv4Tag>;
+struct IPv4 : StrongTypedef<UInt32, struct IPv4Tag>
+{
+    using StrongTypedef::StrongTypedef;
+    using StrongTypedef::operator=;
+    constexpr explicit IPv4(UInt64 value): StrongTypedef(static_cast<UnderlyingType>(value)) {}
+};

 struct IPv6 : StrongTypedef<UInt128, struct IPv6Tag>
 {
-    constexpr IPv6() = default;
-    constexpr explicit IPv6(const UInt128 & x) : StrongTypedef(x) {}
-    constexpr explicit IPv6(UInt128 && x) : StrongTypedef(std::move(x)) {}
-
-    IPv6 & operator=(const UInt128 & rhs) { StrongTypedef::operator=(rhs); return *this; }
-    IPv6 & operator=(UInt128 && rhs) { StrongTypedef::operator=(std::move(rhs)); return *this; }
+    using StrongTypedef::StrongTypedef;
+    using StrongTypedef::operator=;

     bool operator<(const IPv6 & rhs) const
     {
@ -54,12 +56,22 @@ namespace DB
 namespace std
 {

+/// For historical reasons we hash IPv6 as a FixedString(16)
 template <>
 struct hash<DB::IPv6>
 {
     size_t operator()(const DB::IPv6 & x) const
     {
-        return std::hash<DB::IPv6::UnderlyingType>()(x.toUnderType());
+        return std::hash<std::string_view>{}(std::string_view(reinterpret_cast<const char*>(&x.toUnderType()), IPV6_BINARY_LENGTH));
     }
 };
+
+template <>
+struct hash<DB::IPv4>
+{
+    size_t operator()(const DB::IPv4 & x) const
+    {
+        return std::hash<DB::IPv4::UnderlyingType>()(x.toUnderType());
+    }
+};

 }
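For context on the change above: the new `std::hash<DB::IPv6>` hashes the 16 raw bytes of the address, the same way a `FixedString(16)` value would be hashed, instead of hashing the numeric `UInt128`. A minimal standalone sketch of that idea, with a hypothetical `IPv6Like` stand-in for `DB::IPv6` (the GCC/Clang built-in `unsigned __int128` approximates ClickHouse's `UInt128` here, and `sizeof` stands in for `IPV6_BINARY_LENGTH`):

```cpp
#include <cstdint>
#include <functional>
#include <iostream>
#include <string_view>

// Hypothetical stand-in for DB::IPv6; not ClickHouse code.
struct IPv6Like { unsigned __int128 value = 0; };

// Hash the 16-byte binary representation via std::hash<std::string_view>,
// mirroring the approach of the new std::hash<DB::IPv6> specialization.
std::size_t hash_as_fixed_string(const IPv6Like & x)
{
    return std::hash<std::string_view>{}(
        std::string_view(reinterpret_cast<const char *>(&x.value), sizeof(x.value)));
}

int main()
{
    IPv6Like a{42}, b{42};
    // Equal addresses hash equally, and the result matches hashing the
    // binary FixedString(16) form of the address.
    std::cout << (hash_as_fixed_string(a) == hash_as_fixed_string(b) ? "equal" : "differ") << '\n';
}
```

The point of the byte-wise form is consistency: the hash of an `IPv6` column value agrees with the hash of its `FixedString(16)` representation, which is what the backward-compatibility fix for IP-type hashing in aggregate functions (see [#50551](https://github.com/ClickHouse/ClickHouse/pull/50551) in the changelog above) relies on.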


@ -27,6 +27,8 @@ using FromDoubleIntermediateType = long double;
using FromDoubleIntermediateType = boost::multiprecision::cpp_bin_float_double_extended;
#endif

namespace CityHash_v1_0_2 { struct uint128; }

namespace wide
{
@ -281,6 +283,17 @@ struct integer<Bits, Signed>::_impl
        }
    }

    template <typename CityHashUInt128 = CityHash_v1_0_2::uint128>
    constexpr static void wide_integer_from_cityhash_uint128(integer<Bits, Signed> & self, const CityHashUInt128 & value) noexcept
    {
        static_assert(sizeof(item_count) >= 2);

        if constexpr (std::endian::native == std::endian::little)
            wide_integer_from_tuple_like(self, std::make_pair(value.low64, value.high64));
        else
            wide_integer_from_tuple_like(self, std::make_pair(value.high64, value.low64));
    }
    /**
     * N.B. t is constructed from double, so max(t) = max(double) ~ 2^310
     * the recursive call happens when t / 2^64 > 2^64, so there won't be more than 5 of them.
@ -1036,6 +1049,8 @@ constexpr integer<Bits, Signed>::integer(T rhs) noexcept
        _impl::wide_integer_from_wide_integer(*this, rhs);
    else if constexpr (IsTupleLike<T>::value)
        _impl::wide_integer_from_tuple_like(*this, rhs);
    else if constexpr (std::is_same_v<std::remove_cvref_t<T>, CityHash_v1_0_2::uint128>)
        _impl::wide_integer_from_cityhash_uint128(*this, rhs);
    else
        _impl::wide_integer_from_builtin(*this, rhs);
}
@ -1051,6 +1066,8 @@ constexpr integer<Bits, Signed>::integer(std::initializer_list<T> il) noexcept
        _impl::wide_integer_from_wide_integer(*this, *il.begin());
    else if constexpr (IsTupleLike<T>::value)
        _impl::wide_integer_from_tuple_like(*this, *il.begin());
    else if constexpr (std::is_same_v<std::remove_cvref_t<T>, CityHash_v1_0_2::uint128>)
        _impl::wide_integer_from_cityhash_uint128(*this, *il.begin());
    else
        _impl::wide_integer_from_builtin(*this, *il.begin());
}
@ -1088,6 +1105,8 @@ constexpr integer<Bits, Signed> & integer<Bits, Signed>::operator=(T rhs) noexce
{
    if constexpr (IsTupleLike<T>::value)
        _impl::wide_integer_from_tuple_like(*this, rhs);
    else if constexpr (std::is_same_v<std::remove_cvref_t<T>, CityHash_v1_0_2::uint128>)
        _impl::wide_integer_from_cityhash_uint128(*this, rhs);
    else
        _impl::wide_integer_from_builtin(*this, rhs);
    return *this;
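The endianness branch is the subtle part of the new conversion: `wide_integer_from_tuple_like` fills 64-bit limbs in memory order, so the `(low64, high64)` pair has to be swapped on big-endian targets for the resulting value to come out the same. A small sketch of the intended invariant, with hypothetical names (`CityUInt128`, `limbs_from_cityhash`) standing in for the real types:

```cpp
#include <array>
#include <bit>
#include <cstdint>
#include <cstring>
#include <iostream>

// Hypothetical stand-ins; not the real wide::integer internals.
struct CityUInt128 { std::uint64_t low64 = 0, high64 = 0; };

// Order the two halves the way wide_integer_from_cityhash_uint128 feeds them
// to wide_integer_from_tuple_like: low limb first at the lowest address on
// little-endian, high limb first on big-endian.
std::array<std::uint64_t, 2> limbs_from_cityhash(const CityUInt128 & v)
{
    if constexpr (std::endian::native == std::endian::little)
        return {v.low64, v.high64};
    else
        return {v.high64, v.low64};
}

int main()
{
    CityUInt128 h{0x1111222233334444ULL, 0x5555666677778888ULL};
    const auto limbs = limbs_from_cityhash(h);

    // The mathematical value the hash represents: high64 * 2^64 + low64.
    unsigned __int128 expected = (static_cast<unsigned __int128>(h.high64) << 64) | h.low64;

    // On either endianness the limb array has the same byte layout as the
    // native 128-bit integer holding that value.
    std::cout << (std::memcmp(limbs.data(), &expected, sizeof(expected)) == 0) << '\n';
}
```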


@ -2,11 +2,11 @@
 # NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
 # only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
-SET(VERSION_REVISION 54475)
+SET(VERSION_REVISION 54476)
 SET(VERSION_MAJOR 23)
-SET(VERSION_MINOR 6)
+SET(VERSION_MINOR 7)
 SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 2fec796e73efda10a538a03af3205ce8ffa1b2de)
+SET(VERSION_GITHASH d1c7e13d08868cb04d3562dcced704dd577cb1df)
-SET(VERSION_DESCRIBE v23.6.1.1-testing)
+SET(VERSION_DESCRIBE v23.7.1.1-testing)
-SET(VERSION_STRING 23.6.1.1)
+SET(VERSION_STRING 23.7.1.1)
 # end of autochange


@ -17,3 +17,17 @@ get_target_property(FLAT_HASH_SET_INCLUDE_DIR absl::flat_hash_set INTERFACE_INCL
target_include_directories (_abseil_swiss_tables SYSTEM BEFORE INTERFACE ${FLAT_HASH_SET_INCLUDE_DIR})
add_library(ch_contrib::abseil_swiss_tables ALIAS _abseil_swiss_tables)
set(ABSL_FORMAT_SRC
    ${ABSL_ROOT_DIR}/absl/strings/internal/str_format/arg.cc
    ${ABSL_ROOT_DIR}/absl/strings/internal/str_format/bind.cc
    ${ABSL_ROOT_DIR}/absl/strings/internal/str_format/extension.cc
    ${ABSL_ROOT_DIR}/absl/strings/internal/str_format/float_conversion.cc
    ${ABSL_ROOT_DIR}/absl/strings/internal/str_format/output.cc
    ${ABSL_ROOT_DIR}/absl/strings/internal/str_format/parser.cc
)

add_library(_abseil_str_format ${ABSL_FORMAT_SRC})
target_include_directories(_abseil_str_format PUBLIC ${ABSL_ROOT_DIR})
add_library(ch_contrib::abseil_str_format ALIAS _abseil_str_format)


@ -1,6 +1,6 @@
 option (ENABLE_AZURE_BLOB_STORAGE "Enable Azure blob storage" ${ENABLE_LIBRARIES})

-if (NOT ENABLE_AZURE_BLOB_STORAGE OR BUILD_STANDALONE_KEEPER OR OS_FREEBSD)
+if (NOT ENABLE_AZURE_BLOB_STORAGE OR OS_FREEBSD)
     message(STATUS "Not using Azure blob storage")
     return()
 endif()


@ -61,11 +61,24 @@ namespace CityHash_v1_0_2
 typedef uint8_t uint8;
 typedef uint32_t uint32;
 typedef uint64_t uint64;

-typedef std::pair<uint64, uint64> uint128;
+/// Represent an unsigned integer of 128 bits as it's used in CityHash.
+/// Originally CityHash used `std::pair<uint64, uint64>` instead of this struct,
+/// however the members `first` and `second` could be easily confused so they were renamed to `low64` and `high64`:
+/// `first` -> `low64`, `second` -> `high64`.
+struct uint128
+{
+    uint64 low64 = 0;
+    uint64 high64 = 0;
+
+    uint128() = default;
+    uint128(uint64 low64_, uint64 high64_) : low64(low64_), high64(high64_) {}
+
+    friend bool operator ==(const uint128 & x, const uint128 & y) { return (x.low64 == y.low64) && (x.high64 == y.high64); }
+    friend bool operator !=(const uint128 & x, const uint128 & y) { return !(x == y); }
+};

-inline uint64 Uint128Low64(const uint128& x) { return x.first; }
-inline uint64 Uint128High64(const uint128& x) { return x.second; }
+inline uint64 Uint128Low64(const uint128 & x) { return x.low64; }
+inline uint64 Uint128High64(const uint128 & x) { return x.high64; }

 // Hash function for a byte array.
 uint64 CityHash64(const char *buf, size_t len);
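The rename is purely about readability at call sites: code that used to unpack the hash as a pair now reads the halves by name. A tiny standalone sketch (the struct here is a simplified copy for illustration, not the vendored header):

```cpp
#include <cstdint>
#include <iostream>

// Simplified copy of the uint128 struct above, for a standalone example.
struct uint128
{
    std::uint64_t low64 = 0;
    std::uint64_t high64 = 0;
};

int main()
{
    uint128 h{0xDEADBEEFULL, 0xCAFEBABEULL};
    // With std::pair this would have read h.first / h.second, which is easy
    // to get backwards; low64 / high64 say exactly which half each one is.
    std::cout << std::hex << h.high64 << ' ' << h.low64 << '\n';
}
```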

contrib/re2 (vendored submodule)

@ -1 +1 @@
-Subproject commit 13ebb377c6ad763ca61d12dd6f88b1126bd0b911
+Subproject commit 03da4fc0857c285e3a26782f6bc8931c4c950df4


@ -12,6 +12,7 @@ endif()
set(SRC_DIR "${ClickHouse_SOURCE_DIR}/contrib/re2")
set(RE2_SOURCES
    ${SRC_DIR}/re2/bitmap256.cc
    ${SRC_DIR}/re2/bitstate.cc
    ${SRC_DIR}/re2/compile.cc
    ${SRC_DIR}/re2/dfa.cc
@ -28,15 +29,16 @@ set(RE2_SOURCES
     ${SRC_DIR}/re2/regexp.cc
     ${SRC_DIR}/re2/set.cc
     ${SRC_DIR}/re2/simplify.cc
-    ${SRC_DIR}/re2/stringpiece.cc
     ${SRC_DIR}/re2/tostring.cc
     ${SRC_DIR}/re2/unicode_casefold.cc
     ${SRC_DIR}/re2/unicode_groups.cc
-    ${SRC_DIR}/util/pcre.cc
     ${SRC_DIR}/util/rune.cc
     ${SRC_DIR}/util/strutil.cc
 )

 add_library(re2 ${RE2_SOURCES})
 target_include_directories(re2 PUBLIC "${SRC_DIR}")
+target_link_libraries(re2 ch_contrib::abseil_str_format)
# Building re2 which is thread-safe and re2_st which is not.
# re2 changes its state during matching of regular expression, e.g. creates temporary DFA.
@ -48,6 +50,7 @@ target_compile_definitions (re2_st PRIVATE NDEBUG NO_THREADS re2=re2_st)
target_include_directories (re2_st PRIVATE .)
target_include_directories (re2_st SYSTEM PUBLIC ${CMAKE_CURRENT_BINARY_DIR})
target_include_directories (re2_st SYSTEM BEFORE PUBLIC ${SRC_DIR})
target_link_libraries (re2_st ch_contrib::abseil_str_format)
file (MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/re2_st)
foreach (FILENAME filtered_re2.h re2.h set.h stringpiece.h)
@ -60,17 +63,6 @@ foreach (FILENAME filtered_re2.h re2.h set.h stringpiece.h)
     add_dependencies (re2_st transform_${FILENAME})
 endforeach ()

-file (MAKE_DIRECTORY ${CMAKE_CURRENT_BINARY_DIR}/util)
-foreach (FILENAME mutex.h)
-    add_custom_command (OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/util/${FILENAME}"
-        COMMAND ${CMAKE_COMMAND} -DSOURCE_FILENAME="${SRC_DIR}/util/${FILENAME}"
-            -DTARGET_FILENAME="${CMAKE_CURRENT_BINARY_DIR}/util/${FILENAME}"
-            -P "${CMAKE_CURRENT_SOURCE_DIR}/re2_transform.cmake"
-        COMMENT "Creating ${FILENAME} for re2_st library.")
-    add_custom_target (transform_${FILENAME} DEPENDS "${CMAKE_CURRENT_BINARY_DIR}/util/${FILENAME}")
-    add_dependencies (re2_st transform_${FILENAME})
-endforeach ()

 # NOTE: you should not change name of library here, since it is used to generate required header (see above)
 add_library(ch_contrib::re2 ALIAS re2)
 add_library(ch_contrib::re2_st ALIAS re2_st)


@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
     esac

 ARG REPOSITORY="https://s3.amazonaws.com/clickhouse-builds/22.4/31c367d3cd3aefd316778601ff6565119fe36682/package_release"
-ARG VERSION="23.5.3.24"
+ARG VERSION="23.6.1.1524"
 ARG PACKAGES="clickhouse-keeper"

 # user/group precreated explicitly with fixed uid/gid on purpose.


@ -33,7 +33,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="23.5.3.24"
+ARG VERSION="23.6.1.1524"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 # user/group precreated explicitly with fixed uid/gid on purpose.


@ -23,7 +23,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="23.5.3.24"
+ARG VERSION="23.6.1.1524"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 # set non-empty deb_location_url url to create a docker image
@ -48,14 +48,15 @@ ARG TARGETARCH
 RUN arch="${TARGETARCH:-amd64}" \
     && if [ -n "${deb_location_url}" ]; then \
             echo "installing from custom url with deb packages: ${deb_location_url}" \
-            rm -rf /tmp/clickhouse_debs \
+            && rm -rf /tmp/clickhouse_debs \
             && mkdir -p /tmp/clickhouse_debs \
             && for package in ${PACKAGES}; do \
                 { wget --progress=bar:force:noscroll "${deb_location_url}/${package}_${VERSION}_${arch}.deb" -P /tmp/clickhouse_debs || \
                     wget --progress=bar:force:noscroll "${deb_location_url}/${package}_${VERSION}_all.deb" -P /tmp/clickhouse_debs ; } \
                 || exit 1 \
             ; done \
-            && dpkg -i /tmp/clickhouse_debs/*.deb ; \
+            && dpkg -i /tmp/clickhouse_debs/*.deb \
+            && rm -rf /tmp/* ; \
     fi
# install from a single binary # install from a single binary
@ -65,11 +66,12 @@ RUN if [ -n "${single_binary_location_url}" ]; then \
     && mkdir -p /tmp/clickhouse_binary \
     && wget --progress=bar:force:noscroll "${single_binary_location_url}" -O /tmp/clickhouse_binary/clickhouse \
     && chmod +x /tmp/clickhouse_binary/clickhouse \
-    && /tmp/clickhouse_binary/clickhouse install --user "clickhouse" --group "clickhouse" ; \
+    && /tmp/clickhouse_binary/clickhouse install --user "clickhouse" --group "clickhouse" \
+    && rm -rf /tmp/* ; \
     fi
# A fallback to installation from ClickHouse repository # A fallback to installation from ClickHouse repository
-RUN if ! clickhouse local -q "SELECT ''" > /dev/null; then \
+RUN if ! clickhouse local -q "SELECT ''" > /dev/null 2>&1; then \
     apt-get update \
     && apt-get install --yes --no-install-recommends \
         apt-transport-https \


@ -9,6 +9,7 @@ RUN apt-get update \
    expect \
    file \
    lsof \
    odbcinst \
    psmisc \
    python3 \
    python3-lxml \


@ -80,7 +80,7 @@ function start_server
function clone_root function clone_root
 {
-    git config --global --add safe.directory "$FASTTEST_SOURCE"
+    [ "$UID" -eq 0 ] && git config --global --add safe.directory "$FASTTEST_SOURCE"
     git clone --depth 1 https://github.com/ClickHouse/ClickHouse.git -- "$FASTTEST_SOURCE" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/clone_log.txt"
     (
@ -151,7 +151,7 @@ function clone_submodules
     )
     git submodule sync
-    git submodule update --jobs=16 --depth 1 --init "${SUBMODULES_TO_UPDATE[@]}"
+    git submodule update --jobs=16 --depth 1 --single-branch --init "${SUBMODULES_TO_UPDATE[@]}"
     git submodule foreach git reset --hard
     git submodule foreach git checkout @ -f
     git submodule foreach git clean -xfd
@ -202,10 +202,11 @@ function build
         | ts '%Y-%m-%d %H:%M:%S' \
         | tee "$FASTTEST_OUTPUT/test_result.txt"

     if [ "$COPY_CLICKHOUSE_BINARY_TO_OUTPUT" -eq "1" ]; then
-        cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse"
+        mkdir -p "$FASTTEST_OUTPUT/binaries/"
+        cp programs/clickhouse "$FASTTEST_OUTPUT/binaries/clickhouse"
-        strip programs/clickhouse -o "$FASTTEST_OUTPUT/clickhouse-stripped"
-        zstd --threads=0 "$FASTTEST_OUTPUT/clickhouse-stripped"
+        strip programs/clickhouse -o programs/clickhouse-stripped
+        zstd --threads=0 programs/clickhouse-stripped -o "$FASTTEST_OUTPUT/binaries/clickhouse-stripped.zst"
     fi

     ccache_status
     ccache --evict-older-than 1d ||:


@ -46,12 +46,13 @@ RUN arch=${TARGETARCH:-amd64} \
         arm64) rarch=aarch64 ;; \
     esac \
     && cd /tmp \
-    && curl -o mysql-odbc.rpm "https://cdn.mysql.com/archives/mysql-connector-odbc-8.0/mysql-connector-odbc-8.0.27-1.el8.${rarch}.rpm" \
+    && curl -o mysql-odbc.rpm "https://cdn.mysql.com/archives/mysql-connector-odbc-8.0/mysql-connector-odbc-8.0.32-1.el9.${rarch}.rpm" \
     && rpm2archive mysql-odbc.rpm \
     && tar xf mysql-odbc.rpm.tgz -C / ./usr/lib64/ \
-    && LINK_DIR=$(dpkg -L libodbc1 | rg '^/usr/lib/.*-linux-gnu/odbc$') \
-    && ln -s /usr/lib64/libmyodbc8a.so "$LINK_DIR" \
-    && ln -s /usr/lib64/libmyodbc8a.so "$LINK_DIR"/libmyodbc.so
+    && rm mysql-odbc.rpm mysql-odbc.rpm.tgz \
+    && ODBC_DIR=$(dpkg -L odbc-postgresql | rg '^/usr/lib/.*-linux-gnu/odbc$') \
+    && ln -s /usr/lib64/libmyodbc8a.so "$ODBC_DIR" \
+    && ln -s /usr/lib64/libmyodbc8a.so "$ODBC_DIR"/libmyodbc.so

 # Unfortunately this is required for a single test for conversion data from zookeeper to clickhouse-keeper.
 # ZooKeeper is not started by default, but consumes some space in containers.


@ -2,4 +2,7 @@
 # Helper docker container to run iptables without sudo

 FROM alpine
-RUN apk add -U iproute2
+RUN apk add --no-cache -U iproute2 \
+    && for bin in iptables iptables-restore iptables-save; \
+    do ln -sf xtables-nft-multi "/sbin/$bin"; \
+    done


@ -1,7 +1,7 @@
 # docker build -t clickhouse/mysql-php-client .
 # MySQL PHP client docker container

-FROM php:8.0.18-cli
+FROM php:8-cli-alpine

 COPY ./client.crt client.crt
 COPY ./client.key client.key


@ -1,5 +1,5 @@
 # docker build -t clickhouse/integration-tests-runner .
-FROM ubuntu:20.04
+FROM ubuntu:22.04

 # ARG for quick switch to a given ubuntu mirror
 ARG apt_archive="http://archive.ubuntu.com"
@ -56,17 +56,19 @@ RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
         /var/lib/apt/lists/* \
         /var/cache/debconf \
         /tmp/* \
-    && apt-get clean
+    && apt-get clean \
+    && dockerd --version; docker --version

-RUN dockerd --version; docker --version
 RUN python3 -m pip install --no-cache-dir \
     PyMySQL \
-    aerospike==4.0.0 \
+    aerospike==11.1.0 \
     asyncio \
     avro==1.10.2 \
     azure-storage-blob \
     cassandra-driver \
-    confluent-kafka==1.5.0 \
+    confluent-kafka==1.9.2 \
-    delta-spark==2.2.0 \
+    delta-spark==2.3.0 \
     dict2xml \
     dicttoxml \
     docker \
@ -76,40 +78,38 @@ RUN python3 -m pip install --no-cache-dir \
     kafka-python \
     kazoo \
     lz4 \
     meilisearch==0.18.3 \
     minio \
     nats-py \
     protobuf \
-    psycopg2-binary==2.8.6 \
+    psycopg2-binary==2.9.6 \
     pyhdfs \
     pymongo==3.11.0 \
     pyspark==3.3.2 \
     pytest \
     pytest-order==1.0.0 \
     pytest-random \
     pytest-repeat \
     pytest-timeout \
     pytest-xdist \
     pytz \
     redis \
     requests-kerberos \
     tzlocal==2.1 \
     urllib3

-COPY modprobe.sh /usr/local/bin/modprobe
-COPY dockerd-entrypoint.sh /usr/local/bin/
-COPY compose/ /compose/
-COPY misc/ /misc/

+# Hudi supports only spark 3.3.*, not 3.4
 RUN curl -fsSL -O https://dlcdn.apache.org/spark/spark-3.3.2/spark-3.3.2-bin-hadoop3.tgz \
     && tar xzvf spark-3.3.2-bin-hadoop3.tgz -C / \
     && rm spark-3.3.2-bin-hadoop3.tgz

 # download spark and packages
 # if you change packages, don't forget to update them in tests/integration/helpers/cluster.py
-RUN echo ":quit" | /spark-3.3.2-bin-hadoop3/bin/spark-shell --packages "org.apache.hudi:hudi-spark3.3-bundle_2.12:0.13.0,io.delta:delta-core_2.12:2.2.0,org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.1.0" > /dev/null
+RUN packages="org.apache.hudi:hudi-spark3.3-bundle_2.12:0.13.0,\
+io.delta:delta-core_2.12:2.3.0,\
+org.apache.iceberg:iceberg-spark-runtime-3.3_2.12:1.1.0" \
+    && /spark-3.3.2-bin-hadoop3/bin/spark-shell --packages "$packages" > /dev/null \
+    && find /root/.ivy2/ -name '*.jar' -exec ln -sf {} /spark-3.3.2-bin-hadoop3/jars/ \;

 RUN set -x \
     && addgroup --system dockremap \
&& echo 'dockremap:165536:65536' >> /etc/subuid \ && echo 'dockremap:165536:65536' >> /etc/subuid \
    && echo 'dockremap:165536:65536' >> /etc/subuid \
    && echo 'dockremap:165536:65536' >> /etc/subgid
COPY modprobe.sh /usr/local/bin/modprobe
COPY dockerd-entrypoint.sh /usr/local/bin/
COPY compose/ /compose/
COPY misc/ /misc/
# Same options as in test/base/Dockerfile
# (in case you need to override them in tests)
ENV TSAN_OPTIONS='halt_on_error=1 history_size=7 memory_limit_mb=46080 second_deadlock_stack=1'


@ -12,6 +12,17 @@ echo '{
"registry-mirrors" : ["http://dockerhub-proxy.dockerhub-proxy-zone:5000"] "registry-mirrors" : ["http://dockerhub-proxy.dockerhub-proxy-zone:5000"]
}' | dd of=/etc/docker/daemon.json 2>/dev/null }' | dd of=/etc/docker/daemon.json 2>/dev/null
if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    # move the processes from the root group to the /init group,
    # otherwise writing subtree_control fails with EBUSY.
    # An error during moving non-existent process (i.e., "cat") is ignored.
    mkdir -p /sys/fs/cgroup/init
    xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || :
    # enable controllers
    sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
        > /sys/fs/cgroup/cgroup.subtree_control
fi
# In case of test hung it is convenient to use pytest --pdb to debug it,
# and on hung you can simply press Ctrl-C and it will spawn a python pdb,
# but on SIGINT dockerd will exit, so ignore it to preserve the daemon.
@ -52,6 +63,8 @@ export CLICKHOUSE_TESTS_BASE_CONFIG_DIR=/clickhouse-config
export CLICKHOUSE_ODBC_BRIDGE_BINARY_PATH=/clickhouse-odbc-bridge
export CLICKHOUSE_LIBRARY_BRIDGE_BINARY_PATH=/clickhouse-library-bridge
export DOCKER_BASE_TAG=${DOCKER_BASE_TAG:=latest}
export DOCKER_HELPER_TAG=${DOCKER_HELPER_TAG:=latest}
export DOCKER_MYSQL_GOLANG_CLIENT_TAG=${DOCKER_MYSQL_GOLANG_CLIENT_TAG:=latest}
export DOCKER_DOTNET_CLIENT_TAG=${DOCKER_DOTNET_CLIENT_TAG:=latest}
export DOCKER_MYSQL_JAVA_CLIENT_TAG=${DOCKER_MYSQL_JAVA_CLIENT_TAG:=latest}


@ -16,8 +16,9 @@ COPY s3downloader /s3downloader
 ENV S3_URL="https://clickhouse-datasets.s3.amazonaws.com"
 ENV DATASETS="hits visits"

-RUN npm install -g azurite
-RUN npm install tslib
+# The following is already done in clickhouse/stateless-test
+# RUN npm install -g azurite
+# RUN npm install tslib

 COPY run.sh /
 CMD ["/bin/bash", "/run.sh"]


@ -20,6 +20,7 @@ RUN apt-get update -y \
    netcat-openbsd \
    nodejs \
    npm \
    odbcinst \
    openjdk-11-jre-headless \
    openssl \
    postgresql-client \
@ -71,7 +72,7 @@ RUN arch=${TARGETARCH:-amd64} \
     && chmod +x ./mc ./minio

-RUN wget 'https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz' \
+RUN wget --no-verbose 'https://dlcdn.apache.org/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz' \
     && tar -xvf hadoop-3.3.1.tar.gz \
     && rm -rf hadoop-3.3.1.tar.gz
@ -79,8 +80,8 @@ ENV MINIO_ROOT_USER="clickhouse"
 ENV MINIO_ROOT_PASSWORD="clickhouse"
 ENV EXPORT_S3_STORAGE_POLICIES=1

-RUN npm install -g azurite
-RUN npm install tslib
+RUN npm install -g azurite \
+    && npm install -g tslib

 COPY run.sh /
 COPY setup_minio.sh /


@ -90,6 +90,30 @@ sleep 5
attach_gdb_to_clickhouse || true # FIXME: to not break old builds, clean on 2023-09-01 attach_gdb_to_clickhouse || true # FIXME: to not break old builds, clean on 2023-09-01
function run_with_retry()
{
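    # Retry "$@" up to $1 times, sleeping 3 seconds between attempts;
    # exit the whole script if every attempt fails.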
    set +e

    local total_retries="$1"
    shift

    local retry=0

    until [ "$retry" -ge "$total_retries" ]
    do
        if "$@"; then
            set -e
            return
        else
            retry=$((retry + 1))
            sleep 3
        fi
    done

    echo "Command '$*' failed after $total_retries retries, exiting"
    exit 1
}
function run_tests()
{
    set -x
@ -138,7 +162,8 @@ function run_tests()
     ADDITIONAL_OPTIONS+=('--report-logs-stats')

     clickhouse-test "00001_select_1" > /dev/null ||:
-    clickhouse-client -q "insert into system.zookeeper (name, path, value) values ('auxiliary_zookeeper2', '/test/chroot/', '')" ||:
+    run_with_retry 5 clickhouse-client -q "insert into system.zookeeper (name, path, value) values ('auxiliary_zookeeper2', '/test/chroot/', '')"

     set +e
     clickhouse-test --testname --shard --zookeeper --check-zookeeper-session --hung-check --print-time \

@ -1,5 +1,5 @@
 # docker build -t clickhouse/test-util .
-FROM ubuntu:20.04
+FROM ubuntu:22.04

 # ARG for quick switch to a given ubuntu mirror
 ARG apt_archive="http://archive.ubuntu.com"


@ -0,0 +1,19 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.3.6.7-lts (7e3f0a271b7) FIXME as compared to v23.3.5.9-lts (f5fbc2fd2b3)
#### Improvement
* Backported in [#51240](https://github.com/ClickHouse/ClickHouse/issues/51240): Improve the progress bar for file/s3/hdfs/url table functions by using chunk size from source data and using incremental total size counting in each thread. Fix the progress bar for *Cluster functions. This closes [#47250](https://github.com/ClickHouse/ClickHouse/issues/47250). [#51088](https://github.com/ClickHouse/ClickHouse/pull/51088) ([Kruglov Pavel](https://github.com/Avogar)).
#### Build/Testing/Packaging Improvement
* Backported in [#51529](https://github.com/ClickHouse/ClickHouse/issues/51529): Split the huge `RUN` in the Dockerfile into smaller conditional steps. Install the necessary tools on demand in the same `RUN` layer, and remove them after that. Upgrade the OS only once at the beginning. Use a modern way to check the signed repository. Downgrade the base repo to ubuntu:20.04 to address the issues on older docker versions. Upgrade golang version to address golang vulnerabilities. [#51504](https://github.com/ClickHouse/ClickHouse/pull/51504) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix type of LDAP server params hash in cache entry [#50865](https://github.com/ClickHouse/ClickHouse/pull/50865) ([Julian Maicher](https://github.com/jmaicher)).


@ -0,0 +1,16 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.3.7.5-lts (bc683c11c92) FIXME as compared to v23.3.6.7-lts (7e3f0a271b7)
#### Build/Testing/Packaging Improvement
* Backported in [#51568](https://github.com/ClickHouse/ClickHouse/issues/51568): This is a follow-up for [#51504](https://github.com/ClickHouse/ClickHouse/issues/51504); the cleanup was lost during refactoring. [#51564](https://github.com/ClickHouse/ClickHouse/pull/51564) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix fuzzer failure in ActionsDAG [#51301](https://github.com/ClickHouse/ClickHouse/pull/51301) ([Alexey Milovidov](https://github.com/alexey-milovidov)).


@ -0,0 +1,27 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.4.5.22-stable (0ced5d6a8da) FIXME as compared to v23.4.4.16-stable (747ba4fc6a0)
#### Build/Testing/Packaging Improvement
* Backported in [#51530](https://github.com/ClickHouse/ClickHouse/issues/51530): Split the huge `RUN` in the Dockerfile into smaller conditional steps. Install the necessary tools on demand in the same `RUN` layer, and remove them after that. Upgrade the OS only once at the beginning. Use a modern way to check the signed repository. Downgrade the base repo to ubuntu:20.04 to address the issues on older docker versions. Upgrade golang version to address golang vulnerabilities. [#51504](https://github.com/ClickHouse/ClickHouse/pull/51504) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Backported in [#51570](https://github.com/ClickHouse/ClickHouse/issues/51570): This is a follow-up for [#51504](https://github.com/ClickHouse/ClickHouse/issues/51504); the cleanup was lost during refactoring. [#51564](https://github.com/ClickHouse/ClickHouse/pull/51564) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix broken index analysis when binary operator contains a null constant argument [#50177](https://github.com/ClickHouse/ClickHouse/pull/50177) ([Amos Bird](https://github.com/amosbird)).
* Fix reconnecting of HTTPS session when target host IP was changed [#50240](https://github.com/ClickHouse/ClickHouse/pull/50240) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Fix incorrect constant folding [#50536](https://github.com/ClickHouse/ClickHouse/pull/50536) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix type of LDAP server params hash in cache entry [#50865](https://github.com/ClickHouse/ClickHouse/pull/50865) ([Julian Maicher](https://github.com/jmaicher)).
* Fallback to parsing big integer from String instead of exception in Parquet format [#50873](https://github.com/ClickHouse/ClickHouse/pull/50873) ([Kruglov Pavel](https://github.com/Avogar)).
* Do not apply projection if read-in-order was enabled. [#50923](https://github.com/ClickHouse/ClickHouse/pull/50923) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix fuzzer failure in ActionsDAG [#51301](https://github.com/ClickHouse/ClickHouse/pull/51301) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Increase max array size in group bitmap [#50620](https://github.com/ClickHouse/ClickHouse/pull/50620) ([Kruglov Pavel](https://github.com/Avogar)).


@ -0,0 +1,31 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.5.4.25-stable (190f962abcf) FIXME as compared to v23.5.3.24-stable (76f54616d3b)
#### Improvement
* Backported in [#51235](https://github.com/ClickHouse/ClickHouse/issues/51235): Improve the progress bar for file/s3/hdfs/url table functions by using chunk size from source data and using incremental total size counting in each thread. Fix the progress bar for *Cluster functions. This closes [#47250](https://github.com/ClickHouse/ClickHouse/issues/47250). [#51088](https://github.com/ClickHouse/ClickHouse/pull/51088) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#51255](https://github.com/ClickHouse/ClickHouse/issues/51255): Disable cache setting `do_not_evict_index_and_mark_files` (Was enabled in `23.5`). [#51222](https://github.com/ClickHouse/ClickHouse/pull/51222) ([Kseniia Sumarokova](https://github.com/kssenii)).
#### Build/Testing/Packaging Improvement
* Backported in [#51531](https://github.com/ClickHouse/ClickHouse/issues/51531): Split the huge `RUN` in the Dockerfile into smaller conditional steps. Install the necessary tools on demand in the same `RUN` layer, and remove them after that. Upgrade the OS only once at the beginning. Use a modern way to check the signed repository. Downgrade the base repo to ubuntu:20.04 to address the issues on older docker versions. Upgrade golang version to address golang vulnerabilities. [#51504](https://github.com/ClickHouse/ClickHouse/pull/51504) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Backported in [#51572](https://github.com/ClickHouse/ClickHouse/issues/51572): This is a follow-up for [#51504](https://github.com/ClickHouse/ClickHouse/issues/51504); the cleanup was lost during refactoring. [#51564](https://github.com/ClickHouse/ClickHouse/pull/51564) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Query Cache: Try to fix bad cast from ColumnConst to ColumnVector<char8_t> [#50704](https://github.com/ClickHouse/ClickHouse/pull/50704) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix type of LDAP server params hash in cache entry [#50865](https://github.com/ClickHouse/ClickHouse/pull/50865) ([Julian Maicher](https://github.com/jmaicher)).
* Fallback to parsing big integer from String instead of exception in Parquet format [#50873](https://github.com/ClickHouse/ClickHouse/pull/50873) ([Kruglov Pavel](https://github.com/Avogar)).
* Do not apply projection if read-in-order was enabled. [#50923](https://github.com/ClickHouse/ClickHouse/pull/50923) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix race in azure blob storage iterator [#50936](https://github.com/ClickHouse/ClickHouse/pull/50936) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix ineffective query cache for SELECTs with subqueries [#51132](https://github.com/ClickHouse/ClickHouse/pull/51132) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix fuzzer failure in ActionsDAG [#51301](https://github.com/ClickHouse/ClickHouse/pull/51301) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Fix ParallelReadBuffer seek [#50820](https://github.com/ClickHouse/ClickHouse/pull/50820) ([Michael Kolupaev](https://github.com/al13n321)).

View File

@ -0,0 +1,301 @@
---
sidebar_position: 1
sidebar_label: 2023
---
# 2023 Changelog
### ClickHouse release v23.6.1.1524-stable (d1c7e13d088) FIXME as compared to v23.5.1.3174-stable (2fec796e73e)
#### Backward Incompatible Change
* Delete feature `do_not_evict_index_and_mark_files` in the fs cache. This feature was only making things worse. [#51253](https://github.com/ClickHouse/ClickHouse/pull/51253) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Remove ALTER support for experimental LIVE VIEW. [#51287](https://github.com/ClickHouse/ClickHouse/pull/51287) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### New Feature
* Add setting `session_timezone`; it is used as the default timezone for a session when not explicitly specified (see the first sketch after this list). [#44149](https://github.com/ClickHouse/ClickHouse/pull/44149) ([Andrey Zvonov](https://github.com/zvonand)).
* Added an overlay database engine and the ability to represent a directory as a database. This adds 4 databases: 1. `DatabaseOverlay`: implements the `IDatabase` interface; allows combining multiple databases, such as FileSystem and Memory; internally, it stores a vector of other database pointers and proxies requests to them in turn until one is executed successfully. 2. `DatabaseFilesystem`: allows read-only interaction with files stored on the file system; internally, it uses `TableFunctionFile` to implicitly load a file when a user requests the table, and caches the result of the `TableFunctionFile` call to provide quick access. 3. `DatabaseS3`: allows read-only interaction with S3 storage; it uses `TableFunctionS3` to implicitly load tables from S3. 4. `DatabaseHDFS`: allows interaction with HDFS storage; it uses `TableFunctionHDFS` to implicitly load tables from HDFS. [#48821](https://github.com/ClickHouse/ClickHouse/pull/48821) ([alekseygolub](https://github.com/alekseygolub)).
* Add a new setting named `use_mysql_types_in_show_columns` to alter the `SHOW COLUMNS` SQL statement to display MySQL equivalent types when a client is connected via the MySQL compatibility port. [#49577](https://github.com/ClickHouse/ClickHouse/pull/49577) ([Thomas Panetti](https://github.com/tpanetti)).
* Added option `--rename_files_after_processing <pattern>`. This closes [#34207](https://github.com/ClickHouse/ClickHouse/issues/34207). [#49626](https://github.com/ClickHouse/ClickHouse/pull/49626) ([alekseygolub](https://github.com/alekseygolub)).
* Add table function `redis` (`TableFunctionRedis`) and table engine `Redis`; add `RedisCommon`, which contains Redis-related tools and types; support `equals` and `in` filter push-down into Redis. [#50150](https://github.com/ClickHouse/ClickHouse/pull/50150) ([JackyWoo](https://github.com/JackyWoo)).
* Allow to skip empty files in the file/s3/url/hdfs table functions using the settings `s3_skip_empty_files`, `hdfs_skip_empty_files`, `engine_file_skip_empty_files`, `engine_url_skip_empty_files` (see the second sketch after this list). [#50364](https://github.com/ClickHouse/ClickHouse/pull/50364) ([Kruglov Pavel](https://github.com/Avogar)).
* `clickhouse-client` can now be called with a connection string instead of `--host`, `--port`, `--user`, etc. [#50689](https://github.com/ClickHouse/ClickHouse/pull/50689) ([Alexey Gerasimchuck](https://github.com/Demilivor)).
* Codec `DEFLATE_QPL` is now controlled via the server setting `enable_deflate_qpl_codec` (default: false) instead of the setting `allow_experimental_codecs`. This marks `DEFLATE_QPL` non-experimental. [#50775](https://github.com/ClickHouse/ClickHouse/pull/50775) ([Robert Schulze](https://github.com/rschu1ze)).
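
A minimal sketch of the `session_timezone` setting from the first entry above; the chosen timezone and the literal are illustrative:

```sql
-- Set a default timezone for the current session
-- (the timezone name here is only an example).
SET session_timezone = 'Europe/Berlin';

-- The reported timezone and the parsing of DateTime literals
-- should now follow the session timezone.
SELECT timeZone(), toDateTime('2023-06-30 12:00:00');
```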
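
A sketch of the empty-file skipping settings, assuming a hypothetical glob of CSV files read through the `file` table function:

```sql
-- Empty files matched by the glob are skipped instead of
-- producing an error (the path is illustrative).
SELECT *
FROM file('data/*.csv', 'CSV')
SETTINGS engine_file_skip_empty_files = 1;
```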
#### Performance Improvement
* Improve performance when QueryProfiler is enabled by using a thread-local timer_id instead of a global object. [#48778](https://github.com/ClickHouse/ClickHouse/pull/48778) ([Jiebin Sun](https://github.com/jiebinn)).
* Rewrite the CapnProto input/output format to improve its performance. Map column names to CapnProto fields case-insensitively; fix reading/writing of nested structure fields. [#49752](https://github.com/ClickHouse/ClickHouse/pull/49752) ([Kruglov Pavel](https://github.com/Avogar)).
* Optimize parquet write performance for parallel threads. [#50102](https://github.com/ClickHouse/ClickHouse/pull/50102) ([Hongbin Ma](https://github.com/binmahone)).
* Disable `parallelize_output_from_storages` for processing MATERIALIZED VIEWs and storages with one block only. [#50214](https://github.com/ClickHouse/ClickHouse/pull/50214) ([Azat Khuzhin](https://github.com/azat)).
* Avoid block permutation during sort if the block is already sorted (merges https://github.com/ClickHouse/ClickHouse/pull/46558, "Avoid processing already sorted data"). [#50697](https://github.com/ClickHouse/ClickHouse/pull/50697) ([Maksim Kita](https://github.com/kitaisreal)).
* In earlier PRs ([#50062](https://github.com/ClickHouse/ClickHouse/issues/50062), [#50307](https://github.com/ClickHouse/ClickHouse/issues/50307)), we proposed an optimization pattern which transforms predicates with `toYear`/`toYYYYMM` into an equivalent but converter-free form. This transformation could bring a significant performance impact to some workloads, such as SSB. However, as issue [#50628](https://github.com/ClickHouse/ClickHouse/issues/50628) indicated, these two PRs introduced issues which could result in incomplete query results, and as a result they were reverted by [#50629](https://github.com/ClickHouse/ClickHouse/issues/50629). [#50951](https://github.com/ClickHouse/ClickHouse/pull/50951) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* Make multiple list requests to ZooKeeper in parallel to speed up reading from the `system.zookeeper` table (see the sketch after this list). [#51042](https://github.com/ClickHouse/ClickHouse/pull/51042) ([Alexander Gololobov](https://github.com/davenger)).
* Speedup initialization of DateTime lookup tables for time zones. This should reduce startup/connect time of clickhouse client especially in debug build as it is rather heavy. [#51347](https://github.com/ClickHouse/ClickHouse/pull/51347) ([Alexander Gololobov](https://github.com/davenger)).
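
A sketch of the kind of query the parallel list requests speed up; the paths are illustrative:

```sql
-- Listing several ZooKeeper paths in one query issues
-- multiple list requests, now performed in parallel.
SELECT path, name
FROM system.zookeeper
WHERE path IN ('/', '/clickhouse');
```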
#### Improvement
* Allow casting IPv6 to an IPv4 address for the CIDR `::ffff:0:0/96` (IPv4-mapped addresses); see the first sketch after this list. [#49759](https://github.com/ClickHouse/ClickHouse/pull/49759) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Update MongoDB protocol to support MongoDB 5.1 version and newer. Support for the versions with the old protocol (<3.6) is preserved. Closes [#45621](https://github.com/ClickHouse/ClickHouse/issues/45621), [#49879](https://github.com/ClickHouse/ClickHouse/issues/49879). [#50061](https://github.com/ClickHouse/ClickHouse/pull/50061) ([Nikolay Degterinsky](https://github.com/evillique)).
* Improved scheduling of merge selecting and cleanup tasks in `ReplicatedMergeTree`. The tasks will not be executed too frequently when there's nothing to merge or cleanup. Added settings `max_merge_selecting_sleep_ms`, `merge_selecting_sleep_slowdown_factor`, `max_cleanup_delay_period` and `cleanup_thread_preferred_points_per_iteration`. It should close [#31919](https://github.com/ClickHouse/ClickHouse/issues/31919). [#50107](https://github.com/ClickHouse/ClickHouse/pull/50107) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Support parallel replicas with the analyzer. [#50441](https://github.com/ClickHouse/ClickHouse/pull/50441) ([Raúl Marín](https://github.com/Algunenano)).
* Add setting `input_format_max_bytes_to_read_for_schema_inference` to limit the number of bytes to read in schema inference (see the second sketch after this list). Closes [#50577](https://github.com/ClickHouse/ClickHouse/issues/50577). [#50592](https://github.com/ClickHouse/ClickHouse/pull/50592) ([Kruglov Pavel](https://github.com/Avogar)).
* Respect setting `input_format_as_default` in schema inference. [#50602](https://github.com/ClickHouse/ClickHouse/pull/50602) ([Kruglov Pavel](https://github.com/Avogar)).
* Make filter push down through cross join. [#50605](https://github.com/ClickHouse/ClickHouse/pull/50605) ([Han Fei](https://github.com/hanfei1991)).
* Actual lz4 version is used now. [#50621](https://github.com/ClickHouse/ClickHouse/pull/50621) ([Nikita Taranov](https://github.com/nickitat)).
* Allow to skip trailing empty lines in CSV/TSV/CustomSeparated formats via settings `input_format_csv_skip_trailing_empty_lines`, `input_format_tsv_skip_trailing_empty_lines` and `input_format_custom_skip_trailing_empty_lines` (disabled by default; see the third sketch after this list). Closes [#49315](https://github.com/ClickHouse/ClickHouse/issues/49315). [#50635](https://github.com/ClickHouse/ClickHouse/pull/50635) ([Kruglov Pavel](https://github.com/Avogar)).
* Functions "toDateOrDefault|OrNull()" and "accuateCast[OrDefault|OrNull]()" now correctly parse numeric arguments. [#50709](https://github.com/ClickHouse/ClickHouse/pull/50709) ([Dmitry Kardymon](https://github.com/kardymonds)).
* Allow the CSV input format to parse files with whitespace or `\t` as the field delimiter; these delimiters are supported in Spark. [#50712](https://github.com/ClickHouse/ClickHouse/pull/50712) ([KevinyhZou](https://github.com/KevinyhZou)).
* Settings `number_of_mutations_to_delay` and `number_of_mutations_to_throw` are enabled by default now with values 500 and 1000 respectively. [#50726](https://github.com/ClickHouse/ClickHouse/pull/50726) ([Anton Popov](https://github.com/CurtizJ)).
* Keeper improvement: add feature flags for Keeper API. Each feature flag can be disabled or enabled by defining it under `keeper_server.feature_flags` config. E.g. to enable `CheckNotExists` request, `keeper_server.feature_flags.check_not_exists` should be set to `1` on Keeper. [#50796](https://github.com/ClickHouse/ClickHouse/pull/50796) ([Antonio Andelic](https://github.com/antonio2368)).
* The dashboard correctly shows missing values. This closes [#50831](https://github.com/ClickHouse/ClickHouse/issues/50831). [#50832](https://github.com/ClickHouse/ClickHouse/pull/50832) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* CGroups metrics related to CPU are replaced with one metric, `CGroupMaxCPU` for better usability. The `Normalized` CPU usage metrics will be normalized to CGroups limits instead of the total number of CPUs when they are set. This closes [#50836](https://github.com/ClickHouse/ClickHouse/issues/50836). [#50835](https://github.com/ClickHouse/ClickHouse/pull/50835) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Relax the thresholds for "too many parts" to be more modern. Return the backpressure during long-running insert queries. [#50856](https://github.com/ClickHouse/ClickHouse/pull/50856) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Added the possibility to use date and time arguments in the syslog timestamp format in the functions `parseDateTimeBestEffort*()` and `parseDateTime64BestEffort*()`; see the fourth sketch after this list. [#50925](https://github.com/ClickHouse/ClickHouse/pull/50925) ([Victor Krasnov](https://github.com/sirvickr)).
* Suggest using `APPEND` or `TRUNCATE` for `INTO OUTFILE` when the file already exists; see the fifth sketch after this list. [#50950](https://github.com/ClickHouse/ClickHouse/pull/50950) ([alekar](https://github.com/alekar)).
* Add embedded keeper-client to standalone keeper binary. [#50964](https://github.com/ClickHouse/ClickHouse/pull/50964) ([pufit](https://github.com/pufit)).
* Command line parameter "--password" in clickhouse-client can now be specified only once. [#50966](https://github.com/ClickHouse/ClickHouse/pull/50966) ([Alexey Gerasimchuck](https://github.com/Demilivor)).
* Fix slowness in data lakes caused by synchronous HEAD requests (related to Iceberg/Deltalake/Hudi being slow with a lot of files). [#50976](https://github.com/ClickHouse/ClickHouse/pull/50976) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Use `hash_of_all_files` from `system.parts` to check identity of parts during on-cluster backups. [#50997](https://github.com/ClickHouse/ClickHouse/pull/50997) ([Vitaly Baranov](https://github.com/vitlibar)).
* In the system table `zookeeper_connection`, the `connected_time` column now identifies the time when the connection was established (in standard format), and a new column `session_uptime_elapsed_seconds` shows the duration of the established connection session (in seconds). [#51026](https://github.com/ClickHouse/ClickHouse/pull/51026) ([郭小龙](https://github.com/guoxiaolongzte)).
* Show halves of checksums in `system.parts`, `system.projection_parts` and in error messages in the correct order. [#51040](https://github.com/ClickHouse/ClickHouse/pull/51040) ([Vitaly Baranov](https://github.com/vitlibar)).
* Do not replicate `ALTER PARTITION` queries and mutations through `Replicated` database if it has only one shard and the underlying table is `ReplicatedMergeTree`. [#51049](https://github.com/ClickHouse/ClickHouse/pull/51049) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Improve the progress bar for file/s3/hdfs/url table functions by using chunk size from source data and using incremental total size counting in each thread. Fix the progress bar for *Cluster functions. This closes [#47250](https://github.com/ClickHouse/ClickHouse/issues/47250). [#51088](https://github.com/ClickHouse/ClickHouse/pull/51088) ([Kruglov Pavel](https://github.com/Avogar)).
* Add total_bytes_to_read to Progress packet in TCP protocol for better Progress bar. [#51158](https://github.com/ClickHouse/ClickHouse/pull/51158) ([Kruglov Pavel](https://github.com/Avogar)).
* Better checking of data parts on disks with filesystem cache. [#51164](https://github.com/ClickHouse/ClickHouse/pull/51164) ([Anton Popov](https://github.com/CurtizJ)).
* Disable cache setting `do_not_evict_index_and_mark_files` (it was enabled in `23.5`). [#51222](https://github.com/ClickHouse/ClickHouse/pull/51222) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix occasionally incorrect `current_elements_num` in the fs cache. [#51242](https://github.com/ClickHouse/ClickHouse/pull/51242) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add random sleep before merges/mutations execution to split load more evenly between replicas in case of zero-copy replication. [#51282](https://github.com/ClickHouse/ClickHouse/pull/51282) ([alesapin](https://github.com/alesapin)).
* The function `transform` as well as `CASE` with value matching started to support all data types. This closes [#29730](https://github.com/ClickHouse/ClickHouse/issues/29730). This closes [#32387](https://github.com/ClickHouse/ClickHouse/issues/32387). This closes [#50827](https://github.com/ClickHouse/ClickHouse/issues/50827). This closes [#31336](https://github.com/ClickHouse/ClickHouse/issues/31336). This closes [#40493](https://github.com/ClickHouse/ClickHouse/issues/40493). [#51351](https://github.com/ClickHouse/ClickHouse/pull/51351) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* We have found a bug in LLVM that makes the usage of `compile_expressions` setting unsafe. It is disabled by default. [#51368](https://github.com/ClickHouse/ClickHouse/pull/51368) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Issue [#50220](https://github.com/ClickHouse/ClickHouse/issues/50220) reports a core dump in the `grace_hash` join. We finally reproduced the exception locally and found that the issue is related to a failure when creating a temporary file, somehow triggered by https://github.com/ClickHouse/ClickHouse/pull/49816 and https://github.com/ClickHouse/ClickHouse/pull/49483. [#51382](https://github.com/ClickHouse/ClickHouse/pull/51382) ([lgbo](https://github.com/lgbo-ustc)).
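
A quick sketch of the IPv4-mapped address cast from this list; the address is illustrative:

```sql
-- `::ffff:192.168.0.1` lies inside ::ffff:0:0/96,
-- so the cast to IPv4 is now allowed.
SELECT CAST(toIPv6('::ffff:192.168.0.1') AS IPv4);
```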
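
A sketch of capping schema inference reads, assuming a hypothetical JSONEachRow file:

```sql
-- Infer the schema from at most ~1 MiB of data
-- (the file name is illustrative).
DESCRIBE file('events.jsonl', 'JSONEachRow')
SETTINGS input_format_max_bytes_to_read_for_schema_inference = 1048576;
```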
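
A sketch of skipping trailing empty lines, assuming a hypothetical TSV file:

```sql
-- Empty lines at the end of the file are ignored
-- (the file name is illustrative).
SELECT *
FROM file('events.tsv', 'TSV')
SETTINGS input_format_tsv_skip_trailing_empty_lines = 1;
```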
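
A sketch of the syslog timestamp parsing; the timestamp is illustrative and the year is inferred from the current date:

```sql
-- Syslog-style timestamps such as "Jun 30 18:22:45" are now accepted.
SELECT parseDateTimeBestEffort('Jun 30 18:22:45');
```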
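
And a sketch of the `INTO OUTFILE` modifiers, assuming the query runs in clickhouse-client and the file name is illustrative:

```sql
-- TRUNCATE overwrites an existing file; APPEND would add to it instead.
SELECT number
FROM system.numbers
LIMIT 10
INTO OUTFILE 'numbers.csv' TRUNCATE
FORMAT CSV;
```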
#### Build/Testing/Packaging Improvement
* Update contrib/re2 to 2023-06-02. [#50949](https://github.com/ClickHouse/ClickHouse/pull/50949) ([Yuriy Chernyshov](https://github.com/georgthegreat)).
* ClickHouse server will print the list of changed settings on fatal errors. This closes [#51137](https://github.com/ClickHouse/ClickHouse/issues/51137). [#51138](https://github.com/ClickHouse/ClickHouse/pull/51138) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* In https://github.com/ClickHouse/ClickHouse/pull/51143 the fast tests failed, but the status wasn't created because of the chown `file not found` error. This addresses it. Decrease the default values for `http_max_field_value_size` and `http_max_field_name_size` to 128 KiB. [#51163](https://github.com/ClickHouse/ClickHouse/pull/51163) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Update Ubuntu version in docker containers. [#51180](https://github.com/ClickHouse/ClickHouse/pull/51180) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Allow building ClickHouse with clang-17. [#51300](https://github.com/ClickHouse/ClickHouse/pull/51300) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* [SQLancer](https://github.com/sqlancer/sqlancer) check is considered stable as bugs that were triggered by it are fixed. Now failures of SQLancer check will be reported as failed check status. [#51340](https://github.com/ClickHouse/ClickHouse/pull/51340) ([Ilya Yatsishin](https://github.com/qoega)).
* Making our CI even better. [#51494](https://github.com/ClickHouse/ClickHouse/pull/51494) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Split the huge `RUN` in the Dockerfile into smaller conditional steps. Install the necessary tools on demand in the same `RUN` layer, and remove them after that. Upgrade the OS only once at the beginning. Use a modern way to check the signed repository. Downgrade the base repo to ubuntu:20.04 to address the issues on older docker versions. Upgrade the golang version to address golang vulnerabilities. [#51504](https://github.com/ClickHouse/ClickHouse/pull/51504) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* This is a follow-up for [#51504](https://github.com/ClickHouse/ClickHouse/issues/51504); the cleanup was lost during refactoring. [#51564](https://github.com/ClickHouse/ClickHouse/pull/51564) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Report loading status for executable dictionaries correctly [#48775](https://github.com/ClickHouse/ClickHouse/pull/48775) ([Anton Kozlov](https://github.com/tonickkozlov)).
* Proper mutation of skip indices and projections [#50104](https://github.com/ClickHouse/ClickHouse/pull/50104) ([Amos Bird](https://github.com/amosbird)).
* Cleanup moving parts [#50489](https://github.com/ClickHouse/ClickHouse/pull/50489) ([vdimir](https://github.com/vdimir)).
* Fix backward compatibility for IP types hashing in aggregate functions [#50551](https://github.com/ClickHouse/ClickHouse/pull/50551) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix Log family table return wrong rows count after truncate [#50585](https://github.com/ClickHouse/ClickHouse/pull/50585) ([flynn](https://github.com/ucasfl)).
* Fix bug in `uniqExact` parallel merging [#50590](https://github.com/ClickHouse/ClickHouse/pull/50590) ([Nikita Taranov](https://github.com/nickitat)).
* Revert recent grace hash join changes [#50699](https://github.com/ClickHouse/ClickHouse/pull/50699) ([vdimir](https://github.com/vdimir)).
* Query Cache: Try to fix bad cast from `ColumnConst` to `ColumnVector<char8_t>` [#50704](https://github.com/ClickHouse/ClickHouse/pull/50704) ([Robert Schulze](https://github.com/rschu1ze)).
* Do not read all the columns from right GLOBAL JOIN table. [#50721](https://github.com/ClickHouse/ClickHouse/pull/50721) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Avoid storing logs in Keeper containing unknown operation [#50751](https://github.com/ClickHouse/ClickHouse/pull/50751) ([Antonio Andelic](https://github.com/antonio2368)).
* SummingMergeTree support for DateTime64 [#50797](https://github.com/ClickHouse/ClickHouse/pull/50797) ([Jordi Villar](https://github.com/jrdi)).
* Add compat setting for non-const timezones [#50834](https://github.com/ClickHouse/ClickHouse/pull/50834) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix type of LDAP server params hash in cache entry [#50865](https://github.com/ClickHouse/ClickHouse/pull/50865) ([Julian Maicher](https://github.com/jmaicher)).
* Fallback to parsing big integer from String instead of exception in Parquet format [#50873](https://github.com/ClickHouse/ClickHouse/pull/50873) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix checking the lock file too often while writing a backup [#50889](https://github.com/ClickHouse/ClickHouse/pull/50889) ([Vitaly Baranov](https://github.com/vitlibar)).
* Do not apply projection if read-in-order was enabled. [#50923](https://github.com/ClickHouse/ClickHouse/pull/50923) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix race in azure blob storage iterator [#50936](https://github.com/ClickHouse/ClickHouse/pull/50936) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix erroneous `sort_description` propagation in `CreatingSets` [#50955](https://github.com/ClickHouse/ClickHouse/pull/50955) ([Nikita Taranov](https://github.com/nickitat)).
* Fix iceberg V2 optional metadata parsing [#50974](https://github.com/ClickHouse/ClickHouse/pull/50974) ([Kseniia Sumarokova](https://github.com/kssenii)).
* MaterializedMySQL: Keep parentheses for empty table overrides [#50977](https://github.com/ClickHouse/ClickHouse/pull/50977) ([Val Doroshchuk](https://github.com/valbok)).
* Fix crash in BackupCoordinationStageSync::setError() [#51012](https://github.com/ClickHouse/ClickHouse/pull/51012) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix subtly broken copy-on-write of ColumnLowCardinality dictionary [#51064](https://github.com/ClickHouse/ClickHouse/pull/51064) ([Michael Kolupaev](https://github.com/al13n321)).
* Generate safe IVs [#51086](https://github.com/ClickHouse/ClickHouse/pull/51086) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Fix ineffective query cache for SELECTs with subqueries [#51132](https://github.com/ClickHouse/ClickHouse/pull/51132) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix Set index with constant nullable comparison. [#51205](https://github.com/ClickHouse/ClickHouse/pull/51205) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix a crash in s3 and s3Cluster functions [#51209](https://github.com/ClickHouse/ClickHouse/pull/51209) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix core dump when compiling an expression [#51231](https://github.com/ClickHouse/ClickHouse/pull/51231) ([LiuNeng](https://github.com/liuneng1994)).
* Fix use-after-free in StorageURL when switching URLs [#51260](https://github.com/ClickHouse/ClickHouse/pull/51260) ([Michael Kolupaev](https://github.com/al13n321)).
* Updated check for parameterized view [#51272](https://github.com/ClickHouse/ClickHouse/pull/51272) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix multiple writing of same file to backup [#51299](https://github.com/ClickHouse/ClickHouse/pull/51299) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix fuzzer failure in ActionsDAG [#51301](https://github.com/ClickHouse/ClickHouse/pull/51301) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove garbage from function `transform` [#51350](https://github.com/ClickHouse/ClickHouse/pull/51350) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix MSan report in lowerUTF8/upperUTF8 [#51371](https://github.com/ClickHouse/ClickHouse/pull/51371) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* fs cache: fix slightly incorrect `use_count` after [#44985](https://github.com/ClickHouse/ClickHouse/issues/44985) [#51406](https://github.com/ClickHouse/ClickHouse/pull/51406) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix segfault in MathUnary [#51499](https://github.com/ClickHouse/ClickHouse/pull/51499) ([Ilya Yatsishin](https://github.com/qoega)).
* Fix logical assert in `tupleElement()` with default values [#51534](https://github.com/ClickHouse/ClickHouse/pull/51534) ([Robert Schulze](https://github.com/rschu1ze)).
* fs cache: remove file from opened file cache immediately when evicting file [#51596](https://github.com/ClickHouse/ClickHouse/pull/51596) ([Kseniia Sumarokova](https://github.com/kssenii)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Deprecate delete-on-destroy.txt [#49181](https://github.com/ClickHouse/ClickHouse/pull/49181) ([Alexander Gololobov](https://github.com/davenger)).
* Attempt to increase the general runners' survival rate [#49283](https://github.com/ClickHouse/ClickHouse/pull/49283) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Refactor subqueries for IN [#49570](https://github.com/ClickHouse/ClickHouse/pull/49570) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Test plan optimization analyzer [#50095](https://github.com/ClickHouse/ClickHouse/pull/50095) ([Igor Nikonov](https://github.com/devcrafter)).
* Implement endianness-independent serialization for quantileTiming [#50324](https://github.com/ClickHouse/ClickHouse/pull/50324) ([ltrk2](https://github.com/ltrk2)).
* require `finalize()` call before the destructor for all write buffers [#50395](https://github.com/ClickHouse/ClickHouse/pull/50395) ([Sema Checherinda](https://github.com/CheSema)).
* Implement big-endian support for the deterministic reservoir sampler [#50405](https://github.com/ClickHouse/ClickHouse/pull/50405) ([ltrk2](https://github.com/ltrk2)).
* Fix compilation error on big-endian platforms [#50406](https://github.com/ClickHouse/ClickHouse/pull/50406) ([ltrk2](https://github.com/ltrk2)).
* Attach gdb in stateless tests [#50487](https://github.com/ClickHouse/ClickHouse/pull/50487) ([Kruglov Pavel](https://github.com/Avogar)).
* JIT infrastructure refactoring [#50531](https://github.com/ClickHouse/ClickHouse/pull/50531) ([Maksim Kita](https://github.com/kitaisreal)).
* Analyzer: Do not apply Query Tree optimizations on shards [#50584](https://github.com/ClickHouse/ClickHouse/pull/50584) ([Dmitry Novik](https://github.com/novikd)).
* Increase max array size in group bitmap [#50620](https://github.com/ClickHouse/ClickHouse/pull/50620) ([Kruglov Pavel](https://github.com/Avogar)).
* Misc Annoy index improvements [#50661](https://github.com/ClickHouse/ClickHouse/pull/50661) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix reading negative decimals in avro format [#50668](https://github.com/ClickHouse/ClickHouse/pull/50668) ([Kruglov Pavel](https://github.com/Avogar)).
* Unify priorities for connection pools [#50675](https://github.com/ClickHouse/ClickHouse/pull/50675) ([Sergei Trifonov](https://github.com/serxa)).
* Postpone check of outdated parts [#50676](https://github.com/ClickHouse/ClickHouse/pull/50676) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Unify priorities: `IExecutableTask`s [#50677](https://github.com/ClickHouse/ClickHouse/pull/50677) ([Sergei Trifonov](https://github.com/serxa)).
* Disable grace_hash join in stress tests [#50693](https://github.com/ClickHouse/ClickHouse/pull/50693) ([vdimir](https://github.com/vdimir)).
* ReverseTransform small improvement [#50698](https://github.com/ClickHouse/ClickHouse/pull/50698) ([Maksim Kita](https://github.com/kitaisreal)).
* Support OPTIMIZE for temporary tables [#50710](https://github.com/ClickHouse/ClickHouse/pull/50710) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Refactor reading from object storages [#50711](https://github.com/ClickHouse/ClickHouse/pull/50711) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix data race in log message of cached buffer [#50723](https://github.com/ClickHouse/ClickHouse/pull/50723) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Add new keywords into projections documentation [#50743](https://github.com/ClickHouse/ClickHouse/pull/50743) ([YalalovSM](https://github.com/YalalovSM)).
* Fix build for aarch64 (temporary disable azure) [#50770](https://github.com/ClickHouse/ClickHouse/pull/50770) ([alesapin](https://github.com/alesapin)).
* Update version after release [#50772](https://github.com/ClickHouse/ClickHouse/pull/50772) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update version_date.tsv and changelogs after v23.5.1.3174-stable [#50774](https://github.com/ClickHouse/ClickHouse/pull/50774) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update CHANGELOG.md [#50788](https://github.com/ClickHouse/ClickHouse/pull/50788) ([Ilya Yatsishin](https://github.com/qoega)).
* Update version_date.tsv and changelogs after v23.2.7.32-stable [#50809](https://github.com/ClickHouse/ClickHouse/pull/50809) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Desctructing --> Destructing [#50810](https://github.com/ClickHouse/ClickHouse/pull/50810) ([Robert Schulze](https://github.com/rschu1ze)).
* Don't mark a part as broken on `Poco::TimeoutException` [#50811](https://github.com/ClickHouse/ClickHouse/pull/50811) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Rename azure_blob_storage to azureBlobStorage [#50812](https://github.com/ClickHouse/ClickHouse/pull/50812) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix ParallelReadBuffer seek [#50820](https://github.com/ClickHouse/ClickHouse/pull/50820) ([Michael Kolupaev](https://github.com/al13n321)).
* [RFC] Print git hash when crashing [#50823](https://github.com/ClickHouse/ClickHouse/pull/50823) ([Michael Kolupaev](https://github.com/al13n321)).
* Add tests for function "transform" [#50833](https://github.com/ClickHouse/ClickHouse/pull/50833) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update version_date.tsv and changelogs after v23.5.2.7-stable [#50844](https://github.com/ClickHouse/ClickHouse/pull/50844) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Updated changelog with azureBlobStorage table function & engine entry [#50850](https://github.com/ClickHouse/ClickHouse/pull/50850) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Update easy_tasks_sorted_ru.md [#50853](https://github.com/ClickHouse/ClickHouse/pull/50853) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Document x86 / ARM prerequisites for Docker image [#50867](https://github.com/ClickHouse/ClickHouse/pull/50867) ([Robert Schulze](https://github.com/rschu1ze)).
* MaterializedMySQL: Add test_named_collections [#50874](https://github.com/ClickHouse/ClickHouse/pull/50874) ([Val Doroshchuk](https://github.com/valbok)).
* Update version_date.tsv and changelogs after v22.8.18.31-lts [#50881](https://github.com/ClickHouse/ClickHouse/pull/50881) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v23.3.3.52-lts [#50882](https://github.com/ClickHouse/ClickHouse/pull/50882) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v23.4.3.48-stable [#50883](https://github.com/ClickHouse/ClickHouse/pull/50883) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* MaterializedMySQL: Add additional test case to insert_with_modify_binlog_checksum [#50884](https://github.com/ClickHouse/ClickHouse/pull/50884) ([Val Doroshchuk](https://github.com/valbok)).
* Update broken tests list [#50886](https://github.com/ClickHouse/ClickHouse/pull/50886) ([Dmitry Novik](https://github.com/novikd)).
* Fix LOGICAL_ERROR in snowflakeToDateTime*() [#50893](https://github.com/ClickHouse/ClickHouse/pull/50893) ([Robert Schulze](https://github.com/rschu1ze)).
* Tests with parallel replicas are no longer "always green" [#50896](https://github.com/ClickHouse/ClickHouse/pull/50896) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Slightly more information in error message about cached disk [#50897](https://github.com/ClickHouse/ClickHouse/pull/50897) ([Michael Kolupaev](https://github.com/al13n321)).
* do not call finalize after exception [#50907](https://github.com/ClickHouse/ClickHouse/pull/50907) ([Sema Checherinda](https://github.com/CheSema)).
* Update Annoy docs [#50912](https://github.com/ClickHouse/ClickHouse/pull/50912) ([Robert Schulze](https://github.com/rschu1ze)).
* A bit safer UserDefinedSQLFunctionVisitor [#50913](https://github.com/ClickHouse/ClickHouse/pull/50913) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Update contrib/orc in .gitmodules [#50920](https://github.com/ClickHouse/ClickHouse/pull/50920) ([San](https://github.com/santrancisco)).
* MaterializedMySQL: Add missing DROP DATABASE for tests [#50924](https://github.com/ClickHouse/ClickHouse/pull/50924) ([Val Doroshchuk](https://github.com/valbok)).
* Fix 'Illegal column timezone' in stress tests [#50929](https://github.com/ClickHouse/ClickHouse/pull/50929) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix tests sanity checks and avoid dropping system.query_log table [#50934](https://github.com/ClickHouse/ClickHouse/pull/50934) ([Azat Khuzhin](https://github.com/azat)).
* Fix tests for throttling by allowing more margin of error for trottling event [#50935](https://github.com/ClickHouse/ClickHouse/pull/50935) ([Azat Khuzhin](https://github.com/azat)).
* 01746_convert_type_with_default: Temporarily disable flaky test [#50937](https://github.com/ClickHouse/ClickHouse/pull/50937) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix the stateless tests image for old commits [#50947](https://github.com/ClickHouse/ClickHouse/pull/50947) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix logic in `AsynchronousBoundedReadBuffer::seek` [#50952](https://github.com/ClickHouse/ClickHouse/pull/50952) ([Nikita Taranov](https://github.com/nickitat)).
* Uncomment flaky test (01746_convert_type_with_default) [#50954](https://github.com/ClickHouse/ClickHouse/pull/50954) ([Dmitry Kardymon](https://github.com/kardymonds)).
* Fix keeper-client help message [#50965](https://github.com/ClickHouse/ClickHouse/pull/50965) ([pufit](https://github.com/pufit)).
* fix build issue on clang 15 [#50967](https://github.com/ClickHouse/ClickHouse/pull/50967) ([Chang chen](https://github.com/baibaichen)).
* Docs: Fix embedded video link [#50972](https://github.com/ClickHouse/ClickHouse/pull/50972) ([Robert Schulze](https://github.com/rschu1ze)).
* Change submodule capnproto to it's fork in ClickHouse [#50987](https://github.com/ClickHouse/ClickHouse/pull/50987) ([Kruglov Pavel](https://github.com/Avogar)).
* Attempt to make 01281_group_by_limit_memory_tracking not flaky [#50995](https://github.com/ClickHouse/ClickHouse/pull/50995) ([Dmitry Novik](https://github.com/novikd)).
* Fix flaky 02561_null_as_default_more_formats [#51001](https://github.com/ClickHouse/ClickHouse/pull/51001) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix flaky test_seekable_formats [#51002](https://github.com/ClickHouse/ClickHouse/pull/51002) ([Kruglov Pavel](https://github.com/Avogar)).
* Follow-up to [#50448](https://github.com/ClickHouse/ClickHouse/issues/50448) [#51006](https://github.com/ClickHouse/ClickHouse/pull/51006) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix a versions' tweak for tagged commits, improve version_helper [#51035](https://github.com/ClickHouse/ClickHouse/pull/51035) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Sqlancer has changed master to main [#51060](https://github.com/ClickHouse/ClickHouse/pull/51060) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Do not spam sqlancer build log [#51061](https://github.com/ClickHouse/ClickHouse/pull/51061) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Refactor IColumn::forEachSubcolumn to make it slightly harder to implement incorrectly [#51072](https://github.com/ClickHouse/ClickHouse/pull/51072) ([Michael Kolupaev](https://github.com/al13n321)).
* MaterializedMySQL: Rename materialize_with_ddl.py -> materialized_with_ddl [#51074](https://github.com/ClickHouse/ClickHouse/pull/51074) ([Val Doroshchuk](https://github.com/valbok)).
* Improve woboq browser report [#51077](https://github.com/ClickHouse/ClickHouse/pull/51077) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix for part_names_mutex used after destruction [#51099](https://github.com/ClickHouse/ClickHouse/pull/51099) ([Alexander Gololobov](https://github.com/davenger)).
* Fix ColumnConst::forEachSubcolumn missing from previous PR [#51102](https://github.com/ClickHouse/ClickHouse/pull/51102) ([Michael Kolupaev](https://github.com/al13n321)).
* Fix the test 02783_parsedatetimebesteffort_syslog flakiness [#51112](https://github.com/ClickHouse/ClickHouse/pull/51112) ([Victor Krasnov](https://github.com/sirvickr)).
* Compatibility with clang-17 [#51114](https://github.com/ClickHouse/ClickHouse/pull/51114) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Make more parallel get requests to ZooKeeper in system.zookeeper [#51118](https://github.com/ClickHouse/ClickHouse/pull/51118) ([Alexander Gololobov](https://github.com/davenger)).
* Fix 02703_max_local_write_bandwidth flakiness [#51120](https://github.com/ClickHouse/ClickHouse/pull/51120) ([Azat Khuzhin](https://github.com/azat)).
* Update version_date.tsv and changelogs after v23.5.3.24-stable [#51121](https://github.com/ClickHouse/ClickHouse/pull/51121) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v23.4.4.16-stable [#51122](https://github.com/ClickHouse/ClickHouse/pull/51122) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v23.3.4.17-lts [#51123](https://github.com/ClickHouse/ClickHouse/pull/51123) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v22.8.19.10-lts [#51124](https://github.com/ClickHouse/ClickHouse/pull/51124) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Fix typo [#51126](https://github.com/ClickHouse/ClickHouse/pull/51126) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Slightly better diagnostics [#51127](https://github.com/ClickHouse/ClickHouse/pull/51127) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Small fix in `MergeTreePrefetchedReadPool` [#51131](https://github.com/ClickHouse/ClickHouse/pull/51131) ([Nikita Taranov](https://github.com/nickitat)).
* Don't report table function accesses to system.errors [#51147](https://github.com/ClickHouse/ClickHouse/pull/51147) ([Raúl Marín](https://github.com/Algunenano)).
* Fix SQLancer branch name [#51148](https://github.com/ClickHouse/ClickHouse/pull/51148) ([Ilya Yatsishin](https://github.com/qoega)).
* Revert "Added ability to implicitly use file/hdfs/s3 table functions in clickhouse-local" [#51149](https://github.com/ClickHouse/ClickHouse/pull/51149) ([Alexander Tokmakov](https://github.com/tavplubix)).
* More profile events for fs cache [#51161](https://github.com/ClickHouse/ClickHouse/pull/51161) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Unforget to pass callback to readBigAt() in ParallelReadBuffer [#51165](https://github.com/ClickHouse/ClickHouse/pull/51165) ([Michael Kolupaev](https://github.com/al13n321)).
* Update README.md [#51179](https://github.com/ClickHouse/ClickHouse/pull/51179) ([Tyler Hannan](https://github.com/tylerhannan)).
* Update exception message [#51187](https://github.com/ClickHouse/ClickHouse/pull/51187) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Split long test 02149_schema_inference_formats_with_schema into several tests to avoid timeout in debug [#51197](https://github.com/ClickHouse/ClickHouse/pull/51197) ([Kruglov Pavel](https://github.com/Avogar)).
* Avoid initializing DateLUT from emptyArray function registration [#51199](https://github.com/ClickHouse/ClickHouse/pull/51199) ([Alexander Gololobov](https://github.com/davenger)).
* Suppress check for covered parts in ZooKeeper [#51207](https://github.com/ClickHouse/ClickHouse/pull/51207) ([Alexander Tokmakov](https://github.com/tavplubix)).
* One more profile event for fs cache [#51223](https://github.com/ClickHouse/ClickHouse/pull/51223) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Typo: passowrd_sha256_hex --> password_sha256_hex [#51233](https://github.com/ClickHouse/ClickHouse/pull/51233) ([Robert Schulze](https://github.com/rschu1ze)).
* Introduce settings enum field with auto-generated values list [#51237](https://github.com/ClickHouse/ClickHouse/pull/51237) ([Sergei Trifonov](https://github.com/serxa)).
* Drop session if we fail to get Keeper API version [#51238](https://github.com/ClickHouse/ClickHouse/pull/51238) ([Alexander Gololobov](https://github.com/davenger)).
* Revert "Fix a crash in s3 and s3Cluster functions" [#51239](https://github.com/ClickHouse/ClickHouse/pull/51239) ([Alexander Tokmakov](https://github.com/tavplubix)).
* fix flaky `AsyncLoader` destructor [#51245](https://github.com/ClickHouse/ClickHouse/pull/51245) ([Sergei Trifonov](https://github.com/serxa)).
* Docs: little cleanup of configuration-files.md [#51249](https://github.com/ClickHouse/ClickHouse/pull/51249) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix a stupid bug on Replicated database recovery [#51252](https://github.com/ClickHouse/ClickHouse/pull/51252) ([Alexander Tokmakov](https://github.com/tavplubix)).
* FileCache: tryReserve() slight improvement [#51259](https://github.com/ClickHouse/ClickHouse/pull/51259) ([Igor Nikonov](https://github.com/devcrafter)).
* Ugly hotfix for "terminate on uncaught exception" in WriteBufferFromOStream [#51265](https://github.com/ClickHouse/ClickHouse/pull/51265) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Avoid too many calls to Poco::Logger::get [#51266](https://github.com/ClickHouse/ClickHouse/pull/51266) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Update version_date.tsv and changelogs after v23.3.5.9-lts [#51269](https://github.com/ClickHouse/ClickHouse/pull/51269) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Better reporting of broken parts [#51270](https://github.com/ClickHouse/ClickHouse/pull/51270) ([Anton Popov](https://github.com/CurtizJ)).
* Update ext-dict-functions.md [#51283](https://github.com/ClickHouse/ClickHouse/pull/51283) ([Mike Kot](https://github.com/myrrc)).
* Disable table structure check for secondary queries from Replicated db [#51284](https://github.com/ClickHouse/ClickHouse/pull/51284) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Define Thrift version for parquet and use correct arrow version [#51285](https://github.com/ClickHouse/ClickHouse/pull/51285) ([Kruglov Pavel](https://github.com/Avogar)).
* Restore Azure build on ARM [#51288](https://github.com/ClickHouse/ClickHouse/pull/51288) ([Robert Schulze](https://github.com/rschu1ze)).
* Query Cache: Un-comment settings in server cfg [#51294](https://github.com/ClickHouse/ClickHouse/pull/51294) ([Robert Schulze](https://github.com/rschu1ze)).
* Require more checks [#51295](https://github.com/ClickHouse/ClickHouse/pull/51295) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix metadata loading test [#51297](https://github.com/ClickHouse/ClickHouse/pull/51297) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Scratch the strange Python code [#51302](https://github.com/ClickHouse/ClickHouse/pull/51302) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a test for [#47865](https://github.com/ClickHouse/ClickHouse/issues/47865) [#51306](https://github.com/ClickHouse/ClickHouse/pull/51306) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a test for [#48894](https://github.com/ClickHouse/ClickHouse/issues/48894) [#51307](https://github.com/ClickHouse/ClickHouse/pull/51307) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a test for [#48676](https://github.com/ClickHouse/ClickHouse/issues/48676) [#51308](https://github.com/ClickHouse/ClickHouse/pull/51308) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix long test `functions_bad_arguments` [#51310](https://github.com/ClickHouse/ClickHouse/pull/51310) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Unify merge predicate [#51344](https://github.com/ClickHouse/ClickHouse/pull/51344) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix using locks in ProcessList [#51348](https://github.com/ClickHouse/ClickHouse/pull/51348) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add a test for [#42631](https://github.com/ClickHouse/ClickHouse/issues/42631) [#51353](https://github.com/ClickHouse/ClickHouse/pull/51353) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix performance tests due to warnings from jemalloc about Per-CPU arena disabled [#51362](https://github.com/ClickHouse/ClickHouse/pull/51362) ([Azat Khuzhin](https://github.com/azat)).
* Fix "merge_truncate_long" test [#51369](https://github.com/ClickHouse/ClickHouse/pull/51369) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Increase timeout of Fast Test [#51372](https://github.com/ClickHouse/ClickHouse/pull/51372) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix bad tests for DNS [#51374](https://github.com/ClickHouse/ClickHouse/pull/51374) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Attempt to fix the `relax_too_many_parts` test [#51375](https://github.com/ClickHouse/ClickHouse/pull/51375) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix MySQL test in Debug mode [#51376](https://github.com/ClickHouse/ClickHouse/pull/51376) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix bad test `01018_Distributed__shard_num` [#51377](https://github.com/ClickHouse/ClickHouse/pull/51377) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix "logical error" in addressToLineWithInlines [#51379](https://github.com/ClickHouse/ClickHouse/pull/51379) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix test 01280_ttl_where_group_by [#51380](https://github.com/ClickHouse/ClickHouse/pull/51380) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Attempt to fix `test_ssl_cert_authentication` [#51384](https://github.com/ClickHouse/ClickHouse/pull/51384) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Revert "Merge pull request [#50951](https://github.com/ClickHouse/ClickHouse/issues/50951) from ZhiguoZh/20230607-toyear-fix" [#51390](https://github.com/ClickHouse/ClickHouse/pull/51390) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Two tests are twice as long on average with the Analyzer and sometimes fail [#51391](https://github.com/ClickHouse/ClickHouse/pull/51391) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix 00899_long_attach_memory_limit [#51395](https://github.com/ClickHouse/ClickHouse/pull/51395) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix test 01293_optimize_final_force [#51396](https://github.com/ClickHouse/ClickHouse/pull/51396) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix test 02481_parquet_list_monotonically_increasing_offsets [#51397](https://github.com/ClickHouse/ClickHouse/pull/51397) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix test 02497_trace_events_stress_long [#51398](https://github.com/ClickHouse/ClickHouse/pull/51398) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix broken labeling for `manual approve` [#51405](https://github.com/ClickHouse/ClickHouse/pull/51405) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix parts lifetime in `MergeTreeTransaction` [#51407](https://github.com/ClickHouse/ClickHouse/pull/51407) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix flaky test test_skip_empty_files [#51409](https://github.com/ClickHouse/ClickHouse/pull/51409) ([Kruglov Pavel](https://github.com/Avogar)).
* fix flaky test test_profile_events_s3 [#51412](https://github.com/ClickHouse/ClickHouse/pull/51412) ([Sema Checherinda](https://github.com/CheSema)).
* Update README.md [#51413](https://github.com/ClickHouse/ClickHouse/pull/51413) ([Tyler Hannan](https://github.com/tylerhannan)).
* Replace try/catch logic in hasTokenOrNull() by something more lightweight [#51425](https://github.com/ClickHouse/ClickHouse/pull/51425) ([Robert Schulze](https://github.com/rschu1ze)).
* Add retries to `tlsv1_3` tests [#51434](https://github.com/ClickHouse/ClickHouse/pull/51434) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Update exception message [#51440](https://github.com/ClickHouse/ClickHouse/pull/51440) ([Kseniia Sumarokova](https://github.com/kssenii)).
* fs cache: add check for intersecting ranges [#51444](https://github.com/ClickHouse/ClickHouse/pull/51444) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Slightly better code around packets for parallel replicas [#51451](https://github.com/ClickHouse/ClickHouse/pull/51451) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Update system_warnings test [#51453](https://github.com/ClickHouse/ClickHouse/pull/51453) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Many fixes [#51455](https://github.com/ClickHouse/ClickHouse/pull/51455) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix test 01605_adaptive_granularity_block_borders [#51457](https://github.com/ClickHouse/ClickHouse/pull/51457) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Try fix flaky 02497_storage_file_reader_selection [#51468](https://github.com/ClickHouse/ClickHouse/pull/51468) ([Kruglov Pavel](https://github.com/Avogar)).
* Try making Keeper in `DatabaseReplicated` tests more stable [#51473](https://github.com/ClickHouse/ClickHouse/pull/51473) ([Antonio Andelic](https://github.com/antonio2368)).
* Convert 02003_memory_limit_in_client from expect to sh test (to fix flakiness) [#51475](https://github.com/ClickHouse/ClickHouse/pull/51475) ([Azat Khuzhin](https://github.com/azat)).
* Fix test_disk_over_web_server [#51476](https://github.com/ClickHouse/ClickHouse/pull/51476) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Delay shutdown of system and temporary databases [#51479](https://github.com/ClickHouse/ClickHouse/pull/51479) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix memory leakage in CompressionCodecDeflateQpl [#51480](https://github.com/ClickHouse/ClickHouse/pull/51480) ([Vitaly Baranov](https://github.com/vitlibar)).
* Increase retries in test_multiple_disks/test.py::test_start_stop_moves [#51482](https://github.com/ClickHouse/ClickHouse/pull/51482) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix race in BoundedReadBuffer [#51484](https://github.com/ClickHouse/ClickHouse/pull/51484) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix flaky unit test [#51485](https://github.com/ClickHouse/ClickHouse/pull/51485) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix flaky test `test_host_regexp_multiple_ptr_records` [#51506](https://github.com/ClickHouse/ClickHouse/pull/51506) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add a comment [#51517](https://github.com/ClickHouse/ClickHouse/pull/51517) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Make `test_ssl_cert_authentication` similar to `test_tlsv1_3` [#51520](https://github.com/ClickHouse/ClickHouse/pull/51520) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fix duplicate storage set logical error. [#51521](https://github.com/ClickHouse/ClickHouse/pull/51521) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Update test_storage_postgresql/test.py::test_concurrent_queries [#51523](https://github.com/ClickHouse/ClickHouse/pull/51523) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix FATAL: query context is not detached from thread group [#51540](https://github.com/ClickHouse/ClickHouse/pull/51540) ([Igor Nikonov](https://github.com/devcrafter)).
* Update version_date.tsv and changelogs after v23.3.6.7-lts [#51548](https://github.com/ClickHouse/ClickHouse/pull/51548) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Decoupled commits from [#51180](https://github.com/ClickHouse/ClickHouse/issues/51180) for backports [#51561](https://github.com/ClickHouse/ClickHouse/pull/51561) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Try to fix deadlock in ZooKeeper client [#51563](https://github.com/ClickHouse/ClickHouse/pull/51563) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Retry chroot creation in ZK before stateless tests [#51585](https://github.com/ClickHouse/ClickHouse/pull/51585) ([Antonio Andelic](https://github.com/antonio2368)).
* use timeout instead of trap in 01443_merge_truncate_long.sh [#51593](https://github.com/ClickHouse/ClickHouse/pull/51593) ([Sema Checherinda](https://github.com/CheSema)).
* Update version_date.tsv and changelogs after v23.5.4.25-stable [#51604](https://github.com/ClickHouse/ClickHouse/pull/51604) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Fix MergeTreeMarksLoader segfaulting if marks file is longer than expected [#51636](https://github.com/ClickHouse/ClickHouse/pull/51636) ([Michael Kolupaev](https://github.com/al13n321)).
* Update version_date.tsv and changelogs after v23.4.5.22-stable [#51638](https://github.com/ClickHouse/ClickHouse/pull/51638) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update version_date.tsv and changelogs after v23.3.7.5-lts [#51639](https://github.com/ClickHouse/ClickHouse/pull/51639) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Update parts.md [#51643](https://github.com/ClickHouse/ClickHouse/pull/51643) ([Ramazan Polat](https://github.com/ramazanpolat)).

View File

@ -44,11 +44,12 @@ Create a table in ClickHouse which allows to read data from Redis:
``` sql
CREATE TABLE redis_table
(
    `key` String,
    `v1` UInt32,
    `v2` String,
    `v3` Float32
)
ENGINE = Redis('redis1:6379') PRIMARY KEY(key);
```
Insert:
@ -111,9 +112,16 @@ Flush Redis db asynchronously. Also `Truncate` support SYNC mode.
TRUNCATE TABLE redis_table SYNC;
```
Join:
Join with other tables.
```sql
SELECT * FROM redis_table JOIN merge_tree_table ON merge_tree_table.key=redis_table.key;
```
## Limitations {#limitations}
Redis engine also supports scanning queries, such as `where k > xx`, but it has some limitations:
1. A scanning query may produce some duplicated keys in a very rare case when Redis is rehashing. See details in [Redis Scan](https://github.com/redis/redis/blob/e4d183afd33e0b2e6e8d1c79a832f678a04a7886/src/dict.c#L1186-L1269).
2. During the scan, keys could be created and deleted, so the resulting dataset cannot represent a valid point in time.

View File

@ -756,6 +756,17 @@ If you perform the `SELECT` query between merges, you may get expired data. To a
- [ttl_only_drop_parts](/docs/en/operations/settings/settings.md/#ttl_only_drop_parts) setting
## Disk types
In addition to local block devices, ClickHouse supports these storage types:
- [`s3` for S3 and MinIO](#table_engine-mergetree-s3)
- [`gcs` for GCS](/docs/en/integrations/data-ingestion/gcs/index.md/#creating-a-disk)
- [`blob_storage_disk` for Azure Blob Storage](#table_engine-mergetree-azure-blob-storage)
- [`hdfs` for HDFS](#hdfs-storage)
- [`web` for read-only from web](#web-storage)
- [`cache` for local caching](/docs/en/operations/storing-data.md/#using-local-cache)
- [`s3_plain` for backups to S3](/docs/en/operations/backup#backuprestore-using-an-s3-disk)
## Using Multiple Block Devices for Data Storage {#table_engine-mergetree-multiple-volumes}
### Introduction {#introduction}
@ -936,7 +947,16 @@ configuration files; all the settings are in the CREATE/ATTACH query.
The example uses `type=web`, but any disk type can be configured as dynamic, even Local disk. Local disks require a path argument to be inside the server config parameter `custom_local_disks_base_directory`, which has no default, so set that also when using local disk.
:::
#### Example dynamic web storage
:::tip
A [demo dataset](https://github.com/ClickHouse/web-tables-demo) is hosted in GitHub. To prepare your own tables for web storage see the tool [clickhouse-static-files-uploader](/docs/en/operations/storing-data.md/#storing-data-on-webserver)
:::
In this `ATTACH TABLE` query the `UUID` provided matches the directory name of the data, and the endpoint is the URL for the raw GitHub content.
```sql ```sql
# highlight-next-line
ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
( (
price UInt32, price UInt32,
@@ -1238,6 +1258,93 @@ Examples of working configurations can be found in integration tests directory (

Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
:::
## HDFS storage {#hdfs-storage}

In this sample configuration:
- the disk is of type `hdfs`
- the data is hosted at `hdfs://hdfs1:9000/clickhouse/`

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <hdfs>
                <type>hdfs</type>
                <endpoint>hdfs://hdfs1:9000/clickhouse/</endpoint>
                <skip_access_check>true</skip_access_check>
            </hdfs>
            <hdd>
                <type>local</type>
                <path>/</path>
            </hdd>
        </disks>
        <policies>
            <hdfs>
                <volumes>
                    <main>
                        <disk>hdfs</disk>
                    </main>
                    <external>
                        <disk>hdd</disk>
                    </external>
                </volumes>
            </hdfs>
        </policies>
    </storage_configuration>
</clickhouse>
```
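A table can then be placed on this storage by referencing the policy name from the configuration above; a minimal sketch (the table name and columns are hypothetical):

```sql
CREATE TABLE hdfs_backed
(
    id UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 'hdfs';
```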
## Web storage (read-only) {#web-storage}

Web storage can be used for read-only purposes. An example use is for hosting sample
data, or for migrating data.

:::tip
Storage can also be configured temporarily within a query. If a web dataset is not expected
to be used routinely, see [dynamic storage](#dynamic-storage) and skip editing the
configuration file.
:::

In this sample configuration:
- the disk is of type `web`
- the data is hosted at `http://nginx:80/test1/`
- a cache on local storage is used
```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <web>
                <type>web</type>
                <endpoint>http://nginx:80/test1/</endpoint>
            </web>
            <cached_web>
                <type>cache</type>
                <disk>web</disk>
                <path>cached_web_cache/</path>
                <max_size>100000000</max_size>
            </cached_web>
        </disks>
        <policies>
            <web>
                <volumes>
                    <main>
                        <disk>web</disk>
                    </main>
                </volumes>
            </web>
            <cached_web>
                <volumes>
                    <main>
                        <disk>cached_web</disk>
                    </main>
                </volumes>
            </cached_web>
        </policies>
    </storage_configuration>
</clickhouse>
```
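With this configuration in place, prepared data can be attached through either policy; a sketch under the assumption that the data was produced by the uploader tool mentioned earlier (the table, columns, and UUID are placeholders):

```sql
-- the UUID must match the directory name of the prepared data
ATTACH TABLE web_hosted UUID '00000000-0000-0000-0000-000000000000'
(
    id UInt64,
    value String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 'cached_web';
```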
## Virtual Columns {#virtual-columns}

- `_part` — Name of a part.

View File

@@ -2120,6 +2120,12 @@ This section contains the following parameters:

- `operation_timeout_ms` — Maximum timeout for one operation in milliseconds.
- `root` — The [znode](http://zookeeper.apache.org/doc/r3.5.5/zookeeperOver.html#Nodes+and+ephemeral+nodes) that is used as the root for znodes used by the ClickHouse server. Optional.
- `identity` — User and password that can be required by ZooKeeper to give access to requested znodes. Optional.
- `zookeeper_load_balancing` — Specifies the algorithm of ZooKeeper node selection.
    * `random` — randomly selects one of the ZooKeeper nodes.
    * `in_order` — selects the first ZooKeeper node; if it's not available, the second, and so on.
    * `nearest_hostname` — selects the ZooKeeper node with a hostname that is most similar to the server's hostname.
    * `first_or_random` — selects the first ZooKeeper node; if it's not available, randomly selects one of the remaining ZooKeeper nodes.
    * `round_robin` — selects the first ZooKeeper node; if reconnection happens, selects the next one.
**Example configuration**

@@ -2139,6 +2145,8 @@ This section contains the following parameters:

    <root>/path/to/zookeeper/node</root>
    <!-- Optional. Zookeeper digest ACL string. -->
    <identity>user:password</identity>
    <!--<zookeeper_load_balancing>random / in_order / nearest_hostname / first_or_random / round_robin</zookeeper_load_balancing>-->
    <zookeeper_load_balancing>random</zookeeper_load_balancing>
</zookeeper>
```

View File

@@ -1322,7 +1322,7 @@ Connection pool size for PostgreSQL table engine and database engine.

Default value: 16

## postgresql_connection_pool_wait_timeout {#postgresql-connection-pool-wait-timeout}

Connection pool push/pop timeout on empty pool for PostgreSQL table engine and database engine. By default it blocks on an empty pool.

View File

@@ -184,13 +184,15 @@ These settings should be defined in the disk configuration section.

- `enable_filesystem_query_cache_limit` - allows limiting the size of the cache that is downloaded within each query (depends on user setting `max_query_cache_size`). Default: `false`.
- `enable_cache_hits_threshold` - when enabled, data is cached only after it has been read a certain number of times. Default: `false`. The threshold is defined by `cache_hits_threshold`. Default: `0`, i.e. the data is cached at the first attempt to read it.
- `enable_bypass_cache_with_threshold` - allows skipping the cache entirely in case the requested read range exceeds the threshold. Default: `false`. The threshold is defined by `bypass_cache_threashold`. Default: `268435456` (`256Mi`).
- `do_not_evict_index_and_mark_files` - do not evict small frequently used files according to cache policy. Default: `false`. This setting was added in version 22.8. If you used filesystem cache before this version, then it will not work on versions starting from 22.8 if this setting is set to `true`. If you want to use this setting, clear old cache created before version 22.8 before upgrading.
- `max_file_segment_size` - a maximum size of a single cache file in bytes or in readable format (`ki, Mi, Gi, etc`, example `10Gi`). Default: `8388608` (`8Mi`).
- `max_elements` - a limit for the number of cache files. Default: `10000000`.
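A minimal sketch of how these settings sit in a disk definition (the disk names and the underlying `s3` disk are assumptions for illustration):

```xml
<clickhouse>
    <storage_configuration>
        <disks>
            <s3_cache>
                <type>cache</type>
                <!-- assumes an `s3` disk defined elsewhere in this section -->
                <disk>s3</disk>
                <path>/var/lib/clickhouse/s3_cache/</path>
                <max_size>10000000000</max_size>
                <enable_cache_hits_threshold>true</enable_cache_hits_threshold>
                <cache_hits_threshold>2</cache_hits_threshold>
            </s3_cache>
        </disks>
    </storage_configuration>
</clickhouse>
```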
File Cache **query/profile settings**:

View File

@@ -27,7 +27,7 @@ Columns:

Data storing format is controlled by the `min_bytes_for_wide_part` and `min_rows_for_wide_part` settings of the [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table.

- `active` ([UInt8](../../sql-reference/data-types/int-uint.md)) – Flag that indicates whether the data part is active. If a data part is active, it's used in a table. Otherwise, it's deleted. Inactive data parts remain after merging.
- `marks` ([UInt64](../../sql-reference/data-types/int-uint.md)) – The number of marks. To get the approximate number of rows in a data part, multiply `marks` by the index granularity (usually 8192) (this hint does not work for adaptive granularity).

View File

@@ -97,6 +97,10 @@ Result:

If you apply this combinator, the aggregate function does not return the resulting value (such as the number of unique values for the [uniq](../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) function), but an intermediate state of the aggregation (for `uniq`, this is the hash table for calculating the number of unique values). This is an `AggregateFunction(...)` that can be used for further processing or stored in a table to finish aggregating later.

:::note
Note that `-MapState` is not an invariant for the same data, because the order of data in the intermediate state can change. This does not, however, affect the ingestion of this data.
:::
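A brief sketch of the round trip from state to final value (the table and column names are hypothetical):

``` sql
CREATE TABLE visits_agg
(
    day Date,
    users AggregateFunction(uniq, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY day;

-- store intermediate states ...
INSERT INTO visits_agg SELECT day, uniqState(user_id) FROM events GROUP BY day;

-- ... and finish the aggregation later with the -Merge combinator
SELECT day, uniqMerge(users) FROM visits_agg GROUP BY day;
```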
To work with these states, use:

- [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) table engine.

View File

@@ -230,13 +230,15 @@ hasAll(set, subset)

**Arguments**

- `set` – Array of any type with a set of elements.
- `subset` – Array of any type that shares a common supertype with `set`, containing elements that should be tested to be a subset of `set`.

**Return values**

- `1`, if `set` contains all of the elements from `subset`.
- `0`, otherwise.

Raises an exception `NO_COMMON_TYPE` if the `set` and `subset` elements do not share a common supertype.

**Peculiar properties**

- An empty array is a subset of any array.
@@ -253,7 +255,7 @@ hasAll(set, subset)

`SELECT hasAll(['a', 'b'], ['a'])` returns 1.

`SELECT hasAll([1], ['a'])` raises a `NO_COMMON_TYPE` exception.

`SELECT hasAll([[1, 2], [3, 4]], [[1, 2], [3, 5]])` returns 0.
@@ -268,13 +270,15 @@ hasAny(array1, array2)

**Arguments**

- `array1` – Array of any type with a set of elements.
- `array2` – Array of any type that shares a common supertype with `array1`.

**Return values**

- `1`, if `array1` and `array2` have at least one element in common.
- `0`, otherwise.

Raises an exception `NO_COMMON_TYPE` if the `array1` and `array2` elements do not share a common supertype.

**Peculiar properties**

- `Null` is processed as a value.
@@ -288,7 +292,7 @@ hasAny(array1, array2)

`SELECT hasAny([-128, 1., 512], [1])` returns `1`.

`SELECT hasAny([[1, 2], [3, 4]], ['a', 'c'])` raises a `NO_COMMON_TYPE` exception.

`SELECT hasAll([[1, 2], [3, 4]], [[1, 2], [1, 2]])` returns `1`.
@@ -318,6 +322,8 @@ For example:

- `1`, if `array1` contains `array2`.
- `0`, otherwise.

Raises an exception `NO_COMMON_TYPE` if the `array1` and `array2` elements do not share a common supertype.

**Peculiar properties**

- The function will return `1` if `array2` is empty.
@@ -339,6 +345,9 @@ For example:

`SELECT hasSubstr(['a', 'b' , 'c'], ['a', 'c'])` returns 0.

`SELECT hasSubstr([[1, 2], [3, 4], [5, 6]], [[1, 2], [3, 4]])` returns 1.

`SELECT hasSubstr([1, 2, NULL, 3, 4], ['a'])` raises a `NO_COMMON_TYPE` exception.

## indexOf(arr, x)

View File

@@ -8,7 +8,7 @@ sidebar_label: Nullable

## isNull

Returns whether the argument is [NULL](../../sql-reference/syntax.md#null).

``` sql
isNull(x)

View File

@@ -22,14 +22,15 @@ tuple(x, y, …)

A function that allows getting a column from a tuple.

If the second argument is a number `index`, it is the column index, starting from 1. If the second argument is a string `name`, it represents the name of the element. An optional third argument can be provided, so that when the index is out of bounds or no element exists for the name, the default value is returned instead of throwing an exception. The second and third arguments, if provided, must be constants. There is no cost to execute the function.

The function implements the operators `x.index` and `x.name`.

**Syntax**

``` sql
tupleElement(tuple, index [, default_value])
tupleElement(tuple, name [, default_value])
```
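A short sketch of the three call forms (the named-tuple cast and the default value are illustrative, not from this commit):

``` sql
SELECT
    tupleElement((1, 'a'), 2),                                          -- 'a', access by index
    tupleElement(CAST((1, 'a'), 'Tuple(i UInt8, s String)'), 'i'),      -- 1, access by name
    tupleElement((1, 'a'), 'z', 'default');                             -- 'default', missing name falls back
```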
## untuple

View File

@@ -66,6 +66,10 @@ WITH anySimpleState(number) AS c SELECT toTypeName(c), c FROM numbers(1);

When this combinator is applied, the aggregate function returns not the finished value (for example, for the [uniq](reference/uniq.md#agg_function-uniq) function, the number of unique values), but an intermediate aggregation state (for `uniq`, the hash table used to compute the number of unique values), which has the type `AggregateFunction(...)` and can be used for further processing or stored in a table to finish the aggregation later.

:::note
The intermediate state for -MapState is not an invariant for the same source data, since the order of the data can change. This does not, however, affect the loading of such data.
:::

To work with intermediate states, use:

- The [AggregatingMergeTree](../../engines/table-engines/mergetree-family/aggregatingmergetree.md) table engine.

View File

@@ -43,6 +43,8 @@ if (BUILD_STANDALONE_KEEPER)

    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperDispatcher.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperLogStore.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperServer.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperContext.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperFeatureFlags.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperSnapshotManager.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperSnapshotManagerS3.cpp
    ${CMAKE_CURRENT_SOURCE_DIR}/../../src/Coordination/KeeperStateMachine.cpp

View File

@@ -25,6 +25,7 @@ IAggregateFunction * createWithNumericOrTimeType(const IDataType & argument_type

    WhichDataType which(argument_type);
    if (which.idx == TypeIndex::Date) return new AggregateFunctionTemplate<UInt16, Data>(std::forward<TArgs>(args)...);
    if (which.idx == TypeIndex::DateTime) return new AggregateFunctionTemplate<UInt32, Data>(std::forward<TArgs>(args)...);
    if (which.idx == TypeIndex::IPv4) return new AggregateFunctionTemplate<IPv4, Data>(std::forward<TArgs>(args)...);
    return createWithNumericType<AggregateFunctionTemplate, Data, TArgs...>(argument_type, std::forward<TArgs>(args)...);
}

View File

@@ -4,6 +4,7 @@

#include <AggregateFunctions/FactoryHelpers.h>
#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeIPv4andIPv6.h>

namespace DB
@@ -39,12 +40,22 @@ public:

    static DataTypePtr createResultType() { return std::make_shared<DataTypeArray>(std::make_shared<DataTypeDateTime>()); }
};

template <typename HasLimit>
class AggregateFunctionGroupUniqArrayIPv4 : public AggregateFunctionGroupUniqArray<DataTypeIPv4::FieldType, HasLimit>
{
public:
    explicit AggregateFunctionGroupUniqArrayIPv4(const DataTypePtr & argument_type, const Array & parameters_, UInt64 max_elems_ = std::numeric_limits<UInt64>::max())
        : AggregateFunctionGroupUniqArray<DataTypeIPv4::FieldType, HasLimit>(argument_type, parameters_, createResultType(), max_elems_) {}

    static DataTypePtr createResultType() { return std::make_shared<DataTypeArray>(std::make_shared<DataTypeIPv4>()); }
};

template <typename HasLimit, typename ... TArgs>
IAggregateFunction * createWithExtraTypes(const DataTypePtr & argument_type, TArgs && ... args)
{
    WhichDataType which(argument_type);
    if (which.idx == TypeIndex::Date) return new AggregateFunctionGroupUniqArrayDate<HasLimit>(argument_type, std::forward<TArgs>(args)...);
    else if (which.idx == TypeIndex::DateTime) return new AggregateFunctionGroupUniqArrayDateTime<HasLimit>(argument_type, std::forward<TArgs>(args)...);
    else if (which.idx == TypeIndex::IPv4) return new AggregateFunctionGroupUniqArrayIPv4<HasLimit>(argument_type, std::forward<TArgs>(args)...);
    else
    {
        /// Check that we can use plain version of AggregateFunctionGroupUniqArrayGeneric

View File

@@ -100,6 +100,10 @@ public:

                return std::make_shared<AggregateFunctionMap<UInt256>>(nested_function, arguments);
            case TypeIndex::UUID:
                return std::make_shared<AggregateFunctionMap<UUID>>(nested_function, arguments);
            case TypeIndex::IPv4:
                return std::make_shared<AggregateFunctionMap<IPv4>>(nested_function, arguments);
            case TypeIndex::IPv6:
                return std::make_shared<AggregateFunctionMap<IPv6>>(nested_function, arguments);
            case TypeIndex::FixedString:
            case TypeIndex::String:
                return std::make_shared<AggregateFunctionMap<String>>(nested_function, arguments);

View File

@@ -19,7 +19,9 @@

#include <IO/ReadHelpers.h>
#include <IO/WriteHelpers.h>
#include "DataTypes/Serializations/ISerialization.h"
#include <base/IPv4andIPv6.h>
#include "base/types.h"
#include <Common/formatIPv6.h>
#include <Common/Arena.h>
#include "AggregateFunctions/AggregateFunctionFactory.h"
@@ -69,6 +71,31 @@ struct AggregateFunctionMapCombinatorData<String>

    }
};

/// Specialization for IPv6 - for historical reasons it should be stored as FixedString(16)
template <>
struct AggregateFunctionMapCombinatorData<IPv6>
{
    struct IPv6Hash
    {
        using hash_type = std::hash<IPv6>;
        using is_transparent = void;

        size_t operator()(const IPv6 & ip) const { return hash_type{}(ip); }
    };

    using SearchType = IPv6;
    std::unordered_map<IPv6, AggregateDataPtr, IPv6Hash, std::equal_to<>> merged_maps;

    static void writeKey(const IPv6 & key, WriteBuffer & buf)
    {
        writeIPv6Binary(key, buf);
    }
    static void readKey(IPv6 & key, ReadBuffer & buf)
    {
        readIPv6Binary(key, buf);
    }
};

template <typename KeyType>
class AggregateFunctionMap final
    : public IAggregateFunctionDataHelper<AggregateFunctionMapCombinatorData<KeyType>, AggregateFunctionMap<KeyType>>
@@ -147,6 +174,8 @@ public:

            StringRef key_ref;
            if (key_type->getTypeId() == TypeIndex::FixedString)
                key_ref = assert_cast<const ColumnFixedString &>(key_column).getDataAt(offset + i);
            else if (key_type->getTypeId() == TypeIndex::IPv6)
                key_ref = assert_cast<const ColumnIPv6 &>(key_column).getDataAt(offset + i);
            else
                key_ref = assert_cast<const ColumnString &>(key_column).getDataAt(offset + i);

View File

@@ -5,6 +5,7 @@

#include <Common/FieldVisitorConvertToNumber.h>
#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeIPv4andIPv6.h>

static inline constexpr UInt64 TOP_K_MAX_SIZE = 0xFFFFFF;
@@ -60,6 +61,22 @@ public:

    {}
};

template <bool is_weighted>
class AggregateFunctionTopKIPv4 : public AggregateFunctionTopK<DataTypeIPv4::FieldType, is_weighted>
{
public:
    using AggregateFunctionTopK<DataTypeIPv4::FieldType, is_weighted>::AggregateFunctionTopK;

    AggregateFunctionTopKIPv4(UInt64 threshold_, UInt64 load_factor, const DataTypes & argument_types_, const Array & params)
        : AggregateFunctionTopK<DataTypeIPv4::FieldType, is_weighted>(
            threshold_,
            load_factor,
            argument_types_,
            params,
            std::make_shared<DataTypeArray>(std::make_shared<DataTypeIPv4>()))
    {}
};

template <bool is_weighted>
IAggregateFunction * createWithExtraTypes(const DataTypes & argument_types, UInt64 threshold, UInt64 load_factor, const Array & params)
@@ -72,6 +89,8 @@ IAggregateFunction * createWithExtraTypes(const DataTypes & argument_types, UInt

        return new AggregateFunctionTopKDate<is_weighted>(threshold, load_factor, argument_types, params);
    if (which.idx == TypeIndex::DateTime)
        return new AggregateFunctionTopKDateTime<is_weighted>(threshold, load_factor, argument_types, params);
    if (which.idx == TypeIndex::IPv4)
        return new AggregateFunctionTopKIPv4<is_weighted>(threshold, load_factor, argument_types, params);

    /// Check that we can use plain version of AggregateFunctionTopKGeneric
    if (argument_types[0]->isValueUnambiguouslyRepresentedInContiguousMemoryRegion())

View File

@@ -8,6 +8,7 @@

#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeTuple.h>
#include <DataTypes/DataTypeUUID.h>
#include <DataTypes/DataTypeIPv4andIPv6.h>
#include <Core/Settings.h>
@@ -60,6 +61,10 @@ createAggregateFunctionUniq(const std::string & name, const DataTypes & argument

        return std::make_shared<AggregateFunctionUniq<String, Data>>(argument_types);
    else if (which.isUUID())
        return std::make_shared<AggregateFunctionUniq<DataTypeUUID::FieldType, Data>>(argument_types);
    else if (which.isIPv4())
        return std::make_shared<AggregateFunctionUniq<DataTypeIPv4::FieldType, Data>>(argument_types);
    else if (which.isIPv6())
        return std::make_shared<AggregateFunctionUniq<DataTypeIPv6::FieldType, Data>>(argument_types);
    else if (which.isTuple())
    {
        if (use_exact_hash_function)
@@ -109,6 +114,10 @@ createAggregateFunctionUniq(const std::string & name, const DataTypes & argument

        return std::make_shared<AggregateFunctionUniq<String, Data<String, is_able_to_parallelize_merge>>>(argument_types);
    else if (which.isUUID())
        return std::make_shared<AggregateFunctionUniq<DataTypeUUID::FieldType, Data<DataTypeUUID::FieldType, is_able_to_parallelize_merge>>>(argument_types);
    else if (which.isIPv4())
        return std::make_shared<AggregateFunctionUniq<DataTypeIPv4::FieldType, Data<DataTypeIPv4::FieldType, is_able_to_parallelize_merge>>>(argument_types);
    else if (which.isIPv6())
        return std::make_shared<AggregateFunctionUniq<DataTypeIPv6::FieldType, Data<DataTypeIPv6::FieldType, is_able_to_parallelize_merge>>>(argument_types);
    else if (which.isTuple())
    {
        if (use_exact_hash_function)

View File

@@ -101,6 +101,18 @@ struct AggregateFunctionUniqHLL12Data<UUID, false>

    static String getName() { return "uniqHLL12"; }
};

template <>
struct AggregateFunctionUniqHLL12Data<IPv6, false>
{
    using Set = HyperLogLogWithSmallSetOptimization<UInt64, 16, 12>;
    Set set;

    constexpr static bool is_able_to_parallelize_merge = false;
    constexpr static bool is_variadic = false;

    static String getName() { return "uniqHLL12"; }
};

template <bool is_exact_, bool argument_is_tuple_, bool is_able_to_parallelize_merge_>
struct AggregateFunctionUniqHLL12DataForVariadic
{
@@ -155,6 +167,25 @@ struct AggregateFunctionUniqExactData<String, is_able_to_parallelize_merge_>

    static String getName() { return "uniqExact"; }
};

/// For historical reasons IPv6 is treated as FixedString(16)
template <bool is_able_to_parallelize_merge_>
struct AggregateFunctionUniqExactData<IPv6, is_able_to_parallelize_merge_>
{
    using Key = UInt128;

    /// When creating, the hash table must be small.
    using SingleLevelSet = HashSet<Key, UInt128TrivialHash, HashTableGrower<3>, HashTableAllocatorWithStackMemory<sizeof(Key) * (1 << 3)>>;
    using TwoLevelSet = TwoLevelHashSet<Key, UInt128TrivialHash>;
    using Set = UniqExactSet<SingleLevelSet, TwoLevelSet>;

    Set set;

    constexpr static bool is_able_to_parallelize_merge = is_able_to_parallelize_merge_;
    constexpr static bool is_variadic = false;

    static String getName() { return "uniqExact"; }
};

template <bool is_exact_, bool argument_is_tuple_, bool is_able_to_parallelize_merge_>
struct AggregateFunctionUniqExactDataForVariadic : AggregateFunctionUniqExactData<String, is_able_to_parallelize_merge_>
{
@@ -248,27 +279,22 @@ struct Adder

            AggregateFunctionUniqUniquesHashSetData> || std::is_same_v<Data, AggregateFunctionUniqHLL12Data<T, Data::is_able_to_parallelize_merge>>)
        {
            const auto & column = *columns[0];
            if constexpr (std::is_same_v<T, String> || std::is_same_v<T, IPv6>)
            {
                StringRef value = column.getDataAt(row_num);
                data.set.insert(CityHash_v1_0_2::CityHash64(value.data, value.size));
            }
            else
            {
                using ValueType = typename decltype(data.set)::value_type;
                const auto & value = assert_cast<const ColumnVector<T> &>(column).getElement(row_num);
                data.set.insert(static_cast<ValueType>(AggregateFunctionUniqTraits<T>::hash(value)));
            }
        }
        else if constexpr (std::is_same_v<Data, AggregateFunctionUniqExactData<T, Data::is_able_to_parallelize_merge>>)
        {
            const auto & column = *columns[0];
            if constexpr (std::is_same_v<T, String> || std::is_same_v<T, IPv6>)
            {
                StringRef value = column.getDataAt(row_num);
@@ -279,6 +305,11 @@ struct Adder

                data.set.template insert<const UInt128 &, use_single_level_hash_table>(key);
            }
            else
            {
                data.set.template insert<const T &, use_single_level_hash_table>(
                    assert_cast<const ColumnVector<T> &>(column).getData()[row_num]);
            }
        }
#if USE_DATASKETCHES
        else if constexpr (std::is_same_v<Data, AggregateFunctionUniqThetaData>)

View File

@@ -8,6 +8,7 @@

#include <DataTypes/DataTypeDate.h>
#include <DataTypes/DataTypeDate32.h>
#include <DataTypes/DataTypeDateTime.h>
#include <DataTypes/DataTypeIPv4andIPv6.h>

#include <functional>
@@ -60,6 +61,10 @@ namespace

            return std::make_shared<typename WithK<K, HashValueType>::template AggregateFunction<String>>(argument_types, params);
        else if (which.isUUID())
            return std::make_shared<typename WithK<K, HashValueType>::template AggregateFunction<DataTypeUUID::FieldType>>(argument_types, params);
        else if (which.isIPv4())
            return std::make_shared<typename WithK<K, HashValueType>::template AggregateFunction<DataTypeIPv4::FieldType>>(argument_types, params);
        else if (which.isIPv6())
            return std::make_shared<typename WithK<K, HashValueType>::template AggregateFunction<DataTypeIPv6::FieldType>>(argument_types, params);
        else if (which.isTuple())
        {
            if (use_exact_hash_function)

View File

@@ -119,6 +119,10 @@ struct AggregateFunctionUniqCombinedData<String, K, HashValueType> : public Aggr

{
};

template <UInt8 K, typename HashValueType>
struct AggregateFunctionUniqCombinedData<IPv6, K, HashValueType> : public AggregateFunctionUniqCombinedDataWithKey<UInt64 /*always*/, K>
{
};

template <typename T, UInt8 K, typename HashValueType>
class AggregateFunctionUniqCombined final
@@ -141,16 +145,16 @@ public:

    void add(AggregateDataPtr __restrict place, const IColumn ** columns, size_t row_num, Arena *) const override
    {
        if constexpr (std::is_same_v<T, String> || std::is_same_v<T, IPv6>)
        {
            StringRef value = columns[0]->getDataAt(row_num);
            this->data(place).set.insert(CityHash_v1_0_2::CityHash64(value.data, value.size));
        }
        else
        {
            const auto & value = assert_cast<const ColumnVector<T> &>(*columns[0]).getElement(row_num);
            this->data(place).set.insert(detail::AggregateFunctionUniqCombinedTraits<T, HashValueType>::hash(value));
        }
    }

    void merge(AggregateDataPtr __restrict place, ConstAggregateDataPtr rhs, Arena *) const override

View File

@@ -1,5 +1,6 @@

#include "Exception.h"

#include <algorithm>
#include <cstring>
#include <cxxabi.h>
#include <cstdlib>
@@ -83,6 +84,7 @@ Exception::Exception(const MessageMasked & msg_masked, int code, bool remote_)

    : Poco::Exception(msg_masked.msg, code)
    , remote(remote_)
{
    capture_thread_frame_pointers = thread_frame_pointers;
    handle_error_code(msg_masked.msg, code, remote, getStackFramePointers());
}
@@ -90,12 +92,14 @@ Exception::Exception(MessageMasked && msg_masked, int code, bool remote_)

    : Poco::Exception(msg_masked.msg, code)
    , remote(remote_)
{
    capture_thread_frame_pointers = thread_frame_pointers;
    handle_error_code(message(), code, remote, getStackFramePointers());
}
Exception::Exception(CreateFromPocoTag, const Poco::Exception & exc)
    : Poco::Exception(exc.displayText(), ErrorCodes::POCO_EXCEPTION)
{
    capture_thread_frame_pointers = thread_frame_pointers;
#ifdef STD_EXCEPTION_HAS_STACK_TRACE
    auto * stack_trace_frames = exc.get_stack_trace_frames();
    auto stack_trace_size = exc.get_stack_trace_size();
@@ -107,6 +111,7 @@ Exception::Exception(CreateFromPocoTag, const Poco::Exception & exc)

Exception::Exception(CreateFromSTDTag, const std::exception & exc)
    : Poco::Exception(demangle(typeid(exc).name()) + ": " + String(exc.what()), ErrorCodes::STD_EXCEPTION)
{
    capture_thread_frame_pointers = thread_frame_pointers;
#ifdef STD_EXCEPTION_HAS_STACK_TRACE
    auto * stack_trace_frames = exc.get_stack_trace_frames();
    auto stack_trace_size = exc.get_stack_trace_size();
@@ -153,7 +158,17 @@ std::string Exception::getStackTraceString() const

    auto * stack_trace_frames = get_stack_trace_frames();
    auto stack_trace_size = get_stack_trace_size();
    __msan_unpoison(stack_trace_frames, stack_trace_size * sizeof(stack_trace_frames[0]));
    String thread_stack_trace;
    std::for_each(capture_thread_frame_pointers.rbegin(), capture_thread_frame_pointers.rend(),
        [&thread_stack_trace](StackTrace::FramePointers & frame_pointers)
        {
            thread_stack_trace +=
                "\nJob's origin stack trace:\n" +
                StackTrace::toString(frame_pointers.data(), 0, std::ranges::find(frame_pointers, nullptr) - frame_pointers.begin());
        }
    );

    return StackTrace::toString(stack_trace_frames, 0, stack_trace_size) + thread_stack_trace;
#else
    return trace.toString();
#endif
@@ -185,6 +200,9 @@ Exception::FramePointers Exception::getStackFramePointers() const

    return frame_pointers;
}

thread_local bool Exception::enable_job_stack_trace = false;
thread_local std::vector<StackTrace::FramePointers> Exception::thread_frame_pointers = {};

void throwFromErrno(const std::string & s, int code, int the_errno)
{

View File

@@ -25,18 +25,27 @@ class Exception : public Poco::Exception

public:
    using FramePointers = std::vector<void *>;

    Exception()
    {
        capture_thread_frame_pointers = thread_frame_pointers;
    }

    Exception(const PreformattedMessage & msg, int code): Exception(msg.text, code)
    {
        capture_thread_frame_pointers = thread_frame_pointers;
        message_format_string = msg.format_string;
    }

    Exception(PreformattedMessage && msg, int code): Exception(std::move(msg.text), code)
    {
        capture_thread_frame_pointers = thread_frame_pointers;
        message_format_string = msg.format_string;
    }

    /// Collect call stacks of all previous jobs' schedulings leading to this thread job's execution
    static thread_local bool enable_job_stack_trace;
    static thread_local std::vector<StackTrace::FramePointers> thread_frame_pointers;

protected:
    // used to remove the sensitive information from exceptions if query_masking_rules is configured
    struct MessageMasked
@@ -66,6 +75,7 @@ public:

    Exception(int code, T && message)
        : Exception(message, code)
    {
        capture_thread_frame_pointers = thread_frame_pointers;
        message_format_string = tryGetStaticFormatString(message);
    }
@@ -80,6 +90,7 @@ public:

    Exception(int code, FormatStringHelper<Args...> fmt, Args &&... args)
        : Exception(fmt::format(fmt.fmt_str, std::forward<Args>(args)...), code)
    {
        capture_thread_frame_pointers = thread_frame_pointers;
        message_format_string = fmt.message_format_string;
    }
@@ -131,6 +142,8 @@ private:

protected:
    std::string_view message_format_string;
    /// Local copy of static per-thread thread_frame_pointers, should be mutable to be unpoisoned on printout
    mutable std::vector<StackTrace::FramePointers> capture_thread_frame_pointers;
};

View File

@@ -669,16 +669,16 @@ unsigned OptimizedRegularExpressionImpl<thread_safe>::match(const char * subject

        matches.resize(limit);
        for (size_t i = 0; i < limit; ++i)
        {
            if (pieces[i].empty())
            {
                matches[i].offset = std::string::npos;
                matches[i].length = 0;
            }
            else
            {
                matches[i].offset = pieces[i].data() - subject;
                matches[i].length = pieces[i].length();
            }
        }

        return limit;
    }

View File

@@ -412,6 +412,21 @@ void StackTrace::toStringEveryLine(std::function<void(std::string_view)> callbac

    toStringEveryLineImpl(true, {frame_pointers, offset, size}, std::move(callback));
}

void StackTrace::toStringEveryLine(const FramePointers & frame_pointers, std::function<void(std::string_view)> callback)
{
    toStringEveryLineImpl(true, {frame_pointers, 0, static_cast<size_t>(std::ranges::find(frame_pointers, nullptr) - frame_pointers.begin())}, std::move(callback));
}

void StackTrace::toStringEveryLine(void ** frame_pointers_raw, size_t offset, size_t size, std::function<void(std::string_view)> callback)
{
    __msan_unpoison(frame_pointers_raw, size * sizeof(*frame_pointers_raw));
    StackTrace::FramePointers frame_pointers{};
    std::copy_n(frame_pointers_raw, size, frame_pointers.begin());
    toStringEveryLineImpl(true, {frame_pointers, offset, size}, std::move(callback));
}

using StackTraceCache = std::map<StackTraceTriple, String, std::less<>>;

static StackTraceCache & cacheInstance()

View File

@@ -65,6 +65,8 @@ public:

    static void symbolize(const FramePointers & frame_pointers, size_t offset, size_t size, StackTrace::Frames & frames);

    void toStringEveryLine(std::function<void(std::string_view)> callback) const;
    static void toStringEveryLine(const FramePointers & frame_pointers, std::function<void(std::string_view)> callback);
    static void toStringEveryLine(void ** frame_pointers_raw, size_t offset, size_t size, std::function<void(std::string_view)> callback);

    /// Displaying the addresses can be disabled for security reasons.
    /// If you turn off addresses, it will be more secure, but we will be unable to help you with debugging.

View File

@@ -189,7 +189,9 @@ ReturnType ThreadPoolImpl<Thread>::scheduleImpl(Job job, Priority priority, std:

        jobs.emplace(std::move(job),
            priority,
            /// Tracing context on this thread is used as parent context for the sub-thread that runs the job
            propagate_opentelemetry_tracing_context ? DB::OpenTelemetry::CurrentContext() : DB::OpenTelemetry::TracingContextOnThread(),
            /// capture_frame_pointers
            DB::Exception::enable_job_stack_trace);

        ++scheduled_jobs;
    }
@@ -348,6 +350,8 @@ void ThreadPoolImpl<Thread>::worker(typename std::list<Thread>::iterator thread_

        /// A copy of parent trace context
        DB::OpenTelemetry::TracingContextOnThread parent_thread_trace_context;

        std::vector<StackTrace::FramePointers> thread_frame_pointers;

        /// Get a job from the queue.
        Job job;
@@ -393,6 +397,9 @@ void ThreadPoolImpl<Thread>::worker(typename std::list<Thread>::iterator thread_

            /// to prevent us from modifying its priority. We have to use const_cast to force move semantics on JobWithPriority::job.
            job = std::move(const_cast<Job &>(jobs.top().job));
            parent_thread_trace_context = std::move(const_cast<DB::OpenTelemetry::TracingContextOnThread &>(jobs.top().thread_trace_context));
            DB::Exception::enable_job_stack_trace = jobs.top().enable_job_stack_trace;
            if (DB::Exception::enable_job_stack_trace)
                thread_frame_pointers = std::move(const_cast<std::vector<StackTrace::FramePointers> &>(jobs.top().frame_pointers));
            jobs.pop();

            /// We don't run jobs after `shutdown` is set, but we have to properly dequeue all jobs and finish them.
@@ -411,6 +418,10 @@ void ThreadPoolImpl<Thread>::worker(typename std::list<Thread>::iterator thread_

        /// Run the job.
        try
        {
            if (DB::Exception::enable_job_stack_trace)
                DB::Exception::thread_frame_pointers = std::move(thread_frame_pointers);

            CurrentMetrics::Increment metric_active_pool_threads(metric_active_threads);

            job();

View File

@@ -19,6 +19,8 @@

#include <Common/CurrentMetrics.h>
#include <Common/ThreadPool_fwd.h>
#include <Common/Priority.h>
#include <Common/StackTrace.h>
#include <Common/Exception.h>
#include <base/scope_guard.h>

/** Very simple thread pool similar to boost::threadpool.
@@ -127,8 +129,19 @@ private:

        Priority priority;
        DB::OpenTelemetry::TracingContextOnThread thread_trace_context;

        /// Call stacks of all jobs' schedulings leading to this one
        std::vector<StackTrace::FramePointers> frame_pointers;
        bool enable_job_stack_trace = false;

        JobWithPriority(Job job_, Priority priority_, const DB::OpenTelemetry::TracingContextOnThread & thread_trace_context_, bool capture_frame_pointers = false)
            : job(job_), priority(priority_), thread_trace_context(thread_trace_context_), enable_job_stack_trace(capture_frame_pointers)
        {
            if (!capture_frame_pointers)
                return;

            /// Save all previous jobs call stacks and append with current
            frame_pointers = DB::Exception::thread_frame_pointers;
            frame_pointers.push_back(StackTrace().getFramePointers());
        }

        bool operator<(const JobWithPriority & rhs) const
        {

View File

@@ -290,6 +290,7 @@ public:

    void flushUntrackedMemory();

private:
    void applyGlobalSettings();
    void applyQuerySettings();
    void initPerformanceCounters();

View File

@@ -2,6 +2,8 @@ include("${ClickHouse_SOURCE_DIR}/cmake/dbms_glob_sources.cmake")

add_headers_and_sources(clickhouse_common_zookeeper .)

list(APPEND clickhouse_common_zookeeper_sources ${CMAKE_CURRENT_SOURCE_DIR}/../../../src/Coordination/KeeperFeatureFlags.cpp)

# for clickhouse server
add_library(clickhouse_common_zookeeper ${clickhouse_common_zookeeper_headers} ${clickhouse_common_zookeeper_sources})
target_compile_definitions (clickhouse_common_zookeeper PRIVATE -DZOOKEEPER_LOG)

View File

@@ -2,7 +2,7 @@

#include <base/types.h>
#include <Common/Exception.h>
#include <Coordination/KeeperFeatureFlags.h>
#include <Poco/Net/SocketAddress.h>

#include <vector>
@@ -530,7 +530,9 @@ public:

        const Requests & requests,
        MultiCallback callback) = 0;

    virtual bool isFeatureEnabled(DB::KeeperFeatureFlag feature_flag) const = 0;

    virtual const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return nullptr; }

    /// Expire session and finish all pending requests
    virtual void finalize(const String & reason) = 0;

View File

@@ -11,6 +11,7 @@

#include <Common/ZooKeeper/ZooKeeperArgs.h>
#include <Common/ThreadPool.h>
#include <Common/ConcurrentBoundedQueue.h>
#include <Coordination/KeeperFeatureFlags.h>

namespace Coordination
@@ -92,9 +93,9 @@ public:

    void finalize(const String & reason) override;

    bool isFeatureEnabled(DB::KeeperFeatureFlag) const override
    {
        return false;
    }

    struct Node

View File

@@ -865,9 +865,9 @@ bool ZooKeeper::expired()

    return impl->isExpired();
}

bool ZooKeeper::isFeatureEnabled(DB::KeeperFeatureFlag feature_flag) const
{
    return impl->isFeatureEnabled(feature_flag);
}

Int64 ZooKeeper::getClientID()

View File

@@ -15,6 +15,7 @@

#include <Common/ZooKeeper/ZooKeeperConstants.h>
#include <Common/ZooKeeper/ZooKeeperArgs.h>
#include <Common/thread_local_rng.h>
#include <Coordination/KeeperFeatureFlags.h>
#include <unistd.h>

#include <random>
@@ -215,7 +216,7 @@ public:

    /// Returns true, if the session has expired.
    bool expired();

    bool isFeatureEnabled(DB::KeeperFeatureFlag feature_flag) const;

    /// Create a znode.
    /// Throw an exception if something went wrong.
@@ -528,6 +529,8 @@ public:

    size_t getConnectedZooKeeperIndex() const { return connected_zk_index; }
    UInt64 getConnectedTime() const { return connected_time; }

    const DB::KeeperFeatureFlags * getKeeperFeatureFlags() const { return impl->getKeeperFeatureFlags(); }

private:
    void init(ZooKeeperArgs args_);
@@ -554,7 +557,7 @@ private:

    template <typename TResponse, bool try_multi, typename TIter>
    MultiReadResponses<TResponse, try_multi> multiRead(TIter start, TIter end, RequestFactory request_factory, AsyncFunction<TResponse> async_fun)
    {
        if (isFeatureEnabled(DB::KeeperFeatureFlag::MULTI_READ))
        {
            Coordination::Requests requests;
            for (auto it = start; it != end; ++it)
@@ -687,7 +690,7 @@ String getZooKeeperConfigName(const Poco::Util::AbstractConfiguration & config);

template <typename Client>
void addCheckNotExistsRequest(Coordination::Requests & requests, const Client & client, const std::string & path)
{
    if (client.isFeatureEnabled(DB::KeeperFeatureFlag::CHECK_NOT_EXISTS))
    {
        auto request = std::make_shared<Coordination::CheckRequest>();
        request->path = path;

View File

@@ -4,6 +4,7 @@

#include <base/getFQDNOrHostName.h>
#include <Poco/Util/AbstractConfiguration.h>
#include <Common/isLocalAddress.h>
#include <Common/StringUtils/StringUtils.h>
#include <Poco/String.h>

namespace DB

View File

@@ -354,7 +354,8 @@ ZooKeeper::ZooKeeper(

        send_thread = ThreadFromGlobalPool([this] { sendThread(); });
        receive_thread = ThreadFromGlobalPool([this] { receiveThread(); });

        initFeatureFlags();
        keeper_feature_flags.logFlags(log);

        ProfileEvents::increment(ProfileEvents::ZooKeeperInit);
    }
@ -362,6 +363,16 @@ ZooKeeper::ZooKeeper(
{ {
tryLogCurrentException(log, "Failed to connect to ZooKeeper"); tryLogCurrentException(log, "Failed to connect to ZooKeeper");
try
{
requests_queue.finish();
socket.shutdown();
}
catch (...)
{
tryLogCurrentException(log);
}
send_thread.join(); send_thread.join();
receive_thread.join(); receive_thread.join();
@ -1089,13 +1100,15 @@ void ZooKeeper::pushRequest(RequestInfo && info)
ProfileEvents::increment(ProfileEvents::ZooKeeperTransactions); ProfileEvents::increment(ProfileEvents::ZooKeeperTransactions);
} }
KeeperApiVersion ZooKeeper::getApiVersion() const bool ZooKeeper::isFeatureEnabled(KeeperFeatureFlag feature_flag) const
{ {
return keeper_api_version; return keeper_feature_flags.isEnabled(feature_flag);
} }
void ZooKeeper::initApiVersion() void ZooKeeper::initFeatureFlags()
{ {
const auto try_get = [&](const std::string & path, const std::string & description) -> std::optional<std::string>
{
auto promise = std::make_shared<std::promise<Coordination::GetResponse>>(); auto promise = std::make_shared<std::promise<Coordination::GetResponse>>();
auto future = promise->get_future(); auto future = promise->get_future();
@ -1104,29 +1117,47 @@ void ZooKeeper::initApiVersion()
promise->set_value(response); promise->set_value(response);
}; };
get(keeper_api_version_path, std::move(callback), {}); get(path, std::move(callback), {});
if (future.wait_for(std::chrono::milliseconds(args.operation_timeout_ms)) != std::future_status::ready) if (future.wait_for(std::chrono::milliseconds(args.operation_timeout_ms)) != std::future_status::ready)
{ throw Exception(Error::ZOPERATIONTIMEOUT, "Failed to get {}: timeout", description);
throw Exception(Error::ZOPERATIONTIMEOUT, "Failed to get API version: timeout");
}
auto response = future.get(); auto response = future.get();
if (response.error == Coordination::Error::ZNONODE) if (response.error == Coordination::Error::ZNONODE)
{ {
LOG_TRACE(log, "API version not found, assuming {}", keeper_api_version); LOG_TRACE(log, "Failed to get {}", description);
return; return std::nullopt;
} }
else if (response.error != Coordination::Error::ZOK) else if (response.error != Coordination::Error::ZOK)
{ {
throw Exception(response.error, "Failed to get API version"); throw Exception(response.error, "Failed to get {}", description);
} }
return std::move(response.data);
};
if (auto feature_flags = try_get(keeper_api_feature_flags_path, "feature flags"); feature_flags.has_value())
{
keeper_feature_flags.setFeatureFlags(std::move(*feature_flags));
return;
}
auto keeper_api_version_string = try_get(keeper_api_version_path, "API version");
DB::KeeperApiVersion keeper_api_version{DB::KeeperApiVersion::ZOOKEEPER_COMPATIBLE};
if (!keeper_api_version_string.has_value())
{
LOG_TRACE(log, "API version not found, assuming {}", keeper_api_version);
return;
}
DB::ReadBufferFromOwnString buf(*keeper_api_version_string);
uint8_t keeper_version{0}; uint8_t keeper_version{0};
DB::ReadBufferFromOwnString buf(response.data);
DB::readIntText(keeper_version, buf); DB::readIntText(keeper_version, buf);
keeper_api_version = static_cast<DB::KeeperApiVersion>(keeper_version); keeper_api_version = static_cast<DB::KeeperApiVersion>(keeper_version);
LOG_TRACE(log, "Detected server's API version: {}", keeper_api_version); LOG_TRACE(log, "Detected server's API version: {}", keeper_api_version);
keeper_feature_flags.fromApiVersion(keeper_api_version);
} }
@ -1246,7 +1277,7 @@ void ZooKeeper::list(
WatchCallback watch) WatchCallback watch)
{ {
std::shared_ptr<ZooKeeperListRequest> request{nullptr}; std::shared_ptr<ZooKeeperListRequest> request{nullptr};
if (keeper_api_version < Coordination::KeeperApiVersion::WITH_FILTERED_LIST) if (!isFeatureEnabled(KeeperFeatureFlag::FILTERED_LIST))
{ {
if (list_request_type != ListRequestType::ALL) if (list_request_type != ListRequestType::ALL)
throw Exception(Error::ZBADARGUMENTS, "Filtered list request type cannot be used because it's not supported by the server"); throw Exception(Error::ZBADARGUMENTS, "Filtered list request type cannot be used because it's not supported by the server");
@ -1311,7 +1342,7 @@ void ZooKeeper::multi(
{ {
ZooKeeperMultiRequest request(requests, default_acls); ZooKeeperMultiRequest request(requests, default_acls);
if (request.getOpNum() == OpNum::MultiRead && keeper_api_version < Coordination::KeeperApiVersion::WITH_MULTI_READ) if (request.getOpNum() == OpNum::MultiRead && !isFeatureEnabled(KeeperFeatureFlag::MULTI_READ))
throw Exception(Error::ZBADARGUMENTS, "MultiRead request type cannot be used because it's not supported by the server"); throw Exception(Error::ZBADARGUMENTS, "MultiRead request type cannot be used because it's not supported by the server");
RequestInfo request_info; RequestInfo request_info;
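Note: the rewritten `initFeatureFlags()` above boils down to a three-step fallback. The sketch below restates it with the names from the diff; `parse_version` is a hypothetical helper standing in for the `readIntText` block.

    // Negotiation order performed by initFeatureFlags() (sketch):
    // 1. /keeper/feature_flags present -> adopt the server's bitmap verbatim.
    // 2. else /keeper/api_version present -> translate the legacy integer.
    // 3. else -> plain ZooKeeper: keeper_feature_flags stays all-off.
    if (auto flags = try_get(keeper_api_feature_flags_path, "feature flags"))
        keeper_feature_flags.setFeatureFlags(std::move(*flags));
    else if (auto version = try_get(keeper_api_version_path, "API version"))
        keeper_feature_flags.fromApiVersion(parse_version(*version));  // parse_version: hypothetical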

View File

@@ -9,6 +9,7 @@
 #include <Common/ZooKeeper/ZooKeeperCommon.h>
 #include <Common/ZooKeeper/ZooKeeperArgs.h>
 #include <Coordination/KeeperConstants.h>
+#include <Coordination/KeeperFeatureFlags.h>
 #include <IO/ReadBuffer.h>
 #include <IO/WriteBuffer.h>
@@ -181,7 +182,7 @@ public:
         const Requests & requests,
         MultiCallback callback) override;
-    DB::KeeperApiVersion getApiVersion() const override;
+    bool isFeatureEnabled(KeeperFeatureFlag feature_flag) const override;
     /// Without forcefully invalidating (finalizing) ZooKeeper session before
     /// establishing a new one, there was a possibility that server is using
@@ -201,6 +202,8 @@ public:
     void setServerCompletelyStarted();
+    const KeeperFeatureFlags * getKeeperFeatureFlags() const override { return &keeper_feature_flags; }
+
 private:
     ACLs default_acls;
     Poco::Net::SocketAddress connected_zk_address;
@@ -312,12 +315,12 @@ private:
     void logOperationIfNeeded(const ZooKeeperRequestPtr & request, const ZooKeeperResponsePtr & response = nullptr, bool finalize = false, UInt64 elapsed_ms = 0);
-    void initApiVersion();
+    void initFeatureFlags();
     CurrentMetrics::Increment active_session_metric_increment{CurrentMetrics::ZooKeeperSession};
     std::shared_ptr<ZooKeeperLog> zk_log;
-    DB::KeeperApiVersion keeper_api_version{DB::KeeperApiVersion::ZOOKEEPER_COMPATIBLE};
+    DB::KeeperFeatureFlags keeper_feature_flags;
 };
 }

View File

@@ -402,9 +402,9 @@ public:
         ephemeral_nodes.clear();
     }
-    KeeperApiVersion getApiVersion() const
+    bool isFeatureEnabled(KeeperFeatureFlag feature_flag) const
     {
-        return keeper->getApiVersion();
+        return keeper->isFeatureEnabled(feature_flag);
     }
 private:

View File

@@ -23,7 +23,7 @@ namespace DB
  * The exact match of the type is checked. That is, cast to the ancestor will be unsuccessful.
  */
 template <typename To, typename From>
-To assert_cast(From && from)
+inline To assert_cast(From && from)
 {
 #ifndef NDEBUG
     try

View File

@@ -1,4 +1,5 @@
 #include "getNumberOfPhysicalCPUCores.h"
+#include <filesystem>
 #include "config.h"
 #if defined(OS_LINUX)
@@ -7,6 +8,8 @@
 #endif
 #include <boost/algorithm/string/trim.hpp>
+#include <boost/algorithm/string/split.hpp>
+#include <base/range.h>
 #include <thread>
 #include <set>
@@ -15,7 +18,7 @@ namespace
 {
 #if defined(OS_LINUX)
-int32_t readFrom(const char * filename, int default_value)
+int32_t readFrom(const std::filesystem::path & filename, int default_value)
 {
     std::ifstream infile(filename);
     if (!infile.is_open())
@@ -31,10 +34,87 @@ int32_t readFrom(const std::filesystem::path & filename, int default_value)
 uint32_t getCGroupLimitedCPUCores(unsigned default_cpu_count)
 {
     uint32_t quota_count = default_cpu_count;
+    std::filesystem::path prefix = "/sys/fs/cgroup";
+    /// cgroupsv2
+    std::ifstream contr_file(prefix / "cgroup.controllers");
+    if (contr_file.is_open())
+    {
+        /// First, we identify the cgroup the process belongs
+        std::ifstream cgroup_name_file("/proc/self/cgroup");
+        if (!cgroup_name_file.is_open())
+            return default_cpu_count;
+        // cgroup_name_file always starts with '0::/' for v2
+        cgroup_name_file.ignore(4);
+        std::string cgroup_name;
+        cgroup_name_file >> cgroup_name;
+        std::filesystem::path current_cgroup;
+        if (cgroup_name.empty())
+            current_cgroup = prefix;
+        else
+            current_cgroup = prefix / cgroup_name;
+        // Looking for cpu.max in directories from the current cgroup to the top level
+        // It does not stop on the first time since the child could have a greater value than parent
+        while (current_cgroup != prefix.parent_path())
+        {
+            std::ifstream cpu_max_file(current_cgroup / "cpu.max");
+            current_cgroup = current_cgroup.parent_path();
+            if (cpu_max_file.is_open())
+            {
+                std::string cpu_limit_str;
+                float cpu_period;
+                cpu_max_file >> cpu_limit_str >> cpu_period;
+                if (cpu_limit_str != "max" && cpu_period != 0)
+                {
+                    float cpu_limit = std::stof(cpu_limit_str);
+                    quota_count = std::min(static_cast<uint32_t>(ceil(cpu_limit / cpu_period)), quota_count);
+                }
+            }
+        }
+        current_cgroup = prefix / cgroup_name;
+        // Looking for cpuset.cpus.effective in directories from the current cgroup to the top level
+        while (current_cgroup != prefix.parent_path())
+        {
+            std::ifstream cpuset_cpus_file(current_cgroup / "cpuset.cpus.effective");
+            current_cgroup = current_cgroup.parent_path();
+            if (cpuset_cpus_file.is_open())
+            {
+                // The line in the file is "0,2-4,6,9-14" cpu numbers
+                // It's always grouped and ordered
+                std::vector<std::string> cpu_ranges;
+                std::string cpuset_line;
+                cpuset_cpus_file >> cpuset_line;
+                if (cpuset_line.empty())
+                    continue;
+                boost::split(cpu_ranges, cpuset_line, boost::is_any_of(","));
+                uint32_t cpus_count = 0;
+                for (const std::string& cpu_number_or_range : cpu_ranges)
+                {
+                    std::vector<std::string> cpu_range;
+                    boost::split(cpu_range, cpu_number_or_range, boost::is_any_of("-"));
+                    if (cpu_range.size() == 2)
+                    {
+                        int start = std::stoi(cpu_range[0]);
+                        int end = std::stoi(cpu_range[1]);
+                        cpus_count += (end - start) + 1;
+                    }
+                    else
+                        cpus_count++;
+                }
+                quota_count = std::min(cpus_count, quota_count);
+                break;
+            }
+        }
+        return quota_count;
+    }
+    /// cgroupsv1
     /// Return the number of milliseconds per period process is guaranteed to run.
     /// -1 for no quota
-    int cgroup_quota = readFrom("/sys/fs/cgroup/cpu/cpu.cfs_quota_us", -1);
-    int cgroup_period = readFrom("/sys/fs/cgroup/cpu/cpu.cfs_period_us", -1);
+    int cgroup_quota = readFrom(prefix / "cpu/cpu.cfs_quota_us", -1);
+    int cgroup_period = readFrom(prefix / "cpu/cpu.cfs_period_us", -1);
     if (cgroup_quota > -1 && cgroup_period > 0)
         quota_count = static_cast<uint32_t>(ceil(static_cast<float>(cgroup_quota) / static_cast<float>(cgroup_period)));
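Note: two worked examples of the arithmetic above. A `cpu.max` of "150000 100000" gives ceil(150000 / 100000) = 2 usable cores; a `cpuset.cpus.effective` line "0,2-4,6,9-14" counts 1 + 3 + 1 + 6 = 11 CPUs. A self-contained sketch of the cpuset counting, using the same boost calls as the diff:

    // Standalone sketch of the cpuset.cpus.effective counting used above.
    // Input like "0,2-4,6,9-14" -> 11. Assumes well-formed input, as the kernel guarantees.
    #include <cassert>
    #include <cstdint>
    #include <string>
    #include <vector>
    #include <boost/algorithm/string/split.hpp>
    #include <boost/algorithm/string/classification.hpp>

    uint32_t countCpusetEntries(const std::string & cpuset_line)
    {
        std::vector<std::string> ranges;
        boost::split(ranges, cpuset_line, boost::is_any_of(","));
        uint32_t count = 0;
        for (const auto & item : ranges)
        {
            std::vector<std::string> bounds;
            boost::split(bounds, item, boost::is_any_of("-"));
            if (bounds.size() == 2)
                count += std::stoi(bounds[1]) - std::stoi(bounds[0]) + 1;  // inclusive range
            else
                ++count;                                                   // single CPU number
        }
        return count;
    }

    int main()
    {
        assert(countCpusetEntries("0,2-4,6,9-14") == 11);
    }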

View File

@@ -40,7 +40,7 @@ std::string makeRegexpPatternFromGlobs(const std::string & initial_str_with_glob
     size_t current_index = 0;
     while (RE2::FindAndConsume(&input, enum_or_range, &matched))
     {
-        std::string buffer = matched.ToString();
+        std::string buffer{matched};
         oss_for_replacing << escaped_with_globs.substr(current_index, matched.data() - escaped_with_globs.data() - current_index - 1) << '(';
         if (buffer.find(',') == std::string::npos)

View File

@@ -49,8 +49,8 @@ static void validateChecksum(char * data, size_t size, const Checksum expected_c
     /// TODO mess up of endianness in error message.
     message << "Checksum doesn't match: corrupted data."
-        " Reference: " + getHexUIntLowercase(expected_checksum.first) + getHexUIntLowercase(expected_checksum.second)
-        + ". Actual: " + getHexUIntLowercase(calculated_checksum.first) + getHexUIntLowercase(calculated_checksum.second)
+        " Reference: " + getHexUIntLowercase(expected_checksum.high64) + getHexUIntLowercase(expected_checksum.low64)
+        + ". Actual: " + getHexUIntLowercase(calculated_checksum.high64) + getHexUIntLowercase(calculated_checksum.low64)
         + ". Size of compressed block: " + toString(size);
     const char * message_hardware_failure = "This is most likely due to hardware failure. "
@@ -95,8 +95,8 @@ static void validateChecksum(char * data, size_t size, const Checksum expected_c
     }
     /// Check if the difference caused by single bit flip in stored checksum.
-    size_t difference = std::popcount(expected_checksum.first ^ calculated_checksum.first)
-        + std::popcount(expected_checksum.second ^ calculated_checksum.second);
+    size_t difference = std::popcount(expected_checksum.low64 ^ calculated_checksum.low64)
+        + std::popcount(expected_checksum.high64 ^ calculated_checksum.high64);
     if (difference == 1)
     {
@@ -194,8 +194,8 @@ size_t CompressedReadBufferBase::readCompressedData(size_t & size_decompressed,
     {
         Checksum checksum;
         ReadBufferFromMemory checksum_in(own_compressed_buffer.data(), sizeof(checksum));
-        readBinaryLittleEndian(checksum.first, checksum_in);
-        readBinaryLittleEndian(checksum.second, checksum_in);
+        readBinaryLittleEndian(checksum.low64, checksum_in);
+        readBinaryLittleEndian(checksum.high64, checksum_in);
         validateChecksum(compressed_buffer, size_compressed_without_checksum, checksum);
     }
@@ -238,8 +238,8 @@ size_t CompressedReadBufferBase::readCompressedDataBlockForAsynchronous(size_t &
     {
         Checksum checksum;
         ReadBufferFromMemory checksum_in(own_compressed_buffer.data(), sizeof(checksum));
-        readBinaryLittleEndian(checksum.first, checksum_in);
-        readBinaryLittleEndian(checksum.second, checksum_in);
+        readBinaryLittleEndian(checksum.low64, checksum_in);
+        readBinaryLittleEndian(checksum.high64, checksum_in);
         validateChecksum(compressed_buffer, size_compressed_without_checksum, checksum);
     }
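Note: the rename from `first`/`second` to `low64`/`high64` is mechanical (the checksum type stopped being a `std::pair`); the on-disk layout is unchanged. The single-bit-flip heuristic above is easiest to see on a concrete pair; a sketch under the new member names, where `expected` and `calculated` are assumed 128-bit checksums:

    // If exactly one bit differs between the stored and recomputed 128-bit
    // checksums, the checksum itself (not the data) was most likely corrupted.
    // Example: expected.low64 = 0x...0004, calculated.low64 = 0x...0000,
    // high words equal -> XOR popcount = 1 -> report a checksum bit flip.
    size_t difference = std::popcount(expected.low64 ^ calculated.low64)
                      + std::popcount(expected.high64 ^ calculated.high64);
    bool stored_checksum_bit_flip = (difference == 1);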
} }

View File

@@ -38,8 +38,8 @@ void CompressedWriteBuffer::nextImpl()
         CityHash_v1_0_2::uint128 checksum = CityHash_v1_0_2::CityHash128(out_compressed_ptr, compressed_size);
-        writeBinaryLittleEndian(checksum.first, out);
-        writeBinaryLittleEndian(checksum.second, out);
+        writeBinaryLittleEndian(checksum.low64, out);
+        writeBinaryLittleEndian(checksum.high64, out);
         out.position() += compressed_size;
     }
@@ -50,8 +50,8 @@ void CompressedWriteBuffer::nextImpl()
         CityHash_v1_0_2::uint128 checksum = CityHash_v1_0_2::CityHash128(compressed_buffer.data(), compressed_size);
-        writeBinaryLittleEndian(checksum.first, out);
-        writeBinaryLittleEndian(checksum.second, out);
+        writeBinaryLittleEndian(checksum.low64, out);
+        writeBinaryLittleEndian(checksum.high64, out);
         out.write(compressed_buffer.data(), compressed_size);
     }

View File

@@ -36,7 +36,7 @@ void CoordinationSettings::loadFromConfig(const String & config_elem, const Poco
 }
-const String KeeperConfigurationAndSettings::DEFAULT_FOUR_LETTER_WORD_CMD = "conf,cons,crst,envi,ruok,srst,srvr,stat,wchs,dirs,mntr,isro,rcvr,apiv,csnp,lgif,rqld,rclc,clrs";
+const String KeeperConfigurationAndSettings::DEFAULT_FOUR_LETTER_WORD_CMD = "conf,cons,crst,envi,ruok,srst,srvr,stat,wchs,dirs,mntr,isro,rcvr,apiv,csnp,lgif,rqld,rclc,clrs,ftfl";
 KeeperConfigurationAndSettings::KeeperConfigurationAndSettings()
     : server_id(NOT_EXIST)

View File

@@ -9,9 +9,11 @@
 #include <Common/getCurrentProcessFDCount.h>
 #include <Common/getMaxFileDescriptorCount.h>
 #include <Common/StringUtils/StringUtils.h>
+#include "Coordination/KeeperFeatureFlags.h"
 #include <Coordination/Keeper4LWInfo.h>
 #include <IO/WriteHelpers.h>
 #include <IO/Operators.h>
+#include <boost/algorithm/string.hpp>
 #include <unistd.h>
 #include <bit>
@@ -153,6 +155,9 @@ void FourLetterCommandFactory::registerCommands(KeeperDispatcher & keeper_dispat
     FourLetterCommandPtr clean_resources_command = std::make_shared<CleanResourcesCommand>(keeper_dispatcher);
     factory.registerCommand(clean_resources_command);
+    FourLetterCommandPtr feature_flags_command = std::make_shared<FeatureFlagsCommand>(keeper_dispatcher);
+    factory.registerCommand(feature_flags_command);
+
     factory.initializeAllowList(keeper_dispatcher);
     factory.setInitialize(true);
 }
@@ -486,7 +491,7 @@ String RecoveryCommand::run()
 String ApiVersionCommand::run()
 {
-    return toString(static_cast<uint8_t>(Coordination::current_keeper_api_version));
+    return toString(static_cast<uint8_t>(KeeperApiVersion::WITH_MULTI_READ));
 }
 String CreateSnapshotCommand::run()
@@ -535,4 +540,29 @@ String CleanResourcesCommand::run()
     return "ok";
 }
+String FeatureFlagsCommand::run()
+{
+    const auto & feature_flags = keeper_dispatcher.getKeeperContext()->feature_flags;
+
+    StringBuffer ret;
+
+    auto append = [&ret] (const String & key, uint8_t value) -> void
+    {
+        writeText(key, ret);
+        writeText('\t', ret);
+        writeText(std::to_string(value), ret);
+        writeText('\n', ret);
+    };
+
+    for (const auto & [feature_flag, name] : magic_enum::enum_entries<KeeperFeatureFlag>())
+    {
+        std::string feature_flag_string(name);
+        boost::to_lower(feature_flag_string);
+        append(feature_flag_string, feature_flags.isEnabled(feature_flag));
+    }
+
+    return ret.str();
+}
 }
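Note: with the defaults set in KeeperContext further below (FILTERED_LIST and MULTI_READ on, CHECK_NOT_EXISTS off), an `ftfl` reply would plausibly look like the following tab-separated listing, one lower-cased enum name per line:

    filtered_list	1
    multi_read	1
    check_not_exists	0

Like the other four-letter commands, it can presumably be exercised with something like `echo ftfl | nc <keeper_host> 9181` against the Keeper client port.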

View File

@@ -401,4 +401,16 @@ struct CleanResourcesCommand : public IFourLetterCommand
     ~CleanResourcesCommand() override = default;
 };
+struct FeatureFlagsCommand : public IFourLetterCommand
+{
+    explicit FeatureFlagsCommand(KeeperDispatcher & keeper_dispatcher_)
+        : IFourLetterCommand(keeper_dispatcher_)
+    {
+    }
+
+    String name() override { return "ftfl"; }
+    String run() override;
+    ~FeatureFlagsCommand() override = default;
+};
 }

View File

@@ -5,6 +5,7 @@
 namespace DB
 {
+/// left for backwards compatibility
 enum class KeeperApiVersion : uint8_t
 {
     ZOOKEEPER_COMPATIBLE = 0,
@@ -13,15 +14,8 @@ enum class KeeperApiVersion : uint8_t
     WITH_CHECK_NOT_EXISTS,
 };
-inline constexpr auto current_keeper_api_version = KeeperApiVersion::WITH_CHECK_NOT_EXISTS;
 const std::string keeper_system_path = "/keeper";
 const std::string keeper_api_version_path = keeper_system_path + "/api_version";
+const std::string keeper_api_feature_flags_path = keeper_system_path + "/feature_flags";
-using PathWithData = std::pair<std::string_view, std::string>;
-const std::vector<PathWithData> child_system_paths_with_data
-{
-    {keeper_api_version_path, toString(static_cast<uint8_t>(current_keeper_api_version))}
-};
 }

View File

@@ -0,0 +1,60 @@
+#include <Coordination/KeeperContext.h>
+#include <Coordination/KeeperConstants.h>
+#include <Common/logger_useful.h>
+#include <Coordination/KeeperFeatureFlags.h>
+#include <boost/algorithm/string.hpp>
+
+namespace DB
+{
+
+namespace ErrorCodes
+{
+    extern const int BAD_ARGUMENTS;
+}
+
+KeeperContext::KeeperContext()
+{
+    /// enable by default some feature flags
+    feature_flags.enableFeatureFlag(KeeperFeatureFlag::FILTERED_LIST);
+    feature_flags.enableFeatureFlag(KeeperFeatureFlag::MULTI_READ);
+    system_nodes_with_data[keeper_api_feature_flags_path] = feature_flags.getFeatureFlags();
+
+    /// for older clients, the default is equivalent to WITH_MULTI_READ version
+    system_nodes_with_data[keeper_api_version_path] = toString(static_cast<uint8_t>(KeeperApiVersion::WITH_MULTI_READ));
+}
+
+void KeeperContext::initialize(const Poco::Util::AbstractConfiguration & config)
+{
+    digest_enabled = config.getBool("keeper_server.digest_enabled", false);
+    ignore_system_path_on_startup = config.getBool("keeper_server.ignore_system_path_on_startup", false);
+
+    static const std::string feature_flags_key = "keeper_server.feature_flags";
+    if (config.has(feature_flags_key))
+    {
+        Poco::Util::AbstractConfiguration::Keys keys;
+        config.keys(feature_flags_key, keys);
+        for (const auto & key : keys)
+        {
+            auto feature_flag_string = boost::to_upper_copy(key);
+            auto feature_flag = magic_enum::enum_cast<KeeperFeatureFlag>(feature_flag_string);
+
+            if (!feature_flag.has_value())
+                throw Exception(ErrorCodes::BAD_ARGUMENTS, "Invalid feature flag defined in config for Keeper: {}", key);
+
+            auto is_enabled = config.getBool(feature_flags_key + "." + key);
+            if (is_enabled)
+                feature_flags.enableFeatureFlag(feature_flag.value());
+            else
+                feature_flags.disableFeatureFlag(feature_flag.value());
+        }
+
+        system_nodes_with_data[keeper_api_feature_flags_path] = feature_flags.getFeatureFlags();
+    }
+
+    feature_flags.logFlags(&Poco::Logger::get("KeeperContext"));
+}
+
+}
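Note: `initialize()` upper-cases each key under `keeper_server.feature_flags` and matches it against the enum, so an override in the server config would plausibly look like this (illustrative snippet, not part of the diff):

    <keeper_server>
        <feature_flags>
            <!-- force-enable a flag that is off by default -->
            <check_not_exists>1</check_not_exists>
            <!-- explicitly disable a default-on flag -->
            <multi_read>0</multi_read>
        </feature_flags>
    </keeper_server>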

View File

@@ -1,10 +1,17 @@
 #pragma once
+#include <Poco/Util/AbstractConfiguration.h>
+#include <Coordination/KeeperFeatureFlags.h>
 namespace DB
 {
 struct KeeperContext
 {
+    KeeperContext();
+
+    void initialize(const Poco::Util::AbstractConfiguration & config);
+
     enum class Phase : uint8_t
     {
         INIT,
@@ -16,6 +23,10 @@ struct KeeperContext
     bool ignore_system_path_on_startup{false};
     bool digest_enabled{true};
+    std::unordered_map<std::string, std::string> system_nodes_with_data;
+
+    KeeperFeatureFlags feature_flags;
 };
 using KeeperContextPtr = std::shared_ptr<KeeperContext>;

View File

@@ -336,7 +336,17 @@ void KeeperDispatcher::initialize(const Poco::Util::AbstractConfiguration & conf
     snapshot_s3.startup(config, macros);
-    server = std::make_unique<KeeperServer>(configuration_and_settings, config, responses_queue, snapshots_queue, snapshot_s3, [this](const KeeperStorage::RequestForSession & request_for_session)
+    keeper_context = std::make_shared<KeeperContext>();
+    keeper_context->initialize(config);
+
+    server = std::make_unique<KeeperServer>(
+        configuration_and_settings,
+        config,
+        responses_queue,
+        snapshots_queue,
+        keeper_context,
+        snapshot_s3,
+        [this](const KeeperStorage::RequestForSession & request_for_session)
     {
         /// check if we have queue of read requests depending on this request to be committed
         std::lock_guard lock(read_request_queue_mutex);
@@ -344,7 +354,8 @@ void KeeperDispatcher::initialize(const Poco::Util::AbstractConfiguration & conf
     {
         auto & xid_to_request_queue = it->second;
-        if (auto request_queue_it = xid_to_request_queue.find(request_for_session.request->xid); request_queue_it != xid_to_request_queue.end())
+        if (auto request_queue_it = xid_to_request_queue.find(request_for_session.request->xid);
+            request_queue_it != xid_to_request_queue.end())
         {
             for (const auto & read_request : request_queue_it->second)
             {

View File

@@ -81,6 +81,8 @@ private:
     KeeperSnapshotManagerS3 snapshot_s3;
+    KeeperContextPtr keeper_context;
+
     /// Thread put requests to raft
     void requestThread();
     /// Thread put responses for subscribed sessions
@@ -198,6 +200,12 @@ public:
         return configuration_and_settings;
     }
+    const KeeperContextPtr & getKeeperContext() const
+    {
+        return keeper_context;
+    }
+
     void incrementPacketsSent()
     {
         keeper_stats.incrementPacketsSent();

View File

@@ -0,0 +1,92 @@
+#include <Coordination/KeeperFeatureFlags.h>
+#include <Common/ErrorCodes.h>
+#include <Common/Exception.h>
+#include <Common/logger_useful.h>
+
+namespace DB
+{
+
+namespace
+{
+
+std::pair<size_t, size_t> getByteAndBitIndex(size_t num)
+{
+    size_t byte_idx = num / 8;
+    auto bit_idx = (7 - num % 8);
+    return {byte_idx, bit_idx};
+}
+
+}
+
+KeeperFeatureFlags::KeeperFeatureFlags()
+{
+    /// get byte idx of largest value
+    auto [byte_idx, _] = getByteAndBitIndex(magic_enum::enum_count<KeeperFeatureFlag>() - 1);
+    feature_flags = std::string(byte_idx + 1, 0);
+}
+
+KeeperFeatureFlags::KeeperFeatureFlags(std::string feature_flags_)
+    : feature_flags(std::move(feature_flags_))
+{}
+
+void KeeperFeatureFlags::fromApiVersion(KeeperApiVersion keeper_api_version)
+{
+    if (keeper_api_version == KeeperApiVersion::ZOOKEEPER_COMPATIBLE)
+        return;
+
+    if (keeper_api_version >= KeeperApiVersion::WITH_FILTERED_LIST)
+        enableFeatureFlag(KeeperFeatureFlag::FILTERED_LIST);
+
+    if (keeper_api_version >= KeeperApiVersion::WITH_MULTI_READ)
+        enableFeatureFlag(KeeperFeatureFlag::MULTI_READ);
+
+    if (keeper_api_version >= KeeperApiVersion::WITH_CHECK_NOT_EXISTS)
+        enableFeatureFlag(KeeperFeatureFlag::CHECK_NOT_EXISTS);
+}
+
+bool KeeperFeatureFlags::isEnabled(KeeperFeatureFlag feature_flag) const
+{
+    auto [byte_idx, bit_idx] = getByteAndBitIndex(magic_enum::enum_integer(feature_flag));
+
+    if (byte_idx > feature_flags.size())
+        return false;
+
+    return feature_flags[byte_idx] & (1 << bit_idx);
+}
+
+void KeeperFeatureFlags::setFeatureFlags(std::string feature_flags_)
+{
+    feature_flags = std::move(feature_flags_);
+}
+
+void KeeperFeatureFlags::enableFeatureFlag(KeeperFeatureFlag feature_flag)
+{
+    auto [byte_idx, bit_idx] = getByteAndBitIndex(magic_enum::enum_integer(feature_flag));
+    chassert(byte_idx < feature_flags.size());
+    feature_flags[byte_idx] |= (1 << bit_idx);
+}
+
+void KeeperFeatureFlags::disableFeatureFlag(KeeperFeatureFlag feature_flag)
+{
+    auto [byte_idx, bit_idx] = getByteAndBitIndex(magic_enum::enum_integer(feature_flag));
+    chassert(byte_idx < feature_flags.size());
+    feature_flags[byte_idx] &= ~(1 << bit_idx);
+}
+
+const std::string & KeeperFeatureFlags::getFeatureFlags() const
+{
+    return feature_flags;
+}
+
+void KeeperFeatureFlags::logFlags(Poco::Logger * log) const
+{
+    for (const auto & [feature_flag, feature_flag_name] : magic_enum::enum_entries<KeeperFeatureFlag>())
+    {
+        auto is_enabled = isEnabled(feature_flag);
+        LOG_INFO(log, "Keeper feature flag {}: {}", feature_flag_name, is_enabled ? "enabled" : "disabled");
+    }
+}
+
+}
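Note: the packing above is MSB-first within each byte: flag n occupies bit (7 - n % 8) of byte n / 8, so FILTERED_LIST (0) is mask 0x80 of byte 0, MULTI_READ (1) is 0x40, and CHECK_NOT_EXISTS (2) is 0x20. A self-contained check of that arithmetic (the 10th flag is hypothetical):

    #include <cassert>
    #include <cstddef>
    #include <utility>

    // Mirrors getByteAndBitIndex() above: MSB-first bit numbering inside each byte.
    static std::pair<std::size_t, std::size_t> byteAndBit(std::size_t num)
    {
        return {num / 8, 7 - num % 8};
    }

    int main()
    {
        assert((byteAndBit(0) == std::pair<std::size_t, std::size_t>{0, 7}));  // FILTERED_LIST -> byte 0, mask 0x80
        assert((byteAndBit(2) == std::pair<std::size_t, std::size_t>{0, 5}));  // CHECK_NOT_EXISTS -> byte 0, mask 0x20
        assert((byteAndBit(9) == std::pair<std::size_t, std::size_t>{1, 6}));  // a hypothetical 10th flag -> byte 1
    }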

View File

@@ -0,0 +1,39 @@
+#pragma once
+
+#include <Coordination/KeeperConstants.h>
+
+namespace DB
+{
+
+/// these values cannot be reordered or removed, only new values can be added
+enum class KeeperFeatureFlag : size_t
+{
+    FILTERED_LIST = 0,
+    MULTI_READ,
+    CHECK_NOT_EXISTS,
+};
+
+class KeeperFeatureFlags
+{
+public:
+    KeeperFeatureFlags();
+
+    explicit KeeperFeatureFlags(std::string feature_flags_);
+
+    /// backwards compatibility
+    void fromApiVersion(KeeperApiVersion keeper_api_version);
+
+    bool isEnabled(KeeperFeatureFlag feature) const;
+
+    void setFeatureFlags(std::string feature_flags_);
+    const std::string & getFeatureFlags() const;
+
+    void enableFeatureFlag(KeeperFeatureFlag feature);
+    void disableFeatureFlag(KeeperFeatureFlag feature);
+
+    void logFlags(Poco::Logger * log) const;
+
+private:
+    std::string feature_flags;
+};
+
+}

View File

@@ -108,21 +108,19 @@ KeeperServer::KeeperServer(
     const Poco::Util::AbstractConfiguration & config,
     ResponsesQueue & responses_queue_,
    SnapshotsQueue & snapshots_queue_,
+    KeeperContextPtr keeper_context_,
     KeeperSnapshotManagerS3 & snapshot_manager_s3,
     KeeperStateMachine::CommitCallback commit_callback)
     : server_id(configuration_and_settings_->server_id)
     , coordination_settings(configuration_and_settings_->coordination_settings)
     , log(&Poco::Logger::get("KeeperServer"))
     , is_recovering(config.getBool("keeper_server.force_recovery", false))
-    , keeper_context{std::make_shared<KeeperContext>()}
+    , keeper_context{std::move(keeper_context_)}
     , create_snapshot_on_exit(config.getBool("keeper_server.create_snapshot_on_exit", true))
 {
     if (coordination_settings->quorum_reads)
         LOG_WARNING(log, "Quorum reads enabled, Keeper will work slower.");
-    keeper_context->digest_enabled = config.getBool("keeper_server.digest_enabled", false);
-    keeper_context->ignore_system_path_on_startup = config.getBool("keeper_server.ignore_system_path_on_startup", false);
-
     state_machine = nuraft::cs_new<KeeperStateMachine>(
         responses_queue_,
         snapshots_queue_,

View File

@@ -72,6 +72,7 @@ public:
         const Poco::Util::AbstractConfiguration & config_,
         ResponsesQueue & responses_queue_,
         SnapshotsQueue & snapshots_queue_,
+        KeeperContextPtr keeper_context_,
         KeeperSnapshotManagerS3 & snapshot_manager_s3,
         KeeperStateMachine::CommitCallback commit_callback);

View File

@@ -185,7 +185,7 @@ void KeeperStorageSnapshot::serialize(const KeeperStorageSnapshot & snapshot, Wr
     }
     /// Serialize data tree
-    writeBinary(snapshot.snapshot_container_size - child_system_paths_with_data.size(), out);
+    writeBinary(snapshot.snapshot_container_size - keeper_context->system_nodes_with_data.size(), out);
     size_t counter = 0;
     for (auto it = snapshot.begin; counter < snapshot.snapshot_container_size; ++counter)
     {

View File

@@ -283,9 +283,9 @@ void KeeperStorage::initializeSystemNodes()
     }
     // insert child system nodes
-    for (const auto & [path, data] : child_system_paths_with_data)
+    for (const auto & [path, data] : keeper_context->system_nodes_with_data)
     {
-        assert(keeper_api_version_path.starts_with(keeper_system_path));
+        assert(path.starts_with(keeper_system_path));
         Node child_system_node;
         child_system_node.setData(data);
         auto [map_key, _] = container.insert(std::string{path}, child_system_node);
@@ -1060,7 +1060,7 @@ struct KeeperStorageGetRequestProcessor final : public KeeperStorageRequestProce
     ProfileEvents::increment(ProfileEvents::KeeperGetRequest);
     Coordination::ZooKeeperGetRequest & request = dynamic_cast<Coordination::ZooKeeperGetRequest &>(*zk_request);
-    if (request.path == Coordination::keeper_api_version_path)
+    if (request.path == Coordination::keeper_api_feature_flags_path)
         return {};
     if (!storage.uncommitted_state.getNode(request.path))

View File

@@ -2,7 +2,9 @@
 #include <gtest/gtest.h>
 #include "Common/ZooKeeper/IKeeper.h"
+#include "Coordination/KeeperConstants.h"
 #include "Coordination/KeeperContext.h"
+#include "Coordination/KeeperFeatureFlags.h"
 #include "Coordination/KeeperStorage.h"
 #include "Core/Defines.h"
 #include "IO/WriteHelpers.h"
@@ -1191,7 +1193,7 @@ TEST_P(CoordinationTest, TestStorageSnapshotSimple)
     EXPECT_EQ(snapshot.snapshot_meta->get_last_log_idx(), 2);
     EXPECT_EQ(snapshot.session_id, 7);
-    EXPECT_EQ(snapshot.snapshot_container_size, 5);
+    EXPECT_EQ(snapshot.snapshot_container_size, 6);
     EXPECT_EQ(snapshot.session_and_timeout.size(), 2);
     auto buf = manager.serializeSnapshotToBuffer(snapshot);
@@ -1203,7 +1205,7 @@ TEST_P(CoordinationTest, TestStorageSnapshotSimple)
     auto [restored_storage, snapshot_meta, _] = manager.deserializeSnapshotFromBuffer(debuf);
-    EXPECT_EQ(restored_storage->container.size(), 5);
+    EXPECT_EQ(restored_storage->container.size(), 6);
     EXPECT_EQ(restored_storage->container.getValue("/").getChildren().size(), 2);
     EXPECT_EQ(restored_storage->container.getValue("/hello").getChildren().size(), 1);
     EXPECT_EQ(restored_storage->container.getValue("/hello/somepath").getChildren().size(), 0);
@@ -1235,14 +1237,14 @@ TEST_P(CoordinationTest, TestStorageSnapshotMoreWrites)
     DB::KeeperStorageSnapshot snapshot(&storage, 50);
     EXPECT_EQ(snapshot.snapshot_meta->get_last_log_idx(), 50);
-    EXPECT_EQ(snapshot.snapshot_container_size, 53);
+    EXPECT_EQ(snapshot.snapshot_container_size, 54);
     for (size_t i = 50; i < 100; ++i)
     {
         addNode(storage, "/hello_" + std::to_string(i), "world_" + std::to_string(i));
     }
-    EXPECT_EQ(storage.container.size(), 103);
+    EXPECT_EQ(storage.container.size(), 104);
     auto buf = manager.serializeSnapshotToBuffer(snapshot);
     manager.serializeSnapshotBufferToDisk(*buf, 50);
@@ -1252,7 +1254,7 @@ TEST_P(CoordinationTest, TestStorageSnapshotMoreWrites)
     auto debuf = manager.deserializeSnapshotBufferFromDisk(50);
     auto [restored_storage, meta, _] = manager.deserializeSnapshotFromBuffer(debuf);
-    EXPECT_EQ(restored_storage->container.size(), 53);
+    EXPECT_EQ(restored_storage->container.size(), 54);
     for (size_t i = 0; i < 50; ++i)
     {
         EXPECT_EQ(restored_storage->container.getValue("/hello_" + std::to_string(i)).getData(), "world_" + std::to_string(i));
@@ -1291,7 +1293,7 @@ TEST_P(CoordinationTest, TestStorageSnapshotManySnapshots)
     auto [restored_storage, meta, _] = manager.restoreFromLatestSnapshot();
-    EXPECT_EQ(restored_storage->container.size(), 253);
+    EXPECT_EQ(restored_storage->container.size(), 254);
     for (size_t i = 0; i < 250; ++i)
     {
@@ -1325,16 +1327,16 @@ TEST_P(CoordinationTest, TestStorageSnapshotMode)
         if (i % 2 == 0)
             storage.container.erase("/hello_" + std::to_string(i));
     }
-    EXPECT_EQ(storage.container.size(), 28);
-    EXPECT_EQ(storage.container.snapshotSizeWithVersion().first, 104);
+    EXPECT_EQ(storage.container.size(), 29);
+    EXPECT_EQ(storage.container.snapshotSizeWithVersion().first, 105);
     EXPECT_EQ(storage.container.snapshotSizeWithVersion().second, 1);
     auto buf = manager.serializeSnapshotToBuffer(snapshot);
     manager.serializeSnapshotBufferToDisk(*buf, 50);
     }
     EXPECT_TRUE(fs::exists("./snapshots/snapshot_50.bin" + params.extension));
-    EXPECT_EQ(storage.container.size(), 28);
+    EXPECT_EQ(storage.container.size(), 29);
     storage.clearGarbageAfterSnapshot();
-    EXPECT_EQ(storage.container.snapshotSizeWithVersion().first, 28);
+    EXPECT_EQ(storage.container.snapshotSizeWithVersion().first, 29);
     for (size_t i = 0; i < 50; ++i)
     {
         if (i % 2 != 0)
@@ -1863,7 +1865,7 @@ TEST_P(CoordinationTest, TestStorageSnapshotDifferentCompressions)
     auto [restored_storage, snapshot_meta, _] = new_manager.deserializeSnapshotFromBuffer(debuf);
-    EXPECT_EQ(restored_storage->container.size(), 5);
+    EXPECT_EQ(restored_storage->container.size(), 6);
     EXPECT_EQ(restored_storage->container.getValue("/").getChildren().size(), 2);
     EXPECT_EQ(restored_storage->container.getValue("/hello").getChildren().size(), 1);
     EXPECT_EQ(restored_storage->container.getValue("/hello/somepath").getChildren().size(), 0);
@@ -2346,18 +2348,19 @@ TEST_P(CoordinationTest, TestDurableState)
     }
 }
-TEST_P(CoordinationTest, TestCurrentApiVersion)
+TEST_P(CoordinationTest, TestFeatureFlags)
 {
     using namespace Coordination;
     KeeperStorage storage{500, "", keeper_context};
     auto request = std::make_shared<ZooKeeperGetRequest>();
-    request->path = DB::keeper_api_version_path;
+    request->path = DB::keeper_api_feature_flags_path;
     auto responses = storage.processRequest(request, 0, std::nullopt, true, true);
     const auto & get_response = getSingleResponse<ZooKeeperGetResponse>(responses);
-    uint8_t keeper_version{0};
-    DB::ReadBufferFromOwnString buf(get_response.data);
-    DB::readIntText(keeper_version, buf);
-    EXPECT_EQ(keeper_version, static_cast<uint8_t>(current_keeper_api_version));
+    DB::KeeperFeatureFlags feature_flags;
+    feature_flags.setFeatureFlags(get_response.data);
+    ASSERT_TRUE(feature_flags.isEnabled(KeeperFeatureFlag::FILTERED_LIST));
+    ASSERT_TRUE(feature_flags.isEnabled(KeeperFeatureFlag::MULTI_READ));
+    ASSERT_FALSE(feature_flags.isEnabled(KeeperFeatureFlag::CHECK_NOT_EXISTS));
 }
 TEST_P(CoordinationTest, TestSystemNodeModify)

View File

@@ -467,6 +467,7 @@ class IColumn;
     M(UInt64, max_fetch_partition_retries_count, 5, "Amount of retries while fetching partition from another host.", 0) \
     M(UInt64, http_max_multipart_form_data_size, 1024 * 1024 * 1024, "Limit on size of multipart/form-data content. This setting cannot be parsed from URL parameters and should be set in user profile. Note that content is parsed and external tables are created in memory before start of query execution. And this is the only limit that has effect on that stage (limits on max memory usage and max execution time have no effect while reading HTTP form data).", 0) \
     M(Bool, calculate_text_stack_trace, true, "Calculate text stack trace in case of exceptions during query execution. This is the default. It requires symbol lookups that may slow down fuzzing tests when huge amount of wrong queries are executed. In normal cases you should not disable this option.", 0) \
+    M(Bool, enable_job_stack_trace, false, "Output stack trace of a job creator when job results in exception", 0) \
     M(Bool, allow_ddl, true, "If it is set to true, then a user is allowed to executed DDL queries.", 0) \
     M(Bool, parallel_view_processing, false, "Enables pushing to attached views concurrently instead of sequentially.", 0) \
     M(Bool, enable_unaligned_array_join, false, "Allow ARRAY JOIN with multiple arrays that have different sizes. When this settings is enabled, arrays will be resized to the longest one.", 0) \

View File

@@ -27,7 +27,7 @@ namespace DB
 using UUID = StrongTypedef<UInt128, struct UUIDTag>;
-using IPv4 = StrongTypedef<UInt32, struct IPv4Tag>;
+struct IPv4;
 struct IPv6;

View File

@@ -19,6 +19,7 @@
 #include <csignal>
 #include <unistd.h>
+#include <algorithm>
 #include <typeinfo>
 #include <iostream>
 #include <fstream>
@@ -153,6 +154,7 @@ static void signalHandler(int sig, siginfo_t * info, void * context)
     writePODBinary(*info, out);
     writePODBinary(signal_context, out);
     writePODBinary(stack_trace, out);
+    writeVectorBinary(Exception::thread_frame_pointers, out);
     writeBinary(static_cast<UInt32>(getThreadId()), out);
     writePODBinary(current_thread, out);
@@ -250,6 +252,7 @@ public:
             siginfo_t info{};
             ucontext_t * context{};
             StackTrace stack_trace(NoCapture{});
+            std::vector<StackTrace::FramePointers> thread_frame_pointers;
             UInt32 thread_num{};
             ThreadStatus * thread_ptr{};
@@ -260,12 +263,13 @@ public:
             }
             readPODBinary(stack_trace, in);
+            readVectorBinary(thread_frame_pointers, in);
             readBinary(thread_num, in);
             readPODBinary(thread_ptr, in);
             /// This allows to receive more signals if failure happens inside onFault function.
             /// Example: segfault while symbolizing stack trace.
-            std::thread([=, this] { onFault(sig, info, context, stack_trace, thread_num, thread_ptr); }).detach();
+            std::thread([=, this] { onFault(sig, info, context, stack_trace, thread_frame_pointers, thread_num, thread_ptr); }).detach();
         }
     }
 }
@@ -300,6 +304,7 @@ private:
         const siginfo_t & info,
         ucontext_t * context,
         const StackTrace & stack_trace,
+        const std::vector<StackTrace::FramePointers> & thread_frame_pointers,
         UInt32 thread_num,
         ThreadStatus * thread_ptr) const
     {
@@ -375,6 +380,31 @@ private:
         /// Write symbolized stack trace line by line for better grep-ability.
         stack_trace.toStringEveryLine([&](std::string_view s) { LOG_FATAL(log, fmt::runtime(s)); });
+        /// In case it's a scheduled job write all previous jobs origins call stacks
+        std::for_each(thread_frame_pointers.rbegin(), thread_frame_pointers.rend(),
+            [this](const StackTrace::FramePointers & frame_pointers)
+            {
+                if (size_t size = std::ranges::find(frame_pointers, nullptr) - frame_pointers.begin())
+                {
+                    LOG_FATAL(log, "========================================");
+                    WriteBufferFromOwnString bare_stacktrace;
+                    writeString("Job's origin stack trace:", bare_stacktrace);
+                    std::for_each_n(frame_pointers.begin(), size,
+                        [&bare_stacktrace](const void * ptr)
+                        {
+                            writeChar(' ', bare_stacktrace);
+                            writePointerHex(ptr, bare_stacktrace);
+                        }
+                    );
+                    LOG_FATAL(log, fmt::runtime(bare_stacktrace.str()));
+                    StackTrace::toStringEveryLine(const_cast<void **>(frame_pointers.data()), 0, size, [this](std::string_view s) { LOG_FATAL(log, fmt::runtime(s)); });
+                }
+            }
+        );
+
 #if defined(OS_LINUX)
         /// Write information about binary checksum. It can be difficult to calculate, so do it only after printing stack trace.
         /// Please keep the below log messages in-sync with the ones in programs/server/Server.cpp
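Note: for the block above to have anything to print, the scheduling thread's stack must be captured when a job is created and installed thread-locally in the worker; `Exception::thread_frame_pointers` is presumably exactly that state. A minimal sketch of the capture idea, with hypothetical names (only `StackTrace` and its `getFramePointers()` accessor are taken from the codebase):

    #include <vector>
    #include <Common/StackTrace.h>

    // Snapshot of the scheduler's stack, visible to the fatal-error handler.
    thread_local std::vector<StackTrace::FramePointers> thread_frame_pointers;

    template <typename Job>
    auto wrapJobWithOrigin(Job job)  // wrapJobWithOrigin: hypothetical
    {
        auto origin = thread_frame_pointers;                 // ancestors' stacks, if any
        origin.push_back(StackTrace().getFramePointers());   // plus the scheduler's own
        return [job = std::move(job), origin = std::move(origin)]() mutable
        {
            thread_frame_pointers = std::move(origin);       // installed in the worker thread
            job();
        };
    }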

View File

@@ -69,7 +69,7 @@ void DataTypeMap::assertKeyType() const
     if (!checkKeyType(key_type))
         throw Exception(ErrorCodes::BAD_ARGUMENTS,
             "Type of Map key must be a type, that can be represented by integer "
-            "or String or FixedString (possibly LowCardinality) or UUID,"
+            "or String or FixedString (possibly LowCardinality) or UUID or IPv6,"
             " but {} given", key_type->getName());
 }
@@ -120,6 +120,7 @@ bool DataTypeMap::checkKeyType(DataTypePtr key_type)
     else if (!key_type->isValueRepresentedByInteger()
         && !isStringOrFixedString(*key_type)
         && !WhichDataType(key_type).isNothing()
+        && !WhichDataType(key_type).isIPv6()
         && !WhichDataType(key_type).isUUID())
     {
         return false;
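Note: with `isIPv6` accepted by `checkKeyType()`, a declaration like the following becomes valid (illustrative, not taken from the diff):

    CREATE TABLE traffic_by_peer
    (
        ts DateTime,
        bytes_by_addr Map(IPv6, UInt64)
    )
    ENGINE = MergeTree
    ORDER BY ts;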

View File

@@ -1293,6 +1293,16 @@ void DatabaseReplicated::commitAlterTable(const StorageID & table_id,
     assert(checkDigestValid(query_context));
 }
+bool DatabaseReplicated::canExecuteReplicatedMetadataAlter() const
+{
+    /// ReplicatedMergeTree may call commitAlterTable from its background threads when executing ALTER_METADATA entries.
+    /// It may update the metadata digest (both locally and in ZooKeeper)
+    /// before DatabaseReplicatedDDLWorker::initializeReplication() has finished.
+    /// We should not update metadata until the database is initialized.
+    return ddl_worker && ddl_worker->isCurrentlyActive();
+}
+
 void DatabaseReplicated::detachTablePermanently(ContextPtr local_context, const String & table_name)
 {
     auto txn = local_context->getZooKeeperMetadataTransaction();

View File

@@ -48,6 +48,8 @@ public:
     /// then it will be executed on all replicas.
     BlockIO tryEnqueueReplicatedDDL(const ASTPtr & query, ContextPtr query_context, bool internal) override;
+    bool canExecuteReplicatedMetadataAlter() const override;
+
     bool hasReplicationThread() const override { return true; }
     void stopReplication() override;

View File

@@ -91,12 +91,12 @@ void DatabaseReplicatedDDLWorker::initializeReplication()
     if (zookeeper->tryGet(database->replica_path + "/digest", digest_str))
     {
         digest = parse<UInt64>(digest_str);
-        LOG_TRACE(log, "Metadata digest in ZooKeeper: {}", digest);
         std::lock_guard lock{database->metadata_mutex};
         local_digest = database->tables_metadata_digest;
     }
     else
     {
-        LOG_WARNING(log, "Did not find digest in ZooKeeper, creating it");
         /// Database was created by old ClickHouse versions, let's create the node
         std::lock_guard lock{database->metadata_mutex};
         digest = local_digest = database->tables_metadata_digest;
@@ -104,6 +104,9 @@ void DatabaseReplicatedDDLWorker::initializeReplication()
         zookeeper->create(database->replica_path + "/digest", digest_str, zkutil::CreateMode::Persistent);
     }
+    LOG_TRACE(log, "Trying to initialize replication: our_log_ptr={}, max_log_ptr={}, local_digest={}, zk_digest={}",
+              our_log_ptr, max_log_ptr, local_digest, digest);
+
     bool is_new_replica = our_log_ptr == 0;
     bool lost_according_to_log_ptr = our_log_ptr + logs_to_keep < max_log_ptr;
     bool lost_according_to_digest = database->db_settings.check_consistency && local_digest != digest;
@@ -158,7 +161,7 @@ bool DatabaseReplicatedDDLWorker::waitForReplicaToProcessAllEntries(UInt64 timeo
     LOG_TRACE(log, "Waiting for worker thread to process all entries before {}, current task is {}", max_log, current_task);
     bool processed = wait_current_task_change.wait_for(lock, std::chrono::milliseconds(timeout_ms), [&]()
     {
-        return zookeeper->expired() || current_task == max_log || stop_flag;
+        return zookeeper->expired() || current_task >= max_log || stop_flag;
     });
     if (!processed)

View File

@@ -254,6 +254,9 @@ public:
         throw Exception(ErrorCodes::NOT_IMPLEMENTED, "{}: alterTable() is not supported", getEngineName());
     }
+    /// Special method for ReplicatedMergeTree and DatabaseReplicated
+    virtual bool canExecuteReplicatedMetadataAlter() const { return true; }
+
     /// Returns time of table's metadata change, 0 if there is no corresponding metadata file.
     virtual time_t getObjectMetadataModificationTime(const String & /*name*/) const
     {

View File

@@ -10,6 +10,7 @@
 #include <Common/ConcurrentBoundedQueue.h>
 #include <Common/CurrentMetrics.h>
 #include <Common/MemoryTrackerBlockerInThread.h>
+#include <Common/scope_guard_safe.h>
 #include <Core/Defines.h>
@@ -69,6 +70,11 @@ public:
             shards_queues[shard].emplace(backlog);
             pool.scheduleOrThrowOnError([this, shard, thread_group = CurrentThread::getGroup()]
             {
+                SCOPE_EXIT_SAFE(
+                    if (thread_group)
+                        CurrentThread::detachFromGroupIfNotDetached();
+                );
+
                 /// Do not account memory that was occupied by the dictionaries for the query/user context.
                 MemoryTrackerBlockerInThread memory_blocker;
@@ -230,6 +236,11 @@ HashedDictionary<dictionary_key_type, sparse, sharded>::~HashedDictionary()
         pool.trySchedule([&container, thread_group = CurrentThread::getGroup()]
         {
+            SCOPE_EXIT_SAFE(
+                if (thread_group)
+                    CurrentThread::detachFromGroupIfNotDetached();
+            );
+
             /// Do not account memory that was occupied by the dictionaries for the query/user context.
             MemoryTrackerBlockerInThread memory_blocker;
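Note: the guard added in both lambdas follows the usual pool-job pattern: attach to the query's thread group for accounting and guarantee the detach on every exit path, including exceptions. Sketched in isolation below; the attach call sits outside the quoted hunks and is an assumption here:

    pool.scheduleOrThrowOnError([thread_group = CurrentThread::getGroup()]
    {
        SCOPE_EXIT_SAFE(
            if (thread_group)
                CurrentThread::detachFromGroupIfNotDetached();  // runs even on exception
        );

        if (thread_group)
            CurrentThread::attachToGroupIfDetached(thread_group);

        // ... per-shard work ...
    });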

View File

@@ -479,7 +479,7 @@ std::pair<String, bool> processBackRefs(const String & data, const re2_st::RE2 &
     for (const auto & item : pieces)
     {
         if (item.ref_num >= 0 && item.ref_num < 10)
-            result += matches[item.ref_num].ToString();
+            result += String{matches[item.ref_num]};
         else
             result += item.literal;
     }

View File

@@ -160,7 +160,7 @@ CachedOnDiskReadBufferFromFile::getCacheReadBuffer(const FileSegment & file_segm
     if (use_external_buffer)
         local_read_settings.local_fs_buffer_size = 0;

-    auto buf = createReadBufferFromFileBase(path, local_read_settings);
+    auto buf = createReadBufferFromFileBase(path, local_read_settings, std::nullopt, std::nullopt, file_segment.getFlagsForLocalRead());

     if (getFileSizeFromReadBuffer(*buf) == 0)
         throw Exception(ErrorCodes::LOGICAL_ERROR, "Attempt to read from an empty cache file: {}", path);

@@ -510,9 +510,6 @@ bool CachedOnDiskReadBufferFromFile::completeFileSegmentAndGetNext()
     current_file_segment->use();
     implementation_buffer = getImplementationBuffer(*current_file_segment);

-    if (read_type == ReadType::CACHED)
-        current_file_segment->incrementHitsCount();
-
     LOG_TEST(
         log, "New segment range: {}, old range: {}",
         current_file_segment->range().toString(), completed_range.toString());

@@ -855,9 +852,7 @@ bool CachedOnDiskReadBufferFromFile::nextImplStep()
     else
     {
         implementation_buffer = getImplementationBuffer(file_segments->front());
-
-        if (read_type == ReadType::CACHED)
-            file_segments->front().incrementHitsCount();
+        file_segments->front().use();
     }

     chassert(!internal_buffer.empty());
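Taken together, the last two hunks remove the separate `incrementHitsCount()` calls; presumably `FileSegment::use()` now accounts the hit itself, so a caller can no longer mark a segment as used without the hit being counted, or vice versa. A toy model of that invariant, with all names hypothetical rather than the real cache API:

```cpp
#include <cassert>
#include <cstddef>

enum class ReadType { CACHED, REMOTE };

// Toy file segment: use() both marks the access and counts the cache hit,
// so the two updates can never diverge across call sites.
class FileSegment
{
public:
    explicit FileSegment(ReadType type_) : type(type_) {}

    void use()
    {
        ++uses;
        if (type == ReadType::CACHED)
            ++hits; // previously a separate incrementHitsCount() at each caller
    }

    size_t getUses() const { return uses; }
    size_t getHits() const { return hits; }

private:
    ReadType type;
    size_t uses = 0;
    size_t hits = 0;
};

int main()
{
    FileSegment cached(ReadType::CACHED);
    cached.use();
    cached.use();
    assert(cached.getUses() == 2 && cached.getHits() == 2);

    FileSegment remote(ReadType::REMOTE);
    remote.use();
    assert(remote.getUses() == 1 && remote.getHits() == 0);
    return 0;
}
```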
@@ -154,6 +154,8 @@ private:
         using ColVecType = ColumnVectorOrDecimal<Type>;
         const auto col_vec = checkAndGetColumn<ColVecType>(col.column.get());
+        if (col_vec == nullptr)
+            return false;
         return (res = execute<Type, ReturnType>(col_vec)) != nullptr;
     };
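`checkAndGetColumn` returns `nullptr` when the column is not of the requested type, and a lambda like this is typically invoked once per candidate type by a dispatcher that expects `false` for a non-match; without the guard, the null pointer would be passed straight into `execute`. A reduced model of that dispatch pattern, using standard types in place of the ClickHouse column classes:

```cpp
#include <iostream>
#include <variant>
#include <vector>

using ColumnFloat64 = std::vector<double>;
using ColumnInt64 = std::vector<long long>;
using AnyColumn = std::variant<ColumnFloat64, ColumnInt64>;

// Stand-in for checkAndGetColumn: nullptr when the stored type differs.
template <typename T>
const T * checkAndGetColumn(const AnyColumn & col)
{
    return std::get_if<T>(&col);
}

template <typename T>
bool tryExecute(const AnyColumn & col)
{
    const auto * col_vec = checkAndGetColumn<T>(col);
    if (col_vec == nullptr)
        return false; // wrong type: report "no match" instead of dereferencing

    std::cout << "processed " << col_vec->size() << " values\n";
    return true;
}

int main()
{
    AnyColumn col = ColumnInt64{1, 2, 3};

    // The dispatcher probes candidate types until one lambda reports success.
    if (!tryExecute<ColumnFloat64>(col) && !tryExecute<ColumnInt64>(col))
        std::cout << "unsupported column type\n";
    return 0;
}
```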
@@ -17,11 +17,8 @@ namespace DB
 namespace ErrorCodes
 {
     extern const int ILLEGAL_TYPE_OF_ARGUMENT;
-    extern const int ILLEGAL_INDEX;
     extern const int NUMBER_OF_ARGUMENTS_DOESNT_MATCH;
     extern const int NOT_FOUND_COLUMN_IN_BLOCK;
-    extern const int NUMBER_OF_DIMENSIONS_MISMATCHED;
-    extern const int SIZES_OF_ARRAYS_DONT_MATCH;
 }

 namespace

@@ -34,32 +31,14 @@ class FunctionTupleElement : public IFunction
 {
 public:
     static constexpr auto name = "tupleElement";
-    static FunctionPtr create(ContextPtr)
-    {
-        return std::make_shared<FunctionTupleElement>();
-    }
-
-    String getName() const override
-    {
-        return name;
-    }
-
+    static FunctionPtr create(ContextPtr) { return std::make_shared<FunctionTupleElement>(); }
+    String getName() const override { return name; }
     bool isVariadic() const override { return true; }
-
-    size_t getNumberOfArguments() const override
-    {
-        return 0;
-    }
-
-    bool useDefaultImplementationForConstants() const override
-    {
-        return true;
-    }
-
+    size_t getNumberOfArguments() const override { return 0; }
+    bool useDefaultImplementationForConstants() const override { return true; }
     ColumnNumbers getArgumentsThatAreAlwaysConstant() const override { return {1}; }
     bool useDefaultImplementationForNulls() const override { return false; }
     bool isSuitableForShortCircuitArgumentsExecution(const DataTypesWithConstInfo & /*arguments*/) const override { return false; }

     DataTypePtr getReturnTypeImpl(const ColumnsWithTypeAndName & arguments) const override

@@ -72,194 +51,112 @@ public:
             getName(), number_of_arguments);

         size_t count_arrays = 0;
-        const IDataType * tuple_col = arguments[0].type.get();
-        while (const DataTypeArray * array = checkAndGetDataType<DataTypeArray>(tuple_col))
+        const IDataType * input_type = arguments[0].type.get();
+        while (const DataTypeArray * array = checkAndGetDataType<DataTypeArray>(input_type))
         {
-            tuple_col = array->getNestedType().get();
+            input_type = array->getNestedType().get();
             ++count_arrays;
         }

-        const DataTypeTuple * tuple = checkAndGetDataType<DataTypeTuple>(tuple_col);
+        const DataTypeTuple * tuple = checkAndGetDataType<DataTypeTuple>(input_type);
         if (!tuple)
             throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
                 "First argument for function {} must be tuple or array of tuple. Actual {}",
                 getName(),
                 arguments[0].type->getName());

-        auto index = getElementNum(arguments[1].column, *tuple, number_of_arguments);
+        std::optional<size_t> index = getElementIndex(arguments[1].column, *tuple, number_of_arguments);
         if (index.has_value())
         {
-            DataTypePtr out_return_type = tuple->getElements()[index.value()];
-
+            DataTypePtr return_type = tuple->getElements()[index.value()];
             for (; count_arrays; --count_arrays)
-                out_return_type = std::make_shared<DataTypeArray>(out_return_type);
-
-            return out_return_type;
+                return_type = std::make_shared<DataTypeArray>(return_type);
+            return return_type;
         }
         else
-        {
-            const IDataType * default_col = arguments[2].type.get();
-            size_t default_argument_count_arrays = 0;
-            if (const DataTypeArray * array = checkAndGetDataType<DataTypeArray>(default_col))
-            {
-                default_argument_count_arrays = array->getNumberOfDimensions();
-            }
-
-            if (count_arrays != default_argument_count_arrays)
-            {
-                throw Exception(ErrorCodes::NUMBER_OF_DIMENSIONS_MISMATCHED,
-                    "Dimension of types mismatched between first argument and third argument. "
-                    "Dimension of 1st argument: {}. "
-                    "Dimension of 3rd argument: {}.",count_arrays, default_argument_count_arrays);
-            }
-
             return arguments[2].type;
-        }
     }

     ColumnPtr executeImpl(const ColumnsWithTypeAndName & arguments, const DataTypePtr &, size_t input_rows_count) const override
     {
-        Columns array_offsets;
+        const auto & input_arg = arguments[0];
+        const IDataType * input_type = input_arg.type.get();
+        const IColumn * input_col = input_arg.column.get();

-        const auto & first_arg = arguments[0];
-
-        const IDataType * tuple_type = first_arg.type.get();
-        const IColumn * tuple_col = first_arg.column.get();
-        bool first_arg_is_const = false;
-        if (typeid_cast<const ColumnConst *>(tuple_col))
+        bool input_arg_is_const = false;
+        if (typeid_cast<const ColumnConst *>(input_col))
         {
-            tuple_col = assert_cast<const ColumnConst *>(tuple_col)->getDataColumnPtr().get();
-            first_arg_is_const = true;
+            input_col = assert_cast<const ColumnConst *>(input_col)->getDataColumnPtr().get();
+            input_arg_is_const = true;
         }

-        while (const DataTypeArray * array_type = checkAndGetDataType<DataTypeArray>(tuple_type))
-        {
-            const ColumnArray * array_col = assert_cast<const ColumnArray *>(tuple_col);
-
-            tuple_type = array_type->getNestedType().get();
-            tuple_col = &array_col->getData();
+        Columns array_offsets;
+        while (const DataTypeArray * array_type = checkAndGetDataType<DataTypeArray>(input_type))
+        {
+            const ColumnArray * array_col = assert_cast<const ColumnArray *>(input_col);
+            input_type = array_type->getNestedType().get();
+            input_col = &array_col->getData();
             array_offsets.push_back(array_col->getOffsetsPtr());
         }

-        const DataTypeTuple * tuple_type_concrete = checkAndGetDataType<DataTypeTuple>(tuple_type);
-        const ColumnTuple * tuple_col_concrete = checkAndGetColumn<ColumnTuple>(tuple_col);
-        if (!tuple_type_concrete || !tuple_col_concrete)
+        const DataTypeTuple * input_type_as_tuple = checkAndGetDataType<DataTypeTuple>(input_type);
+        const ColumnTuple * input_col_as_tuple = checkAndGetColumn<ColumnTuple>(input_col);
+        if (!input_type_as_tuple || !input_col_as_tuple)
             throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
-                "First argument for function {} must be tuple or array of tuple. Actual {}",
-                getName(),
-                first_arg.type->getName());
+                "First argument for function {} must be tuple or array of tuple. Actual {}", getName(), input_arg.type->getName());

-        auto index = getElementNum(arguments[1].column, *tuple_type_concrete, arguments.size());
+        std::optional<size_t> index = getElementIndex(arguments[1].column, *input_type_as_tuple, arguments.size());
         if (!index.has_value())
-        {
-            if (!array_offsets.empty())
-            {
-                recursiveCheckArrayOffsets(arguments[0].column, arguments[2].column, array_offsets.size());
-            }
             return arguments[2].column;
-        }

-        ColumnPtr res = tuple_col_concrete->getColumns()[index.value()];
+        ColumnPtr res = input_col_as_tuple->getColumns()[index.value()];

         /// Wrap into Arrays
         for (auto it = array_offsets.rbegin(); it != array_offsets.rend(); ++it)
             res = ColumnArray::create(res, *it);

-        if (first_arg_is_const)
-        {
+        if (input_arg_is_const)
             res = ColumnConst::create(res, input_rows_count);
-        }

         return res;
     }

 private:
-
-    void recursiveCheckArrayOffsets(ColumnPtr col_x, ColumnPtr col_y, size_t depth) const
-    {
-        for (size_t i = 1; i < depth; ++i)
-        {
-            checkArrayOffsets(col_x, col_y);
-            col_x = assert_cast<const ColumnArray *>(col_x.get())->getDataPtr();
-            col_y = assert_cast<const ColumnArray *>(col_y.get())->getDataPtr();
-        }
-        checkArrayOffsets(col_x, col_y);
-    }
-
-    void checkArrayOffsets(ColumnPtr col_x, ColumnPtr col_y) const
-    {
-        if (isColumnConst(*col_x))
-        {
-            checkArrayOffsetsWithFirstArgConst(col_x, col_y);
-        }
-        else if (isColumnConst(*col_y))
-        {
-            checkArrayOffsetsWithFirstArgConst(col_y, col_x);
-        }
-        else
-        {
-            const auto & array_x = *assert_cast<const ColumnArray *>(col_x.get());
-            const auto & array_y = *assert_cast<const ColumnArray *>(col_y.get());
-            if (!array_x.hasEqualOffsets(array_y))
-            {
-                throw Exception(ErrorCodes::SIZES_OF_ARRAYS_DONT_MATCH,
-                    "The argument 1 and argument 3 of function {} have different array sizes", getName());
-            }
-        }
-    }
-
-    void checkArrayOffsetsWithFirstArgConst(ColumnPtr col_x, ColumnPtr col_y) const
-    {
-        col_x = assert_cast<const ColumnConst *>(col_x.get())->getDataColumnPtr();
-        col_y = col_y->convertToFullColumnIfConst();
-
-        const auto & array_x = *assert_cast<const ColumnArray *>(col_x.get());
-        const auto & array_y = *assert_cast<const ColumnArray *>(col_y.get());
-
-        const auto & offsets_x = array_x.getOffsets();
-        const auto & offsets_y = array_y.getOffsets();
-
-        ColumnArray::Offset prev_offset = 0;
-        size_t row_size = offsets_y.size();
-        for (size_t row = 0; row < row_size; ++row)
-        {
-            if (unlikely(offsets_x[0] != offsets_y[row] - prev_offset))
-            {
-                throw Exception(ErrorCodes::SIZES_OF_ARRAYS_DONT_MATCH,
-                    "The argument 1 and argument 3 of function {} have different array sizes", getName());
-            }
-            prev_offset = offsets_y[row];
-        }
-    }
-
-    std::optional<size_t> getElementNum(const ColumnPtr & index_column, const DataTypeTuple & tuple, const size_t argument_size) const
+    std::optional<size_t> getElementIndex(const ColumnPtr & index_column, const DataTypeTuple & tuple, size_t argument_size) const
     {
         if (checkAndGetColumnConst<ColumnUInt8>(index_column.get())
             || checkAndGetColumnConst<ColumnUInt16>(index_column.get())
             || checkAndGetColumnConst<ColumnUInt32>(index_column.get())
             || checkAndGetColumnConst<ColumnUInt64>(index_column.get()))
         {
-            size_t index = index_column->getUInt(0);
-            if (index == 0)
-                throw Exception(ErrorCodes::ILLEGAL_INDEX, "Indices in tuples are 1-based.");
-
-            if (index > tuple.getElements().size())
-                throw Exception(ErrorCodes::ILLEGAL_INDEX, "Index for tuple element is out of range.");
-
-            return std::optional<size_t>(index - 1);
+            const size_t index = index_column->getUInt(0);
+
+            if (index > 0 && index <= tuple.getElements().size())
+                return {index - 1};
+            else
+            {
+                if (argument_size == 2)
+                    throw Exception(ErrorCodes::NOT_FOUND_COLUMN_IN_BLOCK, "Tuple doesn't have element with index '{}'", index);
+                return std::nullopt;
+            }
         }
         else if (const auto * name_col = checkAndGetColumnConst<ColumnString>(index_column.get()))
         {
-            auto index = tuple.tryGetPositionByName(name_col->getValue<String>());
-            if (index.has_value())
-            {
-                return index;
-            }
-
-            if (argument_size == 2)
-            {
-                throw Exception(ErrorCodes::NOT_FOUND_COLUMN_IN_BLOCK, "Tuple doesn't have element with name '{}'", name_col->getValue<String>());
-            }
-            return std::nullopt;
+            std::optional<size_t> index = tuple.tryGetPositionByName(name_col->getValue<String>());
+
+            if (index.has_value())
+                return index;
+            else
+            {
+                if (argument_size == 2)
+                    throw Exception(ErrorCodes::NOT_FOUND_COLUMN_IN_BLOCK, "Tuple doesn't have element with name '{}'", name_col->getValue<String>());
+                return std::nullopt;
+            }
         }
         else
             throw Exception(ErrorCodes::ILLEGAL_TYPE_OF_ARGUMENT,
                 "Second argument to {} must be a constant UInt or String",

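The net behavioral change in `getElementIndex` is that a constant index or name that misses the tuple no longer throws `ILLEGAL_INDEX` (and the dimension checks on the third argument are gone); instead it yields `std::nullopt`, and the callers above then return the third, default argument. A reduced model of just that resolution contract, with the DataTypeTuple machinery elided:

```cpp
#include <iostream>
#include <optional>
#include <stdexcept>
#include <string>

// Reduced model of the new getElementIndex contract: a 1-based index in range
// maps to a 0-based position; out of range throws only when no default
// argument was supplied (argument_size == 2), otherwise yields nullopt.
std::optional<size_t> getElementIndex(size_t index, size_t tuple_size, size_t argument_size)
{
    if (index > 0 && index <= tuple_size)
        return index - 1;

    if (argument_size == 2)
        throw std::runtime_error("Tuple doesn't have element with index '" + std::to_string(index) + "'");
    return std::nullopt;
}

int main()
{
    // tuple (a, b, c): index 2 resolves to position 1.
    std::cout << *getElementIndex(2, 3, 2) << "\n"; // prints 1

    // Index 5 with a default argument present: caller falls back to the default.
    std::cout << getElementIndex(5, 3, 3).has_value() << "\n"; // prints 0

    // Without a default argument the miss is still an error.
    try { getElementIndex(0, 3, 2); }
    catch (const std::exception & e) { std::cout << e.what() << "\n"; }
    return 0;
}
```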
Some files were not shown because too many files have changed in this diff.