Merge branch 'master' into joelynch/disk-encrypted-missing-method

Joe Lynch 2024-09-12 11:58:41 +02:00
commit 92351a67e8
860 changed files with 18059 additions and 5340 deletions


@@ -130,6 +130,7 @@ jobs:
with:
build_name: package_debug
data: ${{ needs.RunConfig.outputs.data }}
force: true
BuilderBinDarwin:
needs: [RunConfig, BuildDockers]
if: ${{ !failure() && !cancelled() }}


@@ -30,7 +30,6 @@
* Support more variants of JOIN strictness (`LEFT/RIGHT SEMI/ANTI/ANY JOIN`) with inequality conditions which involve columns from both the left and right tables, e.g. `t1.y < t2.y` (see the setting `allow_experimental_join_condition`). [#64281](https://github.com/ClickHouse/ClickHouse/pull/64281) ([lgbo](https://github.com/lgbo-ustc)).
* Interpret Hive-style partitioning for different engines (`File`, `URL`, `S3`, `AzureBlobStorage`, `HDFS`). Hive-style partitioning organizes data into partitioned sub-directories, making it efficient to query and manage large datasets. Currently, it only creates virtual columns with the appropriate name and data. The follow-up PR will introduce the appropriate data filtering (performance speedup). [#65997](https://github.com/ClickHouse/ClickHouse/pull/65997) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Add function `printf` for Spark compatibility (but you can use the existing `format` function). [#66257](https://github.com/ClickHouse/ClickHouse/pull/66257) ([李扬](https://github.com/taiyang-li)).
* Added a new server setting, `disable_insertion_and_mutation`. If it is enabled, the server will deny all insertions and mutations. This includes asynchronous INSERTs. This setting can be used to create read-only replicas. [#66519](https://github.com/ClickHouse/ClickHouse/pull/66519) ([Xu Jia](https://github.com/XuJia0210)).
* Add options `restore_replace_external_engines_to_null` and `restore_replace_external_table_functions_to_null` to replace external engines and table functions with the `Null` engine, which can be useful for testing. It should work for RESTORE and explicit table creation. [#66536](https://github.com/ClickHouse/ClickHouse/pull/66536) ([Ilya Yatsishin](https://github.com/qoega)).
* Added support for reading `MULTILINESTRING` geometry in `WKT` format using function `readWKTLineString`. [#67647](https://github.com/ClickHouse/ClickHouse/pull/67647) ([Jacob Reckhard](https://github.com/jacobrec)).
* Add a new table function `fuzzQuery`. This function allows the modification of a given query string with random variations. Example: `SELECT query FROM fuzzQuery('SELECT 1') LIMIT 5;`. [#67655](https://github.com/ClickHouse/ClickHouse/pull/67655) ([pufit](https://github.com/pufit)).

CITATION.cff (new file)

@@ -0,0 +1,31 @@
# This CITATION.cff file was generated with cffinit.
cff-version: 1.2.0
title: "ClickHouse"
message: "If you use this software, please cite it as below."
type: software
authors:
  - family-names: "Milovidov"
    given-names: "Alexey"
repository-code: 'https://github.com/ClickHouse/ClickHouse'
url: 'https://clickhouse.com'
license: Apache-2.0
preferred-citation:
  type: article
  authors:
    - family-names: "Schulze"
      given-names: "Robert"
    - family-names: "Schreiber"
      given-names: "Tom"
    - family-names: "Yatsishin"
      given-names: "Ilya"
    - family-names: "Dahimene"
      given-names: "Ryadh"
    - family-names: "Milovidov"
      given-names: "Alexey"
  journal: "Proceedings of the VLDB Endowment"
  title: "ClickHouse - Lightning Fast Analytics for Everyone"
  year: 2024
  volume: 17
  issue: 12
  doi: 10.14778/3685800.3685802


@@ -34,7 +34,7 @@ curl https://clickhouse.com/ | sh
Every month we get together with the community (users, contributors, customers, those interested in learning more about ClickHouse) to discuss what is coming in the latest release. If you are interested in sharing what you've built on ClickHouse, let us know.
* [v24.8 Community Call](https://clickhouse.com/company/events/v24-8-community-release-call) - August 20
* [v24.9 Community Call](https://clickhouse.com/company/events/v24-9-community-release-call) - September 26
## Upcoming Events
@@ -42,23 +42,37 @@ Keep an eye out for upcoming meetups and events around the world. Somewhere else
The following upcoming meetups will feature the creator of ClickHouse and CTO, Alexey Milovidov:
* [ClickHouse Guangzhou User Group Meetup](https://mp.weixin.qq.com/s/GSvo-7xUoVzCsuUvlLTpCw) - August 25
* [San Francisco Meetup (Cloudflare)](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/302540575) - September 5
* [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/clickhouse-nc-meetup-group/events/302557230) - September 9
* [Raleigh Meetup (Deutsche Bank)](https://www.meetup.com/triangletechtalks/events/302723486/) - September 9
* [New York Meetup (Rokt)](https://www.meetup.com/clickhouse-new-york-user-group/events/302575342) - September 10
* [Chicago Meetup (Jump Capital)](https://lu.ma/43tvmrfw) - September 12
Other upcoming meetups
* [Toronto Meetup (Shopify)](https://www.meetup.com/clickhouse-toronto-user-group/events/301490855/) - September 10
* [Austin Meetup](https://www.meetup.com/clickhouse-austin-user-group/events/302558689/) - September 17
* [London Meetup](https://www.meetup.com/clickhouse-london-user-group/events/302977267) - September 17
* [Bangalore Meetup](https://www.meetup.com/clickhouse-bangalore-user-group/events/303208274/) - September 18
* [Tel Aviv Meetup](https://www.meetup.com/clickhouse-meetup-israel/events/303095121) - September 22
* [Jakarta Meetup](https://www.meetup.com/clickhouse-indonesia-user-group/events/303191359/) - October 1
* [Singapore Meetup](https://www.meetup.com/clickhouse-singapore-meetup-group/events/303212064/) - October 3
* [Madrid Meetup](https://www.meetup.com/clickhouse-spain-user-group/events/303096564/) - October 22
* [Barcelona Meetup](https://www.meetup.com/clickhouse-spain-user-group/events/303096876/) - October 29
* [Oslo Meetup](https://www.meetup.com/open-source-real-time-data-warehouse-real-time-analytics/events/302938622) - October 31
* [Ghent Meetup](https://www.meetup.com/clickhouse-belgium-user-group/events/303049405/) - November 19
* [Dubai Meetup](https://www.meetup.com/clickhouse-dubai-meetup-group/events/303096989/) - November 21
* [Paris Meetup](https://www.meetup.com/clickhouse-france-user-group/events/303096434) - November 26
Recently completed events
* [ClickHouse Guangzhou User Group Meetup](https://mp.weixin.qq.com/s/GSvo-7xUoVzCsuUvlLTpCw) - August 25
* [Seattle Meetup (Statsig)](https://www.meetup.com/clickhouse-seattle-user-group/events/302518075/) - August 27
* [Melbourne Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302732666/) - August 27
* [Sydney Meetup](https://www.meetup.com/clickhouse-australia-user-group/events/302862966/) - September 5
* [Zurich Meetup](https://www.meetup.com/clickhouse-switzerland-meetup-group/events/302267429/) - September 5
* [Toronto Meetup (Shopify)](https://www.meetup.com/clickhouse-toronto-user-group/events/301490855/) - September 10
* [London Meetup](https://www.meetup.com/clickhouse-london-user-group/events/302977267) - September 17
* [San Francisco Meetup (Cloudflare)](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/302540575) - September 5
## Recent Recordings
* **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible, recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"
* **Recording available**: [**v24.4 Release Call**](https://www.youtube.com/watch?v=dtUqgcfOGmE) All the features of 24.4, one convenient video! Watch it now!
* **Recording available**: [**v24.8 LTS Release Call**](https://www.youtube.com/watch?v=AeLmp2jc51k) All the features of 24.8 LTS, one convenient video! Watch it now!
## Interested in joining ClickHouse and making it your full-time job?


@@ -66,13 +66,11 @@ TRAP(gethostbyname)
TRAP(gethostbyname2)
TRAP(gethostent)
TRAP(getlogin)
TRAP(getmntent)
TRAP(getnetbyaddr)
TRAP(getnetbyname)
TRAP(getnetent)
TRAP(getnetgrent)
TRAP(getnetgrent_r)
TRAP(getopt)
TRAP(getopt_long)
TRAP(getopt_long_only)
TRAP(getpass)
@@ -133,7 +131,6 @@ TRAP(nrand48)
TRAP(__ppc_get_timebase_freq)
TRAP(ptsname)
TRAP(putchar_unlocked)
TRAP(putenv)
TRAP(pututline)
TRAP(pututxline)
TRAP(putwchar_unlocked)
@@ -148,7 +145,6 @@ TRAP(sethostent)
TRAP(sethostid)
TRAP(setkey)
//TRAP(setlocale) // Used by replxx at startup
TRAP(setlogmask)
TRAP(setnetent)
TRAP(setnetgrent)
TRAP(setprotoent)
@@ -203,7 +199,6 @@ TRAP(lgammal)
TRAP(nftw)
TRAP(nl_langinfo)
TRAP(putc_unlocked)
TRAP(rand)
/** In the current POSIX.1 specification (POSIX.1-2008), readdir() is not required to be thread-safe. However, in modern
* implementations (including the glibc implementation), concurrent calls to readdir() that specify different directory streams
* are thread-safe. In cases where multiple threads must read from the same directory stream, using readdir() with external
@@ -288,4 +283,14 @@ TRAP(tss_get)
TRAP(tss_set)
TRAP(tss_delete)
#ifndef USE_MUSL
/// These produce duplicate symbol errors when statically linking with musl.
/// Maybe we can remove them from the musl fork.
TRAP(getopt)
TRAP(putenv)
TRAP(setlogmask)
TRAP(rand)
TRAP(getmntent)
#endif
#endif


@@ -48,25 +48,17 @@ std::string PathImpl::currentImpl()
std::string PathImpl::homeImpl()
{
std::string path;
#if defined(_POSIX_C_SOURCE) || defined(_BSD_SOURCE)
size_t buf_size = 1024; // Same as glibc use for getpwuid
std::vector<char> buf(buf_size);
struct passwd res;
struct passwd* pwd = nullptr;
getpwuid_r(getuid(), &res, buf.data(), buf_size, &pwd);
#else
struct passwd* pwd = getpwuid(getuid());
#endif
if (pwd)
path = pwd->pw_dir;
else
{
#if defined(_POSIX_C_SOURCE) || defined(_BSD_SOURCE)
getpwuid_r(getuid(), &res, buf.data(), buf_size, &pwd);
#else
pwd = getpwuid(geteuid());
#endif
if (pwd)
path = pwd->pw_dir;
else


@@ -18,7 +18,9 @@
#define Net_HTTPResponse_INCLUDED
#include <map>
#include <vector>
#include "Poco/Net/HTTPCookie.h"
#include "Poco/Net/HTTPMessage.h"
#include "Poco/Net/Net.h"
@@ -180,6 +182,8 @@ namespace Net
/// May throw an exception in case of a malformed
/// Set-Cookie header.
void getHeaders(std::map<std::string, std::string> & headers) const;
void write(std::ostream & ostr) const;
/// Writes the HTTP response to the given
/// output stream.


@@ -19,6 +19,8 @@
#include <ios>
#include <memory>
#include <functional>
#include "Poco/Any.h"
#include "Poco/Buffer.h"
#include "Poco/Exception.h"
@@ -33,6 +35,27 @@ namespace Net
{
class IHTTPSessionDataHooks
/// Interface to control the stream of data bytes being sent or received through a socket by HTTPSession.
/// It allows monitoring, throttling and scheduling of data streams with syscall granularity.
{
public:
virtual ~IHTTPSessionDataHooks() = default;
virtual void atStart(int bytes) = 0;
/// Called before sending/receiving `bytes` bytes to/from the socket.
virtual void atFinish(int bytes) = 0;
/// Called when sending/receiving of `bytes` bytes has successfully finished.
virtual void atFail() = 0;
/// Called instead of atFinish() if an error occurs during send/receive.
};
using HTTPSessionDataHooksPtr = std::shared_ptr<IHTTPSessionDataHooks>;
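As an illustration of the hook contract above, here is a minimal sketch of a byte-counting hook. Only the IHTTPSessionDataHooks interface and the setSendDataHooks()/setReceiveDataHooks() setters come from this change; the class and usage below are hypothetical.

// Hypothetical sketch (not part of this change): a hook that counts transferred bytes.
#include <atomic>
#include <cstdint>
#include <memory>
#include "Poco/Net/HTTPSession.h"

class ByteCountingHooks : public Poco::Net::IHTTPSessionDataHooks
{
public:
    void atStart(int /*bytes*/) override { }                     // nothing to do before the syscall
    void atFinish(int bytes) override { transferred += bytes; }  // bytes successfully sent/received
    void atFail() override { ++failures; }                       // called instead of atFinish() on error

    std::atomic<int64_t> transferred{0};
    std::atomic<int64_t> failures{0};
};

// Usage: attach to an HTTPSession-derived session, e.g.
//   session.setSendDataHooks(std::make_shared<ByteCountingHooks>());
//   session.setReceiveDataHooks(std::make_shared<ByteCountingHooks>());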
class Net_API HTTPSession
/// HTTPSession implements basic HTTP session management
/// for both HTTP clients and HTTP servers.
@@ -73,6 +96,12 @@ namespace Net
Poco::Timespan getReceiveTimeout() const;
/// Returns receive timeout for the HTTP session.
void setSendDataHooks(const HTTPSessionDataHooksPtr & sendDataHooks = {});
/// Sets data hooks that will be called on every send to the socket.
void setReceiveDataHooks(const HTTPSessionDataHooksPtr & receiveDataHooks = {});
/// Sets data hooks that will be called on every receive from the socket.
bool connected() const;
/// Returns true if the underlying socket is connected.
@@ -211,6 +240,10 @@ namespace Net
Poco::Exception * _pException;
Poco::Any _data;
// Data hooks
HTTPSessionDataHooksPtr _sendDataHooks;
HTTPSessionDataHooksPtr _receiveDataHooks;
friend class HTTPStreamBuf;
friend class HTTPHeaderStreamBuf;
friend class HTTPFixedLengthStreamBuf;
@@ -246,6 +279,16 @@ namespace Net
return _receiveTimeout;
}
inline void HTTPSession::setSendDataHooks(const HTTPSessionDataHooksPtr & sendDataHooks)
{
_sendDataHooks = sendDataHooks;
}
inline void HTTPSession::setReceiveDataHooks(const HTTPSessionDataHooksPtr & receiveDataHooks)
{
_receiveDataHooks = receiveDataHooks;
}
inline StreamSocket & HTTPSession::socket()
{
return _socket;


@@ -209,6 +209,15 @@ void HTTPResponse::getCookies(std::vector<HTTPCookie>& cookies) const
}
}
void HTTPResponse::getHeaders(std::map<std::string, std::string> & headers) const
{
headers.clear();
for (const auto & it : *this)
{
headers.emplace(it.first, it.second);
}
}
void HTTPResponse::write(std::ostream& ostr) const
{


@@ -166,10 +166,17 @@ int HTTPSession::write(const char* buffer, std::streamsize length)
{
try
{
return _socket.sendBytes(buffer, (int) length);
if (_sendDataHooks)
_sendDataHooks->atStart((int) length);
int result = _socket.sendBytes(buffer, (int) length);
if (_sendDataHooks)
_sendDataHooks->atFinish(result);
return result;
}
catch (Poco::Exception& exc)
{
if (_sendDataHooks)
_sendDataHooks->atFail();
setException(exc);
throw;
}
@@ -180,10 +187,17 @@ int HTTPSession::receive(char* buffer, int length)
{
try
{
return _socket.receiveBytes(buffer, length);
if (_receiveDataHooks)
_receiveDataHooks->atStart(length);
int result = _socket.receiveBytes(buffer, length);
if (_receiveDataHooks)
_receiveDataHooks->atFinish(result);
return result;
}
catch (Poco::Exception& exc)
{
if (_receiveDataHooks)
_receiveDataHooks->atFail();
setException(exc);
throw;
}


@@ -311,6 +311,14 @@ int SecureSocketImpl::sendBytes(const void* buffer, int length, int flags)
while (mustRetry(rc, remaining_time));
if (rc <= 0)
{
// At this stage we can still have a last, not yet received SSL message containing an SSL error,
// so perform a read to force SSL to process the possible error.
if (SSL_get_error(_pSSL, rc) == SSL_ERROR_SYSCALL && SocketImpl::lastError() == POCO_ECONNRESET)
{
char c = 0;
SSL_read(_pSSL, &c, 1);
}
rc = handleError(rc);
if (rc == 0) throw SSLConnectionUnexpectedlyClosedException();
}


@@ -18,4 +18,4 @@ target_compile_options (_poco_util
-Wno-zero-as-null-pointer-constant
)
target_include_directories (_poco_util SYSTEM PUBLIC "include")
target_link_libraries (_poco_util PUBLIC Poco::JSON Poco::XML)
target_link_libraries (_poco_util PUBLIC Poco::JSON Poco::XML Poco::Net)


@@ -241,6 +241,20 @@ namespace Util
/// If the value contains references to other properties (${<property>}), these
/// are expanded.
std::string getHost(const std::string & key) const;
/// Returns the string value of the host property with the given name.
/// Throws a NotFoundException if the key does not exist.
/// Throws a SyntaxException if the property is not a valid host (IP address or domain).
/// If the value contains references to other properties (${<property>}), these
/// are expanded.
std::string getHost(const std::string & key, const std::string & defaultValue) const;
/// If a property with the given key exists, returns the host property's string value,
/// otherwise returns the given default value.
/// Throws a SyntaxException if the property is not a valid host (IP address or domain).
/// If the value contains references to other properties (${<property>}), these
/// are expanded.
virtual void setString(const std::string & key, const std::string & value);
/// Sets the property with the given key to the given value.
/// An already existing value for the key is overwritten.
@@ -339,12 +353,35 @@
static bool parseBool(const std::string & value);
void setRawWithEvent(const std::string & key, std::string value);
static void checkHostValidity(const std::string & value);
/// Throws a SyntaxException if the value is not a valid host (IP address or domain).
virtual ~AbstractConfiguration();
private:
std::string internalExpand(const std::string & value) const;
std::string uncheckedExpand(const std::string & value) const;
static bool isValidIPv4Address(const std::string & value);
/// An IPv4 address is considered valid if it is "0.0.0.0" or one of those
/// accepted by inet_aton() or inet_addr().
static bool isValidIPv6Address(const std::string & value);
/// An IPv6 address is considered valid if it is "::" or one of those accepted
/// by inet_pton() with the AF_INET6 flag (in that case it may have a scope id
/// and may be surrounded by '[' and ']').
static bool isValidDomainName(const std::string & value);
/// <domain> ::= <subdomain> [ "." ]
/// <subdomain> ::= <label> | <subdomain> "." <label>
/// <label> ::= <letter> [ [ <ldh-str> ] <let-dig> ]
/// <ldh-str> ::= <let-dig-hyp> | <let-dig-hyp> <ldh-str>
/// <let-dig-hyp> ::= <let-dig> | "-"
/// <let-dig> ::= <letter> | <digit>
/// <letter> ::= any one of the 52 alphabetic characters A through Z in
/// upper case and a through z in lower case
/// <digit> ::= any one of the ten digits 0 through 9
AbstractConfiguration(const AbstractConfiguration &);
AbstractConfiguration & operator=(const AbstractConfiguration &);


@@ -18,6 +18,7 @@
#include "Poco/NumberParser.h"
#include "Poco/NumberFormatter.h"
#include "Poco/String.h"
#include "Poco/Net/IPAddressImpl.h"
using Poco::Mutex;
@@ -263,6 +264,41 @@ bool AbstractConfiguration::getBool(const std::string& key, bool defaultValue) const
}
std::string AbstractConfiguration::getHost(const std::string& key) const
{
Mutex::ScopedLock lock(_mutex);
std::string value;
if (getRaw(key, value))
{
std::string expandedValue = internalExpand(value);
checkHostValidity(expandedValue);
return expandedValue;
}
else
throw NotFoundException(key);
}
std::string AbstractConfiguration::getHost(const std::string& key, const std::string& defaultValue) const
{
Mutex::ScopedLock lock(_mutex);
std::string value;
if (getRaw(key, value))
{
std::string expandedValue = internalExpand(value);
checkHostValidity(expandedValue);
return expandedValue;
}
else
{
checkHostValidity(defaultValue);
return defaultValue;
}
}
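A short hypothetical usage sketch of the new getters follows; the key names are invented and `config` stands for any concrete AbstractConfiguration subclass.

// Hypothetical usage sketch, not part of this change. getHost() behaves like
// getString() but additionally validates that the value is an IP address or domain name.
std::string listen = config.getHost("server.listen_host");        // throws NotFoundException if the key is absent,
                                                                   // SyntaxException if the value is not a valid host
std::string backup = config.getHost("backup.host", "127.0.0.1");  // the default value is validated as well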
void AbstractConfiguration::setString(const std::string& key, const std::string& value)
{
setRawWithEvent(key, value);
@@ -529,4 +565,68 @@ void AbstractConfiguration::setRawWithEvent(const std::string& key, std::string value)
}
void AbstractConfiguration::checkHostValidity(const std::string& value)
{
if (!isValidIPv4Address(value) && !isValidIPv6Address(value) && !isValidDomainName(value))
{
throw SyntaxException("Property is not a valid host name", value);
}
}
bool AbstractConfiguration::isValidIPv4Address(const std::string& value)
{
using Poco::Net::Impl::IPv4AddressImpl;
IPv4AddressImpl empty4 = IPv4AddressImpl();
IPv4AddressImpl ipAddress = IPv4AddressImpl::parse(value);
return ipAddress != empty4 || value == "0.0.0.0";
}
bool AbstractConfiguration::isValidIPv6Address(const std::string& value)
{
#if defined(POCO_HAVE_IPv6)
using Poco::Net::Impl::IPv6AddressImpl;
IPv6AddressImpl empty6 = IPv6AddressImpl();
IPv6AddressImpl ipAddress = IPv6AddressImpl::parse(value);
return ipAddress != empty6 || value == "::";
#else
return false;
#endif
}
bool AbstractConfiguration::isValidDomainName(const std::string& value)
{
if (value.empty() || value == "." || value.length() > 253)
return false;
int labelLength = 0;
char oldChar = 0;
for (char ch : value)
{
if (ch == '.')
{
if (labelLength == 0 || labelLength > 63 || oldChar == '-')
return false;
labelLength = 0;
}
else if (isalnum(ch) || ch == '-')
{
if (labelLength == 0 && (ch == '-' || isdigit(ch)))
return false;
++labelLength;
}
else
{
return false;
}
oldChar = ch;
}
return oldChar == '.' || (labelLength > 0 && labelLength <= 63 && oldChar != '-');
}
} } // namespace Poco::Util


@@ -8,4 +8,7 @@ set (CMAKE_CXX_COMPILER_TARGET "x86_64-pc-freebsd11")
set (CMAKE_ASM_COMPILER_TARGET "x86_64-pc-freebsd11")
set (CMAKE_SYSROOT "${CMAKE_CURRENT_LIST_DIR}/../../contrib/sysroot/freebsd-x86_64")
# dprintf is used in a patched version of replxx
add_compile_definitions(_WITH_DPRINTF)
set (CMAKE_TRY_COMPILE_TARGET_TYPE STATIC_LIBRARY) # disable linkage check - it doesn't work in CMake

contrib/curl vendored

@@ -1 +1 @@
Subproject commit de7b3e89218467159a7af72d58cea8425946e97d
Subproject commit 83bedbd730d62b83744cc26fa0433d3f6e2e4cd6

contrib/grpc vendored

@@ -1 +1 @@
Subproject commit 1716359d2e28d304a250f9df0e6c0ccad03de8db
Subproject commit 7bc3abe952aba1dc7bce7f2f790dc781cb51a41e


@@ -15,162 +15,64 @@ set(ICUDATA_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/icudata/")
# These lists of sources were generated from build log of the original ICU build system (configure + make).
set(ICUUC_SOURCES
"${ICU_SOURCE_DIR}/common/errorcode.cpp"
"${ICU_SOURCE_DIR}/common/putil.cpp"
"${ICU_SOURCE_DIR}/common/umath.cpp"
"${ICU_SOURCE_DIR}/common/utypes.cpp"
"${ICU_SOURCE_DIR}/common/uinvchar.cpp"
"${ICU_SOURCE_DIR}/common/umutex.cpp"
"${ICU_SOURCE_DIR}/common/ucln_cmn.cpp"
"${ICU_SOURCE_DIR}/common/uinit.cpp"
"${ICU_SOURCE_DIR}/common/uobject.cpp"
"${ICU_SOURCE_DIR}/common/cmemory.cpp"
"${ICU_SOURCE_DIR}/common/charstr.cpp"
"${ICU_SOURCE_DIR}/common/cstr.cpp"
"${ICU_SOURCE_DIR}/common/udata.cpp"
"${ICU_SOURCE_DIR}/common/ucmndata.cpp"
"${ICU_SOURCE_DIR}/common/udatamem.cpp"
"${ICU_SOURCE_DIR}/common/umapfile.cpp"
"${ICU_SOURCE_DIR}/common/udataswp.cpp"
"${ICU_SOURCE_DIR}/common/utrie_swap.cpp"
"${ICU_SOURCE_DIR}/common/ucol_swp.cpp"
"${ICU_SOURCE_DIR}/common/utrace.cpp"
"${ICU_SOURCE_DIR}/common/uhash.cpp"
"${ICU_SOURCE_DIR}/common/uhash_us.cpp"
"${ICU_SOURCE_DIR}/common/uenum.cpp"
"${ICU_SOURCE_DIR}/common/ustrenum.cpp"
"${ICU_SOURCE_DIR}/common/uvector.cpp"
"${ICU_SOURCE_DIR}/common/ustack.cpp"
"${ICU_SOURCE_DIR}/common/uvectr32.cpp"
"${ICU_SOURCE_DIR}/common/uvectr64.cpp"
"${ICU_SOURCE_DIR}/common/ucnv.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_bld.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_cnv.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_io.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_cb.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_err.cpp"
"${ICU_SOURCE_DIR}/common/ucnvlat1.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u7.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u8.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u16.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u32.cpp"
"${ICU_SOURCE_DIR}/common/ucnvscsu.cpp"
"${ICU_SOURCE_DIR}/common/ucnvbocu.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_ext.cpp"
"${ICU_SOURCE_DIR}/common/ucnvmbcs.cpp"
"${ICU_SOURCE_DIR}/common/ucnv2022.cpp"
"${ICU_SOURCE_DIR}/common/ucnvhz.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_lmb.cpp"
"${ICU_SOURCE_DIR}/common/ucnvisci.cpp"
"${ICU_SOURCE_DIR}/common/ucnvdisp.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_set.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_ct.cpp"
"${ICU_SOURCE_DIR}/common/resource.cpp"
"${ICU_SOURCE_DIR}/common/uresbund.cpp"
"${ICU_SOURCE_DIR}/common/ures_cnv.cpp"
"${ICU_SOURCE_DIR}/common/uresdata.cpp"
"${ICU_SOURCE_DIR}/common/resbund.cpp"
"${ICU_SOURCE_DIR}/common/resbund_cnv.cpp"
"${ICU_SOURCE_DIR}/common/ucurr.cpp"
"${ICU_SOURCE_DIR}/common/localebuilder.cpp"
"${ICU_SOURCE_DIR}/common/localeprioritylist.cpp"
"${ICU_SOURCE_DIR}/common/messagepattern.cpp"
"${ICU_SOURCE_DIR}/common/ucat.cpp"
"${ICU_SOURCE_DIR}/common/locmap.cpp"
"${ICU_SOURCE_DIR}/common/uloc.cpp"
"${ICU_SOURCE_DIR}/common/locid.cpp"
"${ICU_SOURCE_DIR}/common/locutil.cpp"
"${ICU_SOURCE_DIR}/common/locavailable.cpp"
"${ICU_SOURCE_DIR}/common/locdispnames.cpp"
"${ICU_SOURCE_DIR}/common/locdspnm.cpp"
"${ICU_SOURCE_DIR}/common/loclikely.cpp"
"${ICU_SOURCE_DIR}/common/locresdata.cpp"
"${ICU_SOURCE_DIR}/common/lsr.cpp"
"${ICU_SOURCE_DIR}/common/loclikelysubtags.cpp"
"${ICU_SOURCE_DIR}/common/locdistance.cpp"
"${ICU_SOURCE_DIR}/common/localematcher.cpp"
"${ICU_SOURCE_DIR}/common/bytestream.cpp"
"${ICU_SOURCE_DIR}/common/stringpiece.cpp"
"${ICU_SOURCE_DIR}/common/bytesinkutil.cpp"
"${ICU_SOURCE_DIR}/common/stringtriebuilder.cpp"
"${ICU_SOURCE_DIR}/common/bytestriebuilder.cpp"
"${ICU_SOURCE_DIR}/common/bytestrie.cpp"
"${ICU_SOURCE_DIR}/common/bytestrieiterator.cpp"
"${ICU_SOURCE_DIR}/common/ucharstrie.cpp"
"${ICU_SOURCE_DIR}/common/ucharstriebuilder.cpp"
"${ICU_SOURCE_DIR}/common/ucharstrieiterator.cpp"
"${ICU_SOURCE_DIR}/common/dictionarydata.cpp"
"${ICU_SOURCE_DIR}/common/edits.cpp"
"${ICU_SOURCE_DIR}/common/appendable.cpp"
"${ICU_SOURCE_DIR}/common/ustr_cnv.cpp"
"${ICU_SOURCE_DIR}/common/unistr_cnv.cpp"
"${ICU_SOURCE_DIR}/common/unistr.cpp"
"${ICU_SOURCE_DIR}/common/unistr_case.cpp"
"${ICU_SOURCE_DIR}/common/unistr_props.cpp"
"${ICU_SOURCE_DIR}/common/utf_impl.cpp"
"${ICU_SOURCE_DIR}/common/ustring.cpp"
"${ICU_SOURCE_DIR}/common/ustrcase.cpp"
"${ICU_SOURCE_DIR}/common/ucasemap.cpp"
"${ICU_SOURCE_DIR}/common/ucasemap_titlecase_brkiter.cpp"
"${ICU_SOURCE_DIR}/common/cstring.cpp"
"${ICU_SOURCE_DIR}/common/ustrfmt.cpp"
"${ICU_SOURCE_DIR}/common/ustrtrns.cpp"
"${ICU_SOURCE_DIR}/common/ustr_wcs.cpp"
"${ICU_SOURCE_DIR}/common/utext.cpp"
"${ICU_SOURCE_DIR}/common/unistr_case_locale.cpp"
"${ICU_SOURCE_DIR}/common/ustrcase_locale.cpp"
"${ICU_SOURCE_DIR}/common/unistr_titlecase_brkiter.cpp"
"${ICU_SOURCE_DIR}/common/ustr_titlecase_brkiter.cpp"
"${ICU_SOURCE_DIR}/common/normalizer2impl.cpp"
"${ICU_SOURCE_DIR}/common/normalizer2.cpp"
"${ICU_SOURCE_DIR}/common/filterednormalizer2.cpp"
"${ICU_SOURCE_DIR}/common/normlzr.cpp"
"${ICU_SOURCE_DIR}/common/unorm.cpp"
"${ICU_SOURCE_DIR}/common/unormcmp.cpp"
"${ICU_SOURCE_DIR}/common/loadednormalizer2impl.cpp"
"${ICU_SOURCE_DIR}/common/chariter.cpp"
"${ICU_SOURCE_DIR}/common/schriter.cpp"
"${ICU_SOURCE_DIR}/common/uchriter.cpp"
"${ICU_SOURCE_DIR}/common/uiter.cpp"
"${ICU_SOURCE_DIR}/common/patternprops.cpp"
"${ICU_SOURCE_DIR}/common/uchar.cpp"
"${ICU_SOURCE_DIR}/common/uprops.cpp"
"${ICU_SOURCE_DIR}/common/ucase.cpp"
"${ICU_SOURCE_DIR}/common/propname.cpp"
"${ICU_SOURCE_DIR}/common/ubidi_props.cpp"
"${ICU_SOURCE_DIR}/common/characterproperties.cpp"
"${ICU_SOURCE_DIR}/common/ubidi.cpp"
"${ICU_SOURCE_DIR}/common/ubidiwrt.cpp"
"${ICU_SOURCE_DIR}/common/ubidiln.cpp"
"${ICU_SOURCE_DIR}/common/ushape.cpp"
"${ICU_SOURCE_DIR}/common/uscript.cpp"
"${ICU_SOURCE_DIR}/common/uscript_props.cpp"
"${ICU_SOURCE_DIR}/common/usc_impl.cpp"
"${ICU_SOURCE_DIR}/common/unames.cpp"
"${ICU_SOURCE_DIR}/common/utrie.cpp"
"${ICU_SOURCE_DIR}/common/utrie2.cpp"
"${ICU_SOURCE_DIR}/common/utrie2_builder.cpp"
"${ICU_SOURCE_DIR}/common/ucptrie.cpp"
"${ICU_SOURCE_DIR}/common/umutablecptrie.cpp"
"${ICU_SOURCE_DIR}/common/bmpset.cpp"
"${ICU_SOURCE_DIR}/common/unisetspan.cpp"
"${ICU_SOURCE_DIR}/common/uset_props.cpp"
"${ICU_SOURCE_DIR}/common/uniset_props.cpp"
"${ICU_SOURCE_DIR}/common/uniset_closure.cpp"
"${ICU_SOURCE_DIR}/common/uset.cpp"
"${ICU_SOURCE_DIR}/common/uniset.cpp"
"${ICU_SOURCE_DIR}/common/usetiter.cpp"
"${ICU_SOURCE_DIR}/common/ruleiter.cpp"
"${ICU_SOURCE_DIR}/common/caniter.cpp"
"${ICU_SOURCE_DIR}/common/unifilt.cpp"
"${ICU_SOURCE_DIR}/common/unifunct.cpp"
"${ICU_SOURCE_DIR}/common/uarrsort.cpp"
"${ICU_SOURCE_DIR}/common/brkiter.cpp"
"${ICU_SOURCE_DIR}/common/ubrk.cpp"
"${ICU_SOURCE_DIR}/common/brkeng.cpp"
"${ICU_SOURCE_DIR}/common/brkiter.cpp"
"${ICU_SOURCE_DIR}/common/bytesinkutil.cpp"
"${ICU_SOURCE_DIR}/common/bytestream.cpp"
"${ICU_SOURCE_DIR}/common/bytestrie.cpp"
"${ICU_SOURCE_DIR}/common/bytestriebuilder.cpp"
"${ICU_SOURCE_DIR}/common/bytestrieiterator.cpp"
"${ICU_SOURCE_DIR}/common/caniter.cpp"
"${ICU_SOURCE_DIR}/common/characterproperties.cpp"
"${ICU_SOURCE_DIR}/common/chariter.cpp"
"${ICU_SOURCE_DIR}/common/charstr.cpp"
"${ICU_SOURCE_DIR}/common/cmemory.cpp"
"${ICU_SOURCE_DIR}/common/cstr.cpp"
"${ICU_SOURCE_DIR}/common/cstring.cpp"
"${ICU_SOURCE_DIR}/common/cwchar.cpp"
"${ICU_SOURCE_DIR}/common/dictbe.cpp"
"${ICU_SOURCE_DIR}/common/dictionarydata.cpp"
"${ICU_SOURCE_DIR}/common/dtintrv.cpp"
"${ICU_SOURCE_DIR}/common/edits.cpp"
"${ICU_SOURCE_DIR}/common/emojiprops.cpp"
"${ICU_SOURCE_DIR}/common/errorcode.cpp"
"${ICU_SOURCE_DIR}/common/filteredbrk.cpp"
"${ICU_SOURCE_DIR}/common/filterednormalizer2.cpp"
"${ICU_SOURCE_DIR}/common/icudataver.cpp"
"${ICU_SOURCE_DIR}/common/icuplug.cpp"
"${ICU_SOURCE_DIR}/common/loadednormalizer2impl.cpp"
"${ICU_SOURCE_DIR}/common/localebuilder.cpp"
"${ICU_SOURCE_DIR}/common/localematcher.cpp"
"${ICU_SOURCE_DIR}/common/localeprioritylist.cpp"
"${ICU_SOURCE_DIR}/common/locavailable.cpp"
"${ICU_SOURCE_DIR}/common/locbased.cpp"
"${ICU_SOURCE_DIR}/common/locdispnames.cpp"
"${ICU_SOURCE_DIR}/common/locdistance.cpp"
"${ICU_SOURCE_DIR}/common/locdspnm.cpp"
"${ICU_SOURCE_DIR}/common/locid.cpp"
"${ICU_SOURCE_DIR}/common/loclikely.cpp"
"${ICU_SOURCE_DIR}/common/loclikelysubtags.cpp"
"${ICU_SOURCE_DIR}/common/locmap.cpp"
"${ICU_SOURCE_DIR}/common/locresdata.cpp"
"${ICU_SOURCE_DIR}/common/locutil.cpp"
"${ICU_SOURCE_DIR}/common/lsr.cpp"
"${ICU_SOURCE_DIR}/common/lstmbe.cpp"
"${ICU_SOURCE_DIR}/common/messagepattern.cpp"
"${ICU_SOURCE_DIR}/common/mlbe.cpp"
"${ICU_SOURCE_DIR}/common/normalizer2.cpp"
"${ICU_SOURCE_DIR}/common/normalizer2impl.cpp"
"${ICU_SOURCE_DIR}/common/normlzr.cpp"
"${ICU_SOURCE_DIR}/common/parsepos.cpp"
"${ICU_SOURCE_DIR}/common/patternprops.cpp"
"${ICU_SOURCE_DIR}/common/pluralmap.cpp"
"${ICU_SOURCE_DIR}/common/propname.cpp"
"${ICU_SOURCE_DIR}/common/propsvec.cpp"
"${ICU_SOURCE_DIR}/common/punycode.cpp"
"${ICU_SOURCE_DIR}/common/putil.cpp"
"${ICU_SOURCE_DIR}/common/rbbi.cpp"
"${ICU_SOURCE_DIR}/common/rbbi_cache.cpp"
"${ICU_SOURCE_DIR}/common/rbbidata.cpp"
"${ICU_SOURCE_DIR}/common/rbbinode.cpp"
"${ICU_SOURCE_DIR}/common/rbbirb.cpp"
@@ -178,166 +80,180 @@ set(ICUUC_SOURCES
"${ICU_SOURCE_DIR}/common/rbbisetb.cpp"
"${ICU_SOURCE_DIR}/common/rbbistbl.cpp"
"${ICU_SOURCE_DIR}/common/rbbitblb.cpp"
"${ICU_SOURCE_DIR}/common/rbbi_cache.cpp"
"${ICU_SOURCE_DIR}/common/resbund.cpp"
"${ICU_SOURCE_DIR}/common/resbund_cnv.cpp"
"${ICU_SOURCE_DIR}/common/resource.cpp"
"${ICU_SOURCE_DIR}/common/restrace.cpp"
"${ICU_SOURCE_DIR}/common/ruleiter.cpp"
"${ICU_SOURCE_DIR}/common/schriter.cpp"
"${ICU_SOURCE_DIR}/common/serv.cpp"
"${ICU_SOURCE_DIR}/common/servnotf.cpp"
"${ICU_SOURCE_DIR}/common/servls.cpp"
"${ICU_SOURCE_DIR}/common/servlk.cpp"
"${ICU_SOURCE_DIR}/common/servlkf.cpp"
"${ICU_SOURCE_DIR}/common/servls.cpp"
"${ICU_SOURCE_DIR}/common/servnotf.cpp"
"${ICU_SOURCE_DIR}/common/servrbf.cpp"
"${ICU_SOURCE_DIR}/common/servslkf.cpp"
"${ICU_SOURCE_DIR}/common/uidna.cpp"
"${ICU_SOURCE_DIR}/common/usprep.cpp"
"${ICU_SOURCE_DIR}/common/uts46.cpp"
"${ICU_SOURCE_DIR}/common/punycode.cpp"
"${ICU_SOURCE_DIR}/common/util.cpp"
"${ICU_SOURCE_DIR}/common/util_props.cpp"
"${ICU_SOURCE_DIR}/common/parsepos.cpp"
"${ICU_SOURCE_DIR}/common/locbased.cpp"
"${ICU_SOURCE_DIR}/common/cwchar.cpp"
"${ICU_SOURCE_DIR}/common/wintz.cpp"
"${ICU_SOURCE_DIR}/common/dtintrv.cpp"
"${ICU_SOURCE_DIR}/common/ucnvsel.cpp"
"${ICU_SOURCE_DIR}/common/propsvec.cpp"
"${ICU_SOURCE_DIR}/common/ulist.cpp"
"${ICU_SOURCE_DIR}/common/uloc_tag.cpp"
"${ICU_SOURCE_DIR}/common/icudataver.cpp"
"${ICU_SOURCE_DIR}/common/icuplug.cpp"
"${ICU_SOURCE_DIR}/common/sharedobject.cpp"
"${ICU_SOURCE_DIR}/common/simpleformatter.cpp"
"${ICU_SOURCE_DIR}/common/unifiedcache.cpp"
"${ICU_SOURCE_DIR}/common/uloc_keytype.cpp"
"${ICU_SOURCE_DIR}/common/ubiditransform.cpp"
"${ICU_SOURCE_DIR}/common/pluralmap.cpp"
"${ICU_SOURCE_DIR}/common/static_unicode_sets.cpp"
"${ICU_SOURCE_DIR}/common/restrace.cpp"
"${ICU_SOURCE_DIR}/common/emojiprops.cpp"
"${ICU_SOURCE_DIR}/common/lstmbe.cpp")
"${ICU_SOURCE_DIR}/common/stringpiece.cpp"
"${ICU_SOURCE_DIR}/common/stringtriebuilder.cpp"
"${ICU_SOURCE_DIR}/common/uarrsort.cpp"
"${ICU_SOURCE_DIR}/common/ubidi.cpp"
"${ICU_SOURCE_DIR}/common/ubidi_props.cpp"
"${ICU_SOURCE_DIR}/common/ubidiln.cpp"
"${ICU_SOURCE_DIR}/common/ubiditransform.cpp"
"${ICU_SOURCE_DIR}/common/ubidiwrt.cpp"
"${ICU_SOURCE_DIR}/common/ubrk.cpp"
"${ICU_SOURCE_DIR}/common/ucase.cpp"
"${ICU_SOURCE_DIR}/common/ucasemap.cpp"
"${ICU_SOURCE_DIR}/common/ucasemap_titlecase_brkiter.cpp"
"${ICU_SOURCE_DIR}/common/ucat.cpp"
"${ICU_SOURCE_DIR}/common/uchar.cpp"
"${ICU_SOURCE_DIR}/common/ucharstrie.cpp"
"${ICU_SOURCE_DIR}/common/ucharstriebuilder.cpp"
"${ICU_SOURCE_DIR}/common/ucharstrieiterator.cpp"
"${ICU_SOURCE_DIR}/common/uchriter.cpp"
"${ICU_SOURCE_DIR}/common/ucln_cmn.cpp"
"${ICU_SOURCE_DIR}/common/ucmndata.cpp"
"${ICU_SOURCE_DIR}/common/ucnv.cpp"
"${ICU_SOURCE_DIR}/common/ucnv2022.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_bld.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_cb.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_cnv.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_ct.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_err.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_ext.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_io.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_lmb.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_set.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u16.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u32.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u7.cpp"
"${ICU_SOURCE_DIR}/common/ucnv_u8.cpp"
"${ICU_SOURCE_DIR}/common/ucnvbocu.cpp"
"${ICU_SOURCE_DIR}/common/ucnvdisp.cpp"
"${ICU_SOURCE_DIR}/common/ucnvhz.cpp"
"${ICU_SOURCE_DIR}/common/ucnvisci.cpp"
"${ICU_SOURCE_DIR}/common/ucnvlat1.cpp"
"${ICU_SOURCE_DIR}/common/ucnvmbcs.cpp"
"${ICU_SOURCE_DIR}/common/ucnvscsu.cpp"
"${ICU_SOURCE_DIR}/common/ucnvsel.cpp"
"${ICU_SOURCE_DIR}/common/ucol_swp.cpp"
"${ICU_SOURCE_DIR}/common/ucptrie.cpp"
"${ICU_SOURCE_DIR}/common/ucurr.cpp"
"${ICU_SOURCE_DIR}/common/udata.cpp"
"${ICU_SOURCE_DIR}/common/udatamem.cpp"
"${ICU_SOURCE_DIR}/common/udataswp.cpp"
"${ICU_SOURCE_DIR}/common/uenum.cpp"
"${ICU_SOURCE_DIR}/common/uhash.cpp"
"${ICU_SOURCE_DIR}/common/uhash_us.cpp"
"${ICU_SOURCE_DIR}/common/uidna.cpp"
"${ICU_SOURCE_DIR}/common/uinit.cpp"
"${ICU_SOURCE_DIR}/common/uinvchar.cpp"
"${ICU_SOURCE_DIR}/common/uiter.cpp"
"${ICU_SOURCE_DIR}/common/ulist.cpp"
"${ICU_SOURCE_DIR}/common/uloc.cpp"
"${ICU_SOURCE_DIR}/common/uloc_keytype.cpp"
"${ICU_SOURCE_DIR}/common/uloc_tag.cpp"
"${ICU_SOURCE_DIR}/common/ulocale.cpp"
"${ICU_SOURCE_DIR}/common/ulocbuilder.cpp"
"${ICU_SOURCE_DIR}/common/umapfile.cpp"
"${ICU_SOURCE_DIR}/common/umath.cpp"
"${ICU_SOURCE_DIR}/common/umutablecptrie.cpp"
"${ICU_SOURCE_DIR}/common/umutex.cpp"
"${ICU_SOURCE_DIR}/common/unames.cpp"
"${ICU_SOURCE_DIR}/common/unifiedcache.cpp"
"${ICU_SOURCE_DIR}/common/unifilt.cpp"
"${ICU_SOURCE_DIR}/common/unifunct.cpp"
"${ICU_SOURCE_DIR}/common/uniset.cpp"
"${ICU_SOURCE_DIR}/common/uniset_closure.cpp"
"${ICU_SOURCE_DIR}/common/uniset_props.cpp"
"${ICU_SOURCE_DIR}/common/unisetspan.cpp"
"${ICU_SOURCE_DIR}/common/unistr.cpp"
"${ICU_SOURCE_DIR}/common/unistr_case.cpp"
"${ICU_SOURCE_DIR}/common/unistr_case_locale.cpp"
"${ICU_SOURCE_DIR}/common/unistr_cnv.cpp"
"${ICU_SOURCE_DIR}/common/unistr_props.cpp"
"${ICU_SOURCE_DIR}/common/unistr_titlecase_brkiter.cpp"
"${ICU_SOURCE_DIR}/common/unorm.cpp"
"${ICU_SOURCE_DIR}/common/unormcmp.cpp"
"${ICU_SOURCE_DIR}/common/uobject.cpp"
"${ICU_SOURCE_DIR}/common/uprops.cpp"
"${ICU_SOURCE_DIR}/common/ures_cnv.cpp"
"${ICU_SOURCE_DIR}/common/uresbund.cpp"
"${ICU_SOURCE_DIR}/common/uresdata.cpp"
"${ICU_SOURCE_DIR}/common/usc_impl.cpp"
"${ICU_SOURCE_DIR}/common/uscript.cpp"
"${ICU_SOURCE_DIR}/common/uscript_props.cpp"
"${ICU_SOURCE_DIR}/common/uset.cpp"
"${ICU_SOURCE_DIR}/common/uset_props.cpp"
"${ICU_SOURCE_DIR}/common/usetiter.cpp"
"${ICU_SOURCE_DIR}/common/ushape.cpp"
"${ICU_SOURCE_DIR}/common/usprep.cpp"
"${ICU_SOURCE_DIR}/common/ustack.cpp"
"${ICU_SOURCE_DIR}/common/ustr_cnv.cpp"
"${ICU_SOURCE_DIR}/common/ustr_titlecase_brkiter.cpp"
"${ICU_SOURCE_DIR}/common/ustr_wcs.cpp"
"${ICU_SOURCE_DIR}/common/ustrcase.cpp"
"${ICU_SOURCE_DIR}/common/ustrcase_locale.cpp"
"${ICU_SOURCE_DIR}/common/ustrenum.cpp"
"${ICU_SOURCE_DIR}/common/ustrfmt.cpp"
"${ICU_SOURCE_DIR}/common/ustring.cpp"
"${ICU_SOURCE_DIR}/common/ustrtrns.cpp"
"${ICU_SOURCE_DIR}/common/utext.cpp"
"${ICU_SOURCE_DIR}/common/utf_impl.cpp"
"${ICU_SOURCE_DIR}/common/util.cpp"
"${ICU_SOURCE_DIR}/common/util_props.cpp"
"${ICU_SOURCE_DIR}/common/utrace.cpp"
"${ICU_SOURCE_DIR}/common/utrie.cpp"
"${ICU_SOURCE_DIR}/common/utrie2.cpp"
"${ICU_SOURCE_DIR}/common/utrie2_builder.cpp"
"${ICU_SOURCE_DIR}/common/utrie_swap.cpp"
"${ICU_SOURCE_DIR}/common/uts46.cpp"
"${ICU_SOURCE_DIR}/common/utypes.cpp"
"${ICU_SOURCE_DIR}/common/uvector.cpp"
"${ICU_SOURCE_DIR}/common/uvectr32.cpp"
"${ICU_SOURCE_DIR}/common/uvectr64.cpp"
"${ICU_SOURCE_DIR}/common/wintz.cpp")
set(ICUI18N_SOURCES
"${ICU_SOURCE_DIR}/i18n/ucln_in.cpp"
"${ICU_SOURCE_DIR}/i18n/fmtable.cpp"
"${ICU_SOURCE_DIR}/i18n/format.cpp"
"${ICU_SOURCE_DIR}/i18n/msgfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/umsg.cpp"
"${ICU_SOURCE_DIR}/i18n/numfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/unum.cpp"
"${ICU_SOURCE_DIR}/i18n/decimfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/dcfmtsym.cpp"
"${ICU_SOURCE_DIR}/i18n/fmtable_cnv.cpp"
"${ICU_SOURCE_DIR}/i18n/choicfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/datefmt.cpp"
"${ICU_SOURCE_DIR}/i18n/smpdtfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/reldtfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/dtfmtsym.cpp"
"${ICU_SOURCE_DIR}/i18n/udat.cpp"
"${ICU_SOURCE_DIR}/i18n/dtptngen.cpp"
"${ICU_SOURCE_DIR}/i18n/udatpg.cpp"
"${ICU_SOURCE_DIR}/i18n/nfrs.cpp"
"${ICU_SOURCE_DIR}/i18n/nfrule.cpp"
"${ICU_SOURCE_DIR}/i18n/nfsubs.cpp"
"${ICU_SOURCE_DIR}/i18n/rbnf.cpp"
"${ICU_SOURCE_DIR}/i18n/numsys.cpp"
"${ICU_SOURCE_DIR}/i18n/unumsys.cpp"
"${ICU_SOURCE_DIR}/i18n/ucsdet.cpp"
"${ICU_SOURCE_DIR}/i18n/ucal.cpp"
"${ICU_SOURCE_DIR}/i18n/calendar.cpp"
"${ICU_SOURCE_DIR}/i18n/gregocal.cpp"
"${ICU_SOURCE_DIR}/i18n/timezone.cpp"
"${ICU_SOURCE_DIR}/i18n/simpletz.cpp"
"${ICU_SOURCE_DIR}/i18n/olsontz.cpp"
"${ICU_SOURCE_DIR}/i18n/alphaindex.cpp"
"${ICU_SOURCE_DIR}/i18n/anytrans.cpp"
"${ICU_SOURCE_DIR}/i18n/astro.cpp"
"${ICU_SOURCE_DIR}/i18n/taiwncal.cpp"
"${ICU_SOURCE_DIR}/i18n/basictz.cpp"
"${ICU_SOURCE_DIR}/i18n/bocsu.cpp"
"${ICU_SOURCE_DIR}/i18n/brktrans.cpp"
"${ICU_SOURCE_DIR}/i18n/buddhcal.cpp"
"${ICU_SOURCE_DIR}/i18n/persncal.cpp"
"${ICU_SOURCE_DIR}/i18n/islamcal.cpp"
"${ICU_SOURCE_DIR}/i18n/japancal.cpp"
"${ICU_SOURCE_DIR}/i18n/gregoimp.cpp"
"${ICU_SOURCE_DIR}/i18n/hebrwcal.cpp"
"${ICU_SOURCE_DIR}/i18n/indiancal.cpp"
"${ICU_SOURCE_DIR}/i18n/chnsecal.cpp"
"${ICU_SOURCE_DIR}/i18n/calendar.cpp"
"${ICU_SOURCE_DIR}/i18n/casetrn.cpp"
"${ICU_SOURCE_DIR}/i18n/cecal.cpp"
"${ICU_SOURCE_DIR}/i18n/coptccal.cpp"
"${ICU_SOURCE_DIR}/i18n/dangical.cpp"
"${ICU_SOURCE_DIR}/i18n/ethpccal.cpp"
"${ICU_SOURCE_DIR}/i18n/chnsecal.cpp"
"${ICU_SOURCE_DIR}/i18n/choicfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/coleitr.cpp"
"${ICU_SOURCE_DIR}/i18n/coll.cpp"
"${ICU_SOURCE_DIR}/i18n/sortkey.cpp"
"${ICU_SOURCE_DIR}/i18n/bocsu.cpp"
"${ICU_SOURCE_DIR}/i18n/ucoleitr.cpp"
"${ICU_SOURCE_DIR}/i18n/ucol.cpp"
"${ICU_SOURCE_DIR}/i18n/ucol_res.cpp"
"${ICU_SOURCE_DIR}/i18n/ucol_sit.cpp"
"${ICU_SOURCE_DIR}/i18n/collation.cpp"
"${ICU_SOURCE_DIR}/i18n/collationsettings.cpp"
"${ICU_SOURCE_DIR}/i18n/collationbuilder.cpp"
"${ICU_SOURCE_DIR}/i18n/collationcompare.cpp"
"${ICU_SOURCE_DIR}/i18n/collationdata.cpp"
"${ICU_SOURCE_DIR}/i18n/collationtailoring.cpp"
"${ICU_SOURCE_DIR}/i18n/collationdatabuilder.cpp"
"${ICU_SOURCE_DIR}/i18n/collationdatareader.cpp"
"${ICU_SOURCE_DIR}/i18n/collationdatawriter.cpp"
"${ICU_SOURCE_DIR}/i18n/collationfastlatin.cpp"
"${ICU_SOURCE_DIR}/i18n/collationfastlatinbuilder.cpp"
"${ICU_SOURCE_DIR}/i18n/collationfcd.cpp"
"${ICU_SOURCE_DIR}/i18n/collationiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/utf16collationiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/utf8collationiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/uitercollationiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/collationsets.cpp"
"${ICU_SOURCE_DIR}/i18n/collationcompare.cpp"
"${ICU_SOURCE_DIR}/i18n/collationfastlatin.cpp"
"${ICU_SOURCE_DIR}/i18n/collationkeys.cpp"
"${ICU_SOURCE_DIR}/i18n/rulebasedcollator.cpp"
"${ICU_SOURCE_DIR}/i18n/collationroot.cpp"
"${ICU_SOURCE_DIR}/i18n/collationrootelements.cpp"
"${ICU_SOURCE_DIR}/i18n/collationdatabuilder.cpp"
"${ICU_SOURCE_DIR}/i18n/collationweights.cpp"
"${ICU_SOURCE_DIR}/i18n/collationruleparser.cpp"
"${ICU_SOURCE_DIR}/i18n/collationbuilder.cpp"
"${ICU_SOURCE_DIR}/i18n/collationfastlatinbuilder.cpp"
"${ICU_SOURCE_DIR}/i18n/listformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/ulistformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/strmatch.cpp"
"${ICU_SOURCE_DIR}/i18n/usearch.cpp"
"${ICU_SOURCE_DIR}/i18n/search.cpp"
"${ICU_SOURCE_DIR}/i18n/stsearch.cpp"
"${ICU_SOURCE_DIR}/i18n/translit.cpp"
"${ICU_SOURCE_DIR}/i18n/utrans.cpp"
"${ICU_SOURCE_DIR}/i18n/esctrn.cpp"
"${ICU_SOURCE_DIR}/i18n/unesctrn.cpp"
"${ICU_SOURCE_DIR}/i18n/funcrepl.cpp"
"${ICU_SOURCE_DIR}/i18n/strrepl.cpp"
"${ICU_SOURCE_DIR}/i18n/tridpars.cpp"
"${ICU_SOURCE_DIR}/i18n/collationsets.cpp"
"${ICU_SOURCE_DIR}/i18n/collationsettings.cpp"
"${ICU_SOURCE_DIR}/i18n/collationtailoring.cpp"
"${ICU_SOURCE_DIR}/i18n/collationweights.cpp"
"${ICU_SOURCE_DIR}/i18n/compactdecimalformat.cpp"
"${ICU_SOURCE_DIR}/i18n/coptccal.cpp"
"${ICU_SOURCE_DIR}/i18n/cpdtrans.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_data.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_pars.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_rule.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_set.cpp"
"${ICU_SOURCE_DIR}/i18n/nultrans.cpp"
"${ICU_SOURCE_DIR}/i18n/remtrans.cpp"
"${ICU_SOURCE_DIR}/i18n/casetrn.cpp"
"${ICU_SOURCE_DIR}/i18n/titletrn.cpp"
"${ICU_SOURCE_DIR}/i18n/tolowtrn.cpp"
"${ICU_SOURCE_DIR}/i18n/toupptrn.cpp"
"${ICU_SOURCE_DIR}/i18n/anytrans.cpp"
"${ICU_SOURCE_DIR}/i18n/name2uni.cpp"
"${ICU_SOURCE_DIR}/i18n/uni2name.cpp"
"${ICU_SOURCE_DIR}/i18n/nortrans.cpp"
"${ICU_SOURCE_DIR}/i18n/quant.cpp"
"${ICU_SOURCE_DIR}/i18n/transreg.cpp"
"${ICU_SOURCE_DIR}/i18n/brktrans.cpp"
"${ICU_SOURCE_DIR}/i18n/regexcmp.cpp"
"${ICU_SOURCE_DIR}/i18n/rematch.cpp"
"${ICU_SOURCE_DIR}/i18n/repattrn.cpp"
"${ICU_SOURCE_DIR}/i18n/regexst.cpp"
"${ICU_SOURCE_DIR}/i18n/regextxt.cpp"
"${ICU_SOURCE_DIR}/i18n/regeximp.cpp"
"${ICU_SOURCE_DIR}/i18n/uregex.cpp"
"${ICU_SOURCE_DIR}/i18n/uregexc.cpp"
"${ICU_SOURCE_DIR}/i18n/ulocdata.cpp"
"${ICU_SOURCE_DIR}/i18n/measfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/currfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/curramt.cpp"
"${ICU_SOURCE_DIR}/i18n/currunit.cpp"
"${ICU_SOURCE_DIR}/i18n/measure.cpp"
"${ICU_SOURCE_DIR}/i18n/utmscale.cpp"
"${ICU_SOURCE_DIR}/i18n/csdetect.cpp"
"${ICU_SOURCE_DIR}/i18n/csmatch.cpp"
"${ICU_SOURCE_DIR}/i18n/csr2022.cpp"
@@ -346,60 +262,80 @@ set(ICUI18N_SOURCES
"${ICU_SOURCE_DIR}/i18n/csrsbcs.cpp"
"${ICU_SOURCE_DIR}/i18n/csrucode.cpp"
"${ICU_SOURCE_DIR}/i18n/csrutf8.cpp"
"${ICU_SOURCE_DIR}/i18n/inputext.cpp"
"${ICU_SOURCE_DIR}/i18n/wintzimpl.cpp"
"${ICU_SOURCE_DIR}/i18n/windtfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/winnmfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/basictz.cpp"
"${ICU_SOURCE_DIR}/i18n/dtrule.cpp"
"${ICU_SOURCE_DIR}/i18n/rbtz.cpp"
"${ICU_SOURCE_DIR}/i18n/tzrule.cpp"
"${ICU_SOURCE_DIR}/i18n/tztrans.cpp"
"${ICU_SOURCE_DIR}/i18n/vtzone.cpp"
"${ICU_SOURCE_DIR}/i18n/zonemeta.cpp"
"${ICU_SOURCE_DIR}/i18n/standardplural.cpp"
"${ICU_SOURCE_DIR}/i18n/upluralrules.cpp"
"${ICU_SOURCE_DIR}/i18n/plurrule.cpp"
"${ICU_SOURCE_DIR}/i18n/plurfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/selfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/curramt.cpp"
"${ICU_SOURCE_DIR}/i18n/currfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/currpinf.cpp"
"${ICU_SOURCE_DIR}/i18n/currunit.cpp"
"${ICU_SOURCE_DIR}/i18n/dangical.cpp"
"${ICU_SOURCE_DIR}/i18n/datefmt.cpp"
"${ICU_SOURCE_DIR}/i18n/dayperiodrules.cpp"
"${ICU_SOURCE_DIR}/i18n/dcfmtsym.cpp"
"${ICU_SOURCE_DIR}/i18n/decContext.cpp"
"${ICU_SOURCE_DIR}/i18n/decNumber.cpp"
"${ICU_SOURCE_DIR}/i18n/decimfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/displayoptions.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-bignum-dtoa.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-bignum.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-cached-powers.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-double-to-string.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-fast-dtoa.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-string-to-double.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-strtod.cpp"
"${ICU_SOURCE_DIR}/i18n/dtfmtsym.cpp"
"${ICU_SOURCE_DIR}/i18n/dtitvfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/dtitvinf.cpp"
"${ICU_SOURCE_DIR}/i18n/udateintervalformat.cpp"
"${ICU_SOURCE_DIR}/i18n/tmunit.cpp"
"${ICU_SOURCE_DIR}/i18n/tmutamt.cpp"
"${ICU_SOURCE_DIR}/i18n/tmutfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/currpinf.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof_build.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof_conf.cpp"
"${ICU_SOURCE_DIR}/i18n/smpdtfst.cpp"
"${ICU_SOURCE_DIR}/i18n/ztrans.cpp"
"${ICU_SOURCE_DIR}/i18n/zrule.cpp"
"${ICU_SOURCE_DIR}/i18n/vzone.cpp"
"${ICU_SOURCE_DIR}/i18n/dtptngen.cpp"
"${ICU_SOURCE_DIR}/i18n/dtrule.cpp"
"${ICU_SOURCE_DIR}/i18n/erarules.cpp"
"${ICU_SOURCE_DIR}/i18n/esctrn.cpp"
"${ICU_SOURCE_DIR}/i18n/ethpccal.cpp"
"${ICU_SOURCE_DIR}/i18n/fmtable.cpp"
"${ICU_SOURCE_DIR}/i18n/fmtable_cnv.cpp"
"${ICU_SOURCE_DIR}/i18n/format.cpp"
"${ICU_SOURCE_DIR}/i18n/formatted_string_builder.cpp"
"${ICU_SOURCE_DIR}/i18n/formattedval_iterimpl.cpp"
"${ICU_SOURCE_DIR}/i18n/formattedval_sbimpl.cpp"
"${ICU_SOURCE_DIR}/i18n/formattedvalue.cpp"
"${ICU_SOURCE_DIR}/i18n/fphdlimp.cpp"
"${ICU_SOURCE_DIR}/i18n/fpositer.cpp"
"${ICU_SOURCE_DIR}/i18n/ufieldpositer.cpp"
"${ICU_SOURCE_DIR}/i18n/decNumber.cpp"
"${ICU_SOURCE_DIR}/i18n/decContext.cpp"
"${ICU_SOURCE_DIR}/i18n/alphaindex.cpp"
"${ICU_SOURCE_DIR}/i18n/tznames.cpp"
"${ICU_SOURCE_DIR}/i18n/tznames_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/tzgnames.cpp"
"${ICU_SOURCE_DIR}/i18n/tzfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/compactdecimalformat.cpp"
"${ICU_SOURCE_DIR}/i18n/funcrepl.cpp"
"${ICU_SOURCE_DIR}/i18n/gender.cpp"
"${ICU_SOURCE_DIR}/i18n/region.cpp"
"${ICU_SOURCE_DIR}/i18n/scriptset.cpp"
"${ICU_SOURCE_DIR}/i18n/uregion.cpp"
"${ICU_SOURCE_DIR}/i18n/reldatefmt.cpp"
"${ICU_SOURCE_DIR}/i18n/quantityformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/gregocal.cpp"
"${ICU_SOURCE_DIR}/i18n/gregoimp.cpp"
"${ICU_SOURCE_DIR}/i18n/hebrwcal.cpp"
"${ICU_SOURCE_DIR}/i18n/indiancal.cpp"
"${ICU_SOURCE_DIR}/i18n/inputext.cpp"
"${ICU_SOURCE_DIR}/i18n/islamcal.cpp"
"${ICU_SOURCE_DIR}/i18n/iso8601cal.cpp"
"${ICU_SOURCE_DIR}/i18n/japancal.cpp"
"${ICU_SOURCE_DIR}/i18n/listformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/measfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/measunit.cpp"
"${ICU_SOURCE_DIR}/i18n/sharedbreakiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/scientificnumberformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/dayperiodrules.cpp"
"${ICU_SOURCE_DIR}/i18n/measunit_extra.cpp"
"${ICU_SOURCE_DIR}/i18n/measure.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_arguments.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_checker.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_data_model.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_errors.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_evaluation.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_formattable.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_formatter.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_function_registry.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_parser.cpp"
"${ICU_SOURCE_DIR}/i18n/messageformat2_serializer.cpp"
"${ICU_SOURCE_DIR}/i18n/msgfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/name2uni.cpp"
"${ICU_SOURCE_DIR}/i18n/nfrs.cpp"
"${ICU_SOURCE_DIR}/i18n/nfrule.cpp"
"${ICU_SOURCE_DIR}/i18n/nfsubs.cpp"
"${ICU_SOURCE_DIR}/i18n/nortrans.cpp"
"${ICU_SOURCE_DIR}/i18n/nultrans.cpp"
"${ICU_SOURCE_DIR}/i18n/number_affixutils.cpp"
"${ICU_SOURCE_DIR}/i18n/number_asformat.cpp"
"${ICU_SOURCE_DIR}/i18n/number_capi.cpp"
"${ICU_SOURCE_DIR}/i18n/number_compact.cpp"
"${ICU_SOURCE_DIR}/i18n/number_currencysymbols.cpp"
"${ICU_SOURCE_DIR}/i18n/number_decimalquantity.cpp"
"${ICU_SOURCE_DIR}/i18n/number_decimfmtprops.cpp"
"${ICU_SOURCE_DIR}/i18n/number_fluent.cpp"
@@ -407,7 +343,9 @@ set(ICUI18N_SOURCES
"${ICU_SOURCE_DIR}/i18n/number_grouping.cpp"
"${ICU_SOURCE_DIR}/i18n/number_integerwidth.cpp"
"${ICU_SOURCE_DIR}/i18n/number_longnames.cpp"
"${ICU_SOURCE_DIR}/i18n/number_mapper.cpp"
"${ICU_SOURCE_DIR}/i18n/number_modifiers.cpp"
"${ICU_SOURCE_DIR}/i18n/number_multiplier.cpp"
"${ICU_SOURCE_DIR}/i18n/number_notation.cpp"
"${ICU_SOURCE_DIR}/i18n/number_output.cpp"
"${ICU_SOURCE_DIR}/i18n/number_padding.cpp"
@@ -415,46 +353,125 @@ set(ICUI18N_SOURCES
"${ICU_SOURCE_DIR}/i18n/number_patternstring.cpp"
"${ICU_SOURCE_DIR}/i18n/number_rounding.cpp"
"${ICU_SOURCE_DIR}/i18n/number_scientific.cpp"
"${ICU_SOURCE_DIR}/i18n/number_utils.cpp"
"${ICU_SOURCE_DIR}/i18n/number_asformat.cpp"
"${ICU_SOURCE_DIR}/i18n/number_mapper.cpp"
"${ICU_SOURCE_DIR}/i18n/number_multiplier.cpp"
"${ICU_SOURCE_DIR}/i18n/number_currencysymbols.cpp"
"${ICU_SOURCE_DIR}/i18n/number_simple.cpp"
"${ICU_SOURCE_DIR}/i18n/number_skeletons.cpp"
"${ICU_SOURCE_DIR}/i18n/number_capi.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-string-to-double.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-double-to-string.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-bignum-dtoa.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-bignum.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-cached-powers.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-fast-dtoa.cpp"
"${ICU_SOURCE_DIR}/i18n/double-conversion-strtod.cpp"
"${ICU_SOURCE_DIR}/i18n/string_segment.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_parsednumber.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_symbols.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_decimal.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_scientific.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_currency.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_affixes.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_compositions.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_validators.cpp"
"${ICU_SOURCE_DIR}/i18n/numrange_fluent.cpp"
"${ICU_SOURCE_DIR}/i18n/numrange_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/erarules.cpp"
"${ICU_SOURCE_DIR}/i18n/formattedvalue.cpp"
"${ICU_SOURCE_DIR}/i18n/formattedval_iterimpl.cpp"
"${ICU_SOURCE_DIR}/i18n/formattedval_sbimpl.cpp"
"${ICU_SOURCE_DIR}/i18n/formatted_string_builder.cpp"
"${ICU_SOURCE_DIR}/i18n/measunit_extra.cpp"
"${ICU_SOURCE_DIR}/i18n/number_symbolswrapper.cpp"
"${ICU_SOURCE_DIR}/i18n/number_usageprefs.cpp"
"${ICU_SOURCE_DIR}/i18n/number_utils.cpp"
"${ICU_SOURCE_DIR}/i18n/numfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_affixes.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_compositions.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_currency.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_decimal.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_parsednumber.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_scientific.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_symbols.cpp"
"${ICU_SOURCE_DIR}/i18n/numparse_validators.cpp"
"${ICU_SOURCE_DIR}/i18n/numrange_capi.cpp"
"${ICU_SOURCE_DIR}/i18n/numrange_fluent.cpp"
"${ICU_SOURCE_DIR}/i18n/numrange_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/numsys.cpp"
"${ICU_SOURCE_DIR}/i18n/olsontz.cpp"
"${ICU_SOURCE_DIR}/i18n/persncal.cpp"
"${ICU_SOURCE_DIR}/i18n/pluralranges.cpp"
"${ICU_SOURCE_DIR}/i18n/plurfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/plurrule.cpp"
"${ICU_SOURCE_DIR}/i18n/quant.cpp"
"${ICU_SOURCE_DIR}/i18n/quantityformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/rbnf.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_data.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_pars.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_rule.cpp"
"${ICU_SOURCE_DIR}/i18n/rbt_set.cpp"
"${ICU_SOURCE_DIR}/i18n/rbtz.cpp"
"${ICU_SOURCE_DIR}/i18n/regexcmp.cpp"
"${ICU_SOURCE_DIR}/i18n/regeximp.cpp"
"${ICU_SOURCE_DIR}/i18n/regexst.cpp"
"${ICU_SOURCE_DIR}/i18n/regextxt.cpp"
"${ICU_SOURCE_DIR}/i18n/region.cpp"
"${ICU_SOURCE_DIR}/i18n/reldatefmt.cpp"
"${ICU_SOURCE_DIR}/i18n/reldtfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/rematch.cpp"
"${ICU_SOURCE_DIR}/i18n/remtrans.cpp"
"${ICU_SOURCE_DIR}/i18n/repattrn.cpp"
"${ICU_SOURCE_DIR}/i18n/rulebasedcollator.cpp"
"${ICU_SOURCE_DIR}/i18n/scientificnumberformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/scriptset.cpp"
"${ICU_SOURCE_DIR}/i18n/search.cpp"
"${ICU_SOURCE_DIR}/i18n/selfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/sharedbreakiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/simpletz.cpp"
"${ICU_SOURCE_DIR}/i18n/smpdtfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/smpdtfst.cpp"
"${ICU_SOURCE_DIR}/i18n/sortkey.cpp"
"${ICU_SOURCE_DIR}/i18n/standardplural.cpp"
"${ICU_SOURCE_DIR}/i18n/string_segment.cpp"
"${ICU_SOURCE_DIR}/i18n/strmatch.cpp"
"${ICU_SOURCE_DIR}/i18n/strrepl.cpp"
"${ICU_SOURCE_DIR}/i18n/stsearch.cpp"
"${ICU_SOURCE_DIR}/i18n/taiwncal.cpp"
"${ICU_SOURCE_DIR}/i18n/timezone.cpp"
"${ICU_SOURCE_DIR}/i18n/titletrn.cpp"
"${ICU_SOURCE_DIR}/i18n/tmunit.cpp"
"${ICU_SOURCE_DIR}/i18n/tmutamt.cpp"
"${ICU_SOURCE_DIR}/i18n/tmutfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/tolowtrn.cpp"
"${ICU_SOURCE_DIR}/i18n/toupptrn.cpp"
"${ICU_SOURCE_DIR}/i18n/translit.cpp"
"${ICU_SOURCE_DIR}/i18n/transreg.cpp"
"${ICU_SOURCE_DIR}/i18n/tridpars.cpp"
"${ICU_SOURCE_DIR}/i18n/tzfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/tzgnames.cpp"
"${ICU_SOURCE_DIR}/i18n/tznames.cpp"
"${ICU_SOURCE_DIR}/i18n/tznames_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/tzrule.cpp"
"${ICU_SOURCE_DIR}/i18n/tztrans.cpp"
"${ICU_SOURCE_DIR}/i18n/ucal.cpp"
"${ICU_SOURCE_DIR}/i18n/ucln_in.cpp"
"${ICU_SOURCE_DIR}/i18n/ucol.cpp"
"${ICU_SOURCE_DIR}/i18n/ucol_res.cpp"
"${ICU_SOURCE_DIR}/i18n/ucol_sit.cpp"
"${ICU_SOURCE_DIR}/i18n/ucoleitr.cpp"
"${ICU_SOURCE_DIR}/i18n/ucsdet.cpp"
"${ICU_SOURCE_DIR}/i18n/udat.cpp"
"${ICU_SOURCE_DIR}/i18n/udateintervalformat.cpp"
"${ICU_SOURCE_DIR}/i18n/udatpg.cpp"
"${ICU_SOURCE_DIR}/i18n/ufieldpositer.cpp"
"${ICU_SOURCE_DIR}/i18n/uitercollationiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/ulistformatter.cpp"
"${ICU_SOURCE_DIR}/i18n/ulocdata.cpp"
"${ICU_SOURCE_DIR}/i18n/umsg.cpp"
"${ICU_SOURCE_DIR}/i18n/unesctrn.cpp"
"${ICU_SOURCE_DIR}/i18n/uni2name.cpp"
"${ICU_SOURCE_DIR}/i18n/units_complexconverter.cpp"
"${ICU_SOURCE_DIR}/i18n/units_converter.cpp"
"${ICU_SOURCE_DIR}/i18n/units_data.cpp"
"${ICU_SOURCE_DIR}/i18n/units_router.cpp")
"${ICU_SOURCE_DIR}/i18n/units_router.cpp"
"${ICU_SOURCE_DIR}/i18n/unum.cpp"
"${ICU_SOURCE_DIR}/i18n/unumsys.cpp"
"${ICU_SOURCE_DIR}/i18n/upluralrules.cpp"
"${ICU_SOURCE_DIR}/i18n/uregex.cpp"
"${ICU_SOURCE_DIR}/i18n/uregexc.cpp"
"${ICU_SOURCE_DIR}/i18n/uregion.cpp"
"${ICU_SOURCE_DIR}/i18n/usearch.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof_build.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof_conf.cpp"
"${ICU_SOURCE_DIR}/i18n/uspoof_impl.cpp"
"${ICU_SOURCE_DIR}/i18n/utf16collationiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/utf8collationiterator.cpp"
"${ICU_SOURCE_DIR}/i18n/utmscale.cpp"
"${ICU_SOURCE_DIR}/i18n/utrans.cpp"
"${ICU_SOURCE_DIR}/i18n/vtzone.cpp"
"${ICU_SOURCE_DIR}/i18n/vzone.cpp"
"${ICU_SOURCE_DIR}/i18n/windtfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/winnmfmt.cpp"
"${ICU_SOURCE_DIR}/i18n/wintzimpl.cpp"
"${ICU_SOURCE_DIR}/i18n/zonemeta.cpp"
"${ICU_SOURCE_DIR}/i18n/zrule.cpp"
"${ICU_SOURCE_DIR}/i18n/ztrans.cpp")
file(GENERATE OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/empty.cpp" CONTENT " ")
enable_language(ASM)
@@ -464,6 +481,11 @@ if (ARCH_S390X)
else()
set(ICUDATA_SOURCE_FILE "${ICUDATA_SOURCE_DIR}/icudt75l_dat.S" )
endif()
# ^^ You might wonder how the same assembly files can be used for different little-endian platforms (x86, ARM).
# These files are indeed assembly, but they only contain data ('.long' directives), which makes them portable across CPUs.
# Only the endianness and the character set (ASCII, EBCDIC) make a difference; see also
# https://unicode-org.github.io/icu/userguide/icu_data/#sharing-icu-data-between-platforms ('Sharing ICU Data Between Platforms').
# (As an experiment, try re-generating the data files on x86 vs. ARM: you'll get exactly the same files.)
set(ICUDATA_SOURCES
"${ICUDATA_SOURCE_FILE}"

contrib/libarchive vendored

@@ -1 +1 @@
Subproject commit ee45796171324519f0c0bfd012018dd099296336
Subproject commit 313aa1fa10b657de791e3202c168a6c833bc3543


@@ -157,7 +157,7 @@ if (TARGET ch_contrib::zlib)
endif()
if (TARGET ch_contrib::zstd)
target_compile_definitions(_libarchive PUBLIC HAVE_ZSTD_H=1 HAVE_LIBZSTD=1 HAVE_LIBZSTD_COMPRESSOR=1)
target_compile_definitions(_libarchive PUBLIC HAVE_ZSTD_H=1 HAVE_LIBZSTD=1 HAVE_ZSTD_compressStream=1)
target_link_libraries(_libarchive PRIVATE ch_contrib::zstd)
endif()


@@ -334,13 +334,16 @@ typedef uint64_t uintmax_t;
/* #undef ARCHIVE_XATTR_LINUX */
/* Version number of bsdcpio */
#define BSDCPIO_VERSION_STRING "3.7.0"
#define BSDCPIO_VERSION_STRING "3.7.4"
/* Version number of bsdtar */
#define BSDTAR_VERSION_STRING "3.7.0"
#define BSDTAR_VERSION_STRING "3.7.4"
/* Version number of bsdcat */
#define BSDCAT_VERSION_STRING "3.7.0"
#define BSDCAT_VERSION_STRING "3.7.4"
/* Version number of bsdunzip */
#define BSDUNZIP_VERSION_STRING "3.7.4"
/* Define to 1 if you have the `acl_create_entry' function. */
/* #undef HAVE_ACL_CREATE_ENTRY */
@@ -642,8 +645,8 @@ typedef uint64_t uintmax_t;
/* Define to 1 if you have the `getgrnam_r' function. */
#define HAVE_GETGRNAM_R 1
/* Define to 1 if platform uses `optreset` to reset `getopt` */
#define HAVE_GETOPT_OPTRESET 1
/* Define to 1 if you have the `getline' function. */
#define HAVE_GETLINE 1
/* Define to 1 if you have the `getpid' function. */
#define HAVE_GETPID 1
@@ -750,6 +753,12 @@ typedef uint64_t uintmax_t;
/* Define to 1 if you have the `pcreposix' library (-lpcreposix). */
/* #undef HAVE_LIBPCREPOSIX */
/* Define to 1 if you have the `pcre2-8' library (-lpcre2-8). */
/* #undef HAVE_LIBPCRE2 */
/* Define to 1 if you have the `pcreposix' library (-lpcre2posix). */
/* #undef HAVE_LIBPCRE2POSIX */
/* Define to 1 if you have the `xml2' library (-lxml2). */
#define HAVE_LIBXML2 1
@ -765,9 +774,8 @@ typedef uint64_t uintmax_t;
/* Define to 1 if you have the `zstd' library (-lzstd). */
/* #undef HAVE_LIBZSTD */
/* Define to 1 if you have the `zstd' library (-lzstd) with compression
support. */
/* #undef HAVE_LIBZSTD_COMPRESSOR */
/* Define to 1 if you have the ZSTD_compressStream function. */
/* #undef HAVE_ZSTD_compressStream */
/* Define to 1 if you have the <limits.h> header file. */
#define HAVE_LIMITS_H 1
@ -923,6 +931,9 @@ typedef uint64_t uintmax_t;
/* Define to 1 if you have the <pcreposix.h> header file. */
/* #undef HAVE_PCREPOSIX_H */
/* Define to 1 if you have the <pcre2posix.h> header file. */
/* #undef HAVE_PCRE2POSIX_H */
/* Define to 1 if you have the `pipe' function. */
#define HAVE_PIPE 1
@ -1029,6 +1040,12 @@ typedef uint64_t uintmax_t;
/* Define to 1 if you have the `strrchr' function. */
#define HAVE_STRRCHR 1
/* Define to 1 if the system has the type `struct statfs'. */
/* #undef HAVE_STRUCT_STATFS */
/* Define to 1 if `f_iosize' is a member of `struct statfs'. */
/* #undef HAVE_STRUCT_STATFS_F_IOSIZE */
/* Define to 1 if `f_namemax' is a member of `struct statfs'. */
/* #undef HAVE_STRUCT_STATFS_F_NAMEMAX */
@ -1077,6 +1094,9 @@ typedef uint64_t uintmax_t;
/* Define to 1 if you have the `symlink' function. */
#define HAVE_SYMLINK 1
/* Define to 1 if you have the `sysconf' function. */
#define HAVE_SYSCONF 1
/* Define to 1 if you have the <sys/acl.h> header file. */
/* #undef HAVE_SYS_ACL_H */
@ -1276,10 +1296,10 @@ typedef uint64_t uintmax_t;
#define ICONV_CONST
/* Version number of libarchive as a single integer */
#define LIBARCHIVE_VERSION_NUMBER "3007000"
#define LIBARCHIVE_VERSION_NUMBER "3007004"
/* Version number of libarchive */
#define LIBARCHIVE_VERSION_STRING "3.7.0"
#define LIBARCHIVE_VERSION_STRING "3.7.4"
/* Define to 1 if `lstat' dereferences a symlink specified with a trailing
slash. */
@ -1333,7 +1353,7 @@ typedef uint64_t uintmax_t;
#endif /* SAFE_TO_DEFINE_EXTENSIONS */
/* Version number of package */
#define VERSION "3.7.0"
#define VERSION "3.7.4"
/* Number of bits in a file offset, on hosts where this is settable. */
/* #undef _FILE_OFFSET_BITS */

contrib/libfiu vendored

@ -1 +1 @@
Subproject commit b85edbde4cf974b1b40d27828a56f0505f4e2ee5
Subproject commit a1290d8cd3d7b4541d6c976e0a54f572ac03f2a3

contrib/libuv vendored

@ -1 +1 @@
Subproject commit 4482964660c77eec1166cd7d14fb915e3dbd774a
Subproject commit 714b58b9849568211ade86b44dd91d37f8a2175e


@ -10,6 +10,7 @@ set(uv_sources
src/random.c
src/strscpy.c
src/strtok.c
src/thread-common.c
src/threadpool.c
src/timer.c
src/uv-common.c
@ -70,10 +71,7 @@ if(CMAKE_SYSTEM_NAME STREQUAL "Linux")
list(APPEND uv_defines _GNU_SOURCE _POSIX_C_SOURCE=200112)
list(APPEND uv_libraries rt)
list(APPEND uv_sources
src/unix/epoll.c
src/unix/linux-core.c
src/unix/linux-inotify.c
src/unix/linux-syscalls.c
src/unix/linux.c
src/unix/procfs-exepath.c
src/unix/random-getrandom.c
src/unix/random-sysctl-linux.c)


@ -140,6 +140,12 @@ if (CMAKE_CROSSCOMPILING)
message (STATUS "CROSS COMPILING SET LLVM HOST TRIPLE ${LLVM_HOST_TRIPLE}")
endif()
# llvm-project/llvm/cmake/config-ix.cmake does a weird thing: it defines _LARGEFILE64_SOURCE,
# then checks whether the lseek64() function exists, then undefines _LARGEFILE64_SOURCE.
# Then the actual code that uses this function *doesn't* define _LARGEFILE64_SOURCE, so lseek64()
# may not exist and compilation fails. This happens with musl.
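# A hypothetical sketch of the failure mode described above: with musl, code such as
#
#     #include <unistd.h>
#     off64_t end = lseek64(fd, 0, SEEK_END);
#
# fails to compile ('lseek64' undeclared) unless _LARGEFILE64_SOURCE is defined, hence the
# global definition below.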
add_compile_definitions("_LARGEFILE64_SOURCE")
add_subdirectory ("${LLVM_SOURCE_DIR}" "${LLVM_BINARY_DIR}")
set_directory_properties (PROPERTIES

contrib/openssl vendored

@ -1 +1 @@
Subproject commit 66deddc1e53cda8706604a019777259372d1bd62
Subproject commit b3e62c440f390e12e77c80675f883af82ad3d5ed

contrib/replxx vendored

@ -1 +1 @@
Subproject commit 5d04501f93a4fb7f0bb8b73b8f614bc986f9e25b
Subproject commit 711c18e7f4d951255aa8b0851e5a55d5a5fb0ddb

contrib/sysroot vendored

@ -1 +1 @@
Subproject commit cc385041b226d1fc28ead14dbab5d40a5f821dd8
Subproject commit 5be834147d5b5dd77ca2b821f356982029320513

contrib/usearch vendored

@ -1 +1 @@
Subproject commit e21a5778a0d4469ddaf38c94b7be0196bb701ee4
Subproject commit 7a8967cb442b08ca20c3dd781414378e65957d37


@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.8.1.2684"
ARG VERSION="24.8.4.13"
ARG PACKAGES="clickhouse-keeper"
ARG DIRECT_DOWNLOAD_URLS=""


@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.8.1.2684"
ARG VERSION="24.8.4.13"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
ARG DIRECT_DOWNLOAD_URLS=""


@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="24.8.1.2684"
ARG VERSION="24.8.4.13"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
#docker-official-library:off


@ -124,6 +124,8 @@ function setup_logs_replication
check_logs_credentials || return 0
__set_connection_args
echo "My hostname is ${HOSTNAME}"
echo 'Create all configured system logs'
clickhouse-client --query "SYSTEM FLUSH LOGS"
@ -184,7 +186,17 @@ function setup_logs_replication
/^TTL /d
')
echo -e "Creating remote destination table ${table}_${hash} with statement:\n${statement}" >&2
echo -e "Creating remote destination table ${table}_${hash} with statement:" >&2
echo "::group::${table}"
# this is the only way a big "$statement" can be printed without causing an EAGAIN error:
# cat: write error: Resource temporarily unavailable
statement_print="${statement}"
if [ "${#statement_print}" -gt 4000 ]; then
statement_print="${statement::1999}\n…\n${statement:${#statement}-1999}"
fi
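# (Bash substring expansion: ${statement::1999} yields the first 1999 characters and
# ${statement:${#statement}-1999} the last 1999, so only the middle of an oversized
# statement is elided.)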
echo -e "$statement_print"
echo "::endgroup::"
echo "$statement" | clickhouse-client --database_replicated_initial_query_timeout_sec=10 \
--distributed_ddl_task_timeout=30 --distributed_ddl_output_mode=throw_only_active \


@ -3,6 +3,8 @@
FROM alpine:3.18
RUN apk add --no-cache -U iproute2 \
&& for bin in iptables iptables-restore iptables-save; \
&& for bin in \
iptables iptables-restore iptables-save \
ip6tables ip6tables-restore ip6tables-save; \
do ln -sf xtables-nft-multi "/sbin/$bin"; \
done
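# xtables-nft-multi is a multi-call binary: it selects the tool to run from the name it was
# invoked as, so symlinking every IPv4 and IPv6 iptables entry point to it is sufficient.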


@ -112,3 +112,5 @@ wadllib==1.3.6
websocket-client==0.59.0
wheel==0.37.1
zipp==1.0.0
deltalake==0.16.0


@ -13,7 +13,8 @@ entry="/usr/share/clickhouse-test/performance/scripts/entrypoint.sh"
# https://www.kernel.org/doc/Documentation/filesystems/tmpfs.txt
# Double-escaped backslashes are a tribute to the engineering wonder of docker --
# it gives '/bin/sh: 1: [bash,: not found' otherwise.
numactl --hardware
echo > compare.log
numactl --hardware | tee -a compare.log
node=$(( RANDOM % $(numactl --hardware | sed -n 's/^.*available:\(.*\)nodes.*$/\1/p') ));
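# Pick a random NUMA node and bind both CPU and memory to it (below), so each run's memory
# accesses stay node-local and results remain comparable across runs.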
echo Will bind to NUMA node $node;
echo Will bind to NUMA node $node | tee -a compare.log
numactl --cpunodebind=$node --membind=$node $entry


@ -40,6 +40,3 @@ RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
ARG sqllogic_test_repo="https://github.com/gregrahn/sqllogictest.git"
RUN git clone --recursive ${sqllogic_test_repo}
COPY run.sh /
CMD ["/bin/bash", "/run.sh"]


@ -0,0 +1,32 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.3.10.33-lts (37b6502ebf0) FIXME as compared to v24.3.9.5-lts (a939270465e)
#### Improvement
* Backported in [#68870](https://github.com/ClickHouse/ClickHouse/issues/68870): Make `allow_experimental_analyzer` controlled by the initiator for distributed queries. This ensures compatibility and correctness during operations in mixed-version clusters. [#65777](https://github.com/ClickHouse/ClickHouse/pull/65777) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Backported in [#69095](https://github.com/ClickHouse/ClickHouse/issues/69095): Support for the Spanish language in the embedded dictionaries. [#69035](https://github.com/ClickHouse/ClickHouse/pull/69035) ([Vasily Okunev](https://github.com/VOkunev)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68995](https://github.com/ClickHouse/ClickHouse/issues/68995): Fix the upper bound of the function `fromModifiedJulianDay`. It was supposed to be `9999-12-31` but was mistakenly set to `9999-01-01`. [#67583](https://github.com/ClickHouse/ClickHouse/pull/67583) ([PHO](https://github.com/depressed-pho)).
* Backported in [#68844](https://github.com/ClickHouse/ClickHouse/issues/68844): Fixed crash in Parquet filtering when data types in the file substantially differ from requested types (e.g. `... FROM file('a.parquet', Parquet, 'x String')`, but the file has `x Int64`). Without this fix, use `input_format_parquet_filter_push_down = 0` as a workaround. [#68131](https://github.com/ClickHouse/ClickHouse/pull/68131) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#68881](https://github.com/ClickHouse/ClickHouse/issues/68881): Fixes [#50868](https://github.com/ClickHouse/ClickHouse/issues/50868). Small DateTime64 constant values returned by a nested subquery inside a distributed query were wrongly transformed to Nulls, thus causing errors and possible incorrect query results. [#68323](https://github.com/ClickHouse/ClickHouse/pull/68323) ([Shankar](https://github.com/shiyer7474)).
* Backported in [#69054](https://github.com/ClickHouse/ClickHouse/issues/69054): Added back virtual columns `_table` and `_database` to distributed tables. They were available until version 24.3. [#68672](https://github.com/ClickHouse/ClickHouse/pull/68672) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68856](https://github.com/ClickHouse/ClickHouse/issues/68856): Fix possible error `Size of permutation (0) is less than required (...)` during Variant column permutation. [#68681](https://github.com/ClickHouse/ClickHouse/pull/68681) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#69152](https://github.com/ClickHouse/ClickHouse/issues/69152): Fix possible wrong result during anyHeavy state merge. [#68950](https://github.com/ClickHouse/ClickHouse/pull/68950) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#69112](https://github.com/ClickHouse/ClickHouse/issues/69112): Fix logical error when we have empty async insert. [#69080](https://github.com/ClickHouse/ClickHouse/pull/69080) ([Han Fei](https://github.com/hanfei1991)).
#### NO CL CATEGORY
* Backported in [#68938](https://github.com/ClickHouse/ClickHouse/issues/68938): [#68897](https://github.com/ClickHouse/ClickHouse/pull/68897) ([Alexander Gololobov](https://github.com/davenger)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#68826](https://github.com/ClickHouse/ClickHouse/issues/68826): Turn off fault injection for insert in `01396_inactive_replica_cleanup_nodes_zookeeper`. [#68715](https://github.com/ClickHouse/ClickHouse/pull/68715) ([alesapin](https://github.com/alesapin)).
* Backported in [#68754](https://github.com/ClickHouse/ClickHouse/issues/68754): To make patch release possible from every commit on release branch, package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).
* Backported in [#69044](https://github.com/ClickHouse/ClickHouse/issues/69044): Fix 01114_database_atomic flakiness. [#68930](https://github.com/ClickHouse/ClickHouse/pull/68930) ([Raúl Marín](https://github.com/Algunenano)).


@ -0,0 +1,17 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.3.11.7-lts (28795d0a47e) FIXME as compared to v24.3.10.33-lts (37b6502ebf0)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#67479](https://github.com/ClickHouse/ClickHouse/issues/67479): In rare cases ClickHouse could consider parts as broken because of some unexpected projections on disk. Now it's fixed. [#66898](https://github.com/ClickHouse/ClickHouse/pull/66898) ([alesapin](https://github.com/alesapin)).
* Backported in [#69243](https://github.com/ClickHouse/ClickHouse/issues/69243): `UNION` clause in subqueries wasn't handled correctly in queries with parallel replicas and led to a LOGICAL_ERROR `Duplicate announcement received for replica`. [#69146](https://github.com/ClickHouse/ClickHouse/pull/69146) ([Igor Nikonov](https://github.com/devcrafter)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#69221](https://github.com/ClickHouse/ClickHouse/issues/69221): Disable memory test with sanitizer. [#69193](https://github.com/ClickHouse/ClickHouse/pull/69193) ([alesapin](https://github.com/alesapin)).


@ -0,0 +1,33 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.5.6.45-stable (bdca8604c29) FIXME as compared to v24.5.5.78-stable (0138248cb62)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#67902](https://github.com/ClickHouse/ClickHouse/issues/67902): Fixing the `Not-ready Set` error after the `PREWHERE` optimization for StorageMerge. [#65057](https://github.com/ClickHouse/ClickHouse/pull/65057) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68252](https://github.com/ClickHouse/ClickHouse/issues/68252): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#68064](https://github.com/ClickHouse/ClickHouse/issues/68064): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68158](https://github.com/ClickHouse/ClickHouse/issues/68158): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68115](https://github.com/ClickHouse/ClickHouse/issues/68115): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67886](https://github.com/ClickHouse/ClickHouse/issues/67886): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#68272](https://github.com/ClickHouse/ClickHouse/issues/68272): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67807](https://github.com/ClickHouse/ClickHouse/issues/67807): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67836](https://github.com/ClickHouse/ClickHouse/issues/67836): Fix potential stack overflow in `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong. The previous name is still kept for compatibility. Improved diagnostic of errors in the function. This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67991](https://github.com/ClickHouse/ClickHouse/issues/67991): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68207](https://github.com/ClickHouse/ClickHouse/issues/68207): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68091](https://github.com/ClickHouse/ClickHouse/issues/68091): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68122](https://github.com/ClickHouse/ClickHouse/issues/68122): Fixed skipping of untouched parts in mutations with new analyzer. Previously with enabled analyzer data in part could be rewritten by mutation even if mutation doesn't affect this part according to predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68171](https://github.com/ClickHouse/ClickHouse/issues/68171): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68337](https://github.com/ClickHouse/ClickHouse/issues/68337): Try to fix a PostgreSQL crash when a query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68667](https://github.com/ClickHouse/ClickHouse/issues/68667): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Update version after release. [#67862](https://github.com/ClickHouse/ClickHouse/pull/67862) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68077](https://github.com/ClickHouse/ClickHouse/issues/68077): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Backported in [#68756](https://github.com/ClickHouse/ClickHouse/issues/68756): To make patch release possible from every commit on release branch, package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).


@ -0,0 +1,29 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.5.7.31-stable (6c185e9aec1) FIXME as compared to v24.5.6.45-stable (bdca8604c29)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68564](https://github.com/ClickHouse/ClickHouse/issues/68564): Fix indexHint function case found by fuzzer. [#66286](https://github.com/ClickHouse/ClickHouse/pull/66286) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68996](https://github.com/ClickHouse/ClickHouse/issues/68996): Fix the upper bound of the function `fromModifiedJulianDay`. It was supposed to be `9999-12-31` but was mistakenly set to `9999-01-01`. [#67583](https://github.com/ClickHouse/ClickHouse/pull/67583) ([PHO](https://github.com/depressed-pho)).
* Backported in [#68865](https://github.com/ClickHouse/ClickHouse/issues/68865): Fixed crash in Parquet filtering when data types in the file substantially differ from requested types (e.g. `... FROM file('a.parquet', Parquet, 'x String')`, but the file has `x Int64`). Without this fix, use `input_format_parquet_filter_push_down = 0` as a workaround. [#68131](https://github.com/ClickHouse/ClickHouse/pull/68131) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#69004](https://github.com/ClickHouse/ClickHouse/issues/69004): After https://github.com/ClickHouse/ClickHouse/pull/61984, `schema_inference_make_columns_nullable=0` could still make columns `Nullable` in Parquet/Arrow formats. The change was backward incompatible and users noticed the change in behaviour. This PR makes `schema_inference_make_columns_nullable=0` work as before (no `Nullable` columns will be inferred) and introduces a new value `auto` for this setting that makes columns `Nullable` only if the data contains information about nullability. [#68298](https://github.com/ClickHouse/ClickHouse/pull/68298) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68882](https://github.com/ClickHouse/ClickHouse/issues/68882): Fixes [#50868](https://github.com/ClickHouse/ClickHouse/issues/50868). Small DateTime64 constant values returned by a nested subquery inside a distributed query were wrongly transformed to Nulls, thus causing errors and possible incorrect query results. [#68323](https://github.com/ClickHouse/ClickHouse/pull/68323) ([Shankar](https://github.com/shiyer7474)).
* Backported in [#69023](https://github.com/ClickHouse/ClickHouse/issues/69023): Added back virtual columns `_table` and `_database` to distributed tables. They were available until version 24.3. [#68672](https://github.com/ClickHouse/ClickHouse/pull/68672) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68858](https://github.com/ClickHouse/ClickHouse/issues/68858): Fix possible error `Size of permutation (0) is less than required (...)` during Variant column permutation. [#68681](https://github.com/ClickHouse/ClickHouse/pull/68681) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68784](https://github.com/ClickHouse/ClickHouse/issues/68784): Fix issue with materialized constant keys when hashing maps with arrays as keys in functions `sipHash(64/128)Keyed`. [#68731](https://github.com/ClickHouse/ClickHouse/pull/68731) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#69154](https://github.com/ClickHouse/ClickHouse/issues/69154): Fix possible wrong result during anyHeavy state merge. [#68950](https://github.com/ClickHouse/ClickHouse/pull/68950) ([Raúl Marín](https://github.com/Algunenano)).
#### NO CL CATEGORY
* Backported in [#68940](https://github.com/ClickHouse/ClickHouse/issues/68940): [#68897](https://github.com/ClickHouse/ClickHouse/pull/68897) ([Alexander Gololobov](https://github.com/davenger)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#68828](https://github.com/ClickHouse/ClickHouse/issues/68828): Turn off fault injection for insert in `01396_inactive_replica_cleanup_nodes_zookeeper`. [#68715](https://github.com/ClickHouse/ClickHouse/pull/68715) ([alesapin](https://github.com/alesapin)).
* Backported in [#69046](https://github.com/ClickHouse/ClickHouse/issues/69046): Fix 01114_database_atomic flakiness. [#68930](https://github.com/ClickHouse/ClickHouse/pull/68930) ([Raúl Marín](https://github.com/Algunenano)).


@ -0,0 +1,18 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.5.8.10-stable (f11729638ea) FIXME as compared to v24.5.7.31-stable (6c185e9aec1)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#69295](https://github.com/ClickHouse/ClickHouse/issues/69295): TODO. [#68744](https://github.com/ClickHouse/ClickHouse/pull/68744) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#69245](https://github.com/ClickHouse/ClickHouse/issues/69245): `UNION` clause in subqueries wasn't handled correctly in queries with parallel replicas and led to a LOGICAL_ERROR `Duplicate announcement received for replica`. [#69146](https://github.com/ClickHouse/ClickHouse/pull/69146) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix crash when using `s3` table function with GLOB paths and filters. [#69176](https://github.com/ClickHouse/ClickHouse/pull/69176) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#69223](https://github.com/ClickHouse/ClickHouse/issues/69223): Disable memory test with sanitizer. [#69193](https://github.com/ClickHouse/ClickHouse/pull/69193) ([alesapin](https://github.com/alesapin)).


@ -0,0 +1,83 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.6.3.38-stable (4e33c831589) FIXME as compared to v24.6.2.17-stable (5710a8b5c0c)
#### Improvement
* Backported in [#66770](https://github.com/ClickHouse/ClickHouse/issues/66770): Make `allow_experimental_analyzer` controlled by the initiator for distributed queries. This ensures compatibility and correctness during operations in mixed-version clusters. [#65777](https://github.com/ClickHouse/ClickHouse/pull/65777) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#66885](https://github.com/ClickHouse/ClickHouse/issues/66885): Fix unexpected size of low-cardinality column in function calls. [#65298](https://github.com/ClickHouse/ClickHouse/pull/65298) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66303](https://github.com/ClickHouse/ClickHouse/issues/66303): Better handling of join conditions involving `IS NULL` checks (for example `ON (a = b AND (a IS NOT NULL) AND (b IS NOT NULL) ) OR ( (a IS NULL) AND (b IS NULL) )` is rewritten to `ON a <=> b`); fix incorrect optimization when conditions other than `IS NULL` are present. [#65835](https://github.com/ClickHouse/ClickHouse/pull/65835) ([vdimir](https://github.com/vdimir)).
* Backported in [#66330](https://github.com/ClickHouse/ClickHouse/issues/66330): Add missing settings `input_format_csv_skip_first_lines/input_format_tsv_skip_first_lines/input_format_csv_try_infer_numbers_from_strings/input_format_csv_try_infer_strings_from_quoted_tuples` in schema inference cache because they can change the resulting schema. This prevents incorrect schema inference results when these settings are changed. [#65980](https://github.com/ClickHouse/ClickHouse/pull/65980) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#66157](https://github.com/ClickHouse/ClickHouse/issues/66157): Fixed buffer overflow bug in `unbin`/`unhex` implementation. [#66106](https://github.com/ClickHouse/ClickHouse/pull/66106) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#66210](https://github.com/ClickHouse/ClickHouse/issues/66210): Disable the `merge-filters` optimization introduced in [#64760](https://github.com/ClickHouse/ClickHouse/issues/64760). It may cause an exception if optimization merges two filter expressions and does not apply a short-circuit evaluation. [#66126](https://github.com/ClickHouse/ClickHouse/pull/66126) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66456](https://github.com/ClickHouse/ClickHouse/issues/66456): Fixed a bug in ZooKeeper client: a session could get stuck in unusable state after receiving a hardware error from ZooKeeper. For example, this might happen due to "soft memory limit" in ClickHouse Keeper. [#66140](https://github.com/ClickHouse/ClickHouse/pull/66140) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66228](https://github.com/ClickHouse/ClickHouse/issues/66228): Fix issue in SumIfToCountIfVisitor and signed integers. [#66146](https://github.com/ClickHouse/ClickHouse/pull/66146) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66183](https://github.com/ClickHouse/ClickHouse/issues/66183): Fix rare case with missing data in the result of distributed query, close [#61432](https://github.com/ClickHouse/ClickHouse/issues/61432). [#66174](https://github.com/ClickHouse/ClickHouse/pull/66174) ([vdimir](https://github.com/vdimir)).
* Backported in [#66271](https://github.com/ClickHouse/ClickHouse/issues/66271): Don't throw `TIMEOUT_EXCEEDED` for `none_only_active` mode of `distributed_ddl_output_mode`. [#66218](https://github.com/ClickHouse/ClickHouse/pull/66218) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#66682](https://github.com/ClickHouse/ClickHouse/issues/66682): Fix handling of the limit for `system.numbers_mt` when no index can be used. [#66231](https://github.com/ClickHouse/ClickHouse/pull/66231) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66587](https://github.com/ClickHouse/ClickHouse/issues/66587): Fixed how the ClickHouse server detects the maximum number of usable CPU cores as specified by cgroups v2 if the server runs in a container such as Docker. In more detail, containers often run their process in the root cgroup which has an empty name. In that case, ClickHouse ignored the CPU limits set by cgroups v2. [#66237](https://github.com/ClickHouse/ClickHouse/pull/66237) ([filimonov](https://github.com/filimonov)).
* Backported in [#66362](https://github.com/ClickHouse/ClickHouse/issues/66362): Fix the `Not-ready set` error when a subquery with `IN` is used in the constraint. [#66261](https://github.com/ClickHouse/ClickHouse/pull/66261) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68066](https://github.com/ClickHouse/ClickHouse/issues/68066): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68566](https://github.com/ClickHouse/ClickHouse/issues/68566): Fix indexHint function case found by fuzzer. [#66286](https://github.com/ClickHouse/ClickHouse/pull/66286) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68159](https://github.com/ClickHouse/ClickHouse/issues/68159): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#66613](https://github.com/ClickHouse/ClickHouse/issues/66613): Fix `Column identifier is already registered` error with `group_by_use_nulls=true` and new analyzer. [#66400](https://github.com/ClickHouse/ClickHouse/pull/66400) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66693](https://github.com/ClickHouse/ClickHouse/issues/66693): Fix the VALID UNTIL clause in the user definition resetting after a restart. Closes [#66405](https://github.com/ClickHouse/ClickHouse/issues/66405). [#66409](https://github.com/ClickHouse/ClickHouse/pull/66409) ([Nikolay Degterinsky](https://github.com/evillique)).
* Backported in [#66577](https://github.com/ClickHouse/ClickHouse/issues/66577): Fix `Cannot find column` error for queries with constant expression in `GROUP BY` key and new analyzer enabled. [#66433](https://github.com/ClickHouse/ClickHouse/pull/66433) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66721](https://github.com/ClickHouse/ClickHouse/issues/66721): Correctly track memory for `Allocator::realloc`. [#66548](https://github.com/ClickHouse/ClickHouse/pull/66548) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#66670](https://github.com/ClickHouse/ClickHouse/issues/66670): Fix reading of uninitialized memory when hashing empty tuples. This closes [#66559](https://github.com/ClickHouse/ClickHouse/issues/66559). [#66562](https://github.com/ClickHouse/ClickHouse/pull/66562) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#66952](https://github.com/ClickHouse/ClickHouse/issues/66952): Fix an invalid result for queries with `WINDOW`. This could happen when `PARTITION` columns have sparse serialization and window functions are executed in parallel. [#66579](https://github.com/ClickHouse/ClickHouse/pull/66579) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66956](https://github.com/ClickHouse/ClickHouse/issues/66956): Fix removing named collections in local storage. [#66599](https://github.com/ClickHouse/ClickHouse/pull/66599) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66716](https://github.com/ClickHouse/ClickHouse/issues/66716): Fix removing named collections in local storage. [#66599](https://github.com/ClickHouse/ClickHouse/pull/66599) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#66759](https://github.com/ClickHouse/ClickHouse/issues/66759): Fix `Unknown identifier` and `Column is not under aggregate function` errors for queries with the expression `(column IS NULL)`. The bug was triggered by [#65088](https://github.com/ClickHouse/ClickHouse/issues/65088), and only with the analyzer disabled. [#66654](https://github.com/ClickHouse/ClickHouse/pull/66654) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66751](https://github.com/ClickHouse/ClickHouse/issues/66751): Fix `Method getResultType is not supported for QUERY query node` error when scalar subquery was used as the first argument of IN (with new analyzer). [#66655](https://github.com/ClickHouse/ClickHouse/pull/66655) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68116](https://github.com/ClickHouse/ClickHouse/issues/68116): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67635](https://github.com/ClickHouse/ClickHouse/issues/67635): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67482](https://github.com/ClickHouse/ClickHouse/issues/67482): In rare cases ClickHouse could consider parts as broken because of some unexpected projections on disk. Now it's fixed. [#66898](https://github.com/ClickHouse/ClickHouse/pull/66898) ([alesapin](https://github.com/alesapin)).
* Backported in [#67816](https://github.com/ClickHouse/ClickHouse/issues/67816): Only relevant to the experimental Variant data type. Fix crash with Variant + AggregateFunction type. [#67122](https://github.com/ClickHouse/ClickHouse/pull/67122) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67199](https://github.com/ClickHouse/ClickHouse/issues/67199): TRUNCATE DATABASE used to stop replication as if it was a DROP DATABASE query, it's fixed. [#67129](https://github.com/ClickHouse/ClickHouse/pull/67129) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#67381](https://github.com/ClickHouse/ClickHouse/issues/67381): Fix error `Cannot convert column because it is non constant in source stream but must be constant in result.` for a query that reads from the `Merge` table over the `Distributed` table with one shard. [#67146](https://github.com/ClickHouse/ClickHouse/pull/67146) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#67244](https://github.com/ClickHouse/ClickHouse/issues/67244): This closes [#67156](https://github.com/ClickHouse/ClickHouse/issues/67156). This closes [#66447](https://github.com/ClickHouse/ClickHouse/issues/66447). The bug was introduced in https://github.com/ClickHouse/ClickHouse/pull/62907. [#67178](https://github.com/ClickHouse/ClickHouse/pull/67178) ([Maksim Kita](https://github.com/kitaisreal)).
* Backported in [#67503](https://github.com/ClickHouse/ClickHouse/issues/67503): Fix crash in DistributedAsyncInsert when connection is empty. [#67219](https://github.com/ClickHouse/ClickHouse/pull/67219) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#67887](https://github.com/ClickHouse/ClickHouse/issues/67887): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67578](https://github.com/ClickHouse/ClickHouse/issues/67578): Fix execution of nested short-circuit functions. [#67520](https://github.com/ClickHouse/ClickHouse/pull/67520) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68611](https://github.com/ClickHouse/ClickHouse/issues/68611): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#67852](https://github.com/ClickHouse/ClickHouse/issues/67852): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68275](https://github.com/ClickHouse/ClickHouse/issues/68275): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67808](https://github.com/ClickHouse/ClickHouse/issues/67808): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67838](https://github.com/ClickHouse/ClickHouse/issues/67838): Fix potential stack overflow in `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong. The previous name is still kept for compatibility. Improved diagnostic of errors in the function. This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67993](https://github.com/ClickHouse/ClickHouse/issues/67993): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68208](https://github.com/ClickHouse/ClickHouse/issues/68208): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68093](https://github.com/ClickHouse/ClickHouse/issues/68093): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68124](https://github.com/ClickHouse/ClickHouse/issues/68124): Fixed skipping of untouched parts in mutations with new analyzer. Previously with enabled analyzer data in part could be rewritten by mutation even if mutation doesn't affect this part according to predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68221](https://github.com/ClickHouse/ClickHouse/issues/68221): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via hopEnd, hopStart, tumbleEnd, and tumbleStart. [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68173](https://github.com/ClickHouse/ClickHouse/issues/68173): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68339](https://github.com/ClickHouse/ClickHouse/issues/68339): Try to fix a PostgreSQL crash when a query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68396](https://github.com/ClickHouse/ClickHouse/issues/68396): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68668](https://github.com/ClickHouse/ClickHouse/issues/68668): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
#### NO CL ENTRY
* NO CL ENTRY: 'Revert "Backport [#66599](https://github.com/ClickHouse/ClickHouse/issues/66599) to 24.6: Fix dropping named collection in local storage"'. [#66922](https://github.com/ClickHouse/ClickHouse/pull/66922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#66332](https://github.com/ClickHouse/ClickHouse/issues/66332): Do not raise a NOT_IMPLEMENTED error when getting s3 metrics with a multiple disk configuration. [#65403](https://github.com/ClickHouse/ClickHouse/pull/65403) ([Elena Torró](https://github.com/elenatorro)).
* Backported in [#66142](https://github.com/ClickHouse/ClickHouse/issues/66142): Fix flaky test_storage_s3_queue tests. [#66009](https://github.com/ClickHouse/ClickHouse/pull/66009) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#66389](https://github.com/ClickHouse/ClickHouse/issues/66389): Disable broken cases from 02911_join_on_nullsafe_optimization. [#66310](https://github.com/ClickHouse/ClickHouse/pull/66310) ([vdimir](https://github.com/vdimir)).
* Backported in [#66428](https://github.com/ClickHouse/ClickHouse/issues/66428): Ignore subquery for IN in DDLLoadingDependencyVisitor. [#66395](https://github.com/ClickHouse/ClickHouse/pull/66395) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#66546](https://github.com/ClickHouse/ClickHouse/issues/66546): Add additional log masking in CI. [#66523](https://github.com/ClickHouse/ClickHouse/pull/66523) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#66861](https://github.com/ClickHouse/ClickHouse/issues/66861): Fix data race in S3::ClientCache. [#66644](https://github.com/ClickHouse/ClickHouse/pull/66644) ([Konstantin Morozov](https://github.com/k-morozov)).
* Backported in [#66877](https://github.com/ClickHouse/ClickHouse/issues/66877): Support one more case in JOIN ON ... IS NULL. [#66725](https://github.com/ClickHouse/ClickHouse/pull/66725) ([vdimir](https://github.com/vdimir)).
* Backported in [#67061](https://github.com/ClickHouse/ClickHouse/issues/67061): Increase asio pool size in case the server is tiny. [#66761](https://github.com/ClickHouse/ClickHouse/pull/66761) ([alesapin](https://github.com/alesapin)).
* Backported in [#66940](https://github.com/ClickHouse/ClickHouse/issues/66940): Small fix in realloc memory tracking. [#66820](https://github.com/ClickHouse/ClickHouse/pull/66820) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67254](https://github.com/ClickHouse/ClickHouse/issues/67254): Followup [#66725](https://github.com/ClickHouse/ClickHouse/issues/66725). [#66869](https://github.com/ClickHouse/ClickHouse/pull/66869) ([vdimir](https://github.com/vdimir)).
* Backported in [#67414](https://github.com/ClickHouse/ClickHouse/issues/67414): CI: Fix build results for release branches. [#67402](https://github.com/ClickHouse/ClickHouse/pull/67402) ([Max K.](https://github.com/maxknv)).
* Update version after release. [#67909](https://github.com/ClickHouse/ClickHouse/pull/67909) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68079](https://github.com/ClickHouse/ClickHouse/issues/68079): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).


@ -0,0 +1,33 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.6.4.42-stable (c534bb4b4dd) FIXME as compared to v24.6.3.95-stable (8325c920d11)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68066](https://github.com/ClickHouse/ClickHouse/issues/68066): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68566](https://github.com/ClickHouse/ClickHouse/issues/68566): Fix indexHint function case found by fuzzer. [#66286](https://github.com/ClickHouse/ClickHouse/pull/66286) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68159](https://github.com/ClickHouse/ClickHouse/issues/68159): Fix cluster() for inter-server secret (preserve initial user as before). [#66364](https://github.com/ClickHouse/ClickHouse/pull/66364) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68116](https://github.com/ClickHouse/ClickHouse/issues/68116): Fix possible PARAMETER_OUT_OF_BOUND error during reading variant subcolumn. [#66659](https://github.com/ClickHouse/ClickHouse/pull/66659) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67887](https://github.com/ClickHouse/ClickHouse/issues/67887): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#68611](https://github.com/ClickHouse/ClickHouse/issues/68611): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68275](https://github.com/ClickHouse/ClickHouse/issues/68275): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#67993](https://github.com/ClickHouse/ClickHouse/issues/67993): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68208](https://github.com/ClickHouse/ClickHouse/issues/68208): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68093](https://github.com/ClickHouse/ClickHouse/issues/68093): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68124](https://github.com/ClickHouse/ClickHouse/issues/68124): Fixed skipping of untouched parts in mutations with new analyzer. Previously with enabled analyzer data in part could be rewritten by mutation even if mutation doesn't affect this part according to predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68221](https://github.com/ClickHouse/ClickHouse/issues/68221): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via hopEnd, hopStart, tumbleEnd, and tumbleStart. [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68173](https://github.com/ClickHouse/ClickHouse/issues/68173): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68339](https://github.com/ClickHouse/ClickHouse/issues/68339): Try to fix a PostgreSQL crash when a query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68396](https://github.com/ClickHouse/ClickHouse/issues/68396): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68668](https://github.com/ClickHouse/ClickHouse/issues/68668): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Update version after release. [#67909](https://github.com/ClickHouse/ClickHouse/pull/67909) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68079](https://github.com/ClickHouse/ClickHouse/issues/68079): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Backported in [#68758](https://github.com/ClickHouse/ClickHouse/issues/68758): To make patch release possible from every commit on release branch, package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).


@ -0,0 +1,29 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.6.5.30-stable (e6e196c92d6) FIXME as compared to v24.6.4.42-stable (c534bb4b4dd)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68969](https://github.com/ClickHouse/ClickHouse/issues/68969): Fix the upper bound of the function `fromModifiedJulianDay`. It was supposed to be `9999-12-31` but was mistakenly set to `9999-01-01`. [#67583](https://github.com/ClickHouse/ClickHouse/pull/67583) ([PHO](https://github.com/depressed-pho)).
* Backported in [#68814](https://github.com/ClickHouse/ClickHouse/issues/68814): Fixed crash in Parquet filtering when data types in the file substantially differ from requested types (e.g. `... FROM file('a.parquet', Parquet, 'x String')`, but the file has `x Int64`). Without this fix, use `input_format_parquet_filter_push_down = 0` as a workaround. [#68131](https://github.com/ClickHouse/ClickHouse/pull/68131) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#69005](https://github.com/ClickHouse/ClickHouse/issues/69005): After https://github.com/ClickHouse/ClickHouse/pull/61984, `schema_inference_make_columns_nullable=0` could still make columns `Nullable` in Parquet/Arrow formats. The change was backward incompatible and users noticed the change in behaviour. This PR makes `schema_inference_make_columns_nullable=0` work as before (no `Nullable` columns will be inferred) and introduces a new value `auto` for this setting that makes columns `Nullable` only if the data contains information about nullability. [#68298](https://github.com/ClickHouse/ClickHouse/pull/68298) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68883](https://github.com/ClickHouse/ClickHouse/issues/68883): Fixes [#50868](https://github.com/ClickHouse/ClickHouse/issues/50868). Small DateTime64 constant values returned by a nested subquery inside a distributed query were wrongly transformed to Nulls, thus causing errors and possible incorrect query results. [#68323](https://github.com/ClickHouse/ClickHouse/pull/68323) ([Shankar](https://github.com/shiyer7474)).
* Backported in [#69025](https://github.com/ClickHouse/ClickHouse/issues/69025): Added back virtual columns `_table` and `_database` to distributed tables. They were available until version 24.3. [#68672](https://github.com/ClickHouse/ClickHouse/pull/68672) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68860](https://github.com/ClickHouse/ClickHouse/issues/68860): Fix possible error `Size of permutation (0) is less than required (...)` during Variant column permutation. [#68681](https://github.com/ClickHouse/ClickHouse/pull/68681) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68786](https://github.com/ClickHouse/ClickHouse/issues/68786): Fix issue with materialized constant keys when hashing maps with arrays as keys in functions `sipHash(64/128)Keyed`. [#68731](https://github.com/ClickHouse/ClickHouse/pull/68731) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#69156](https://github.com/ClickHouse/ClickHouse/issues/69156): Fix possible wrong result during anyHeavy state merge. [#68950](https://github.com/ClickHouse/ClickHouse/pull/68950) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#69116](https://github.com/ClickHouse/ClickHouse/issues/69116): Fix logical error when we have empty async insert. [#69080](https://github.com/ClickHouse/ClickHouse/pull/69080) ([Han Fei](https://github.com/hanfei1991)).
#### NO CL CATEGORY
* Backported in [#68942](https://github.com/ClickHouse/ClickHouse/issues/68942): [#68897](https://github.com/ClickHouse/ClickHouse/pull/68897) ([Alexander Gololobov](https://github.com/davenger)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#68830](https://github.com/ClickHouse/ClickHouse/issues/68830): Turn off fault injection for insert in `01396_inactive_replica_cleanup_nodes_zookeeper`. [#68715](https://github.com/ClickHouse/ClickHouse/pull/68715) ([alesapin](https://github.com/alesapin)).
* Backported in [#69048](https://github.com/ClickHouse/ClickHouse/issues/69048): Fix 01114_database_atomic flakiness. [#68930](https://github.com/ClickHouse/ClickHouse/pull/68930) ([Raúl Marín](https://github.com/Algunenano)).


@ -0,0 +1,16 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.6.6.6-stable (a4c4580e639) FIXME as compared to v24.6.5.30-stable (e6e196c92d6)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#69197](https://github.com/ClickHouse/ClickHouse/issues/69197): TODO. [#68744](https://github.com/ClickHouse/ClickHouse/pull/68744) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#69225](https://github.com/ClickHouse/ClickHouse/issues/69225): Disable memory test with sanitizer. [#69193](https://github.com/ClickHouse/ClickHouse/pull/69193) ([alesapin](https://github.com/alesapin)).


@ -0,0 +1,55 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.7.3.47-stable (2e50fe27a14) FIXME as compared to v24.7.2.13-stable (6e41f601b2f)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68232](https://github.com/ClickHouse/ClickHouse/issues/68232): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#67969](https://github.com/ClickHouse/ClickHouse/issues/67969): Fixed reading of subcolumns after `ALTER ADD COLUMN` query. [#66243](https://github.com/ClickHouse/ClickHouse/pull/66243) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68068](https://github.com/ClickHouse/ClickHouse/issues/68068): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#67637](https://github.com/ClickHouse/ClickHouse/issues/67637): Fix for occasional deadlock in Context::getDDLWorker. [#66843](https://github.com/ClickHouse/ClickHouse/pull/66843) ([Alexander Gololobov](https://github.com/davenger)).
* Backported in [#67820](https://github.com/ClickHouse/ClickHouse/issues/67820): Fix possible deadlock on query cancel with parallel replicas. [#66905](https://github.com/ClickHouse/ClickHouse/pull/66905) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67818](https://github.com/ClickHouse/ClickHouse/issues/67818): Only relevant to the experimental Variant data type. Fix crash with Variant + AggregateFunction type. [#67122](https://github.com/ClickHouse/ClickHouse/pull/67122) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67766](https://github.com/ClickHouse/ClickHouse/issues/67766): Fix crash of `uniq` and `uniqTheta` with `tuple()` argument. Closes [#67303](https://github.com/ClickHouse/ClickHouse/issues/67303). [#67306](https://github.com/ClickHouse/ClickHouse/pull/67306) ([flynn](https://github.com/ucasfl)).
* Backported in [#67881](https://github.com/ClickHouse/ClickHouse/issues/67881): Correctly parse file name/URI containing `::` if it's not an archive. [#67433](https://github.com/ClickHouse/ClickHouse/pull/67433) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#68613](https://github.com/ClickHouse/ClickHouse/issues/68613): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#67854](https://github.com/ClickHouse/ClickHouse/issues/67854): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68278](https://github.com/ClickHouse/ClickHouse/issues/68278): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68040](https://github.com/ClickHouse/ClickHouse/issues/68040): Fix creation of view with recursive CTE. [#67587](https://github.com/ClickHouse/ClickHouse/pull/67587) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Backported in [#68038](https://github.com/ClickHouse/ClickHouse/issues/68038): Fix crash on `percent_rank`. `percent_rank`'s default frame type is changed to `range unbounded preceding and unbounded following`. `IWindowFunction`'s default window frame is considered, and window functions without a window frame definition in SQL can now be put into different `WindowTransformer`s properly. [#67661](https://github.com/ClickHouse/ClickHouse/pull/67661) ([lgbo](https://github.com/lgbo-ustc)).
* Backported in [#67713](https://github.com/ClickHouse/ClickHouse/issues/67713): Fix reloading SQL UDFs with UNION. Previously, restarting the server could make UDF invalid. [#67665](https://github.com/ClickHouse/ClickHouse/pull/67665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#67840](https://github.com/ClickHouse/ClickHouse/issues/67840): Fix potential stack overflow in `JSONMergePatch` function. Renamed this function from `jsonMergePatch` to `JSONMergePatch` because the previous name was wrong. The previous name is still kept for compatibility. Improved diagnostic of errors in the function. This closes [#67304](https://github.com/ClickHouse/ClickHouse/issues/67304). [#67756](https://github.com/ClickHouse/ClickHouse/pull/67756) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#67995](https://github.com/ClickHouse/ClickHouse/issues/67995): Validate experimental/suspicious data types in ALTER ADD/MODIFY COLUMN. [#67911](https://github.com/ClickHouse/ClickHouse/pull/67911) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68224](https://github.com/ClickHouse/ClickHouse/issues/68224): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68095](https://github.com/ClickHouse/ClickHouse/issues/68095): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68126](https://github.com/ClickHouse/ClickHouse/issues/68126): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to the predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68223](https://github.com/ClickHouse/ClickHouse/issues/68223): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via hopEnd, hopStart, tumbleEnd, and tumbleStart. [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68175](https://github.com/ClickHouse/ClickHouse/issues/68175): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68341](https://github.com/ClickHouse/ClickHouse/issues/68341): Try fix postgres crash when query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68398](https://github.com/ClickHouse/ClickHouse/issues/68398): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68669](https://github.com/ClickHouse/ClickHouse/issues/68669): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#67518](https://github.com/ClickHouse/ClickHouse/issues/67518): Split slow test 03036_dynamic_read_subcolumns. [#66954](https://github.com/ClickHouse/ClickHouse/pull/66954) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67516](https://github.com/ClickHouse/ClickHouse/issues/67516): Split 01508_partition_pruning_long. [#66983](https://github.com/ClickHouse/ClickHouse/pull/66983) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67529](https://github.com/ClickHouse/ClickHouse/issues/67529): Reduce max time of 00763_long_lock_buffer_alter_destination_table. [#67185](https://github.com/ClickHouse/ClickHouse/pull/67185) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#67803](https://github.com/ClickHouse/ClickHouse/issues/67803): Disable some Dynamic tests under sanitizers, rewrite 03202_dynamic_null_map_subcolumn to sql. [#67359](https://github.com/ClickHouse/ClickHouse/pull/67359) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#67643](https://github.com/ClickHouse/ClickHouse/issues/67643): [Green CI] Fix potentially flaky test_mask_sensitive_info integration test. [#67506](https://github.com/ClickHouse/ClickHouse/pull/67506) ([Alexey Katsman](https://github.com/alexkats)).
* Backported in [#67609](https://github.com/ClickHouse/ClickHouse/issues/67609): Fix test_zookeeper_config_load_balancing after adding the xdist worker name to the instance. [#67590](https://github.com/ClickHouse/ClickHouse/pull/67590) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#67871](https://github.com/ClickHouse/ClickHouse/issues/67871): Fix 02434_cancel_insert_when_client_dies. [#67600](https://github.com/ClickHouse/ClickHouse/pull/67600) ([vdimir](https://github.com/vdimir)).
* Backported in [#67704](https://github.com/ClickHouse/ClickHouse/issues/67704): Fix 02910_bad_logs_level_in_local in fast tests. [#67603](https://github.com/ClickHouse/ClickHouse/pull/67603) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#67689](https://github.com/ClickHouse/ClickHouse/issues/67689): Fix 01605_adaptive_granularity_block_borders. [#67605](https://github.com/ClickHouse/ClickHouse/pull/67605) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67827](https://github.com/ClickHouse/ClickHouse/issues/67827): Try fix 03143_asof_join_ddb_long. [#67620](https://github.com/ClickHouse/ClickHouse/pull/67620) ([Nikita Taranov](https://github.com/nickitat)).
* Backported in [#67892](https://github.com/ClickHouse/ClickHouse/issues/67892): Revert "Merge pull request [#66510](https://github.com/ClickHouse/ClickHouse/issues/66510) from canhld94/fix_trivial_count_non_deterministic_func". [#67800](https://github.com/ClickHouse/ClickHouse/pull/67800) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68081](https://github.com/ClickHouse/ClickHouse/issues/68081): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Update version after release. [#68044](https://github.com/ClickHouse/ClickHouse/pull/68044) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68269](https://github.com/ClickHouse/ClickHouse/issues/68269): [Green CI] Fix test 01903_correct_block_size_prediction_with_default. [#68203](https://github.com/ClickHouse/ClickHouse/pull/68203) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#68432](https://github.com/ClickHouse/ClickHouse/issues/68432): tests: make 01600_parts_states_metrics_long better. [#68265](https://github.com/ClickHouse/ClickHouse/pull/68265) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68538](https://github.com/ClickHouse/ClickHouse/issues/68538): CI: Native build for package_aarch64. [#68457](https://github.com/ClickHouse/ClickHouse/pull/68457) ([Max K.](https://github.com/maxknv)).
* Backported in [#68555](https://github.com/ClickHouse/ClickHouse/issues/68555): CI: Minor release workflow fix. [#68536](https://github.com/ClickHouse/ClickHouse/pull/68536) ([Max K.](https://github.com/maxknv)).

View File

@ -0,0 +1,36 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.7.4.51-stable (70fe2f6fa52) FIXME as compared to v24.7.3.42-stable (63730bc4293)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68232](https://github.com/ClickHouse/ClickHouse/issues/68232): Fixed `Not-ready Set` in some system tables when filtering using subqueries. [#66018](https://github.com/ClickHouse/ClickHouse/pull/66018) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#68068](https://github.com/ClickHouse/ClickHouse/issues/68068): Fix boolean literals in query sent to external database (for engines like `PostgreSQL`). [#66282](https://github.com/ClickHouse/ClickHouse/pull/66282) ([vdimir](https://github.com/vdimir)).
* Backported in [#68613](https://github.com/ClickHouse/ClickHouse/issues/68613): Fixes [#66026](https://github.com/ClickHouse/ClickHouse/issues/66026). Avoid unresolved table function arguments traversal in `ReplaceTableNodeToDummyVisitor`. [#67522](https://github.com/ClickHouse/ClickHouse/pull/67522) ([Dmitry Novik](https://github.com/novikd)).
* Backported in [#68278](https://github.com/ClickHouse/ClickHouse/issues/68278): Fix inserting into stream like engines (Kafka, RabbitMQ, NATS) through HTTP interface. [#67554](https://github.com/ClickHouse/ClickHouse/pull/67554) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68040](https://github.com/ClickHouse/ClickHouse/issues/68040): Fix creation of view with recursive CTE. [#67587](https://github.com/ClickHouse/ClickHouse/pull/67587) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Backported in [#68038](https://github.com/ClickHouse/ClickHouse/issues/68038): Fix crash on `percent_rank`. `percent_rank`'s default frame type is changed to `range unbounded preceding and unbounded following`. `IWindowFunction`'s default window frame is considered, and window functions without a window frame definition in SQL can now be put into different `WindowTransformer`s properly. [#67661](https://github.com/ClickHouse/ClickHouse/pull/67661) ([lgbo](https://github.com/lgbo-ustc)).
* Backported in [#68224](https://github.com/ClickHouse/ClickHouse/issues/68224): Fix wrong `count()` result when there is non-deterministic function in predicate. [#67922](https://github.com/ClickHouse/ClickHouse/pull/67922) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Backported in [#68095](https://github.com/ClickHouse/ClickHouse/issues/68095): Fixed the calculation of the maximum thread soft limit in containerized environments where the usable CPU count is limited. [#67963](https://github.com/ClickHouse/ClickHouse/pull/67963) ([Robert Schulze](https://github.com/rschu1ze)).
* Backported in [#68126](https://github.com/ClickHouse/ClickHouse/issues/68126): Fixed skipping of untouched parts in mutations with the new analyzer. Previously, with the analyzer enabled, data in a part could be rewritten by a mutation even if the mutation did not affect that part according to the predicate. [#68052](https://github.com/ClickHouse/ClickHouse/pull/68052) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68223](https://github.com/ClickHouse/ClickHouse/issues/68223): Fixed a NULL pointer dereference, triggered by a specially crafted query, that crashed the server via hopEnd, hopStart, tumbleEnd, and tumbleStart. [#68098](https://github.com/ClickHouse/ClickHouse/pull/68098) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#68175](https://github.com/ClickHouse/ClickHouse/issues/68175): Removes an incorrect optimization to remove sorting in subqueries that use `OFFSET`. Fixes [#67906](https://github.com/ClickHouse/ClickHouse/issues/67906). [#68099](https://github.com/ClickHouse/ClickHouse/pull/68099) ([Graham Campbell](https://github.com/GrahamCampbell)).
* Backported in [#68341](https://github.com/ClickHouse/ClickHouse/issues/68341): Try fix postgres crash when query is cancelled. [#68288](https://github.com/ClickHouse/ClickHouse/pull/68288) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#68398](https://github.com/ClickHouse/ClickHouse/issues/68398): Fix missing sync replica mode in query `SYSTEM SYNC REPLICA`. [#68326](https://github.com/ClickHouse/ClickHouse/pull/68326) ([Duc Canh Le](https://github.com/canhld94)).
* Backported in [#68669](https://github.com/ClickHouse/ClickHouse/issues/68669): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#67803](https://github.com/ClickHouse/ClickHouse/issues/67803): Disable some Dynamic tests under sanitizers, rewrite 03202_dynamic_null_map_subcolumn to sql. [#67359](https://github.com/ClickHouse/ClickHouse/pull/67359) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68081](https://github.com/ClickHouse/ClickHouse/issues/68081): Add an explicit error for `ALTER MODIFY SQL SECURITY` on non-view tables. [#67953](https://github.com/ClickHouse/ClickHouse/pull/67953) ([pufit](https://github.com/pufit)).
* Update version after release. [#68044](https://github.com/ClickHouse/ClickHouse/pull/68044) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Backported in [#68269](https://github.com/ClickHouse/ClickHouse/issues/68269): [Green CI] Fix test 01903_correct_block_size_prediction_with_default. [#68203](https://github.com/ClickHouse/ClickHouse/pull/68203) ([Pablo Marcos](https://github.com/pamarcos)).
* Backported in [#68432](https://github.com/ClickHouse/ClickHouse/issues/68432): tests: make 01600_parts_states_metrics_long better. [#68265](https://github.com/ClickHouse/ClickHouse/pull/68265) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#68538](https://github.com/ClickHouse/ClickHouse/issues/68538): CI: Native build for package_aarch64. [#68457](https://github.com/ClickHouse/ClickHouse/pull/68457) ([Max K.](https://github.com/maxknv)).
* Backported in [#68555](https://github.com/ClickHouse/ClickHouse/issues/68555): CI: Minor release workflow fix. [#68536](https://github.com/ClickHouse/ClickHouse/pull/68536) ([Max K.](https://github.com/maxknv)).
* Backported in [#68760](https://github.com/ClickHouse/ClickHouse/issues/68760): To make patch release possible from every commit on release branch, package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).

View File

@ -0,0 +1,17 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.7.6.8-stable (7779883593a) FIXME as compared to v24.7.5.37-stable (f2533ca97be)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#69198](https://github.com/ClickHouse/ClickHouse/issues/69198): TODO. [#68744](https://github.com/ClickHouse/ClickHouse/pull/68744) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#69249](https://github.com/ClickHouse/ClickHouse/issues/69249): The `UNION` clause in subqueries wasn't handled correctly in queries with parallel replicas and led to a LOGICAL_ERROR `Duplicate announcement received for replica`. [#69146](https://github.com/ClickHouse/ClickHouse/pull/69146) ([Igor Nikonov](https://github.com/devcrafter)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#69227](https://github.com/ClickHouse/ClickHouse/issues/69227): Disable memory test with sanitizer. [#69193](https://github.com/ClickHouse/ClickHouse/pull/69193) ([alesapin](https://github.com/alesapin)).

View File

@ -0,0 +1,12 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.8.2.3-lts (b54f79ed323) FIXME as compared to v24.8.1.2684-lts (161c62fd295)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68670](https://github.com/ClickHouse/ClickHouse/issues/68670): Fix `LOGICAL_ERROR`s when functions `sipHash64Keyed`, `sipHash128Keyed`, or `sipHash128ReferenceKeyed` are applied to empty arrays or tuples. [#68630](https://github.com/ClickHouse/ClickHouse/pull/68630) ([Robert Schulze](https://github.com/rschu1ze)).

View File

@ -0,0 +1,50 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.8.3.59-lts (e729b9fa40e) FIXME as compared to v24.8.2.3-lts (b54f79ed323)
#### New Feature
* Backported in [#68710](https://github.com/ClickHouse/ClickHouse/issues/68710): Query cache entries can now be dropped by tag. For example, the query cache entry created by `SELECT 1 SETTINGS use_query_cache = true, query_cache_tag = 'abc'` can now be dropped by `SYSTEM DROP QUERY CACHE TAG 'abc'` (or of course just: `SYSTEM DROP QUERY CACHE` which will clear the entire query cache). [#68477](https://github.com/ClickHouse/ClickHouse/pull/68477) ([Michał Tabaszewski](https://github.com/pinsvin00)).
#### Improvement
* Backported in [#69097](https://github.com/ClickHouse/ClickHouse/issues/69097): Support for the Spanish language in the embedded dictionaries. [#69035](https://github.com/ClickHouse/ClickHouse/pull/69035) ([Vasily Okunev](https://github.com/VOkunev)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#68973](https://github.com/ClickHouse/ClickHouse/issues/68973): Fix the upper bound of the function `fromModifiedJulianDay`. It was supposed to be `9999-12-31` but was mistakenly set to `9999-01-01`. [#67583](https://github.com/ClickHouse/ClickHouse/pull/67583) ([PHO](https://github.com/depressed-pho)).
* Backported in [#68818](https://github.com/ClickHouse/ClickHouse/issues/68818): Fixed crash in Parquet filtering when data types in the file substantially differ from requested types (e.g. `... FROM file('a.parquet', Parquet, 'x String')`, but the file has `x Int64`). Without this fix, use `input_format_parquet_filter_push_down = 0` as a workaround. [#68131](https://github.com/ClickHouse/ClickHouse/pull/68131) ([Michael Kolupaev](https://github.com/al13n321)).
* Backported in [#68893](https://github.com/ClickHouse/ClickHouse/issues/68893): After https://github.com/ClickHouse/ClickHouse/pull/61984, `schema_inference_make_columns_nullable=0` could still make columns `Nullable` in Parquet/Arrow formats. The change was backward incompatible and users noticed the change in behaviour. This PR makes `schema_inference_make_columns_nullable=0` work as before (no Nullable columns will be inferred) and introduces a new value `auto` for this setting that will make columns `Nullable` only if the data has information about nullability. [#68298](https://github.com/ClickHouse/ClickHouse/pull/68298) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68721](https://github.com/ClickHouse/ClickHouse/issues/68721): Fixes [#50868](https://github.com/ClickHouse/ClickHouse/issues/50868). Small DateTime64 constant values returned by a nested subquery inside a distributed query were wrongly transformed to Nulls, thus causing errors and possible incorrect query results. [#68323](https://github.com/ClickHouse/ClickHouse/pull/68323) ([Shankar](https://github.com/shiyer7474)).
* Backported in [#69029](https://github.com/ClickHouse/ClickHouse/issues/69029): Added back virtual columns `_table` and `_database` to distributed tables. They were available until version 24.3. [#68672](https://github.com/ClickHouse/ClickHouse/pull/68672) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#68864](https://github.com/ClickHouse/ClickHouse/issues/68864): Fix possible error `Size of permutation (0) is less than required (...)` during Variant column permutation. [#68681](https://github.com/ClickHouse/ClickHouse/pull/68681) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68854](https://github.com/ClickHouse/ClickHouse/issues/68854): Fix possible error `DB::Exception: Block structure mismatch in joined block stream: different columns:` with new JSON column. [#68686](https://github.com/ClickHouse/ClickHouse/pull/68686) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68790](https://github.com/ClickHouse/ClickHouse/issues/68790): Fix issue with materialized constant keys when hashing maps with arrays as keys in functions `sipHash(64/128)Keyed`. [#68731](https://github.com/ClickHouse/ClickHouse/pull/68731) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Backported in [#69108](https://github.com/ClickHouse/ClickHouse/issues/69108): TODO. [#68744](https://github.com/ClickHouse/ClickHouse/pull/68744) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Backported in [#68850](https://github.com/ClickHouse/ClickHouse/issues/68850): Fix resolving dynamic subcolumns from subqueries in analyzer. [#68824](https://github.com/ClickHouse/ClickHouse/pull/68824) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68911](https://github.com/ClickHouse/ClickHouse/issues/68911): Fix complex types metadata parsing in DeltaLake. Closes [#68739](https://github.com/ClickHouse/ClickHouse/issues/68739). [#68836](https://github.com/ClickHouse/ClickHouse/pull/68836) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Backported in [#69160](https://github.com/ClickHouse/ClickHouse/issues/69160): Fix possible wrong result during anyHeavy state merge. [#68950](https://github.com/ClickHouse/ClickHouse/pull/68950) ([Raúl Marín](https://github.com/Algunenano)).
* Backported in [#69072](https://github.com/ClickHouse/ClickHouse/issues/69072): Fixed writing to Materialized Views with enabled setting `optimize_functions_to_subcolumns`. [#68951](https://github.com/ClickHouse/ClickHouse/pull/68951) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#69016](https://github.com/ClickHouse/ClickHouse/issues/69016): Don't use the serializations cache in const Dynamic column methods. It could lead to use of uninitialized values or even race conditions during aggregations. [#68953](https://github.com/ClickHouse/ClickHouse/pull/68953) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#69120](https://github.com/ClickHouse/ClickHouse/issues/69120): Fix logical error when we have empty async insert. [#69080](https://github.com/ClickHouse/ClickHouse/pull/69080) ([Han Fei](https://github.com/hanfei1991)).
#### NO CL CATEGORY
* Backported in [#68947](https://github.com/ClickHouse/ClickHouse/issues/68947):. [#68897](https://github.com/ClickHouse/ClickHouse/pull/68897) ([Alexander Gololobov](https://github.com/davenger)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#68704](https://github.com/ClickHouse/ClickHouse/issues/68704): Fix enumerating dynamic subcolumns. [#68582](https://github.com/ClickHouse/ClickHouse/pull/68582) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#69000](https://github.com/ClickHouse/ClickHouse/issues/69000): Prioritize virtual columns in Hive-style partitioning. [#68606](https://github.com/ClickHouse/ClickHouse/pull/68606) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Backported in [#68799](https://github.com/ClickHouse/ClickHouse/issues/68799): CI: Disable SQLLogic job. [#68654](https://github.com/ClickHouse/ClickHouse/pull/68654) ([Max K.](https://github.com/maxknv)).
* Backported in [#68834](https://github.com/ClickHouse/ClickHouse/issues/68834): Turn off fault injection for insert in `01396_inactive_replica_cleanup_nodes_zookeeper`. [#68715](https://github.com/ClickHouse/ClickHouse/pull/68715) ([alesapin](https://github.com/alesapin)).
* Backported in [#68781](https://github.com/ClickHouse/ClickHouse/issues/68781): Fix flaky test 00989_parallel_parts_loading. [#68737](https://github.com/ClickHouse/ClickHouse/pull/68737) ([alesapin](https://github.com/alesapin)).
* Backported in [#68762](https://github.com/ClickHouse/ClickHouse/issues/68762): To make patch release possible from every commit on release branch, package_debug build is required and must not be skipped. [#68750](https://github.com/ClickHouse/ClickHouse/pull/68750) ([Max K.](https://github.com/maxknv)).
* Backported in [#68810](https://github.com/ClickHouse/ClickHouse/issues/68810): Try to disable rerun check if job triggered manually. [#68751](https://github.com/ClickHouse/ClickHouse/pull/68751) ([Max K.](https://github.com/maxknv)).
* Backported in [#68962](https://github.com/ClickHouse/ClickHouse/issues/68962): Fix 2477 timeout. [#68752](https://github.com/ClickHouse/ClickHouse/pull/68752) ([jsc0218](https://github.com/jsc0218)).
* Backported in [#68977](https://github.com/ClickHouse/ClickHouse/issues/68977): Check setting use_json_alias_for_old_object_type at runtime. [#68793](https://github.com/ClickHouse/ClickHouse/pull/68793) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#68852](https://github.com/ClickHouse/ClickHouse/issues/68852): Make dynamic structure selection more consistent. [#68802](https://github.com/ClickHouse/ClickHouse/pull/68802) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#69052](https://github.com/ClickHouse/ClickHouse/issues/69052): Fix 01114_database_atomic flakiness. [#68930](https://github.com/ClickHouse/ClickHouse/pull/68930) ([Raúl Marín](https://github.com/Algunenano)).

View File

@ -0,0 +1,22 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.8.4.13-lts (53195bc189b) FIXME as compared to v24.8.3.59-lts (e729b9fa40e)
#### Improvement
* Backported in [#68699](https://github.com/ClickHouse/ClickHouse/issues/68699): Deleted the old named-collections code from dictionaries and substituted it with the new one, which allows using DDL-created named collections in dictionaries. Closes [#60936](https://github.com/ClickHouse/ClickHouse/issues/60936), closes [#36890](https://github.com/ClickHouse/ClickHouse/issues/36890). [#68412](https://github.com/ClickHouse/ClickHouse/pull/68412) ([Kseniia Sumarokova](https://github.com/kssenii)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Backported in [#69231](https://github.com/ClickHouse/ClickHouse/issues/69231): Fix parsing error when null should be inserted as default in some cases during JSON type parsing. [#68955](https://github.com/ClickHouse/ClickHouse/pull/68955) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#69251](https://github.com/ClickHouse/ClickHouse/issues/69251): The `UNION` clause in subqueries wasn't handled correctly in queries with parallel replicas and led to a LOGICAL_ERROR `Duplicate announcement received for replica`. [#69146](https://github.com/ClickHouse/ClickHouse/pull/69146) ([Igor Nikonov](https://github.com/devcrafter)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Backported in [#69189](https://github.com/ClickHouse/ClickHouse/issues/69189): Don't create Object type if use_json_alias_for_old_object_type=1 but allow_experimental_object_type=0. [#69150](https://github.com/ClickHouse/ClickHouse/pull/69150) ([Kruglov Pavel](https://github.com/Avogar)).
* Backported in [#69229](https://github.com/ClickHouse/ClickHouse/issues/69229): Disable memory test with sanitizer. [#69193](https://github.com/ClickHouse/ClickHouse/pull/69193) ([alesapin](https://github.com/alesapin)).
* Backported in [#69219](https://github.com/ClickHouse/ClickHouse/issues/69219): Disable perf-like test with sanitizers. [#69194](https://github.com/ClickHouse/ClickHouse/pull/69194) ([alesapin](https://github.com/alesapin)).

View File

@ -0,0 +1,72 @@
---
slug: /en/engines/table-engines/integrations/azure-queue
sidebar_position: 181
sidebar_label: AzureQueue
---
# AzureQueue Table Engine
This engine provides an integration with the [Azure Blob Storage](https://azure.microsoft.com/en-us/products/storage/blobs) ecosystem, allowing streaming data import.
## Create Table {#creating-a-table}
``` sql
CREATE TABLE test (name String, value UInt32)
ENGINE = AzureQueue(...)
[SETTINGS]
[mode = '',]
[after_processing = 'keep',]
[keeper_path = '',]
...
```
**Engine parameters**
`AzureQueue` parameters are the same as those supported by the `AzureBlobStorage` table engine. See the parameters section [here](../../../engines/table-engines/integrations/azureBlobStorage.md).
**Example**
```sql
CREATE TABLE azure_queue_engine_table (name String, value UInt32)
ENGINE=AzureQueue('DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://azurite1:10000/devstoreaccount1/data/')
SETTINGS
mode = 'unordered'
```
## Settings {#settings}
The set of supported settings is the same as for the `S3Queue` table engine, but without the `s3queue_` prefix. See the [full list of settings](../../../engines/table-engines/integrations/s3queue.md#settings).
## Description {#description}
`SELECT` is not particularly useful for streaming import (except for debugging), because each file can be imported only once. It is more practical to create real-time threads using [materialized views](../../../sql-reference/statements/create/view.md). To do this:
1. Use the engine to create a table for consuming from the specified path in Azure Blob Storage and consider it a data stream.
2. Create a table with the desired structure.
3. Create a materialized view that converts data from the engine and puts it into a previously created table.
When the `MATERIALIZED VIEW` joins the engine, it starts collecting data in the background.
Example:
``` sql
CREATE TABLE azure_queue_engine_table (name String, value UInt32)
ENGINE=AzureQueue('<endpoint>', 'CSV', 'gzip')
SETTINGS
mode = 'unordered';
CREATE TABLE stats (name String, value UInt32)
ENGINE = MergeTree() ORDER BY name;
CREATE MATERIALIZED VIEW consumer TO stats
AS SELECT name, value FROM azure_queue_engine_table;
SELECT * FROM stats ORDER BY name;
```
## Virtual columns {#virtual-columns}
- `_path` — Path to the file.
- `_file` — Name of the file.
For more information about virtual columns see [here](../../../engines/table-engines/index.md#table_engines-virtual_columns).
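For illustration, a minimal sketch that inspects the virtual columns while debugging (keep in mind that each file can be consumed only once; the table name is the one from the example above):

``` sql
-- _path and _file identify the source blob of each row
SELECT _path, _file, name, value
FROM azure_queue_engine_table
LIMIT 5;
```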

View File

@ -6,28 +6,34 @@ sidebar_label: Iceberg
# Iceberg Table Engine
This engine provides a read-only integration with existing Apache [Iceberg](https://iceberg.apache.org/) tables in Amazon S3.
This engine provides a read-only integration with existing Apache [Iceberg](https://iceberg.apache.org/) tables stored in Amazon S3, Azure Blob Storage, or locally.
## Create Table
Note that the Iceberg table must already exist in S3, this command does not take DDL parameters to create a new table.
Note that the Iceberg table must already exist in the storage; this command does not take DDL parameters to create a new table.
``` sql
CREATE TABLE iceberg_table
ENGINE = Iceberg(url, [aws_access_key_id, aws_secret_access_key,])
CREATE TABLE iceberg_table_s3
ENGINE = IcebergS3(url [, NOSIGN | access_key_id, secret_access_key [, session_token]], format [, compression])
CREATE TABLE iceberg_table_azure
ENGINE = IcebergAzure(connection_string|storage_account_url, container_name, blobpath [, account_name, account_key, format, compression])
CREATE TABLE iceberg_table_local
ENGINE = IcebergLocal(path_to_table [, format] [, compression_method])
```
**Engine parameters**
**Engine arguments**
- `url` — url with the path to an existing Iceberg table.
- `aws_access_key_id`, `aws_secret_access_key` - Long-term credentials for the [AWS](https://aws.amazon.com/) account user. You can use these to authenticate your requests. Parameter is optional. If credentials are not specified, they are used from the configuration file.
The descriptions of the arguments coincide with those of the `S3`, `AzureBlobStorage`, and `File` engines, respectively.
`format` stands for the format of data files in the Iceberg table.
Engine parameters can be specified using [Named Collections](../../../operations/named-collections.md).
**Example**
```sql
CREATE TABLE iceberg_table ENGINE=Iceberg('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test')
CREATE TABLE iceberg_table ENGINE=IcebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test')
```
Using named collections:
@ -45,9 +51,15 @@ Using named collections:
```
```sql
CREATE TABLE iceberg_table ENGINE=Iceberg(iceberg_conf, filename = 'test_table')
CREATE TABLE iceberg_table ENGINE=IcebergS3(iceberg_conf, filename = 'test_table')
```
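As a hedged sketch, the `iceberg_conf` named collection referenced above could be defined in the server configuration roughly as follows (the URL and credentials are placeholders):

```xml
<clickhouse>
    <named_collections>
        <iceberg_conf>
            <!-- placeholder endpoint and test credentials -->
            <url>http://test.s3.amazonaws.com/clickhouse-bucket/</url>
            <access_key_id>test</access_key_id>
            <secret_access_key>test</secret_access_key>
        </iceberg_conf>
    </named_collections>
</clickhouse>
```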
**Aliases**
The table engine `Iceberg` is now an alias for `IcebergS3`.
## See also
- [iceberg table function](/docs/en/sql-reference/table-functions/iceberg.md)

View File

@ -35,7 +35,7 @@ CREATE TABLE s3_engine_table (name String, value UInt32)
[SETTINGS ...]
```
### Engine parameters
### Engine parameters {#parameters}
- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `**`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
- `NOSIGN` - If this keyword is provided in place of credentials, all the requests will not be signed.

View File

@ -5,6 +5,7 @@ sidebar_label: S3Queue
---
# S3Queue Table Engine
This engine provides integration with the [Amazon S3](https://aws.amazon.com/s3/) ecosystem and allows streaming import. It is similar to the [Kafka](../../../engines/table-engines/integrations/kafka.md) and [RabbitMQ](../../../engines/table-engines/integrations/rabbitmq.md) engines, but provides S3-specific features.
## Create Table {#creating-a-table}
@ -16,27 +17,25 @@ CREATE TABLE s3_queue_engine_table (name String, value UInt32)
[mode = '',]
[after_processing = 'keep',]
[keeper_path = '',]
[s3queue_loading_retries = 0,]
[s3queue_processing_threads_num = 1,]
[s3queue_enable_logging_to_s3queue_log = 0,]
[s3queue_polling_min_timeout_ms = 1000,]
[s3queue_polling_max_timeout_ms = 10000,]
[s3queue_polling_backoff_ms = 0,]
[s3queue_tracked_file_ttl_sec = 0,]
[s3queue_tracked_files_limit = 1000,]
[s3queue_cleanup_interval_min_ms = 10000,]
[s3queue_cleanup_interval_max_ms = 30000,]
[loading_retries = 0,]
[processing_threads_num = 1,]
[enable_logging_to_s3queue_log = 0,]
[polling_min_timeout_ms = 1000,]
[polling_max_timeout_ms = 10000,]
[polling_backoff_ms = 0,]
[tracked_file_ttl_sec = 0,]
[tracked_files_limit = 1000,]
[cleanup_interval_min_ms = 10000,]
[cleanup_interval_max_ms = 30000,]
```
Starting with `24.7`, settings without the `s3queue_` prefix are also supported.
:::warning
Before `24.7`, the `s3queue_` prefix is required for all settings apart from `mode`, `after_processing`, and `keeper_path`.
:::
**Engine parameters**
- `path` — Bucket url with path to file. Supports following wildcards in readonly mode: `*`, `**`, `?`, `{abc,def}` and `{N..M}` where `N`, `M` — numbers, `'abc'`, `'def'` — strings. For more information see [below](#wildcards-in-path).
- `NOSIGN` - If this keyword is provided in place of credentials, all the requests will not be signed.
- `format` — The [format](../../../interfaces/formats.md#formats) of the file.
- `aws_access_key_id`, `aws_secret_access_key` - Long-term credentials for the [AWS](https://aws.amazon.com/) account user. You can use these to authenticate your requests. Parameter is optional. If credentials are not specified, they are used from the configuration file. For more information see [Using S3 for Data Storage](../mergetree-family/mergetree.md#table_engine-mergetree-s3).
- `compression` — Compression type. Supported values: `none`, `gzip/gz`, `brotli/br`, `xz/LZMA`, `zstd/zst`. Parameter is optional. By default, it will autodetect compression by file extension.
`S3Queue` parameters are the same as those supported by the `S3` table engine. See the parameters section [here](../../../engines/table-engines/integrations/s3.md#parameters).
**Example**
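A minimal sketch, assuming a placeholder bucket URL and anonymous (`NOSIGN`) access:

```sql
CREATE TABLE s3_queue_engine_table (name String, value UInt32)
ENGINE = S3Queue('https://my-bucket.s3.amazonaws.com/data/*', NOSIGN, 'CSV')
SETTINGS
    mode = 'unordered';
```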

View File

@ -54,7 +54,7 @@ Parameters:
- `distance_function`: either `L2Distance` (the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) - the length of a
line between two points in Euclidean space), or `cosineDistance` (the [cosine
distance](https://en.wikipedia.org/wiki/Cosine_similarity#Cosine_distance) - the angle between two non-zero vectors).
- `quantization`: either `f32`, `f16`, or `i8` for storing the vector with reduced precision (optional, default: `f32`)
- `quantization`: either `f64`, `f32`, `f16`, `bf16`, or `i8` for storing the vector with reduced precision (optional, default: `bf16`)
- `m`: the number of neighbors per graph node (optional, default: 16)
- `ef_construction`: (optional, default: 128)
- `ef_search`: (optional, default: 64)
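A hedged sketch of how the parameters above appear in an index definition, assuming the experimental setting `allow_experimental_vector_similarity_index` is enabled (table and column names are illustrative; the optional parameters can be appended after the distance function to override the defaults):

```sql
SET allow_experimental_vector_similarity_index = 1;

CREATE TABLE tab
(
    id UInt64,
    vectors Array(Float32),
    -- HNSW index over the vectors column using Euclidean distance
    INDEX idx vectors TYPE vector_similarity('hnsw', 'L2Distance')
)
ENGINE = MergeTree
ORDER BY id;
```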
@ -111,15 +111,16 @@ ANN indexes are built during column insertion and merge. As a result, `INSERT` a
tables. ANN indexes are ideally used only with immutable or rarely changed data, i.e. when there are far more read requests than write requests.
ANN indexes support these queries:
ANN indexes support this type of query:
``` sql
SELECT *
FROM table
[WHERE ...]
ORDER BY Distance(vectors, Point)
LIMIT N
```
``` sql
WITH [...] AS reference_vector
SELECT *
FROM table
WHERE ... -- WHERE clause is optional
ORDER BY Distance(vectors, reference_vector)
LIMIT N
```
:::tip
To avoid writing out large vectors, you can use [query

View File

@ -80,7 +80,7 @@ For partitioning by month, use the `toYYYYMM(date_column)` expression, where `da
`PRIMARY KEY` — The primary key if it [differs from the sorting key](#choosing-a-primary-key-that-differs-from-the-sorting-key). Optional.
Specifying a sorting key (using `ORDER BY` clause) implicitly specifies a primary key.
It is usually not necessary to specify the primary key in addition to the primary key.
It is usually not necessary to specify the primary key in addition to the sorting key.
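For illustration, a sketch of a table whose primary key is a prefix of the sorting key (table and column names are hypothetical):

```sql
CREATE TABLE events
(
    user_id UInt64,
    ts DateTime,
    payload String
)
ENGINE = MergeTree
ORDER BY (user_id, ts)   -- sorting key
PRIMARY KEY user_id;     -- primary key, a prefix of the sorting key
```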
#### SAMPLE BY
@ -989,19 +989,52 @@ ALTER TABLE tab DROP STATISTICS a;
These lightweight statistics aggregate information about the distribution of values in columns. Statistics are stored in every part and updated on every insert.
They can be used for prewhere optimization only if the setting `allow_statistics_optimize = 1` is enabled.
#### Available Types of Column Statistics {#available-types-of-column-statistics}
### Available Types of Column Statistics {#available-types-of-column-statistics}
- `MinMax`
The minimum and maximum column values, which allow estimating the selectivity of range filters on numeric columns.
Syntax: `minmax`
- `TDigest`
[TDigest](https://github.com/tdunning/t-digest) sketches which allow computing approximate percentiles (e.g. the 90th percentile) for numeric columns.
Syntax: `tdigest`
- `Uniq`
[HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog) sketches which provide an estimate of how many distinct values a column contains.
- `count_min`
Syntax: `uniq`
- `CountMin`
[CountMin](https://en.wikipedia.org/wiki/Count%E2%80%93min_sketch) sketches which provide an approximate count of the frequency of each value in a column.
Syntax: `countmin`
### Supported Data Types {#supported-data-types}
| | (U)Int*, Float*, Decimal(*), Date*, Boolean, Enum* | String or FixedString |
|-----------|----------------------------------------------------|-----------------------|
| CountMin | ✔ | ✔ |
| MinMax | ✔ | ✗ |
| TDigest | ✔ | ✗ |
| Uniq | ✔ | ✔ |
### Supported Operations {#supported-operations}
| | Equality filters (==) | Range filters (>, >=, <, <=) |
|-----------|-----------------------|------------------------------|
| CountMin | ✔ | ✗ |
| MinMax | ✗ | ✔ |
| TDigest | ✗ | ✔ |
| Uniq | ✔ | ✗ |
[Count-min](https://en.wikipedia.org/wiki/Count%E2%80%93min_sketch) sketches which provide an approximate count of the frequency of each value in a column.
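A minimal sketch of declaring these statistics in DDL, assuming the experimental setting `allow_experimental_statistics = 1` and the lowercase type names listed above (table and column names are illustrative):

```sql
SET allow_experimental_statistics = 1;

CREATE TABLE tab
(
    a Int64 STATISTICS(minmax, tdigest),   -- range-filter estimation
    b String STATISTICS(uniq, countmin)    -- equality-filter estimation
)
ENGINE = MergeTree
ORDER BY a;
```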
## Column-level Settings {#column-level-settings}

View File

@ -109,6 +109,7 @@ For partitioning by month, use the `toYYYYMM(date_column)` expression, where `da
- `_file` — Resource name of the `URL`. Type: `LowCardinality(String)`.
- `_size` — Size of the resource in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.
- `_headers` - HTTP response headers. Type: `Map(LowCardinality(String), LowCardinality(String))`.
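A sketch of reading the new `_headers` virtual column, assuming it is also exposed through the `url` table function (the URL and schema are placeholders):

```sql
-- _headers is a Map, so individual headers can be read by key
SELECT _headers['Content-Type'] AS content_type
FROM url('https://example.com/data.csv', 'CSV', 'name String, value UInt32')
LIMIT 1;
```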
## Storage Settings {#storage-settings}

View File

@ -39,6 +39,7 @@ The supported formats are:
| [JSONCompact](#jsoncompact) | ✔ | ✔ |
| [JSONCompactStrings](#jsoncompactstrings) | ✗ | ✔ |
| [JSONCompactColumns](#jsoncompactcolumns) | ✔ | ✔ |
| [JSONCompactWithProgress](#jsoncompactwithprogress) | ✗ | ✔ |
| [JSONEachRow](#jsoneachrow) | ✔ | ✔ |
| [PrettyJSONEachRow](#prettyjsoneachrow) | ✗ | ✔ |
| [JSONEachRowWithProgress](#jsoneachrowwithprogress) | ✗ | ✔ |
@ -988,6 +989,59 @@ Example:
Columns that are not present in the block will be filled with default values (you can use [input_format_defaults_for_omitted_fields](/docs/en/operations/settings/settings-formats.md/#input_format_defaults_for_omitted_fields) setting here)
## JSONCompactWithProgress {#jsoncompactwithprogress}
In this format, ClickHouse outputs each row as a separate, newline-delimited JSON object.
Each row is one of the following object types:
1. **Metadata Object (`meta`)**
- Describes the structure of the data rows.
- Fields: `name` (column name), `type` (data type, e.g., `UInt32`, `String`, etc.).
- Example: `{"meta": [{"name":"id", "type":"UInt32"}, {"name":"name", "type":"String"}]}`
- Appears before any data objects.
2. **Data Object (`data`)**
- Represents a row of query results.
- Fields: An array with values corresponding to the columns defined in the metadata.
- Example: `{"data":["1", "John Doe"]}`
- Appears after the metadata object, one per row.
3. **Progress Information Object (`progress`)**
- Provides real-time progress feedback during query execution.
- Fields: `read_rows`, `read_bytes`, `written_rows`, `written_bytes`, `total_rows_to_read`, `result_rows`, `result_bytes`, `elapsed_ns`.
- Example: `{"progress":{"read_rows":"8","read_bytes":"168"}}`
- May appear intermittently.
4. **Statistics Object (`statistics`)**
- Summarizes query execution statistics.
- Fields: `rows`, `rows_before_limit_at_least`, `elapsed`, `rows_read`, `bytes_read`.
- Example: `{"statistics": {"rows":2, "elapsed":0.001995, "rows_read":8}}`
- Appears at the end.
5. **Exception Object (`exception`)**
- Represents an error that occurred during query execution.
- Fields: A single text field containing the error message.
- Example: `{"exception": "Code: 395. DB::Exception: Value passed to 'throwIf' function is non-zero..."}`
- Appears when an error is encountered.
6. **Totals Object (`totals`)**
- Provides the totals for each numeric column in the result set.
- Fields: An array with total values corresponding to the columns defined in the metadata.
- Example: `{"totals": ["", "3"]}`
- Appears at the end of the data rows, if applicable.
Example:
```json
{"meta": [{"name":"id", "type":"UInt32"}, {"name":"name", "type":"String"}]}
{"progress":{"read_rows":"8","read_bytes":"168","written_rows":"0","written_bytes":"0","total_rows_to_read":"2","result_rows":"0","result_bytes":"0","elapsed_ns":"0"}}
{"data":["1", "John Doe"]}
{"data":["2", "Joe Doe"]}
{"statistics": {"rows":2, "rows_before_limit_at_least":8, "elapsed":0.001995, "rows_read":8, "bytes_read":168}}
```
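To produce this output, append the format name to a query; a short sketch with a hypothetical table:

```sql
SELECT id, name FROM test_table FORMAT JSONCompactWithProgress;
```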
## JSONEachRow {#jsoneachrow}
In this format, ClickHouse outputs each row as a separate, newline-delimited JSON object.

View File

@ -58,7 +58,7 @@ Connection: Close
Content-Type: text/tab-separated-values; charset=UTF-8
X-ClickHouse-Server-Display-Name: clickhouse.ru-central1.internal
X-ClickHouse-Query-Id: 5abe861c-239c-467f-b955-8a201abb8b7f
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds": "0"}
1
```
@ -472,7 +472,7 @@ $ curl -v 'http://localhost:8123/predefined_query'
< X-ClickHouse-Format: Template
< X-ClickHouse-Timezone: Asia/Shanghai
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
# HELP "Query" "Number of executing queries"
# TYPE "Query" counter
@ -668,7 +668,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/hi'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
* Connection #0 to host localhost left intact
Say Hi!%
@ -708,7 +708,7 @@ $ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
* Connection #0 to host localhost left intact
<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>%
@ -766,7 +766,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
<html><body>Absolute Path File</body></html>
* Connection #0 to host localhost left intact
@ -785,7 +785,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact

View File

@ -1389,7 +1389,7 @@ DESC format(JSONEachRow, '{"id" : 1, "age" : 25, "name" : "Josh", "status" : nul
#### schema_inference_make_columns_nullable
Controls making inferred types `Nullable` in schema inference for formats without information about nullability.
If the setting is enabled, all inferred type will be `Nullable`, if disabled, the inferred type will be `Nullable` only if `input_format_null_as_default` is disabled and the column contains `NULL` in a sample that is parsed during schema inference.
If the setting is enabled, all inferred types will be `Nullable`; if disabled, the inferred type will never be `Nullable`; if set to `auto`, the inferred type will be `Nullable` only if the column contains `NULL` in a sample that is parsed during schema inference or the file metadata contains information about column nullability.
Enabled by default.
@ -1412,15 +1412,13 @@ DESC format(JSONEachRow, $$
└─────────┴─────────────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
```
```sql
SET schema_inference_make_columns_nullable = 0;
SET input_format_null_as_default = 0;
SET schema_inference_make_columns_nullable = 'auto';
DESC format(JSONEachRow, $$
{"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]}
{"id" : 2, "age" : 19, "name" : "Alan", "status" : "married", "hobbies" : ["tennis", "art"]}
$$)
```
```response
┌─name────┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
│ id │ Int64 │ │ │ │ │ │
│ age │ Int64 │ │ │ │ │ │
@ -1432,7 +1430,6 @@ DESC format(JSONEachRow, $$
```sql
SET schema_inference_make_columns_nullable = 0;
SET input_format_null_as_default = 1;
DESC format(JSONEachRow, $$
{"id" : 1, "age" : 25, "name" : "Josh", "status" : null, "hobbies" : ["football", "cooking"]}
{"id" : 2, "age" : 19, "name" : "Alan", "status" : "married", "hobbies" : ["tennis", "art"]}

View File

@ -233,6 +233,16 @@ Features:
- Useful tools: Zookeeper data exploration, query EXPLAIN, kill queries, etc.
- Visualization metric charts: queries and resource usage, number of merges/mutations, merge performance, query performance, etc.
### CKibana {#ckibana}
[CKibana](https://github.com/TongchengOpenSource/ckibana) is a lightweight service that allows you to effortlessly search, explore, and visualize ClickHouse data using the native Kibana UI.
Features:
- Translates chart requests from the native Kibana UI into ClickHouse query syntax.
- Supports advanced features such as sampling and caching to enhance query performance.
- Minimizes the learning cost for users after migrating from Elasticsearch to ClickHouse.
## Commercial {#commercial}
### DataGrip {#datagrip}

View File

@ -1463,26 +1463,29 @@ Examples:
## logger {#logger}
Logging settings.
The location and format of log messages.
Keys:
- `level` Logging level. Acceptable values: `trace`, `debug`, `information`, `warning`, `error`.
- `log` The log file. Contains all the entries according to `level`.
- `errorlog` Error log file.
- `size` Size of the file. Applies to `log` and `errorlog`. Once the file reaches `size`, ClickHouse archives and renames it, and creates a new log file in its place.
- `count` The number of archived log files that ClickHouse stores.
- `console` Send `log` and `errorlog` to the console instead of file. To enable, set to `1` or `true`.
- `console_log_level` Logging level for console. Default to `level`.
- `use_syslog` - Log to syslog as well.
- `syslog_level` - Logging level for logging to syslog.
- `stream_compress` Compress `log` and `errorlog` with `lz4` stream compression. To enable, set to `1` or `true`.
- `formatting` Specify log format to be printed in console log (currently only `json` supported).
- `level` Log level. Acceptable values: `none` (turn logging off), `fatal`, `critical`, `error`, `warning`, `notice`, `information`,
`debug`, `trace`, `test`
- `log` The path to the log file.
- `errorlog` The path to the error log file.
- `size` Rotation policy: Maximum size of the log files in bytes. Once the log file size exceeds this threshold, it is renamed and archived, and a new log file is created.
- `count` – Rotation policy: The maximum number of historical log files ClickHouse keeps.
- `stream_compress` Compress log messages using LZ4. Set to `1` or `true` to enable.
- `console` – Do not write log messages to log files; print them to the console instead. Set to `1` or `true` to enable. Default is `1` if ClickHouse does not run in daemon mode, `0` otherwise.
- `console_log_level` Log level for console output. Defaults to `level`.
- `formatting` – Log format for console output. Currently, only `json` is supported.
- `use_syslog` - Also forward log output to syslog.
- `syslog_level` - Log level for logging to syslog.
Both log and error log file names (only file names, not directories) support date and time format specifiers.
**Log format specifiers**
**Format specifiers**
Using the following format specifiers, you can define a pattern for the resulting file name. “Example” column shows possible results for `2023-07-06 18:32:07`.
File names in `log` and `errorLog` paths support the format specifiers below for the resulting file name (the directory part does not support them).
Column “Example” shows the output at `2023-07-06 18:32:07`.
| Specifier | Description | Example |
|-------------|---------------------------------------------------------------------------------------------------------------------|--------------------------|
@ -1537,18 +1540,37 @@ Using the following format specifiers, you can define a pattern for the resultin
</logger>
```
Writing to the console can be configured. Config example:
To print log messages only in the console:
``` xml
<logger>
<level>information</level>
<console>1</console>
<console>true</console>
</logger>
```
**Per-level Overrides**
The log level of individual loggers can be overridden. For example, to mute all messages from the loggers "Backup" and "RBAC":
```xml
<logger>
<levels>
<logger>
<name>Backup</name>
<level>none</level>
</logger>
<logger>
<name>RBAC</name>
<level>none</level>
</logger>
</levels>
</logger>
```
### syslog
Writing to the syslog is also supported. Config example:
To write log messages additionally to syslog:
``` xml
<logger>
@ -1562,14 +1584,12 @@ Writing to the syslog is also supported. Config example:
</logger>
```
Keys for syslog:
Keys for `<syslog>`:
- use_syslog — Required setting if you want to write to the syslog.
- address — The host\[:port\] of syslogd. If omitted, the local daemon is used.
- hostname — Optional. The name of the host that logs are sent from.
- facility — [The syslog facility keyword](https://en.wikipedia.org/wiki/Syslog#Facility) in uppercase letters with the “LOG_” prefix: (`LOG_USER`, `LOG_DAEMON`, `LOG_LOCAL3`, and so on).
Default value: `LOG_USER` if `address` is specified, `LOG_DAEMON` otherwise.
- format Message format. Possible values: `bsd` and `syslog.`
- `address` — The address of syslog in format `host\[:port\]`. If omitted, the local daemon is used.
- `hostname` — The name of the host from which logs are sent. Optional.
- `facility` — The syslog [facility keyword](https://en.wikipedia.org/wiki/Syslog#Facility). Must be specified uppercase with a “LOG_” prefix, e.g. `LOG_USER`, `LOG_DAEMON`, `LOG_LOCAL3`, etc. Default value: `LOG_USER` if `address` is specified, `LOG_DAEMON` otherwise.
- `format` Log message format. Possible values: `bsd` and `syslog.`
### Log formats
@ -1588,6 +1608,7 @@ You can specify the log format that will be outputted in the console log. Curren
"source_line": "192"
}
```
To enable JSON logging support, use the following snippet:
```xml

View File

@ -171,8 +171,8 @@ If the `schema_inference_hints` is not formated properly, or if there is a typo
## schema_inference_make_columns_nullable {#schema_inference_make_columns_nullable}
Controls making inferred types `Nullable` in schema inference.
If the setting is enabled, all inferred types will be `Nullable`; if disabled, the inferred type will never be `Nullable`; if set to `auto`, the inferred type will be `Nullable` only if the column contains `NULL` in a sample that is parsed during schema inference or the file metadata contains information about column nullability.
Default value: `true`.
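A quick way to observe the effect is `DESC` over the `format` table function (a minimal sketch; the sample data and column names are arbitrary):
```sql
-- With the setting enabled, every inferred column becomes Nullable:
DESC format(JSONEachRow, '{"id" : 1, "name" : "Alice"}')
SETTINGS schema_inference_make_columns_nullable = 1;
-- id    Nullable(Int64)
-- name  Nullable(String)

-- With the setting disabled, inferred columns are never Nullable:
DESC format(JSONEachRow, '{"id" : 1, "name" : "Alice"}')
SETTINGS schema_inference_make_columns_nullable = 0;
-- id    Int64
-- name  String
```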

View File

@ -3226,7 +3226,7 @@ Default value: `0`.
## lightweight_deletes_sync {#lightweight_deletes_sync}
The same as [`mutations_sync`](#mutations_sync), but controls only execution of lightweight deletes.
Possible values:
@ -5633,7 +5633,6 @@ Default value: `1GiB`.
## use_json_alias_for_old_object_type
When enabled, `JSON` data type alias will be used to create an old [Object('json')](../../sql-reference/data-types/json.md) type instead of the new [JSON](../../sql-reference/data-types/newjson.md) type.
This setting requires server restart to take effect when changed.
Default value: `false`.

View File

@ -499,7 +499,7 @@ Required parameters:
- `type``encrypted`. Otherwise the encrypted disk is not created.
- `disk` — Type of disk for data storage.
- `key` — The key for encryption and decryption. Type: [UInt64](/docs/en/sql-reference/data-types/int-uint.md). You can use the `key_hex` parameter to encode the key in hexadecimal form.
You can specify multiple keys using the `id` attribute (see example below).
Optional parameters:

View File

@ -0,0 +1,41 @@
---
slug: /en/operations/system-tables/projections
---
# projections
Contains information about existing projections in all tables.
Columns:
- `database` ([String](../../sql-reference/data-types/string.md)) — Database name.
- `table` ([String](../../sql-reference/data-types/string.md)) — Table name.
- `name` ([String](../../sql-reference/data-types/string.md)) — Projection name.
- `type` ([Enum](../../sql-reference/data-types/enum.md)) — Projection type ('Normal' = 0, 'Aggregate' = 1).
- `sorting_key` ([Array(String)](../../sql-reference/data-types/array.md)) — Projection sorting key.
- `query` ([String](../../sql-reference/data-types/string.md)) — Projection query.
**Example**
```sql
SELECT * FROM system.projections LIMIT 2 FORMAT Vertical;
```
```text
Row 1:
──────
database: default
table: landing
name: improved_sorting_key
type: Normal
sorting_key: ['user_id','date']
query: SELECT * ORDER BY user_id, date
Row 2:
──────
database: default
table: landing
name: agg_no_key
type: Aggregate
sorting_key: []
query: SELECT count()
```

View File

@ -47,6 +47,8 @@ keeper foo bar
- `ls '[path]'` -- Lists the nodes for the given path (default: cwd)
- `cd '[path]'` -- Changes the working path (default `.`)
- `cp '<src>' '<dest>'` -- Copies 'src' node to 'dest' path
- `mv '<src>' '<dest>'` -- Moves 'src' node to the 'dest' path
- `exists '<path>'` -- Returns `1` if node exists, `0` otherwise
- `set '<path>' <value> [version]` -- Updates the node's value. Only updates if version matches (default: -1)
- `create '<path>' <value> [mode]` -- Creates new node with the set value

View File

@ -104,7 +104,7 @@ Events that occur at the same second may lay in the sequence in an undefined ord
**Parameters**
- `pattern` — Pattern string. See [Pattern syntax](#pattern-syntax).
**Returned values**
@ -113,8 +113,7 @@ Events that occur at the same second may lay in the sequence in an undefined ord
Type: `UInt8`.
#### Pattern syntax
- `(?N)` — Matches the condition argument at position `N`. Conditions are numbered in the `[1, 32]` range. For example, `(?1)` matches the argument passed to the `cond1` parameter.
@ -196,7 +195,7 @@ sequenceCount(pattern)(timestamp, cond1, cond2, ...)
**Parameters**
- `pattern` — Pattern string. See [Pattern syntax](#pattern-syntax).
**Returned values**

View File

@ -0,0 +1,44 @@
---
slug: /en/sql-reference/aggregate-functions/reference/distinctdynamictypes
sidebar_position: 215
---
# distinctDynamicTypes
Calculates the list of distinct data types stored in a [Dynamic](../../data-types/dynamic.md) column.
**Syntax**
```sql
distinctDynamicTypes(dynamic)
```
**Arguments**
- `dynamic` — [Dynamic](../../data-types/dynamic.md) column.
**Returned Value**
- The sorted list of data type names [Array(String)](../../data-types/array.md).
**Example**
Query:
```sql
DROP TABLE IF EXISTS test_dynamic;
CREATE TABLE test_dynamic(d Dynamic) ENGINE = Memory;
INSERT INTO test_dynamic VALUES (42), (NULL), ('Hello'), ([1, 2, 3]), ('2020-01-01'), (map(1, 2)), (43), ([4, 5]), (NULL), ('World'), (map(3, 4))
```
```sql
SELECT distinctDynamicTypes(d) FROM test_dynamic;
```
Result:
```reference
┌─distinctDynamicTypes(d)──────────────────────────────────────┐
│ ['Array(Int64)','Date','Int64','Map(UInt8, UInt8)','String'] │
└──────────────────────────────────────────────────────────────┘
```

View File

@ -0,0 +1,125 @@
---
slug: /en/sql-reference/aggregate-functions/reference/distinctjsonpaths
sidebar_position: 216
---
# distinctJSONPaths
Calculates the list of distinct paths stored in a [JSON](../../data-types/newjson.md) column.
**Syntax**
```sql
distinctJSONPaths(json)
```
**Arguments**
- `json` — [JSON](../../data-types/newjson.md) column.
**Returned Value**
- The sorted list of paths [Array(String)](../../data-types/array.md).
**Example**
Query:
```sql
DROP TABLE IF EXISTS test_json;
CREATE TABLE test_json(json JSON) ENGINE = Memory;
INSERT INTO test_json VALUES ('{"a" : 42, "b" : "Hello"}'), ('{"b" : [1, 2, 3], "c" : {"d" : {"e" : "2020-01-01"}}}'), ('{"a" : 43, "c" : {"d" : {"f" : [{"g" : 42}]}}}')
```
```sql
SELECT distinctJSONPaths(json) FROM test_json;
```
Result:
```reference
┌─distinctJSONPaths(json)───┐
│ ['a','b','c.d.e','c.d.f'] │
└───────────────────────────┘
```
# distinctJSONPathsAndTypes
Calculates the list of distinct paths and their types stored in a [JSON](../../data-types/newjson.md) column.
**Syntax**
```sql
distinctJSONPathsAndTypes(json)
```
**Arguments**
- `json` — [JSON](../../data-types/newjson.md) column.
**Returned Value**
- The sorted map of paths and types [Map(String, Array(String))](../../data-types/map.md).
**Example**
Query:
```sql
DROP TABLE IF EXISTS test_json;
CREATE TABLE test_json(json JSON) ENGINE = Memory;
INSERT INTO test_json VALUES ('{"a" : 42, "b" : "Hello"}'), ('{"b" : [1, 2, 3], "c" : {"d" : {"e" : "2020-01-01"}}}'), ('{"a" : 43, "c" : {"d" : {"f" : [{"g" : 42}]}}}')
```
```sql
SELECT distinctJSONPathsAndTypes(json) FROM test_json;
```
Result:
```reference
┌─distinctJSONPathsAndTypes(json)───────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ {'a':['Int64'],'b':['Array(Nullable(Int64))','String'],'c.d.e':['Date'],'c.d.f':['Array(JSON(max_dynamic_types=16, max_dynamic_paths=256))']} │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
```
**Note**
If the JSON declaration contains paths with specified types, these paths will always be included in the result of the `distinctJSONPaths`/`distinctJSONPathsAndTypes` functions, even if the input data does not have values for these paths.
```sql
DROP TABLE IF EXISTS test_json;
CREATE TABLE test_json(json JSON(a UInt32)) ENGINE = Memory;
INSERT INTO test_json VALUES ('{"b" : "Hello"}'), ('{"b" : "World", "c" : [1, 2, 3]}');
```
```sql
SELECT json FROM test_json;
```
```text
┌─json──────────────────────────────────┐
│ {"a":0,"b":"Hello"} │
│ {"a":0,"b":"World","c":["1","2","3"]} │
└───────────────────────────────────────┘
```
```sql
SELECT distinctJSONPaths(json) FROM test_json;
```
```text
┌─distinctJSONPaths(json)─┐
│ ['a','b','c'] │
└─────────────────────────┘
```
```sql
SELECT distinctJSONPathsAndTypes(json) FROM test_json;
```
```text
┌─distinctJSONPathsAndTypes(json)────────────────────────────────┐
│ {'a':['UInt32'],'b':['String'],'c':['Array(Nullable(Int64))']} │
└────────────────────────────────────────────────────────────────┘
```

View File

@ -70,7 +70,7 @@ SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON as json
└────────────────────────────────────────────────┘
```
CAST from `JSON`, named `Tuple`, `Map` and `Object('json')` to `JSON` type will be supported later.
## Reading JSON paths as subcolumns
@ -453,8 +453,8 @@ As we can see, after inserting paths `e` and `f.g` the limit was reached and we
### During merges of data parts in MergeTree table engines
During a merge of several data parts in a MergeTree table, the `JSON` column in the resulting data part can reach the limit of dynamic paths and won't be able to store all paths from source parts as subcolumns.
In this case ClickHouse chooses what paths will remain as subcolumns after the merge and what paths will be stored in the shared data structure. In most cases ClickHouse tries to keep paths that contain
the largest number of non-null values and move the rarest paths to the shared data structure, but it depends on the implementation.
Let's see an example of such a merge. First, let's create a table with a `JSON` column, set the limit of dynamic paths to `3` and insert values with `5` different paths:
@ -505,7 +505,130 @@ As we can see, ClickHouse kept the most frequent paths `a`, `b` and `c` and move
## Introspection functions
There are several functions that can help to inspect the content of the JSON column: [JSONAllPaths](../functions/json-functions.md#jsonallpaths), [JSONAllPathsWithTypes](../functions/json-functions.md#jsonallpathswithtypes), [JSONDynamicPaths](../functions/json-functions.md#jsondynamicpaths), [JSONDynamicPathsWithTypes](../functions/json-functions.md#jsondynamicpathswithtypes), [JSONSharedDataPaths](../functions/json-functions.md#jsonshareddatapaths), [JSONSharedDataPathsWithTypes](../functions/json-functions.md#jsonshareddatapathswithtypes), [distinctDynamicTypes](../aggregate-functions/reference/distinctdynamictypes.md), [distinctJSONPaths and distinctJSONPathsAndTypes](../aggregate-functions/reference/distinctjsonpaths.md).
**Examples**
Let's investigate the content of the [GH Archive](https://www.gharchive.org/) dataset for the date `2020-01-01`:
```sql
SELECT arrayJoin(distinctJSONPaths(json)) FROM s3('s3://clickhouse-public-datasets/gharchive/original/2020-01-01-*.json.gz', JSONAsObject)
```
```text
┌─arrayJoin(distinctJSONPaths(json))─────────────────────────┐
│ actor.avatar_url │
│ actor.display_login │
│ actor.gravatar_id │
│ actor.id │
│ actor.login │
│ actor.url │
│ created_at │
│ id │
│ org.avatar_url │
│ org.gravatar_id │
│ org.id │
│ org.login │
│ org.url │
│ payload.action │
│ payload.before │
│ payload.comment._links.html.href │
│ payload.comment._links.pull_request.href │
│ payload.comment._links.self.href │
│ payload.comment.author_association │
│ payload.comment.body │
│ payload.comment.commit_id │
│ payload.comment.created_at │
│ payload.comment.diff_hunk │
│ payload.comment.html_url │
│ payload.comment.id │
│ payload.comment.in_reply_to_id │
│ payload.comment.issue_url │
│ payload.comment.line │
│ payload.comment.node_id │
│ payload.comment.original_commit_id │
│ payload.comment.original_position │
│ payload.comment.path │
│ payload.comment.position │
│ payload.comment.pull_request_review_id │
...
│ payload.release.node_id │
│ payload.release.prerelease │
│ payload.release.published_at │
│ payload.release.tag_name │
│ payload.release.tarball_url │
│ payload.release.target_commitish │
│ payload.release.upload_url │
│ payload.release.url │
│ payload.release.zipball_url │
│ payload.size │
│ public │
│ repo.id │
│ repo.name │
│ repo.url │
│ type │
└─arrayJoin(distinctJSONPaths(json))─────────────────────────┘
```
```sql
SELECT arrayJoin(distinctJSONPathsAndTypes(json)) FROM s3('s3://clickhouse-public-datasets/gharchive/original/2020-01-01-*.json.gz', JSONAsObject) SETTINGS date_time_input_format='best_effort'
```
```text
┌─arrayJoin(distinctJSONPathsAndTypes(json))──────────────────┐
│ ('actor.avatar_url',['String']) │
│ ('actor.display_login',['String']) │
│ ('actor.gravatar_id',['String']) │
│ ('actor.id',['Int64']) │
│ ('actor.login',['String']) │
│ ('actor.url',['String']) │
│ ('created_at',['DateTime']) │
│ ('id',['String']) │
│ ('org.avatar_url',['String']) │
│ ('org.gravatar_id',['String']) │
│ ('org.id',['Int64']) │
│ ('org.login',['String']) │
│ ('org.url',['String']) │
│ ('payload.action',['String']) │
│ ('payload.before',['String']) │
│ ('payload.comment._links.html.href',['String']) │
│ ('payload.comment._links.pull_request.href',['String']) │
│ ('payload.comment._links.self.href',['String']) │
│ ('payload.comment.author_association',['String']) │
│ ('payload.comment.body',['String']) │
│ ('payload.comment.commit_id',['String']) │
│ ('payload.comment.created_at',['DateTime']) │
│ ('payload.comment.diff_hunk',['String']) │
│ ('payload.comment.html_url',['String']) │
│ ('payload.comment.id',['Int64']) │
│ ('payload.comment.in_reply_to_id',['Int64']) │
│ ('payload.comment.issue_url',['String']) │
│ ('payload.comment.line',['Int64']) │
│ ('payload.comment.node_id',['String']) │
│ ('payload.comment.original_commit_id',['String']) │
│ ('payload.comment.original_position',['Int64']) │
│ ('payload.comment.path',['String']) │
│ ('payload.comment.position',['Int64']) │
│ ('payload.comment.pull_request_review_id',['Int64']) │
...
│ ('payload.release.node_id',['String']) │
│ ('payload.release.prerelease',['Bool']) │
│ ('payload.release.published_at',['DateTime']) │
│ ('payload.release.tag_name',['String']) │
│ ('payload.release.tarball_url',['String']) │
│ ('payload.release.target_commitish',['String']) │
│ ('payload.release.upload_url',['String']) │
│ ('payload.release.url',['String']) │
│ ('payload.release.zipball_url',['String']) │
│ ('payload.size',['Int64']) │
│ ('public',['Bool']) │
│ ('repo.id',['Int64']) │
│ ('repo.name',['String']) │
│ ('repo.url',['String']) │
│ ('type',['String']) │
└─arrayJoin(distinctJSONPathsAndTypes(json))──────────────────┘
```
## Tips for better usage of the JSON type

View File

@ -2035,6 +2035,7 @@ Query:
SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1]);
```
Result:
``` text
@ -2043,6 +2044,43 @@ Result:
└──────────────────────────────────────┘
```
## arrayZipUnaligned
Combines multiple arrays into a single array, allowing for unaligned arrays. The resulting array contains the corresponding elements of the source arrays grouped into tuples in the listed order of arguments.
**Syntax**
``` sql
arrayZipUnaligned(arr1, arr2, ..., arrN)
```
**Arguments**
- `arrN` — [Array](../data-types/array.md).
The function can take any number of arrays of different types.
**Returned value**
- Array with elements from the source arrays grouped into [tuples](../data-types/tuple.md). Data types in the tuple are the same as types of the input arrays and in the same order as arrays are passed. [Array](../data-types/array.md). If the arrays have different sizes, the shorter arrays will be padded with `null` values.
**Example**
Query:
``` sql
SELECT arrayZipUnaligned(['a'], [1, 2, 3]);
```
Result:
``` text
┌─arrayZipUnaligned(['a'], [1, 2, 3])─┐
│ [('a',1),(NULL,2),(NULL,3)] │
└─────────────────────────────────────┘
```
## arrayAUC
Calculates the AUC (Area Under the Curve, a concept in machine learning; see more details here: <https://en.wikipedia.org/wiki/Receiver_operating_characteristic#Area_under_the_curve>).

View File

@ -49,7 +49,7 @@ Result:
## multiIf
Allows writing the [CASE](../../sql-reference/operators/index.md#conditional-expression) operator more compactly in the query.
**Syntax**

View File

@ -1617,45 +1617,348 @@ The calculation is performed relative to specific points in time:
If unit `WEEK` was specified, `toStartOfInterval` assumes that weeks start on Monday. Note that this behavior is different from that of function `toStartOfWeek` in which weeks start by default on Sunday.
**Syntax**
```sql
toStartOfInterval(value, INTERVAL x unit[, time_zone])
toStartOfInterval(value, INTERVAL x unit[, origin[, time_zone]])
```
The second overload emulates TimescaleDB's `time_bucket()` function and PostgreSQL's `date_bin()` function, e.g.
``` sql
SELECT toStartOfInterval(toDateTime('2023-01-01 14:45:00'), INTERVAL 1 MINUTE, toDateTime('2023-01-01 14:35:30'));
```
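In the query above, the 1-minute buckets are aligned to the origin `14:35:30` rather than to the start of the calendar minute. A short worked sketch of the expected result (reasoning in comments, not captured server output):
```sql
-- Buckets aligned to the origin: ..., 14:43:30, 14:44:30, 14:45:30, ...
-- 14:45:00 falls into the bucket that starts at 14:44:30, so the query above returns:
--   2023-01-01 14:44:30
```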
**See Also**
- [date_trunc](#date_trunc)
## toTime
Converts a date with time to a certain fixed date, while preserving the time.
**Syntax**
```sql
toTime(date[,timezone])
```
**Arguments**
- `date` — Date to convert to a time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
- `timezone` (optional) — Timezone for the returned value. [String](../data-types/string.md).
**Returned value**
- DateTime with date equated to `1970-01-02` while preserving the time. [DateTime](../data-types/datetime.md).
:::note
If the `date` input argument contained sub-second components,
they will be dropped in the returned `DateTime` value with second-accuracy.
:::
**Example**
Query:
```sql
SELECT toTime(toDateTime64('1970-12-10 01:20:30.3000',3)) AS result, toTypeName(result);
```
Result:
```response
┌──────────────result─┬─toTypeName(result)─┐
│ 1970-01-02 01:20:30 │ DateTime │
└─────────────────────┴────────────────────┘
```
## toRelativeYearNum
Converts a date, or date with time, to the number of years elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeYearNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of years from a fixed reference point in the past. [UInt16](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeYearNum(toDate('2002-12-08')) AS y1,
toRelativeYearNum(toDate('2010-10-26')) AS y2
```
Result:
```response
┌───y1─┬───y2─┐
│ 2002 │ 2010 │
└──────┴──────┘
```
## toRelativeQuarterNum
Converts a date, or date with time, to the number of quarters elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeQuarterNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of quarters from a fixed reference point in the past. [UInt32](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeQuarterNum(toDate('1993-11-25')) AS q1,
toRelativeQuarterNum(toDate('2005-01-05')) AS q2
```
Result:
```response
┌───q1─┬───q2─┐
│ 7975 │ 8020 │
└──────┴──────┘
```
## toRelativeMonthNum
Converts a date, or date with time, to the number of months elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeMonthNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of months from a fixed reference point in the past. [UInt32](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeMonthNum(toDate('2001-04-25')) AS m1,
toRelativeMonthNum(toDate('2009-07-08')) AS m2
```
Result:
```response
┌────m1─┬────m2─┐
│ 24016 │ 24115 │
└───────┴───────┘
```
## toRelativeWeekNum
Converts a date, or date with time, to the number of weeks elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeWeekNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of weeks from a fixed reference point in the past. [UInt32](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeWeekNum(toDate('2000-02-29')) AS w1,
toRelativeWeekNum(toDate('2001-01-12')) AS w2
```
Result:
```response
┌───w1─┬───w2─┐
│ 1574 │ 1619 │
└──────┴──────┘
```
## toRelativeDayNum
Converts a date, or date with time, to the number of days elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeDayNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of days from a fixed reference point in the past. [UInt32](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeDayNum(toDate('1993-10-05')) AS d1,
toRelativeDayNum(toDate('2000-09-20')) AS d2
```
Result:
```response
┌───d1─┬────d2─┐
│ 8678 │ 11220 │
└──────┴───────┘
```
## toRelativeHourNum
Converts a date, or date with time, to the number of hours elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeHourNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of hours from a fixed reference point in the past. [UInt32](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeHourNum(toDateTime('1993-10-05 05:20:36')) AS h1,
toRelativeHourNum(toDateTime('2000-09-20 14:11:29')) AS h2
```
Result:
```response
┌─────h1─┬─────h2─┐
│ 208276 │ 269292 │
└────────┴────────┘
```
## toRelativeMinuteNum
Converts a date, or date with time, to the number of minutes elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeMinuteNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of minutes from a fixed reference point in the past. [UInt32](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeMinuteNum(toDateTime('1993-10-05 05:20:36')) AS m1,
toRelativeMinuteNum(toDateTime('2000-09-20 14:11:29')) AS m2
```
Result:
```response
┌───────m1─┬───────m2─┐
│ 12496580 │ 16157531 │
└──────────┴──────────┘
```
## toRelativeSecondNum
Converts a date, or date with time, to the number of seconds elapsed since a certain fixed point in the past.
**Syntax**
```sql
toRelativeSecondNum(date)
```
**Arguments**
- `date` — Date or date with time. [Date](../data-types/date.md)/[DateTime](../data-types/datetime.md)/[DateTime64](../data-types/datetime64.md).
**Returned value**
- The number of seconds from a fixed reference point in the past. [UInt32](../data-types/int-uint.md).
**Example**
Query:
```sql
SELECT
toRelativeSecondNum(toDateTime('1993-10-05 05:20:36')) AS s1,
toRelativeSecondNum(toDateTime('2000-09-20 14:11:29')) AS s2
```
Result:
```response
┌────────s1─┬────────s2─┐
│ 749794836 │ 969451889 │
└───────────┴───────────┘
```
## toISOYear
@ -2019,7 +2322,7 @@ Alias: `dateTrunc`.
`unit` argument is case-insensitive.
- `value` — Date and time. [Date](../data-types/date.md), [Date32](../data-types/date32.md), [DateTime](../data-types/datetime.md) or [DateTime64](../data-types/datetime64.md).
- `timezone` — [Timezone name](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone) for the returned value (optional). If not specified, the function uses the timezone of the `value` parameter. [String](../data-types/string.md).
**Returned value**
@ -3884,19 +4187,29 @@ Result:
└───────────────────────────────────────────────────────────────────────┘
```
## timeSlots
For a time interval starting at StartTime and continuing for Duration seconds, it returns an array of moments in time, consisting of points from this interval rounded down to the Size in seconds. Size is an optional parameter set to 1800 (30 minutes) by default.
This is necessary, for example, when searching for pageviews in the corresponding session.
Accepts DateTime and DateTime64 as StartTime argument. For DateTime, Duration and Size arguments must be `UInt32`. For DateTime64 they must be `Decimal64`.
Returns an array of DateTime/DateTime64 (the return type matches the type of StartTime). For DateTime64, the return value's scale can differ from the scale of StartTime: the highest scale among all given arguments is taken.
**Syntax**
```sql
timeSlots(StartTime, Duration[, Size])
```
**Example**
```sql
SELECT timeSlots(toDateTime('2012-01-01 12:20:00'), toUInt32(600));
SELECT timeSlots(toDateTime('1980-12-12 21:01:02', 'UTC'), toUInt32(600), 299);
SELECT timeSlots(toDateTime64('1980-12-12 21:01:02.1234', 4, 'UTC'), toDecimal64(600.1, 1), toDecimal64(299, 0));
```
Result:
``` text
┌─timeSlots(toDateTime('2012-01-01 12:20:00'), toUInt32(600))─┐
│ ['2012-01-01 12:00:00','2012-01-01 12:30:00'] │
@ -4287,7 +4600,7 @@ Result:
## fromModifiedJulianDay
Converts a [Modified Julian Day](https://en.wikipedia.org/wiki/Julian_day#Variants) number to a [Proleptic Gregorian calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar) date in text form `YYYY-MM-DD`. This function supports day number from `-678941` to `2973483` (which represent 0000-01-01 and 9999-12-31 respectively). It raises an exception if the day number is outside of the supported range.
**Syntax**

View File

@ -6,7 +6,7 @@ title: "Functions for Working with Geohash"
## Geohash
[Geohash](https://en.wikipedia.org/wiki/Geohash) is a geocode system which subdivides the Earth's surface into grid-shaped buckets and encodes each cell into a short string of letters and digits. It is a hierarchical data structure, so the longer the geohash string is, the more precise the geographic location will be.
If you need to manually convert geographic coordinates to geohash strings, you can use [geohash.org](http://geohash.org/).
@ -14,26 +14,37 @@ If you need to manually convert geographic coordinates to geohash strings, you c
Encodes latitude and longitude as a [geohash](#geohash)-string.
**Syntax**
``` sql
geohashEncode(longitude, latitude, [precision])
```
**Input values**
- `longitude` — Longitude part of the coordinate you want to encode. Floating in range `[-180°, 180°]`. [Float](../../data-types/float.md).
- `latitude` — Latitude part of the coordinate you want to encode. Floating in range `[-90°, 90°]`. [Float](../../data-types/float.md).
- `precision` (optional) — Length of the resulting encoded string. Defaults to `12`. Integer in the range `[1, 12]`. [Int8](../../data-types/int-uint.md).
:::note
- All coordinate parameters must be of the same type: either `Float32` or `Float64`.
- For the `precision` parameter, any value less than `1` or greater than `12` is silently converted to `12`.
:::
**Returned values**
- Alphanumeric string of the encoded coordinate (modified version of the base32-encoding alphabet is used). [String](../../data-types/string.md).
**Example**
Query:
``` sql
SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res;
```
Result:
``` text
┌─res──────────┐
│ ezs42d000000 │
@ -44,13 +55,19 @@ SELECT geohashEncode(-5.60302734375, 42.593994140625, 0) AS res;
Decodes any [geohash](#geohash)-encoded string into longitude and latitude.
**Syntax**
```sql
geohashDecode(hash_str)
```
**Input values**
- `hash_str` — Geohash-encoded string.
**Returned values**
- Tuple `(longitude, latitude)` of `Float64` values of longitude and latitude. [Tuple](../../data-types/tuple.md)([Float64](../../data-types/float.md)).
**Example**

View File

@ -688,6 +688,40 @@ SELECT kostikConsistentHash(16045690984833335023, 2);
└───────────────────────────────────────────────┘
```
## ripeMD160
Produces a [RIPEMD-160](https://en.wikipedia.org/wiki/RIPEMD) hash value.
**Syntax**
```sql
ripeMD160(input)
```
**Parameters**
- `input`: Input string. [String](../data-types/string.md).
**Returned value**
- A [UInt256](../data-types/int-uint.md) hash value where the 160-bit RIPEMD-160 hash is stored in the first 20 bytes. The remaining 12 bytes are zero-padded.
**Example**
Use the [hex](../functions/encoding-functions.md/#hex) function to represent the result as a hex-encoded string.
Query:
```sql
SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog'));
```
```response
┌─hex(ripeMD160('The quick brown fox jumps over the lazy dog'))─┐
│ 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B │
└───────────────────────────────────────────────────────────────┘
```
## murmurHash2_32, murmurHash2_64
Produces a [MurmurHash2](https://github.com/aappleby/smhasher) hash value.

View File

@ -20,10 +20,10 @@ overlay(s, replace, offset[, length])
**Parameters**
- `s`: A string type [String](../data-types/string.md).
- `replace`: A string type [String](../data-types/string.md).
- `offset`: An integer type [Int](../data-types/int-uint.md) (1-based). If `offset` is negative, it is counted from the end of the string `s`.
- `length`: Optional. An integer type [Int](../data-types/int-uint.md). `length` specifies the length of the snippet within the input string `s` to be replaced. If `length` is not specified, the number of bytes removed from `s` equals the length of `replace`; otherwise `length` bytes are removed.
**Returned value**
@ -32,22 +32,35 @@ overlay(s, replace, offset[, length])
**Example**
```sql
SELECT overlay('My father is from Mexico.', 'mother', 4) AS res;
```
Result:
```text
┌─res──────────────────────┐
│ My mother is from Mexico.│
└──────────────────────────┘
```
```sql
SELECT overlay('My father is from Mexico.', 'dad', 4, 6) AS res;
```
Result:
```text
┌─res───────────────────┐
│ My dad is from Mexico.│
└───────────────────────┘
```
## overlayUTF8
Replaces part of the string `s` with another string `replace`, starting at the 1-based index `offset`.
Assumes that the string contains valid UTF-8 encoded text.
If this assumption is violated, no exception is thrown and the result is undefined.
**Syntax**
@ -59,8 +72,8 @@ overlayUTF8(s, replace, offset[, length])
- `s`: A string type [String](../data-types/string.md).
- `replace`: A string type [String](../data-types/string.md).
- `offset`: An integer type [Int](../data-types/int-uint.md) (1-based). If `offset` is negative, it is counted from the end of the input string `s`.
- `length`: Optional. An integer type [Int](../data-types/int-uint.md). `length` specifies the length of the snippet within the input string `s` to be replaced. If `length` is not specified, the number of characters removed from `s` equals the length of `replace`; otherwise `length` characters are removed.
**Returned value**
@ -69,15 +82,15 @@ overlayUTF8(s, replace, offset[, length])
**Example**
```sql
SELECT overlayUTF8('Mein Vater ist aus Österreich.', 'der Türkei', 20) AS res;
```
Result:
```text
┌─res───────────────────────────
Mein Vater ist aus der Türkei.
└───────────────────────────────
```
## replaceOne

View File

@ -49,6 +49,55 @@ SETTINGS cast_keep_nullable = 1
└──────────────────┴─────────────────────┴──────────────────┘
```
## toBool
Converts an input value to a value of type [`Bool`](../data-types/boolean.md). Throws an exception in case of an error.
**Syntax**
```sql
toBool(expr)
```
**Arguments**
- `expr` — Expression returning a number or a string. [Expression](../syntax.md/#syntax-expressions).
Supported arguments:
- Values of type (U)Int8/16/32/64/128/256.
- Values of type Float32/64.
- Strings `true` or `false` (case-insensitive).
**Returned value**
- Returns `true` or `false` based on evaluation of the argument. [Bool](../data-types/boolean.md).
**Example**
Query:
```sql
SELECT
toBool(toUInt8(1)),
toBool(toInt8(-1)),
toBool(toFloat32(1.01)),
toBool('true'),
toBool('false'),
toBool('FALSE')
FORMAT Vertical
```
Result:
```response
toBool(toUInt8(1)): true
toBool(toInt8(-1)): true
toBool(toFloat32(1.01)): true
toBool('true'): true
toBool('false'): false
toBool('FALSE'): false
```
## toInt8
Converts an input value to a value of type [`Int8`](../data-types/int-uint.md). Throws an exception in case of an error.
@ -3857,7 +3906,7 @@ Result:
## toDateTime64
Converts an input value to a value of type [DateTime64](../data-types/datetime64.md).
**Syntax**
@ -3869,7 +3918,7 @@ toDateTime64(expr, scale, [timezone])
- `expr` — The value. [String](../data-types/string.md), [UInt32](../data-types/int-uint.md), [Float](../data-types/float.md) or [DateTime](../data-types/datetime.md).
- `scale` - Tick size (precision): 10<sup>-precision</sup> seconds. Valid range: [ 0 : 9 ].
- `timezone` (optional) - Time zone of the specified datetime64 object.
**Returned value**
@ -3928,10 +3977,137 @@ SELECT toDateTime64('2019-01-01 00:00:00', 3, 'Asia/Istanbul') AS value, toTypeN
## toDateTime64OrZero
Like [toDateTime64](#todatetime64), this function converts an input value to a value of type [DateTime64](../data-types/datetime64.md) but returns the min value of [DateTime64](../data-types/datetime64.md) if an invalid argument is received.
**Syntax**
``` sql
toDateTime64OrZero(expr, scale, [timezone])
```
**Arguments**
- `expr` — The value. [String](../data-types/string.md), [UInt32](../data-types/int-uint.md), [Float](../data-types/float.md) or [DateTime](../data-types/datetime.md).
- `scale` - Tick size (precision): 10<sup>-precision</sup> seconds. Valid range: [ 0 : 9 ].
- `timezone` (optional) - Time zone of the specified DateTime64 object.
**Returned value**
- A calendar date and time of day, with sub-second precision, otherwise the minimum value of `DateTime64`: `1970-01-01 01:00:00.000`. [DateTime64](../data-types/datetime64.md).
**Example**
Query:
```sql
SELECT toDateTime64OrZero('2008-10-12 00:00:00 00:30:30', 3) AS invalid_arg
```
Result:
```response
┌─────────────invalid_arg─┐
│ 1970-01-01 01:00:00.000 │
└─────────────────────────┘
```
**See also**
- [toDateTime64](#todatetime64).
- [toDateTime64OrNull](#todatetime64ornull).
- [toDateTime64OrDefault](#todatetime64ordefault).
## toDateTime64OrNull
Like [toDateTime64](#todatetime64), this function converts an input value to a value of type [DateTime64](../data-types/datetime64.md) but returns `NULL` if an invalid argument is received.
**Syntax**
``` sql
toDateTime64OrNull(expr, scale, [timezone])
```
**Arguments**
- `expr` — The value. [String](../data-types/string.md), [UInt32](../data-types/int-uint.md), [Float](../data-types/float.md) or [DateTime](../data-types/datetime.md).
- `scale` - Tick size (precision): 10<sup>-precision</sup> seconds. Valid range: [ 0 : 9 ].
- `timezone` (optional) - Time zone of the specified DateTime64 object.
**Returned value**
- A calendar date and time of day, with sub-second precision, otherwise `NULL`. [DateTime64](../data-types/datetime64.md)/[NULL](../data-types/nullable.md).
**Example**
Query:
```sql
SELECT
toDateTime64OrNull('1976-10-18 00:00:00.30', 3) AS valid_arg,
toDateTime64OrNull('1976-10-18 00:00:00 30', 3) AS invalid_arg
```
Result:
```response
┌───────────────valid_arg─┬─invalid_arg─┐
│ 1976-10-18 00:00:00.300 │ ᴺᵁᴸᴸ │
└─────────────────────────┴─────────────┘
```
**See also**
- [toDateTime64](#todatetime64).
- [toDateTime64OrZero](#todatetime64orzero).
- [toDateTime64OrDefault](#todatetime64ordefault).
## toDateTime64OrDefault
Like [toDateTime64](#todatetime64), this function converts an input value to a value of type [DateTime64](../data-types/datetime64.md),
but returns either the minimum value of [DateTime64](../data-types/datetime64.md)
or the provided default if an invalid argument is received.
**Syntax**
``` sql
toDateTime64OrDefault(expr, scale, [timezone, default])
```
**Arguments**
- `expr` — The value. [String](../data-types/string.md), [UInt32](../data-types/int-uint.md), [Float](../data-types/float.md) or [DateTime](../data-types/datetime.md).
- `scale` - Tick size (precision): 10<sup>-precision</sup> seconds. Valid range: [ 0 : 9 ].
- `timezone` (optional) - Time zone of the specified DateTime64 object.
- `default` (optional) - Default value to return if an invalid argument is received. [DateTime64](../data-types/datetime64.md).
**Returned value**
- A calendar date and time of day, with sub-second precision, otherwise the minimum value of `DateTime64` or the `default` value if provided. [DateTime64](../data-types/datetime64.md).
**Example**
Query:
```sql
SELECT
toDateTime64OrDefault('1976-10-18 00:00:00 30', 3) AS invalid_arg,
toDateTime64OrDefault('1976-10-18 00:00:00 30', 3, 'UTC', toDateTime64('2001-01-01 00:00:00.00',3)) AS invalid_arg_with_default
```
Result:
```response
┌─────────────invalid_arg─┬─invalid_arg_with_default─┐
│ 1970-01-01 01:00:00.000 │ 2000-12-31 23:00:00.000 │
└─────────────────────────┴──────────────────────────┘
```
**See also**
- [toDateTime64](#todatetime64).
- [toDateTime64OrZero](#todatetime64orzero).
- [toDateTime64OrNull](#todatetime64ornull).
## toDecimal32
Converts an input value to a value of type [`Decimal(9, S)`](../data-types/decimal.md) with scale of `S`. Throws an exception in case of an error.

View File

@ -265,8 +265,6 @@ SELECT now() AS current_date_time, current_date_time + INTERVAL '4' day + INTERV
└─────────────────────┴────────────────────────────────────────────────────────────┘
```
You can work with dates without using `INTERVAL`, just by adding or subtracting seconds, minutes, and hours. For example, an interval of one day can be set by adding `60*60*24`.
:::note
The `INTERVAL` syntax or `addDays` function are always preferred. Simple addition or subtraction (syntax like `now() + ...`) doesn't consider time settings. For example, daylight saving time.
:::
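A minimal illustration of the difference, assuming a session in a timezone that observes DST (here `Europe/Amsterdam`, which springs forward on 2024-03-31):
```sql
SELECT
    toDateTime('2024-03-30 12:00:00', 'Europe/Amsterdam') + INTERVAL 1 DAY AS with_interval, -- 2024-03-31 12:00:00, wall clock preserved
    toDateTime('2024-03-30 12:00:00', 'Europe/Amsterdam') + 60*60*24       AS with_seconds;  -- 2024-03-31 13:00:00, shifted by the DST switch
```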

View File

@ -351,7 +351,7 @@ ALTER TABLE mt DELETE IN PARTITION ID '2' WHERE p = 2;
You can specify the partition expression in `ALTER ... PARTITION` queries in different ways:
- As a value from the `partition` column of the `system.parts` table. For example, `ALTER TABLE visits DETACH PARTITION 201901`.
- Using the keyword `ALL`. It can be used only with DROP/DETACH/ATTACH/ATTACH FROM. For example, `ALTER TABLE visits ATTACH PARTITION ALL`.
- As a tuple of expressions or constants that matches (in types) the table partitioning keys tuple. In the case of a single element partitioning key, the expression should be wrapped in the `tuple (...)` function. For example, `ALTER TABLE visits DETACH PARTITION tuple(toYYYYMM(toDate('2019-01-25')))`.
- Using the partition ID. The partition ID is a string identifier of the partition (human-readable, if possible) that is used as the name of partitions in the file system and in ZooKeeper. The partition ID must be specified in the `PARTITION ID` clause, in single quotes. For example, `ALTER TABLE visits DETACH PARTITION ID '201901'`.
- In the [ALTER ATTACH PART](#attach-partitionpart) and [DROP DETACHED PART](#drop-detached-partitionpart) query, to specify the name of a part, use string literal with a value from the `name` column of the [system.detached_parts](/docs/en/operations/system-tables/detached_parts.md/#system_tables-detached_parts) table. For example, `ALTER TABLE visits ATTACH PART '201901_1_1_0'`.
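Putting the variants above together, a short sketch against the `visits` table from the examples:
```sql
ALTER TABLE visits DETACH PARTITION 201901;                                 -- by partition value
ALTER TABLE visits DETACH PARTITION tuple(toYYYYMM(toDate('2019-01-25')));  -- by expression tuple matching the partitioning key
ALTER TABLE visits DETACH PARTITION ID '201901';                            -- by partition ID
ALTER TABLE visits ATTACH PARTITION ALL;                                    -- keyword ALL (DROP/DETACH/ATTACH only)
```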

View File

@ -8,7 +8,7 @@ title: "CREATE ROW POLICY"
Creates a [row policy](../../../guides/sre/user-management/index.md#row-policy-management), i.e. a filter used to determine which rows a user can read from a table.
:::tip
Row policies make sense only for users with readonly access. If a user can modify a table or copy partitions between tables, it defeats the restrictions of row policies.
:::
Syntax:
@ -24,40 +24,40 @@ CREATE [ROW] POLICY [IF NOT EXISTS | OR REPLACE] policy_name1 [ON CLUSTER cluste
## USING Clause
Allows specifying a condition to filter rows. A user will see a row if the condition is calculated to non-zero for the row.
## TO Clause
In the `TO` section you can provide a list of users and roles this policy should work for. For example, `CREATE ROW POLICY ... TO accountant, john@localhost`.
Keyword `ALL` means all the ClickHouse users, including the current user. Keyword `ALL EXCEPT` allows excluding some users from the all-users list, for example, `CREATE ROW POLICY ... TO ALL EXCEPT accountant, john@localhost`.
:::note
If there are no row policies defined for a table, then any user can `SELECT` all the rows from the table. Defining one or more row policies for the table makes access to the table dependent on the row policies, no matter if those row policies are defined for the current user or not. For example, the following policy:
`CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter`
forbids the users `mira` and `peter` from seeing the rows with `b != 1`, and any non-mentioned user (e.g., the user `paul`) will see no rows from `mydb.table1` at all.
If that's not desirable, it can be fixed by adding one more row policy, like the following:
`CREATE ROW POLICY pol2 ON mydb.table1 USING 1 TO ALL EXCEPT mira, peter`
:::
## AS Clause
It's allowed to have more than one policy enabled on the same table for the same user at one time. So we need a way to combine the conditions from multiple policies.
By default, policies are combined using the boolean `OR` operator. For example, the following policies:
``` sql
CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 TO peter, antonio
```
enable the user `peter` to see rows with either `b=1` or `c=2`.
The `AS` clause specifies how policies should be combined with other policies. Policies can be either permissive or restrictive. By default, policies are permissive, which means they are combined using the boolean `OR` operator.
A policy can be defined as restrictive as an alternative. Restrictive policies are combined using the boolean `AND` operator.
@ -68,25 +68,25 @@ row_is_visible = (one or more of the permissive policies' conditions are non-zer
(all of the restrictive policies' conditions are non-zero)
```
For example, the following policies:
``` sql
CREATE ROW POLICY pol1 ON mydb.table1 USING b=1 TO mira, peter
CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
```
enable the user `peter` to see rows only if both `b=1` AND `c=2`.
Database policies are combined with table policies.
For example, the following policies:
``` sql
CREATE ROW POLICY pol1 ON mydb.* USING b=1 TO mira, peter
CREATE ROW POLICY pol2 ON mydb.table1 USING c=2 AS RESTRICTIVE TO peter, antonio
```
enable the user `peter` to see table1 rows only if both `b=1` AND `c=2`, although
any other table in mydb would have only the `b=1` policy applied for the user.

View File

@ -10,10 +10,10 @@ title: The Lightweight DELETE Statement
The lightweight `DELETE` statement removes rows from the table `[db.]table` that match the expression `expr`. It is only available for the `*MergeTree` table engine family.
``` sql
DELETE FROM [db.]table [ON CLUSTER cluster] [IN PARTITION partition_expr] WHERE expr;
```
It is called "lightweight `DELETE`" to contrast it to the [ALTER TABLE ... DELETE](/en/sql-reference/statements/alter/delete) command, which is a heavyweight process.
## Examples
@ -22,23 +22,27 @@ It is called "lightweight `DELETE`" to contrast it to the [ALTER table DELETE](/
DELETE FROM hits WHERE Title LIKE '%hello%';
```
## Lightweight `DELETE` does not delete data immediately
Lightweight `DELETE` is implemented as a [mutation](/en/sql-reference/statements/alter#mutations) that marks rows as deleted but does not immediately physically delete them.
By default, `DELETE` statements wait until marking the rows as deleted is completed before returning. This can take a long time if the amount of data is large. Alternatively, you can run it asynchronously in the background using the setting [`lightweight_deletes_sync`](/en/operations/settings/settings#lightweight_deletes_sync). If disabled, the `DELETE` statement will return immediately, but the data can still be visible to queries until the background mutation is finished.
The mutation does not physically delete the rows that have been marked as deleted; this will only happen during the next merge. As a result, it is possible that for an unspecified period, data is not actually deleted from storage and is only marked as deleted.
If you need to guarantee that your data is deleted from storage in a predictable time, consider using the table setting [`min_age_to_force_merge_seconds`](https://clickhouse.com/docs/en/operations/settings/merge-tree-settings#min_age_to_force_merge_seconds). Or you can use the [ALTER TABLE ... DELETE](/en/sql-reference/statements/alter/delete) command. Note that deleting data using `ALTER TABLE ... DELETE` may consume significant resources as it recreates all affected parts.
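For example, a sketch (the table name `hits` is hypothetical) that asks ClickHouse to merge parts, and thereby physically drop the masked rows, once parts are an hour old:
```sql
-- min_age_to_force_merge_seconds is a MergeTree-level setting, applied per table.
ALTER TABLE hits MODIFY SETTING min_age_to_force_merge_seconds = 3600;
```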
## Deleting large amounts of data
Large deletes can negatively affect ClickHouse performance. If you are attempting to delete all rows from a table, consider using the [`TRUNCATE TABLE`](/en/sql-reference/statements/truncate) command.
If you anticipate frequent deletes, consider using a [custom partitioning key](/en/engines/table-engines/mergetree-family/custom-partitioning-key). You can then use the [`ALTER TABLE ... DROP PARTITION`](/en/sql-reference/statements/alter/partition#drop-partitionpart) command to quickly drop all rows associated with that partition.
## Limitations of lightweight `DELETE`
### Lightweight `DELETE`s with projections
By default, `DELETE` does not work for tables with projections. This is because rows in a projection may be affected by a `DELETE` operation. But there is a [MergeTree setting](https://clickhouse.com/docs/en/operations/settings/merge-tree-settings) `lightweight_mutation_projection_mode` to change the behavior.
## Performance considerations when using lightweight `DELETE`
@ -48,7 +52,7 @@ The following can also negatively impact lightweight `DELETE` performance:
- A heavy `WHERE` condition in a `DELETE` query.
- If the mutations queue is filled with many other mutations, this can possibly lead to performance issues as all mutations on a table are executed sequentially.
- The affected table has a very large number of data parts.
- Having a lot of data in compact parts. In a Compact part, all columns are stored in one file.
## Delete permissions
@ -61,31 +65,31 @@ GRANT ALTER DELETE ON db.table to username;
## How lightweight DELETEs work internally in ClickHouse
1. **A "mask" is applied to affected rows**
When a `DELETE FROM table ...` query is executed, ClickHouse saves a mask where each row is marked as either “existing” or as “deleted”. Those “deleted” rows are omitted for subsequent queries. However, rows are actually only removed later by subsequent merges. Writing this mask is much more lightweight than what is done by an `ALTER TABLE ... DELETE` query.
The mask is implemented as a hidden `_row_exists` system column that stores `True` for all visible rows and `False` for deleted ones. This column is only present in a part if some rows in the part were deleted. This column does not exist when a part has all values equal to `True`.
2. `SELECT` queries are transformed to include the mask
2. **`SELECT` queries are transformed to include the mask**
When a masked column is used in a query, the `SELECT ... FROM table WHERE condition` query is internally extended by the predicate on `_row_exists` and is transformed to:
```sql
SELECT ... FROM table PREWHERE _row_exists WHERE condition
```
At execution time, the column `_row_exists` is read to determine which rows should not be returned. If there are many deleted rows, ClickHouse can determine which granules can be fully skipped when reading the rest of the columns.
3. `DELETE` queries are transformed to `ALTER table UPDATE` queries
3. **`DELETE` queries are transformed to `ALTER TABLE ... UPDATE` queries**
The `DELETE FROM table WHERE condition` is translated into an `ALTER table UPDATE _row_exists = 0 WHERE condition` mutation.
The `DELETE FROM table WHERE condition` is translated into an `ALTER TABLE table UPDATE _row_exists = 0 WHERE condition` mutation.
Internally, this mutation is executed in two steps:
1. A `SELECT count() FROM table WHERE condition` command is executed for each individual part to determine if the part is affected.
2. Based on the commands above, affected parts are then mutated, and hardlinks are created for unaffected parts. In the case of wide parts, the `_row_exists` column for each row is updated and all other columns' files are hardlinked. For compact parts, all columns are re-written because they are all stored together in one file.
2. Based on the commands above, affected parts are then mutated, and hardlinks are created for unaffected parts. In the case of wide parts, the `_row_exists` column for each row is updated, and all other columns' files are hardlinked. For compact parts, all columns are re-written because they are all stored together in one file.
From the steps above, we can see that lightweight deletes using the masking technique improves performance over traditional `ALTER table DELETE` commands because `ALTER table DELETE` reads and re-writes all the columns' files for affected parts.
From the steps above, we can see that lightweight `DELETE` using the masking technique improves performance over traditional `ALTER TABLE ... DELETE` because it does not re-write all the columns' files for affected parts.
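To make step 3 above concrete, a sketch (the table and column are hypothetical):

```sql
DELETE FROM db.table WHERE id = 42;
-- is internally rewritten into the mutation:
ALTER TABLE db.table UPDATE _row_exists = 0 WHERE id = 42;
```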
## Related content

View File

@ -15,7 +15,14 @@ The `FROM` clause specifies the source to read data from:
A subquery is another `SELECT` query that may be specified in parentheses inside the `FROM` clause.
`FROM` clause can contain multiple data sources, separated by commas, which is equivalent of performing [CROSS JOIN](../../../sql-reference/statements/select/join.md) on them.
The `FROM` clause can contain multiple data sources, separated by commas, which is equivalent to performing [CROSS JOIN](../../../sql-reference/statements/select/join.md) on them.
`FROM` can optionally appear before a `SELECT` clause. This is a ClickHouse-specific extension of standard SQL which makes `SELECT` statements easier to read. Example:
```sql
FROM table
SELECT *
```
## FINAL Modifier
@ -45,19 +52,19 @@ As an alternative to using `FINAL`, it is sometimes possible to use different qu
### Example Usage
**Using the `FINAL` keyword**
Using the `FINAL` keyword
```sql
SELECT x, y FROM mytable FINAL WHERE x > 1;
```
**Using `FINAL` as a query-level setting**
Using `FINAL` as a query-level setting
```sql
SELECT x, y FROM mytable WHERE x > 1 SETTINGS final = 1;
```
**Using `FINAL` as a session-level setting**
Using `FINAL` as a session-level setting
```sql
SET final = 1;

View File

@ -6,35 +6,37 @@ sidebar_label: iceberg
# iceberg Table Function
Provides a read-only table-like interface to Apache [Iceberg](https://iceberg.apache.org/) tables in Amazon S3.
Provides a read-only table-like interface to Apache [Iceberg](https://iceberg.apache.org/) tables in Amazon S3, Azure, or stored locally.
## Syntax
``` sql
iceberg(url [,aws_access_key_id, aws_secret_access_key] [,format] [,structure])
icebergS3(url [, NOSIGN | access_key_id, secret_access_key, [session_token]] [,format] [,compression_method])
icebergS3(named_collection[, option=value [,..]])
icebergAzure(connection_string|storage_account_url, container_name, blobpath, [,account_name], [,account_key] [,format] [,compression_method])
icebergAzure(named_collection[, option=value [,..]])
icebergLocal(path_to_table, [,format] [,compression_method])
icebergLocal(named_collection[, option=value [,..]])
```
## Arguments
- `url` — Bucket url with the path to an existing Iceberg table in S3.
- `aws_access_key_id`, `aws_secret_access_key` - Long-term credentials for the [AWS](https://aws.amazon.com/) account user. You can use these to authenticate your requests. These parameters are optional. If credentials are not specified, they are used from the ClickHouse configuration. For more information see [Using S3 for Data Storage](/docs/en/engines/table-engines/mergetree-family/mergetree.md/#table_engine-mergetree-s3).
- `format` — The [format](/docs/en/interfaces/formats.md/#formats) of the file. By default `Parquet` is used.
- `structure` — Structure of the table. Format `'column1_name column1_type, column2_name column2_type, ...'`.
Engine parameters can be specified using [Named Collections](/docs/en/operations/named-collections.md).
The description of the arguments coincides with the description of the arguments in the table functions `s3`, `azureBlobStorage`, and `file`, respectively.
`format` stands for the format of data files in the Iceberg table.
**Returned value**
A table with the specified structure for reading data in the specified Iceberg table in S3.
A table with the specified structure for reading data in the specified Iceberg table.
**Example**
```sql
SELECT * FROM iceberg('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test')
SELECT * FROM icebergS3('http://test.s3.amazonaws.com/clickhouse-bucket/test_table', 'test', 'test')
```
:::important
ClickHouse currently supports reading v1 (v2 support is coming soon!) of the Iceberg format via the `iceberg` table function and `Iceberg` table engine.
ClickHouse currently supports reading v1 and v2 of the Iceberg format via the `icebergS3`, `icebergAzure` and `icebergLocal` table functions and the `IcebergS3`, `IcebergAzure` and `IcebergLocal` table engines.
:::
## Defining a named collection
@ -56,10 +58,14 @@ Here is an example of configuring a named collection for storing the URL and cre
```
```sql
SELECT * FROM iceberg(iceberg_conf, filename = 'test_table')
DESCRIBE iceberg(iceberg_conf, filename = 'test_table')
SELECT * FROM icebergS3(iceberg_conf, filename = 'test_table')
DESCRIBE icebergS3(iceberg_conf, filename = 'test_table')
```
**Aliases**
The table function `iceberg` is now an alias for `icebergS3`.
**See Also**
- [Iceberg engine](/docs/en/engines/table-engines/integrations/iceberg.md)

View File

@ -54,6 +54,7 @@ Character `|` inside patterns is used to specify failover addresses. They are it
- `_file` — Resource name of the `URL`. Type: `LowCardinality(String)`.
- `_size` — Size of the resource in bytes. Type: `Nullable(UInt64)`. If the size is unknown, the value is `NULL`.
- `_time` — Last modified time of the file. Type: `Nullable(DateTime)`. If the time is unknown, the value is `NULL`.
- `_headers` — HTTP response headers. Type: `Map(LowCardinality(String), LowCardinality(String))`.
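A sketch of reading these virtual columns with the `url` table function (the URL and structure are placeholders):

```sql
SELECT _size, _time, _headers
FROM url('https://example.com/data.csv', 'CSV', 'c1 String')
LIMIT 1;
```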
## Hive-style partitioning {#hive-style-partitioning}

View File

@ -8,14 +8,14 @@ slug: /en/guides/developer/transactional
This is transactional (ACID) if the inserted rows are packed and inserted as a single block (see Notes):
- Atomic: an INSERT succeeds or is rejected as a whole: if a confirmation is sent to the client, then all rows were inserted; if an error is sent to the client, then no rows were inserted.
- Consistent: if there are no table constraints violated, then all rows in an INSERT are inserted and the INSERT succeeds; if constraints are violated, then no rows are inserted.
- Isolated: concurrent clients observe a consistent snapshot of the table: the state of the table either as it was before the INSERT attempt, or after the successful INSERT; no partial state is seen
- Isolated: concurrent clients observe a consistent snapshot of the table: the state of the table either as it was before the INSERT attempt, or after the successful INSERT; no partial state is seen. Clients inside of another transaction have [snapshot isolation](https://en.wikipedia.org/wiki/Snapshot_isolation), while clients outside of a transaction have the [read uncommitted](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Read_uncommitted) isolation level.
- Durable: a successful INSERT is written to the filesystem before answering to the client, on a single replica or multiple replicas (controlled by the `insert_quorum` setting), and ClickHouse can ask the OS to sync the filesystem data on the storage media (controlled by the `fsync_after_insert` setting).
- INSERT into multiple tables with one statement is possible if materialized views are involved (an INSERT from the client into a table which has associated materialized views).
## Case 2: INSERT into multiple partitions, of one table, of the MergeTree* family
Same as Case 1 above, with this detail:
- If table has many partitions and INSERT covers many partitions - then insertion into every partition is transactional on its own
- If table has many partitions and INSERT covers many partitions, then insertion into every partition is transactional on its own
## Case 3: INSERT into one distributed table of the MergeTree* family
@ -38,7 +38,7 @@ Same as Case 1 above, with this detail:
- the insert format is column-based (like Native, Parquet, ORC, etc.) and the data contains only one block of data
- the size of the inserted block in general may depend on many settings (for example: `max_block_size`, `max_insert_block_size`, `min_insert_block_size_rows`, `min_insert_block_size_bytes`, `preferred_block_size_bytes`, etc.)
- if the client did not receive an answer from the server, the client does not know if the transaction succeeded, and it can repeat the transaction, using exactly-once insertion properties
- ClickHouse is using MVCC with snapshot isolation internally
- ClickHouse uses [MVCC](https://en.wikipedia.org/wiki/Multiversion_concurrency_control) with [snapshot isolation](https://en.wikipedia.org/wiki/Snapshot_isolation) internally for concurrent transactions
- all ACID properties are valid even in the case of server kill/crash
- either `insert_quorum` across different AZs or `fsync` should be enabled to ensure durable inserts in the typical setup
- "consistency" in ACID terms does not cover the semantics of distributed systems, see https://jepsen.io/consistency which is controlled by different settings (select_sequential_consistency)
@ -260,7 +260,7 @@ FROM mergetree_table
### Transactions introspection
You can inspect transactions by querying the `system.transactions` table, but note that you cannot query that
table from a session that is in a transaction: open a second `clickhouse client` session to query that table.
table from a session that is in a transaction. Open a second `clickhouse client` session to query that table.
```sql
SELECT *

View File

@ -22,18 +22,26 @@ grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not su
### From deb packages {#install-from-deb-packages}
Yandex recommends using the official compiled `deb` packages for Debian or Ubuntu. To install the packages, run:
It is recommended to use the official compiled `deb` packages for Debian or Ubuntu. To install the packages, run:
``` bash
sudo apt-get install -y apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 8919F6BD2B48D754
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL 'https://packages.clickhouse.com/rpm/lts/repodata/repomd.xml.key' | sudo gpg --dearmor -o /usr/share/keyrings/clickhouse-keyring.gpg
echo "deb https://packages.clickhouse.com/deb stable main" | sudo tee \
echo "deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb stable main" | sudo tee \
/etc/apt/sources.list.d/clickhouse.list
sudo apt-get update
```
#### Installing ClickHouse server and client
```bash
sudo apt-get install -y clickhouse-server clickhouse-client
```
#### Starting ClickHouse server
```bash
sudo service clickhouse-server start
clickhouse-client # or "clickhouse-client --password" if you've set up a password.
```
@ -55,7 +63,7 @@ clickhouse-client # or "clickhouse-client --password" if you've set up a passwor
:::
### From rpm packages {#from-rpm-packages}
The ClickHouse team at Yandex recommends using the official precompiled `rpm` packages for CentOS, RedHat, and all other rpm-based Linux distributions.
The ClickHouse team recommends using the official precompiled `rpm` packages for CentOS, RedHat, and all other rpm-based Linux distributions.
#### Installing the official repository
@ -102,7 +110,7 @@ sudo yum install clickhouse-server clickhouse-client
### From tgz archives {#from-tgz-archives}
The ClickHouse team at Yandex recommends using the precompiled binaries from `tgz` archives for all distributions where installation of `deb` and `rpm` packages is not possible.
The ClickHouse team recommends using the precompiled binaries from `tgz` archives for all distributions where installation of `deb` and `rpm` packages is not possible.
The desired version of the archives can be downloaded manually with `curl` or `wget` from the repository https://packages.clickhouse.com/tgz/.
After that, the archives need to be unpacked and the installation scripts run. An example of installing the most recent version:

View File

@ -50,7 +50,7 @@ Connection: Close
Content-Type: text/tab-separated-values; charset=UTF-8
X-ClickHouse-Server-Display-Name: clickhouse.ru-central1.internal
X-ClickHouse-Query-Id: 5abe861c-239c-467f-b955-8a201abb8b7f
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
1
```
@ -367,7 +367,7 @@ $ curl -v 'http://localhost:8123/predefined_query'
< X-ClickHouse-Format: Template
< X-ClickHouse-Timezone: Asia/Shanghai
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0", "elapsed_ns":"662334", "real_time_microseconds":"0"}
<
# HELP "Query" "Number of executing queries"
# TYPE "Query" counter
@ -601,7 +601,7 @@ $ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
* Connection #0 to host localhost left intact
<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>%
@ -659,7 +659,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
<html><body>Absolute Path File</body></html>
* Connection #0 to host localhost left intact
@ -678,7 +678,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact

View File

@ -124,6 +124,40 @@ SELECT hex(sipHash128('foo', '\x01', 3));
└──────────────────────────────────┘
```
## ripeMD160
Produces the [RIPEMD-160](https://en.wikipedia.org/wiki/RIPEMD) hash of a string.
**Syntax**
```sql
ripeMD160(input)
```
**Arguments**
- `input`: The input string. [String](../data-types/string.md)
**Returned value**
- A [UInt256](../data-types/int-uint.md) in which the 160-bit RIPEMD-160 hash is stored in the first 20 bytes. The remaining 12 bytes are filled with zeros.
**Example**
Use the [hex](../functions/encoding-functions.md#hex) function to represent the result as a hex-encoded string.
Query:
```sql
SELECT hex(ripeMD160('The quick brown fox jumps over the lazy dog'));
```
Result:
```response
┌─hex(ripeMD160('The quick brown fox jumps over the lazy dog'))─┐
│ 37F332F68DB77BD9D7EDD4969571AD671CF9DD3B │
└───────────────────────────────────────────────────────────────┘
```
## cityHash64 {#cityhash64}
Produces a 64-bit [CityHash](https://github.com/google/cityhash) hash value.

View File

@ -280,7 +280,7 @@ SYSTEM START REPLICATION QUEUES [ON CLUSTER cluster_name] [[db.]replicated_merge
Waits until a table of the `ReplicatedMergeTree` family is synchronized with the other replicas in the cluster, but no longer than `receive_timeout` seconds:
``` sql
SYSTEM SYNC REPLICA [db.]replicated_merge_tree_family_table_name [STRICT | LIGHTWEIGHT [FROM 'srcReplica1'[, 'srcReplica2'[, ...]]] | PULL]
SYSTEM SYNC REPLICA [ON CLUSTER cluster_name] [db.]replicated_merge_tree_family_table_name [STRICT | LIGHTWEIGHT [FROM 'srcReplica1'[, 'srcReplica2'[, ...]]] | PULL]
```
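For example, a cluster-wide lightweight sync (the cluster and table names are placeholders):

```sql
SYSTEM SYNC REPLICA ON CLUSTER my_cluster db.replicated_table LIGHTWEIGHT
```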
After this query is executed, the `[db.]replicated_merge_tree_family_table_name` table loads commands from the shared replicated log into its own replication queue, and then the query waits until the replica processes all of the loaded commands. The following modifiers are supported:

View File

@ -53,7 +53,7 @@ Connection: Close
Content-Type: text/tab-separated-values; charset=UTF-8
X-ClickHouse-Server-Display-Name: clickhouse.ru-central1.internal
X-ClickHouse-Query-Id: 5abe861c-239c-467f-b955-8a201abb8b7f
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
1
```
@ -363,7 +363,7 @@ $ curl -v 'http://localhost:8123/predefined_query'
< X-ClickHouse-Format: Template
< X-ClickHouse-Timezone: Asia/Shanghai
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
# HELP "Query" "Number of executing queries"
# TYPE "Query" counter
@ -524,7 +524,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/hi'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334", "real_time_microseconds":"0"}
<
* Connection #0 to host localhost left intact
Say Hi!%
@ -564,7 +564,7 @@ $ curl -v -H 'XXX:xxx' 'http://localhost:8123/get_config_static_handler'
< Content-Type: text/plain; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
<
* Connection #0 to host localhost left intact
<html ng-app="SMI2"><head><base href="http://ui.tabix.io/"></head><body><div ui-view="" class="content-ui"></div><script src="http://loader.tabix.io/master.js"></script></body></html>%
@ -616,7 +616,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_absolute_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
<
<html><body>Absolute Path File</body></html>
* Connection #0 to host localhost left intact
@ -635,7 +635,7 @@ $ curl -vv -H 'XXX:xxx' 'http://localhost:8123/get_relative_path_static_handler'
< Content-Type: text/html; charset=UTF-8
< Transfer-Encoding: chunked
< Keep-Alive: timeout=10
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334"}
< X-ClickHouse-Summary: {"read_rows":"0","read_bytes":"0","written_rows":"0","written_bytes":"0","total_rows_to_read":"0","elapsed_ns":"662334","real_time_microseconds":"0"}
<
<html><body>Relative Path File</body></html>
* Connection #0 to host localhost left intact

View File

@ -1157,7 +1157,7 @@ SELECT toModifiedJulianDayOrNull('2020-01-01');
## fromModifiedJulianDay {#frommodifiedjulianday}
Converts a [Modified Julian Day](https://en.wikipedia.org/wiki/Julian_day#Variants) number to a date in the [Proleptic Gregorian calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar) in `YYYY-MM-DD` text form. The function supports day numbers from `-678941` to `2973119` (representing 0000-01-01 and 9999-12-31, respectively), and raises an exception if the day number is outside the supported range.
Converts a [Modified Julian Day](https://en.wikipedia.org/wiki/Julian_day#Variants) number to a date in the [Proleptic Gregorian calendar](https://en.wikipedia.org/wiki/Proleptic_Gregorian_calendar) in `YYYY-MM-DD` text form. The function supports day numbers from `-678941` to `2973483` (representing 0000-01-01 and 9999-12-31, respectively), and raises an exception if the day number is outside the supported range.
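For instance (a usage sketch; 58849 is the Modified Julian Day for 2020-01-01):

```sql
SELECT fromModifiedJulianDay(58849); -- 2020-01-01
```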
**Syntax**

View File

@ -677,4 +677,122 @@ void GetAllChildrenNumberCommand::execute(const ASTKeeperQuery * query, KeeperCl
std::cout << totalNumChildren << "\n";
}
namespace
{
class CPMVOperation
{
constexpr static UInt64 kTryLimit = 1000;
public:
CPMVOperation(String src_, String dest_, bool remove_src_, KeeperClient * client_)
: src(std::move(src_)), dest(std::move(dest_)), remove_src(remove_src_), client(client_)
{
}
bool isTryLimitReached() const { return failed_tries_count >= kTryLimit; }
bool isCompleted() const { return is_completed; }
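/// Copy or move a node atomically: read the source with its version, then in a
/// single multi-op check that version, create the destination, and (for mv)
/// remove the source. ZBADVERSION means the source changed concurrently, so the
/// caller retries perform() until it succeeds or kTryLimit is reached.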
void perform()
{
Coordination::Stat src_stat;
String data = client->zookeeper->get(src, &src_stat);
Coordination::Requests ops{
zkutil::makeCheckRequest(src, src_stat.version),
zkutil::makeCreateRequest(dest, data, zkutil::CreateMode::Persistent), // Do we need to copy ACLs here?
};
if (remove_src)
ops.push_back(zkutil::makeRemoveRequest(src, src_stat.version));
Coordination::Responses responses;
auto code = client->zookeeper->tryMulti(ops, responses);
switch (code)
{
case Coordination::Error::ZOK: {
is_completed = true;
return;
}
case Coordination::Error::ZBADVERSION: {
++failed_tries_count;
if (isTryLimitReached())
zkutil::KeeperMultiException::check(code, ops, responses);
return;
}
default:
zkutil::KeeperMultiException::check(code, ops, responses);
}
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unreachable");
}
private:
String src;
String dest;
bool remove_src = false;
KeeperClient * client = nullptr;
bool is_completed = false;
uint64_t failed_tries_count = 0;
};
}
bool CPCommand::parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & node, [[maybe_unused]] Expected & expected) const
{
String src_path;
if (!parseKeeperPath(pos, expected, src_path))
return false;
node->args.push_back(std::move(src_path));
String to_path;
if (!parseKeeperPath(pos, expected, to_path))
return false;
node->args.push_back(std::move(to_path));
return true;
}
void CPCommand::execute(const ASTKeeperQuery * query, KeeperClient * client) const
{
auto src = client->getAbsolutePath(query->args[0].safeGet<String>());
auto dest = client->getAbsolutePath(query->args[1].safeGet<String>());
CPMVOperation operation(std::move(src), std::move(dest), /*remove_src_=*/false, /*client_=*/client);
while (!operation.isTryLimitReached() && !operation.isCompleted())
operation.perform();
}
bool MVCommand::parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & node, Expected & expected) const
{
String src_path;
if (!parseKeeperPath(pos, expected, src_path))
return false;
node->args.push_back(std::move(src_path));
String to_path;
if (!parseKeeperPath(pos, expected, to_path))
return false;
node->args.push_back(std::move(to_path));
return true;
}
void MVCommand::execute(const ASTKeeperQuery * query, KeeperClient * client) const
{
auto src = client->getAbsolutePath(query->args[0].safeGet<String>());
auto dest = client->getAbsolutePath(query->args[1].safeGet<String>());
CPMVOperation operation(std::move(src), std::move(dest), /*remove_src_=*/true, /*client_=*/client);
while (!operation.isTryLimitReached() && !operation.isCompleted())
operation.perform();
}
}

View File

@ -266,4 +266,32 @@ class GetAllChildrenNumberCommand : public IKeeperClientCommand
}
};
class CPCommand : public IKeeperClientCommand
{
String getName() const override { return "cp"; }
bool parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & node, Expected & expected) const override;
void execute(const ASTKeeperQuery * query, KeeperClient * client) const override;
String getHelpMessage() const override
{
return "{} <src> <dest> -- Copies 'src' node to 'dest' path.";
}
};
class MVCommand : public IKeeperClientCommand
{
String getName() const override { return "mv"; }
bool parse(IParser::Pos & pos, std::shared_ptr<ASTKeeperQuery> & node, Expected & expected) const override;
void execute(const ASTKeeperQuery * query, KeeperClient * client) const override;
String getHelpMessage() const override
{
return "{} <src> <dest> -- Moves 'src' node to the 'dest' path.";
}
};
}

View File

@ -212,6 +212,8 @@ void KeeperClient::initialize(Poco::Util::Application & /* self */)
std::make_shared<FourLetterWordCommand>(),
std::make_shared<GetDirectChildrenNumberCommand>(),
std::make_shared<GetAllChildrenNumberCommand>(),
std::make_shared<CPCommand>(),
std::make_shared<MVCommand>(),
});
String home_path;

View File

@ -11,6 +11,7 @@
#include <Core/ServerUUID.h>
#include <Common/logger_useful.h>
#include <Common/CgroupsMemoryUsageObserver.h>
#include <Common/MemoryWorker.h>
#include <Common/ErrorHandlers.h>
#include <Common/assertProcessUserMatchesDataOwner.h>
#include <Common/makeSocketAddress.h>
@ -384,6 +385,9 @@ try
LOG_INFO(log, "Background threads finished in {} ms", watch.elapsedMilliseconds());
});
MemoryWorker memory_worker(config().getUInt64("memory_worker_period_ms", 0));
memory_worker.start();
static ServerErrorHandler error_handler;
Poco::ErrorHandler::set(&error_handler);
@ -425,8 +429,9 @@ try
for (const auto & server : *servers)
metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads(), server.refusedConnections()});
return metrics;
}
);
},
/*update_jemalloc_epoch_=*/memory_worker.getSource() != MemoryWorker::MemoryUsageSource::Jemalloc,
/*update_rss_=*/memory_worker.getSource() == MemoryWorker::MemoryUsageSource::None);
std::vector<std::string> listen_hosts = DB::getMultipleValuesFromConfig(config(), "", "listen_host");
@ -655,7 +660,6 @@ try
GWPAsan::initFinished();
#endif
LOG_INFO(log, "Ready for connections.");
waitForTerminationRequest();

View File

@ -11,7 +11,6 @@
#include <Poco/Util/HelpFormatter.h>
#include <Poco/Environment.h>
#include <Poco/Config.h>
#include <Common/Jemalloc.h>
#include <Common/scope_guard_safe.h>
#include <Common/logger_useful.h>
#include <base/phdr_cache.h>
@ -25,6 +24,7 @@
#include <base/Numa.h>
#include <Common/PoolId.h>
#include <Common/MemoryTracker.h>
#include <Common/MemoryWorker.h>
#include <Common/ClickHouseRevision.h>
#include <Common/DNSResolver.h>
#include <Common/CgroupsMemoryUsageObserver.h>
@ -111,6 +111,8 @@
#include <filesystem>
#include <unordered_set>
#include <Common/Jemalloc.h>
#include "config.h"
#include <Common/config_version.h>
@ -449,9 +451,12 @@ void checkForUsersNotInMainConfig(
}
}
namespace
{
/// Unused in other builds
#if defined(OS_LINUX)
static String readLine(const String & path)
String readLine(const String & path)
{
ReadBufferFromFile in(path);
String contents;
@ -459,7 +464,7 @@ static String readLine(const String & path)
return contents;
}
static int readNumber(const String & path)
int readNumber(const String & path)
{
ReadBufferFromFile in(path);
int result;
@ -469,7 +474,7 @@ static int readNumber(const String & path)
#endif
static void sanityChecks(Server & server)
void sanityChecks(Server & server)
{
std::string data_path = getCanonicalPath(server.config().getString("path", DBMS_DEFAULT_PATH));
std::string logs_path = server.config().getString("logger.log", "");
@ -590,6 +595,8 @@ static void sanityChecks(Server & server)
}
}
}
void loadStartupScripts(const Poco::Util::AbstractConfiguration & config, ContextMutablePtr context, Poco::Logger * log)
{
try
@ -906,6 +913,8 @@ try
LOG_INFO(log, "Background threads finished in {} ms", watch.elapsedMilliseconds());
});
MemoryWorker memory_worker(global_context->getServerSettings().memory_worker_period_ms);
/// This object will periodically calculate some metrics.
ServerAsynchronousMetrics async_metrics(
global_context,
@ -924,8 +933,9 @@ try
for (const auto & server : servers)
metrics.emplace_back(ProtocolServerMetrics{server.getPortName(), server.currentThreads(), server.refusedConnections()});
return metrics;
}
);
},
/*update_jemalloc_epoch_=*/memory_worker.getSource() != MemoryWorker::MemoryUsageSource::Jemalloc,
/*update_rss_=*/memory_worker.getSource() == MemoryWorker::MemoryUsageSource::None);
/// NOTE: global context should be destroyed *before* GlobalThreadPool::shutdown()
/// Otherwise GlobalThreadPool::shutdown() will hang, since Context holds some threads.
@ -978,6 +988,7 @@ try
/** Explicitly destroy Context. It is more convenient than in destructor of Server, because logger is still available.
* At this moment, no one could own shared part of Context.
*/
global_context->resetSharedContext();
global_context.reset();
shared_context.reset();
LOG_DEBUG(log, "Destroyed global context.");
@ -1203,6 +1214,8 @@ try
FailPointInjection::enableFromGlobalConfig(config());
memory_worker.start();
int default_oom_score = 0;
#if !defined(NDEBUG)
@ -1546,15 +1559,6 @@ try
total_memory_tracker.setDescription("(total)");
total_memory_tracker.setMetric(CurrentMetrics::MemoryTracking);
if (cgroups_memory_usage_observer)
{
double hard_limit_ratio = new_server_settings.cgroup_memory_watcher_hard_limit_ratio;
double soft_limit_ratio = new_server_settings.cgroup_memory_watcher_soft_limit_ratio;
cgroups_memory_usage_observer->setMemoryUsageLimits(
static_cast<uint64_t>(max_server_memory_usage * hard_limit_ratio),
static_cast<uint64_t>(max_server_memory_usage * soft_limit_ratio));
}
size_t merges_mutations_memory_usage_soft_limit = new_server_settings.merges_mutations_memory_usage_soft_limit;
size_t default_merges_mutations_server_memory_usage = static_cast<size_t>(current_physical_server_memory * new_server_settings.merges_mutations_memory_usage_to_ram_ratio);
@ -1583,8 +1587,6 @@ try
background_memory_tracker.setDescription("(background)");
background_memory_tracker.setMetric(CurrentMetrics::MergesMutationsMemoryTracking);
total_memory_tracker.setAllowUseJemallocMemory(new_server_settings.allow_use_jemalloc_memory);
auto * global_overcommit_tracker = global_context->getGlobalOvercommitTracker();
total_memory_tracker.setOvercommitTracker(global_overcommit_tracker);

View File

@ -120,7 +120,7 @@ void RoleCache::collectEnabledRoles(EnabledRoles & enabled_roles, SubscriptionsO
SubscriptionsOnRoles new_subscriptions_on_roles;
new_subscriptions_on_roles.reserve(subscriptions_on_roles.size());
auto get_role_function = [this, &subscriptions_on_roles](const UUID & id) TSA_NO_THREAD_SAFETY_ANALYSIS { return getRole(id, subscriptions_on_roles); };
auto get_role_function = [this, &new_subscriptions_on_roles](const UUID & id) TSA_NO_THREAD_SAFETY_ANALYSIS { return getRole(id, new_subscriptions_on_roles); };
for (const auto & current_role : enabled_roles.params.current_roles)
collectRoles(*new_info, skip_ids, get_role_function, current_role, true, false);

Some files were not shown because too many files have changed in this diff.