Mirror of https://github.com/ClickHouse/ClickHouse.git, synced 2024-11-23 08:02:02 +00:00.

Commit 912592ca78: Merge remote-tracking branch 'origin/master' into pr-receive-timeout-on-handshake
@@ -96,7 +96,6 @@ Checks: [
'-modernize-use-default-member-init',
'-modernize-use-emplace',
'-modernize-use-nodiscard',
'-modernize-use-override',
'-modernize-use-trailing-return-type',

'-performance-inefficient-string-concatenation',
@@ -120,7 +119,6 @@ Checks: [
'-readability-named-parameter',
'-readability-redundant-declaration',
'-readability-simplify-boolean-expr',
'-readability-static-accessed-through-instance',
'-readability-suspicious-call-argument',
'-readability-uppercase-literal-suffix',
'-readability-use-anyofallof',
.github/PULL_REQUEST_TEMPLATE.md (vendored, 41 changed lines)
@@ -44,22 +44,35 @@ At a minimum, the following information should be added (but add more as needed)

---
### Modify your CI run:
**NOTE:** If you merge the PR with modified CI you **MUST KNOW** what you are doing
**NOTE:** Set desired options before CI starts or re-push after updates
**NOTE:** Checked options will be applied if set before CI RunConfig/PrepareRunConfig step

#### Run only:
- [ ] <!---ci_set_integration--> Integration tests
- [ ] <!---ci_set_arm--> Integration tests (arm64)
- [ ] <!---ci_set_stateless--> Stateless tests (release)
- [ ] <!---ci_set_stateless_asan--> Stateless tests (asan)
- [ ] <!---ci_set_stateful--> Stateful tests (release)
- [ ] <!---ci_set_stateful_asan--> Stateful tests (asan)
- [ ] <!---ci_set_reduced--> No sanitizers
- [ ] <!---ci_set_analyzer--> Tests with analyzer
- [ ] <!---ci_set_fast--> Fast tests
- [ ] <!---job_package_debug--> Only package_debug build
- [ ] <!---PLACE_YOUR_TAG_CONFIGURED_IN_ci_config.py_FILE_HERE--> Add your CI variant description here
#### Include tests (required builds will be added automatically):
- [ ] <!---ci_include_fast--> Fast test
- [ ] <!---ci_include_integration--> Integration Tests
- [ ] <!---ci_include_stateless--> Stateless tests
- [ ] <!---ci_include_stateful--> Stateful tests
- [ ] <!---ci_include_unit--> Unit tests
- [ ] <!---ci_include_performance--> Performance tests
- [ ] <!---ci_include_asan--> All with ASAN
- [ ] <!---ci_include_tsan--> All with TSAN
- [ ] <!---ci_include_analyzer--> All with Analyzer
- [ ] <!---ci_include_KEYWORD--> Add your option here

#### CI options:
#### Exclude tests:
- [ ] <!---ci_exclude_fast--> Fast test
- [ ] <!---ci_exclude_integration--> Integration Tests
- [ ] <!---ci_exclude_stateless--> Stateless tests
- [ ] <!---ci_exclude_stateful--> Stateful tests
- [ ] <!---ci_exclude_performance--> Performance tests
- [ ] <!---ci_exclude_asan--> All with ASAN
- [ ] <!---ci_exclude_tsan--> All with TSAN
- [ ] <!---ci_exclude_msan--> All with MSAN
- [ ] <!---ci_exclude_ubsan--> All with UBSAN
- [ ] <!---ci_exclude_coverage--> All with Coverage
- [ ] <!---ci_exclude_aarch64--> All with Aarch64
- [ ] <!---ci_exclude_KEYWORD--> Add your option here

#### Extra options:
- [ ] <!---do_not_test--> do not test (only style check)
- [ ] <!---no_merge_commit--> disable merge-commit (no merge from master before tests)
- [ ] <!---no_ci_cache--> disable CI cache (job reuse)
.github/workflows/master.yml (vendored, 4 changed lines)
@@ -374,7 +374,7 @@ jobs:
    if: ${{ !failure() && !cancelled() }}
    uses: ./.github/workflows/reusable_test.yml
    with:
      test_name: Stateless tests (release, analyzer, s3, DatabaseReplicated)
      test_name: Stateless tests (release, old analyzer, s3, DatabaseReplicated)
      runner_type: func-tester
      data: ${{ needs.RunConfig.outputs.data }}
  FunctionalStatelessTestS3Debug:
@@ -632,7 +632,7 @@ jobs:
    if: ${{ !failure() && !cancelled() }}
    uses: ./.github/workflows/reusable_test.yml
    with:
      test_name: Integration tests (asan, analyzer)
      test_name: Integration tests (asan, old analyzer)
      runner_type: stress-tester
      data: ${{ needs.RunConfig.outputs.data }}
  IntegrationTestsTsan:
.github/workflows/pull_request.yml (vendored, 2 changed lines)
@@ -157,7 +157,7 @@ jobs:
  ################################# Stage Final #################################
  #
  FinishCheck:
    if: ${{ !failure() && !cancelled() }}
    if: ${{ !failure() && !cancelled() && github.event_name != 'merge_group' }}
    needs: [Tests_1, Tests_2]
    runs-on: [self-hosted, style-checker]
    steps:
.github/workflows/release_branches.yml (vendored, 2 changed lines)
@@ -436,7 +436,7 @@ jobs:
    if: ${{ !failure() && !cancelled() }}
    uses: ./.github/workflows/reusable_test.yml
    with:
      test_name: Integration tests (asan, analyzer)
      test_name: Integration tests (asan, old analyzer)
      runner_type: stress-tester
      data: ${{ needs.RunConfig.outputs.data }}
  IntegrationTestsTsan:
.gitignore (vendored, 3 changed lines)
@@ -164,6 +164,9 @@ tests/queries/0_stateless/*.generated-expect
tests/queries/0_stateless/*.expect.history
tests/integration/**/_gen

# pytest --pdb history
.pdb_history

# rust
/rust/**/target*
# It is autogenerated from *.in
@@ -6,7 +6,7 @@

# 2024 Changelog

### <a id="243"></a> ClickHouse release 24.3 LTS, 2024-03-26
### <a id="243"></a> ClickHouse release 24.3 LTS, 2024-03-27

#### Upgrade Notes
* The setting `allow_experimental_analyzer` is enabled by default and it switches the query analysis to a new implementation, which has better compatibility and feature completeness. The feature "analyzer" is considered beta instead of experimental. You can turn the old behavior by setting the `compatibility` to `24.2` or disabling the `allow_experimental_analyzer` setting. Watch the [video on YouTube](https://www.youtube.com/watch?v=zhrOYQpgvkk).
@@ -123,7 +123,6 @@
* Something was wrong with Apache Hive, which is experimental and not supported. [#60262](https://github.com/ClickHouse/ClickHouse/pull/60262) ([shanfengp](https://github.com/Aed-p)).
* An improvement for experimental parallel replicas: force reanalysis if parallel replicas changed [#60362](https://github.com/ClickHouse/ClickHouse/pull/60362) ([Raúl Marín](https://github.com/Algunenano)).
* Fix usage of plain metadata type with new disks configuration option [#60396](https://github.com/ClickHouse/ClickHouse/pull/60396) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Don't allow to set max_parallel_replicas to 0 as it doesn't make sense [#60430](https://github.com/ClickHouse/ClickHouse/pull/60430) ([Kruglov Pavel](https://github.com/Avogar)).
* Try to fix logical error 'Cannot capture column because it has incompatible type' in mapContainsKeyLike [#60451](https://github.com/ClickHouse/ClickHouse/pull/60451) ([Kruglov Pavel](https://github.com/Avogar)).
* Avoid calculation of scalar subqueries for CREATE TABLE. [#60464](https://github.com/ClickHouse/ClickHouse/pull/60464) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix deadlock in parallel parsing when lots of rows are skipped due to errors [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)).
@@ -13,18 +13,16 @@ The following versions of ClickHouse server are currently being supported with s

| Version | Supported |
|:-|:-|
| 24.3 | ✔️ |
| 24.2 | ✔️ |
| 24.1 | ✔️ |
| 23.12 | ✔️ |
| 23.11 | ❌ |
| 23.10 | ❌ |
| 23.9 | ❌ |
| 23.* | ❌ |
| 23.8 | ✔️ |
| 23.7 | ❌ |
| 23.6 | ❌ |
| 23.5 | ❌ |
| 23.4 | ❌ |
| 23.3 | ✔️ |
| 23.3 | ❌ |
| 23.2 | ❌ |
| 23.1 | ❌ |
| 22.* | ❌ |
@@ -13,8 +13,6 @@
#include <tuple>
#include <limits>

#include <boost/math/special_functions/fpclassify.hpp>

// NOLINTBEGIN(*)

/// Use same extended double for all platforms
@@ -22,6 +20,7 @@
#define CONSTEXPR_FROM_DOUBLE constexpr
using FromDoubleIntermediateType = long double;
#else
#include <boost/math/special_functions/fpclassify.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>
/// `wide_integer_from_builtin` can't be constexpr with non-literal `cpp_bin_float_double_extended`
#define CONSTEXPR_FROM_DOUBLE
@@ -309,6 +308,13 @@ struct integer<Bits, Signed>::_impl
    constexpr uint64_t max_int = std::numeric_limits<uint64_t>::max();
    static_assert(std::is_same_v<T, double> || std::is_same_v<T, FromDoubleIntermediateType>);
    /// Implementation specific behaviour on overflow (if we don't check here, stack overflow will triggered in bigint_cast).
#if (LDBL_MANT_DIG == 64)
    if (!std::isfinite(t))
    {
        self = 0;
        return;
    }
#else
    if constexpr (std::is_same_v<T, double>)
    {
        if (!std::isfinite(t))
@@ -325,6 +331,7 @@ struct integer<Bits, Signed>::_impl
            return;
        }
    }
#endif

    const T alpha = t / static_cast<T>(max_int);
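To make the intent of the new non-finite check above concrete, here is a minimal sketch (not taken from the diff; the wide-integer type is elided and the function name is hypothetical) of the guard it introduces:

```cpp
#include <cfloat>
#include <cmath>
#include <cstdint>

// Sketch of the overflow guard added in this hunk: before converting a
// floating-point value into the wide integer, non-finite inputs are mapped to 0
// so that bigint_cast is never entered with +-inf or NaN (which previously could
// overflow the stack). `self` stands in for the wide-integer storage here.
template <typename T>
bool convert_from_double(T t, uint64_t & self)
{
#if (LDBL_MANT_DIG == 64)
    if (!std::isfinite(t))
    {
        self = 0;       // mirrors `self = 0; return;` in the patched code
        return false;
    }
#endif
    self = static_cast<uint64_t>(t);  // the real code splits t across 64-bit limbs
    return true;
}
```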
@@ -4835,7 +4835,7 @@ for (;; ptr++)

If the class contains characters outside the 0-255 range, a different
opcode is compiled. It may optionally have a bit map for characters < 256,
but those above are are explicitly listed afterwards. A flag byte tells
but those above are explicitly listed afterwards. A flag byte tells
whether the bitmap is present, and whether this is a negated class or not.

In JavaScript compatibility mode, an isolated ']' causes an error. In
@@ -314,13 +314,13 @@ static int read_unicode(json_stream *json)

    if (l < 0xdc00 || l > 0xdfff) {
        json_error(json, "invalid surrogate pair continuation \\u%04lx out "
                   "of range (dc00-dfff)", l);
                   "of range (dc00-dfff)", (unsigned long)l);
        return -1;
    }

    cp = ((h - 0xd800) * 0x400) + ((l - 0xdc00) + 0x10000);
} else if (cp >= 0xdc00 && cp <= 0xdfff) {
    json_error(json, "dangling surrogate \\u%04lx", cp);
    json_error(json, "dangling surrogate \\u%04lx", (unsigned long)cp);
    return -1;
}
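As a sanity check on the surrogate-pair arithmetic above, here is a small standalone example; the code point is chosen purely for illustration and is not part of the diff:

```cpp
#include <cassert>

int main()
{
    // High and low surrogate for U+1F600, used only as a worked example.
    long h = 0xd83d;
    long l = 0xde00;
    // Same arithmetic as in read_unicode(): (h - 0xd800) * 0x400 contributes the
    // high 10 bits, (l - 0xdc00) the low 10 bits, and 0x10000 rebases the
    // result above the Basic Multilingual Plane.
    long cp = ((h - 0xd800) * 0x400) + ((l - 0xdc00) + 0x10000);
    assert(cp == 0x1f600);
    return 0;
}
```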
@@ -213,6 +213,19 @@ namespace Net
        Poco::Timespan getKeepAliveTimeout() const;
            /// Returns the connection timeout for HTTP connections.

        void setKeepAliveMaxRequests(int max_requests);

        int getKeepAliveMaxRequests() const;

        int getKeepAliveRequest() const;

        bool isKeepAliveExpired(double reliability = 1.0) const;
            /// Returns if the connection is expired with some margin as fraction of timeout as reliability

        double getKeepAliveReliability() const;
            /// Returns the current fraction of keep alive timeout when connection is considered safe to use
            /// It helps to avoid situation when a client uses nearly expired connection and receives NoMessageException

        virtual std::ostream & sendRequest(HTTPRequest & request);
            /// Sends the header for the given HTTP request to
            /// the server.
@@ -345,6 +358,8 @@ namespace Net

        void assign(HTTPClientSession & session);

        void setKeepAliveRequest(int request);

        HTTPSessionFactory _proxySessionFactory;
            /// Factory to create HTTPClientSession to proxy.
    private:
@@ -353,6 +368,8 @@ namespace Net
        Poco::UInt16 _port;
        ProxyConfig _proxyConfig;
        Poco::Timespan _keepAliveTimeout;
        int _keepAliveCurrentRequest = 0;
        int _keepAliveMaxRequests = 1000;
        Poco::Timestamp _lastRequest;
        bool _reconnect;
        bool _mustReconnect;
@@ -361,6 +378,7 @@ namespace Net
        Poco::SharedPtr<std::ostream> _pRequestStream;
        Poco::SharedPtr<std::istream> _pResponseStream;

        static const double _defaultKeepAliveReliabilityLevel;
        static ProxyConfig _globalProxyConfig;

        HTTPClientSession(const HTTPClientSession &);
@@ -450,9 +468,19 @@ namespace Net
        return _lastRequest;
    }

    inline void HTTPClientSession::setLastRequest(Poco::Timestamp time)
    inline double HTTPClientSession::getKeepAliveReliability() const
    {
        _lastRequest = time;
        return _defaultKeepAliveReliabilityLevel;
    }

    inline int HTTPClientSession::getKeepAliveMaxRequests() const
    {
        return _keepAliveMaxRequests;
    }

    inline int HTTPClientSession::getKeepAliveRequest() const
    {
        return _keepAliveCurrentRequest;
    }

}
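A minimal usage sketch of the keep-alive additions declared above, assuming the public Poco::Net::HTTPClientSession API; the endpoint and the numeric values are placeholders, not taken from the diff:

```cpp
#include <Poco/Net/HTTPClientSession.h>
#include <Poco/Timespan.h>

void configureSession()
{
    Poco::Net::HTTPClientSession session("example.com", 80); // placeholder endpoint

    // Both setters have to be called before the connection is established;
    // per the implementation further below they throw IllegalStateException otherwise.
    session.setKeepAliveTimeout(Poco::Timespan(10, 0)); // 10 seconds
    session.setKeepAliveMaxRequests(100);               // reuse for at most 100 requests

    // Before reusing a pooled session, a caller can check whether it is still safe
    // to use; the default margin is getKeepAliveReliability() (0.9 in this diff).
    if (session.isKeepAliveExpired(session.getKeepAliveReliability()))
    {
        // take a fresh session instead of risking a NoMessageException
    }
}
```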
@@ -120,6 +120,10 @@ namespace Net
            /// The value is set to "Keep-Alive" if keepAlive is
            /// true, or to "Close" otherwise.

        void setKeepAliveTimeout(int timeout, int max_requests);
        int getKeepAliveTimeout() const;
        int getKeepAliveMaxRequests() const;

        bool getKeepAlive() const;
            /// Returns true if
            /// * the message has a Connection header field and its value is "Keep-Alive"
@@ -44,7 +44,7 @@ namespace Net
        ///   - timeout: 60 seconds
        ///   - keepAlive: true
        ///   - maxKeepAliveRequests: 0
        ///   - keepAliveTimeout: 10 seconds
        ///   - keepAliveTimeout: 15 seconds

        void setServerName(const std::string & serverName);
            /// Sets the name and port (name:port) that the server uses to identify itself.
@@ -56,6 +56,8 @@ namespace Net
        SocketAddress serverAddress();
            /// Returns the server's address.

        void setKeepAliveTimeout(Poco::Timespan keepAliveTimeout);

    private:
        bool _firstRequest;
        Poco::Timespan _keepAliveTimeout;
|
||||
|
||||
|
||||
HTTPClientSession::ProxyConfig HTTPClientSession::_globalProxyConfig;
|
||||
const double HTTPClientSession::_defaultKeepAliveReliabilityLevel = 0.9;
|
||||
|
||||
|
||||
HTTPClientSession::HTTPClientSession():
|
||||
@ -220,7 +221,41 @@ void HTTPClientSession::setGlobalProxyConfig(const ProxyConfig& config)
|
||||
|
||||
void HTTPClientSession::setKeepAliveTimeout(const Poco::Timespan& timeout)
|
||||
{
|
||||
_keepAliveTimeout = timeout;
|
||||
if (connected())
|
||||
{
|
||||
throw Poco::IllegalStateException("cannot change keep alive timeout on initiated connection, "
|
||||
"That value is managed privately after connection is established.");
|
||||
}
|
||||
_keepAliveTimeout = timeout;
|
||||
}
|
||||
|
||||
|
||||
void HTTPClientSession::setKeepAliveMaxRequests(int max_requests)
|
||||
{
|
||||
if (connected())
|
||||
{
|
||||
throw Poco::IllegalStateException("cannot change keep alive max requests on initiated connection, "
|
||||
"That value is managed privately after connection is established.");
|
||||
}
|
||||
_keepAliveMaxRequests = max_requests;
|
||||
}
|
||||
|
||||
|
||||
void HTTPClientSession::setKeepAliveRequest(int request)
|
||||
{
|
||||
_keepAliveCurrentRequest = request;
|
||||
}
|
||||
|
||||
|
||||
|
||||
void HTTPClientSession::setLastRequest(Poco::Timestamp time)
|
||||
{
|
||||
if (connected())
|
||||
{
|
||||
throw Poco::IllegalStateException("cannot change last request on initiated connection, "
|
||||
"That value is managed privately after connection is established.");
|
||||
}
|
||||
_lastRequest = time;
|
||||
}
|
||||
|
||||
|
||||
@ -231,6 +266,8 @@ std::ostream& HTTPClientSession::sendRequest(HTTPRequest& request)
|
||||
clearException();
|
||||
_responseReceived = false;
|
||||
|
||||
_keepAliveCurrentRequest += 1;
|
||||
|
||||
bool keepAlive = getKeepAlive();
|
||||
if (((connected() && !keepAlive) || mustReconnect()) && !_host.empty())
|
||||
{
|
||||
@ -241,8 +278,10 @@ std::ostream& HTTPClientSession::sendRequest(HTTPRequest& request)
|
||||
{
|
||||
if (!connected())
|
||||
reconnect();
|
||||
if (!keepAlive)
|
||||
request.setKeepAlive(false);
|
||||
if (!request.has(HTTPMessage::CONNECTION))
|
||||
request.setKeepAlive(keepAlive);
|
||||
if (keepAlive && !request.has(HTTPMessage::CONNECTION_KEEP_ALIVE) && _keepAliveTimeout.totalSeconds() > 0)
|
||||
request.setKeepAliveTimeout(_keepAliveTimeout.totalSeconds(), _keepAliveMaxRequests);
|
||||
if (!request.has(HTTPRequest::HOST) && !_host.empty())
|
||||
request.setHost(_host, _port);
|
||||
if (!_proxyConfig.host.empty() && !bypassProxy())
|
||||
@ -324,6 +363,17 @@ std::istream& HTTPClientSession::receiveResponse(HTTPResponse& response)
|
||||
|
||||
_mustReconnect = getKeepAlive() && !response.getKeepAlive();
|
||||
|
||||
if (!_mustReconnect)
|
||||
{
|
||||
/// when server sends its keep alive timeout, client has to follow that value
|
||||
auto timeout = response.getKeepAliveTimeout();
|
||||
if (timeout > 0)
|
||||
_keepAliveTimeout = std::min(_keepAliveTimeout, Poco::Timespan(timeout, 0));
|
||||
auto max_requests = response.getKeepAliveMaxRequests();
|
||||
if (max_requests > 0)
|
||||
_keepAliveMaxRequests = std::min(_keepAliveMaxRequests, max_requests);
|
||||
}
|
||||
|
||||
if (!_expectResponseBody || response.getStatus() < 200 || response.getStatus() == HTTPResponse::HTTP_NO_CONTENT || response.getStatus() == HTTPResponse::HTTP_NOT_MODIFIED)
|
||||
_pResponseStream = new HTTPFixedLengthInputStream(*this, 0);
|
||||
else if (response.getChunkedTransferEncoding())
|
||||
@ -430,15 +480,18 @@ std::string HTTPClientSession::proxyRequestPrefix() const
|
||||
return result;
|
||||
}
|
||||
|
||||
bool HTTPClientSession::isKeepAliveExpired(double reliability) const
|
||||
{
|
||||
Poco::Timestamp now;
|
||||
return Timespan(Timestamp::TimeDiff(reliability *_keepAliveTimeout.totalMicroseconds())) <= now - _lastRequest
|
||||
|| _keepAliveCurrentRequest > _keepAliveMaxRequests;
|
||||
}
|
||||
|
||||
bool HTTPClientSession::mustReconnect() const
|
||||
{
|
||||
if (!_mustReconnect)
|
||||
{
|
||||
Poco::Timestamp now;
|
||||
return _keepAliveTimeout <= now - _lastRequest;
|
||||
}
|
||||
else return true;
|
||||
return isKeepAliveExpired(_defaultKeepAliveReliabilityLevel);
|
||||
return true;
|
||||
}
|
||||
|
||||
|
||||
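Restated as a standalone sketch, the reuse rule implemented by `isKeepAliveExpired()` and `mustReconnect()` above looks roughly like this; the free-standing function and the numbers are illustrative, not part of Poco:

```cpp
#include <cstdint>

// The timeout is scaled by a reliability factor (0.9 by default in this diff) so the
// client stops reusing a connection shortly *before* the server is expected to drop it,
// and the per-connection request counter is bounded by the negotiated maximum.
bool keepAliveExpired(int64_t now_us, int64_t last_request_us,
                      int64_t keep_alive_timeout_us, double reliability,
                      int current_request, int max_requests)
{
    return static_cast<int64_t>(reliability * keep_alive_timeout_us) <= now_us - last_request_us
        || current_request > max_requests;
}

// Example: with a 10 s timeout and reliability 0.9, a connection idle for 9 s or more is
// already treated as expired, as is one that has served more than max_requests requests.
```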
@@ -511,14 +564,21 @@ void HTTPClientSession::assign(Poco::Net::HTTPClientSession & session)
    if (buffered())
        throw Poco::LogicException("assign to a session with not empty buffered data");

    attachSocket(session.detachSocket());
    setLastRequest(session.getLastRequest());
    poco_assert(!connected());

    setResolvedHost(session.getResolvedHost());
    setKeepAlive(session.getKeepAlive());
    setProxyConfig(session.getProxyConfig());

    setTimeout(session.getConnectionTimeout(), session.getSendTimeout(), session.getReceiveTimeout());
    setKeepAlive(session.getKeepAlive());

    setLastRequest(session.getLastRequest());
    setKeepAliveTimeout(session.getKeepAliveTimeout());
    setProxyConfig(session.getProxyConfig());

    _keepAliveMaxRequests = session._keepAliveMaxRequests;
    _keepAliveCurrentRequest = session._keepAliveCurrentRequest;

    attachSocket(session.detachSocket());

    session.reset();
}
@@ -17,6 +17,7 @@
#include "Poco/NumberFormatter.h"
#include "Poco/NumberParser.h"
#include "Poco/String.h"
#include <format>


using Poco::NumberFormatter;
@@ -179,4 +180,51 @@ bool HTTPMessage::getKeepAlive() const
}


void HTTPMessage::setKeepAliveTimeout(int timeout, int max_requests)
{
    add(HTTPMessage::CONNECTION_KEEP_ALIVE, std::format("timeout={}, max={}", timeout, max_requests));
}


int parseFromHeaderValues(const std::string_view header_value, const std::string_view param_name)
{
    auto param_value_pos = header_value.find(param_name);
    if (param_value_pos == std::string::npos)
        param_value_pos = header_value.size();
    if (param_value_pos != header_value.size())
        param_value_pos += param_name.size();

    auto param_value_end = header_value.find(',', param_value_pos);
    if (param_value_end == std::string::npos)
        param_value_end = header_value.size();

    auto timeout_value_substr = header_value.substr(param_value_pos, param_value_end - param_value_pos);
    if (timeout_value_substr.empty())
        return -1;

    int value = 0;
    auto [ptr, ec] = std::from_chars(timeout_value_substr.begin(), timeout_value_substr.end(), value);

    if (ec == std::errc())
        return value;

    return -1;
}


int HTTPMessage::getKeepAliveTimeout() const
{
    const std::string& ka_header = get(HTTPMessage::CONNECTION_KEEP_ALIVE, HTTPMessage::EMPTY);
    static const std::string_view timeout_param = "timeout=";
    return parseFromHeaderValues(ka_header, timeout_param);
}


int HTTPMessage::getKeepAliveMaxRequests() const
{
    const std::string& ka_header = get(HTTPMessage::CONNECTION_KEEP_ALIVE, HTTPMessage::EMPTY);
    static const std::string_view timeout_param = "max=";
    return parseFromHeaderValues(ka_header, timeout_param);
}

} } // namespace Poco::Net
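The helpers above read and write the standard `Keep-Alive` header; a small sketch of the round trip, assuming the `HTTPMessage` additions from this diff (the values are illustrative):

```cpp
#include <Poco/Net/HTTPRequest.h>
#include <iostream>

int main()
{
    Poco::Net::HTTPRequest request(Poco::Net::HTTPRequest::HTTP_GET, "/");

    // Adds a header of the form: Keep-Alive: timeout=10, max=1000
    request.setKeepAliveTimeout(10, 1000);

    // parseFromHeaderValues() scans the header value for "timeout=" / "max="
    // and returns -1 when the parameter is absent or malformed.
    std::cout << request.getKeepAliveTimeout()     << '\n'   // 10
              << request.getKeepAliveMaxRequests() << '\n';  // 1000
    return 0;
}
```

On the client side, `receiveResponse()` in the HTTPClientSession changes above then clamps the session's own keep-alive timeout and request budget to any `timeout=` / `max=` values the server returns, so the effective limits are the minimum of what the two sides advertise.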
@@ -88,7 +88,18 @@ void HTTPServerConnection::run()

            pHandler->handleRequest(request, response);
            session.setKeepAlive(_pParams->getKeepAlive() && response.getKeepAlive() && session.canKeepAlive());
        }

        /// all that fuzz is all about to make session close with less timeout than 15s (set in HTTPServerParams c-tor)
        if (_pParams->getKeepAlive() && response.getKeepAlive() && session.canKeepAlive())
        {
            int value = response.getKeepAliveTimeout();
            if (value < 0)
                value = request.getKeepAliveTimeout();
            if (value > 0)
                session.setKeepAliveTimeout(Poco::Timespan(value, 0));
        }

    }
    else sendErrorResponse(session, HTTPResponse::HTTP_NOT_IMPLEMENTED);
}
catch (Poco::Exception&)
@@ -33,6 +33,12 @@ HTTPServerSession::~HTTPServerSession()
{
}

void HTTPServerSession::setKeepAliveTimeout(Poco::Timespan keepAliveTimeout)
{
    _keepAliveTimeout = keepAliveTimeout;
}



bool HTTPServerSession::hasMoreRequests()
{
@@ -2,11 +2,11 @@

# NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
SET(VERSION_REVISION 54484)
SET(VERSION_REVISION 54485)
SET(VERSION_MAJOR 24)
SET(VERSION_MINOR 3)
SET(VERSION_MINOR 4)
SET(VERSION_PATCH 1)
SET(VERSION_GITHASH 891689a41506d00aa169548f5b4a8774351242c4)
SET(VERSION_DESCRIBE v24.3.1.1-testing)
SET(VERSION_STRING 24.3.1.1)
SET(VERSION_GITHASH 2c5c589a882ceec35439650337b92db3e76f0081)
SET(VERSION_DESCRIBE v24.4.1.1-testing)
SET(VERSION_STRING 24.4.1.1)
# end of autochange
contrib/NuRaft (vendored, 2 changed lines)
@@ -1 +1 @@
Subproject commit 4a12f99dfc9d47c687ff7700b927cc76856225d1
Subproject commit cb5dc3c906e80f253e9ce9535807caef827cc2e0
contrib/arrow (vendored, 2 changed lines)
@@ -1 +1 @@
Subproject commit ba5c67934e8274d649befcffab56731632dc5253
Subproject commit 8f36d71d18587f1f315ec832f424183cb6519cbb
@@ -59,12 +59,3 @@ target_link_libraries (_avrocpp PRIVATE boost::headers_only boost::iostreams)
target_compile_definitions (_avrocpp PUBLIC SNAPPY_CODEC_AVAILABLE)
target_include_directories (_avrocpp PRIVATE ${SNAPPY_INCLUDE_DIR})
target_link_libraries (_avrocpp PRIVATE ch_contrib::snappy)

# create a symlink to include headers with <avro/...>
set(AVRO_INCLUDE_DIR "${CMAKE_CURRENT_BINARY_DIR}/include")
ADD_CUSTOM_TARGET(avro_symlink_headers ALL
    COMMAND ${CMAKE_COMMAND} -E make_directory "${AVRO_INCLUDE_DIR}"
    COMMAND ${CMAKE_COMMAND} -E create_symlink "${AVROCPP_ROOT_DIR}/api" "${AVRO_INCLUDE_DIR}/avro"
)
add_dependencies(_avrocpp avro_symlink_headers)
target_include_directories(_avrocpp SYSTEM BEFORE PUBLIC "${AVRO_INCLUDE_DIR}")
@@ -1,26 +1,18 @@
option (ENABLE_SSH "Enable support for SSH keys and protocol" ${ENABLE_LIBRARIES})
option (ENABLE_SSH "Enable support for libssh" ${ENABLE_LIBRARIES})

if (NOT ENABLE_SSH)
    message(STATUS "Not using SSH")
    message(STATUS "Not using libssh")
    return()
endif()

# CMake variables needed by libssh_version.h.cmake, update them when you update libssh
set(libssh_VERSION_MAJOR 0)
set(libssh_VERSION_MINOR 9)
set(libssh_VERSION_PATCH 8)

set(LIB_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libssh")
set(LIB_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/libssh")

# Set CMake variables which are used in libssh_version.h.cmake
project(libssh VERSION 0.9.8 LANGUAGES C)

set(LIBRARY_VERSION "4.8.8")
set(LIBRARY_SOVERSION "4")

set(CMAKE_THREAD_PREFER_PTHREADS ON)
set(THREADS_PREFER_PTHREAD_FLAG ON)

set(WITH_ZLIB OFF)
set(WITH_SYMBOL_VERSIONING OFF)
set(WITH_SERVER ON)

set(libssh_SRCS
    ${LIB_SOURCE_DIR}/src/agent.c
    ${LIB_SOURCE_DIR}/src/auth.c
@@ -28,15 +20,21 @@ set(libssh_SRCS
    ${LIB_SOURCE_DIR}/src/bignum.c
    ${LIB_SOURCE_DIR}/src/buffer.c
    ${LIB_SOURCE_DIR}/src/callbacks.c
    ${LIB_SOURCE_DIR}/src/chachapoly.c
    ${LIB_SOURCE_DIR}/src/channels.c
    ${LIB_SOURCE_DIR}/src/client.c
    ${LIB_SOURCE_DIR}/src/config.c
    ${LIB_SOURCE_DIR}/src/config_parser.c
    ${LIB_SOURCE_DIR}/src/connect.c
    ${LIB_SOURCE_DIR}/src/connector.c
    ${LIB_SOURCE_DIR}/src/curve25519.c
    ${LIB_SOURCE_DIR}/src/dh.c
    ${LIB_SOURCE_DIR}/src/ecdh.c
    ${LIB_SOURCE_DIR}/src/error.c
    ${LIB_SOURCE_DIR}/src/external/bcrypt_pbkdf.c
    ${LIB_SOURCE_DIR}/src/external/blowfish.c
    ${LIB_SOURCE_DIR}/src/external/chacha.c
    ${LIB_SOURCE_DIR}/src/external/poly1305.c
    ${LIB_SOURCE_DIR}/src/getpass.c
    ${LIB_SOURCE_DIR}/src/init.c
    ${LIB_SOURCE_DIR}/src/kdf.c
@@ -55,37 +53,32 @@ set(libssh_SRCS
    ${LIB_SOURCE_DIR}/src/pcap.c
    ${LIB_SOURCE_DIR}/src/pki.c
    ${LIB_SOURCE_DIR}/src/pki_container_openssh.c
    ${LIB_SOURCE_DIR}/src/pki_ed25519_common.c
    ${LIB_SOURCE_DIR}/src/poll.c
    ${LIB_SOURCE_DIR}/src/session.c
    ${LIB_SOURCE_DIR}/src/scp.c
    ${LIB_SOURCE_DIR}/src/session.c
    ${LIB_SOURCE_DIR}/src/socket.c
    ${LIB_SOURCE_DIR}/src/string.c
    ${LIB_SOURCE_DIR}/src/threads.c
    ${LIB_SOURCE_DIR}/src/wrapper.c
    ${LIB_SOURCE_DIR}/src/external/bcrypt_pbkdf.c
    ${LIB_SOURCE_DIR}/src/external/blowfish.c
    ${LIB_SOURCE_DIR}/src/external/chacha.c
    ${LIB_SOURCE_DIR}/src/external/poly1305.c
    ${LIB_SOURCE_DIR}/src/chachapoly.c
    ${LIB_SOURCE_DIR}/src/config_parser.c
    ${LIB_SOURCE_DIR}/src/token.c
    ${LIB_SOURCE_DIR}/src/pki_ed25519_common.c
    ${LIB_SOURCE_DIR}/src/wrapper.c
    # some files of libssh/src/ are missing - why?

    ${LIB_SOURCE_DIR}/src/threads/noop.c
    ${LIB_SOURCE_DIR}/src/threads/pthread.c
    # files missing - why?

    # LIBCRYPT specific
    ${libssh_SRCS}
    ${LIB_SOURCE_DIR}/src/threads/libcrypto.c
    ${LIB_SOURCE_DIR}/src/pki_crypto.c
    ${LIB_SOURCE_DIR}/src/dh_crypto.c
    ${LIB_SOURCE_DIR}/src/ecdh_crypto.c
    ${LIB_SOURCE_DIR}/src/libcrypto.c
    ${LIB_SOURCE_DIR}/src/dh_crypto.c
    ${LIB_SOURCE_DIR}/src/pki_crypto.c
    ${LIB_SOURCE_DIR}/src/threads/libcrypto.c

    ${LIB_SOURCE_DIR}/src/options.c
    ${LIB_SOURCE_DIR}/src/server.c
    ${LIB_SOURCE_DIR}/src/bind.c
    ${LIB_SOURCE_DIR}/src/bind_config.c
    ${LIB_SOURCE_DIR}/src/options.c
    ${LIB_SOURCE_DIR}/src/server.c
)

if (NOT (ENABLE_OPENSSL OR ENABLE_OPENSSL_DYNAMIC))
@@ -94,7 +87,7 @@ endif()

configure_file(${LIB_SOURCE_DIR}/include/libssh/libssh_version.h.cmake ${LIB_BINARY_DIR}/include/libssh/libssh_version.h @ONLY)

add_library(_ssh STATIC ${libssh_SRCS})
add_library(_ssh ${libssh_SRCS})
add_library(ch_contrib::ssh ALIAS _ssh)

target_link_libraries(_ssh PRIVATE OpenSSL::Crypto)
@@ -32,6 +32,7 @@ set(SRCS
    "${LIBRARY_DIR}/src/handle_custom_notification.cxx"
    "${LIBRARY_DIR}/src/handle_vote.cxx"
    "${LIBRARY_DIR}/src/launcher.cxx"
    "${LIBRARY_DIR}/src/log_entry.cxx"
    "${LIBRARY_DIR}/src/srv_config.cxx"
    "${LIBRARY_DIR}/src/snapshot_sync_req.cxx"
    "${LIBRARY_DIR}/src/snapshot_sync_ctx.cxx"
@@ -50,6 +51,12 @@ else()
    target_compile_definitions(_nuraft PRIVATE USE_BOOST_ASIO=1 BOOST_ASIO_STANDALONE=1)
endif()

target_link_libraries (_nuraft PRIVATE clickhouse_common_io)
# We must have it PUBLIC here because some headers which depend on it directly
# included in clickhouse
target_compile_definitions(_nuraft PUBLIC USE_CLICKHOUSE_THREADS=1)
MESSAGE(STATUS "Will use clickhouse threads for NuRaft")

target_include_directories (_nuraft SYSTEM PRIVATE "${LIBRARY_DIR}/include/libnuraft")
# for some reason include "asio.h" directly without "boost/" prefix.
target_include_directories (_nuraft SYSTEM PRIVATE "${ClickHouse_SOURCE_DIR}/contrib/boost/boost")
|
@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
|
||||
# lts / testing / prestable / etc
|
||||
ARG REPO_CHANNEL="stable"
|
||||
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
|
||||
ARG VERSION="24.2.2.71"
|
||||
ARG VERSION="24.3.2.23"
|
||||
ARG PACKAGES="clickhouse-keeper"
|
||||
ARG DIRECT_DOWNLOAD_URLS=""
|
||||
|
||||
@ -44,7 +44,10 @@ ARG DIRECT_DOWNLOAD_URLS=""
|
||||
# We do that in advance at the begining of Dockerfile before any packages will be
|
||||
# installed to prevent picking those uid / gid by some unrelated software.
|
||||
# The same uid / gid (101) is used both for alpine and ubuntu.
|
||||
|
||||
ARG DEFAULT_UID="101"
|
||||
ARG DEFAULT_GID="101"
|
||||
RUN addgroup -S -g "${DEFAULT_GID}" clickhouse && \
|
||||
adduser -S -h "/var/lib/clickhouse" -s /bin/bash -G clickhouse -g "ClickHouse keeper" -u "${DEFAULT_UID}" clickhouse
|
||||
|
||||
ARG TARGETARCH
|
||||
RUN arch=${TARGETARCH:-amd64} \
|
||||
@ -71,20 +74,21 @@ RUN arch=${TARGETARCH:-amd64} \
|
||||
fi \
|
||||
; done \
|
||||
&& rm /tmp/*.tgz /install -r \
|
||||
&& addgroup -S -g 101 clickhouse \
|
||||
&& adduser -S -h /var/lib/clickhouse -s /bin/bash -G clickhouse -g "ClickHouse keeper" -u 101 clickhouse \
|
||||
&& mkdir -p /var/lib/clickhouse /var/log/clickhouse-keeper /etc/clickhouse-keeper \
|
||||
&& chown clickhouse:clickhouse /var/lib/clickhouse \
|
||||
&& chown root:clickhouse /var/log/clickhouse-keeper \
|
||||
&& chmod +x /entrypoint.sh \
|
||||
&& apk add --no-cache su-exec bash tzdata \
|
||||
&& cp /usr/share/zoneinfo/UTC /etc/localtime \
|
||||
&& echo "UTC" > /etc/timezone \
|
||||
&& chmod ugo+Xrw -R /var/lib/clickhouse /var/log/clickhouse-keeper /etc/clickhouse-keeper
|
||||
&& echo "UTC" > /etc/timezone
|
||||
|
||||
ARG DEFAULT_CONFIG_DIR="/etc/clickhouse-keeper"
|
||||
ARG DEFAULT_DATA_DIR="/var/lib/clickhouse-keeper"
|
||||
ARG DEFAULT_LOG_DIR="/var/log/clickhouse-keeper"
|
||||
RUN mkdir -p "${DEFAULT_DATA_DIR}" "${DEFAULT_LOG_DIR}" "${DEFAULT_CONFIG_DIR}" \
|
||||
&& chown clickhouse:clickhouse "${DEFAULT_DATA_DIR}" \
|
||||
&& chown root:clickhouse "${DEFAULT_LOG_DIR}" \
|
||||
&& chmod ugo+Xrw -R "${DEFAULT_DATA_DIR}" "${DEFAULT_LOG_DIR}" "${DEFAULT_CONFIG_DIR}"
|
||||
|
||||
# /var/lib/clickhouse is necessary due to the current default configuration for Keeper
|
||||
VOLUME "${DEFAULT_DATA_DIR}" /var/lib/clickhouse
|
||||
EXPOSE 2181 10181 44444 9181
|
||||
|
||||
VOLUME /var/lib/clickhouse /var/log/clickhouse-keeper /etc/clickhouse-keeper
|
||||
|
||||
ENTRYPOINT ["/entrypoint.sh"]
|
||||
|
@ -80,7 +80,7 @@ if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
|
||||
# so the container can't be finished by ctrl+c
|
||||
export CLICKHOUSE_WATCHDOG_ENABLE
|
||||
|
||||
cd /var/lib/clickhouse
|
||||
cd "${DATA_DIR}"
|
||||
|
||||
# There is a config file. It is already tested with gosu (if it is readably by keeper user)
|
||||
if [ -f "$KEEPER_CONFIG" ]; then
|
||||
|
@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
|
||||
# lts / testing / prestable / etc
|
||||
ARG REPO_CHANNEL="stable"
|
||||
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
|
||||
ARG VERSION="24.2.2.71"
|
||||
ARG VERSION="24.3.2.23"
|
||||
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
|
||||
ARG DIRECT_DOWNLOAD_URLS=""
|
||||
|
||||
@ -42,6 +42,10 @@ ARG DIRECT_DOWNLOAD_URLS=""
|
||||
# We do that in advance at the begining of Dockerfile before any packages will be
|
||||
# installed to prevent picking those uid / gid by some unrelated software.
|
||||
# The same uid / gid (101) is used both for alpine and ubuntu.
|
||||
ARG DEFAULT_UID="101"
|
||||
ARG DEFAULT_GID="101"
|
||||
RUN addgroup -S -g "${DEFAULT_GID}" clickhouse && \
|
||||
adduser -S -h "/var/lib/clickhouse" -s /bin/bash -G clickhouse -g "ClickHouse server" -u "${DEFAULT_UID}" clickhouse
|
||||
|
||||
RUN arch=${TARGETARCH:-amd64} \
|
||||
&& cd /tmp \
|
||||
@ -66,23 +70,30 @@ RUN arch=${TARGETARCH:-amd64} \
|
||||
fi \
|
||||
; done \
|
||||
&& rm /tmp/*.tgz /install -r \
|
||||
&& addgroup -S -g 101 clickhouse \
|
||||
&& adduser -S -h /var/lib/clickhouse -s /bin/bash -G clickhouse -g "ClickHouse server" -u 101 clickhouse \
|
||||
&& mkdir -p /var/lib/clickhouse /var/log/clickhouse-server /etc/clickhouse-server/config.d /etc/clickhouse-server/users.d /etc/clickhouse-client /docker-entrypoint-initdb.d \
|
||||
&& chown clickhouse:clickhouse /var/lib/clickhouse \
|
||||
&& chown root:clickhouse /var/log/clickhouse-server \
|
||||
&& chmod +x /entrypoint.sh \
|
||||
&& apk add --no-cache bash tzdata \
|
||||
&& cp /usr/share/zoneinfo/UTC /etc/localtime \
|
||||
&& echo "UTC" > /etc/timezone \
|
||||
&& chmod ugo+Xrw -R /var/lib/clickhouse /var/log/clickhouse-server /etc/clickhouse-server /etc/clickhouse-client
|
||||
&& echo "UTC" > /etc/timezone
|
||||
|
||||
# we need to allow "others" access to clickhouse folder, because docker container
|
||||
# can be started with arbitrary uid (openshift usecase)
|
||||
ARG DEFAULT_CLIENT_CONFIG_DIR="/etc/clickhouse-client"
|
||||
ARG DEFAULT_SERVER_CONFIG_DIR="/etc/clickhouse-server"
|
||||
ARG DEFAULT_DATA_DIR="/var/lib/clickhouse"
|
||||
ARG DEFAULT_LOG_DIR="/var/log/clickhouse-server"
|
||||
|
||||
# we need to allow "others" access to ClickHouse folders, because docker containers
|
||||
# can be started with arbitrary uids (OpenShift usecase)
|
||||
RUN mkdir -p \
|
||||
"${DEFAULT_DATA_DIR}" \
|
||||
"${DEFAULT_LOG_DIR}" \
|
||||
"${DEFAULT_CLIENT_CONFIG_DIR}" \
|
||||
"${DEFAULT_SERVER_CONFIG_DIR}/config.d" \
|
||||
"${DEFAULT_SERVER_CONFIG_DIR}/users.d" \
|
||||
/docker-entrypoint-initdb.d \
|
||||
&& chown clickhouse:clickhouse "${DEFAULT_DATA_DIR}" \
|
||||
&& chown root:clickhouse "${DEFAULT_LOG_DIR}" \
|
||||
&& chmod ugo+Xrw -R "${DEFAULT_DATA_DIR}" "${DEFAULT_LOG_DIR}" "${DEFAULT_CLIENT_CONFIG_DIR}" "${DEFAULT_SERVER_CONFIG_DIR}"
|
||||
|
||||
VOLUME "${DEFAULT_DATA_DIR}"
|
||||
EXPOSE 9000 8123 9009
|
||||
|
||||
VOLUME /var/lib/clickhouse \
|
||||
/var/log/clickhouse-server
|
||||
|
||||
ENTRYPOINT ["/entrypoint.sh"]
|
||||
|
@ -27,7 +27,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
|
||||
|
||||
ARG REPO_CHANNEL="stable"
|
||||
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
|
||||
ARG VERSION="24.2.2.71"
|
||||
ARG VERSION="24.3.2.23"
|
||||
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
|
||||
|
||||
# set non-empty deb_location_url url to create a docker image
|
||||
|
@ -138,7 +138,7 @@ ENGINE = MergeTree
|
||||
PARTITION BY toYYYYMM(EventDate)
|
||||
ORDER BY (CounterID, EventDate, intHash32(UserID))
|
||||
SAMPLE BY intHash32(UserID)
|
||||
SETTINGS disk = disk(type = cache, path = '/var/lib/clickhouse/filesystem_caches/', max_size = '4G',
|
||||
SETTINGS disk = disk(type = cache, path = '/var/lib/clickhouse/filesystem_caches/stateful/', max_size = '4G',
|
||||
disk = disk(type = web, endpoint = 'https://clickhouse-datasets-web.s3.us-east-1.amazonaws.com/'));
|
||||
|
||||
ATTACH TABLE datasets.visits_v1 UUID '5131f834-711f-4168-98a5-968b691a104b'
|
||||
@ -329,5 +329,5 @@ ENGINE = CollapsingMergeTree(Sign)
|
||||
PARTITION BY toYYYYMM(StartDate)
|
||||
ORDER BY (CounterID, StartDate, intHash32(UserID), VisitID)
|
||||
SAMPLE BY intHash32(UserID)
|
||||
SETTINGS disk = disk(type = cache, path = '/var/lib/clickhouse/filesystem_caches/', max_size = '4G',
|
||||
SETTINGS disk = disk(type = cache, path = '/var/lib/clickhouse/filesystem_caches/stateful/', max_size = '4G',
|
||||
disk = disk(type = web, endpoint = 'https://clickhouse-datasets-web.s3.us-east-1.amazonaws.com/'));
|
||||
|
@ -25,7 +25,7 @@ azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
|
||||
config_logs_export_cluster /etc/clickhouse-server/config.d/system_logs_export.yaml
|
||||
|
||||
cache_policy=""
|
||||
if [ $(( $(date +%-d) % 2 )) -eq 1 ]; then
|
||||
if [ $(($RANDOM%2)) -eq 1 ]; then
|
||||
cache_policy="SLRU"
|
||||
else
|
||||
cache_policy="LRU"
|
||||
|
@ -16,6 +16,8 @@ ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone
|
||||
|
||||
dpkg -i package_folder/clickhouse-common-static_*.deb
|
||||
dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
|
||||
dpkg -i package_folder/clickhouse-odbc-bridge_*.deb
|
||||
dpkg -i package_folder/clickhouse-library-bridge_*.deb
|
||||
dpkg -i package_folder/clickhouse-server_*.deb
|
||||
dpkg -i package_folder/clickhouse-client_*.deb
|
||||
|
||||
@ -41,6 +43,8 @@ source /utils.lib
|
||||
|
||||
if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
|
||||
echo "Azure is disabled"
|
||||
elif [[ -n "$USE_SHARED_CATALOG" ]] && [[ "$USE_SHARED_CATALOG" -eq 1 ]]; then
|
||||
echo "Azure is disabled"
|
||||
else
|
||||
azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
|
||||
fi
|
||||
@ -137,6 +141,32 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
|
||||
MAX_RUN_TIME=$((MAX_RUN_TIME != 0 ? MAX_RUN_TIME : 9000)) # set to 2.5 hours if 0 (unlimited)
|
||||
fi
|
||||
|
||||
if [[ -n "$USE_SHARED_CATALOG" ]] && [[ "$USE_SHARED_CATALOG" -eq 1 ]]; then
|
||||
sudo cat /etc/clickhouse-server1/config.d/filesystem_caches_path.xml \
|
||||
| sed "s|<filesystem_caches_path>/var/lib/clickhouse/filesystem_caches/</filesystem_caches_path>|<filesystem_caches_path>/var/lib/clickhouse/filesystem_caches_1/</filesystem_caches_path>|" \
|
||||
> /etc/clickhouse-server1/config.d/filesystem_caches_path.xml.tmp
|
||||
mv /etc/clickhouse-server1/config.d/filesystem_caches_path.xml.tmp /etc/clickhouse-server1/config.d/filesystem_caches_path.xml
|
||||
|
||||
sudo cat /etc/clickhouse-server1/config.d/filesystem_caches_path.xml \
|
||||
| sed "s|<custom_cached_disks_base_directory replace=\"replace\">/var/lib/clickhouse/filesystem_caches/</custom_cached_disks_base_directory>|<custom_cached_disks_base_directory replace=\"replace\">/var/lib/clickhouse/filesystem_caches_1/</custom_cached_disks_base_directory>|" \
|
||||
> /etc/clickhouse-server1/config.d/filesystem_caches_path.xml.tmp
|
||||
mv /etc/clickhouse-server1/config.d/filesystem_caches_path.xml.tmp /etc/clickhouse-server1/config.d/filesystem_caches_path.xml
|
||||
|
||||
mkdir -p /var/run/clickhouse-server1
|
||||
sudo chown clickhouse:clickhouse /var/run/clickhouse-server1
|
||||
sudo -E -u clickhouse /usr/bin/clickhouse server --config /etc/clickhouse-server1/config.xml --daemon \
|
||||
--pid-file /var/run/clickhouse-server1/clickhouse-server.pid \
|
||||
-- --path /var/lib/clickhouse1/ --logger.stderr /var/log/clickhouse-server/stderr1.log \
|
||||
--logger.log /var/log/clickhouse-server/clickhouse-server1.log --logger.errorlog /var/log/clickhouse-server/clickhouse-server1.err.log \
|
||||
--tcp_port 19000 --tcp_port_secure 19440 --http_port 18123 --https_port 18443 --interserver_http_port 19009 --tcp_with_proxy_port 19010 \
|
||||
--mysql_port 19004 --postgresql_port 19005 \
|
||||
--keeper_server.tcp_port 19181 --keeper_server.server_id 2 \
|
||||
--prometheus.port 19988 \
|
||||
--macros.replica r2 # It doesn't work :(
|
||||
|
||||
MAX_RUN_TIME=$((MAX_RUN_TIME < 9000 ? MAX_RUN_TIME : 9000)) # min(MAX_RUN_TIME, 2.5 hours)
|
||||
MAX_RUN_TIME=$((MAX_RUN_TIME != 0 ? MAX_RUN_TIME : 9000)) # set to 2.5 hours if 0 (unlimited)
|
||||
fi
|
||||
|
||||
# Wait for the server to start, but not for too long.
|
||||
for _ in {1..100}
|
||||
@ -183,6 +213,10 @@ function run_tests()
|
||||
ADDITIONAL_OPTIONS+=('--s3-storage')
|
||||
fi
|
||||
|
||||
if [[ -n "$USE_SHARED_CATALOG" ]] && [[ "$USE_SHARED_CATALOG" -eq 1 ]]; then
|
||||
ADDITIONAL_OPTIONS+=('--shared-catalog')
|
||||
fi
|
||||
|
||||
if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
|
||||
ADDITIONAL_OPTIONS+=('--replicated-database')
|
||||
# Too many tests fail for DatabaseReplicated in parallel.
|
||||
@ -257,10 +291,16 @@ do
|
||||
echo "$err"
|
||||
[[ "0" != "${#err}" ]] && failed_to_save_logs=1
|
||||
if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
|
||||
err=$( { clickhouse-client -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.1.tsv.zst; } 2>&1 )
|
||||
err=$( { clickhouse-client --port 19000 -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.1.tsv.zst; } 2>&1 )
|
||||
echo "$err"
|
||||
[[ "0" != "${#err}" ]] && failed_to_save_logs=1
|
||||
err=$( { clickhouse-client -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.2.tsv.zst; } 2>&1 )
|
||||
err=$( { clickhouse-client --port 29000 -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.2.tsv.zst; } 2>&1 )
|
||||
echo "$err"
|
||||
[[ "0" != "${#err}" ]] && failed_to_save_logs=1
|
||||
fi
|
||||
|
||||
if [[ -n "$USE_SHARED_CATALOG" ]] && [[ "$USE_SHARED_CATALOG" -eq 1 ]]; then
|
||||
err=$( { clickhouse-client --port 19000 -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.1.tsv.zst; } 2>&1 )
|
||||
echo "$err"
|
||||
[[ "0" != "${#err}" ]] && failed_to_save_logs=1
|
||||
fi
|
||||
@ -275,6 +315,10 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
|
||||
sudo clickhouse stop --pid-path /var/run/clickhouse-server2 ||:
|
||||
fi
|
||||
|
||||
if [[ -n "$USE_SHARED_CATALOG" ]] && [[ "$USE_SHARED_CATALOG" -eq 1 ]]; then
|
||||
sudo clickhouse stop --pid-path /var/run/clickhouse-server1 ||:
|
||||
fi
|
||||
|
||||
rg -Fa "<Fatal>" /var/log/clickhouse-server/clickhouse-server.log ||:
|
||||
rg -A50 -Fa "============" /var/log/clickhouse-server/stderr.log ||:
|
||||
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.zst &
|
||||
@ -302,6 +346,10 @@ if [ $failed_to_save_logs -ne 0 ]; then
|
||||
clickhouse-local --path /var/lib/clickhouse1/ --only-system-tables --stacktrace -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.1.tsv.zst ||:
|
||||
clickhouse-local --path /var/lib/clickhouse2/ --only-system-tables --stacktrace -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.2.tsv.zst ||:
|
||||
fi
|
||||
|
||||
if [[ -n "$USE_SHARED_CATALOG" ]] && [[ "$USE_SHARED_CATALOG" -eq 1 ]]; then
|
||||
clickhouse-local --path /var/lib/clickhouse1/ --only-system-tables --stacktrace -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.1.tsv.zst ||:
|
||||
fi
|
||||
done
|
||||
fi
|
||||
|
||||
@ -341,3 +389,10 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
|
||||
tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||:
|
||||
tar -chf /test_output/coordination2.tar /var/lib/clickhouse2/coordination ||:
|
||||
fi
|
||||
|
||||
if [[ -n "$USE_SHARED_CATALOG" ]] && [[ "$USE_SHARED_CATALOG" -eq 1 ]]; then
|
||||
rg -Fa "<Fatal>" /var/log/clickhouse-server/clickhouse-server1.log ||:
|
||||
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.zst ||:
|
||||
mv /var/log/clickhouse-server/stderr1.log /test_output/ ||:
|
||||
tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||:
|
||||
fi
|
||||
|
@ -72,7 +72,7 @@ mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/c
|
||||
|
||||
# Randomize cache policies.
|
||||
cache_policy=""
|
||||
if [ $(( $(date +%-d) % 2 )) -eq 1 ]; then
|
||||
if [ $(($RANDOM%2)) -eq 1 ]; then
|
||||
cache_policy="SLRU"
|
||||
else
|
||||
cache_policy="LRU"
|
||||
@ -87,6 +87,25 @@ if [ "$cache_policy" = "SLRU" ]; then
|
||||
mv /etc/clickhouse-server/config.d/storage_conf.xml.tmp /etc/clickhouse-server/config.d/storage_conf.xml
|
||||
fi
|
||||
|
||||
# Disable experimental WINDOW VIEW tests for stress tests, since they may be
|
||||
# created with old analyzer and then, after server restart it will refuse to
|
||||
# start.
|
||||
# FIXME: remove once the support for WINDOW VIEW will be implemented in analyzer.
|
||||
sudo cat /etc/clickhouse-server/users.d/stress_tests_overrides.xml <<EOL
|
||||
<clickhouse>
|
||||
<profiles>
|
||||
<default>
|
||||
<allow_experimental_window_view>false</allow_experimental_window_view>
|
||||
<constraints>
|
||||
<allow_experimental_window_view>
|
||||
<readonly/>
|
||||
</allow_experimental_window_view>
|
||||
</constraints>
|
||||
</default>
|
||||
</profiles>
|
||||
</clickhouse>
|
||||
EOL
|
||||
|
||||
start_server
|
||||
|
||||
clickhouse-client --query "SHOW TABLES FROM datasets"
|
||||
|
docs/changelogs/v24.3.1.2672-lts.md (new file, 537 lines)
@ -0,0 +1,537 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.3.1.2672-lts (2c5c589a882) FIXME as compared to v24.2.1.2248-stable (891689a4150)
#### Backward Incompatible Change
* Don't allow to set max_parallel_replicas to 0 as it doesn't make sense. Setting it to 0 could lead to unexpected logical errors. Closes [#60140](https://github.com/ClickHouse/ClickHouse/issues/60140). [#60430](https://github.com/ClickHouse/ClickHouse/pull/60430) ([Kruglov Pavel](https://github.com/Avogar)).
* Change the column name from `duration_ms` to `duration_microseconds` in the `system.zookeeper` table to reflect the reality that the duration is in the microsecond resolution. [#60774](https://github.com/ClickHouse/ClickHouse/pull/60774) ([Duc Canh Le](https://github.com/canhld94)).
* Reject incoming INSERT queries in case when query-level settings `async_insert` and `deduplicate_blocks_in_dependent_materialized_views` are enabled together. This behaviour is controlled by a setting `throw_if_deduplication_in_dependent_materialized_views_enabled_with_async_insert` and enabled by default. This is a continuation of https://github.com/ClickHouse/ClickHouse/pull/59699 needed to unblock https://github.com/ClickHouse/ClickHouse/pull/59915. [#60888](https://github.com/ClickHouse/ClickHouse/pull/60888) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Utility `clickhouse-copier` is moved to a separate repository on GitHub: https://github.com/ClickHouse/copier. It is no longer included in the bundle but is still available as a separate download. This closes: [#60734](https://github.com/ClickHouse/ClickHouse/issues/60734) This closes: [#60540](https://github.com/ClickHouse/ClickHouse/issues/60540) This closes: [#60250](https://github.com/ClickHouse/ClickHouse/issues/60250) This closes: [#52917](https://github.com/ClickHouse/ClickHouse/issues/52917) This closes: [#51140](https://github.com/ClickHouse/ClickHouse/issues/51140) This closes: [#47517](https://github.com/ClickHouse/ClickHouse/issues/47517) This closes: [#47189](https://github.com/ClickHouse/ClickHouse/issues/47189) This closes: [#46598](https://github.com/ClickHouse/ClickHouse/issues/46598) This closes: [#40257](https://github.com/ClickHouse/ClickHouse/issues/40257) This closes: [#36504](https://github.com/ClickHouse/ClickHouse/issues/36504) This closes: [#35485](https://github.com/ClickHouse/ClickHouse/issues/35485) This closes: [#33702](https://github.com/ClickHouse/ClickHouse/issues/33702) This closes: [#26702](https://github.com/ClickHouse/ClickHouse/issues/26702) ### Documentation entry for user-facing changes. [#61058](https://github.com/ClickHouse/ClickHouse/pull/61058) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* To increase compatibility with MySQL, function `locate` now accepts arguments `(needle, haystack[, start_pos])` by default. The previous behavior `(haystack, needle, [, start_pos])` can be restored by setting `function_locate_has_mysql_compatible_argument_order = 0`. [#61092](https://github.com/ClickHouse/ClickHouse/pull/61092) ([Robert Schulze](https://github.com/rschu1ze)).
* The obsolete in-memory data parts have been deprecated since version 23.5 and have not been supported since version 23.10. Now the remaining code is removed. Continuation of [#55186](https://github.com/ClickHouse/ClickHouse/issues/55186) and [#45409](https://github.com/ClickHouse/ClickHouse/issues/45409). It is unlikely that you have used in-memory data parts because they were available only before version 23.5 and only when you enabled them manually by specifying the corresponding SETTINGS for a MergeTree table. To check if you have in-memory data parts, run the following query: `SELECT part_type, count() FROM system.parts GROUP BY part_type ORDER BY part_type`. To disable the usage of in-memory data parts, do `ALTER TABLE ... MODIFY SETTING min_bytes_for_compact_part = DEFAULT, min_rows_for_compact_part = DEFAULT`. Before upgrading from old ClickHouse releases, first check that you don't have in-memory data parts. If there are in-memory data parts, disable them first, then wait while there are no in-memory data parts and continue the upgrade. [#61127](https://github.com/ClickHouse/ClickHouse/pull/61127) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Forbid `SimpleAggregateFunction` in `ORDER BY` of `MergeTree` tables (like `AggregateFunction` is forbidden, but they are forbidden because they are not comparable) by default (use `allow_suspicious_primary_key` to allow them). [#61399](https://github.com/ClickHouse/ClickHouse/pull/61399) ([Azat Khuzhin](https://github.com/azat)).
* ClickHouse allows arbitrary binary data in the String data type, which is typically UTF-8. Parquet/ORC/Arrow Strings only support UTF-8. That's why you can choose which Arrow's data type to use for the ClickHouse String data type - String or Binary. This is controlled by the settings, `output_format_parquet_string_as_string`, `output_format_orc_string_as_string`, `output_format_arrow_string_as_string`. While Binary would be more correct and compatible, using String by default will correspond to user expectations in most cases. Parquet/ORC/Arrow supports many compression methods, including lz4 and zstd. ClickHouse supports each and every compression method. Some inferior tools lack support for the faster `lz4` compression method, that's why we set `zstd` by default. This is controlled by the settings `output_format_parquet_compression_method`, `output_format_orc_compression_method`, and `output_format_arrow_compression_method`. We changed the default to `zstd` for Parquet and ORC, but not Arrow (it is emphasized for low-level usages). [#61817](https://github.com/ClickHouse/ClickHouse/pull/61817) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* In the new ClickHouse version, the functions `geoDistance`, `greatCircleDistance`, and `greatCircleAngle` will use 64-bit double precision floating point data type for internal calculations and return type if all the arguments are Float64. This closes [#58476](https://github.com/ClickHouse/ClickHouse/issues/58476). In previous versions, the function always used Float32. You can switch to the old behavior by setting `geo_distance_returns_float64_on_float64_arguments` to `false` or setting `compatibility` to `24.2` or earlier. [#61848](https://github.com/ClickHouse/ClickHouse/pull/61848) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### New Feature
* Topk/topkweighed support mode, which return count of values and it's error. [#54508](https://github.com/ClickHouse/ClickHouse/pull/54508) ([UnamedRus](https://github.com/UnamedRus)).
* Add generate_series as a table function. This function generates table with an arithmetic progression with natural numbers. [#59390](https://github.com/ClickHouse/ClickHouse/pull/59390) ([divanik](https://github.com/divanik)).
* Support reading and writing backups as tar archives. [#59535](https://github.com/ClickHouse/ClickHouse/pull/59535) ([josh-hildred](https://github.com/josh-hildred)).
* Implemented support for S3Express buckets. [#59965](https://github.com/ClickHouse/ClickHouse/pull/59965) ([Nikita Taranov](https://github.com/nickitat)).
* Allow to attach parts from a different disk * attach partition from the table on other disks using copy instead of hard link (such as instant table) * attach partition using copy when the hard link fails even on the same disk. [#60112](https://github.com/ClickHouse/ClickHouse/pull/60112) ([Unalian](https://github.com/Unalian)).
* Added function `toMillisecond` which returns the millisecond component for values of type`DateTime` or `DateTime64`. [#60281](https://github.com/ClickHouse/ClickHouse/pull/60281) ([Shaun Struwig](https://github.com/Blargian)).
* Make all format names case insensitive, like Tsv, or TSV, or tsv, or even rowbinary. [#60420](https://github.com/ClickHouse/ClickHouse/pull/60420) ([豪肥肥](https://github.com/HowePa)).
* Add four properties to the `StorageMemory` (memory-engine) `min_bytes_to_keep, max_bytes_to_keep, min_rows_to_keep` and `max_rows_to_keep` - Add tests to reflect new changes - Update `memory.md` documentation - Add table `context` property to `MemorySink` to enable access to table parameter bounds. [#60612](https://github.com/ClickHouse/ClickHouse/pull/60612) ([Jake Bamrah](https://github.com/JakeBamrah)).
* Added function `toMillisecond` which returns the millisecond component for values of type`DateTime` or `DateTime64`. [#60649](https://github.com/ClickHouse/ClickHouse/pull/60649) ([Robert Schulze](https://github.com/rschu1ze)).
* Separate limits on number of waiting and executing queries. Added new server setting `max_waiting_queries` that limits the number of queries waiting due to `async_load_databases`. Existing limits on number of executing queries no longer count waiting queries. [#61053](https://github.com/ClickHouse/ClickHouse/pull/61053) ([Sergei Trifonov](https://github.com/serxa)).
* Add support for `ATTACH PARTITION ALL`. [#61107](https://github.com/ClickHouse/ClickHouse/pull/61107) ([Kirill Nikiforov](https://github.com/allmazz)).
* Add a new function, `getClientHTTPHeader`. This closes [#54665](https://github.com/ClickHouse/ClickHouse/issues/54665). Co-authored with @lingtaolf. [#61820](https://github.com/ClickHouse/ClickHouse/pull/61820) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
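A sketch of the intended usage, assuming the function takes the header name as its single argument (it may additionally need to be enabled on the server side):

```sql
-- Returns the value of the given header of the current HTTP request.
SELECT getClientHTTPHeader('User-Agent');
```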
#### Performance Improvement
* Improve the performance of serialized aggregation method when involving multiple [nullable] columns. This is a general version of [#51399](https://github.com/ClickHouse/ClickHouse/issues/51399) that doesn't compromise on abstraction integrity. [#55809](https://github.com/ClickHouse/ClickHouse/pull/55809) ([Amos Bird](https://github.com/amosbird)).
* Lazily build JOIN output to improve the performance of ALL joins. [#58278](https://github.com/ClickHouse/ClickHouse/pull/58278) ([LiuNeng](https://github.com/liuneng1994)).
* Improvements to aggregate functions `argMin` / `argMax` / `any` / `anyLast` / `anyHeavy`, as well as `ORDER BY {u8/u16/u32/u64/i8/i16/i32/i64} LIMIT 1` queries. [#58640](https://github.com/ClickHouse/ClickHouse/pull/58640) ([Raúl Marín](https://github.com/Algunenano)).
* Trivial optimization of the column filter: avoid filtering columns whose underlying data type is not a number with `result_size_hint = -1`. Peak memory can be reduced to 44% of the original in some cases. [#59698](https://github.com/ClickHouse/ClickHouse/pull/59698) ([李扬](https://github.com/taiyang-li)).
* If the table's primary key contains mostly useless columns, don't keep them in memory. This is controlled by a new setting `primary_key_ratio_of_unique_prefix_values_to_skip_suffix_columns` with the value `0.9` by default, which means: for a composite primary key, if a column changes its value for at least 0.9 of all the times, the next columns after it will not be loaded. [#60255](https://github.com/ClickHouse/ClickHouse/pull/60255) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Execute the `multiIf` function column-wise when the result type's underlying type is a number. [#60384](https://github.com/ClickHouse/ClickHouse/pull/60384) ([李扬](https://github.com/taiyang-li)).
* Faster (almost 2x) mutexes (was slower due to ThreadFuzzer). [#60823](https://github.com/ClickHouse/ClickHouse/pull/60823) ([Azat Khuzhin](https://github.com/azat)).
* Move connection drain from prepare to work, and drain multiple connections in parallel. [#60845](https://github.com/ClickHouse/ClickHouse/pull/60845) ([lizhuoyu5](https://github.com/lzydmxy)).
* Optimize insertManyFrom of nullable number or nullable string. [#60846](https://github.com/ClickHouse/ClickHouse/pull/60846) ([李扬](https://github.com/taiyang-li)).
* Optimized function `dotProduct` to omit unnecessary and expensive memory copies. [#60928](https://github.com/ClickHouse/ClickHouse/pull/60928) ([Robert Schulze](https://github.com/rschu1ze)).
* Operations with the filesystem cache will suffer less from the lock contention. [#61066](https://github.com/ClickHouse/ClickHouse/pull/61066) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Optimize ColumnString::replicate and prevent memcpySmallAllowReadWriteOverflow15Impl from being optimized to built-in memcpy. Close [#61074](https://github.com/ClickHouse/ClickHouse/issues/61074). ColumnString::replicate speeds up by 2.46x on x86-64. [#61075](https://github.com/ClickHouse/ClickHouse/pull/61075) ([李扬](https://github.com/taiyang-li)).
* 30x faster printing for 256-bit integers. [#61100](https://github.com/ClickHouse/ClickHouse/pull/61100) ([Raúl Marín](https://github.com/Algunenano)).
* If a query with a syntax error contained COLUMNS matcher with a regular expression, the regular expression was compiled each time during the parser's backtracking, instead of being compiled once. This was a fundamental error. The compiled regexp was put to AST. But the letter A in AST means "abstract" which means it should not contain heavyweight objects. Parts of AST can be created and discarded during parsing, including a large number of backtracking. This leads to slowness on the parsing side and consequently allows DoS by a readonly user. But the main problem is that it prevents progress in fuzzers. [#61543](https://github.com/ClickHouse/ClickHouse/pull/61543) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add a new analyzer pass to optimize `IN` with a single value. [#61564](https://github.com/ClickHouse/ClickHouse/pull/61564) ([LiuNeng](https://github.com/liuneng1994)).
#### Improvement
* While running the MODIFY COLUMN query for materialized views, check the inner table's structure to ensure every column exists. [#47427](https://github.com/ClickHouse/ClickHouse/pull/47427) ([sunny](https://github.com/sunny19930321)).
* Added table `system.keywords` which contains all the keywords from parser. Mostly needed and will be used for better fuzzing and syntax highlighting. [#51808](https://github.com/ClickHouse/ClickHouse/pull/51808) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Ordinary database engine is deprecated. You will receive a warning in clickhouse-client if your server is using it. This closes [#52229](https://github.com/ClickHouse/ClickHouse/issues/52229). [#56942](https://github.com/ClickHouse/ClickHouse/pull/56942) ([shabroo](https://github.com/shabroo)).
* All zero copy locks related to a table have to be dropped when the table is dropped. The directory which contains these locks has to be removed also. [#57575](https://github.com/ClickHouse/ClickHouse/pull/57575) ([Sema Checherinda](https://github.com/CheSema)).
* Allow declaring enum in external table structure. [#57857](https://github.com/ClickHouse/ClickHouse/pull/57857) ([Duc Canh Le](https://github.com/canhld94)).
* Consider lightweight deleted rows when selecting parts to merge. [#58223](https://github.com/ClickHouse/ClickHouse/pull/58223) ([Zhuo Qiu](https://github.com/jewelzqiu)).
* Make HTTP/HTTPS connections reusable for all use cases, even when the response is 3xx or 4xx. [#58845](https://github.com/ClickHouse/ClickHouse/pull/58845) ([Sema Checherinda](https://github.com/CheSema)).
* Added comments for columns for more system tables. Continuation of https://github.com/ClickHouse/ClickHouse/pull/58356. [#59016](https://github.com/ClickHouse/ClickHouse/pull/59016) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Now we can use virtual columns in PREWHERE. It's worthwhile for non-const virtual columns like `_part_offset`. [#59033](https://github.com/ClickHouse/ClickHouse/pull/59033) ([Amos Bird](https://github.com/amosbird)).
* Add ability to skip read-only replicas for INSERT into Distributed engine (Controlled with `distributed_insert_skip_read_only_replicas` setting, by default OFF - backward compatible). [#59176](https://github.com/ClickHouse/ClickHouse/pull/59176) ([Azat Khuzhin](https://github.com/azat)).
* Instead of using a constant key, object storage now generates a key for determining the capability to remove objects. [#59495](https://github.com/ClickHouse/ClickHouse/pull/59495) ([Sema Checherinda](https://github.com/CheSema)).
* Add positional pread in libhdfs3. If you want to call positional read in libhdfs3, use the hdfsPread function in hdfs.h as follows. `tSize hdfsPread(hdfsFS fs, hdfsFile file, void * buffer, tSize length, tOffset position);`. [#59624](https://github.com/ClickHouse/ClickHouse/pull/59624) ([M1eyu](https://github.com/M1eyu2018)).
* Add asynchronous WriteBuffer for AzureBlobStorage similar to S3. [#59929](https://github.com/ClickHouse/ClickHouse/pull/59929) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Allow "local" as object storage type instead of "local_blob_storage". [#60165](https://github.com/ClickHouse/ClickHouse/pull/60165) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Improved overall usability of virtual columns. Now it is allowed to use virtual columns in `PREWHERE` (it's worthwhile for non-const virtual columns like `_part_offset`). Now a builtin documentation is available for virtual columns as a comment of column in `DESCRIBE` query with enabled setting `describe_include_virtual_columns`. [#60205](https://github.com/ClickHouse/ClickHouse/pull/60205) ([Anton Popov](https://github.com/CurtizJ)).
* Parallel flush of pending INSERT blocks of Distributed engine on `DETACH`/server shutdown and `SYSTEM FLUSH DISTRIBUTED` (Parallelism will work only if you have multi disk policy for table (like everything in Distributed engine right now)). [#60225](https://github.com/ClickHouse/ClickHouse/pull/60225) ([Azat Khuzhin](https://github.com/azat)).
* Fix an improper filter setting in `joinRightColumnsSwitchNullability`; resolves [#59625](https://github.com/ClickHouse/ClickHouse/issues/59625). [#60259](https://github.com/ClickHouse/ClickHouse/pull/60259) ([lgbo](https://github.com/lgbo-ustc)).
* Add a setting to force read-through cache for merges. [#60308](https://github.com/ClickHouse/ClickHouse/pull/60308) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Issue [#57598](https://github.com/ClickHouse/ClickHouse/issues/57598) mentions a behaviour difference regarding transaction handling: an issued COMMIT/ROLLBACK when no transaction is active is reported as an error, contrary to MySQL behaviour. [#60338](https://github.com/ClickHouse/ClickHouse/pull/60338) ([PapaToemmsn](https://github.com/PapaToemmsn)).
* Added `none_only_active` mode for `distributed_ddl_output_mode` setting. [#60340](https://github.com/ClickHouse/ClickHouse/pull/60340) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Allow configuring HTTP redirect handlers for clickhouse-server. For example, you can make `/` redirect to the Play UI. [#60390](https://github.com/ClickHouse/ClickHouse/pull/60390) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* The advanced dashboard has slightly better colors for multi-line graphs. [#60391](https://github.com/ClickHouse/ClickHouse/pull/60391) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Function `substring` now has a new alias `byteSlice`. [#60494](https://github.com/ClickHouse/ClickHouse/pull/60494) ([Robert Schulze](https://github.com/rschu1ze)).
* Renamed server setting `dns_cache_max_size` to `dns_cache_max_entries` to reduce ambiguity. [#60500](https://github.com/ClickHouse/ClickHouse/pull/60500) ([Kirill Nikiforov](https://github.com/allmazz)).
* `SHOW INDEX | INDEXES | INDICES | KEYS` no longer sorts by the primary key columns (which was unintuitive). [#60514](https://github.com/ClickHouse/ClickHouse/pull/60514) ([Robert Schulze](https://github.com/rschu1ze)).
* Keeper improvement: abort during startup if an invalid snapshot is detected to avoid data loss. [#60537](https://github.com/ClickHouse/ClickHouse/pull/60537) ([Antonio Andelic](https://github.com/antonio2368)).
* Added fault injection for MergeTree's splitting of read ranges into intersecting and non-intersecting ones, controlled by the `merge_tree_read_split_ranges_into_intersecting_and_non_intersecting_fault_probability` setting. [#60548](https://github.com/ClickHouse/ClickHouse/pull/60548) ([Maksim Kita](https://github.com/kitaisreal)).
* The Advanced dashboard now has controls always visible on scrolling. This allows you to add a new chart without scrolling up. [#60692](https://github.com/ClickHouse/ClickHouse/pull/60692) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* String types and Enums can be used in the same context, such as: arrays, UNION queries, conditional expressions. This closes [#60726](https://github.com/ClickHouse/ClickHouse/issues/60726). [#60727](https://github.com/ClickHouse/ClickHouse/pull/60727) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update tzdata to 2024a. [#60768](https://github.com/ClickHouse/ClickHouse/pull/60768) ([Raúl Marín](https://github.com/Algunenano)).
* Support files without format extension in Filesystem database. [#60795](https://github.com/ClickHouse/ClickHouse/pull/60795) ([Kruglov Pavel](https://github.com/Avogar)).
* Keeper improvement: support `leadership_expiry_ms` in Keeper's settings. [#60806](https://github.com/ClickHouse/ClickHouse/pull/60806) ([Brokenice0415](https://github.com/Brokenice0415)).
* Always infer exponential numbers in JSON formats regardless of the setting `input_format_try_infer_exponent_floats`. Add setting `input_format_json_use_string_type_for_ambiguous_paths_in_named_tuples_inference_from_objects` that allows to use String type for ambiguous paths instead of an exception during named Tuples inference from JSON objects. [#60808](https://github.com/ClickHouse/ClickHouse/pull/60808) ([Kruglov Pavel](https://github.com/Avogar)).
* Add support for `START TRANSACTION` syntax typically used in MySQL syntax, resolving https://github.com/ClickHouse/ClickHouse/discussions/60865. [#60886](https://github.com/ClickHouse/ClickHouse/pull/60886) ([Zach Naimon](https://github.com/ArctypeZach)).
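A sketch of the MySQL-style syntax (transactions in ClickHouse are experimental and may need to be enabled separately; the table name is hypothetical):

```sql
START TRANSACTION;
INSERT INTO t VALUES (1);
COMMIT;
```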
* Add a flag for the sort-merge join (SMJ) to treat NULL as the biggest/smallest value, so the behavior can be compatible with other SQL systems, like Apache Spark. [#60896](https://github.com/ClickHouse/ClickHouse/pull/60896) ([loudongfeng](https://github.com/loudongfeng)).
* Clickhouse version has been added to docker labels. Closes [#54224](https://github.com/ClickHouse/ClickHouse/issues/54224). [#60949](https://github.com/ClickHouse/ClickHouse/pull/60949) ([Nikolay Monkov](https://github.com/nikmonkov)).
* Add a setting `parallel_replicas_allow_in_with_subquery = 1` which allows subqueries for IN to work with parallel replicas. [#60950](https://github.com/ClickHouse/ClickHouse/pull/60950) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* DNSResolver shuffles set of resolved IPs. [#60965](https://github.com/ClickHouse/ClickHouse/pull/60965) ([Sema Checherinda](https://github.com/CheSema)).
* Support detecting the output format by file extension in `clickhouse-client` and `clickhouse-local`. [#61036](https://github.com/ClickHouse/ClickHouse/pull/61036) ([豪肥肥](https://github.com/HowePa)).
* Check memory limit update periodically. [#61049](https://github.com/ClickHouse/ClickHouse/pull/61049) ([Han Fei](https://github.com/hanfei1991)).
* Enable processors profiling (time spent/in and out bytes for sorting, aggregation, ...) by default. [#61096](https://github.com/ClickHouse/ClickHouse/pull/61096) ([Azat Khuzhin](https://github.com/azat)).
* Add the function `toUInt128OrZero`, which was missed by mistake (the mistake is related to https://github.com/ClickHouse/ClickHouse/pull/945). The compatibility aliases `FROM_UNIXTIME` and `DATE_FORMAT` (they are not ClickHouse-native and only exist for MySQL compatibility) have been made case insensitive, as expected for SQL-compatibility aliases. [#61114](https://github.com/ClickHouse/ClickHouse/pull/61114) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Improvements for the access checks: allow revoking rights that are not explicitly possessed, in case the target user doesn't have the grants being revoked either. Example: `GRANT SELECT ON *.* TO user1; REVOKE SELECT ON system.* FROM user1;`. [#61115](https://github.com/ClickHouse/ClickHouse/pull/61115) ([pufit](https://github.com/pufit)).
* Fix an error in the previous optimization (https://github.com/ClickHouse/ClickHouse/pull/59698): remove a `break` to make sure the first filtered column has the minimum size. [#61145](https://github.com/ClickHouse/ClickHouse/pull/61145) ([李扬](https://github.com/taiyang-li)).
* Fix `has()` function with `Nullable` column (fixes [#60214](https://github.com/ClickHouse/ClickHouse/issues/60214)). [#61249](https://github.com/ClickHouse/ClickHouse/pull/61249) ([Mikhail Koviazin](https://github.com/mkmkme)).
* Now it's possible to specify the attribute `merge="true"` in config substitutions for subtrees: `<include from_zk="/path" merge="true">`. If this attribute is specified, ClickHouse will merge the subtree with the existing configuration; otherwise the default behavior is to append the new content to the configuration. [#61299](https://github.com/ClickHouse/ClickHouse/pull/61299) ([alesapin](https://github.com/alesapin)).
* Add async metrics for virtual memory mappings: VMMaxMapCount & VMNumMaps. Closes [#60662](https://github.com/ClickHouse/ClickHouse/issues/60662). [#61354](https://github.com/ClickHouse/ClickHouse/pull/61354) ([Tuan Pham Anh](https://github.com/tuanpavn)).
* Use `temporary_files_codec` setting in all places where we create temporary data, for example external memory sorting and external memory GROUP BY. Before it worked only in `partial_merge` JOIN algorithm. [#61456](https://github.com/ClickHouse/ClickHouse/pull/61456) ([Maksim Kita](https://github.com/kitaisreal)).
* Remove the duplicated check `containing_part.empty()`; it's already being checked here: https://github.com/ClickHouse/ClickHouse/blob/1296dac3c7e47670872c15e3f5e58f869e0bd2f2/src/Storages/MergeTree/MergeTreeData.cpp#L6141. [#61467](https://github.com/ClickHouse/ClickHouse/pull/61467) ([William Schoeffel](https://github.com/wiledusc)).
* Add a new setting `max_parser_backtracks` which allows to limit the complexity of query parsing. [#61502](https://github.com/ClickHouse/ClickHouse/pull/61502) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Support parallel reading for azure blob storage. [#61503](https://github.com/ClickHouse/ClickHouse/pull/61503) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Less contention during dynamic resize of filesystem cache. [#61524](https://github.com/ClickHouse/ClickHouse/pull/61524) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Disallow sharded mode of StorageS3 queue, because it will be rewritten. [#61537](https://github.com/ClickHouse/ClickHouse/pull/61537) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fixed typo: from `use_leagcy_max_level` to `use_legacy_max_level`. [#61545](https://github.com/ClickHouse/ClickHouse/pull/61545) ([William Schoeffel](https://github.com/wiledusc)).
* Remove some duplicate entries in blob_storage_log. [#61622](https://github.com/ClickHouse/ClickHouse/pull/61622) ([YenchangChan](https://github.com/YenchangChan)).
* Enable `allow_experimental_analyzer` setting by default. [#61652](https://github.com/ClickHouse/ClickHouse/pull/61652) ([Dmitry Novik](https://github.com/novikd)).
* Added `current_user` function as a compatibility alias for MySQL. [#61770](https://github.com/ClickHouse/ClickHouse/pull/61770) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Use managed identity for backups IO when using Azure Blob Storage. Add a setting to prevent ClickHouse from attempting to create a non-existent container, which requires permissions at the storage account level. [#61785](https://github.com/ClickHouse/ClickHouse/pull/61785) ([Daniel Pozo Escalona](https://github.com/danipozo)).
* Enable `output_format_pretty_row_numbers` by default. It is better for usability. [#61791](https://github.com/ClickHouse/ClickHouse/pull/61791) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* In the previous version, some numbers in Pretty formats were not pretty enough. [#61794](https://github.com/ClickHouse/ClickHouse/pull/61794) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* A long value in Pretty formats won't be cut if it is the single value in the resultset, such as in the result of the `SHOW CREATE TABLE` query. [#61795](https://github.com/ClickHouse/ClickHouse/pull/61795) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Similarly to `clickhouse-local`, `clickhouse-client` will accept the `--output-format` option as a synonym to the `--format` option. This closes [#59848](https://github.com/ClickHouse/ClickHouse/issues/59848). [#61797](https://github.com/ClickHouse/ClickHouse/pull/61797) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* If stdout is a terminal and the output format is not specified, `clickhouse-client` and similar tools will use `PrettyCompact` by default, similarly to the interactive mode. `clickhouse-client` and `clickhouse-local` will handle command line arguments for input and output formats in a unified fashion. This closes [#61272](https://github.com/ClickHouse/ClickHouse/issues/61272). [#61800](https://github.com/ClickHouse/ClickHouse/pull/61800) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Underscore digit groups in Pretty formats for better readability. This is controlled by a new setting, `output_format_pretty_highlight_digit_groups`. [#61802](https://github.com/ClickHouse/ClickHouse/pull/61802) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Add ability to override initial INSERT SETTINGS via SYSTEM FLUSH DISTRIBUTED. [#61832](https://github.com/ClickHouse/ClickHouse/pull/61832) ([Azat Khuzhin](https://github.com/azat)).
* Fixed grammar from "a" to "the" in the warning message. There is only one Atomic engine, so it should be "to the new Atomic engine" instead of "to a new Atomic engine". [#61952](https://github.com/ClickHouse/ClickHouse/pull/61952) ([shabroo](https://github.com/shabroo)).
#### Build/Testing/Packaging Improvement
* Update sccache to the latest version; significantly reduce images size by reshaking the dependency trees; use the latest working odbc driver. [#59953](https://github.com/ClickHouse/ClickHouse/pull/59953) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Update python related style checkers. Continue the [#50174](https://github.com/ClickHouse/ClickHouse/issues/50174). [#60408](https://github.com/ClickHouse/ClickHouse/pull/60408) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Upgrade `prqlc` to 0.11.3. [#60616](https://github.com/ClickHouse/ClickHouse/pull/60616) ([Maximilian Roos](https://github.com/max-sixty)).
* Attach gdb to running fuzzer process. [#60654](https://github.com/ClickHouse/ClickHouse/pull/60654) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Use explicit template instantiation more aggressively. Get rid of templates in favor of overloaded functions in some places. [#60730](https://github.com/ClickHouse/ClickHouse/pull/60730) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* The real-time query profiler now works on AArch64. In previous versions, it worked only when a program didn't spend time inside a syscall. [#60807](https://github.com/ClickHouse/ClickHouse/pull/60807) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* ... Too big translation unit in `Aggregator`. [#61211](https://github.com/ClickHouse/ClickHouse/pull/61211) ([lgbo](https://github.com/lgbo-ustc)).
* Fixed flakiness of 01603_insert_select_too_many_parts test. Closes [#61158](https://github.com/ClickHouse/ClickHouse/issues/61158). [#61259](https://github.com/ClickHouse/ClickHouse/pull/61259) ([Ilya Yatsishin](https://github.com/qoega)).
* Now it is possible to use `chassert(expression, comment)` in the codebase. [#61263](https://github.com/ClickHouse/ClickHouse/pull/61263) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Teach the fuzzer to use other numeric types. [#61317](https://github.com/ClickHouse/ClickHouse/pull/61317) ([Raúl Marín](https://github.com/Algunenano)).
* Increase memory limit for coverage builds. [#61405](https://github.com/ClickHouse/ClickHouse/pull/61405) ([Raúl Marín](https://github.com/Algunenano)).
* Add generic query text fuzzer in `clickhouse-local`. [#61508](https://github.com/ClickHouse/ClickHouse/pull/61508) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix function execution over const and LowCardinality with GROUP BY const for analyzer [#59986](https://github.com/ClickHouse/ClickHouse/pull/59986) ([Azat Khuzhin](https://github.com/azat)).
* Fix finished_mutations_to_keep=0 for MergeTree (as docs says 0 is to keep everything) [#60031](https://github.com/ClickHouse/ClickHouse/pull/60031) ([Azat Khuzhin](https://github.com/azat)).
* PartsSplitter invalid ranges for the same part [#60041](https://github.com/ClickHouse/ClickHouse/pull/60041) ([Maksim Kita](https://github.com/kitaisreal)).
* Azure Blob Storage: fix issues with endpoint and prefix. [#60251](https://github.com/ClickHouse/ClickHouse/pull/60251) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* fix LRUResource Cache bug (Hive cache) [#60262](https://github.com/ClickHouse/ClickHouse/pull/60262) ([shanfengp](https://github.com/Aed-p)).
* Force reanalysis if parallel replicas changed [#60362](https://github.com/ClickHouse/ClickHouse/pull/60362) ([Raúl Marín](https://github.com/Algunenano)).
* Fix usage of plain metadata type with new disks configuration option [#60396](https://github.com/ClickHouse/ClickHouse/pull/60396) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Try to fix logical error 'Cannot capture column because it has incompatible type' in mapContainsKeyLike [#60451](https://github.com/ClickHouse/ClickHouse/pull/60451) ([Kruglov Pavel](https://github.com/Avogar)).
* Try to avoid calculation of scalar subqueries for CREATE TABLE. [#60464](https://github.com/ClickHouse/ClickHouse/pull/60464) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix deadlock in parallel parsing when lots of rows are skipped due to errors [#60516](https://github.com/ClickHouse/ClickHouse/pull/60516) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix `max_query_size` for the KQL compound operator. [#60534](https://github.com/ClickHouse/ClickHouse/pull/60534) ([Yong Wang](https://github.com/kashwy)).
* Keeper fix: add timeouts when waiting for commit logs [#60544](https://github.com/ClickHouse/ClickHouse/pull/60544) ([Antonio Andelic](https://github.com/antonio2368)).
* Reduce the number of read rows from `system.numbers` [#60546](https://github.com/ClickHouse/ClickHouse/pull/60546) ([JackyWoo](https://github.com/JackyWoo)).
* Don't output number tips for date types [#60577](https://github.com/ClickHouse/ClickHouse/pull/60577) ([Raúl Marín](https://github.com/Algunenano)).
* Fix reading from MergeTree with non-deterministic functions in filter [#60586](https://github.com/ClickHouse/ClickHouse/pull/60586) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix logical error on bad compatibility setting value type [#60596](https://github.com/ClickHouse/ClickHouse/pull/60596) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix inconsistent aggregate function states in mixed x86-64 / ARM clusters [#60610](https://github.com/ClickHouse/ClickHouse/pull/60610) ([Harry Lee](https://github.com/HarryLeeIBM)).
* fix(prql): Robust panic handler [#60615](https://github.com/ClickHouse/ClickHouse/pull/60615) ([Maximilian Roos](https://github.com/max-sixty)).
* Fix `intDiv` for decimal and date arguments [#60672](https://github.com/ClickHouse/ClickHouse/pull/60672) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Fix: expand CTE in alter modify query [#60682](https://github.com/ClickHouse/ClickHouse/pull/60682) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix system.parts for non-Atomic/Ordinary database engine (i.e. Memory) [#60689](https://github.com/ClickHouse/ClickHouse/pull/60689) ([Azat Khuzhin](https://github.com/azat)).
* Fix "Invalid storage definition in metadata file" for parameterized views [#60708](https://github.com/ClickHouse/ClickHouse/pull/60708) ([Azat Khuzhin](https://github.com/azat)).
* Fix buffer overflow in CompressionCodecMultiple [#60731](https://github.com/ClickHouse/ClickHouse/pull/60731) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove nonsense from SQL/JSON [#60738](https://github.com/ClickHouse/ClickHouse/pull/60738) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove wrong sanitize checking in aggregate function quantileGK [#60740](https://github.com/ClickHouse/ClickHouse/pull/60740) ([李扬](https://github.com/taiyang-li)).
* Fix insert-select + insert_deduplication_token bug by setting streams to 1 [#60745](https://github.com/ClickHouse/ClickHouse/pull/60745) ([Jordi Villar](https://github.com/jrdi)).
* Prevent setting custom metadata headers on unsupported multipart upload operations [#60748](https://github.com/ClickHouse/ClickHouse/pull/60748) ([Francisco J. Jurado Moreno](https://github.com/Beetelbrox)).
* Fix toStartOfInterval [#60763](https://github.com/ClickHouse/ClickHouse/pull/60763) ([Andrey Zvonov](https://github.com/zvonand)).
* Fix crash in arrayEnumerateRanked [#60764](https://github.com/ClickHouse/ClickHouse/pull/60764) ([Raúl Marín](https://github.com/Algunenano)).
* Fix crash when using input() in INSERT SELECT JOIN [#60765](https://github.com/ClickHouse/ClickHouse/pull/60765) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix crash with different allow_experimental_analyzer value in subqueries [#60770](https://github.com/ClickHouse/ClickHouse/pull/60770) ([Dmitry Novik](https://github.com/novikd)).
* Remove recursion when reading from S3 [#60849](https://github.com/ClickHouse/ClickHouse/pull/60849) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix possible stuck on error in HashedDictionaryParallelLoader [#60926](https://github.com/ClickHouse/ClickHouse/pull/60926) ([vdimir](https://github.com/vdimir)).
* Fix async RESTORE with Replicated database [#60934](https://github.com/ClickHouse/ClickHouse/pull/60934) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix CSV format not supporting Tuple. [#60994](https://github.com/ClickHouse/ClickHouse/pull/60994) ([shuai.xu](https://github.com/shuai-xu)).
* Fix deadlock in async inserts to `Log` tables via native protocol [#61055](https://github.com/ClickHouse/ClickHouse/pull/61055) ([Anton Popov](https://github.com/CurtizJ)).
* Fix lazy execution of default argument in dictGetOrDefault for RangeHashedDictionary [#61196](https://github.com/ClickHouse/ClickHouse/pull/61196) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix multiple bugs in groupArraySorted [#61203](https://github.com/ClickHouse/ClickHouse/pull/61203) ([Raúl Marín](https://github.com/Algunenano)).
* Fix Keeper reconfig for standalone binary [#61233](https://github.com/ClickHouse/ClickHouse/pull/61233) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix usage of session_token in S3 engine [#61234](https://github.com/ClickHouse/ClickHouse/pull/61234) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix possible incorrect result of aggregate function `uniqExact` [#61257](https://github.com/ClickHouse/ClickHouse/pull/61257) ([Anton Popov](https://github.com/CurtizJ)).
* Fix bugs in show database [#61269](https://github.com/ClickHouse/ClickHouse/pull/61269) ([Raúl Marín](https://github.com/Algunenano)).
* Fix logical error in RabbitMQ storage with MATERIALIZED columns [#61320](https://github.com/ClickHouse/ClickHouse/pull/61320) ([vdimir](https://github.com/vdimir)).
* Fix CREATE OR REPLACE DICTIONARY [#61356](https://github.com/ClickHouse/ClickHouse/pull/61356) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash in ObjectJson parsing array with nulls [#61364](https://github.com/ClickHouse/ClickHouse/pull/61364) ([vdimir](https://github.com/vdimir)).
* Fix ATTACH query with external ON CLUSTER [#61365](https://github.com/ClickHouse/ClickHouse/pull/61365) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fix consecutive keys optimization for nullable keys [#61393](https://github.com/ClickHouse/ClickHouse/pull/61393) ([Anton Popov](https://github.com/CurtizJ)).
* fix issue of actions dag split [#61458](https://github.com/ClickHouse/ClickHouse/pull/61458) ([Raúl Marín](https://github.com/Algunenano)).
* Fix finishing a failed RESTORE [#61466](https://github.com/ClickHouse/ClickHouse/pull/61466) ([Vitaly Baranov](https://github.com/vitlibar)).
* Disable async_insert_use_adaptive_busy_timeout correctly with compatibility settings [#61468](https://github.com/ClickHouse/ClickHouse/pull/61468) ([Raúl Marín](https://github.com/Algunenano)).
* Allow queuing in restore pool [#61475](https://github.com/ClickHouse/ClickHouse/pull/61475) ([Nikita Taranov](https://github.com/nickitat)).
* Fix bug when reading system.parts using UUID (issue 61220). [#61479](https://github.com/ClickHouse/ClickHouse/pull/61479) ([Dan Wu](https://github.com/wudanzy)).
* Fix ALTER QUERY MODIFY SQL SECURITY [#61480](https://github.com/ClickHouse/ClickHouse/pull/61480) ([pufit](https://github.com/pufit)).
* Fix crash in window view [#61526](https://github.com/ClickHouse/ClickHouse/pull/61526) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix `repeat` with non native integers [#61527](https://github.com/ClickHouse/ClickHouse/pull/61527) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix client `-s` argument [#61530](https://github.com/ClickHouse/ClickHouse/pull/61530) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Reset part level upon attach from disk on MergeTree [#61536](https://github.com/ClickHouse/ClickHouse/pull/61536) ([Arthur Passos](https://github.com/arthurpassos)).
* Fix crash in arrayPartialReverseSort [#61539](https://github.com/ClickHouse/ClickHouse/pull/61539) ([Raúl Marín](https://github.com/Algunenano)).
* Fix string search with const position [#61547](https://github.com/ClickHouse/ClickHouse/pull/61547) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix `addDays` causing an error when used with DateTime64. [#61561](https://github.com/ClickHouse/ClickHouse/pull/61561) ([Shuai li](https://github.com/loneylee)).
* disallow LowCardinality input type for JSONExtract [#61617](https://github.com/ClickHouse/ClickHouse/pull/61617) ([Julia Kartseva](https://github.com/jkartseva)).
* Fix `system.part_log` for async insert with deduplication [#61620](https://github.com/ClickHouse/ClickHouse/pull/61620) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix Non-ready set for system.parts. [#61666](https://github.com/ClickHouse/ClickHouse/pull/61666) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Don't allow the same expression in ORDER BY with and without WITH FILL [#61667](https://github.com/ClickHouse/ClickHouse/pull/61667) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix actual_part_name for REPLACE_RANGE (`Entry actual part isn't empty yet`) [#61675](https://github.com/ClickHouse/ClickHouse/pull/61675) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix columns after executing MODIFY QUERY for a materialized view with internal table [#61734](https://github.com/ClickHouse/ClickHouse/pull/61734) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix crash in `multiSearchAllPositionsCaseInsensitiveUTF8` for incorrect UTF-8 [#61749](https://github.com/ClickHouse/ClickHouse/pull/61749) ([pufit](https://github.com/pufit)).
* Fix RANGE frame is not supported for Nullable columns. [#61766](https://github.com/ClickHouse/ClickHouse/pull/61766) ([YuanLiu](https://github.com/ditgittube)).
* Revert "Revert "Fix bug when reading system.parts using UUID (issue 61220)."" [#61779](https://github.com/ClickHouse/ClickHouse/pull/61779) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
#### CI Fix or Improvement (changelog entry is not required)
* Decoupled changes from [#60408](https://github.com/ClickHouse/ClickHouse/issues/60408). [#60553](https://github.com/ClickHouse/ClickHouse/pull/60553) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Eliminates the need to provide input args to docker server jobs to clean yml files. [#60602](https://github.com/ClickHouse/ClickHouse/pull/60602) ([Max K.](https://github.com/maxknv)).
* Debug and fix markreleaseready. [#60611](https://github.com/ClickHouse/ClickHouse/pull/60611) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Fix build_report job so that it's defined by ci_config only (not yml file). [#60613](https://github.com/ClickHouse/ClickHouse/pull/60613) ([Max K.](https://github.com/maxknv)).
* Do not await ci pending jobs on release branches decrease wait timeout to fit into gh job timeout. [#60652](https://github.com/ClickHouse/ClickHouse/pull/60652) ([Max K.](https://github.com/maxknv)).
* Set limited number of builds for "special build check" report in backports. [#60850](https://github.com/ClickHouse/ClickHouse/pull/60850) ([Max K.](https://github.com/maxknv)).
* ... [#60935](https://github.com/ClickHouse/ClickHouse/pull/60935) ([Max K.](https://github.com/maxknv)).
* ... [#60947](https://github.com/ClickHouse/ClickHouse/pull/60947) ([Max K.](https://github.com/maxknv)).
* ... [#60952](https://github.com/ClickHouse/ClickHouse/pull/60952) ([Max K.](https://github.com/maxknv)).
* ... [#60958](https://github.com/ClickHouse/ClickHouse/pull/60958) ([Max K.](https://github.com/maxknv)).
* ... [#61022](https://github.com/ClickHouse/ClickHouse/pull/61022) ([Max K.](https://github.com/maxknv)).
* Just a preparation for the merge queue support. [#61099](https://github.com/ClickHouse/ClickHouse/pull/61099) ([Max K.](https://github.com/maxknv)).
* ... [#61133](https://github.com/ClickHouse/ClickHouse/pull/61133) ([Max K.](https://github.com/maxknv)).
* In PRs: run the typos and aspell checks always; run pylint and mypy only if Python file(s) changed in the PR; run the basic source file style check only if not all changes are in Python files. [#61148](https://github.com/ClickHouse/ClickHouse/pull/61148) ([Max K.](https://github.com/maxknv)).
* ... [#61172](https://github.com/ClickHouse/ClickHouse/pull/61172) ([Max K.](https://github.com/maxknv)).
* ... [#61183](https://github.com/ClickHouse/ClickHouse/pull/61183) ([Han Fei](https://github.com/hanfei1991)).
* ... [#61185](https://github.com/ClickHouse/ClickHouse/pull/61185) ([Max K.](https://github.com/maxknv)).
* TBD. [#61197](https://github.com/ClickHouse/ClickHouse/pull/61197) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* ... [#61214](https://github.com/ClickHouse/ClickHouse/pull/61214) ([Max K.](https://github.com/maxknv)).
* ... [#61441](https://github.com/ClickHouse/ClickHouse/pull/61441) ([Max K.](https://github.com/maxknv)).
* ![Screenshot_20240323_025055](https://github.com/ClickHouse/ClickHouse/assets/18581488/ccaab212-a1d3-4dfb-8d56-b1991760b6bf). [#61801](https://github.com/ClickHouse/ClickHouse/pull/61801) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* ... [#61877](https://github.com/ClickHouse/ClickHouse/pull/61877) ([Max K.](https://github.com/maxknv)).
#### NO CL ENTRY
* NO CL ENTRY: 'Revert "Revert "Use `MergeTree` as a default table engine""'. [#60524](https://github.com/ClickHouse/ClickHouse/pull/60524) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Revert "Revert "Support resource request canceling""'. [#60558](https://github.com/ClickHouse/ClickHouse/pull/60558) ([Sergei Trifonov](https://github.com/serxa)).
* NO CL ENTRY: 'Revert "Add `toMillisecond` function"'. [#60644](https://github.com/ClickHouse/ClickHouse/pull/60644) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Synchronize parsers"'. [#60759](https://github.com/ClickHouse/ClickHouse/pull/60759) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Fix wacky primary key sorting in `SHOW INDEX`"'. [#60898](https://github.com/ClickHouse/ClickHouse/pull/60898) ([Antonio Andelic](https://github.com/antonio2368)).
* NO CL ENTRY: 'Revert "CI: make style check faster"'. [#61142](https://github.com/ClickHouse/ClickHouse/pull/61142) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Revert "Don't allow to set max_parallel_replicas to 0 as it doesn't make sense"'. [#61200](https://github.com/ClickHouse/ClickHouse/pull/61200) ([Kruglov Pavel](https://github.com/Avogar)).
* NO CL ENTRY: 'Revert "Fix usage of session_token in S3 engine"'. [#61359](https://github.com/ClickHouse/ClickHouse/pull/61359) ([Antonio Andelic](https://github.com/antonio2368)).
* NO CL ENTRY: 'Revert "Revert "Fix usage of session_token in S3 engine""'. [#61362](https://github.com/ClickHouse/ClickHouse/pull/61362) ([Kruglov Pavel](https://github.com/Avogar)).
* NO CL ENTRY: 'Reorder hidden and shown checks in comment, change url of Mergeable check'. [#61373](https://github.com/ClickHouse/ClickHouse/pull/61373) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* NO CL ENTRY: 'Remove unnecessary layers from clickhouse/cctools'. [#61374](https://github.com/ClickHouse/ClickHouse/pull/61374) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* NO CL ENTRY: 'Revert "Updated format settings references in the docs (datetime.md)"'. [#61435](https://github.com/ClickHouse/ClickHouse/pull/61435) ([Kruglov Pavel](https://github.com/Avogar)).
* NO CL ENTRY: 'Revert "CI: ARM integration tests: disable tests with HDFS "'. [#61449](https://github.com/ClickHouse/ClickHouse/pull/61449) ([Max K.](https://github.com/maxknv)).
* NO CL ENTRY: 'Revert "Analyzer: Fix virtual columns in StorageMerge"'. [#61518](https://github.com/ClickHouse/ClickHouse/pull/61518) ([Antonio Andelic](https://github.com/antonio2368)).
* NO CL ENTRY: 'Revert "Revert "Analyzer: Fix virtual columns in StorageMerge""'. [#61528](https://github.com/ClickHouse/ClickHouse/pull/61528) ([Dmitry Novik](https://github.com/novikd)).
* NO CL ENTRY: 'Improve build_download_helper'. [#61592](https://github.com/ClickHouse/ClickHouse/pull/61592) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* NO CL ENTRY: 'Revert "Un-flake `test_undrop_query`"'. [#61668](https://github.com/ClickHouse/ClickHouse/pull/61668) ([Robert Schulze](https://github.com/rschu1ze)).
* NO CL ENTRY: 'Fix flaky tests (stateless, integration)'. [#61816](https://github.com/ClickHouse/ClickHouse/pull/61816) ([Nikita Fomichev](https://github.com/fm4v)).
* NO CL ENTRY: 'Better usability of "expect" tests: less trouble with running directly'. [#61818](https://github.com/ClickHouse/ClickHouse/pull/61818) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Revert "Fix flaky `02122_parallel_formatting_Template`"'. [#61868](https://github.com/ClickHouse/ClickHouse/pull/61868) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Add --now option to enable and start the service" #job_Install_packages_amd64'. [#61878](https://github.com/ClickHouse/ClickHouse/pull/61878) ([Max K.](https://github.com/maxknv)).
* NO CL ENTRY: 'Revert "disallow LowCardinality input type for JSONExtract"'. [#61960](https://github.com/ClickHouse/ClickHouse/pull/61960) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Improve query performance in case of very small blocks [#58879](https://github.com/ClickHouse/ClickHouse/pull/58879) ([Azat Khuzhin](https://github.com/azat)).
* Analyzer: fixes for JOIN columns resolution [#59007](https://github.com/ClickHouse/ClickHouse/pull/59007) ([vdimir](https://github.com/vdimir)).
* Fix race on `Context::async_insert_queue` [#59082](https://github.com/ClickHouse/ClickHouse/pull/59082) ([Alexander Tokmakov](https://github.com/tavplubix)).
* CI: support batch specification in commit message [#59738](https://github.com/ClickHouse/ClickHouse/pull/59738) ([Max K.](https://github.com/maxknv)).
* Update storing-data.md [#60024](https://github.com/ClickHouse/ClickHouse/pull/60024) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Make max_insert_delayed_streams_for_parallel_write actually work [#60079](https://github.com/ClickHouse/ClickHouse/pull/60079) ([alesapin](https://github.com/alesapin)).
* Analyzer: support join using column from select list [#60182](https://github.com/ClickHouse/ClickHouse/pull/60182) ([vdimir](https://github.com/vdimir)).
* test for [#60223](https://github.com/ClickHouse/ClickHouse/issues/60223) [#60258](https://github.com/ClickHouse/ClickHouse/pull/60258) ([Denny Crane](https://github.com/den-crane)).
* Analyzer: Refactor execution name for ConstantNode [#60313](https://github.com/ClickHouse/ClickHouse/pull/60313) ([Dmitry Novik](https://github.com/novikd)).
* Fix database iterator waiting code [#60314](https://github.com/ClickHouse/ClickHouse/pull/60314) ([Sergei Trifonov](https://github.com/serxa)).
* QueryCache: Don't acquire the query count mutex if not necessary [#60348](https://github.com/ClickHouse/ClickHouse/pull/60348) ([zhongyuankai](https://github.com/zhongyuankai)).
* Fix bugfix check (due to unknown commit_logs_cache_size_threshold) [#60375](https://github.com/ClickHouse/ClickHouse/pull/60375) ([Azat Khuzhin](https://github.com/azat)).
* Enable testing with `io_uring` back [#60383](https://github.com/ClickHouse/ClickHouse/pull/60383) ([Nikita Taranov](https://github.com/nickitat)).
* Analyzer - improve hiding secret arguments. [#60386](https://github.com/ClickHouse/ClickHouse/pull/60386) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* CI: make workflow yml abstract [#60421](https://github.com/ClickHouse/ClickHouse/pull/60421) ([Max K.](https://github.com/maxknv)).
* Improve test test_reload_clusters_config [#60426](https://github.com/ClickHouse/ClickHouse/pull/60426) ([Kruglov Pavel](https://github.com/Avogar)).
* Revert "Revert "Merge pull request [#56864](https://github.com/ClickHouse/ClickHouse/issues/56864) from ClickHouse/broken-projections-better-handling"" [#60452](https://github.com/ClickHouse/ClickHouse/pull/60452) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Do not check to and from files existence in metadata_storage because it does not see uncommitted changes [#60462](https://github.com/ClickHouse/ClickHouse/pull/60462) ([Alexander Gololobov](https://github.com/davenger)).
* Fix option ambiguous in `clickhouse-local` [#60475](https://github.com/ClickHouse/ClickHouse/pull/60475) ([豪肥肥](https://github.com/HowePa)).
* Fix: test_parallel_replicas_custom_key_load_balancing [#60485](https://github.com/ClickHouse/ClickHouse/pull/60485) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix: progress bar for *Cluster table functions [#60491](https://github.com/ClickHouse/ClickHouse/pull/60491) ([Igor Nikonov](https://github.com/devcrafter)).
* Analyzer: Support different ObjectJSON on shards [#60497](https://github.com/ClickHouse/ClickHouse/pull/60497) ([Dmitry Novik](https://github.com/novikd)).
* Cancel PipelineExecutor properly in case of exception in spawnThreads [#60499](https://github.com/ClickHouse/ClickHouse/pull/60499) ([Kruglov Pavel](https://github.com/Avogar)).
* Refactor StorageSystemOneBlock [#60510](https://github.com/ClickHouse/ClickHouse/pull/60510) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Simple cleanup while fixing progress bar [#60513](https://github.com/ClickHouse/ClickHouse/pull/60513) ([Igor Nikonov](https://github.com/devcrafter)).
* PullingAsyncPipelineExecutor cleanup [#60515](https://github.com/ClickHouse/ClickHouse/pull/60515) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix bad error message [#60518](https://github.com/ClickHouse/ClickHouse/pull/60518) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Synchronize Access [#60519](https://github.com/ClickHouse/ClickHouse/pull/60519) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Synchronize metrics and Keeper [#60520](https://github.com/ClickHouse/ClickHouse/pull/60520) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Enforce clang-tidy in `programs/` and `utils/` headers [#60521](https://github.com/ClickHouse/ClickHouse/pull/60521) ([Robert Schulze](https://github.com/rschu1ze)).
* Synchronize parsers [#60522](https://github.com/ClickHouse/ClickHouse/pull/60522) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix a bunch of clang-tidy warnings in headers [#60523](https://github.com/ClickHouse/ClickHouse/pull/60523) ([Robert Schulze](https://github.com/rschu1ze)).
* General sanity in function `seriesOutliersDetectTukey` [#60535](https://github.com/ClickHouse/ClickHouse/pull/60535) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update Chinese document for max_query_size, max_parser_depth and optimize_functions_to_subcolumns [#60541](https://github.com/ClickHouse/ClickHouse/pull/60541) ([Alex Cheng](https://github.com/Alex-Cheng)).
* Userspace page cache again [#60552](https://github.com/ClickHouse/ClickHouse/pull/60552) ([Michael Kolupaev](https://github.com/al13n321)).
* Traverse shadow directory for system.remote_data_paths [#60585](https://github.com/ClickHouse/ClickHouse/pull/60585) ([Aleksei Filatov](https://github.com/aalexfvk)).
* Add test for [#58906](https://github.com/ClickHouse/ClickHouse/issues/58906) [#60597](https://github.com/ClickHouse/ClickHouse/pull/60597) ([Raúl Marín](https://github.com/Algunenano)).
* Use python zipfile to have x-platform idempotent lambda packages [#60603](https://github.com/ClickHouse/ClickHouse/pull/60603) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* tests: suppress data-race in librdkafka statistics code [#60604](https://github.com/ClickHouse/ClickHouse/pull/60604) ([Azat Khuzhin](https://github.com/azat)).
* Update version after release [#60605](https://github.com/ClickHouse/ClickHouse/pull/60605) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Update version_date.tsv and changelogs after v24.2.1.2248-stable [#60607](https://github.com/ClickHouse/ClickHouse/pull/60607) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Addition to changelog [#60609](https://github.com/ClickHouse/ClickHouse/pull/60609) ([Anton Popov](https://github.com/CurtizJ)).
* internal: Refine rust prql code [#60617](https://github.com/ClickHouse/ClickHouse/pull/60617) ([Maximilian Roos](https://github.com/max-sixty)).
* fix(rust): Fix skim's panic handler [#60621](https://github.com/ClickHouse/ClickHouse/pull/60621) ([Maximilian Roos](https://github.com/max-sixty)).
* Resubmit "Analyzer: compute ALIAS columns right after reading" [#60641](https://github.com/ClickHouse/ClickHouse/pull/60641) ([vdimir](https://github.com/vdimir)).
* Analyzer: Fix bug with join_use_nulls and PREWHERE [#60655](https://github.com/ClickHouse/ClickHouse/pull/60655) ([vdimir](https://github.com/vdimir)).
* Add test for [#59891](https://github.com/ClickHouse/ClickHouse/issues/59891) [#60657](https://github.com/ClickHouse/ClickHouse/pull/60657) ([Raúl Marín](https://github.com/Algunenano)).
* Fix missed entries in system.part_log in case of fetch preferred over merges/mutations [#60659](https://github.com/ClickHouse/ClickHouse/pull/60659) ([Azat Khuzhin](https://github.com/azat)).
* Always apply first minmax index among available skip indices [#60675](https://github.com/ClickHouse/ClickHouse/pull/60675) ([Igor Nikonov](https://github.com/devcrafter)).
* Remove bad test `02152_http_external_tables_memory_tracking` [#60690](https://github.com/ClickHouse/ClickHouse/pull/60690) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix questionable behavior in the `parseDateTimeBestEffort` function. [#60691](https://github.com/ClickHouse/ClickHouse/pull/60691) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix flaky checks [#60694](https://github.com/ClickHouse/ClickHouse/pull/60694) ([Azat Khuzhin](https://github.com/azat)).
* Resubmit http_external_tables_memory_tracking test [#60695](https://github.com/ClickHouse/ClickHouse/pull/60695) ([Azat Khuzhin](https://github.com/azat)).
* Fix bugfix and upgrade checks (due to "Unknown handler type 'redirect'" error) [#60696](https://github.com/ClickHouse/ClickHouse/pull/60696) ([Azat Khuzhin](https://github.com/azat)).
* Fix test_grant_and_revoke/test.py::test_grant_all_on_table (after syncing with cloud) [#60699](https://github.com/ClickHouse/ClickHouse/pull/60699) ([Azat Khuzhin](https://github.com/azat)).
* Remove unit test for ColumnObject [#60709](https://github.com/ClickHouse/ClickHouse/pull/60709) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Improve unit tests [#60710](https://github.com/ClickHouse/ClickHouse/pull/60710) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix scheduler fairness test [#60712](https://github.com/ClickHouse/ClickHouse/pull/60712) ([Sergei Trifonov](https://github.com/serxa)).
* Do not retry queries if container is down in integration tests (resubmit) [#60714](https://github.com/ClickHouse/ClickHouse/pull/60714) ([Azat Khuzhin](https://github.com/azat)).
* Mark one setting as obsolete [#60715](https://github.com/ClickHouse/ClickHouse/pull/60715) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix a test with Analyzer [#60723](https://github.com/ClickHouse/ClickHouse/pull/60723) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Two tests are fixed with Analyzer [#60724](https://github.com/ClickHouse/ClickHouse/pull/60724) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove old code [#60728](https://github.com/ClickHouse/ClickHouse/pull/60728) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Remove more code from LIVE VIEW [#60729](https://github.com/ClickHouse/ClickHouse/pull/60729) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix `test_keeper_back_to_back/test.py::test_concurrent_watches` [#60749](https://github.com/ClickHouse/ClickHouse/pull/60749) ([Antonio Andelic](https://github.com/antonio2368)).
* Catch exceptions on finalize in `InterserverIOHTTPHandler` [#60769](https://github.com/ClickHouse/ClickHouse/pull/60769) ([Antonio Andelic](https://github.com/antonio2368)).
* Reduce flakiness of 02932_refreshable_materialized_views [#60771](https://github.com/ClickHouse/ClickHouse/pull/60771) ([Michael Kolupaev](https://github.com/al13n321)).
* Use 64-bit capabilities if available [#60775](https://github.com/ClickHouse/ClickHouse/pull/60775) ([Azat Khuzhin](https://github.com/azat)).
* Include multiline logs in fuzzer fatal.log report [#60796](https://github.com/ClickHouse/ClickHouse/pull/60796) ([Raúl Marín](https://github.com/Algunenano)).
* Add missing clone calls related to compression [#60810](https://github.com/ClickHouse/ClickHouse/pull/60810) ([Raúl Marín](https://github.com/Algunenano)).
* New private runners [#60811](https://github.com/ClickHouse/ClickHouse/pull/60811) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
* Move userspace page cache settings to the correct section of SettingsChangeHistory.h [#60812](https://github.com/ClickHouse/ClickHouse/pull/60812) ([Michael Kolupaev](https://github.com/al13n321)).
* Update version_date.tsv and changelogs after v23.8.10.43-lts [#60851](https://github.com/ClickHouse/ClickHouse/pull/60851) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Fix fuzzer report [#60853](https://github.com/ClickHouse/ClickHouse/pull/60853) ([Raúl Marín](https://github.com/Algunenano)).
* Update version_date.tsv and changelogs after v23.3.20.27-lts [#60857](https://github.com/ClickHouse/ClickHouse/pull/60857) ([robot-clickhouse](https://github.com/robot-clickhouse)).
* Refactor OptimizeDateOrDateTimeConverterWithPreimageVisitor [#60875](https://github.com/ClickHouse/ClickHouse/pull/60875) ([Zhiguo Zhou](https://github.com/ZhiguoZh)).
* Fix race in PageCache [#60878](https://github.com/ClickHouse/ClickHouse/pull/60878) ([Michael Kolupaev](https://github.com/al13n321)).
|
||||
* Small changes in async inserts code [#60885](https://github.com/ClickHouse/ClickHouse/pull/60885) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Remove useless verbose logging from AWS library [#60921](https://github.com/ClickHouse/ClickHouse/pull/60921) ([alesapin](https://github.com/alesapin)).
|
||||
* Throw on query timeout in ZooKeeperRetries [#60922](https://github.com/ClickHouse/ClickHouse/pull/60922) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Bring clickhouse-test changes from private [#60924](https://github.com/ClickHouse/ClickHouse/pull/60924) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Add debug info to exceptions in `IMergeTreeDataPart::checkConsistency()` [#60981](https://github.com/ClickHouse/ClickHouse/pull/60981) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Fix a typo [#60987](https://github.com/ClickHouse/ClickHouse/pull/60987) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Replace some header includes with forward declarations [#61003](https://github.com/ClickHouse/ClickHouse/pull/61003) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Speed up cctools building [#61011](https://github.com/ClickHouse/ClickHouse/pull/61011) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Fix ASTRenameQuery::clone [#61013](https://github.com/ClickHouse/ClickHouse/pull/61013) ([vdimir](https://github.com/vdimir)).
|
||||
* Update README.md [#61021](https://github.com/ClickHouse/ClickHouse/pull/61021) ([Tyler Hannan](https://github.com/tylerhannan)).
|
||||
* Fix TableFunctionExecutable::skipAnalysisForArguments [#61037](https://github.com/ClickHouse/ClickHouse/pull/61037) ([Dmitry Novik](https://github.com/novikd)).
|
||||
* Fix: parallel replicas with PREWHERE (ubsan) [#61052](https://github.com/ClickHouse/ClickHouse/pull/61052) ([Igor Nikonov](https://github.com/devcrafter)).
|
||||
* Fast fix tests. [#61056](https://github.com/ClickHouse/ClickHouse/pull/61056) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix `test_placement_info` [#61057](https://github.com/ClickHouse/ClickHouse/pull/61057) ([Konstantin Bogdanov](https://github.com/thevar1able)).
|
||||
* Fix: parallel replicas with CTEs, crash in EXPLAIN SYNTAX with analyzer [#61059](https://github.com/ClickHouse/ClickHouse/pull/61059) ([Igor Nikonov](https://github.com/devcrafter)).
|
||||
* Debug fuzzer failures [#61062](https://github.com/ClickHouse/ClickHouse/pull/61062) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
|
||||
* Add regression tests for fixed issues [#61076](https://github.com/ClickHouse/ClickHouse/pull/61076) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Analyzer: Fix 01244_optimize_distributed_group_by_sharding_key [#61089](https://github.com/ClickHouse/ClickHouse/pull/61089) ([Dmitry Novik](https://github.com/novikd)).
|
||||
* Use global scalars cache with analyzer [#61104](https://github.com/ClickHouse/ClickHouse/pull/61104) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix removing is_active node after re-creation [#61105](https://github.com/ClickHouse/ClickHouse/pull/61105) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Update 02962_system_sync_replica_lightweight_from_modifier.sh [#61110](https://github.com/ClickHouse/ClickHouse/pull/61110) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Simplify bridges [#61118](https://github.com/ClickHouse/ClickHouse/pull/61118) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* update cppkafka to v0.4.1 [#61119](https://github.com/ClickHouse/ClickHouse/pull/61119) ([Ilya Golshtein](https://github.com/ilejn)).
|
||||
* CI: add wf class in ci_config [#61122](https://github.com/ClickHouse/ClickHouse/pull/61122) ([Max K.](https://github.com/maxknv)).
|
||||
* QueryFuzzer: replace element randomly when AST part buffer is full [#61124](https://github.com/ClickHouse/ClickHouse/pull/61124) ([Tomer Shafir](https://github.com/tomershafir)).
|
||||
* CI: make style check fast [#61125](https://github.com/ClickHouse/ClickHouse/pull/61125) ([Max K.](https://github.com/maxknv)).
|
||||
* Better gitignore [#61128](https://github.com/ClickHouse/ClickHouse/pull/61128) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix something strange [#61129](https://github.com/ClickHouse/ClickHouse/pull/61129) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Update check-large-objects.sh to be language neutral [#61130](https://github.com/ClickHouse/ClickHouse/pull/61130) ([Dan Wu](https://github.com/wudanzy)).
|
||||
* Throw memory limit exceptions to avoid OOM in some places [#61132](https://github.com/ClickHouse/ClickHouse/pull/61132) ([alesapin](https://github.com/alesapin)).
|
||||
* Fix test_distributed_directory_monitor_split_batch_on_failure flakiness [#61136](https://github.com/ClickHouse/ClickHouse/pull/61136) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix llvm symbolizer on CI [#61147](https://github.com/ClickHouse/ClickHouse/pull/61147) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Some clang-tidy fixes [#61150](https://github.com/ClickHouse/ClickHouse/pull/61150) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Revive "Less contention in the cache, part 2" [#61152](https://github.com/ClickHouse/ClickHouse/pull/61152) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Enable black back [#61159](https://github.com/ClickHouse/ClickHouse/pull/61159) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* CI: fix nightly job issue [#61160](https://github.com/ClickHouse/ClickHouse/pull/61160) ([Max K.](https://github.com/maxknv)).
|
||||
* Split `RangeHashedDictionary` [#61162](https://github.com/ClickHouse/ClickHouse/pull/61162) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
|
||||
* Remove a few templates from Aggregator.cpp [#61171](https://github.com/ClickHouse/ClickHouse/pull/61171) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Avoid some logical errors in experimental Object type [#61173](https://github.com/ClickHouse/ClickHouse/pull/61173) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Update ReadSettings.h [#61174](https://github.com/ClickHouse/ClickHouse/pull/61174) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* CI: ARM integration tests: disable tests with HDFS [#61182](https://github.com/ClickHouse/ClickHouse/pull/61182) ([Max K.](https://github.com/maxknv)).
|
||||
* Disable sanitizers with 02784_parallel_replicas_automatic_decision_join [#61184](https://github.com/ClickHouse/ClickHouse/pull/61184) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Fix `02887_mutations_subcolumns` test flakiness [#61198](https://github.com/ClickHouse/ClickHouse/pull/61198) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Make variant tests a bit faster [#61199](https://github.com/ClickHouse/ClickHouse/pull/61199) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix strange log message [#61206](https://github.com/ClickHouse/ClickHouse/pull/61206) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix 01603_insert_select_too_many_parts flakiness [#61218](https://github.com/ClickHouse/ClickHouse/pull/61218) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Make every style-checker runner type scale out very quickly [#61231](https://github.com/ClickHouse/ClickHouse/pull/61231) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Improve `test_failed_mutations` [#61235](https://github.com/ClickHouse/ClickHouse/pull/61235) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
|
||||
* Fix `test_merge_tree_load_parts/test.py::test_merge_tree_load_parts_corrupted` [#61236](https://github.com/ClickHouse/ClickHouse/pull/61236) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
|
||||
* fix `forget_partition` test [#61237](https://github.com/ClickHouse/ClickHouse/pull/61237) ([Sergei Trifonov](https://github.com/serxa)).
|
||||
* Print more info in `02572_system_logs_materialized_views_ignore_errors` to debug [#61246](https://github.com/ClickHouse/ClickHouse/pull/61246) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
|
||||
* Fix runtime error in AST Fuzzer [#61248](https://github.com/ClickHouse/ClickHouse/pull/61248) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
|
||||
* Add retries to `02908_many_requests_to_system_replicas` [#61253](https://github.com/ClickHouse/ClickHouse/pull/61253) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Followup fix ASTRenameQuery::clone [#61254](https://github.com/ClickHouse/ClickHouse/pull/61254) ([vdimir](https://github.com/vdimir)).
|
||||
* Disable test 02998_primary_key_skip_columns.sql in sanitizer builds as it can be slow [#61256](https://github.com/ClickHouse/ClickHouse/pull/61256) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Update curl to curl with data race fix [#61264](https://github.com/ClickHouse/ClickHouse/pull/61264) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
|
||||
* Fix `01417_freeze_partition_verbose` [#61266](https://github.com/ClickHouse/ClickHouse/pull/61266) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Free memory earlier in inserts [#61267](https://github.com/ClickHouse/ClickHouse/pull/61267) ([Anton Popov](https://github.com/CurtizJ)).
|
||||
* Fixing test_build_sets_from_multiple_threads/test.py::test_set [#61286](https://github.com/ClickHouse/ClickHouse/pull/61286) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Analyzer: Fix virtual columns in StorageMerge [#61298](https://github.com/ClickHouse/ClickHouse/pull/61298) ([Dmitry Novik](https://github.com/novikd)).
|
||||
* Fix 01952_optimize_distributed_group_by_sharding_key with analyzer. [#61301](https://github.com/ClickHouse/ClickHouse/pull/61301) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* fix data race in poco tcp server [#61309](https://github.com/ClickHouse/ClickHouse/pull/61309) ([Sema Checherinda](https://github.com/CheSema)).
|
||||
* Don't use default cluster in test test_distibuted_settings [#61314](https://github.com/ClickHouse/ClickHouse/pull/61314) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix false positive assertion in cache [#61319](https://github.com/ClickHouse/ClickHouse/pull/61319) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Fix test test_input_format_parallel_parsing_memory_tracking [#61322](https://github.com/ClickHouse/ClickHouse/pull/61322) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix 01761_cast_to_enum_nullable with analyzer. [#61323](https://github.com/ClickHouse/ClickHouse/pull/61323) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Add zookeeper retries for exists check in forcefullyRemoveBrokenOutdatedPartFromZooKeeper [#61324](https://github.com/ClickHouse/ClickHouse/pull/61324) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Minor changes in stress and fuzzer reports [#61333](https://github.com/ClickHouse/ClickHouse/pull/61333) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Un-flake `test_undrop_query` [#61348](https://github.com/ClickHouse/ClickHouse/pull/61348) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Tiny improvement for replication.lib [#61361](https://github.com/ClickHouse/ClickHouse/pull/61361) ([alesapin](https://github.com/alesapin)).
|
||||
* Fix bugfix check (due to "unknown object storage type: azure") [#61363](https://github.com/ClickHouse/ClickHouse/pull/61363) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix `01599_multiline_input_and_singleline_comments` 3 minute wait [#61371](https://github.com/ClickHouse/ClickHouse/pull/61371) ([Sergei Trifonov](https://github.com/serxa)).
|
||||
* Terminate EC2 on spot event if runner isn't running [#61377](https://github.com/ClickHouse/ClickHouse/pull/61377) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Try fix docs check [#61378](https://github.com/ClickHouse/ClickHouse/pull/61378) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Fix `heap-use-after-free` for Merge table with alias [#61380](https://github.com/ClickHouse/ClickHouse/pull/61380) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Disable `optimize_rewrite_sum_if_to_count_if` if return type is nullable (new analyzer) [#61389](https://github.com/ClickHouse/ClickHouse/pull/61389) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Analyzer: Fix planner context for subquery in StorageMerge [#61392](https://github.com/ClickHouse/ClickHouse/pull/61392) ([Dmitry Novik](https://github.com/novikd)).
|
||||
* Fix `test_failed_async_inserts` [#61394](https://github.com/ClickHouse/ClickHouse/pull/61394) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Fix test test_system_clusters_actual_information flakiness [#61395](https://github.com/ClickHouse/ClickHouse/pull/61395) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Remove default cluster from default config from test config [#61396](https://github.com/ClickHouse/ClickHouse/pull/61396) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Enable clang-tidy in headers [#61406](https://github.com/ClickHouse/ClickHouse/pull/61406) ([Robert Schulze](https://github.com/rschu1ze)).
|
||||
* Add sanity check for poll_max_batch_size FileLog setting [#61408](https://github.com/ClickHouse/ClickHouse/pull/61408) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* ThreadFuzzer: randomize sleep time [#61410](https://github.com/ClickHouse/ClickHouse/pull/61410) ([Tomer Shafir](https://github.com/tomershafir)).
|
||||
* Update version_date.tsv and changelogs after v23.8.11.28-lts [#61416](https://github.com/ClickHouse/ClickHouse/pull/61416) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Update version_date.tsv and changelogs after v23.3.21.26-lts [#61418](https://github.com/ClickHouse/ClickHouse/pull/61418) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Update version_date.tsv and changelogs after v24.1.7.18-stable [#61419](https://github.com/ClickHouse/ClickHouse/pull/61419) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Update version_date.tsv and changelogs after v24.2.2.71-stable [#61420](https://github.com/ClickHouse/ClickHouse/pull/61420) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Update version_date.tsv and changelogs after v23.12.5.81-stable [#61421](https://github.com/ClickHouse/ClickHouse/pull/61421) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Restore automerge for approved PRs [#61433](https://github.com/ClickHouse/ClickHouse/pull/61433) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Disable broken SonarCloud [#61434](https://github.com/ClickHouse/ClickHouse/pull/61434) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Fix `01599_multiline_input_and_singleline_comments` properly [#61440](https://github.com/ClickHouse/ClickHouse/pull/61440) ([Sergei Trifonov](https://github.com/serxa)).
|
||||
* Convert test 02998_system_dns_cache_table to smoke and mirrors [#61443](https://github.com/ClickHouse/ClickHouse/pull/61443) ([vdimir](https://github.com/vdimir)).
|
||||
* Check boundaries for some settings in parallel replicas [#61455](https://github.com/ClickHouse/ClickHouse/pull/61455) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
|
||||
* Use SHARD_LOAD_QUEUE_BACKLOG for dictionaries in tests [#61462](https://github.com/ClickHouse/ClickHouse/pull/61462) ([vdimir](https://github.com/vdimir)).
|
||||
* Split `02125_lz4_compression_bug` [#61465](https://github.com/ClickHouse/ClickHouse/pull/61465) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Correctly process last stacktrace in `postprocess-traces.pl` [#61470](https://github.com/ClickHouse/ClickHouse/pull/61470) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix test `test_polymorphic_parts` [#61477](https://github.com/ClickHouse/ClickHouse/pull/61477) ([Anton Popov](https://github.com/CurtizJ)).
|
||||
* A definitive guide to CAST [#61491](https://github.com/ClickHouse/ClickHouse/pull/61491) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Minor rename in FileCache [#61494](https://github.com/ClickHouse/ClickHouse/pull/61494) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Remove useless code [#61498](https://github.com/ClickHouse/ClickHouse/pull/61498) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix fuzzers [#61499](https://github.com/ClickHouse/ClickHouse/pull/61499) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Update jdbc.md [#61506](https://github.com/ClickHouse/ClickHouse/pull/61506) ([San](https://github.com/santrancisco)).
|
||||
* Fix error in clickhouse-client [#61507](https://github.com/ClickHouse/ClickHouse/pull/61507) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix clang-tidy build [#61519](https://github.com/ClickHouse/ClickHouse/pull/61519) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix infinite loop in function `hop` [#61523](https://github.com/ClickHouse/ClickHouse/pull/61523) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Improve tests 00159_parallel_formatting_* to avoid timeouts [#61532](https://github.com/ClickHouse/ClickHouse/pull/61532) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Refactoring of reading from compact parts [#61535](https://github.com/ClickHouse/ClickHouse/pull/61535) ([Anton Popov](https://github.com/CurtizJ)).
|
||||
* Don't run 01459_manual_write_to_replicas in debug build as it's too slow [#61538](https://github.com/ClickHouse/ClickHouse/pull/61538) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* CI: ARM integration test - skip hdfs, kerberos, kafka [#61542](https://github.com/ClickHouse/ClickHouse/pull/61542) ([Max K.](https://github.com/maxknv)).
|
||||
* More logging for loading of tables [#61546](https://github.com/ClickHouse/ClickHouse/pull/61546) ([Sergei Trifonov](https://github.com/serxa)).
|
||||
* Fixing 01584_distributed_buffer_cannot_find_column with analyzer. [#61550](https://github.com/ClickHouse/ClickHouse/pull/61550) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Wait for done mutation with more logs and asserts [#61554](https://github.com/ClickHouse/ClickHouse/pull/61554) ([alesapin](https://github.com/alesapin)).
|
||||
* Fix read_rows count with external group by [#61555](https://github.com/ClickHouse/ClickHouse/pull/61555) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* queries-file should be used to specify file [#61557](https://github.com/ClickHouse/ClickHouse/pull/61557) ([danila-ermakov](https://github.com/danila-ermakov)).
|
||||
* Fix `02481_async_insert_dedup_token` [#61568](https://github.com/ClickHouse/ClickHouse/pull/61568) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Add a comment after [#61458](https://github.com/ClickHouse/ClickHouse/issues/61458) [#61580](https://github.com/ClickHouse/ClickHouse/pull/61580) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix clickhouse-test client option and CLICKHOUSE_URL_PARAMS interference [#61596](https://github.com/ClickHouse/ClickHouse/pull/61596) ([vdimir](https://github.com/vdimir)).
|
||||
* CI: remove compose files from integration test docker [#61597](https://github.com/ClickHouse/ClickHouse/pull/61597) ([Max K.](https://github.com/maxknv)).
|
||||
* Fix 01244_optimize_distributed_group_by_sharding_key by ordering output [#61602](https://github.com/ClickHouse/ClickHouse/pull/61602) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Remove some tests from analyzer_tech_debt [#61603](https://github.com/ClickHouse/ClickHouse/pull/61603) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Reduce header dependencies [#61604](https://github.com/ClickHouse/ClickHouse/pull/61604) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Remove some magic_enum from headers [#61606](https://github.com/ClickHouse/ClickHouse/pull/61606) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Fix configs for upgrade and bugfix [#61607](https://github.com/ClickHouse/ClickHouse/pull/61607) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Add tests for multiple fuzzer issues [#61614](https://github.com/ClickHouse/ClickHouse/pull/61614) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Try to fix `02908_many_requests_to_system_replicas` again [#61616](https://github.com/ClickHouse/ClickHouse/pull/61616) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Verbose error message about analyzer_compatibility_join_using_top_level_identifier [#61631](https://github.com/ClickHouse/ClickHouse/pull/61631) ([vdimir](https://github.com/vdimir)).
|
||||
* Fix 00223_shard_distributed_aggregation_memory_efficient with analyzer [#61649](https://github.com/ClickHouse/ClickHouse/pull/61649) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Better fuzzer logs [#61650](https://github.com/ClickHouse/ClickHouse/pull/61650) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Fix flaky `02122_parallel_formatting_Template` [#61651](https://github.com/ClickHouse/ClickHouse/pull/61651) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix Aggregator when data is empty [#61654](https://github.com/ClickHouse/ClickHouse/pull/61654) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Restore poco SUN files [#61655](https://github.com/ClickHouse/ClickHouse/pull/61655) ([Andy Fiddaman](https://github.com/citrus-it)).
|
||||
* Another fix for `SumIfToCountIfPass` [#61656](https://github.com/ClickHouse/ClickHouse/pull/61656) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Keeper: fix data race during snapshot destructor call [#61657](https://github.com/ClickHouse/ClickHouse/pull/61657) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* CI: integration tests: use runner as py module [#61658](https://github.com/ClickHouse/ClickHouse/pull/61658) ([Max K.](https://github.com/maxknv)).
|
||||
* Fix logging of autoscaling lambda, add test for effective_capacity [#61662](https://github.com/ClickHouse/ClickHouse/pull/61662) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Small change in `DatabaseOnDisk::iterateMetadataFiles()` [#61664](https://github.com/ClickHouse/ClickHouse/pull/61664) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
* Build improvements by removing magic enum from header and applying some explicit template instantiation [#61665](https://github.com/ClickHouse/ClickHouse/pull/61665) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
|
||||
* Update the dictionary for OSSFuzz [#61672](https://github.com/ClickHouse/ClickHouse/pull/61672) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
|
||||
* Inhibit randomization in some tests and exclude some long tests from debug runs [#61676](https://github.com/ClickHouse/ClickHouse/pull/61676) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Add a test for [#61669](https://github.com/ClickHouse/ClickHouse/issues/61669) [#61678](https://github.com/ClickHouse/ClickHouse/pull/61678) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix use-of-uninitialized-value in HedgedConnections [#61679](https://github.com/ClickHouse/ClickHouse/pull/61679) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Remove clickhouse-diagnostics from the package [#61681](https://github.com/ClickHouse/ClickHouse/pull/61681) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix use-of-uninitialized-value in parseDateTimeBestEffort [#61694](https://github.com/ClickHouse/ClickHouse/pull/61694) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* poco foundation: add illumos support [#61701](https://github.com/ClickHouse/ClickHouse/pull/61701) ([Andy Fiddaman](https://github.com/citrus-it)).
|
||||
* contrib/c-ares: add illumos as a platform [#61702](https://github.com/ClickHouse/ClickHouse/pull/61702) ([Andy Fiddaman](https://github.com/citrus-it)).
|
||||
* contrib/curl: Add illumos support [#61704](https://github.com/ClickHouse/ClickHouse/pull/61704) ([Andy Fiddaman](https://github.com/citrus-it)).
|
||||
* Fuzzer: Try a different way to wait for the server [#61706](https://github.com/ClickHouse/ClickHouse/pull/61706) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Disable some tests for SMT [#61708](https://github.com/ClickHouse/ClickHouse/pull/61708) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Fix signal handler for sanitizer signals [#61709](https://github.com/ClickHouse/ClickHouse/pull/61709) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Avoid `IsADirectoryError: Is a directory contrib/azure` [#61710](https://github.com/ClickHouse/ClickHouse/pull/61710) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Analyzer: fix group_by_use_nulls [#61717](https://github.com/ClickHouse/ClickHouse/pull/61717) ([Dmitry Novik](https://github.com/novikd)).
|
||||
* Analyzer: Clear list of broken integration tests [#61718](https://github.com/ClickHouse/ClickHouse/pull/61718) ([Dmitry Novik](https://github.com/novikd)).
|
||||
* CI: modify CI from PR body [#61725](https://github.com/ClickHouse/ClickHouse/pull/61725) ([Max K.](https://github.com/maxknv)).
|
||||
* Add test for [#57820](https://github.com/ClickHouse/ClickHouse/issues/57820) [#61726](https://github.com/ClickHouse/ClickHouse/pull/61726) ([Dmitry Novik](https://github.com/novikd)).
|
||||
* Revert "Revert "Un-flake test_undrop_query"" [#61727](https://github.com/ClickHouse/ClickHouse/pull/61727) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
|
||||
* FunctionsConversion: Start simplifying templates [#61733](https://github.com/ClickHouse/ClickHouse/pull/61733) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* CI: modify it [#61735](https://github.com/ClickHouse/ClickHouse/pull/61735) ([Max K.](https://github.com/maxknv)).
|
||||
* Fix segfault in SquashingTransform [#61736](https://github.com/ClickHouse/ClickHouse/pull/61736) ([Michael Kolupaev](https://github.com/al13n321)).
|
||||
* Fix DWARF format failing to skip DW_FORM_strx3 attributes [#61737](https://github.com/ClickHouse/ClickHouse/pull/61737) ([Michael Kolupaev](https://github.com/al13n321)).
|
||||
* There is no such thing as broken tests [#61739](https://github.com/ClickHouse/ClickHouse/pull/61739) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Process removed files, decouple _check_mime [#61751](https://github.com/ClickHouse/ClickHouse/pull/61751) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Keeper fix: destroy `KeeperDispatcher` first [#61752](https://github.com/ClickHouse/ClickHouse/pull/61752) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix flaky `03014_async_with_dedup_part_log_rmt` [#61757](https://github.com/ClickHouse/ClickHouse/pull/61757) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* FunctionsConversion: Remove another batch of bad templates [#61773](https://github.com/ClickHouse/ClickHouse/pull/61773) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Revert "Fix bug when reading system.parts using UUID (issue 61220)." [#61774](https://github.com/ClickHouse/ClickHouse/pull/61774) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
|
||||
* CI: disable grpc tests on ARM [#61778](https://github.com/ClickHouse/ClickHouse/pull/61778) ([Max K.](https://github.com/maxknv)).
|
||||
* Fix more tests with virtual columns in StorageMerge. [#61787](https://github.com/ClickHouse/ClickHouse/pull/61787) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Remove already not flaky tests with analyzer. [#61788](https://github.com/ClickHouse/ClickHouse/pull/61788) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Analyzer: Fix assert in JOIN with Distributed table [#61789](https://github.com/ClickHouse/ClickHouse/pull/61789) ([vdimir](https://github.com/vdimir)).
|
||||
* A test can be slow in debug build [#61796](https://github.com/ClickHouse/ClickHouse/pull/61796) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Updated clang-19 to master. [#61798](https://github.com/ClickHouse/ClickHouse/pull/61798) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix test "00002_log_and_exception_messages_formatting" [#61821](https://github.com/ClickHouse/ClickHouse/pull/61821) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* A test is too slow for debug [#61822](https://github.com/ClickHouse/ClickHouse/pull/61822) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Remove DataStreams [#61824](https://github.com/ClickHouse/ClickHouse/pull/61824) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Better message for logging errors [#61827](https://github.com/ClickHouse/ClickHouse/pull/61827) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix sanitizers suppressions [#61828](https://github.com/ClickHouse/ClickHouse/pull/61828) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Remove unused code [#61830](https://github.com/ClickHouse/ClickHouse/pull/61830) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Remove DataStreams (2) [#61831](https://github.com/ClickHouse/ClickHouse/pull/61831) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Update xxhash to v0.8.2 [#61838](https://github.com/ClickHouse/ClickHouse/pull/61838) ([Shubham Ranjan](https://github.com/shubhamranjan)).
|
||||
* Fix: DISTINCT in subquery with analyzer [#61847](https://github.com/ClickHouse/ClickHouse/pull/61847) ([Igor Nikonov](https://github.com/devcrafter)).
|
||||
* Analyzer: fix limit/offset on shards [#61849](https://github.com/ClickHouse/ClickHouse/pull/61849) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
|
||||
* Remove PoolBase::AllocateNewBypassingPool [#61866](https://github.com/ClickHouse/ClickHouse/pull/61866) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Try to fix 02901_parallel_replicas_rollup with analyzer. [#61875](https://github.com/ClickHouse/ClickHouse/pull/61875) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Add test for [#57808](https://github.com/ClickHouse/ClickHouse/issues/57808) [#61879](https://github.com/ClickHouse/ClickHouse/pull/61879) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
|
||||
* CI: merge queue support [#61881](https://github.com/ClickHouse/ClickHouse/pull/61881) ([Max K.](https://github.com/maxknv)).
|
||||
* Update create.sql [#61885](https://github.com/ClickHouse/ClickHouse/pull/61885) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* no smaller unit in date_trunc [#61888](https://github.com/ClickHouse/ClickHouse/pull/61888) ([jsc0218](https://github.com/jsc0218)).
|
||||
* Move KQL trash where it is supposed to be [#61903](https://github.com/ClickHouse/ClickHouse/pull/61903) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Changelog for 24.3 [#61909](https://github.com/ClickHouse/ClickHouse/pull/61909) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Update version_date.tsv and changelogs after v23.3.22.3-lts [#61914](https://github.com/ClickHouse/ClickHouse/pull/61914) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Update version_date.tsv and changelogs after v23.8.12.13-lts [#61915](https://github.com/ClickHouse/ClickHouse/pull/61915) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* No "please" [#61916](https://github.com/ClickHouse/ClickHouse/pull/61916) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Update version_date.tsv and changelogs after v23.12.6.19-stable [#61917](https://github.com/ClickHouse/ClickHouse/pull/61917) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Update version_date.tsv and changelogs after v24.1.8.22-stable [#61918](https://github.com/ClickHouse/ClickHouse/pull/61918) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Fix flaky test_broken_projestions/test.py::test_broken_ignored_replic… [#61932](https://github.com/ClickHouse/ClickHouse/pull/61932) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Check if Rust is available for the build; if not, suggest a way to disable Rust support [#61938](https://github.com/ClickHouse/ClickHouse/pull/61938) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* CI: new ci menu in PR body [#61948](https://github.com/ClickHouse/ClickHouse/pull/61948) ([Max K.](https://github.com/maxknv)).
|
||||
* Remove flaky test `01193_metadata_loading` [#61961](https://github.com/ClickHouse/ClickHouse/pull/61961) ([Nikita Taranov](https://github.com/nickitat)).
|
||||
|
||||
#### Packaging Improvement
|
||||
|
||||
* Add the `--now` option to enable and start the service automatically when installing the database server. [#60656](https://github.com/ClickHouse/ClickHouse/pull/60656) ([Chun-Sheng, Li](https://github.com/peter279k)).
|
||||
|
29
docs/changelogs/v24.3.2.23-lts.md
Normal file
@ -0,0 +1,29 @@
|
||||
---
|
||||
sidebar_position: 1
|
||||
sidebar_label: 2024
|
||||
---
|
||||
|
||||
# 2024 Changelog
|
||||
|
||||
### ClickHouse release v24.3.2.23-lts (8b7d910960c) FIXME as compared to v24.3.1.2672-lts (2c5c589a882)
|
||||
|
||||
#### Bug Fix (user-visible misbehavior in an official stable release)
|
||||
|
||||
* Fix logical error in group_by_use_nulls + grouping set + analyzer + materialize/constant [#61567](https://github.com/ClickHouse/ClickHouse/pull/61567) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix external table cannot parse data type Bool [#62115](https://github.com/ClickHouse/ClickHouse/pull/62115) ([Duc Canh Le](https://github.com/canhld94)).
|
||||
* Revert "Merge pull request [#61564](https://github.com/ClickHouse/ClickHouse/issues/61564) from liuneng1994/optimize_in_single_value" [#62135](https://github.com/ClickHouse/ClickHouse/pull/62135) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
|
||||
#### CI Fix or Improvement (changelog entry is not required)
|
||||
|
||||
* Backported in [#62030](https://github.com/ClickHouse/ClickHouse/issues/62030):. [#61869](https://github.com/ClickHouse/ClickHouse/pull/61869) ([Nikita Fomichev](https://github.com/fm4v)).
|
||||
* Backported in [#62057](https://github.com/ClickHouse/ClickHouse/issues/62057): ... [#62044](https://github.com/ClickHouse/ClickHouse/pull/62044) ([Max K.](https://github.com/maxknv)).
|
||||
* Backported in [#62204](https://github.com/ClickHouse/ClickHouse/issues/62204):. [#62190](https://github.com/ClickHouse/ClickHouse/pull/62190) ([Konstantin Bogdanov](https://github.com/thevar1able)).
|
||||
|
||||
#### NOT FOR CHANGELOG / INSIGNIFICANT
|
||||
|
||||
* Fix some crashes with analyzer and group_by_use_nulls. [#61933](https://github.com/ClickHouse/ClickHouse/pull/61933) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix scalars create as select [#61998](https://github.com/ClickHouse/ClickHouse/pull/61998) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Ignore IfChainToMultiIfPass if returned type changed. [#62059](https://github.com/ClickHouse/ClickHouse/pull/62059) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Fix type for ConvertInToEqualPass [#62066](https://github.com/ClickHouse/ClickHouse/pull/62066) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
|
||||
* Revert output Pretty in tty [#62090](https://github.com/ClickHouse/ClickHouse/pull/62090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
|
@ -68,6 +68,12 @@ In the results of `SELECT` query, the values of `AggregateFunction` type have im
|
||||
|
||||
## Example of an Aggregated Materialized View {#example-of-an-aggregated-materialized-view}
|
||||
|
||||
The following examples assume that you have a database named `test`, so make sure you create it if it doesn't already exist:
|
||||
|
||||
```sql
|
||||
CREATE DATABASE test;
|
||||
```
|
||||
|
||||
We will create the table `test.visits` that contains the raw data:
|
||||
|
||||
``` sql
|
||||
@ -80,17 +86,24 @@ CREATE TABLE test.visits
|
||||
) ENGINE = MergeTree ORDER BY (StartDate, CounterID);
|
||||
```
|
||||
|
||||
Next, we need to create an `AggregatingMergeTree` table that will store `AggregateFunction`s that keep track of the total number of visits and the number of unique users.
|
||||
|
||||
`AggregatingMergeTree` materialized view that watches the `test.visits` table, and use the `AggregateFunction` type:
|
||||
|
||||
``` sql
|
||||
CREATE MATERIALIZED VIEW test.mv_visits
|
||||
(
|
||||
CREATE TABLE test.agg_visits (
|
||||
StartDate DateTime64 NOT NULL,
|
||||
CounterID UInt64,
|
||||
Visits AggregateFunction(sum, Nullable(Int32)),
|
||||
Users AggregateFunction(uniq, Nullable(Int32))
|
||||
)
|
||||
ENGINE = AggregatingMergeTree() ORDER BY (StartDate, CounterID)
|
||||
ENGINE = AggregatingMergeTree() ORDER BY (StartDate, CounterID);
|
||||
```
|
||||
|
||||
Then let's create a materialized view that populates `test.agg_visits` from `test.visits`:
|
||||
|
||||
```sql
|
||||
CREATE MATERIALIZED VIEW test.visits_mv TO test.agg_visits
|
||||
AS SELECT
|
||||
StartDate,
|
||||
CounterID,
|
||||
@ -104,25 +117,45 @@ Inserting data into the `test.visits` table.
|
||||
|
||||
``` sql
|
||||
INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
|
||||
VALUES (1667446031, 1, 3, 4)
|
||||
INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
|
||||
VALUES (1667446031, 1, 6, 3)
|
||||
VALUES (1667446031000, 1, 3, 4), (1667446031000, 1, 6, 3);
|
||||
```
|
||||
|
||||
The data is inserted in both the table and the materialized view `test.mv_visits`.
|
||||
The data is inserted in both `test.visits` and `test.agg_visits`.
|
||||
|
||||
To get the aggregated data, we need to execute a query such as `SELECT ... GROUP BY ...` against the table `test.agg_visits`:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT
|
||||
StartDate,
|
||||
sumMerge(Visits) AS Visits,
|
||||
uniqMerge(Users) AS Users
|
||||
FROM test.mv_visits
|
||||
FROM test.agg_visits
|
||||
GROUP BY StartDate
|
||||
ORDER BY StartDate;
|
||||
```
|
||||
|
||||
```text
|
||||
┌───────────────StartDate─┬─Visits─┬─Users─┐
|
||||
│ 2022-11-03 03:27:11.000 │ 9 │ 2 │
|
||||
└─────────────────────────┴────────┴───────┘
|
||||
```
|
||||
|
||||
Now let's add another couple of records to `test.visits`, but this time using a different timestamp for one of them:
|
||||
|
||||
```sql
|
||||
INSERT INTO test.visits (StartDate, CounterID, Sign, UserID)
|
||||
VALUES (1669446031000, 2, 5, 10), (1667446031000, 3, 7, 5);
|
||||
```
|
||||
|
||||
If we then run the `SELECT` query again, we'll see the following output:
|
||||
|
||||
```text
|
||||
┌───────────────StartDate─┬─Visits─┬─Users─┐
|
||||
│ 2022-11-03 03:27:11.000 │ 16 │ 3 │
|
||||
│ 2022-11-26 07:00:31.000 │ 5 │ 1 │
|
||||
└─────────────────────────┴────────┴───────┘
|
||||
```
|
||||
|
||||
## Related Content
|
||||
|
||||
- Blog: [Using Aggregate Combinators in ClickHouse](https://clickhouse.com/blog/aggregate-functions-combinators-in-clickhouse-for-arrays-maps-and-states)
|
||||
|
@ -45,6 +45,11 @@ Upper and lower bounds can be specified to limit Memory engine table size, effec
|
||||
CREATE TABLE memory (i UInt32) ENGINE = Memory SETTINGS min_rows_to_keep = 100, max_rows_to_keep = 1000;
|
||||
```
|
||||
|
||||
**Modify settings**
|
||||
```sql
|
||||
ALTER TABLE memory MODIFY SETTING min_rows_to_keep = 100, max_rows_to_keep = 1000;
|
||||
```
|
||||
|
||||
**Note:** Both `bytes` and `rows` capping parameters can be set at the same time; however, the lower bounds of `max` and `min` will be adhered to.
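For illustration, a minimal sketch that sets both kinds of caps on one table (the table name and the byte values below are placeholders, not recommendations):

```sql
-- both the byte caps and the row caps are applied to the same Memory table
CREATE TABLE memory_capped (i UInt32) ENGINE = Memory
SETTINGS min_bytes_to_keep = 4096, max_bytes_to_keep = 16384,
         min_rows_to_keep = 100, max_rows_to_keep = 1000;
```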
|
||||
|
||||
## Examples {#examples}
|
||||
@ -97,3 +102,4 @@ SELECT total_bytes, total_rows FROM system.tables WHERE name = 'memory' and data
|
||||
│ 65536 │ 10000 │
|
||||
└─────────────┴────────────┘
|
||||
```
|
||||
|
||||
|
@ -18,6 +18,9 @@ Run the command:
|
||||
|
||||
```bash
|
||||
wget https://s3.amazonaws.com/menusdata.nypl.org/gzips/2021_08_01_07_01_17_data.tgz
|
||||
# Option: Validate the checksum
|
||||
md5sum 2021_08_01_07_01_17_data.tgz
|
||||
# Checksum should be equal to: db6126724de939a5481e3160a2d67d15
|
||||
```
|
||||
|
||||
Replace the link with an up-to-date one from http://menus.nypl.org/data if needed.
|
||||
|
@ -7,7 +7,7 @@ title: "Crowdsourced air traffic data from The OpenSky Network 2020"
|
||||
|
||||
The data in this dataset is derived and cleaned from the full OpenSky dataset to illustrate the development of air traffic during the COVID-19 pandemic. It spans all flights seen by the network's more than 2500 members since 1 January 2019. More data will be periodically included in the dataset until the end of the COVID-19 pandemic.
|
||||
|
||||
Source: https://zenodo.org/record/5092942#.YRBCyTpRXYd
|
||||
Source: https://zenodo.org/records/5092942
|
||||
|
||||
Martin Strohmeier, Xavier Olive, Jannis Luebbe, Matthias Schaefer, and Vincent Lenders
|
||||
"Crowdsourced air traffic data from the OpenSky Network 2019–2020"
|
||||
@ -19,7 +19,7 @@ https://doi.org/10.5194/essd-13-357-2021
|
||||
Run the command:
|
||||
|
||||
```bash
|
||||
wget -O- https://zenodo.org/record/5092942 | grep -oP 'https://zenodo.org/record/5092942/files/flightlist_\d+_\d+\.csv\.gz' | xargs wget
|
||||
wget -O- https://zenodo.org/records/5092942 | grep -oE 'https://zenodo.org/records/5092942/files/flightlist_[0-9]+_[0-9]+\.csv\.gz' | xargs wget
|
||||
```
|
||||
|
||||
The download will take about 2 minutes with a good internet connection. There are 30 files with a total size of 4.3 GB.
|
||||
@ -127,15 +127,15 @@ Average flight distance is around 1000 km.
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT avg(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2)) FROM opensky;
|
||||
SELECT round(avg(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2)), 2) FROM opensky;
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```text
|
||||
┌─avg(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2))─┐
|
||||
│ 1041090.6465708319 │
|
||||
└────────────────────────────────────────────────────────────────────┘
|
||||
┌─round(avg(geoDistance(longitude_1, latitude_1, longitude_2, latitude_2)), 2)─┐
|
||||
1. │ 1041090.67 │ -- 1.04 million
|
||||
└──────────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Most busy origin airports and the average distance seen {#busy-airports-average-distance}
|
||||
|
@ -79,7 +79,7 @@ The supported formats are:
|
||||
| [RowBinary](#rowbinary) | ✔ | ✔ |
|
||||
| [RowBinaryWithNames](#rowbinarywithnamesandtypes) | ✔ | ✔ |
|
||||
| [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes) | ✔ | ✔ |
|
||||
| [RowBinaryWithDefaults](#rowbinarywithdefaults) | ✔ | ✔ |
|
||||
| [RowBinaryWithDefaults](#rowbinarywithdefaults) | ✔ | ✗ |
|
||||
| [Native](#native) | ✔ | ✔ |
|
||||
| [Null](#null) | ✗ | ✔ |
|
||||
| [XML](#xml) | ✗ | ✔ |
|
||||
@ -1270,12 +1270,13 @@ SELECT * FROM json_each_row_nested
|
||||
- [input_format_json_read_arrays_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_arrays_as_strings) - allow to parse JSON arrays as strings in JSON input formats. Default value - `true`.
|
||||
- [input_format_json_read_objects_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_objects_as_strings) - allow to parse JSON objects as strings in JSON input formats. Default value - `true`.
|
||||
- [input_format_json_named_tuples_as_objects](/docs/en/operations/settings/settings-formats.md/#input_format_json_named_tuples_as_objects) - parse named tuple columns as JSON objects. Default value - `true`.
|
||||
- [input_format_json_try_infer_numbers_from_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_try_infer_numbers_from_strings) - Try to infer numbers from string fields while schema inference. Default value - `false`.
|
||||
- [input_format_json_try_infer_numbers_from_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_try_infer_numbers_from_strings) - try to infer numbers from string fields while schema inference. Default value - `false`.
|
||||
- [input_format_json_try_infer_named_tuples_from_objects](/docs/en/operations/settings/settings-formats.md/#input_format_json_try_infer_named_tuples_from_objects) - try to infer named tuple from JSON objects during schema inference. Default value - `true`.
|
||||
- [input_format_json_infer_incomplete_types_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_infer_incomplete_types_as_strings) - use type String for keys that contains only Nulls or empty objects/arrays during schema inference in JSON input formats. Default value - `true`.
|
||||
- [input_format_json_defaults_for_missing_elements_in_named_tuple](/docs/en/operations/settings/settings-formats.md/#input_format_json_defaults_for_missing_elements_in_named_tuple) - insert default values for missing elements in JSON object while parsing named tuple. Default value - `true`.
|
||||
- [input_format_json_ignore_unknown_keys_in_named_tuple](/docs/en/operations/settings/settings-formats.md/#input_format_json_ignore_unknown_keys_in_named_tuple) - Ignore unknown keys in json object for named tuples. Default value - `false`.
|
||||
- [input_format_json_ignore_unknown_keys_in_named_tuple](/docs/en/operations/settings/settings-formats.md/#input_format_json_ignore_unknown_keys_in_named_tuple) - ignore unknown keys in json object for named tuples. Default value - `false`.
|
||||
- [input_format_json_compact_allow_variable_number_of_columns](/docs/en/operations/settings/settings-formats.md/#input_format_json_compact_allow_variable_number_of_columns) - allow variable number of columns in JSONCompact/JSONCompactEachRow format, ignore extra columns and use default values on missing columns. Default value - `false`.
|
||||
- [input_format_json_throw_on_bad_escape_sequence](/docs/en/operations/settings/settings-formats.md/#input_format_json_throw_on_bad_escape_sequence) - throw an exception if JSON string contains bad escape sequence. If disabled, bad escape sequences will remain as is in the data. Default value - `true`.
|
||||
- [output_format_json_quote_64bit_integers](/docs/en/operations/settings/settings-formats.md/#output_format_json_quote_64bit_integers) - controls quoting of 64-bit integers in JSON output format. Default value - `true`.
|
||||
- [output_format_json_quote_64bit_floats](/docs/en/operations/settings/settings-formats.md/#output_format_json_quote_64bit_floats) - controls quoting of 64-bit floats in JSON output format. Default value - `false`.
|
||||
- [output_format_json_quote_denormals](/docs/en/operations/settings/settings-formats.md/#output_format_json_quote_denormals) - enables '+nan', '-nan', '+inf', '-inf' outputs in JSON output format. Default value - `false`.
|
||||
@ -1486,7 +1487,7 @@ Differs from [PrettySpaceNoEscapes](#prettyspacenoescapes) in that up to 10,000
|
||||
- [output_format_pretty_max_value_width](/docs/en/operations/settings/settings-formats.md/#output_format_pretty_max_value_width) - Maximum width of value to display in Pretty formats. If greater - it will be cut. Default value - `10000`.
|
||||
- [output_format_pretty_color](/docs/en/operations/settings/settings-formats.md/#output_format_pretty_color) - use ANSI escape sequences to paint colors in Pretty formats. Default value - `true`.
|
||||
- [output_format_pretty_grid_charset](/docs/en/operations/settings/settings-formats.md/#output_format_pretty_grid_charset) - Charset for printing grid borders. Available charsets: ASCII, UTF-8. Default value - `UTF-8`.
|
||||
- [output_format_pretty_row_numbers](/docs/en/operations/settings/settings-formats.md/#output_format_pretty_row_numbers) - Add row numbers before each row for pretty output format. Default value - `false`.
|
||||
- [output_format_pretty_row_numbers](/docs/en/operations/settings/settings-formats.md/#output_format_pretty_row_numbers) - Add row numbers before each row for pretty output format. Default value - `true`.
|
||||
|
||||
## RowBinary {#rowbinary}
|
||||
|
||||
@ -2356,7 +2357,7 @@ You can select data from a ClickHouse table and save them into some file in the
|
||||
$ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Arrow" > {filename.arrow}
|
||||
```
|
||||
|
||||
### Arrow format settings {#parquet-format-settings}
|
||||
### Arrow format settings {#arrow-format-settings}
|
||||
|
||||
- [output_format_arrow_low_cardinality_as_dictionary](/docs/en/operations/settings/settings-formats.md/#output_format_arrow_low_cardinality_as_dictionary) - enable output ClickHouse LowCardinality type as Dictionary Arrow type. Default value - `false`.
|
||||
- [output_format_arrow_use_64_bit_indexes_for_dictionary](/docs/en/operations/settings/settings-formats.md/#output_format_arrow_use_64_bit_indexes_for_dictionary) - use 64-bit integer type for Dictionary indexes. Default value - `false`.
|
||||
@ -2464,7 +2465,7 @@ Result:
|
||||
|
||||
## Npy {#data-format-npy}
|
||||
|
||||
This function is designed to load a NumPy array from a .npy file into ClickHouse. The NumPy file format is a binary format used for efficiently storing arrays of numerical data. During import, ClickHouse treats top level dimension as an array of rows with single column. Supported Npy data types and their corresponding type in ClickHouse:
|
||||
This function is designed to load a NumPy array from a .npy file into ClickHouse. The NumPy file format is a binary format used for efficiently storing arrays of numerical data. During import, ClickHouse treats top level dimension as an array of rows with single column. Supported Npy data types and their corresponding type in ClickHouse:
|
||||
| Npy type | ClickHouse type |
|
||||
|:--------:|:---------------:|
|
||||
| b1 | UInt8 |
|
||||
|
@ -507,16 +507,18 @@ Example:
|
||||
``` xml
|
||||
<http_handlers>
|
||||
<rule>
|
||||
<url><![CDATA[/query_param_with_url/\w+/(?P<name_1>[^/]+)(/(?P<name_2>[^/]+))?]]></url>
|
||||
<url><![CDATA[regex:/query_param_with_url/(?P<name_1>[^/]+)]]></url>
|
||||
<methods>GET</methods>
|
||||
<headers>
|
||||
<XXX>TEST_HEADER_VALUE</XXX>
|
||||
<PARAMS_XXX><![CDATA[(?P<name_1>[^/]+)(/(?P<name_2>[^/]+))?]]></PARAMS_XXX>
|
||||
<PARAMS_XXX><![CDATA[regex:(?P<name_2>[^/]+)]]></PARAMS_XXX>
|
||||
</headers>
|
||||
<handler>
|
||||
<type>predefined_query_handler</type>
|
||||
<query>SELECT value FROM system.settings WHERE name = {name_1:String}</query>
|
||||
<query>SELECT name, value FROM system.settings WHERE name = {name_2:String}</query>
|
||||
<query>
|
||||
SELECT name, value FROM system.settings
|
||||
WHERE name IN ({name_1:String}, {name_2:String})
|
||||
</query>
|
||||
</handler>
|
||||
</rule>
|
||||
<defaults/>
|
||||
@ -524,13 +526,13 @@ Example:
|
||||
```
|
||||
|
||||
``` bash
|
||||
$ curl -H 'XXX:TEST_HEADER_VALUE' -H 'PARAMS_XXX:max_threads' 'http://localhost:8123/query_param_with_url/1/max_threads/max_final_threads?max_threads=1&max_final_threads=2'
|
||||
1
|
||||
max_final_threads 2
|
||||
$ curl -H 'XXX:TEST_HEADER_VALUE' -H 'PARAMS_XXX:max_final_threads' 'http://localhost:8123/query_param_with_url/max_threads?max_threads=1&max_final_threads=2'
|
||||
max_final_threads 2
|
||||
max_threads 1
|
||||
```
|
||||
|
||||
:::note
|
||||
In one `predefined_query_handler` only supports one `query` of an insert type.
|
||||
Only one `query` is supported in a single `predefined_query_handler`.
|
||||
:::
|
||||
|
||||
### dynamic_query_handler {#dynamic_query_handler}
|
||||
|
@ -67,8 +67,7 @@ SETTINGS use_query_cache = true, enable_writes_to_query_cache = false;
|
||||
|
||||
For maximum control, it is generally recommended to provide settings `use_query_cache`, `enable_writes_to_query_cache` and
|
||||
`enable_reads_from_query_cache` only with specific queries. It is also possible to enable caching at user or profile level (e.g. via `SET
|
||||
use_query_cache = true`) but one should keep in mind that all `SELECT` queries including monitoring or debugging queries to system tables
|
||||
may return cached results then.
|
||||
use_query_cache = true`) but one should keep in mind that all `SELECT` queries may return cached results then.
|
||||
|
||||
The query cache can be cleared using statement `SYSTEM DROP QUERY CACHE`. The content of the query cache is displayed in system table
|
||||
[system.query_cache](system-tables/query_cache.md). The number of query cache hits and misses since database start are shown as events
|
||||
@ -175,6 +174,10 @@ Also, results of queries with non-deterministic functions are not cached by defa
|
||||
To force caching of results of queries with non-deterministic functions regardless, use setting
|
||||
[query_cache_nondeterministic_function_handling](settings/settings.md#query-cache-nondeterministic-function-handling).
|
||||
|
||||
Results of queries that involve system tables, e.g. `system.processes` or `information_schema.tables`, are not cached by default. To force
|
||||
caching of results of queries with system tables regardless, use setting
|
||||
[query_cache_system_table_handling](settings/settings.md#query-cache-system-table-handling).
|
||||
|
||||
:::note
|
||||
Prior to ClickHouse v23.11, setting 'query_cache_store_results_of_queries_with_nondeterministic_functions = 0 / 1' controlled whether
|
||||
results of queries with non-deterministic results were cached. In newer ClickHouse versions, this setting is obsolete and has no effect.
|
||||
|
@ -436,7 +436,7 @@ Default: 0
|
||||
Restriction on dropping partitions.
|
||||
|
||||
If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_partition_size_to_drop` (in bytes), you can’t drop a partition using a [DROP PARTITION](../../sql-reference/statements/alter/partition.md#drop-partitionpart) query.
|
||||
This setting does not require a restart of the Clickhouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
This setting does not require a restart of the ClickHouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
Default value: 50 GB.
|
||||
The value 0 means that you can drop partitions without any restrictions.
|
||||
|
||||
@ -518,7 +518,7 @@ Restriction on deleting tables.
|
||||
|
||||
If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_table_size_to_drop` (in bytes), you can’t delete it using a [DROP](../../sql-reference/statements/drop.md) query or [TRUNCATE](../../sql-reference/statements/truncate.md) query.
|
||||
|
||||
This setting does not require a restart of the Clickhouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
This setting does not require a restart of the ClickHouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
|
||||
Default value: 50 GB.
|
||||
The value 0 means that you can delete all tables without any restrictions.
|
||||
@ -945,9 +945,9 @@ Hard limit is configured via system tools
|
||||
|
||||
## database_atomic_delay_before_drop_table_sec {#database_atomic_delay_before_drop_table_sec}
|
||||
|
||||
Sets the delay before remove table data in seconds. If the query has `SYNC` modifier, this setting is ignored.
|
||||
The delay during which a dropped table can be restored using the [UNDROP](/docs/en/sql-reference/statements/undrop.md) statement. If `DROP TABLE` ran with a `SYNC` modifier, the setting is ignored.
|
||||
|
||||
Default value: `480` (8 minute).
|
||||
Default value: `480` (8 minutes).
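As a sketch of the intended workflow (the table name is illustrative), a dropped table stays restorable for this many seconds unless `SYNC` was used:

```sql
DROP TABLE db.my_table;      -- data is kept for database_atomic_delay_before_drop_table_sec seconds
UNDROP TABLE db.my_table;    -- restores the table if the delay has not yet expired
```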
|
||||
|
||||
## database_catalog_unused_dir_hide_timeout_sec {#database_catalog_unused_dir_hide_timeout_sec}
|
||||
|
||||
@ -1354,6 +1354,7 @@ Keys:
|
||||
- `count` – The number of archived log files that ClickHouse stores.
|
||||
- `console` – Send `log` and `errorlog` to the console instead of file. To enable, set to `1` or `true`.
|
||||
- `stream_compress` – Compress `log` and `errorlog` with `lz4` stream compression. To enable, set to `1` or `true`.
|
||||
- `formatting` – Specify log format to be printed in console log (currently only `json` supported).
|
||||
|
||||
Both log and error log file names (only file names, not directories) support date and time format specifiers.
|
||||
|
||||
@ -1422,6 +1423,8 @@ Writing to the console can be configured. Config example:
|
||||
</logger>
|
||||
```
|
||||
|
||||
### syslog
|
||||
|
||||
Writing to the syslog is also supported. Config example:
|
||||
|
||||
``` xml
|
||||
@ -1445,6 +1448,52 @@ Keys for syslog:
|
||||
Default value: `LOG_USER` if `address` is specified, `LOG_DAEMON` otherwise.
|
||||
- format – Message format. Possible values: `bsd` and `syslog.`
|
||||
|
||||
### Log formats
|
||||
|
||||
You can specify the log format that will be outputted in the console log. Currently, only JSON is supported. Here is an example of an output JSON log:
|
||||
|
||||
```json
|
||||
{
|
||||
"date_time": "1650918987.180175",
|
||||
"thread_name": "#1",
|
||||
"thread_id": "254545",
|
||||
"level": "Trace",
|
||||
"query_id": "",
|
||||
"logger_name": "BaseDaemon",
|
||||
"message": "Received signal 2",
|
||||
"source_file": "../base/daemon/BaseDaemon.cpp; virtual void SignalListener::run()",
|
||||
"source_line": "192"
|
||||
}
|
||||
```
|
||||
To enable JSON logging support, use the following snippet:
|
||||
|
||||
```xml
|
||||
<logger>
|
||||
<formatting>
|
||||
<type>json</type>
|
||||
<names>
|
||||
<date_time>date_time</date_time>
|
||||
<thread_name>thread_name</thread_name>
|
||||
<thread_id>thread_id</thread_id>
|
||||
<level>level</level>
|
||||
<query_id>query_id</query_id>
|
||||
<logger_name>logger_name</logger_name>
|
||||
<message>message</message>
|
||||
<source_file>source_file</source_file>
|
||||
<source_line>source_line</source_line>
|
||||
</names>
|
||||
</formatting>
|
||||
</logger>
|
||||
```
|
||||
|
||||
**Renaming keys for JSON logs**
|
||||
|
||||
Key names can be modified by changing tag values inside the `<names>` tag. For example, to change `DATE_TIME` to `MY_DATE_TIME`, you can use `<date_time>MY_DATE_TIME</date_time>`.
|
||||
|
||||
**Omitting keys for JSON logs**
|
||||
|
||||
Log properties can be omitted by commenting out the property. For example, if you do not want your log to print `query_id`, you can comment out the `<query_id>` tag.
|
||||
|
||||
## send_crash_reports {#send_crash_reports}
|
||||
|
||||
Settings for opt-in sending crash reports to the ClickHouse core developers team via [Sentry](https://sentry.io).
|
||||
@ -1521,7 +1570,7 @@ Restriction on deleting tables.
|
||||
|
||||
If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_table_size_to_drop` (in bytes), you can’t delete it using a [DROP](../../sql-reference/statements/drop.md) query or [TRUNCATE](../../sql-reference/statements/truncate.md) query.
|
||||
|
||||
This setting does not require a restart of the Clickhouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
This setting does not require a restart of the ClickHouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
|
||||
Default value: 50 GB.
|
||||
|
||||
@ -1539,7 +1588,7 @@ Restriction on dropping partitions.
|
||||
|
||||
If the size of a [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) table exceeds `max_partition_size_to_drop` (in bytes), you can’t drop a partition using a [DROP PARTITION](../../sql-reference/statements/alter/partition.md#drop-partitionpart) query.
|
||||
|
||||
This setting does not require a restart of the Clickhouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
This setting does not require a restart of the ClickHouse server to apply. Another way to disable the restriction is to create the `<clickhouse-path>/flags/force_drop_table` file.
|
||||
|
||||
Default value: 50 GB.
|
||||
|
||||
|
@ -287,7 +287,7 @@ Default value: 0 (seconds)
|
||||
|
||||
## remote_fs_execute_merges_on_single_replica_time_threshold
|
||||
|
||||
When this setting has a value greater than than zero only a single replica starts the merge immediately if merged part on shared storage and `allow_remote_fs_zero_copy_replication` is enabled.
|
||||
When this setting has a value greater than zero only a single replica starts the merge immediately if merged part on shared storage and `allow_remote_fs_zero_copy_replication` is enabled.
|
||||
|
||||
:::note Zero-copy replication is not ready for production
|
||||
Zero-copy replication is disabled by default in ClickHouse version 22.8 and higher. This feature is not recommended for production use.
|
||||
|
@ -651,6 +651,12 @@ This setting works only when setting `input_format_json_named_tuples_as_objects`
|
||||
|
||||
Enabled by default.
|
||||
|
||||
## input_format_json_throw_on_bad_escape_sequence {#input_format_json_throw_on_bad_escape_sequence}
|
||||
|
||||
Throw an exception if JSON string contains bad escape sequence in JSON input formats. If disabled, bad escape sequences will remain as is in the data.
|
||||
|
||||
Enabled by default.
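
A minimal sketch of the difference (the input string is hypothetical); with the setting disabled, the bad `\u` escape is kept as-is instead of raising an exception:

```sql
-- Throws by default because "\u" without hex digits is not a valid JSON escape sequence
SELECT * FROM format(JSONEachRow, '{"s" : "bad \\u escape"}');

-- Keeps the bad escape sequence as-is in the parsed value
SELECT * FROM format(JSONEachRow, '{"s" : "bad \\u escape"}')
SETTINGS input_format_json_throw_on_bad_escape_sequence = 0;
```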
|
||||
|
||||
## output_format_json_array_of_rows {#output_format_json_array_of_rows}
|
||||
|
||||
Enables the ability to output all rows as a JSON array in the [JSONEachRow](../../interfaces/formats.md/#jsoneachrow) format.
|
||||
@ -1367,7 +1373,7 @@ Default value: `1'000'000`.
|
||||
|
||||
While importing data, when a column is not found in the schema, the default value will be used instead of raising an error.
|
||||
|
||||
Disabled by default.
|
||||
Enabled by default.
|
||||
|
||||
### input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference {#input_format_parquet_skip_columns_with_unsupported_types_in_schema_inference}
|
||||
|
||||
@ -1636,7 +1642,7 @@ Possible values:
|
||||
- 0 — Output without row numbers.
|
||||
- 1 — Output with row numbers.
|
||||
|
||||
Default value: `0`.
|
||||
Default value: `1`.
|
||||
|
||||
**Example**
|
||||
|
||||
|
@ -1689,6 +1689,18 @@ Possible values:
|
||||
|
||||
Default value: `throw`.
|
||||
|
||||
## query_cache_system_table_handling {#query-cache-system-table-handling}
|
||||
|
||||
Controls how the [query cache](../query-cache.md) handles `SELECT` queries against system tables, i.e. tables in databases `system.*` and `information_schema.*`.
|
||||
|
||||
Possible values:
|
||||
|
||||
- `'throw'` - Throw an exception and don't cache the query result.
|
||||
- `'save'` - Cache the query result.
|
||||
- `'ignore'` - Don't cache the query result and don't throw an exception.
|
||||
|
||||
Default value: `throw`.
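
For example, a query over a system table can be cached explicitly instead of raising an exception (a sketch, assuming the query cache is enabled for the query via `use_query_cache`):

```sql
-- With the default 'throw', this query would fail when use_query_cache = 1
SELECT count() FROM system.tables
SETTINGS use_query_cache = 1, query_cache_system_table_handling = 'save';
```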
|
||||
|
||||
## query_cache_min_query_runs {#query-cache-min-query-runs}
|
||||
|
||||
Minimum number of times a `SELECT` query must run before its result is stored in the [query cache](../query-cache.md).
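
A brief sketch (the `hits` table is hypothetical): the result only enters the cache once the same query has run the configured number of times:

```sql
-- The result is stored in the query cache only after the query has run at least 3 times
SELECT count() FROM hits
SETTINGS use_query_cache = 1, query_cache_min_query_runs = 3;
```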
|
||||
@ -1776,7 +1788,7 @@ Default value: 0 (no restriction).
|
||||
## insert_quorum {#insert_quorum}
|
||||
|
||||
:::note
|
||||
This setting is not applicable to SharedMergeTree, see [SharedMergeTree consistency](/docs/en/cloud/reference/shared-merge-tree/#consistency) for more information.
|
||||
This setting is not applicable to SharedMergeTree, see [SharedMergeTree consistency](/docs/en/cloud/reference/shared-merge-tree/#consistency) for more information.
|
||||
:::
|
||||
|
||||
Enables the quorum writes.
|
||||
@ -1819,7 +1831,7 @@ See also:
|
||||
## insert_quorum_parallel {#insert_quorum_parallel}
|
||||
|
||||
:::note
|
||||
This setting is not applicable to SharedMergeTree, see [SharedMergeTree consistency](/docs/en/cloud/reference/shared-merge-tree/#consistency) for more information.
|
||||
This setting is not applicable to SharedMergeTree, see [SharedMergeTree consistency](/docs/en/cloud/reference/shared-merge-tree/#consistency) for more information.
|
||||
:::
|
||||
|
||||
Enables or disables parallelism for quorum `INSERT` queries. If enabled, additional `INSERT` queries can be sent while previous queries have not yet finished. If disabled, additional writes to the same table will be rejected.
|
||||
@ -1840,7 +1852,7 @@ See also:
|
||||
## select_sequential_consistency {#select_sequential_consistency}
|
||||
|
||||
:::note
|
||||
This setting differ in behavior between SharedMergeTree and ReplicatedMergeTree, see [SharedMergeTree consistency](/docs/en/cloud/reference/shared-merge-tree/#consistency) for more information about the behavior of `select_sequential_consistency` in SharedMergeTree.
|
||||
This setting differ in behavior between SharedMergeTree and ReplicatedMergeTree, see [SharedMergeTree consistency](/docs/en/cloud/reference/shared-merge-tree/#consistency) for more information about the behavior of `select_sequential_consistency` in SharedMergeTree.
|
||||
:::
|
||||
|
||||
Enables or disables sequential consistency for `SELECT` queries. Requires `insert_quorum_parallel` to be disabled (enabled by default).
|
||||
@ -2817,6 +2829,17 @@ Possible values:
|
||||
|
||||
Default value: 0.
|
||||
|
||||
## distributed_insert_skip_read_only_replicas {#distributed_insert_skip_read_only_replicas}
|
||||
|
||||
Enables skipping read-only replicas for INSERT queries into Distributed.
|
||||
|
||||
Possible values:
|
||||
|
||||
- 0 — INSERT proceeds as usual; if it goes to a read-only replica, it fails.
|
||||
- 1 — Initiator will skip read-only replicas before sending data to shards.
|
||||
|
||||
Default value: `0`
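
A minimal sketch (the `dist` table is hypothetical): with the setting enabled, the initiator skips replicas that are currently read-only instead of failing the INSERT:

```sql
SET distributed_insert_skip_read_only_replicas = 1;

-- Read-only replicas of the underlying shards are skipped when forwarding the data
INSERT INTO dist VALUES (1);
```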
|
||||
|
||||
## distributed_foreground_insert {#distributed_foreground_insert}
|
||||
|
||||
Enables or disables synchronous data insertion into a [Distributed](../../engines/table-engines/special/distributed.md/#distributed) table.
|
||||
@ -5291,7 +5314,7 @@ SETTINGS(dictionary_use_async_executor=1, max_threads=8);
|
||||
## storage_metadata_write_full_object_key {#storage_metadata_write_full_object_key}
|
||||
|
||||
When set to `true` the metadata files are written with `VERSION_FULL_OBJECT_KEY` format version. With that format full object storage key names are written to the metadata files.
|
||||
When set to `false` the metadata files are written with the previous format version, `VERSION_INLINE_DATA`. With that format only suffixes of object storage key names are are written to the metadata files. The prefix for all of object storage key names is set in configurations files at `storage_configuration.disks` section.
|
||||
When set to `false` the metadata files are written with the previous format version, `VERSION_INLINE_DATA`. With that format only suffixes of object storage key names are written to the metadata files. The prefix for all of object storage key names is set in configurations files at `storage_configuration.disks` section.
|
||||
|
||||
Default value: `false`.
|
||||
|
||||
@ -5442,3 +5465,7 @@ Enabling this setting can lead to incorrect result as in case of evolved schema
|
||||
:::
|
||||
|
||||
Default value: 'false'.
|
||||
|
||||
## allow_suspicious_primary_key {#allow_suspicious_primary_key}
|
||||
|
||||
Allow suspicious `PRIMARY KEY`/`ORDER BY` for MergeTree (i.e. SimpleAggregateFunction).
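
A sketch of the kind of definition this setting permits (hypothetical table); without the setting, the `SimpleAggregateFunction` column in `ORDER BY` is rejected as suspicious:

```sql
SET allow_suspicious_primary_key = 1;

CREATE TABLE test_sa
(
    id UInt64,
    v SimpleAggregateFunction(sum, UInt64)
)
ENGINE = AggregatingMergeTree
ORDER BY (id, v);
```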
|
||||
|
@ -36,7 +36,7 @@ E.g. configuration option
|
||||
<s3>
|
||||
<type>s3</type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3>
|
||||
```
|
||||
|
||||
@ -47,7 +47,7 @@ is equal to configuration (from `24.1`):
|
||||
<object_storage_type>s3</object_storage_type>
|
||||
<metadata_type>local</metadata_type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3>
|
||||
```
|
||||
|
||||
@ -56,7 +56,7 @@ Configuration
|
||||
<s3_plain>
|
||||
<type>s3_plain</type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3_plain>
|
||||
```
|
||||
|
||||
@ -67,7 +67,7 @@ is equal to
|
||||
<object_storage_type>s3</object_storage_type>
|
||||
<metadata_type>plain</metadata_type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3_plain>
|
||||
```
|
||||
|
||||
@ -79,7 +79,7 @@ Example of full storage configuration will look like:
|
||||
<s3>
|
||||
<type>s3</type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3>
|
||||
</disks>
|
||||
<policies>
|
||||
@ -105,7 +105,7 @@ Starting with 24.1 clickhouse version, it can also look like:
|
||||
<object_storage_type>s3</object_storage_type>
|
||||
<metadata_type>local</metadata_type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3>
|
||||
</disks>
|
||||
<policies>
|
||||
@ -324,7 +324,7 @@ Configuration:
|
||||
<s3_plain>
|
||||
<type>s3_plain</type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3_plain>
|
||||
```
|
||||
|
||||
@ -337,7 +337,7 @@ Configuration:
|
||||
<object_storage_type>azure</object_storage_type>
|
||||
<metadata_type>plain</metadata_type>
|
||||
<endpoint>https://s3.eu-west-1.amazonaws.com/clickhouse-eu-west-1.clickhouse.com/data/</endpoint>
|
||||
<use_invironment_credentials>1</use_invironment_credentials>
|
||||
<use_environment_credentials>1</use_environment_credentials>
|
||||
</s3_plain>
|
||||
```
|
||||
|
||||
@ -520,13 +520,13 @@ Example of configuration for versions later or equal to 22.8:
|
||||
</cache>
|
||||
</disks>
|
||||
<policies>
|
||||
<s3-cache>
|
||||
<s3_cache>
|
||||
<volumes>
|
||||
<main>
|
||||
<disk>cache</disk>
|
||||
</main>
|
||||
</volumes>
|
||||
</s3-cache>
|
||||
</s3_cache>
|
||||
<policies>
|
||||
</storage_configuration>
|
||||
```
|
||||
@ -546,13 +546,13 @@ Example of configuration for versions earlier than 22.8:
|
||||
</s3>
|
||||
</disks>
|
||||
<policies>
|
||||
<s3-cache>
|
||||
<s3_cache>
|
||||
<volumes>
|
||||
<main>
|
||||
<disk>s3</disk>
|
||||
</main>
|
||||
</volumes>
|
||||
</s3-cache>
|
||||
</s3_cache>
|
||||
<policies>
|
||||
</storage_configuration>
|
||||
```
|
||||
|
@ -7,6 +7,7 @@ Contains logging entries with information about various blob storage operations
|
||||
|
||||
Columns:
|
||||
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Date of the event.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Time of the event.
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Time of the event with microseconds precision.
|
||||
@ -38,6 +39,7 @@ SELECT * FROM system.blob_storage_log WHERE query_id = '7afe0450-504d-4e4b-9a80-
|
||||
```text
|
||||
Row 1:
|
||||
──────
|
||||
hostname: clickhouse.eu-central1.internal
|
||||
event_date: 2023-10-31
|
||||
event_time: 2023-10-31 16:03:40
|
||||
event_time_microseconds: 2023-10-31 16:03:40.481437
|
||||
|
@ -47,7 +47,7 @@ An example:
|
||||
<engine>ENGINE = MergeTree PARTITION BY toYYYYMM(event_date) ORDER BY (event_date, event_time) SETTINGS index_granularity = 1024</engine>
|
||||
-->
|
||||
<flush_interval_milliseconds>7500</flush_interval_milliseconds>
|
||||
<max_size_rows>1048576</max_size>
|
||||
<max_size_rows>1048576</max_size_rows>
|
||||
<reserved_size_rows>8192</reserved_size_rows>
|
||||
<buffer_size_rows_flush_threshold>524288</buffer_size_rows_flush_threshold>
|
||||
<flush_on_crash>false</flush_on_crash>
|
||||
|
@ -483,7 +483,7 @@ Where:
|
||||
|
||||
- `r1`- the number of unique visitors who visited the site during 2020-01-01 (the `cond1` condition).
|
||||
- `r2`- the number of unique visitors who visited the site during a specific time period between 2020-01-01 and 2020-01-02 (`cond1` and `cond2` conditions).
|
||||
- `r3`- the number of unique visitors who visited the site during a specific time period between 2020-01-01 and 2020-01-03 (`cond1` and `cond3` conditions).
|
||||
- `r3`- the number of unique visitors who visited the site during a specific time period on 2020-01-01 and 2020-01-03 (`cond1` and `cond3` conditions).
|
||||
|
||||
## uniqUpTo(N)(x)
|
||||
|
||||
|
@ -7,26 +7,33 @@ sidebar_position: 351
|
||||
|
||||
[Cramer's V](https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_V) (sometimes referred to as Cramer's phi) is a measure of association between two columns in a table. The result of the `cramersV` function ranges from 0 (corresponding to no association between the variables) to 1 and can reach 1 only when each value is completely determined by the other. It may be viewed as the association between two variables as a percentage of their maximum possible variation.
|
||||
|
||||
:::note
|
||||
For a bias corrected version of Cramer's V see: [cramersVBiasCorrected](./cramersvbiascorrected.md)
|
||||
:::
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
cramersV(column1, column2)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
**Parameters**
|
||||
|
||||
- `column1` and `column2` are the columns to be compared
|
||||
- `column1`: first column to be compared.
|
||||
- `column2`: second column to be compared.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A value between 0 (corresponding to no association between the columns' values) and 1 (complete association).
|
||||
|
||||
**Return type** is always [Float64](../../../sql-reference/data-types/float.md).
|
||||
Type: always [Float64](../../../sql-reference/data-types/float.md).
|
||||
|
||||
**Example**
|
||||
|
||||
The following two columns being compared below have no association with each other, so the result of `cramersV` is 0:
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
cramersV(a, b)
|
||||
|
@ -5,31 +5,31 @@ sidebar_position: 352
|
||||
|
||||
# cramersVBiasCorrected
|
||||
|
||||
|
||||
Cramer's V is a measure of association between two columns in a table. The result of the [`cramersV` function](./cramersv.md) ranges from 0 (corresponding to no association between the variables) to 1 and can reach 1 only when each value is completely determined by the other. The function can be heavily biased, so this version of Cramer's V uses the [bias correction](https://en.wikipedia.org/wiki/Cram%C3%A9r%27s_V#Bias_correction).
|
||||
|
||||
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
cramersVBiasCorrected(column1, column2)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
**Parameters**
|
||||
|
||||
- `column1` and `column2` are the columns to be compared
|
||||
- `column1`: first column to be compared.
|
||||
- `column2`: second column to be compared.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A value between 0 (corresponding to no association between the columns' values) and 1 (complete association).
|
||||
|
||||
**Return type** is always [Float64](../../../sql-reference/data-types/float.md).
|
||||
Type: always [Float64](../../../sql-reference/data-types/float.md).
|
||||
|
||||
**Example**
|
||||
|
||||
The following two columns being compared below have a small association with each other. Notice the result of `cramersVBiasCorrected` is smaller than the result of `cramersV`:
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
cramersV(a, b),
|
||||
|
@ -15,9 +15,9 @@ The `uniqCombined` function is a good choice for calculating the number of diffe
|
||||
|
||||
**Arguments**
|
||||
|
||||
The function takes a variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types.
|
||||
- `HLL_precision`: The base-2 logarithm of the number of cells in [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog). Optional, you can use the function as `uniqCombined(x[, ...])`. The default value for `HLL_precision` is 17, which is effectively 96 KiB of space (2^17 cells, 6 bits each).
|
||||
- `X`: A variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types.
|
||||
|
||||
`HLL_precision` is the base-2 logarithm of the number of cells in [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog). Optional, you can use the function as `uniqCombined(x[, ...])`. The default value for `HLL_precision` is 17, which is effectively 96 KiB of space (2^17 cells, 6 bits each).
|
||||
|
||||
**Returned value**
|
||||
|
||||
@ -25,26 +25,43 @@ The function takes a variable number of parameters. Parameters can be `Tuple`, `
|
||||
|
||||
**Implementation details**
|
||||
|
||||
Function:
|
||||
The `uniqCombined` function:
|
||||
|
||||
- Calculates a hash (64-bit hash for `String` and 32-bit otherwise) for all parameters in the aggregate, then uses it in calculations.
|
||||
|
||||
- Uses a combination of three algorithms: array, hash table, and HyperLogLog with an error correction table.
|
||||
|
||||
For a small number of distinct elements, an array is used. When the set size is larger, a hash table is used. For a larger number of elements, HyperLogLog is used, which will occupy a fixed amount of memory.
|
||||
|
||||
- For a small number of distinct elements, an array is used.
|
||||
- When the set size is larger, a hash table is used.
|
||||
- For a larger number of elements, HyperLogLog is used, which will occupy a fixed amount of memory.
|
||||
- Provides the result deterministically (it does not depend on the query processing order).
|
||||
|
||||
:::note
|
||||
Since it uses 32-bit hash for non-`String` type, the result will have very high error for cardinalities significantly larger than `UINT_MAX` (error will raise quickly after a few tens of billions of distinct values), hence in this case you should use [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64)
|
||||
Since it uses a 32-bit hash for non-`String` types, the result will have very high error for cardinalities significantly larger than `UINT_MAX` (error will raise quickly after a few tens of billions of distinct values), hence in this case you should use [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64).
|
||||
:::
|
||||
|
||||
Compared to the [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) function, the `uniqCombined`:
|
||||
Compared to the [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) function, the `uniqCombined` function:
|
||||
|
||||
- Consumes several times less memory.
|
||||
- Calculates with several times higher accuracy.
|
||||
- Usually has slightly lower performance. In some scenarios, `uniqCombined` can perform better than `uniq`, for example, with distributed queries that transmit a large number of aggregation states over the network.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT uniqCombined(number) FROM numbers(1e6);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
┌─uniqCombined(number)─┐
|
||||
│ 1001148 │ -- 1.00 million
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
See the example section of [uniqCombined64](../../../sql-reference/aggregate-functions/reference/uniqcombined64.md#agg_function-uniqcombined64) for an example of the difference between `uniqCombined` and `uniqCombined64` for much larger inputs.
|
||||
|
||||
**See Also**
|
||||
|
||||
- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq)
|
||||
|
@ -5,4 +5,78 @@ sidebar_position: 193
|
||||
|
||||
# uniqCombined64
|
||||
|
||||
Same as [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined), but uses 64-bit hash for all data types.
|
||||
Calculates the approximate number of different argument values. It is the same as [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md#agg_function-uniqcombined), but uses a 64-bit hash for all data types rather than just for the String data type.
|
||||
|
||||
``` sql
|
||||
uniqCombined64(HLL_precision)(x[, ...])
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `HLL_precision`: The base-2 logarithm of the number of cells in [HyperLogLog](https://en.wikipedia.org/wiki/HyperLogLog). Optionally, you can use the function as `uniqCombined64(x[, ...])`. The default value for `HLL_precision` is 17, which is effectively 96 KiB of space (2^17 cells, 6 bits each).
|
||||
- `X`: A variable number of parameters. Parameters can be `Tuple`, `Array`, `Date`, `DateTime`, `String`, or numeric types.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A [UInt64](../../../sql-reference/data-types/int-uint.md)-type number.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
The `uniqCombined64` function:
|
||||
- Calculates a hash (64-bit hash for all data types) for all parameters in the aggregate, then uses it in calculations.
|
||||
- Uses a combination of three algorithms: array, hash table, and HyperLogLog with an error correction table.
|
||||
- For a small number of distinct elements, an array is used.
|
||||
- When the set size is larger, a hash table is used.
|
||||
- For a larger number of elements, HyperLogLog is used, which will occupy a fixed amount of memory.
|
||||
- Provides the result deterministically (it does not depend on the query processing order).
|
||||
|
||||
:::note
|
||||
Since it uses 64-bit hash for all types, the result does not suffer from very high error for cardinalities significantly larger than `UINT_MAX` like [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md) does, which uses a 32-bit hash for non-`String` types.
|
||||
:::
|
||||
|
||||
Compared to the [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq) function, the `uniqCombined64` function:
|
||||
|
||||
- Consumes several times less memory.
|
||||
- Calculates with several times higher accuracy.
|
||||
|
||||
**Example**
|
||||
|
||||
In the example below `uniqCombined64` is run on `1e10` different numbers returning a very close approximation of the number of different argument values.
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT uniqCombined64(number) FROM numbers(1e10);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
┌─uniqCombined64(number)─┐
|
||||
│ 9998568925 │ -- 10.00 billion
|
||||
└────────────────────────┘
|
||||
```
|
||||
|
||||
By comparison the `uniqCombined` function returns a rather poor approximation for an input this size.
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT uniqCombined(number) FROM numbers(1e10);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
┌─uniqCombined(number)─┐
|
||||
│ 5545308725 │ -- 5.55 billion
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
**See Also**
|
||||
|
||||
- [uniq](../../../sql-reference/aggregate-functions/reference/uniq.md#agg_function-uniq)
|
||||
- [uniqCombined](../../../sql-reference/aggregate-functions/reference/uniqcombined.md)
|
||||
- [uniqHLL12](../../../sql-reference/aggregate-functions/reference/uniqhll12.md#agg_function-uniqhll12)
|
||||
- [uniqExact](../../../sql-reference/aggregate-functions/reference/uniqexact.md#agg_function-uniqexact)
|
||||
- [uniqTheta](../../../sql-reference/aggregate-functions/reference/uniqthetasketch.md#agg_function-uniqthetasketch)
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/aggregatefunction
|
||||
sidebar_position: 53
|
||||
sidebar_position: 46
|
||||
sidebar_label: AggregateFunction
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/array
|
||||
sidebar_position: 52
|
||||
sidebar_position: 32
|
||||
sidebar_label: Array(T)
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/boolean
|
||||
sidebar_position: 43
|
||||
sidebar_position: 22
|
||||
sidebar_label: Boolean
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/date
|
||||
sidebar_position: 47
|
||||
sidebar_position: 12
|
||||
sidebar_label: Date
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/date32
|
||||
sidebar_position: 48
|
||||
sidebar_position: 14
|
||||
sidebar_label: Date32
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/datetime
|
||||
sidebar_position: 48
|
||||
sidebar_position: 16
|
||||
sidebar_label: DateTime
|
||||
---
|
||||
|
||||
@ -36,9 +36,9 @@ You can explicitly set a time zone for `DateTime`-type columns when creating a t
|
||||
|
||||
The [clickhouse-client](../../interfaces/cli.md) applies the server time zone by default if a time zone isn’t explicitly set when initializing the data type. To use the client time zone, run `clickhouse-client` with the `--use_client_time_zone` parameter.
|
||||
|
||||
ClickHouse outputs values depending on the value of the [date_time_output_format](../../operations/settings/settings.md#settings-date_time_output_format) setting. `YYYY-MM-DD hh:mm:ss` text format by default. Additionally, you can change the output with the [formatDateTime](../../sql-reference/functions/date-time-functions.md#formatdatetime) function.
|
||||
ClickHouse outputs values depending on the value of the [date_time_output_format](../../operations/settings/settings-formats.md#date_time_output_format) setting. `YYYY-MM-DD hh:mm:ss` text format by default. Additionally, you can change the output with the [formatDateTime](../../sql-reference/functions/date-time-functions.md#formatdatetime) function.
|
||||
|
||||
When inserting data into ClickHouse, you can use different formats of date and time strings, depending on the value of the [date_time_input_format](../../operations/settings/settings.md#settings-date_time_input_format) setting.
|
||||
When inserting data into ClickHouse, you can use different formats of date and time strings, depending on the value of the [date_time_input_format](../../operations/settings/settings-formats.md#date_time_input_format) setting.
|
||||
|
||||
## Examples
|
||||
|
||||
@ -147,8 +147,8 @@ Time shifts for multiple days. Some pacific islands changed their timezone offse
|
||||
- [Type conversion functions](../../sql-reference/functions/type-conversion-functions.md)
|
||||
- [Functions for working with dates and times](../../sql-reference/functions/date-time-functions.md)
|
||||
- [Functions for working with arrays](../../sql-reference/functions/array-functions.md)
|
||||
- [The `date_time_input_format` setting](../../operations/settings/settings-formats.md#settings-date_time_input_format)
|
||||
- [The `date_time_output_format` setting](../../operations/settings/settings-formats.md#settings-date_time_output_format)
|
||||
- [The `date_time_input_format` setting](../../operations/settings/settings-formats.md#date_time_input_format)
|
||||
- [The `date_time_output_format` setting](../../operations/settings/settings-formats.md#date_time_output_format)
|
||||
- [The `timezone` server configuration parameter](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-timezone)
|
||||
- [The `session_timezone` setting](../../operations/settings/settings.md#session_timezone)
|
||||
- [Operators for working with dates and times](../../sql-reference/operators/index.md#operators-datetime)
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/datetime64
|
||||
sidebar_position: 49
|
||||
sidebar_position: 18
|
||||
sidebar_label: DateTime64
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/decimal
|
||||
sidebar_position: 42
|
||||
sidebar_position: 6
|
||||
sidebar_label: Decimal
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/enum
|
||||
sidebar_position: 50
|
||||
sidebar_position: 20
|
||||
sidebar_label: Enum
|
||||
---
|
||||
|
||||
|
@ -1,10 +1,10 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/fixedstring
|
||||
sidebar_position: 45
|
||||
sidebar_position: 10
|
||||
sidebar_label: FixedString(N)
|
||||
---
|
||||
|
||||
# FixedString
|
||||
# FixedString(N)
|
||||
|
||||
A fixed-length string of `N` bytes (neither characters nor code points).
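
For example (a small sketch with a hypothetical table), values shorter than `N` are padded with null bytes and the stored length is always `N`:

```sql
CREATE TABLE country_codes (code FixedString(2)) ENGINE = Memory;

INSERT INTO country_codes VALUES ('de'), ('us'), ('f');  -- 'f' is padded with a null byte

SELECT code, length(code) FROM country_codes;            -- length(code) is always 2
```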
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/float
|
||||
sidebar_position: 41
|
||||
sidebar_position: 4
|
||||
sidebar_label: Float32, Float64
|
||||
---
|
||||
|
||||
|
@ -1,8 +1,8 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/geo
|
||||
sidebar_position: 62
|
||||
sidebar_position: 54
|
||||
sidebar_label: Geo
|
||||
title: "Geo Data Types"
|
||||
title: "Geometric"
|
||||
---
|
||||
|
||||
ClickHouse supports data types for representing geographical objects — locations, lands, etc.
|
||||
|
@ -1,10 +1,10 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/
|
||||
sidebar_label: List of data types
|
||||
sidebar_position: 37
|
||||
sidebar_position: 1
|
||||
---
|
||||
|
||||
# ClickHouse Data Types
|
||||
# Data Types in ClickHouse
|
||||
|
||||
ClickHouse can store various kinds of data in table cells. This section describes the supported data types and special considerations for using and/or implementing them if any.
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/int-uint
|
||||
sidebar_position: 40
|
||||
sidebar_position: 2
|
||||
sidebar_label: UInt8, UInt16, UInt32, UInt64, UInt128, UInt256, Int8, Int16, Int32, Int64, Int128, Int256
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/ipv4
|
||||
sidebar_position: 59
|
||||
sidebar_position: 28
|
||||
sidebar_label: IPv4
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/ipv6
|
||||
sidebar_position: 60
|
||||
sidebar_position: 30
|
||||
sidebar_label: IPv6
|
||||
---
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/json
|
||||
sidebar_position: 54
|
||||
sidebar_position: 26
|
||||
sidebar_label: JSON
|
||||
---
|
||||
|
||||
|
@ -1,10 +1,10 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/lowcardinality
|
||||
sidebar_position: 51
|
||||
sidebar_label: LowCardinality
|
||||
sidebar_position: 42
|
||||
sidebar_label: LowCardinality(T)
|
||||
---
|
||||
|
||||
# LowCardinality
|
||||
# LowCardinality(T)
|
||||
|
||||
Changes the internal representation of other data types to be dictionary-encoded.
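
As a quick sketch (hypothetical table), the type is applied by wrapping the inner type:

```sql
CREATE TABLE user_agents
(
    ua LowCardinality(String)  -- values are dictionary-encoded internally
)
ENGINE = Memory;
```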
|
||||
|
||||
|
@ -1,12 +1,12 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/map
|
||||
sidebar_position: 65
|
||||
sidebar_label: Map(key, value)
|
||||
sidebar_position: 36
|
||||
sidebar_label: Map(K, V)
|
||||
---
|
||||
|
||||
# Map(key, value)
|
||||
# Map(K, V)
|
||||
|
||||
`Map(key, value)` data type stores `key:value` pairs.
|
||||
`Map(K, V)` data type stores `key:value` pairs.
|
||||
|
||||
**Parameters**
|
||||
|
||||
|
@ -1,27 +0,0 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/multiword-types
|
||||
sidebar_position: 61
|
||||
sidebar_label: Multiword Type Names
|
||||
title: "Multiword Types"
|
||||
---
|
||||
|
||||
When creating tables, you can use data types with a name consisting of several words. This is implemented for better SQL compatibility.
|
||||
|
||||
## Multiword Types Support
|
||||
|
||||
| Multiword types | Simple types |
|
||||
|----------------------------------|--------------------------------------------------------------|
|
||||
| DOUBLE PRECISION | [Float64](../../sql-reference/data-types/float.md) |
|
||||
| CHAR LARGE OBJECT | [String](../../sql-reference/data-types/string.md) |
|
||||
| CHAR VARYING | [String](../../sql-reference/data-types/string.md) |
|
||||
| CHARACTER LARGE OBJECT | [String](../../sql-reference/data-types/string.md) |
|
||||
| CHARACTER VARYING | [String](../../sql-reference/data-types/string.md) |
|
||||
| NCHAR LARGE OBJECT | [String](../../sql-reference/data-types/string.md) |
|
||||
| NCHAR VARYING | [String](../../sql-reference/data-types/string.md) |
|
||||
| NATIONAL CHARACTER LARGE OBJECT | [String](../../sql-reference/data-types/string.md) |
|
||||
| NATIONAL CHARACTER VARYING | [String](../../sql-reference/data-types/string.md) |
|
||||
| NATIONAL CHAR VARYING | [String](../../sql-reference/data-types/string.md) |
|
||||
| NATIONAL CHARACTER | [String](../../sql-reference/data-types/string.md) |
|
||||
| NATIONAL CHAR | [String](../../sql-reference/data-types/string.md) |
|
||||
| BINARY LARGE OBJECT | [String](../../sql-reference/data-types/string.md) |
|
||||
| BINARY VARYING | [String](../../sql-reference/data-types/string.md) |
|
@ -1,7 +1,7 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/nullable
|
||||
sidebar_position: 55
|
||||
sidebar_label: Nullable
|
||||
sidebar_position: 44
|
||||
sidebar_label: Nullable(T)
|
||||
---
|
||||
|
||||
# Nullable(T)
|
||||
|
@ -1,5 +1,7 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/simpleaggregatefunction
|
||||
sidebar_position: 48
|
||||
sidebar_label: SimpleAggregateFunction
|
||||
---
|
||||
# SimpleAggregateFunction
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/string
|
||||
sidebar_position: 44
|
||||
sidebar_position: 8
|
||||
sidebar_label: String
|
||||
---
|
||||
|
||||
@ -13,7 +13,7 @@ When creating tables, numeric parameters for string fields can be set (e.g. `VAR
|
||||
|
||||
Aliases:
|
||||
|
||||
- `String` — `LONGTEXT`, `MEDIUMTEXT`, `TINYTEXT`, `TEXT`, `LONGBLOB`, `MEDIUMBLOB`, `TINYBLOB`, `BLOB`, `VARCHAR`, `CHAR`.
|
||||
- `String` — `LONGTEXT`, `MEDIUMTEXT`, `TINYTEXT`, `TEXT`, `LONGBLOB`, `MEDIUMBLOB`, `TINYBLOB`, `BLOB`, `VARCHAR`, `CHAR`, `CHAR LARGE OBJECT`, `CHAR VARYING`, `CHARACTER LARGE OBJECT`, `CHARACTER VARYING`, `NCHAR LARGE OBJECT`, `NCHAR VARYING`, `NATIONAL CHARACTER LARGE OBJECT`, `NATIONAL CHARACTER VARYING`, `NATIONAL CHAR VARYING`, `NATIONAL CHARACTER`, `NATIONAL CHAR`, `BINARY LARGE OBJECT`, `BINARY VARYING`.
|
||||
|
||||
## Encodings
|
||||
|
||||
|
@ -1,10 +1,10 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/tuple
|
||||
sidebar_position: 54
|
||||
sidebar_position: 34
|
||||
sidebar_label: Tuple(T1, T2, ...)
|
||||
---
|
||||
|
||||
# Tuple(T1, T2, …)
|
||||
# Tuple(T1, T2, ...)
|
||||
|
||||
A tuple of elements, each having an individual [type](../../sql-reference/data-types/index.md#data_types). Tuple must contain at least one element.
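
A small illustration: the type of each element is inferred independently for its position:

```sql
SELECT tuple(1, 'a') AS t, toTypeName(t);
-- t = (1,'a'), toTypeName(t) = Tuple(UInt8, String)
```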
|
||||
|
||||
|
@ -1,6 +1,6 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/uuid
|
||||
sidebar_position: 46
|
||||
sidebar_position: 24
|
||||
sidebar_label: UUID
|
||||
---
|
||||
|
||||
|
@ -1,10 +1,10 @@
|
||||
---
|
||||
slug: /en/sql-reference/data-types/variant
|
||||
sidebar_position: 55
|
||||
sidebar_label: Variant
|
||||
sidebar_position: 40
|
||||
sidebar_label: Variant(T1, T2, ...)
|
||||
---
|
||||
|
||||
# Variant(T1, T2, T3, ...)
|
||||
# Variant(T1, T2, ...)
|
||||
|
||||
This type represents a union of other data types. Type `Variant(T1, T2, ..., TN)` means that each row of this type
|
||||
has a value of either type `T1` or `T2` or ... or `TN` or none of them (`NULL` value).
|
||||
@ -190,22 +190,67 @@ SELECT toTypeName(variantType(v)) FROM test LIMIT 1;
|
||||
└─────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Conversion between Variant column and other columns
|
||||
## Conversion between a Variant column and other columns
|
||||
|
||||
There are 3 possible conversions that can be performed with Variant column.
|
||||
There are 4 possible conversions that can be performed with a column of type `Variant`.
|
||||
|
||||
### Converting an ordinary column to a Variant column
|
||||
### Converting a String column to a Variant column
|
||||
|
||||
It is possible to convert ordinary column with type `T` to a `Variant` column containing this type:
|
||||
Conversion from `String` to `Variant` is performed by parsing a value of `Variant` type from the string value:
|
||||
|
||||
```sql
|
||||
SELECT toTypeName(variant) as type_name, 'Hello, World!'::Variant(UInt64, String, Array(UInt64)) as variant;
|
||||
SELECT '42'::Variant(String, UInt64) as variant, variantType(variant) as variant_type
|
||||
```
|
||||
|
||||
```text
|
||||
┌─type_name──────────────────────────────┬─variant───────┐
|
||||
│ Variant(Array(UInt64), String, UInt64) │ Hello, World! │
|
||||
└────────────────────────────────────────┴───────────────┘
|
||||
┌─variant─┬─variant_type─┐
|
||||
│ 42 │ UInt64 │
|
||||
└─────────┴──────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT '[1, 2, 3]'::Variant(String, Array(UInt64)) as variant, variantType(variant) as variant_type
|
||||
```
|
||||
|
||||
```text
|
||||
┌─variant─┬─variant_type──┐
|
||||
│ [1,2,3] │ Array(UInt64) │
|
||||
└─────────┴───────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT CAST(map('key1', '42', 'key2', 'true', 'key3', '2020-01-01'), 'Map(String, Variant(UInt64, Bool, Date))') as map_of_variants, mapApply((k, v) -> (k, variantType(v)), map_of_variants) as map_of_variant_types
|
||||
```
|
||||
|
||||
```text
|
||||
┌─map_of_variants─────────────────────────────┬─map_of_variant_types──────────────────────────┐
|
||||
│ {'key1':42,'key2':true,'key3':'2020-01-01'} │ {'key1':'UInt64','key2':'Bool','key3':'Date'} │
|
||||
└─────────────────────────────────────────────┴───────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
### Converting an ordinary column to a Variant column
|
||||
|
||||
It is possible to convert an ordinary column with type `T` to a `Variant` column containing this type:
|
||||
|
||||
```sql
|
||||
SELECT toTypeName(variant) as type_name, [1,2,3]::Array(UInt64)::Variant(UInt64, String, Array(UInt64)) as variant, variantType(variant) as variant_name
|
||||
```
|
||||
|
||||
```text
|
||||
┌─type_name──────────────────────────────┬─variant─┬─variant_name──┐
|
||||
│ Variant(Array(UInt64), String, UInt64) │ [1,2,3] │ Array(UInt64) │
|
||||
└────────────────────────────────────────┴─────────┴───────────────┘
|
||||
```
|
||||
|
||||
Note: conversion from the `String` type is always performed through parsing. If you need to convert a `String` column to the `String` variant of a `Variant` type without parsing, you can do the following:
|
||||
```sql
|
||||
SELECT '[1, 2, 3]'::Variant(String)::Variant(String, Array(UInt64), UInt64) as variant, variantType(variant) as variant_type
|
||||
```
|
||||
|
||||
```text
|
||||
┌─variant───┬─variant_type─┐
|
||||
│ [1, 2, 3] │ String │
|
||||
└───────────┴──────────────┘
|
||||
```
|
||||
|
||||
### Converting a Variant column to an ordinary column
|
||||
@ -395,3 +440,37 @@ SELECT v, variantType(v) FROM test ORDER by v;
|
||||
│ 100 │ UInt32 │
|
||||
└─────┴────────────────┘
|
||||
```
|
||||
|
||||
## JSONExtract functions with Variant
|
||||
|
||||
All `JSONExtract*` functions support `Variant` type:
|
||||
|
||||
```sql
|
||||
SELECT JSONExtract('{"a" : [1, 2, 3]}', 'a', 'Variant(UInt32, String, Array(UInt32))') AS variant, variantType(variant) AS variant_type;
|
||||
```
|
||||
|
||||
```text
|
||||
┌─variant─┬─variant_type──┐
|
||||
│ [1,2,3] │ Array(UInt32) │
|
||||
└─────────┴───────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT JSONExtract('{"obj" : {"a" : 42, "b" : "Hello", "c" : [1,2,3]}}', 'obj', 'Map(String, Variant(UInt32, String, Array(UInt32)))') AS map_of_variants, mapApply((k, v) -> (k, variantType(v)), map_of_variants) AS map_of_variant_types
|
||||
```
|
||||
|
||||
```text
|
||||
┌─map_of_variants──────────────────┬─map_of_variant_types────────────────────────────┐
|
||||
│ {'a':42,'b':'Hello','c':[1,2,3]} │ {'a':'UInt32','b':'String','c':'Array(UInt32)'} │
|
||||
└──────────────────────────────────┴─────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT JSONExtractKeysAndValues('{"a" : 42, "b" : "Hello", "c" : [1,2,3]}', 'Variant(UInt32, String, Array(UInt32))') AS variants, arrayMap(x -> (x.1, variantType(x.2)), variants) AS variant_types
|
||||
```
|
||||
|
||||
```text
|
||||
┌─variants───────────────────────────────┬─variant_types─────────────────────────────────────────┐
|
||||
│ [('a',42),('b','Hello'),('c',[1,2,3])] │ [('a','UInt32'),('b','String'),('c','Array(UInt32)')] │
|
||||
└────────────────────────────────────────┴───────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
@ -774,6 +774,59 @@ Returns the number of elements for which `func(arr1[i], …, arrN[i])` returns s
|
||||
|
||||
Note that the `arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
|
||||
|
||||
## arrayDotProduct
|
||||
|
||||
Returns the dot product of two arrays.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayDotProduct(vector1, vector2)
|
||||
```
|
||||
|
||||
Alias: `scalarProduct`, `dotProduct`
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `vector1`: First vector. [Array](../data-types/array.md) or [Tuple](../data-types/tuple.md) of numeric values.
|
||||
- `vector2`: Second vector. [Array](../data-types/array.md) or [Tuple](../data-types/tuple.md) of numeric values.
|
||||
|
||||
:::note
|
||||
The sizes of the two vectors must be equal. Arrays and Tuples may also contain mixed element types.
|
||||
:::
|
||||
|
||||
**Returned value**
|
||||
|
||||
- The dot product of the two vectors.
|
||||
|
||||
Type: numeric - determined by the type of the arguments. If Arrays or Tuples contain mixed element types then the result type is the supertype.
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayDotProduct([1, 2, 3], [4, 5, 6]) AS res, toTypeName(res);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
32 UInt16
|
||||
```
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT dotProduct((1::UInt16, 2::UInt8, 3::Float32),(4::Int16, 5::Float32, 6::UInt8)) AS res, toTypeName(res);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
32 Float64
|
||||
```
|
||||
|
||||
## countEqual(arr, x)
|
||||
|
||||
Returns the number of elements in the array equal to x. Equivalent to arrayCount (elem -\> elem = x, arr).
|
||||
@ -888,6 +941,66 @@ SELECT arrayEnumerateUniq([1, 1, 1, 2, 2, 2], [1, 1, 2, 1, 1, 2]) AS res
|
||||
|
||||
This is necessary when using ARRAY JOIN with a nested data structure and further aggregation across multiple elements in this structure.
|
||||
|
||||
## arrayEnumerateUniqRanked
|
||||
|
||||
Returns an array the same size as the source array, indicating for each element what its position is among elements with the same value. It allows for enumeration of a multidimensional array with the ability to specify how deep to look inside the array.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayEnumerateUniqRanked(clear_depth, arr, max_array_depth)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `clear_depth`: Enumerate elements at the specified level separately. Positive [Integer](../data-types/int-uint.md) less than or equal to `max_arr_depth`.
|
||||
- `arr`: N-dimensional array to enumerate. [Array](../data-types/array.md).
|
||||
- `max_array_depth`: The maximum effective depth. Positive [Integer](../data-types/int-uint.md) less than or equal to the depth of `arr`.
|
||||
|
||||
**Example**
|
||||
|
||||
With `clear_depth=1` and `max_array_depth=1`, the result of `arrayEnumerateUniqRanked` is identical to that which [`arrayEnumerateUniq`](#arrayenumerateuniqarr) would give for the same array.
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayEnumerateUniqRanked(1, [1,2,1], 1);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
[1,1,2]
|
||||
```
|
||||
|
||||
In this example, `arrayEnumerateUniqRanked` is used to obtain an array indicating, for each element of the multidimensional array, what its position is among elements of the same value. For the first row of the passed array,`[1,2,3]`, the corresponding result is `[1,1,1]`, indicating that this is the first time `1`,`2` and `3` are encountered. For the second row of the provided array,`[2,2,1]`, the corresponding result is `[2,3,3]`, indicating that `2` is encountered for a second and third time, and `1` is encountered for the second time. Likewise, for the third row of the provided array `[3]` the corresponding result is `[2]` indicating that `3` is encountered for the second time.
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayEnumerateUniqRanked(1, [[1,2,3],[2,2,1],[3]], 2);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
[[1,1,1],[2,3,2],[2]]
|
||||
```
|
||||
|
||||
Changing `clear_depth=2`, results in elements being enumerated separately for each row.
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayEnumerateUniqRanked(2, [[1,2,3],[2,2,1],[3]], 2);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
[[1,1,1],[1,2,1],[1]]
|
||||
```
|
||||
|
||||
## arrayPopBack
|
||||
|
||||
Removes the last item from the array.
|
||||
@ -1303,6 +1416,125 @@ SELECT arrayReverseSort((x, y) -> -y, [4, 3, 5], [1, 2, 3]) AS res;
|
||||
|
||||
Same as `arrayReverseSort` with additional `limit` argument allowing partial sorting. Returns an array of the same size as the original array where elements in range `[1..limit]` are sorted in descending order. Remaining elements `(limit..N]` shall contain elements in unspecified order.
|
||||
|
||||
## arrayShuffle
|
||||
|
||||
Returns an array of the same size as the original array containing the elements in shuffled order.
|
||||
Elements are reordered in such a way that each possible permutation of those elements has equal probability of appearance.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayShuffle(arr[, seed])
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `arr`: The array to shuffle. [Array](../data-types/array.md).
|
||||
- `seed` (optional): seed to be used with random number generation. If not provided a random one is used. [UInt or Int](../data-types/int-uint.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Array with elements shuffled.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
:::note
|
||||
This function will not materialize constants.
|
||||
:::
|
||||
|
||||
**Examples**
|
||||
|
||||
In this example, `arrayShuffle` is used without providing a `seed` and will therefore generate one randomly itself.
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayShuffle([1, 2, 3, 4]);
|
||||
```
|
||||
|
||||
Note: when using [ClickHouse Fiddle](https://fiddle.clickhouse.com/), the exact response may differ due to random nature of the function.
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
[1,4,2,3]
|
||||
```
|
||||
|
||||
In this example, `arrayShuffle` is provided a `seed` and will produce stable results.
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayShuffle([1, 2, 3, 4], 41);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
[3,2,1,4]
|
||||
```
|
||||
|
||||
## arrayPartialShuffle
|
||||
|
||||
Given an input array of cardinality `N`, returns an array of size `N` where elements in the range `[1...limit]` are shuffled and the remaining elements in the range `(limit...N]` are left unshuffled.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayPartialShuffle(arr[, limit[, seed]])
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `arr`: The array of size `N` to partially shuffle. [Array](../data-types/array.md).
|
||||
- `limit` (optional): The number to limit element swaps to, in the range `[1..N]`. [UInt or Int](../data-types/int-uint.md).
|
||||
- `seed` (optional): The seed value to be used with random number generation. If not provided a random one is used. [UInt or Int](../data-types/int-uint.md)
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Array with elements partially shuffled.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
:::note
|
||||
This function will not materialize constants.
|
||||
|
||||
The value of `limit` should be in the range `[1..N]`. Values outside of that range are equivalent to performing full [arrayShuffle](#arrayshuffle).
|
||||
:::
|
||||
|
||||
**Examples**
|
||||
|
||||
Note: when using [ClickHouse Fiddle](https://fiddle.clickhouse.com/), the exact response may differ due to random nature of the function.
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayPartialShuffle([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 1)
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
The order of elements is preserved (`[2,3,4,5], [7,8,9,10]`) except for the two shuffled elements `[1, 6]`. No `seed` is provided so the function selects its own randomly.
|
||||
|
||||
```response
|
||||
[6,2,3,4,5,1,7,8,9,10]
|
||||
```
|
||||
|
||||
In this example, the `limit` is increased to `2`. No `seed` is provided, so the function selects one randomly.
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayPartialShuffle([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 2);
|
||||
```
|
||||
|
||||
The order of elements is preserved (`[4, 5, 6, 7, 8], [10]`) except for the four shuffled elements `[1, 2, 3, 9]`.
|
||||
|
||||
Result:
|
||||
```response
|
||||
[3,9,1,4,5,6,7,8,2,10]
|
||||
```
|
||||
|
||||
## arrayUniq(arr, …)
|
||||
|
||||
If one argument is passed, it counts the number of different elements in the array.
|
||||
@ -1400,21 +1632,91 @@ Result:
|
||||
└────────────────────────────────┘
|
||||
```
|
||||
|
||||
## arrayEnumerateDense(arr)
|
||||
## arrayEnumerateDense
|
||||
|
||||
Returns an array of the same size as the source array, indicating where each element first appears in the source array.
|
||||
|
||||
Example:
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayEnumerateDense(arr)
|
||||
```
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayEnumerateDense([10, 20, 10, 30])
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─arrayEnumerateDense([10, 20, 10, 30])─┐
|
||||
│ [1,2,1,3] │
|
||||
└───────────────────────────────────────┘
|
||||
```
|
||||
## arrayEnumerateDenseRanked
|
||||
|
||||
Returns an array the same size as the source array, indicating where each element first appears in the source array. It allows for enumeration of a multidimensional array with the ability to specify how deep to look inside the array.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayEnumerateDenseRanked(clear_depth, arr, max_array_depth)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `clear_depth`: Enumerate elements at the specified level separately. Positive [Integer](../data-types/int-uint.md) less than or equal to `max_arr_depth`.
|
||||
- `arr`: N-dimensional array to enumerate. [Array](../data-types/array.md).
|
||||
- `max_array_depth`: The maximum effective depth. Positive [Integer](../data-types/int-uint.md) less than or equal to the depth of `arr`.
|
||||
|
||||
**Example**
|
||||
|
||||
With `clear_depth=1` and `max_array_depth=1`, the result is identical to what [arrayEnumerateDense](#arrayenumeratedense) would give.
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayEnumerateDenseRanked(1,[10, 20, 10, 30],1);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
[1,2,1,3]
|
||||
```
|
||||
|
||||
In this example, `arrayEnumerateDenseRanked` is used to obtain an array indicating, for each element of the multidimensional array, what its position is among elements of the same value. For the first row of the passed array,`[10,10,30,20]`, the corresponding first row of the result is `[1,1,2,3]`, indicating that `10` is the first number encountered in position 1 and 2, `30` the second number encountered in position 3 and `20` is the third number encountered in position 4. For the second row, `[40, 50, 10, 30]`, the corresponding second row of the result is `[4,5,1,2]`, indicating that `40` and `50` are the fourth and fifth numbers encountered in position 1 and 2 of that row, that another `10` (the first encountered number) is in position 3 and `30` (the second number encountered) is in the last position.
|
||||
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayEnumerateDenseRanked(1,[[10,10,30,20],[40,50,10,30]],2);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
[[1,1,2,3],[4,5,1,2]]
|
||||
```
|
||||
|
||||
Changing `clear_depth=2` causes the enumeration to start anew for each row.
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT arrayEnumerateDenseRanked(2,[[10,10,30,20],[40,50,10,30]],2);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
[[1,1,2,3],[1,2,3,4]]
|
||||
```
|
||||
|
||||
## arrayIntersect(arr)
|
||||
|
||||
@ -1652,7 +1954,7 @@ flatten(array_of_arrays)
|
||||
|
||||
Alias: `flatten`.
|
||||
|
||||
**Arguments**
|
||||
**Parameters**
|
||||
|
||||
- `array_of_arrays` — [Array](../../sql-reference/data-types/array.md) of arrays. For example, `[[1,2,3], [4,5]]`.
|
||||
|
||||
@ -1928,7 +2230,67 @@ Note that the `arrayAll` is a [higher-order function](../../sql-reference/functi
|
||||
|
||||
Returns the first element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0.
|
||||
|
||||
Note that the `arrayFirst` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
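A minimal usage sketch (the array values are illustrative):

```sql
SELECT arrayFirst(x -> x >= 2, [1, 2, 3]); -- returns 2, the first element matching the condition
```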
|
||||
## arrayFirstOrNull
|
||||
|
||||
Returns the first element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0, otherwise it returns `NULL`.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayFirstOrNull(func, arr1, …)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `func`: Lambda function. [Lambda function](../functions/#higher-order-functions---operator-and-lambdaparams-expr-function).
|
||||
- `arr1`: Array to operate on. [Array](../data-types/array.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- The first element in the passed array for which `func` returns something other than 0.
|
||||
- Otherwise, returns `NULL`
|
||||
|
||||
**Implementation details**
|
||||
|
||||
Note that the `arrayFirstOrNull` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayFirstOrNull(x -> x >= 2, [1, 2, 3]);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
2
|
||||
```
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayFirstOrNull(x -> x >= 2, emptyArrayUInt8());
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
\N
|
||||
```
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayLastOrNull((x,f) -> f, [1,2,3,NULL], [0,1,0,1]);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
\N
|
||||
```
|
||||
|
||||
## arrayLast(func, arr1, …)
|
||||
|
||||
@ -1936,6 +2298,56 @@ Returns the last element in the `arr1` array for which `func(arr1[i], …, arrN[
|
||||
|
||||
Note that the `arrayLast` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
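A minimal usage sketch (the array values are illustrative):

```sql
SELECT arrayLast(x -> x >= 2, [1, 2, 3]); -- returns 3, the last element matching the condition
```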
|
||||
|
||||
## arrayLastOrNull
|
||||
|
||||
Returns the last element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0, otherwise returns `NULL`.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
arrayLastOrNull(func, arr1, …)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `func`: Lambda function. [Lambda function](../functions/#higher-order-functions---operator-and-lambdaparams-expr-function).
|
||||
- `arr1`: Array to operate on. [Array](../data-types/array.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- The last element in the passed array for which `func` returns something other than 0.
|
||||
- Otherwise, returns `NULL`
|
||||
|
||||
**Implementation details**
|
||||
|
||||
Note that the `arrayLastOrNull` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayLastOrNull(x -> x >= 2, [1, 2, 3]);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
3
|
||||
```
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT arrayLastOrNull(x -> x >= 2, emptyArrayUInt8());
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
\N
|
||||
```
|
||||
|
||||
## arrayFirstIndex(func, arr1, …)
|
||||
|
||||
Returns the index of the first element in the `arr1` array for which `func(arr1[i], …, arrN[i])` returns something other than 0.
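A minimal usage sketch (the array values are illustrative):

```sql
SELECT arrayFirstIndex(x -> x >= 2, [1, 2, 3]); -- returns 2, the 1-based index of the first match
```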
|
||||
|
@ -1670,7 +1670,7 @@ Like [fromDaysSinceYearZero](#fromDaysSinceYearZero) but returns a [Date32](../.
|
||||
|
||||
## age
|
||||
|
||||
Returns the `unit` component of the difference between `startdate` and `enddate`. The difference is calculated using a precision of 1 microsecond.
|
||||
Returns the `unit` component of the difference between `startdate` and `enddate`. The difference is calculated using a precision of 1 nanosecond.
|
||||
E.g. the difference between `2021-12-29` and `2022-01-01` is 3 days for `day` unit, 0 months for `month` unit, 0 years for `year` unit.
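For example, a sketch of that calculation:

```sql
SELECT
    age('day',   toDate('2021-12-29'), toDate('2022-01-01')) AS days,   -- 3
    age('month', toDate('2021-12-29'), toDate('2022-01-01')) AS months, -- 0
    age('year',  toDate('2021-12-29'), toDate('2022-01-01')) AS years;  -- 0
```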
|
||||
|
||||
For an alternative to `age`, see the `date_diff` function.
|
||||
@ -1686,16 +1686,17 @@ age('unit', startdate, enddate, [timezone])
|
||||
- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
|
||||
Possible values:
|
||||
|
||||
- `microsecond` `microseconds` `us` `u`
|
||||
- `millisecond` `milliseconds` `ms`
|
||||
- `second` `seconds` `ss` `s`
|
||||
- `minute` `minutes` `mi` `n`
|
||||
- `hour` `hours` `hh` `h`
|
||||
- `day` `days` `dd` `d`
|
||||
- `week` `weeks` `wk` `ww`
|
||||
- `month` `months` `mm` `m`
|
||||
- `quarter` `quarters` `qq` `q`
|
||||
- `year` `years` `yyyy` `yy`
|
||||
- `nanosecond`, `nanoseconds`, `ns`
|
||||
- `microsecond`, `microseconds`, `us`, `u`
|
||||
- `millisecond`, `milliseconds`, `ms`
|
||||
- `second`, `seconds`, `ss`, `s`
|
||||
- `minute`, `minutes`, `mi`, `n`
|
||||
- `hour`, `hours`, `hh`, `h`
|
||||
- `day`, `days`, `dd`, `d`
|
||||
- `week`, `weeks`, `wk`, `ww`
|
||||
- `month`, `months`, `mm`, `m`
|
||||
- `quarter`, `quarters`, `qq`, `q`
|
||||
- `year`, `years`, `yyyy`, `yy`
|
||||
|
||||
- `startdate` — The first time value to subtract (the subtrahend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).
|
||||
|
||||
@ -1763,16 +1764,17 @@ Aliases: `dateDiff`, `DATE_DIFF`, `timestampDiff`, `timestamp_diff`, `TIMESTAMP_
|
||||
- `unit` — The type of interval for result. [String](../../sql-reference/data-types/string.md).
|
||||
Possible values:
|
||||
|
||||
- `microsecond` `microseconds` `us` `u`
|
||||
- `millisecond` `milliseconds` `ms`
|
||||
- `second` `seconds` `ss` `s`
|
||||
- `minute` `minutes` `mi` `n`
|
||||
- `hour` `hours` `hh` `h`
|
||||
- `day` `days` `dd` `d`
|
||||
- `week` `weeks` `wk` `ww`
|
||||
- `month` `months` `mm` `m`
|
||||
- `quarter` `quarters` `qq` `q`
|
||||
- `year` `years` `yyyy` `yy`
|
||||
- `nanosecond`, `nanoseconds`, `ns`
|
||||
- `microsecond`, `microseconds`, `us`, `u`
|
||||
- `millisecond`, `milliseconds`, `ms`
|
||||
- `second`, `seconds`, `ss`, `s`
|
||||
- `minute`, `minutes`, `mi`, `n`
|
||||
- `hour`, `hours`, `hh`, `h`
|
||||
- `day`, `days`, `dd`, `d`
|
||||
- `week`, `weeks`, `wk`, `ww`
|
||||
- `month`, `months`, `mm`, `m`
|
||||
- `quarter`, `quarters`, `qq`, `q`
|
||||
- `year`, `years`, `yyyy`, `yy`
|
||||
|
||||
- `startdate` — The first time value to subtract (the subtrahend). [Date](../../sql-reference/data-types/date.md), [Date32](../../sql-reference/data-types/date32.md), [DateTime](../../sql-reference/data-types/datetime.md) or [DateTime64](../../sql-reference/data-types/datetime64.md).
|
||||
|
||||
@ -1904,7 +1906,7 @@ Aliases: `dateAdd`, `DATE_ADD`.
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `unit` — The type of interval to add. [String](../../sql-reference/data-types/string.md).
|
||||
- `unit` — The type of interval to add. Note: This is not a [String](../../sql-reference/data-types/string.md) and must therefore not be quoted.
|
||||
Possible values:
|
||||
|
||||
- `second`
|
||||
@ -1959,7 +1961,7 @@ Aliases: `dateSub`, `DATE_SUB`.
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `unit` — The type of interval to subtract. Note: The unit should be unquoted.
|
||||
- `unit` — The type of interval to subtract. Note: This is not a [String](../../sql-reference/data-types/string.md) and must therefore not be quoted.
|
||||
|
||||
Possible values:
|
||||
|
||||
|
@ -81,6 +81,43 @@ Result:
|
||||
│ 2.23606797749979 │
|
||||
└──────────────────┘
|
||||
```
|
||||
## L2SquaredNorm
|
||||
|
||||
Calculates the square of the [L2Norm](#l2norm), i.e. the sum of the squares of the vector values.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
L2SquaredNorm(vector)
|
||||
```
|
||||
|
||||
Alias: `normL2Squared`.
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `vector` — [Tuple](../../sql-reference/data-types/tuple.md) or [Array](../../sql-reference/data-types/array.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- L2-norm squared.
|
||||
|
||||
Type: [Float](../../sql-reference/data-types/float.md).
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT L2SquaredNorm((1, 2));
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```text
|
||||
┌─L2SquaredNorm((1, 2))─┐
|
||||
│ 5 │
|
||||
└───────────────────────┘
|
||||
```
|
||||
|
||||
## LinfNorm
|
||||
|
||||
|
@ -594,6 +594,45 @@ Calculates JumpConsistentHash form a UInt64.
|
||||
Accepts two arguments: a UInt64-type key and the number of buckets. Returns Int32.
|
||||
For more information, see the link: [JumpConsistentHash](https://arxiv.org/pdf/1406.2294.pdf)
|
||||
|
||||
## kostikConsistentHash
|
||||
|
||||
An O(1) time and space consistent hash algorithm by Konstantin 'kostik' Oblakov. Previously `yandexConsistentHash`.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
kostikConsistentHash(input, n)
|
||||
```
|
||||
|
||||
Alias: `yandexConsistentHash` (kept for backwards compatibility).
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `input`: A UInt64-type key [UInt64](/docs/en/sql-reference/data-types/int-uint.md).
|
||||
- `n`: Number of buckets. [UInt16](/docs/en/sql-reference/data-types/int-uint.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A [UInt16](/docs/en/sql-reference/data-types/int-uint.md) data type hash value.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
It is efficient only if n <= 32768.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT kostikConsistentHash(16045690984833335023, 2);
|
||||
```
|
||||
|
||||
```response
|
||||
┌─kostikConsistentHash(16045690984833335023, 2)─┐
|
||||
│ 1 │
|
||||
└───────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## murmurHash2_32, murmurHash2_64
|
||||
|
||||
Produces a [MurmurHash2](https://github.com/aappleby/smhasher) hash value.
|
||||
@ -1153,6 +1192,42 @@ Result:
|
||||
└────────────┘
|
||||
```
|
||||
|
||||
## wyHash64
|
||||
|
||||
Produces a 64-bit [wyHash64](https://github.com/wangyi-fudan/wyhash) hash value.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
wyHash64(string)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `string` — String. [String](/docs/en/sql-reference/data-types/string.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Hash value.
|
||||
|
||||
Type: [UInt64](/docs/en/sql-reference/data-types/int-uint.md).
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT wyHash64('ClickHouse') AS Hash;
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```response
|
||||
┌─────────────────Hash─┐
|
||||
│ 12336419557878201794 │
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
## ngramMinHash
|
||||
|
||||
Splits an ASCII string into n-grams of `ngramsize` symbols and calculates hash values for each n-gram. Uses `hashnum` minimum hashes to calculate the minimum hash and `hashnum` maximum hashes to calculate the maximum hash. Returns a tuple with these hashes. It is case-sensitive.
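A minimal sketch of a call with the default `ngramsize` and `hashnum` (the returned tuple of two `UInt64` hashes depends on the input, so no concrete values are shown):

```sql
SELECT ngramMinHash('ClickHouse') AS hashes; -- returns a tuple of two UInt64 hash values
```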
|
||||
|
@ -543,12 +543,64 @@ You can get similar result by using the [ternary operator](../../sql-reference/f
|
||||
|
||||
Returns 1 if the Float32 or Float64 argument is NaN, otherwise returns 0.
|
||||
|
||||
## hasColumnInTable(\[‘hostname’\[, ‘username’\[, ‘password’\]\],\] ‘database’, ‘table’, ‘column’)
|
||||
## hasColumnInTable
|
||||
|
||||
Given the database name, the table name, and the column name as constant strings, returns 1 if the given column exists, otherwise 0.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
hasColumnInTable(['hostname'[, 'username'[, 'password']],] 'database', 'table', 'column')
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `database` : name of the database. [String literal](../syntax#syntax-string-literal)
|
||||
- `table` : name of the table. [String literal](../syntax#syntax-string-literal)
|
||||
- `column` : name of the column. [String literal](../syntax#syntax-string-literal)
|
||||
- `hostname` : remote server name to perform the check on. [String literal](../syntax#syntax-string-literal)
|
||||
- `username` : username for remote server. [String literal](../syntax#syntax-string-literal)
|
||||
- `password` : password for remote server. [String literal](../syntax#syntax-string-literal)
|
||||
|
||||
**Returned value**
|
||||
|
||||
- `1` if the given column exists.
|
||||
- `0`, otherwise.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
If the parameter `hostname` is given, the check is performed on a remote server.
|
||||
If the table does not exist, an exception is thrown.
|
||||
For elements in a nested data structure, the function checks for the existence of a column. For the nested data structure itself, the function returns 0.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT hasColumnInTable('system','metrics','metric')
|
||||
```
|
||||
|
||||
```response
|
||||
1
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT hasColumnInTable('system','metrics','non-existing_column')
|
||||
```
|
||||
|
||||
```response
|
||||
0
|
||||
```
|
||||
|
||||
## hasThreadFuzzer
|
||||
|
||||
Returns whether Thread Fuzzer is effective. It can be used in tests to prevent runs from being too long.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
hasThreadFuzzer();
|
||||
```
|
||||
|
||||
## bar
|
||||
|
||||
Builds a bar chart.
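A minimal sketch, drawing a 20-character-wide bar for values between 0 and 9:

```sql
SELECT
    number AS x,
    bar(x, 0, 9, 20) AS chart -- bar(value, min, max, width)
FROM numbers(10);
```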
|
||||
@ -864,6 +916,34 @@ Returns the larger value of a and b.
|
||||
Returns the server’s uptime in seconds.
|
||||
If executed in the context of a distributed table, this function generates a normal column with values relevant to each shard. Otherwise it produces a constant value.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
uptime()
|
||||
```
|
||||
|
||||
**Returned value**
|
||||
|
||||
- The server uptime in seconds.
|
||||
|
||||
Type: [UInt32](/docs/en/sql-reference/data-types/int-uint.md).
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT uptime() as Uptime;
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` response
|
||||
┌─Uptime─┐
|
||||
│ 55867 │
|
||||
└────────┘
|
||||
```
|
||||
|
||||
## version()
|
||||
|
||||
Returns the current version of ClickHouse as a string in the form of:
|
||||
|
@ -99,7 +99,7 @@ Alias: `OCTET_LENGTH`
|
||||
Returns the length of a string in Unicode code points (not in bytes or characters). It assumes that the string contains valid UTF-8 encoded text. If this assumption is violated, no exception is thrown and the result is undefined.
|
||||
|
||||
Alias:
|
||||
- `CHAR_LENGTH``
|
||||
- `CHAR_LENGTH`
|
||||
- `CHARACTER_LENGTH`
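Assuming this section describes `lengthUTF8`, a minimal sketch:

```sql
SELECT lengthUTF8('Привет') AS code_points; -- returns 6 (code points, not bytes)
```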
|
||||
|
||||
## leftPad
|
||||
@ -254,14 +254,70 @@ Result:
|
||||
|
||||
Converts the ASCII Latin symbols in a string to lowercase.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
lower(input)
|
||||
```
|
||||
|
||||
Alias: `lcase`
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `input`: A string type [String](/docs/en/sql-reference/data-types/string.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A [String](/docs/en/sql-reference/data-types/string.md) data type value.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT lower('CLICKHOUSE');
|
||||
```
|
||||
|
||||
```response
|
||||
┌─lower('CLICKHOUSE')─┐
|
||||
│ clickhouse │
|
||||
└─────────────────────┘
|
||||
```
|
||||
|
||||
## upper
|
||||
|
||||
Converts the ASCII Latin symbols in a string to uppercase.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
upper(input)
|
||||
```
|
||||
|
||||
Alias: `ucase`
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `input`: A string type [String](/docs/en/sql-reference/data-types/string.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A [String](/docs/en/sql-reference/data-types/string.md) data type value.
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT upper('clickhouse');
|
||||
```
|
||||
|
||||
``` response
|
||||
┌─upper('clickhouse')─┐
|
||||
│ CLICKHOUSE │
|
||||
└─────────────────────┘
|
||||
```
|
||||
|
||||
## lowerUTF8
|
||||
|
||||
Converts a string to lowercase, assuming that the string contains valid UTF-8 encoded text. If this assumption is violated, no exception is thrown and the result is undefined.
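A minimal sketch:

```sql
SELECT lowerUTF8('MÜNCHEN') AS lower; -- returns 'münchen'
```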
|
||||
@ -278,6 +334,34 @@ Does not detect the language, e.g. for Turkish the result might not be exactly c
|
||||
|
||||
If the length of the UTF-8 byte sequence is different for upper and lower case of a code point, the result may be incorrect for this code point.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
upperUTF8(input)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `input`: A string type [String](/docs/en/sql-reference/data-types/string.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A [String](/docs/en/sql-reference/data-types/string.md) data type value.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT upperUTF8('München') as Upperutf8;
|
||||
```
|
||||
|
||||
``` response
|
||||
┌─Upperutf8─┐
|
||||
│ MÜNCHEN │
|
||||
└───────────┘
|
||||
```
|
||||
|
||||
## isValidUTF8
|
||||
|
||||
Returns 1, if the set of bytes constitutes valid UTF-8-encoded text, otherwise 0.
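A minimal sketch; `unhex('FF')` produces a single byte that is not valid UTF-8:

```sql
SELECT
    isValidUTF8('ClickHouse') AS valid,   -- 1
    isValidUTF8(unhex('FF'))  AS invalid; -- 0
```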
|
||||
|
@ -193,3 +193,33 @@ Result:
|
||||
## translateUTF8
|
||||
|
||||
Like [translate](#translate) but assumes `s`, `from` and `to` are UTF-8 encoded strings.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
translateUTF8(s, from, to)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `s`: A string type [String](/docs/en/sql-reference/data-types/string.md).
|
||||
- `from`: A string type [String](/docs/en/sql-reference/data-types/string.md).
|
||||
- `to`: A string type [String](/docs/en/sql-reference/data-types/string.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- The translated string. [String](/docs/en/sql-reference/data-types/string.md).
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT translateUTF8('Münchener Straße', 'üß', 'us') AS res;
|
||||
```
|
||||
|
||||
``` response
|
||||
┌─res──────────────┐
|
||||
│ Munchener Strase │
|
||||
└──────────────────┘
|
||||
```
|
||||
|
@ -521,45 +521,6 @@ Result:
|
||||
└──────────────────────────────────┘
|
||||
```
|
||||
|
||||
## dotProduct
|
||||
|
||||
Calculates the scalar product of two tuples of the same size.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
dotProduct(tuple1, tuple2)
|
||||
```
|
||||
|
||||
Alias: `scalarProduct`.
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `tuple1` — First tuple. [Tuple](../../sql-reference/data-types/tuple.md).
|
||||
- `tuple2` — Second tuple. [Tuple](../../sql-reference/data-types/tuple.md).
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Scalar product.
|
||||
|
||||
Type: [Int/UInt](../../sql-reference/data-types/int-uint.md) or [Float](../../sql-reference/data-types/float.md).
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
```sql
|
||||
SELECT dotProduct((1, 2), (2, 3));
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```text
|
||||
┌─dotProduct((1, 2), (2, 3))─┐
|
||||
│ 8 │
|
||||
└────────────────────────────┘
|
||||
```
|
||||
|
||||
## tupleConcat
|
||||
|
||||
Combines tuples passed as arguments.
|
||||
@ -584,6 +545,278 @@ SELECT tupleConcat((1, 2), (3, 4), (true, false)) AS res
|
||||
└──────────────────────┘
|
||||
```
|
||||
|
||||
## tupleIntDiv
|
||||
|
||||
Does integer division of a tuple of numerators and a tuple of denominators, and returns a tuple of the quotients.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
tupleIntDiv(tuple_num, tuple_div)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `tuple_num`: Tuple of numerator values. [Tuple](../data-types/tuple) of numeric type.
|
||||
- `tuple_div`: Tuple of divisor values. [Tuple](../data-types/tuple) of numeric type.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Tuple of the quotients of `tuple_num` and `tuple_div`. [Tuple](../data-types/tuple) of integer values.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
- If either `tuple_num` or `tuple_div` contain non-integer values then the result is calculated by rounding to the nearest integer for each non-integer numerator or divisor.
|
||||
- An error will be thrown for division by 0.
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleIntDiv((15, 10, 5), (5, 5, 5));
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleIntDiv((15, 10, 5), (5, 5, 5))─┐
|
||||
│ (3,2,1) │
|
||||
└─────────────────────────────────────┘
|
||||
```
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleIntDiv((15, 10, 5), (5.5, 5.5, 5.5));
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleIntDiv((15, 10, 5), (5.5, 5.5, 5.5))─┐
|
||||
│ (2,1,0) │
|
||||
└───────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## tupleIntDivOrZero
|
||||
|
||||
Like [tupleIntDiv](#tupleintdiv) it does integer division of a tuple of numerators and a tuple of denominators, and returns a tuple of the quotients. It does not throw an error for 0 divisors, but rather returns the quotient as 0.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
tupleIntDivOrZero(tuple_num, tuple_div)
|
||||
```
|
||||
|
||||
**Parameters**

- `tuple_num`: Tuple of numerator values. [Tuple](../data-types/tuple) of numeric type.
|
||||
- `tuple_div`: Tuple of divisor values. [Tuple](../data-types/tuple) of numeric type.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Tuple of the quotients of `tuple_num` and `tuple_div`. [Tuple](../data-types/tuple) of integer values.
|
||||
- Returns 0 for quotients where the divisor is 0.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
- If either `tuple_num` or `tuple_div` contain non-integer values then the result is calculated by rounding to the nearest integer for each non-integer numerator or divisor as in [tupleIntDiv](#tupleintdiv).
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleIntDivOrZero((5, 10, 15), (0, 0, 0));
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleIntDivOrZero((5, 10, 15), (0, 0, 0))─┐
|
||||
│ (0,0,0) │
|
||||
└───────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## tupleIntDivByNumber
|
||||
|
||||
Does integer division of a tuple of numerators by a given denominator, and returns a tuple of the quotients.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
tupleIntDivByNumber(tuple_num, div)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `tuple_num`: Tuple of numerator values. [Tuple](../data-types/tuple) of numeric type.
|
||||
- `div`: The divisor value. [Numeric](../data-types/int-uint.md) type.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Tuple of the quotients of `tuple_num` and `div`. [Tuple](../data-types/tuple) of integer values.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
- If either `tuple_num` or `div` contain non-integer values then the result is calculated by rounding to the nearest integer for each non-integer numerator or divisor.
|
||||
- An error will be thrown for division by 0.
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleIntDivByNumber((15, 10, 5), 5);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleIntDivByNumber((15, 10, 5), 5)─┐
|
||||
│ (3,2,1) │
|
||||
└─────────────────────────────────────┘
|
||||
```
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleIntDivByNumber((15.2, 10.7, 5.5), 5.8);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleIntDivByNumber((15.2, 10.7, 5.5), 5.8)─┐
|
||||
│ (2,1,0) │
|
||||
└─────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## tupleIntDivOrZeroByNumber
|
||||
|
||||
Like [tupleIntDivByNumber](#tupleintdivbynumber) it does integer division of a tuple of numerators by a given denominator, and returns a tuple of the quotients. It does not throw an error for 0 divisors, but rather returns the quotient as 0.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
tupleIntDivOrZeroByNumber(tuple_num, div)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `tuple_num`: Tuple of numerator values. [Tuple](../data-types/tuple) of numeric type.
|
||||
- `div`: The divisor value. [Numeric](../data-types/int-uint.md) type.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Tuple of the quotients of `tuple_num` and `div`. [Tuple](../data-types/tuple) of integer values.
|
||||
- Returns 0 for quotients where the divisor is 0.
|
||||
|
||||
**Implementation details**
|
||||
|
||||
- If either `tuple_num` or `div` contain non-integer values then the result is calculated by rounding to the nearest integer for each non-integer numerator or divisor as in [tupleIntDivByNumber](#tupleintdivbynumber).
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleIntDivOrZeroByNumber((15, 10, 5), 5);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleIntDivOrZeroByNumber((15, 10, 5), 5)─┐
|
||||
│ (3,2,1) │
|
||||
└───────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleIntDivOrZeroByNumber((15, 10, 5), 0)
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleIntDivOrZeroByNumber((15, 10, 5), 0)─┐
|
||||
│ (0,0,0) │
|
||||
└───────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## tupleModulo
|
||||
|
||||
Returns a tuple of the moduli (remainders) of division operations of two tuples.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
tupleModulo(tuple_num, tuple_mod)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `tuple_num`: Tuple of numerator values. [Tuple](../data-types/tuple) of numeric type.
|
||||
- `tuple_mod`: Tuple of modulus values. [Tuple](../data-types/tuple) of numeric type.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Tuple of the remainders of division of `tuple_num` by `tuple_mod`. [Tuple](../data-types/tuple) of integer values.
|
||||
- An error is thrown for division by zero.
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleModulo((15, 10, 5), (5, 3, 2));
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleModulo((15, 10, 5), (5, 3, 2))─┐
|
||||
│ (0,1,1) │
|
||||
└─────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## tupleModuloByNumber
|
||||
|
||||
Returns a tuple of the moduli (remainders) of division operations of a tuple and a given divisor.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
tupleModuloByNumber(tuple_num, div)
|
||||
```
|
||||
|
||||
**Parameters**
|
||||
|
||||
- `tuple_num`: Tuple of numerator values. [Tuple](../data-types/tuple) of numeric type.
|
||||
- `div`: The divisor value. [Numeric](../data-types/int-uint.md) type.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- Tuple of the remainders of division of `tuple_num` by `div`. [Tuple](../data-types/tuple) of integer values.
|
||||
- An error is thrown for division by zero.
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT tupleModuloByNumber((15, 10, 5), 2);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─tupleModuloByNumber((15, 10, 5), 2)─┐
|
||||
│ (1,0,1) │
|
||||
└─────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## Distance functions
|
||||
|
||||
All supported functions are described in [distance functions documentation](../../sql-reference/functions/distance-functions.md).
|
||||
|
@ -128,9 +128,9 @@ Returns the part of the domain that includes top-level subdomains up to the “f
|
||||
|
||||
For example:
|
||||
|
||||
- `cutToFirstSignificantSubdomain('https://news.clickhouse.com.tr/') = 'clickhouse.com.tr'`.
|
||||
- `cutToFirstSignificantSubdomain('www.tr') = 'www.tr'`.
|
||||
- `cutToFirstSignificantSubdomain('tr') = ''`.
|
||||
- `cutToFirstSignificantSubdomainWithWWW('https://news.clickhouse.com.tr/') = 'clickhouse.com.tr'`.
|
||||
- `cutToFirstSignificantSubdomainWithWWW('www.tr') = 'www.tr'`.
|
||||
- `cutToFirstSignificantSubdomainWithWWW('tr') = ''`.
|
||||
|
||||
### cutToFirstSignificantSubdomainCustom
|
||||
|
||||
|
@ -56,7 +56,9 @@ Entries for finished mutations are not deleted right away (the number of preserv
|
||||
|
||||
For non-replicated tables, all `ALTER` queries are performed synchronously. For replicated tables, the query just adds instructions for the appropriate actions to `ZooKeeper`, and the actions themselves are performed as soon as possible. However, the query can wait for these actions to be completed on all the replicas.
|
||||
|
||||
For all `ALTER` queries, you can use the [alter_sync](/docs/en/operations/settings/settings.md/#alter-sync) setting to set up waiting.
|
||||
For `ALTER` queries that create mutations (including, but not limited to, `UPDATE`, `DELETE`, `MATERIALIZE INDEX`, `MATERIALIZE PROJECTION`, `MATERIALIZE COLUMN`, `APPLY DELETED MASK`, `CLEAR STATISTIC`, `MATERIALIZE STATISTIC`), the synchronicity is defined by the [mutations_sync](/docs/en/operations/settings/settings.md/#mutations_sync) setting.
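As a sketch (the table name is hypothetical), a mutation-creating `ALTER` can be made to wait for all replicas like this:

```sql
ALTER TABLE tab UPDATE x = x + 1 WHERE 1
SETTINGS mutations_sync = 2; -- 0: asynchronous (default), 1: wait for own replica, 2: wait for all replicas
```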
|
||||
|
||||
For other `ALTER` queries which only modify the metadata, you can use the [alter_sync](/docs/en/operations/settings/settings.md/#alter-sync) setting to set up waiting.
|
||||
|
||||
You can specify how long (in seconds) to wait for inactive replicas to execute all `ALTER` queries with the [replication_wait_for_inactive_replica_timeout](/docs/en/operations/settings/settings.md/#replication-wait-for-inactive-replica-timeout) setting.
|
||||
|
||||
@ -64,8 +66,6 @@ You can specify how long (in seconds) to wait for inactive replicas to execute a
|
||||
For all `ALTER` queries, if `alter_sync = 2` and some replicas are not active for more than the time, specified in the `replication_wait_for_inactive_replica_timeout` setting, then an exception `UNFINISHED` is thrown.
|
||||
:::
|
||||
|
||||
For `ALTER TABLE ... UPDATE|DELETE|MATERIALIZE INDEX|MATERIALIZE PROJECTION|MATERIALIZE COLUMN` queries the synchronicity is defined by the [mutations_sync](/docs/en/operations/settings/settings.md/#mutations_sync) setting.
|
||||
|
||||
## Related content
|
||||
|
||||
- Blog: [Handling Updates and Deletes in ClickHouse](https://clickhouse.com/blog/handling-updates-and-deletes-in-clickhouse)
|
||||
|
@ -8,7 +8,7 @@ sidebar_label: VIEW
|
||||
|
||||
You can modify `SELECT` query that was specified when a [materialized view](../create/view.md#materialized) was created with the `ALTER TABLE … MODIFY QUERY` statement without interrupting ingestion process.
|
||||
|
||||
This command is created to change materialized view created with `TO [db.]name` clause. It does not change the structure of the underling storage table and it does not change the columns' definition of the materialized view, because of this the application of this command is very limited for materialized views are created without `TO [db.]name` clause.
|
||||
This command is created to change a materialized view created with the `TO [db.]name` clause. It does not change the structure of the underlying storage table and it does not change the columns' definition of the materialized view; because of this, the applicability of this command is very limited for materialized views created without the `TO [db.]name` clause.
|
||||
|
||||
**Example with TO table**
|
||||
|
||||
|
@ -20,19 +20,22 @@ DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster] [SYNC]
|
||||
|
||||
## DROP TABLE
|
||||
|
||||
Deletes the table.
|
||||
In case when `IF EMPTY` clause is specified server will check if table is empty only on replica that received initial query.
|
||||
Deletes one or more tables.
|
||||
|
||||
:::tip
|
||||
Also see [UNDROP TABLE](/docs/en/sql-reference/statements/undrop.md)
|
||||
To undo the deletion of a table, please see [UNDROP TABLE](/docs/en/sql-reference/statements/undrop.md)
|
||||
:::
|
||||
|
||||
Syntax:
|
||||
|
||||
``` sql
|
||||
DROP [TEMPORARY] TABLE [IF EXISTS] [IF EMPTY] [db.]name [ON CLUSTER cluster] [SYNC]
|
||||
DROP [TEMPORARY] TABLE [IF EXISTS] [IF EMPTY] [db1.]name_1[, [db2.]name_2, ...] [ON CLUSTER cluster] [SYNC]
|
||||
```
|
||||
|
||||
Limitations:
|
||||
- If the clause `IF EMPTY` is specified, the server checks the emptiness of the table only on the replica which received the query.
|
||||
- Deleting multiple tables at once is not an atomic operation, i.e. if the deletion of a table fails, subsequent tables will not be deleted.
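For example, a sketch of dropping several tables in one statement (the database and table names are hypothetical):

```sql
DROP TABLE IF EXISTS db1.tab1, db2.tab2 SYNC;
```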
|
||||
|
||||
## DROP DICTIONARY
|
||||
|
||||
Deletes the dictionary.
|
||||
|
@ -64,6 +64,14 @@ RELOAD FUNCTIONS [ON CLUSTER cluster_name]
|
||||
RELOAD FUNCTION [ON CLUSTER cluster_name] function_name
|
||||
```
|
||||
|
||||
## RELOAD ASYNCHRONOUS METRICS
|
||||
|
||||
Re-calculates all [asynchronous metrics](../../operations/system-tables/asynchronous_metrics.md). Since asynchronous metrics are periodically updated based on setting [asynchronous_metrics_update_period_s](../../operations/server-configuration-parameters/settings.md), updating them manually using this statement is typically not necessary.
|
||||
|
||||
```sql
|
||||
RELOAD ASYNCHRONOUS METRICS [ON CLUSTER cluster_name]
|
||||
```
|
||||
|
||||
## DROP DNS CACHE
|
||||
|
||||
Clears ClickHouse’s internal DNS cache. Sometimes (for old ClickHouse versions) it is necessary to use this command when changing the infrastructure (changing the IP address of another ClickHouse server or the server used by dictionaries).
|
||||
|
@ -23,9 +23,16 @@ You can specify how long (in seconds) to wait for inactive replicas to execute `
|
||||
If the `alter_sync` is set to `2` and some replicas are not active for more than the time, specified by the `replication_wait_for_inactive_replica_timeout` setting, then an exception `UNFINISHED` is thrown.
|
||||
:::
|
||||
|
||||
## TRUNCATE ALL TABLES
|
||||
``` sql
|
||||
TRUNCATE ALL TABLES [IF EXISTS] db [ON CLUSTER cluster]
|
||||
```
|
||||
|
||||
Removes all data from all tables in a database.
|
||||
|
||||
## TRUNCATE DATABASE
|
||||
``` sql
|
||||
TRUNCATE DATABASE [IF EXISTS] [db.]name [ON CLUSTER cluster]
|
||||
TRUNCATE DATABASE [IF EXISTS] db [ON CLUSTER cluster]
|
||||
```
|
||||
|
||||
Removes all tables from a database but keeps the database itself. When the clause `IF EXISTS` is omitted, the query returns an error if the database does not exist.
|
||||
|
@ -13,13 +13,6 @@ a system table called `system.dropped_tables`.
|
||||
|
||||
If you have a materialized view without a `TO` clause associated with the dropped table, then you will also have to UNDROP the inner table of that view.
|
||||
|
||||
:::note
|
||||
UNDROP TABLE is experimental. To use it add this setting:
|
||||
```sql
|
||||
set allow_experimental_undrop_table_query = 1;
|
||||
```
|
||||
:::
|
||||
|
||||
:::tip
|
||||
Also see [DROP TABLE](/docs/en/sql-reference/statements/drop.md)
|
||||
:::
|
||||
@ -32,60 +25,53 @@ UNDROP TABLE [db.]name [UUID '<uuid>'] [ON CLUSTER cluster]
|
||||
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
set allow_experimental_undrop_table_query = 1;
|
||||
```
|
||||
|
||||
```sql
|
||||
CREATE TABLE undropMe
|
||||
CREATE TABLE tab
|
||||
(
|
||||
`id` UInt8
|
||||
)
|
||||
ENGINE = MergeTree
|
||||
ORDER BY id
|
||||
```
|
||||
ORDER BY id;
|
||||
|
||||
DROP TABLE tab;
|
||||
|
||||
```sql
|
||||
DROP TABLE undropMe
|
||||
```
|
||||
```sql
|
||||
SELECT *
|
||||
FROM system.dropped_tables
|
||||
FORMAT Vertical
|
||||
FORMAT Vertical;
|
||||
```
|
||||
|
||||
```response
|
||||
Row 1:
|
||||
──────
|
||||
index: 0
|
||||
database: default
|
||||
table: undropMe
|
||||
table: tab
|
||||
uuid: aa696a1a-1d70-4e60-a841-4c80827706cc
|
||||
engine: MergeTree
|
||||
metadata_dropped_path: /var/lib/clickhouse/metadata_dropped/default.undropMe.aa696a1a-1d70-4e60-a841-4c80827706cc.sql
|
||||
metadata_dropped_path: /var/lib/clickhouse/metadata_dropped/default.tab.aa696a1a-1d70-4e60-a841-4c80827706cc.sql
|
||||
table_dropped_time: 2023-04-05 14:12:12
|
||||
|
||||
1 row in set. Elapsed: 0.001 sec.
|
||||
```
|
||||
|
||||
```sql
|
||||
UNDROP TABLE undropMe
|
||||
```
|
||||
```response
|
||||
Ok.
|
||||
```
|
||||
```sql
|
||||
UNDROP TABLE tab;
|
||||
|
||||
SELECT *
|
||||
FROM system.dropped_tables
|
||||
FORMAT Vertical
|
||||
```
|
||||
FORMAT Vertical;
|
||||
|
||||
```response
|
||||
Ok.
|
||||
|
||||
0 rows in set. Elapsed: 0.001 sec.
|
||||
```
|
||||
|
||||
```sql
|
||||
DESCRIBE TABLE undropMe
|
||||
FORMAT Vertical
|
||||
DESCRIBE TABLE tab
|
||||
FORMAT Vertical;
|
||||
```
|
||||
|
||||
```response
|
||||
Row 1:
|
||||
──────
|
||||
|
@ -53,7 +53,7 @@ SELECT * FROM random;
|
||||
└──────────────────────────────┴──────────────┴────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
In combination with [generateRandomStructure](../../sql-reference/functions/other-functions.md#generateRandomStructure):
|
||||
In combination with [generateRandomStructure](../../sql-reference/functions/other-functions.md#generaterandomstructure):
|
||||
|
||||
```sql
|
||||
SELECT * FROM generateRandom(generateRandomStructure(4, 101), 101) LIMIT 3;
|
||||
|
@ -12,25 +12,23 @@ Some of the calculations that you can do are similar to those that can be done w
|
||||
|
||||
ClickHouse supports the standard grammar for defining windows and window functions. The table below indicates whether a feature is currently supported.
|
||||
|
||||
| Feature | Support or workaround |
|
||||
| Feature | Supported? |
|
||||
|------------------------------------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
|
||||
| ad hoc window specification (`count(*) over (partition by id order by time desc)`) | supported |
|
||||
| expressions involving window functions, e.g. `(count(*) over ()) / 2)` | supported |
|
||||
| `WINDOW` clause (`select ... from table window w as (partition by id)`) | supported |
|
||||
| `ROWS` frame | supported |
|
||||
| `RANGE` frame | supported, the default |
|
||||
| `INTERVAL` syntax for `DateTime` `RANGE OFFSET` frame | not supported, specify the number of seconds instead (`RANGE` works with any numeric type). |
|
||||
| `GROUPS` frame | not supported |
|
||||
| Calculating aggregate functions over a frame (`sum(value) over (order by time)`) | all aggregate functions are supported |
|
||||
| `rank()`, `dense_rank()`, `row_number()` | supported |
|
||||
| `lag/lead(value, offset)` | Not supported. Workarounds: |
|
||||
| | 1) replace with `any(value) over (.... rows between <offset> preceding and <offset> preceding)`, or `following` for `lead` |
|
||||
| | 2) use `lagInFrame/leadInFrame`, which are analogous, but respect the window frame. To get behavior identical to `lag/lead`, use `rows between unbounded preceding and unbounded following` |
|
||||
| ntile(buckets) | Supported. Specify window like, (partition by x order by y rows between unbounded preceding and unrounded following). |
|
||||
| ad hoc window specification (`count(*) over (partition by id order by time desc)`) | ✅ |
|
||||
| expressions involving window functions, e.g. `(count(*) over ()) / 2)` | ✅ |
|
||||
| `WINDOW` clause (`select ... from table window w as (partition by id)`) | ✅ |
|
||||
| `ROWS` frame | ✅ |
|
||||
| `RANGE` frame | ✅ (the default) |
|
||||
| `INTERVAL` syntax for `DateTime` `RANGE OFFSET` frame                               | ❌ (specify the number of seconds instead; `RANGE` works with any numeric type)              |
|
||||
| `GROUPS` frame | ❌ |
|
||||
| Calculating aggregate functions over a frame (`sum(value) over (order by time)`) | ✅ (All aggregate functions are supported) |
|
||||
| `rank()`, `dense_rank()`, `row_number()` | ✅ |
|
||||
| `lag/lead(value, offset)` | ❌ <br/> You can use one of the following workarounds:<br/> 1) `any(value) over (.... rows between <offset> preceding and <offset> preceding)`, or `following` for `lead` <br/> 2) `lagInFrame/leadInFrame`, which are analogous, but respect the window frame. To get behavior identical to `lag/lead`, use `rows between unbounded preceding and unbounded following` |
|
||||
| ntile(buckets) | ✅ <br/> Specify the window like `(partition by x order by y rows between unbounded preceding and unbounded following)`. |
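As a sketch of the `lag`/`lead` workaround mentioned in the table above (the table and column names are hypothetical):

```sql
SELECT
    ts,
    value,
    lagInFrame(value, 1) OVER (
        ORDER BY ts ASC
        ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
    ) AS previous_value -- behaves like lag(value, 1) over the whole partition
FROM events;
```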
|
||||
|
||||
## ClickHouse-specific Window Functions
|
||||
|
||||
There are also the following window function that's specific to ClickHouse:
|
||||
There is also the following ClickHouse-specific window function:
|
||||
|
||||
### nonNegativeDerivative(metric_column, timestamp_column[, INTERVAL X UNITS])
|
||||
|
||||
@ -89,6 +87,102 @@ These functions can be used only as a window function.
|
||||
|
||||
Let's have a look at some examples of how window functions can be used.
|
||||
|
||||
### Numbering rows
|
||||
|
||||
```sql
|
||||
CREATE TABLE salaries
|
||||
(
|
||||
`team` String,
|
||||
`player` String,
|
||||
`salary` UInt32,
|
||||
`position` String
|
||||
)
|
||||
Engine = Memory;
|
||||
|
||||
INSERT INTO salaries FORMAT Values
|
||||
('Port Elizabeth Barbarians', 'Gary Chen', 195000, 'F'),
|
||||
('New Coreystad Archdukes', 'Charles Juarez', 190000, 'F'),
|
||||
('Port Elizabeth Barbarians', 'Michael Stanley', 150000, 'D'),
|
||||
('New Coreystad Archdukes', 'Scott Harrison', 150000, 'D'),
|
||||
('Port Elizabeth Barbarians', 'Robert George', 195000, 'M');
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT player, salary,
|
||||
row_number() OVER (ORDER BY salary) AS row
|
||||
FROM salaries;
|
||||
```
|
||||
|
||||
```text
|
||||
┌─player──────────┬─salary─┬─row─┐
|
||||
│ Michael Stanley │ 150000 │ 1 │
|
||||
│ Scott Harrison │ 150000 │ 2 │
|
||||
│ Charles Juarez │ 190000 │ 3 │
|
||||
│ Gary Chen │ 195000 │ 4 │
|
||||
│ Robert George │ 195000 │ 5 │
|
||||
└─────────────────┴────────┴─────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT player, salary,
|
||||
row_number() OVER (ORDER BY salary) AS row,
|
||||
rank() OVER (ORDER BY salary) AS rank,
|
||||
dense_rank() OVER (ORDER BY salary) AS denseRank
|
||||
FROM salaries;
|
||||
```
|
||||
|
||||
```text
|
||||
┌─player──────────┬─salary─┬─row─┬─rank─┬─denseRank─┐
|
||||
│ Michael Stanley │ 150000 │ 1 │ 1 │ 1 │
|
||||
│ Scott Harrison │ 150000 │ 2 │ 1 │ 1 │
|
||||
│ Charles Juarez │ 190000 │ 3 │ 3 │ 2 │
|
||||
│ Gary Chen │ 195000 │ 4 │ 4 │ 3 │
|
||||
│ Robert George │ 195000 │ 5 │ 4 │ 3 │
|
||||
└─────────────────┴────────┴─────┴──────┴───────────┘
|
||||
```
|
||||
|
||||
### Aggregation functions
|
||||
|
||||
Compare each player's salary to the average for their team.
|
||||
|
||||
```sql
|
||||
SELECT player, salary, team,
|
||||
avg(salary) OVER (PARTITION BY team) AS teamAvg,
|
||||
salary - teamAvg AS diff
|
||||
FROM salaries;
|
||||
```
|
||||
|
||||
```text
|
||||
┌─player──────────┬─salary─┬─team──────────────────────┬─teamAvg─┬───diff─┐
|
||||
│ Charles Juarez │ 190000 │ New Coreystad Archdukes │ 170000 │ 20000 │
|
||||
│ Scott Harrison │ 150000 │ New Coreystad Archdukes │ 170000 │ -20000 │
|
||||
│ Gary Chen │ 195000 │ Port Elizabeth Barbarians │ 180000 │ 15000 │
|
||||
│ Michael Stanley │ 150000 │ Port Elizabeth Barbarians │ 180000 │ -30000 │
|
||||
│ Robert George │ 195000 │ Port Elizabeth Barbarians │ 180000 │ 15000 │
|
||||
└─────────────────┴────────┴───────────────────────────┴─────────┴────────┘
|
||||
```
|
||||
|
||||
Compare each player's salary to the maximum for their team.
|
||||
|
||||
```sql
|
||||
SELECT player, salary, team,
|
||||
max(salary) OVER (PARTITION BY team) AS teamAvg,
|
||||
salary - teamAvg AS diff
|
||||
FROM salaries;
|
||||
```
|
||||
|
||||
```text
|
||||
┌─player──────────┬─salary─┬─team──────────────────────┬─teamAvg─┬───diff─┐
|
||||
│ Charles Juarez │ 190000 │ New Coreystad Archdukes │ 190000 │ 0 │
|
||||
│ Scott Harrison │ 150000 │ New Coreystad Archdukes │ 190000 │ -40000 │
|
||||
│ Gary Chen │ 195000 │ Port Elizabeth Barbarians │ 195000 │ 0 │
|
||||
│ Michael Stanley │ 150000 │ Port Elizabeth Barbarians │ 195000 │ -45000 │
|
||||
│ Robert George │ 195000 │ Port Elizabeth Barbarians │ 195000 │ 0 │
|
||||
└─────────────────┴────────┴───────────────────────────┴─────────┴────────┘
|
||||
```
|
||||
|
||||
### Partitioning by column
|
||||
|
||||
```sql
|
||||
CREATE TABLE wf_partition
|
||||
(
|
||||
@ -120,6 +214,8 @@ ORDER BY
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
### Frame bounding
|
||||
|
||||
```sql
|
||||
CREATE TABLE wf_frame
|
||||
(
|
||||
@ -131,14 +227,19 @@ ENGINE = Memory;
|
||||
|
||||
INSERT INTO wf_frame FORMAT Values
|
||||
(1,1,1), (1,2,2), (1,3,3), (1,4,4), (1,5,5);
|
||||
```
|
||||
|
||||
-- frame is bounded by bounds of a partition (BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
|
||||
```sql
|
||||
-- Frame is bounded by bounds of a partition (BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
|
||||
SELECT
|
||||
part_key,
|
||||
value,
|
||||
order,
|
||||
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||
Rows BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) AS frame_values
|
||||
groupArray(value) OVER (
|
||||
PARTITION BY part_key
|
||||
ORDER BY order ASC
|
||||
Rows BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
|
||||
) AS frame_values
|
||||
FROM wf_frame
|
||||
ORDER BY
|
||||
part_key ASC,
|
||||
@ -151,7 +252,9 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [1,2,3,4,5] │
|
||||
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
-- short form - no bound expression, no order by
|
||||
SELECT
|
||||
part_key,
|
||||
@ -169,14 +272,19 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [1,2,3,4,5] │
|
||||
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
-- frame is bounded by the beggining of a partition and the current row
|
||||
```sql
|
||||
-- frame is bounded by the beginning of a partition and the current row
|
||||
SELECT
|
||||
part_key,
|
||||
value,
|
||||
order,
|
||||
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||
Rows BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS frame_values
|
||||
groupArray(value) OVER (
|
||||
PARTITION BY part_key
|
||||
ORDER BY order ASC
|
||||
Rows BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
|
||||
) AS frame_values
|
||||
FROM wf_frame
|
||||
ORDER BY
|
||||
part_key ASC,
|
||||
@ -189,8 +297,10 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [1,2,3,4] │
|
||||
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
-- short form (frame is bounded by the beggining of a partition and the current row)
|
||||
```sql
|
||||
-- short form (frame is bounded by the beginning of a partition and the current row)
|
||||
SELECT
|
||||
part_key,
|
||||
value,
|
||||
@ -207,8 +317,10 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [1,2,3,4] │
|
||||
│ 1 │ 5 │ 5 │ [1,2,3,4,5] │
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
-- frame is bounded by the beggining of a partition and the current row, but order is backward
|
||||
```sql
|
||||
-- frame is bounded by the beginning of a partition and the current row, but order is backward
|
||||
SELECT
|
||||
part_key,
|
||||
value,
|
||||
@ -225,14 +337,19 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [5,4] │
|
||||
│ 1 │ 5 │ 5 │ [5] │
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
-- sliding frame - 1 PRECEDING ROW AND CURRENT ROW
|
||||
SELECT
|
||||
part_key,
|
||||
value,
|
||||
order,
|
||||
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||
Rows BETWEEN 1 PRECEDING AND CURRENT ROW) AS frame_values
|
||||
groupArray(value) OVER (
|
||||
PARTITION BY part_key
|
||||
ORDER BY order ASC
|
||||
Rows BETWEEN 1 PRECEDING AND CURRENT ROW
|
||||
) AS frame_values
|
||||
FROM wf_frame
|
||||
ORDER BY
|
||||
part_key ASC,
|
||||
@ -245,14 +362,19 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [3,4] │
|
||||
│ 1 │ 5 │ 5 │ [4,5] │
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
-- sliding frame - Rows BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING
|
||||
SELECT
|
||||
part_key,
|
||||
value,
|
||||
order,
|
||||
groupArray(value) OVER (PARTITION BY part_key ORDER BY order ASC
|
||||
Rows BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING) AS frame_values
|
||||
groupArray(value) OVER (
|
||||
PARTITION BY part_key
|
||||
ORDER BY order ASC
|
||||
Rows BETWEEN 1 PRECEDING AND UNBOUNDED FOLLOWING
|
||||
) AS frame_values
|
||||
FROM wf_frame
|
||||
ORDER BY
|
||||
part_key ASC,
|
||||
@ -264,7 +386,9 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [3,4,5] │
|
||||
│ 1 │ 5 │ 5 │ [4,5] │
|
||||
└──────────┴───────┴───────┴──────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
-- row_number does not respect the frame, so rn_1 = rn_2 = rn_3 != rn_4
|
||||
SELECT
|
||||
part_key,
|
||||
@ -278,8 +402,11 @@ SELECT
|
||||
FROM wf_frame
|
||||
WINDOW
|
||||
w1 AS (PARTITION BY part_key ORDER BY order DESC),
|
||||
w2 AS (PARTITION BY part_key ORDER BY order DESC
|
||||
Rows BETWEEN 1 PRECEDING AND CURRENT ROW)
|
||||
w2 AS (
|
||||
PARTITION BY part_key
|
||||
ORDER BY order DESC
|
||||
Rows BETWEEN 1 PRECEDING AND CURRENT ROW
|
||||
)
|
||||
ORDER BY
|
||||
part_key ASC,
|
||||
value ASC;
|
||||
@ -290,7 +417,9 @@ ORDER BY
|
||||
│ 1 │ 4 │ 4 │ [5,4] │ 2 │ 2 │ 2 │ 2 │
|
||||
│ 1 │ 5 │ 5 │ [5] │ 1 │ 1 │ 1 │ 1 │
|
||||
└──────────┴───────┴───────┴──────────────┴──────┴──────┴──────┴──────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
-- first_value and last_value respect the frame
|
||||
SELECT
|
||||
groupArray(value) OVER w1 AS frame_values_1,
|
||||
@ -313,7 +442,9 @@ ORDER BY
|
||||
│ [1,2,3,4] │ 1 │ 4 │ [3,4] │ 3 │ 4 │
|
||||
│ [1,2,3,4,5] │ 1 │ 5 │ [4,5] │ 4 │ 5 │
|
||||
└────────────────┴───────────────┴──────────────┴────────────────┴───────────────┴──────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
-- second value within the frame
|
||||
SELECT
|
||||
groupArray(value) OVER w1 AS frame_values_1,
|
||||
@ -330,7 +461,9 @@ ORDER BY
|
||||
│ [1,2,3,4] │ 2 │
|
||||
│ [2,3,4,5] │ 3 │
|
||||
└────────────────┴──────────────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
-- second value within the frame + Null for missing values
|
||||
SELECT
|
||||
groupArray(value) OVER w1 AS frame_values_1,
|
||||
@ -351,7 +484,9 @@ ORDER BY
|
||||
|
||||
## Real world examples
|
||||
|
||||
### Maximum/total salary per department.
|
||||
The following examples solve common real-world problems.
|
||||
|
||||
### Maximum/total salary per department
|
||||
|
||||
```sql
|
||||
CREATE TABLE employees
|
||||
@ -369,7 +504,9 @@ INSERT INTO employees FORMAT Values
|
||||
('IT', 'Tim', 200),
|
||||
('IT', 'Anna', 300),
|
||||
('IT', 'Elen', 500);
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
department,
|
||||
employee_name AS emp,
|
||||
@ -386,8 +523,10 @@ FROM
|
||||
max(salary) OVER wndw AS max_salary_per_dep,
|
||||
sum(salary) OVER wndw AS total_salary_per_dep
|
||||
FROM employees
|
||||
WINDOW wndw AS (PARTITION BY department
|
||||
rows BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING)
|
||||
WINDOW wndw AS (
|
||||
PARTITION BY department
|
||||
rows BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING
|
||||
)
|
||||
ORDER BY
|
||||
department ASC,
|
||||
employee_name ASC
|
||||
@ -403,7 +542,7 @@ FROM
|
||||
└────────────┴──────┴────────┴────────────────────┴──────────────────────┴──────────────────┘
|
||||
```
|
||||
|
||||
### Cumulative sum.
|
||||
### Cumulative sum
|
||||
|
||||
```sql
|
||||
CREATE TABLE warehouse
|
||||
@ -421,7 +560,9 @@ INSERT INTO warehouse VALUES
|
||||
('sku1', '2020-01-01', 1),
|
||||
('sku1', '2020-02-01', 1),
|
||||
('sku1', '2020-03-01', 1);
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
item,
|
||||
ts,
|
||||
@ -461,13 +602,18 @@ insert into sensors values('cpu_temp', '2020-01-01 00:00:00', 87),
|
||||
('cpu_temp', '2020-01-01 00:00:05', 87),
|
||||
('cpu_temp', '2020-01-01 00:00:06', 87),
|
||||
('cpu_temp', '2020-01-01 00:00:07', 87);
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
metric,
|
||||
ts,
|
||||
value,
|
||||
avg(value) OVER
|
||||
(PARTITION BY metric ORDER BY ts ASC Rows BETWEEN 2 PRECEDING AND CURRENT ROW)
|
||||
AS moving_avg_temp
|
||||
avg(value) OVER (
|
||||
PARTITION BY metric
|
||||
ORDER BY ts ASC
|
||||
Rows BETWEEN 2 PRECEDING AND CURRENT ROW
|
||||
) AS moving_avg_temp
|
||||
FROM sensors
|
||||
ORDER BY
|
||||
metric ASC,
|
||||
@ -536,7 +682,9 @@ insert into sensors values('ambient_temp', '2020-01-01 00:00:00', 16),
|
||||
('ambient_temp', '2020-03-01 12:00:00', 16),
|
||||
('ambient_temp', '2020-03-01 12:00:00', 16),
|
||||
('ambient_temp', '2020-03-01 12:00:00', 16);
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT
|
||||
metric,
|
||||
ts,
|
||||
|
@ -434,16 +434,18 @@ $ curl -v 'http://localhost:8123/predefined_query'
|
||||
``` xml
|
||||
<http_handlers>
|
||||
<rule>
|
||||
<url><![CDATA[regex:/query_param_with_url/\w+/(?P<name_1>[^/]+)(/(?P<name_2>[^/]+))?]]></url>
|
||||
<url><![CDATA[regex:/query_param_with_url/(?P<name_1>[^/]+)]]></url>
|
||||
<methods>GET</methods>
|
||||
<headers>
|
||||
<XXX>TEST_HEADER_VALUE</XXX>
|
||||
<PARAMS_XXX><![CDATA[(?P<name_1>[^/]+)(/(?P<name_2>[^/]+))?]]></PARAMS_XXX>
|
||||
<PARAMS_XXX><![CDATA[regex:(?P<name_2>[^/]+)]]></PARAMS_XXX>
|
||||
</headers>
|
||||
<handler>
|
||||
<type>predefined_query_handler</type>
|
||||
<query>SELECT value FROM system.settings WHERE name = {name_1:String}</query>
|
||||
<query>SELECT name, value FROM system.settings WHERE name = {name_2:String}</query>
|
||||
<query>
|
||||
SELECT name, value FROM system.settings
|
||||
WHERE name IN ({name_1:String}, {name_2:String})
|
||||
</query>
|
||||
</handler>
|
||||
</rule>
|
||||
<defaults/>
|
||||
@ -451,13 +453,13 @@ $ curl -v 'http://localhost:8123/predefined_query'
|
||||
```
|
||||
|
||||
``` bash
|
||||
$ curl -H 'XXX:TEST_HEADER_VALUE' -H 'PARAMS_XXX:max_threads' 'http://localhost:8123/query_param_with_url/1/max_threads/max_final_threads?max_threads=1&max_final_threads=2'
|
||||
1
|
||||
max_final_threads 2
|
||||
$ curl -H 'XXX:TEST_HEADER_VALUE' -H 'PARAMS_XXX:max_final_threads' 'http://localhost:8123/query_param_with_url/max_threads?max_threads=1&max_final_threads=2'
|
||||
max_final_threads 2
|
||||
max_threads 1
|
||||
```
|
||||
|
||||
:::note Предупреждение
|
||||
В одном `predefined_query_handler` поддерживается только один запрос типа `INSERT`.
|
||||
В одном `predefined_query_handler` поддерживается только один запрос.
|
||||
:::
|
||||
### dynamic_query_handler {#dynamic_query_handler}
|
||||
|
||||
|
@ -2776,7 +2776,7 @@ SELECT range(number) FROM system.numbers LIMIT 5 FORMAT PrettyCompactNoEscapes;
|
||||
- 0 — номера строк не выводятся.
|
||||
- 1 — номера строк выводятся.
|
||||
|
||||
Значение по умолчанию: `0`.
|
||||
Значение по умолчанию: `1`.
|
||||
|
||||
**Пример**
|
||||
|
||||
@ -2798,7 +2798,7 @@ SELECT TOP 3 name, value FROM system.settings;
|
||||
```
|
||||
### output_format_pretty_color {#output_format_pretty_color}
|
||||
|
||||
Включает/выключает управляющие последовательности ANSI в форматах Pretty.
|
||||
Включает/выключает управляющие последовательности ANSI в форматах Pretty.
|
||||
|
||||
Возможные значения:
|
||||
|
||||
@ -4123,7 +4123,7 @@ SELECT sum(number) FROM numbers(10000000000) SETTINGS partial_result_on_first_ca
|
||||
## session_timezone {#session_timezone}
|
||||
|
||||
Задаёт значение часового пояса (session_timezone) по умолчанию для текущей сессии вместо [часового пояса сервера](../server-configuration-parameters/settings.md#server_configuration_parameters-timezone). То есть, все значения DateTime/DateTime64, для которых явно не задан часовой пояс, будут интерпретированы как относящиеся к указанной зоне.
|
||||
При значении настройки `''` (пустая строка), будет совпадать с часовым поясом сервера.
|
||||
При значении настройки `''` (пустая строка), будет совпадать с часовым поясом сервера.
|
||||
|
||||
Функции `timeZone()` and `serverTimezone()` возвращают часовой пояс текущей сессии и сервера соответственно.
|
||||
|
||||
|