Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-22 07:31:57 +00:00)

Commit c202364f01: Merge branch 'master' into ldap-any-user-authentication

CHANGELOG.md (10 changes)
@@ -22,14 +22,14 @@
* Add setting `allow_non_metadata_alters` which restricts executing `ALTER` queries that modify data on disk. Disabled by default. Closes [#11547](https://github.com/ClickHouse/ClickHouse/issues/11547). [#12635](https://github.com/ClickHouse/ClickHouse/pull/12635) ([alesapin](https://github.com/alesapin)).
* A function `formatRow` is added to support turning arbitrary expressions into a string via a given format. It's useful for manipulating SQL outputs and is quite versatile combined with the `columns` function. [#12574](https://github.com/ClickHouse/ClickHouse/pull/12574) ([Amos Bird](https://github.com/amosbird)).
* Add `FROM_UNIXTIME` function for compatibility with MySQL, related to [#12149](https://github.com/ClickHouse/ClickHouse/issues/12149). [#12484](https://github.com/ClickHouse/ClickHouse/pull/12484) ([flynn](https://github.com/ucasFL)).
-* Allow Nullable types as keys in MergeTree tables if `allow_nullable_key` table setting is enabled. https://github.com/ClickHouse/ClickHouse/issues/5319. [#12433](https://github.com/ClickHouse/ClickHouse/pull/12433) ([Amos Bird](https://github.com/amosbird)).
+* Allow Nullable types as keys in MergeTree tables if `allow_nullable_key` table setting is enabled. Closes [#5319](https://github.com/ClickHouse/ClickHouse/issues/5319). [#12433](https://github.com/ClickHouse/ClickHouse/pull/12433) ([Amos Bird](https://github.com/amosbird)).
* Integration with [COS](https://intl.cloud.tencent.com/product/cos). [#12386](https://github.com/ClickHouse/ClickHouse/pull/12386) ([fastio](https://github.com/fastio)).
* Add `mapAdd` and `mapSubtract` functions for adding/subtracting key-mapped values. [#11735](https://github.com/ClickHouse/ClickHouse/pull/11735) ([Ildus Kurbangaliev](https://github.com/ildus)).

#### Bug Fix

* Fix premature `ON CLUSTER` timeouts for queries that must be executed on a single replica. Fixes [#6704](https://github.com/ClickHouse/ClickHouse/issues/6704), [#7228](https://github.com/ClickHouse/ClickHouse/issues/7228), [#13361](https://github.com/ClickHouse/ClickHouse/issues/13361), [#11884](https://github.com/ClickHouse/ClickHouse/issues/11884). [#13450](https://github.com/ClickHouse/ClickHouse/pull/13450) ([alesapin](https://github.com/alesapin)).
-* Fix crash in mark inclusion search introduced in https://github.com/ClickHouse/ClickHouse/pull/12277. [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
+* Fix crash in mark inclusion search introduced in [#12277](https://github.com/ClickHouse/ClickHouse/pull/12277). [#14225](https://github.com/ClickHouse/ClickHouse/pull/14225) ([Amos Bird](https://github.com/amosbird)).
* Fix race condition in external dictionaries with cache layout which could lead to a server crash. [#12566](https://github.com/ClickHouse/ClickHouse/pull/12566) ([alesapin](https://github.com/alesapin)).
* Fix visible data clobbering by progress bar in client in interactive mode. This fixes [#12562](https://github.com/ClickHouse/ClickHouse/issues/12562) and [#13369](https://github.com/ClickHouse/ClickHouse/issues/13369) and [#13584](https://github.com/ClickHouse/ClickHouse/issues/13584) and fixes [#12964](https://github.com/ClickHouse/ClickHouse/issues/12964). [#13691](https://github.com/ClickHouse/ClickHouse/pull/13691) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fixed incorrect sorting order for `LowCardinality` columns when `ORDER BY` with multiple columns is used. This fixes [#13958](https://github.com/ClickHouse/ClickHouse/issues/13958). [#14223](https://github.com/ClickHouse/ClickHouse/pull/14223) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
@@ -71,7 +71,7 @@
* Fix function `if` with a nullable constexpr condition that is not a literal NULL. Fixes [#12463](https://github.com/ClickHouse/ClickHouse/issues/12463). [#13226](https://github.com/ClickHouse/ClickHouse/pull/13226) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix assert in `arrayElement` function when array elements are Nullable and the array subscript is also Nullable. This fixes [#12172](https://github.com/ClickHouse/ClickHouse/issues/12172). [#13224](https://github.com/ClickHouse/ClickHouse/pull/13224) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix DateTime64 conversion functions with constant argument. [#13205](https://github.com/ClickHouse/ClickHouse/pull/13205) ([Azat Khuzhin](https://github.com/azat)).
-* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes https://github.com/ClickHouse/ClickHouse/issues/5779, https://github.com/ClickHouse/ClickHouse/issues/12527. [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
+* Fix parsing row policies from users.xml when names of databases or tables contain dots. This fixes [#5779](https://github.com/ClickHouse/ClickHouse/issues/5779), [#12527](https://github.com/ClickHouse/ClickHouse/issues/12527). [#13199](https://github.com/ClickHouse/ClickHouse/pull/13199) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix access to `redis` dictionary after connection was dropped once. It may happen with `cache` and `direct` dictionary layouts. [#13082](https://github.com/ClickHouse/ClickHouse/pull/13082) ([Anton Popov](https://github.com/CurtizJ)).
* Fix wrong index analysis with functions. It could lead to some data parts being skipped when reading from `MergeTree` tables. Fixes [#13060](https://github.com/ClickHouse/ClickHouse/issues/13060). Fixes [#12406](https://github.com/ClickHouse/ClickHouse/issues/12406). [#13081](https://github.com/ClickHouse/ClickHouse/pull/13081) ([Anton Popov](https://github.com/CurtizJ)).
* Fix error `Cannot convert column because it is constant but values of constants are different in source and result` for remote queries that use functions deterministic within the scope of a query but not deterministic between queries, like `now()`, `now64()`, `randConstant()`. Fixes [#11327](https://github.com/ClickHouse/ClickHouse/issues/11327). [#13075](https://github.com/ClickHouse/ClickHouse/pull/13075) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@@ -89,7 +89,7 @@
* Fixed bloom filter index with const expression. This fixes [#10572](https://github.com/ClickHouse/ClickHouse/issues/10572). [#12659](https://github.com/ClickHouse/ClickHouse/pull/12659) ([Winter Zhang](https://github.com/zhang2014)).
* Fix SIGSEGV in StorageKafka when the broker is unavailable (and not only). [#12658](https://github.com/ClickHouse/ClickHouse/pull/12658) ([Azat Khuzhin](https://github.com/azat)).
* Add support for function `if` with `Array(UUID)` arguments. This fixes [#11066](https://github.com/ClickHouse/ClickHouse/issues/11066). [#12648](https://github.com/ClickHouse/ClickHouse/pull/12648) ([alexey-milovidov](https://github.com/alexey-milovidov)).
-* CREATE USER IF NOT EXISTS now doesn't throw exception if the user exists. This fixes https://github.com/ClickHouse/ClickHouse/issues/12507. [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)).
+* CREATE USER IF NOT EXISTS now doesn't throw an exception if the user exists. This fixes [#12507](https://github.com/ClickHouse/ClickHouse/issues/12507). [#12646](https://github.com/ClickHouse/ClickHouse/pull/12646) ([Vitaly Baranov](https://github.com/vitlibar)).
* Exception `There is no supertype...` could be thrown during `ALTER ... UPDATE` in unexpected cases (e.g. when subtracting from a UInt64 column). This fixes [#7306](https://github.com/ClickHouse/ClickHouse/issues/7306). This fixes [#4165](https://github.com/ClickHouse/ClickHouse/issues/4165). [#12633](https://github.com/ClickHouse/ClickHouse/pull/12633) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Fix possible `Pipeline stuck` error for queries with external sorting. Fixes [#12617](https://github.com/ClickHouse/ClickHouse/issues/12617). [#12618](https://github.com/ClickHouse/ClickHouse/pull/12618) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix error `Output of TreeExecutor is not sorted` for `OPTIMIZE DEDUPLICATE`. Fixes [#11572](https://github.com/ClickHouse/ClickHouse/issues/11572). [#12613](https://github.com/ClickHouse/ClickHouse/pull/12613) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@@ -123,7 +123,7 @@
* Fix assert in `parseDateTimeBestEffort`. This fixes [#12649](https://github.com/ClickHouse/ClickHouse/issues/12649). [#13227](https://github.com/ClickHouse/ClickHouse/pull/13227) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Minor optimization in Processors/PipelineExecutor: breaking out of a loop because it makes sense to do so. [#13058](https://github.com/ClickHouse/ClickHouse/pull/13058) ([Mark Papadakis](https://github.com/markpapadakis)).
* Support TRUNCATE table without the TABLE keyword. [#12653](https://github.com/ClickHouse/ClickHouse/pull/12653) ([Winter Zhang](https://github.com/zhang2014)).
-* Fix explain query format overwrite by default, issue https://github.com/ClickHouse/ClickHouse/issues/12432. [#12541](https://github.com/ClickHouse/ClickHouse/pull/12541) ([BohuTANG](https://github.com/BohuTANG)).
+* Fix explain query format overwrite by default. This fixes [#12432](https://github.com/ClickHouse/ClickHouse/issues/12432). [#12541](https://github.com/ClickHouse/ClickHouse/pull/12541) ([BohuTANG](https://github.com/BohuTANG)).
* Allow setting the JOIN kind and type in a more standard way: `LEFT SEMI JOIN` instead of `SEMI LEFT JOIN`. For now both are correct. [#12520](https://github.com/ClickHouse/ClickHouse/pull/12520) ([Artem Zuikov](https://github.com/4ertus2)).
* Changes the default value of `multiple_joins_rewriter_version` to 2. It enables the new multiple joins rewriter that knows about column names. [#12469](https://github.com/ClickHouse/ClickHouse/pull/12469) ([Artem Zuikov](https://github.com/4ertus2)).
* Add several metrics for requests to S3 storages. [#12464](https://github.com/ClickHouse/ClickHouse/pull/12464) ([ianton-ru](https://github.com/ianton-ru)).

@@ -17,5 +17,4 @@ ClickHouse is an open-source column-oriented database management system that all

## Upcoming Events

* [ClickHouse at ByteDance (in Chinese)](https://mp.weixin.qq.com/s/Em-HjPylO8D7WPui4RREAQ) on August 28, 2020.
* [ClickHouse Data Integration Virtual Meetup](https://www.eventbrite.com/e/clickhouse-september-virtual-meetup-data-integration-tickets-117421895049) on September 10, 2020.
* [ClickHouse talk at Ya.Subbotnik (in Russian)](https://ya.cc/t/cIBI-3yECj5JF) on September 12, 2020.
@@ -38,18 +38,18 @@ namespace common
}

template <>
-inline bool addOverflow(bInt256 x, bInt256 y, bInt256 & res)
+inline bool addOverflow(wInt256 x, wInt256 y, wInt256 & res)
{
    res = x + y;
-    return (y > 0 && x > std::numeric_limits<bInt256>::max() - y) ||
-        (y < 0 && x < std::numeric_limits<bInt256>::min() - y);
+    return (y > 0 && x > std::numeric_limits<wInt256>::max() - y) ||
+        (y < 0 && x < std::numeric_limits<wInt256>::min() - y);
}

template <>
-inline bool addOverflow(bUInt256 x, bUInt256 y, bUInt256 & res)
+inline bool addOverflow(wUInt256 x, wUInt256 y, wUInt256 & res)
{
    res = x + y;
-    return x > std::numeric_limits<bUInt256>::max() - y;
+    return x > std::numeric_limits<wUInt256>::max() - y;
}
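
The new `wide::integer` overloads keep the established detection pattern: instead of inspecting an already-overflowed sum, they compare one operand against the remaining headroom to the numeric limit. A minimal standalone sketch of the same check on `int64_t` (a hypothetical helper for illustration, not part of this diff; the unsigned round-trip sidesteps the signed-overflow UB that the wrapping 256-bit types do not have):

#include <cstdint>
#include <limits>

inline bool add_overflow_i64(int64_t x, int64_t y, int64_t & res)
{
    // Compute the wrapped sum through unsigned arithmetic, which is well defined.
    res = static_cast<int64_t>(static_cast<uint64_t>(x) + static_cast<uint64_t>(y));
    // Overflow happens exactly when y would push x past the representable range.
    return (y > 0 && x > std::numeric_limits<int64_t>::max() - y)
        || (y < 0 && x < std::numeric_limits<int64_t>::min() - y);
}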

template <typename T>

@@ -86,15 +86,15 @@ namespace common
}

template <>
-inline bool subOverflow(bInt256 x, bInt256 y, bInt256 & res)
+inline bool subOverflow(wInt256 x, wInt256 y, wInt256 & res)
{
    res = x - y;
-    return (y < 0 && x > std::numeric_limits<bInt256>::max() + y) ||
-        (y > 0 && x < std::numeric_limits<bInt256>::min() + y);
+    return (y < 0 && x > std::numeric_limits<wInt256>::max() + y) ||
+        (y > 0 && x < std::numeric_limits<wInt256>::min() + y);
}

template <>
-inline bool subOverflow(bUInt256 x, bUInt256 y, bUInt256 & res)
+inline bool subOverflow(wUInt256 x, wUInt256 y, wUInt256 & res)
{
    res = x - y;
    return x < y;

@@ -137,19 +137,19 @@ namespace common
}

template <>
-inline bool mulOverflow(bInt256 x, bInt256 y, bInt256 & res)
+inline bool mulOverflow(wInt256 x, wInt256 y, wInt256 & res)
{
    res = x * y;
    if (!x || !y)
        return false;

-    bInt256 a = (x > 0) ? x : -x;
-    bInt256 b = (y > 0) ? y : -y;
+    wInt256 a = (x > 0) ? x : -x;
+    wInt256 b = (y > 0) ? y : -y;
    return (a * b) / b != a;
}
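
The signed `mulOverflow` above verifies the product by dividing it back: for non-zero absolute values `a` and `b`, `(a * b) / b != a` holds exactly when the multiplication wrapped. A standalone sketch of the same divide-back idea on `uint64_t` (hypothetical helper, not from this diff):

#include <cstdint>

inline bool mul_overflow_u64(uint64_t x, uint64_t y, uint64_t & res)
{
    res = x * y;          // unsigned multiplication wraps, so this is well defined
    if (x == 0 || y == 0)
        return false;     // zero never overflows, and guards the division below
    return res / y != x;  // a wrapped product no longer divides back to x
}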

template <>
-inline bool mulOverflow(bUInt256 x, bUInt256 y, bUInt256 & res)
+inline bool mulOverflow(wUInt256 x, wUInt256 y, wUInt256 & res)
{
    res = x * y;
    if (!x || !y)

base/common/throwError.h (new file, 13 lines)

@@ -0,0 +1,13 @@
#pragma once

#include <stdexcept>

/// Throw a DB::Exception-like exception before its definition.
/// DB::Exception is derived from Poco::Exception, which is derived from std::exception.
/// DB::Exception is generally caught as Poco::Exception; std::exception generally has other catch blocks and could lead to other outcomes.
/// DB::Exception is not defined yet. It'd be better to throw Poco::Exception, but we do not want to include any big header here, even <string>.
/// So we throw some std::exception instead, in the hope that its catch block is the same as the DB::Exception one.
template <typename T>
inline void throwError(const T & err)
{
    throw std::runtime_error(err);
}
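
A hedged usage sketch (assumed call site, not part of this diff; the include path is assumed to mirror `#include <common/wide_integer.h>` used elsewhere in this commit): a low-level header can report an error without pulling in the heavy exception machinery, and an outer handler written for `std::exception` still receives the message.

#include <exception>
#include <iostream>

#include <common/throwError.h>

int main()
{
    try
    {
        throwError("invalid wide integer conversion");  // throws std::runtime_error
    }
    catch (const std::exception & e)
    {
        std::cerr << e.what() << '\n';  // the same catch block that handles DB::Exception's base
    }
}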

@@ -1,12 +1,10 @@
#pragma once

#include <algorithm>
#include <cstdint>
#include <cstdlib>
#include <string>
#include <type_traits>

-#include <boost/multiprecision/cpp_int.hpp>
+#include <common/wide_integer.h>

using Int8 = int8_t;
using Int16 = int16_t;

@@ -25,12 +23,11 @@ using UInt64 = uint64_t;

using Int128 = __int128;

-/// We have to use 127 and 255 bit integers to safe a bit for a sign serialization
-//using bInt256 = boost::multiprecision::int256_t;
-using bInt256 = boost::multiprecision::number<boost::multiprecision::cpp_int_backend<
-    255, 255, boost::multiprecision::signed_magnitude, boost::multiprecision::unchecked, void> >;
-using bUInt256 = boost::multiprecision::uint256_t;
+using wInt256 = wide::integer<256, signed>;
+using wUInt256 = wide::integer<256, unsigned>;
+
+static_assert(sizeof(wInt256) == 32);
+static_assert(sizeof(wUInt256) == 32);

using String = std::string;

@@ -44,7 +41,7 @@ struct is_signed
};

template <> struct is_signed<Int128> { static constexpr bool value = true; };
-template <> struct is_signed<bInt256> { static constexpr bool value = true; };
+template <> struct is_signed<wInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_signed_v = is_signed<T>::value;

@@ -55,7 +52,7 @@ struct is_unsigned
    static constexpr bool value = std::is_unsigned_v<T>;
};

-template <> struct is_unsigned<bUInt256> { static constexpr bool value = true; };
+template <> struct is_unsigned<wUInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_unsigned_v = is_unsigned<T>::value;

@@ -69,8 +66,8 @@ struct is_integer
};

template <> struct is_integer<Int128> { static constexpr bool value = true; };
-template <> struct is_integer<bInt256> { static constexpr bool value = true; };
-template <> struct is_integer<bUInt256> { static constexpr bool value = true; };
+template <> struct is_integer<wInt256> { static constexpr bool value = true; };
+template <> struct is_integer<wUInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_integer_v = is_integer<T>::value;

@@ -93,9 +90,9 @@ struct make_unsigned
    typedef std::make_unsigned_t<T> type;
};

-template <> struct make_unsigned<__int128> { using type = unsigned __int128; };
-template <> struct make_unsigned<bInt256> { using type = bUInt256; };
-template <> struct make_unsigned<bUInt256> { using type = bUInt256; };
+template <> struct make_unsigned<Int128> { using type = unsigned __int128; };
+template <> struct make_unsigned<wInt256> { using type = wUInt256; };
+template <> struct make_unsigned<wUInt256> { using type = wUInt256; };

template <typename T> using make_unsigned_t = typename make_unsigned<T>::type;

@@ -105,8 +102,8 @@ struct make_signed
    typedef std::make_signed_t<T> type;
};

-template <> struct make_signed<bInt256> { typedef bInt256 type; };
-template <> struct make_signed<bUInt256> { typedef bInt256 type; };
+template <> struct make_signed<wInt256> { using type = wInt256; };
+template <> struct make_signed<wUInt256> { using type = wInt256; };

template <typename T> using make_signed_t = typename make_signed<T>::type;

@@ -116,23 +113,14 @@ struct is_big_int
    static constexpr bool value = false;
};

-template <> struct is_big_int<bUInt256> { static constexpr bool value = true; };
-template <> struct is_big_int<bInt256> { static constexpr bool value = true; };
+template <> struct is_big_int<wInt256> { static constexpr bool value = true; };
+template <> struct is_big_int<wUInt256> { static constexpr bool value = true; };

template <typename T>
inline constexpr bool is_big_int_v = is_big_int<T>::value;

-template <typename T>
-inline std::string bigintToString(const T & x)
-{
-    return x.str();
-}
-
template <typename To, typename From>
inline To bigint_cast(const From & x [[maybe_unused]])
{
    if constexpr ((is_big_int_v<From> && std::is_same_v<To, UInt8>) || (is_big_int_v<To> && std::is_same_v<From, UInt8>))
        return static_cast<uint8_t>(x);
    else
        return static_cast<To>(x);
}
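
A brief illustration of `bigint_cast` under the aliases defined above (a hypothetical helper, not part of this diff): everything except the `UInt8` corner case is a plain `static_cast`, while `UInt8` conversions are routed through `uint8_t` explicitly.

// Illustrative only; assumes the declarations above are in scope.
inline UInt64 to_uint64(const wInt256 & big)
{
    return bigint_cast<UInt64>(big);  // takes the else-branch: static_cast<To>(x)
}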

base/common/wide_integer.h (new file, 259 lines)
@@ -0,0 +1,259 @@
#pragma once

///////////////////////////////////////////////////////////////
//  Distributed under the Boost Software License, Version 1.0.
//  (See at http://www.boost.org/LICENSE_1_0.txt)
///////////////////////////////////////////////////////////////

/*  Divide and multiply
 *
 *  Copyright (c) 2008
 *  Evan Teran
 *
 *  Permission to use, copy, modify, and distribute this software and its
 *  documentation for any purpose and without fee is hereby granted, provided
 *  that the above copyright notice appears in all copies and that both the
 *  copyright notice and this permission notice appear in supporting
 *  documentation, and that the same name not be used in advertising or
 *  publicity pertaining to distribution of the software without specific,
 *  written prior permission. We make no representations about the
 *  suitability this software for any purpose. It is provided "as is"
 *  without express or implied warranty.
 */

#include <cstdint>
#include <limits>
#include <type_traits>
#include <initializer_list>

namespace wide
{
template <size_t Bits, typename Signed>
class integer;
}

namespace std
{

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
struct common_type<wide::integer<Bits, Signed>, wide::integer<Bits2, Signed2>>;

template <size_t Bits, typename Signed, typename Arithmetic>
struct common_type<wide::integer<Bits, Signed>, Arithmetic>;

template <typename Arithmetic, size_t Bits, typename Signed>
struct common_type<Arithmetic, wide::integer<Bits, Signed>>;

}

namespace wide
{

template <size_t Bits, typename Signed>
class integer
{
public:
    using base_type = uint8_t;
    using signed_base_type = int8_t;

    // ctors
    integer() = default;

    template <typename T>
    constexpr integer(T rhs) noexcept;
    template <typename T>
    constexpr integer(std::initializer_list<T> il) noexcept;

    // assignment
    template <size_t Bits2, typename Signed2>
    constexpr integer<Bits, Signed> & operator=(const integer<Bits2, Signed2> & rhs) noexcept;

    template <typename Arithmetic>
    constexpr integer<Bits, Signed> & operator=(Arithmetic rhs) noexcept;

    template <typename Arithmetic>
    constexpr integer<Bits, Signed> & operator*=(const Arithmetic & rhs);

    template <typename Arithmetic>
    constexpr integer<Bits, Signed> & operator/=(const Arithmetic & rhs);

    template <typename Arithmetic>
    constexpr integer<Bits, Signed> & operator+=(const Arithmetic & rhs) noexcept(std::is_same_v<Signed, unsigned>);

    template <typename Arithmetic>
    constexpr integer<Bits, Signed> & operator-=(const Arithmetic & rhs) noexcept(std::is_same_v<Signed, unsigned>);

    template <typename Integral>
    constexpr integer<Bits, Signed> & operator%=(const Integral & rhs);

    template <typename Integral>
    constexpr integer<Bits, Signed> & operator&=(const Integral & rhs) noexcept;

    template <typename Integral>
    constexpr integer<Bits, Signed> & operator|=(const Integral & rhs) noexcept;

    template <typename Integral>
    constexpr integer<Bits, Signed> & operator^=(const Integral & rhs) noexcept;

    constexpr integer<Bits, Signed> & operator<<=(int n) noexcept;
    constexpr integer<Bits, Signed> & operator>>=(int n) noexcept;

    constexpr integer<Bits, Signed> & operator++() noexcept(std::is_same_v<Signed, unsigned>);
    constexpr integer<Bits, Signed> operator++(int) noexcept(std::is_same_v<Signed, unsigned>);
    constexpr integer<Bits, Signed> & operator--() noexcept(std::is_same_v<Signed, unsigned>);
    constexpr integer<Bits, Signed> operator--(int) noexcept(std::is_same_v<Signed, unsigned>);

    // observers

    constexpr explicit operator bool() const noexcept;

    template <class T>
    using __integral_not_wide_integer_class = typename std::enable_if<std::is_arithmetic<T>::value, T>::type;

    template <class T, class = __integral_not_wide_integer_class<T>>
    constexpr operator T() const noexcept;

    constexpr operator long double() const noexcept;
    constexpr operator double() const noexcept;
    constexpr operator float() const noexcept;

    struct _impl;

private:
    template <size_t Bits2, typename Signed2>
    friend class integer;

    friend class std::numeric_limits<integer<Bits, signed>>;
    friend class std::numeric_limits<integer<Bits, unsigned>>;

    base_type m_arr[_impl::arr_size];
};

template <typename T>
static constexpr bool ArithmeticConcept() noexcept;
template <class T1, class T2>
using __only_arithmetic = typename std::enable_if<ArithmeticConcept<T1>() && ArithmeticConcept<T2>()>::type;

template <typename T>
static constexpr bool IntegralConcept() noexcept;
template <class T, class T2>
using __only_integer = typename std::enable_if<IntegralConcept<T>() && IntegralConcept<T2>()>::type;

// Unary operators
template <size_t Bits, typename Signed>
constexpr integer<Bits, Signed> operator~(const integer<Bits, Signed> & lhs) noexcept;

template <size_t Bits, typename Signed>
constexpr integer<Bits, Signed> operator-(const integer<Bits, Signed> & lhs) noexcept(std::is_same_v<Signed, unsigned>);

template <size_t Bits, typename Signed>
constexpr integer<Bits, Signed> operator+(const integer<Bits, Signed> & lhs) noexcept(std::is_same_v<Signed, unsigned>);

// Binary operators
template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator*(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator*(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator/(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator/(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator+(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator+(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator-(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
std::common_type_t<Arithmetic, Arithmetic2> constexpr operator-(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator%(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator%(const Integral & rhs, const Integral2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator&(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator&(const Integral & rhs, const Integral2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator|(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator|(const Integral & rhs, const Integral2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
std::common_type_t<integer<Bits, Signed>, integer<Bits2, Signed2>> constexpr
operator^(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Integral, typename Integral2, class = __only_integer<Integral, Integral2>>
std::common_type_t<Integral, Integral2> constexpr operator^(const Integral & rhs, const Integral2 & lhs);

// TODO: Integral
template <size_t Bits, typename Signed>
constexpr integer<Bits, Signed> operator<<(const integer<Bits, Signed> & lhs, int n) noexcept;
template <size_t Bits, typename Signed>
constexpr integer<Bits, Signed> operator>>(const integer<Bits, Signed> & lhs, int n) noexcept;

template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
constexpr integer<Bits, Signed> operator<<(const integer<Bits, Signed> & lhs, Int n) noexcept
{
    return lhs << int(n);
}
template <size_t Bits, typename Signed, typename Int, typename = std::enable_if_t<!std::is_same_v<Int, int>>>
constexpr integer<Bits, Signed> operator>>(const integer<Bits, Signed> & lhs, Int n) noexcept
{
    return lhs >> int(n);
}

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
constexpr bool operator<(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator<(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
constexpr bool operator>(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator>(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
constexpr bool operator<=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator<=(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
constexpr bool operator>=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator>=(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
constexpr bool operator==(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator==(const Arithmetic & rhs, const Arithmetic2 & lhs);

template <size_t Bits, typename Signed, size_t Bits2, typename Signed2>
constexpr bool operator!=(const integer<Bits, Signed> & lhs, const integer<Bits2, Signed2> & rhs);
template <typename Arithmetic, typename Arithmetic2, class = __only_arithmetic<Arithmetic, Arithmetic2>>
constexpr bool operator!=(const Arithmetic & rhs, const Arithmetic2 & lhs);

}

namespace std
{

template <size_t Bits, typename Signed>
struct hash<wide::integer<Bits, Signed>>;

}

#include "wide_integer_impl.h"

base/common/wide_integer_impl.h (new file, 1290 lines): file diff suppressed because it is too large.

base/common/wide_integer_to_string.h (new file, 35 lines)
@@ -0,0 +1,35 @@
#pragma once

#include <string>

#include "wide_integer.h"

namespace wide
{

template <size_t Bits, typename Signed>
inline std::string to_string(const integer<Bits, Signed> & n)
{
    std::string res;
    if (integer<Bits, Signed>::_impl::operator_eq(n, 0U))
        return "0";

    integer<Bits, unsigned> t;
    bool is_neg = integer<Bits, Signed>::_impl::is_negative(n);
    if (is_neg)
        t = integer<Bits, Signed>::_impl::operator_unary_minus(n);
    else
        t = n;

    while (!integer<Bits, unsigned>::_impl::operator_eq(t, 0U))
    {
        res.insert(res.begin(), '0' + char(integer<Bits, unsigned>::_impl::operator_percent(t, 10U)));
        t = integer<Bits, unsigned>::_impl::operator_slash(t, 10U);
    }

    if (is_neg)
        res.insert(res.begin(), '-');
    return res;
}

}
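
A hedged end-to-end sketch of the two new headers together (include paths assumed to mirror `#include <common/wide_integer.h>` from the types.h diff above; not part of this commit):

#include <iostream>

#include <common/wide_integer.h>
#include <common/wide_integer_to_string.h>

int main()
{
    wide::integer<256, signed> x = 1;
    for (int i = 0; i < 100; ++i)
        x *= 3;  // 3^100 overflows any 64-bit type but fits easily in 256 bits
    std::cout << wide::to_string(x) << '\n';
}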

@@ -32,6 +32,8 @@ PEERDIR(
    contrib/restricted/cityhash-1.0.2
)

+CFLAGS(-g0)
+
SRCS(
    argsToConfig.cpp
    coverage.cpp

@@ -31,6 +31,8 @@ PEERDIR(
    contrib/restricted/cityhash-1.0.2
)

+CFLAGS(-g0)
+
SRCS(
    <? find . -name '*.cpp' | grep -v -F tests/ | grep -v -F Replxx | grep -v -F Readline | sed 's/^\.\// /' | sort ?>
)
@@ -6,6 +6,8 @@ PEERDIR(
    clickhouse/src/Common
)

+CFLAGS(-g0)
+
SRCS(
    BaseDaemon.cpp
    GraphiteWriter.cpp

@@ -4,6 +4,8 @@ PEERDIR(
    clickhouse/src/Common
)

+CFLAGS(-g0)
+
SRCS(
    ExtendedLogChannel.cpp
    Loggers.cpp
@@ -1,9 +1,7 @@
#pragma once

-#include <boost/noncopyable.hpp>
#include <mysqlxx/Types.h>

namespace mysqlxx
{

@@ -22,6 +20,11 @@ class ResultBase
public:
    ResultBase(MYSQL_RES * res_, Connection * conn_, const Query * query_);

+    ResultBase(const ResultBase &) = delete;
+    ResultBase & operator=(const ResultBase &) = delete;
+    ResultBase(ResultBase &&) = default;
+    ResultBase & operator=(ResultBase &&) = default;
+
    Connection * getConnection() { return conn; }
    MYSQL_FIELDS getFields() { return fields; }
    unsigned getNumFields() { return num_fields; }
@@ -254,7 +254,23 @@ template <> inline std::string Value::get<std::string >() cons
template <> inline LocalDate Value::get<LocalDate >() const { return getDate(); }
template <> inline LocalDateTime Value::get<LocalDateTime >() const { return getDateTime(); }

-template <typename T> inline T Value::get() const { return T(*this); }
+
+namespace details
+{
+// To avoid stack overflow when converting to type with no appropriate c-tor,
+// resulting in endless recursive calls from `Value::get<T>()` to `Value::operator T()` to `Value::get<T>()` to ...
+template <typename T, typename std::enable_if_t<std::is_constructible_v<T, Value>>>
+inline T contructFromValue(const Value & val)
+{
+    return T(val);
+}
+}
+
+template <typename T>
+inline T Value::get() const
+{
+    return details::contructFromValue<T>(*this);
+}

inline std::ostream & operator<< (std::ostream & ostr, const Value & x)
@@ -1,5 +1,7 @@
LIBRARY()

+CFLAGS(-g0)
+
SRCS(
    readpassphrase.c
)

@@ -2,6 +2,8 @@ LIBRARY()

ADDINCL(GLOBAL clickhouse/base/widechar_width)

+CFLAGS(-g0)
+
SRCS(
    widechar_width.cpp
)
@@ -1,9 +1,9 @@
# This strings autochanged from release_lib.sh:
-SET(VERSION_REVISION 54439)
+SET(VERSION_REVISION 54440)
SET(VERSION_MAJOR 20)
-SET(VERSION_MINOR 9)
+SET(VERSION_MINOR 10)
SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 0586f0d555f7481b394afc55bbb29738cd573a1c)
-SET(VERSION_DESCRIBE v20.9.1.1-prestable)
-SET(VERSION_STRING 20.9.1.1)
+SET(VERSION_GITHASH 11a247d2f42010c1a17bf678c3e00a4bc89b23f8)
+SET(VERSION_DESCRIBE v20.10.1.1-prestable)
+SET(VERSION_STRING 20.10.1.1)
# end of autochange
@@ -6,6 +6,11 @@ endif()

if ((ENABLE_CCACHE OR NOT DEFINED ENABLE_CCACHE) AND NOT COMPILER_MATCHES_CCACHE)
    find_program (CCACHE_FOUND ccache)
+    if (CCACHE_FOUND)
+        set(ENABLE_CCACHE_BY_DEFAULT 1)
+    else()
+        set(ENABLE_CCACHE_BY_DEFAULT 0)
+    endif()
endif()

if (NOT CCACHE_FOUND AND NOT DEFINED ENABLE_CCACHE AND NOT COMPILER_MATCHES_CCACHE)

@@ -13,7 +18,7 @@ if (NOT CCACHE_FOUND AND NOT DEFINED ENABLE_CCACHE AND NOT COMPILER_MATCHES_CCAC
    "Setting it up will significantly reduce compilation time for 2nd and consequent builds")
endif()

-option(ENABLE_CCACHE "Speedup re-compilations using ccache" ${CCACHE_FOUND})
+option(ENABLE_CCACHE "Speedup re-compilations using ccache" ${ENABLE_CCACHE_BY_DEFAULT})

if (NOT ENABLE_CCACHE)
    return()

@@ -24,7 +29,7 @@ if (CCACHE_FOUND AND NOT COMPILER_MATCHES_CCACHE)
    string(REGEX REPLACE "ccache version ([0-9\\.]+).*" "\\1" CCACHE_VERSION ${CCACHE_VERSION})

    if (CCACHE_VERSION VERSION_GREATER "3.2.0" OR NOT CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
-        #message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}")
+        message(STATUS "Using ${CCACHE_FOUND} ${CCACHE_VERSION}")
        set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE ${CCACHE_FOUND})
        set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK ${CCACHE_FOUND})
    else ()
@@ -36,7 +36,15 @@ if (SANITIZE)
    endif ()

elseif (SANITIZE STREQUAL "thread")
-    set (TSAN_FLAGS "-fsanitize=thread -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt")
+    set (TSAN_FLAGS "-fsanitize=thread")
+    if (COMPILER_CLANG)
+        set (TSAN_FLAGS "${TSAN_FLAGS} -fsanitize-blacklist=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt")
+    else()
+        message (WARNING "TSAN suppressions was not passed to the compiler (since the compiler is not clang)")
+        message (WARNING "Use the following command to pass them manually:")
+        message (WARNING " export TSAN_OPTIONS=\"$TSAN_OPTIONS suppressions=${CMAKE_SOURCE_DIR}/tests/tsan_suppressions.txt\"")
+    endif()

    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${SAN_FLAGS} ${TSAN_FLAGS}")
    set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} ${SAN_FLAGS} ${TSAN_FLAGS}")
@@ -74,10 +74,9 @@ target_link_libraries(capnpc PUBLIC capnp)

# The library has substandard code
if (COMPILER_GCC)
-    set (SUPPRESS_WARNINGS -Wno-non-virtual-dtor -Wno-sign-compare -Wno-strict-aliasing -Wno-maybe-uninitialized
-        -Wno-deprecated-declarations -Wno-class-memaccess)
+    set (SUPPRESS_WARNINGS -w)
elseif (COMPILER_CLANG)
-    set (SUPPRESS_WARNINGS -Wno-non-virtual-dtor -Wno-sign-compare -Wno-strict-aliasing -Wno-deprecated-declarations)
+    set (SUPPRESS_WARNINGS -w)
    set (CAPNP_PRIVATE_CXX_FLAGS -fno-char8_t)
endif ()

debian/changelog (vendored, 4 changes)

@@ -1,5 +1,5 @@
-clickhouse (20.9.1.1) unstable; urgency=low
+clickhouse (20.10.1.1) unstable; urgency=low

  * Modified source code

- -- clickhouse-release <clickhouse-release@yandex-team.ru> Mon, 31 Aug 2020 23:07:38 +0300
+ -- clickhouse-release <clickhouse-release@yandex-team.ru> Tue, 08 Sep 2020 17:04:39 +0300

debian/clickhouse-server.init (vendored, 23 changes)

@@ -67,13 +67,6 @@ if uname -mpi | grep -q 'x86_64'; then
fi

-SUPPORTED_COMMANDS="{start|stop|status|restart|forcestop|forcerestart|reload|condstart|condstop|condrestart|condreload|initdb}"
-is_supported_command()
-{
-    echo "$SUPPORTED_COMMANDS" | grep -E "(\{|\|)$1(\||})" &> /dev/null
-}
-
is_running()
{
    pgrep --pidfile "$CLICKHOUSE_PIDFILE" $(echo "${PROGRAM}" | cut -c1-15) 1> /dev/null 2> /dev/null

@@ -283,13 +276,12 @@ use_cron()
    fi
    return 0
}

+# returns false if cron disabled (with systemd)
enable_cron()
{
    use_cron && sed -i 's/^#*//' "$CLICKHOUSE_CRONFILE"
}

+# returns false if cron disabled (with systemd)
disable_cron()
{
    use_cron && sed -i 's/^#*/#/' "$CLICKHOUSE_CRONFILE"

@@ -312,15 +304,14 @@ main()
    EXIT_STATUS=0
    case "$1" in
        start)
-            start && enable_cron
+            service_or_func start && enable_cron
            ;;
        stop)
            # disable_cron returns false if cron disabled (with systemd) - not checking return status
            disable_cron
-            stop
+            service_or_func stop
            ;;
        restart)
-            restart && enable_cron
+            service_or_func restart && enable_cron
            ;;
        forcestop)
            disable_cron

@@ -330,7 +321,7 @@ main()
            forcerestart && enable_cron
            ;;
        reload)
-            restart
+            service_or_func restart
            ;;
        condstart)
            is_running || service_or_func start

@@ -354,7 +345,7 @@ main()
            disable_cron
            ;;
        *)
-            echo "Usage: $0 $SUPPORTED_COMMANDS"
+            echo "Usage: $0 {start|stop|status|restart|forcestop|forcerestart|reload|condstart|condstop|condrestart|condreload|initdb}"
            exit 2
            ;;
    esac
@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.9.1.*
+ARG version=20.10.1.*

RUN apt-get update \
    && apt-get install --yes --no-install-recommends \

@@ -1,5 +1,5 @@
# docker build -t yandex/clickhouse-binary-builder .
-FROM ubuntu:19.10
+FROM ubuntu:20.04

ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=10

@@ -32,6 +32,8 @@ RUN apt-get update \
        curl \
        gcc-9 \
        g++-9 \
+        gcc-10 \
+        g++-10 \
        llvm-${LLVM_VERSION} \
        clang-${LLVM_VERSION} \
        lld-${LLVM_VERSION} \

@@ -18,7 +18,7 @@ ccache --zero-stats ||:
ln -s /usr/lib/x86_64-linux-gnu/libOpenCL.so.1.0.0 /usr/lib/libOpenCL.so ||:
rm -f CMakeCache.txt
cmake --debug-trycompile --verbose=1 -DCMAKE_VERBOSE_MAKEFILE=1 -LA -DCMAKE_BUILD_TYPE=$BUILD_TYPE -DSANITIZE=$SANITIZER $CMAKE_FLAGS ..
-ninja $NINJA_FLAGS clickhouse-bundle
+ninja -j $(($(nproc) / 2)) $NINJA_FLAGS clickhouse-bundle
mv ./programs/clickhouse* /output
mv ./src/unit_tests_dbms /output
find . -name '*.so' -print -exec mv '{}' /output \;

@@ -1,7 +1,7 @@
FROM ubuntu:20.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.9.1.*
+ARG version=20.10.1.*
ARG gosu_ver=1.10

RUN apt-get update \

@@ -1,7 +1,7 @@
FROM ubuntu:18.04

ARG repository="deb https://repo.clickhouse.tech/deb/stable/ main/"
-ARG version=20.9.1.*
+ARG version=20.10.1.*

RUN apt-get update && \
    apt-get install -y apt-transport-https dirmngr && \
@@ -5,8 +5,7 @@ trap "exit" INT TERM
trap 'kill $(jobs -pr) ||:' EXIT

# This script is separated into two stages, cloning and everything else, so
-# that we can run the "everything else" stage from the cloned source (we don't
-# do this yet).
+# that we can run the "everything else" stage from the cloned source.
stage=${stage:-}

# A variable to pass additional flags to CMake.

@@ -16,7 +15,6 @@ stage=${stage:-}
# empty parameter.
read -ra FASTTEST_CMAKE_FLAGS <<< "${FASTTEST_CMAKE_FLAGS:-}"

-ls -la

function kill_clickhouse
{

@@ -60,6 +58,7 @@ function clone_root
git clone https://github.com/ClickHouse/ClickHouse.git | ts '%Y-%m-%d %H:%M:%S' | tee /test_output/clone_log.txt
cd ClickHouse
CLICKHOUSE_DIR=$(pwd)
+export CLICKHOUSE_DIR

if [ "$PULL_REQUEST_NUMBER" != "0" ]; then

@@ -128,6 +127,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
+ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
#ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/

@@ -251,12 +251,20 @@ fi

case "$stage" in
"")
+    ls -la
    ;&

"clone_root")
    clone_root
-    # TODO bootstrap into the cloned script here. Add this on Sep 1 2020 or
-    # later, so that most of the old branches are updated with this code.
+
+    # Pass control to the script from cloned sources, unless asked otherwise.
+    if ! [ -v FASTTEST_LOCAL_SCRIPT ]
+    then
+        stage=run "$CLICKHOUSE_DIR/docker/test/fasttest/run.sh"
+        exit $?
+    fi
    ;&

"run")
    run
    ;&
@@ -7,3 +7,4 @@ services:
            MYSQL_ROOT_PASSWORD: clickhouse
        ports:
            - 3308:3306
+        command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency

@@ -1,10 +0,0 @@
-version: '2.3'
-services:
-    mysql5_7:
-        image: mysql:5.7
-        restart: always
-        environment:
-            MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 33307:3306
-        command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency

@@ -5,4 +5,4 @@ services:
        restart: always
        ports:
            - 6380:6379
-        command: redis-server --requirepass "clickhouse"
+        command: redis-server --requirepass "clickhouse" --databases 32
@@ -394,12 +394,24 @@ create table query_run_metrics_denorm engine File(TSV, 'analyze/query-run-metric
    order by test, query_index, metric_names, version, query_id
    ;

+-- Filter out tests that don't have an even number of runs, to avoid breaking
+-- the further calculations. This may happen if there was an error during the
+-- test runs, e.g. the server died. It will be reported in test errors, so we
+-- don't have to report it again.
+create view broken_queries as
+    select test, query_index
+    from query_runs
+    group by test, query_index
+    having count(*) % 2 != 0
+    ;
+
-- This is for statistical processing with eqmed.sql
create table query_run_metrics_for_stats engine File(
    TSV, -- do not add header -- will parse with grep
    'analyze/query-run-metrics-for-stats.tsv')
    as select test, query_index, 0 run, version, metric_values
    from query_run_metric_arrays
+    where (test, query_index) not in broken_queries
    order by test, query_index, run, version
    ;

@@ -565,40 +577,54 @@ create table unstable_queries_report engine File(TSV, 'report/unstable-queries.t
    toDecimal64(stat_threshold, 3), unstable_fail, test, query_index, query_display_name
    from queries where unstable_show order by stat_threshold desc;

-create table test_time_changes engine File(TSV, 'report/test-time-changes.tsv') as
-    select test, queries, average_time_change from (
-        select test, count(*) queries,
-            sum(left) as left, sum(right) as right,
-            (right - left) / right average_time_change
-        from queries
-        group by test
-        order by abs(average_time_change) desc
-    )
-    ;
-
-create table unstable_tests engine File(TSV, 'report/unstable-tests.tsv') as
-    select test, sum(unstable_show) total_unstable, sum(changed_show) total_changed
-    from queries
-    group by test
-    order by total_unstable + total_changed desc
-    ;
+create view test_speedup as
+    select
+        test,
+        exp2(avg(log2(left / right))) times_speedup,
+        count(*) queries,
+        unstable + changed bad,
+        sum(changed_show) changed,
+        sum(unstable_show) unstable
+    from queries
+    group by test
+    order by times_speedup desc
+    ;
+
+create view total_speedup as
+    select
+        'Total' test,
+        exp2(avg(log2(times_speedup))) times_speedup,
+        sum(queries) queries,
+        unstable + changed bad,
+        sum(changed) changed,
+        sum(unstable) unstable
+    from test_speedup
+    ;

create table test_perf_changes_report engine File(TSV, 'report/test-perf-changes.tsv') as
-    select test,
-        queries,
-        coalesce(total_unstable, 0) total_unstable,
-        coalesce(total_changed, 0) total_changed,
-        total_unstable + total_changed total_bad,
-        coalesce(toString(toDecimal64(average_time_change, 3)), '??') average_time_change_str
-    from test_time_changes
-    full join unstable_tests
-    using test
-    where (abs(average_time_change) > 0.05 and queries > 5)
-        or (total_bad > 0)
-    order by total_bad desc, average_time_change desc
-    settings join_use_nulls = 1
+    with
+        (times_speedup >= 1
+            ? '-' || toString(toDecimal64(times_speedup, 3)) || 'x'
+            : '+' || toString(toDecimal64(1 / times_speedup, 3)) || 'x')
+        as times_speedup_str
+    select test, times_speedup_str, queries, bad, changed, unstable
+    -- Not sure what's the precedence of UNION ALL vs WHERE & ORDER BY, hence all
+    -- the braces.
+    from (
+        (
+            select * from total_speedup
+        ) union all (
+            select * from test_speedup
+            where
+                (times_speedup >= 1 ? times_speedup : (1 / times_speedup)) >= 1.005
+                or bad
+        )
+    )
+    order by test = 'Total' desc, times_speedup desc
    ;

create view total_client_time_per_query as select *
    from file('analyze/client-times.tsv', TSV,
        'test text, query_index int, client float, server float');
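
The `exp2(avg(log2(...)))` expression used by `test_speedup` and `total_speedup` is a geometric mean: averaging the per-query ratios in log space and exponentiating back lets a 2x speedup and a 2x slowdown cancel out, instead of the arithmetic mean's bias toward large ratios. A standalone C++ sketch of the same computation (illustrative values only, not from the report):

#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<double> ratios{2.0, 0.5, 1.1};  // per-query old_time / new_time
    double log_sum = 0.0;
    for (double r : ratios)
        log_sum += std::log2(r);                // average in log space...
    double times_speedup = std::exp2(log_sum / ratios.size());  // ...then map back
    std::printf("geometric mean speedup: %.3f\n", times_speedup);  // ~1.032
}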
@@ -262,6 +262,13 @@ for query_index, q in enumerate(test_queries):
        print(f'query\t{query_index}\t{run_id}\t{conn_index}\t{c.last_query.elapsed}')
        server_seconds += c.last_query.elapsed

+        if c.last_query.elapsed > 10:
+            # Stop processing pathologically slow queries, to avoid timing out
+            # the entire test task. This shouldn't really happen, so we don't
+            # need much handling for this case and can just exit.
+            print(f'The query no. {query_index} is taking too long to run ({c.last_query.elapsed} s)', file=sys.stderr)
+            exit(2)
+
    client_seconds = time.perf_counter() - start_seconds
    print(f'client-time\t{query_index}\t{client_seconds}\t{server_seconds}')
@@ -370,7 +370,7 @@ if args.report == 'main':
    columns = [
        'Old, s', # 0
        'New, s', # 1
-        'Times speedup / slowdown', # 2
+        'Ratio of speedup (-) or slowdown (+)', # 2
        'Relative difference (new − old) / old', # 3
        'p < 0.001 threshold', # 4
        # Failed # 5

@@ -447,7 +447,7 @@ if args.report == 'main':
    addSimpleTable('Skipped tests', ['Test', 'Reason'], skipped_tests_rows)

    addSimpleTable('Test performance changes',
-        ['Test', 'Queries', 'Unstable', 'Changed perf', 'Total not OK', 'Avg relative time diff'],
+        ['Test', 'Ratio of speedup (-) or slowdown (+)', 'Queries', 'Total not OK', 'Changed perf', 'Unstable'],
        tsvRows('report/test-perf-changes.tsv'))

    def add_test_times():

@@ -647,7 +647,7 @@ elif args.report == 'all-queries':
        # Unstable #1
        'Old, s', #2
        'New, s', #3
-        'Times speedup / slowdown', #4
+        'Ratio of speedup (-) or slowdown (+)', #4
        'Relative difference (new − old) / old', #5
        'p < 0.001 threshold', #6
        'Test', #7
@@ -29,17 +29,26 @@ if [[ -n "$USE_DATABASE_ATOMIC" ]] && [[ "$USE_DATABASE_ATOMIC" -eq 1 ]]; then
    ln -s /usr/share/clickhouse-test/config/database_atomic_usersd.xml /etc/clickhouse-server/users.d/
fi

echo "TSAN_OPTIONS='verbosity=1000 halt_on_error=1 history_size=7'" >> /etc/environment
echo "TSAN_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment
echo "UBSAN_OPTIONS='print_stacktrace=1'" >> /etc/environment
echo "ASAN_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment
echo "UBSAN_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment
echo "LLVM_SYMBOLIZER_PATH=/usr/lib/llvm-10/bin/llvm-symbolizer" >> /etc/environment

+function start()
+{
+    counter=0
+    until clickhouse-client --query "SELECT 1"
+    do
+        if [ "$counter" -gt 120 ]
+        then
+            echo "Cannot start clickhouse-server"
+            cat /var/log/clickhouse-server/stdout.log
+            tail -n1000 /var/log/clickhouse-server/stderr.log
+            tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
+            break
+        fi
+        timeout 120 service clickhouse-server start
+        sleep 0.5
+        counter=$(($counter + 1))
+    done
+}
+
service zookeeper start
sleep 5
-service clickhouse-server start
-sleep 5
+start

/s3downloader --dataset-names $DATASETS
chmod 777 -R /var/lib/clickhouse
clickhouse-client --query "SHOW DATABASES"
@@ -2,6 +2,7 @@
# -*- coding: utf-8 -*-
import os
import sys
+import time
import tarfile
import logging
import argparse

@@ -16,6 +17,8 @@ AVAILABLE_DATASETS = {
    'visits': 'visits_v1.tar',
}

+RETRIES_COUNT = 5
+
def _get_temp_file_name():
    return os.path.join(tempfile._get_default_tempdir(), next(tempfile._get_candidate_names()))

@@ -24,6 +27,8 @@ def build_url(base_url, dataset):

def dowload_with_progress(url, path):
    logging.info("Downloading from %s to temp path %s", url, path)
+    for i in range(RETRIES_COUNT):
+        try:
    with open(path, 'w') as f:
        response = requests.get(url, stream=True)
        response.raise_for_status()

@@ -43,6 +48,16 @@ def dowload_with_progress(url, path):
            percent = int(100 * float(dl) / total_length)
            sys.stdout.write("\r[{}{}] {}%".format('=' * done, ' ' * (50-done), percent))
            sys.stdout.flush()
+            break
+        except Exception as ex:
+            sys.stdout.write("\n")
+            time.sleep(3)
+            logging.info("Exception while downloading %s, retry %s", ex, i + 1)
+            if os.path.exists(path):
+                os.remove(path)
+    else:
+        raise Exception("Cannot download dataset from {}, all retries exceeded".format(url))
+
    sys.stdout.write("\n")
    logging.info("Downloading finished")
@@ -71,14 +71,26 @@ ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config
ln -s --backup=simple --suffix=_original.xml \
    /usr/share/clickhouse-test/config/query_masking_rules.xml /etc/clickhouse-server/config.d/

+function start()
+{
+    counter=0
+    until clickhouse-client --query "SELECT 1"
+    do
+        if [ "$counter" -gt 120 ]
+        then
+            echo "Cannot start clickhouse-server"
+            cat /var/log/clickhouse-server/stdout.log
+            tail -n1000 /var/log/clickhouse-server/stderr.log
+            tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
+            break
+        fi
+        timeout 120 service clickhouse-server start
+        sleep 0.5
+        counter=$(($counter + 1))
+    done
+}
+
service zookeeper start

sleep 5

-start_clickhouse
-
-sleep 5
+start

if ! /s3downloader --dataset-names $DATASETS; then
    echo "Cannot download datasets"
@ -2,6 +2,7 @@
|
||||
# -*- coding: utf-8 -*-
|
||||
import os
|
||||
import sys
|
||||
import time
|
||||
import tarfile
|
||||
import logging
|
||||
import argparse
|
||||
@ -16,6 +17,8 @@ AVAILABLE_DATASETS = {
|
||||
'visits': 'visits_v1.tar',
|
||||
}
|
||||
|
||||
RETRIES_COUNT = 5
|
||||
|
||||
def _get_temp_file_name():
|
||||
return os.path.join(tempfile._get_default_tempdir(), next(tempfile._get_candidate_names()))
|
||||
|
||||
@ -24,6 +27,8 @@ def build_url(base_url, dataset):
|
||||
|
||||
def dowload_with_progress(url, path):
|
||||
logging.info("Downloading from %s to temp path %s", url, path)
|
||||
for i in range(RETRIES_COUNT):
|
||||
try:
|
||||
with open(path, 'w') as f:
|
||||
response = requests.get(url, stream=True)
|
||||
response.raise_for_status()
|
||||
@ -43,6 +48,16 @@ def dowload_with_progress(url, path):
|
||||
percent = int(100 * float(dl) / total_length)
|
||||
sys.stdout.write("\r[{}{}] {}%".format('=' * done, ' ' * (50-done), percent))
|
||||
sys.stdout.flush()
|
||||
break
|
||||
except Exception as ex:
|
||||
sys.stdout.write("\n")
|
||||
time.sleep(3)
|
||||
logging.info("Exception while downloading %s, retry %s", ex, i + 1)
|
||||
if os.path.exists(path):
|
||||
os.remove(path)
|
||||
else:
|
||||
raise Exception("Cannot download dataset from {}, all retries exceeded".format(url))
|
||||
|
||||
sys.stdout.write("\n")
|
||||
logging.info("Downloading finished")
|
||||
|
||||
|
@ -24,6 +24,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
|
||||
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
|
||||
ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
|
||||
ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/
|
||||
|
@ -24,6 +24,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
|
||||
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
|
||||
ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
|
||||
ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/
|
||||
|
@ -57,6 +57,7 @@ ln -s /usr/share/clickhouse-test/config/access_management.xml /etc/clickhouse-se
|
||||
ln -s /usr/share/clickhouse-test/config/ints_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/strings_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/decimals_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/executable_dictionary.xml /etc/clickhouse-server/
|
||||
ln -s /usr/share/clickhouse-test/config/macros.xml /etc/clickhouse-server/config.d/
|
||||
ln -s /usr/share/clickhouse-test/config/disks.xml /etc/clickhouse-server/config.d/
|
||||
ln -s /usr/share/clickhouse-test/config/secure_ports.xml /etc/clickhouse-server/config.d/
|
||||
|
@ -32,7 +32,8 @@ SETTINGS
|
||||
[kafka_num_consumers = N,]
|
||||
[kafka_max_block_size = 0,]
|
||||
[kafka_skip_broken_messages = N,]
|
||||
[kafka_commit_every_batch = 0]
|
||||
[kafka_commit_every_batch = 0,]
|
||||
[kafka_thread_per_consumer = 0]
|
||||
```
|
||||
|
||||
Required parameters:
|
||||
@ -50,6 +51,7 @@ Optional parameters:
|
||||
- `kafka_max_block_size` - The maximum batch size (in messages) for poll (default: `max_block_size`).
|
||||
- `kafka_skip_broken_messages` – Kafka message parser tolerance to schema-incompatible messages per block. Default: `0`. If `kafka_skip_broken_messages = N` then the engine skips *N* Kafka messages that cannot be parsed (a message equals a row of data).
|
||||
- `kafka_commit_every_batch` - Commit every consumed and handled batch instead of a single commit after writing a whole block (default: `0`).
|
||||
- `kafka_thread_per_consumer` - Provide independent thread for each consumer (default: `0`). When enabled, every consumer flush the data independently, in parallel (otherwise - rows from several consumers squashed to form one block).
|
||||
|
||||
Examples:
|
||||
|
||||
|
@ -27,9 +27,15 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
|
||||
[rabbitmq_exchange_type = 'exchange_type',]
|
||||
[rabbitmq_routing_key_list = 'key1,key2,...',]
|
||||
[rabbitmq_row_delimiter = 'delimiter_symbol',]
|
||||
[rabbitmq_schema = '',]
|
||||
[rabbitmq_num_consumers = N,]
|
||||
[rabbitmq_num_queues = N,]
|
||||
[rabbitmq_transactional_channel = 0]
|
||||
[rabbitmq_queue_base = 'queue',]
|
||||
[rabbitmq_deadletter_exchange = 'dl-exchange',]
|
||||
[rabbitmq_persistent = 0,]
|
||||
[rabbitmq_skip_broken_messages = N,]
|
||||
[rabbitmq_max_block_size = N,]
|
||||
[rabbitmq_flush_interval_ms = N]
|
||||
```
|
||||
|
||||
Required parameters:
|
||||
@ -40,12 +46,18 @@ Required parameters:
|
||||
|
||||
Optional parameters:
|
||||
|
||||
- `rabbitmq_exchange_type` – The type of RabbitMQ exchange: `direct`, `fanout`, `topic`, `headers`, `consistent-hash`. Default: `fanout`.
|
||||
- `rabbitmq_exchange_type` – The type of RabbitMQ exchange: `direct`, `fanout`, `topic`, `headers`, `consistent_hash`. Default: `fanout`.
|
||||
- `rabbitmq_routing_key_list` – A comma-separated list of routing keys.
|
||||
- `rabbitmq_row_delimiter` – Delimiter character, which ends the message.
|
||||
- `rabbitmq_schema` – Parameter that must be used if the format requires a schema definition. For example, [Cap’n Proto](https://capnproto.org/) requires the path to the schema file and the name of the root `schema.capnp:Message` object.
|
||||
- `rabbitmq_num_consumers` – The number of consumers per table. Default: `1`. Specify more consumers if the throughput of one consumer is insufficient.
|
||||
- `rabbitmq_num_queues` – The number of queues per consumer. Default: `1`. Specify more queues if the capacity of one queue per consumer is insufficient. A single queue can contain up to 50K messages at the same time.
|
||||
- `rabbitmq_transactional_channel` – Wrap `INSERT` queries in transactions. Default: `0`.
|
||||
- `rabbitmq_num_queues` – The number of queues per consumer. Default: `1`. Specify more queues if the capacity of one queue per consumer is insufficient.
|
||||
- `rabbitmq_queue_base` - Specify a base name for queues that will be declared. By default, queues are declared unique to tables based on db and table names.
|
||||
- `rabbitmq_deadletter_exchange` - Specify name for a [dead letter exchange](https://www.rabbitmq.com/dlx.html). You can create another table with this exchange name and collect messages in cases when they are republished to dead letter exchange. By default dead letter exchange is not specified.
|
||||
- `persistent` - If set to 1 (true), in insert query delivery mode will be set to 2 (marks messages as 'persistent'). Default: `0`.
|
||||
- `rabbitmq_skip_broken_messages` – RabbitMQ message parser tolerance to schema-incompatible messages per block. Default: `0`. If `rabbitmq_skip_broken_messages = N` then the engine skips *N* RabbitMQ messages that cannot be parsed (a message equals a row of data).
|
||||
- `rabbitmq_max_block_size`
|
||||
- `rabbitmq_flush_interval_ms`
|
||||
|
||||
Required configuration:
|
||||
|
||||
@ -92,13 +104,22 @@ Exchange type options:
|
||||
- `headers` - Routing is based on `key=value` matches with a setting `x-match=all` or `x-match=any`. Example table key list: `x-match=all,format=logs,type=report,year=2020`.
|
||||
- `consistent-hash` - Data is evenly distributed between all bound tables (where the exchange name is the same). Note that this exchange type must be enabled with RabbitMQ plugin: `rabbitmq-plugins enable rabbitmq_consistent_hash_exchange`.
|
||||
|
||||
If exchange type is not specified, then default is `fanout` and routing keys for data publishing must be randomized in range `[1, num_consumers]` for every message/batch (or in range `[1, num_consumers * num_queues]` if `rabbitmq_num_queues` is set). This table configuration works quicker than any other, especially when `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are set.
|
||||
Setting `rabbitmq_queue_base` may be used for the following cases:
|
||||
- to let different tables share queues, so that multiple consumers could be registered for the same queues, which makes a better performance. If using `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` settings, the exact match of queues is achieved in case these parameters are the same.
|
||||
- to be able to restore reading from certain durable queues when not all messages were successfully consumed. To be able to resume consumption from one specific queue - set its name in `rabbitmq_queue_base` setting and do not specify `rabbitmq_num_consumers` and `rabbitmq_num_queues` (defaults to 1). To be able to resume consumption from all queues, which were declared for a specific table - just specify the same settings: `rabbitmq_queue_base`, `rabbitmq_num_consumers`, `rabbitmq_num_queues`. By default, queue names will be unique to tables. Note: it makes sence only if messages are sent with delivery mode 2 - marked 'persistent', durable.
|
||||
- to reuse queues as they are declared durable and not auto-deleted.
|
||||
|
||||
If `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` parameters are specified along with `rabbitmq_exchange_type`, then:
|
||||
To improve performance, received messages are grouped into blocks the size of [max\_insert\_block\_size](../../../operations/server-configuration-parameters/settings.md#settings-max_insert_block_size). If the block wasn’t formed within [stream\_flush\_interval\_ms](../../../operations/server-configuration-parameters/settings.md) milliseconds, the data will be flushed to the table regardless of the completeness of the block.
|
||||
|
||||
If `rabbitmq_num_consumers` and/or `rabbitmq_num_queues` settings are specified along with `rabbitmq_exchange_type`, then:
|
||||
|
||||
- `rabbitmq-consistent-hash-exchange` plugin must be enabled.
|
||||
- `message_id` property of the published messages must be specified (unique for each message/batch).
|
||||
|
||||
For insert query there is message metadata, which is added for each published message: `messageID` and `republished` flag (true, if published more than once) - can be accessed via message headers.
|
||||
|
||||
Do not use the same table for inserts and materialized views.
|
||||
|
||||
Example:
|
||||
|
||||
``` sql
|
||||
@ -113,10 +134,18 @@ Example:
|
||||
rabbitmq_num_consumers = 5;
|
||||
|
||||
CREATE TABLE daily (key UInt64, value UInt64)
|
||||
ENGINE = MergeTree();
|
||||
ENGINE = MergeTree() ORDER BY key;
|
||||
|
||||
CREATE MATERIALIZED VIEW consumer TO daily
|
||||
AS SELECT key, value FROM queue;
|
||||
|
||||
SELECT key, value FROM daily ORDER BY key;
|
||||
```
|
||||
|
||||
## Virtual Columns {#virtual-columns}
|
||||
|
||||
- `_exchange_name` - RabbitMQ exchange name.
|
||||
- `_channel_id` - ChannelID, on which consumer, who received the message, was declared.
|
||||
- `_delivery_tag` - DeliveryTag of the received message. Scoped per channel.
|
||||
- `_redelivered` - `redelivered` flag of the message.
|
||||
- `_message_id` - MessageID of the received message; non-empty if was set, when message was published.
|
||||
|
@ -11,7 +11,7 @@ results of a `SELECT`, and to perform `INSERT`s into a file-backed table.
|
||||
The supported formats are:
|
||||
|
||||
| Format | Input | Output |
|
||||
|-----------------------------------------------------------------|-------|--------|
|
||||
|-----------------------------------------------------------------------------------------|-------|--------|
|
||||
| [TabSeparated](#tabseparated) | ✔ | ✔ |
|
||||
| [TabSeparatedRaw](#tabseparatedraw) | ✔ | ✔ |
|
||||
| [TabSeparatedWithNames](#tabseparatedwithnames) | ✔ | ✔ |
|
||||
@ -25,8 +25,17 @@ The supported formats are:
|
||||
| [Vertical](#vertical) | ✗ | ✔ |
|
||||
| [VerticalRaw](#verticalraw) | ✗ | ✔ |
|
||||
| [JSON](#json) | ✗ | ✔ |
|
||||
| [JSONString](#jsonstring) | ✗ | ✔ |
|
||||
| [JSONCompact](#jsoncompact) | ✗ | ✔ |
|
||||
| [JSONCompactString](#jsoncompactstring) | ✗ | ✔ |
|
||||
| [JSONEachRow](#jsoneachrow) | ✔ | ✔ |
|
||||
| [JSONEachRowWithProgress](#jsoneachrowwithprogress) | ✗ | ✔ |
|
||||
| [JSONStringEachRow](#jsonstringeachrow) | ✔ | ✔ |
|
||||
| [JSONStringEachRowWithProgress](#jsonstringeachrowwithprogress) | ✗ | ✔ |
|
||||
| [JSONCompactEachRow](#jsoncompacteachrow) | ✔ | ✔ |
|
||||
| [JSONCompactEachRowWithNamesAndTypes](#jsoncompacteachrowwithnamesandtypes) | ✔ | ✔ |
|
||||
| [JSONCompactStringEachRow](#jsoncompactstringeachrow) | ✔ | ✔ |
|
||||
| [JSONCompactStringEachRowWithNamesAndTypes](#jsoncompactstringeachrowwithnamesandtypes) | ✔ | ✔ |
|
||||
| [TSKV](#tskv) | ✔ | ✔ |
|
||||
| [Pretty](#pretty) | ✗ | ✔ |
|
||||
| [PrettyCompact](#prettycompact) | ✗ | ✔ |
|
||||
@ -392,62 +401,41 @@ SELECT SearchPhrase, count() AS c FROM test.hits GROUP BY SearchPhrase WITH TOTA
|
||||
"meta":
|
||||
[
|
||||
{
|
||||
"name": "SearchPhrase",
|
||||
"name": "'hello'",
|
||||
"type": "String"
|
||||
},
|
||||
{
|
||||
"name": "c",
|
||||
"name": "multiply(42, number)",
|
||||
"type": "UInt64"
|
||||
},
|
||||
{
|
||||
"name": "range(5)",
|
||||
"type": "Array(UInt8)"
|
||||
}
|
||||
],
|
||||
|
||||
"data":
|
||||
[
|
||||
{
|
||||
"SearchPhrase": "",
|
||||
"c": "8267016"
|
||||
"'hello'": "hello",
|
||||
"multiply(42, number)": "0",
|
||||
"range(5)": [0,1,2,3,4]
|
||||
},
|
||||
{
|
||||
"SearchPhrase": "bathroom interior design",
|
||||
"c": "2166"
|
||||
"'hello'": "hello",
|
||||
"multiply(42, number)": "42",
|
||||
"range(5)": [0,1,2,3,4]
|
||||
},
|
||||
{
|
||||
"SearchPhrase": "yandex",
|
||||
"c": "1655"
|
||||
},
|
||||
{
|
||||
"SearchPhrase": "spring 2014 fashion",
|
||||
"c": "1549"
|
||||
},
|
||||
{
|
||||
"SearchPhrase": "freeform photos",
|
||||
"c": "1480"
|
||||
"'hello'": "hello",
|
||||
"multiply(42, number)": "84",
|
||||
"range(5)": [0,1,2,3,4]
|
||||
}
|
||||
],
|
||||
|
||||
"totals":
|
||||
{
|
||||
"SearchPhrase": "",
|
||||
"c": "8873898"
|
||||
},
|
||||
"rows": 3,
|
||||
|
||||
"extremes":
|
||||
{
|
||||
"min":
|
||||
{
|
||||
"SearchPhrase": "",
|
||||
"c": "1480"
|
||||
},
|
||||
"max":
|
||||
{
|
||||
"SearchPhrase": "",
|
||||
"c": "8267016"
|
||||
}
|
||||
},
|
||||
|
||||
"rows": 5,
|
||||
|
||||
"rows_before_limit_at_least": 141137
|
||||
"rows_before_limit_at_least": 3
|
||||
}
|
||||
```
|
||||
|
||||
@ -468,63 +456,165 @@ ClickHouse supports [NULL](../sql-reference/syntax.md), which is displayed as `n
|
||||
|
||||
See also the [JSONEachRow](#jsoneachrow) format.
|
||||
|
||||
## JSONString {#jsonstring}
|
||||
|
||||
Differs from JSON only in that data fields are output in strings, not in typed json values.
|
||||
|
||||
Example:
|
||||
|
||||
```json
|
||||
{
|
||||
"meta":
|
||||
[
|
||||
{
|
||||
"name": "'hello'",
|
||||
"type": "String"
|
||||
},
|
||||
{
|
||||
"name": "multiply(42, number)",
|
||||
"type": "UInt64"
|
||||
},
|
||||
{
|
||||
"name": "range(5)",
|
||||
"type": "Array(UInt8)"
|
||||
}
|
||||
],
|
||||
|
||||
"data":
|
||||
[
|
||||
{
|
||||
"'hello'": "hello",
|
||||
"multiply(42, number)": "0",
|
||||
"range(5)": "[0,1,2,3,4]"
|
||||
},
|
||||
{
|
||||
"'hello'": "hello",
|
||||
"multiply(42, number)": "42",
|
||||
"range(5)": "[0,1,2,3,4]"
|
||||
},
|
||||
{
|
||||
"'hello'": "hello",
|
||||
"multiply(42, number)": "84",
|
||||
"range(5)": "[0,1,2,3,4]"
|
||||
}
|
||||
],
|
||||
|
||||
"rows": 3,
|
||||
|
||||
"rows_before_limit_at_least": 3
|
||||
}
|
||||
```
|
||||
|
||||
## JSONCompact {#jsoncompact}
|
||||
## JSONCompactString {#jsoncompactstring}
|
||||
|
||||
Differs from JSON only in that data rows are output in arrays, not in objects.
|
||||
|
||||
Example:
|
||||
|
||||
``` json
|
||||
// JSONCompact
|
||||
{
|
||||
"meta":
|
||||
[
|
||||
{
|
||||
"name": "SearchPhrase",
|
||||
"name": "'hello'",
|
||||
"type": "String"
|
||||
},
|
||||
{
|
||||
"name": "c",
|
||||
"name": "multiply(42, number)",
|
||||
"type": "UInt64"
|
||||
},
|
||||
{
|
||||
"name": "range(5)",
|
||||
"type": "Array(UInt8)"
|
||||
}
|
||||
],
|
||||
|
||||
"data":
|
||||
[
|
||||
["", "8267016"],
|
||||
["bathroom interior design", "2166"],
|
||||
["yandex", "1655"],
|
||||
["fashion trends spring 2014", "1549"],
|
||||
["freeform photo", "1480"]
|
||||
["hello", "0", [0,1,2,3,4]],
|
||||
["hello", "42", [0,1,2,3,4]],
|
||||
["hello", "84", [0,1,2,3,4]]
|
||||
],
|
||||
|
||||
"totals": ["","8873898"],
|
||||
"rows": 3,
|
||||
|
||||
"extremes":
|
||||
{
|
||||
"min": ["","1480"],
|
||||
"max": ["","8267016"]
|
||||
},
|
||||
|
||||
"rows": 5,
|
||||
|
||||
"rows_before_limit_at_least": 141137
|
||||
"rows_before_limit_at_least": 3
|
||||
}
|
||||
```
|
||||
|
||||
This format is only appropriate for outputting a query result, but not for parsing (retrieving data to insert in a table).
|
||||
See also the `JSONEachRow` format.
|
||||
```json
|
||||
// JSONCompactString
|
||||
{
|
||||
"meta":
|
||||
[
|
||||
{
|
||||
"name": "'hello'",
|
||||
"type": "String"
|
||||
},
|
||||
{
|
||||
"name": "multiply(42, number)",
|
||||
"type": "UInt64"
|
||||
},
|
||||
{
|
||||
"name": "range(5)",
|
||||
"type": "Array(UInt8)"
|
||||
}
|
||||
],
|
||||
|
||||
## JSONEachRow {#jsoneachrow}
|
||||
"data":
|
||||
[
|
||||
["hello", "0", "[0,1,2,3,4]"],
|
||||
["hello", "42", "[0,1,2,3,4]"],
|
||||
["hello", "84", "[0,1,2,3,4]"]
|
||||
],
|
||||
|
||||
When using this format, ClickHouse outputs rows as separated, newline-delimited JSON objects, but the data as a whole is not valid JSON.
|
||||
"rows": 3,
|
||||
|
||||
``` json
|
||||
{"SearchPhrase":"curtain designs","count()":"1064"}
|
||||
{"SearchPhrase":"baku","count()":"1000"}
|
||||
{"SearchPhrase":"","count()":"8267016"}
|
||||
"rows_before_limit_at_least": 3
|
||||
}
|
||||
```
|
||||
|
||||
When inserting the data, you should provide a separate JSON object for each row.
|
||||
## JSONEachRow {#jsoneachrow}
|
||||
## JSONStringEachRow {#jsonstringeachrow}
|
||||
## JSONCompactEachRow {#jsoncompacteachrow}
|
||||
## JSONCompactStringEachRow {#jsoncompactstringeachrow}
|
||||
|
||||
When using these formats, ClickHouse outputs rows as separated, newline-delimited JSON values, but the data as a whole is not valid JSON.
|
||||
|
||||
``` json
|
||||
{"some_int":42,"some_str":"hello","some_tuple":[1,"a"]} // JSONEachRow
|
||||
[42,"hello",[1,"a"]] // JSONCompactEachRow
|
||||
["42","hello","(2,'a')"] // JSONCompactStringsEachRow
|
||||
```
|
||||
|
||||
When inserting the data, you should provide a separate JSON value for each row.
|
||||
|
||||
## JSONEachRowWithProgress {#jsoneachrowwithprogress}
|
||||
## JSONStringEachRowWithProgress {#jsonstringeachrowwithprogress}
|
||||
|
||||
Differs from JSONEachRow/JSONStringEachRow in that ClickHouse will also yield progress information as JSON objects.
|
||||
|
||||
```json
|
||||
{"row":{"'hello'":"hello","multiply(42, number)":"0","range(5)":[0,1,2,3,4]}}
|
||||
{"row":{"'hello'":"hello","multiply(42, number)":"42","range(5)":[0,1,2,3,4]}}
|
||||
{"row":{"'hello'":"hello","multiply(42, number)":"84","range(5)":[0,1,2,3,4]}}
|
||||
{"progress":{"read_rows":"3","read_bytes":"24","written_rows":"0","written_bytes":"0","total_rows_to_read":"3"}}
|
||||
```
|
||||
|
||||
## JSONCompactEachRowWithNamesAndTypes {#jsoncompacteachrowwithnamesandtypes}
|
||||
## JSONCompactStringEachRowWithNamesAndTypes {#jsoncompactstringeachrowwithnamesandtypes}
|
||||
|
||||
Differs from JSONCompactEachRow/JSONCompactStringEachRow in that the column names and types are written as the first two rows.
|
||||
|
||||
```json
|
||||
["'hello'", "multiply(42, number)", "range(5)"]
|
||||
["String", "UInt64", "Array(UInt8)"]
|
||||
["hello", "0", [0,1,2,3,4]]
|
||||
["hello", "42", [0,1,2,3,4]]
|
||||
["hello", "84", [0,1,2,3,4]]
|
||||
```
|
||||
|
||||
### Inserting Data {#inserting-data}
|
||||
|
||||
|
@ -62,6 +62,7 @@ Management queries:
|
||||
- [ALTER USER](../sql-reference/statements/alter/user.md#alter-user-statement)
|
||||
- [DROP USER](../sql-reference/statements/drop.md)
|
||||
- [SHOW CREATE USER](../sql-reference/statements/show.md#show-create-user-statement)
|
||||
- [SHOW USERS](../sql-reference/statements/show.md#show-users-statement)
|
||||
|
||||
### Settings Applying {#access-control-settings-applying}
|
||||
|
||||
@ -90,6 +91,7 @@ Management queries:
|
||||
- [SET ROLE](../sql-reference/statements/set-role.md)
|
||||
- [SET DEFAULT ROLE](../sql-reference/statements/set-role.md#set-default-role-statement)
|
||||
- [SHOW CREATE ROLE](../sql-reference/statements/show.md#show-create-role-statement)
|
||||
- [SHOW ROLES](../sql-reference/statements/show.md#show-roles-statement)
|
||||
|
||||
Privileges can be granted to a role by the [GRANT](../sql-reference/statements/grant.md) query. To revoke privileges from a role ClickHouse provides the [REVOKE](../sql-reference/statements/revoke.md) query.
|
||||
|
||||
@ -103,6 +105,7 @@ Management queries:
|
||||
- [ALTER ROW POLICY](../sql-reference/statements/alter/row-policy.md#alter-row-policy-statement)
|
||||
- [DROP ROW POLICY](../sql-reference/statements/drop.md#drop-row-policy-statement)
|
||||
- [SHOW CREATE ROW POLICY](../sql-reference/statements/show.md#show-create-row-policy-statement)
|
||||
- [SHOW POLICIES](../sql-reference/statements/show.md#show-policies-statement)
|
||||
|
||||
## Settings Profile {#settings-profiles-management}
|
||||
|
||||
@ -114,6 +117,7 @@ Management queries:
|
||||
- [ALTER SETTINGS PROFILE](../sql-reference/statements/alter/settings-profile.md#alter-settings-profile-statement)
|
||||
- [DROP SETTINGS PROFILE](../sql-reference/statements/drop.md#drop-settings-profile-statement)
|
||||
- [SHOW CREATE SETTINGS PROFILE](../sql-reference/statements/show.md#show-create-settings-profile-statement)
|
||||
- [SHOW PROFILES](../sql-reference/statements/show.md#show-profiles-statement)
|
||||
|
||||
## Quota {#quotas-management}
|
||||
|
||||
@ -127,6 +131,8 @@ Management queries:
|
||||
- [ALTER QUOTA](../sql-reference/statements/alter/quota.md#alter-quota-statement)
|
||||
- [DROP QUOTA](../sql-reference/statements/drop.md#drop-quota-statement)
|
||||
- [SHOW CREATE QUOTA](../sql-reference/statements/show.md#show-create-quota-statement)
|
||||
- [SHOW QUOTA](../sql-reference/statements/show.md#show-quota-statement)
|
||||
- [SHOW QUOTAS](../sql-reference/statements/show.md#show-quotas-statement)
|
||||
|
||||
## Enabling SQL-driven Access Control and Account Management {#enabling-access-control}
|
||||
|
||||
|
@ -1290,6 +1290,47 @@ Possible values:
|
||||
|
||||
Default value: 0.
|
||||
|
||||
## distributed\_group\_by\_no\_merge {#distributed-group-by-no-merge}
|
||||
|
||||
Do not merge aggregation states from different servers for distributed query processing, you can use this in case it is for certain that there are different keys on different shards
|
||||
|
||||
Possible values:
|
||||
|
||||
- 0 — Disabled (final query processing is done on the initiator node).
|
||||
- 1 - Do not merge aggregation states from different servers for distributed query processing (query completelly processed on the shard, initiator only proxy the data).
|
||||
- 2 - Same as 1 but apply `ORDER BY` and `LIMIT` on the initiator (can be used for queries with `ORDER BY` and/or `LIMIT`).
|
||||
|
||||
**Example**
|
||||
|
||||
```sql
|
||||
SELECT *
|
||||
FROM remote('127.0.0.{2,3}', system.one)
|
||||
GROUP BY dummy
|
||||
LIMIT 1
|
||||
SETTINGS distributed_group_by_no_merge = 1
|
||||
FORMAT PrettyCompactMonoBlock
|
||||
|
||||
┌─dummy─┐
|
||||
│ 0 │
|
||||
│ 0 │
|
||||
└───────┘
|
||||
```
|
||||
|
||||
```sql
|
||||
SELECT *
|
||||
FROM remote('127.0.0.{2,3}', system.one)
|
||||
GROUP BY dummy
|
||||
LIMIT 1
|
||||
SETTINGS distributed_group_by_no_merge = 2
|
||||
FORMAT PrettyCompactMonoBlock
|
||||
|
||||
┌─dummy─┐
|
||||
│ 0 │
|
||||
└───────┘
|
||||
```
|
||||
|
||||
Default value: 0
|
||||
|
||||
## optimize\_skip\_unused\_shards {#optimize-skip-unused-shards}
|
||||
|
||||
Enables or disables skipping of unused shards for [SELECT](../../sql-reference/statements/select/index.md) queries that have sharding key condition in `WHERE/PREWHERE` (assuming that the data is distributed by sharding key, otherwise does nothing).
|
||||
@ -1337,6 +1378,40 @@ Possible values:
|
||||
|
||||
Default value: 0
|
||||
|
||||
## optimize\_distributed\_group\_by\_sharding\_key {#optimize-distributed-group-by-sharding-key}
|
||||
|
||||
Optimize `GROUP BY sharding_key` queries, by avoiding costly aggregation on the initiator server (which will reduce memory usage for the query on the initiator server).
|
||||
|
||||
The following types of queries are supported (and all combinations of them):
|
||||
|
||||
- `SELECT DISTINCT [..., ]sharding_key[, ...] FROM dist`
|
||||
- `SELECT ... FROM dist GROUP BY sharding_key[, ...]`
|
||||
- `SELECT ... FROM dist GROUP BY sharding_key[, ...] ORDER BY x`
|
||||
- `SELECT ... FROM dist GROUP BY sharding_key[, ...] LIMIT 1`
|
||||
- `SELECT ... FROM dist GROUP BY sharding_key[, ...] LIMIT 1 BY x`
|
||||
|
||||
The following types of queries are not supported (support for some of them may be added later):
|
||||
|
||||
- `SELECT ... GROUP BY sharding_key[, ...] WITH TOTALS`
|
||||
- `SELECT ... GROUP BY sharding_key[, ...] WITH ROLLUP`
|
||||
- `SELECT ... GROUP BY sharding_key[, ...] WITH CUBE`
|
||||
- `SELECT ... GROUP BY sharding_key[, ...] SETTINGS extremes=1`
|
||||
|
||||
Possible values:
|
||||
|
||||
- 0 — Disabled.
|
||||
- 1 — Enabled.
|
||||
|
||||
Default value: 0
|
||||
|
||||
See also:
|
||||
|
||||
- [distributed\_group\_by\_no\_merge](#distributed-group-by-no-merge)
|
||||
- [optimize\_skip\_unused\_shards](#optimize-skip-unused-shards)
|
||||
|
||||
!!! note "Note"
|
||||
Right now it requires `optimize_skip_unused_shards` (the reason behind this is that one day it may be enabled by default, and it will work correctly only if data was inserted via Distributed table, i.e. data is distributed according to sharding_key).
|
||||
|
||||
## optimize\_throw\_if\_noop {#setting-optimize_throw_if_noop}
|
||||
|
||||
Enables or disables throwing an exception if an [OPTIMIZE](../../sql-reference/statements/misc.md#misc_operations-optimize) query didn’t perform a merge.
|
||||
@ -1894,9 +1969,9 @@ Locking timeout is used to protect from deadlocks while executing read/write ope
|
||||
|
||||
Possible values:
|
||||
|
||||
- Positive integer.
|
||||
- Positive integer (in seconds).
|
||||
- 0 — No locking timeout.
|
||||
|
||||
Default value: `120`.
|
||||
Default value: `120` seconds.
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/settings/settings/) <!-- hide -->
|
||||
|
@ -6,6 +6,7 @@ Columns:
|
||||
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Event date.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Event time.
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Event time with microseconds resolution.
|
||||
- `name` ([String](../../sql-reference/data-types/string.md)) — Metric name.
|
||||
- `value` ([Float64](../../sql-reference/data-types/float.md)) — Metric value.
|
||||
|
||||
@ -16,18 +17,18 @@ SELECT * FROM system.asynchronous_metric_log LIMIT 10
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─event_date─┬──────────event_time─┬─name─────────────────────────────────────┬────value─┐
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.arenas.all.pmuzzy │ 0 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.arenas.all.pdirty │ 4214 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.background_thread.run_intervals │ 0 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.background_thread.num_runs │ 0 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.retained │ 17657856 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.mapped │ 71471104 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.resident │ 61538304 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.metadata │ 6199264 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.allocated │ 38074336 │
|
||||
│ 2020-06-22 │ 2020-06-22 06:57:30 │ jemalloc.epoch │ 2 │
|
||||
└────────────┴─────────────────────┴──────────────────────────────────────────┴──────────┘
|
||||
┌─event_date─┬──────────event_time─┬────event_time_microseconds─┬─name─────────────────────────────────────┬─────value─┐
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ CPUFrequencyMHz_0 │ 2120.9 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.arenas.all.pmuzzy │ 743 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.arenas.all.pdirty │ 26288 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.background_thread.run_intervals │ 0 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.background_thread.num_runs │ 0 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.retained │ 60694528 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.mapped │ 303161344 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.resident │ 260931584 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.metadata │ 12079488 │
|
||||
│ 2020-09-05 │ 2020-09-05 15:56:30 │ 2020-09-05 15:56:30.025227 │ jemalloc.allocated │ 133756128 │
|
||||
└────────────┴─────────────────────┴────────────────────────────┴──────────────────────────────────────────┴───────────┘
|
||||
```
|
||||
|
||||
**See Also**
|
||||
|
@ -10,12 +10,16 @@ Columns:
|
||||
- `progress` (Float64) — The percentage of completed work from 0 to 1.
|
||||
- `num_parts` (UInt64) — The number of pieces to be merged.
|
||||
- `result_part_name` (String) — The name of the part that will be formed as the result of merging.
|
||||
- `is_mutation` (UInt8) - 1 if this process is a part mutation.
|
||||
- `is_mutation` (UInt8) — 1 if this process is a part mutation.
|
||||
- `total_size_bytes_compressed` (UInt64) — The total size of the compressed data in the merged chunks.
|
||||
- `total_size_marks` (UInt64) — The total number of marks in the merged parts.
|
||||
- `bytes_read_uncompressed` (UInt64) — Number of bytes read, uncompressed.
|
||||
- `rows_read` (UInt64) — Number of rows read.
|
||||
- `bytes_written_uncompressed` (UInt64) — Number of bytes written, uncompressed.
|
||||
- `rows_written` (UInt64) — Number of rows written.
|
||||
- `memory_usage` (UInt64) — Memory consumption of the merge process.
|
||||
- `thread_id` (UInt64) — Thread ID of the merge process.
|
||||
- `merge_type` — The type of current merge. Empty if it's an mutation.
|
||||
- `merge_algorithm` — The algorithm used in current merge. Empty if it's an mutation.
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/merges) <!--hide-->
|
||||
|
@ -23,28 +23,28 @@ SELECT * FROM system.metric_log LIMIT 1 FORMAT Vertical;
|
||||
``` text
|
||||
Row 1:
|
||||
──────
|
||||
event_date: 2020-02-18
|
||||
event_time: 2020-02-18 07:15:33
|
||||
milliseconds: 554
|
||||
event_date: 2020-09-05
|
||||
event_time: 2020-09-05 16:22:33
|
||||
event_time_microseconds: 2020-09-05 16:22:33.196807
|
||||
milliseconds: 196
|
||||
ProfileEvent_Query: 0
|
||||
ProfileEvent_SelectQuery: 0
|
||||
ProfileEvent_InsertQuery: 0
|
||||
ProfileEvent_FileOpen: 0
|
||||
ProfileEvent_Seek: 0
|
||||
ProfileEvent_ReadBufferFromFileDescriptorRead: 1
|
||||
ProfileEvent_ReadBufferFromFileDescriptorReadFailed: 0
|
||||
ProfileEvent_ReadBufferFromFileDescriptorReadBytes: 0
|
||||
ProfileEvent_WriteBufferFromFileDescriptorWrite: 1
|
||||
ProfileEvent_WriteBufferFromFileDescriptorWriteFailed: 0
|
||||
ProfileEvent_WriteBufferFromFileDescriptorWriteBytes: 56
|
||||
ProfileEvent_FailedQuery: 0
|
||||
ProfileEvent_FailedSelectQuery: 0
|
||||
...
|
||||
CurrentMetric_Query: 0
|
||||
CurrentMetric_Merge: 0
|
||||
CurrentMetric_PartMutation: 0
|
||||
CurrentMetric_ReplicatedFetch: 0
|
||||
CurrentMetric_ReplicatedSend: 0
|
||||
CurrentMetric_ReplicatedChecks: 0
|
||||
...
|
||||
CurrentMetric_Revision: 54439
|
||||
CurrentMetric_VersionInteger: 20009001
|
||||
CurrentMetric_RWLockWaitingReaders: 0
|
||||
CurrentMetric_RWLockWaitingWriters: 0
|
||||
CurrentMetric_RWLockActiveReaders: 0
|
||||
CurrentMetric_RWLockActiveWriters: 0
|
||||
CurrentMetric_GlobalThread: 74
|
||||
CurrentMetric_GlobalThreadActive: 26
|
||||
CurrentMetric_LocalThread: 0
|
||||
CurrentMetric_LocalThreadActive: 0
|
||||
CurrentMetric_DistributedFilesToInsert: 0
|
||||
```
|
||||
|
||||
**See also**
|
||||
|
@ -34,6 +34,7 @@ Columns:
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Query starting date.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Query starting time.
|
||||
- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution.
|
||||
- `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision.
|
||||
- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds.
|
||||
- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or rows read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_rows` includes the total number of rows read at all replicas. Each replica sends it’s `read_rows` value, and the server-initiator of the query summarize all received and local values. The cache volumes doesn’t affect this value.
|
||||
- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Total number or bytes read from all tables and table functions participated in query. It includes usual subqueries, subqueries for `IN` and `JOIN`. For distributed queries `read_bytes` includes the total number of rows read at all replicas. Each replica sends it’s `read_bytes` value, and the server-initiator of the query summarize all received and local values. The cache volumes doesn’t affect this value.
|
||||
|
@ -16,6 +16,7 @@ Columns:
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — The date when the thread has finished execution of the query.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — The date and time when the thread has finished execution of the query.
|
||||
- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution.
|
||||
- `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision.
|
||||
- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution.
|
||||
- `read_rows` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of read rows.
|
||||
- `read_bytes` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Number of read bytes.
|
||||
|
@ -23,4 +23,8 @@ Columns:
|
||||
- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — The total query execution time, in seconds (wall time).
|
||||
- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Maximum of query execution time.
|
||||
|
||||
## See Also {#see-also}
|
||||
|
||||
- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement)
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quota_usage) <!--hide-->
|
||||
|
@ -20,5 +20,9 @@ Columns:
|
||||
- `apply_to_list` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of user names/[roles](../../operations/access-rights.md#role-management) that the quota should be applied to.
|
||||
- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of user names/roles that the quota should not apply to.
|
||||
|
||||
## See Also {#see-also}
|
||||
|
||||
- [SHOW QUOTAS](../../sql-reference/statements/show.md#show-quotas-statement)
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quotas) <!--hide-->
|
||||
|
||||
|
@ -24,4 +24,8 @@ Columns:
|
||||
- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — The total query execution time, in seconds (wall time).
|
||||
- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Maximum of query execution time.
|
||||
|
||||
## See Also {#see-also}
|
||||
|
||||
- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement)
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/quotas_usage) <!--hide-->
|
||||
|
@ -5,11 +5,15 @@ Contains the role grants for users and roles. To add entries to this table, use
|
||||
Columns:
|
||||
|
||||
- `user_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — User name.
|
||||
|
||||
- `role_name` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Role name.
|
||||
|
||||
- `granted_role_name` ([String](../../sql-reference/data-types/string.md)) — Name of role granted to the `role_name` role. To grant one role to another one use `GRANT role1 TO role2`.
|
||||
|
||||
- `granted_role_is_default` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `granted_role` is a default role. Possible values:
|
||||
- 1 — `granted_role` is a default role.
|
||||
- 0 — `granted_role` is not a default role.
|
||||
|
||||
- `with_admin_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Flag that shows whether `granted_role` is a role with [ADMIN OPTION](../../sql-reference/statements/grant.md#admin-option-privilege) privilege. Possible values:
|
||||
- 1 — The role has `ADMIN OPTION` privilege.
|
||||
- 0 — The role without `ADMIN OPTION` privilege.
|
||||
|
@ -8,4 +8,8 @@ Columns:
|
||||
- `id` ([UUID](../../sql-reference/data-types/uuid.md)) — Role ID.
|
||||
- `storage` ([String](../../sql-reference/data-types/string.md)) — Path to the storage of roles. Configured in the `access_control_path` parameter.
|
||||
|
||||
## See Also {#see-also}
|
||||
|
||||
- [SHOW ROLES](../../sql-reference/statements/show.md#show-roles-statement)
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/roles) <!--hide-->
|
||||
|
@ -27,4 +27,8 @@ Columns:
|
||||
|
||||
- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — The row policies is applied to all roles and/or users excepting of the listed ones.
|
||||
|
||||
## See Also {#see-also}
|
||||
|
||||
- [SHOW POLICIES](../../sql-reference/statements/show.md#show-policies-statement)
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/row_policies) <!--hide-->
|
||||
|
@ -17,4 +17,8 @@ Columns:
|
||||
|
||||
- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — The setting profile is applied to all roles and/or users excepting of the listed ones.
|
||||
|
||||
## See Also {#see-also}
|
||||
|
||||
- [SHOW PROFILES](../../sql-reference/statements/show.md#show-profiles-statement)
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/settings_profiles) <!--hide-->
|
||||
|
@ -82,8 +82,8 @@ res: /lib/x86_64-linux-gnu/libc-2.27.so
|
||||
|
||||
- [Introspection Functions](../../sql-reference/functions/introspection.md) — Which introspection functions are available and how to use them.
|
||||
- [system.trace_log](../system-tables/trace_log.md) — Contains stack traces collected by the sampling query profiler.
|
||||
- [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) — Description and usage example of the `arrayMap` function.
|
||||
- [arrayFilter](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-filter) — Description and usage example of the `arrayFilter` function.
|
||||
- [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — Description and usage example of the `arrayMap` function.
|
||||
- [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — Description and usage example of the `arrayFilter` function.
|
||||
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system-tables/stack_trace) <!--hide-->
|
||||
|
@ -27,4 +27,8 @@ Columns:
|
||||
|
||||
- `default_roles_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — All the granted roles set as default excepting of the listed ones.
|
||||
|
||||
## See Also {#see-also}
|
||||
|
||||
- [SHOW USERS](../../sql-reference/statements/show.md#show-users-statement)
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/users) <!--hide-->
|
||||
|
@ -35,7 +35,7 @@ $ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
|
||||
Always disable transparent huge pages. It interferes with memory allocators, which leads to significant performance degradation.
|
||||
|
||||
``` bash
|
||||
$ echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
|
||||
$ echo 'madvise' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
|
||||
```
|
||||
|
||||
Use `perf top` to watch the time spent in the kernel for memory management.
|
||||
|
@ -7,7 +7,7 @@ toc_title: Tuple(T1, T2, ...)
|
||||
|
||||
A tuple of elements, each having an individual [type](../../sql-reference/data-types/index.md#data_types).
|
||||
|
||||
Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying certain formal parameters of lambda functions. For more information, see the sections [IN operators](../../sql-reference/operators/in.md) and [Higher order functions](../../sql-reference/functions/higher-order-functions.md).
|
||||
Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying certain formal parameters of lambda functions. For more information, see the sections [IN operators](../../sql-reference/operators/in.md) and [Higher order functions](../../sql-reference/functions/index.md#higher-order-functions).
|
||||
|
||||
Tuples can be the result of a query. In this case, for text formats other than JSON, values are comma-separated in brackets. In JSON formats, tuples are output as arrays (in square brackets).
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
---
|
||||
toc_priority: 35
|
||||
toc_priority: 34
|
||||
toc_title: Arithmetic
|
||||
---
|
||||
|
||||
|
@ -1,9 +1,9 @@
|
||||
---
|
||||
toc_priority: 46
|
||||
toc_priority: 35
|
||||
toc_title: Arrays
|
||||
---
|
||||
|
||||
# Functions for Working with Arrays {#functions-for-working-with-arrays}
|
||||
# Array Functions {#functions-for-working-with-arrays}
|
||||
|
||||
## empty {#function-empty}
|
||||
|
||||
@ -241,6 +241,12 @@ SELECT indexOf([1, 3, NULL, NULL], NULL)
|
||||
|
||||
Elements set to `NULL` are handled as normal values.
|
||||
|
||||
## arrayCount(\[func,\] arr1, …) {#array-count}
|
||||
|
||||
Returns the number of elements in the arr array for which func returns something other than 0. If ‘func’ is not specified, it returns the number of non-zero elements in the array.
|
||||
|
||||
Note that the `arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
|
||||
|
||||
## countEqual(arr, x) {#countequalarr-x}
|
||||
|
||||
Returns the number of elements in the array equal to x. Equivalent to arrayCount (elem -\> elem = x, arr).
|
||||
@ -568,7 +574,7 @@ SELECT arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]);
|
||||
- `NaN` values are right before `NULL`.
|
||||
- `Inf` values are right before `NaN`.
|
||||
|
||||
Note that `arraySort` is a [higher-order function](../../sql-reference/functions/higher-order-functions.md). You can pass a lambda function to it as the first argument. In this case, sorting order is determined by the result of the lambda function applied to the elements of the array.
|
||||
Note that `arraySort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. In this case, sorting order is determined by the result of the lambda function applied to the elements of the array.
|
||||
|
||||
Let’s consider the following example:
|
||||
|
||||
@ -668,7 +674,7 @@ SELECT arrayReverseSort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]) as res;
|
||||
- `NaN` values are right before `NULL`.
|
||||
- `-Inf` values are right before `NaN`.
|
||||
|
||||
Note that the `arrayReverseSort` is a [higher-order function](../../sql-reference/functions/higher-order-functions.md). You can pass a lambda function to it as the first argument. Example is shown below.
|
||||
Note that the `arrayReverseSort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. Example is shown below.
|
||||
|
||||
``` sql
|
||||
SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res;
|
||||
@ -1120,7 +1126,205 @@ Result:
|
||||
``` text
|
||||
┌─arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1])─┐
|
||||
│ 0.75 │
|
||||
└────────────────────────────────────────---──┘
|
||||
└───────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## arrayMap(func, arr1, …) {#array-map}
|
||||
|
||||
Returns an array obtained from the original application of the `func` function to each element in the `arr` array.
|
||||
|
||||
Examples:
|
||||
|
||||
``` sql
|
||||
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res─────┐
|
||||
│ [3,4,5] │
|
||||
└─────────┘
|
||||
```
|
||||
|
||||
The following example shows how to create a tuple of elements from different arrays:
|
||||
|
||||
``` sql
|
||||
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res─────────────────┐
|
||||
│ [(1,4),(2,5),(3,6)] │
|
||||
└─────────────────────┘
|
||||
```
|
||||
|
||||
Note that the `arrayMap` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arrayFilter(func, arr1, …) {#array-filter}
|
||||
|
||||
Returns an array containing only the elements in `arr1` for which `func` returns something other than 0.
|
||||
|
||||
Examples:
|
||||
|
||||
``` sql
|
||||
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res───────────┐
|
||||
│ ['abc World'] │
|
||||
└───────────────┘
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
arrayFilter(
|
||||
(i, x) -> x LIKE '%World%',
|
||||
arrayEnumerate(arr),
|
||||
['Hello', 'abc World'] AS arr)
|
||||
AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res─┐
|
||||
│ [2] │
|
||||
└─────┘
|
||||
```
|
||||
|
||||
Note that the `arrayFilter` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arrayFill(func, arr1, …) {#array-fill}
|
||||
|
||||
Scan through `arr1` from the first element to the last element and replace `arr1[i]` by `arr1[i - 1]` if `func` returns 0. The first element of `arr1` will not be replaced.
|
||||
|
||||
Examples:
|
||||
|
||||
``` sql
|
||||
SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res──────────────────────────────┐
|
||||
│ [1,1,3,11,12,12,12,5,6,14,14,14] │
|
||||
└──────────────────────────────────┘
|
||||
```
|
||||
|
||||
Note that the `arrayFill` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arrayReverseFill(func, arr1, …) {#array-reverse-fill}
|
||||
|
||||
Scan through `arr1` from the last element to the first element and replace `arr1[i]` by `arr1[i + 1]` if `func` returns 0. The last element of `arr1` will not be replaced.
|
||||
|
||||
Examples:
|
||||
|
||||
``` sql
|
||||
SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res────────────────────────────────┐
|
||||
│ [1,3,3,11,12,5,5,5,6,14,NULL,NULL] │
|
||||
└────────────────────────────────────┘
|
||||
```
|
||||
|
||||
Note that the `arrayReverseFilter` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arraySplit(func, arr1, …) {#array-split}
|
||||
|
||||
Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.
|
||||
|
||||
Examples:
|
||||
|
||||
``` sql
|
||||
SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res─────────────┐
|
||||
│ [[1,2,3],[4,5]] │
|
||||
└─────────────────┘
|
||||
```
|
||||
|
||||
Note that the `arraySplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arrayReverseSplit(func, arr1, …) {#array-reverse-split}
|
||||
|
||||
Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.
|
||||
|
||||
Examples:
|
||||
|
||||
``` sql
|
||||
SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res───────────────┐
|
||||
│ [[1],[2,3,4],[5]] │
|
||||
└───────────────────┘
|
||||
```
|
||||
|
||||
Note that the `arrayReverseSplit` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}
|
||||
|
||||
Returns 1 if there is at least one element in `arr` for which `func` returns something other than 0. Otherwise, it returns 0.
|
||||
|
||||
Note that the `arrayExists` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
|
||||
|
||||
## arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}
|
||||
|
||||
Returns 1 if `func` returns something other than 0 for all the elements in `arr`. Otherwise, it returns 0.
|
||||
|
||||
Note that the `arrayAll` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
|
||||
|
||||
## arrayFirst(func, arr1, …) {#array-first}
|
||||
|
||||
Returns the first element in the `arr1` array for which `func` returns something other than 0.
|
||||
|
||||
Note that the `arrayFirst` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arrayFirstIndex(func, arr1, …) {#array-first-index}
|
||||
|
||||
Returns the index of the first element in the `arr1` array for which `func` returns something other than 0.
|
||||
|
||||
Note that the `arrayFirstIndex` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and it can’t be omitted.
|
||||
|
||||
## arraySum(\[func,\] arr1, …) {#array-sum}
|
||||
|
||||
Returns the sum of the `func` values. If the function is omitted, it just returns the sum of the array elements.
|
||||
|
||||
Note that the `arraySum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
|
||||
|
||||
## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}
|
||||
|
||||
Returns an array of partial sums of elements in the source array (a running sum). If the `func` function is specified, then the values of the array elements are converted by this function before summing.
|
||||
|
||||
Example:
|
||||
|
||||
``` sql
|
||||
SELECT arrayCumSum([1, 1, 1, 1]) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res──────────┐
|
||||
│ [1, 2, 3, 4] │
|
||||
└──────────────┘
|
||||
```
|
||||
|
||||
Note that the `arrayCumSum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
|
||||
|
||||
## arrayCumSumNonNegative(arr) {#arraycumsumnonnegativearr}
|
||||
|
||||
Same as `arrayCumSum`, returns an array of partial sums of elements in the source array (a running sum). Different `arrayCumSum`, when then returned value contains a value less than zero, the value is replace with zero and the subsequent calculation is performed with zero parameters. For example:
|
||||
|
||||
``` sql
|
||||
SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res───────┐
|
||||
│ [1,2,0,1] │
|
||||
└───────────┘
|
||||
```
|
||||
Note that the `arraySumNonNegative` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/query_language/functions/array_functions/) <!--hide-->
|
||||
|
@ -3,6 +3,9 @@ toc_priority: 58
|
||||
toc_title: External Dictionaries
|
||||
---
|
||||
|
||||
!!! attention "Attention"
|
||||
`dict_name` parameter must be fully qualified for dictionaries created with DDL queries. Eg. `<database>.<dict_name>`.
|
||||
|
||||
# Functions for Working with External Dictionaries {#ext_dict_functions}
|
||||
|
||||
For information on connecting and configuring external dictionaries, see [External dictionaries](../../sql-reference/dictionaries/external-dictionaries/external-dicts.md).
|
||||
|
@ -1,262 +0,0 @@
---
toc_priority: 57
toc_title: Higher-Order
---

# Higher-order Functions {#higher-order-functions}

## `->` operator, lambda(params, expr) function {#operator-lambdaparams-expr-function}

Allows describing a lambda function for passing to a higher-order function. The left side of the arrow has a formal parameter, which is any ID, or multiple formal parameters – any IDs in a tuple. The right side of the arrow has an expression that can use these formal parameters, as well as any table columns.

Examples: `x -> 2 * x`, `str -> str != Referer`.

Higher-order functions can only accept lambda functions as their functional argument.

A lambda function that accepts multiple arguments can be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.

For some functions, such as [arrayCount](#higher_order_functions-array-count) or [arraySum](#higher-order-functions-array-sum), the first argument (the lambda function) can be omitted. In this case, identical mapping is assumed.

A lambda function can’t be omitted for the following functions:

- [arrayMap](#higher_order_functions-array-map)
- [arrayFilter](#higher_order_functions-array-filter)
- [arrayFill](#higher_order_functions-array-fill)
- [arrayReverseFill](#higher_order_functions-array-reverse-fill)
- [arraySplit](#higher_order_functions-array-split)
- [arrayReverseSplit](#higher_order_functions-array-reverse-split)
- [arrayFirst](#higher_order_functions-array-first)
- [arrayFirstIndex](#higher_order_functions-array-first-index)
### arrayMap(func, arr1, …) {#higher_order_functions-array-map}

Returns an array obtained by applying the `func` function to each element of the `arr` array.

Examples:

``` sql
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
```

``` text
┌─res─────┐
│ [3,4,5] │
└─────────┘
```

The following example shows how to create a tuple of elements from different arrays:

``` sql
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
```

``` text
┌─res─────────────────┐
│ [(1,4),(2,5),(3,6)] │
└─────────────────────┘
```

Note that the first argument (lambda function) can’t be omitted in the `arrayMap` function.
### arrayFilter(func, arr1, …) {#higher_order_functions-array-filter}

Returns an array containing only the elements in `arr1` for which `func` returns something other than 0.

Examples:

``` sql
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
```

``` text
┌─res───────────┐
│ ['abc World'] │
└───────────────┘
```

``` sql
SELECT
    arrayFilter(
        (i, x) -> x LIKE '%World%',
        arrayEnumerate(arr),
        ['Hello', 'abc World'] AS arr)
    AS res
```

``` text
┌─res─┐
│ [2] │
└─────┘
```

Note that the first argument (lambda function) can’t be omitted in the `arrayFilter` function.
### arrayFill(func, arr1, …) {#higher_order_functions-array-fill}

Scans through `arr1` from the first element to the last element and replaces `arr1[i]` by `arr1[i - 1]` if `func` returns 0. The first element of `arr1` will not be replaced.

Examples:

``` sql
SELECT arrayFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
```

``` text
┌─res──────────────────────────────┐
│ [1,1,3,11,12,12,12,5,6,14,14,14] │
└──────────────────────────────────┘
```

Note that the first argument (lambda function) can’t be omitted in the `arrayFill` function.

### arrayReverseFill(func, arr1, …) {#higher_order_functions-array-reverse-fill}

Scans through `arr1` from the last element to the first element and replaces `arr1[i]` by `arr1[i + 1]` if `func` returns 0. The last element of `arr1` will not be replaced.

Examples:

``` sql
SELECT arrayReverseFill(x -> not isNull(x), [1, null, 3, 11, 12, null, null, 5, 6, 14, null, null]) AS res
```

``` text
┌─res────────────────────────────────┐
│ [1,3,3,11,12,5,5,5,6,14,NULL,NULL] │
└────────────────────────────────────┘
```

Note that the first argument (lambda function) can’t be omitted in the `arrayReverseFill` function.
### arraySplit(func, arr1, …) {#higher_order_functions-array-split}

Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the left hand side of the element. The array will not be split before the first element.

Examples:

``` sql
SELECT arraySplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
```

``` text
┌─res─────────────┐
│ [[1,2,3],[4,5]] │
└─────────────────┘
```

Note that the first argument (lambda function) can’t be omitted in the `arraySplit` function.

### arrayReverseSplit(func, arr1, …) {#higher_order_functions-array-reverse-split}

Split `arr1` into multiple arrays. When `func` returns something other than 0, the array will be split on the right hand side of the element. The array will not be split after the last element.

Examples:

``` sql
SELECT arrayReverseSplit((x, y) -> y, [1, 2, 3, 4, 5], [1, 0, 0, 1, 0]) AS res
```

``` text
┌─res───────────────┐
│ [[1],[2,3,4],[5]] │
└───────────────────┘
```

Note that the first argument (lambda function) can’t be omitted in the `arrayReverseSplit` function.
### arrayCount(\[func,\] arr1, …) {#higher_order_functions-array-count}

Returns the number of elements in the `arr` array for which `func` returns something other than 0. If `func` is not specified, it returns the number of non-zero elements in the array.

### arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}

Returns 1 if there is at least one element in `arr` for which `func` returns something other than 0. Otherwise, it returns 0.

### arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}

Returns 1 if `func` returns something other than 0 for all the elements in `arr`. Otherwise, it returns 0.

### arraySum(\[func,\] arr1, …) {#higher-order-functions-array-sum}

Returns the sum of the `func` values. If the function is omitted, it just returns the sum of the array elements.

### arrayFirst(func, arr1, …) {#higher_order_functions-array-first}

Returns the first element in the `arr1` array for which `func` returns something other than 0.

Note that the first argument (lambda function) can’t be omitted in the `arrayFirst` function.

### arrayFirstIndex(func, arr1, …) {#higher_order_functions-array-first-index}

Returns the index of the first element in the `arr1` array for which `func` returns something other than 0.

Note that the first argument (lambda function) can’t be omitted in the `arrayFirstIndex` function.
### arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}

Returns an array of partial sums of elements in the source array (a running sum). If the `func` function is specified, then the values of the array elements are converted by this function before summing.

Example:

``` sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```

``` text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
```

### arrayCumSumNonNegative(arr) {#arraycumsumnonnegativearr}

Same as `arrayCumSum`, returns an array of partial sums of elements in the source array (a running sum). Unlike `arrayCumSum`, whenever the running sum drops below zero, it is replaced with zero, and the subsequent calculation continues from zero. For example:

``` sql
SELECT arrayCumSumNonNegative([1, 1, -4, 1]) AS res
```

``` text
┌─res───────┐
│ [1,2,0,1] │
└───────────┘
```
### arraySort(\[func,\] arr1, …) {#arraysortfunc-arr1}

Returns an array as the result of sorting the elements of `arr1` in ascending order. If the `func` function is specified, the sorting order is determined by the result of the function `func` applied to the elements of the array (arrays).

The [Schwartzian transform](https://en.wikipedia.org/wiki/Schwartzian_transform) is used to improve sorting efficiency.

Example:

``` sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]) AS res;
```

``` text
┌─res────────────────┐
│ ['world', 'hello'] │
└────────────────────┘
```

For more information about the `arraySort` method, see the [Functions for Working With Arrays](../../sql-reference/functions/array-functions.md#array_functions-sort) section.

### arrayReverseSort(\[func,\] arr1, …) {#arrayreversesortfunc-arr1}

Returns an array as the result of sorting the elements of `arr1` in descending order. If the `func` function is specified, the sorting order is determined by the result of the function `func` applied to the elements of the array (arrays).

Example:

``` sql
SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
```

``` text
┌─res───────────────┐
│ ['hello','world'] │
└───────────────────┘
```

For more information about the `arrayReverseSort` method, see the [Functions for Working With Arrays](../../sql-reference/functions/array-functions.md#array_functions-reverse-sort) section.

[Original article](https://clickhouse.tech/docs/en/query_language/functions/higher_order_functions/) <!--hide-->
@ -44,6 +44,21 @@ Functions have the following behaviors:

Functions can’t change the values of their arguments – any changes are returned as the result. Thus, the result of calculating separate functions does not depend on the order in which the functions are written in the query.

## Higher-order functions, `->` operator and lambda(params, expr) function {#higher-order-functions}

Higher-order functions can only accept lambda functions as their functional argument. To pass a lambda function to a higher-order function, use the `->` operator. The left side of the arrow has a formal parameter, which is any ID, or multiple formal parameters – any IDs in a tuple. The right side of the arrow has an expression that can use these formal parameters, as well as any table columns.

Examples:

```
x -> 2 * x
str -> str != Referer
```

A lambda function that accepts multiple arguments can also be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.

For some functions the first argument (the lambda function) can be omitted. In this case, identical mapping is assumed. A complete query using such a lambda is sketched below.
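For instance, a minimal end-to-end sketch of passing a lambda to a higher-order function:

``` sql
SELECT arrayMap(x -> 2 * x, [1, 2, 3]) AS res
```

``` text
┌─res─────┐
│ [2,4,6] │
└─────────┘
```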
## Error Handling {#error-handling}

Some functions might throw an exception if the data is invalid. In this case, the query is canceled and an error text is returned to the client. For distributed processing, when an exception occurs on one of the servers, the other servers also attempt to abort the query.
@ -98,7 +98,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `addressToLine` function. The result of this processing is shown in the `trace_source_code_lines` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `addressToLine` function. The result of this processing is shown in the `trace_source_code_lines` column of the output.

``` text
Row 1:
@ -184,7 +184,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `addressToSymbols` function. The result of this processing is shown in the `trace_symbols` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `addressToSymbols` function. The result of this processing is shown in the `trace_symbols` column of the output.

``` text
Row 1:
@ -281,7 +281,7 @@ LIMIT 1
\G
```

The [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) function allows processing each individual element of the `trace` array with the `demangle` function. The result of this processing is shown in the `trace_functions` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function allows processing each individual element of the `trace` array with the `demangle` function. The result of this processing is shown in the `trace_functions` column of the output.

``` text
Row 1:
@ -515,6 +515,29 @@ SELECT
└────────────────┴────────────┘
```

## formatReadableQuantity(x) {#formatreadablequantityx}

Accepts a number. Returns a rounded number with a suffix (thousand, million, billion, etc.) as a string.

It is useful for reading big numbers by humans.

Example:

``` sql
SELECT
    arrayJoin([1024, 1234 * 1000, (4567 * 1000) * 1000, 98765432101234]) AS number,
    formatReadableQuantity(number) AS number_for_humans
```

``` text
┌─────────number─┬─number_for_humans─┐
│           1024 │ 1.02 thousand     │
│        1234000 │ 1.23 million      │
│     4567000000 │ 4.57 billion      │
│ 98765432101234 │ 98.77 trillion    │
└────────────────┴───────────────────┘
```

## least(a, b) {#leasta-b}

Returns the smallest value from a and b.
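A tiny illustration:

``` sql
SELECT least(3, 5) AS res
```

``` text
┌─res─┐
│   3 │
└─────┘
```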
@ -10,7 +10,7 @@ Changes settings profiles.

Syntax:

``` sql
ALTER SETTINGS PROFILE [IF EXISTS] name [ON CLUSTER cluster_name]
ALTER SETTINGS PROFILE [IF EXISTS] TO name [ON CLUSTER cluster_name]
    [RENAME TO new_name]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | INHERIT 'profile_name'] [,...]
```
@ -10,7 +10,7 @@ Creates a [settings profile](../../../operations/access-rights.md#settings-profi

Syntax:

``` sql
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] name [ON CLUSTER cluster_name]
CREATE SETTINGS PROFILE [IF NOT EXISTS | OR REPLACE] TO name [ON CLUSTER cluster_name]
    [SETTINGS variable [= value] [MIN [=] min_value] [MAX [=] max_value] [READONLY|WRITABLE] | INHERIT 'profile_name'] [,...]
```
@ -148,7 +148,7 @@ SHOW CREATE [ROW] POLICY name ON [database.]table

Shows parameters that were used during [quota creation](../../sql-reference/statements/create/quota.md).

### Syntax {#show-create-row-policy-syntax}
### Syntax {#show-create-quota-syntax}

``` sql
SHOW CREATE QUOTA [name | CURRENT]
@ -158,10 +158,70 @@ SHOW CREATE QUOTA [name | CURRENT]

Shows parameters that were used during [settings profile creation](../../sql-reference/statements/create/settings-profile.md).

### Syntax {#show-create-row-policy-syntax}
### Syntax {#show-create-settings-profile-syntax}

``` sql
SHOW CREATE [SETTINGS] PROFILE name
```

## SHOW USERS {#show-users-statement}

Returns a list of [user account](../../operations/access-rights.md#user-account-management) names. To view user account parameters, see the system table [system.users](../../operations/system-tables/users.md#system_tables-users).

### Syntax {#show-users-syntax}

``` sql
SHOW USERS
```
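For instance (the output shown here is purely illustrative, for a server with only the default account):

``` sql
SHOW USERS
```

``` text
┌─name────┐
│ default │
└─────────┘
```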
## SHOW ROLES {#show-roles-statement}

Returns a list of [roles](../../operations/access-rights.md#role-management). To view other parameters, see the system tables [system.roles](../../operations/system-tables/roles.md#system_tables-roles) and [system.role-grants](../../operations/system-tables/role-grants.md#system_tables-role_grants).

### Syntax {#show-roles-syntax}

``` sql
SHOW [CURRENT|ENABLED] ROLES
```

## SHOW PROFILES {#show-profiles-statement}

Returns a list of [settings profiles](../../operations/access-rights.md#settings-profiles-management). To view settings profile parameters, see the system table [settings_profiles](../../operations/system-tables/settings_profiles.md#system_tables-settings_profiles).

### Syntax {#show-profiles-syntax}

``` sql
SHOW [SETTINGS] PROFILES
```

## SHOW POLICIES {#show-policies-statement}

Returns a list of [row policies](../../operations/access-rights.md#row-policy-management) for the specified table. To view row policy parameters, see the system table [system.row_policies](../../operations/system-tables/row_policies.md#system_tables-row_policies).

### Syntax {#show-policies-syntax}

``` sql
SHOW [ROW] POLICIES [ON [db.]table]
```

## SHOW QUOTAS {#show-quotas-statement}

Returns a list of [quotas](../../operations/access-rights.md#quotas-management). To view quota parameters, see the system table [system.quotas](../../operations/system-tables/quotas.md#system_tables-quotas).

### Syntax {#show-quotas-syntax}

``` sql
SHOW QUOTAS
```

## SHOW QUOTA {#show-quota-statement}

Returns [quota](../../operations/quotas.md) consumption for all users or for the current user. To view other parameters, see the system tables [system.quotas_usage](../../operations/system-tables/quotas_usage.md#system_tables-quotas_usage) and [system.quota_usage](../../operations/system-tables/quota_usage.md#system_tables-quota_usage).

### Syntax {#show-quota-syntax}

``` sql
SHOW [CURRENT] QUOTA
```

[Original article](https://clickhouse.tech/docs/en/query_language/show/) <!--hide-->
@ -1,20 +1,18 @@
---
machine_translated: true
machine_translated_rev: 72537a2d527c63c07aa5d2361a8829f3895cf2bd
toc_priority: 49
toc_title: Data Backup
---

# Data Backup {#data-backup}

While [replication](../engines/table-engines/mergetree-family/replication.md) provides protection from hardware failures, it does not protect against human errors: accidental deletion of data, deletion of the wrong table or a table on the wrong cluster, and software bugs that result in incorrect data processing or data corruption. In many cases mistakes like these will affect all replicas. ClickHouse has built-in safeguards to prevent some types of mistakes: for example, by default [you cannot just drop tables with a MergeTree-like engine containing more than 50 GB of data](https://github.com/ClickHouse/ClickHouse/blob/v18.14.18-stable/programs/server/config.xml#L322-L330). However, these safeguards do not cover all possible cases and can be circumvented.

To effectively mitigate possible human errors, you should carefully prepare a strategy for backing up and restoring your data **in advance**.

Each company has different resources available and different business requirements, so there is no universal solution for ClickHouse backups and restores that fits every situation. What works for one gigabyte of data likely will not work for tens of petabytes. There are a variety of possible approaches with their own pros and cons, which will be discussed below. It is a good idea to use several approaches instead of just one in order to compensate for their various shortcomings.

!!! note "Note"
    Keep in mind that if you backed something up and never tried to restore it, chances are that the restore will not work properly when you actually need it (or at least it will take longer than the business can tolerate). So whatever backup approach you choose, make sure to automate the restore process as well, and practice it on a spare ClickHouse cluster regularly.

## Duplicating Source Data Somewhere Else {#duplicating-source-data-somewhere-else}

@ -32,7 +30,7 @@ For smaller volumes of data, a simple `INSERT INTO ... SELECT ...`

## Manipulations with Parts {#manipulations-with-parts}

ClickHouse allows using the `ALTER TABLE ... FREEZE PARTITION ...` query to create a local copy of table partitions. This is implemented using hard links to the `/var/lib/clickhouse/shadow/` folder, so it usually does not consume extra disk space for old data. The created copies of files are not handled by the ClickHouse server, so you can just leave them there: you will have a simple backup that does not require any additional external system, but it will still be prone to hardware issues. For this reason, it is better to copy them remotely to another location and then remove the local copies. Distributed file systems and object stores are still good options for this, but normal attached file servers with a large enough capacity might work as well (in this case the transfer will occur via the network file system or perhaps [rsync](https://en.wikipedia.org/wiki/Rsync)).

For more information about queries related to partition manipulations, see the [ALTER documentation](../sql-reference/statements/alter.md#alter_manipulations-with-partitions).
@ -1,3 +1,8 @@
---
toc_priority: 30
toc_title: MergeTree
---

# MergeTree {#table_engines-mergetree}

The `MergeTree` engine, as well as the other engines of this family (`*MergeTree`), are the most functional ClickHouse table engines.

@ -28,8 +33,8 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
    INDEX index_name1 expr1 TYPE type1(...) GRANULARITY value1,
    INDEX index_name2 expr2 TYPE type2(...) GRANULARITY value2
) ENGINE = MergeTree()
ORDER BY expr
[PARTITION BY expr]
[ORDER BY expr]
[PRIMARY KEY expr]
[SAMPLE BY expr]
[TTL expr [DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'], ...]
@ -38,27 +43,42 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]

For a description of the parameters, see the [CREATE query description](../../../engines/table-engines/mergetree-family/mergetree.md).

!!! note "Note"
    `INDEX` is an experimental feature, see [Data Skipping Indexes](#table_engine-mergetree-data_skipping-indexes).

### Query Clauses {#mergetree-query-clauses}

- `ENGINE` — the name and parameters of the engine. `ENGINE = MergeTree()`. The `MergeTree` engine has no parameters.

- `PARTITION BY` — the [partitioning key](custom-partitioning-key.md). For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the [Date](../../../engines/table-engines/mergetree-family/mergetree.md) type. Partition names here have the `"YYYYMM"` format.
- `ORDER BY` — the sorting key.

- `ORDER BY` — the sorting key. A tuple of columns or arbitrary expressions. Example: `ORDER BY (CounterID, EventDate)`.
    A tuple of columns or arbitrary expressions. Example: `ORDER BY (CounterID, EventDate)`.

- `PRIMARY KEY` — the primary key, if it [differs from the sorting key](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). By default the primary key is the same as the sorting key (which is specified by the `ORDER BY` clause), so in most cases there is no need to specify a separate `PRIMARY KEY` clause.
    ClickHouse uses the sorting key as the primary key if the primary key is not defined in the `PRIMARY KEY` clause.

- `SAMPLE BY` — an expression for sampling. If a sampling expression is used, the primary key must contain it. Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.
    To disable sorting, use the `ORDER BY tuple()` syntax. See [Selecting the Primary Key](#vybor-pervichnogo-kliucha).

- `TTL` — a list of rules that define the storage duration of rows and the rules for moving parts to certain volumes or disks. The expression must return a `Date` or `DateTime` column. Example: `TTL date + INTERVAL 1 DAY`.
    - The rule type `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` specifies the action to be performed with the part: removing rows (pruning), or moving the part (when the condition is met for all rows of the part) to the specified disk (`TO DISK 'xxx'`) or volume (`TO VOLUME 'xxx'`).
    - The default behavior corresponds to removing rows (`DELETE`). Only one expression with the `DELETE` behavior can be specified in the list of rules.
    - For more details, see [TTL for Columns and Tables](#table_engine-mergetree-ttl)
- `PARTITION BY` — the [partitioning key](custom-partitioning-key.md). Optional.

- `SETTINGS` — additional parameters that control the behavior of `MergeTree`:
    For partitioning by month, use the `toYYYYMM(date_column)` expression, where `date_column` is a column with a date of the [Date](../../../engines/table-engines/mergetree-family/mergetree.md) type. Partition names here have the `"YYYYMM"` format.

- `PRIMARY KEY` — the primary key, if it [differs from the sorting key](#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki). Optional.

    By default the primary key is the same as the sorting key (which is specified by the `ORDER BY` clause), so in most cases there is no need to specify a separate `PRIMARY KEY` clause.

- `SAMPLE BY` — an expression for sampling. Optional.

    If a sampling expression is used, the primary key must contain it. Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.

- `TTL` — a list of rules that define the storage duration of rows and the rules for moving parts to certain volumes or disks. Optional.

    The expression must return a `Date` or `DateTime` column. Example: `TTL date + INTERVAL 1 DAY`.

    The rule type `DELETE|TO DISK 'xxx'|TO VOLUME 'xxx'` specifies the action to be performed with the part: removing rows (pruning), or moving the part (when the condition is met for all rows of the part) to the specified disk (`TO DISK 'xxx'`) or volume (`TO VOLUME 'xxx'`). The default behavior corresponds to removing rows (`DELETE`). Only one expression with the `DELETE` behavior can be specified in the list of rules.

    For more details, see [TTL for Columns and Tables](#table_engine-mergetree-ttl)

- `SETTINGS` — additional parameters that control the behavior of `MergeTree` (optional):

    - `index_granularity` — the maximum number of data rows between index marks. Default value: 8192. See [Data Storage](#mergetree-data-storage).
    - `index_granularity_bytes` — the maximum size of data granules in bytes. Default value: 10Mb. To restrict the granule size only by the number of rows, set the value to 0 (not recommended). See [Data Storage](#mergetree-data-storage).
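As an illustrative sketch combining the clauses above (the table and column names are invented for this example):

``` sql
CREATE TABLE visits
(
    VisitDate Date,
    CounterID UInt32,
    UserID UInt32
)
ENGINE = MergeTree()
PARTITION BY toYYYYMM(VisitDate)
ORDER BY (CounterID, VisitDate, intHash32(UserID))
SAMPLE BY intHash32(UserID)
TTL VisitDate + INTERVAL 1 MONTH DELETE
SETTINGS index_granularity = 8192
```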
@ -180,6 +200,14 @@ ClickHouse does not require a unique primary key

A long primary key negatively affects insert performance and memory consumption, but extra columns in the primary key do not affect ClickHouse performance for `SELECT` queries.

You can create a table without a primary key using the `ORDER BY tuple()` syntax. In this case, ClickHouse stores the data in insertion order. If you want to preserve data order when inserting data with `INSERT ... SELECT` queries, set [max\_insert\_threads = 1](../../../operations/settings/settings.md#settings-max-insert-threads).

To select data in the original order, use [single-threaded](../../../operations/settings/settings.md#settings-max_threads) `SELECT` queries.

### A Primary Key That Differs from the Sorting Key {#pervichnyi-kliuch-otlichnyi-ot-kliucha-sortirovki}

It is possible to define a primary key (an expression whose values are written to the index file for
@ -28,6 +28,8 @@ ClickHouse can accept (`INSERT`) and return (`SELECT`) data in various formats
| [PrettySpace](#prettyspace) | ✗ | ✔ |
| [Protobuf](#protobuf) | ✔ | ✔ |
| [Parquet](#data-format-parquet) | ✔ | ✔ |
| [Arrow](#data-format-arrow) | ✔ | ✔ |
| [ArrowStream](#data-format-arrow-stream) | ✔ | ✔ |
| [ORC](#data-format-orc) | ✔ | ✗ |
| [RowBinary](#rowbinary) | ✔ | ✔ |
| [RowBinaryWithNamesAndTypes](#rowbinarywithnamesandtypes) | ✔ | ✔ |
@ -947,6 +949,12 @@ ClickHouse writes and reads `Protocol Buffers` messages

## Avro {#data-format-avro}

[Apache Avro](https://avro.apache.org/) is a row-oriented data serialization framework developed within the Apache Hadoop project.

In ClickHouse, the Avro format supports reading and writing [Avro data files](https://avro.apache.org/docs/current/spec.html#Object+Container+Files).

[Avro logical types](https://avro.apache.org/docs/current/spec.html#Logical+Types)

## AvroConfluent {#data-format-avro-confluent}

For the `AvroConfluent` format, ClickHouse supports decoding single-object `Avro` messages. Such messages are used with [Kafka](http://kafka.apache.org/) and the [Confluent](https://docs.confluent.io/current/schema-registry/index.html) schema registry.
@ -996,7 +1004,7 @@ SELECT * FROM topic1_stream;

## Parquet {#data-format-parquet}

[Apache Parquet](http://parquet.apache.org/) is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for the `Parquet` format.
[Apache Parquet](https://parquet.apache.org/) is a columnar storage format widespread in the Hadoop ecosystem. ClickHouse supports read and write operations for the `Parquet` format.

### Data Type Matching {#sootvetstvie-tipov-dannykh}

@ -1042,6 +1050,16 @@ $ clickhouse-client --query="SELECT * FROM {some_table} FORMAT Parquet" > {some_

To exchange data with the Hadoop ecosystem, you can use the [HDFS](../engines/table-engines/integrations/hdfs.md) table engines.

## Arrow {#data-format-arrow}

[Apache Arrow](https://arrow.apache.org/) ships with two built-in columnar storage formats. ClickHouse supports read and write operations for both.

`Arrow` is Apache Arrow's "file mode" format. It is designed for in-memory random access.

## ArrowStream {#data-format-arrow-stream}

`ArrowStream` is Apache Arrow's "stream mode" format. It is designed for in-memory stream processing.

## ORC {#data-format-orc}

[Apache ORC](https://orc.apache.org/) is a column-oriented data format widespread in the Hadoop ecosystem. You can only insert data in this format into ClickHouse.
docs/ru/interfaces/third-party/gui.md
@ -93,6 +93,10 @@

[clickhouse-plantuml](https://pypi.org/project/clickhouse-plantuml/) is a script that generates [PlantUML](https://plantuml.com/) diagrams of table schemas.

### xeus-clickhouse {#xeus-clickhouse}

[xeus-clickhouse](https://github.com/wangfenjin/xeus-clickhouse) is a Jupyter kernel for ClickHouse that supports querying ClickHouse data with SQL from Jupyter.

## Commercial {#kommercheskie}

### DataGrip {#datagrip}
@ -1756,4 +1756,17 @@ SELECT idx, i FROM null_in WHERE i IN (1, NULL) SETTINGS transform_null_in = 1;
- [CREATE TABLE query clauses and settings](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-query-clauses) (the `merge_with_ttl_timeout` setting)
- [Table TTL](../../engines/table-engines/mergetree-family/mergetree.md#mergetree-table-ttl)

## lock_acquire_timeout {#lock_acquire_timeout}

Defines how many seconds the server waits to acquire a table lock.

The timeout protects against deadlocks during read or write operations. If the timeout expires and the lock could not be acquired, the server returns an exception with the code `DEADLOCK_AVOIDED` and the message "Locking attempt timed out! Possible deadlock avoided. Client should retry."

Possible values:

- A positive integer (in seconds).
- 0 — no timeout is set.

Default value: `120` seconds.
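For instance, to raise the timeout for the current session (a minimal sketch):

``` sql
SET lock_acquire_timeout = 300;
```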
[Original article](https://clickhouse.tech/docs/ru/operations/settings/settings/) <!--hide-->
@ -7,10 +7,38 @@ toc_title: System tables

## Introduction {#system-tables-introduction}

System tables are used to implement part of the system's functionality and to provide access to information about how the system works.
You cannot delete a system table (though you can perform DETACH).
System tables have no on-disk data files or metadata files. The server creates all system tables at startup.
System tables are read-only.
System tables are located in the `system` database.

System tables provide information about:

- Server state, processes, and environment.
- Internal server processes.

System tables:

- Are located in the `system` database.
- Are available only for reading data.
- Cannot be dropped or altered, but can be detached.

The `metric_log`, `query_log`, `query_thread_log`, and `trace_log` system tables store their data in the file system. The other system tables store their data in RAM. The ClickHouse server creates such system tables at startup.

### Sources of System Metrics

To collect system metrics, the ClickHouse server uses:

- the `CAP_NET_ADMIN` capability;
- [procfs](https://ru.wikipedia.org/wiki/Procfs) (Linux only).

**procfs**

If the ClickHouse server does not have the `CAP_NET_ADMIN` capability, it tries to fall back to `ProcfsMetricsProvider`. `ProcfsMetricsProvider` allows collecting per-query system metrics (for CPU and I/O).

If procfs is supported and enabled on the system, the ClickHouse server collects the following metrics:

- `OSCPUVirtualTimeMicroseconds`
- `OSCPUWaitMicroseconds`
- `OSIOWaitMicroseconds`
- `OSReadChars`
- `OSWriteChars`
- `OSReadBytes`
- `OSWriteBytes`

[Original article](https://clickhouse.tech/docs/ru/operations/system-tables/) <!--hide-->
@ -24,4 +24,8 @@
- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Total query execution time, in seconds.
- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Maximum query execution time.

## See Also {#see-also}

- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement)

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/quota_usage) <!--hide-->
@ -21,5 +21,9 @@
- `apply_to_list` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of user/[role](../../operations/access-rights.md#role-management) names the quota applies to.
- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — List of user/role names the quota must not apply to.

## See Also {#see-also}

- [SHOW QUOTAS](../../sql-reference/statements/show.md#show-quotas-statement)

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/quotas) <!--hide-->
@ -25,4 +25,8 @@
- `execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Total query execution time, in seconds.
- `max_execution_time` ([Nullable](../../sql-reference/data-types/nullable.md)([Float64](../../sql-reference/data-types/float.md))) — Maximum query execution time.

## See Also {#see-also}

- [SHOW QUOTA](../../sql-reference/statements/show.md#show-quota-statement)

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/quotas_usage) <!--hide-->
@ -5,7 +5,13 @@
Columns:

- `name` ([String](../../sql-reference/data-types/string.md)) — Role name.

- `id` ([UUID](../../sql-reference/data-types/uuid.md)) — Role ID.

- `storage` ([String](../../sql-reference/data-types/string.md)) — Path to the role storage. Configured by the `access_control_path` parameter.

## See Also {#see-also}

- [SHOW ROLES](../../sql-reference/statements/show.md#show-roles-statement)

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/roles) <!--hide-->
@ -27,4 +27,8 @@

- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — The row policies are applied to all roles and/or users, except the listed ones.

## See Also {#see-also}

- [SHOW POLICIES](../../sql-reference/statements/show.md#show-policies-statement)

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/row_policies) <!--hide-->
@ -17,4 +17,8 @@

- `apply_to_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — The settings profile is applied to all roles and/or users, except the listed ones.

## See Also {#see-also}

- [SHOW PROFILES](../../sql-reference/statements/show.md#show-profiles-statement)

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/settings_profiles) <!--hide-->
@ -82,7 +82,7 @@ res: /lib/x86_64-linux-gnu/libc-2.27.so

- [Introspection functions](../../sql-reference/functions/introspection.md) — What introspection functions are and how to use them.
- [system.trace_log](../../operations/system-tables/trace_log.md#system_tables-trace_log) — Contains stack traces collected by the sampling query profiler.
- [arrayMap](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-map) — Description and usage example of the `arrayMap` function.
- [arrayFilter](../../sql-reference/functions/higher-order-functions.md#higher_order_functions-array-filter) — Description and usage example of the `arrayFilter` function.
- [arrayMap](../../sql-reference/functions/array-functions.md#array-map) — Description and usage example of the `arrayMap` function.
- [arrayFilter](../../sql-reference/functions/array-functions.md#array-filter) — Description and usage example of the `arrayFilter` function.

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/stack_trace) <!--hide-->
@ -27,4 +27,8 @@

- `default_roles_except` ([Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md))) — All the granted roles are set as default, except the listed ones.

## See Also {#see-also}

- [SHOW USERS](../../sql-reference/statements/show.md#show-users-statement)

[Original article](https://clickhouse.tech/docs/ru/operations/system_tables/users) <!--hide-->
@ -30,7 +30,7 @@ $ echo 0 | sudo tee /proc/sys/vm/overcommit_memory
Transparent huge pages should be disabled. They interfere with memory allocators, which leads to significant performance degradation.

``` bash
$ echo 'never' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
$ echo 'madvise' | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
```

Use `perf top` to watch the time spent in the kernel on memory management.
@ -9,6 +9,7 @@ The following aggregate functions are supported:
- [`min`](../../sql-reference/aggregate-functions/reference/min.md#agg_function-min)
- [`max`](../../sql-reference/aggregate-functions/reference/max.md#agg_function-max)
- [`sum`](../../sql-reference/aggregate-functions/reference/sum.md#agg_function-sum)
- [`sumWithOverflow`](../../sql-reference/aggregate-functions/reference/sumwithoverflow.md#sumwithoverflowx)
- [`groupBitAnd`](../../sql-reference/aggregate-functions/reference/groupbitand.md#groupbitand)
- [`groupBitOr`](../../sql-reference/aggregate-functions/reference/groupbitor.md#groupbitor)
- [`groupBitXor`](../../sql-reference/aggregate-functions/reference/groupbitxor.md#groupbitxor)
@ -2,7 +2,7 @@

A tuple of elements of any [type](index.md#data_types). Tuple elements can be of the same or different types.

Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying multiple formal parameters of lambda functions. For details, see the sections [IN operators](../../sql-reference/data-types/tuple.md) and [Higher-order functions](../../sql-reference/functions/higher-order-functions.md#higher-order-functions).
Tuples are used for temporary column grouping. Columns can be grouped when an IN expression is used in a query, and for specifying multiple formal parameters of lambda functions. For details, see the sections [IN operators](../../sql-reference/data-types/tuple.md) and [Higher-order functions](../../sql-reference/functions/index.md#higher-order-functions).

Tuples can be the result of a query. In this case, in text formats other than JSON, values are comma-separated in parentheses. In JSON formats, tuples are output as arrays (in square brackets). An illustrative query is sketched below.
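For example, a minimal sketch:

``` sql
SELECT tuple(1, 'a') AS x, toTypeName(x)
```

``` text
┌─x───────┬─toTypeName(tuple(1, 'a'))─┐
│ (1,'a') │ Tuple(UInt8, String)      │
└─────────┴───────────────────────────┘
```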
@ -1,4 +1,4 @@
# Functions for Working with Arrays {#funktsii-po-rabote-s-massivami}
# Arrays {#functions-for-working-with-arrays}

## empty {#function-empty}

@ -186,6 +186,13 @@ SELECT indexOf([1, 3, NULL, NULL], NULL)

Elements set to `NULL` are handled as normal values.

## arrayCount(\[func,\] arr1, …) {#array-count}

Returns the number of elements of the `arr` array for which the function `func` returns something other than 0. If `func` is not specified, it returns the number of non-zero elements of the array.

`arrayCount` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
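An illustrative query:

``` sql
SELECT arrayCount(x -> x > 1, [1, 2, 3, 4]) AS res
```

``` text
┌─res─┐
│   3 │
└─────┘
```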
## countEqual(arr, x) {#countequalarr-x}

Returns the number of elements of the array equal to x. Equivalent to arrayCount(elem -\> elem = x, arr).

@ -513,7 +520,7 @@ SELECT arraySort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]);
- `NaN` values come before `NULL`.
- `Inf` values come before `NaN`.

`arraySort` is a [higher-order function](higher-order-functions.md). You can pass a lambda function to it as the first argument. In this case, the sorting order is determined by the result of applying the lambda function to the array elements.
`arraySort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. In this case, the sorting order is determined by the result of applying the lambda function to the array elements.

Let's consider an example:

@ -613,7 +620,7 @@ SELECT arrayReverseSort([1, nan, 2, NULL, 3, nan, -4, NULL, inf, -inf]) as res;
- `NaN` values come before `NULL`.
- `-Inf` values come before `NaN`.

`arrayReverseSort` is a [higher-order function](higher-order-functions.md). You can pass a lambda function to it as the first argument. For example:
`arrayReverseSort` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument. For example:

``` sql
SELECT arrayReverseSort((x) -> -x, [1, 2, 3]) as res;
@ -1036,6 +1043,116 @@ SELECT arrayZip(['a', 'b', 'c'], [5, 2, 1])
└──────────────────────────────────────┘
```
## arrayMap(func, arr1, …) {#array-map}

Returns an array obtained by applying the function `func` to each element of the `arr` array.

Examples:

``` sql
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
```

``` text
┌─res─────┐
│ [3,4,5] │
└─────────┘
```

The following example shows how to create tuples from elements of different arrays:

``` sql
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
```

``` text
┌─res─────────────────┐
│ [(1,4),(2,5),(3,6)] │
└─────────────────────┘
```

`arrayMap` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and this argument cannot be omitted.
## arrayFilter(func, arr1, …) {#array-filter}

Returns an array containing only those elements of the array `arr1` for which the function `func` returns something other than 0.

Examples:

``` sql
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
```

``` text
┌─res───────────┐
│ ['abc World'] │
└───────────────┘
```

``` sql
SELECT
    arrayFilter(
        (i, x) -> x LIKE '%World%',
        arrayEnumerate(arr),
        ['Hello', 'abc World'] AS arr)
    AS res
```

``` text
┌─res─┐
│ [2] │
└─────┘
```

`arrayFilter` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and this argument cannot be omitted.
## arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}

Returns 1 if there is at least one element of the array `arr` for which the function `func` returns something other than 0. Otherwise, it returns 0.

`arrayExists` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

## arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}

Returns 1 if the function `func` returns something other than 0 for all elements of the array `arr`. Otherwise, it returns 0.

`arrayAll` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
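An illustrative query covering both functions:

``` sql
SELECT arrayExists(x -> x = 2, [1, 2, 3]) AS e, arrayAll(x -> x > 0, [1, 2, 3]) AS a
```

``` text
┌─e─┬─a─┐
│ 1 │ 1 │
└───┴───┘
```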
## arrayFirst(func, arr1, …) {#array-first}

Returns the first element of the array `arr1` for which the function `func` returns something other than 0.

`arrayFirst` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and this argument cannot be omitted.

## arrayFirstIndex(func, arr1, …) {#array-first-index}

Returns the index of the first element of the array `arr1` for which the function `func` returns something other than 0.

`arrayFirstIndex` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You must pass a lambda function to it as the first argument, and this argument cannot be omitted.

## arraySum(\[func,\] arr1, …) {#array-sum}

Returns the sum of the `func` values. If the function is omitted, it just returns the sum of the array elements.

`arraySum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.
## arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}

Returns an array of partial sums of the elements of the source array (a running sum). If the function `func` is specified, the values of the array elements are converted by this function before summing.

`arrayCumSum` is a [higher-order function](../../sql-reference/functions/index.md#higher-order-functions). You can pass a lambda function to it as the first argument.

Example:

``` sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```

``` text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
```

## arrayAUC {#arrayauc}

Calculates the area under the curve (AUC). An illustrative query is sketched below.
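A short sketch, using prediction scores and binary labels (0.75 means the classifier ranks positives above negatives 75% of the time):

``` sql
SELECT arrayAUC([0.1, 0.4, 0.35, 0.8], [0, 0, 1, 1]) AS auc
```

``` text
┌──auc─┐
│ 0.75 │
└──────┘
```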
@ -1,167 +0,0 @@
# Higher-order Functions {#higher-order-functions}

## The `->` operator, the lambda(params, expr) function {#operator-funktsiia-lambdaparams-expr}

Allows describing a lambda function for passing to a higher-order function. To the left of the arrow is a formal parameter (an arbitrary identifier), or several formal parameters (arbitrary identifiers in a tuple). To the right of the arrow is an expression that can use these formal parameters, as well as any table columns.

Examples: `x -> 2 * x`, `str -> str != Referer`.

Higher-order functions can only accept lambda functions as their functional argument.

A lambda function that accepts multiple arguments can be passed to a higher-order function. In this case, the higher-order function is passed several arrays of identical length that these arguments will correspond to.

For some functions, such as [arrayCount](#higher_order_functions-array-count) or [arraySum](#higher_order_functions-array-sum), the first argument (the lambda function) may be omitted. In this case, identical mapping is assumed.

For the functions listed below, the lambda function must always be specified:

- [arrayMap](#higher_order_functions-array-map)
- [arrayFilter](#higher_order_functions-array-filter)
- [arrayFirst](#higher_order_functions-array-first)
- [arrayFirstIndex](#higher_order_functions-array-first-index)

### arrayMap(func, arr1, …) {#higher_order_functions-array-map}

Returns an array obtained by applying the function `func` to each element of the `arr` array.

Examples:

``` sql
SELECT arrayMap(x -> (x + 2), [1, 2, 3]) as res;
```

``` text
┌─res─────┐
│ [3,4,5] │
└─────────┘
```

The following example shows how to create tuples from elements of different arrays:

``` sql
SELECT arrayMap((x, y) -> (x, y), [1, 2, 3], [4, 5, 6]) AS res
```

``` text
┌─res─────────────────┐
│ [(1,4),(2,5),(3,6)] │
└─────────────────────┘
```

Note that the first argument (the lambda function) cannot be omitted in the `arrayMap` function.
### arrayFilter(func, arr1, …) {#higher_order_functions-array-filter}
|
||||
|
||||
Вернуть массив, содержащий только те элементы массива `arr1`, для которых функция `func` возвращает не 0.
|
||||
|
||||
Примеры:
|
||||
|
||||
``` sql
|
||||
SELECT arrayFilter(x -> x LIKE '%World%', ['Hello', 'abc World']) AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res───────────┐
|
||||
│ ['abc World'] │
|
||||
└───────────────┘
|
||||
```
|
||||
|
||||
``` sql
|
||||
SELECT
|
||||
arrayFilter(
|
||||
(i, x) -> x LIKE '%World%',
|
||||
arrayEnumerate(arr),
|
||||
['Hello', 'abc World'] AS arr)
|
||||
AS res
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─res─┐
|
||||
│ [2] │
|
||||
└─────┘
|
||||
```
|
||||
|
||||
Обратите внимание, что у функции `arrayFilter` первый аргумент (лямбда-функция) не может быть опущен.
|
||||
|
||||
### arrayCount(\[func,\] arr1, …) {#higher_order_functions-array-count}
|
||||
|
||||
Вернуть количество элементов массива `arr`, для которых функция func возвращает не 0. Если func не указана - вернуть количество ненулевых элементов массива.
|
||||
|
||||
### arrayExists(\[func,\] arr1, …) {#arrayexistsfunc-arr1}
|
||||
|
||||
Вернуть 1, если существует хотя бы один элемент массива `arr`, для которого функция func возвращает не 0. Иначе вернуть 0.
|
||||
|
||||
### arrayAll(\[func,\] arr1, …) {#arrayallfunc-arr1}
|
||||
|
||||
Вернуть 1, если для всех элементов массива `arr`, функция `func` возвращает не 0. Иначе вернуть 0.
|
||||
|
||||
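A compact sketch of these three predicates over one array, with the expected results in the comments:

``` sql
SELECT
    arrayCount(x -> x % 2 = 0, [1, 2, 3, 4]) AS even_count,  -- 2
    arrayExists(x -> x > 3, [1, 2, 3, 4]) AS has_gt_3,       -- 1
    arrayAll(x -> x > 0, [1, 2, 3, 4]) AS all_positive       -- 1
```
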
### arraySum(\[func,\] arr1, …) {#higher_order_functions-array-sum}

Returns the sum of the `func` values. If the function is omitted, it simply returns the sum of the array elements.

### arrayFirst(func, arr1, …) {#higher_order_functions-array-first}

Returns the first element of the `arr1` array for which the `func` function returns something other than 0.

Note that the first argument (the lambda function) of the `arrayFirst` function cannot be omitted.

### arrayFirstIndex(func, arr1, …) {#higher_order_functions-array-first-index}

Returns the index of the first element of the `arr1` array for which the `func` function returns something other than 0.

Note that the first argument (the lambda function) of the `arrayFirstIndex` function cannot be omitted.

### arrayCumSum(\[func,\] arr1, …) {#arraycumsumfunc-arr1}

Returns an array of the partial (running) sums of the elements of the source array. If the `func` function is specified, the array elements are transformed by it before summing.

Example:

``` sql
SELECT arrayCumSum([1, 1, 1, 1]) AS res
```

``` text
┌─res──────────┐
│ [1, 2, 3, 4] │
└──────────────┘
```

### arraySort(\[func,\] arr1, …) {#arraysortfunc-arr1}

Returns the `arr1` array sorted in ascending order. If the `func` function is specified, the sort order is determined by the result of applying `func` to the elements of the array (arrays).

The [Schwartzian transform](https://ru.wikipedia.org/wiki/%D0%9F%D1%80%D0%B5%D0%BE%D0%B1%D1%80%D0%B0%D0%B7%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5_%D0%A8%D0%B2%D0%B0%D1%80%D1%86%D0%B0) is used to improve sorting efficiency.

Example:

``` sql
SELECT arraySort((x, y) -> y, ['hello', 'world'], [2, 1]);
```

``` text
┌─res────────────────┐
│ ['world', 'hello'] │
└────────────────────┘
```

For more information about the `arraySort` method, see the [Functions for working with arrays](array-functions.md#array_functions-sort) section.

### arrayReverseSort(\[func,\] arr1, …) {#arrayreversesortfunc-arr1}

Returns the `arr1` array sorted in descending order. If the `func` function is specified, the sort order is determined by the result of applying `func` to the elements of the array (arrays).

Example:

``` sql
SELECT arrayReverseSort((x, y) -> y, ['hello', 'world'], [2, 1]) as res;
```

``` text
┌─res───────────────┐
│ ['hello','world'] │
└───────────────────┘
```

For more information about the `arrayReverseSort` method, see the [Functions for working with arrays](array-functions.md#array_functions-reverse-sort) section.

[Original article](https://clickhouse.tech/docs/ru/query_language/functions/higher_order_functions/) <!--hide-->
@ -38,6 +38,20 @@
Functions cannot change the values of their arguments: any changes are returned as the result. Consequently, the result of computing individual functions does not depend on the order in which the functions are written in the query.

## Higher-order functions, the `->` operator and the lambda(params, expr) function {#higher-order-functions}

Higher-order functions can accept only lambda functions as their functional argument. To pass a lambda function to a higher-order function, use the `->` operator. To the left of the arrow is a formal parameter (an arbitrary identifier) or several formal parameters (arbitrary identifiers in a tuple). To the right of the arrow is an expression that can use these formal parameters, as well as any table columns.

Examples:

```
x -> 2 * x
str -> str != Referer
```

A lambda function that accepts multiple arguments can be passed to a higher-order function. In this case, the higher-order function is passed several arrays of equal length that these arguments correspond to.

For some functions, the first argument (the lambda function) can be omitted. In this case, the identity mapping is assumed, as the sketch below shows.
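A minimal sketch: without the lambda, `arraySum` reduces to a plain element sum (the identity mapping), while a lambda transforms each element first:

``` sql
SELECT
    arraySum([1, 2, 3]) AS identity_sum,            -- 6
    arraySum(x -> 2 * x, [1, 2, 3]) AS doubled_sum  -- 12
```
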
## Error handling {#obrabotka-oshibok}

Some functions can throw exceptions on invalid data. In that case, query execution is aborted and the error text is returned to the client. In distributed query processing, when an exception occurs on one of the servers, the other servers are also asked to abort the query.
@ -93,7 +93,7 @@ LIMIT 1
\G
```

The [arrayMap](higher-order-functions.md#higher_order_functions-array-map) function lets you process each individual element of the `trace` array with the `addressToLine` function. You see the result of this processing in the `trace_source_code_lines` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function lets you process each individual element of the `trace` array with the `addressToLine` function. You see the result of this processing in the `trace_source_code_lines` column of the output.

``` text
Row 1:
@ -179,7 +179,7 @@ LIMIT 1
\G
```

The [arrayMap](higher-order-functions.md#higher_order_functions-array-map) function lets you process each individual element of the `trace` array with the `addressToSymbols` function. You see the result of this processing in the `trace_symbols` column of the output.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function lets you process each individual element of the `trace` array with the `addressToSymbols` function. You see the result of this processing in the `trace_symbols` column of the output.

``` text
Row 1:
@ -276,7 +276,7 @@ LIMIT 1
\G
```

The [arrayMap](higher-order-functions.md#higher_order_functions-array-map) function lets you process each individual element of the `trace` array with the `demangle` function.
The [arrayMap](../../sql-reference/functions/array-functions.md#array-map) function lets you process each individual element of the `trace` array with the `demangle` function.

``` text
Row 1:
@ -508,6 +508,29 @@ SELECT
└────────────────┴────────────┘
```

## formatReadableQuantity(x) {#formatreadablequantityx}

Accepts a number. Returns a rounded number with a suffix (thousand, million, billion, etc.) as a string.

This makes large numbers easier for a human to read.

Example:

``` sql
SELECT
    arrayJoin([1024, 1234 * 1000, (4567 * 1000) * 1000, 98765432101234]) AS number,
    formatReadableQuantity(number) AS number_for_humans
```

``` text
┌─────────number─┬─number_for_humans─┐
│           1024 │ 1.02 thousand     │
│        1234000 │ 1.23 million      │
│     4567000000 │ 4.57 billion      │
│ 98765432101234 │ 98.77 trillion    │
└────────────────┴───────────────────┘
```

## least(a, b) {#leasta-b}

Returns the smallest value of a and b.
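A trivial sketch; `greatest(a, b)` is the complementary function:

``` sql
SELECT least(5, 2) AS smallest, greatest(5, 2) AS largest  -- 2, 5
```
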
@ -55,4 +55,50 @@ FROM numbers(3)
└────────────┴────────────┴──────────────┴────────────────┴─────────────────┴──────────────────────┘
```

# Random functions for working with strings {#random-functions-for-working-with-strings}

## randomString {#random-string}

## randomFixedString {#random-fixed-string}

## randomPrintableASCII {#random-printable-ascii}

## randomStringUTF8 {#random-string-utf8}

## fuzzBits {#fuzzbits}

**Syntax**

``` sql
fuzzBits([s], [prob])
```

Inverts each bit of `s` with probability `prob`.

**Parameters**

- `s` — `String` or `FixedString`
- `prob` — constant `Float32/64`

**Returned value**

A randomly modified string with the same type as `s`.

**Example**

Query:

``` sql
SELECT fuzzBits(materialize('abacaba'), 0.1)
FROM numbers(3)
```

Result:

``` text
┌─fuzzBits(materialize('abacaba'), 0.1)─┐
│ abaaaja                               │
│ a*cjab+                               │
│ aeca2A                                │
└───────────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/ru/query_language/functions/random_functions/) <!--hide-->
@ -513,4 +513,95 @@ SELECT parseDateTimeBestEffort('10 20:19')
- [toDate](#todate)
- [toDateTime](#todatetime)

## toUnixTimestamp64Milli
## toUnixTimestamp64Micro
## toUnixTimestamp64Nano

Converts a `DateTime64` value to an `Int64` value with fixed sub-second precision.
The input value is rounded up or down as appropriate, depending on its precision. Note that the returned value is a timestamp in UTC, not in the time zone of the `DateTime64`.

**Syntax**

``` sql
toUnixTimestamp64Milli(value)
```

**Parameters**

- `value` — a `DateTime64` value with any precision.

**Returned value**

- The `value` converted to the `Int64` data type.

**Examples**

Query:

``` sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Milli(dt64)
```

Result:

``` text
┌─toUnixTimestamp64Milli(dt64)─┐
│                1568650812345 │
└──────────────────────────────┘
```

Query:

``` sql
WITH toDateTime64('2019-09-16 19:20:12.345678910', 6) AS dt64
SELECT toUnixTimestamp64Nano(dt64)
```

Result:

``` text
┌─toUnixTimestamp64Nano(dt64)─┐
│         1568650812345678000 │
└─────────────────────────────┘
```

## fromUnixTimestamp64Milli
## fromUnixTimestamp64Micro
## fromUnixTimestamp64Nano

Converts an `Int64` value to a `DateTime64` value with fixed sub-second precision and an optional time zone. The input value is rounded up or down as appropriate, depending on its precision. Note that the input value is treated as a UTC timestamp, not as a timestamp in the given (or implicit) time zone.

**Syntax**

``` sql
fromUnixTimestamp64Milli(value [, timezone])
```

**Parameters**

- `value` — an `Int64` value with any precision.
- `timezone` — (optional) the time zone for the returned value, as a `String`.

**Returned value**

- The `value` converted to the `DateTime64` data type.

**Example**

Query:

``` sql
WITH CAST(1234567891011, 'Int64') AS i64
SELECT fromUnixTimestamp64Milli(i64, 'UTC')
```

Result:

``` text
┌─fromUnixTimestamp64Milli(i64, 'UTC')─┐
│              2009-02-13 23:31:31.011 │
└──────────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/ru/query_language/functions/type_conversion_functions/) <!--hide-->
@ -5,13 +5,15 @@ toc_title: Представление
# CREATE VIEW {#create-view}

``` sql
CREATE [MATERIALIZED] VIEW [IF NOT EXISTS] [db.]table_name [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
```

Creates a view. There are two types of views: normal and materialized (MATERIALIZED).

Normal views do not store any data; they just read from another table. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the FROM clause.
## Normal views {#normal}

``` sql
CREATE [OR REPLACE] VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] AS SELECT ...
```

Normal views don’t store any data, they just perform a read from another table on each access. In other words, a normal view is nothing more than a saved query. When reading from a view, this saved query is used as a subquery in the [FROM](../../../sql-reference/statements/select/from.md) clause.

For example, suppose you have created a view:

@ -31,15 +33,24 @@ SELECT a, b, c FROM view

SELECT a, b, c FROM (SELECT ...)
```

Materialized (MATERIALIZED) views store data transformed by the corresponding SELECT query.
## Materialized views {#materialized}

When creating a materialized view without `TO [db].[table]`, you must specify ENGINE, the table engine for storing the data.
``` sql
CREATE MATERIALIZED VIEW [IF NOT EXISTS] [db.]table_name [ON CLUSTER] [TO[db.]name] [ENGINE = engine] [POPULATE] AS SELECT ...
```

Materialized (MATERIALIZED) views store data transformed by the corresponding [SELECT](../../../sql-reference/statements/select/index.md) query.

When creating a materialized view without `TO [db].[table]`, you must specify `ENGINE`, the table engine for storing the data.

When creating a materialized view with `TO [db].[table]`, you must not specify `POPULATE`.

A materialized view works as follows: when data is inserted into the table specified in its SELECT, the block of inserted data is transformed by this SELECT query, and the result is inserted into the view.

If POPULATE is specified, the existing table data is inserted into the view when it is created, as if a `CREATE TABLE ... AS SELECT ...` query had been run. Otherwise, the view contains only the data inserted into the table after the view is created. Using POPULATE is not recommended, because data inserted into the table during view creation will not end up in it.
!!! important "Important"
    Materialized views in ClickHouse are more like `after insert` triggers. If the materialized view query contains an aggregation, it is applied only to the batch of freshly inserted rows. Any changes to existing data in the source table (such as updates, deletes, or dropping a partition) do not change the materialized view.

If `POPULATE` is specified, the existing table data is inserted into the view when it is created, as if a `CREATE TABLE ... AS SELECT ...` query had been run. Otherwise, the view contains only the data inserted into the table after the view is created. Using POPULATE is not recommended, because data inserted into the table during view creation will not end up in it.

The `SELECT` query can contain `DISTINCT`, `GROUP BY`, `ORDER BY`, `LIMIT`… Keep in mind that the corresponding transformations are performed independently on each block of inserted data. For example, with `GROUP BY` set, data is aggregated during insertion, but only within one batch of inserted data; the data is not aggregated further. The exception is when you use an ENGINE that performs data aggregation itself, such as `SummingMergeTree`.
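As a minimal sketch of this behavior (the table names `hits` and `hits_daily` are hypothetical): counts are pre-aggregated per inserted block, and `SummingMergeTree` re-aggregates rows with the same key at merge time:

``` sql
-- hypothetical source table
CREATE TABLE hits (dt Date, user_id UInt64) ENGINE = MergeTree ORDER BY dt;

-- each inserted block is aggregated independently;
-- SummingMergeTree sums cnt for equal dt values during merges
CREATE MATERIALIZED VIEW hits_daily
ENGINE = SummingMergeTree ORDER BY dt
AS SELECT dt, count() AS cnt
FROM hits
GROUP BY dt;
```
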
@ -5,18 +5,35 @@ toc_title: DROP
# DROP {#drop}

The query has two types: `DROP DATABASE` and `DROP TABLE`.
Deletes an existing object.
If `IF EXISTS` is specified, no error is returned if the object does not exist.

## DROP DATABASE {#drop-database}

``` sql
DROP DATABASE [IF EXISTS] db [ON CLUSTER cluster]
```

Deletes all tables in the `db` database, then deletes the `db` database itself.


## DROP TABLE {#drop-table}

``` sql
DROP [TEMPORARY] TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster]
```

Deletes the table.
If `IF EXISTS` is specified, no error is returned if the table does not exist or the database does not exist.


## DROP DICTIONARY {#drop-dictionary}

``` sql
DROP DICTIONARY [IF EXISTS] [db.]name
```

Deletes the dictionary.


## DROP USER {#drop-user-statement}

@ -41,6 +58,7 @@ DROP USER [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
DROP ROLE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```


## DROP ROW POLICY {#drop-row-policy-statement}

Deletes a row policy.
@ -80,5 +98,13 @@ DROP [SETTINGS] PROFILE [IF EXISTS] name [,...] [ON CLUSTER cluster_name]
```


## DROP VIEW {#drop-view}

``` sql
DROP VIEW [IF EXISTS] [db.]name [ON CLUSTER cluster]
```

Deletes a view. Views can also be deleted with the `DROP TABLE` command, but `DROP VIEW` checks that `[db.]name` is in fact a view.
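For example (with a hypothetical view name):

``` sql
DROP VIEW IF EXISTS db.daily_totals
```
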
[Original article](https://clickhouse.tech/docs/ru/sql-reference/statements/drop/) <!--hide-->
@ -169,4 +169,65 @@ SHOW CREATE QUOTA [name | CURRENT]
SHOW CREATE [SETTINGS] PROFILE name
```


## SHOW USERS {#show-users-statement}

Returns a list of [user accounts](../../operations/access-rights.md#user-account-management). To view user account parameters, see the [system.users](../../operations/system-tables/users.md#system_tables-users) system table.

### Syntax {#show-users-syntax}

``` sql
SHOW USERS
```

## SHOW ROLES {#show-roles-statement}

Returns a list of [roles](../../operations/access-rights.md#role-management). To view role parameters, see the [system.roles](../../operations/system-tables/roles.md#system_tables-roles) and [system.role-grants](../../operations/system-tables/role-grants.md#system_tables-role_grants) system tables.

### Syntax {#show-roles-syntax}

``` sql
SHOW [CURRENT|ENABLED] ROLES
```

## SHOW PROFILES {#show-profiles-statement}

Returns a list of [settings profiles](../../operations/access-rights.md#settings-profiles-management). To view other settings profile parameters, see the [settings_profiles](../../operations/system-tables/settings_profiles.md#system_tables-settings_profiles) system table.

### Syntax {#show-profiles-syntax}

``` sql
SHOW [SETTINGS] PROFILES
```

## SHOW POLICIES {#show-policies-statement}

Returns a list of [row policies](../../operations/access-rights.md#row-policy-management) for the specified table. To view other parameters, see the [system.row_policies](../../operations/system-tables/row_policies.md#system_tables-row_policies) system table.

### Syntax {#show-policies-syntax}

``` sql
SHOW [ROW] POLICIES [ON [db.]table]
```

## SHOW QUOTAS {#show-quotas-statement}

Returns a list of [quotas](../../operations/access-rights.md#quotas-management). To view quota parameters, see the [system.quotas](../../operations/system-tables/quotas.md#system_tables-quotas) system table.

### Syntax {#show-quotas-syntax}

``` sql
SHOW QUOTAS
```

## SHOW QUOTA {#show-quota-statement}

Returns the [quota](../../operations/quotas.md) consumption for all users or for the current user. To view other parameters, see the [system.quotas_usage](../../operations/system-tables/quotas_usage.md#system_tables-quotas_usage) and [system.quota_usage](../../operations/system-tables/quota_usage.md#system_tables-quota_usage) system tables.

### Syntax {#show-quota-syntax}

``` sql
SHOW [CURRENT] QUOTA
```

[Original article](https://clickhouse.tech/docs/ru/query_language/show/) <!--hide-->
@ -180,10 +180,11 @@ def build(args):
    if not args.skip_website:
        website.build_website(args)

    if not args.skip_test_templates:
        test.test_templates(args.website_dir)

    if not args.skip_docs:
        build_docs(args)

        from github import build_releases
        build_releases(args, build_docs)

@ -220,6 +221,8 @@ if __name__ == '__main__':
    arg_parser.add_argument('--skip-website', action='store_true')
    arg_parser.add_argument('--skip-blog', action='store_true')
    arg_parser.add_argument('--skip-git-log', action='store_true')
    arg_parser.add_argument('--skip-docs', action='store_true')
    arg_parser.add_argument('--skip-test-templates', action='store_true')
    arg_parser.add_argument('--test-only', action='store_true')
    arg_parser.add_argument('--minify', action='store_true')
    arg_parser.add_argument('--htmlproofer', action='store_true')