Merge branch 'master' of https://github.com/ClickHouse/ClickHouse into ana-uvarova-DOCSUP-3122-update-alter

Commit: 05657b10d1

.gitignore (vendored): 13 changed lines
@@ -124,3 +124,16 @@ website/package-lock.json
 
+# Toolchains
+/cmake/toolchain/*
+
+# ANTLR extension cache
+.antlr
+
+# ANTLR generated files
+/src/Parsers/New/*.interp
+/src/Parsers/New/*.tokens
+/src/Parsers/New/ClickHouseParserBaseVisitor.*
+
+# pytest-profiling
+/prof
+
+*.iml
.gitmodules (vendored): 9 changed lines
@@ -157,7 +157,7 @@
 	url = https://github.com/ClickHouse-Extras/libcpuid.git
 [submodule "contrib/openldap"]
 	path = contrib/openldap
-	url = https://github.com/openldap/openldap.git
+	url = https://github.com/ClickHouse-Extras/openldap.git
 [submodule "contrib/AMQP-CPP"]
 	path = contrib/AMQP-CPP
 	url = https://github.com/ClickHouse-Extras/AMQP-CPP.git
@@ -172,6 +172,9 @@
 [submodule "contrib/fmtlib"]
 	path = contrib/fmtlib
 	url = https://github.com/fmtlib/fmt.git
+[submodule "contrib/antlr4-runtime"]
+	path = contrib/antlr4-runtime
+	url = https://github.com/ClickHouse-Extras/antlr4-runtime.git
 [submodule "contrib/sentry-native"]
 	path = contrib/sentry-native
 	url = https://github.com/ClickHouse-Extras/sentry-native.git
@@ -200,8 +203,8 @@
 	url = https://github.com/facebook/rocksdb
 	branch = v6.14.5
 [submodule "contrib/xz"]
-path = contrib/xz
-url = https://github.com/xz-mirror/xz
+	path = contrib/xz
+	url = https://github.com/xz-mirror/xz
 [submodule "contrib/abseil-cpp"]
 	path = contrib/abseil-cpp
 	url = https://github.com/ClickHouse-Extras/abseil-cpp.git
CHANGELOG.md: 13 changed lines
@@ -16,6 +16,7 @@
 * Remove `ANALYZE` and `AST` queries, and make the setting `enable_debug_queries` obsolete since now it is the part of full featured `EXPLAIN` query. [#16536](https://github.com/ClickHouse/ClickHouse/pull/16536) ([Ivan](https://github.com/abyss7)).
 * Aggregate functions `boundingRatio`, `rankCorr`, `retention`, `timeSeriesGroupSum`, `timeSeriesGroupRateSum`, `windowFunnel` were erroneously made case-insensitive. Now their names are made case sensitive as designed. Only functions that are specified in SQL standard or made for compatibility with other DBMS or functions similar to those should be case-insensitive. [#16407](https://github.com/ClickHouse/ClickHouse/pull/16407) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Make `rankCorr` function return nan on insufficient data https://github.com/ClickHouse/ClickHouse/issues/16124. [#16135](https://github.com/ClickHouse/ClickHouse/pull/16135) ([hexiaoting](https://github.com/hexiaoting)).
+* When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
 
 #### New Feature
 
@@ -154,6 +155,7 @@
 * Change default value of `format_regexp_escaping_rule` setting (it's related to `Regexp` format) to `Raw` (it means - read whole subpattern as a value) to make the behaviour more like to what users expect. [#15426](https://github.com/ClickHouse/ClickHouse/pull/15426) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Add support for nested multiline comments `/* comment /* comment */ */` in SQL. This conforms to the SQL standard. [#14655](https://github.com/ClickHouse/ClickHouse/pull/14655) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Added MergeTree settings (`max_replicated_merges_with_ttl_in_queue` and `max_number_of_merges_with_ttl_in_pool`) to control the number of merges with TTL in the background pool and replicated queue. This change breaks compatibility with older versions only if you use delete TTL. Otherwise, replication will stay compatible. You can avoid incompatibility issues if you update all shard replicas at once or execute `SYSTEM STOP TTL MERGES` until you finish the update of all replicas. If you'll get an incompatible entry in the replication queue, first of all, execute `SYSTEM STOP TTL MERGES` and after `ALTER TABLE ... DETACH PARTITION ...` the partition where incompatible TTL merge was assigned. Attach it back on a single replica. [#14490](https://github.com/ClickHouse/ClickHouse/pull/14490) ([alesapin](https://github.com/alesapin)).
+* When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
 
 #### New Feature
 
@@ -438,6 +440,10 @@
 
 ### ClickHouse release v20.9.2.20, 2020-09-22
 
+#### Backward Incompatible Change
+
+* When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
+
 #### New Feature
 
 * Added column transformers `EXCEPT`, `REPLACE`, `APPLY`, which can be applied to the list of selected columns (after `*` or `COLUMNS(...)`). For example, you can write `SELECT * EXCEPT(URL) REPLACE(number + 1 AS number)`. Another example: `select * apply(length) apply(max) from wide_string_table` to find out the maxium length of all string columns. [#14233](https://github.com/ClickHouse/ClickHouse/pull/14233) ([Amos Bird](https://github.com/amosbird)).
@@ -621,6 +627,7 @@
 * Now `OPTIMIZE FINAL` query doesn't recalculate TTL for parts that were added before TTL was created. Use `ALTER TABLE ... MATERIALIZE TTL` once to calculate them, after that `OPTIMIZE FINAL` will evaluate TTL's properly. This behavior never worked for replicated tables. [#14220](https://github.com/ClickHouse/ClickHouse/pull/14220) ([alesapin](https://github.com/alesapin)).
 * Extend `parallel_distributed_insert_select` setting, adding an option to run `INSERT` into local table. The setting changes type from `Bool` to `UInt64`, so the values `false` and `true` are no longer supported. If you have these values in server configuration, the server will not start. Please replace them with `0` and `1`, respectively. [#14060](https://github.com/ClickHouse/ClickHouse/pull/14060) ([Azat Khuzhin](https://github.com/azat)).
 * Remove support for the `ODBCDriver` input/output format. This was a deprecated format once used for communication with the ClickHouse ODBC driver, now long superseded by the `ODBCDriver2` format. Resolves [#13629](https://github.com/ClickHouse/ClickHouse/issues/13629). [#13847](https://github.com/ClickHouse/ClickHouse/pull/13847) ([hexiaoting](https://github.com/hexiaoting)).
+* When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
 
 #### New Feature
 
@@ -765,6 +772,7 @@
 * The function `groupArrayMoving*` was not working for distributed queries. It's result was calculated within incorrect data type (without promotion to the largest type). The function `groupArrayMovingAvg` was returning integer number that was inconsistent with the `avg` function. This fixes [#12568](https://github.com/ClickHouse/ClickHouse/issues/12568). [#12622](https://github.com/ClickHouse/ClickHouse/pull/12622) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Add sanity check for MergeTree settings. If the settings are incorrect, the server will refuse to start or to create a table, printing detailed explanation to the user. [#13153](https://github.com/ClickHouse/ClickHouse/pull/13153) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Protect from the cases when user may set `background_pool_size` to value lower than `number_of_free_entries_in_pool_to_execute_mutation` or `number_of_free_entries_in_pool_to_lower_max_size_of_merge`. In these cases ALTERs won't work or the maximum size of merge will be too limited. It will throw exception explaining what to do. This closes [#10897](https://github.com/ClickHouse/ClickHouse/issues/10897). [#12728](https://github.com/ClickHouse/ClickHouse/pull/12728) ([alexey-milovidov](https://github.com/alexey-milovidov)).
+* When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
 
 #### New Feature
 
@@ -951,6 +959,10 @@
 
 ### ClickHouse release v20.6.3.28-stable
 
+#### Backward Incompatible Change
+
+* When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
+
 #### New Feature
 
 * Added an initial implementation of `EXPLAIN` query. Syntax: `EXPLAIN SELECT ...`. This fixes [#1118](https://github.com/ClickHouse/ClickHouse/issues/1118). [#11873](https://github.com/ClickHouse/ClickHouse/pull/11873) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
@@ -1139,6 +1151,7 @@
 * Update `zstd` to 1.4.4. It has some minor improvements in performance and compression ratio. If you run replicas with different versions of ClickHouse you may see reasonable error messages `Data after merge is not byte-identical to data on another replicas.` with explanation. These messages are Ok and you should not worry. This change is backward compatible but we list it here in changelog in case you will wonder about these messages. [#10663](https://github.com/ClickHouse/ClickHouse/pull/10663) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Added a check for meaningless codecs and a setting `allow_suspicious_codecs` to control this check. This closes [#4966](https://github.com/ClickHouse/ClickHouse/issues/4966). [#10645](https://github.com/ClickHouse/ClickHouse/pull/10645) ([alexey-milovidov](https://github.com/alexey-milovidov)).
 * Several Kafka setting changes their defaults. See [#11388](https://github.com/ClickHouse/ClickHouse/pull/11388).
+* When upgrading from versions older than 20.5, if rolling update is performed and cluster contains both versions 20.5 or greater and less than 20.5, if ClickHouse nodes with old versions are restarted and old version has been started up in presence of newer versions, it may lead to `Part ... intersects previous part` errors. To prevent this error, first install newer clickhouse-server packages on all cluster nodes and then do restarts (so, when clickhouse-server is restarted, it will start up with the new version).
 
 #### New Feature
 
CMakeLists.txt

@@ -154,17 +154,19 @@ endif ()
 # Make sure the final executable has symbols exported
 set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -rdynamic")
 
-find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-11" "llvm-objcopy-10" "llvm-objcopy-9" "llvm-objcopy-8" "objcopy")
-if (OBJCOPY_PATH)
-    message(STATUS "Using objcopy: ${OBJCOPY_PATH}.")
+if (OS_LINUX)
+    find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-11" "llvm-objcopy-10" "llvm-objcopy-9" "llvm-objcopy-8" "objcopy")
+    if (OBJCOPY_PATH)
+        message(STATUS "Using objcopy: ${OBJCOPY_PATH}.")
 
-    if (ARCH_AMD64)
-        set(OBJCOPY_ARCH_OPTIONS -O elf64-x86-64 -B i386)
-    elseif (ARCH_AARCH64)
-        set(OBJCOPY_ARCH_OPTIONS -O elf64-aarch64 -B aarch64)
+        if (ARCH_AMD64)
+            set(OBJCOPY_ARCH_OPTIONS -O elf64-x86-64 -B i386)
+        elseif (ARCH_AARCH64)
+            set(OBJCOPY_ARCH_OPTIONS -O elf64-aarch64 -B aarch64)
+        endif ()
+    else ()
+        message(FATAL_ERROR "Cannot find objcopy.")
     endif ()
-else ()
-    message(FATAL_ERROR "Cannot find objcopy.")
 endif ()
 
 if (OS_DARWIN)
@@ -255,6 +257,8 @@ if (WITH_COVERAGE AND COMPILER_GCC)
     set(WITHOUT_COVERAGE "-fno-profile-arcs -fno-test-coverage")
 endif()
 
+set(COMPILER_FLAGS "${COMPILER_FLAGS}")
+
 set (CMAKE_BUILD_COLOR_MAKEFILE ON)
 set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${COMPILER_FLAGS} ${PLATFORM_EXTRA_CXX_FLAG} ${COMMON_WARNING_FLAGS} ${CXX_WARNING_FLAGS}")
 set (CMAKE_CXX_FLAGS_RELWITHDEBINFO "${CMAKE_CXX_FLAGS_RELWITHDEBINFO} -O3 ${CMAKE_CXX_FLAGS_ADD}")
@@ -475,9 +479,6 @@ find_contrib_lib(cityhash)
 
 find_contrib_lib(farmhash)
 
-set (USE_INTERNAL_BTRIE_LIBRARY ON CACHE INTERNAL "")
-find_contrib_lib(btrie)
-
 if (ENABLE_TESTS)
     include (cmake/find/gtest.cmake)
 endif ()
README.md

@@ -1,6 +1,6 @@
 [![ClickHouse — open source distributed column-oriented DBMS](https://github.com/ClickHouse/ClickHouse/raw/master/website/images/logo-400x240.png)](https://clickhouse.tech)
 
-ClickHouse is an open-source column-oriented database management system that allows generating analytical data reports in real time.
+ClickHouse® is an open-source column-oriented database management system that allows generating analytical data reports in real time.
 
 ## Useful Links
 
@@ -16,7 +16,4 @@ ClickHouse is an open-source column-oriented database management system that all
 * You can also [fill this form](https://clickhouse.tech/#meet) to meet Yandex ClickHouse team in person.
 
 ## Upcoming Events
 
-* [The Second ClickHouse Meetup East (online)](https://www.eventbrite.com/e/the-second-clickhouse-meetup-east-tickets-126787955187) on October 31, 2020.
-* [ClickHouse for Enterprise Meetup (online in Russian)](https://arenadata-events.timepad.ru/event/1465249/) on November 10, 2020.
-
+* [SF Bay Area ClickHouse Virtual Office Hours (online)](https://www.meetup.com/San-Francisco-Bay-Area-ClickHouse-Meetup/events/274273549/) on 20 January 2020.
LineReader.cpp

@@ -127,7 +127,7 @@ String LineReader::readLine(const String & first_prompt, const String & second_p
     }
 #endif
 
-        line += (line.empty() ? "" : " ") + input;
+        line += (line.empty() ? "" : "\n") + input;
 
         if (!need_next_line)
             break;
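The one-character change above swaps the joiner for multiline input from a space to a newline, so a multiline query keeps its original line breaks when accumulated into `line`. A self-contained sketch of the old and new joining behavior (the loop and input data are illustrative, not from the ClickHouse sources):

    #include <iostream>
    #include <string>
    #include <vector>

    int main()
    {
        const std::vector<std::string> inputs = {"SELECT 1,", "2,", "3"};

        std::string with_space;    // old behavior: " " joiner
        std::string with_newline;  // new behavior: "\n" joiner
        for (const auto & input : inputs)
        {
            with_space += (with_space.empty() ? "" : " ") + input;
            with_newline += (with_newline.empty() ? "" : "\n") + input;
        }

        std::cout << with_space << "\n---\n" << with_newline << '\n';
    }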
defines.h

@@ -76,12 +76,6 @@
 #    define NO_SANITIZE_THREAD
 #endif
 
-#if defined __GNUC__ && !defined __clang__
-#    define OPTIMIZE(x) __attribute__((__optimize__(x)))
-#else
-#    define OPTIMIZE(x)
-#endif
-
 /// A macro for suppressing warnings about unused variables or function results.
 /// Useful for structured bindings which have no standard way to declare this.
 #define UNUSED(...) (void)(__VA_ARGS__)
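For reference, the deleted `OPTIMIZE(x)` macro wrapped GCC's per-function optimization attribute and expanded to nothing on other compilers. A hypothetical usage sketch (the function below is made up for illustration, not taken from the sources):

    #if defined __GNUC__ && !defined __clang__
    #    define OPTIMIZE(x) __attribute__((__optimize__(x)))
    #else
    #    define OPTIMIZE(x)
    #endif

    /// Compile this one hot loop at -O3 even in a lower-optimization build (GCC only).
    OPTIMIZE("O3") int sumBuffer(const int * data, int n)
    {
        int sum = 0;
        for (int i = 0; i < n; ++i)
            sum += data[i];
        return sum;
    }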
getMemoryAmount.cpp

@@ -1,100 +1,28 @@
 #include <stdexcept>
 #include "common/getMemoryAmount.h"
 
-// http://nadeausoftware.com/articles/2012/09/c_c_tip_how_get_physical_memory_size_system
-
-/*
- * Author:  David Robert Nadeau
- * Site:    http://NadeauSoftware.com/
- * License: Creative Commons Attribution 3.0 Unported License
- *          http://creativecommons.org/licenses/by/3.0/deed.en_US
- */
-
-#if defined(WIN32) || defined(_WIN32)
-#include <Windows.h>
-#else
 #include <unistd.h>
-#include <sys/types.h>
-#include <sys/param.h>
-#if defined(BSD)
-#include <sys/sysctl.h>
-#endif
-#endif
-
-
-/**
- * Returns the size of physical memory (RAM) in bytes.
- * Returns 0 on unsupported platform
- */
+/** Returns the size of physical memory (RAM) in bytes.
+  * Returns 0 on unsupported platform
+  */
 uint64_t getMemoryAmountOrZero()
 {
-#if defined(_WIN32) && (defined(__CYGWIN__) || defined(__CYGWIN32__))
-    /* Cygwin under Windows. ------------------------------------ */
-    /* New 64-bit MEMORYSTATUSEX isn't available.  Use old 32.bit */
-    MEMORYSTATUS status;
-    status.dwLength = sizeof(status);
-    GlobalMemoryStatus(&status);
-    return status.dwTotalPhys;
+    int64_t num_pages = sysconf(_SC_PHYS_PAGES);
+    if (num_pages <= 0)
+        return 0;
 
-#elif defined(WIN32) || defined(_WIN32)
-    /* Windows. ------------------------------------------------- */
-    /* Use new 64-bit MEMORYSTATUSEX, not old 32-bit MEMORYSTATUS */
-    MEMORYSTATUSEX status;
-    status.dwLength = sizeof(status);
-    GlobalMemoryStatusEx(&status);
-    return status.ullTotalPhys;
+    int64_t page_size = sysconf(_SC_PAGESIZE);
+    if (page_size <= 0)
+        return 0;
 
-#else
-    /* UNIX variants. ------------------------------------------- */
-    /* Prefer sysctl() over sysconf() except sysctl() HW_REALMEM and HW_PHYSMEM */
-
-#if defined(CTL_HW) && (defined(HW_MEMSIZE) || defined(HW_PHYSMEM64))
-    int mib[2];
-    mib[0] = CTL_HW;
-#if defined(HW_MEMSIZE)
-    mib[1] = HW_MEMSIZE;            /* OSX. --------------------- */
-#elif defined(HW_PHYSMEM64)
-    mib[1] = HW_PHYSMEM64;          /* NetBSD, OpenBSD. --------- */
-#endif
-    uint64_t size = 0;              /* 64-bit */
-    size_t len = sizeof(size);
-    if (sysctl(mib, 2, &size, &len, nullptr, 0) == 0)
-        return size;
-
-    return 0;                       /* Failed? */
-
-#elif defined(_SC_AIX_REALMEM)
-    /* AIX. ----------------------------------------------------- */
-    return sysconf(_SC_AIX_REALMEM) * 1024;
-
-#elif defined(_SC_PHYS_PAGES) && defined(_SC_PAGESIZE)
-    /* FreeBSD, Linux, OpenBSD, and Solaris. -------------------- */
-    return uint64_t(sysconf(_SC_PHYS_PAGES))
-        *uint64_t(sysconf(_SC_PAGESIZE));
-
-#elif defined(_SC_PHYS_PAGES) && defined(_SC_PAGE_SIZE)
-    /* Legacy. -------------------------------------------------- */
-    return uint64_t(sysconf(_SC_PHYS_PAGES))
-        * uint64_t(sysconf(_SC_PAGE_SIZE));
-
-#elif defined(CTL_HW) && (defined(HW_PHYSMEM) || defined(HW_REALMEM))
-    /* DragonFly BSD, FreeBSD, NetBSD, OpenBSD, and OSX. -------- */
-    int mib[2];
-    mib[0] = CTL_HW;
-#if defined(HW_REALMEM)
-    mib[1] = HW_REALMEM;        /* FreeBSD. ----------------- */
-#elif defined(HW_PYSMEM)
-    mib[1] = HW_PHYSMEM;        /* Others. ------------------ */
-#endif
-    unsigned int size = 0;      /* 32-bit */
-    size_t len = sizeof(size);
-    if (sysctl(mib, 2, &size, &len, nullptr, 0) == 0)
-        return size;
-
-    return 0;                   /* Failed? */
-#endif /* sysctl and sysconf variants */
-
-#endif
+    return num_pages * page_size;
 }
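The rewrite drops the multi-platform implementation (Windows, Cygwin, AIX, the BSD sysctl variants) in favor of two POSIX sysconf queries. A minimal standalone sketch of the same approach, assuming a POSIX system:

    #include <cstdint>
    #include <cstdio>
    #include <unistd.h>

    /// Physical RAM in bytes = number of pages * page size; 0 if either query fails.
    uint64_t physicalMemoryOrZero()
    {
        int64_t num_pages = sysconf(_SC_PHYS_PAGES);
        int64_t page_size = sysconf(_SC_PAGESIZE);
        if (num_pages <= 0 || page_size <= 0)
            return 0;
        return static_cast<uint64_t>(num_pages) * static_cast<uint64_t>(page_size);
    }

    int main()
    {
        std::printf("RAM: %llu bytes\n", static_cast<unsigned long long>(physicalMemoryOrZero()));
    }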
types.h

@@ -8,7 +8,7 @@ using Int16 = int16_t;
 using Int32 = int32_t;
 using Int64 = int64_t;
 
-#if __cplusplus <= 201703L
+#ifndef __cpp_char8_t
 using char8_t = unsigned char;
 #endif
 
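The change replaces a language-version test with the dedicated feature-test macro: `__cpp_char8_t` is defined exactly when the compiler provides the `char8_t` core-language type, which is more reliable than comparing `__cplusplus` (a compiler may ship the type in a pre-C++20 mode, or lag behind the version number). A small sketch of the guarded fallback:

    #include <cstdio>

    /// Define the fallback only when the compiler lacks the real char8_t type.
    #ifndef __cpp_char8_t
    using char8_t = unsigned char;
    #endif

    int main()
    {
        char8_t byte = 0x7f;  // works under both C++17 (alias) and C++20 (builtin)
        std::printf("%u\n", static_cast<unsigned>(byte));
    }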
@@ -5,7 +5,6 @@ LIBRARY()
 
 ADDINCL(
     GLOBAL clickhouse/base
-    GLOBAL contrib/libs/cctz/include
 )
 
 CFLAGS (GLOBAL -DARCADIA_BUILD)
@@ -24,7 +23,7 @@ ELSEIF (OS_LINUX)
 ENDIF ()
 
 PEERDIR(
-    contrib/libs/cctz/src
+    contrib/libs/cctz
     contrib/libs/cxxsupp/libcxx-filesystem
     contrib/libs/poco/Net
     contrib/libs/poco/Util
@@ -4,7 +4,6 @@ LIBRARY()
 
 ADDINCL(
     GLOBAL clickhouse/base
-    GLOBAL contrib/libs/cctz/include
 )
 
 CFLAGS (GLOBAL -DARCADIA_BUILD)
@@ -23,7 +22,7 @@ ELSEIF (OS_LINUX)
 ENDIF ()
 
 PEERDIR(
-    contrib/libs/cctz/src
+    contrib/libs/cctz
     contrib/libs/cxxsupp/libcxx-filesystem
     contrib/libs/poco/Net
     contrib/libs/poco/Util
SentryWriter.cpp

@@ -6,10 +6,12 @@
 
 #include <common/defines.h>
 #include <common/getFQDNOrHostName.h>
+#include <common/getMemoryAmount.h>
 #include <common/logger_useful.h>
 
 #include <Common/SymbolIndex.h>
 #include <Common/StackTrace.h>
+#include <Common/getNumberOfPhysicalCPUCores.h>
 
 #if !defined(ARCADIA_BUILD)
 #    include "Common/config_version.h"
@@ -28,14 +30,13 @@ namespace
 
 bool initialized = false;
 bool anonymize = false;
+std::string server_data_path;
 
 void setExtras()
 {
     if (!anonymize)
     {
         sentry_set_extra("server_name", sentry_value_new_string(getFQDNOrHostName().c_str()));
     }
 
     sentry_set_tag("version", VERSION_STRING);
     sentry_set_extra("version_githash", sentry_value_new_string(VERSION_GITHASH));
     sentry_set_extra("version_describe", sentry_value_new_string(VERSION_DESCRIBE));
@@ -44,6 +45,15 @@ void setExtras()
     sentry_set_extra("version_major", sentry_value_new_int32(VERSION_MAJOR));
     sentry_set_extra("version_minor", sentry_value_new_int32(VERSION_MINOR));
     sentry_set_extra("version_patch", sentry_value_new_int32(VERSION_PATCH));
+    sentry_set_extra("version_official", sentry_value_new_string(VERSION_OFFICIAL));
+
+    /// Sentry does not support 64-bit integers.
+    sentry_set_extra("total_ram", sentry_value_new_string(formatReadableSizeWithBinarySuffix(getMemoryAmountOrZero()).c_str()));
+    sentry_set_extra("physical_cpu_cores", sentry_value_new_int32(getNumberOfPhysicalCPUCores()));
+
+    if (!server_data_path.empty())
+        sentry_set_extra("disk_free_space", sentry_value_new_string(formatReadableSizeWithBinarySuffix(
+            Poco::File(server_data_path).freeSpace()).c_str()));
 }
 
 void sentry_logger(sentry_level_e level, const char * message, va_list args, void *)
@@ -98,6 +108,7 @@ void SentryWriter::initialize(Poco::Util::LayeredConfiguration & config)
     }
     if (enabled)
     {
+        server_data_path = config.getString("path", "");
         const std::filesystem::path & default_tmp_path = std::filesystem::path(config.getString("tmp_path", Poco::Path::temp())) / "sentry";
         const std::string & endpoint
             = config.getString("send_crash_reports.endpoint");
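The `/// Sentry does not support 64-bit integers.` comment explains the detour through `formatReadableSizeWithBinarySuffix`: sentry-native's value model only has a 32-bit integer constructor (`sentry_value_new_int32`), so a 64-bit byte count has to travel as a string. A hedged sketch of the pattern against the sentry-native C API (the function name is illustrative):

    #include <cstdint>
    #include <string>
    #include <sentry.h>

    /// sentry_value_new_int32() would truncate anything above 2^31 - 1,
    /// so the 64-bit byte count is formatted into a string instead.
    void reportTotalRam(uint64_t total_ram_bytes)
    {
        sentry_set_extra("total_ram",
            sentry_value_new_string(std::to_string(total_ram_bytes).c_str()));
    }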
Connection.cpp (mysqlxx)

@@ -104,6 +104,11 @@ void Connection::connect(const char* db,
     if (mysql_options(driver.get(), MYSQL_OPT_LOCAL_INFILE, &enable_local_infile_arg))
         throw ConnectionFailed(errorMessage(driver.get()), mysql_errno(driver.get()));
 
+    /// Enables auto-reconnect.
+    bool reconnect = true;
+    if (mysql_options(driver.get(), MYSQL_OPT_RECONNECT, reinterpret_cast<const char *>(&reconnect)))
+        throw ConnectionFailed(errorMessage(driver.get()), mysql_errno(driver.get()));
+
     /// Specifies particular ssl key and certificate if it needs
     if (mysql_ssl_set(driver.get(), ifNotEmpty(ssl_key), ifNotEmpty(ssl_cert), ifNotEmpty(ssl_ca), nullptr, nullptr))
         throw ConnectionFailed(errorMessage(driver.get()), mysql_errno(driver.get()));
@@ -115,11 +120,6 @@ void Connection::connect(const char* db,
     if (mysql_set_character_set(driver.get(), "UTF8"))
         throw ConnectionFailed(errorMessage(driver.get()), mysql_errno(driver.get()));
 
-    /// Enables auto-reconnect.
-    bool reconnect = true;
-    if (mysql_options(driver.get(), MYSQL_OPT_RECONNECT, reinterpret_cast<const char *>(&reconnect)))
-        throw ConnectionFailed(errorMessage(driver.get()), mysql_errno(driver.get()));
-
     is_connected = true;
 }
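The two hunks move the `MYSQL_OPT_RECONNECT` setup from after the connection is established to the pre-connect option block, presumably so the option is already on the handle when `mysql_real_connect()` runs. A minimal standalone sketch of that ordering against the libmysqlclient API (host, user, and password are placeholders, not values from the commit):

    #include <mysql/mysql.h>
    #include <stdexcept>

    MYSQL * connectWithReconnect(const char * host, const char * user, const char * password)
    {
        MYSQL * handle = mysql_init(nullptr);
        if (!handle)
            throw std::runtime_error("mysql_init failed");

        /// Configure auto-reconnect BEFORE connecting, mirroring the reordering above.
        bool reconnect = true;
        if (mysql_options(handle, MYSQL_OPT_RECONNECT, reinterpret_cast<const char *>(&reconnect)))
            throw std::runtime_error(mysql_error(handle));

        if (!mysql_real_connect(handle, host, user, password, nullptr, 3306, nullptr, 0))
            throw std::runtime_error(mysql_error(handle));

        return handle;
    }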
Pool.cpp (mysqlxx)

@@ -26,6 +26,7 @@ void Pool::Entry::incrementRefCount()
     mysql_thread_init();
 }
 
+
 void Pool::Entry::decrementRefCount()
 {
     if (!data)
@@ -150,28 +151,39 @@ Pool::Entry Pool::tryGet()
 
     initialize();
 
-    /// Searching for connection which was established but wasn't used.
-    for (auto & connection : connections)
+    /// Try to pick an idle connection from already allocated
+    for (auto connection_it = connections.cbegin(); connection_it != connections.cend();)
     {
-        if (connection->ref_count == 0)
+        Connection * connection_ptr = *connection_it;
+        /// Fixme: There is a race condition here b/c we do not synchronize with Pool::Entry's copy-assignment operator
+        if (connection_ptr->ref_count == 0)
         {
-            Entry res(connection, this);
-            return res.tryForceConnected() ? res : Entry();
+            Entry res(connection_ptr, this);
+            if (res.tryForceConnected())  /// Tries to reestablish connection as well
+                return res;
+
+            auto & logger = Poco::Util::Application::instance().logger();
+            logger.information("Idle connection to mysql server cannot be recovered, dropping it.");
+
+            /// This one is disconnected, cannot be reestablished and so needs to be disposed of.
+            connection_it = connections.erase(connection_it);
+            ::delete connection_ptr;  /// TODO: Manual memory management is awkward (matches allocConnection() method)
         }
+        else
+            ++connection_it;
     }
 
     /// Throws if pool is overflowed.
     if (connections.size() >= max_connections)
         throw Poco::Exception("mysqlxx::Pool is full");
 
-    /// Allocates new connection.
-    Connection * conn = allocConnection(true);
-    if (conn)
-        return Entry(conn, this);
+    Connection * connection_ptr = allocConnection(true);
+    if (connection_ptr)
+        return {connection_ptr, this};
 
-    return Entry();
+    return {};
 }
 
 
 void Pool::removeConnection(Connection* connection)
 {
     std::lock_guard<std::mutex> lock(mutex);
@@ -199,11 +211,9 @@ void Pool::Entry::forceConnected() const
         throw Poco::RuntimeException("Tried to access NULL database connection.");
 
     Poco::Util::Application & app = Poco::Util::Application::instance();
-    if (data->conn.ping())
-        return;
 
     bool first = true;
-    do
+    while (!tryForceConnected())
     {
         if (first)
             first = false;
@@ -225,7 +235,26 @@ void Pool::Entry::forceConnected() const
             pool->rw_timeout,
             pool->enable_local_infile);
     }
-    while (!data->conn.ping());
 }
 
 
+bool Pool::Entry::tryForceConnected() const
+{
+    auto * const mysql_driver = data->conn.getDriver();
+    const auto prev_connection_id = mysql_thread_id(mysql_driver);
+    if (data->conn.ping())  /// Attempts to reestablish lost connection
+    {
+        const auto current_connection_id = mysql_thread_id(mysql_driver);
+        if (prev_connection_id != current_connection_id)
+        {
+            auto & logger = Poco::Util::Application::instance().logger();
+            logger.information("Connection to mysql server has been reestablished. Connection id changed: %d -> %d",
+                prev_connection_id, current_connection_id);
+        }
+        return true;
+    }
+
+    return false;
+}
+
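Besides the reconnect logging, the rewritten `tryGet()` switches to the standard erase-while-iterating idiom so that dead idle connections can be dropped during the scan: `erase()` invalidates the current iterator and returns the next one, so the loop must advance either through `erase()` or through `++it`, never both. A minimal sketch of that idiom, with plain ints standing in for connections:

    #include <iostream>
    #include <list>

    int main()
    {
        std::list<int> connections = {0, 3, 0, 7, 0};

        /// Drop "dead" entries (zeros here) while walking the list.
        for (auto it = connections.begin(); it != connections.end();)
        {
            if (*it == 0)
                it = connections.erase(it);  // erase() yields the next valid iterator
            else
                ++it;
        }

        for (int c : connections)
            std::cout << c << '\n';  // prints 3 then 7
    }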
Pool.h (mysqlxx)

@@ -127,10 +127,7 @@ public:
         void forceConnected() const;
 
         /// Connects to database. If connection is failed then returns false.
-        bool tryForceConnected() const
-        {
-            return data->conn.ping();
-        }
+        bool tryForceConnected() const;
 
         void incrementRefCount();
         void decrementRefCount();
base/mysqlxx/tests/CMakeLists.txt

@@ -1,2 +1,5 @@
 add_executable (mysqlxx_test mysqlxx_test.cpp)
 target_link_libraries (mysqlxx_test PRIVATE mysqlxx)
+
+add_executable (mysqlxx_pool_test mysqlxx_pool_test.cpp)
+target_link_libraries (mysqlxx_pool_test PRIVATE mysqlxx)
base/mysqlxx/tests/mysqlxx_pool_test.cpp (new file, 98 lines)
@@ -0,0 +1,98 @@
#include <mysqlxx/mysqlxx.h>

#include <chrono>
#include <iostream>
#include <sstream>
#include <thread>


namespace
{
mysqlxx::Pool::Entry getWithFailover(mysqlxx::Pool & connections_pool)
{
    using namespace std::chrono;

    constexpr size_t max_tries = 3;

    mysqlxx::Pool::Entry worker_connection;

    for (size_t try_no = 1; try_no <= max_tries; ++try_no)
    {
        try
        {
            worker_connection = connections_pool.tryGet();

            if (!worker_connection.isNull())
            {
                return worker_connection;
            }
        }
        catch (const Poco::Exception & e)
        {
            if (e.displayText().find("mysqlxx::Pool is full") != std::string::npos)
            {
                std::cerr << e.displayText() << std::endl;
            }

            std::cerr << "Connection to " << connections_pool.getDescription() << " failed: " << e.displayText() << std::endl;
        }

        std::clog << "Connection to all replicas failed " << try_no << " times" << std::endl;
        std::this_thread::sleep_for(1s);
    }

    std::stringstream message;
    message << "Connections to all replicas failed: " << connections_pool.getDescription();

    throw Poco::Exception(message.str());
}
}

int main(int, char **)
{
    using namespace std::chrono;

    const char * remote_mysql = "localhost";
    const std::string test_query = "SHOW DATABASES";

    mysqlxx::Pool mysql_conn_pool("", remote_mysql, "default", "10203040", 3306);

    size_t iteration = 0;
    while (++iteration)
    {
        std::clog << "Iteration: " << iteration << std::endl;
        try
        {
            std::clog << "Acquiring DB connection ...";
            mysqlxx::Pool::Entry worker = getWithFailover(mysql_conn_pool);
            std::clog << "ok" << std::endl;

            std::clog << "Preparing query (5s sleep) ...";
            std::this_thread::sleep_for(5s);
            mysqlxx::Query query = worker->query();
            query << test_query;
            std::clog << "ok" << std::endl;

            std::clog << "Querying result (5s sleep) ...";
            std::this_thread::sleep_for(5s);
            mysqlxx::UseQueryResult result = query.use();
            std::clog << "ok" << std::endl;

            std::clog << "Fetching result data (5s sleep) ...";
            std::this_thread::sleep_for(5s);
            size_t rows_count = 0;
            while (result.fetch())
                ++rows_count;
            std::clog << "ok" << std::endl;

            std::clog << "Read " << rows_count << " rows." << std::endl;
        }
        catch (const Poco::Exception & e)
        {
            std::cerr << "Iteration FAILED:\n" << e.displayText() << std::endl;
        }

        std::clog << "====================" << std::endl;
        std::this_thread::sleep_for(3s);
    }
}
@ -1,44 +0,0 @@
|
||||
# - Try to find btrie headers and libraries.
|
||||
#
|
||||
# Usage of this module as follows:
|
||||
#
|
||||
# find_package(btrie)
|
||||
#
|
||||
# Variables used by this module, they can change the default behaviour and need
|
||||
# to be set before calling find_package:
|
||||
#
|
||||
# BTRIE_ROOT_DIR Set this variable to the root installation of
|
||||
# btrie if the module has problems finding
|
||||
# the proper installation path.
|
||||
#
|
||||
# Variables defined by this module:
|
||||
#
|
||||
# BTRIE_FOUND System has btrie libs/headers
|
||||
# BTRIE_LIBRARIES The btrie library/libraries
|
||||
# BTRIE_INCLUDE_DIR The location of btrie headers
|
||||
|
||||
find_path(BTRIE_ROOT_DIR
|
||||
NAMES include/btrie.h
|
||||
)
|
||||
|
||||
find_library(BTRIE_LIBRARIES
|
||||
NAMES btrie
|
||||
PATHS ${BTRIE_ROOT_DIR}/lib ${BTRIE_LIBRARIES_PATHS}
|
||||
)
|
||||
|
||||
find_path(BTRIE_INCLUDE_DIR
|
||||
NAMES btrie.h
|
||||
PATHS ${BTRIE_ROOT_DIR}/include ${BTRIE_INCLUDE_PATHS}
|
||||
)
|
||||
|
||||
include(FindPackageHandleStandardArgs)
|
||||
find_package_handle_standard_args(btrie DEFAULT_MSG
|
||||
BTRIE_LIBRARIES
|
||||
BTRIE_INCLUDE_DIR
|
||||
)
|
||||
|
||||
mark_as_advanced(
|
||||
BTRIE_ROOT_DIR
|
||||
BTRIE_LIBRARIES
|
||||
BTRIE_INCLUDE_DIR
|
||||
)
|
@@ -12,13 +12,7 @@ set(CMAKE_CXX_STANDARD_LIBRARIES ${DEFAULT_LIBS})
 set(CMAKE_C_STANDARD_LIBRARIES ${DEFAULT_LIBS})
 
 # Minimal supported SDK version
-
-set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -mmacosx-version-min=10.15")
-set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -mmacosx-version-min=10.15")
-set (CMAKE_ASM_FLAGS "${CMAKE_ASM_FLAGS} -mmacosx-version-min=10.15")
-
-set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -mmacosx-version-min=10.15")
-set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -mmacosx-version-min=10.15")
+set(CMAKE_OSX_DEPLOYMENT_TARGET 10.15)
 
 # Global libraries
 
cmake/find/avro.cmake

@@ -1,3 +1,4 @@
+# Needed when using Apache Avro serialization format
 option (ENABLE_AVRO "Enable Avro" ${ENABLE_LIBRARIES})
 
 if (NOT ENABLE_AVRO)
cmake/find/ssl.cmake

@@ -1,3 +1,5 @@
+# Needed when securely connecting to an external server, e.g.
+# clickhouse-client --host ... --secure
 option(ENABLE_SSL "Enable ssl" ${ENABLE_LIBRARIES})
 
 if(NOT ENABLE_SSL)
cmake/warnings.cmake

@@ -23,8 +23,8 @@ option (WEVERYTHING "Enable -Weverything option with some exceptions." ON)
 
 # Control maximum size of stack frames. It can be important if the code is run in fibers with small stack size.
 # Only in release build because debug has too large stack frames.
-if ((NOT CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") AND (NOT SANITIZE))
-    add_warning(frame-larger-than=32768)
+if ((NOT CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") AND (NOT SANITIZE) AND (NOT CMAKE_CXX_COMPILER_ID MATCHES "AppleClang"))
+    add_warning(frame-larger-than=65536)
 endif ()
 
 if (COMPILER_CLANG)
contrib/AMQP-CPP (vendored submodule): 2 changed lines

@@ -1 +1 @@
-Subproject commit d63e1f016582e9faaaf279aa24513087a07bc6e7
+Subproject commit 03781aaff0f10ef41f902b8cf865fe0067180c10
contrib/CMakeLists.txt (vendored): 5 changed lines

@@ -21,6 +21,7 @@ endif()
 
 set_property(DIRECTORY PROPERTY EXCLUDE_FROM_ALL 1)
 
+add_subdirectory (antlr4-runtime-cmake)
 add_subdirectory (boost-cmake)
 add_subdirectory (cctz-cmake)
 add_subdirectory (consistent-hashing-sumbur)
@@ -66,10 +67,6 @@ if (USE_INTERNAL_FARMHASH_LIBRARY)
     add_subdirectory (libfarmhash)
 endif ()
 
-if (USE_INTERNAL_BTRIE_LIBRARY)
-    add_subdirectory (libbtrie)
-endif ()
-
 if (USE_INTERNAL_ZLIB_LIBRARY)
     set (ZLIB_ENABLE_TESTS 0 CACHE INTERNAL "")
     set (SKIP_INSTALL_ALL 1 CACHE INTERNAL "")
contrib/antlr4-runtime (new vendored submodule)

@@ -0,0 +1 @@
+Subproject commit a2fa7b76e2ee16d2ad955e9214a90bbf79da66fc
contrib/antlr4-runtime-cmake/CMakeLists.txt (new file, 156 lines)
@@ -0,0 +1,156 @@
set (LIBRARY_DIR ${ClickHouse_SOURCE_DIR}/contrib/antlr4-runtime)

set (SRCS
    ${LIBRARY_DIR}/ANTLRErrorListener.cpp
    ${LIBRARY_DIR}/ANTLRErrorStrategy.cpp
    ${LIBRARY_DIR}/ANTLRFileStream.cpp
    ${LIBRARY_DIR}/ANTLRInputStream.cpp
    ${LIBRARY_DIR}/atn/AbstractPredicateTransition.cpp
    ${LIBRARY_DIR}/atn/ActionTransition.cpp
    ${LIBRARY_DIR}/atn/AmbiguityInfo.cpp
    ${LIBRARY_DIR}/atn/ArrayPredictionContext.cpp
    ${LIBRARY_DIR}/atn/ATN.cpp
    ${LIBRARY_DIR}/atn/ATNConfig.cpp
    ${LIBRARY_DIR}/atn/ATNConfigSet.cpp
    ${LIBRARY_DIR}/atn/ATNDeserializationOptions.cpp
    ${LIBRARY_DIR}/atn/ATNDeserializer.cpp
    ${LIBRARY_DIR}/atn/ATNSerializer.cpp
    ${LIBRARY_DIR}/atn/ATNSimulator.cpp
    ${LIBRARY_DIR}/atn/ATNState.cpp
    ${LIBRARY_DIR}/atn/AtomTransition.cpp
    ${LIBRARY_DIR}/atn/BasicBlockStartState.cpp
    ${LIBRARY_DIR}/atn/BasicState.cpp
    ${LIBRARY_DIR}/atn/BlockEndState.cpp
    ${LIBRARY_DIR}/atn/BlockStartState.cpp
    ${LIBRARY_DIR}/atn/ContextSensitivityInfo.cpp
    ${LIBRARY_DIR}/atn/DecisionEventInfo.cpp
    ${LIBRARY_DIR}/atn/DecisionInfo.cpp
    ${LIBRARY_DIR}/atn/DecisionState.cpp
    ${LIBRARY_DIR}/atn/EmptyPredictionContext.cpp
    ${LIBRARY_DIR}/atn/EpsilonTransition.cpp
    ${LIBRARY_DIR}/atn/ErrorInfo.cpp
    ${LIBRARY_DIR}/atn/LexerAction.cpp
    ${LIBRARY_DIR}/atn/LexerActionExecutor.cpp
    ${LIBRARY_DIR}/atn/LexerATNConfig.cpp
    ${LIBRARY_DIR}/atn/LexerATNSimulator.cpp
    ${LIBRARY_DIR}/atn/LexerChannelAction.cpp
    ${LIBRARY_DIR}/atn/LexerCustomAction.cpp
    ${LIBRARY_DIR}/atn/LexerIndexedCustomAction.cpp
    ${LIBRARY_DIR}/atn/LexerModeAction.cpp
    ${LIBRARY_DIR}/atn/LexerMoreAction.cpp
    ${LIBRARY_DIR}/atn/LexerPopModeAction.cpp
    ${LIBRARY_DIR}/atn/LexerPushModeAction.cpp
    ${LIBRARY_DIR}/atn/LexerSkipAction.cpp
    ${LIBRARY_DIR}/atn/LexerTypeAction.cpp
    ${LIBRARY_DIR}/atn/LL1Analyzer.cpp
    ${LIBRARY_DIR}/atn/LookaheadEventInfo.cpp
    ${LIBRARY_DIR}/atn/LoopEndState.cpp
    ${LIBRARY_DIR}/atn/NotSetTransition.cpp
    ${LIBRARY_DIR}/atn/OrderedATNConfigSet.cpp
    ${LIBRARY_DIR}/atn/ParseInfo.cpp
    ${LIBRARY_DIR}/atn/ParserATNSimulator.cpp
    ${LIBRARY_DIR}/atn/PlusBlockStartState.cpp
    ${LIBRARY_DIR}/atn/PlusLoopbackState.cpp
    ${LIBRARY_DIR}/atn/PrecedencePredicateTransition.cpp
    ${LIBRARY_DIR}/atn/PredicateEvalInfo.cpp
    ${LIBRARY_DIR}/atn/PredicateTransition.cpp
    ${LIBRARY_DIR}/atn/PredictionContext.cpp
    ${LIBRARY_DIR}/atn/PredictionMode.cpp
    ${LIBRARY_DIR}/atn/ProfilingATNSimulator.cpp
    ${LIBRARY_DIR}/atn/RangeTransition.cpp
    ${LIBRARY_DIR}/atn/RuleStartState.cpp
    ${LIBRARY_DIR}/atn/RuleStopState.cpp
    ${LIBRARY_DIR}/atn/RuleTransition.cpp
    ${LIBRARY_DIR}/atn/SemanticContext.cpp
    ${LIBRARY_DIR}/atn/SetTransition.cpp
    ${LIBRARY_DIR}/atn/SingletonPredictionContext.cpp
    ${LIBRARY_DIR}/atn/StarBlockStartState.cpp
    ${LIBRARY_DIR}/atn/StarLoopbackState.cpp
    ${LIBRARY_DIR}/atn/StarLoopEntryState.cpp
    ${LIBRARY_DIR}/atn/TokensStartState.cpp
    ${LIBRARY_DIR}/atn/Transition.cpp
    ${LIBRARY_DIR}/atn/WildcardTransition.cpp
    ${LIBRARY_DIR}/BailErrorStrategy.cpp
    ${LIBRARY_DIR}/BaseErrorListener.cpp
    ${LIBRARY_DIR}/BufferedTokenStream.cpp
    ${LIBRARY_DIR}/CharStream.cpp
    ${LIBRARY_DIR}/CommonToken.cpp
    ${LIBRARY_DIR}/CommonTokenFactory.cpp
    ${LIBRARY_DIR}/CommonTokenStream.cpp
    ${LIBRARY_DIR}/ConsoleErrorListener.cpp
    ${LIBRARY_DIR}/DefaultErrorStrategy.cpp
    ${LIBRARY_DIR}/dfa/DFA.cpp
    ${LIBRARY_DIR}/dfa/DFASerializer.cpp
    ${LIBRARY_DIR}/dfa/DFAState.cpp
    ${LIBRARY_DIR}/dfa/LexerDFASerializer.cpp
    ${LIBRARY_DIR}/DiagnosticErrorListener.cpp
    ${LIBRARY_DIR}/Exceptions.cpp
    ${LIBRARY_DIR}/FailedPredicateException.cpp
    ${LIBRARY_DIR}/InputMismatchException.cpp
    ${LIBRARY_DIR}/InterpreterRuleContext.cpp
    ${LIBRARY_DIR}/IntStream.cpp
    ${LIBRARY_DIR}/Lexer.cpp
    ${LIBRARY_DIR}/LexerInterpreter.cpp
    ${LIBRARY_DIR}/LexerNoViableAltException.cpp
    ${LIBRARY_DIR}/ListTokenSource.cpp
    ${LIBRARY_DIR}/misc/InterpreterDataReader.cpp
    ${LIBRARY_DIR}/misc/Interval.cpp
    ${LIBRARY_DIR}/misc/IntervalSet.cpp
    ${LIBRARY_DIR}/misc/MurmurHash.cpp
    ${LIBRARY_DIR}/misc/Predicate.cpp
    ${LIBRARY_DIR}/NoViableAltException.cpp
    ${LIBRARY_DIR}/Parser.cpp
    ${LIBRARY_DIR}/ParserInterpreter.cpp
    ${LIBRARY_DIR}/ParserRuleContext.cpp
    ${LIBRARY_DIR}/ProxyErrorListener.cpp
    ${LIBRARY_DIR}/RecognitionException.cpp
    ${LIBRARY_DIR}/Recognizer.cpp
    ${LIBRARY_DIR}/RuleContext.cpp
    ${LIBRARY_DIR}/RuleContextWithAltNum.cpp
    ${LIBRARY_DIR}/RuntimeMetaData.cpp
    ${LIBRARY_DIR}/support/Any.cpp
    ${LIBRARY_DIR}/support/Arrays.cpp
    ${LIBRARY_DIR}/support/CPPUtils.cpp
    ${LIBRARY_DIR}/support/guid.cpp
    ${LIBRARY_DIR}/support/StringUtils.cpp
    ${LIBRARY_DIR}/Token.cpp
    ${LIBRARY_DIR}/TokenSource.cpp
    ${LIBRARY_DIR}/TokenStream.cpp
    ${LIBRARY_DIR}/TokenStreamRewriter.cpp
    ${LIBRARY_DIR}/tree/ErrorNode.cpp
    ${LIBRARY_DIR}/tree/ErrorNodeImpl.cpp
    ${LIBRARY_DIR}/tree/IterativeParseTreeWalker.cpp
    ${LIBRARY_DIR}/tree/ParseTree.cpp
    ${LIBRARY_DIR}/tree/ParseTreeListener.cpp
    ${LIBRARY_DIR}/tree/ParseTreeVisitor.cpp
    ${LIBRARY_DIR}/tree/ParseTreeWalker.cpp
    ${LIBRARY_DIR}/tree/pattern/Chunk.cpp
    ${LIBRARY_DIR}/tree/pattern/ParseTreeMatch.cpp
    ${LIBRARY_DIR}/tree/pattern/ParseTreePattern.cpp
    ${LIBRARY_DIR}/tree/pattern/ParseTreePatternMatcher.cpp
    ${LIBRARY_DIR}/tree/pattern/RuleTagToken.cpp
    ${LIBRARY_DIR}/tree/pattern/TagChunk.cpp
    ${LIBRARY_DIR}/tree/pattern/TextChunk.cpp
    ${LIBRARY_DIR}/tree/pattern/TokenTagToken.cpp
    ${LIBRARY_DIR}/tree/TerminalNode.cpp
    ${LIBRARY_DIR}/tree/TerminalNodeImpl.cpp
    ${LIBRARY_DIR}/tree/Trees.cpp
    ${LIBRARY_DIR}/tree/xpath/XPath.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathElement.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathLexer.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathLexerErrorListener.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathRuleAnywhereElement.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathRuleElement.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathTokenAnywhereElement.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathTokenElement.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathWildcardAnywhereElement.cpp
    ${LIBRARY_DIR}/tree/xpath/XPathWildcardElement.cpp
    ${LIBRARY_DIR}/UnbufferedCharStream.cpp
    ${LIBRARY_DIR}/UnbufferedTokenStream.cpp
    ${LIBRARY_DIR}/Vocabulary.cpp
    ${LIBRARY_DIR}/WritableToken.cpp
)

add_library (antlr4-runtime ${SRCS})

target_include_directories (antlr4-runtime SYSTEM PUBLIC ${LIBRARY_DIR})
contrib/cassandra (vendored submodule): 2 changed lines

@@ -1 +1 @@
-Subproject commit a49b4e0e2696a4b8ef286a5b9538d1cbe8490509
+Subproject commit d10187efb25b26da391def077edf3c6f2f3a23dd
contrib/libbtrie/CMakeLists.txt (deleted)

@@ -1,6 +0,0 @@
add_library(btrie
    src/btrie.c
    include/btrie.h
)

target_include_directories (btrie SYSTEM PUBLIC include)
contrib/libbtrie LICENSE (deleted)

@@ -1,23 +0,0 @@
Copyright (c) 2013, CobbLiu
All rights reserved.

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

  Redistributions of source code must retain the above copyright notice, this
  list of conditions and the following disclaimer.

  Redistributions in binary form must reproduce the above copyright notice, this
  list of conditions and the following disclaimer in the documentation and/or
  other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
contrib/libbtrie/include/btrie.h (deleted)

@@ -1,160 +0,0 @@
#pragma once

#if defined (__cplusplus)
extern "C" {
#endif

#include <stdlib.h>
#include <stdint.h>

/**
 * In btrie, each leaf means one bit in ip tree.
 * Left means 0, and right means 1.
 */

#define BTRIE_NULL (uintptr_t) -1

#if !defined(BTRIE_MAX_PAGES)
/// 54 ip per page. 8 bytes memory per page when empty
#define BTRIE_MAX_PAGES 1024 * 2048 /// 128m ips , ~16mb ram when empty
// #define BTRIE_MAX_PAGES 1024 * 65535 /// 4g ips (whole ipv4), ~512mb ram when empty
#endif

typedef struct btrie_node_s btrie_node_t;

struct btrie_node_s {
    btrie_node_t *right;
    btrie_node_t *left;
    btrie_node_t *parent;
    uintptr_t value;
};


typedef struct btrie_s {
    btrie_node_t *root;

    btrie_node_t *free;  /* free list of btrie */
    char *start;
    size_t size;

    /*
     * memory pool.
     * memory management(esp free) will be so easy by using this facility.
     */
    char *pools[BTRIE_MAX_PAGES];
    size_t len;
} btrie_t;


/**
 * Create an empty btrie
 *
 * @Return:
 * An ip radix_tree created.
 * NULL if creation failed.
 */
btrie_t *btrie_create();

/**
 * Destroy the ip radix_tree
 *
 * @Return:
 * OK if deletion succeed.
 * ERROR if error occurs while deleting.
 */
int btrie_destroy(btrie_t *tree);

/**
 * Count the nodes in the radix tree.
 */
size_t btrie_count(btrie_t *tree);

/**
 * Return the allocated number of bytes.
 */
size_t btrie_allocated(btrie_t *tree);


/**
 * Add an ipv4 into btrie
 *
 * @Args:
 * key: ip address
 * mask: key's mask
 * value: value of this IP, may be NULL.
 *
 * @Return:
 * OK for success.
 * ERROR for failure.
 */
int btrie_insert(btrie_t *tree, uint32_t key, uint32_t mask,
    uintptr_t value);


/**
 * Delete an ipv4 from btrie
 *
 * @Args:
 *
 * @Return:
 * OK for success.
 * ERROR for failure.
 */
int btrie_delete(btrie_t *tree, uint32_t key, uint32_t mask);


/**
 * Find an ipv4 from btrie
 *
 * @Args:
 *
 * @Return:
 * Value if succeed.
 * NULL if failed.
 */
uintptr_t btrie_find(btrie_t *tree, uint32_t key);


/**
 * Add an ipv6 into btrie
 *
 * @Args:
 * key: ip address
 * mask: key's mask
 * value: value of this IP, may be NULL.
 *
 * @Return:
 * OK for success.
 * ERROR for failure.
 */
int btrie_insert_a6(btrie_t *tree, const uint8_t *key, const uint8_t *mask,
    uintptr_t value);

/**
 * Delete an ipv6 from btrie
 *
 * @Args:
 *
 * @Return:
 * OK for success.
 * ERROR for failure.
 */
int btrie_delete_a6(btrie_t *tree, const uint8_t *key, const uint8_t *mask);

/**
 * Find an ipv6 from btrie
 *
 * @Args:
 *
 * @Return:
 * Value if succeed.
 * NULL if failed.
 */
uintptr_t btrie_find_a6(btrie_t *tree, const uint8_t *key);

#if defined (__cplusplus)
}
#endif
|
||||
#include <stdlib.h>
|
||||
#include <string.h>
|
||||
#include <btrie.h>
|
||||
|
||||
#define PAGE_SIZE 4096
|
||||
|
||||
|
||||
static btrie_node_t *
|
||||
btrie_alloc(btrie_t *tree)
|
||||
{
|
||||
btrie_node_t *p;
|
||||
|
||||
if (tree->free) {
|
||||
p = tree->free;
|
||||
tree->free = tree->free->right;
|
||||
return p;
|
||||
}
|
||||
|
||||
if (tree->size < sizeof(btrie_node_t)) {
|
||||
tree->start = (char *) calloc(sizeof(char), PAGE_SIZE);
|
||||
if (tree->start == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
tree->pools[tree->len++] = tree->start;
|
||||
tree->size = PAGE_SIZE;
|
||||
}
|
||||
|
||||
p = (btrie_node_t *) tree->start;
|
||||
|
||||
tree->start += sizeof(btrie_node_t);
|
||||
tree->size -= sizeof(btrie_node_t);
|
||||
|
||||
return p;
|
||||
}
|
||||
|
||||
|
||||
btrie_t *
|
||||
btrie_create()
|
||||
{
|
||||
btrie_t *tree = (btrie_t *) malloc(sizeof(btrie_t));
|
||||
if (tree == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
tree->free = NULL;
|
||||
tree->start = NULL;
|
||||
tree->size = 0;
|
||||
memset(tree->pools, 0, sizeof(btrie_t *) * BTRIE_MAX_PAGES);
|
||||
tree->len = 0;
|
||||
|
||||
tree->root = btrie_alloc(tree);
|
||||
if (tree->root == NULL) {
|
||||
return NULL;
|
||||
}
|
||||
|
||||
tree->root->right = NULL;
|
||||
tree->root->left = NULL;
|
||||
tree->root->parent = NULL;
|
||||
tree->root->value = BTRIE_NULL;
|
||||
|
||||
return tree;
|
||||
}
|
||||
|
||||
static size_t
|
||||
subtree_weight(btrie_node_t *node)
|
||||
{
|
||||
size_t weight = 1;
|
||||
if (node->left) {
|
||||
weight += subtree_weight(node->left);
|
||||
}
|
||||
if (node->right) {
|
||||
weight += subtree_weight(node->right);
|
||||
}
|
||||
return weight;
|
||||
}
|
||||
|
||||
size_t
|
||||
btrie_count(btrie_t *tree)
|
||||
{
|
||||
if (tree->root == NULL) {
|
||||
return 0;
|
||||
}
|
||||
|
||||
return subtree_weight(tree->root);
|
||||
}
|
||||
|
||||
size_t
|
||||
btrie_allocated(btrie_t *tree)
|
||||
{
|
||||
return tree->len * PAGE_SIZE;
|
||||
}
|
||||
|
||||
|
||||
int
|
||||
btrie_insert(btrie_t *tree, uint32_t key, uint32_t mask,
|
||||
uintptr_t value)
|
||||
{
|
||||
uint32_t bit;
|
||||
btrie_node_t *node, *next;
|
||||
|
||||
bit = 0x80000000;
|
||||
|
||||
node = tree->root;
|
||||
next = tree->root;
|
||||
|
||||
while (bit & mask) {
|
||||
if (key & bit) {
|
||||
next = node->right;
|
||||
|
||||
} else {
|
||||
next = node->left;
|
||||
}
|
||||
|
||||
if (next == NULL) {
|
||||
break;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
node = next;
|
||||
}
|
||||
|
||||
if (next) {
|
||||
if (node->value != BTRIE_NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
node->value = value;
|
||||
return 0;
|
||||
}
|
||||
|
||||
while (bit & mask) {
|
||||
next = btrie_alloc(tree);
|
||||
if (next == NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
next->right = NULL;
|
||||
next->left = NULL;
|
||||
next->parent = node;
|
||||
next->value = BTRIE_NULL;
|
||||
|
||||
if (key & bit) {
|
||||
node->right = next;
|
||||
|
||||
} else {
|
||||
node->left = next;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
node = next;
|
||||
}
|
||||
|
||||
node->value = value;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
int
|
||||
btrie_delete(btrie_t *tree, uint32_t key, uint32_t mask)
|
||||
{
|
||||
uint32_t bit;
|
||||
btrie_node_t *node;
|
||||
|
||||
bit = 0x80000000;
|
||||
node = tree->root;
|
||||
|
||||
while (node && (bit & mask)) {
|
||||
if (key & bit) {
|
||||
node = node->right;
|
||||
|
||||
} else {
|
||||
node = node->left;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
}
|
||||
|
||||
if (node == NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (node->right || node->left) {
|
||||
if (node->value != BTRIE_NULL) {
|
||||
node->value = BTRIE_NULL;
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -1;
|
||||
}
|
||||
|
||||
for ( ;; ) {
|
||||
if (node->parent->right == node) {
|
||||
node->parent->right = NULL;
|
||||
|
||||
} else {
|
||||
node->parent->left = NULL;
|
||||
}
|
||||
|
||||
node->right = tree->free;
|
||||
tree->free = node;
|
||||
|
||||
node = node->parent;
|
||||
|
||||
if (node->right || node->left) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (node->value != BTRIE_NULL) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (node->parent == NULL) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
uintptr_t
|
||||
btrie_find(btrie_t *tree, uint32_t key)
|
||||
{
|
||||
uint32_t bit;
|
||||
uintptr_t value;
|
||||
btrie_node_t *node;
|
||||
|
||||
bit = 0x80000000;
|
||||
value = BTRIE_NULL;
|
||||
node = tree->root;
|
||||
|
||||
while (node) {
|
||||
if (node->value != BTRIE_NULL) {
|
||||
value = node->value;
|
||||
}
|
||||
|
||||
if (key & bit) {
|
||||
node = node->right;
|
||||
|
||||
} else {
|
||||
node = node->left;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
}
|
||||
|
||||
return value;
|
||||
}
|
||||
|
||||
|
||||
int
|
||||
btrie_insert_a6(btrie_t *tree, const uint8_t *key, const uint8_t *mask,
|
||||
uintptr_t value)
|
||||
{
|
||||
uint8_t bit;
|
||||
unsigned int i;
|
||||
btrie_node_t *node, *next;
|
||||
|
||||
i = 0;
|
||||
bit = 0x80;
|
||||
|
||||
node = tree->root;
|
||||
next = tree->root;
|
||||
|
||||
while (bit & mask[i]) {
|
||||
if (key[i] & bit) {
|
||||
next = node->right;
|
||||
|
||||
} else {
|
||||
next = node->left;
|
||||
}
|
||||
|
||||
if (next == NULL) {
|
||||
break;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
node = next;
|
||||
|
||||
if (bit == 0) {
|
||||
if (++i == 16) {
|
||||
break;
|
||||
}
|
||||
|
||||
bit = 0x80;
|
||||
}
|
||||
}
|
||||
|
||||
if (next) {
|
||||
if (node->value != BTRIE_NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
node->value = value;
|
||||
return 0;
|
||||
}
|
||||
|
||||
while (bit & mask[i]) {
|
||||
next = btrie_alloc(tree);
|
||||
if (next == NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
next->right = NULL;
|
||||
next->left = NULL;
|
||||
next->parent = node;
|
||||
next->value = BTRIE_NULL;
|
||||
|
||||
if (key[i] & bit) {
|
||||
node->right = next;
|
||||
|
||||
} else {
|
||||
node->left = next;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
node = next;
|
||||
|
||||
if (bit == 0) {
|
||||
if (++i == 16) {
|
||||
break;
|
||||
}
|
||||
|
||||
bit = 0x80;
|
||||
}
|
||||
}
|
||||
|
||||
node->value = value;
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
int
|
||||
btrie_delete_a6(btrie_t *tree, const uint8_t *key, const uint8_t *mask)
|
||||
{
|
||||
uint8_t bit;
|
||||
unsigned int i;
|
||||
btrie_node_t *node;
|
||||
|
||||
i = 0;
|
||||
bit = 0x80;
|
||||
node = tree->root;
|
||||
|
||||
while (node && (bit & mask[i])) {
|
||||
if (key[i] & bit) {
|
||||
node = node->right;
|
||||
|
||||
} else {
|
||||
node = node->left;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
|
||||
if (bit == 0) {
|
||||
if (++i == 16) {
|
||||
break;
|
||||
}
|
||||
|
||||
bit = 0x80;
|
||||
}
|
||||
}
|
||||
|
||||
if (node == NULL) {
|
||||
return -1;
|
||||
}
|
||||
|
||||
if (node->right || node->left) {
|
||||
if (node->value != BTRIE_NULL) {
|
||||
node->value = BTRIE_NULL;
|
||||
return 0;
|
||||
}
|
||||
|
||||
return -1;
|
||||
}
|
||||
|
||||
for ( ;; ) {
|
||||
if (node->parent->right == node) {
|
||||
node->parent->right = NULL;
|
||||
|
||||
} else {
|
||||
node->parent->left = NULL;
|
||||
}
|
||||
|
||||
node->right = tree->free;
|
||||
tree->free = node;
|
||||
|
||||
node = node->parent;
|
||||
|
||||
if (node->right || node->left) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (node->value != BTRIE_NULL) {
|
||||
break;
|
||||
}
|
||||
|
||||
if (node->parent == NULL) {
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
||||
|
||||
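/* IPv6 longest-prefix match: the same walk as btrie_find(), over 16 key bytes. */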
uintptr_t
|
||||
btrie_find_a6(btrie_t *tree, const uint8_t *key)
|
||||
{
|
||||
uint8_t bit;
|
||||
uintptr_t value;
|
||||
unsigned int i;
|
||||
btrie_node_t *node;
|
||||
|
||||
i = 0;
|
||||
bit = 0x80;
|
||||
value = BTRIE_NULL;
|
||||
node = tree->root;
|
||||
|
||||
while (node) {
|
||||
if (node->value != BTRIE_NULL) {
|
||||
value = node->value;
|
||||
}
|
||||
|
||||
if (key[i] & bit) {
|
||||
node = node->right;
|
||||
|
||||
} else {
|
||||
node = node->left;
|
||||
}
|
||||
|
||||
bit >>= 1;
|
||||
|
||||
if (bit == 0) {
|
||||
i++;
|
||||
bit = 0x80;
|
||||
}
|
||||
}
|
||||
|
||||
return value;
|
||||
}
|
||||
|
||||
|
||||
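/* Nodes are carved out of memory pools, so freeing the pools
 * releases every node at once; no per-node walk is needed. */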
int
|
||||
btrie_destroy(btrie_t *tree)
|
||||
{
|
||||
size_t i;
|
||||
|
||||
|
||||
/* free memory pools */
|
||||
for (i = 0; i < tree->len; i++) {
|
||||
free(tree->pools[i]);
|
||||
}
|
||||
|
||||
free(tree);
|
||||
|
||||
return 0;
|
||||
}
|
@ -1,103 +0,0 @@
|
||||
#include <stdio.h>
|
||||
#include <btrie.h>
|
||||
|
||||
int main()
|
||||
{
|
||||
btrie_t *it;
|
||||
int ret;
|
||||
|
||||
uint8_t prefix_v6[16] = {0xde, 0xad, 0xbe, 0xef};
|
||||
uint8_t mask_v6[16] = {0xff, 0xff, 0xff};
|
||||
uint8_t ip_v6[16] = {0xde, 0xad, 0xbe, 0xef, 0xde};
|
||||
|
||||
it = btrie_create();
|
||||
if (it == NULL) {
|
||||
printf("create error!\n");
|
||||
return 0;
|
||||
}
|
||||
|
||||
//add 101.45.69.50/16
|
||||
ret = btrie_insert(it, 1697465650, 0xffff0000, 1);
|
||||
if (ret != 0) {
|
||||
printf("insert 1 error.\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
//add 10.45.69.50/16
|
||||
ret = btrie_insert(it, 170738994, 0xffff0000, 1);
|
||||
if (ret != 0) {
|
||||
printf("insert 2 error.\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
//add 10.45.79.50/16 -- same /16 prefix as 10.45.69.50/16 above, so this insert is expected to fail
|
||||
ret = btrie_insert(it, 170741554, 0xffff0000, 1);
|
||||
if (ret == 0) {
|
||||
printf("insert 3 error.\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
//add 102.45.79.50/24
|
||||
ret = btrie_insert(it, 1714245426, 0xffffff00, 1);
|
||||
if (ret != 0) {
|
||||
printf("insert 4 error.\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
ret = btrie_find(it, 170741554);
|
||||
if (ret == 1) {
|
||||
printf("test case 1 passed\n");
|
||||
} else {
|
||||
printf("test case 1 error\n");
|
||||
}
|
||||
|
||||
ret = btrie_find(it, 170786817);
|
||||
if (ret != 1) {
|
||||
printf("test case 2 passed\n");
|
||||
} else {
|
||||
printf("test case 2 error\n");
|
||||
}
|
||||
|
||||
ret = btrie_delete(it, 1714245426, 0xffffff00);
|
||||
if (ret != 0) {
|
||||
printf("delete 1 error\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
ret = btrie_find(it, 1714245426);
|
||||
if (ret != 1) {
|
||||
printf("test case 3 passed\n");
|
||||
} else {
|
||||
printf("test case 3 error\n");
|
||||
}
|
||||
|
||||
//add dead:beef::/32
|
||||
ret = btrie_insert_a6(it, prefix_v6, mask_v6, 1);
|
||||
if (ret != 0) {
|
||||
printf("insert 5 error\n");
|
||||
goto error;
|
||||
}
|
||||
|
||||
ret = btrie_find_a6(it, ip_v6);
|
||||
if (ret == 1) {
|
||||
printf("test case 4 passed\n");
|
||||
} else {
|
||||
printf("test case 4 error\n");
|
||||
}
|
||||
|
||||
// insert 4m ips
|
||||
for (size_t ip = 1; ip < 1024 * 1024 * 4; ++ip) {
|
||||
ret = btrie_insert(it, ip, 0xffffffff, 1);
|
||||
if (ret != 0) {
|
||||
printf("insert 5 error (%d) (%zu) .\n", ret, ip);
|
||||
goto error;
|
||||
}
|
||||
}
|
||||
|
||||
return 0;
|
||||
|
||||
error:
|
||||
btrie_destroy(it);
|
||||
printf("test failed\n");
|
||||
return 1;
|
||||
}
|
2
contrib/librdkafka
vendored
@ -1 +1 @@
|
||||
Subproject commit 2090cbf56b715247ec2be7f768707a7ab1bf7ede
|
||||
Subproject commit 9902bc4fb18bb441fa55ca154b341cdda191e5d3
|
@ -22,7 +22,16 @@ set_source_files_properties(${LIBUNWIND_C_SOURCES} PROPERTIES COMPILE_FLAGS "-st
|
||||
set(LIBUNWIND_ASM_SOURCES
|
||||
${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersRestore.S
|
||||
${LIBUNWIND_SOURCE_DIR}/src/UnwindRegistersSave.S)
|
||||
set_source_files_properties(${LIBUNWIND_ASM_SOURCES} PROPERTIES LANGUAGE C)
|
||||
|
||||
# CMake doesn't pass the correct architecture for Apple prior to CMake 3.19 [1]
|
||||
# Workaround these two issues by compiling as C.
|
||||
#
|
||||
# [1]: https://gitlab.kitware.com/cmake/cmake/-/issues/20771
|
||||
if (APPLE AND CMAKE_VERSION VERSION_LESS 3.19)
|
||||
set_source_files_properties(${LIBUNWIND_ASM_SOURCES} PROPERTIES LANGUAGE C)
|
||||
else()
|
||||
enable_language(ASM)
|
||||
endif()
|
||||
|
||||
set(LIBUNWIND_SOURCES
|
||||
${LIBUNWIND_CXX_SOURCES}
|
||||
|
2
contrib/mariadb-connector-c
vendored
@ -1 +1 @@
|
||||
Subproject commit 1485b0de3eaa1508dfe49a5ba1e4aa2a71fd8335
|
||||
Subproject commit e05523ca7c1fb8d095b612a1b1cfe96e199ffb17
|
2
contrib/openldap
vendored
@ -1 +1 @@
|
||||
Subproject commit 34b9ba94b30319ed6389a4e001d057f7983fe363
|
||||
Subproject commit 0208811b6043ca06fda8631a5e473df1ec515ccb
|
2
contrib/poco
vendored
@ -1 +1 @@
|
||||
Subproject commit f49c6ab8d3aa71828bd1b411485c21722e8c9d82
|
||||
Subproject commit f3d791f6568b99366d089b4479f76a515beb66d5
|
96
debian/clickhouse-server.init
vendored
@ -67,26 +67,6 @@ if uname -mpi | grep -q 'x86_64'; then
|
||||
fi
|
||||
|
||||
|
||||
is_running()
|
||||
{
|
||||
pgrep --pidfile "$CLICKHOUSE_PIDFILE" $(echo "${PROGRAM}" | cut -c1-15) 1> /dev/null 2> /dev/null
|
||||
}
|
||||
|
||||
|
||||
wait_for_done()
|
||||
{
|
||||
timeout=$1
|
||||
attempts=0
|
||||
while is_running; do
|
||||
attempts=$(($attempts + 1))
|
||||
if [ -n "$timeout" ] && [ $attempts -gt $timeout ]; then
|
||||
return 1
|
||||
fi
|
||||
sleep 1
|
||||
done
|
||||
}
|
||||
|
||||
|
||||
die()
|
||||
{
|
||||
echo $1 >&2
|
||||
@ -105,49 +85,7 @@ check_config()
|
||||
|
||||
initdb()
|
||||
{
|
||||
if [ -x "$CLICKHOUSE_BINDIR/$EXTRACT_FROM_CONFIG" ]; then
|
||||
CLICKHOUSE_DATADIR_FROM_CONFIG=$(su -s $SHELL ${CLICKHOUSE_USER} -c "$CLICKHOUSE_BINDIR/$EXTRACT_FROM_CONFIG --config-file=\"$CLICKHOUSE_CONFIG\" --key=path")
|
||||
if [ "(" "$?" -ne "0" ")" -o "(" -z "${CLICKHOUSE_DATADIR_FROM_CONFIG}" ")" ]; then
|
||||
die "Cannot obtain value of path from config file: ${CLICKHOUSE_CONFIG}";
|
||||
fi
|
||||
echo "Path to data directory in ${CLICKHOUSE_CONFIG}: ${CLICKHOUSE_DATADIR_FROM_CONFIG}"
|
||||
else
|
||||
CLICKHOUSE_DATADIR_FROM_CONFIG=$CLICKHOUSE_DATADIR
|
||||
fi
|
||||
|
||||
if ! getent passwd ${CLICKHOUSE_USER} >/dev/null; then
|
||||
echo "Can't chown to non-existing user ${CLICKHOUSE_USER}"
|
||||
return
|
||||
fi
|
||||
if ! getent group ${CLICKHOUSE_GROUP} >/dev/null; then
|
||||
echo "Can't chown to non-existing group ${CLICKHOUSE_GROUP}"
|
||||
return
|
||||
fi
|
||||
|
||||
if ! $(su -s $SHELL ${CLICKHOUSE_USER} -c "test -r ${CLICKHOUSE_CONFIG}"); then
|
||||
echo "Warning! clickhouse config [${CLICKHOUSE_CONFIG}] not readable by user [${CLICKHOUSE_USER}]"
|
||||
fi
|
||||
|
||||
if ! $(su -s $SHELL ${CLICKHOUSE_USER} -c "test -O \"${CLICKHOUSE_DATADIR_FROM_CONFIG}\" && test -G \"${CLICKHOUSE_DATADIR_FROM_CONFIG}\""); then
|
||||
if [ $(dirname "${CLICKHOUSE_DATADIR_FROM_CONFIG}") = "/" ]; then
|
||||
echo "Directory ${CLICKHOUSE_DATADIR_FROM_CONFIG} seems too dangerous to chown."
|
||||
else
|
||||
if [ ! -e "${CLICKHOUSE_DATADIR_FROM_CONFIG}" ]; then
|
||||
echo "Creating directory ${CLICKHOUSE_DATADIR_FROM_CONFIG}"
|
||||
mkdir -p "${CLICKHOUSE_DATADIR_FROM_CONFIG}"
|
||||
fi
|
||||
|
||||
echo "Changing owner of [${CLICKHOUSE_DATADIR_FROM_CONFIG}] to [${CLICKHOUSE_USER}:${CLICKHOUSE_GROUP}]"
|
||||
chown -R ${CLICKHOUSE_USER}:${CLICKHOUSE_GROUP} "${CLICKHOUSE_DATADIR_FROM_CONFIG}"
|
||||
fi
|
||||
fi
|
||||
|
||||
if ! $(su -s $SHELL ${CLICKHOUSE_USER} -c "test -w ${CLICKHOUSE_LOGDIR}"); then
|
||||
echo "Changing owner of [${CLICKHOUSE_LOGDIR}/*] to [${CLICKHOUSE_USER}:${CLICKHOUSE_GROUP}]"
|
||||
chown -R ${CLICKHOUSE_USER}:${CLICKHOUSE_GROUP} ${CLICKHOUSE_LOGDIR}/*
|
||||
echo "Changing owner of [${CLICKHOUSE_LOGDIR}] to [${CLICKHOUSE_LOGDIR_USER}:${CLICKHOUSE_GROUP}]"
|
||||
chown ${CLICKHOUSE_LOGDIR_USER}:${CLICKHOUSE_GROUP} ${CLICKHOUSE_LOGDIR}
|
||||
fi
|
||||
${CLICKHOUSE_GENERIC_PROGRAM} install --user "${CLICKHOUSE_USER}" --pid-path "${CLICKHOUSE_PIDDIR}" --config-path "${CLICKHOUSE_CONFDIR}" --binary-path "${CLICKHOUSE_BINDIR}"
|
||||
}
|
||||
|
||||
|
||||
@ -171,17 +109,7 @@ restart()
|
||||
|
||||
forcestop()
|
||||
{
|
||||
local EXIT_STATUS
|
||||
EXIT_STATUS=0
|
||||
|
||||
echo -n "Stop forcefully $PROGRAM service: "
|
||||
|
||||
kill -KILL $(cat "$CLICKHOUSE_PIDFILE")
|
||||
|
||||
wait_for_done
|
||||
|
||||
echo "DONE"
|
||||
return $EXIT_STATUS
|
||||
${CLICKHOUSE_GENERIC_PROGRAM} stop --force --pid-path "${CLICKHOUSE_PIDDIR}"
|
||||
}
|
||||
|
||||
|
||||
@ -261,16 +189,16 @@ main()
|
||||
service_or_func restart
|
||||
;;
|
||||
condstart)
|
||||
is_running || service_or_func start
|
||||
service_or_func start
|
||||
;;
|
||||
condstop)
|
||||
is_running && service_or_func stop
|
||||
service_or_func stop
|
||||
;;
|
||||
condrestart)
|
||||
is_running && service_or_func restart
|
||||
service_or_func restart
|
||||
;;
|
||||
condreload)
|
||||
is_running && service_or_func restart
|
||||
service_or_func restart
|
||||
;;
|
||||
initdb)
|
||||
initdb
|
||||
@ -293,17 +221,7 @@ main()
|
||||
|
||||
status()
|
||||
{
|
||||
if is_running; then
|
||||
echo "$PROGRAM service is running"
|
||||
exit 0
|
||||
else
|
||||
if is_cron_disabled; then
|
||||
echo "$PROGRAM service is stopped";
|
||||
else
|
||||
echo "$PROGRAM: process unexpectedly terminated"
|
||||
fi
|
||||
exit 3
|
||||
fi
|
||||
${CLICKHOUSE_GENERIC_PROGRAM} status --pid-path "${CLICKHOUSE_PIDDIR}"
|
||||
}
|
||||
|
||||
|
||||
|
@ -104,223 +104,249 @@ function start_server
|
||||
|
||||
function clone_root
|
||||
{
|
||||
git clone https://github.com/ClickHouse/ClickHouse.git -- "$FASTTEST_SOURCE" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/clone_log.txt"
|
||||
git clone https://github.com/ClickHouse/ClickHouse.git -- "$FASTTEST_SOURCE" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/clone_log.txt"
|
||||
|
||||
(
|
||||
cd "$FASTTEST_SOURCE"
|
||||
if [ "$PULL_REQUEST_NUMBER" != "0" ]; then
|
||||
if git fetch origin "+refs/pull/$PULL_REQUEST_NUMBER/merge"; then
|
||||
git checkout FETCH_HEAD
|
||||
echo 'Cloned merge head'
|
||||
else
|
||||
git fetch
|
||||
git checkout "$COMMIT_SHA"
|
||||
echo 'Checked out to commit'
|
||||
fi
|
||||
else
|
||||
if [ -v COMMIT_SHA ]; then
|
||||
git checkout "$COMMIT_SHA"
|
||||
fi
|
||||
fi
|
||||
)
|
||||
(
|
||||
cd "$FASTTEST_SOURCE"
|
||||
if [ "$PULL_REQUEST_NUMBER" != "0" ]; then
|
||||
if git fetch origin "+refs/pull/$PULL_REQUEST_NUMBER/merge"; then
|
||||
git checkout FETCH_HEAD
|
||||
echo 'Cloned merge head'
|
||||
else
|
||||
git fetch
|
||||
git checkout "$COMMIT_SHA"
|
||||
echo 'Checked out to commit'
|
||||
fi
|
||||
else
|
||||
if [ -v COMMIT_SHA ]; then
|
||||
git checkout "$COMMIT_SHA"
|
||||
fi
|
||||
fi
|
||||
)
|
||||
}
|
||||
|
||||
function clone_submodules
|
||||
{
|
||||
(
|
||||
cd "$FASTTEST_SOURCE"
|
||||
(
|
||||
cd "$FASTTEST_SOURCE"
|
||||
|
||||
SUBMODULES_TO_UPDATE=(contrib/boost contrib/zlib-ng contrib/libxml2 contrib/poco contrib/libunwind contrib/ryu contrib/fmtlib contrib/base64 contrib/cctz contrib/libcpuid contrib/double-conversion contrib/libcxx contrib/libcxxabi contrib/libc-headers contrib/lz4 contrib/zstd contrib/fastops contrib/rapidjson contrib/re2 contrib/sparsehash-c11 contrib/croaring contrib/miniselect contrib/xz)
|
||||
SUBMODULES_TO_UPDATE=(
|
||||
contrib/antlr4-runtime
|
||||
contrib/boost
|
||||
contrib/zlib-ng
|
||||
contrib/libxml2
|
||||
contrib/poco
|
||||
contrib/libunwind
|
||||
contrib/ryu
|
||||
contrib/fmtlib
|
||||
contrib/base64
|
||||
contrib/cctz
|
||||
contrib/libcpuid
|
||||
contrib/double-conversion
|
||||
contrib/libcxx
|
||||
contrib/libcxxabi
|
||||
contrib/libc-headers
|
||||
contrib/lz4
|
||||
contrib/zstd
|
||||
contrib/fastops
|
||||
contrib/rapidjson
|
||||
contrib/re2
|
||||
contrib/sparsehash-c11
|
||||
contrib/croaring
|
||||
contrib/miniselect
|
||||
contrib/xz
|
||||
)
|
||||
|
||||
git submodule sync
|
||||
git submodule update --init --recursive "${SUBMODULES_TO_UPDATE[@]}"
|
||||
git submodule foreach git reset --hard
|
||||
git submodule foreach git checkout @ -f
|
||||
git submodule foreach git clean -xfd
|
||||
)
|
||||
git submodule sync
|
||||
git submodule update --init --recursive "${SUBMODULES_TO_UPDATE[@]}"
|
||||
git submodule foreach git reset --hard
|
||||
git submodule foreach git checkout @ -f
|
||||
git submodule foreach git clean -xfd
|
||||
)
|
||||
}
|
||||
|
||||
function run_cmake
|
||||
{
|
||||
CMAKE_LIBS_CONFIG=(
|
||||
"-DENABLE_LIBRARIES=0"
|
||||
"-DENABLE_TESTS=0"
|
||||
"-DENABLE_UTILS=0"
|
||||
"-DENABLE_EMBEDDED_COMPILER=0"
|
||||
"-DENABLE_THINLTO=0"
|
||||
"-DUSE_UNWIND=1"
|
||||
)
|
||||
CMAKE_LIBS_CONFIG=(
|
||||
"-DENABLE_LIBRARIES=0"
|
||||
"-DENABLE_TESTS=0"
|
||||
"-DENABLE_UTILS=0"
|
||||
"-DENABLE_EMBEDDED_COMPILER=0"
|
||||
"-DENABLE_THINLTO=0"
|
||||
"-DUSE_UNWIND=1"
|
||||
)
|
||||
|
||||
# TODO remove this? we don't use ccache anyway. An option would be to download it
|
||||
# from S3 simultaneously with cloning.
|
||||
export CCACHE_DIR="$FASTTEST_WORKSPACE/ccache"
|
||||
export CCACHE_BASEDIR="$FASTTEST_SOURCE"
|
||||
export CCACHE_NOHASHDIR=true
|
||||
export CCACHE_COMPILERCHECK=content
|
||||
export CCACHE_MAXSIZE=15G
|
||||
# TODO remove this? we don't use ccache anyway. An option would be to download it
|
||||
# from S3 simultaneously with cloning.
|
||||
export CCACHE_DIR="$FASTTEST_WORKSPACE/ccache"
|
||||
export CCACHE_BASEDIR="$FASTTEST_SOURCE"
|
||||
export CCACHE_NOHASHDIR=true
|
||||
export CCACHE_COMPILERCHECK=content
|
||||
export CCACHE_MAXSIZE=15G
|
||||
|
||||
ccache --show-stats ||:
|
||||
ccache --zero-stats ||:
|
||||
ccache --show-stats ||:
|
||||
ccache --zero-stats ||:
|
||||
|
||||
mkdir "$FASTTEST_BUILD" ||:
|
||||
mkdir "$FASTTEST_BUILD" ||:
|
||||
|
||||
(
|
||||
cd "$FASTTEST_BUILD"
|
||||
cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER=clang++-10 -DCMAKE_C_COMPILER=clang-10 "${CMAKE_LIBS_CONFIG[@]}" "${FASTTEST_CMAKE_FLAGS[@]}" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt"
|
||||
)
|
||||
(
|
||||
cd "$FASTTEST_BUILD"
|
||||
cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER=clang++-10 -DCMAKE_C_COMPILER=clang-10 "${CMAKE_LIBS_CONFIG[@]}" "${FASTTEST_CMAKE_FLAGS[@]}" | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt"
|
||||
)
|
||||
}
|
||||
|
||||
function build
|
||||
{
|
||||
(
|
||||
cd "$FASTTEST_BUILD"
|
||||
time ninja clickhouse-bundle | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt"
|
||||
if [ "$COPY_CLICKHOUSE_BINARY_TO_OUTPUT" -eq "1" ]; then
|
||||
cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse"
|
||||
fi
|
||||
ccache --show-stats ||:
|
||||
)
|
||||
(
|
||||
cd "$FASTTEST_BUILD"
|
||||
time ninja clickhouse-bundle | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/build_log.txt"
|
||||
if [ "$COPY_CLICKHOUSE_BINARY_TO_OUTPUT" -eq "1" ]; then
|
||||
cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse"
|
||||
fi
|
||||
ccache --show-stats ||:
|
||||
)
|
||||
}
|
||||
|
||||
function configure
|
||||
{
|
||||
clickhouse-client --version
|
||||
clickhouse-test --help
|
||||
clickhouse-client --version
|
||||
clickhouse-test --help
|
||||
|
||||
mkdir -p "$FASTTEST_DATA"{,/client-config}
|
||||
cp -a "$FASTTEST_SOURCE/programs/server/"{config,users}.xml "$FASTTEST_DATA"
|
||||
"$FASTTEST_SOURCE/tests/config/install.sh" "$FASTTEST_DATA" "$FASTTEST_DATA/client-config"
|
||||
cp -a "$FASTTEST_SOURCE/programs/server/config.d/log_to_console.xml" "$FASTTEST_DATA/config.d"
|
||||
# doesn't support SSL
|
||||
rm -f "$FASTTEST_DATA/config.d/secure_ports.xml"
|
||||
mkdir -p "$FASTTEST_DATA"{,/client-config}
|
||||
cp -a "$FASTTEST_SOURCE/programs/server/"{config,users}.xml "$FASTTEST_DATA"
|
||||
"$FASTTEST_SOURCE/tests/config/install.sh" "$FASTTEST_DATA" "$FASTTEST_DATA/client-config"
|
||||
cp -a "$FASTTEST_SOURCE/programs/server/config.d/log_to_console.xml" "$FASTTEST_DATA/config.d"
|
||||
# doesn't support SSL
|
||||
rm -f "$FASTTEST_DATA/config.d/secure_ports.xml"
|
||||
}
|
||||
|
||||
function run_tests
|
||||
{
|
||||
clickhouse-server --version
|
||||
clickhouse-test --help
|
||||
clickhouse-server --version
|
||||
clickhouse-test --help
|
||||
|
||||
# Kill the server in case we are running locally and not in docker
|
||||
stop_server ||:
|
||||
|
||||
start_server
|
||||
|
||||
TESTS_TO_SKIP=(
|
||||
00105_shard_collations
|
||||
00109_shard_totals_after_having
|
||||
00110_external_sort
|
||||
00302_http_compression
|
||||
00417_kill_query
|
||||
00436_convert_charset
|
||||
00490_special_line_separators_and_characters_outside_of_bmp
|
||||
00652_replicated_mutations_zookeeper
|
||||
00682_empty_parts_merge
|
||||
00701_rollup
|
||||
00834_cancel_http_readonly_queries_on_client_close
|
||||
00911_tautological_compare
|
||||
00926_multimatch
|
||||
00929_multi_match_edit_distance
|
||||
01031_mutations_interpreter_and_context
|
||||
01053_ssd_dictionary # this test mistakenly requires access to /var/lib/clickhouse -- can't run this locally, disabled
|
||||
01083_expressions_in_engine_arguments
|
||||
01092_memory_profiler
|
||||
01098_msgpack_format
|
||||
01098_temporary_and_external_tables
|
||||
01103_check_cpu_instructions_at_startup # avoid dependency on qemu -- inconvenient when running locally
|
||||
01193_metadata_loading
|
||||
01238_http_memory_tracking # max_memory_usage_for_user can interfere with other queries running concurrently
|
||||
01251_dict_is_in_infinite_loop
|
||||
01259_dictionary_custom_settings_ddl
|
||||
01268_dictionary_direct_layout
|
||||
01280_ssd_complex_key_dictionary
|
||||
01281_group_by_limit_memory_tracking # max_memory_usage_for_user can interfere with other queries running concurrently
|
||||
01318_encrypt # Depends on OpenSSL
|
||||
01318_decrypt # Depends on OpenSSL
|
||||
01281_unsucceeded_insert_select_queries_counter
|
||||
01292_create_user
|
||||
01294_lazy_database_concurrent
|
||||
01305_replica_create_drop_zookeeper
|
||||
01354_order_by_tuple_collate_const
|
||||
01355_ilike
|
||||
01411_bayesian_ab_testing
|
||||
01532_collate_in_low_cardinality
|
||||
01533_collate_in_nullable
|
||||
01542_collate_in_array
|
||||
01543_collate_in_tuple
|
||||
_orc_
|
||||
arrow
|
||||
avro
|
||||
base64
|
||||
brotli
|
||||
capnproto
|
||||
client
|
||||
ddl_dictionaries
|
||||
h3
|
||||
hashing
|
||||
hdfs
|
||||
java_hash
|
||||
json
|
||||
limit_memory
|
||||
live_view
|
||||
memory_leak
|
||||
memory_limit
|
||||
mysql
|
||||
odbc
|
||||
parallel_alter
|
||||
parquet
|
||||
protobuf
|
||||
secure
|
||||
sha256
|
||||
xz
|
||||
|
||||
# Not sure why these two fail even in sequential mode. Disabled for now
|
||||
# to make some progress.
|
||||
00646_url_engine
|
||||
00974_query_profiler
|
||||
|
||||
# In fasttest, ENABLE_LIBRARIES=0, so rocksdb engine is not enabled by default
|
||||
01504_rocksdb
|
||||
|
||||
# Look at DistributedFilesToInsert, so cannot run in parallel.
|
||||
01460_DistributedFilesToInsert
|
||||
|
||||
01541_max_memory_usage_for_user
|
||||
|
||||
# Require python libraries like scipy, pandas and numpy
|
||||
01322_ttest_scipy
|
||||
|
||||
01545_system_errors
|
||||
# Checks system.errors
|
||||
01563_distributed_query_finish
|
||||
)
|
||||
|
||||
time clickhouse-test -j 8 --order=random --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
|
||||
|
||||
# substr is to remove semicolon after test name
|
||||
readarray -t FAILED_TESTS < <(awk '/FAIL|TIMEOUT|ERROR/ { print substr($3, 1, length($3)-1) }' "$FASTTEST_OUTPUT/test_log.txt" | tee "$FASTTEST_OUTPUT/failed-parallel-tests.txt")
|
||||
|
||||
# We will rerun sequentially any tests that have failed during parallel run.
|
||||
# They might have failed because there was some interference from other tests
|
||||
# running concurrently. If they fail even in sequential mode, we will report them.
|
||||
# FIXME All tests that require exclusive access to the server must be
|
||||
# explicitly marked as `sequential`, and `clickhouse-test` must detect them and
|
||||
# run them in a separate group after all other tests. This is faster and also
|
||||
# explicit instead of guessing.
|
||||
if [[ -n "${FAILED_TESTS[*]}" ]]
|
||||
then
|
||||
# Kill the server in case we are running locally and not in docker
|
||||
stop_server ||:
|
||||
|
||||
# Clean the data so that there is no interference from the previous test run.
|
||||
rm -rf "$FASTTEST_DATA"/{{meta,}data,user_files} ||:
|
||||
|
||||
start_server
|
||||
|
||||
echo "Going to run again: ${FAILED_TESTS[*]}"
|
||||
TESTS_TO_SKIP=(
|
||||
00105_shard_collations
|
||||
00109_shard_totals_after_having
|
||||
00110_external_sort
|
||||
00302_http_compression
|
||||
00417_kill_query
|
||||
00436_convert_charset
|
||||
00490_special_line_separators_and_characters_outside_of_bmp
|
||||
00652_replicated_mutations_zookeeper
|
||||
00682_empty_parts_merge
|
||||
00701_rollup
|
||||
00834_cancel_http_readonly_queries_on_client_close
|
||||
00911_tautological_compare
|
||||
00926_multimatch
|
||||
00929_multi_match_edit_distance
|
||||
01031_mutations_interpreter_and_context
|
||||
01053_ssd_dictionary # this test mistakenly requires access to /var/lib/clickhouse -- can't run this locally, disabled
|
||||
01083_expressions_in_engine_arguments
|
||||
01092_memory_profiler
|
||||
01098_msgpack_format
|
||||
01098_temporary_and_external_tables
|
||||
01103_check_cpu_instructions_at_startup # avoid dependency on qemu -- inconvenient when running locally
|
||||
01193_metadata_loading
|
||||
01238_http_memory_tracking # max_memory_usage_for_user can interfere with other queries running concurrently
|
||||
01251_dict_is_in_infinite_loop
|
||||
01259_dictionary_custom_settings_ddl
|
||||
01268_dictionary_direct_layout
|
||||
01280_ssd_complex_key_dictionary
|
||||
01281_group_by_limit_memory_tracking # max_memory_usage_for_user can interfere with other queries running concurrently
|
||||
01318_encrypt # Depends on OpenSSL
|
||||
01318_decrypt # Depends on OpenSSL
|
||||
01281_unsucceeded_insert_select_queries_counter
|
||||
01292_create_user
|
||||
01294_lazy_database_concurrent
|
||||
01305_replica_create_drop_zookeeper
|
||||
01354_order_by_tuple_collate_const
|
||||
01355_ilike
|
||||
01411_bayesian_ab_testing
|
||||
01532_collate_in_low_cardinality
|
||||
01533_collate_in_nullable
|
||||
01542_collate_in_array
|
||||
01543_collate_in_tuple
|
||||
_orc_
|
||||
arrow
|
||||
avro
|
||||
base64
|
||||
brotli
|
||||
capnproto
|
||||
client
|
||||
ddl_dictionaries
|
||||
h3
|
||||
hashing
|
||||
hdfs
|
||||
java_hash
|
||||
json
|
||||
limit_memory
|
||||
live_view
|
||||
memory_leak
|
||||
memory_limit
|
||||
mysql
|
||||
odbc
|
||||
parallel_alter
|
||||
parquet
|
||||
protobuf
|
||||
secure
|
||||
sha256
|
||||
xz
|
||||
|
||||
clickhouse-test --order=random --no-long --testname --shard --zookeeper "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a "$FASTTEST_OUTPUT/test_log.txt"
|
||||
else
|
||||
echo "No failed tests"
|
||||
fi
|
||||
# Not sure why these two fail even in sequential mode. Disabled for now
|
||||
# to make some progress.
|
||||
00646_url_engine
|
||||
00974_query_profiler
|
||||
|
||||
# In fasttest, ENABLE_LIBRARIES=0, so rocksdb engine is not enabled by default
|
||||
01504_rocksdb
|
||||
|
||||
# Look at DistributedFilesToInsert, so cannot run in parallel.
|
||||
01460_DistributedFilesToInsert
|
||||
|
||||
01541_max_memory_usage_for_user
|
||||
|
||||
# Require python libraries like scipy, pandas and numpy
|
||||
01322_ttest_scipy
|
||||
01561_mann_whitney_scipy
|
||||
|
||||
01545_system_errors
|
||||
# Checks system.errors
|
||||
01563_distributed_query_finish
|
||||
)
|
||||
|
||||
time clickhouse-test -j 8 --order=random --no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" -- "$FASTTEST_FOCUS" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/test_log.txt"
|
||||
|
||||
# substr is to remove semicolon after test name
|
||||
readarray -t FAILED_TESTS < <(awk '/FAIL|TIMEOUT|ERROR/ { print substr($3, 1, length($3)-1) }' "$FASTTEST_OUTPUT/test_log.txt" | tee "$FASTTEST_OUTPUT/failed-parallel-tests.txt")
|
||||
|
||||
# We will rerun sequentially any tests that have failed during parallel run.
|
||||
# They might have failed because there was some interference from other tests
|
||||
# running concurrently. If they fail even in sequential mode, we will report them.
|
||||
# FIXME All tests that require exclusive access to the server must be
|
||||
# explicitly marked as `sequential`, and `clickhouse-test` must detect them and
|
||||
# run them in a separate group after all other tests. This is faster and also
|
||||
# explicit instead of guessing.
|
||||
if [[ -n "${FAILED_TESTS[*]}" ]]
|
||||
then
|
||||
stop_server ||:
|
||||
|
||||
# Clean the data so that there is no interference from the previous test run.
|
||||
rm -rf "$FASTTEST_DATA"/{{meta,}data,user_files} ||:
|
||||
|
||||
start_server
|
||||
|
||||
echo "Going to run again: ${FAILED_TESTS[*]}"
|
||||
|
||||
clickhouse-test --order=random --no-long --testname --shard --zookeeper "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a "$FASTTEST_OUTPUT/test_log.txt"
|
||||
else
|
||||
echo "No failed tests"
|
||||
fi
|
||||
}
|
||||
|
||||
case "$stage" in
|
||||
|
@ -28,6 +28,7 @@ RUN apt-get update \
|
||||
libssl-dev \
|
||||
libcurl4-openssl-dev \
|
||||
gdb \
|
||||
software-properties-common \
|
||||
&& rm -rf \
|
||||
/var/lib/apt/lists/* \
|
||||
/var/cache/debconf \
|
||||
@ -37,6 +38,22 @@ RUN apt-get update \
|
||||
ENV TZ=Europe/Moscow
|
||||
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
|
||||
|
||||
ENV DOCKER_CHANNEL stable
|
||||
ENV DOCKER_VERSION 5:19.03.13~3-0~ubuntu-bionic
|
||||
RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
|
||||
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -c -s) ${DOCKER_CHANNEL}"
|
||||
|
||||
RUN apt-get update \
|
||||
&& env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
|
||||
docker-ce \
|
||||
&& rm -rf \
|
||||
/var/lib/apt/lists/* \
|
||||
/var/cache/debconf \
|
||||
/tmp/* \
|
||||
&& apt-get clean
|
||||
|
||||
RUN dockerd --version; docker --version
|
||||
|
||||
RUN python3 -m pip install \
|
||||
PyMySQL \
|
||||
aerospike \
|
||||
@ -60,28 +77,6 @@ RUN python3 -m pip install \
|
||||
tzlocal \
|
||||
urllib3
|
||||
|
||||
ENV DOCKER_CHANNEL stable
|
||||
ENV DOCKER_VERSION 17.09.1-ce
|
||||
|
||||
RUN set -eux; \
|
||||
\
|
||||
# this "case" statement is generated via "update.sh"
|
||||
\
|
||||
if ! wget -nv -O docker.tgz "https://download.docker.com/linux/static/${DOCKER_CHANNEL}/x86_64/docker-${DOCKER_VERSION}.tgz"; then \
|
||||
echo >&2 "error: failed to download 'docker-${DOCKER_VERSION}' from '${DOCKER_CHANNEL}' for '${x86_64}'"; \
|
||||
exit 1; \
|
||||
fi; \
|
||||
\
|
||||
tar --extract \
|
||||
--file docker.tgz \
|
||||
--strip-components 1 \
|
||||
--directory /usr/local/bin/ \
|
||||
; \
|
||||
rm docker.tgz; \
|
||||
\
|
||||
dockerd --version; \
|
||||
docker --version
|
||||
|
||||
COPY modprobe.sh /usr/local/bin/modprobe
|
||||
COPY dockerd-entrypoint.sh /usr/local/bin/
|
||||
COPY compose/ /compose/
|
||||
|
@ -50,7 +50,7 @@ services:
|
||||
- label:disable
|
||||
|
||||
kafka_kerberos:
|
||||
image: yandex/clickhouse-kerberos-kdc:${DOCKER_KERBEROS_KDC_TAG}
|
||||
image: yandex/clickhouse-kerberos-kdc:${DOCKER_KERBEROS_KDC_TAG:-latest}
|
||||
hostname: kafka_kerberos
|
||||
volumes:
|
||||
- ${KERBERIZED_KAFKA_DIR}/secrets:/tmp/keytab
|
||||
|
@ -7,4 +7,4 @@ services:
|
||||
MYSQL_ROOT_PASSWORD: clickhouse
|
||||
ports:
|
||||
- 3308:3306
|
||||
command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency
|
||||
command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency
|
||||
|
@ -0,0 +1,10 @@
|
||||
version: '2.3'
|
||||
services:
|
||||
mysql1:
|
||||
image: mysql:5.7
|
||||
restart: 'no'
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: clickhouse
|
||||
ports:
|
||||
- 3308:3306
|
||||
command: --server_id=100 --log-bin='mysql-bin-1.log' --default-time-zone='+3:00' --gtid-mode="ON" --enforce-gtid-consistency
|
@ -2,7 +2,7 @@ version: '2.3'
|
||||
services:
|
||||
mysql8_0:
|
||||
image: mysql:8.0
|
||||
restart: always
|
||||
restart: 'no'
|
||||
environment:
|
||||
MYSQL_ROOT_PASSWORD: clickhouse
|
||||
ports:
|
@ -1,6 +1,6 @@
|
||||
version: '2.3'
|
||||
services:
|
||||
golang1:
|
||||
image: yandex/clickhouse-mysql-golang-client:${DOCKER_MYSQL_GOLANG_CLIENT_TAG}
|
||||
image: yandex/clickhouse-mysql-golang-client:${DOCKER_MYSQL_GOLANG_CLIENT_TAG:-latest}
|
||||
# to keep container running
|
||||
command: sleep infinity
|
||||
|
@ -1,6 +1,6 @@
|
||||
version: '2.3'
|
||||
services:
|
||||
java1:
|
||||
image: yandex/clickhouse-mysql-java-client:${DOCKER_MYSQL_JAVA_CLIENT_TAG}
|
||||
image: yandex/clickhouse-mysql-java-client:${DOCKER_MYSQL_JAVA_CLIENT_TAG:-latest}
|
||||
# to keep container running
|
||||
command: sleep infinity
|
||||
|
@ -1,6 +1,6 @@
|
||||
version: '2.3'
|
||||
services:
|
||||
mysqljs1:
|
||||
image: yandex/clickhouse-mysql-js-client:${DOCKER_MYSQL_JS_CLIENT_TAG}
|
||||
image: yandex/clickhouse-mysql-js-client:${DOCKER_MYSQL_JS_CLIENT_TAG:-latest}
|
||||
# to keep container running
|
||||
command: sleep infinity
|
||||
|
@ -1,6 +1,6 @@
|
||||
version: '2.3'
|
||||
services:
|
||||
php1:
|
||||
image: yandex/clickhouse-mysql-php-client:${DOCKER_MYSQL_PHP_CLIENT_TAG}
|
||||
image: yandex/clickhouse-mysql-php-client:${DOCKER_MYSQL_PHP_CLIENT_TAG:-latest}
|
||||
# to keep container running
|
||||
command: sleep infinity
|
||||
|
@ -1,6 +1,6 @@
|
||||
version: '2.2'
|
||||
services:
|
||||
java:
|
||||
image: yandex/clickhouse-postgresql-java-client:${DOCKER_POSTGRESQL_JAVA_CLIENT_TAG}
|
||||
image: yandex/clickhouse-postgresql-java-client:${DOCKER_POSTGRESQL_JAVA_CLIENT_TAG:-latest}
|
||||
# to keep container running
|
||||
command: sleep infinity
|
||||
|
@ -25,12 +25,13 @@ RUN apt-get update \
|
||||
python3 \
|
||||
python3-dev \
|
||||
python3-pip \
|
||||
python3-setuptools \
|
||||
rsync \
|
||||
tree \
|
||||
tzdata \
|
||||
vim \
|
||||
wget \
|
||||
&& pip3 --no-cache-dir install 'clickhouse-driver>=0.1.5' scipy \
|
||||
&& pip3 --no-cache-dir install 'git+https://github.com/mymarilyn/clickhouse-driver.git' scipy \
|
||||
&& apt-get purge --yes python3-dev g++ \
|
||||
&& apt-get autoremove --yes \
|
||||
&& apt-get clean \
|
||||
|
@ -143,7 +143,8 @@ reportStageEnd('before-connect')
|
||||
|
||||
# Open connections
|
||||
servers = [{'host': host or args.host[0], 'port': port or args.port[0]} for (host, port) in itertools.zip_longest(args.host, args.port)]
|
||||
all_connections = [clickhouse_driver.Client(**server) for server in servers]
|
||||
# Force settings_is_important to fail queries on unknown settings.
|
||||
all_connections = [clickhouse_driver.Client(**server, settings_is_important=True) for server in servers]
|
||||
|
||||
for i, s in enumerate(servers):
|
||||
print(f'server\t{i}\t{s["host"]}\t{s["port"]}')
|
||||
@ -167,12 +168,6 @@ if not args.use_existing_tables:
|
||||
reportStageEnd('drop-1')
|
||||
|
||||
# Apply settings.
|
||||
# If there are errors, report them and continue -- maybe a new test uses a setting
|
||||
# that is not in master, but the queries can still run. If we have multiple
|
||||
# settings and one of them throws an exception, all previous settings for this
|
||||
# connection will be reset, because the driver reconnects on error (not
|
||||
# configurable). So the end result is uncertain, but hopefully we'll be able to
|
||||
# run at least some queries.
|
||||
settings = root.findall('settings/*')
|
||||
for conn_index, c in enumerate(all_connections):
|
||||
for s in settings:
|
||||
@ -415,4 +410,4 @@ if not args.keep_created_tables and not args.use_existing_tables:
|
||||
c.execute(q)
|
||||
print(f'drop\t{conn_index}\t{c.last_query.elapsed}\t{tsv_escape(q)}')
|
||||
|
||||
reportStageEnd('drop-2')
|
||||
reportStageEnd('drop-2')
|
||||
|
@ -8,6 +8,7 @@ RUN apt-get update -y \
|
||||
apt-get install --yes --no-install-recommends \
|
||||
brotli \
|
||||
expect \
|
||||
zstd \
|
||||
lsof \
|
||||
ncdu \
|
||||
netcat-openbsd \
|
||||
|
@ -8,6 +8,7 @@ RUN apt-get --allow-unauthenticated update -y \
|
||||
apt-get --allow-unauthenticated install --yes --no-install-recommends \
|
||||
alien \
|
||||
brotli \
|
||||
zstd \
|
||||
cmake \
|
||||
devscripts \
|
||||
expect \
|
||||
|
@ -24,6 +24,7 @@ RUN apt-get update -y \
|
||||
tree \
|
||||
moreutils \
|
||||
brotli \
|
||||
zstd \
|
||||
gdb \
|
||||
lsof \
|
||||
unixodbc \
|
||||
|
@ -13,9 +13,9 @@ cmake .. \
|
||||
-DENABLE_CLICKHOUSE_SERVER=ON \
|
||||
-DENABLE_CLICKHOUSE_CLIENT=ON \
|
||||
-DUSE_STATIC_LIBRARIES=OFF \
|
||||
-DCLICKHOUSE_SPLIT_BINARY=ON \
|
||||
-DSPLIT_SHARED_LIBRARIES=ON \
|
||||
-DENABLE_LIBRARIES=OFF \
|
||||
-DUSE_UNWIND=ON \
|
||||
-DENABLE_UTILS=OFF \
|
||||
-DENABLE_TESTS=OFF
|
||||
```
|
||||
|
@ -17,7 +17,6 @@ toc_title: Third-Party Libraries Used
|
||||
| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
|
||||
| h3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
|
||||
| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
|
||||
| libbtrie | [BSD 2-Clause License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libbtrie/LICENSE) |
|
||||
| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
|
||||
| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
|
||||
| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |
|
||||
|
@ -273,13 +273,15 @@ SELECT
|
||||
sum(Duration) AS Duration
|
||||
FROM UAct
|
||||
GROUP BY UserID
|
||||
```text
|
||||
```
|
||||
|
||||
``` text
|
||||
┌──────────────UserID─┬─PageViews─┬─Duration─┐
|
||||
│ 4324182021466249494 │ 6 │ 185 │
|
||||
└─────────────────────┴───────────┴──────────┘
|
||||
```
|
||||
|
||||
``` sqk
|
||||
``` sql
|
||||
select count() FROM UAct
|
||||
```
|
||||
|
||||
|
@ -579,6 +579,7 @@ Tags:
|
||||
- `disk` — a disk within a volume.
|
||||
- `max_data_part_size_bytes` — the maximum size of a part that can be stored on any of the volume’s disks.
|
||||
- `move_factor` — when the amount of available space gets lower than this factor, data automatically starts to move to the next volume, if there is one (0.1 by default).
|
||||
- `prefer_not_to_merge` — Disables merging of data parts on this volume. When this setting is enabled, merging data on this volume is not allowed. This allows controlling how ClickHouse works with slow disks.
|
||||
|
||||
Configuration examples:
|
||||
|
||||
@ -607,6 +608,18 @@ Cofiguration examples:
|
||||
</volumes>
|
||||
<move_factor>0.2</move_factor>
|
||||
</moving_from_ssd_to_hdd>
|
||||
|
||||
<small_jbod_with_external_no_merges>
|
||||
<volumes>
|
||||
<main>
|
||||
<disk>jbod1</disk>
|
||||
</main>
|
||||
<external>
|
||||
<disk>external</disk>
|
||||
<prefer_not_to_merge>true</prefer_not_to_merge>
|
||||
</external>
|
||||
</volumes>
|
||||
</small_jbod_with_external_no_merges>
|
||||
</policies>
|
||||
...
|
||||
</storage_configuration>
|
||||
|
@ -5,7 +5,7 @@ toc_title: ReplacingMergeTree
|
||||
|
||||
# ReplacingMergeTree {#replacingmergetree}
|
||||
|
||||
The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value.
|
||||
The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value (`ORDER BY` table section, not `PRIMARY KEY`).
|
||||
|
||||
Data deduplication occurs only during a merge. Merging occurs in the background at an unknown time, so you can’t plan for it. Some of the data may remain unprocessed. Although you can run an unscheduled merge using the `OPTIMIZE` query, don’t count on using it, because the `OPTIMIZE` query will read and write a large amount of data.
|
||||
|
||||
@ -29,13 +29,16 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name [ON CLUSTER cluster]
|
||||
|
||||
For a description of request parameters, see [statement description](../../../sql-reference/statements/create/table.md).
|
||||
|
||||
!!! note "Attention"
|
||||
Uniqueness of rows is determined by the `ORDER BY` table section, not `PRIMARY KEY`.
|
||||
|
||||
**ReplacingMergeTree Parameters**
|
||||
|
||||
- `ver` — column with version. Type `UInt*`, `Date` or `DateTime`. Optional parameter.
|
||||
|
||||
When merging, `ReplacingMergeTree` leaves only one row out of all the rows with the same sorting key:
|
||||
|
||||
- Last in the selection, if `ver` not set.
|
||||
- The last in the selection, if `ver` not set. A selection is a set of rows in a set of parts participating in the merge. The most recently created part (the last insert) will be the last one in the selection. Thus, after deduplication, the very last row from the most recent insert will remain for each unique sorting key.
|
||||
- With the maximum version, if `ver` specified.
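
A minimal sketch of the `ver` rule described above (the table and column names here are illustrative, not part of the engine):

``` sql
CREATE TABLE dedup_demo
(
    key UInt64,
    payload String,
    ver UInt32
)
ENGINE = ReplacingMergeTree(ver)
ORDER BY key;

INSERT INTO dedup_demo VALUES (1, 'first', 2);
INSERT INTO dedup_demo VALUES (1, 'second', 1);

-- Force an unscheduled merge so deduplication happens now;
-- normally it runs in the background at an unknown time.
OPTIMIZE TABLE dedup_demo FINAL;

-- One row remains for key = 1: the one with the maximum ver, payload = 'first'.
SELECT * FROM dedup_demo;
```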
|
||||
|
||||
**Query clauses**
|
||||
|
@ -53,6 +53,42 @@ Example of setting the addresses of the ZooKeeper cluster:
|
||||
</zookeeper>
|
||||
```
|
||||
|
||||
ClickHouse also supports storing replica metadata in an auxiliary ZooKeeper cluster, specified by providing the ZooKeeper cluster name and path as engine arguments.
|
||||
In other words, it supports storing the metadata of different tables in different ZooKeeper clusters.
|
||||
|
||||
Example of setting the addresses of the auxiliary ZooKeeper cluster:
|
||||
|
||||
``` xml
|
||||
<auxiliary_zookeepers>
|
||||
<zookeeper2>
|
||||
<node index="1">
|
||||
<host>example_2_1</host>
|
||||
<port>2181</port>
|
||||
</node>
|
||||
<node index="2">
|
||||
<host>example_2_2</host>
|
||||
<port>2181</port>
|
||||
</node>
|
||||
<node index="3">
|
||||
<host>example_2_3</host>
|
||||
<port>2181</port>
|
||||
</node>
|
||||
</zookeeper2>
|
||||
<zookeeper3>
|
||||
<node index="1">
|
||||
<host>example_3_1</host>
|
||||
<port>2181</port>
|
||||
</node>
|
||||
</zookeeper3>
|
||||
</auxiliary_zookeepers>
|
||||
```
|
||||
|
||||
To store table metadata in an auxiliary ZooKeeper cluster instead of the default ZooKeeper cluster, we can use SQL to create the table with the
|
||||
ReplicatedMergeTree engine as follows:
|
||||
|
||||
``` sql
|
||||
CREATE TABLE table_name ( ... ) ENGINE = ReplicatedMergeTree('zookeeper_name_configured_in_auxiliary_zookeepers:path', 'replica_name') ...
|
||||
```
|
||||
You can specify any existing ZooKeeper cluster and the system will use a directory on it for its own data (the directory is specified when creating a replicatable table).
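
For instance, with the `zookeeper2` cluster from the configuration above, a table could be created as follows (the table name, ZooKeeper path and replica name here are examples, not prescribed values):

``` sql
CREATE TABLE t_s2_replicated
(
    n Int32
)
ENGINE = ReplicatedMergeTree('zookeeper2:/clickhouse/tables/01/t_s2_replicated', 'replica1')
ORDER BY n;
```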
|
||||
|
||||
If ZooKeeper isn’t set in the config file, you can’t create replicated tables, and any existing replicated tables will be read-only.
|
||||
@ -152,7 +188,7 @@ You can specify default arguments for `Replicated` table engine in the server co
|
||||
|
||||
```xml
|
||||
<default_replica_path>/clickhouse/tables/{shard}/{database}/{table}</default_replica_path>
|
||||
<default_replica_name>{replica}</default_replica_path>
|
||||
<default_replica_name>{replica}</default_replica_name>
|
||||
```
|
||||
|
||||
In this case, you can omit arguments when creating tables:
|
||||
|
@ -11,7 +11,7 @@ By going through this tutorial, you’ll learn how to set up a simple ClickHouse
|
||||
|
||||
## Single Node Setup {#single-node-setup}
|
||||
|
||||
To postpone the complexities of a distributed environment, we’ll start with deploying ClickHouse on a single server or virtual machine. ClickHouse is usually installed from [deb](../getting-started/install.md#install-from-deb-packages) or [rpm](../getting-started/install.md#from-rpm-packages) packages, but there are [alternatives](../getting-started/install.md#from-docker-image) for the operating systems that do no support them.
|
||||
To postpone the complexities of a distributed environment, we’ll start with deploying ClickHouse on a single server or virtual machine. ClickHouse is usually installed from [deb](../getting-started/install.md#install-from-deb-packages) or [rpm](../getting-started/install.md#from-rpm-packages) packages, but there are [alternatives](../getting-started/install.md#from-docker-image) for the operating systems that do not support them.
|
||||
|
||||
For example, you have chosen `deb` packages and executed:
|
||||
|
||||
|
@ -5,7 +5,7 @@ toc_title: Overview
|
||||
|
||||
# What Is ClickHouse? {#what-is-clickhouse}
|
||||
|
||||
ClickHouse is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).
|
||||
ClickHouse® is a column-oriented database management system (DBMS) for online analytical processing of queries (OLAP).
|
||||
|
||||
In a “normal” row-oriented DBMS, data is stored in this order:
|
||||
|
||||
|
@ -25,6 +25,7 @@ The supported formats are:
|
||||
| [Vertical](#vertical) | ✗ | ✔ |
|
||||
| [VerticalRaw](#verticalraw) | ✗ | ✔ |
|
||||
| [JSON](#json) | ✗ | ✔ |
|
||||
| [JSONAsString](#jsonasstring) | ✔ | ✗ |
|
||||
| [JSONString](#jsonstring) | ✗ | ✔ |
|
||||
| [JSONCompact](#jsoncompact) | ✗ | ✔ |
|
||||
| [JSONCompactString](#jsoncompactstring) | ✗ | ✔ |
|
||||
@ -507,6 +508,34 @@ Example:
|
||||
}
|
||||
```
|
||||
|
||||
## JSONAsString {#jsonasstring}
|
||||
|
||||
In this format, a single JSON object is interpreted as a single value. If the input has several JSON objects (comma-separated), each of them is interpreted as a separate row.
|
||||
|
||||
This format can only be parsed for a table with a single field of type [String](../sql-reference/data-types/string.md). The remaining columns must be set to [DEFAULT](../sql-reference/statements/create/table.md#default) or [MATERIALIZED](../sql-reference/statements/create/table.md#materialized), or omitted. Once you collect the whole JSON object into a string, you can use [JSON functions](../sql-reference/functions/json-functions.md) to process it.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
DROP TABLE IF EXISTS json_as_string;
|
||||
CREATE TABLE json_as_string (json String) ENGINE = Memory;
|
||||
INSERT INTO json_as_string FORMAT JSONAsString {"foo":{"bar":{"x":"y"},"baz":1}},{},{"any json stucture":1}
|
||||
SELECT * FROM json_as_string;
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─json──────────────────────────────┐
|
||||
│ {"foo":{"bar":{"x":"y"},"baz":1}} │
|
||||
│ {} │
|
||||
│ {"any json stucture":1} │
|
||||
└───────────────────────────────────┘
|
||||
```
|
||||
|
||||
|
||||
## JSONCompact {#jsoncompact}
|
||||
## JSONCompactString {#jsoncompactstring}
|
||||
|
||||
|
@ -23,6 +23,7 @@ toc_title: Adopters
|
||||
| <a href="https://www.bigo.sg/" class="favicon">BIGO</a> | Video | Computing Platform | — | — | [Blog Article, August 2020](https://www.programmersought.com/article/44544895251/) |
|
||||
| <a href="https://www.bloomberg.com/" class="favicon">Bloomberg</a> | Finance, Media | Monitoring | 102 servers | — | [Slides, May 2018](https://www.slideshare.net/Altinity/http-analytics-for-6m-requests-per-second-using-clickhouse-by-alexander-bocharov) |
|
||||
| <a href="https://bloxy.info" class="favicon">Bloxy</a> | Blockchain | Analytics | — | — | [Slides in Russian, August 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup17/4_bloxy.pptx) |
|
||||
| <a href="https://www.bytedance.com" class="favicon">Bytedance</a> | Social platforms | — | — | — | [The ClickHouse Meetup East, October 2020](https://www.youtube.com/watch?v=ckChUkC3Pns) |
|
||||
| <a href="https://cardsmobile.ru/" class="favicon">CardsMobile</a> | Finance | Analytics | — | — | [VC.ru](https://vc.ru/s/cardsmobile/143449-rukovoditel-gruppy-analiza-dannyh) |
|
||||
| <a href="https://carto.com/" class="favicon">CARTO</a> | Business Intelligence | Geo analytics | — | — | [Geospatial processing with ClickHouse](https://carto.com/blog/geospatial-processing-with-clickhouse/) |
|
||||
| <a href="http://public.web.cern.ch/public/" class="favicon">CERN</a> | Research | Experiment | — | — | [Press release, April 2012](https://www.yandex.com/company/press_center/press_releases/2012/2012-04-10/) |
|
||||
@ -96,6 +97,7 @@ toc_title: Adopters
|
||||
| <a href="https://www.splunk.com/" class="favicon">Splunk</a> | Business Analytics | Main product | — | — | [Slides in English, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) |
|
||||
| <a href="https://www.spotify.com" class="favicon">Spotify</a> | Music | Experimentation | — | — | [Slides, July 2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) |
|
||||
| <a href="https://www.staffcop.ru/" class="favicon">Staffcop</a> | Information Security | Main Product | — | — | [Official website, Documentation](https://www.staffcop.ru/sce43) |
|
||||
| <a href="https://www.suning.com/" class="favicon">Suning</a> | E-Commerce | User behaviour analytics | — | — | [Blog article](https://www.sohu.com/a/434152235_411876) |
|
||||
| <a href="https://www.teralytics.net/" class="favicon">Teralytics</a> | Mobility | Analytics | — | — | [Tech blog](https://www.teralytics.net/knowledge-hub/visualizing-mobility-data-the-scalability-challenge) |
|
||||
| <a href="https://www.tencent.com" class="favicon">Tencent</a> | Big Data | Data processing | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/5.%20ClickHouse大数据集群应用_李俊飞腾讯网媒事业部.pdf) |
|
||||
| <a href="https://www.tencent.com" class="favicon">Tencent</a> | Messaging | Logging | — | — | [Talk in Chinese, November 2019](https://youtu.be/T-iVQRuw-QY?t=5050) |
|
||||
@ -111,7 +113,7 @@ toc_title: Adopters
|
||||
| <a href="https://cloud.yandex.ru/services/managed-clickhouse" class="favicon">Yandex Cloud</a> | Public Cloud | Main product | — | — | [Talk in Russian, December 2019](https://www.youtube.com/watch?v=pgnak9e_E0o) |
|
||||
| <a href="https://cloud.yandex.ru/services/datalens" class="favicon">Yandex DataLens</a> | Business Intelligence | Main product | — | — | [Slides in Russian, December 2019](https://presentations.clickhouse.tech/meetup38/datalens.pdf) |
|
||||
| <a href="https://market.yandex.ru/" class="favicon">Yandex Market</a> | e-Commerce | Metrics, Logging | — | — | [Talk in Russian, January 2019](https://youtu.be/_l1qP0DyBcA?t=478) |
|
||||
| <a href="https://metrica.yandex.com" class="favicon">Yandex Metrica</a> | Web analytics | Main product | 360 servers in one cluster, 1862 servers in one department | 66.41 PiB / 5.68 PiB | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/introduction/#13) |
|
||||
| <a href="https://metrica.yandex.com" class="favicon">Yandex Metrica</a> | Web analytics | Main product | 630 servers in one cluster, 360 servers in another cluster, 1862 servers in one department | 133 PiB / 8.31 PiB / 120 trillion records | [Slides, February 2020](https://presentations.clickhouse.tech/meetup40/introduction/#13) |
|
||||
| <a href="https://htc-cs.ru/" class="favicon">ЦВТ</a> | Software Development | Metrics, Logging | — | — | [Blog Post, March 2019, in Russian](https://vc.ru/dev/62715-kak-my-stroili-monitoring-na-prometheus-clickhouse-i-elk) |
|
||||
| <a href="https://mkb.ru/" class="favicon">МКБ</a> | Bank | Web-system monitoring | — | — | [Slides in Russian, September 2019](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup28/mkb.pdf) |
|
||||
| <a href="https://cft.ru/" class="favicon">ЦФТ</a> | Banking, Financial products, Payments | — | — | — | [Meetup in Russian, April 2020](https://team.cft.ru/events/162) |
|
||||
|
@ -44,11 +44,10 @@ stages, such as query planning or distributed queries.
|
||||
|
||||
To be useful, the tracing information has to be exported to a monitoring system
|
||||
that supports OpenTelemetry, such as Jaeger or Prometheus. ClickHouse avoids
|
||||
a dependency on a particular monitoring system, instead only
|
||||
providing the tracing data conforming to the standard. A natural way to do so
|
||||
in an SQL RDBMS is a system table. OpenTelemetry trace span information
|
||||
a dependency on a particular monitoring system, instead only providing the
|
||||
tracing data through a system table. OpenTelemetry trace span information
|
||||
[required by the standard](https://github.com/open-telemetry/opentelemetry-specification/blob/master/specification/overview.md#span)
|
||||
is stored in the system table called `system.opentelemetry_span_log`.
|
||||
is stored in the `system.opentelemetry_span_log` table.
|
||||
|
||||
The table must be enabled in the server configuration, see the `opentelemetry_span_log`
|
||||
element in the default config file `config.xml`. It is enabled by default.
|
||||
@ -67,3 +66,31 @@ The table has the following columns:
|
||||
|
||||
The tags or attributes are saved as two parallel arrays, containing the keys
|
||||
and values. Use `ARRAY JOIN` to work with them.
|
||||
|
||||
## Integration with monitoring systems
|
||||
|
||||
At the moment, there is no ready-made tool that can export the tracing data from
|
||||
ClickHouse to a monitoring system.
|
||||
|
||||
For testing, it is possible to set up the export using a materialized view with the URL engine over the `system.opentelemetry_span_log` table, which would push the arriving log data to an HTTP endpoint of a trace collector. For example, to push the minimal span data to a Zipkin instance running at `http://localhost:9411`, in Zipkin v2 JSON format:
|
||||
|
||||
```sql
|
||||
CREATE MATERIALIZED VIEW default.zipkin_spans
|
||||
ENGINE = URL('http://127.0.0.1:9411/api/v2/spans', 'JSONEachRow')
|
||||
SETTINGS output_format_json_named_tuples_as_objects = 1,
|
||||
output_format_json_array_of_rows = 1 AS
|
||||
SELECT
|
||||
lower(hex(reinterpretAsFixedString(trace_id))) AS traceId,
|
||||
lower(hex(parent_span_id)) AS parentId,
|
||||
lower(hex(span_id)) AS id,
|
||||
operation_name AS name,
|
||||
start_time_us AS timestamp,
|
||||
finish_time_us - start_time_us AS duration,
|
||||
cast(tuple('clickhouse'), 'Tuple(serviceName text)') AS localEndpoint,
|
||||
cast(tuple(
|
||||
attribute.values[indexOf(attribute.names, 'db.statement')]),
|
||||
'Tuple("db.statement" text)') AS tags
|
||||
FROM system.opentelemetry_span_log
|
||||
```
|
||||
|
||||
In case of any errors, the part of the log data for which the error occurred will be silently lost. Check the server log for error messages if the data does not arrive.
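
Before debugging the export itself, it can help to confirm that spans are being recorded at all; a simple sanity check against the span log (assuming the table is enabled as described above):

```sql
SELECT count(), min(start_time_us), max(finish_time_us)
FROM system.opentelemetry_span_log;
```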
|
||||
|
@ -139,7 +139,7 @@ Lazy loading of dictionaries.
|
||||
|
||||
If `true`, then each dictionary is created on first use. If dictionary creation failed, the function that was using the dictionary throws an exception.
|
||||
|
||||
If `false`, all dictionaries are created when the server starts, and if there is an error, the server shuts down.
|
||||
If `false`, all dictionaries are created when the server starts. If a dictionary takes too long to create or is created with errors, the server starts without it and keeps trying to create it.
|
||||
|
||||
The default is `true`.
|
||||
|
||||
|
@ -2293,6 +2293,47 @@ Result:
|
||||
└─────────────────────────┴─────────┘
|
||||
```
|
||||
|
||||
## system_events_show_zero_values {#system_events_show_zero_values}
|
||||
|
||||
Allows selecting zero-valued events from [`system.events`](../../operations/system-tables/events.md).
|
||||
|
||||
Some monitoring systems require passing all the metrics values to them for each checkpoint, even if the metric value is zero.
|
||||
|
||||
Possible values:
|
||||
|
||||
- 0 — Disabled.
|
||||
- 1 — Enabled.
|
||||
|
||||
Default value: `0`.
|
||||
|
||||
**Examples**
|
||||
|
||||
Query
|
||||
|
||||
```sql
|
||||
SELECT * FROM system.events WHERE event='QueryMemoryLimitExceeded';
|
||||
```
|
||||
|
||||
Result
|
||||
|
||||
```text
|
||||
Ok.
|
||||
```
|
||||
|
||||
Query
|
||||
```sql
|
||||
SET system_events_show_zero_values = 1;
|
||||
SELECT * FROM system.events WHERE event='QueryMemoryLimitExceeded';
|
||||
```
|
||||
|
||||
Result
|
||||
|
||||
```text
|
||||
┌─event────────────────────┬─value─┬─description───────────────────────────────────────────┐
|
||||
│ QueryMemoryLimitExceeded │ 0 │ Number of times when memory limit exceeded for query. │
|
||||
└──────────────────────────┴───────┴───────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## allow_experimental_bigint_types {#allow_experimental_bigint_types}
|
||||
|
||||
Enables or disables integer values exceeding the range that is supported by the int data type.
|
||||
|
@ -23,4 +23,44 @@ Please note that `errors_count` is updated once per query to the cluster, but `e
|
||||
- [distributed_replica_error_cap setting](../../operations/settings/settings.md#settings-distributed_replica_error_cap)
|
||||
- [distributed_replica_error_half_life setting](../../operations/settings/settings.md#settings-distributed_replica_error_half_life)
|
||||
|
||||
**Example**
|
||||
|
||||
```sql
|
||||
:) SELECT * FROM system.clusters LIMIT 2 FORMAT Vertical;
|
||||
```
|
||||
|
||||
```text
|
||||
Row 1:
|
||||
──────
|
||||
cluster: test_cluster
|
||||
shard_num: 1
|
||||
shard_weight: 1
|
||||
replica_num: 1
|
||||
host_name: clickhouse01
|
||||
host_address: 172.23.0.11
|
||||
port: 9000
|
||||
is_local: 1
|
||||
user: default
|
||||
default_database:
|
||||
errors_count: 0
|
||||
estimated_recovery_time: 0
|
||||
|
||||
Row 2:
|
||||
──────
|
||||
cluster: test_cluster
|
||||
shard_num: 1
|
||||
shard_weight: 1
|
||||
replica_num: 2
|
||||
host_name: clickhouse02
|
||||
host_address: 172.23.0.12
|
||||
port: 9000
|
||||
is_local: 0
|
||||
user: default
|
||||
default_database:
|
||||
errors_count: 0
|
||||
estimated_recovery_time: 0
|
||||
|
||||
2 rows in set. Elapsed: 0.002 sec.
|
||||
```
|
||||
|
||||
[Original article](https://clickhouse.tech/docs/en/operations/system_tables/clusters) <!--hide-->
|
||||
|
@ -23,4 +23,50 @@ The `system.columns` table contains the following columns (the column type is sh

- `is_in_sampling_key` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Flag that indicates whether the column is in the sampling key expression.
- `compression_codec` ([String](../../sql-reference/data-types/string.md)) — Compression codec name.

**Example**

```sql
:) select * from system.columns LIMIT 2 FORMAT Vertical;
```

```text
Row 1:
──────
database: system
table: aggregate_function_combinators
name: name
type: String
default_kind:
default_expression:
data_compressed_bytes: 0
data_uncompressed_bytes: 0
marks_bytes: 0
comment:
is_in_partition_key: 0
is_in_sorting_key: 0
is_in_primary_key: 0
is_in_sampling_key: 0
compression_codec:

Row 2:
──────
database: system
table: aggregate_function_combinators
name: is_internal
type: UInt8
default_kind:
default_expression:
data_compressed_bytes: 0
data_uncompressed_bytes: 0
marks_bytes: 0
comment:
is_in_partition_key: 0
is_in_sorting_key: 0
is_in_primary_key: 0
is_in_sampling_key: 0
compression_codec:

2 rows in set. Elapsed: 0.002 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/columns) <!--hide-->

@ -1,9 +1,38 @@
# system.databases {#system-databases}

Contains information about the databases that are available to the current user.

Columns:

- `name` ([String](../../sql-reference/data-types/string.md)) — Database name.
- `engine` ([String](../../sql-reference/data-types/string.md)) — [Database engine](../../engines/database-engines/index.md).
- `data_path` ([String](../../sql-reference/data-types/string.md)) — Data path.
- `metadata_path` ([String](../../sql-reference/data-types/string.md)) — Metadata path.
- `uuid` ([UUID](../../sql-reference/data-types/uuid.md)) — Database UUID.

The `name` column from this system table is used for implementing the `SHOW DATABASES` query.

**Example**

Create a database.

``` sql
CREATE DATABASE test
```

Check all of the databases available to the user.

``` sql
SELECT * FROM system.databases
```

``` text
┌─name───────────────────────────┬─engine─┬─data_path──────────────────┬─metadata_path───────────────────────────────────────────────────────┬─────────────────────────────────uuid─┐
│ _temporary_and_external_tables │ Memory │ /var/lib/clickhouse/       │                                                                     │ 00000000-0000-0000-0000-000000000000 │
│ default                        │ Atomic │ /var/lib/clickhouse/store/ │ /var/lib/clickhouse/store/d31/d317b4bd-3595-4386-81ee-c2334694128a/ │ d317b4bd-3595-4386-81ee-c2334694128a │
│ test                           │ Atomic │ /var/lib/clickhouse/store/ │ /var/lib/clickhouse/store/39b/39bf0cc5-4c06-4717-87fe-c75ff3bd8ebb/ │ 39bf0cc5-4c06-4717-87fe-c75ff3bd8ebb │
│ system                         │ Atomic │ /var/lib/clickhouse/store/ │ /var/lib/clickhouse/store/1d1/1d1c869d-e465-4b1b-a51f-be033436ebf9/ │ 1d1c869d-e465-4b1b-a51f-be033436ebf9 │
└────────────────────────────────┴────────┴────────────────────────────┴─────────────────────────────────────────────────────────────────────┴──────────────────────────────────────┘
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/databases) <!--hide-->

@ -11,3 +11,21 @@ Columns:

- `keep_free_space` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Amount of disk space that should stay free on disk in bytes. Defined in the `keep_free_space_bytes` parameter of disk configuration.

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/disks) <!--hide-->

**Example**

```sql
:) SELECT * FROM system.disks;
```

```text
┌─name────┬─path─────────────────┬───free_space─┬──total_space─┬─keep_free_space─┐
│ default │ /var/lib/clickhouse/ │ 276392587264 │ 490652508160 │               0 │
└─────────┴──────────────────────┴──────────────┴──────────────┴─────────────────┘

1 rows in set. Elapsed: 0.001 sec.
```

@ -8,3 +8,26 @@ Columns:

- `is_aggregate`(`UInt8`) — Whether the function is aggregate.

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/functions) <!--hide-->

**Example**

```sql
SELECT * FROM system.functions LIMIT 10;
```

```text
┌─name─────────────────────┬─is_aggregate─┬─case_insensitive─┬─alias_to─┐
│ sumburConsistentHash     │            0 │                0 │          │
│ yandexConsistentHash     │            0 │                0 │          │
│ demangle                 │            0 │                0 │          │
│ addressToLine            │            0 │                0 │          │
│ JSONExtractRaw           │            0 │                0 │          │
│ JSONExtractKeysAndValues │            0 │                0 │          │
│ JSONExtract              │            0 │                0 │          │
│ JSONExtractString        │            0 │                0 │          │
│ JSONExtractFloat         │            0 │                0 │          │
│ JSONExtractInt           │            0 │                0 │          │
└──────────────────────────┴──────────────┴──────────────────┴──────────┘

10 rows in set. Elapsed: 0.002 sec.
```
@ -10,4 +10,45 @@ Columns:

- `type` (String) — Setting type (implementation specific string value).
- `changed` (UInt8) — Whether the setting was explicitly defined in the config or explicitly changed.

**Example**

```sql
:) SELECT * FROM system.merge_tree_settings LIMIT 4 FORMAT Vertical;
```

```text
Row 1:
──────
name: index_granularity
value: 8192
changed: 0
description: How many rows correspond to one primary key value.
type: SettingUInt64

Row 2:
──────
name: min_bytes_for_wide_part
value: 0
changed: 0
description: Minimal uncompressed size in bytes to create part in wide format instead of compact
type: SettingUInt64

Row 3:
──────
name: min_rows_for_wide_part
value: 0
changed: 0
description: Minimal number of rows to create part in wide format instead of compact
type: SettingUInt64

Row 4:
──────
name: merge_max_block_size
value: 8192
changed: 0
description: How many rows in blocks should be formed for merge operations.
type: SettingUInt64

4 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/merge_tree_settings) <!--hide-->

@ -6,4 +6,27 @@ You can use this table for tests, or if you need to do a brute force search.

Reads from this table are not parallelized.

**Example**

```sql
:) SELECT * FROM system.numbers LIMIT 10;
```

```text
┌─number─┐
│      0 │
│      1 │
│      2 │
│      3 │
│      4 │
│      5 │
│      6 │
│      7 │
│      8 │
│      9 │
└────────┘

10 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/numbers) <!--hide-->

@ -4,4 +4,27 @@ The same as [system.numbers](../../operations/system-tables/numbers.md) but read

Used for tests.

**Example**

```sql
:) SELECT * FROM system.numbers_mt LIMIT 10;
```

```text
┌─number─┐
│      0 │
│      1 │
│      2 │
│      3 │
│      4 │
│      5 │
│      6 │
│      7 │
│      8 │
│      9 │
└────────┘

10 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/numbers_mt) <!--hide-->

@ -6,4 +6,18 @@ This table is used if a `SELECT` query doesn’t specify the `FROM` clause.

This is similar to the `DUAL` table found in other DBMSs.

**Example**

```sql
:) SELECT * FROM system.one LIMIT 10;
```

```text
┌─dummy─┐
│     0 │
└───────┘

1 rows in set. Elapsed: 0.001 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/one) <!--hide-->

@ -14,4 +14,51 @@ Columns:

- `query` (String) – The query text. For `INSERT`, it doesn’t include the data to insert.
- `query_id` (String) – Query ID, if defined.

```sql
:) SELECT * FROM system.processes LIMIT 10 FORMAT Vertical;
```

```text
Row 1:
──────
is_initial_query: 1
user: default
query_id: 35a360fa-3743-441d-8e1f-228c938268da
address: ::ffff:172.23.0.1
port: 47588
initial_user: default
initial_query_id: 35a360fa-3743-441d-8e1f-228c938268da
initial_address: ::ffff:172.23.0.1
initial_port: 47588
interface: 1
os_user: bharatnc
client_hostname: tower
client_name: ClickHouse
client_revision: 54437
client_version_major: 20
client_version_minor: 7
client_version_patch: 2
http_method: 0
http_user_agent:
quota_key:
elapsed: 0.000582537
is_cancelled: 0
read_rows: 0
read_bytes: 0
total_rows_approx: 0
written_rows: 0
written_bytes: 0
memory_usage: 0
peak_memory_usage: 0
query: SELECT * from system.processes LIMIT 10 FORMAT Vertical;
thread_ids: [67]
ProfileEvents.Names: ['Query','SelectQuery','ReadCompressedBytes','CompressedReadBufferBlocks','CompressedReadBufferBytes','IOBufferAllocs','IOBufferAllocBytes','ContextLock','RWLockAcquiredReadLocks']
ProfileEvents.Values: [1,1,36,1,10,1,89,16,1]
Settings.Names: ['use_uncompressed_cache','load_balancing','log_queries','max_memory_usage']
Settings.Values: ['0','in_order','1','10000000000']

1 rows in set. Elapsed: 0.002 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/processes) <!--hide-->

@ -10,6 +10,7 @@ Columns:

- `disks` ([Array(String)](../../sql-reference/data-types/array.md)) — Disk names, defined in the storage policy.
- `max_data_part_size` ([UInt64](../../sql-reference/data-types/int-uint.md)) — Maximum size of a data part that can be stored on volume disks (0 — no limit).
- `move_factor` ([Float64](../../sql-reference/data-types/float.md)) — Ratio of free disk space. When the ratio exceeds the value of the configuration parameter, ClickHouse starts to move data to the next volume in order.
- `prefer_not_to_merge` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Value of the `prefer_not_to_merge` setting. When this setting is enabled, merging data on this volume is not allowed. This allows controlling how ClickHouse works with slow disks.

If the storage policy contains more than one volume, then information for each volume is stored in an individual row of the table.

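A quick query sketch to inspect these columns, assuming at least one storage policy is configured and a server version that already ships the `prefer_not_to_merge` column:

```sql
SELECT
    policy_name,
    volume_name,
    disks,
    max_data_part_size,
    move_factor,
    prefer_not_to_merge
FROM system.storage_policies;
```
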
@ -52,4 +52,56 @@ This table contains the following columns (the column type is shown in brackets)

The `system.tables` table is used in `SHOW TABLES` query implementation.

```sql
:) SELECT * FROM system.tables LIMIT 2 FORMAT Vertical;
```

```text
Row 1:
──────
database: system
name: aggregate_function_combinators
uuid: 00000000-0000-0000-0000-000000000000
engine: SystemAggregateFunctionCombinators
is_temporary: 0
data_paths: []
metadata_path: /var/lib/clickhouse/metadata/system/aggregate_function_combinators.sql
metadata_modification_time: 1970-01-01 03:00:00
dependencies_database: []
dependencies_table: []
create_table_query:
engine_full:
partition_key:
sorting_key:
primary_key:
sampling_key:
storage_policy:
total_rows: ᴺᵁᴸᴸ
total_bytes: ᴺᵁᴸᴸ

Row 2:
──────
database: system
name: asynchronous_metrics
uuid: 00000000-0000-0000-0000-000000000000
engine: SystemAsynchronousMetrics
is_temporary: 0
data_paths: []
metadata_path: /var/lib/clickhouse/metadata/system/asynchronous_metrics.sql
metadata_modification_time: 1970-01-01 03:00:00
dependencies_database: []
dependencies_table: []
create_table_query:
engine_full:
partition_key:
sorting_key:
primary_key:
sampling_key:
storage_policy:
total_rows: ᴺᵁᴸᴸ
total_bytes: ᴺᵁᴸᴸ

2 rows in set. Elapsed: 0.004 sec.
```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/tables) <!--hide-->

@ -70,11 +70,21 @@ Parameters:

    <!-- Configuration of clusters as in an ordinary server config -->
    <remote_servers>
        <source_cluster>
            <!--
                source cluster & destination clusters accept exactly the same
                parameters as parameters for usual Distributed table
                see https://clickhouse.tech/docs/en/engines/table-engines/special/distributed/
            -->
            <shard>
                <internal_replication>false</internal_replication>
                <replica>
                    <host>127.0.0.1</host>
                    <port>9000</port>
                    <!--
                    <user>default</user>
                    <password>default</password>
                    <secure>1</secure>
                    -->
                </replica>
            </shard>
            ...

@ -4,4 +4,59 @@ toc_priority: 5

# avg {#agg_function-avg}

Calculates the arithmetic mean.

**Syntax**

``` sql
avg(x)
```

**Parameter**

- `x` — Values.

`x` must be
[Integer](../../../sql-reference/data-types/int-uint.md),
[floating-point](../../../sql-reference/data-types/float.md), or
[Decimal](../../../sql-reference/data-types/decimal.md).

**Returned value**

- `NaN` if the supplied parameter is empty.
- The mean otherwise.

**Return type** is always [Float64](../../../sql-reference/data-types/float.md).

**Example**

Query:

``` sql
SELECT avg(x) FROM values('x Int8', 0, 1, 2, 3, 4, 5)
```

Result:

``` text
┌─avg(x)─┐
│    2.5 │
└────────┘
```

**Example**

Query:

``` sql
CREATE TABLE test (t UInt8) ENGINE = Memory;
SELECT avg(t) FROM test
```

Result:

``` text
┌─avg(t)─┐
│    nan │
└────────┘
```

@ -14,17 +14,21 @@ avgWeighted(x, weight)

**Parameters**

- `x` — Values.
- `weight` — Weights of the values.

`x` and `weight` must both be
[Integer](../../../sql-reference/data-types/int-uint.md),
[floating-point](../../../sql-reference/data-types/float.md), or
[Decimal](../../../sql-reference/data-types/decimal.md),
but may have different types.

**Returned value**

- `NaN` if all the weights are equal to 0 or the supplied weights parameter is empty.
- The weighted mean otherwise.

**Return type** is always [Float64](../../../sql-reference/data-types/float.md).

**Example**

@ -42,3 +46,54 @@ Result:
│                      8 │
└────────────────────────┘
```

**Example**

Query:

``` sql
SELECT avgWeighted(x, w)
FROM values('x Int8, w Float64', (4, 1), (1, 0), (10, 2))
```

Result:

``` text
┌─avgWeighted(x, w)─┐
│                 8 │
└───────────────────┘
```

**Example**

Query:

``` sql
SELECT avgWeighted(x, w)
FROM values('x Int8, w Int8', (0, 0), (1, 0), (10, 0))
```

Result:

``` text
┌─avgWeighted(x, w)─┐
│               nan │
└───────────────────┘
```

**Example**

Query:

``` sql
CREATE TABLE test (t UInt8) ENGINE = Memory;
SELECT avgWeighted(t, t) FROM test
```

Result:

``` text
┌─avgWeighted(t, t)─┐
│               nan │
└───────────────────┘
```

@ -0,0 +1,53 @@
## rankCorr {#agg_function-rankcorr}

Computes a rank correlation coefficient.

**Syntax**

``` sql
rankCorr(x, y)
```

**Parameters**

- `x` — Arbitrary value. [Float32](../../../sql-reference/data-types/float.md#float32-float64) or [Float64](../../../sql-reference/data-types/float.md#float32-float64).
- `y` — Arbitrary value. [Float32](../../../sql-reference/data-types/float.md#float32-float64) or [Float64](../../../sql-reference/data-types/float.md#float32-float64).

**Returned value(s)**

- Returns a rank correlation coefficient of the ranks of `x` and `y`. The value of the correlation coefficient ranges from -1 to +1. If fewer than two arguments are passed, the function throws an exception. A value close to +1 indicates a strong monotonic relationship: as one random variable increases, the second random variable also increases. A value close to -1 indicates a strong monotonic relationship in the opposite direction: as one random variable increases, the second random variable decreases. A value close or equal to 0 indicates no relationship between the two random variables.

Type: [Float64](../../../sql-reference/data-types/float.md#float32-float64).

**Example**

Query:

``` sql
SELECT rankCorr(number, number) FROM numbers(100);
```

Result:

``` text
┌─rankCorr(number, number)─┐
│                        1 │
└──────────────────────────┘
```

Query:

``` sql
SELECT roundBankers(rankCorr(exp(number), sin(number)), 3) FROM numbers(100);
```

Result:

``` text
┌─roundBankers(rankCorr(exp(number), sin(number)), 3)─┐
│                                              -0.037 │
└─────────────────────────────────────────────────────┘
```

**See Also**

- [Spearman's rank correlation coefficient](https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient)
@ -25,7 +25,37 @@ SELECT

## toTimeZone {#totimezone}

Converts time or date and time to the specified time zone. The time zone is an attribute of the `Date`/`DateTime` types. The internal value (number of seconds) of the table field or of the resultset's column does not change; the column's type changes, and its string representation changes accordingly.

```sql
SELECT
    toDateTime('2019-01-01 00:00:00', 'UTC') AS time_utc,
    toTypeName(time_utc) AS type_utc,
    toInt32(time_utc) AS int32utc,
    toTimeZone(time_utc, 'Asia/Yekaterinburg') AS time_yekat,
    toTypeName(time_yekat) AS type_yekat,
    toInt32(time_yekat) AS int32yekat,
    toTimeZone(time_utc, 'US/Samoa') AS time_samoa,
    toTypeName(time_samoa) AS type_samoa,
    toInt32(time_samoa) AS int32samoa
FORMAT Vertical;
```

```text
Row 1:
──────
time_utc: 2019-01-01 00:00:00
type_utc: DateTime('UTC')
int32utc: 1546300800
time_yekat: 2019-01-01 05:00:00
type_yekat: DateTime('Asia/Yekaterinburg')
int32yekat: 1546300800
time_samoa: 2018-12-31 13:00:00
type_samoa: DateTime('US/Samoa')
int32samoa: 1546300800
```

`toTimeZone(time_utc, 'Asia/Yekaterinburg')` changes the `DateTime('UTC')` type to `DateTime('Asia/Yekaterinburg')`. The value (the Unix timestamp) 1546300800 stays the same, but the string representation (the result of the `toString()` function) changes from `time_utc: 2019-01-01 00:00:00` to `time_yekat: 2019-01-01 05:00:00`.

## toYear {#toyear}

@ -67,9 +97,8 @@ Leap seconds are not accounted for.

## toUnixTimestamp {#to-unix-timestamp}

For a `DateTime` argument: converts the value to a number of type `UInt32`, its Unix timestamp (https://en.wikipedia.org/wiki/Unix_time).
For a `String` argument: converts the input string to a datetime according to the time zone (optional second argument; the server time zone is used by default) and returns the corresponding Unix timestamp.

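As a quick illustration of the `String` form, the following sketch reuses the value from the `toTimeZone` example above, where the Unix timestamp of `2019-01-01 00:00:00` UTC is 1546300800:

``` sql
SELECT toUnixTimestamp('2019-01-01 00:00:00', 'UTC') AS ts;
```

``` text
┌─────────ts─┐
│ 1546300800 │
└────────────┘
```
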
**Syntax**

@ -9,16 +9,4 @@ toc_title: IN Operator

See the section [IN operators](../../sql-reference/operators/in.md#select-in-operators).

[Original article](https://clickhouse.tech/docs/en/query_language/functions/in_functions/) <!--hide-->

@ -1,5 +1,5 @@
---
toc_priority: 67
toc_title: Other
---

@ -536,4 +536,58 @@ For case-insensitive search or/and in UTF-8 format use functions `ngramSearchCas

!!! note "Note"
    For the UTF-8 case we use 3-gram distance. These are not perfectly fair n-gram distances. We use 2-byte hashes to hash n-grams and then calculate the (non-)symmetric difference between these hash tables – collisions may occur. With the UTF-8 case-insensitive format we do not use a fair `tolower` function – we zero the 5-th bit (starting from zero) of each codepoint byte and the first bit of the zeroth byte if there is more than one byte – this works for Latin and mostly for all Cyrillic letters.

## countSubstrings(haystack, needle) {#countSubstrings}

Counts the number of substring occurrences.

For a case-insensitive search, use the function `countSubstringsCaseInsensitive` (or `countSubstringsCaseInsensitiveUTF8`).

**Syntax**

``` sql
countSubstrings(haystack, needle[, start_pos])
```

**Parameters**

- `haystack` — The string to search in. [String](../../sql-reference/syntax.md#syntax-string-literal).
- `needle` — The substring to search for. [String](../../sql-reference/syntax.md#syntax-string-literal).
- `start_pos` — Optional parameter, the position of the first character in the string to start the search from. [UInt](../../sql-reference/data-types/int-uint.md).

**Returned values**

- The number of occurrences.

Type: `Integer`.

**Examples**

Query:

``` sql
SELECT countSubstrings('foobar.com', '.')
```

Result:

``` text
┌─countSubstrings('foobar.com', '.')─┐
│                                  1 │
└────────────────────────────────────┘
```

Query:

``` sql
SELECT countSubstrings('aaaa', 'aa')
```

Result:

``` text
┌─countSubstrings('aaaa', 'aa')─┐
│                             2 │
└───────────────────────────────┘
```

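The `start_pos` parameter is not exercised by the examples above; the following sketch assumes the search simply begins at the given 1-based position, so the first `abc` is skipped:

Query:

``` sql
SELECT countSubstrings('abcabc', 'abc', 4)
```

Result:

``` text
┌─countSubstrings('abcabc', 'abc', 4)─┐
│                                   1 │
└─────────────────────────────────────┘
```
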
[Original article](https://clickhouse.tech/docs/en/query_language/functions/string_search_functions/) <!--hide-->

114
docs/en/sql-reference/functions/tuple-functions.md
Normal file
@ -0,0 +1,114 @@
---
toc_priority: 66
toc_title: Tuples
---

# Functions for Working with Tuples {#tuple-functions}

## tuple {#tuple}

A function that allows grouping multiple columns.
For columns with the types T1, T2, …, it returns a Tuple(T1, T2, …) type tuple containing these columns. There is no cost to execute the function.
Tuples are normally used as intermediate values for an argument of IN operators, or for creating a list of formal parameters of lambda functions. Tuples can’t be written to a table.

The function implements the operator `(x, y, …)`.

**Syntax**

``` sql
tuple(x, y, …)
```

## tupleElement {#tupleelement}

A function that allows getting a column from a tuple.
`N` is the column index, starting from 1. `N` must be a constant strictly positive integer no greater than the size of the tuple.
There is no cost to execute the function.

The function implements the operator `x.N`.

**Syntax**

``` sql
tupleElement(tuple, n)
```

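Neither function above ships an example here, so the following is a small illustrative sketch combining both (the aliases `t` and `second` are chosen only for readability):

``` sql
SELECT tuple(1, 'a') AS t, tupleElement(t, 2) AS second;
```

``` text
┌─t───────┬─second─┐
│ (1,'a') │ a      │
└─────────┴────────┘
```
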
## untuple {#untuple}

Performs syntactic substitution of [tuple](../../sql-reference/data-types/tuple.md#tuplet1-t2) elements in the call location.

**Syntax**

``` sql
untuple(x)
```

You can use the `EXCEPT` expression to skip columns in the result of the query.

**Parameters**

- `x` — A `tuple` function, column, or tuple of elements. [Tuple](../../sql-reference/data-types/tuple.md).

**Returned value**

- None.

**Examples**

Input table:

``` text
┌─key─┬─v1─┬─v2─┬─v3─┬─v4─┬─v5─┬─v6────────┐
│   1 │ 10 │ 20 │ 40 │ 30 │ 15 │ (33,'ab') │
│   2 │ 25 │ 65 │ 70 │ 40 │  6 │ (44,'cd') │
│   3 │ 57 │ 30 │ 20 │ 10 │  5 │ (55,'ef') │
│   4 │ 55 │ 12 │  7 │ 80 │ 90 │ (66,'gh') │
│   5 │ 30 │ 50 │ 70 │ 25 │ 55 │ (77,'kl') │
└─────┴────┴────┴────┴────┴────┴───────────┘
```

Example of using a `Tuple`-type column as the `untuple` function parameter:

Query:

``` sql
SELECT untuple(v6) FROM kv;
```

Result:

``` text
┌─_ut_1─┬─_ut_2─┐
│    33 │ ab    │
│    44 │ cd    │
│    55 │ ef    │
│    66 │ gh    │
│    77 │ kl    │
└───────┴───────┘
```

Example of using an `EXCEPT` expression:

Query:

``` sql
SELECT untuple((* EXCEPT (v2, v3),)) FROM kv;
```

Result:

``` text
┌─key─┬─v1─┬─v4─┬─v5─┬─v6────────┐
│   1 │ 10 │ 30 │ 15 │ (33,'ab') │
│   2 │ 25 │ 40 │  6 │ (44,'cd') │
│   3 │ 57 │ 10 │  5 │ (55,'ef') │
│   4 │ 55 │ 80 │ 90 │ (66,'gh') │
│   5 │ 30 │ 25 │ 55 │ (77,'kl') │
└─────┴────┴────┴────┴───────────┘
```

**See Also**

- [Tuple](../../sql-reference/data-types/tuple.md)

[Original article](https://clickhouse.tech/docs/en/sql-reference/functions/tuple-functions/) <!--hide-->

@ -9,7 +9,7 @@ toc_title: DELETE
ALTER TABLE [db.]table [ON CLUSTER cluster] DELETE WHERE filter_expr
```

Deletes data matching the specified filtering expression. Implemented as a [mutation](../../../sql-reference/statements/alter/index.md#mutations).

!!! note "Note"
    The `ALTER TABLE` prefix makes this syntax different from most other systems supporting SQL. It is intended to signify that unlike similar queries in OLTP databases this is a heavy operation not designed for frequent use.

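For illustration only, a hedged usage sketch; the table `visits` and the column `browser` are hypothetical:

``` sql
-- Hypothetical table and column, shown only to illustrate the syntax.
ALTER TABLE visits DELETE WHERE browser = 'obsolete';
```
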
@ -14,10 +14,9 @@ The following operations are available:

- `ALTER TABLE [db.]table MATERIALIZE INDEX name IN PARTITION partition_name` - The query rebuilds the secondary index `name` in the partition `partition_name`. Implemented as a [mutation](../../../../sql-reference/statements/alter/index.md#mutations). See the sketch below.

The first two commands are lightweight in the sense that they only change metadata or remove files.

Also, they are replicated, syncing indices metadata via ZooKeeper.

!!! note "Note"
    Index manipulation is supported only for tables with [`*MergeTree`](../../../../engines/table-engines/mergetree-family/mergetree.md) engine (including [replicated](../../../../engines/table-engines/mergetree-family/replication.md) variants).

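A hedged sketch of the `MATERIALIZE INDEX` form from the list above; the table `hits`, index `idx_url`, and partition `201901` are hypothetical:

``` sql
-- Rebuild the hypothetical secondary index `idx_url` in partition 201901.
ALTER TABLE hits MATERIALIZE INDEX idx_url IN PARTITION 201901;
```
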
@ -21,10 +21,10 @@ The following operations with [partitions](../../../engines/table-engines/merget

<!-- -->

## DETACH PARTITION\|PART {#alter_detach-partition}

``` sql
ALTER TABLE table_name DETACH PARTITION|PART partition_expr
```

Moves all data for the specified partition to the `detached` directory. The server forgets about the detached data partition as if it does not exist. The server will not know about this data until you make the [ATTACH](#alter_attach-partition) query.
@ -32,7 +32,8 @@ Moves all data for the specified partition to the `detached` directory. The serv

Example:

``` sql
ALTER TABLE mt DETACH PARTITION '2020-11-21';
ALTER TABLE mt DETACH PART 'all_2_2_0';
```

Read about setting the partition expression in the section [How to specify the partition expression](#alter-how-to-specify-part-expr).
@ -41,10 +42,10 @@ After the query is executed, you can do whatever you want with the data in the `

This query is replicated – it moves the data to the `detached` directory on all replicas. Note that you can execute this query only on a leader replica. To find out if a replica is a leader, perform the `SELECT` query to the [system.replicas](../../../operations/system-tables/replicas.md#system_tables-replicas) table. Alternatively, it is easier to make a `DETACH` query on all replicas - all the replicas throw an exception, except the leader replica.

## DROP PARTITION\|PART {#alter_drop-partition}

``` sql
ALTER TABLE table_name DROP PARTITION|PART partition_expr
```

Deletes the specified partition from the table. This query tags the partition as inactive and deletes data completely, approximately in 10 minutes.
@ -53,6 +54,13 @@ Read about setting the partition expression in a section [How to specify the par

The query is replicated – it deletes data on all replicas.

Example:

``` sql
ALTER TABLE mt DROP PARTITION '2020-11-21';
ALTER TABLE mt DROP PART 'all_4_4_0';
```

## DROP DETACHED PARTITION\|PART {#alter_drop-detached}

``` sql
@ -233,6 +241,46 @@ ALTER TABLE hits MOVE PART '20190301_14343_16206_438' TO VOLUME 'slow'
ALTER TABLE hits MOVE PARTITION '2019-09-01' TO DISK 'fast_ssd'
```

## UPDATE IN PARTITION {#update-in-partition}

Manipulates data in the specified partition matching the specified filtering expression. Implemented as a [mutation](../../../sql-reference/statements/alter/index.md#mutations).

Syntax:

``` sql
ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] [IN PARTITION partition_id] WHERE filter_expr
```

### Example

``` sql
ALTER TABLE mt UPDATE x = x + 1 IN PARTITION 2 WHERE p = 2;
```

### See Also

- [UPDATE](../../../sql-reference/statements/alter/update.md#alter-table-update-statements)

## DELETE IN PARTITION {#delete-in-partition}

Deletes data in the specified partition matching the specified filtering expression. Implemented as a [mutation](../../../sql-reference/statements/alter/index.md#mutations).

Syntax:

``` sql
ALTER TABLE [db.]table DELETE [IN PARTITION partition_id] WHERE filter_expr
```

### Example

``` sql
ALTER TABLE mt DELETE IN PARTITION 2 WHERE p = 2;
```

### See Also

- [DELETE](../../../sql-reference/statements/alter/delete.md#alter-mutations)

## How to Set Partition Expression {#alter-how-to-specify-part-expr}

You can specify the partition expression in `ALTER ... PARTITION` queries in different ways:

@ -250,4 +298,6 @@ All the rules above are also true for the [OPTIMIZE](../../../sql-reference/stat
OPTIMIZE TABLE table_not_partitioned PARTITION tuple() FINAL;
```

`IN PARTITION` specifies the partition to which the [UPDATE](../../../sql-reference/statements/alter/update.md#alter-table-update-statements) or [DELETE](../../../sql-reference/statements/alter/delete.md#alter-mutations) expressions are applied as a result of the `ALTER TABLE` query. New parts are created only from the specified partition. In this way, `IN PARTITION` helps to reduce the load when the table is divided into many partitions and you only need to update the data point by point.

The examples of `ALTER ... PARTITION` queries are demonstrated in the tests [`00502_custom_partitioning_local`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_local.sql) and [`00502_custom_partitioning_replicated_zookeeper`](https://github.com/ClickHouse/ClickHouse/blob/master/tests/queries/0_stateless/00502_custom_partitioning_replicated_zookeeper.sql).

@ -9,7 +9,7 @@ toc_title: UPDATE
ALTER TABLE [db.]table UPDATE column1 = expr1 [, ...] WHERE filter_expr
```

Manipulates data matching the specified filtering expression. Implemented as a [mutation](../../../sql-reference/statements/alter/index.md#mutations).

!!! note "Note"
    The `ALTER TABLE` prefix makes this syntax different from most other systems supporting SQL. It is intended to signify that unlike similar queries in OLTP databases this is a heavy operation not designed for frequent use.

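For illustration only, a hedged usage sketch; the table `visits` and the column `duration` are hypothetical:

``` sql
-- Hypothetical table and column, shown only to illustrate the syntax.
ALTER TABLE visits UPDATE duration = 0 WHERE duration < 0;
```
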
@ -29,6 +29,8 @@ A column description is `name type` in the simplest case. Example: `RegionID UIn

Expressions can also be defined for default values (see below).

If necessary, a primary key can be specified, with one or more key expressions.

### With a Schema Similar to Other Table {#with-a-schema-similar-to-other-table}

``` sql
@ -97,6 +99,34 @@ If you add a new column to a table but later change its default expression, the

It is not possible to set default values for elements in nested data structures.

## Primary Key {#primary-key}

You can define a [primary key](../../../engines/table-engines/mergetree-family/mergetree.md#primary-keys-and-indexes-in-queries) when creating a table. The primary key can be specified in two ways:

- inside the column list

``` sql
CREATE TABLE db.table_name
(
    name1 type1, name2 type2, ...,
    PRIMARY KEY(expr1[, expr2,...])
)
ENGINE = engine;
```

- outside the column list

``` sql
CREATE TABLE db.table_name
(
    name1 type1, name2 type2, ...
)
ENGINE = engine
PRIMARY KEY(expr1[, expr2,...]);
```

You can't combine both ways in one query.

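As a concrete, hedged sketch of the second form (the table and column names are hypothetical; `ORDER BY` is included because MergeTree tables require a sorting key, which here matches the primary key):

``` sql
CREATE TABLE db.events
(
    date Date,
    user_id UInt64,
    event String
)
ENGINE = MergeTree
ORDER BY (date, user_id)
PRIMARY KEY (date, user_id);
```
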
## Constraints {#constraints}

Along with column descriptions, constraints can be defined:

@ -152,7 +152,7 @@ ClickHouse can manage background processes in [MergeTree](../../engines/table-en

Provides the possibility to stop background merges for tables in the MergeTree family:

``` sql
SYSTEM STOP MERGES [ON VOLUME <volume_name> | [db.]merge_tree_family_table_name]
```

!!! note "Note"
@ -163,7 +163,7 @@ SYSTEM STOP MERGES [ON VOLUME <volume_name> | [db.]merge_tree_family_table_name

Provides the possibility to start background merges for tables in the MergeTree family:

``` sql
SYSTEM START MERGES [ON VOLUME <volume_name> | [db.]merge_tree_family_table_name]
```

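A hedged usage sketch of the `ON VOLUME` form shown above; the volume name `slow_hdd` is hypothetical:

``` sql
-- Pause merges on a hypothetical volume named `slow_hdd`, then resume them.
SYSTEM STOP MERGES ON VOLUME slow_hdd;
SYSTEM START MERGES ON VOLUME slow_hdd;
```
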
### STOP TTL MERGES {#query_language-stop-ttl-merges}

@ -57,7 +57,7 @@ Identifiers are:

Identifiers can be quoted or non-quoted. The latter is preferred.

Non-quoted identifiers must match the regex `^[a-zA-Z_][0-9a-zA-Z_]*$` and can not be equal to [keywords](#syntax-keywords). Examples: `x`, `_1`, `X_y__Z123_`.

If you want to use an identifier that is the same as a keyword, or to use other symbols in an identifier, quote it using double quotes or backticks, for example, `"id"`, `` `id` `` (see the sketch below).

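A minimal hedged sketch of quoting an identifier that collides with a keyword (the table and column names are illustrative):

``` sql
-- `index` collides with the INDEX keyword, so it is quoted with backticks.
CREATE TABLE t (`index` UInt32) ENGINE = Memory;
SELECT `index` FROM t;
```
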
@ -19,7 +19,6 @@ toc_title: Bibliotecas de terceros utilizadas
| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
| H3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
| libbtrie | [BSD 2-Clause License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libbtrie/LICENSE) |
| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |

@ -21,7 +21,6 @@ toc_title: "\u06A9\u062A\u0627\u0628\u062E\u0627\u0646\u0647 \u0647\u0627\u06CC
| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
| H3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
| libbtrie | [BSD 2-Clause License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libbtrie/LICENSE) |
| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |

@ -19,7 +19,6 @@ toc_title: "Biblioth\xE8ques Tierces Utilis\xE9es"
| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
| h3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
| libbtrie | [BSD 2-Clause License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libbtrie/LICENSE) |
| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |

@ -20,7 +20,6 @@ toc_title: "\u30B5\u30FC\u30C9\u30D1\u30FC\u30C6\u30A3\u88FD\u30E9\u30A4\u30D6\u
| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
| h3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
| libbtrie | [BSD 2-Clause License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libbtrie/LICENSE) |
| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |

@ -18,7 +18,6 @@ toc_title: "\u0418\u0441\u043f\u043e\u043b\u044c\u0437\u0443\u0435\u043c\u044b\u
| googletest | [BSD 3-Clause License](https://github.com/google/googletest/blob/master/LICENSE) |
| h3 | [Apache License 2.0](https://github.com/uber/h3/blob/master/LICENSE) |
| hyperscan | [BSD 3-Clause License](https://github.com/intel/hyperscan/blob/master/LICENSE) |
| libbtrie | [BSD 2-Clause License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libbtrie/LICENSE) |
| libcxxabi | [BSD + MIT](https://github.com/ClickHouse/ClickHouse/blob/master/libs/libglibc-compatibility/libcxxabi/LICENSE.TXT) |
| libdivide | [Zlib License](https://github.com/ClickHouse/ClickHouse/blob/master/contrib/libdivide/LICENSE.txt) |
| libgsasl | [LGPL v2.1](https://github.com/ClickHouse-Extras/libgsasl/blob/3b8948a4042e34fb00b4fb987535dc9e02e39040/LICENSE) |