Merge branch 'master' into improvement/fn-traits

This commit is contained in:
mergify[bot] 2021-09-30 14:26:19 +00:00 committed by GitHub
commit bfbe49a268
3506 changed files with 59389 additions and 24735 deletions


@ -1,12 +0,0 @@
# .arcignore is the same as .gitignore but for Arc VCS.
# Arc VCS is a proprietary VCS in Yandex that is very similar to Git
# from the user perspective but with the following differences:
# 1. Data is stored in distributed object storage.
# 2. Local copy works via FUSE without downloading all the objects.
# For this reason, it is better suited for huge monorepositories that can be found in large companies (e.g. Yandex, Google).
# As ClickHouse developers, we don't use Arc as a VCS (we use Git).
# But the ClickHouse source code is also mirrored into an internal monorepository and our colleagues are using Arc.
# You can read more about Arc here: https://habr.com/en/company/yandex/blog/482926/
# Repository is synchronized without 3rd-party submodules.
contrib


@ -1,5 +1,3 @@
I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla/?lang=en
Changelog category (leave one):
- New Feature
- Improvement

.gitignore vendored (3 changes)

@ -33,6 +33,9 @@
/docs/ja/single.md
/docs/fa/single.md
/docs/en/development/cmake-in-clickhouse.md
/docs/ja/development/cmake-in-clickhouse.md
/docs/zh/development/cmake-in-clickhouse.md
/docs/ru/development/cmake-in-clickhouse.md
# callgrind files
callgrind.out.*


@ -1,2 +1,2 @@
To see the list of authors who created the source code of ClickHouse, published and distributed by YANDEX LLC as the owner,
To see the list of authors who created the source code of ClickHouse, published and distributed by ClickHouse, Inc. as the owner,
run "SELECT * FROM system.contributors;" query on any ClickHouse server.


@ -7,6 +7,7 @@
* Under clickhouse-local, always treat local addresses with a port as remote. [#26736](https://github.com/ClickHouse/ClickHouse/pull/26736) ([Raúl Marín](https://github.com/Algunenano)).
* Fix the issue that in case of some sophisticated query with column aliases identical to the names of expressions, bad cast may happen. This fixes [#25447](https://github.com/ClickHouse/ClickHouse/issues/25447). This fixes [#26914](https://github.com/ClickHouse/ClickHouse/issues/26914). This fix may introduce backward incompatibility: if there are different expressions with identical names, exception will be thrown. It may break some rare cases when `enable_optimize_predicate_expression` is set. [#26639](https://github.com/ClickHouse/ClickHouse/pull/26639) ([alexey-milovidov](https://github.com/alexey-milovidov)).
* Now a scalar subquery always returns a `Nullable` result if its type can be `Nullable`. This is needed because, in case of an empty subquery, its result should be `Null`. Previously, it was possible to get an error about incompatible types (type deduction does not execute the scalar subquery, and it could use a non-nullable type). A scalar subquery with an empty result which can't be converted to `Nullable` (like `Array` or `Tuple`) now throws an error. Fixes [#25411](https://github.com/ClickHouse/ClickHouse/issues/25411). [#26423](https://github.com/ClickHouse/ClickHouse/pull/26423) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Introduce syntax for here documents. Example `SELECT $doc$ VALUE $doc$`. [#26671](https://github.com/ClickHouse/ClickHouse/pull/26671) ([Maksim Kita](https://github.com/kitaisreal)). This change is backward incompatible if a query contains identifiers that contain `$` [#28768](https://github.com/ClickHouse/ClickHouse/issues/28768).
#### New Feature
@ -17,7 +18,6 @@
* Added integration with S2 geometry library. [#24980](https://github.com/ClickHouse/ClickHouse/pull/24980) ([Andr0901](https://github.com/Andr0901)). ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Add SQLite table engine, table function, database engine. [#24194](https://github.com/ClickHouse/ClickHouse/pull/24194) ([Arslan Gumerov](https://github.com/g-arslan)). ([Kseniia Sumarokova](https://github.com/kssenii)).
* Added support for custom query for `MySQL`, `PostgreSQL`, `ClickHouse`, `JDBC`, `Cassandra` dictionary source. Closes [#1270](https://github.com/ClickHouse/ClickHouse/issues/1270). [#26995](https://github.com/ClickHouse/ClickHouse/pull/26995) ([Maksim Kita](https://github.com/kitaisreal)).
* Introduce syntax for here documents. Example `SELECT $doc$ VALUE $doc$`. [#26671](https://github.com/ClickHouse/ClickHouse/pull/26671) ([Maksim Kita](https://github.com/kitaisreal)).
* Add shared (replicated) storage of user, roles, row policies, quotas and settings profiles through ZooKeeper. [#27426](https://github.com/ClickHouse/ClickHouse/pull/27426) ([Kevin Michel](https://github.com/kmichel-aiven)).
* Add compression for `INTO OUTFILE` that automatically chooses the compression algorithm. Closes [#3473](https://github.com/ClickHouse/ClickHouse/issues/3473). [#27134](https://github.com/ClickHouse/ClickHouse/pull/27134) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
* Add `INSERT ... FROM INFILE` similarly to `SELECT ... INTO OUTFILE`. [#27655](https://github.com/ClickHouse/ClickHouse/pull/27655) ([Filatenkov Artur](https://github.com/FArthur-cmd)).
@ -34,7 +34,6 @@
* New functions `currentProfiles()`, `enabledProfiles()`, `defaultProfiles()`. [#26714](https://github.com/ClickHouse/ClickHouse/pull/26714) ([Vitaly Baranov](https://github.com/vitlibar)).
* Add functions that return (initial_)query_id of the current query. This closes [#23682](https://github.com/ClickHouse/ClickHouse/issues/23682). [#26410](https://github.com/ClickHouse/ClickHouse/pull/26410) ([Alexey Boykov](https://github.com/mathalex)).
* Add `REPLACE GRANT` feature. [#26384](https://github.com/ClickHouse/ClickHouse/pull/26384) ([Caspian](https://github.com/Cas-pian)).
* Implement window function `nth_value(expr, N)` that returns the value of the Nth row of the window frame. [#26334](https://github.com/ClickHouse/ClickHouse/pull/26334) ([Zuo, RuoYu](https://github.com/ryzuo)).
* `EXPLAIN` query now has `EXPLAIN ESTIMATE ...` mode that will show information about read rows, marks and parts from MergeTree tables. Closes [#23941](https://github.com/ClickHouse/ClickHouse/issues/23941). [#26131](https://github.com/ClickHouse/ClickHouse/pull/26131) ([fastio](https://github.com/fastio)).
* Added `system.zookeeper_log` table. All actions of ZooKeeper client are logged into this table. Implements [#25449](https://github.com/ClickHouse/ClickHouse/issues/25449). [#26129](https://github.com/ClickHouse/ClickHouse/pull/26129) ([tavplubix](https://github.com/tavplubix)).
* Zero-copy replication for `ReplicatedMergeTree` over `HDFS` storage. [#25918](https://github.com/ClickHouse/ClickHouse/pull/25918) ([Zhichang Yu](https://github.com/yuzhichang)).


@ -1,4 +1,4 @@
cmake_minimum_required(VERSION 3.3)
cmake_minimum_required(VERSION 3.14)
foreach(policy
CMP0023
@ -152,6 +152,7 @@ if (CMAKE_GENERATOR STREQUAL "Ninja" AND NOT DISABLE_COLORED_BUILD)
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fdiagnostics-color=always")
endif ()
include (cmake/check_flags.cmake)
include (cmake/add_warning.cmake)
if (NOT MSVC)
@ -166,7 +167,8 @@ if (COMPILER_CLANG)
set(COMPILER_FLAGS "${COMPILER_FLAGS} -gdwarf-aranges")
endif ()
if (CMAKE_CXX_COMPILER_VERSION VERSION_GREATER_EQUAL 12.0.0)
if (HAS_USE_CTOR_HOMING)
# For more info see https://blog.llvm.org/posts/2021-04-05-constructor-homing-for-debug-info/
if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG" OR CMAKE_BUILD_TYPE_UC STREQUAL "RELWITHDEBINFO")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Xclang -fuse-ctor-homing")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Xclang -fuse-ctor-homing")
@ -192,7 +194,7 @@ endif ()
# Make sure the final executable has symbols exported
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -rdynamic")
find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-12" "llvm-objcopy-11" "llvm-objcopy-10" "llvm-objcopy-9" "llvm-objcopy-8" "objcopy")
find_program (OBJCOPY_PATH NAMES "llvm-objcopy" "llvm-objcopy-13" "llvm-objcopy-12" "llvm-objcopy-11" "llvm-objcopy-10" "llvm-objcopy-9" "llvm-objcopy-8" "objcopy")
if (NOT OBJCOPY_PATH AND OS_DARWIN)
find_program (BREW_PATH NAMES "brew")
@ -379,7 +381,7 @@ if (COMPILER_CLANG)
endif ()
# Always prefer llvm tools when using clang. For instance, we cannot use GNU ar when llvm LTO is enabled
find_program (LLVM_AR_PATH NAMES "llvm-ar" "llvm-ar-12" "llvm-ar-11" "llvm-ar-10" "llvm-ar-9" "llvm-ar-8")
find_program (LLVM_AR_PATH NAMES "llvm-ar" "llvm-ar-13" "llvm-ar-12" "llvm-ar-11" "llvm-ar-10" "llvm-ar-9" "llvm-ar-8")
if (LLVM_AR_PATH)
message(STATUS "Using llvm-ar: ${LLVM_AR_PATH}.")
@ -388,7 +390,7 @@ if (COMPILER_CLANG)
message(WARNING "Cannot find llvm-ar. System ar will be used instead. It does not work with ThinLTO.")
endif ()
find_program (LLVM_RANLIB_PATH NAMES "llvm-ranlib" "llvm-ranlib-12" "llvm-ranlib-11" "llvm-ranlib-10" "llvm-ranlib-9" "llvm-ranlib-8")
find_program (LLVM_RANLIB_PATH NAMES "llvm-ranlib" "llvm-ranlib-13" "llvm-ranlib-12" "llvm-ranlib-11" "llvm-ranlib-10" "llvm-ranlib-9" "llvm-ranlib-8")
if (LLVM_RANLIB_PATH)
message(STATUS "Using llvm-ranlib: ${LLVM_RANLIB_PATH}.")
@ -629,9 +631,6 @@ include_directories(${ConfigIncludePath})
# Add as many warnings as possible for our own code.
include (cmake/warnings.cmake)
# Check if needed compiler flags are supported
include (cmake/check_flags.cmake)
add_subdirectory (base)
add_subdirectory (src)
add_subdirectory (programs)


@ -6,38 +6,6 @@ Thank you.
## Technical Info
We have a [developer's guide](https://clickhouse.yandex/docs/en/development/developer_instruction/) for writing code for ClickHouse. Besides this guide, you can find [Overview of ClickHouse Architecture](https://clickhouse.yandex/docs/en/development/architecture/) and instructions on how to build ClickHouse in different environments.
We have a [developer's guide](https://clickhouse.com/docs/en/development/developer_instruction/) for writing code for ClickHouse. Besides this guide, you can find [Overview of ClickHouse Architecture](https://clickhouse.com/docs/en/development/architecture/) and instructions on how to build ClickHouse in different environments.
If you want to contribute to documentation, read the [Contributing to ClickHouse Documentation](docs/README.md) guide.
## Legal Info
In order for us (YANDEX LLC) to accept patches and other contributions from you, you may adopt our Yandex Contributor License Agreement (the "**CLA**"). You may find the current version of the CLA here:
1) https://yandex.ru/legal/cla/?lang=en (in English) and
2) https://yandex.ru/legal/cla/?lang=ru (in Russian).
By adopting the CLA, you state the following:
* You obviously wish and are willingly licensing your contributions to us for our open source projects under the terms of the CLA,
* You have read the terms and conditions of the CLA and agree with them in full,
* You are legally able to provide and license your contributions as stated,
* We may use your contributions for our open source projects and for any other of our projects too,
* We rely on your assurances concerning the rights of third parties in relation to your contributions.
If you agree with these principles, please read and adopt our CLA. By providing us your contributions, you hereby declare that you have already read and adopted our CLA, and we may freely merge your contributions into our corresponding open source project and use them further in accordance with the terms and conditions of the CLA.
If you have already adopted the terms and conditions of the CLA, you can provide your contributions. When you submit your pull request, please add the following information to it:
```
I hereby agree to the terms of the CLA available at: [link].
```
Replace the bracketed text as follows:
* [link] is the link to the current version of the CLA (you may use the link https://yandex.ru/legal/cla/?lang=en (in English) or https://yandex.ru/legal/cla/?lang=ru (in Russian)).
It is enough to provide such notification once.
As an alternative, you can provide a DCO instead of the CLA. You can find the text of the DCO here: https://developercertificate.org/
It is enough to read and copy it verbatim to your pull request.
If you don't agree with the CLA and don't want to provide a DCO, you can still open a pull request to provide your contributions.


@ -1,4 +1,4 @@
Copyright 2016-2021 Yandex LLC
Copyright 2016-2021 ClickHouse, Inc.
Apache License
Version 2.0, January 2004


@ -28,15 +28,16 @@ The following versions of ClickHouse server are currently being supported with s
| 21.3 | ✅ |
| 21.4 | :x: |
| 21.5 | :x: |
| 21.6 | |
| 21.6 | :x: |
| 21.7 | ✅ |
| 21.8 | ✅ |
| 21.9 | ✅ |
## Reporting a Vulnerability
We're extremely grateful to security researchers and users who report vulnerabilities to the ClickHouse Open Source Community. All reports are thoroughly investigated by developers.
To report a potential vulnerability in ClickHouse please send the details about it to [clickhouse-feedback@yandex-team.com](mailto:clickhouse-feedback@yandex-team.com).
To report a potential vulnerability in ClickHouse please send the details about it to [security@clickhouse.com](mailto:security@clickhouse.com).
### When Should I Report a Vulnerability?


@ -16,6 +16,10 @@ extern "C"
}
#endif
#if defined(__clang__) && __clang_major__ >= 13
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif
namespace
{
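For context on the suppression above: clang 13 introduced `-Wreserved-identifier`, which warns when user code declares names the standard reserves for the implementation (a leading underscore followed by an uppercase letter, or any double underscore). Below is a minimal, hypothetical sketch of the kind of declaration that trips the warning, plus a more narrowly scoped suppression using push/pop; `_ReservedName` is an invented example, not a name from this codebase.

``` cpp
#if defined(__clang__) && __clang_major__ >= 13
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif

/// Names like _ReservedName (underscore + uppercase) are reserved for the
/// implementation, so clang >= 13 would warn on this declaration. Glue code
/// that must match an externally mandated symbol is a typical reason to
/// silence the warning rather than rename.
extern "C" int _ReservedName(int);

#if defined(__clang__) && __clang_major__ >= 13
#pragma clang diagnostic pop
#endif
```

The file-wide `#pragma clang diagnostic ignored` used in the diff trades this granularity for brevity, which is reasonable for small glue files.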


@ -96,6 +96,39 @@ inline bool compareSSE2x4(const char * p1, const char * p2)
inline bool memequalSSE2Wide(const char * p1, const char * p2, size_t size)
{
/** The order of branches and the trick with overlapping comparisons
* are the same as in memcpy implementation.
* See the comments in base/glibc-compatibility/memcpy/memcpy.h
*/
if (size <= 16)
{
if (size >= 8)
{
/// Chunks of 8..16 bytes.
return unalignedLoad<uint64_t>(p1) == unalignedLoad<uint64_t>(p2)
&& unalignedLoad<uint64_t>(p1 + size - 8) == unalignedLoad<uint64_t>(p2 + size - 8);
}
else if (size >= 4)
{
/// Chunks of 4..7 bytes.
return unalignedLoad<uint32_t>(p1) == unalignedLoad<uint32_t>(p2)
&& unalignedLoad<uint32_t>(p1 + size - 4) == unalignedLoad<uint32_t>(p2 + size - 4);
}
else if (size >= 2)
{
/// Chunks of 2..3 bytes.
return unalignedLoad<uint16_t>(p1) == unalignedLoad<uint16_t>(p2)
&& unalignedLoad<uint16_t>(p1 + size - 2) == unalignedLoad<uint16_t>(p2 + size - 2);
}
else if (size >= 1)
{
/// A single byte.
return *p1 == *p2;
}
return true;
}
while (size >= 64)
{
if (compareSSE2x4(p1, p2))
@ -108,39 +141,14 @@ inline bool memequalSSE2Wide(const char * p1, const char * p2, size_t size)
return false;
}
switch ((size % 64) / 16)
switch (size / 16)
{
case 3: if (!compareSSE2(p1 + 32, p2 + 32)) return false; [[fallthrough]];
case 2: if (!compareSSE2(p1 + 16, p2 + 16)) return false; [[fallthrough]];
case 1: if (!compareSSE2(p1 , p2 )) return false; [[fallthrough]];
case 0: break;
case 1: if (!compareSSE2(p1, p2)) return false;
}
p1 += (size % 64) / 16 * 16;
p2 += (size % 64) / 16 * 16;
switch (size % 16)
{
case 15: if (p1[14] != p2[14]) return false; [[fallthrough]];
case 14: if (p1[13] != p2[13]) return false; [[fallthrough]];
case 13: if (p1[12] != p2[12]) return false; [[fallthrough]];
case 12: if (unalignedLoad<uint32_t>(p1 + 8) == unalignedLoad<uint32_t>(p2 + 8)) goto l8; else return false;
case 11: if (p1[10] != p2[10]) return false; [[fallthrough]];
case 10: if (p1[9] != p2[9]) return false; [[fallthrough]];
case 9: if (p1[8] != p2[8]) return false;
l8: [[fallthrough]];
case 8: return unalignedLoad<uint64_t>(p1) == unalignedLoad<uint64_t>(p2);
case 7: if (p1[6] != p2[6]) return false; [[fallthrough]];
case 6: if (p1[5] != p2[5]) return false; [[fallthrough]];
case 5: if (p1[4] != p2[4]) return false; [[fallthrough]];
case 4: return unalignedLoad<uint32_t>(p1) == unalignedLoad<uint32_t>(p2);
case 3: if (p1[2] != p2[2]) return false; [[fallthrough]];
case 2: return unalignedLoad<uint16_t>(p1) == unalignedLoad<uint16_t>(p2);
case 1: if (p1[0] != p2[0]) return false; [[fallthrough]];
case 0: break;
}
return true;
return compareSSE2(p1 + size - 16, p2 + size - 16);
}
#endif
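The "overlapping comparisons" trick referenced in the comment above works because any length in the range [N, 2N] is fully covered by two possibly overlapping N-byte chunks: one at the start and one ending at the last byte. Here is a self-contained sketch of the idea, assuming SSE2; `equal16` and `load_unaligned` are illustrative stand-ins for the `compareSSE2`/`unalignedLoad` helpers, not the actual ClickHouse code.

``` cpp
#include <cstring>
#include <cstddef>
#include <emmintrin.h> // SSE2

/// Portable unaligned load: memcpy compiles down to a single load.
template <typename T>
inline T load_unaligned(const char * p)
{
    T value;
    std::memcpy(&value, p, sizeof(value));
    return value;
}

/// Compare 16 bytes: movemask yields 0xFFFF iff every byte pair is equal.
inline bool equal16(const char * p1, const char * p2)
{
    const __m128i a = _mm_loadu_si128(reinterpret_cast<const __m128i *>(p1));
    const __m128i b = _mm_loadu_si128(reinterpret_cast<const __m128i *>(p2));
    return 0xFFFF == _mm_movemask_epi8(_mm_cmpeq_epi8(a, b));
}

/// Equality for any size in [16, 32]: the chunks [0, 16) and
/// [size - 16, size) overlap in the middle but together cover every byte,
/// so no scalar tail loop is needed.
inline bool equal_16_to_32(const char * p1, const char * p2, size_t size)
{
    return equal16(p1, p2) && equal16(p1 + size - 16, p2 + size - 16);
}
```

The rewritten tail of `memequalSSE2Wide` applies the same covering argument: after the `switch` handles the leading 16-byte chunks, the final `compareSSE2(p1 + size - 16, p2 + size - 16)` checks a chunk anchored at the end, overlapping whatever the switch already compared.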


@ -145,6 +145,19 @@ namespace common
return __builtin_mul_overflow(x, y, &res);
}
template <typename T, typename U, typename R>
inline bool mulOverflow(T x, U y, R & res)
{
// Not a built-in type: wide integers are multiplied ignoring overflow.
if constexpr (is_big_int_v<T> || is_big_int_v<R> || is_big_int_v<U>)
{
res = mulIgnoreOverflow<R>(x, y);
return false;
}
else
return __builtin_mul_overflow(x, y, &res);
}
template <>
inline bool mulOverflow(int x, int y, int & res)
{
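For reference, `__builtin_mul_overflow` (available in GCC and clang) computes the infinite-precision product, stores the value truncated to the destination type through the pointer, and returns `true` iff the result did not fit; the wide-integer branch above mirrors that contract except that it never reports overflow. A small sketch of the builtin's semantics:

``` cpp
#include <cstdint>
#include <cstdio>

int main()
{
    int32_t narrow;
    /// 100000 * 100000 = 10^10, which exceeds INT32_MAX: returns true,
    /// and `narrow` receives the truncated (wrapped) value.
    bool overflowed = __builtin_mul_overflow(100000, 100000, &narrow);
    std::printf("overflowed=%d truncated=%d\n", (int)overflowed, narrow);

    int64_t wide;
    /// The same product fits in int64_t: returns false.
    overflowed = __builtin_mul_overflow(100000, 100000, &wide);
    std::printf("overflowed=%d value=%lld\n", (int)overflowed, (long long)wide);
}
```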


@ -1,3 +1,7 @@
#if defined(__clang__) && __clang_major__ >= 13
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif
/// This code was based on the code by Fedor Korotkiy (prime@yandex-team.ru) for the YT product at Yandex.
#include <common/defines.h>


@ -1,6 +1,10 @@
#pragma once
#include <cstddef>
#if defined(__clang__) && __clang_major__ >= 13
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif
constexpr size_t KiB = 1024;
constexpr size_t MiB = 1024 * KiB;
constexpr size_t GiB = 1024 * MiB;
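These constants make byte-size arithmetic readable and checkable at compile time; a trivial usage sketch (the buffer-size name is invented for illustration):

``` cpp
#include <cstddef>

constexpr size_t KiB = 1024;
constexpr size_t MiB = 1024 * KiB;
constexpr size_t GiB = 1024 * MiB;

/// Compile-time sanity check: 1 GiB is 2^30 bytes.
static_assert(GiB == 1073741824, "unexpected GiB value");

/// A readable size constant instead of a bare 4194304.
constexpr size_t read_buffer_size = 4 * MiB;
```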


@ -1,63 +0,0 @@
# This file is generated automatically, do not edit. See 'ya.make.in' and use 'utils/generate-ya-make' to regenerate it.
OWNER(g:clickhouse)
LIBRARY()
ADDINCL(
GLOBAL clickhouse/base
)
CFLAGS (GLOBAL -DARCADIA_BUILD)
CFLAGS (GLOBAL -DUSE_CPUID=1)
CFLAGS (GLOBAL -DUSE_JEMALLOC=0)
CFLAGS (GLOBAL -DUSE_RAPIDJSON=1)
CFLAGS (GLOBAL -DUSE_SSL=1)
IF (OS_DARWIN)
CFLAGS (GLOBAL -DOS_DARWIN)
ELSEIF (OS_FREEBSD)
CFLAGS (GLOBAL -DOS_FREEBSD)
ELSEIF (OS_LINUX)
CFLAGS (GLOBAL -DOS_LINUX)
ENDIF ()
PEERDIR(
contrib/libs/cctz
contrib/libs/cxxsupp/libcxx-filesystem
contrib/libs/poco/Net
contrib/libs/poco/Util
contrib/libs/poco/NetSSL_OpenSSL
contrib/libs/fmt
contrib/restricted/boost
contrib/restricted/cityhash-1.0.2
)
CFLAGS(-g0)
SRCS(
DateLUT.cpp
DateLUTImpl.cpp
JSON.cpp
LineReader.cpp
StringRef.cpp
argsToConfig.cpp
coverage.cpp
demangle.cpp
errnoToString.cpp
getFQDNOrHostName.cpp
getMemoryAmount.cpp
getPageSize.cpp
getResource.cpp
getThreadId.cpp
mremap.cpp
phdr_cache.cpp
preciseExp10.cpp
setTerminalEcho.cpp
shift10.cpp
sleep.cpp
terminalColors.cpp
)
END()


@ -1,41 +0,0 @@
OWNER(g:clickhouse)
LIBRARY()
ADDINCL(
GLOBAL clickhouse/base
)
CFLAGS (GLOBAL -DARCADIA_BUILD)
CFLAGS (GLOBAL -DUSE_CPUID=1)
CFLAGS (GLOBAL -DUSE_JEMALLOC=0)
CFLAGS (GLOBAL -DUSE_RAPIDJSON=1)
CFLAGS (GLOBAL -DUSE_SSL=1)
IF (OS_DARWIN)
CFLAGS (GLOBAL -DOS_DARWIN)
ELSEIF (OS_FREEBSD)
CFLAGS (GLOBAL -DOS_FREEBSD)
ELSEIF (OS_LINUX)
CFLAGS (GLOBAL -DOS_LINUX)
ENDIF ()
PEERDIR(
contrib/libs/cctz
contrib/libs/cxxsupp/libcxx-filesystem
contrib/libs/poco/Net
contrib/libs/poco/Util
contrib/libs/poco/NetSSL_OpenSSL
contrib/libs/fmt
contrib/restricted/boost
contrib/restricted/cityhash-1.0.2
)
CFLAGS(-g0)
SRCS(
<? find . -name '*.cpp' | grep -v -F tests/ | grep -v -F examples | grep -v -F Replxx | grep -v -F Readline | sed 's/^\.\// /' | sort ?>
)
END()


@ -1,3 +1,7 @@
#if defined(__clang__) && __clang_major__ >= 13
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif
#include <daemon/BaseDaemon.h>
#include <daemon/SentryWriter.h>


@ -1,19 +0,0 @@
OWNER(g:clickhouse)
LIBRARY()
NO_COMPILER_WARNINGS()
PEERDIR(
clickhouse/src/Common
)
CFLAGS(-g0)
SRCS(
BaseDaemon.cpp
GraphiteWriter.cpp
SentryWriter.cpp
)
END()


@ -1,19 +0,0 @@
OWNER(g:clickhouse)
LIBRARY()
PEERDIR(
clickhouse/src/Common
)
CFLAGS(-g0)
SRCS(
ExtendedLogChannel.cpp
Loggers.cpp
OwnFormattingChannel.cpp
OwnPatternFormatter.cpp
OwnSplitChannel.cpp
)
END()


@ -49,6 +49,8 @@ if (NOT USE_INTERNAL_MYSQL_LIBRARY AND OPENSSL_INCLUDE_DIR)
target_include_directories (mysqlxx SYSTEM PRIVATE ${OPENSSL_INCLUDE_DIR})
endif ()
target_no_warning(mysqlxx reserved-macro-identifier)
if (NOT USE_INTERNAL_MYSQL_LIBRARY AND USE_STATIC_LIBRARIES)
message(WARNING "Statically linking with system mysql/mariadb only works "
"if mysql client libraries are built with same openssl version as "


@ -79,7 +79,7 @@ PoolWithFailover PoolFactory::get(const Poco::Util::AbstractConfiguration & conf
std::lock_guard<std::mutex> lock(impl->mutex);
if (auto entry = impl->pools.find(config_name); entry != impl->pools.end())
{
return *(entry->second.get());
return *(entry->second);
}
else
{
@ -100,7 +100,7 @@ PoolWithFailover PoolFactory::get(const Poco::Util::AbstractConfiguration & conf
impl->pools.insert_or_assign(config_name, pool);
impl->pools_by_ids.insert_or_assign(entry_name, config_name);
}
return *(pool.get());
return *pool;
}
}
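The two replacements in this hunk are pure simplifications: for a smart pointer `p`, `*p` is specified to return the same reference as `*p.get()`, so dropping the explicit `.get()` changes nothing at runtime. A minimal sketch, assuming the pool map stores `std::shared_ptr` values (which the removed `.get()` calls suggest; `Pool` here is a placeholder type):

``` cpp
#include <cassert>
#include <memory>

struct Pool { int id = 42; }; /// placeholder for the real pool type

int main()
{
    std::shared_ptr<Pool> pool = std::make_shared<Pool>();

    /// Both expressions dereference the same raw pointer; the second form
    /// is shorter. Both are undefined behavior if `pool` is empty.
    Pool & a = *(pool.get());
    Pool & b = *pool;
    assert(&a == &b);
}
```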


@ -77,7 +77,9 @@ void Query::executeImpl()
case CR_SERVER_LOST:
throw ConnectionLost(errorMessage(mysql_driver), err_no);
default:
throw BadQuery(errorMessage(mysql_driver), err_no);
/// Add the query to the exception message, since it may differ from the user's input query.
/// (You can also deliberately create a query with an error to see what query ClickHouse generated.)
throw BadQuery(errorMessage(mysql_driver) + " (query: " + query_string + ")", err_no);
}
}
}
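Attaching the executed statement to the error message is valuable precisely because the driver-level query can differ from what the user typed (parameter substitution, rewriting), as the new comment notes. A generic sketch of the pattern, with invented names standing in for the mysqlxx types:

``` cpp
#include <stdexcept>
#include <string>

/// Hypothetical stand-in for mysqlxx::BadQuery: carries the server's error
/// text plus the exact query that was sent, mirroring the change above.
struct BadQueryExample : std::runtime_error
{
    BadQueryExample(const std::string & server_error, const std::string & query)
        : std::runtime_error(server_error + " (query: " + query + ")")
    {
    }
};

void throw_with_query(const std::string & query_string)
{
    const std::string server_error = "Syntax error near 'SELCT'"; /// fake driver error
    throw BadQueryExample(server_error, query_string);
}
```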


@ -1,39 +0,0 @@
# This file is generated automatically, do not edit. See 'ya.make.in' and use 'utils/generate-ya-make' to regenerate it.
LIBRARY()
OWNER(g:clickhouse)
CFLAGS(-g0)
PEERDIR(
contrib/restricted/boost/libs
contrib/libs/libmysql_r
contrib/libs/poco/Foundation
contrib/libs/poco/Util
)
ADDINCL(
GLOBAL clickhouse/base
clickhouse/base
contrib/libs/libmysql_r
)
NO_COMPILER_WARNINGS()
NO_UTIL()
SRCS(
Connection.cpp
Exception.cpp
Pool.cpp
PoolFactory.cpp
PoolWithFailover.cpp
Query.cpp
ResultBase.cpp
Row.cpp
UseQueryResult.cpp
Value.cpp
)
END()


@ -1,28 +0,0 @@
LIBRARY()
OWNER(g:clickhouse)
CFLAGS(-g0)
PEERDIR(
contrib/restricted/boost/libs
contrib/libs/libmysql_r
contrib/libs/poco/Foundation
contrib/libs/poco/Util
)
ADDINCL(
GLOBAL clickhouse/base
clickhouse/base
contrib/libs/libmysql_r
)
NO_COMPILER_WARNINGS()
NO_UTIL()
SRCS(
<? find . -name '*.cpp' | grep -v -F tests/ | grep -v -F examples | sed 's/^\.\// /' | sort ?>
)
END()


@ -1,7 +0,0 @@
OWNER(g:clickhouse)
LIBRARY()
ADDINCL (GLOBAL clickhouse/base/pcg-random)
END()


@ -27,6 +27,10 @@
#define _PATH_TTY "/dev/tty"
#endif
#if defined(__clang__) && __clang_major__ >= 13
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif
#include <termios.h>
#include <signal.h>
#include <ctype.h>


@ -1,11 +0,0 @@
OWNER(g:clickhouse)
LIBRARY()
CFLAGS(-g0)
SRCS(
readpassphrase.c
)
END()


@ -1,13 +0,0 @@
OWNER(g:clickhouse)
LIBRARY()
ADDINCL(GLOBAL clickhouse/base/widechar_width)
CFLAGS(-g0)
SRCS(
widechar_width.cpp
)
END()


@ -1,11 +0,0 @@
OWNER(g:clickhouse)
RECURSE(
common
daemon
loggers
mysqlxx
pcg-random
widechar_width
readpassphrase
)


@ -6,7 +6,7 @@ if (ENABLE_CLANG_TIDY)
message(FATAL_ERROR "clang-tidy requires CMake version at least 3.6.")
endif()
find_program (CLANG_TIDY_PATH NAMES "clang-tidy" "clang-tidy-12" "clang-tidy-11" "clang-tidy-10" "clang-tidy-9" "clang-tidy-8")
find_program (CLANG_TIDY_PATH NAMES "clang-tidy" "clang-tidy-13" "clang-tidy-12" "clang-tidy-11" "clang-tidy-10" "clang-tidy-9" "clang-tidy-8")
if (CLANG_TIDY_PATH)
message(STATUS


@ -4,3 +4,4 @@ include (CheckCCompilerFlag)
check_cxx_compiler_flag("-Wsuggest-destructor-override" HAS_SUGGEST_DESTRUCTOR_OVERRIDE)
check_cxx_compiler_flag("-Wshadow" HAS_SHADOW)
check_cxx_compiler_flag("-Wsuggest-override" HAS_SUGGEST_OVERRIDE)
check_cxx_compiler_flag("-Xclang -fuse-ctor-homing" HAS_USE_CTOR_HOMING)


@ -51,8 +51,8 @@ if (CCACHE_FOUND AND NOT COMPILER_MATCHES_CCACHE)
message(STATUS "ccache is 4.2+ no quirks for SOURCE_DATE_EPOCH required")
elseif (CCACHE_VERSION VERSION_GREATER_EQUAL "4.0")
message(STATUS "Ignore SOURCE_DATE_EPOCH for ccache")
set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE "env -u SOURCE_DATE_EPOCH ${CCACHE_FOUND}")
set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK "env -u SOURCE_DATE_EPOCH ${CCACHE_FOUND}")
set_property (GLOBAL PROPERTY RULE_LAUNCH_COMPILE "env -u SOURCE_DATE_EPOCH")
set_property (GLOBAL PROPERTY RULE_LAUNCH_LINK "env -u SOURCE_DATE_EPOCH")
endif()
else ()
message(${RECONFIGURE_MESSAGE_LEVEL} "Not using ${CCACHE_FOUND} ${CCACHE_VERSION} bug: https://bugzilla.samba.org/show_bug.cgi?id=8118")


@ -1,25 +0,0 @@
INCLUDE(${ARCADIA_ROOT}/clickhouse/cmake/autogenerated_versions.txt)
# TODO: not sure if this is customizable per-binary
SET(VERSION_NAME "ClickHouse")
# TODO: not quite sure how to replace dash with space in ya.make
SET(VERSION_FULL "${VERSION_NAME}-${VERSION_STRING}")
CFLAGS (GLOBAL -DDBMS_NAME=\"ClickHouse\")
CFLAGS (GLOBAL -DDBMS_VERSION_MAJOR=${VERSION_MAJOR})
CFLAGS (GLOBAL -DDBMS_VERSION_MINOR=${VERSION_MINOR})
CFLAGS (GLOBAL -DDBMS_VERSION_PATCH=${VERSION_PATCH})
CFLAGS (GLOBAL -DVERSION_FULL=\"\\\"${VERSION_FULL}\\\"\")
CFLAGS (GLOBAL -DVERSION_MAJOR=${VERSION_MAJOR})
CFLAGS (GLOBAL -DVERSION_MINOR=${VERSION_MINOR})
CFLAGS (GLOBAL -DVERSION_PATCH=${VERSION_PATCH})
# TODO: not supported yet, not sure if ya.make supports arithmetic.
CFLAGS (GLOBAL -DVERSION_INTEGER=0)
CFLAGS (GLOBAL -DVERSION_NAME=\"\\\"${VERSION_NAME}\\\"\")
CFLAGS (GLOBAL -DVERSION_OFFICIAL=\"-arcadia\")
CFLAGS (GLOBAL -DVERSION_REVISION=${VERSION_REVISION})
CFLAGS (GLOBAL -DVERSION_STRING=\"\\\"${VERSION_STRING}\\\"\")

contrib/libhdfs3 vendored (2 changes)

@ -1 +1 @@
Subproject commit 095b9d48b400abb72d967cb0539af13b1e3d90cf
Subproject commit 082e55f17d1c58bf124290fb044fea40e985ec11

contrib/rocksdb vendored (2 changes)

@ -1 +1 @@
Subproject commit 5ea892c8673e6c5a052887653673b967d44cc59b
Subproject commit 296c1b8b95fd448b8097a1b2cc9f704ff4a73a2c

debian/control vendored (10 changes)

@ -1,16 +1,14 @@
Source: clickhouse
Section: database
Priority: optional
Maintainer: Alexey Milovidov <milovidov@yandex-team.ru>
Maintainer: Alexey Milovidov <milovidov@clickhouse.com>
Build-Depends: debhelper (>= 9),
cmake | cmake3,
ninja-build,
clang-11,
llvm-11,
clang-13,
llvm-13,
lld-13,
libc6-dev,
libicu-dev,
libreadline-dev,
gperf,
tzdata
Standards-Version: 3.9.8


@ -1,6 +1,6 @@
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=12
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
RUN sed -i 's|http://archive|http://ru.archive|g' /etc/apt/sources.list


@ -4,7 +4,7 @@ set -e
#ccache -s # uncomment to display CCache statistics
mkdir -p /server/build_docker
cd /server/build_docker
cmake -G Ninja /server "-DCMAKE_C_COMPILER=$(command -v clang-12)" "-DCMAKE_CXX_COMPILER=$(command -v clang++-12)"
cmake -G Ninja /server "-DCMAKE_C_COMPILER=$(command -v clang-13)" "-DCMAKE_CXX_COMPILER=$(command -v clang++-13)"
# Set the number of build jobs to half the number of virtual CPU cores (rounded up).
# By default, ninja uses all virtual CPU cores, which leads to very high memory consumption without much improvement in build time.


@ -1,7 +1,7 @@
# docker build -t clickhouse/binary-builder .
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=12
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
RUN sed -i 's|http://archive|http://ru.archive|g' /etc/apt/sources.list
@ -44,13 +44,11 @@ RUN apt-get update \
gdb \
git \
gperf \
libicu-dev \
libreadline-dev \
clang-12 \
clang-tidy-12 \
lld-12 \
llvm-12 \
llvm-12-dev \
clang-${LLVM_VERSION} \
clang-tidy-${LLVM_VERSION} \
lld-${LLVM_VERSION} \
llvm-${LLVM_VERSION} \
llvm-${LLVM_VERSION}-dev \
libicu-dev \
libreadline-dev \
moreutils \


@ -1,7 +1,7 @@
# docker build -t clickhouse/deb-builder .
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=12
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
RUN sed -i 's|http://archive|http://ru.archive|g' /etc/apt/sources.list
@ -37,17 +37,17 @@ RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
RUN apt-get update \
&& apt-get install \
alien \
clang-12 \
clang-tidy-12 \
clang-${LLVM_VERSION} \
clang-tidy-${LLVM_VERSION} \
cmake \
debhelper \
devscripts \
gdb \
git \
gperf \
lld-12 \
llvm-12 \
llvm-12-dev \
lld-${LLVM_VERSION} \
llvm-${LLVM_VERSION} \
llvm-${LLVM_VERSION}-dev \
moreutils \
ninja-build \
perl \


@ -205,7 +205,8 @@ if __name__ == "__main__":
parser.add_argument("--build-type", choices=("debug", ""), default="")
parser.add_argument("--compiler", choices=("clang-11", "clang-11-darwin", "clang-11-darwin-aarch64", "clang-11-aarch64",
"clang-12", "clang-12-darwin", "clang-12-darwin-aarch64", "clang-12-aarch64",
"clang-11-freebsd", "clang-12-freebsd", "gcc-11"), default="clang-12")
"clang-13", "clang-13-darwin", "clang-13-darwin-aarch64", "clang-13-aarch64",
"clang-11-freebsd", "clang-12-freebsd", "clang-13-freebsd", "gcc-11"), default="clang-13")
parser.add_argument("--sanitizer", choices=("address", "thread", "memory", "undefined", ""), default="")
parser.add_argument("--unbundled", action="store_true")
parser.add_argument("--split-binary", action="store_true")


@ -1,7 +1,7 @@
# docker build -t clickhouse/test-base .
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=12
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
RUN sed -i 's|http://archive|http://ru.archive|g' /etc/apt/sources.list


@ -11,7 +11,7 @@ RUN apt-get update && apt-get --yes --allow-unauthenticated install clang-9 libl
# https://github.com/ClickHouse-Extras/woboq_codebrowser/commit/37e15eaf377b920acb0b48dbe82471be9203f76b
RUN git clone https://github.com/ClickHouse-Extras/woboq_codebrowser
RUN cd woboq_codebrowser && cmake . -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=clang\+\+-12 -DCMAKE_C_COMPILER=clang-12 && make -j
RUN cd woboq_codebrowser && cmake . -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=clang\+\+-13 -DCMAKE_C_COMPILER=clang-13 && make -j
ENV CODEGEN=/woboq_codebrowser/generator/codebrowser_generator
ENV CODEINDEX=/woboq_codebrowser/indexgenerator/codebrowser_indexgenerator
@ -24,7 +24,7 @@ ENV SHA=nosha
ENV DATA="data"
CMD mkdir -p $BUILD_DIRECTORY && cd $BUILD_DIRECTORY && \
cmake $SOURCE_DIRECTORY -DCMAKE_CXX_COMPILER=/usr/bin/clang\+\+-12 -DCMAKE_C_COMPILER=/usr/bin/clang-12 -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DENABLE_EMBEDDED_COMPILER=0 -DENABLE_S3=0 && \
cmake $SOURCE_DIRECTORY -DCMAKE_CXX_COMPILER=/usr/bin/clang\+\+-13 -DCMAKE_C_COMPILER=/usr/bin/clang-13 -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DENABLE_EMBEDDED_COMPILER=0 -DENABLE_S3=0 && \
mkdir -p $HTML_RESULT_DIRECTORY && \
$CODEGEN -b $BUILD_DIRECTORY -a -o $HTML_RESULT_DIRECTORY -p ClickHouse:$SOURCE_DIRECTORY:$SHA -d $DATA | ts '%Y-%m-%d %H:%M:%S' && \
cp -r $STATIC_DATA $HTML_RESULT_DIRECTORY/ &&\


@ -80,7 +80,7 @@ LLVM_PROFILE_FILE='client_coverage_%5m.profraw' clickhouse-client --query "RENAM
LLVM_PROFILE_FILE='client_coverage_%5m.profraw' clickhouse-client --query "RENAME TABLE datasets.visits_v1 TO test.visits"
LLVM_PROFILE_FILE='client_coverage_%5m.profraw' clickhouse-client --query "SHOW TABLES FROM test"
LLVM_PROFILE_FILE='client_coverage_%5m.profraw' clickhouse-test -j 8 --testname --shard --zookeeper --print-time --use-skip-list 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee /test_result.txt
LLVM_PROFILE_FILE='client_coverage_%5m.profraw' clickhouse-test -j 8 --testname --shard --zookeeper --print-time 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee /test_result.txt
readarray -t FAILED_TESTS < <(awk '/FAIL|TIMEOUT|ERROR/ { print substr($3, 1, length($3)-1) }' "/test_result.txt")
@ -97,7 +97,7 @@ then
echo "Going to run again: ${FAILED_TESTS[*]}"
LLVM_PROFILE_FILE='client_coverage_%5m.profraw' clickhouse-test --order=random --testname --shard --zookeeper --use-skip-list "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a /test_result.txt
LLVM_PROFILE_FILE='client_coverage_%5m.profraw' clickhouse-test --order=random --testname --shard --zookeeper "${FAILED_TESTS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee -a /test_result.txt
else
echo "No failed tests"
fi


@ -92,7 +92,7 @@ if __name__ == "__main__":
logging.info("Some exception occured %s", str(ex))
raise
finally:
logging.info("Will remove dowloaded file %s from filesystem if it exists", temp_archive_path)
logging.info("Will remove downloaded file %s from filesystem if it exists", temp_archive_path)
if os.path.exists(temp_archive_path):
os.remove(temp_archive_path)
logging.info("Processing of %s finished", dataset)


@ -1,7 +1,7 @@
# docker build -t clickhouse/fasttest .
FROM ubuntu:20.04
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=12
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
RUN sed -i 's|http://archive|http://ru.archive|g' /etc/apt/sources.list


@ -9,7 +9,7 @@ trap 'kill $(jobs -pr) ||:' EXIT
stage=${stage:-}
# Compiler version, normally set by Dockerfile
export LLVM_VERSION=${LLVM_VERSION:-12}
export LLVM_VERSION=${LLVM_VERSION:-13}
# A variable to pass additional flags to CMake.
# Here we explicitly default it to nothing so that bash doesn't complain about
@ -262,153 +262,8 @@ function run_tests
start_server
TESTS_TO_SKIP=(
00105_shard_collations
00109_shard_totals_after_having
00110_external_sort
00302_http_compression
00417_kill_query
00436_convert_charset
00490_special_line_separators_and_characters_outside_of_bmp
00652_replicated_mutations_zookeeper
00682_empty_parts_merge
00701_rollup
00834_cancel_http_readonly_queries_on_client_close
00911_tautological_compare
# Hyperscan
00926_multimatch
00929_multi_match_edit_distance
01681_hyperscan_debug_assertion
02004_max_hyperscan_regex_length
01176_mysql_client_interactive # requires mysql client
01031_mutations_interpreter_and_context
01053_ssd_dictionary # this test mistakenly requires access to /var/lib/clickhouse -- can't run this locally, disabled
01083_expressions_in_engine_arguments
01092_memory_profiler
01098_msgpack_format
01098_temporary_and_external_tables
01103_check_cpu_instructions_at_startup # avoid dependency on qemu -- inconvenient when running locally
01193_metadata_loading
01238_http_memory_tracking # max_memory_usage_for_user can interfere with other queries running concurrently
01251_dict_is_in_infinite_loop
01259_dictionary_custom_settings_ddl
01268_dictionary_direct_layout
01280_ssd_complex_key_dictionary
01281_group_by_limit_memory_tracking # max_memory_usage_for_user can interfere with other queries running concurrently
01318_encrypt # Depends on OpenSSL
01318_decrypt # Depends on OpenSSL
01663_aes_msan # Depends on OpenSSL
01667_aes_args_check # Depends on OpenSSL
01683_codec_encrypted # Depends on OpenSSL
01776_decrypt_aead_size_check # Depends on OpenSSL
01811_filter_by_null # Depends on OpenSSL
02012_sha512_fixedstring # Depends on OpenSSL
01281_unsucceeded_insert_select_queries_counter
01292_create_user
01294_lazy_database_concurrent
01305_replica_create_drop_zookeeper
01354_order_by_tuple_collate_const
01355_ilike
01411_bayesian_ab_testing
01798_uniq_theta_sketch
01799_long_uniq_theta_sketch
01890_stem # depends on libstemmer_c
02003_compress_bz2 # depends on bzip2
01059_storage_file_compression # depends on brotli and bzip2
collate
collation
_orc_
arrow
avro
base64
brotli
capnproto
client
ddl_dictionaries
h3
hashing
hdfs
java_hash
json
limit_memory
live_view
memory_leak
memory_limit
mysql
odbc
parallel_alter
parquet
protobuf
secure
sha256
xz
# Not sure why these two fail even in sequential mode. Disabled for now
# to make some progress.
00646_url_engine
00974_query_profiler
# In fasttest, ENABLE_LIBRARIES=0, so rocksdb engine is not enabled by default
01504_rocksdb
01686_rocksdb
# Look at DistributedFilesToInsert, so cannot run in parallel.
01460_DistributedFilesToInsert
01541_max_memory_usage_for_user_long
# Require python libraries like scipy, pandas and numpy
01322_ttest_scipy
01561_mann_whitney_scipy
01545_system_errors
# Checks system.errors
01563_distributed_query_finish
# nc - command not found
01601_proxy_protocol
01622_defaults_for_url_engine
# JSON functions
01666_blns
# Requires postgresql-client
01802_test_postgresql_protocol_with_row_policy
# Depends on AWS
01801_s3_cluster
02012_settings_clause_for_s3
# needs psql
01889_postgresql_protocol_null_fields
# needs pv
01923_network_receive_time_metric_insert
01889_sqlite_read_write
# needs s2
01849_geoToS2
01851_s2_to_geo
01852_s2_get_neighbours
01853_s2_cells_intersect
01854_s2_cap_contains
01854_s2_cap_union
# needs s3
01944_insert_partition_by
# depends on Go
02013_zlib_read_after_eof
# Accesses CH via mysql table function (which is unavailable)
01747_system_session_log_long
)
time clickhouse-test --hung-check -j 8 --order=random --use-skip-list \
--no-long --testname --shard --zookeeper --skip "${TESTS_TO_SKIP[@]}" \
time clickhouse-test --hung-check -j 8 --order=random \
--fast-tests-only --no-long --testname --shard --zookeeper \
-- "$FASTTEST_FOCUS" 2>&1 \
| ts '%Y-%m-%d %H:%M:%S' \
| tee "$FASTTEST_OUTPUT/test_log.txt"


@ -12,7 +12,7 @@ stage=${stage:-}
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
echo "$script_dir"
repo_dir=ch
BINARY_TO_DOWNLOAD=${BINARY_TO_DOWNLOAD:="clang-12_debug_none_bundled_unsplitted_disable_False_binary"}
BINARY_TO_DOWNLOAD=${BINARY_TO_DOWNLOAD:="clang-13_debug_none_bundled_unsplitted_disable_False_binary"}
function clone
{


@ -2,7 +2,7 @@
set -euo pipefail
CLICKHOUSE_PACKAGE=${CLICKHOUSE_PACKAGE:="https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/clang-12_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"}
CLICKHOUSE_PACKAGE=${CLICKHOUSE_PACKAGE:="https://clickhouse-builds.s3.yandex.net/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/clang-13_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"}
CLICKHOUSE_REPO_PATH=${CLICKHOUSE_REPO_PATH:=""}


@ -38,7 +38,7 @@ RUN set -x \
&& dpkg -i "${PKG_VERSION}.deb"
CMD echo "Running PVS version $PKG_VERSION" && cd /repo_folder && pvs-studio-analyzer credentials $LICENCE_NAME $LICENCE_KEY -o ./licence.lic \
&& cmake . -D"ENABLE_EMBEDDED_COMPILER"=OFF -D"USE_INTERNAL_PROTOBUF_LIBRARY"=OFF -D"USE_INTERNAL_GRPC_LIBRARY"=OFF -DCMAKE_C_COMPILER=clang-12 -DCMAKE_CXX_COMPILER=clang\+\+-12 \
&& cmake . -D"ENABLE_EMBEDDED_COMPILER"=OFF -D"USE_INTERNAL_PROTOBUF_LIBRARY"=OFF -D"USE_INTERNAL_GRPC_LIBRARY"=OFF -DCMAKE_C_COMPILER=clang-13 -DCMAKE_CXX_COMPILER=clang\+\+-13 \
&& ninja re2_st clickhouse_grpc_protos \
&& pvs-studio-analyzer analyze -o pvs-studio.log -e contrib -j 4 -l ./licence.lic; \
cp /repo_folder/pvs-studio.log /test_output; \


@ -108,7 +108,7 @@ function run_tests()
ADDITIONAL_OPTIONS+=('--replicated-database')
fi
clickhouse-test --testname --shard --zookeeper --no-stateless --hung-check --use-skip-list --print-time "${ADDITIONAL_OPTIONS[@]}" \
clickhouse-test --testname --shard --zookeeper --no-stateless --hung-check --print-time "${ADDITIONAL_OPTIONS[@]}" \
"$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
}


@ -92,7 +92,7 @@ if __name__ == "__main__":
logging.info("Some exception occured %s", str(ex))
raise
finally:
logging.info("Will remove dowloaded file %s from filesystem if it exists", temp_archive_path)
logging.info("Will remove downloaded file %s from filesystem if it exists", temp_archive_path)
if os.path.exists(temp_archive_path):
os.remove(temp_archive_path)
logging.info("Processing of %s finished", dataset)


@ -97,7 +97,7 @@ function run_tests()
fi
clickhouse-test --testname --shard --zookeeper --hung-check --print-time \
--use-skip-list --test-runs "$NUM_TRIES" "${ADDITIONAL_OPTIONS[@]}" 2>&1 \
--test-runs "$NUM_TRIES" "${ADDITIONAL_OPTIONS[@]}" 2>&1 \
| ts '%Y-%m-%d %H:%M:%S' \
| tee -a test_output/test_result.txt
}


@ -13,8 +13,4 @@ dpkg -i package_folder/clickhouse-test_*.deb
service clickhouse-server start && sleep 5
if grep -q -- "--use-skip-list" /usr/bin/clickhouse-test; then
SKIP_LIST_OPT="--use-skip-list"
fi
clickhouse-test --testname --shard --zookeeper "$SKIP_LIST_OPT" "$ADDITIONAL_OPTIONS" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
clickhouse-test --testname --shard --zookeeper "$ADDITIONAL_OPTIONS" "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt


@ -10,14 +10,6 @@ import logging
import time
def get_skip_list_cmd(path):
with open(path, 'r') as f:
for line in f:
if '--use-skip-list' in line:
return '--use-skip-list'
return ''
def get_options(i):
options = []
client_options = []
@ -56,8 +48,6 @@ def get_options(i):
def run_func_test(cmd, output_prefix, num_processes, skip_tests_option, global_time_limit):
skip_list_opt = get_skip_list_cmd(cmd)
global_time_limit_option = ''
if global_time_limit:
global_time_limit_option = "--global_time_limit={}".format(global_time_limit)
@ -66,7 +56,7 @@ def run_func_test(cmd, output_prefix, num_processes, skip_tests_option, global_t
pipes = []
for i in range(0, len(output_paths)):
f = open(output_paths[i], 'w')
full_command = "{} {} {} {} {}".format(cmd, skip_list_opt, get_options(i), global_time_limit_option, skip_tests_option)
full_command = "{} {} {} {}".format(cmd, get_options(i), global_time_limit_option, skip_tests_option)
logging.info("Run func tests '%s'", full_command)
p = Popen(full_command, shell=True, stdout=f, stderr=f)
pipes.append(p)
@ -80,6 +70,9 @@ def compress_stress_logs(output_path, files_prefix):
def prepare_for_hung_check(drop_databases):
# FIXME this function should not exist, but...
# ThreadFuzzer significantly slows down server and causes false-positive hung check failures
call("clickhouse client -q 'SYSTEM STOP THREAD FUZZER'", shell=True, stderr=STDOUT)
# We attach gdb to clickhouse-server before running tests
# to print stacktraces of all crashes even if clickhouse cannot print it for some reason.
# However, it obstructs checking for hung queries.


@ -38,7 +38,7 @@ Writing the docs is extremely useful for project's users and developers, and gro
The documentation contains information about all the aspects of the ClickHouse lifecycle: developing, testing, installing, operating, and using. The base language of the documentation is English. The English version is the most up-to-date. All other languages are supported as much as possible by contributors from different countries.
At the moment, [documentation](https://clickhouse.tech/docs) exists in English, Russian, Chinese, Japanese, and Farsi. We store the documentation besides the ClickHouse source code in the [GitHub repository](https://github.com/ClickHouse/ClickHouse/tree/master/docs).
At the moment, [documentation](https://clickhouse.com/docs) exists in English, Russian, Chinese, Japanese, and Farsi. We store the documentation besides the ClickHouse source code in the [GitHub repository](https://github.com/ClickHouse/ClickHouse/tree/master/docs).
Each language lives in its corresponding folder. Files that are not translated from English are symbolic links to the English ones.
@ -54,7 +54,7 @@ You can contribute to the documentation in many ways, for example:
- Open a required file in the ClickHouse repository and edit it from the GitHub web interface.
You can do it on GitHub, or on the [ClickHouse Documentation](https://clickhouse.tech/docs/en/) site. Each page of the ClickHouse Documentation site contains an "Edit this page" (🖋) element in the upper right corner. Clicking this symbol opens the ClickHouse docs file for editing.
You can do it on GitHub, or on the [ClickHouse Documentation](https://clickhouse.com/docs/en/) site. Each page of the ClickHouse Documentation site contains an "Edit this page" (🖋) element in the upper right corner. Clicking this symbol opens the ClickHouse docs file for editing.
When you save a file, GitHub opens a pull request for your contribution. Add the `documentation` label to this pull request so that the proper automatic checks are applied. If you do not have permission to add labels, your PR's reviewer will add it.
@ -128,7 +128,7 @@ Contribute all new information in English language. Other languages are translat
When you add a new file, it should end with a link like:
`[Original article](https://clickhouse.tech/docs/<path-to-the-page>) <!--hide-->`
`[Original article](https://clickhouse.com/docs/<path-to-the-page>) <!--hide-->`
and there should be **a new empty line** after it.
@ -164,7 +164,7 @@ When writing documentation, think about people who read it. Each audience has sp
ClickHouse documentation can be divided by the audience for the following parts:
- Conceptual topics in [Introduction](https://clickhouse.tech/docs/en/), tutorials and overviews, changelog.
- Conceptual topics in [Introduction](https://clickhouse.com/docs/en/), tutorials and overviews, changelog.
These topics are aimed at the broadest audience. When editing text in them, use the most common terms that are comfortable for readers with basic technical skills.


@ -26,4 +26,4 @@ The name of an additional section can be any, for example, **Usage**.
- [link](#)
[Original article](https://clickhouse.tech/docs/en/data-types/<data-type-name>/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/data-types/<data-type-name>/) <!--hide-->


@ -12,9 +12,9 @@ Syntax of the statement.
Examples of descriptions with a complicated structure:
- https://clickhouse.tech/docs/en/sql-reference/statements/grant/
- https://clickhouse.tech/docs/en/sql-reference/statements/revoke/
- https://clickhouse.tech/docs/en/sql-reference/statements/select/join/
- https://clickhouse.com/docs/en/sql-reference/statements/grant/
- https://clickhouse.com/docs/en/sql-reference/statements/revoke/
- https://clickhouse.com/docs/en/sql-reference/statements/select/join/
**See Also** (Optional)


@ -1,7 +1,7 @@
sudo apt-get install apt-transport-https ca-certificates dirmngr
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv E0C56BD4
echo "deb https://repo.clickhouse.tech/deb/stable/ main/" | sudo tee \
echo "deb https://repo.clickhouse.com/deb/stable/ main/" | sudo tee \
/etc/apt/sources.list.d/clickhouse.list
sudo apt-get update


@ -1,6 +1,6 @@
sudo yum install yum-utils
sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/clickhouse.repo
sudo rpm --import https://repo.clickhouse.com/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.com/rpm/clickhouse.repo
sudo yum install clickhouse-server clickhouse-client
sudo /etc/init.d/clickhouse-server start


@ -1,9 +1,9 @@
export LATEST_VERSION=$(curl -s https://repo.clickhouse.tech/tgz/stable/ | \
export LATEST_VERSION=$(curl -s https://repo.clickhouse.com/tgz/stable/ | \
grep -Eo '[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+' | sort -V -r | head -n 1)
curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.tech/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-common-static-dbg-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-server-$LATEST_VERSION.tgz
curl -O https://repo.clickhouse.com/tgz/stable/clickhouse-client-$LATEST_VERSION.tgz
tar -xzvf clickhouse-common-static-$LATEST_VERSION.tgz
sudo clickhouse-common-static-$LATEST_VERSION/install/doinst.sh


@ -3,60 +3,7 @@ toc_priority: 1
toc_title: Cloud
---
# ClickHouse Cloud Service Providers {#clickhouse-cloud-service-providers}
# ClickHouse Cloud Service {#clickhouse-cloud-service}
!!! info "Info"
If you have launched a public cloud with managed ClickHouse service, feel free to [open a pull-request](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/cloud.md) adding it to the following list.
## Yandex Cloud {#yandex-cloud}
[Yandex Managed Service for ClickHouse](https://cloud.yandex.com/services/managed-clickhouse?utm_source=referrals&utm_medium=clickhouseofficialsite&utm_campaign=link3) provides the following key features:
- Fully managed ZooKeeper service for [ClickHouse replication](../engines/table-engines/mergetree-family/replication.md)
- Multiple storage type choices
- Replicas in different availability zones
- Encryption and isolation
- Automated maintenance
## Altinity.Cloud {#altinity.cloud}
[Altinity.Cloud](https://altinity.com/cloud-database/) is a fully managed ClickHouse-as-a-Service for the Amazon public cloud.
- Fast deployment of ClickHouse clusters on Amazon resources
- Easy scale-out/scale-in as well as vertical scaling of nodes
- Isolated per-tenant VPCs with public endpoint or VPC peering
- Configurable storage types and volume configurations
- Cross-AZ scaling for performance and high availability
- Built-in monitoring and SQL query editor
## Alibaba Cloud {#alibaba-cloud}
[Alibaba Cloud Managed Service for ClickHouse](https://www.alibabacloud.com/product/clickhouse) provides the following key features:
- Highly reliable cloud disk storage engine based on [Alibaba Cloud Apsara](https://www.alibabacloud.com/product/apsara-stack) distributed system
- Expand capacity on demand without manual data migration
- Support single-node, single-replica, multi-node, and multi-replica architectures, and support hot and cold data tiering
- Support access allow-list, one-key recovery, multi-layer network security protection, cloud disk encryption
- Seamless integration with cloud log systems, databases, and data application tools
- Built-in monitoring and database management platform
- Professional database expert technical support and service
## SberCloud {#sbercloud}
[SberCloud.Advanced](https://sbercloud.ru/en/advanced) provides [MapReduce Service (MRS)](https://docs.sbercloud.ru/mrs/ug/topics/ug__clickhouse.html), a reliable, secure, and easy-to-use enterprise-level platform for storing, processing, and analyzing big data. MRS allows you to quickly create and manage ClickHouse clusters.
- A ClickHouse instance consists of three ZooKeeper nodes and multiple ClickHouse nodes. The Dedicated Replica mode is used to ensure high reliability of dual data copies.
- MRS provides smooth and elastic scaling capabilities to quickly meet service growth requirements in scenarios where the cluster storage capacity or CPU computing resources are not enough. When you expand the capacity of ClickHouse nodes in a cluster, MRS provides a one-click data balancing tool and gives you the initiative to balance data. You can determine the data balancing mode and time based on service characteristics to ensure service availability, implementing smooth scaling.
- MRS uses the Elastic Load Balance ensuring high availability deployment architecture to automatically distribute user access traffic to multiple backend nodes, expanding service capabilities to external systems and improving fault tolerance. With the ELB polling mechanism, data is written to local tables and read from distributed tables on different nodes. In this way, data read/write load and high availability of application access are guaranteed.
## Tencent Cloud {#tencent-cloud}
[Tencent Managed Service for ClickHouse](https://cloud.tencent.com/product/cdwch) provides the following key features:
- Easy to deploy and manage on Tencent Cloud
- Highly scalable and available
- Integrated monitor and alert service
- High security with isolated per cluster VPCs
- On-demand pricing with no upfront costs or long-term commitments
{## [Original article](https://clickhouse.tech/docs/en/commercial/cloud/) ##}
Detailed public description for ClickHouse cloud services is not ready yet; please [contact us](https://clickhouse.com/company/#contact) to learn more.


@ -6,12 +6,8 @@ toc_title: Introduction
# ClickHouse Commercial Services {#clickhouse-commercial-services}
This section is a directory of commercial service providers specializing in ClickHouse. They are independent companies not necessarily affiliated with Yandex.
Service categories:
- [Cloud](../commercial/cloud.md)
- [Support](../commercial/support.md)
!!! note "For service providers"
If you happen to represent one of them, feel free to open a pull request adding your company to the respective section (or even adding a new section if the service does not fit into existing categories). The easiest way to open a pull request for a documentation page is by using the "pencil" edit button in the top-right corner. If your service is available in some local market, make sure to mention it in a localized documentation page as well (or at least point it out in a pull-request description).


@ -3,23 +3,7 @@ toc_priority: 3
toc_title: Support
---
# ClickHouse Commercial Support Service Providers {#clickhouse-commercial-support-service-providers}
# ClickHouse Commercial Support Service {#clickhouse-commercial-support-service}
!!! info "Info"
If you have launched a ClickHouse commercial support service, feel free to [open a pull-request](https://github.com/ClickHouse/ClickHouse/edit/master/docs/en/commercial/support.md) adding it to the following list.
## Yandex.Cloud
ClickHouse worldwide support from the authors of ClickHouse. Supports on-premise and cloud deployments. Ask for details at clickhouse-support@yandex-team.com
## Altinity {#altinity}
Altinity has offered enterprise ClickHouse support and services since 2017. Altinity customers range from Fortune 100 enterprises to startups. Visit [www.altinity.com](https://www.altinity.com/) for more information.
## Mafiree {#mafiree}
[Service description](http://mafiree.com/clickhouse-analytics-services.php)
## MinervaDB {#minervadb}
[Service description](https://minervadb.com/index.php/clickhouse-consulting-and-support-by-minervadb/)
Detailed public description for ClickHouse support services is not ready yet; please [contact us](https://clickhouse.com/company/#contact) to learn more.


@ -63,7 +63,7 @@ git checkout -b name_for_a_branch_with_my_test upstream/master
#### Install & run clickhouse
1) install `clickhouse-server` (follow [official docs](https://clickhouse.tech/docs/en/getting-started/install/))
1) install `clickhouse-server` (follow [official docs](https://clickhouse.com/docs/en/getting-started/install/))
2) install test configurations (they will use a ZooKeeper mock implementation and adjust some settings)
```
cd ~/workspace/ClickHouse/tests/config


@ -196,4 +196,4 @@ Besides, each replica stores its state in ZooKeeper as the set of parts and its
!!! note "Note"
The ClickHouse cluster consists of independent shards, and each shard consists of replicas. The cluster is **not elastic**, so after adding a new shard, data is not rebalanced between shards automatically. Instead, the cluster load is supposed to be adjusted to be uneven. This implementation gives you more control, and it is ok for relatively small clusters, such as tens of nodes. But for clusters with hundreds of nodes that we are using in production, this approach becomes a significant drawback. We should implement a table engine that spans across the cluster with dynamically replicated regions that could be split and balanced between clusters automatically.
{## [Original article](https://clickhouse.tech/docs/en/development/architecture/) ##}
{## [Original article](https://clickhouse.com/docs/en/development/architecture/) ##}


@ -5,7 +5,7 @@ toc_title: Source Code Browser
# Browse ClickHouse Source Code {#browse-clickhouse-source-code}
You can use **Woboq** online code browser available [here](https://clickhouse.tech/codebrowser/html_report/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.
You can use **Woboq** online code browser available [here](https://clickhouse.com/codebrowser/html_report/ClickHouse/src/index.html). It provides code navigation and semantic highlighting, search and indexing. The code snapshot is updated daily.
Also, you can browse sources on [GitHub](https://github.com/ClickHouse/ClickHouse) as usual.


@ -114,15 +114,25 @@ To do so, create the `/Library/LaunchDaemons/limit.maxfiles.plist` file with the
</plist>
```
Execute the following command:
Give the file correct permissions:
``` bash
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
```
Reboot.
Validate that the file is correct:
To check if it's working, you can use the `ulimit -n` command.
``` bash
plutil /Library/LaunchDaemons/limit.maxfiles.plist
```
Load the file (or reboot):
``` bash
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
```
To check if it's working, use the `ulimit -n` or `launchctl limit maxfiles` commands.
## Run ClickHouse server:
@ -131,4 +141,4 @@ cd ClickHouse
./build/programs/clickhouse-server --config-file ./programs/server/config.xml
```
[Original article](https://clickhouse.tech/docs/en/development/build_osx/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/development/build_osx/) <!--hide-->

View File

@ -23,7 +23,7 @@ $ sudo apt-get install git cmake python ninja-build
Or use cmake3 instead of cmake on older systems.
### Install clang-12 (recommended) {#install-clang-12}
### Install clang-13 (recommended) {#install-clang-13}
On Ubuntu/Debian you can use the automatic installation script (check [official webpage](https://apt.llvm.org/))
@ -33,11 +33,11 @@ sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
For other Linux distributions, check the availability of [prebuilt packages](https://releases.llvm.org/download.html) or build clang [from source](https://clang.llvm.org/get_started.html).
#### Use clang-12 for Builds
#### Use clang-13 for Builds
``` bash
$ export CC=clang-12
$ export CXX=clang++-12
$ export CC=clang-13
$ export CXX=clang++-13
```
GCC can also be used, though it is discouraged.
@ -161,4 +161,4 @@ Note that the split build has several drawbacks:
* You cannot run the integration tests since they only work with a single complete binary.
* You can't easily copy the binaries elsewhere. Instead of moving a single binary you'll need to copy all binaries and libraries.
[Original article](https://clickhouse.tech/docs/en/development/build/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/development/build/) <!--hide-->

View File

@ -117,7 +117,7 @@ described [here](tests.md#functional-test-locally).
## Build Check {#build-check}
Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The `cmake` options can be found in the build log by grepping for `cmake`. Use these options and follow the [general build process](build.md).
Builds ClickHouse in various configurations for use in further steps. You have to fix the builds that fail. Build logs often have enough information to fix the error, but you might have to reproduce the failure locally. The `cmake` options can be found in the build log by grepping for `cmake`. Use these options and follow the [general build process](../development/build.md).
### Report Details
@ -127,7 +127,7 @@ Builds ClickHouse in various configurations for use in further steps. You have t
- **Build type**: `Debug` or `RelWithDebInfo` (cmake).
- **Sanitizer**: `none` (without sanitizers), `address` (ASan), `memory` (MSan), `undefined` (UBSan), or `thread` (TSan).
- **Bundled**: `bundled` build uses libraries from `contrib` folder, and `unbundled` build uses system libraries.
- **Splitted** `splitted` is a [split build](build.md#split-build)
- **Splitted** `splitted` is a [split build](../development/build.md#split-build)
- **Status**: `success` or `fail`
- **Build log**: link to the building and files copying log, useful when build failed.
- **Build time**.
@ -157,7 +157,7 @@ etc. Look at the report to see which tests fail, then reproduce the failure
locally as described [here](tests.md#functional-test-locally). Note that you
have to use the correct build configuration to reproduce -- a test might fail
under AddressSanitizer but pass in Debug. Download the binary from [CI build
checks page](build.md#you-dont-have-to-build-clickhouse), or build it locally.
checks page](../development/build.md#you-dont-have-to-build-clickhouse), or build it locally.
## Functional Stateful Tests
@ -183,11 +183,11 @@ concurrency-related errors. If it fails:
## Split Build Smoke Test
Checks that the server build in [split build](build.md#split-build)
Checks that the server build in [split build](../development/build.md#split-build)
configuration can start and run simple queries. If it fails:
* Fix other test errors first;
* Build the server in [split build](build.md#split-build) configuration
* Build the server in [split build](../development/build.md#split-build) configuration
locally and check whether it can start and run `select 1`.

View File

@ -233,13 +233,13 @@ Just in case, it is worth mentioning that CLion creates `build` path on its own,
## Writing Code {#writing-code}
The description of ClickHouse architecture can be found here: https://clickhouse.tech/docs/en/development/architecture/
The description of ClickHouse architecture can be found here: https://clickhouse.com/docs/en/development/architecture/
The Code Style Guide: https://clickhouse.tech/docs/en/development/style/
The Code Style Guide: https://clickhouse.com/docs/en/development/style/
Adding third-party libraries: https://clickhouse.tech/docs/en/development/contrib/#adding-third-party-libraries
Adding third-party libraries: https://clickhouse.com/docs/en/development/contrib/#adding-third-party-libraries
Writing tests: https://clickhouse.tech/docs/en/development/tests/
Writing tests: https://clickhouse.com/docs/en/development/tests/
List of tasks: https://github.com/ClickHouse/ClickHouse/issues?q=is%3Aopen+is%3Aissue+label%3A%22easy+task%22

View File

@ -7,4 +7,4 @@ toc_title: hidden
# ClickHouse Development {#clickhouse-development}
[Original article](https://clickhouse.tech/docs/en/development/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/development/) <!--hide-->

View File

@ -828,4 +828,4 @@ function(
size_t limit)
```
[Original article](https://clickhouse.tech/docs/en/development/style/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/development/style/) <!--hide-->

View File

@ -239,7 +239,7 @@ Google OSS-Fuzz can be found at `docker/fuzz`.
We also use a simple fuzz test to generate random SQL queries and to check that the server does not die while executing them.
You can find it in `00746_sql_fuzzy.pl`. This test should be run continuously (overnight and longer).
We also use a sophisticated AST-based query fuzzer that is able to find a huge number of corner cases. It does random permutations and substitutions in the query AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests while processing them in random order. You can learn more about this fuzzer in [this blog article](https://clickhouse.tech/blog/en/2021/fuzzing-clickhouse/).
We also use a sophisticated AST-based query fuzzer that is able to find a huge number of corner cases. It does random permutations and substitutions in the query AST. It remembers AST nodes from previous tests to use them for fuzzing of subsequent tests while processing them in random order. You can learn more about this fuzzer in [this blog article](https://clickhouse.com/blog/en/2021/fuzzing-clickhouse/).
## Stress test
@ -341,4 +341,4 @@ Build jobs and tests are run in Sandbox on per commit basis. Resulting packages
We do not use Travis CI due to the limit on time and computational power.
We do not use Jenkins. It was used before and now we are happy we are not using Jenkins.
[Original article](https://clickhouse.tech/docs/en/development/tests/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/development/tests/) <!--hide-->

View File

@ -13,4 +13,4 @@ Its optimized for storing many small \*Log tables, for which there is a long
CREATE DATABASE testlazy ENGINE = Lazy(expiration_time_in_seconds);
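A concrete sketch (the database name and the 600-second expiration are illustrative; only \*Log-family tables can live in a Lazy database):

``` sql
-- Tables are kept in RAM only for 600 seconds after the last access.
CREATE DATABASE test_lazy ENGINE = Lazy(600);

-- Lazy databases are meant for *Log-family tables such as TinyLog.
CREATE TABLE test_lazy.events (ts DateTime, msg String) ENGINE = TinyLog;
```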
[Original article](https://clickhouse.tech/docs/en/database_engines/lazy/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/database_engines/lazy/) <!--hide-->

View File

@ -197,4 +197,4 @@ SELECT * FROM mysql.test;
└───┴─────┴──────┘
```
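For reference, such a database is created roughly like this (a sketch: host, credentials, and names are illustrative, and the setting name is an assumption based on recent versions, since the engine is experimental and must be enabled first):

``` sql
-- Hypothetical connection parameters; adjust to your MySQL instance.
SET allow_experimental_database_materialized_mysql = 1;
CREATE DATABASE mysql ENGINE = MaterializedMySQL('localhost:3306', 'db', 'user', 'password');
```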
[Original article](https://clickhouse.tech/docs/en/engines/database-engines/materialized-mysql/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/database-engines/materialized-mysql/) <!--hide-->

View File

@ -23,6 +23,20 @@ ENGINE = MaterializedPostgreSQL('host:port', ['database' | database], 'user', 'p
- `user` — PostgreSQL user.
- `password` — User password.
## Dynamically adding new tables to replication
``` sql
ATTACH TABLE postgres_database.new_table;
```
This works as well if the `materialized_postgresql_tables_list` setting is specified.
## Dynamically removing tables from replication
``` sql
DETACH TABLE postgres_database.table_to_remove;
```
## Settings {#settings}
- [materialized_postgresql_max_block_size](../../operations/settings/settings.md#materialized-postgresql-max-block-size)
@ -44,6 +58,12 @@ SETTINGS materialized_postgresql_max_block_size = 65536,
SELECT * FROM database1.table1;
```
It is also possible to change settings at run time.
``` sql
ALTER DATABASE postgres_database MODIFY SETTING materialized_postgresql_max_block_size = <new_size>;
```
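For instance, with a concrete (illustrative) value:

``` sql
ALTER DATABASE postgres_database MODIFY SETTING materialized_postgresql_max_block_size = 16384;
```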
## Requirements {#requirements}
1. The [wal_level](https://www.postgresql.org/docs/current/runtime-config-wal.html) setting must have a value `logical` and `max_replication_slots` parameter must have a value at least `2` in the PostgreSQL config file.

View File

@ -147,4 +147,4 @@ SELECT * FROM mysql_db.mysql_table
└────────┴───────┘
```
[Original article](https://clickhouse.tech/docs/en/database_engines/mysql/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/database_engines/mysql/) <!--hide-->

View File

@ -136,4 +136,4 @@ DESCRIBE TABLE test_database.test_table;
└────────┴───────────────────┘
```
[Original article](https://clickhouse.tech/docs/en/database-engines/postgresql/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/database-engines/postgresql/) <!--hide-->

View File

@ -12,4 +12,4 @@ There are two key engine kinds in ClickHouse:
- [Table engines](../engines/table-engines/index.md)
- [Database engines](../engines/database-engines/index.md)
{## [Original article](https://clickhouse.tech/docs/en/engines/) ##}
{## [Original article](https://clickhouse.com/docs/en/engines/) ##}

View File

@ -86,4 +86,4 @@ To select data from a virtual column, you must specify its name in the `SELECT`
If you create a table with a column that has the same name as one of the table's virtual columns, the virtual column becomes inaccessible. We do not recommend doing this. To help avoid conflicts, virtual column names are usually prefixed with an underscore.
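A sketch, assuming a hypothetical MergeTree table named `hits`; `_part` is one of the MergeTree virtual columns and is not returned by `SELECT *`:

``` sql
-- Count rows per data part; the virtual column must be named explicitly.
SELECT _part, count() FROM hits GROUP BY _part;
```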
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/) <!--hide-->

View File

@ -81,4 +81,4 @@ You can also change any [rocksdb options](https://github.com/facebook/rocksdb/wi
</rocksdb>
```
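For context, a minimal EmbeddedRocksDB table could look like this (a sketch; the names are illustrative):

``` sql
CREATE TABLE rocksdb_table
(
    key String,
    value UInt32
)
ENGINE = EmbeddedRocksDB
PRIMARY KEY key;
```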
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/embedded-rocksdb/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/embedded-rocksdb/) <!--hide-->

View File

libhdfs3 supports HDFS namenode HA.
- [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/hdfs/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/hdfs/) <!--hide-->

View File

@ -92,4 +92,4 @@ FROM system.numbers
- [JDBC table function](../../../sql-reference/table-functions/jdbc.md).
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/jdbc/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/jdbc/) <!--hide-->

View File

@ -194,4 +194,4 @@ Example:
- [Virtual columns](../../../engines/table-engines/index.md#table_engines-virtual_columns)
- [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/kafka/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/kafka/) <!--hide-->

View File

@ -66,4 +66,4 @@ SELECT COUNT() FROM mongo_table;
└─────────┘
```
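For reference, a MongoDB-backed table such as the one queried above is created roughly like this (a sketch; host, database, collection, and credentials are illustrative):

``` sql
CREATE TABLE mongo_table
(
    key UInt64,
    data String
)
ENGINE = MongoDB('mongo1:27017', 'test', 'simple_table', 'testuser', 'clickhouse');
```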
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/mongodb/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/mongodb/) <!--hide-->

View File

@ -113,4 +113,4 @@ SELECT * FROM mysql_table
- [The mysql table function](../../../sql-reference/table-functions/mysql.md)
- [Using MySQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-mysql)
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/mysql/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/mysql/) <!--hide-->

View File

@ -128,4 +128,4 @@ SELECT * FROM odbc_t
- [ODBC external dictionaries](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-odbc)
- [ODBC table function](../../../sql-reference/table-functions/odbc.md)
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/odbc/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/odbc/) <!--hide-->

View File

@ -149,4 +149,4 @@ CREATE TABLE pg_table_schema_with_dots (a UInt32)
- [The `postgresql` table function](../../../sql-reference/table-functions/postgresql.md)
- [Using PostgreSQL as a source of external dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md#dicts-external_dicts_dict_sources-postgresql)
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/postgresql/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/postgresql/) <!--hide-->

View File

@ -170,4 +170,4 @@ Example:
- `_message_id` - message ID of the received message; non-empty if it was set when the message was published.
- `_timestamp` - timestamp of the received message; non-empty if it was set when the message was published.
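Virtual columns must be named explicitly in the `SELECT` list; a sketch, assuming a RabbitMQ engine table named `queue` (hypothetical):

``` sql
SELECT _message_id, _timestamp, * FROM queue LIMIT 5;
```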
[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/rabbitmq/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/engines/table-engines/integrations/rabbitmq/) <!--hide-->

View File

@ -44,4 +44,4 @@ The `TinyLog` engine is the simplest in the family and provides the poorest func
The `Log` and `StripeLog` engines support parallel data reading. When reading data, ClickHouse uses multiple threads. Each thread processes a separate data block. The `Log` engine uses a separate file for each column of the table. `StripeLog` stores all the data in one file. As a result, the `StripeLog` engine uses fewer file descriptors, but the `Log` engine provides higher efficiency when reading data.
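Minimal sketches of the three engines (the column sets are illustrative):

``` sql
CREATE TABLE tiny_log   (ts DateTime, msg String) ENGINE = TinyLog;
CREATE TABLE plain_log  (ts DateTime, msg String) ENGINE = Log;
CREATE TABLE stripe_log (ts DateTime, msg String) ENGINE = StripeLog;
```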
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/log_family/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/log_family/) <!--hide-->

View File

@ -90,4 +90,4 @@ SELECT * FROM stripe_log_table ORDER BY timestamp
└─────────────────────┴──────────────┴────────────────────────────┘
```
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/stripelog/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/stripelog/) <!--hide-->

View File

@ -11,4 +11,4 @@ This table engine is typically used with the write-once method: write data one t
Queries are executed in a single stream. In other words, this engine is intended for relatively small tables (up to about 1,000,000 rows). It makes sense to use this table engine if you have many small tables, since it's simpler than the [Log](../../../engines/table-engines/log-family/log.md) engine (fewer files need to be opened).
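A sketch of the write-once pattern (the names are illustrative):

``` sql
CREATE TABLE tiny (id UInt64, value String) ENGINE = TinyLog;
INSERT INTO tiny VALUES (1, 'one'), (2, 'two');
SELECT * FROM tiny;  -- read as many times as needed, in a single stream
```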
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/tinylog/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/tinylog/) <!--hide-->

View File

@ -100,4 +100,4 @@ GROUP BY StartDate
ORDER BY StartDate;
```
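When reading from an AggregatingMergeTree table, states of aggregate functions have to be finalized with the `-Merge` combinator. A sketch, assuming an illustrative table `agg_visits` where `Visits` is an `AggregateFunction(uniq, ...)` column:

``` sql
SELECT StartDate, uniqMerge(Visits) AS visits
FROM agg_visits
GROUP BY StartDate
ORDER BY StartDate;
```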
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/aggregatingmergetree/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/aggregatingmergetree/) <!--hide-->

View File

@ -303,4 +303,4 @@ select * FROM UAct
└─────────────────────┴───────────┴──────────┴──────┘
```
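For reference, a table this query could run against might be defined as follows (a sketch mirroring the docs' `UAct` example, with `Sign` marking row state):

``` sql
CREATE TABLE UAct
(
    UserID UInt64,
    PageViews UInt8,
    Duration UInt8,
    Sign Int8
)
ENGINE = CollapsingMergeTree(Sign)
ORDER BY UserID;
```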
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/collapsingmergetree/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/collapsingmergetree/) <!--hide-->

View File

@ -127,4 +127,4 @@ Note that on the operating server, you cannot manually change the set of parts o
ClickHouse allows you to perform operations with partitions: delete them, copy them from one table to another, or create a backup. See the list of all operations in the section [Manipulations With Partitions and Parts](../../../sql-reference/statements/alter/partition.md#alter_manipulations-with-partitions).
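A few illustrative examples (the table name and partition ID are hypothetical):

``` sql
ALTER TABLE visits DETACH PARTITION 201901;  -- move the partition to the `detached` directory
ALTER TABLE visits ATTACH PARTITION 201901;  -- bring it back
ALTER TABLE visits DROP PARTITION 201901;    -- delete the partition
```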
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/custom_partitioning_key/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/custom_partitioning_key/) <!--hide-->

View File

@ -170,4 +170,4 @@ Fields for `pattern` and `default` sections:
!!! warning "Warning"
Data rollup is performed during merges. Usually, for old partitions, merges are not started, so for rollup it is necessary to trigger an unscheduled merge using [optimize](../../../sql-reference/statements/optimize.md). Or use additional tools, for example [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer).
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/graphitemergetree/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/graphitemergetree/) <!--hide-->

View File

@ -100,8 +100,8 @@ For a description of parameters, see the [CREATE query description](../../../sql
- `min_merge_bytes_to_use_direct_io` — The minimum data volume for merge operation that is required for using direct I/O access to the storage disk. When merging data parts, ClickHouse calculates the total storage volume of all the data to be merged. If the volume exceeds `min_merge_bytes_to_use_direct_io` bytes, ClickHouse reads and writes the data to the storage disk using the direct I/O interface (`O_DIRECT` option). If `min_merge_bytes_to_use_direct_io = 0`, then direct I/O is disabled. Default value: `10 * 1024 * 1024 * 1024` bytes.
<a name="mergetree_setting-merge_with_ttl_timeout"></a>
- `merge_with_ttl_timeout` — Minimum delay in seconds before repeating a merge with delete TTL. Default value: `14400` seconds (4 hours).
- `merge_with_recompression_ttl_timeout` — Minimum delay in seconds before repeating a merge with recompression TTL. Default value: `14400` seconds (4 hours).
- `try_fetch_recompressed_part_timeout` — Timeout (in seconds) before starting a merge with recompression. During this time, ClickHouse tries to fetch the recompressed part from the replica that was assigned this merge with recompression. Default value: `7200` seconds (2 hours).
- `merge_with_recompression_ttl_timeout` — Minimum delay in seconds before repeating a merge with recompression TTL. Default value: `14400` seconds (4 hours).
- `try_fetch_recompressed_part_timeout` — Timeout (in seconds) before starting a merge with recompression. During this time, ClickHouse tries to fetch the recompressed part from the replica that was assigned this merge with recompression. Default value: `7200` seconds (2 hours).
- `write_final_mark` — Enables or disables writing the final index mark at the end of a data part (after the last byte). Default value: 1. Don't turn it off.
- `merge_max_block_size` — Maximum number of rows in block for merge operations. Default value: 8192.
- `storage_policy` — Storage policy. See [Using Multiple Block Devices for Data Storage](#table_engine-mergetree-multiple-volumes).
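A sketch showing how such settings are passed in the `SETTINGS` clause of `CREATE TABLE` (the values are illustrative):

``` sql
CREATE TABLE t
(
    d Date,
    x UInt64
)
ENGINE = MergeTree
ORDER BY x
SETTINGS merge_with_ttl_timeout = 3600, merge_max_block_size = 8192;
```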
@ -335,7 +335,16 @@ SELECT count() FROM table WHERE u64 * i32 == 10 AND u64 * length(s) >= 1234
The optional `false_positive` parameter is the probability of receiving a false positive response from the filter. Possible values: (0, 1). Default value: 0.025.
Supported data types: `Int*`, `UInt*`, `Float*`, `Enum`, `Date`, `DateTime`, `String`, `FixedString`, `Array`, `LowCardinality`, `Nullable`, `UUID`.
Supported data types: `Int*`, `UInt*`, `Float*`, `Enum`, `Date`, `DateTime`, `String`, `FixedString`, `Array`, `LowCardinality`, `Nullable`, `UUID`, `Map`.
For the `Map` data type, the client can specify whether the index should be created for keys or for values, using the [mapKeys](../../../sql-reference/functions/tuple-map-functions.md#mapkeys) or [mapValues](../../../sql-reference/functions/tuple-map-functions.md#mapvalues) function.
Example of index creation for the `Map` data type:
```
INDEX map_key_index mapKeys(map_column) TYPE bloom_filter GRANULARITY 1
INDEX map_value_index mapValues(map_column) TYPE bloom_filter GRANULARITY 1
```
The following functions can use it: [equals](../../../sql-reference/functions/comparison-functions.md), [notEquals](../../../sql-reference/functions/comparison-functions.md), [in](../../../sql-reference/functions/in-functions.md), [notIn](../../../sql-reference/functions/in-functions.md), [has](../../../sql-reference/functions/array-functions.md).
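A sketch of a full table definition with both index variants (the names are illustrative; older versions may additionally require `allow_experimental_map_type = 1`):

``` sql
CREATE TABLE map_table
(
    m Map(String, UInt64),
    INDEX map_key_index   mapKeys(m)   TYPE bloom_filter GRANULARITY 1,
    INDEX map_value_index mapValues(m) TYPE bloom_filter GRANULARITY 1
)
ENGINE = MergeTree
ORDER BY tuple();
```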
@ -398,7 +407,7 @@ Projections are an experimental feature. To enable them you must set the [allow_
Projections are not supported in `SELECT` statements with the [FINAL](../../../sql-reference/statements/select/from.md#select-from-final) modifier.
### Projection Query {#projection-query}
A projection query is what defines a projection. It implicitly selects data from the parent table.
A projection query is what defines a projection. It implicitly selects data from the parent table.
**Syntax**
```sql
@ -548,7 +557,7 @@ ORDER BY d
TTL d + INTERVAL 1 MONTH DELETE WHERE toDayOfWeek(d) = 1;
```
Creating a table where expired rows are recompressed:
Creating a table where expired rows are recompressed:
```sql
CREATE TABLE table_for_recompression

View File

@ -288,5 +288,7 @@ If the data in ZooKeeper was lost or damaged, you can save data by moving it to
- [background_schedule_pool_size](../../../operations/settings/settings.md#background_schedule_pool_size)
- [background_fetches_pool_size](../../../operations/settings/settings.md#background_fetches_pool_size)
- [execute_merges_on_single_replica_time_threshold](../../../operations/settings/settings.md#execute-merges-on-single-replica-time-threshold)
- [max_replicated_fetches_network_bandwidth](../../../operations/settings/merge-tree-settings.md#max_replicated_fetches_network_bandwidth)
- [max_replicated_sends_network_bandwidth](../../../operations/settings/merge-tree-settings.md#max_replicated_sends_network_bandwidth)
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/replication/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/replication/) <!--hide-->

View File

@ -136,4 +136,4 @@ When requesting data, use the [sumMap(key, value)](../../../sql-reference/aggreg
For a nested data structure, you do not need to specify its columns in the tuple of columns for summation.
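For example (a sketch, assuming a SummingMergeTree table `sum_map` with a nested structure `statusMap` holding `status` and `requests` columns):

``` sql
SELECT timeslot, sumMap(statusMap.status, statusMap.requests)
FROM sum_map
GROUP BY timeslot;
```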
[Original article](https://clickhouse.tech/docs/en/operations/table_engines/summingmergetree/) <!--hide-->
[Original article](https://clickhouse.com/docs/en/operations/table_engines/summingmergetree/) <!--hide-->

Some files were not shown because too many files have changed in this diff.