Merge branch 'master' into control_execution_period_of_clearOldTemporaryDirectories

commit c696817a79
4  .github/ISSUE_TEMPLATE/10_question.md  (vendored)
@@ -7,6 +7,6 @@ assignees: ''
 ---
 
-Make sure to check documentation https://clickhouse.yandex/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
+> Make sure to check documentation https://clickhouse.yandex/docs/en/ first. If the question is concise and probably has a short answer, asking it in Telegram chat https://telegram.me/clickhouse_en is probably the fastest way to find the answer. For more complicated questions, consider asking them on StackOverflow with "clickhouse" tag https://stackoverflow.com/questions/tagged/clickhouse
 
-If you still prefer GitHub issues, remove all this text and ask your question here.
+> If you still prefer GitHub issues, remove all this text and ask your question here.
14  .github/ISSUE_TEMPLATE/20_feature-request.md  (vendored)
@@ -7,16 +7,20 @@ assignees: ''
 ---
 
-(you don't have to strictly follow this form)
+> (you don't have to strictly follow this form)
 
 **Use case**
-A clear and concise description of what is the intended usage scenario is.
+
+> A clear and concise description of what is the intended usage scenario is.
 
 **Describe the solution you'd like**
-A clear and concise description of what you want to happen.
+
+> A clear and concise description of what you want to happen.
 
 **Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
+
+> A clear and concise description of any alternative solutions or features you've considered.
 
 **Additional context**
-Add any other context or screenshots about the feature request here.
+
+> Add any other context or screenshots about the feature request here.
12  .github/ISSUE_TEMPLATE/40_bug-report.md  (vendored)
@@ -7,11 +7,11 @@ assignees: ''
 ---
 
-You have to provide the following information whenever possible.
+> You have to provide the following information whenever possible.
 
 **Describe the bug**
 
-A clear and concise description of what works not as it is supposed to.
+> A clear and concise description of what works not as it is supposed to.
 
 **Does it reproduce on recent release?**
@@ -19,7 +19,7 @@ A clear and concise description of what works not as it is supposed to.
 
 **Enable crash reporting**
 
-If possible, change "enabled" to true in "send_crash_reports" section in `config.xml`:
+> If possible, change "enabled" to true in "send_crash_reports" section in `config.xml`:
 
 ```
 <send_crash_reports>
@@ -39,12 +39,12 @@ If possible, change "enabled" to true in "send_crash_reports" section in `config
 
 **Expected behavior**
 
-A clear and concise description of what you expected to happen.
+> A clear and concise description of what you expected to happen.
 
 **Error message and/or stacktrace**
 
-If applicable, add screenshots to help explain your problem.
+> If applicable, add screenshots to help explain your problem.
 
 **Additional context**
 
-Add any other context about the problem here.
+> Add any other context about the problem here.
5  .github/ISSUE_TEMPLATE/50_build-issue.md  (vendored)
@@ -7,10 +7,11 @@ assignees: ''
 ---
 
-Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.yandex/docs/en/development/build/
+> Make sure that `git diff` result is empty and you've just pulled fresh master. Try cleaning up cmake cache. Just in case, official build instructions are published here: https://clickhouse.yandex/docs/en/development/build/
 
 **Operating system**
-OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside virtual machine, please mention it too.
+
+> OS kind or distribution, specific version/release, non-standard kernel if any. If you are trying to build inside virtual machine, please mention it too.
 
 **Cmake version**
12  .github/PULL_REQUEST_TEMPLATE.md  (vendored)
@@ -2,28 +2,26 @@ I hereby agree to the terms of the CLA available at: https://yandex.ru/legal/cla
 
 Changelog category (leave one):
 - New Feature
-- Bug Fix
 - Improvement
+- Bug Fix
 - Performance Improvement
 - Backward Incompatible Change
 - Build/Testing/Packaging Improvement
 - Documentation (changelog entry is not required)
-- Other
+- Not for changelog (changelog entry is not required)
 
 
 Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):
 
 ...
 
 
 Detailed description / Documentation draft:
 
 ...
 
-By adding documentation, you'll allow users to try your new feature immediately, not when someone else will have time to document it later. Documentation is necessary for all features that affect user experience in any way. You can add brief documentation draft above, or add documentation right into your patch as Markdown files in [docs](https://github.com/ClickHouse/ClickHouse/tree/master/docs) folder.
-
-If you are doing this for the first time, it's recommended to read the lightweight [Contributing to ClickHouse Documentation](https://github.com/ClickHouse/ClickHouse/tree/master/docs/README.md) guide first.
+> By adding documentation, you'll allow users to try your new feature immediately, not when someone else will have time to document it later. Documentation is necessary for all features that affect user experience in any way. You can add brief documentation draft above, or add documentation right into your patch as Markdown files in [docs](https://github.com/ClickHouse/ClickHouse/tree/master/docs) folder.
+
+> If you are doing this for the first time, it's recommended to read the lightweight [Contributing to ClickHouse Documentation](https://github.com/ClickHouse/ClickHouse/tree/master/docs/README.md) guide first.
 
-Information about CI checks: https://clickhouse.tech/docs/en/development/continuous-integration/
+> Information about CI checks: https://clickhouse.tech/docs/en/development/continuous-integration/
9  .gitmodules  (vendored)
@@ -225,6 +225,15 @@
 [submodule "contrib/yaml-cpp"]
     path = contrib/yaml-cpp
     url = https://github.com/ClickHouse-Extras/yaml-cpp.git
+[submodule "contrib/libstemmer_c"]
+    path = contrib/libstemmer_c
+    url = https://github.com/ClickHouse-Extras/libstemmer_c.git
+[submodule "contrib/wordnet-blast"]
+    path = contrib/wordnet-blast
+    url = https://github.com/ClickHouse-Extras/wordnet-blast.git
+[submodule "contrib/lemmagen-c"]
+    path = contrib/lemmagen-c
+    url = https://github.com/ClickHouse-Extras/lemmagen-c.git
 [submodule "contrib/libpqxx"]
     path = contrib/libpqxx
     url = https://github.com/ClickHouse-Extras/libpqxx.git
@@ -542,6 +542,7 @@ include (cmake/find/libpqxx.cmake)
 include (cmake/find/nuraft.cmake)
 include (cmake/find/yaml-cpp.cmake)
 include (cmake/find/s2geometry.cmake)
+include (cmake/find/nlp.cmake)
 
 if(NOT USE_INTERNAL_PARQUET_LIBRARY)
     set (ENABLE_ORC OFF CACHE INTERNAL "")
@@ -69,7 +69,7 @@ void convertHistoryFile(const std::string & path, replxx::Replxx & rx)
     }
 
     std::string line;
-    if (!getline(in, line).good())
+    if (getline(in, line).bad())
     {
         rx.print("Cannot read from %s (for conversion): %s\n",
             path.c_str(), errnoToString(errno).c_str());
@@ -78,7 +78,7 @@ void convertHistoryFile(const std::string & path, replxx::Replxx & rx)
 
     /// This is the marker of the date, no need to convert.
     static char const REPLXX_TIMESTAMP_PATTERN[] = "### dddd-dd-dd dd:dd:dd.ddd";
-    if (line.starts_with("### ") && line.size() == strlen(REPLXX_TIMESTAMP_PATTERN))
+    if (line.empty() || (line.starts_with("### ") && line.size() == strlen(REPLXX_TIMESTAMP_PATTERN)))
     {
         return;
     }
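Editor's note: the `!good()` → `bad()` change above is subtle. `good()` is false after a plain end-of-file (for example, an empty history file), while `bad()` is set only on a genuine low-level read error. A minimal standalone sketch of the distinction (the in-memory stream is a stand-in for the history file, not part of the commit):

```cpp
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::istringstream in("");   // models an empty history file
    std::string line;
    std::getline(in, line);      // hits EOF immediately, reads nothing

    // Old check: !good() is true here (eofbit/failbit are set),
    // so an empty file was reported as a conversion error.
    std::cout << "!good(): " << !in.good() << '\n';   // prints 1

    // New check: bad() stays false because EOF is not a read failure,
    // so an empty file now passes silently.
    std::cout << "bad():   " << in.bad() << '\n';     // prints 0
}
```

Together with the `line.empty() ||` short-circuit added in the second hunk, an empty line no longer falls through to the timestamp-pattern comparison.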
24  base/common/removeDuplicates.h  (new file)
@@ -0,0 +1,24 @@
+#pragma once
+#include <vector>
+
+/// Removes duplicates from a container without changing the order of its elements.
+/// Keeps the last occurrence of each element.
+/// Should NOT be used for containers with a lot of elements because it has O(N^2) complexity.
+template <typename T>
+void removeDuplicatesKeepLast(std::vector<T> & vec)
+{
+    auto begin = vec.begin();
+    auto end = vec.end();
+    auto new_begin = end;
+    for (auto current = end; current != begin;)
+    {
+        --current;
+        if (std::find(new_begin, end, *current) == end)
+        {
+            --new_begin;
+            if (new_begin != current)
+                *new_begin = *current;
+        }
+    }
+    vec.erase(begin, new_begin);
+}
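Editor's note: a quick usage sketch of the new helper. Note that the header itself only includes `<vector>`, while `std::find` is declared in `<algorithm>`, so the sketch includes that explicitly and inlines the function to stay self-contained:

```cpp
#include <algorithm>   // std::find, used by removeDuplicatesKeepLast
#include <iostream>
#include <vector>

// Copied from base/common/removeDuplicates.h above.
template <typename T>
void removeDuplicatesKeepLast(std::vector<T> & vec)
{
    auto begin = vec.begin();
    auto end = vec.end();
    auto new_begin = end;
    for (auto current = end; current != begin;)
    {
        --current;
        if (std::find(new_begin, end, *current) == end)
        {
            --new_begin;
            if (new_begin != current)
                *new_begin = *current;
        }
    }
    vec.erase(begin, new_begin);
}

int main()
{
    std::vector<int> v{1, 2, 1, 3, 2};
    removeDuplicatesKeepLast(v);
    for (int x : v)
        std::cout << x << ' ';   // prints "1 3 2": last occurrences survive, order kept
}
```

The backward scan compares each element only against the already-deduplicated tail, but as the header's own comment warns, the total cost is still O(N^2).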
@@ -259,10 +259,25 @@ private:
     Poco::Logger * log;
     BaseDaemon & daemon;
 
-    void onTerminate(const std::string & message, UInt32 thread_num) const
+    void onTerminate(std::string_view message, UInt32 thread_num) const
     {
+        size_t pos = message.find('\n');
+
         LOG_FATAL(log, "(version {}{}, {}) (from thread {}) {}",
-            VERSION_STRING, VERSION_OFFICIAL, daemon.build_id_info, thread_num, message);
+            VERSION_STRING, VERSION_OFFICIAL, daemon.build_id_info, thread_num, message.substr(0, pos));
+
+        /// Print trace from std::terminate exception line-by-line to make it easy for grep.
+        while (pos != std::string_view::npos)
+        {
+            ++pos;
+            size_t next_pos = message.find('\n', pos);
+            size_t size = next_pos;
+            if (next_pos != std::string_view::npos)
+                size = next_pos - pos;
+
+            LOG_FATAL(log, "{}", message.substr(pos, size));
+            pos = next_pos;
+        }
     }
 
     void onFault(
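Editor's note: the new loop exists so that a multi-line terminate message (typically a stack trace) becomes one log record per line, which is much easier to grep. The same traversal, extracted into a standalone sketch with `std::cout` standing in for `LOG_FATAL`:

```cpp
#include <iostream>
#include <string_view>

void logLineByLine(std::string_view message)
{
    // First line goes into the header record, like the LOG_FATAL above.
    // substr(0, npos) yields the whole string when there is no newline.
    size_t pos = message.find('\n');
    std::cout << "(header) " << message.substr(0, pos) << '\n';

    // Remaining lines are emitted one by one, mirroring the while loop.
    while (pos != std::string_view::npos)
    {
        ++pos;
        size_t next_pos = message.find('\n', pos);
        size_t size = next_pos;
        if (next_pos != std::string_view::npos)
            size = next_pos - pos;

        std::cout << message.substr(pos, size) << '\n';
        pos = next_pos;
    }
}

int main()
{
    logLineByLine("std::terminate called\n0. first frame\n1. second frame");
}
```

Switching the parameter to `std::string_view` also keeps every `substr` call allocation-free.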
@@ -4,13 +4,24 @@ QUERIES_FILE="queries.sql"
 TABLE=$1
 TRIES=3
 
+if [ -x ./clickhouse ]
+then
+    CLICKHOUSE_CLIENT="./clickhouse client"
+elif command -v clickhouse-client >/dev/null 2>&1
+then
+    CLICKHOUSE_CLIENT="clickhouse-client"
+else
+    echo "clickhouse-client is not found"
+    exit 1
+fi
+
 cat "$QUERIES_FILE" | sed "s/{table}/${TABLE}/g" | while read query; do
     sync
     echo 3 | sudo tee /proc/sys/vm/drop_caches >/dev/null
 
     echo -n "["
     for i in $(seq 1 $TRIES); do
-        RES=$(clickhouse-client --time --format=Null --query="$query" 2>&1)
+        RES=$(${CLICKHOUSE_CLIENT} --time --format=Null --max_memory_usage=100G --query="$query" 2>&1)
         [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
         [[ "$i" != $TRIES ]] && echo -n ", "
     done
@@ -11,8 +11,8 @@ DATASET="${TABLE}_v1.tar.xz"
 QUERIES_FILE="queries.sql"
 TRIES=3
 
-AMD64_BIN_URL="https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_build_check/gcc-10_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"
-AARCH64_BIN_URL="https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_special_build_check/clang-10-aarch64_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"
+AMD64_BIN_URL="https://builds.clickhouse.tech/master/amd64/clickhouse"
+AARCH64_BIN_URL="https://builds.clickhouse.tech/master/aarch64/clickhouse"
 
 # Note: on older Ubuntu versions, 'axel' does not support IPv6. If you are using IPv6-only servers on very old Ubuntu, just don't install 'axel'.
 
@@ -89,7 +89,7 @@ cat "$QUERIES_FILE" | sed "s/{table}/${TABLE}/g" | while read query; do
 
     echo -n "["
     for i in $(seq 1 $TRIES); do
-        RES=$(./clickhouse client --max_memory_usage 100000000000 --time --format=Null --query="$query" 2>&1 ||:)
+        RES=$(./clickhouse client --max_memory_usage 100G --time --format=Null --query="$query" 2>&1 ||:)
         [[ "$?" == "0" ]] && echo -n "${RES}" || echo -n "null"
         [[ "$i" != $TRIES ]] && echo -n ", "
     done
32  cmake/find/nlp.cmake  (new file)
@@ -0,0 +1,32 @@
+option(ENABLE_NLP "Enable NLP functions support" ${ENABLE_LIBRARIES})
+
+if (NOT ENABLE_NLP)
+
+    message (STATUS "NLP functions disabled")
+    return()
+endif()
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/libstemmer_c/Makefile")
+    message (WARNING "submodule contrib/libstemmer_c is missing. to fix try run: \n git submodule update --init --recursive")
+    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal libstemmer_c library, NLP functions will be disabled")
+    set (USE_NLP 0)
+    return()
+endif ()
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/wordnet-blast/CMakeLists.txt")
+    message (WARNING "submodule contrib/wordnet-blast is missing. to fix try run: \n git submodule update --init --recursive")
+    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal wordnet-blast library, NLP functions will be disabled")
+    set (USE_NLP 0)
+    return()
+endif ()
+
+if (NOT EXISTS "${ClickHouse_SOURCE_DIR}/contrib/lemmagen-c/README.md")
+    message (WARNING "submodule contrib/lemmagen-c is missing. to fix try run: \n git submodule update --init --recursive")
+    message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find internal lemmagen-c library, NLP functions will be disabled")
+    set (USE_NLP 0)
+    return()
+endif ()
+
+set (USE_NLP 1)
+
+message (STATUS "Using Libraries for NLP functions: contrib/wordnet-blast, contrib/libstemmer_c, contrib/lemmagen-c")
2  contrib/AMQP-CPP  (vendored)
@@ -1 +1 @@
-Subproject commit 03781aaff0f10ef41f902b8cf865fe0067180c10
+Subproject commit 1a6c51f4ac51ac56610fa95081bd2f349911375a
6  contrib/CMakeLists.txt  (vendored)
@@ -328,6 +328,12 @@ endif()
 
 add_subdirectory(fast_float)
 
+if (USE_NLP)
+    add_subdirectory(libstemmer-c-cmake)
+    add_subdirectory(wordnet-blast-cmake)
+    add_subdirectory(lemmagen-c-cmake)
+endif()
+
 if (USE_SQLITE)
     add_subdirectory(sqlite-cmake)
 endif()
2  contrib/NuRaft  (vendored)
@@ -1 +1 @@
-Subproject commit 976874b7aa7f422bf4ea595bb7d1166c617b1c26
+Subproject commit 0ce9490093021c63564cca159571a8b27772ad48
@@ -10,11 +10,12 @@ set (SRCS
     "${LIBRARY_DIR}/src/deferredconsumer.cpp"
     "${LIBRARY_DIR}/src/deferredextreceiver.cpp"
     "${LIBRARY_DIR}/src/deferredget.cpp"
-    "${LIBRARY_DIR}/src/deferredpublisher.cpp"
+    "${LIBRARY_DIR}/src/deferredrecall.cpp"
     "${LIBRARY_DIR}/src/deferredreceiver.cpp"
     "${LIBRARY_DIR}/src/field.cpp"
     "${LIBRARY_DIR}/src/flags.cpp"
     "${LIBRARY_DIR}/src/linux_tcp/openssl.cpp"
+    "${LIBRARY_DIR}/src/linux_tcp/sslerrorprinter.cpp"
     "${LIBRARY_DIR}/src/linux_tcp/tcpconnection.cpp"
     "${LIBRARY_DIR}/src/inbuffer.cpp"
     "${LIBRARY_DIR}/src/receivedframe.cpp"
2  contrib/arrow  (vendored)
@@ -1 +1 @@
-Subproject commit debf751a129bdda9ff4d1e895e08957ff77000a1
+Subproject commit 078e21bad344747b7656ef2d7a4f7410a0a303eb
@@ -194,9 +194,18 @@ set(ARROW_SRCS
     "${LIBRARY_DIR}/compute/cast.cc"
     "${LIBRARY_DIR}/compute/exec.cc"
     "${LIBRARY_DIR}/compute/function.cc"
+    "${LIBRARY_DIR}/compute/function_internal.cc"
     "${LIBRARY_DIR}/compute/kernel.cc"
     "${LIBRARY_DIR}/compute/registry.cc"
 
+    "${LIBRARY_DIR}/compute/exec/exec_plan.cc"
+    "${LIBRARY_DIR}/compute/exec/expression.cc"
+    "${LIBRARY_DIR}/compute/exec/key_compare.cc"
+    "${LIBRARY_DIR}/compute/exec/key_encode.cc"
+    "${LIBRARY_DIR}/compute/exec/key_hash.cc"
+    "${LIBRARY_DIR}/compute/exec/key_map.cc"
+    "${LIBRARY_DIR}/compute/exec/util.cc"
+
     "${LIBRARY_DIR}/compute/kernels/aggregate_basic.cc"
     "${LIBRARY_DIR}/compute/kernels/aggregate_mode.cc"
     "${LIBRARY_DIR}/compute/kernels/aggregate_quantile.cc"
@@ -207,6 +216,7 @@ set(ARROW_SRCS
     "${LIBRARY_DIR}/compute/kernels/scalar_arithmetic.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_boolean.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_cast_boolean.cc"
+    "${LIBRARY_DIR}/compute/kernels/scalar_cast_dictionary.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_cast_internal.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_cast_nested.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_cast_numeric.cc"
@@ -214,15 +224,18 @@ set(ARROW_SRCS
     "${LIBRARY_DIR}/compute/kernels/scalar_cast_temporal.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_compare.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_fill_null.cc"
+    "${LIBRARY_DIR}/compute/kernels/scalar_if_else.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_nested.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_set_lookup.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_string.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_temporal.cc"
     "${LIBRARY_DIR}/compute/kernels/scalar_validity.cc"
+    "${LIBRARY_DIR}/compute/kernels/util_internal.cc"
     "${LIBRARY_DIR}/compute/kernels/vector_hash.cc"
     "${LIBRARY_DIR}/compute/kernels/vector_nested.cc"
+    "${LIBRARY_DIR}/compute/kernels/vector_replace.cc"
     "${LIBRARY_DIR}/compute/kernels/vector_selection.cc"
     "${LIBRARY_DIR}/compute/kernels/vector_sort.cc"
-    "${LIBRARY_DIR}/compute/kernels/util_internal.cc"
 
     "${LIBRARY_DIR}/csv/chunker.cc"
     "${LIBRARY_DIR}/csv/column_builder.cc"
@@ -231,6 +244,7 @@ set(ARROW_SRCS
     "${LIBRARY_DIR}/csv/options.cc"
     "${LIBRARY_DIR}/csv/parser.cc"
     "${LIBRARY_DIR}/csv/reader.cc"
+    "${LIBRARY_DIR}/csv/writer.cc"
 
     "${LIBRARY_DIR}/ipc/dictionary.cc"
     "${LIBRARY_DIR}/ipc/feather.cc"
@@ -247,6 +261,7 @@ set(ARROW_SRCS
     "${LIBRARY_DIR}/io/interfaces.cc"
     "${LIBRARY_DIR}/io/memory.cc"
     "${LIBRARY_DIR}/io/slow.cc"
     "${LIBRARY_DIR}/io/stdio.cc"
+    "${LIBRARY_DIR}/io/transform.cc"
 
     "${LIBRARY_DIR}/tensor/coo_converter.cc"
@@ -257,9 +272,9 @@ set(ARROW_SRCS
     "${LIBRARY_DIR}/util/bit_block_counter.cc"
     "${LIBRARY_DIR}/util/bit_run_reader.cc"
     "${LIBRARY_DIR}/util/bit_util.cc"
-    "${LIBRARY_DIR}/util/bitmap.cc"
     "${LIBRARY_DIR}/util/bitmap_builders.cc"
     "${LIBRARY_DIR}/util/bitmap_ops.cc"
+    "${LIBRARY_DIR}/util/bitmap.cc"
     "${LIBRARY_DIR}/util/bpacking.cc"
     "${LIBRARY_DIR}/util/cancel.cc"
     "${LIBRARY_DIR}/util/compression.cc"
2  contrib/boost  (vendored)
@@ -1 +1 @@
-Subproject commit 1ccbb5a522a571ce83b606dbc2e1011c42ecccfb
+Subproject commit 9cf09dbfd55a5c6202dedbdf40781a51b02c2675
@@ -13,11 +13,12 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
         regex
         context
         coroutine
+        graph
     )
 
     if(Boost_INCLUDE_DIR AND Boost_FILESYSTEM_LIBRARY AND Boost_FILESYSTEM_LIBRARY AND
        Boost_PROGRAM_OPTIONS_LIBRARY AND Boost_REGEX_LIBRARY AND Boost_SYSTEM_LIBRARY AND Boost_CONTEXT_LIBRARY AND
-       Boost_COROUTINE_LIBRARY)
+       Boost_COROUTINE_LIBRARY AND Boost_GRAPH_LIBRARY)
 
         set(EXTERNAL_BOOST_FOUND 1)
@@ -32,6 +33,7 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
         add_library (_boost_system INTERFACE)
         add_library (_boost_context INTERFACE)
         add_library (_boost_coroutine INTERFACE)
+        add_library (_boost_graph INTERFACE)
 
         target_link_libraries (_boost_filesystem INTERFACE ${Boost_FILESYSTEM_LIBRARY})
         target_link_libraries (_boost_iostreams INTERFACE ${Boost_IOSTREAMS_LIBRARY})
@@ -40,6 +42,7 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
         target_link_libraries (_boost_system INTERFACE ${Boost_SYSTEM_LIBRARY})
         target_link_libraries (_boost_context INTERFACE ${Boost_CONTEXT_LIBRARY})
         target_link_libraries (_boost_coroutine INTERFACE ${Boost_COROUTINE_LIBRARY})
+        target_link_libraries (_boost_graph INTERFACE ${Boost_GRAPH_LIBRARY})
 
         add_library (boost::filesystem ALIAS _boost_filesystem)
         add_library (boost::iostreams ALIAS _boost_iostreams)
@@ -48,6 +51,7 @@ if (NOT USE_INTERNAL_BOOST_LIBRARY)
         add_library (boost::system ALIAS _boost_system)
         add_library (boost::context ALIAS _boost_context)
         add_library (boost::coroutine ALIAS _boost_coroutine)
+        add_library (boost::graph ALIAS _boost_graph)
     else()
         set(EXTERNAL_BOOST_FOUND 0)
         message (${RECONFIGURE_MESSAGE_LEVEL} "Can't find system boost")
@@ -221,4 +225,17 @@ if (NOT EXTERNAL_BOOST_FOUND)
     add_library (boost::coroutine ALIAS _boost_coroutine)
     target_include_directories (_boost_coroutine PRIVATE ${LIBRARY_DIR})
     target_link_libraries(_boost_coroutine PRIVATE _boost_context)
+
+    # graph
+
+    set (SRCS_GRAPH
+        "${LIBRARY_DIR}/libs/graph/src/graphml.cpp"
+        "${LIBRARY_DIR}/libs/graph/src/read_graphviz_new.cpp"
+    )
+
+    add_library (_boost_graph ${SRCS_GRAPH})
+    add_library (boost::graph ALIAS _boost_graph)
+    target_include_directories (_boost_graph PRIVATE ${LIBRARY_DIR})
+    target_link_libraries(_boost_graph PRIVATE _boost_regex)
+
 endif ()
1  contrib/lemmagen-c  (new submodule)
@@ -0,0 +1 @@
+Subproject commit 59537bdcf57bbed17913292cb4502d15657231f1
9  contrib/lemmagen-c-cmake/CMakeLists.txt  (new file)
@@ -0,0 +1,9 @@
+set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/lemmagen-c")
+set(LEMMAGEN_INCLUDE_DIR "${LIBRARY_DIR}/include")
+
+set(SRCS
+    "${LIBRARY_DIR}/src/RdrLemmatizer.cpp"
+)
+
+add_library(lemmagen STATIC ${SRCS})
+target_include_directories(lemmagen PUBLIC "${LEMMAGEN_INCLUDE_DIR}")
31  contrib/libstemmer-c-cmake/CMakeLists.txt  (new file)
@@ -0,0 +1,31 @@
+set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/libstemmer_c")
+set(STEMMER_INCLUDE_DIR "${LIBRARY_DIR}/include")
+
+FILE ( READ "${LIBRARY_DIR}/mkinc.mak" _CONTENT )
+# replace '\ ' into one big line
+STRING ( REGEX REPLACE "\\\\\n " " ${LIBRARY_DIR}/" _CONTENT "${_CONTENT}" )
+# escape ';' (if any)
+STRING ( REGEX REPLACE ";" "\\\\;" _CONTENT "${_CONTENT}" )
+# now replace lf into ';' (it makes list from the line)
+STRING ( REGEX REPLACE "\n" ";" _CONTENT "${_CONTENT}" )
+FOREACH ( LINE ${_CONTENT} )
+    # skip comments (beginning with #)
+    IF ( NOT "${LINE}" MATCHES "^#.*" )
+        # parse 'name=value1 value2..." - extract the 'name' part
+        STRING ( REGEX REPLACE "=.*$" "" _NAME "${LINE}" )
+        # extract the list of values part
+        STRING ( REGEX REPLACE "^.*=" "" _LIST "${LINE}" )
+        # replace (multi)spaces into ';' (it makes list from the line)
+        STRING ( REGEX REPLACE " +" ";" _LIST "${_LIST}" )
+        # finally get our two variables
+        IF ( "${_NAME}" MATCHES "snowball_sources" )
+            SET ( _SOURCES "${_LIST}" )
+        ELSEIF ( "${_NAME}" MATCHES "snowball_headers" )
+            SET ( _HEADERS "${_LIST}" )
+        ENDIF ()
+    endif ()
+endforeach ()
+
+# all the sources parsed. Now just add the lib
+add_library ( stemmer STATIC ${_SOURCES} ${_HEADERS} )
+target_include_directories (stemmer PUBLIC "${STEMMER_INCLUDE_DIR}")
1  contrib/libstemmer_c  (new submodule)
@@ -0,0 +1 @@
+Subproject commit c753054304d87daf460057c1a649c482aa094835
@@ -22,6 +22,7 @@ set(SRCS
     "${LIBRARY_DIR}/src/launcher.cxx"
     "${LIBRARY_DIR}/src/srv_config.cxx"
     "${LIBRARY_DIR}/src/snapshot_sync_req.cxx"
+    "${LIBRARY_DIR}/src/snapshot_sync_ctx.cxx"
     "${LIBRARY_DIR}/src/handle_timeout.cxx"
     "${LIBRARY_DIR}/src/handle_append_entries.cxx"
     "${LIBRARY_DIR}/src/cluster_config.cxx"
2  contrib/poco  (vendored)
@@ -1 +1 @@
-Subproject commit 5994506908028612869fee627d68d8212dfe7c1e
+Subproject commit 7351c4691b5d401f59e3959adfc5b4fa263b32da
2  contrib/protobuf  (vendored)
@@ -1 +1 @@
-Subproject commit 73b12814204ad9068ba352914d0dc244648b48ee
+Subproject commit 75601841d172c73ae6bf4ce8121f42b875cdbabd
2  contrib/rocksdb  (vendored)
@@ -1 +1 @@
-Subproject commit 07c77549a20b63ff6981b400085eba36bb5c80c4
+Subproject commit b6480c69bf3ab6e298e0d019a07fd4f69029b26a
@@ -70,11 +70,6 @@ else()
     endif()
 endif()
 
-set(BUILD_VERSION_CC rocksdb_build_version.cc)
-add_library(rocksdb_build_version OBJECT ${BUILD_VERSION_CC})
-
-target_include_directories(rocksdb_build_version PRIVATE "${ROCKSDB_SOURCE_DIR}/util")
-
 include(CheckCCompilerFlag)
 if(CMAKE_SYSTEM_PROCESSOR MATCHES "^(powerpc|ppc)64")
     CHECK_C_COMPILER_FLAG("-mcpu=power9" HAS_POWER9)
@@ -243,272 +238,293 @@ find_package(Threads REQUIRED)
 # Main library source code
 
 set(SOURCES
-    "${ROCKSDB_SOURCE_DIR}/cache/cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/c.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/column_family.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compacted_db_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/convenience.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/db_iter.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/dbformat.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/error_handler.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/experimental.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/flush_job.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/log_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/log_writer.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/memtable.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/output_validator.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/repair.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/table_cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/version_builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/version_edit.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/version_set.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/write_batch.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/write_controller.cc"
-    "${ROCKSDB_SOURCE_DIR}/db/write_thread.cc"
-    "${ROCKSDB_SOURCE_DIR}/env/env.cc"
-    "${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc"
-    "${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc"
-    "${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc"
-    "${ROCKSDB_SOURCE_DIR}/env/file_system.cc"
-    "${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc"
-    "${ROCKSDB_SOURCE_DIR}/env/mock_env.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/file_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/filename.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc"
-    "${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc"
-    "${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc"
-    "${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc"
-    "${ROCKSDB_SOURCE_DIR}/memory/arena.cc"
-    "${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc"
-    "${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc"
-    "${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc"
-    "${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc"
-    "${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc"
-    "${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc"
-    "${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc"
-    "${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc"
-    "${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc"
-    "${ROCKSDB_SOURCE_DIR}/options/cf_options.cc"
-    "${ROCKSDB_SOURCE_DIR}/options/configurable.cc"
-    "${ROCKSDB_SOURCE_DIR}/options/customizable.cc"
-    "${ROCKSDB_SOURCE_DIR}/options/db_options.cc"
-    "${ROCKSDB_SOURCE_DIR}/options/options.cc"
-    "${ROCKSDB_SOURCE_DIR}/options/options_helper.cc"
-    "${ROCKSDB_SOURCE_DIR}/options/options_parser.cc"
-    "${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/format.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/get_context.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/iterator.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/table_factory.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/table_properties.cc"
-    "${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc"
-    "${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc"
-    "${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc"
-    "${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc"
-    "${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc"
-    "${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc"
-    "${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc"
-    "${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc"
-    "${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc"
-    "${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc"
-    "${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc"
-    "${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc"
-    "${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/coding.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/comparator.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/crc32c.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/hash.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/random.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/slice.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/status.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/string_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/thread_local.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc"
-    "${ROCKSDB_SOURCE_DIR}/util/xxhash.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/debug.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_manager.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_tracker.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_manager.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc"
-    "${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc"
-    $<TARGET_OBJECTS:rocksdb_build_version>)
+    ${ROCKSDB_SOURCE_DIR}/cache/cache.cc
+    ${ROCKSDB_SOURCE_DIR}/cache/cache_entry_roles.cc
+    ${ROCKSDB_SOURCE_DIR}/cache/clock_cache.cc
+    ${ROCKSDB_SOURCE_DIR}/cache/lru_cache.cc
+    ${ROCKSDB_SOURCE_DIR}/cache/sharded_cache.cc
+    ${ROCKSDB_SOURCE_DIR}/db/arena_wrapped_db_iter.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_fetcher.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_addition.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_builder.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_cache.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_garbage.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_meta.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_file_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_garbage_meter.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_format.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_sequential_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/db/blob/blob_log_writer.cc
+    ${ROCKSDB_SOURCE_DIR}/db/builder.cc
+    ${ROCKSDB_SOURCE_DIR}/db/c.cc
+    ${ROCKSDB_SOURCE_DIR}/db/column_family.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_iterator.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_job.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_fifo.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_level.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/compaction_picker_universal.cc
+    ${ROCKSDB_SOURCE_DIR}/db/compaction/sst_partitioner.cc
+    ${ROCKSDB_SOURCE_DIR}/db/convenience.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_filesnapshot.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/compacted_db_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_write.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_compaction_flush.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_files.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_open.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_debug.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_experimental.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_readonly.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_impl/db_impl_secondary.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_info_dumper.cc
+    ${ROCKSDB_SOURCE_DIR}/db/db_iter.cc
+    ${ROCKSDB_SOURCE_DIR}/db/dbformat.cc
+    ${ROCKSDB_SOURCE_DIR}/db/error_handler.cc
+    ${ROCKSDB_SOURCE_DIR}/db/event_helpers.cc
+    ${ROCKSDB_SOURCE_DIR}/db/experimental.cc
+    ${ROCKSDB_SOURCE_DIR}/db/external_sst_file_ingestion_job.cc
+    ${ROCKSDB_SOURCE_DIR}/db/file_indexer.cc
+    ${ROCKSDB_SOURCE_DIR}/db/flush_job.cc
+    ${ROCKSDB_SOURCE_DIR}/db/flush_scheduler.cc
+    ${ROCKSDB_SOURCE_DIR}/db/forward_iterator.cc
+    ${ROCKSDB_SOURCE_DIR}/db/import_column_family_job.cc
+    ${ROCKSDB_SOURCE_DIR}/db/internal_stats.cc
+    ${ROCKSDB_SOURCE_DIR}/db/logs_with_prep_tracker.cc
+    ${ROCKSDB_SOURCE_DIR}/db/log_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/db/log_writer.cc
+    ${ROCKSDB_SOURCE_DIR}/db/malloc_stats.cc
+    ${ROCKSDB_SOURCE_DIR}/db/memtable.cc
+    ${ROCKSDB_SOURCE_DIR}/db/memtable_list.cc
+    ${ROCKSDB_SOURCE_DIR}/db/merge_helper.cc
+    ${ROCKSDB_SOURCE_DIR}/db/merge_operator.cc
+    ${ROCKSDB_SOURCE_DIR}/db/output_validator.cc
+    ${ROCKSDB_SOURCE_DIR}/db/periodic_work_scheduler.cc
+    ${ROCKSDB_SOURCE_DIR}/db/range_del_aggregator.cc
+    ${ROCKSDB_SOURCE_DIR}/db/range_tombstone_fragmenter.cc
+    ${ROCKSDB_SOURCE_DIR}/db/repair.cc
+    ${ROCKSDB_SOURCE_DIR}/db/snapshot_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/db/table_cache.cc
+    ${ROCKSDB_SOURCE_DIR}/db/table_properties_collector.cc
+    ${ROCKSDB_SOURCE_DIR}/db/transaction_log_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/db/trim_history_scheduler.cc
+    ${ROCKSDB_SOURCE_DIR}/db/version_builder.cc
+    ${ROCKSDB_SOURCE_DIR}/db/version_edit.cc
+    ${ROCKSDB_SOURCE_DIR}/db/version_edit_handler.cc
+    ${ROCKSDB_SOURCE_DIR}/db/version_set.cc
+    ${ROCKSDB_SOURCE_DIR}/db/wal_edit.cc
+    ${ROCKSDB_SOURCE_DIR}/db/wal_manager.cc
+    ${ROCKSDB_SOURCE_DIR}/db/write_batch.cc
+    ${ROCKSDB_SOURCE_DIR}/db/write_batch_base.cc
+    ${ROCKSDB_SOURCE_DIR}/db/write_controller.cc
+    ${ROCKSDB_SOURCE_DIR}/db/write_thread.cc
+    ${ROCKSDB_SOURCE_DIR}/env/composite_env.cc
+    ${ROCKSDB_SOURCE_DIR}/env/env.cc
+    ${ROCKSDB_SOURCE_DIR}/env/env_chroot.cc
+    ${ROCKSDB_SOURCE_DIR}/env/env_encryption.cc
+    ${ROCKSDB_SOURCE_DIR}/env/env_hdfs.cc
+    ${ROCKSDB_SOURCE_DIR}/env/file_system.cc
+    ${ROCKSDB_SOURCE_DIR}/env/file_system_tracer.cc
+    ${ROCKSDB_SOURCE_DIR}/env/fs_remap.cc
+    ${ROCKSDB_SOURCE_DIR}/env/mock_env.cc
+    ${ROCKSDB_SOURCE_DIR}/file/delete_scheduler.cc
+    ${ROCKSDB_SOURCE_DIR}/file/file_prefetch_buffer.cc
+    ${ROCKSDB_SOURCE_DIR}/file/file_util.cc
+    ${ROCKSDB_SOURCE_DIR}/file/filename.cc
+    ${ROCKSDB_SOURCE_DIR}/file/line_file_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/file/random_access_file_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/file/read_write_util.cc
+    ${ROCKSDB_SOURCE_DIR}/file/readahead_raf.cc
+    ${ROCKSDB_SOURCE_DIR}/file/sequence_file_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/file/sst_file_manager_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/file/writable_file_writer.cc
+    ${ROCKSDB_SOURCE_DIR}/logging/auto_roll_logger.cc
+    ${ROCKSDB_SOURCE_DIR}/logging/event_logger.cc
+    ${ROCKSDB_SOURCE_DIR}/logging/log_buffer.cc
+    ${ROCKSDB_SOURCE_DIR}/memory/arena.cc
+    ${ROCKSDB_SOURCE_DIR}/memory/concurrent_arena.cc
+    ${ROCKSDB_SOURCE_DIR}/memory/jemalloc_nodump_allocator.cc
+    ${ROCKSDB_SOURCE_DIR}/memory/memkind_kmem_allocator.cc
+    ${ROCKSDB_SOURCE_DIR}/memtable/alloc_tracker.cc
+    ${ROCKSDB_SOURCE_DIR}/memtable/hash_linklist_rep.cc
+    ${ROCKSDB_SOURCE_DIR}/memtable/hash_skiplist_rep.cc
+    ${ROCKSDB_SOURCE_DIR}/memtable/skiplistrep.cc
+    ${ROCKSDB_SOURCE_DIR}/memtable/vectorrep.cc
+    ${ROCKSDB_SOURCE_DIR}/memtable/write_buffer_manager.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/histogram.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/histogram_windowing.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/in_memory_stats_history.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/instrumented_mutex.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/iostats_context.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/perf_context.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/perf_level.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/persistent_stats_history.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/statistics.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_updater.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util.cc
+    ${ROCKSDB_SOURCE_DIR}/monitoring/thread_status_util_debug.cc
+    ${ROCKSDB_SOURCE_DIR}/options/cf_options.cc
+    ${ROCKSDB_SOURCE_DIR}/options/configurable.cc
+    ${ROCKSDB_SOURCE_DIR}/options/customizable.cc
+    ${ROCKSDB_SOURCE_DIR}/options/db_options.cc
+    ${ROCKSDB_SOURCE_DIR}/options/options.cc
+    ${ROCKSDB_SOURCE_DIR}/options/options_helper.cc
+    ${ROCKSDB_SOURCE_DIR}/options/options_parser.cc
+    ${ROCKSDB_SOURCE_DIR}/port/stack_trace.cc
+    ${ROCKSDB_SOURCE_DIR}/table/adaptive/adaptive_table_factory.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/binary_search_index_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_filter_block.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_builder.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_factory.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_iterator.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_based_table_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_builder.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefetcher.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/block_prefix_index.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_hash_index.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/data_block_footer.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_block_reader_common.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/filter_policy.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/flush_block_policy.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/full_filter_block.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/hash_index_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/index_builder.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/index_reader_common.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/parsed_full_filter_block.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_filter_block.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_iterator.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/partitioned_index_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/reader_common.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_based/uncompression_dict_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/block_fetcher.cc
+    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_builder.cc
+    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_factory.cc
+    ${ROCKSDB_SOURCE_DIR}/table/cuckoo/cuckoo_table_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/format.cc
+    ${ROCKSDB_SOURCE_DIR}/table/get_context.cc
+    ${ROCKSDB_SOURCE_DIR}/table/iterator.cc
+    ${ROCKSDB_SOURCE_DIR}/table/merging_iterator.cc
+    ${ROCKSDB_SOURCE_DIR}/table/meta_blocks.cc
+    ${ROCKSDB_SOURCE_DIR}/table/persistent_cache_helper.cc
+    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_bloom.cc
+    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_builder.cc
+    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_factory.cc
+    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_index.cc
+    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_key_coding.cc
+    ${ROCKSDB_SOURCE_DIR}/table/plain/plain_table_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/sst_file_dumper.cc
+    ${ROCKSDB_SOURCE_DIR}/table/sst_file_reader.cc
+    ${ROCKSDB_SOURCE_DIR}/table/sst_file_writer.cc
+    ${ROCKSDB_SOURCE_DIR}/table/table_factory.cc
+    ${ROCKSDB_SOURCE_DIR}/table/table_properties.cc
+    ${ROCKSDB_SOURCE_DIR}/table/two_level_iterator.cc
+    ${ROCKSDB_SOURCE_DIR}/test_util/sync_point.cc
+    ${ROCKSDB_SOURCE_DIR}/test_util/sync_point_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/test_util/testutil.cc
+    ${ROCKSDB_SOURCE_DIR}/test_util/transaction_test_util.cc
+    ${ROCKSDB_SOURCE_DIR}/tools/block_cache_analyzer/block_cache_trace_analyzer.cc
+    ${ROCKSDB_SOURCE_DIR}/tools/dump/db_dump_tool.cc
+    ${ROCKSDB_SOURCE_DIR}/tools/io_tracer_parser_tool.cc
+    ${ROCKSDB_SOURCE_DIR}/tools/ldb_cmd.cc
+    ${ROCKSDB_SOURCE_DIR}/tools/ldb_tool.cc
+    ${ROCKSDB_SOURCE_DIR}/tools/sst_dump_tool.cc
+    ${ROCKSDB_SOURCE_DIR}/tools/trace_analyzer_tool.cc
+    ${ROCKSDB_SOURCE_DIR}/trace_replay/trace_replay.cc
+    ${ROCKSDB_SOURCE_DIR}/trace_replay/block_cache_tracer.cc
+    ${ROCKSDB_SOURCE_DIR}/trace_replay/io_tracer.cc
+    ${ROCKSDB_SOURCE_DIR}/util/coding.cc
+    ${ROCKSDB_SOURCE_DIR}/util/compaction_job_stats_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/util/comparator.cc
+    ${ROCKSDB_SOURCE_DIR}/util/compression_context_cache.cc
+    ${ROCKSDB_SOURCE_DIR}/util/concurrent_task_limiter_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/util/crc32c.cc
+    ${ROCKSDB_SOURCE_DIR}/util/dynamic_bloom.cc
+    ${ROCKSDB_SOURCE_DIR}/util/hash.cc
+    ${ROCKSDB_SOURCE_DIR}/util/murmurhash.cc
+    ${ROCKSDB_SOURCE_DIR}/util/random.cc
+    ${ROCKSDB_SOURCE_DIR}/util/rate_limiter.cc
+    ${ROCKSDB_SOURCE_DIR}/util/ribbon_config.cc
+    ${ROCKSDB_SOURCE_DIR}/util/slice.cc
+    ${ROCKSDB_SOURCE_DIR}/util/file_checksum_helper.cc
+    ${ROCKSDB_SOURCE_DIR}/util/status.cc
+    ${ROCKSDB_SOURCE_DIR}/util/string_util.cc
+    ${ROCKSDB_SOURCE_DIR}/util/thread_local.cc
+    ${ROCKSDB_SOURCE_DIR}/util/threadpool_imp.cc
+    ${ROCKSDB_SOURCE_DIR}/util/xxhash.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/backupable/backupable_db.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_compaction_filter.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_db_impl_filesnapshot.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_dump_tool.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/blob_db/blob_file.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/cassandra_compaction_filter.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/format.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/cassandra/merge_operator.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/checkpoint/checkpoint_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/compaction_filters/remove_emptyvalue_compactionfilter.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/debug.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/env_mirror.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/env_timed.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_env.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/fault_injection_fs.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/leveldb_options/leveldb_options.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/memory/memory_util.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/bytesxor.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/max.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/put.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/sortlist.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/string_append/stringappend2.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/merge_operators/uint64add.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/object_registry.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/option_change_migration/option_change_migration.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/options/options_util.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_file.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/block_cache_tier_metadata.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/persistent_cache_tier.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/persistent_cache/volatile_tier_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/cache_simulator.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/simulator_cache/sim_cache.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/table_properties_collectors/compact_on_deletion_collector.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/trace/file_trace_reader_writer.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/lock_manager.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_tracker.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/point/point_lock_manager.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/range_tree_lock_manager.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/range_tree_lock_tracker.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction_db_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/optimistic_transaction.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/pessimistic_transaction_db.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/snapshot_checker.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_base.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_db_mutex_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/transaction_util.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_prepared_txn_db.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/write_unprepared_txn_db.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/ttl/db_ttl_impl.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/write_batch_with_index/write_batch_with_index_internal.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/concurrent_tree.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/keyrange.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/lock_request.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/locktree.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/manager.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/range_buffer.cc
+    ${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/treenode.cc
|
||||
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/txnid_set.cc
|
||||
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/locktree/wfg.cc
|
||||
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/standalone_port.cc
|
||||
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/util/dbt.cc
|
||||
${ROCKSDB_SOURCE_DIR}/utilities/transactions/lock/range/range_tree/lib/util/memarena.cc
|
||||
rocksdb_build_version.cc)
|
||||
|
||||
if(HAVE_SSE42 AND NOT MSVC)
|
||||
set_source_files_properties(
|
||||
|
@@ -1,3 +1,62 @@
const char* rocksdb_build_git_sha = "rocksdb_build_git_sha:0";
const char* rocksdb_build_git_date = "rocksdb_build_git_date:2000-01-01";
const char* rocksdb_build_compile_date = "2000-01-01";
// Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
/// This file was edited for ClickHouse.

#include <memory>

#include "rocksdb/version.h"
#include "util/string_util.h"

// The build script may replace these values with real values based
// on whether or not GIT is available and the platform settings
static const std::string rocksdb_build_git_sha = "rocksdb_build_git_sha:0";
static const std::string rocksdb_build_git_tag = "rocksdb_build_git_tag:master";
static const std::string rocksdb_build_date = "rocksdb_build_date:2000-01-01";

namespace ROCKSDB_NAMESPACE {
static void AddProperty(std::unordered_map<std::string, std::string> *props, const std::string& name) {
    size_t colon = name.find(":");
    if (colon != std::string::npos && colon > 0 && colon < name.length() - 1) {
        // If we found a "@:", then this property was a build-time substitution that failed. Skip it
        size_t at = name.find("@", colon);
        if (at != colon + 1) {
            // Everything before the colon is the name, after is the value
            (*props)[name.substr(0, colon)] = name.substr(colon + 1);
        }
    }
}

static std::unordered_map<std::string, std::string>* LoadPropertiesSet() {
    auto * properties = new std::unordered_map<std::string, std::string>();
    AddProperty(properties, rocksdb_build_git_sha);
    AddProperty(properties, rocksdb_build_git_tag);
    AddProperty(properties, rocksdb_build_date);
    return properties;
}

const std::unordered_map<std::string, std::string>& GetRocksBuildProperties() {
    static std::unique_ptr<std::unordered_map<std::string, std::string>> props(LoadPropertiesSet());
    return *props;
}

std::string GetRocksVersionAsString(bool with_patch) {
    std::string version = ToString(ROCKSDB_MAJOR) + "." + ToString(ROCKSDB_MINOR);
    if (with_patch) {
        return version + "." + ToString(ROCKSDB_PATCH);
    } else {
        return version;
    }
}

std::string GetRocksBuildInfoAsString(const std::string& program, bool verbose) {
    std::string info = program + " (RocksDB) " + GetRocksVersionAsString(true);
    if (verbose) {
        for (const auto& it : GetRocksBuildProperties()) {
            info.append("\n ");
            info.append(it.first);
            info.append(": ");
            info.append(it.second);
        }
    }
    return info;
}
} // namespace ROCKSDB_NAMESPACE
@@ -115,6 +115,8 @@ set(S2_SRCS

add_library(s2 ${S2_SRCS})

set_property(TARGET s2 PROPERTY CXX_STANDARD 11)

if (OPENSSL_FOUND)
    target_link_libraries(s2 PRIVATE ${OPENSSL_LIBRARIES})
endif()
1
contrib/wordnet-blast
vendored
Submodule
@@ -0,0 +1 @@
Subproject commit 1d16ac28036e19fe8da7ba72c16a307fbdf8c87e
13
contrib/wordnet-blast-cmake/CMakeLists.txt
Normal file
@@ -0,0 +1,13 @@
set(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/wordnet-blast")

set(SRCS
    "${LIBRARY_DIR}/wnb/core/info_helper.cc"
    "${LIBRARY_DIR}/wnb/core/load_wordnet.cc"
    "${LIBRARY_DIR}/wnb/core/wordnet.cc"
)

add_library(wnb ${SRCS})

target_link_libraries(wnb PRIVATE boost::headers_only boost::graph)

target_include_directories(wnb PUBLIC "${LIBRARY_DIR}")
@@ -27,7 +27,7 @@ RUN apt-get update \
# Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
# to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
# Significantly increases deb packaging speed and is compatible with old systems
RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
    && chmod +x dpkg-deb \
    && cp dpkg-deb /usr/bin

@@ -2,7 +2,7 @@
FROM yandex/clickhouse-deb-builder

RUN export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.bintray.com/arrow/ubuntu/apache-arrow-archive-keyring-latest-${CODENAME}.deb" \
    && wget -nv -O /tmp/arrow-keyring.deb "https://apache.jfrog.io/artifactory/arrow/ubuntu/apache-arrow-apt-source-latest-${CODENAME}.deb" \
    && dpkg -i /tmp/arrow-keyring.deb

# Libraries from OS are only needed to test the "unbundled" build (that is not used in production).
@@ -23,6 +23,7 @@ RUN apt-get update \
    libboost-regex-dev \
    libboost-context-dev \
    libboost-coroutine-dev \
    libboost-graph-dev \
    zlib1g-dev \
    liblz4-dev \
    libdouble-conversion-dev \
@@ -72,7 +72,10 @@ do

    if [ "$DO_CHOWN" = "1" ]; then
        # ensure proper directories permissions
        # but skip it if the directory already has proper permissions, because recursive chown may be slow
        if [ "$(stat -c %u "$dir")" != "$USER" ] || [ "$(stat -c %g "$dir")" != "$GROUP" ]; then
            chown -R "$USER:$GROUP" "$dir"
        fi
    elif ! $gosu test -d "$dir" -a -w "$dir" -a -r "$dir"; then
        echo "Necessary directory '$dir' isn't accessible by user with id '$USER'"
        exit 1
@@ -161,6 +164,10 @@ fi

# if no args are passed to `docker run`, or the first argument starts with `--`, then the user is passing clickhouse-server arguments
if [[ $# -lt 1 ]] || [[ "$1" == "--"* ]]; then
    # Watchdog is launched by default, but does not send SIGINT to the main process,
    # so the container can't be finished by ctrl+c
    CLICKHOUSE_WATCHDOG_ENABLE=${CLICKHOUSE_WATCHDOG_ENABLE:-0}
    export CLICKHOUSE_WATCHDOG_ENABLE
    exec $gosu /usr/bin/clickhouse-server --config-file="$CLICKHOUSE_CONFIG" "$@"
fi

@@ -27,7 +27,7 @@ RUN apt-get update \
# Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
# to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
# Significantly increases deb packaging speed and is compatible with old systems
RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
    && chmod +x dpkg-deb \
    && cp dpkg-deb /usr/bin

@@ -61,4 +61,7 @@ ENV TSAN_OPTIONS='halt_on_error=1 history_size=7'
ENV UBSAN_OPTIONS='print_stacktrace=1'
ENV MSAN_OPTIONS='abort_on_error=1 poison_in_dtor=1'

ENV TZ=Europe/Moscow
RUN ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone

CMD sleep 1
@@ -27,7 +27,7 @@ RUN apt-get update \
# Special dpkg-deb (https://github.com/ClickHouse-Extras/dpkg) version which is able
# to compress files using pigz (https://zlib.net/pigz/) instead of gzip.
# Significantly increases deb packaging speed and is compatible with old systems
RUN curl -O https://clickhouse-builds.s3.yandex.net/utils/1/dpkg-deb \
RUN curl -O https://clickhouse-datasets.s3.yandex.net/utils/1/dpkg-deb \
    && chmod +x dpkg-deb \
    && cp dpkg-deb /usr/bin

@@ -65,7 +65,7 @@ RUN apt-get update \
    unixodbc \
    --yes --no-install-recommends

RUN pip3 install numpy scipy pandas
RUN pip3 install numpy scipy pandas Jinja2

# This symlink is required by gcc to find the lld linker
RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
@@ -299,6 +299,7 @@ function run_tests
    01318_decrypt # Depends on OpenSSL
    01663_aes_msan # Depends on OpenSSL
    01667_aes_args_check # Depends on OpenSSL
    01683_codec_encrypted # Depends on OpenSSL
    01776_decrypt_aead_size_check # Depends on OpenSSL
    01811_filter_by_null # Depends on OpenSSL
    01281_unsucceeded_insert_select_queries_counter
@@ -310,6 +311,7 @@ function run_tests
    01411_bayesian_ab_testing
    01798_uniq_theta_sketch
    01799_long_uniq_theta_sketch
    01890_stem # depends on libstemmer_c
    collate
    collation
    _orc_
@@ -194,6 +194,10 @@ continue
    jobs
    pstree -aspgT

    server_exit_code=0
    wait $server_pid || server_exit_code=$?
    echo "Server exit code is $server_exit_code"

    # Make files with status and description we'll show for this check on GitHub.
    task_exit_code=$fuzzer_exit_code
    if [ "$server_died" == 1 ]
@@ -32,7 +32,7 @@ RUN rm -rf \
RUN apt-get clean

# Install MySQL ODBC driver
RUN curl 'https://cdn.mysql.com//Downloads/Connector-ODBC/8.0/mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit.tar.gz' --output 'mysql-connector.tar.gz' && tar -xzf mysql-connector.tar.gz && cd mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit/lib && mv * /usr/local/lib && ln -s /usr/local/lib/libmyodbc8a.so /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so
RUN curl 'https://downloads.mysql.com/archives/get/p/10/file/mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit.tar.gz' --location --output 'mysql-connector.tar.gz' && tar -xzf mysql-connector.tar.gz && cd mysql-connector-odbc-8.0.21-linux-glibc2.12-x86-64bit/lib && mv * /usr/local/lib && ln -s /usr/local/lib/libmyodbc8a.so /usr/lib/x86_64-linux-gnu/odbc/libmyodbc.so

# Unfortunately this is required for a single test for converting data from zookeeper to clickhouse-keeper.
# ZooKeeper is not started by default, but consumes some space in containers.
@@ -49,4 +49,3 @@ RUN mkdir /zookeeper && chmod -R 777 /zookeeper

ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

@@ -76,6 +76,7 @@ RUN python3 -m pip install \
    pytest \
    pytest-timeout \
    pytest-xdist \
    pytest-repeat \
    redis \
    tzlocal \
    urllib3 \
@@ -14,10 +14,14 @@ services:
            }
            EOF
            ./docker-entrypoint.sh'
        ports:
            - 9020:9019
        expose:
            - 9019
        healthcheck:
            test: ["CMD", "curl", "-s", "localhost:9019/ping"]
            interval: 5s
            timeout: 3s
            retries: 30
        volumes:
            - type: ${JDBC_BRIDGE_FS:-tmpfs}
              source: ${JDBC_BRIDGE_LOGS:-}
              target: /app/logs
@@ -0,0 +1,13 @@
version: '2.3'
services:
    mongo1:
        image: mongo:3.6
        restart: always
        environment:
            MONGO_INITDB_ROOT_USERNAME: root
            MONGO_INITDB_ROOT_PASSWORD: clickhouse
        volumes:
            - ${MONGO_CONFIG_PATH}:/mongo/
        ports:
            - ${MONGO_EXTERNAL_PORT}:${MONGO_INTERNAL_PORT}
        command: --config /mongo/mongo_secure.conf --profile=2 --verbose
@@ -2,7 +2,7 @@ version: '2.3'
services:
    postgres1:
        image: postgres
        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
        command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all", "-c", "max_connections=200"]
        restart: always
        expose:
            - ${POSTGRES_PORT}
@@ -2,7 +2,7 @@ version: '2.3'

services:
    rabbitmq1:
        image: rabbitmq:3-management-alpine
        image: rabbitmq:3.8-management-alpine
        hostname: rabbitmq1
        expose:
            - ${RABBITMQ_PORT}
@@ -1196,7 +1196,7 @@ create table changes engine File(TSV, 'metrics/changes.tsv') as
        if(left > right, left / right, right / left) times_diff
    from metrics
    group by metric
    having abs(diff) > 0.05 and isFinite(diff)
    having abs(diff) > 0.05 and isFinite(diff) and isFinite(times_diff)
    )
order by diff desc
;
@@ -183,6 +183,10 @@ for conn_index, c in enumerate(all_connections):
        # requires clickhouse-driver >= 1.1.5 to accept arbitrary new settings
        # (https://github.com/mymarilyn/clickhouse-driver/pull/142)
        c.settings[s.tag] = s.text
    # We have to perform a query to make sure the settings work. Otherwise an
    # unknown setting will lead to a failing precondition check, and we will skip
    # the test, which is wrong.
    c.execute("select 1")

reportStageEnd('settings')

@@ -2,6 +2,11 @@

set -e -x

# Choose a random timezone for this test run
TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
echo "Chosen random timezone $TZ"
ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone

dpkg -i package_folder/clickhouse-common-static_*.deb;
dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
dpkg -i package_folder/clickhouse-server_*.deb
@@ -32,7 +32,7 @@ RUN apt-get update -y \
    postgresql-client \
    sqlite3

RUN pip3 install numpy scipy pandas
RUN pip3 install numpy scipy pandas Jinja2

RUN mkdir -p /tmp/clickhouse-odbc-tmp \
    && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
@@ -12,7 +12,7 @@ UNKNOWN_SIGN = "[ UNKNOWN "
SKIPPED_SIGN = "[ SKIPPED "
HUNG_SIGN = "Found hung queries in processlist"

NO_TASK_TIMEOUT_SIGN = "All tests have finished"
NO_TASK_TIMEOUT_SIGNS = ["All tests have finished", "No tests were run"]

RETRIES_SIGN = "Some tests were restarted"

@@ -29,7 +29,7 @@ def process_test_log(log_path):
    with open(log_path, 'r') as test_file:
        for line in test_file:
            line = line.strip()
            if NO_TASK_TIMEOUT_SIGN in line:
            if any(s in line for s in NO_TASK_TIMEOUT_SIGNS):
                task_timeout = False
            if HUNG_SIGN in line:
                hung = True
@@ -80,6 +80,7 @@ def process_result(result_path):
    if result_path and os.path.exists(result_path):
        total, skipped, unknown, failed, success, hung, task_timeout, retries, test_results = process_test_log(result_path)
        is_flacky_check = 1 < int(os.environ.get('NUM_TRIES', 1))
        logging.info("Is flacky check: %s", is_flacky_check)
        # If no tests were run (success == 0) it indicates an error (e.g. server did not start or crashed immediately)
        # But it's Ok for "flaky checks" - they can contain just one test for the check, which is marked as skipped.
        if failed != 0 or unknown != 0 or (success == 0 and (not is_flacky_check)):
@@ -3,6 +3,11 @@

# fail on errors, verbose and export all env variables
set -e -x -a

# Choose a random timezone for this test run.
TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
echo "Chosen random timezone $TZ"
ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone

dpkg -i package_folder/clickhouse-common-static_*.deb
dpkg -i package_folder/clickhouse-common-static-dbg_*.deb
dpkg -i package_folder/clickhouse-server_*.deb
@@ -138,6 +143,7 @@ if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
fi
tar -chf /test_output/text_log_dump.tar /var/lib/clickhouse/data/system/text_log ||:
tar -chf /test_output/query_log_dump.tar /var/lib/clickhouse/data/system/query_log ||:
tar -chf /test_output/zookeeper_log_dump.tar /var/lib/clickhouse/data/system/zookeeper_log ||:
tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:

if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
@@ -147,6 +153,8 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
    pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||:
    mv /var/log/clickhouse-server/stderr1.log /test_output/ ||:
    mv /var/log/clickhouse-server/stderr2.log /test_output/ ||:
    tar -chf /test_output/zookeeper_log_dump1.tar /var/lib/clickhouse1/data/system/zookeeper_log ||:
    tar -chf /test_output/zookeeper_log_dump2.tar /var/lib/clickhouse2/data/system/zookeeper_log ||:
    tar -chf /test_output/coordination1.tar /var/lib/clickhouse1/coordination ||:
    tar -chf /test_output/coordination2.tar /var/lib/clickhouse2/coordination ||:
fi

@@ -77,9 +77,6 @@ RUN mkdir -p /tmp/clickhouse-odbc-tmp \
    && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
    && rm -rf /tmp/clickhouse-odbc-tmp

ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

COPY run.sh /
CMD ["/bin/bash", "/run.sh"]

@@ -58,11 +58,11 @@ function start()
        echo "Cannot start clickhouse-server"
        cat /var/log/clickhouse-server/stdout.log
        tail -n1000 /var/log/clickhouse-server/stderr.log
        tail -n1000 /var/log/clickhouse-server/clickhouse-server.log
        tail -n100000 /var/log/clickhouse-server/clickhouse-server.log | grep -F -v '<Warning> RaftInstance:' -e '<Information> RaftInstance' | tail -n1000
        break
    fi
    # use root to match with current uid
    clickhouse start --user root >/var/log/clickhouse-server/stdout.log 2>/var/log/clickhouse-server/stderr.log
    clickhouse start --user root >/var/log/clickhouse-server/stdout.log 2>>/var/log/clickhouse-server/stderr.log
    sleep 0.5
    counter=$((counter + 1))
done
@@ -118,35 +118,35 @@ clickhouse-client --query "SELECT 'Server successfully started', 'OK'" >> /test_
[ -f /var/log/clickhouse-server/stderr.log ] || echo -e "Stderr log does not exist\tFAIL"

# Print Fatal log messages to stdout
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log*

# Grep logs for sanitizer asserts, crashes and other critical errors

# Sanitizer asserts
zgrep -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
zgrep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" > /dev/null \
zgrep -Fav "ASan doesn't fully support makecontext/swapcontext functions" /test_output/tmp > /dev/null \
    && echo -e 'Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
rm -f /test_output/tmp

# OOM
zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

# Logical errors
zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'Logical error thrown (see clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No logical errors\tOK' >> /test_output/test_results.tsv

# Crash
zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv

# It also checks for crash without stacktrace (printed by watchdog)
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log > /dev/null \
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.log* > /dev/null \
    && echo -e 'Fatal message in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
    || echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

@@ -20,6 +20,7 @@ def get_skip_list_cmd(path):

def get_options(i):
    options = []
    client_options = []
    if 0 < i:
        options.append("--order=random")

@@ -27,25 +28,29 @@ def get_options(i):
        options.append("--db-engine=Ordinary")

    if i % 3 == 2:
        options.append('''--client-option='allow_experimental_database_replicated=1' --db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
        options.append('''--db-engine="Replicated('/test/db/test_{}', 's1', 'r1')"'''.format(i))
        client_options.append('allow_experimental_database_replicated=1')

    # If database name is not specified, new database is created for each functional test.
    # Run some threads with one database for all tests.
    if i % 2 == 1:
        options.append(" --database=test_{}".format(i))

    if i % 7 == 0:
        options.append(" --client-option='join_use_nulls=1'")
    if i % 5 == 1:
        client_options.append("join_use_nulls=1")

    if i % 14 == 0:
        options.append(' --client-option="join_algorithm=\'partial_merge\'"')
    if i % 15 == 6:
        client_options.append("join_algorithm='partial_merge'")

    if i % 21 == 0:
        options.append(' --client-option="join_algorithm=\'auto\'"')
        options.append(' --client-option="max_rows_in_join=1000"')
    if i % 15 == 11:
        client_options.append("join_algorithm='auto'")
        client_options.append('max_rows_in_join=1000')

    if i == 13:
        options.append(" --client-option='memory_tracker_fault_probability=0.00001'")
        client_options.append('memory_tracker_fault_probability=0.001')

    if client_options:
        options.append(" --client-option " + ' '.join(client_options))

    return ' '.join(options)

@@ -35,7 +35,7 @@ RUN apt-get update \
ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

RUN pip3 install urllib3 testflows==1.6.90 docker-compose==1.29.1 docker==5.0.0 dicttoxml kazoo tzlocal python-dateutil numpy
RUN pip3 install urllib3 testflows==1.7.20 docker-compose==1.29.1 docker==5.0.0 dicttoxml kazoo tzlocal python-dateutil numpy

ENV DOCKER_CHANNEL stable
ENV DOCKER_VERSION 20.10.6

@@ -1,8 +1,6 @@
# docker build -t yandex/clickhouse-unit-test .
FROM yandex/clickhouse-stateless-test

ENV TZ=Europe/Moscow
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get install gdb

COPY run.sh /

@@ -8,7 +8,7 @@ toc_title: Third-Party Libraries Used
The list of third-party libraries can be obtained by the following query:

``` sql
SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en'
SELECT library_name, license_type, license_path FROM system.licenses ORDER BY library_name COLLATE 'en';
```

[Example](https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUIGxpYnJhcnlfbmFtZSwgbGljZW5zZV90eXBlLCBsaWNlbnNlX3BhdGggRlJPTSBzeXN0ZW0ubGljZW5zZXMgT1JERVIgQlkgbGlicmFyeV9uYW1lIENPTExBVEUgJ2VuJw==)

@@ -123,7 +123,7 @@ For installing CMake and Ninja on Mac OS X first install Homebrew and then insta
    /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
    brew install cmake ninja

Next, check the version of CMake: `cmake --version`. If it is below 3.3, you should install a newer version from the website: https://cmake.org/download/.
Next, check the version of CMake: `cmake --version`. If it is below 3.12, you should install a newer version from the website: https://cmake.org/download/.

## Optional External Libraries {#optional-external-libraries}

@@ -749,7 +749,7 @@ If your code in the `master` branch is not buildable yet, exclude it from the bu

**1.** The C++20 standard library is used (experimental extensions are allowed), as well as `boost` and `Poco` frameworks.

**2.** It is not allowed to use libraries from OS packages. It is also not allowed to use pre-installed libraries. All libraries should be placed in form of source code in `contrib` directory and built with ClickHouse.
**2.** It is not allowed to use libraries from OS packages. It is also not allowed to use pre-installed libraries. All libraries should be placed in form of source code in `contrib` directory and built with ClickHouse. See [Guidelines for adding new third-party libraries](contrib.md#adding-third-party-libraries) for details.

**3.** Preference is always given to libraries that are already in use.

@@ -70,7 +70,13 @@ Note that integration of ClickHouse with third-party drivers is not tested. Also

Unit tests are useful when you want to test not the ClickHouse as a whole, but a single isolated library or class. You can enable or disable build of tests with `ENABLE_TESTS` CMake option. Unit tests (and other test programs) are located in `tests` subdirectories across the code. To run unit tests, type `ninja test`. Some tests use `gtest`, but some are just programs that return non-zero exit code on test failure.

It’s not necessarily to have unit tests if the code is already covered by functional tests (and functional tests are usually much more simple to use).
It’s not necessary to have unit tests if the code is already covered by functional tests (and functional tests are usually much more simple to use).

You can run individual gtest checks by calling the executable directly, for example:

```bash
$ ./src/unit_tests_dbms --gtest_filter=LocalAddress*
```

## Performance Tests {#performance-tests}

@@ -47,7 +47,7 @@ EXCHANGE TABLES new_table AND old_table;

### ReplicatedMergeTree in Atomic Database {#replicatedmergetree-in-atomic-database}

For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended to not specify engine parameters - path in ZooKeeper and replica name. In this case, configuration parameters will be used [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). If you want to specify engine parameters explicitly, it is recommended to use {uuid} macros. This is useful so that unique paths are automatically generated for each table in ZooKeeper.
For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, it is recommended to not specify engine parameters - path in ZooKeeper and replica name. In this case, the configuration parameters [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name) will be used. If you want to specify engine parameters explicitly, it is recommended to use `{uuid}` macros. This is useful so that unique paths are automatically generated for each table in ZooKeeper.
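As a quick illustration, here is a minimal sketch of explicit engine parameters built around the `{uuid}` macro; the database and table names are hypothetical:

``` sql
-- Hypothetical table; the ZooKeeper path uses the {uuid} macro so that
-- each table gets a unique path automatically.
CREATE TABLE test.replicated_demo
(
    n UInt64
)
ENGINE = ReplicatedMergeTree('/clickhouse/tables/{uuid}/{shard}', '{replica}')
ORDER BY n;
```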

## See Also

@@ -14,7 +14,7 @@ You can also use the following database engines:

- [MySQL](../../engines/database-engines/mysql.md)

- [MaterializeMySQL](../../engines/database-engines/materialize-mysql.md)
- [MaterializedMySQL](../../engines/database-engines/materialized-mysql.md)

- [Lazy](../../engines/database-engines/lazy.md)

@@ -22,4 +22,4 @@ You can also use the following database engines:

- [PostgreSQL](../../engines/database-engines/postgresql.md)

[Original article](https://clickhouse.tech/docs/en/database_engines/) <!--hide-->
- [Replicated](../../engines/database-engines/replicated.md)
@@ -1,9 +1,11 @@
---
toc_priority: 29
toc_title: MaterializeMySQL
toc_title: MaterializedMySQL
---

# MaterializeMySQL {#materialize-mysql}
# MaterializedMySQL {#materialized-mysql}

**This is an experimental feature that should not be used in production.**

Creates a ClickHouse database with all the tables existing in MySQL, and all the data in those tables.

@@ -15,7 +17,7 @@ This feature is experimental.

``` sql
CREATE DATABASE [IF NOT EXISTS] db_name [ON CLUSTER cluster]
ENGINE = MaterializeMySQL('host:port', ['database' | database], 'user', 'password') [SETTINGS ...]
ENGINE = MaterializedMySQL('host:port', ['database' | database], 'user', 'password') [SETTINGS ...]
```

**Engine Parameters**

@@ -25,9 +27,31 @@ ENGINE = MaterializeMySQL('host:port', ['database' | database], 'user', 'passwor
- `user` — MySQL user.
- `password` — User password.

**Engine Settings**

- `max_rows_in_buffer` — Maximum number of rows that can be cached in memory for a single table (the cached data cannot be queried). When this number of rows is exceeded, the data is materialized. Default: `65505`.
- `max_bytes_in_buffer` — Maximum number of bytes that can be cached in memory for a single table (the cached data cannot be queried). When this number of bytes is exceeded, the data is materialized. Default: `1048576`.
- `max_rows_in_buffers` — Maximum number of rows that can be cached in memory for the whole database (the cached data cannot be queried). When this number of rows is exceeded, the data is materialized. Default: `65505`.
- `max_bytes_in_buffers` — Maximum number of bytes that can be cached in memory for the whole database (the cached data cannot be queried). When this number of bytes is exceeded, the data is materialized. Default: `1048576`.
- `max_flush_data_time` — Maximum number of milliseconds that data can stay cached in memory for the whole database (the cached data cannot be queried). When this time is exceeded, the data is materialized. Default: `1000`.
- `max_wait_time_when_mysql_unavailable` — Retry interval when MySQL is not available (milliseconds). A negative value disables retries. Default: `1000`.
- `allows_query_when_mysql_lost` — Allow querying a materialized table when the MySQL connection is lost. Default: `0` (`false`).

``` sql
CREATE DATABASE mysql ENGINE = MaterializedMySQL('localhost:3306', 'db', 'user', '***')
     SETTINGS
        allows_query_when_mysql_lost=true,
        max_wait_time_when_mysql_unavailable=10000;
```

**Settings on MySQL-server side**

For `MaterializeMySQL` to work correctly, a few mandatory `MySQL`-side configuration settings must be set:

- `default_authentication_plugin = mysql_native_password` since `MaterializeMySQL` can only authorize with this method.
- `gtid_mode = on` since GTID-based logging is mandatory for providing correct `MaterializeMySQL` replication. Pay attention that while turning this mode `On` you should also specify `enforce_gtid_consistency = on`. You can verify both prerequisites as shown after this list.
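A minimal sketch for verifying these prerequisites on the MySQL side (run in a MySQL client, not in ClickHouse):

``` sql
-- Should return mysql_native_password, ON and ON respectively.
SELECT @@default_authentication_plugin, @@gtid_mode, @@enforce_gtid_consistency;
```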

## Virtual columns {#virtual-columns}

When working with the `MaterializeMySQL` database engine, [ReplacingMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md) tables are used with virtual `_sign` and `_version` columns.
When working with the `MaterializedMySQL` database engine, [ReplacingMergeTree](../../engines/table-engines/mergetree-family/replacingmergetree.md) tables are used with virtual `_sign` and `_version` columns.

- `_version` — Transaction counter. Type [UInt64](../../sql-reference/data-types/int-uint.md).
- `_sign` — Deletion mark. Type [Int8](../../sql-reference/data-types/int-uint.md). Possible values:

@@ -53,6 +77,7 @@ When working with the `MaterializeMySQL` database engine, [ReplacingMergeTree](.
| STRING | [String](../../sql-reference/data-types/string.md) |
| VARCHAR, VAR_STRING | [String](../../sql-reference/data-types/string.md) |
| BLOB | [String](../../sql-reference/data-types/string.md) |
| BINARY | [FixedString](../../sql-reference/data-types/fixedstring.md) |

Other types are not supported. If a MySQL table contains a column of such a type, ClickHouse throws the exception "Unhandled data type" and stops replication.

@@ -60,13 +85,21 @@ Other types are not supported. If MySQL table contains a column of such type, Cl

## Specifics and Recommendations {#specifics-and-recommendations}

### Compatibility restrictions

Apart from the data type limitations, there are a few restrictions compared to `MySQL` databases that must be resolved before replication is possible:

- Each table in `MySQL` should contain a `PRIMARY KEY`.

- Replication will not work for tables that contain rows with `ENUM` field values outside the range specified in the `ENUM` signature.

### DDL Queries {#ddl-queries}

MySQL DDL queries are converted into the corresponding ClickHouse DDL queries ([ALTER](../../sql-reference/statements/alter/index.md), [CREATE](../../sql-reference/statements/create/index.md), [DROP](../../sql-reference/statements/drop.md), [RENAME](../../sql-reference/statements/rename.md)). If ClickHouse cannot parse some DDL query, the query is ignored.

### Data Replication {#data-replication}

`MaterializeMySQL` does not support direct `INSERT`, `DELETE` and `UPDATE` queries. However, they are supported in terms of data replication:
`MaterializedMySQL` does not support direct `INSERT`, `DELETE` and `UPDATE` queries. However, they are supported in terms of data replication:

- MySQL `INSERT` query is converted into `INSERT` with `_sign=1`.

@@ -74,14 +107,16 @@ MySQL DDL queries are converted into the corresponding ClickHouse DDL queries ([

- MySQL `UPDATE` query is converted into `INSERT` with `_sign=-1` and `INSERT` with `_sign=1` (see the sketch after this list).
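As a sketch of what this mapping looks like, assuming a hypothetical replicated table `mysql.t` with columns `key` and `value` (the `_version` counter, which the engine also bumps internally, is omitted here):

``` sql
-- MySQL: INSERT INTO db.t VALUES (1, 'a');
INSERT INTO mysql.t (key, value, _sign) VALUES (1, 'a', 1);
-- MySQL: UPDATE db.t SET value = 'b' WHERE key = 1;
INSERT INTO mysql.t (key, value, _sign) VALUES (1, 'a', -1), (1, 'b', 1);
```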

### Selecting from MaterializeMySQL Tables {#select}
### Selecting from MaterializedMySQL Tables {#select}

`SELECT` query from `MaterializeMySQL` tables has some specifics:
`SELECT` query from `MaterializedMySQL` tables has some specifics:

- If `_version` is not specified in the `SELECT` query, [FINAL](../../sql-reference/statements/select/from.md#select-from-final) modifier is used. So only rows with `MAX(_version)` are selected.

- If `_sign` is not specified in the `SELECT` query, `WHERE _sign=1` is used by default. So the deleted rows are not included into the result set.

- The result includes column comments if they exist in the MySQL database tables. A sketch of these defaults follows this list.
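For illustration, assuming a hypothetical table `mysql.test` with columns `key` and `value`:

``` sql
-- Implicit form: FINAL and `WHERE _sign = 1` are applied automatically.
SELECT key, value FROM mysql.test;

-- Roughly what the engine does implicitly (a conceptual equivalent, not the exact plan):
SELECT key, value FROM mysql.test FINAL WHERE _sign = 1;
```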

### Index Conversion {#index-conversion}

MySQL `PRIMARY KEY` and `INDEX` clauses are converted into `ORDER BY` tuples in ClickHouse tables.
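For example, a sketch of the idea; both the MySQL table and the generated ClickHouse schema below are illustrative, not the literal conversion output:

``` sql
-- MySQL side (hypothetical):
--   CREATE TABLE db.visits (id INT PRIMARY KEY, user_id INT, INDEX idx_user (user_id));
-- A plausible ClickHouse counterpart, with the key columns collected into ORDER BY:
CREATE TABLE mysql.visits
(
    id Int32,
    user_id Int32,
    _sign Int8,
    _version UInt64
)
ENGINE = ReplacingMergeTree(_version)
ORDER BY (id, user_id);
```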

@@ -91,10 +126,10 @@ ClickHouse has only one physical order, which is determined by `ORDER BY` clause
**Notes**

- Rows with `_sign=-1` are not deleted physically from the tables.
- Cascade `UPDATE/DELETE` queries are not supported by the `MaterializeMySQL` engine.
- Cascade `UPDATE/DELETE` queries are not supported by the `MaterializedMySQL` engine.
- Replication can be easily broken.
- Manual operations on database and tables are forbidden.
- `MaterializeMySQL` is influenced by [optimize_on_insert](../../operations/settings/settings.md#optimize-on-insert) setting. The data is merged in the corresponding table in the `MaterializeMySQL` database when a table in the MySQL server changes.
- `MaterializedMySQL` is influenced by [optimize_on_insert](../../operations/settings/settings.md#optimize-on-insert) setting. The data is merged in the corresponding table in the `MaterializedMySQL` database when a table in the MySQL server changes.

## Examples of Use {#examples-of-use}

@@ -123,7 +158,7 @@ Database in ClickHouse, exchanging data with the MySQL server:
The database and the table created:

``` sql
CREATE DATABASE mysql ENGINE = MaterializeMySQL('localhost:3306', 'db', 'user', '***');
CREATE DATABASE mysql ENGINE = MaterializedMySQL('localhost:3306', 'db', 'user', '***');
SHOW TABLES FROM mysql;
```

@@ -158,4 +193,4 @@ SELECT * FROM mysql.test;
└───┴─────┴──────┘
```

[Original article](https://clickhouse.tech/docs/en/engines/database-engines/materialize-mysql/) <!--hide-->
[Original article](https://clickhouse.tech/docs/en/engines/database-engines/materialized-mysql/) <!--hide-->
@@ -15,7 +15,7 @@ Supports table structure modifications (`ALTER TABLE ... ADD|DROP COLUMN`). If `

``` sql
CREATE DATABASE test_database
ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cache`]);
ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `schema`, `use_table_cache`]);
```

**Engine Parameters**

@@ -24,6 +24,7 @@ ENGINE = PostgreSQL('host:port', 'database', 'user', 'password'[, `use_table_cac
- `database` — Remote database name.
- `user` — PostgreSQL user.
- `password` — User password.
- `schema` — PostgreSQL schema (see the sketch after this list).
- `use_table_cache` — Defines if the database table structure is cached or not. Optional. Default value: `0`.
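A sketch of a definition that uses the new `schema` parameter; all connection values below are hypothetical:

``` sql
CREATE DATABASE test_database
ENGINE = PostgreSQL('postgres1:5432', 'postgres_db', 'postgres_user', 'password', 'test_schema', 1);
```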

## Data Types Support {#data_types-support}

115
docs/en/engines/database-engines/replicated.md
Normal file

@@ -0,0 +1,115 @@
# [experimental] Replicated {#replicated}

The engine is based on the [Atomic](../../engines/database-engines/atomic.md) engine. It supports replication of metadata via the DDL log being written to ZooKeeper and executed on all of the replicas for a given database.

One ClickHouse server can have multiple replicated databases running and updating at the same time. But there can't be multiple replicas of the same replicated database.

## Creating a Database {#creating-a-database}
``` sql
CREATE DATABASE testdb ENGINE = Replicated('zoo_path', 'shard_name', 'replica_name') [SETTINGS ...]
```

**Engine Parameters**

- `zoo_path` — ZooKeeper path. The same ZooKeeper path corresponds to the same database.
- `shard_name` — Shard name. Database replicas are grouped into shards by `shard_name`.
- `replica_name` — Replica name. Replica names must be different for all replicas of the same shard.

!!! note "Warning"
    For [ReplicatedMergeTree](../table-engines/mergetree-family/replication.md#table_engines-replication) tables, if no arguments are provided, then default arguments are used: `/clickhouse/tables/{uuid}/{shard}` and `{replica}`. These can be changed in the server settings [default_replica_path](../../operations/server-configuration-parameters/settings.md#default_replica_path) and [default_replica_name](../../operations/server-configuration-parameters/settings.md#default_replica_name). Macro `{uuid}` is unfolded to the table's uuid, `{shard}` and `{replica}` are unfolded to values from the server config, not from database engine arguments. But in the future, it will be possible to use the `shard_name` and `replica_name` of the Replicated database.

## Specifics and Recommendations {#specifics-and-recommendations}

DDL queries with a `Replicated` database work in a similar way to [ON CLUSTER](../../sql-reference/distributed-ddl.md) queries, but with minor differences.

First, the DDL request tries to execute on the initiator (the host that originally received the request from the user). If the request is not fulfilled, the user immediately receives an error, and other hosts do not try to fulfill it. If the request has been successfully completed on the initiator, then all other hosts will automatically retry until they complete it. The initiator will try to wait for the query to be completed on other hosts (no longer than [distributed_ddl_task_timeout](../../operations/settings/settings.md#distributed_ddl_task_timeout)) and will return a table with the query execution statuses on each host.

The behavior in case of errors is regulated by the [distributed_ddl_output_mode](../../operations/settings/settings.md#distributed_ddl_output_mode) setting; for a `Replicated` database it is better to set it to `null_status_on_timeout` — i.e. if some hosts did not have time to execute the request within [distributed_ddl_task_timeout](../../operations/settings/settings.md#distributed_ddl_task_timeout), then do not throw an exception, but show the `NULL` status for them in the table.
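A minimal way to apply the recommended setting for the current session (a sketch; the setting name and value come from the text above):

``` sql
SET distributed_ddl_output_mode = 'null_status_on_timeout';
```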

The [system.clusters](../../operations/system-tables/clusters.md) system table contains a cluster named like the replicated database, which consists of all replicas of the database. This cluster is updated automatically when creating/deleting replicas, and it can be used for [Distributed](../../engines/table-engines/special/distributed.md#distributed) tables.

When creating a new replica of the database, this replica creates tables by itself. If the replica has been unavailable for a long time and has lagged behind the replication log — it compares its local metadata with the current metadata in ZooKeeper, moves the extra tables with data to a separate non-replicated database (so as not to accidentally delete anything superfluous), creates the missing tables, and updates the table names if they have been renamed. The data is replicated at the `ReplicatedMergeTree` level, i.e. if the table is not replicated, the data will not be replicated (the database is responsible only for metadata).

## Usage Example {#usage-example}

Creating a cluster with three hosts:

``` sql
node1 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','shard1','replica1');
node2 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','shard1','other_replica');
node3 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','other_shard','{replica}');
```

Running the DDL-query:

``` sql
CREATE TABLE r.rmt (n UInt64) ENGINE=ReplicatedMergeTree ORDER BY n;
```

``` text
┌─────hosts────────────┬──status─┬─error─┬─num_hosts_remaining─┬─num_hosts_active─┐
│ shard1|replica1 │ 0 │ │ 2 │ 0 │
│ shard1|other_replica │ 0 │ │ 1 │ 0 │
│ other_shard|r1 │ 0 │ │ 0 │ 0 │
└──────────────────────┴─────────┴───────┴─────────────────────┴──────────────────┘
```

Showing the system table:

``` sql
SELECT cluster, shard_num, replica_num, host_name, host_address, port, is_local
FROM system.clusters WHERE cluster='r';
```

``` text
┌─cluster─┬─shard_num─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┐
│ r │ 1 │ 1 │ node3 │ 127.0.0.1 │ 9002 │ 0 │
│ r │ 2 │ 1 │ node2 │ 127.0.0.1 │ 9001 │ 0 │
│ r │ 2 │ 2 │ node1 │ 127.0.0.1 │ 9000 │ 1 │
└─────────┴───────────┴─────────────┴───────────┴──────────────┴──────┴──────────┘
```

Creating a distributed table and inserting the data:

``` sql
node2 :) CREATE TABLE r.d (n UInt64) ENGINE=Distributed('r','r','rmt', n % 2);
node3 :) INSERT INTO r SELECT * FROM numbers(10);
node1 :) SELECT materialize(hostName()) AS host, groupArray(n) FROM r.d GROUP BY host;
```

``` text
┌─hosts─┬─groupArray(n)─┐
│ node1 │ [1,3,5,7,9] │
│ node2 │ [0,2,4,6,8] │
└───────┴───────────────┘
```

Adding a replica on one more host:

``` sql
node4 :) CREATE DATABASE r ENGINE=Replicated('some/path/r','other_shard','r2');
```

The cluster configuration will look like this:

``` text
┌─cluster─┬─shard_num─┬─replica_num─┬─host_name─┬─host_address─┬─port─┬─is_local─┐
│ r │ 1 │ 1 │ node3 │ 127.0.0.1 │ 9002 │ 0 │
│ r │ 1 │ 2 │ node4 │ 127.0.0.1 │ 9003 │ 0 │
│ r │ 2 │ 1 │ node2 │ 127.0.0.1 │ 9001 │ 0 │
│ r │ 2 │ 2 │ node1 │ 127.0.0.1 │ 9000 │ 1 │
└─────────┴───────────┴─────────────┴───────────┴──────────────┴──────┴──────────┘
```

The distributed table will also get data from the new host:

```sql
node2 :) SELECT materialize(hostName()) AS host, groupArray(n) FROM r.d GROUP BY host;
```

```text
┌─hosts─┬─groupArray(n)─┐
│ node2 │ [1,3,5,7,9] │
│ node4 │ [0,2,4,6,8] │
└───────┴───────────────┘
```
@@ -39,4 +39,46 @@ ENGINE = EmbeddedRocksDB
PRIMARY KEY key
```

## Metrics

There is also a `system.rocksdb` table that exposes RocksDB statistics:

```sql
SELECT
    name,
    value
FROM system.rocksdb

┌─name──────────────────────┬─value─┐
│ no.file.opens │ 1 │
│ number.block.decompressed │ 1 │
└───────────────────────────┴───────┘
```

## Configuration

You can also change any [rocksdb options](https://github.com/facebook/rocksdb/wiki/Option-String-and-Option-Map) using config:

```xml
<rocksdb>
    <options>
        <max_background_jobs>8</max_background_jobs>
    </options>
    <column_family_options>
        <num_levels>2</num_levels>
    </column_family_options>
    <tables>
        <table>
            <name>TABLE</name>
            <options>
                <max_background_jobs>8</max_background_jobs>
            </options>
            <column_family_options>
                <num_levels>2</num_levels>
            </column_family_options>
        </table>
    </tables>
</rocksdb>
```

[Original article](https://clickhouse.tech/docs/en/engines/table-engines/integrations/embedded-rocksdb/) <!--hide-->

@@ -15,7 +15,7 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name
    name1 [type1],
    name2 [type2],
    ...
) ENGINE = MongoDB(host:port, database, collection, user, password);
) ENGINE = MongoDB(host:port, database, collection, user, password [, options]);
```

**Engine Parameters**

@@ -30,9 +30,11 @@ CREATE TABLE [IF NOT EXISTS] [db.]table_name

- `password` — User password.

- `options` — MongoDB connection string options (optional parameter).

## Usage Example {#usage-example}

Table in ClickHouse which allows to read data from MongoDB collection:
Create a table in ClickHouse which allows reading data from a MongoDB collection:

``` text
CREATE TABLE mongo_table
@@ -42,6 +44,16 @@ CREATE TABLE mongo_table
) ENGINE = MongoDB('mongo1:27017', 'test', 'simple_table', 'testuser', 'clickhouse');
```

To read from an SSL secured MongoDB server:

``` text
CREATE TABLE mongo_table_ssl
(
    key UInt64,
    data String
) ENGINE = MongoDB('mongo2:27017', 'test', 'simple_table', 'testuser', 'clickhouse', 'ssl=true');
```

Query:

``` sql
@@ -14,6 +14,8 @@ Engines of the family:
- [Log](../../../engines/table-engines/log-family/log.md)
- [TinyLog](../../../engines/table-engines/log-family/tinylog.md)

`Log` family table engines can store data to [HDFS](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-hdfs) or [S3](../../../engines/table-engines/mergetree-family/mergetree.md#table_engine-mergetree-s3) distributed file systems.

## Common Properties {#common-properties}

Engines:

@@ -5,10 +5,8 @@ toc_title: Log

# Log {#log}

Engine belongs to the family of log engines. See the common properties of log engines and their differences in the [Log Engine Family](../../../engines/table-engines/log-family/index.md) article.
The engine belongs to the family of `Log` engines. See the common properties of `Log` engines and their differences in the [Log Engine Family](../../../engines/table-engines/log-family/index.md) article.

Log differs from [TinyLog](../../../engines/table-engines/log-family/tinylog.md) in that a small file of “marks” resides with the column files. These marks are written on every data block and contain offsets that indicate where to start reading the file in order to skip the specified number of rows. This makes it possible to read table data in multiple threads.
`Log` differs from [TinyLog](../../../engines/table-engines/log-family/tinylog.md) in that a small file of "marks" resides with the column files. These marks are written on every data block and contain offsets that indicate where to start reading the file in order to skip the specified number of rows. This makes it possible to read table data in multiple threads.
For concurrent data access, the read operations can be performed simultaneously, while write operations block reads and each other.
The Log engine does not support indexes. Similarly, if writing to a table failed, the table is broken, and reading from it returns an error. The Log engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.

[Original article](https://clickhouse.tech/docs/en/operations/table_engines/log/) <!--hide-->
The `Log` engine does not support indexes. Similarly, if writing to a table failed, the table is broken, and reading from it returns an error. The `Log` engine is appropriate for temporary data, write-once tables, and for testing or demonstration purposes.
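As a quick sketch of typical `Log` usage for throwaway data (the table name is illustrative):

``` sql
CREATE TABLE log_demo (id UInt64, message String) ENGINE = Log;
INSERT INTO log_demo VALUES (1, 'first'), (2, 'second');
SELECT * FROM log_demo;
```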
@@ -76,7 +76,7 @@ For a description of parameters, see the [CREATE query description](../../../sql

- `SAMPLE BY` — An expression for sampling. Optional.

    If a sampling expression is used, the primary key must contain it. Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.
    If a sampling expression is used, the primary key must contain it. The result of the sampling expression must be an unsigned integer. Example: `SAMPLE BY intHash32(UserID) ORDER BY (CounterID, EventDate, intHash32(UserID))`.

- `TTL` — A list of rules specifying storage duration of rows and defining logic of automatic parts movement [between disks and volumes](#table_engine-mergetree-multiple-volumes). Optional.

@ -728,7 +728,9 @@ During this time, they are not moved to other volumes or disks. Therefore, until

## Using S3 for Data Storage {#table_engine-mergetree-s3}

`MergeTree` family table engines is able to store data to [S3](https://aws.amazon.com/s3/) using a disk with type `s3`.
`MergeTree` family table engines can store data to [S3](https://aws.amazon.com/s3/) using a disk with type `s3`.

This feature is under development and not ready for production. There are known drawbacks such as very low performance.

Configuration markup:
``` xml

@ -762,11 +764,13 @@ Configuration markup:

```

Required parameters:

- `endpoint` — S3 endpoint url in `path` or `virtual hosted` [styles](https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html). Endpoint url should contain bucket and root path to store data.
- `endpoint` — S3 endpoint URL in `path` or `virtual hosted` [styles](https://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html). Endpoint URL should contain a bucket and root path to store data.
- `access_key_id` — S3 access key id.
- `secret_access_key` — S3 secret access key.

Optional parameters:

- `region` — S3 region name.
- `use_environment_credentials` — Reads AWS credentials from the environment variables `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY` and `AWS_SESSION_TOKEN` if they exist. Default value is `false`.
- `use_insecure_imds_request` — If set to `true`, the S3 client will use an insecure IMDS request while obtaining credentials from Amazon EC2 metadata. Default value is `false`.

@ -782,7 +786,6 @@ Optional parameters:

- `skip_access_check` — If true, disk access checks will not be performed on disk start-up. Default value is `false`.
- `server_side_encryption_customer_key_base64` — If specified, required headers for accessing S3 objects with SSE-C encryption will be set.

S3 disk can be configured as `main` or `cold` storage:
``` xml
<storage_configuration>

@ -821,4 +824,43 @@ S3 disk can be configured as `main` or `cold` storage:

In case of the `cold` option, data can be moved to S3 when the local disk's free space drops below `move_factor * disk_size`, or by a TTL move rule.

[Original article](https://clickhouse.tech/docs/ru/operations/table_engines/mergetree/) <!--hide-->
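A hedged usage sketch: once a storage policy backed by the `s3` disk exists, a table selects it via `storage_policy`. The policy name `s3_main` is an assumption for illustration, not something defined in the markup above.

``` sql
-- Assumes a storage policy named 's3_main' was declared in <policies>
-- (the name is hypothetical; use whatever your configuration defines).
CREATE TABLE s3_backed
(
    id UInt64,
    payload String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 's3_main';
```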
## Using HDFS for Data Storage {#table_engine-mergetree-hdfs}

[HDFS](https://hadoop.apache.org/docs/r1.2.1/hdfs_design.html) is a distributed file system for remote data storage.

`MergeTree` family table engines can store data to HDFS using a disk with type `HDFS`.

Configuration markup:
``` xml
<yandex>
    <storage_configuration>
        <disks>
            <hdfs>
                <type>hdfs</type>
                <endpoint>hdfs://hdfs1:9000/clickhouse/</endpoint>
            </hdfs>
        </disks>
        <policies>
            <hdfs>
                <volumes>
                    <main>
                        <disk>hdfs</disk>
                    </main>
                </volumes>
            </hdfs>
        </policies>
    </storage_configuration>

    <merge_tree>
        <min_bytes_for_wide_part>0</min_bytes_for_wide_part>
    </merge_tree>
</yandex>
```

Required parameters:

- `endpoint` — HDFS endpoint URL in `path` format. Endpoint URL should contain a root path to store data.

Optional parameters:

- `min_bytes_for_seek` — The minimal number of bytes to use seek operation instead of sequential read. Default value: `1 Mb`.
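A hedged sketch showing a table placed on the `hdfs` policy defined in the markup above; the table and column names are illustrative.

``` sql
-- Uses the 'hdfs' storage policy from the configuration markup above.
CREATE TABLE hdfs_backed
(
    id UInt64,
    payload String
)
ENGINE = MergeTree
ORDER BY id
SETTINGS storage_policy = 'hdfs';
```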
@ -37,6 +37,14 @@ Also, it accepts the following settings:

- `max_delay_to_insert` - max delay of inserting data into Distributed table in seconds, if there are a lot of pending bytes for async send. Default 60.

- `monitor_batch_inserts` - same as [distributed_directory_monitor_batch_inserts](../../../operations/settings/settings.md#distributed_directory_monitor_batch_inserts)

- `monitor_split_batch_on_failure` - same as [distributed_directory_monitor_split_batch_on_failure](../../../operations/settings/settings.md#distributed_directory_monitor_split_batch_on_failure)

- `monitor_sleep_time_ms` - same as [distributed_directory_monitor_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_sleep_time_ms)

- `monitor_max_sleep_time_ms` - same as [distributed_directory_monitor_max_sleep_time_ms](../../../operations/settings/settings.md#distributed_directory_monitor_max_sleep_time_ms)
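A hedged sketch of passing these settings when creating a Distributed table; the cluster, database, and table names are assumptions.

``` sql
-- Hypothetical cluster 'my_cluster' over an existing table default.hits.
CREATE TABLE dist_hits AS default.hits
ENGINE = Distributed(my_cluster, default, hits, rand())
SETTINGS
    monitor_batch_inserts = 1,
    monitor_sleep_time_ms = 200;
```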
!!! note "Note"

**Durability settings** (`fsync_...`):
@ -1130,17 +1130,18 @@ The table below shows supported data types and how they match ClickHouse [data t

| `boolean`, `int`, `long`, `float`, `double` | [Int64](../sql-reference/data-types/int-uint.md), [UInt64](../sql-reference/data-types/int-uint.md) | `long` |
| `boolean`, `int`, `long`, `float`, `double` | [Float32](../sql-reference/data-types/float.md) | `float` |
| `boolean`, `int`, `long`, `float`, `double` | [Float64](../sql-reference/data-types/float.md) | `double` |
| `bytes`, `string`, `fixed`, `enum` | [String](../sql-reference/data-types/string.md) | `bytes` |
| `bytes`, `string`, `fixed`, `enum` | [String](../sql-reference/data-types/string.md) | `bytes` or `string` \* |
| `bytes`, `string`, `fixed` | [FixedString(N)](../sql-reference/data-types/fixedstring.md) | `fixed(N)` |
| `enum` | [Enum(8\|16)](../sql-reference/data-types/enum.md) | `enum` |
| `array(T)` | [Array(T)](../sql-reference/data-types/array.md) | `array(T)` |
| `union(null, T)`, `union(T, null)` | [Nullable(T)](../sql-reference/data-types/date.md) | `union(null, T)` |
| `null` | [Nullable(Nothing)](../sql-reference/data-types/special-data-types/nothing.md) | `null` |
| `int (date)` \* | [Date](../sql-reference/data-types/date.md) | `int (date)` \* |
| `long (timestamp-millis)` \* | [DateTime64(3)](../sql-reference/data-types/datetime.md) | `long (timestamp-millis)` \* |
| `long (timestamp-micros)` \* | [DateTime64(6)](../sql-reference/data-types/datetime.md) | `long (timestamp-micros)` \* |
| `int (date)` \** | [Date](../sql-reference/data-types/date.md) | `int (date)` \** |
| `long (timestamp-millis)` \** | [DateTime64(3)](../sql-reference/data-types/datetime.md) | `long (timestamp-millis)` \** |
| `long (timestamp-micros)` \** | [DateTime64(6)](../sql-reference/data-types/datetime.md) | `long (timestamp-micros)` \** |

\* [Avro logical types](https://avro.apache.org/docs/current/spec.html#Logical+Types)
\* `bytes` is default, controlled by [output_format_avro_string_column_pattern](../operations/settings/settings.md#settings-output_format_avro_string_column_pattern)
\** [Avro logical types](https://avro.apache.org/docs/current/spec.html#Logical+Types)

Unsupported Avro data types: `record` (non-root), `map`
@ -1246,12 +1247,14 @@ The table below shows supported data types and how they match ClickHouse [data t

| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | `DOUBLE` |
| `DATE32` | [Date](../sql-reference/data-types/date.md) | `UINT16` |
| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | `UINT32` |
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `STRING` |
| — | [FixedString](../sql-reference/data-types/fixedstring.md) | `STRING` |
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` |
| — | [FixedString](../sql-reference/data-types/fixedstring.md) | `BINARY` |
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
| `STRUCT` | [Tuple](../sql-reference/data-types/tuple.md) | `STRUCT` |
| `MAP` | [Map](../sql-reference/data-types/map.md) | `MAP` |

Arrays can be nested and can have a value of the `Nullable` type as an argument.
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.

ClickHouse supports configurable precision of `Decimal` type. The `INSERT` query treats the Parquet `DECIMAL` type as the ClickHouse `Decimal128` type.
@ -1299,13 +1302,17 @@ The table below shows supported data types and how they match ClickHouse [data t

| `DOUBLE` | [Float64](../sql-reference/data-types/float.md) | `FLOAT64` |
| `DATE32` | [Date](../sql-reference/data-types/date.md) | `UINT16` |
| `DATE64`, `TIMESTAMP` | [DateTime](../sql-reference/data-types/datetime.md) | `UINT32` |
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `UTF8` |
| `STRING`, `BINARY` | [FixedString](../sql-reference/data-types/fixedstring.md) | `UTF8` |
| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` |
| `STRING`, `BINARY` | [FixedString](../sql-reference/data-types/fixedstring.md) | `BINARY` |
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
| `DECIMAL256` | [Decimal256](../sql-reference/data-types/decimal.md)| `DECIMAL256` |
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
| `STRUCT` | [Tuple](../sql-reference/data-types/tuple.md) | `STRUCT` |
| `MAP` | [Map](../sql-reference/data-types/map.md) | `MAP` |

Arrays can be nested and can have a value of the `Nullable` type as an argument.
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.

The `DICTIONARY` type is supported for `INSERT` queries, and for `SELECT` queries there is an [output_format_arrow_low_cardinality_as_dictionary](../operations/settings/settings.md#output-format-arrow-low-cardinality-as-dictionary) setting that allows outputting the [LowCardinality](../sql-reference/data-types/lowcardinality.md) type as a `DICTIONARY` type.

ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the Arrow `DECIMAL` type as the ClickHouse `Decimal128` type.
@ -1358,8 +1365,10 @@ The table below shows supported data types and how they match ClickHouse [data t

| `STRING`, `BINARY` | [String](../sql-reference/data-types/string.md) | `BINARY` |
| `DECIMAL` | [Decimal](../sql-reference/data-types/decimal.md) | `DECIMAL` |
| `LIST` | [Array](../sql-reference/data-types/array.md) | `LIST` |
| `STRUCT` | [Tuple](../sql-reference/data-types/tuple.md) | `STRUCT` |
| `MAP` | [Map](../sql-reference/data-types/map.md) | `MAP` |

Arrays can be nested and can have a value of the `Nullable` type as an argument.
Arrays can be nested and can have a value of the `Nullable` type as an argument. `Tuple` and `Map` types also can be nested.

ClickHouse supports configurable precision of the `Decimal` type. The `INSERT` query treats the ORC `DECIMAL` type as the ClickHouse `Decimal128` type.
2
docs/en/interfaces/third-party/gui.md
vendored
@ -84,6 +84,8 @@ Features:

- Table data preview.
- Full-text search.

By default, DBeaver does not connect using a session (the CLI, for example, does). If you require session support (for example, to set settings for your session), edit the driver connection properties and set `session_id` to a random string (it uses the HTTP connection under the hood). Then you can use any setting from the query window.

### clickhouse-cli {#clickhouse-cli}

[clickhouse-cli](https://github.com/hatarist/clickhouse-cli) is an alternative command-line client for ClickHouse, written in Python 3.
@ -43,7 +43,7 @@ toc_title: Integrations

- Monitoring
    - [Graphite](https://graphiteapp.org)
        - [graphouse](https://github.com/yandex/graphouse)
        - [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse) +
        - [carbon-clickhouse](https://github.com/lomik/carbon-clickhouse)
        - [graphite-clickhouse](https://github.com/lomik/graphite-clickhouse)
        - [graphite-ch-optimizer](https://github.com/innogames/graphite-ch-optimizer) - optimizes stale partitions in [\*GraphiteMergeTree](../../engines/table-engines/mergetree-family/graphitemergetree.md#graphitemergetree) if rules from the [rollup configuration](../../engines/table-engines/mergetree-family/graphitemergetree.md#rollup-configuration) could be applied
    - [Grafana](https://grafana.com/)
@ -115,6 +115,7 @@ toc_title: Adopters

| <a href="http://english.sina.com/index.html" class="favicon">Sina</a> | News | — | — | — | [Slides in Chinese, October 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup19/6.%20ClickHouse最佳实践%20高鹏_新浪.pdf) |
| <a href="https://smi2.ru/" class="favicon">SMI2</a> | News | Analytics | — | — | [Blog Post in Russian, November 2017](https://habr.com/ru/company/smi2/blog/314558/) |
| <a href="https://www.spark.co.nz/" class="favicon">Spark New Zealand</a> | Telecommunications | Security Operations | — | — | [Blog Post, Feb 2020](https://blog.n0p.me/2020/02/2020-02-05-dnsmonster/) |
| <a href="https://splitbee.io" class="favicon">Splitbee</a> | Analytics | Main Product | — | — | [Blog Post, May 2021](https://splitbee.io/blog/new-pricing) |
| <a href="https://www.splunk.com/" class="favicon">Splunk</a> | Business Analytics | Main product | — | — | [Slides in English, January 2018](https://github.com/ClickHouse/clickhouse-presentations/blob/master/meetup12/splunk.pdf) |
| <a href="https://www.spotify.com" class="favicon">Spotify</a> | Music | Experimentation | — | — | [Slides, July 2018](https://www.slideshare.net/glebus/using-clickhouse-for-experimentation-104247173) |
| <a href="https://www.staffcop.ru/" class="favicon">Staffcop</a> | Information Security | Main Product | — | — | [Official website, Documentation](https://www.staffcop.ru/sce43) |

@ -157,5 +158,6 @@ toc_title: Adopters

| <a href="https://signoz.io/" class="favicon">SigNoz</a> | Observability Platform | Main Product | — | — | [Source code](https://github.com/SigNoz/signoz) |
| <a href="https://chelpipegroup.com/" class="favicon">ChelPipe Group</a> | Analytics | — | — | — | [Blog post, June 2021](https://vc.ru/trade/253172-tyazhelomu-proizvodstvu-user-friendly-sayt-internet-magazin-trub-dlya-chtpz) |
| <a href="https://zagravagames.com/en/" class="favicon">Zagrava Trading</a> | — | — | — | — | [Job offer, May 2021](https://twitter.com/datastackjobs/status/1394707267082063874) |
| <a href="https://beeline.ru/" class="favicon">Beeline</a> | Telecom | Data Platform | — | — | [Blog post, July 2021](https://habr.com/en/company/beeline/blog/567508/) |

[Original article](https://clickhouse.tech/docs/en/introduction/adopters/) <!--hide-->
@ -10,7 +10,7 @@ ClickHouse server use [ZooKeeper](https://zookeeper.apache.org/) coordination sy

!!! warning "Warning"
    This feature is currently in the pre-production stage. We test it in our CI and on small internal installations.

## Implemetation details
## Implementation details

ZooKeeper is one of the first well-known open-source coordination systems. It's implemented in Java and has quite a simple and powerful data model. ZooKeeper's coordination algorithm, called ZAB (ZooKeeper Atomic Broadcast), doesn't provide linearizability guarantees for reads, because each ZooKeeper node serves reads locally. Unlike ZooKeeper, `clickhouse-keeper` is written in C++ and uses the [RAFT algorithm](https://raft.github.io/) [implementation](https://github.com/eBay/NuRaft). This algorithm allows linearizability for reads and writes, and has several open-source implementations in different languages.
@ -30,21 +30,25 @@ Other common parameters are inherited from clickhouse-server config (`listen_hos

Internal coordination settings are located in `<keeper_server>.<coordination_settings>` section:

- `operation_timeout_ms` — timeout for a single client operation
- `session_timeout_ms` — timeout for client session
- `dead_session_check_period_ms` — how often clickhouse-keeper check dead sessions and remove them
- `heart_beat_interval_ms` — how often a clickhouse-keeper leader will send heartbeats to followers
- `election_timeout_lower_bound_ms` — if follower didn't receive heartbeats from the leader in this interval, then it can initiate leader election
- `election_timeout_upper_bound_ms` — if follower didn't receive heartbeats from the leader in this interval, then it must initiate leader election
- `rotate_log_storage_interval` — how many logs to store in a single file
- `reserved_log_items` — how many coordination logs to store before compaction
- `snapshot_distance` — how often clickhouse-keeper will create new snapshots (in the number of logs)
- `snapshots_to_keep` — how many snapshots to keep
- `stale_log_gap` — the threshold when leader consider follower as stale and send snapshot to it instead of logs
- `force_sync` — call `fsync` on each write to coordination log
- `raft_logs_level` — text logging level about coordination (trace, debug, and so on)
- `shutdown_timeout` — wait to finish internal connections and shutdown
- `startup_timeout` — if the server doesn't connect to other quorum participants in the specified timeout it will terminate
- `operation_timeout_ms` — timeout for a single client operation (default: 10000)
- `session_timeout_ms` — timeout for a client session (default: 30000)
- `dead_session_check_period_ms` — how often clickhouse-keeper checks dead sessions and removes them (default: 500)
- `heart_beat_interval_ms` — how often a clickhouse-keeper leader will send heartbeats to followers (default: 500)
- `election_timeout_lower_bound_ms` — if a follower didn't receive heartbeats from the leader in this interval, then it can initiate leader election (default: 1000)
- `election_timeout_upper_bound_ms` — if a follower didn't receive heartbeats from the leader in this interval, then it must initiate leader election (default: 2000)
- `rotate_log_storage_interval` — how many log records to store in a single file (default: 100000)
- `reserved_log_items` — how many coordination log records to store before compaction (default: 100000)
- `snapshot_distance` — how often clickhouse-keeper will create new snapshots (in the number of records in logs) (default: 100000)
- `snapshots_to_keep` — how many snapshots to keep (default: 3)
- `stale_log_gap` — the threshold at which the leader considers a follower stale and sends a snapshot to it instead of logs (default: 10000)
- `fresh_log_gap` - the threshold at which a node becomes fresh (default: 200)
- `max_requests_batch_size` - max size of a batch in requests count before it will be sent to RAFT (default: 100)
- `force_sync` — call `fsync` on each write to the coordination log (default: true)
- `quorum_reads` - execute read requests as writes through the whole RAFT consensus with similar speed (default: false)
- `raft_logs_level` — text logging level about coordination (trace, debug, and so on) (default: system default)
- `auto_forwarding` - allow forwarding write requests from followers to the leader (default: true)
- `shutdown_timeout` — wait to finish internal connections and shutdown (ms) (default: 5000)
- `startup_timeout` — if the server doesn't connect to other quorum participants in the specified timeout it will terminate (ms) (default: 30000)

Quorum configuration is located in the `<keeper_server>.<raft_configuration>` section and contains server descriptions. The only parameter for the whole quorum is `secure`, which enables an encrypted connection for communication between quorum participants. The main parameters for each `<server>` are:
@ -5,50 +5,67 @@ toc_title: Testing Hardware

# How to Test Your Hardware with ClickHouse {#how-to-test-your-hardware-with-clickhouse}

With this instruction you can run basic ClickHouse performance test on any server without installation of ClickHouse packages.
You can run a basic ClickHouse performance test on any server without installing ClickHouse packages.

1. Go to “commits” page: https://github.com/ClickHouse/ClickHouse/commits/master
2. Click on the first green check mark or red cross with green “ClickHouse Build Check” and click on the “Details” link near “ClickHouse Build Check”. There is no such link in some commits, for example commits with documentation. In this case, choose the nearest commit having this link.
3. Copy the link to `clickhouse` binary for amd64 or aarch64.
4. ssh to the server and download it with wget:

## Automated Run

You can run the benchmark with a single script.

1. Download the script.
```
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/hardware.sh
```

2. Run the script.
```
chmod a+x ./hardware.sh
./hardware.sh
```

3. Copy the output and send it to clickhouse-feedback@yandex-team.com

All the results are published here: https://clickhouse.tech/benchmark/hardware/

## Manual Run

Alternatively, you can perform the benchmark with the following steps.

1. ssh to the server and download the binary with wget:
```bash
# These links are outdated, please obtain the fresh link from the "commits" page.
# For amd64:
wget https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_build_check/gcc-10_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse
wget https://builds.clickhouse.tech/master/amd64/clickhouse
# For aarch64:
wget https://clickhouse-builds.s3.yandex.net/0/e29c4c3cc47ab2a6c4516486c1b77d57e7d42643/clickhouse_special_build_check/clang-10-aarch64_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse
wget https://builds.clickhouse.tech/master/aarch64/clickhouse
# Then do:
chmod a+x clickhouse
```
5. Download benchmark files:
2. Download benchmark files:
```bash
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/benchmark-new.sh
chmod a+x benchmark-new.sh
wget https://raw.githubusercontent.com/ClickHouse/ClickHouse/master/benchmark/clickhouse/queries.sql
```
6. Download test data according to the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) instruction (“hits” table containing 100 million rows).
3. Download test data according to the [Yandex.Metrica dataset](../getting-started/example-datasets/metrica.md) instruction (“hits” table containing 100 million rows).
```bash
wget https://datasets.clickhouse.tech/hits/partitions/hits_100m_obfuscated_v1.tar.xz
tar xvf hits_100m_obfuscated_v1.tar.xz -C .
mv hits_100m_obfuscated_v1/* .
```
7. Run the server:
4. Run the server:
```bash
./clickhouse server
```
8. Check the data: ssh to the server in another terminal
5. Check the data: ssh to the server in another terminal
```bash
./clickhouse client --query "SELECT count() FROM hits_100m_obfuscated"
100000000
```
9. Edit the benchmark-new.sh, change `clickhouse-client` to `./clickhouse client` and add `--max_memory_usage 100000000000` parameter.
```bash
mcedit benchmark-new.sh
```
10. Run the benchmark:
6. Run the benchmark:
```bash
./benchmark-new.sh hits_100m_obfuscated
```
11. Send the numbers and the info about your hardware configuration to clickhouse-feedback@yandex-team.com
7. Send the numbers and the info about your hardware configuration to clickhouse-feedback@yandex-team.com

All the results are published here: https://clickhouse.tech/benchmark/hardware/
@ -34,6 +34,7 @@ Configuration template:

        <min_part_size>...</min_part_size>
        <min_part_size_ratio>...</min_part_size_ratio>
        <method>...</method>
        <level>...</level>
    </case>
    ...
</compression>

@ -43,7 +44,8 @@ Configuration template:

- `min_part_size` – The minimum size of a data part.
- `min_part_size_ratio` – The ratio of the data part size to the table size.
- `method` – Compression method. Acceptable values: `lz4` or `zstd`.
- `method` – Compression method. Acceptable values: `lz4`, `lz4hc`, `zstd`.
- `level` – Compression level. See [Codecs](../../sql-reference/statements/create/table/#create-query-general-purpose-codecs).

You can configure multiple `<case>` sections.

@ -62,10 +64,33 @@ If no conditions met for a data part, ClickHouse uses the `lz4` compression.

        <min_part_size>10000000000</min_part_size>
        <min_part_size_ratio>0.01</min_part_size_ratio>
        <method>zstd</method>
        <level>1</level>
    </case>
</compression>
```
## encryption {#server-settings-encryption}

Configures a command to obtain a key to be used by [encryption codecs](../../sql-reference/statements/create/table.md#create-query-encryption-codecs). The command, or a shell script, is expected to write a Base64-encoded key of any length to stdout.

**Example**

For Linux with systemd:

```xml
<encryption>
    <key_command>/usr/bin/systemd-ask-password --id="clickhouse-server" --timeout=0 "Enter the ClickHouse encryption passphrase:" | base64</key_command>
</encryption>
```

For other systems:

```xml
<encryption>
    <key_command><![CDATA[IFS=; echo -n >/dev/tty "Enter the ClickHouse encryption passphrase: "; stty=`stty -F /dev/tty -g`; stty -F /dev/tty -echo; read k </dev/tty; stty -F /dev/tty "$stty"; echo -n $k | base64]]></key_command>
</encryption>
```
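For context, a hedged sketch of a table column that would consume the configured key. The codec name comes from the encryption codecs page linked above; the table and column names are assumptions.

``` sql
-- The AES_128_GCM_SIV codec encrypts the column on disk using the key
-- obtained via <key_command> (table/column names are hypothetical).
CREATE TABLE secrets
(
    id UInt64,
    token String CODEC(AES_128_GCM_SIV)
)
ENGINE = MergeTree
ORDER BY id;
```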
## custom_settings_prefixes {#custom_settings_prefixes}

List of prefixes for [custom settings](../../operations/settings/index.md#custom_settings). The prefixes must be separated with commas.

@ -713,7 +738,7 @@ Keys for server/client settings:

- extendedVerification – Automatically extended verification of certificates after the session ends. Acceptable values: `true`, `false`.
- requireTLSv1 – Require a TLSv1 connection. Acceptable values: `true`, `false`.
- requireTLSv1_1 – Require a TLSv1.1 connection. Acceptable values: `true`, `false`.
- requireTLSv1 – Require a TLSv1.2 connection. Acceptable values: `true`, `false`.
- requireTLSv1_2 – Require a TLSv1.2 connection. Acceptable values: `true`, `false`.
- fips – Activates OpenSSL FIPS mode. Supported if the library’s OpenSSL version supports FIPS.
- privateKeyPassphraseHandler – Class (PrivateKeyPassphraseHandler subclass) that requests the passphrase for accessing the private key. For example: `<privateKeyPassphraseHandler>`, `<name>KeyFileHandler</name>`, `<options><password>test</password></options>`, `</privateKeyPassphraseHandler>`.
- invalidCertificateHandler – Class (a subclass of CertificateHandler) for verifying invalid certificates. For example: `<invalidCertificateHandler> <name>ConsoleCertificateHandler</name> </invalidCertificateHandler>` .
@ -278,4 +278,15 @@ Possible values:

Default value: `0`.

[Original article](https://clickhouse.tech/docs/en/operations/settings/merge_tree_settings/) <!--hide-->
## check_sample_column_is_correct {#check_sample_column_is_correct}

Enables the check at table creation that the data type of a column used for sampling or in a sampling expression is correct. The data type must be one of the unsigned [integer types](../../sql-reference/data-types/int-uint.md): `UInt8`, `UInt16`, `UInt32`, `UInt64`.

Possible values:

- true — The check is enabled.
- false — The check is disabled at table creation.

Default value: `true`.

By default, the ClickHouse server checks the data type of a column used for sampling or in a sampling expression at table creation. If you already have tables with an incorrect sampling expression and do not want the server to raise an exception during startup, set `check_sample_column_is_correct` to `false`.
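A hedged sketch of disabling the check for a legacy table whose sampling column is signed; all names are assumptions.

``` sql
-- A signed Int32 sampling column would normally fail the check;
-- disable the check for this table only (names are hypothetical).
CREATE TABLE legacy_hits
(
    UserID Int32,
    EventDate Date
)
ENGINE = MergeTree
ORDER BY (EventDate, UserID)
SAMPLE BY UserID
SETTINGS check_sample_column_is_correct = 0;
```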
@ -28,7 +28,7 @@ Structure of the `users` section:

    <profile>profile_name</profile>

    <quota>default</quota>

    <default_database>default</default_database>
    <databases>
        <database_name>
            <table_name>
@ -20,6 +20,29 @@ Possible values:

- `global` — Replaces the `IN`/`JOIN` query with `GLOBAL IN`/`GLOBAL JOIN.`
- `allow` — Allows the use of these types of subqueries.

## prefer_global_in_and_join {#prefer-global-in-and-join}

Enables the replacement of `IN`/`JOIN` operators with `GLOBAL IN`/`GLOBAL JOIN`.

Possible values:

- 0 — Disabled. `IN`/`JOIN` operators are not replaced with `GLOBAL IN`/`GLOBAL JOIN`.
- 1 — Enabled. `IN`/`JOIN` operators are replaced with `GLOBAL IN`/`GLOBAL JOIN`.

Default value: `0`.

**Usage**

Although `SET distributed_product_mode=global` can change the behavior of queries over distributed tables, it's not suitable for local tables or tables from external resources. This is where the `prefer_global_in_and_join` setting comes into play.

For example, we have query serving nodes that contain local tables, which are not suitable for distribution. We need to scatter their data on the fly during distributed processing with the `GLOBAL` keyword: `GLOBAL IN`/`GLOBAL JOIN`.

Another use case of `prefer_global_in_and_join` is accessing tables created by external engines. This setting helps to reduce the number of calls to external sources while joining such tables: only one call per query.

**See also:**

- [Distributed subqueries](../../sql-reference/operators/in.md#select-distributed-subqueries) for more information on how to use `GLOBAL IN`/`GLOBAL JOIN`
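A minimal sketch of the setting in action; the table names are assumptions.

``` sql
-- With the setting enabled, the plain JOIN below is executed as GLOBAL JOIN,
-- so 'local_labels' is collected once and sent to the remote servers
-- (table names are hypothetical).
SET prefer_global_in_and_join = 1;

SELECT d.id, l.label
FROM distributed_events AS d
INNER JOIN local_labels AS l ON d.id = l.id;
```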
## enable_optimize_predicate_expression {#enable-optimize-predicate-expression}

Turns on predicate pushdown in `SELECT` queries.

@ -153,6 +176,26 @@ Possible values:

Default value: 1048576.

## table_function_remote_max_addresses {#table_function_remote_max_addresses}

Sets the maximum number of addresses generated from patterns for the [remote](../../sql-reference/table-functions/remote.md) function.

Possible values:

- Positive integer.

Default value: `1000`.

## glob_expansion_max_elements {#glob_expansion_max_elements}

Sets the maximum number of addresses generated from patterns for external storages and table functions (like [url](../../sql-reference/table-functions/url.md)) except the `remote` function.

Possible values:

- Positive integer.

Default value: `1000`.
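A hedged sketch of how address patterns expand and hit this limit; the host pattern and table are assumptions.

``` sql
-- The pattern below expands to 10 addresses (example01-01-1 ... example01-10-1);
-- a pattern expanding to more than table_function_remote_max_addresses
-- addresses is rejected (hosts and table are hypothetical).
SELECT count()
FROM remote('example01-{01..10}-1', default.hits);
```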
## send_progress_in_http_headers {#settings-send_progress_in_http_headers}

Enables or disables `X-ClickHouse-Progress` HTTP response headers in `clickhouse-server` responses.

@ -509,6 +552,23 @@ Possible values:

Default value: `ALL`.

## join_algorithm {#settings-join_algorithm}

Specifies the [JOIN](../../sql-reference/statements/select/join.md) algorithm.

Possible values:

- `hash` — [Hash join algorithm](https://en.wikipedia.org/wiki/Hash_join) is used.
- `partial_merge` — [Sort-merge algorithm](https://en.wikipedia.org/wiki/Sort-merge_join) is used.
- `prefer_partial_merge` — ClickHouse always tries to use `merge` join if possible.
- `auto` — ClickHouse tries to change `hash` join to `merge` join on the fly to avoid out of memory.

Default value: `hash`.

When using the `hash` algorithm, the right part of the `JOIN` is uploaded into RAM.

When using the `partial_merge` algorithm, ClickHouse sorts the data and dumps it to the disk. The `merge` algorithm in ClickHouse differs a bit from the classic realization. First, ClickHouse sorts the right table by the [join key](../../sql-reference/statements/select/join.md#select-join) in blocks and creates a min-max index for the sorted blocks. Then it sorts parts of the left table by the join key and joins them over the right table. The min-max index is also used to skip unneeded right table blocks.
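A hedged sketch of switching the algorithm for a memory-heavy join; the table names are assumptions.

``` sql
-- Sort-merge join keeps memory bounded when the right side is too large
-- for the in-RAM hash table (table names are hypothetical).
SET join_algorithm = 'partial_merge';

SELECT l.id, r.value
FROM big_left AS l
INNER JOIN big_right AS r ON l.id = r.id;
```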
## join_any_take_last_row {#settings-join_any_take_last_row}

Changes behaviour of join operations with `ANY` strictness.

@ -1989,6 +2049,13 @@ Possible values: 32 (32 bytes) - 1073741824 (1 GiB)

Default value: 32768 (32 KiB)

## output_format_avro_string_column_pattern {#output_format_avro_string_column_pattern}

Regexp of column names of type String to output as Avro `string` (default is `bytes`).
RE2 syntax is supported.

Type: string
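A hedged usage sketch for the setting above; the table and column names are assumptions.

``` sql
-- String columns whose names end in '_str' are written as Avro 'string'
-- instead of the default 'bytes' (names are hypothetical).
SELECT name_str, payload
FROM events
SETTINGS output_format_avro_string_column_pattern = '.*_str$'
FORMAT Avro;
```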
## format_avro_schema_registry_url {#format_avro_schema_registry_url}

Sets [Confluent Schema Registry](https://docs.confluent.io/current/schema-registry/index.html) URL to use with [AvroConfluent](../../interfaces/formats.md#data-format-avro-confluent) format.

@ -2018,6 +2085,16 @@ Possible values:

Default value: 16.

## merge_selecting_sleep_ms {#merge_selecting_sleep_ms}

Sleep time for merge selecting when no part is selected. A lower setting triggers selecting tasks in `background_schedule_pool` frequently, which results in a large number of requests to Zookeeper in large-scale clusters.

Possible values:

- Any positive integer.

Default value: `5000`.

## parallel_distributed_insert_select {#parallel_distributed_insert_select}

Enables parallel distributed `INSERT ... SELECT` query.

@ -2893,7 +2970,7 @@ Result:

└─────────────┘
```

Note that this setting influences [Materialized view](../../sql-reference/statements/create/view.md#materialized) and [MaterializeMySQL](../../engines/database-engines/materialize-mysql.md) behaviour.
Note that this setting influences [Materialized view](../../sql-reference/statements/create/view.md#materialized) and [MaterializedMySQL](../../engines/database-engines/materialized-mysql.md) behaviour.

## engine_file_empty_if_not_exists {#engine-file-empty_if-not-exists}
@ -3151,6 +3228,53 @@ SELECT

FROM fuse_tbl
```

## allow_experimental_database_replicated {#allow_experimental_database_replicated}

Enables creating databases with the [Replicated](../../engines/database-engines/replicated.md) engine.

Possible values:

- 0 — Disabled.
- 1 — Enabled.

Default value: `0`.

## database_replicated_initial_query_timeout_sec {#database_replicated_initial_query_timeout_sec}

Sets how long the initial DDL query should wait for the Replicated database to process previous DDL queue entries, in seconds.

Possible values:

- Positive integer.
- 0 — Unlimited.

Default value: `300`.

## distributed_ddl_task_timeout {#distributed_ddl_task_timeout}

Sets the timeout for DDL query responses from all hosts in the cluster. If a DDL request has not been performed on all hosts, a response will contain a timeout error, and the request will be executed in async mode. A negative value means infinite.

Possible values:

- Positive integer.
- 0 — Async mode.
- Negative integer — infinite timeout.

Default value: `180`.

## distributed_ddl_output_mode {#distributed_ddl_output_mode}

Sets format of distributed DDL query result.

Possible values:

- `throw` — Returns result set with query execution status for all hosts where query is finished. If query has failed on some hosts, then it will rethrow the first exception. If query is not finished yet on some hosts and [distributed_ddl_task_timeout](#distributed_ddl_task_timeout) is exceeded, then it throws a `TIMEOUT_EXCEEDED` exception.
- `none` — Is similar to throw, but distributed DDL query returns no result set.
- `null_status_on_timeout` — Returns `NULL` as execution status in some rows of result set instead of throwing `TIMEOUT_EXCEEDED` if query is not finished on the corresponding hosts.
- `never_throw` — Do not throw `TIMEOUT_EXCEEDED` and do not rethrow exceptions if query has failed on some hosts.

Default value: `throw`.
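A hedged sketch of `distributed_ddl_output_mode` with an `ON CLUSTER` query; the cluster name is an assumption.

``` sql
-- Return one status row per host instead of throwing on timeout
-- (cluster name is hypothetical).
SET distributed_ddl_output_mode = 'null_status_on_timeout';

CREATE TABLE default.events ON CLUSTER my_cluster
(
    id UInt64
)
ENGINE = MergeTree
ORDER BY id;
```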
## flatten_nested {#flatten-nested}

Sets the data format of [nested](../../sql-reference/data-types/nested-data-structures/nested.md) columns.

@ -3230,3 +3354,14 @@ Default value: `1`.

**Usage**

If the setting is set to `0`, the table function does not make Nullable columns and inserts default values instead of NULL. This is also applicable to NULL values inside arrays.

## output_format_arrow_low_cardinality_as_dictionary {#output-format-arrow-low-cardinality-as-dictionary}

Allows converting the [LowCardinality](../../sql-reference/data-types/lowcardinality.md) type to the `DICTIONARY` type of the [Arrow](../../interfaces/formats.md#data-format-arrow) format for `SELECT` queries.

Possible values:

- 0 — The `LowCardinality` type is not converted to the `DICTIONARY` type.
- 1 — The `LowCardinality` type is converted to the `DICTIONARY` type.

Default value: `0`.
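A hedged usage sketch; the table and column names are assumptions.

``` sql
-- Emit a LowCardinality(String) column as an Arrow DICTIONARY column
-- (names are hypothetical).
SELECT lc_column
FROM t
SETTINGS output_format_arrow_low_cardinality_as_dictionary = 1
FORMAT Arrow;
```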
@ -62,4 +62,3 @@ exception_code: ZOK

```

[Original article](https://clickhouse.tech/docs/en/operations/system_tables/distributed_ddl_queuedistributed_ddl_queue.md) <!--hide-->
@ -51,6 +51,7 @@ Columns:

- `databases` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — Names of the databases present in the query.
- `tables` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — Names of the tables present in the query.
- `columns` ([Array](../../sql-reference/data-types/array.md)([LowCardinality(String)](../../sql-reference/data-types/lowcardinality.md))) — Names of the columns present in the query.
- `projections` ([String](../../sql-reference/data-types/string.md)) — Names of the projections used during the query execution.
- `exception_code` ([Int32](../../sql-reference/data-types/int-uint.md)) — Code of an exception.
- `exception` ([String](../../sql-reference/data-types/string.md)) — Exception message.
- `stack_trace` ([String](../../sql-reference/data-types/string.md)) — [Stack trace](https://en.wikipedia.org/wiki/Stack_trace). An empty string, if the query was completed successfully.

@ -65,6 +66,8 @@ Columns:

- `initial_query_id` ([String](../../sql-reference/data-types/string.md)) — ID of the initial query (for distributed query execution).
- `initial_address` ([IPv6](../../sql-reference/data-types/domains/ipv6.md)) — IP address that the parent query was launched from.
- `initial_port` ([UInt16](../../sql-reference/data-types/int-uint.md)) — The client port that was used to make the parent query.
- `initial_query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Initial query starting time (for distributed query execution).
- `initial_query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Initial query starting time with microseconds precision (for distributed query execution).
- `interface` ([UInt8](../../sql-reference/data-types/int-uint.md)) — Interface that the query was initiated from. Possible values:
    - 1 — TCP.
    - 2 — HTTP.

@ -101,55 +104,77 @@ Columns:

**Example**

``` sql
SELECT * FROM system.query_log WHERE type = 'QueryFinish' AND (query LIKE '%toDate(\'2000-12-05\')%') ORDER BY query_start_time DESC LIMIT 1 FORMAT Vertical;
SELECT * FROM system.query_log WHERE type = 'QueryFinish' ORDER BY query_start_time DESC LIMIT 1 FORMAT Vertical;
```

``` text
Row 1:
──────
type: QueryStart
event_date: 2020-09-11
event_time: 2020-09-11 10:08:17
event_time_microseconds: 2020-09-11 10:08:17.063321
query_start_time: 2020-09-11 10:08:17
query_start_time_microseconds: 2020-09-11 10:08:17.063321
query_duration_ms: 0
read_rows: 0
read_bytes: 0
type: QueryFinish
event_date: 2021-07-28
event_time: 2021-07-28 13:46:56
event_time_microseconds: 2021-07-28 13:46:56.719791
query_start_time: 2021-07-28 13:46:56
query_start_time_microseconds: 2021-07-28 13:46:56.704542
query_duration_ms: 14
read_rows: 8393
read_bytes: 374325
written_rows: 0
written_bytes: 0
result_rows: 0
result_bytes: 0
memory_usage: 0
result_rows: 4201
result_bytes: 153024
memory_usage: 4714038
current_database: default
query: INSERT INTO test1 VALUES
query: SELECT DISTINCT arrayJoin(extractAll(name, '[\\w_]{2,}')) AS res FROM (SELECT name FROM system.functions UNION ALL SELECT name FROM system.table_engines UNION ALL SELECT name FROM system.formats UNION ALL SELECT name FROM system.table_functions UNION ALL SELECT name FROM system.data_type_families UNION ALL SELECT name FROM system.merge_tree_settings UNION ALL SELECT name FROM system.settings UNION ALL SELECT cluster FROM system.clusters UNION ALL SELECT macro FROM system.macros UNION ALL SELECT policy_name FROM system.storage_policies UNION ALL SELECT concat(func.name, comb.name) FROM system.functions AS func CROSS JOIN system.aggregate_function_combinators AS comb WHERE is_aggregate UNION ALL SELECT name FROM system.databases LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.tables LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.dictionaries LIMIT 10000 UNION ALL SELECT DISTINCT name FROM system.columns LIMIT 10000) WHERE notEmpty(res)
normalized_query_hash: 6666026786019643712
query_kind: Select
databases: ['system']
tables: ['system.aggregate_function_combinators','system.clusters','system.columns','system.data_type_families','system.databases','system.dictionaries','system.formats','system.functions','system.macros','system.merge_tree_settings','system.settings','system.storage_policies','system.table_engines','system.table_functions','system.tables']
columns: ['system.aggregate_function_combinators.name','system.clusters.cluster','system.columns.name','system.data_type_families.name','system.databases.name','system.dictionaries.name','system.formats.name','system.functions.is_aggregate','system.functions.name','system.macros.macro','system.merge_tree_settings.name','system.settings.name','system.storage_policies.policy_name','system.table_engines.name','system.table_functions.name','system.tables.name']
projections: []
exception_code: 0
exception:
stack_trace:
is_initial_query: 1
user: default
query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef
query_id: a3361f6e-a1fd-4d54-9f6f-f93a08bab0bf
address: ::ffff:127.0.0.1
port: 33452
port: 51006
initial_user: default
initial_query_id: 50a320fd-85a8-49b8-8761-98a86bcbacef
initial_query_id: a3361f6e-a1fd-4d54-9f6f-f93a08bab0bf
initial_address: ::ffff:127.0.0.1
initial_port: 33452
initial_port: 51006
initial_query_start_time: 2021-07-28 13:46:56
initial_query_start_time_microseconds: 2021-07-28 13:46:56.704542
interface: 1
os_user: bharatnc
client_hostname: tower
client_name: ClickHouse
client_revision: 54437
client_version_major: 20
client_version_minor: 7
client_version_patch: 2
os_user:
client_hostname:
client_name: ClickHouse client
client_revision: 54449
client_version_major: 21
client_version_minor: 8
client_version_patch: 0
http_method: 0
http_user_agent:
http_referer:
forwarded_for:
quota_key:
revision: 54440
thread_ids: []
ProfileEvents: {'Query':1,'SelectQuery':1,'ReadCompressedBytes':36,'CompressedReadBufferBlocks':1,'CompressedReadBufferBytes':10,'IOBufferAllocs':1,'IOBufferAllocBytes':89,'ContextLock':15,'RWLockAcquiredReadLocks':1}
Settings: {'background_pool_size':'32','load_balancing':'random','allow_suspicious_low_cardinality_types':'1','distributed_aggregation_memory_efficient':'1','skip_unavailable_shards':'1','log_queries':'1','max_bytes_before_external_group_by':'20000000000','max_bytes_before_external_sort':'20000000000','allow_introspection_functions':'1'}
revision: 54453
log_comment:
thread_ids: [5058,22097,22110,22094]
ProfileEvents.Names: ['Query','SelectQuery','ArenaAllocChunks','ArenaAllocBytes','FunctionExecute','NetworkSendElapsedMicroseconds','SelectedRows','SelectedBytes','ContextLock','RWLockAcquiredReadLocks','RealTimeMicroseconds','UserTimeMicroseconds','SystemTimeMicroseconds','SoftPageFaults','OSCPUWaitMicroseconds','OSCPUVirtualTimeMicroseconds','OSWriteBytes','OSWriteChars']
ProfileEvents.Values: [1,1,39,352256,64,360,8393,374325,412,440,34480,13108,4723,671,19,17828,8192,10240]
Settings.Names: ['load_balancing','max_memory_usage']
Settings.Values: ['random','10000000000']
used_aggregate_functions: []
used_aggregate_function_combinators: []
used_database_engines: []
used_data_type_families: ['UInt64','UInt8','Nullable','String','date']
used_dictionaries: []
used_formats: []
used_functions: ['concat','notEmpty','extractAll']
used_storages: []
used_table_functions: []
```

**See Also**
@ -82,6 +82,7 @@ The next 4 columns have a non-zero value only where there is an active session w

- `absolute_delay` (`UInt64`) - How big a lag in seconds the current replica has.
- `total_replicas` (`UInt8`) - The total number of known replicas of this table.
- `active_replicas` (`UInt8`) - The number of replicas of this table that have a session in ZooKeeper (i.e., the number of functioning replicas).
- `replica_is_active` ([Map(String, UInt8)](../../sql-reference/data-types/map.md)) — Map between replica name and whether the replica is active.

If you request all the columns, the table may work a bit slowly, since several reads from ZooKeeper are made for each row.
If you do not request the last 4 columns (log_max_index, log_pointer, total_replicas, active_replicas), the table works quickly.
@ -47,6 +47,7 @@ Settings:

- [low_cardinality_use_single_dictionary_for_part](../../operations/settings/settings.md#low_cardinality_use_single_dictionary_for_part)
- [low_cardinality_allow_in_native_format](../../operations/settings/settings.md#low_cardinality_allow_in_native_format)
- [allow_suspicious_low_cardinality_types](../../operations/settings/settings.md#allow_suspicious_low_cardinality_types)
- [output_format_arrow_low_cardinality_as_dictionary](../../operations/settings/settings.md#output-format-arrow-low-cardinality-as-dictionary)

Functions:

@ -57,5 +58,3 @@ Functions:

- [A Magical Mystery Tour of the LowCardinality Data Type](https://www.altinity.com/blog/2019/3/27/low-cardinality).
- [Reducing ClickHouse Storage Cost with the Low Cardinality Type – Lessons from an Instana Engineer](https://www.instana.com/blog/reducing-clickhouse-storage-cost-with-the-low-cardinality-type-lessons-from-an-instana-engineer/).
- [String Optimization (video presentation in Russian)](https://youtu.be/rqf-ILRgBdY?list=PL0Z2YDlm0b3iwXCpEFiOOYmwXzVmjJfEt). [Slides in English](https://github.com/yandex/clickhouse-presentations/raw/master/meetup19/string_optimization.pdf).

[Original article](https://clickhouse.tech/docs/en/sql-reference/data-types/lowcardinality/) <!--hide-->
@ -9,8 +9,8 @@ toc_title: Map(key, value)

**Parameters**

- `key` — The key part of the pair. [String](../../sql-reference/data-types/string.md) or [Integer](../../sql-reference/data-types/int-uint.md).
- `value` — The value part of the pair. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md) or [Array](../../sql-reference/data-types/array.md).
- `key` — The key part of the pair. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md), [LowCardinality](../../sql-reference/data-types/lowcardinality.md), or [FixedString](../../sql-reference/data-types/fixedstring.md).
- `value` — The value part of the pair. [String](../../sql-reference/data-types/string.md), [Integer](../../sql-reference/data-types/int-uint.md), [Array](../../sql-reference/data-types/array.md), [LowCardinality](../../sql-reference/data-types/lowcardinality.md), or [FixedString](../../sql-reference/data-types/fixedstring.md).

To get the value from an `a Map('key', 'value')` column, use the `a['key']` syntax. This lookup currently works with linear complexity.
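A hedged sketch of creating and querying a `Map` column; the table name is an assumption.

``` sql
-- Hypothetical table with a Map column; attrs['color'] is the
-- linear-complexity lookup described above.
CREATE TABLE map_demo
(
    id UInt64,
    attrs Map(String, String)
)
ENGINE = MergeTree
ORDER BY id;

INSERT INTO map_demo VALUES (1, map('color', 'red', 'size', 'L'));

SELECT attrs['color'] FROM map_demo;
```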
@ -275,9 +275,13 @@ The dictionary is stored in a cache that has a fixed number of cells. These cell

When searching for a dictionary, the cache is searched first. For each block of data, all keys that are not found in the cache or are outdated are requested from the source using `SELECT attrs... FROM db.table WHERE id IN (k1, k2, ...)`. The received data is then written to the cache.

For cache dictionaries, the expiration [lifetime](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md) of data in the cache can be set. If more time than `lifetime` has passed since loading the data in a cell, the cell’s value is not used, and it is re-requested the next time it needs to be used.
If keys are not found in the dictionary, an update-cache task is created and added to the update queue. Update queue properties can be controlled with the settings `max_update_queue_size`, `update_queue_push_timeout_milliseconds`, `query_wait_timeout_milliseconds`, `max_threads_for_updates`.

For cache dictionaries, the expiration [lifetime](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-lifetime.md) of data in the cache can be set. If more time than `lifetime` has passed since loading the data in a cell, the cell’s value is not used and the key becomes expired; it is re-requested the next time it needs to be used. This behaviour can be configured with the setting `allow_read_expired_keys`.
This is the least effective of all the ways to store dictionaries. The speed of the cache depends strongly on correct settings and the usage scenario. A cache type dictionary performs well only when the hit rates are high enough (recommended 99% and higher). You can view the average hit rate in the `system.dictionaries` table.

If the setting `allow_read_expired_keys` is set to 1 (default is 0), the dictionary can support asynchronous updates: if a client requests keys and all of them are in the cache, but some of them are expired, the dictionary returns the expired keys to the client and requests them asynchronously from the source.

To improve cache performance, use a subquery with `LIMIT`, and call the function with the dictionary externally.

Supported [sources](../../../sql-reference/dictionaries/external-dictionaries/external-dicts-dict-sources.md): MySQL, ClickHouse, executable, HTTP.

@ -289,6 +293,16 @@ Example of settings:

<cache>
    <!-- The size of the cache, in number of cells. Rounded up to a power of two. -->
    <size_in_cells>1000000000</size_in_cells>
    <!-- Allows to read expired keys. -->
    <allow_read_expired_keys>0</allow_read_expired_keys>
    <!-- Max size of update queue. -->
    <max_update_queue_size>100000</max_update_queue_size>
    <!-- Max timeout in milliseconds for push update task into queue. -->
    <update_queue_push_timeout_milliseconds>10</update_queue_push_timeout_milliseconds>
    <!-- Max wait timeout in milliseconds for update task to complete. -->
    <query_wait_timeout_milliseconds>60000</query_wait_timeout_milliseconds>
    <!-- Max threads for cache dictionary update. -->
    <max_threads_for_updates>4</max_threads_for_updates>
</cache>
</layout>
```
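A hedged sketch of reading through a dictionary laid out as `cache`; the dictionary and attribute names are assumptions.

``` sql
-- A miss goes to the update queue and is fetched from the source;
-- subsequent reads for key 42 are served from the cache
-- (dictionary/attribute names are hypothetical).
SELECT dictGet('cache_dict', 'value', toUInt64(42));
```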
@ -315,7 +329,7 @@ This type of storage is for use with composite [keys](../../../sql-reference/dic

### ssd_cache {#ssd-cache}

Similar to `cache`, but stores data on SSD and index in RAM.
Similar to `cache`, but stores data on SSD and index in RAM. All cache dictionary settings related to the update queue can also be applied to SSD cache dictionaries.

``` xml
<layout>
@ -598,7 +598,7 @@ SOURCE(CLICKHOUSE(

    table 'ids'
    where 'id=10'
    secure 1
))
));
```

Setting fields:
@ -85,7 +85,7 @@ hex(arg)

The function uses uppercase letters `A-F` and does not use any prefixes (like `0x`) or suffixes (like `h`).

For integer arguments, it prints hex digits (“nibbles”) from the most significant to least significant (big endian or “human readable” order). It starts with the most significant non-zero byte (leading zero bytes are omitted) but always prints both digits of every byte even if leading digit is zero.
For integer arguments, it prints hex digits (“nibbles”) from the most significant to least significant (big-endian or “human-readable” order). It starts with the most significant non-zero byte (leading zero bytes are omitted) but always prints both digits of every byte even if the leading digit is zero.

**Example**

@ -105,7 +105,7 @@ Values of type `Date` and `DateTime` are formatted as corresponding integers (th

For `String` and `FixedString`, all bytes are simply encoded as two hexadecimal numbers. Zero bytes are not omitted.

Values of floating point and Decimal types are encoded as their representation in memory. As we support little endian architecture, they are encoded in little endian. Zero leading/trailing bytes are not omitted.
Values of floating point and Decimal types are encoded as their representation in memory. As we support little-endian architecture, they are encoded in little-endian. Zero leading/trailing bytes are not omitted.

**Arguments**

@ -206,6 +206,141 @@ Result:

└──────┘
```
## bin {#bin}
|
||||
|
||||
Returns a string containing the argument’s binary representation.
|
||||
|
||||
Alias: `BIN`.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
bin(arg)
|
||||
```
|
||||
|
||||
For integer arguments, it prints bin digits from the most significant to least significant (big-endian or “human-readable” order). It starts with the most significant non-zero byte (leading zero bytes are omitted) but always prints eight digits of every byte if the leading digit is zero.
|
||||
|
||||
**Example**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT bin(1);
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
00000001
|
||||
```
|
||||
|
||||
Values of type `Date` and `DateTime` are formatted as corresponding integers (the number of days since Epoch for Date and the value of Unix Timestamp for DateTime).
|
||||
|
||||
For `String` and `FixedString`, all bytes are simply encoded as eight binary numbers. Zero bytes are not omitted.
|
||||
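
For instance, a minimal sketch (a single ASCII byte):

``` sql
SELECT bin('a'); -- returns '01100001': eight binary digits per byte
```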

Values of floating-point and Decimal types are encoded as their representation in memory. As we support little-endian architecture, they are encoded in little-endian. Zero leading/trailing bytes are not omitted.

**Arguments**

- `arg` — A value to convert to binary. Types: [String](../../sql-reference/data-types/string.md), [UInt](../../sql-reference/data-types/int-uint.md), [Float](../../sql-reference/data-types/float.md), [Decimal](../../sql-reference/data-types/decimal.md), [Date](../../sql-reference/data-types/date.md) or [DateTime](../../sql-reference/data-types/datetime.md).

**Returned value**

- A string with the binary representation of the argument.

Type: `String`.

**Example**

Query:

``` sql
SELECT bin(toFloat32(number)) as bin_presentation FROM numbers(15, 2);
```

Result:

``` text
┌─bin_presentation─────────────────┐
│ 00000000000000000111000001000001 │
│ 00000000000000001000000001000001 │
└──────────────────────────────────┘
```

Query:

``` sql
SELECT bin(toFloat64(number)) as bin_presentation FROM numbers(15, 2);
```

Result:

``` text
┌─bin_presentation─────────────────────────────────────────────────┐
│ 0000000000000000000000000000000000000000000000000010111001000000 │
│ 0000000000000000000000000000000000000000000000000011000001000000 │
└──────────────────────────────────────────────────────────────────┘
```

## unbin {#unbinstr}

Performs the opposite operation of [bin](#bin). It interprets each group of eight binary digits (in the argument) as a number and converts it to the byte represented by that number. The return value is a binary string (BLOB).

If you want to convert the result to a number, you can use the [reverse](../../sql-reference/functions/string-functions.md#reverse) and [reinterpretAs&lt;Type&gt;](../../sql-reference/functions/type-conversion-functions.md#type-conversion-functions) functions.

!!! note "Note"
    If `unbin` is invoked from within the `clickhouse-client`, binary strings display using UTF-8.

Alias: `UNBIN`.

**Syntax**

``` sql
unbin(arg)
```

**Arguments**

- `arg` — A string containing any number of binary digits. Type: [String](../../sql-reference/data-types/string.md).

Supports binary digits `0-1`. The number of binary digits does not have to be a multiple of eight. If the argument string contains anything other than binary digits, some implementation-defined result is returned (an exception is not thrown). For a numeric argument, `unbin()` does not perform the inverse of `bin(N)`.

**Returned value**

- A binary string (BLOB).

Type: [String](../../sql-reference/data-types/string.md).

**Example**

Query:

``` sql
SELECT UNBIN('001100000011000100110010'), UNBIN('0100110101111001010100110101000101001100');
```

Result:

``` text
┌─unbin('001100000011000100110010')─┬─unbin('0100110101111001010100110101000101001100')─┐
│ 012                               │ MySQL                                             │
└───────────────────────────────────┴───────────────────────────────────────────────────┘
```

Query:

``` sql
SELECT reinterpretAsUInt64(reverse(unbin('1010'))) AS num;
```

Result:

``` text
┌─num─┐
│  10 │
└─────┘
```

## UUIDStringToNum(str) {#uuidstringtonumstr}

Accepts a string containing 36 characters in the format `123e4567-e89b-12d3-a456-426655440000`, and returns it as a set of bytes in a FixedString(16).

@ -87,7 +87,7 @@ SELECT
    dictGetOrDefault('ext-dict-test', 'c1', number + 1, toUInt32(number * 10)) AS val,
    toTypeName(val) AS type
FROM system.numbers
LIMIT 3
LIMIT 3;
```

``` text

@ -211,7 +211,7 @@ SELECT nullIf(1, 2);

## assumeNotNull {#assumenotnull}

Results in a value of type [Nullable](../../sql-reference/data-types/nullable.md) for a non- `Nullable`, if the value is not `NULL`.
Results in an equivalent non-`Nullable` value for a [Nullable](../../sql-reference/data-types/nullable.md) type. If the original value is `NULL`, the result is undetermined. See also the `ifNull` and `coalesce` functions.

``` sql
assumeNotNull(x)
132
docs/en/sql-reference/functions/nlp-functions.md
Normal file
@ -0,0 +1,132 @@
---
toc_priority: 67
toc_title: NLP
---

# [experimental] Natural Language Processing functions {#nlp-functions}

!!! warning "Warning"
    This is an experimental feature that is currently in development and is not ready for general use. It will change in unpredictable backwards-incompatible ways in future releases. Set `allow_experimental_nlp_functions = 1` to enable it.

## stem {#stem}

Performs stemming on a given word.

**Syntax**

``` sql
stem('language', word)
```

**Arguments**

- `language` — Language whose rules will be applied. Must be in lowercase. [String](../../sql-reference/data-types/string.md#string).
- `word` — Word that needs to be stemmed. Must be in lowercase. [String](../../sql-reference/data-types/string.md#string).

**Examples**

Query:

``` sql
SELECT arrayMap(x -> stem('en', x), ['I', 'think', 'it', 'is', 'a', 'blessing', 'in', 'disguise']) as res;
```

Result:

``` text
┌─res────────────────────────────────────────────────┐
│ ['I','think','it','is','a','bless','in','disguis'] │
└────────────────────────────────────────────────────┘
```

## lemmatize {#lemmatize}

Performs lemmatization on a given word. Needs dictionaries to operate, which can be obtained [here](https://github.com/vpodpecan/lemmagen3/tree/master/src/lemmagen3/models).

**Syntax**

``` sql
lemmatize('language', word)
```

**Arguments**

- `language` — Language whose rules will be applied. [String](../../sql-reference/data-types/string.md#string).
- `word` — Word that needs to be lemmatized. Must be lowercase. [String](../../sql-reference/data-types/string.md#string).

**Examples**

Query:

``` sql
SELECT lemmatize('en', 'wolves');
```

Result:

``` text
┌─lemmatize("wolves")─┐
│ "wolf"              │
└─────────────────────┘
```

Configuration:
``` xml
<lemmatizers>
    <lemmatizer>
        <lang>en</lang>
        <path>en.bin</path>
    </lemmatizer>
</lemmatizers>
```

## synonyms {#synonyms}

Finds synonyms for a given word. There are two types of synonym extensions: `plain` and `wordnet`.

With the `plain` extension type we need to provide a path to a simple text file, where each line corresponds to a certain synonym set. Words in this line must be separated with space or tab characters.
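
For illustration, a minimal sketch of a `plain` extension file (hypothetical contents; each line is one synonym set):

``` text
important big critical crucial
fast quick rapid
```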

With the `wordnet` extension type we need to provide a path to a directory with a WordNet thesaurus in it. The thesaurus must contain a WordNet sense index.

**Syntax**

``` sql
synonyms('extension_name', word)
```

**Arguments**

- `extension_name` — Name of the extension in which the search will be performed. [String](../../sql-reference/data-types/string.md#string).
- `word` — Word that will be searched in the extension. [String](../../sql-reference/data-types/string.md#string).

**Examples**

Query:

``` sql
SELECT synonyms('list', 'important');
```

Result:

``` text
┌─synonyms('list', 'important')────────────┐
│ ['important','big','critical','crucial'] │
└──────────────────────────────────────────┘
```

Configuration:
``` xml
<synonyms_extensions>
    <extension>
        <name>en</name>
        <type>plain</type>
        <path>en.txt</path>
    </extension>
    <extension>
        <name>en</name>
        <type>wordnet</type>
        <path>en/</path>
    </extension>
</synonyms_extensions>
```
@ -2138,3 +2138,52 @@ Result:

- [tcp_port](../../operations/server-configuration-parameters/settings.md#server_configuration_parameters-tcp_port)

## currentProfiles {#current-profiles}

Returns a list of the current [settings profiles](../../operations/access-rights.md#settings-profiles-management) for the current user.

The command [SET PROFILE](../../sql-reference/statements/set.md#query-set) can be used to change the current settings profile. If `SET PROFILE` was not used, the function returns the profiles specified in the current user's definition (see [CREATE USER](../../sql-reference/statements/create/user.md#create-user-statement)).

**Syntax**

``` sql
currentProfiles()
```

**Returned value**

- List of the current user settings profiles.

Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)).
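
As a usage sketch (the returned list depends entirely on your access configuration):

``` sql
SELECT currentProfiles() AS profiles; -- e.g. ['default'] on a stock installation
```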

## enabledProfiles {#enabled-profiles}

Returns settings profiles assigned to the current user, both explicitly and implicitly. Explicitly assigned profiles are the same as returned by the [currentProfiles](#current-profiles) function. Implicitly assigned profiles include parent profiles of other assigned profiles, profiles assigned via granted roles, profiles assigned via their own settings, and the main default profile (see the `default_profile` section in the main server configuration file).

**Syntax**

``` sql
enabledProfiles()
```

**Returned value**

- List of the enabled settings profiles.

Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)).

## defaultProfiles {#default-profiles}

Returns all the profiles specified in the current user's definition (see the [CREATE USER](../../sql-reference/statements/create/user.md#create-user-statement) statement).

**Syntax**

``` sql
defaultProfiles()
```

**Returned value**

- List of the default settings profiles.

Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)).

@ -145,6 +145,72 @@ Result:
└────────────────────────────┘
```

## splitByWhitespace(s) {#splitbywhitespaceseparator-s}

Splits a string into substrings separated by whitespace characters.
Returns an array of selected substrings.

**Syntax**

``` sql
splitByWhitespace(s)
```

**Arguments**

- `s` — The string to split. [String](../../sql-reference/data-types/string.md).

**Returned value(s)**

Returns an array of selected substrings.

Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)).

**Example**

``` sql
SELECT splitByWhitespace('  1!  a,  b.  ');
```

``` text
┌─splitByWhitespace('  1!  a,  b.  ')─┐
│ ['1!','a,','b.']                    │
└─────────────────────────────────────┘
```

## splitByNonAlpha(s) {#splitbynonalphaseparator-s}

Splits a string into substrings separated by whitespace and punctuation characters.
Returns an array of selected substrings.

**Syntax**

``` sql
splitByNonAlpha(s)
```

**Arguments**

- `s` — The string to split. [String](../../sql-reference/data-types/string.md).

**Returned value(s)**

Returns an array of selected substrings.

Type: [Array](../../sql-reference/data-types/array.md)([String](../../sql-reference/data-types/string.md)).

**Example**

``` sql
SELECT splitByNonAlpha('  1!  a,  b.  ');
```

``` text
┌─splitByNonAlpha('  1!  a,  b.  ')─┐
│ ['1','a','b']                     │
└───────────────────────────────────┘
```

## arrayStringConcat(arr\[, separator\]) {#arraystringconcatarr-separator}

Concatenates the strings listed in the array with the separator. `separator` is an optional parameter: a constant string, set to an empty string by default.
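
For illustration, a minimal sketch of both forms:

``` sql
SELECT
    arrayStringConcat(['a', 'b', 'c']) AS no_separator,        -- 'abc'
    arrayStringConcat(['a', 'b', 'c'], ';') AS with_separator; -- 'a;b;c'
```
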
@ -13,13 +13,14 @@ toc_title: Strings
Returns 1 for an empty string or 0 for a non-empty string.
The result type is UInt8.
A string is considered non-empty if it contains at least one byte, even if this is a space or a null byte.
The function also works for arrays.
The function also works for arrays and UUIDs.
A UUID is empty if it is all zeros (nil UUID).
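
As a sketch of the UUID behaviour described above (using the nil UUID, which is all zeros):

``` sql
SELECT empty(toUUID('00000000-0000-0000-0000-000000000000')); -- returns 1
```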

## notEmpty {#notempty}

Returns 0 for an empty string or 1 for a non-empty string.
The result type is UInt8.
The function also works for arrays.
The function also works for arrays and UUIDs.

## length {#length}

@ -103,6 +103,15 @@ Result:
Query with `Map` type:

```sql
SELECT mapAdd(map(1,1), map(1,1));
```

Result:

```text
┌─mapAdd(map(1, 1), map(1, 1))─┐
│ {1:2}                        │
└──────────────────────────────┘
```

## mapSubtract {#function-mapsubtract}
@ -117,15 +126,15 @@ mapSubtract(Tuple(Array, Array), Tuple(Array, Array) [, ...])

**Arguments**

Arguments are [tuples](../../sql-reference/data-types/tuple.md#tuplet1-t2) of two [arrays](../../sql-reference/data-types/array.md#data-type-array), where items in the first array represent keys, and the second array contains values for each key. All key arrays should have the same type, and all value arrays should contain items which are promoted to a common type ([Int64](../../sql-reference/data-types/int-uint.md#int-ranges), [UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges) or [Float64](../../sql-reference/data-types/float.md#float32-float64)). The common promoted type is used as the type for the result array.
Arguments are [maps](../../sql-reference/data-types/map.md) or [tuples](../../sql-reference/data-types/tuple.md#tuplet1-t2) of two [arrays](../../sql-reference/data-types/array.md#data-type-array), where items in the first array represent keys, and the second array contains values for each key. All key arrays should have the same type, and all value arrays should contain items which are promoted to a common type ([Int64](../../sql-reference/data-types/int-uint.md#int-ranges), [UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges) or [Float64](../../sql-reference/data-types/float.md#float32-float64)). The common promoted type is used as the type for the result array.

**Returned value**

- Returns one [tuple](../../sql-reference/data-types/tuple.md#tuplet1-t2), where the first array contains the sorted keys and the second array contains values.
- Depending on the arguments, returns one [map](../../sql-reference/data-types/map.md) or [tuple](../../sql-reference/data-types/tuple.md#tuplet1-t2), where the first array contains the sorted keys and the second array contains values.

**Example**

Query:
Query with a tuple map:

```sql
SELECT mapSubtract(([toUInt8(1), 2], [toInt32(1), 1]), ([toUInt8(1), 2], [toInt32(2), 1])) as res, toTypeName(res) as type;
@ -139,32 +148,54 @@ Result:
└────────────────┴───────────────────────────────────┘
```

Query with `Map` type:

```sql
SELECT mapSubtract(map(1,1), map(1,1));
```

Result:

```text
┌─mapSubtract(map(1, 1), map(1, 1))─┐
│ {1:0}                             │
└───────────────────────────────────┘
```

## mapPopulateSeries {#function-mappopulateseries}

Fills missing keys in the maps (key and value array pair), where keys are integers. Also, it supports specifying the max key, which is used to extend the keys array.
Arguments are [maps](../../sql-reference/data-types/map.md) or two [arrays](../../sql-reference/data-types/array.md#data-type-array), where the first array represents keys, and the second array contains values for each key.

For array arguments the number of elements in `keys` and `values` must be the same for each row.

**Syntax**

```sql
mapPopulateSeries(keys, values[, max])
mapPopulateSeries(map[, max])
```

Generates a map, where keys are a series of numbers, from the minimum to the maximum key (or the `max` argument if it is specified) taken from the `keys` array with a step size of one, and corresponding values taken from the `values` array. If a value is not specified for a key, it uses the default value in the resulting map. For repeated keys, only the first value (in order of appearance) gets associated with the key.

The number of elements in `keys` and `values` must be the same for each row.
Generates a map (a tuple with two arrays or a value of `Map` type, depending on the arguments), where keys are a series of numbers, from the minimum to the maximum key (or the `max` argument if it is specified) taken from the map with a step size of one, and the corresponding values. If a value is not specified for a key, it uses the default value in the resulting map. For repeated keys, only the first value (in order of appearance) gets associated with the key.

**Arguments**

Mapped arrays:

- `keys` — Array of keys. [Array](../../sql-reference/data-types/array.md#data-type-array)([Int](../../sql-reference/data-types/int-uint.md#uint-ranges)).
- `values` — Array of values. [Array](../../sql-reference/data-types/array.md#data-type-array)([Int](../../sql-reference/data-types/int-uint.md#uint-ranges)).

or

- `map` — Map with integer keys. [Map](../../sql-reference/data-types/map.md).

**Returned value**

- Returns a [tuple](../../sql-reference/data-types/tuple.md#tuplet1-t2) of two [arrays](../../sql-reference/data-types/array.md#data-type-array): keys in sorted order, and values for the corresponding keys.
- Depending on the arguments, returns a [map](../../sql-reference/data-types/map.md) or a [tuple](../../sql-reference/data-types/tuple.md#tuplet1-t2) of two [arrays](../../sql-reference/data-types/array.md#data-type-array): keys in sorted order, and values for the corresponding keys.

**Example**

Query:
Query with mapped arrays:

```sql
select mapPopulateSeries([1,2,4], [11,22,44], 5) as res, toTypeName(res) as type;
@ -178,6 +209,20 @@ Result:
└──────────────────────────────┴───────────────────────────────────┘
```

Query with `Map` type:

```sql
SELECT mapPopulateSeries(map(1, 10, 5, 20), 6);
```

Result:

```text
┌─mapPopulateSeries(map(1, 10, 5, 20), 6)─┐
│ {1:10,2:0,3:0,4:0,5:20,6:0}             │
└─────────────────────────────────────────┘
```

## mapContains {#mapcontains}

Determines whether the `map` contains the `key` parameter.
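
A brief usage sketch (assuming a constant map):

```sql
SELECT mapContains(map('name', 'clickhouse'), 'name'); -- returns 1
```
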
@ -465,27 +465,29 @@ Result:

## CAST(x, T) {#type_conversion_function-cast}

Converts input value `x` to the `T` data type. Unlike to `reinterpret` function, type conversion is performed in a natural way.

The syntax `CAST(x AS t)` is also supported.

!!! note "Note"
    If value `x` does not fit the bounds of type `T`, the function overflows. For example, `CAST(-1, 'UInt8')` returns `255`.
Converts an input value to the specified data type. Unlike the [reinterpret](#type_conversion_function-reinterpret) function, `CAST` tries to present the same value using the new data type. If the conversion cannot be done, an exception is raised.
Several syntax variants are supported.

**Syntax**

``` sql
CAST(x, T)
CAST(x AS t)
x::t
```

**Arguments**

- `x` — Any type.
- `T` — Destination type. [String](../../sql-reference/data-types/string.md).
- `x` — A value to convert. May be of any type.
- `T` — The name of the target data type. [String](../../sql-reference/data-types/string.md).
- `t` — The target data type.

**Returned value**

- Destination type value.
- Converted value.

!!! note "Note"
    If the input value does not fit the bounds of the target type, the result overflows. For example, `CAST(-1, 'UInt8')` returns `255`.

**Examples**

@ -494,16 +496,16 @@ Query:
```sql
SELECT
    CAST(toInt8(-1), 'UInt8') AS cast_int_to_uint,
    CAST(toInt8(1), 'Float32') AS cast_int_to_float,
    CAST('1', 'UInt32') AS cast_string_to_int;
    CAST(1.5 AS Decimal(3,2)) AS cast_float_to_decimal,
    '1'::Int32 AS cast_string_to_int;
```

Result:

```
┌─cast_int_to_uint─┬─cast_int_to_float─┬─cast_string_to_int─┐
│              255 │                 1 │                  1 │
└──────────────────┴───────────────────┴────────────────────┘
┌─cast_int_to_uint─┬─cast_float_to_decimal─┬─cast_string_to_int─┐
│              255 │                  1.50 │                  1 │
└──────────────────┴───────────────────────┴────────────────────┘
```

Query:

Some files were not shown because too many files have changed in this diff.