mirror of https://github.com/ClickHouse/ClickHouse.git
synced 2024-12-17 20:02:05 +00:00

commit 9ae02bf9c7: Merge remote-tracking branch 'origin/master' into negging
@@ -488,6 +488,7 @@
 * Remove `is_deterministic` field from the `system.functions` table. [#66630](https://github.com/ClickHouse/ClickHouse/pull/66630) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 * Function `tuple` will now try to construct named tuples in query (controlled by `enable_named_columns_in_function_tuple`). Introduce function `tupleNames` to extract names from tuples. [#54881](https://github.com/ClickHouse/ClickHouse/pull/54881) ([Amos Bird](https://github.com/amosbird)).
 * Change how deduplication for Materialized Views works. Fixed a lot of cases, e.g.: on the destination table, data split into 2 or more blocks was considered a duplicate when those blocks were inserted in parallel; on the MV destination table, equal blocks were deduplicated (this happens when an MV often produces equal results for different input data due to aggregation); on the MV destination table, equal blocks coming from different MVs were deduplicated. [#61601](https://github.com/ClickHouse/ClickHouse/pull/61601) ([Sema Checherinda](https://github.com/CheSema)).
+* Functions `bitShiftLeft` and `bitShiftRight` return an error for out-of-bounds shift positions [#65838](https://github.com/ClickHouse/ClickHouse/pull/65838) ([Pablo Marcos](https://github.com/pamarcos)).

 #### New Feature

 * Add `ASOF JOIN` support for the `full_sorting_join` algorithm. [#55051](https://github.com/ClickHouse/ClickHouse/pull/55051) ([vdimir](https://github.com/vdimir)).
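The `bitShiftLeft` / `bitShiftRight` entry added above replaces silent wrapping with an explicit error for out-of-bounds shift positions. A minimal Python sketch of that kind of validation (illustrative only: the function name, signature, and exact boundary rule here are assumptions, not ClickHouse code):

```python
def bit_shift_left(value: int, positions: int, width: int = 64) -> int:
    # Reject out-of-bounds shift positions instead of silently
    # wrapping or widening, mirroring the changelog entry above.
    if not 0 <= positions < width:
        raise ValueError(f"shift position {positions} is out of bounds for a {width}-bit value")
    # Mask back to the fixed width, as a fixed-size integer type would.
    return (value << positions) & ((1 << width) - 1)
```

An in-bounds shift behaves as usual (`bit_shift_left(1, 3)` is `8`), while a shift by the full width or more raises instead of returning an implementation-defined result.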
@@ -599,7 +600,6 @@
 * Functions `bitTest`, `bitTestAll`, and `bitTestAny` now return an error if the specified bit index is out of bounds [#65818](https://github.com/ClickHouse/ClickHouse/pull/65818) ([Pablo Marcos](https://github.com/pamarcos)).
 * Setting `join_any_take_last_row` is supported in any query with hash join. [#65820](https://github.com/ClickHouse/ClickHouse/pull/65820) ([vdimir](https://github.com/vdimir)).
 * Better handling of join conditions involving `IS NULL` checks (for example `ON (a = b AND (a IS NOT NULL) AND (b IS NOT NULL) ) OR ( (a IS NULL) AND (b IS NULL) )` is rewritten to `ON a <=> b`); fix an incorrect optimization when conditions other than `IS NULL` are present. [#65835](https://github.com/ClickHouse/ClickHouse/pull/65835) ([vdimir](https://github.com/vdimir)).
-* Functions `bitShiftLeft` and `bitShiftRight` return an error for out-of-bounds shift positions [#65838](https://github.com/ClickHouse/ClickHouse/pull/65838) ([Pablo Marcos](https://github.com/pamarcos)).
 * Fix growing memory usage in S3Queue. [#65839](https://github.com/ClickHouse/ClickHouse/pull/65839) ([Kseniia Sumarokova](https://github.com/kssenii)).
 * Fix tie handling in `arrayAUC` to match sklearn. [#65840](https://github.com/ClickHouse/ClickHouse/pull/65840) ([gabrielmcg44](https://github.com/gabrielmcg44)).
 * Fix possible issues with MySQL server protocol TLS connections. [#65917](https://github.com/ClickHouse/ClickHouse/pull/65917) ([Azat Khuzhin](https://github.com/azat)).
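The `arrayAUC` fix above is about ties: sklearn's `roc_auc_score` assigns tied scores their average rank, which follows from the Mann-Whitney U formulation of ROC AUC. A pure-Python sketch of that convention (an illustration of the tie handling, not ClickHouse's implementation):

```python
def array_auc(scores, labels):
    # ROC AUC via the rank-sum (Mann-Whitney U) statistic.
    # Tied scores get their average rank, which is what makes the
    # result agree with sklearn's roc_auc_score.
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank of the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    pos_rank_sum = sum(r for r, y in zip(ranks, labels) if y == 1)
    n_pos = sum(labels)
    n_neg = n - n_pos
    return (pos_rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

For example, `array_auc([0.1, 0.4, 0.4, 0.8], [0, 0, 1, 1])` returns `0.875`: the tied pair at score `0.4` contributes half a concordant pair, matching sklearn.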
@@ -88,6 +88,7 @@ string (TOUPPER ${CMAKE_BUILD_TYPE} CMAKE_BUILD_TYPE_UC)
 list(REVERSE CMAKE_FIND_LIBRARY_SUFFIXES)

 option (ENABLE_FUZZING "Fuzzy testing using libfuzzer" OFF)
+option (ENABLE_FUZZER_TEST "Build testing fuzzers in order to test libFuzzer functionality" OFF)

 if (ENABLE_FUZZING)
     # Also set WITH_COVERAGE=1 for better fuzzing process
@@ -47,6 +47,7 @@ Upcoming meetups
 * [Dubai Meetup](https://www.meetup.com/clickhouse-dubai-meetup-group/events/303096989/) - November 21
 * [Paris Meetup](https://www.meetup.com/clickhouse-france-user-group/events/303096434) - November 26
 * [Amsterdam Meetup](https://www.meetup.com/clickhouse-netherlands-user-group/events/303638814) - December 3
+* [Stockholm Meetup](https://www.meetup.com/clickhouse-stockholm-user-group/events/304382411) - December 9
 * [New York Meetup](https://www.meetup.com/clickhouse-new-york-user-group/events/304268174) - December 9
 * [San Francisco Meetup](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/304286951/) - December 12
@@ -14,9 +14,10 @@ The following versions of ClickHouse server are currently supported with security updates:

 | Version | Supported |
 |:-|:-|
+| 24.10 | ✔️ |
 | 24.9 | ✔️ |
 | 24.8 | ✔️ |
-| 24.7 | ✔️ |
+| 24.7 | ❌ |
 | 24.6 | ❌ |
 | 24.5 | ❌ |
 | 24.4 | ❌ |
contrib/SimSIMD (vendored)
@@ -1 +1 @@
-Subproject commit 935fef2964bc38e995c5f465b42259a35b8cf0d3
+Subproject commit ee3c9c9c00b51645f62a1a9e99611b78c0052a21
@@ -1,4 +1,8 @@
-set(SIMSIMD_PROJECT_DIR "${ClickHouse_SOURCE_DIR}/contrib/SimSIMD")
-add_library(_simsimd INTERFACE)
-target_include_directories(_simsimd SYSTEM INTERFACE "${SIMSIMD_PROJECT_DIR}/include")
+# See contrib/usearch-cmake/CMakeLists.txt, why only enabled on x86
+if (ARCH_AMD64)
+    set(SIMSIMD_PROJECT_DIR "${ClickHouse_SOURCE_DIR}/contrib/SimSIMD")
+    set(SIMSIMD_SRCS ${SIMSIMD_PROJECT_DIR}/c/lib.c)
+    add_library(_simsimd ${SIMSIMD_SRCS})
+    target_include_directories(_simsimd SYSTEM PUBLIC "${SIMSIMD_PROJECT_DIR}/include")
+    target_compile_definitions(_simsimd PUBLIC SIMSIMD_DYNAMIC_DISPATCH)
+endif()
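The build above compiles SimSIMD with `SIMSIMD_DYNAMIC_DISPATCH`, meaning the SIMD kernel is selected at runtime from the CPU's feature flags rather than fixed at compile time, so one binary can use the widest instruction set each machine offers. A conceptual Python sketch of that pattern (the flag and kernel names are illustrative, not SimSIMD's actual API):

```python
def pick_kernel(cpu_flags: set) -> str:
    # Probe from the widest instruction set down to a portable
    # serial fallback; one binary then runs well on a heterogeneous
    # fleet instead of being pinned to the build machine's ISA.
    preference = [
        ("avx512", "dot_avx512"),
        ("avx2", "dot_avx2"),
        ("sse42", "dot_sse42"),
    ]
    for flag, kernel in preference:
        if flag in cpu_flags:
            return kernel
    return "dot_serial"
```

Compile-time dispatch, by contrast, would bake in the single kernel chosen when the binary was built.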
contrib/usearch (vendored)
@@ -1 +1 @@
-Subproject commit 53799b84ca9ad708b060d0b1cfa5f039371721cd
+Subproject commit 7efe8b710c9831bfe06573b1df0fad001b04a2b5
@@ -6,12 +6,63 @@ target_include_directories(_usearch SYSTEM INTERFACE ${USEARCH_PROJECT_DIR}/include)
 target_link_libraries(_usearch INTERFACE _fp16)
 target_compile_definitions(_usearch INTERFACE USEARCH_USE_FP16LIB)

-# target_compile_definitions(_usearch INTERFACE USEARCH_USE_SIMSIMD)
-# ^^ simsimd is not enabled at the moment. Reasons:
-# - Vectorization is important for raw scans but not so much for HNSW. We use usearch only for HNSW.
-# - Simsimd does compile-time dispatch (choice of SIMD kernels determined by capabilities of the build machine) or dynamic dispatch (SIMD
-#   kernels chosen at runtime based on cpuid instruction). Since current builds are limited to SSE 4.2 (x86) and NEON (ARM), the speedup of
-#   the former would be moderate compared to AVX-512 / SVE. The latter is at the moment too fragile with respect to portability across x86
-#   and ARM machines ... certain combinations of quantizations / distance functions / SIMD instructions are not implemented at the moment.
+# Only x86 for now. On ARM, the linker goes down in flames. To make SimSIMD compile, I had to remove macro checks in SimSIMD
+# for AVX512 (x86, worked nicely) and __ARM_BF16_FORMAT_ALTERNATIVE. It is probably because of that.
+if (ARCH_AMD64)
+    target_link_libraries(_usearch INTERFACE _simsimd)
+    target_compile_definitions(_usearch INTERFACE USEARCH_USE_SIMSIMD)
+
+    target_compile_definitions(_usearch INTERFACE USEARCH_CAN_COMPILE_FLOAT16)
+    target_compile_definitions(_usearch INTERFACE USEARCH_CAN_COMPILE_BF16)
+endif ()

 add_library(ch_contrib::usearch ALIAS _usearch)
+
+# Cf. https://github.com/llvm/llvm-project/issues/107810 (though it is not 100% the same stack)
+#
+# LLVM ERROR: Cannot select: 0x7996e7a73150: f32,ch = load<(load (s16) from %ir.22, !tbaa !54231), anyext from bf16> 0x79961cb737c0, 0x7996e7a1a500, undef:i64, ./contrib/SimSIMD/include/simsimd/dot.h:215:1
+#   0x7996e7a1a500: i64 = add 0x79961e770d00, Constant:i64<-16>, ./contrib/SimSIMD/include/simsimd/dot.h:215:1
+#     0x79961e770d00: i64,ch = CopyFromReg 0x79961cb737c0, Register:i64 %4, ./contrib/SimSIMD/include/simsimd/dot.h:215:1
+#       0x7996e7a1ae10: i64 = Register %4
+#     0x7996e7a1b5f0: i64 = Constant<-16>
+#   0x7996e7a1a730: i64 = undef
+# In function: _ZL23simsimd_dot_bf16_serialPKu6__bf16S0_yPd
+# PLEASE submit a bug report to https://github.com/llvm/llvm-project/issues/ and include the crash backtrace.
+# Stack dump:
+# 0. Running pass 'Function Pass Manager' on module 'src/libdbms.a(MergeTreeIndexVectorSimilarity.cpp.o at 2312737440)'.
+# 1. Running pass 'AArch64 Instruction Selection' on function '@_ZL23simsimd_dot_bf16_serialPKu6__bf16S0_yPd'
+# #0 0x00007999e83a63bf llvm::sys::PrintStackTrace(llvm::raw_ostream&, int) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xda63bf)
+# #1 0x00007999e83a44f9 llvm::sys::RunSignalHandlers() (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xda44f9)
+# #2 0x00007999e83a6b00 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xda6b00)
+# #3 0x00007999e6e45320 (/lib/x86_64-linux-gnu/libc.so.6+0x45320)
+# #4 0x00007999e6e9eb1c pthread_kill (/lib/x86_64-linux-gnu/libc.so.6+0x9eb1c)
+# #5 0x00007999e6e4526e raise (/lib/x86_64-linux-gnu/libc.so.6+0x4526e)
+# #6 0x00007999e6e288ff abort (/lib/x86_64-linux-gnu/libc.so.6+0x288ff)
+# #7 0x00007999e82fe0c2 llvm::report_fatal_error(llvm::Twine const&, bool) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xcfe0c2)
+# #8 0x00007999e8c2f8e3 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x162f8e3)
+# #9 0x00007999e8c2ed76 llvm::SelectionDAGISel::SelectCodeCommon(llvm::SDNode*, unsigned char const*, unsigned int) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x162ed76)
+# #10 0x00007999ea1adbcb (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x2badbcb)
+# #11 0x00007999e8c2611f llvm::SelectionDAGISel::DoInstructionSelection() (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x162611f)
+# #12 0x00007999e8c25790 llvm::SelectionDAGISel::CodeGenAndEmitDAG() (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x1625790)
+# #13 0x00007999e8c248de llvm::SelectionDAGISel::SelectAllBasicBlocks(llvm::Function const&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x16248de)
+# #14 0x00007999e8c22934 llvm::SelectionDAGISel::runOnMachineFunction(llvm::MachineFunction&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x1622934)
+# #15 0x00007999e87826b9 llvm::MachineFunctionPass::runOnFunction(llvm::Function&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x11826b9)
+# #16 0x00007999e84f7772 llvm::FPPassManager::runOnFunction(llvm::Function&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xef7772)
+# #17 0x00007999e84fd2f4 llvm::FPPassManager::runOnModule(llvm::Module&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xefd2f4)
+# #18 0x00007999e84f7e9f llvm::legacy::PassManagerImpl::run(llvm::Module&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xef7e9f)
+# #19 0x00007999e99f7d61 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f7d61)
+# #20 0x00007999e99f8c91 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f8c91)
+# #21 0x00007999e99f8b10 llvm::lto::thinBackend(llvm::lto::Config const&, unsigned int, std::function<llvm::Expected<std::unique_ptr<llvm::CachedFileStream, std::default_delete<llvm::CachedFileStream>>> (unsigned int, llvm::Twine const&)>, llvm::Module&, llvm::ModuleSummaryIndex const&, llvm::DenseMap<llvm::StringRef, std::unordered_set<unsigned long, std::hash<unsigned long>, std::equal_to<unsigned long>, std::allocator<unsigned long>>, llvm::DenseMapInfo<llvm::StringRef, void
+# >, llvm::detail::DenseMapPair<llvm::StringRef, std::unordered_set<unsigned long, std::hash<unsigned long>, std::equal_to<unsigned long>, std::allocator<unsigned long>>>> const&, llvm::DenseMap<unsigned long, llvm::GlobalValueSummary*, llvm::DenseMapInfo<unsigned long, void>, llvm::detail::DenseMapPair<unsigned long, llvm::GlobalValueSummary*>> const&, llvm::MapVector<llvm::StringRef, llvm::BitcodeModule, llvm::DenseMap<llvm::StringRef, unsigned int, llvm::DenseMapInfo<llvm::S
+# tringRef, void>, llvm::detail::DenseMapPair<llvm::StringRef, unsigned int>>, llvm::SmallVector<std::pair<llvm::StringRef, llvm::BitcodeModule>, 0u>>*, std::vector<unsigned char, std::allocator<unsigned char>> const&) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f8b10)
+# #22 0x00007999e99f248d (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f248d)
+# #23 0x00007999e99f1cd6 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0x23f1cd6)
+# #24 0x00007999e82c9beb (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xcc9beb)
+# #25 0x00007999e834ebe3 llvm::ThreadPool::processTasks(llvm::ThreadPoolTaskGroup*) (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xd4ebe3)
+# #26 0x00007999e834f704 (/usr/lib/llvm-18/bin/../lib/libLLVM.so.18.1+0xd4f704)
+# #27 0x00007999e6e9ca94 (/lib/x86_64-linux-gnu/libc.so.6+0x9ca94)
+# #28 0x00007999e6f29c3c (/lib/x86_64-linux-gnu/libc.so.6+0x129c3c)
+# clang++-18: error: unable to execute command: Aborted (core dumped)
+# clang++-18: error: linker command failed due to signal (use -v to see invocation)
+# ninja: build stopped: interrupted by user.
@@ -1,7 +1,7 @@
 # The Dockerfile.ubuntu exists for the tests/ci/docker_server.py script
 # If the image is built from Dockerfile.alpine, then the `-alpine` suffix is added automatically,
 # so the only purpose of Dockerfile.ubuntu is to push `latest`, `head` and so on w/o suffixes
-FROM ubuntu:20.04 AS glibc-donor
+FROM ubuntu:22.04 AS glibc-donor
 ARG TARGETARCH

 RUN arch=${TARGETARCH:-amd64} \
@@ -9,7 +9,11 @@ RUN arch=${TARGETARCH:-amd64} \
     amd64) rarch=x86_64 ;; \
     arm64) rarch=aarch64 ;; \
     esac \
-    && ln -s "${rarch}-linux-gnu" /lib/linux-gnu
+    && ln -s "${rarch}-linux-gnu" /lib/linux-gnu \
+    && case $arch in \
+    amd64) ln /lib/linux-gnu/ld-linux-x86-64.so.2 /lib/linux-gnu/ld-2.35.so ;; \
+    arm64) ln /lib/linux-gnu/ld-linux-aarch64.so.1 /lib/linux-gnu/ld-2.35.so ;; \
+    esac


 FROM alpine
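The added `case` hard-links each architecture's glibc dynamic loader under the versioned name `ld-2.35.so`, which the alpine stage expects now that the donor image is `ubuntu:22.04` (glibc 2.35). The arch-to-loader mapping can be sketched as follows (a Python illustration of the shell logic, not part of the image):

```python
def donor_loader(targetarch: str = "amd64") -> str:
    # Docker TARGETARCH -> (glibc triplet, dynamic loader name),
    # matching the two branches of the Dockerfile case statement.
    loaders = {
        "amd64": ("x86_64", "ld-linux-x86-64.so.2"),
        "arm64": ("aarch64", "ld-linux-aarch64.so.1"),
    }
    rarch, ld = loaders[targetarch]
    return f"/lib/{rarch}-linux-gnu/{ld}"
```

The loader's real filename differs per architecture, so pinning both to one glibc-versioned name lets the later `COPY --from=glibc-donor` use a single path.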
@@ -20,7 +24,7 @@ ENV LANG=en_US.UTF-8 \
     TZ=UTC \
     CLICKHOUSE_CONFIG=/etc/clickhouse-server/config.xml

-COPY --from=glibc-donor /lib/linux-gnu/libc.so.6 /lib/linux-gnu/libdl.so.2 /lib/linux-gnu/libm.so.6 /lib/linux-gnu/libpthread.so.0 /lib/linux-gnu/librt.so.1 /lib/linux-gnu/libnss_dns.so.2 /lib/linux-gnu/libnss_files.so.2 /lib/linux-gnu/libresolv.so.2 /lib/linux-gnu/ld-2.31.so /lib/
+COPY --from=glibc-donor /lib/linux-gnu/libc.so.6 /lib/linux-gnu/libdl.so.2 /lib/linux-gnu/libm.so.6 /lib/linux-gnu/libpthread.so.0 /lib/linux-gnu/librt.so.1 /lib/linux-gnu/libnss_dns.so.2 /lib/linux-gnu/libnss_files.so.2 /lib/linux-gnu/libresolv.so.2 /lib/linux-gnu/ld-2.35.so /lib/
 COPY --from=glibc-donor /etc/nsswitch.conf /etc/
 COPY entrypoint.sh /entrypoint.sh
@@ -34,7 +38,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.9.2.42"
+ARG VERSION="24.10.1.2812"
 ARG PACKAGES="clickhouse-keeper"
 ARG DIRECT_DOWNLOAD_URLS=""
@@ -35,7 +35,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="24.9.2.42"
+ARG VERSION="24.10.1.2812"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
 ARG DIRECT_DOWNLOAD_URLS=""
@@ -1,4 +1,4 @@
-FROM ubuntu:20.04
+FROM ubuntu:22.04

 # see https://github.com/moby/moby/issues/4032#issuecomment-192327844
 # It could be removed after we move on a version 23:04+
@@ -28,7 +28,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="24.9.2.42"
+ARG VERSION="24.10.1.2812"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 #docker-official-library:off
@@ -20,6 +20,7 @@ For more information and documentation see https://clickhouse.com/.

 - The amd64 image requires support for [SSE3 instructions](https://en.wikipedia.org/wiki/SSE3). Virtually all x86 CPUs after 2005 support SSE3.
 - The arm64 image requires support for the [ARMv8.2-A architecture](https://en.wikipedia.org/wiki/AArch64#ARMv8.2-A) and additionally the Load-Acquire RCpc register. The register is optional in version ARMv8.2-A and mandatory in [ARMv8.3-A](https://en.wikipedia.org/wiki/AArch64#ARMv8.3-A). Supported in Graviton >=2, Azure and GCP instances. Examples for unsupported devices are Raspberry Pi 4 (ARMv8.0-A) and Jetson AGX Xavier/Orin (ARMv8.2-A).
+- Since ClickHouse 24.11, the Ubuntu images use `ubuntu:22.04` as their base image. This requires a Docker version >= `20.10.10` containing this [patch](https://github.com/moby/moby/commit/977283509f75303bc6612665a04abf76ff1d2468). As a workaround you could use `docker run [--privileged | --security-opt seccomp=unconfined]` instead, however that has security implications.

 ## How to use this image
|
@ -33,8 +33,6 @@ RUN apt-get update \
|
|||||||
COPY requirements.txt /
|
COPY requirements.txt /
|
||||||
RUN pip3 install --no-cache-dir -r /requirements.txt
|
RUN pip3 install --no-cache-dir -r /requirements.txt
|
||||||
|
|
||||||
ENV FUZZER_ARGS="-max_total_time=60"
|
|
||||||
|
|
||||||
SHELL ["/bin/bash", "-c"]
|
SHELL ["/bin/bash", "-c"]
|
||||||
|
|
||||||
# docker run --network=host --volume <workspace>:/workspace -e PR_TO_TEST=<> -e SHA_TO_TEST=<> clickhouse/libfuzzer
|
# docker run --network=host --volume <workspace>:/workspace -e PR_TO_TEST=<> -e SHA_TO_TEST=<> clickhouse/libfuzzer
|
||||||
|
@@ -1,16 +0,0 @@
-# Since right now we can't set volumes to the docker during build, we split building the container in stages:
-# 1. build base container
-# 2. run base container with mounted volumes
-# 3. commit container as image
-FROM ubuntu:20.04 as clickhouse-test-runner-base
-
-# A volume where the directory with clickhouse packages is to be mounted,
-# for later installing.
-VOLUME /packages
-
-CMD apt-get update ;\
-    DEBIAN_FRONTEND=noninteractive \
-    apt install -y /packages/clickhouse-common-static_*.deb \
-    /packages/clickhouse-client_*.deb \
-    && apt-get clean \
-    && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
docs/changelogs/v24.10.1.2812-stable.md (new file, 412 lines)
File diff suppressed because one or more lines are too long

docs/changelogs/v24.3.13.40-lts.md (new file, 31 lines)
@@ -0,0 +1,31 @@
+---
+sidebar_position: 1
+sidebar_label: 2024
+---
+
+# 2024 Changelog
+
+### ClickHouse release v24.3.13.40-lts (7acabd77389) FIXME as compared to v24.3.12.75-lts (7cb5dff8019)
+
+#### Bug Fix (user-visible misbehavior in an official stable release)
+* Backported in [#63976](https://github.com/ClickHouse/ClickHouse/issues/63976): Fix intersecting parts when restarting after a drop range. [#63202](https://github.com/ClickHouse/ClickHouse/pull/63202) ([Han Fei](https://github.com/hanfei1991)).
+* Backported in [#71482](https://github.com/ClickHouse/ClickHouse/issues/71482): Fix `Content-Encoding` not sent in some compressed responses. [#64802](https://github.com/ClickHouse/ClickHouse/issues/64802). [#68975](https://github.com/ClickHouse/ClickHouse/pull/68975) ([Konstantin Bogdanov](https://github.com/thevar1able)).
+* Backported in [#70451](https://github.com/ClickHouse/ClickHouse/issues/70451): Fix crash during insertion into FixedString column in PostgreSQL engine. [#69584](https://github.com/ClickHouse/ClickHouse/pull/69584) ([Pavel Kruglov](https://github.com/Avogar)).
+* Backported in [#70619](https://github.com/ClickHouse/ClickHouse/issues/70619): Fix server segfault on creating a materialized view with two selects and an `INTERSECT`, e.g. `CREATE MATERIALIZED VIEW v0 AS (SELECT 1) INTERSECT (SELECT 1);`. [#70264](https://github.com/ClickHouse/ClickHouse/pull/70264) ([Konstantin Bogdanov](https://github.com/thevar1able)).
+* Backported in [#70877](https://github.com/ClickHouse/ClickHouse/issues/70877): Fix table creation with `CREATE ... AS table_function()` with database `Replicated` and unavailable table function source on secondary replica. [#70511](https://github.com/ClickHouse/ClickHouse/pull/70511) ([Kseniia Sumarokova](https://github.com/kssenii)).
+* Backported in [#70571](https://github.com/ClickHouse/ClickHouse/issues/70571): Ignore all output on async insert with `wait_for_async_insert=1`. Closes [#62644](https://github.com/ClickHouse/ClickHouse/issues/62644). [#70530](https://github.com/ClickHouse/ClickHouse/pull/70530) ([Konstantin Bogdanov](https://github.com/thevar1able)).
+* Backported in [#71146](https://github.com/ClickHouse/ClickHouse/issues/71146): Ignore frozen_metadata.txt while traversing the shadow directory from system.remote_data_paths. [#70590](https://github.com/ClickHouse/ClickHouse/pull/70590) ([Aleksei Filatov](https://github.com/aalexfvk)).
+* Backported in [#70682](https://github.com/ClickHouse/ClickHouse/issues/70682): Fix creation of stateful window functions on misaligned memory. [#70631](https://github.com/ClickHouse/ClickHouse/pull/70631) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#71113](https://github.com/ClickHouse/ClickHouse/issues/71113): Fix a crash and a leak in AggregateFunctionGroupArraySorted. [#70820](https://github.com/ClickHouse/ClickHouse/pull/70820) ([Michael Kolupaev](https://github.com/al13n321)).
+* Backported in [#70990](https://github.com/ClickHouse/ClickHouse/issues/70990): Fix a logical error due to negative zeros in the two-level hash table. This closes [#70973](https://github.com/ClickHouse/ClickHouse/issues/70973). [#70979](https://github.com/ClickHouse/ClickHouse/pull/70979) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
+* Backported in [#71246](https://github.com/ClickHouse/ClickHouse/issues/71246): Fixed named sessions not being closed and hanging forever under certain circumstances. [#70998](https://github.com/ClickHouse/ClickHouse/pull/70998) ([Márcio Martins](https://github.com/marcio-absmartly)).
+* Backported in [#71371](https://github.com/ClickHouse/ClickHouse/issues/71371): Add try/catch to data part destructors to avoid terminate. [#71364](https://github.com/ClickHouse/ClickHouse/pull/71364) ([alesapin](https://github.com/alesapin)).
+* Backported in [#71594](https://github.com/ClickHouse/ClickHouse/issues/71594): Prevent a crash in SortCursor with 0 columns (old analyzer). [#71494](https://github.com/ClickHouse/ClickHouse/pull/71494) ([Raúl Marín](https://github.com/Algunenano)).
+
+#### NOT FOR CHANGELOG / INSIGNIFICANT
+
+* Backported in [#71022](https://github.com/ClickHouse/ClickHouse/issues/71022): Fix dropping of file cache in CHECK query in case of enabled transactions. [#69256](https://github.com/ClickHouse/ClickHouse/pull/69256) ([Anton Popov](https://github.com/CurtizJ)).
+* Backported in [#70384](https://github.com/ClickHouse/ClickHouse/issues/70384): CI: Enable Integration Tests for backport PRs. [#70329](https://github.com/ClickHouse/ClickHouse/pull/70329) ([Max Kainov](https://github.com/maxknv)).
+* Backported in [#70538](https://github.com/ClickHouse/ClickHouse/issues/70538): Remove slow poll() logs in keeper. [#70508](https://github.com/ClickHouse/ClickHouse/pull/70508) ([Raúl Marín](https://github.com/Algunenano)).
+* Backported in [#70971](https://github.com/ClickHouse/ClickHouse/issues/70971): Limit logging of some lines about configs. [#70879](https://github.com/ClickHouse/ClickHouse/pull/70879) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
docs/changelogs/v24.8.6.70-lts.md (new file, 50 lines)
File diff suppressed because one or more lines are too long
@@ -4,9 +4,13 @@ sidebar_position: 50
 sidebar_label: EmbeddedRocksDB
 ---

+import CloudNotSupportedBadge from '@theme/badges/CloudNotSupportedBadge';
+
 # EmbeddedRocksDB Engine

-This engine allows integrating ClickHouse with [rocksdb](http://rocksdb.org/).
+<CloudNotSupportedBadge />
+
+This engine allows integrating ClickHouse with [RocksDB](http://rocksdb.org/).

 ## Creating a Table {#creating-a-table}
|
@ -54,7 +54,7 @@ Parameters:
|
|||||||
- `distance_function`: either `L2Distance` (the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) - the length of a
|
- `distance_function`: either `L2Distance` (the [Euclidean distance](https://en.wikipedia.org/wiki/Euclidean_distance) - the length of a
|
||||||
line between two points in Euclidean space), or `cosineDistance` (the [cosine
|
line between two points in Euclidean space), or `cosineDistance` (the [cosine
|
||||||
distance](https://en.wikipedia.org/wiki/Cosine_similarity#Cosine_distance)- the angle between two non-zero vectors).
|
distance](https://en.wikipedia.org/wiki/Cosine_similarity#Cosine_distance)- the angle between two non-zero vectors).
|
||||||
- `quantization`: either `f64`, `f32`, `f16`, `bf16`, or `i8` for storing the vector with reduced precision (optional, default: `bf16`)
|
- `quantization`: either `f64`, `f32`, `f16`, `bf16`, or `i8` for storing vectors with reduced precision (optional, default: `bf16`)
|
||||||
- `hnsw_max_connections_per_layer`: the number of neighbors per HNSW graph node, also known as `M` in the [HNSW
|
- `hnsw_max_connections_per_layer`: the number of neighbors per HNSW graph node, also known as `M` in the [HNSW
|
||||||
paper](https://doi.org/10.1109/TPAMI.2018.2889473) (optional, default: 32)
|
paper](https://doi.org/10.1109/TPAMI.2018.2889473) (optional, default: 32)
|
||||||
- `hnsw_candidate_list_size_for_construction`: the size of the dynamic candidate list when constructing the HNSW graph, also known as
|
- `hnsw_candidate_list_size_for_construction`: the size of the dynamic candidate list when constructing the HNSW graph, also known as
|
||||||
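The parameters above can be sketched in an index definition. This is an illustration only: the table and column names are placeholders, and only the two mandatory arguments (index method and distance function) are shown, leaving the quantization and HNSW parameters at their documented defaults:

```sql
CREATE TABLE vec_demo
(
    id UInt64,
    vec Array(Float32),
    -- 'hnsw' with L2Distance; bf16 quantization and M=32 apply by default
    INDEX idx vec TYPE vector_similarity('hnsw', 'L2Distance') GRANULARITY 100000000
)
ENGINE = MergeTree
ORDER BY id;
```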
@ -92,8 +92,8 @@ Vector similarity indexes currently support two distance functions:
|
- `cosineDistance`, also called cosine similarity, is the cosine of the angle between two (non-zero) vectors
|
- `cosineDistance`, also called cosine similarity, is the cosine of the angle between two (non-zero) vectors
|
||||||
([Wikipedia](https://en.wikipedia.org/wiki/Cosine_similarity)).
|
([Wikipedia](https://en.wikipedia.org/wiki/Cosine_similarity)).
|
||||||
|
|
||||||
Vector similarity indexes allows storing the vectors in reduced precision formats. Supported scalar kinds are `f64`, `f32`, `f16` or `i8`.
|
Vector similarity indexes allow storing vectors in reduced-precision formats. Supported scalar kinds are `f64`, `f32`, `f16`, `bf16`,
|
||||||
If no scalar kind was specified during index creation, `f16` is used as default.
|
and `i8`. If no scalar kind is specified during index creation, `bf16` is used as the default.
|
||||||
|
|
||||||
For normalized data, `L2Distance` is usually a better choice, otherwise `cosineDistance` is recommended to compensate for scale. If no
|
For normalized data, `L2Distance` is usually a better choice, otherwise `cosineDistance` is recommended to compensate for scale. If no
|
||||||
distance function was specified during index creation, `L2Distance` is used as default.
|
distance function was specified during index creation, `L2Distance` is used as default.
|
||||||
|
@ -9,7 +9,7 @@ sidebar_label: Prometheus protocols
|
## Exposing metrics {#expose}
|
## Exposing metrics {#expose}
|
||||||
|
|
||||||
:::note
|
:::note
|
||||||
ClickHouse Cloud does not currently support connecting to Prometheus. To be notified when this feature is supported, please contact support@clickhouse.com.
|
If you are using ClickHouse Cloud, you can expose metrics to Prometheus using the [Prometheus Integration](/en/integrations/prometheus).
|
||||||
:::
|
:::
|
||||||
|
|
||||||
ClickHouse can expose its own metrics for scraping from Prometheus:
|
ClickHouse can expose its own metrics for scraping from Prometheus:
|
||||||
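Scraping is enabled through the `prometheus` section of the server configuration. A sketch (port and flags follow the documented defaults for this section):

```xml
<prometheus>
    <endpoint>/metrics</endpoint>
    <port>9363</port>
    <metrics>true</metrics>
    <events>true</events>
    <asynchronous_metrics>true</asynchronous_metrics>
</prometheus>
```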
|
@ -65,6 +65,34 @@ sudo rm -f /etc/yum.repos.d/clickhouse.repo
|
|
|
||||||
After that follow the [install guide](../getting-started/install.md#from-rpm-packages)
|
After that follow the [install guide](../getting-started/install.md#from-rpm-packages)
|
||||||
|
|
||||||
|
### You Can't Run Docker Container
|
||||||
|
|
||||||
|
You are running a simple `docker run clickhouse/clickhouse-server` and it crashes with a stack trace similar to the following:
|
||||||
|
|
||||||
|
```
|
||||||
|
$ docker run -it clickhouse/clickhouse-server
|
||||||
|
........
|
||||||
|
2024.11.06 21:04:48.912036 [ 1 ] {} <Information> SentryWriter: Sending crash reports is disabled
|
||||||
|
Poco::Exception. Code: 1000, e.code() = 0, System exception: cannot start thread, Stack trace (when copying this message, always include the lines below):
|
||||||
|
|
||||||
|
0. Poco::ThreadImpl::startImpl(Poco::SharedPtr<Poco::Runnable, Poco::ReferenceCounter, Poco::ReleasePolicy<Poco::Runnable>>) @ 0x00000000157c7b34
|
||||||
|
1. Poco::Thread::start(Poco::Runnable&) @ 0x00000000157c8a0e
|
||||||
|
2. BaseDaemon::initializeTerminationAndSignalProcessing() @ 0x000000000d267a14
|
||||||
|
3. BaseDaemon::initialize(Poco::Util::Application&) @ 0x000000000d2652cb
|
||||||
|
4. DB::Server::initialize(Poco::Util::Application&) @ 0x000000000d128b38
|
||||||
|
5. Poco::Util::Application::run() @ 0x000000001581cfda
|
||||||
|
6. DB::Server::run() @ 0x000000000d1288f0
|
||||||
|
7. Poco::Util::ServerApplication::run(int, char**) @ 0x0000000015825e27
|
||||||
|
8. mainEntryClickHouseServer(int, char**) @ 0x000000000d125b38
|
||||||
|
9. main @ 0x0000000007ea4eee
|
||||||
|
10. ? @ 0x00007f67ff946d90
|
||||||
|
11. ? @ 0x00007f67ff946e40
|
||||||
|
12. _start @ 0x00000000062e802e
|
||||||
|
(version 24.10.1.2812 (official build))
|
||||||
|
```
|
||||||
|
|
||||||
|
The reason is an old Docker daemon with a version lower than `20.10.10`. Fix it either by upgrading the daemon or by running `docker run [--privileged | --security-opt seccomp=unconfined]`. The latter has security implications.
|
||||||
|
|
||||||
## Connecting to the Server {#troubleshooting-accepts-no-connections}
|
## Connecting to the Server {#troubleshooting-accepts-no-connections}
|
||||||
|
|
||||||
Possible issues:
|
Possible issues:
|
||||||
|
@ -25,9 +25,10 @@ Query caches can generally be viewed as transactionally consistent or inconsiste
|
slowly enough that the database only needs to compute the report once (represented by the first `SELECT` query). Further queries can be
|
slowly enough that the database only needs to compute the report once (represented by the first `SELECT` query). Further queries can be
|
||||||
served directly from the query cache. In this example, a reasonable validity period could be 30 min.
|
served directly from the query cache. In this example, a reasonable validity period could be 30 min.
|
||||||
|
|
||||||
Transactionally inconsistent caching is traditionally provided by client tools or proxy packages interacting with the database. As a result,
|
Transactionally inconsistent caching is traditionally provided by client tools or proxy packages (e.g.
|
||||||
the same caching logic and configuration is often duplicated. With ClickHouse's query cache, the caching logic moves to the server side.
|
[chproxy](https://www.chproxy.org/configuration/caching/)) interacting with the database. As a result, the same caching logic and
|
||||||
This reduces maintenance effort and avoids redundancy.
|
configuration is often duplicated. With ClickHouse's query cache, the caching logic moves to the server side. This reduces maintenance
|
||||||
|
effort and avoids redundancy.
|
||||||
|
|
||||||
## Configuration Settings and Usage
|
## Configuration Settings and Usage
|
||||||
|
|
||||||
@ -138,7 +139,10 @@ is only cached if the query runs longer than 5 seconds. It is also possible to s
|
cached - for that use setting [query_cache_min_query_runs](settings/settings.md#query-cache-min-query-runs).
|
cached - for that use setting [query_cache_min_query_runs](settings/settings.md#query-cache-min-query-runs).
|
||||||
|
|
||||||
Entries in the query cache become stale after a certain time period (time-to-live). By default, this period is 60 seconds but a different
|
Entries in the query cache become stale after a certain time period (time-to-live). By default, this period is 60 seconds but a different
|
||||||
value can be specified at session, profile or query level using setting [query_cache_ttl](settings/settings.md#query-cache-ttl).
|
value can be specified at session, profile or query level using setting [query_cache_ttl](settings/settings.md#query-cache-ttl). The query
|
||||||
|
cache evicts entries "lazily", i.e. when an entry becomes stale, it is not immediately removed from the cache. Instead, when a new entry
|
||||||
|
is to be inserted into the query cache, the database checks whether the cache has enough free space for the new entry. If this is not the
|
||||||
|
case, the database tries to remove all stale entries. If the cache still does not have enough free space, the new entry is not inserted.
|
||||||
|
|
||||||
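The TTL behavior described above can be exercised per query. A sketch (the table name is a placeholder; assumes the query cache is enabled on the server):

```sql
-- Cache this result for 5 minutes instead of the default 60 seconds
SELECT count() FROM big_table
SETTINGS use_query_cache = 1, query_cache_ttl = 300;
```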
Entries in the query cache are compressed by default. This reduces the overall memory consumption at the cost of slower writes into / reads
|
Entries in the query cache are compressed by default. This reduces the overall memory consumption at the cost of slower writes into / reads
|
||||||
from the query cache. To disable compression, use setting [query_cache_compress_entries](settings/settings.md#query-cache-compress-entries).
|
from the query cache. To disable compression, use setting [query_cache_compress_entries](settings/settings.md#query-cache-compress-entries).
|
||||||
@ -188,14 +192,9 @@ Also, results of queries with non-deterministic functions are not cached by defa
|
To force caching of results of queries with non-deterministic functions regardless, use setting
|
To force caching of results of queries with non-deterministic functions regardless, use setting
|
||||||
[query_cache_nondeterministic_function_handling](settings/settings.md#query-cache-nondeterministic-function-handling).
|
[query_cache_nondeterministic_function_handling](settings/settings.md#query-cache-nondeterministic-function-handling).
|
||||||
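For example, forcing the result of a non-deterministic query into the cache (a sketch; `'save'` is one of the documented values of this setting, alongside `'throw'` and `'ignore'`):

```sql
SELECT now()
SETTINGS use_query_cache = 1,
         query_cache_nondeterministic_function_handling = 'save';
```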
|
|
||||||
Results of queries that involve system tables, e.g. `system.processes` or `information_schema.tables`, are not cached by default. To force
|
Results of queries that involve system tables (e.g. [system.processes](system-tables/processes.md) or
|
||||||
caching of results of queries with system tables regardless, use setting
|
[information_schema.tables](system-tables/information_schema.md)) are not cached by default. To force caching of results of queries with
|
||||||
[query_cache_system_table_handling](settings/settings.md#query-cache-system-table-handling).
|
system tables regardless, use setting [query_cache_system_table_handling](settings/settings.md#query-cache-system-table-handling).
|
||||||
|
|
||||||
:::note
|
|
||||||
Prior to ClickHouse v23.11, setting 'query_cache_store_results_of_queries_with_nondeterministic_functions = 0 / 1' controlled whether
|
|
||||||
results of queries with non-deterministic results were cached. In newer ClickHouse versions, this setting is obsolete and has no effect.
|
|
||||||
:::
|
|
||||||
|
|
||||||
Finally, entries in the query cache are not shared between users due to security reasons. For example, user A must not be able to bypass a
|
Finally, entries in the query cache are not shared between users due to security reasons. For example, user A must not be able to bypass a
|
||||||
row policy on a table by running the same query as another user B for whom no such policy exists. However, if necessary, cache entries can
|
row policy on a table by running the same query as another user B for whom no such policy exists. However, if necessary, cache entries can
|
||||||
|
@ -131,16 +131,6 @@ Type: UInt64
|
|
|
||||||
Default: 8
|
Default: 8
|
||||||
|
|
||||||
## background_pool_size
|
|
||||||
|
|
||||||
Sets the number of threads performing background merges and mutations for tables with MergeTree engines. You can only increase the number of threads at runtime. To lower the number of threads you have to restart the server. By adjusting this setting, you manage CPU and disk load. Smaller pool size utilizes less CPU and disk resources, but background processes advance slower which might eventually impact query performance.
|
|
||||||
|
|
||||||
Before changing it, please also take a look at related MergeTree settings, such as `number_of_free_entries_in_pool_to_lower_max_size_of_merge` and `number_of_free_entries_in_pool_to_execute_mutation`.
|
|
||||||
|
|
||||||
Type: UInt64
|
|
||||||
|
|
||||||
Default: 16
|
|
||||||
|
|
||||||
## background_schedule_pool_size
|
## background_schedule_pool_size
|
||||||
|
|
||||||
The maximum number of threads that will be used for constantly executing some lightweight periodic operations for replicated tables, Kafka streaming, and DNS cache updates.
|
The maximum number of threads that will be used for constantly executing some lightweight periodic operations for replicated tables, Kafka streaming, and DNS cache updates.
|
||||||
|
@ -19,7 +19,7 @@ Columns:
|
- `column` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Name of a column to which access is granted.
|
- `column` ([Nullable](../../sql-reference/data-types/nullable.md)([String](../../sql-reference/data-types/string.md))) — Name of a column to which access is granted.
|
||||||
|
|
||||||
- `is_partial_revoke` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Logical value. It shows whether some privileges have been revoked. Possible values:
|
- `is_partial_revoke` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Logical value. It shows whether some privileges have been revoked. Possible values:
|
||||||
- `0` — The row describes a partial revoke.
|
- `0` — The row describes a grant.
|
||||||
- `1` — The row describes a grant.
|
- `1` — The row describes a partial revoke.
|
||||||
|
|
||||||
- `grant_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Permission is granted `WITH GRANT OPTION`, see [GRANT](../../sql-reference/statements/grant.md#granting-privilege-syntax).
|
- `grant_option` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Permission is granted `WITH GRANT OPTION`, see [GRANT](../../sql-reference/statements/grant.md#granting-privilege-syntax).
|
||||||
|
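The corrected semantics of `is_partial_revoke` can be checked directly. A sketch query against this table (column selection is illustrative):

```sql
-- 0 = grant, 1 = partial revoke
SELECT user_name, access_type, is_partial_revoke, grant_option
FROM system.grants
LIMIT 5;
```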
@ -512,6 +512,8 @@ The result of operator `<` for values `d1` with underlying type `T1` and `d2` wi
|
- If `T1 = T2 = T`, the result will be `d1.T < d2.T` (underlying values will be compared).
|
- If `T1 = T2 = T`, the result will be `d1.T < d2.T` (underlying values will be compared).
|
||||||
- If `T1 != T2`, the result will be `T1 < T2` (type names will be compared).
|
- If `T1 != T2`, the result will be `T1 < T2` (type names will be compared).
|
||||||
|
|
||||||
|
By default `Dynamic` type is not allowed in `GROUP BY`/`ORDER BY` keys, if you want to use it consider its special comparison rule and enable `allow_suspicious_types_in_group_by`/`allow_suspicious_types_in_order_by` settings.
|
||||||
|
|
||||||
Examples:
|
Examples:
|
||||||
```sql
|
```sql
|
||||||
CREATE TABLE test (d Dynamic) ENGINE=Memory;
|
CREATE TABLE test (d Dynamic) ENGINE=Memory;
|
||||||
@ -535,7 +537,7 @@ SELECT d, dynamicType(d) FROM test;
|
```
|
```
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT d, dynamicType(d) FROM test ORDER BY d;
|
SELECT d, dynamicType(d) FROM test ORDER BY d SETTINGS allow_suspicious_types_in_order_by=1;
|
||||||
```
|
```
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
@ -557,7 +559,7 @@ Example:
|
```sql
|
```sql
|
||||||
CREATE TABLE test (d Dynamic) ENGINE=Memory;
|
CREATE TABLE test (d Dynamic) ENGINE=Memory;
|
||||||
INSERT INTO test VALUES (1::UInt32), (1::Int64), (100::UInt32), (100::Int64);
|
INSERT INTO test VALUES (1::UInt32), (1::Int64), (100::UInt32), (100::Int64);
|
||||||
SELECT d, dynamicType(d) FROM test ORDER by d;
|
SELECT d, dynamicType(d) FROM test ORDER BY d SETTINGS allow_suspicious_types_in_order_by=1;
|
||||||
```
|
```
|
||||||
|
|
||||||
```text
|
```text
|
||||||
@ -570,7 +572,7 @@ SELECT d, dynamicType(d) FROM test ORDER by d;
|
```
|
```
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT d, dynamicType(d) FROM test GROUP by d;
|
SELECT d, dynamicType(d) FROM test GROUP by d SETTINGS allow_suspicious_types_in_group_by=1;
|
||||||
```
|
```
|
||||||
|
|
||||||
```text
|
```text
|
||||||
@ -582,7 +584,7 @@ SELECT d, dynamicType(d) FROM test GROUP by d;
|
└─────┴────────────────┘
|
└─────┴────────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
**Note**: the described comparison rule is not applied during execution of comparison functions like `<`/`>`/`=` and others because of [special work](#using-dynamic-type-in-functions) of functions with `Dynamic` type
|
**Note:** the described comparison rule is not applied during execution of comparison functions like `<`/`>`/`=` and others because of [special work](#using-dynamic-type-in-functions) of functions with `Dynamic` type
|
||||||
|
|
||||||
## Reaching the limit in number of different data types stored inside Dynamic
|
## Reaching the limit in number of different data types stored inside Dynamic
|
||||||
|
|
||||||
|
@ -58,10 +58,10 @@ SELECT json FROM test;
|
└───────────────────────────────────┘
|
└───────────────────────────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
Using CAST from 'String':
|
Using CAST from `String`:
|
||||||
|
|
||||||
```sql
|
```sql
|
||||||
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON as json;
|
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON AS json;
|
||||||
```
|
```
|
||||||
|
|
||||||
```text
|
```text
|
||||||
@ -70,7 +70,47 @@ SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::JSON as json
|
└────────────────────────────────────────────────┘
|
└────────────────────────────────────────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
CAST from `JSON`, named `Tuple`, `Map` and `Object('json')` to `JSON` type will be supported later.
|
Using CAST from `Tuple`:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT (tuple(42 AS b) AS a, [1, 2, 3] AS c, 'Hello, World!' AS d)::JSON AS json;
|
||||||
|
```
|
||||||
|
|
||||||
|
```text
|
||||||
|
┌─json───────────────────────────────────────────┐
|
||||||
|
│ {"a":{"b":42},"c":[1,2,3],"d":"Hello, World!"} │
|
||||||
|
└────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Using CAST from `Map`:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT map('a', map('b', 42), 'c', [1,2,3], 'd', 'Hello, World!')::JSON AS json;
|
||||||
|
```
|
||||||
|
|
||||||
|
```text
|
||||||
|
┌─json───────────────────────────────────────────┐
|
||||||
|
│ {"a":{"b":42},"c":[1,2,3],"d":"Hello, World!"} │
|
||||||
|
└────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Using CAST from deprecated `Object('json')`:
|
||||||
|
|
||||||
|
```sql
|
||||||
|
SELECT '{"a" : {"b" : 42},"c" : [1, 2, 3], "d" : "Hello, World!"}'::Object('json')::JSON AS json;
|
||||||
|
```
|
||||||
|
|
||||||
|
```text
|
||||||
|
┌─json───────────────────────────────────────────┐
|
||||||
|
│ {"a":{"b":42},"c":[1,2,3],"d":"Hello, World!"} │
|
||||||
|
└────────────────────────────────────────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
:::note
|
||||||
|
CAST from `Tuple`/`Map`/`Object('json')` to `JSON` is implemented by serializing the column into a `String` column containing JSON objects and deserializing it back into a `JSON` column.
|
||||||
|
:::
|
||||||
|
|
||||||
|
CAST between `JSON` types with different arguments will be supported later.
|
||||||
|
|
||||||
## Reading JSON paths as subcolumns
|
## Reading JSON paths as subcolumns
|
||||||
|
|
||||||
@ -630,6 +670,28 @@ SELECT arrayJoin(distinctJSONPathsAndTypes(json)) FROM s3('s3://clickhouse-publi
|
└─arrayJoin(distinctJSONPathsAndTypes(json))──────────────────┘
|
└─arrayJoin(distinctJSONPathsAndTypes(json))──────────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
|
## ALTER MODIFY COLUMN to JSON type
|
||||||
|
|
||||||
|
It is possible to alter an existing table and change the column type to the new `JSON` type. Currently, only altering from the `String` type is supported.
|
||||||
|
|
||||||
|
**Example**
|
||||||
|
|
||||||
|
```sql
|
||||||
|
CREATE TABLE test (json String) ENGINE=MergeTree ORDER BY tuple();
|
||||||
|
INSERT INTO test VALUES ('{"a" : 42}'), ('{"a" : 43, "b" : "Hello"}'), ('{"a" : 44, "b" : [1, 2, 3]}'), ('{"c" : "2020-01-01"}');
|
||||||
|
ALTER TABLE test MODIFY COLUMN json JSON;
|
||||||
|
SELECT json, json.a, json.b, json.c FROM test;
|
||||||
|
```
|
||||||
|
|
||||||
|
```text
|
||||||
|
┌─json─────────────────────────┬─json.a─┬─json.b──┬─json.c─────┐
|
||||||
|
│ {"a":"42"} │ 42 │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │
|
||||||
|
│ {"a":"43","b":"Hello"} │ 43 │ Hello │ ᴺᵁᴸᴸ │
|
||||||
|
│ {"a":"44","b":["1","2","3"]} │ 44 │ [1,2,3] │ ᴺᵁᴸᴸ │
|
||||||
|
│ {"c":"2020-01-01"} │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 2020-01-01 │
|
||||||
|
└──────────────────────────────┴────────┴─────────┴────────────┘
|
||||||
|
```
|
||||||
|
|
||||||
## Tips for better usage of the JSON type
|
## Tips for better usage of the JSON type
|
||||||
|
|
||||||
Before creating `JSON` column and loading data into it, consider the following tips:
|
Before creating `JSON` column and loading data into it, consider the following tips:
|
||||||
|
@ -441,6 +441,8 @@ SELECT v, variantType(v) FROM test ORDER by v;
|
└─────┴────────────────┘
|
└─────┴────────────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
|
**Note:** by default, the `Variant` type is not allowed in `GROUP BY`/`ORDER BY` keys. If you want to use it, consider its special comparison rule and enable the `allow_suspicious_types_in_group_by`/`allow_suspicious_types_in_order_by` settings.
|
||||||
|
|
||||||
## JSONExtract functions with Variant
|
## JSONExtract functions with Variant
|
||||||
|
|
||||||
All `JSONExtract*` functions support `Variant` type:
|
All `JSONExtract*` functions support `Variant` type:
|
||||||
|
@ -291,7 +291,7 @@ All missed values of `expr` column will be filled sequentially and other columns
|
To fill multiple columns, add `WITH FILL` modifier with optional parameters after each field name in `ORDER BY` section.
|
To fill multiple columns, add `WITH FILL` modifier with optional parameters after each field name in `ORDER BY` section.
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
ORDER BY expr [WITH FILL] [FROM const_expr] [TO const_expr] [STEP const_numeric_expr], ... exprN [WITH FILL] [FROM expr] [TO expr] [STEP numeric_expr]
|
ORDER BY expr [WITH FILL] [FROM const_expr] [TO const_expr] [STEP const_numeric_expr] [STALENESS const_numeric_expr], ... exprN [WITH FILL] [FROM expr] [TO expr] [STEP numeric_expr] [STALENESS numeric_expr]
|
||||||
[INTERPOLATE [(col [AS expr], ... colN [AS exprN])]]
|
[INTERPOLATE [(col [AS expr], ... colN [AS exprN])]]
|
||||||
```
|
```
|
||||||
|
|
||||||
@ -300,6 +300,7 @@ When `FROM const_expr` not defined sequence of filling use minimal `expr` field
|
When `TO const_expr` is not defined, the fill sequence uses the maximum `expr` field value from `ORDER BY`.
|
||||||
When `STEP const_numeric_expr` is defined, `const_numeric_expr` is interpreted as-is for numeric types, as days for the Date type, and as seconds for the DateTime type. It also supports the [INTERVAL](https://clickhouse.com/docs/en/sql-reference/data-types/special-data-types/interval/) data type representing time and date intervals.
|
||||||
When `STEP const_numeric_expr` is omitted, the fill sequence uses `1.0` for numeric types, `1 day` for the Date type, and `1 second` for the DateTime type.
|
||||||
|
When `STALENESS const_numeric_expr` is defined, the query will generate rows until the difference from the previous row in the original data exceeds `const_numeric_expr`.
|
||||||
`INTERPOLATE` can be applied to columns not participating in `ORDER BY WITH FILL`. Such columns are filled based on the previous field values by applying `expr`. If `expr` is not present, the previous value is repeated. Omitting the list results in including all allowed columns.
|
||||||
|
|
||||||
Example of a query without `WITH FILL`:
|
Example of a query without `WITH FILL`:
|
||||||
@ -497,6 +498,64 @@ Result:
|
└────────────┴────────────┴──────────┘
|
└────────────┴────────────┴──────────┘
|
||||||
```
|
```
|
||||||
|
|
||||||
|
Example of a query without `STALENESS`:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT number as key, 5 * number value, 'original' AS source
|
||||||
|
FROM numbers(16) WHERE key % 5 == 0
|
||||||
|
ORDER BY key WITH FILL;
|
||||||
|
```
|
||||||
|
|
||||||
|
Result:
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─key─┬─value─┬─source───┐
|
||||||
|
1. │ 0 │ 0 │ original │
|
||||||
|
2. │ 1 │ 0 │ │
|
||||||
|
3. │ 2 │ 0 │ │
|
||||||
|
4. │ 3 │ 0 │ │
|
||||||
|
5. │ 4 │ 0 │ │
|
||||||
|
6. │ 5 │ 25 │ original │
|
||||||
|
7. │ 6 │ 0 │ │
|
||||||
|
8. │ 7 │ 0 │ │
|
||||||
|
9. │ 8 │ 0 │ │
|
||||||
|
10. │ 9 │ 0 │ │
|
||||||
|
11. │ 10 │ 50 │ original │
|
||||||
|
12. │ 11 │ 0 │ │
|
||||||
|
13. │ 12 │ 0 │ │
|
||||||
|
14. │ 13 │ 0 │ │
|
||||||
|
15. │ 14 │ 0 │ │
|
||||||
|
16. │ 15 │ 75 │ original │
|
||||||
|
└─────┴───────┴──────────┘
|
||||||
|
```
|
||||||
|
|
||||||
|
Same query after applying `STALENESS 3`:
|
||||||
|
|
||||||
|
``` sql
|
||||||
|
SELECT number as key, 5 * number value, 'original' AS source
|
||||||
|
FROM numbers(16) WHERE key % 5 == 0
|
||||||
|
ORDER BY key WITH FILL STALENESS 3;
|
||||||
|
```
|
||||||
|
|
||||||
|
Result:
|
||||||
|
|
||||||
|
``` text
|
||||||
|
┌─key─┬─value─┬─source───┐
|
||||||
|
1. │ 0 │ 0 │ original │
|
||||||
|
2. │ 1 │ 0 │ │
|
||||||
|
3. │ 2 │ 0 │ │
|
||||||
|
4. │ 5 │ 25 │ original │
|
||||||
|
5. │ 6 │ 0 │ │
|
||||||
|
6. │ 7 │ 0 │ │
|
||||||
|
7. │ 10 │ 50 │ original │
|
||||||
|
8. │ 11 │ 0 │ │
|
||||||
|
9. │ 12 │ 0 │ │
|
||||||
|
10. │ 15 │ 75 │ original │
|
||||||
|
11. │ 16 │ 0 │ │
|
||||||
|
12. │ 17 │ 0 │ │
|
||||||
|
└─────┴───────┴──────────┘
|
||||||
|
```
|
||||||
|
|
||||||
Example of a query without `INTERPOLATE`:
|
Example of a query without `INTERPOLATE`:
|
||||||
|
|
||||||
``` sql
|
``` sql
|
||||||
|
@ -1353,9 +1353,11 @@ try
|
}
|
}
|
||||||
|
|
||||||
FailPointInjection::enableFromGlobalConfig(config());
|
FailPointInjection::enableFromGlobalConfig(config());
|
||||||
|
#endif
|
||||||
|
|
||||||
memory_worker.start();
|
memory_worker.start();
|
||||||
|
|
||||||
|
#if defined(OS_LINUX)
|
||||||
int default_oom_score = 0;
|
int default_oom_score = 0;
|
||||||
|
|
||||||
#if !defined(NDEBUG)
|
#if !defined(NDEBUG)
|
||||||
|
@ -608,7 +608,7 @@ AuthResult AccessControl::authenticate(const Credentials & credentials, const Po
|
}
|
}
|
||||||
catch (...)
|
catch (...)
|
||||||
{
|
{
|
||||||
tryLogCurrentException(getLogger(), "from: " + address.toString() + ", user: " + credentials.getUserName() + ": Authentication failed");
|
tryLogCurrentException(getLogger(), "from: " + address.toString() + ", user: " + credentials.getUserName() + ": Authentication failed", LogsLevel::information);
|
||||||
|
|
||||||
WriteBufferFromOwnString message;
|
WriteBufferFromOwnString message;
|
||||||
message << credentials.getUserName() << ": Authentication failed: password is incorrect, or there is no user with such name.";
|
message << credentials.getUserName() << ": Authentication failed: password is incorrect, or there is no user with such name.";
|
||||||
@ -622,8 +622,9 @@ AuthResult AccessControl::authenticate(const Credentials & credentials, const Po
|
<< "and deleting this file will reset the password.\n"
|
<< "and deleting this file will reset the password.\n"
|
||||||
<< "See also /etc/clickhouse-server/users.xml on the server where ClickHouse is installed.\n\n";
|
<< "See also /etc/clickhouse-server/users.xml on the server where ClickHouse is installed.\n\n";
|
||||||
|
|
||||||
/// We use the same message for all authentication failures because we don't want to give away any unnecessary information for security reasons,
|
/// We use the same message for all authentication failures because we don't want to give away any unnecessary information for security reasons.
|
||||||
/// only the log will show the exact reason.
|
/// Only the log ((*), above) will show the exact reason. Note that (*) logs at information level instead of the default error level as
|
||||||
|
/// authentication failures are not an unusual event.
|
||||||
throw Exception(PreformattedMessage{message.str(),
|
throw Exception(PreformattedMessage{message.str(),
|
||||||
"{}: Authentication failed: password is incorrect, or there is no user with such name",
|
"{}: Authentication failed: password is incorrect, or there is no user with such name",
|
||||||
std::vector<std::string>{credentials.getUserName()}},
|
std::vector<std::string>{credentials.getUserName()}},
|
||||||
|
@ -387,7 +387,7 @@ template <typename Value, bool return_float, bool interpolated>
|
using FuncQuantileExactWeighted = AggregateFunctionQuantile<
|
using FuncQuantileExactWeighted = AggregateFunctionQuantile<
|
||||||
Value,
|
Value,
|
||||||
QuantileExactWeighted<Value, interpolated>,
|
QuantileExactWeighted<Value, interpolated>,
|
||||||
NameQuantileExactWeighted,
|
std::conditional_t<interpolated, NameQuantileExactWeightedInterpolated, NameQuantileExactWeighted>,
|
||||||
true,
|
true,
|
||||||
std::conditional_t<return_float, Float64, void>,
|
std::conditional_t<return_float, Float64, void>,
|
||||||
false,
|
false,
|
||||||
@ -396,7 +396,7 @@ template <typename Value, bool return_float, bool interpolated>
|
using FuncQuantilesExactWeighted = AggregateFunctionQuantile<
|
using FuncQuantilesExactWeighted = AggregateFunctionQuantile<
|
||||||
Value,
|
Value,
|
||||||
QuantileExactWeighted<Value, interpolated>,
|
QuantileExactWeighted<Value, interpolated>,
|
||||||
NameQuantilesExactWeighted,
|
std::conditional_t<interpolated, NameQuantilesExactWeightedInterpolated, NameQuantilesExactWeighted>,
|
||||||
true,
|
true,
|
||||||
std::conditional_t<return_float, Float64, void>,
|
std::conditional_t<return_float, Float64, void>,
|
||||||
true,
|
true,
|
||||||
|
@ -1,2 +1,2 @@
|
clickhouse_add_executable(aggregate_function_state_deserialization_fuzzer aggregate_function_state_deserialization_fuzzer.cpp ${SRCS})
|
clickhouse_add_executable(aggregate_function_state_deserialization_fuzzer aggregate_function_state_deserialization_fuzzer.cpp ${SRCS})
|
||||||
target_link_libraries(aggregate_function_state_deserialization_fuzzer PRIVATE clickhouse_aggregate_functions)
|
target_link_libraries(aggregate_function_state_deserialization_fuzzer PRIVATE clickhouse_aggregate_functions dbms)
|
||||||
|
@ -602,9 +602,21 @@ public:
|
return projection_columns;
|
return projection_columns;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/// Returns true if query node is resolved, false otherwise
|
||||||
|
bool isResolved() const
|
||||||
|
{
|
||||||
|
return !projection_columns.empty();
|
||||||
|
}
|
||||||
|
|
||||||
/// Resolve query node projection columns
|
/// Resolve query node projection columns
|
||||||
void resolveProjectionColumns(NamesAndTypes projection_columns_value);
|
void resolveProjectionColumns(NamesAndTypes projection_columns_value);
|
||||||
|
|
||||||
|
/// Clear query node projection columns
|
||||||
|
void clearProjectionColumns()
|
||||||
|
{
|
||||||
|
projection_columns.clear();
|
||||||
|
}
|
||||||
|
|
||||||
/// Remove unused projection columns
|
/// Remove unused projection columns
|
||||||
void removeUnusedProjectionColumns(const std::unordered_set<std::string> & used_projection_columns);
|
void removeUnusedProjectionColumns(const std::unordered_set<std::string> & used_projection_columns);
|
||||||
|
|
||||||
|
@@ -498,6 +498,8 @@ QueryTreeNodePtr QueryTreeBuilder::buildSortList(const ASTPtr & order_by_express
             sort_node->getFillTo() = buildExpression(order_by_element.getFillTo(), context);
         if (order_by_element.getFillStep())
             sort_node->getFillStep() = buildExpression(order_by_element.getFillStep(), context);
+        if (order_by_element.getFillStaleness())
+            sort_node->getFillStaleness() = buildExpression(order_by_element.getFillStaleness(), context);

         list_node->getNodes().push_back(std::move(sort_node));
     }
@@ -103,6 +103,8 @@ namespace Setting
     extern const SettingsBool single_join_prefer_left_table;
     extern const SettingsBool transform_null_in;
     extern const SettingsUInt64 use_structure_from_insertion_table_in_table_functions;
+    extern const SettingsBool allow_suspicious_types_in_group_by;
+    extern const SettingsBool allow_suspicious_types_in_order_by;
     extern const SettingsBool use_concurrency_control;
 }

@@ -437,8 +439,13 @@ ProjectionName QueryAnalyzer::calculateWindowProjectionName(const QueryTreeNodeP
     return buffer.str();
 }

-ProjectionName QueryAnalyzer::calculateSortColumnProjectionName(const QueryTreeNodePtr & sort_column_node, const ProjectionName & sort_expression_projection_name,
-    const ProjectionName & fill_from_expression_projection_name, const ProjectionName & fill_to_expression_projection_name, const ProjectionName & fill_step_expression_projection_name)
+ProjectionName QueryAnalyzer::calculateSortColumnProjectionName(
+    const QueryTreeNodePtr & sort_column_node,
+    const ProjectionName & sort_expression_projection_name,
+    const ProjectionName & fill_from_expression_projection_name,
+    const ProjectionName & fill_to_expression_projection_name,
+    const ProjectionName & fill_step_expression_projection_name,
+    const ProjectionName & fill_staleness_expression_projection_name)
 {
     auto & sort_node_typed = sort_column_node->as<SortNode &>();

@@ -468,6 +475,9 @@ ProjectionName QueryAnalyzer::calculateSortColumnProjectionName(const QueryTreeN

     if (sort_node_typed.hasFillStep())
         sort_column_projection_name_buffer << " STEP " << fill_step_expression_projection_name;
+
+    if (sort_node_typed.hasFillStaleness())
+        sort_column_projection_name_buffer << " STALENESS " << fill_staleness_expression_projection_name;
 }

 return sort_column_projection_name_buffer.str();
@@ -2958,27 +2968,29 @@ ProjectionNames QueryAnalyzer::resolveFunction(QueryTreeNodePtr & node, Identifi
             /// Replace storage with values storage of insertion block
             if (StoragePtr storage = scope.context->getViewSource())
             {
-                QueryTreeNodePtr table_expression;
-                /// Process possibly nested sub-selects
-                for (auto * query_node = in_second_argument->as<QueryNode>(); query_node; query_node = table_expression->as<QueryNode>())
-                    table_expression = extractLeftTableExpression(query_node->getJoinTree());
+                QueryTreeNodePtr table_expression = in_second_argument;

-                if (table_expression)
+                /// Process possibly nested sub-selects
+                while (table_expression)
                 {
-                    if (auto * query_table_node = table_expression->as<TableNode>())
-                    {
-                        if (query_table_node->getStorageID().getFullNameNotQuoted() == storage->getStorageID().getFullNameNotQuoted())
-                        {
-                            auto replacement_table_expression = std::make_shared<TableNode>(storage, scope.context);
-                            if (std::optional<TableExpressionModifiers> table_expression_modifiers = query_table_node->getTableExpressionModifiers())
-                                replacement_table_expression->setTableExpressionModifiers(*table_expression_modifiers);
-                            in_second_argument = in_second_argument->cloneAndReplace(table_expression, std::move(replacement_table_expression));
-                        }
-                    }
-                }
+                    if (auto * query_node = table_expression->as<QueryNode>())
+                        table_expression = extractLeftTableExpression(query_node->getJoinTree());
+                    else if (auto * union_node = table_expression->as<UnionNode>())
+                        table_expression = union_node->getQueries().getNodes().at(0);
+                    else
+                        break;
+                }

-                resolveExpressionNode(in_second_argument, scope, false /*allow_lambda_expression*/, true /*allow_table_expression*/);
+                TableNode * table_expression_table_node = table_expression ? table_expression->as<TableNode>() : nullptr;
+
+                if (table_expression_table_node &&
+                    table_expression_table_node->getStorageID().getFullNameNotQuoted() == storage->getStorageID().getFullNameNotQuoted())
+                {
+                    auto replacement_table_expression_table_node = table_expression_table_node->clone();
+                    replacement_table_expression_table_node->as<TableNode &>().updateStorage(storage, scope.context);
+                    in_second_argument = in_second_argument->cloneAndReplace(table_expression, std::move(replacement_table_expression_table_node));
+                }
             }

             /// Edge case when the first argument of IN is scalar subquery.
@@ -3998,6 +4010,7 @@ ProjectionNames QueryAnalyzer::resolveSortNodeList(QueryTreeNodePtr & sort_node_
     ProjectionNames fill_from_expression_projection_names;
     ProjectionNames fill_to_expression_projection_names;
     ProjectionNames fill_step_expression_projection_names;
+    ProjectionNames fill_staleness_expression_projection_names;

     auto & sort_node_list_typed = sort_node_list->as<ListNode &>();
     for (auto & node : sort_node_list_typed.getNodes())
@@ -4019,6 +4032,8 @@ ProjectionNames QueryAnalyzer::resolveSortNodeList(QueryTreeNodePtr & sort_node_
         sort_node.getExpression() = sort_column_list_node->getNodes().front();
     }

+    validateSortingKeyType(sort_node.getExpression()->getResultType(), scope);
+
     size_t sort_expression_projection_names_size = sort_expression_projection_names.size();
     if (sort_expression_projection_names_size != 1)
         throw Exception(ErrorCodes::LOGICAL_ERROR,
@@ -4088,11 +4103,38 @@ ProjectionNames QueryAnalyzer::resolveSortNodeList(QueryTreeNodePtr & sort_node_
                 fill_step_expression_projection_names_size);
         }

+        if (sort_node.hasFillStaleness())
+        {
+            fill_staleness_expression_projection_names = resolveExpressionNode(sort_node.getFillStaleness(), scope, false /*allow_lambda_expression*/, false /*allow_table_expression*/);
+
+            const auto * constant_node = sort_node.getFillStaleness()->as<ConstantNode>();
+            if (!constant_node)
+                throw Exception(ErrorCodes::INVALID_WITH_FILL_EXPRESSION,
+                    "Sort FILL STALENESS expression must be constant with numeric or interval type. Actual {}. In scope {}",
+                    sort_node.getFillStaleness()->formatASTForErrorMessage(),
+                    scope.scope_node->formatASTForErrorMessage());
+
+            bool is_number = isColumnedAsNumber(constant_node->getResultType());
+            bool is_interval = WhichDataType(constant_node->getResultType()).isInterval();
+            if (!is_number && !is_interval)
+                throw Exception(ErrorCodes::INVALID_WITH_FILL_EXPRESSION,
+                    "Sort FILL STALENESS expression must be constant with numeric or interval type. Actual {}. In scope {}",
+                    sort_node.getFillStaleness()->formatASTForErrorMessage(),
+                    scope.scope_node->formatASTForErrorMessage());
+
+            size_t fill_staleness_expression_projection_names_size = fill_staleness_expression_projection_names.size();
+            if (fill_staleness_expression_projection_names_size != 1)
+                throw Exception(ErrorCodes::LOGICAL_ERROR,
+                    "Sort FILL STALENESS expression expected 1 projection name. Actual {}",
+                    fill_staleness_expression_projection_names_size);
+        }
+
         auto sort_column_projection_name = calculateSortColumnProjectionName(node,
             sort_expression_projection_names[0],
             fill_from_expression_projection_names.empty() ? "" : fill_from_expression_projection_names.front(),
             fill_to_expression_projection_names.empty() ? "" : fill_to_expression_projection_names.front(),
-            fill_step_expression_projection_names.empty() ? "" : fill_step_expression_projection_names.front());
+            fill_step_expression_projection_names.empty() ? "" : fill_step_expression_projection_names.front(),
+            fill_staleness_expression_projection_names.empty() ? "" : fill_staleness_expression_projection_names.front());

         result_projection_names.push_back(std::move(sort_column_projection_name));

@@ -4100,11 +4142,32 @@ ProjectionNames QueryAnalyzer::resolveSortNodeList(QueryTreeNodePtr & sort_node_
         fill_from_expression_projection_names.clear();
         fill_to_expression_projection_names.clear();
         fill_step_expression_projection_names.clear();
+        fill_staleness_expression_projection_names.clear();
     }

     return result_projection_names;
 }

+void QueryAnalyzer::validateSortingKeyType(const DataTypePtr & sorting_key_type, const IdentifierResolveScope & scope) const
+{
+    if (scope.context->getSettingsRef()[Setting::allow_suspicious_types_in_order_by])
+        return;
+
+    auto check = [](const IDataType & type)
+    {
+        if (isDynamic(type) || isVariant(type))
+            throw Exception(
+                ErrorCodes::ILLEGAL_COLUMN,
+                "Data types Variant/Dynamic are not allowed in ORDER BY keys, because it can lead to unexpected results. "
+                "Consider using a subcolumn with a specific data type instead (for example 'column.Int64' or 'json.some.path.:Int64' if "
+                "its a JSON path subcolumn) or casting this column to a specific data type. "
+                "Set setting allow_suspicious_types_in_order_by = 1 in order to allow it");
+    };
+
+    check(*sorting_key_type);
+    sorting_key_type->forEachChild(check);
+}
+
 namespace
 {

@@ -4144,11 +4207,12 @@ void QueryAnalyzer::resolveGroupByNode(QueryNode & query_node_typed, IdentifierR
             expandTuplesInList(group_by_list);
         }

-        if (scope.group_by_use_nulls)
+        for (const auto & grouping_set : query_node_typed.getGroupBy().getNodes())
         {
-            for (const auto & grouping_set : query_node_typed.getGroupBy().getNodes())
+            for (const auto & group_by_elem : grouping_set->as<ListNode>()->getNodes())
             {
-                for (const auto & group_by_elem : grouping_set->as<ListNode>()->getNodes())
+                validateGroupByKeyType(group_by_elem->getResultType(), scope);
+                if (scope.group_by_use_nulls)
                     scope.nullable_group_by_keys.insert(group_by_elem);
             }
         }
@@ -4164,14 +4228,37 @@ void QueryAnalyzer::resolveGroupByNode(QueryNode & query_node_typed, IdentifierR
         auto & group_by_list = query_node_typed.getGroupBy().getNodes();
         expandTuplesInList(group_by_list);

-        if (scope.group_by_use_nulls)
+        for (const auto & group_by_elem : query_node_typed.getGroupBy().getNodes())
         {
-            for (const auto & group_by_elem : query_node_typed.getGroupBy().getNodes())
+            validateGroupByKeyType(group_by_elem->getResultType(), scope);
+            if (scope.group_by_use_nulls)
                 scope.nullable_group_by_keys.insert(group_by_elem);
         }
     }
 }

+/** Validate data types of GROUP BY key.
+  */
+void QueryAnalyzer::validateGroupByKeyType(const DataTypePtr & group_by_key_type, const IdentifierResolveScope & scope) const
+{
+    if (scope.context->getSettingsRef()[Setting::allow_suspicious_types_in_group_by])
+        return;
+
+    auto check = [](const IDataType & type)
+    {
+        if (isDynamic(type) || isVariant(type))
+            throw Exception(
+                ErrorCodes::ILLEGAL_COLUMN,
+                "Data types Variant/Dynamic are not allowed in GROUP BY keys, because it can lead to unexpected results. "
+                "Consider using a subcolumn with a specific data type instead (for example 'column.Int64' or 'json.some.path.:Int64' if "
+                "its a JSON path subcolumn) or casting this column to a specific data type. "
+                "Set setting allow_suspicious_types_in_group_by = 1 in order to allow it");
+    };
+
+    check(*group_by_key_type);
+    group_by_key_type->forEachChild(check);
+}
+
 /** Resolve interpolate columns nodes list.
   */
 void QueryAnalyzer::resolveInterpolateColumnsNodeList(QueryTreeNodePtr & interpolate_node_list, IdentifierResolveScope & scope)
@@ -5310,6 +5397,16 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier

     auto & query_node_typed = query_node->as<QueryNode &>();

+    /** It is unsafe to call resolveQuery on already resolved query node, because during identifier resolution process
+      * we replace identifiers with expressions without aliases, also at the end of resolveQuery all aliases from all nodes will be removed.
+      * For subsequent resolveQuery executions it is possible to have wrong projection header, because for nodes
+      * with aliases projection name is alias.
+      *
+      * If for client it is necessary to resolve query node after clone, client must clear projection columns from query node before resolve.
+      */
+    if (query_node_typed.isResolved())
+        return;
+
     if (query_node_typed.isCTE())
         ctes_in_resolve_process.insert(query_node_typed.getCTEName());

@@ -5448,16 +5545,13 @@ void QueryAnalyzer::resolveQuery(const QueryTreeNodePtr & query_node, Identifier
       */
     scope.use_identifier_lookup_to_result_cache = false;

-    if (query_node_typed.getJoinTree())
-    {
-        TableExpressionsAliasVisitor table_expressions_visitor(scope);
-        table_expressions_visitor.visit(query_node_typed.getJoinTree());
+    TableExpressionsAliasVisitor table_expressions_visitor(scope);
+    table_expressions_visitor.visit(query_node_typed.getJoinTree());

-        initializeQueryJoinTreeNode(query_node_typed.getJoinTree(), scope);
-        scope.aliases.alias_name_to_table_expression_node.clear();
+    initializeQueryJoinTreeNode(query_node_typed.getJoinTree(), scope);
+    scope.aliases.alias_name_to_table_expression_node.clear();

-        resolveQueryJoinTreeNode(query_node_typed.getJoinTree(), scope, visitor);
-    }
+    resolveQueryJoinTreeNode(query_node_typed.getJoinTree(), scope, visitor);

     if (!scope.group_by_use_nulls)
         scope.use_identifier_lookup_to_result_cache = true;
@@ -5675,6 +5769,9 @@ void QueryAnalyzer::resolveUnion(const QueryTreeNodePtr & union_node, Identifier
 {
     auto & union_node_typed = union_node->as<UnionNode &>();

+    if (union_node_typed.isResolved())
+        return;
+
     if (union_node_typed.isCTE())
         ctes_in_resolve_process.insert(union_node_typed.getCTEName());

@@ -140,7 +140,8 @@ private:
         const ProjectionName & sort_expression_projection_name,
         const ProjectionName & fill_from_expression_projection_name,
         const ProjectionName & fill_to_expression_projection_name,
-        const ProjectionName & fill_step_expression_projection_name);
+        const ProjectionName & fill_step_expression_projection_name,
+        const ProjectionName & fill_staleness_expression_projection_name);

     QueryTreeNodePtr tryGetLambdaFromSQLUserDefinedFunctions(const std::string & function_name, ContextPtr context);

@@ -219,8 +220,12 @@ private:

     ProjectionNames resolveSortNodeList(QueryTreeNodePtr & sort_node_list, IdentifierResolveScope & scope);

+    void validateSortingKeyType(const DataTypePtr & sorting_key_type, const IdentifierResolveScope & scope) const;
+
     void resolveGroupByNode(QueryNode & query_node_typed, IdentifierResolveScope & scope);

+    void validateGroupByKeyType(const DataTypePtr & group_by_key_type, const IdentifierResolveScope & scope) const;
+
     void resolveInterpolateColumnsNodeList(QueryTreeNodePtr & interpolate_node_list, IdentifierResolveScope & scope);

     void resolveWindowNodeList(QueryTreeNodePtr & window_node_list, IdentifierResolveScope & scope);
@@ -69,6 +69,12 @@ void SortNode::dumpTreeImpl(WriteBuffer & buffer, FormatState & format_state, si
         buffer << '\n' << std::string(indent + 2, ' ') << "FILL STEP\n";
         getFillStep()->dumpTreeImpl(buffer, format_state, indent + 4);
     }
+
+    if (hasFillStaleness())
+    {
+        buffer << '\n' << std::string(indent + 2, ' ') << "FILL STALENESS\n";
+        getFillStaleness()->dumpTreeImpl(buffer, format_state, indent + 4);
+    }
 }

 bool SortNode::isEqualImpl(const IQueryTreeNode & rhs, CompareOptions) const
@@ -132,6 +138,8 @@ ASTPtr SortNode::toASTImpl(const ConvertToASTOptions & options) const
         result->setFillTo(getFillTo()->toAST(options));
     if (hasFillStep())
         result->setFillStep(getFillStep()->toAST(options));
+    if (hasFillStaleness())
+        result->setFillStaleness(getFillStaleness()->toAST(options));

     return result;
 }
@@ -105,6 +105,24 @@ public:
         return children[fill_step_child_index];
     }

+    /// Returns true if sort node has fill staleness, false otherwise
+    bool hasFillStaleness() const
+    {
+        return children[fill_staleness_child_index] != nullptr;
+    }
+
+    /// Get fill staleness
+    const QueryTreeNodePtr & getFillStaleness() const
+    {
+        return children[fill_staleness_child_index];
+    }
+
+    /// Get fill staleness
+    QueryTreeNodePtr & getFillStaleness()
+    {
+        return children[fill_staleness_child_index];
+    }
+
     /// Get collator
     const std::shared_ptr<Collator> & getCollator() const
     {
@@ -144,7 +162,8 @@ private:
     static constexpr size_t fill_from_child_index = 1;
     static constexpr size_t fill_to_child_index = 2;
     static constexpr size_t fill_step_child_index = 3;
-    static constexpr size_t children_size = fill_step_child_index + 1;
+    static constexpr size_t fill_staleness_child_index = 4;
+    static constexpr size_t children_size = fill_staleness_child_index + 1;

     SortDirection sort_direction = SortDirection::ASCENDING;
     std::optional<SortDirection> nulls_sort_direction;
@@ -35,6 +35,7 @@ namespace ErrorCodes
 {
     extern const int TYPE_MISMATCH;
     extern const int BAD_ARGUMENTS;
+    extern const int LOGICAL_ERROR;
 }

 UnionNode::UnionNode(ContextMutablePtr context_, SelectUnionMode union_mode_)
|
|||||||
children[queries_child_index] = std::make_shared<ListNode>();
|
children[queries_child_index] = std::make_shared<ListNode>();
|
||||||
}
|
}
|
||||||
|
|
||||||
|
bool UnionNode::isResolved() const
|
||||||
|
{
|
||||||
|
for (const auto & query_node : getQueries().getNodes())
|
||||||
|
{
|
||||||
|
bool is_resolved = false;
|
||||||
|
|
||||||
|
if (auto * query_node_typed = query_node->as<QueryNode>())
|
||||||
|
is_resolved = query_node_typed->isResolved();
|
||||||
|
else if (auto * union_node_typed = query_node->as<UnionNode>())
|
||||||
|
is_resolved = union_node_typed->isResolved();
|
||||||
|
else
|
||||||
|
throw Exception(ErrorCodes::LOGICAL_ERROR, "Unexpected query tree node type in UNION node");
|
||||||
|
|
||||||
|
if (!is_resolved)
|
||||||
|
return false;
|
||||||
|
}
|
||||||
|
|
||||||
|
return true;
|
||||||
|
}
|
||||||
|
|
||||||
NamesAndTypes UnionNode::computeProjectionColumns() const
|
NamesAndTypes UnionNode::computeProjectionColumns() const
|
||||||
{
|
{
|
||||||
if (recursive_cte_table)
|
if (recursive_cte_table)
|
||||||
|
@@ -163,6 +163,9 @@ public:
         return children[queries_child_index];
     }

+    /// Returns true if union node is resolved, false otherwise
+    bool isResolved() const;
+
     /// Compute union node projection columns
     NamesAndTypes computeProjectionColumns() const;

@@ -196,6 +196,13 @@ public:
     bool hasDynamicStructure() const override { return getData().hasDynamicStructure(); }
     void takeDynamicStructureFromSourceColumns(const Columns & source_columns) override;

+    bool dynamicStructureEquals(const IColumn & rhs) const override
+    {
+        if (const auto * rhs_concrete = typeid_cast<const ColumnArray *>(&rhs))
+            return data->dynamicStructureEquals(*rhs_concrete->data);
+        return false;
+    }
+
 private:
     WrappedPtr data;
     WrappedPtr offsets;
@@ -1208,6 +1208,15 @@ void ColumnDynamic::prepareVariantsForSquashing(const Columns & source_columns)
     }
 }

+bool ColumnDynamic::dynamicStructureEquals(const IColumn & rhs) const
+{
+    if (const auto * rhs_concrete = typeid_cast<const ColumnDynamic *>(&rhs))
+        return max_dynamic_types == rhs_concrete->max_dynamic_types && global_max_dynamic_types == rhs_concrete->global_max_dynamic_types
+            && variant_info.variant_name == rhs_concrete->variant_info.variant_name
+            && variant_column->dynamicStructureEquals(*rhs_concrete->variant_column);
+    return false;
+}
+
 void ColumnDynamic::takeDynamicStructureFromSourceColumns(const Columns & source_columns)
 {
     if (!empty())
@@ -376,6 +376,7 @@ public:
     bool addNewVariant(const DataTypePtr & new_variant) { return addNewVariant(new_variant, new_variant->getName()); }

     bool hasDynamicStructure() const override { return true; }
+    bool dynamicStructureEquals(const IColumn & rhs) const override;
     void takeDynamicStructureFromSourceColumns(const Columns & source_columns) override;

     const StatisticsPtr & getStatistics() const { return statistics; }
@@ -345,6 +345,13 @@ bool ColumnMap::structureEquals(const IColumn & rhs) const
     return false;
 }

+bool ColumnMap::dynamicStructureEquals(const IColumn & rhs) const
+{
+    if (const auto * rhs_map = typeid_cast<const ColumnMap *>(&rhs))
+        return nested->dynamicStructureEquals(*rhs_map->nested);
+    return false;
+}
+
 ColumnPtr ColumnMap::compress() const
 {
     auto compressed = nested->compress();
@@ -123,6 +123,7 @@ public:
     ColumnPtr compress() const override;

     bool hasDynamicStructure() const override { return nested->hasDynamicStructure(); }
+    bool dynamicStructureEquals(const IColumn & rhs) const override;
     void takeDynamicStructureFromSourceColumns(const Columns & source_columns) override;
 };

@@ -1415,6 +1415,31 @@ void ColumnObject::prepareForSquashing(const std::vector<ColumnPtr> & source_col
     }
 }

+bool ColumnObject::dynamicStructureEquals(const IColumn & rhs) const
+{
+    const auto * rhs_object = typeid_cast<const ColumnObject *>(&rhs);
+    if (!rhs_object || typed_paths.size() != rhs_object->typed_paths.size()
+        || global_max_dynamic_paths != rhs_object->global_max_dynamic_paths || max_dynamic_types != rhs_object->max_dynamic_types
+        || dynamic_paths.size() != rhs_object->dynamic_paths.size())
+        return false;
+
+    for (const auto & [path, column] : typed_paths)
+    {
+        auto it = rhs_object->typed_paths.find(path);
+        if (it == rhs_object->typed_paths.end() || !it->second->dynamicStructureEquals(*column))
+            return false;
+    }
+
+    for (const auto & [path, column] : dynamic_paths)
+    {
+        auto it = rhs_object->dynamic_paths.find(path);
+        if (it == rhs_object->dynamic_paths.end() || !it->second->dynamicStructureEquals(*column))
+            return false;
+    }
+
+    return true;
+}
+
 void ColumnObject::takeDynamicStructureFromSourceColumns(const DB::Columns & source_columns)
 {
     if (!empty())
@@ -177,6 +177,7 @@ public:
     bool isFinalized() const override;

     bool hasDynamicStructure() const override { return true; }
+    bool dynamicStructureEquals(const IColumn & rhs) const override;
     void takeDynamicStructureFromSourceColumns(const Columns & source_columns) override;

     const PathToColumnMap & getTypedPaths() const { return typed_paths; }
@ -227,6 +228,7 @@ public:
|
|||||||
void setDynamicPaths(const std::vector<String> & paths);
|
void setDynamicPaths(const std::vector<String> & paths);
|
||||||
void setDynamicPaths(const std::vector<std::pair<String, ColumnPtr>> & paths);
|
void setDynamicPaths(const std::vector<std::pair<String, ColumnPtr>> & paths);
|
||||||
void setMaxDynamicPaths(size_t max_dynamic_paths_);
|
void setMaxDynamicPaths(size_t max_dynamic_paths_);
|
||||||
|
void setGlobalMaxDynamicPaths(size_t global_max_dynamic_paths_);
|
||||||
void setStatistics(const StatisticsPtr & statistics_) { statistics = statistics_; }
|
void setStatistics(const StatisticsPtr & statistics_) { statistics = statistics_; }
|
||||||
|
|
||||||
void serializePathAndValueIntoSharedData(ColumnString * shared_data_paths, ColumnString * shared_data_values, std::string_view path, const IColumn & column, size_t n);
|
void serializePathAndValueIntoSharedData(ColumnString * shared_data_paths, ColumnString * shared_data_values, std::string_view path, const IColumn & column, size_t n);
|
||||||
|
@@ -757,6 +757,26 @@ bool ColumnTuple::hasDynamicStructure() const
     return false;
 }
 
+bool ColumnTuple::dynamicStructureEquals(const IColumn & rhs) const
+{
+    if (const auto * rhs_tuple = typeid_cast<const ColumnTuple *>(&rhs))
+    {
+        const size_t tuple_size = columns.size();
+        if (tuple_size != rhs_tuple->columns.size())
+            return false;
+
+        for (size_t i = 0; i < tuple_size; ++i)
+            if (!columns[i]->dynamicStructureEquals(*rhs_tuple->columns[i]))
+                return false;
+
+        return true;
+    }
+    else
+    {
+        return false;
+    }
+}
+
 void ColumnTuple::takeDynamicStructureFromSourceColumns(const Columns & source_columns)
 {
     std::vector<Columns> nested_source_columns;
@@ -141,6 +141,7 @@ public:
     ColumnPtr & getColumnPtr(size_t idx) { return columns[idx]; }
 
     bool hasDynamicStructure() const override;
+    bool dynamicStructureEquals(const IColumn & rhs) const override;
     void takeDynamicStructureFromSourceColumns(const Columns & source_columns) override;
 
     /// Empty tuple needs a public method to manage its size.
@@ -952,7 +952,7 @@ ColumnPtr ColumnVariant::permute(const Permutation & perm, size_t limit) const
     if (hasOnlyNulls())
     {
         if (limit)
-            return cloneResized(limit);
+            return cloneResized(limit ? std::min(size(), limit) : size());
 
         /// If no limit, we can just return current immutable column.
         return this->getPtr();
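The `permute` fix above clamps the requested limit so `cloneResized` never grows the column past its actual size, while `limit == 0` keeps its "no limit" meaning. A tiny sketch of just that expression (hypothetical free function, not the real `ColumnVariant` member):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

// Mirrors the fixed expression in ColumnVariant::permute: a non-zero limit
// is clamped to the current size; limit == 0 means "no limit" (full size).
size_t clampedResultSize(size_t size, size_t limit)
{
    return limit ? std::min(size, limit) : size;
}
```

Without the clamp, a permutation whose limit exceeds the column size would produce a resized column padded with default values rather than a strict subset of the permuted rows.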
@@ -1409,6 +1409,23 @@ bool ColumnVariant::structureEquals(const IColumn & rhs) const
     return true;
 }
 
+bool ColumnVariant::dynamicStructureEquals(const IColumn & rhs) const
+{
+    const auto * rhs_variant = typeid_cast<const ColumnVariant *>(&rhs);
+    if (!rhs_variant)
+        return false;
+
+    const size_t num_variants = variants.size();
+    if (num_variants != rhs_variant->variants.size())
+        return false;
+
+    for (size_t i = 0; i < num_variants; ++i)
+        if (!variants[i]->dynamicStructureEquals(rhs_variant->getVariantByGlobalDiscriminator(globalDiscriminatorByLocal(i))))
+            return false;
+
+    return true;
+}
+
 ColumnPtr ColumnVariant::compress() const
 {
     ColumnPtr local_discriminators_compressed = local_discriminators->compress();
@@ -336,6 +336,7 @@ public:
     void extend(const std::vector<Discriminator> & old_to_new_global_discriminators, std::vector<std::pair<MutableColumnPtr, Discriminator>> && new_variants_and_discriminators);
 
     bool hasDynamicStructure() const override;
+    bool dynamicStructureEquals(const IColumn & rhs) const override;
     void takeDynamicStructureFromSourceColumns(const Columns & source_columns) override;
 
 private:
@@ -635,6 +635,9 @@ public:
     /// Checks if column has dynamic subcolumns.
     virtual bool hasDynamicStructure() const { return false; }
 
+    /// For columns with dynamic subcolumns checks if columns have equal dynamic structure.
+    [[nodiscard]] virtual bool dynamicStructureEquals(const IColumn & rhs) const { return structureEquals(rhs); }
+
     /// For columns with dynamic subcolumns this method takes dynamic structure from source columns
     /// and creates proper resulting dynamic structure in advance for merge of these source columns.
     virtual void takeDynamicStructureFromSourceColumns(const std::vector<Ptr> & /*source_columns*/) {}
@@ -251,7 +251,7 @@ void Exception::setThreadFramePointers(ThreadFramePointersBase frame_pointers)
     thread_frame_pointers.frame_pointers = std::move(frame_pointers);
 }
 
-static void tryLogCurrentExceptionImpl(Poco::Logger * logger, const std::string & start_of_message)
+static void tryLogCurrentExceptionImpl(Poco::Logger * logger, const std::string & start_of_message, LogsLevel level)
 {
     if (!isLoggingEnabled())
         return;
@@ -262,14 +262,25 @@ static void tryLogCurrentExceptionImpl(Poco::Logger * logger, const std::string
         if (!start_of_message.empty())
             message.text = fmt::format("{}: {}", start_of_message, message.text);
 
-        LOG_ERROR(logger, message);
+        switch (level)
+        {
+            case LogsLevel::none: break;
+            case LogsLevel::test: LOG_TEST(logger, message); break;
+            case LogsLevel::trace: LOG_TRACE(logger, message); break;
+            case LogsLevel::debug: LOG_DEBUG(logger, message); break;
+            case LogsLevel::information: LOG_INFO(logger, message); break;
+            case LogsLevel::warning: LOG_WARNING(logger, message); break;
+            case LogsLevel::error: LOG_ERROR(logger, message); break;
+            case LogsLevel::fatal: LOG_FATAL(logger, message); break;
+        }
+
     }
     catch (...) // NOLINT(bugprone-empty-catch)
     {
     }
 }
 
-void tryLogCurrentException(const char * log_name, const std::string & start_of_message)
+void tryLogCurrentException(const char * log_name, const std::string & start_of_message, LogsLevel level)
 {
     if (!isLoggingEnabled())
         return;
@@ -283,10 +294,10 @@ void tryLogCurrentException(const char * log_name, const std::string & start_of_
 
     /// getLogger can allocate memory too
     auto logger = getLogger(log_name);
-    tryLogCurrentExceptionImpl(logger.get(), start_of_message);
+    tryLogCurrentExceptionImpl(logger.get(), start_of_message, level);
 }
 
-void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_message)
+void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_message, LogsLevel level)
 {
     /// Under high memory pressure, new allocations throw a
     /// MEMORY_LIMIT_EXCEEDED exception.
@@ -295,17 +306,17 @@ void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_
     /// MemoryTracker until the exception will be logged.
     LockMemoryExceptionInThread lock_memory_tracker(VariableContext::Global);
 
-    tryLogCurrentExceptionImpl(logger, start_of_message);
+    tryLogCurrentExceptionImpl(logger, start_of_message, level);
 }
 
-void tryLogCurrentException(LoggerPtr logger, const std::string & start_of_message)
+void tryLogCurrentException(LoggerPtr logger, const std::string & start_of_message, LogsLevel level)
 {
-    tryLogCurrentException(logger.get(), start_of_message);
+    tryLogCurrentException(logger.get(), start_of_message, level);
 }
 
-void tryLogCurrentException(const AtomicLogger & logger, const std::string & start_of_message)
+void tryLogCurrentException(const AtomicLogger & logger, const std::string & start_of_message, LogsLevel level)
 {
-    tryLogCurrentException(logger.load(), start_of_message);
+    tryLogCurrentException(logger.load(), start_of_message, level);
 }
 
 static void getNoSpaceLeftInfoMessage(std::filesystem::path path, String & msg)
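The change above replaces the hard-coded `LOG_ERROR` with a switch over a `LogsLevel` parameter, so callers can log caught exceptions at any severity (or suppress them with `LogsLevel::none`). A standalone sketch of the same dispatch idea, with a hypothetical mirror of the enum and a tag-returning function standing in for the `LOG_*` macros:

```cpp
#include <cassert>
#include <string>

// Hypothetical mirror of the values in Core/LogsLevel.h used by the new switch.
enum class LogsLevel { none, test, trace, debug, information, warning, error, fatal };

// Returns the severity tag a logger would attach; LogsLevel::none yields an
// empty tag, matching the `case LogsLevel::none: break;` (no output) branch.
std::string severityTag(LogsLevel level)
{
    switch (level)
    {
        case LogsLevel::none: return "";
        case LogsLevel::test: return "Test";
        case LogsLevel::trace: return "Trace";
        case LogsLevel::debug: return "Debug";
        case LogsLevel::information: return "Information";
        case LogsLevel::warning: return "Warning";
        case LogsLevel::error: return "Error";
        case LogsLevel::fatal: return "Fatal";
    }
    return "";
}
```

Enumerating every case (with no `default:`) lets the compiler warn if a new `LogsLevel` value is added without a corresponding branch.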
@@ -7,6 +7,7 @@
 #include <Common/Logger.h>
 #include <Common/LoggingFormatStringHelpers.h>
 #include <Common/StackTrace.h>
+#include <Core/LogsLevel.h>
 
 #include <cerrno>
 #include <exception>
@@ -276,10 +277,10 @@ using Exceptions = std::vector<std::exception_ptr>;
  * Can be used in destructors in the catch-all block.
  */
 /// TODO: Logger leak constexpr overload
-void tryLogCurrentException(const char * log_name, const std::string & start_of_message = "");
+void tryLogCurrentException(const char * log_name, const std::string & start_of_message = "", LogsLevel level = LogsLevel::error);
-void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_message = "");
+void tryLogCurrentException(Poco::Logger * logger, const std::string & start_of_message = "", LogsLevel level = LogsLevel::error);
-void tryLogCurrentException(LoggerPtr logger, const std::string & start_of_message = "");
+void tryLogCurrentException(LoggerPtr logger, const std::string & start_of_message = "", LogsLevel level = LogsLevel::error);
-void tryLogCurrentException(const AtomicLogger & logger, const std::string & start_of_message = "");
+void tryLogCurrentException(const AtomicLogger & logger, const std::string & start_of_message = "", LogsLevel level = LogsLevel::error);
 
 
 /** Prints current exception in canonical format.
src/Common/FieldVisitorScale.cpp — new file (30 lines)
@@ -0,0 +1,30 @@
+#include <Common/FieldVisitorScale.h>
+
+namespace DB
+{
+
+namespace ErrorCodes
+{
+    extern const int LOGICAL_ERROR;
+}
+
+FieldVisitorScale::FieldVisitorScale(Int32 rhs_) : rhs(rhs_) {}
+
+void FieldVisitorScale::operator() (Int64 & x) const { x *= rhs; }
+void FieldVisitorScale::operator() (UInt64 & x) const { x *= rhs; }
+void FieldVisitorScale::operator() (Float64 & x) const { x *= rhs; }
+void FieldVisitorScale::operator() (Null &) const { /*Do not scale anything*/ }
+
+void FieldVisitorScale::operator() (String &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale Strings"); }
+void FieldVisitorScale::operator() (Array &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale Arrays"); }
+void FieldVisitorScale::operator() (Tuple &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale Tuples"); }
+void FieldVisitorScale::operator() (Map &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale Maps"); }
+void FieldVisitorScale::operator() (Object &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale Objects"); }
+void FieldVisitorScale::operator() (UUID &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale UUIDs"); }
+void FieldVisitorScale::operator() (IPv4 &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale IPv4s"); }
+void FieldVisitorScale::operator() (IPv6 &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale IPv6s"); }
+void FieldVisitorScale::operator() (CustomType & x) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale custom type {}", x.getTypeName()); }
+void FieldVisitorScale::operator() (AggregateFunctionStateData &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale AggregateFunctionStates"); }
+void FieldVisitorScale::operator() (bool &) const { throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot scale Bools"); }
+
+}
src/Common/FieldVisitorScale.h — new file (43 lines)
@@ -0,0 +1,43 @@
+#pragma once
+
+#include <Common/FieldVisitors.h>
+#include <Common/FieldVisitorConvertToNumber.h>
+
+namespace DB
+{
+
+/** Implements `*=` operation by number
+  */
+class FieldVisitorScale : public StaticVisitor<void>
+{
+private:
+    Int32 rhs;
+
+public:
+    explicit FieldVisitorScale(Int32 rhs_);
+
+    void operator() (Int64 & x) const;
+    void operator() (UInt64 & x) const;
+    void operator() (Float64 & x) const;
+    void operator() (Null &) const;
+    [[noreturn]] void operator() (String &) const;
+    [[noreturn]] void operator() (Array &) const;
+    [[noreturn]] void operator() (Tuple &) const;
+    [[noreturn]] void operator() (Map &) const;
+    [[noreturn]] void operator() (Object &) const;
+    [[noreturn]] void operator() (UUID &) const;
+    [[noreturn]] void operator() (IPv4 &) const;
+    [[noreturn]] void operator() (IPv6 &) const;
+    [[noreturn]] void operator() (AggregateFunctionStateData &) const;
+    [[noreturn]] void operator() (CustomType &) const;
+    [[noreturn]] void operator() (bool &) const;
+
+    template <typename T>
+    void operator() (DecimalField<T> & x) const { x = DecimalField<T>(x.getValue() * T(rhs), x.getScale()); }
+
+    template <typename T>
+    requires is_big_int_v<T>
+    void operator() (T & x) const { x *= rhs; }
+};
+
+}
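`FieldVisitorScale` is a classic static visitor: numeric `Field` alternatives are scaled in place, while non-numeric ones throw. The same shape can be sketched with `std::variant`/`std::visit` over a hypothetical simplified `Field` (the real `DB::Field` has many more alternatives and its own visitation machinery):

```cpp
#include <cassert>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <variant>

// Hypothetical stand-in for DB::Field with a handful of alternatives.
using Field = std::variant<int64_t, uint64_t, double, std::string>;

// Numeric alternatives scale in place; non-numeric ones throw,
// mirroring the operator() overload set of FieldVisitorScale.
struct ScaleVisitor
{
    int32_t rhs;
    void operator()(int64_t & x) const { x *= rhs; }
    void operator()(uint64_t & x) const { x *= rhs; }
    void operator()(double & x) const { x *= rhs; }
    [[noreturn]] void operator()(std::string &) const { throw std::runtime_error("Cannot scale Strings"); }
};

void scale(Field & f, int32_t rhs) { std::visit(ScaleVisitor{rhs}, f); }
```

Dispatching through an overload set like this keeps the "cannot scale" cases explicit and compile-time checked: adding a new alternative to `Field` without a matching `operator()` is a compile error, not a silent fall-through.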
@@ -658,16 +658,11 @@ protected:
     {
         if (!std::is_trivially_destructible_v<Cell>)
         {
-            for (iterator it = begin(), it_end = end(); it != it_end; ++it)
+            for (iterator it = begin(), it_end = end(); it != it_end;)
             {
-                it.ptr->~Cell();
-                /// In case of poison_in_dtor=1 it will be poisoned,
-                /// but it maybe used later, during iteration.
-                ///
-                /// NOTE, that technically this is UB [1], but OK for now.
-                ///
-                /// [1]: https://github.com/google/sanitizers/issues/854#issuecomment-329661378
-                __msan_unpoison(it.ptr, sizeof(*it.ptr));
+                auto ptr = it.ptr;
+                ++it;
+                ptr->~Cell();
             }
 
             /// Everything had been destroyed in the loop above, reset the flag
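The hash-table fix above removes the `__msan_unpoison` workaround by advancing the iterator *before* destroying the cell it points at, so the loop never reads a dead (or MemorySanitizer-poisoned) cell. The same advance-before-invalidate idiom applies to any container where the current element dies mid-iteration; here it is sketched for erasing every element of a `std::list` (an illustrative stand-in, not the ClickHouse `HashTable`):

```cpp
#include <cassert>
#include <list>

// Save the current iterator, advance past it, and only then invalidate it,
// mirroring the `auto ptr = it.ptr; ++it; ptr->~Cell();` pattern in the fix.
template <typename T>
void eraseAll(std::list<T> & l)
{
    for (auto it = l.begin(); it != l.end();)
    {
        auto current = it;
        ++it;             // advance first: `current` is about to be invalidated
        l.erase(current); // safe: `it` already points at the next element
    }
}
```

Ordering matters because incrementing the iterator *after* destruction reads state belonging to the destroyed element, which is exactly the (technical) UB the original comment admitted to.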
@@ -568,7 +568,7 @@ std::vector<std::string> NamedCollectionsMetadataStorage::listCollections() cons
     std::vector<std::string> collections;
     collections.reserve(paths.size());
     for (const auto & path : paths)
-        collections.push_back(std::filesystem::path(path).stem());
+        collections.push_back(unescapeForFileName(std::filesystem::path(path).stem()));
     return collections;
 }
 
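The fix above runs the file-name stem through `unescapeForFileName`, since collection names are escaped when used as file names and the listing previously returned the escaped form. As a purely hypothetical illustration of such a round-trip (the actual ClickHouse encoding lives in `Common/escapeForFileName.h` and may differ), here is a percent-style decoder:

```cpp
#include <cassert>
#include <string>

// Hypothetical percent-decoding in the spirit of unescapeForFileName:
// "%XX" hex escapes are turned back into the original bytes, everything
// else passes through unchanged.
std::string percentDecode(const std::string & s)
{
    std::string out;
    for (size_t i = 0; i < s.size(); ++i)
    {
        if (s[i] == '%' && i + 2 < s.size())
        {
            out += static_cast<char>(std::stoi(s.substr(i + 1, 2), nullptr, 16));
            i += 2; // skip the two hex digits just consumed
        }
        else
            out += s[i];
    }
    return out;
}
```

The underlying bug pattern is generic: whenever names are escaped on the way into storage, every read path must apply the matching unescape, or escaped artifacts leak into user-visible listings.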
@@ -547,6 +547,7 @@ The server successfully detected this situation and will download merged part fr
     M(FilesystemCacheLoadMetadataMicroseconds, "Time spent loading filesystem cache metadata", ValueType::Microseconds) \
     M(FilesystemCacheEvictedBytes, "Number of bytes evicted from filesystem cache", ValueType::Bytes) \
     M(FilesystemCacheEvictedFileSegments, "Number of file segments evicted from filesystem cache", ValueType::Number) \
+    M(FilesystemCacheBackgroundDownloadQueuePush, "Number of file segments sent for background download in filesystem cache", ValueType::Number) \
     M(FilesystemCacheEvictionSkippedFileSegments, "Number of file segments skipped for eviction because of being in unreleasable state", ValueType::Number) \
     M(FilesystemCacheEvictionSkippedEvictingFileSegments, "Number of file segments skipped for eviction because of being in evicting state", ValueType::Number) \
     M(FilesystemCacheEvictionTries, "Number of filesystem cache eviction attempts", ValueType::Number) \
@@ -745,6 +746,12 @@ The server successfully detected this situation and will download merged part fr
     M(ReadTaskRequestsSentElapsedMicroseconds, "Time spent in callbacks requested from the remote server back to the initiator server to choose the read task (for s3Cluster table function and similar). Measured on the remote server side.", ValueType::Microseconds) \
     M(MergeTreeReadTaskRequestsSentElapsedMicroseconds, "Time spent in callbacks requested from the remote server back to the initiator server to choose the read task (for MergeTree tables). Measured on the remote server side.", ValueType::Microseconds) \
     M(MergeTreeAllRangesAnnouncementsSentElapsedMicroseconds, "Time spent in sending the announcement from the remote server to the initiator server about the set of data parts (for MergeTree tables). Measured on the remote server side.", ValueType::Microseconds) \
+    M(MergerMutatorsGetPartsForMergeElapsedMicroseconds, "Time spent to take data parts snapshot to build ranges from them.", ValueType::Microseconds) \
+    M(MergerMutatorPrepareRangesForMergeElapsedMicroseconds, "Time spent to prepare parts ranges which can be merged according to merge predicate.", ValueType::Microseconds) \
+    M(MergerMutatorSelectPartsForMergeElapsedMicroseconds, "Time spent to select parts from ranges which can be merged.", ValueType::Microseconds) \
+    M(MergerMutatorRangesForMergeCount, "Amount of candidate ranges for merge", ValueType::Number) \
+    M(MergerMutatorPartsInRangesForMergeCount, "Amount of candidate parts for merge", ValueType::Number) \
+    M(MergerMutatorSelectRangePartsCount, "Amount of parts in selected range for merge", ValueType::Number) \
     \
     M(ConnectionPoolIsFullMicroseconds, "Total time spent waiting for a slot in connection pool.", ValueType::Microseconds) \
     M(AsyncLoaderWaitMicroseconds, "Total time a query was waiting for async loader jobs.", ValueType::Microseconds) \
@@ -330,7 +330,7 @@ TYPED_TEST(CoordinationTest, TestSummingRaft1)
     this->setLogDirectory("./logs");
     this->setStateFileDirectory(".");
 
-    SummingRaftServer s1(1, "localhost", 44444, this->keeper_context);
+    SummingRaftServer s1(1, "localhost", 0, this->keeper_context);
     SCOPE_EXIT(if (std::filesystem::exists("./state")) std::filesystem::remove("./state"););
 
     /// Single node is leader
@@ -119,15 +119,4 @@ enum class JoinTableSide : uint8_t
 
 const char * toString(JoinTableSide join_table_side);
 
-/// Setting to choose which table to use as the inner table in hash join
-enum class JoinInnerTableSelectionMode : uint8_t
-{
-    /// Use left table
-    Left,
-    /// Use right table
-    Right,
-    /// Use the table with the smallest number of rows
-    Auto,
-};
-
 }
@@ -873,6 +873,12 @@ In CREATE TABLE statement allows specifying Variant type with similar variant ty
 )", 0) \
     DECLARE(Bool, allow_suspicious_primary_key, false, R"(
 Allow suspicious `PRIMARY KEY`/`ORDER BY` for MergeTree (i.e. SimpleAggregateFunction).
+)", 0) \
+    DECLARE(Bool, allow_suspicious_types_in_group_by, false, R"(
+Allows or restricts using [Variant](../../sql-reference/data-types/variant.md) and [Dynamic](../../sql-reference/data-types/dynamic.md) types in GROUP BY keys.
+)", 0) \
+    DECLARE(Bool, allow_suspicious_types_in_order_by, false, R"(
+Allows or restricts using [Variant](../../sql-reference/data-types/variant.md) and [Dynamic](../../sql-reference/data-types/dynamic.md) types in ORDER BY keys.
 )", 0) \
     DECLARE(Bool, compile_expressions, false, R"(
 Compile some scalar functions and operators to native code. Due to a bug in the LLVM compiler infrastructure, on AArch64 machines, it is known to lead to a nullptr dereference and, consequently, server crash. Do not enable this setting.
@@ -1912,9 +1918,6 @@ See also:
 For single JOIN in case of identifier ambiguity prefer left table
 )", IMPORTANT) \
     \
-    DECLARE(JoinInnerTableSelectionMode, query_plan_join_inner_table_selection, JoinInnerTableSelectionMode::Auto, R"(
-Select the side of the join to be the inner table in the query plan. Supported only for `ALL` join strictness with `JOIN ON` clause. Possible values: 'auto', 'left', 'right'.
-)", 0) \
     DECLARE(UInt64, preferred_block_size_bytes, 1000000, R"(
 This setting adjusts the data block size for query processing and represents additional fine-tuning to the more rough 'max_block_size' setting. If the columns are large and with 'max_block_size' rows the block size is likely to be larger than the specified amount of bytes, its size will be lowered for better CPU cache locality.
 )", 0) \
@@ -4239,7 +4242,7 @@ Rewrite aggregate functions with if expression as argument when logically equiva
 For example, `avg(if(cond, col, null))` can be rewritten to `avgOrNullIf(cond, col)`. It may improve performance.
 
 :::note
-Supported only with experimental analyzer (`enable_analyzer = 1`).
+Supported only with the analyzer (`enable_analyzer = 1`).
 :::
 )", 0) \
     DECLARE(Bool, optimize_rewrite_array_exists_to_has, false, R"(
@@ -5132,6 +5135,12 @@ Only in ClickHouse Cloud. A window for sending ACK for DataPacket sequence in a
 )", 0) \
     DECLARE(Bool, distributed_cache_discard_connection_if_unread_data, true, R"(
 Only in ClickHouse Cloud. Discard connection if some data is unread.
+)", 0) \
+    DECLARE(Bool, filesystem_cache_enable_background_download_for_metadata_files_in_packed_storage, true, R"(
+Only in ClickHouse Cloud. Wait time to lock cache for space reservation in filesystem cache
+)", 0) \
+    DECLARE(Bool, filesystem_cache_enable_background_download_during_fetch, true, R"(
+Only in ClickHouse Cloud. Wait time to lock cache for space reservation in filesystem cache
 )", 0) \
     \
     DECLARE(Bool, parallelize_output_from_storages, true, R"(
|
|||||||
For example, by providing a unique value for the setting in each INSERT statement,
|
For example, by providing a unique value for the setting in each INSERT statement,
|
||||||
user can avoid the same inserted data being deduplicated.
|
user can avoid the same inserted data being deduplicated.
|
||||||
|
|
||||||
|
|
||||||
Possible values:
|
Possible values:
|
||||||
|
|
||||||
- Any string
|
- Any string
|
||||||
@ -5616,7 +5626,7 @@ If true, and JOIN can be executed with parallel replicas algorithm, and all stor
|
|||||||
DECLARE(UInt64, parallel_replicas_mark_segment_size, 0, R"(
|
DECLARE(UInt64, parallel_replicas_mark_segment_size, 0, R"(
|
||||||
Parts virtually divided into segments to be distributed between replicas for parallel reading. This setting controls the size of these segments. Not recommended to change until you're absolutely sure in what you're doing. Value should be in range [128; 16384]
|
Parts virtually divided into segments to be distributed between replicas for parallel reading. This setting controls the size of these segments. Not recommended to change until you're absolutely sure in what you're doing. Value should be in range [128; 16384]
|
||||||
)", BETA) \
|
)", BETA) \
|
||||||
DECLARE(Bool, parallel_replicas_local_plan, false, R"(
|
DECLARE(Bool, parallel_replicas_local_plan, true, R"(
|
||||||
Build local plan for local replica
|
Build local plan for local replica
|
||||||
)", BETA) \
|
)", BETA) \
|
||||||
\
|
\
|
||||||
@ -5855,7 +5865,7 @@ Experimental data deduplication for SELECT queries based on part UUIDs
|
|||||||
// Please add settings related to formats in Core/FormatFactorySettings.h, move obsolete settings to OBSOLETE_SETTINGS and obsolete format settings to OBSOLETE_FORMAT_SETTINGS.
|
// Please add settings related to formats in Core/FormatFactorySettings.h, move obsolete settings to OBSOLETE_SETTINGS and obsolete format settings to OBSOLETE_FORMAT_SETTINGS.
|
||||||
|
|
||||||
#define OBSOLETE_SETTINGS(M, ALIAS) \
|
#define OBSOLETE_SETTINGS(M, ALIAS) \
|
||||||
/** Obsolete settings that do nothing but left for compatibility reasons. Remove each one after half a year of obsolescence. */ \
|
/** Obsolete settings which are kept around for compatibility reasons. They have no effect anymore. */ \
|
||||||
MAKE_OBSOLETE(M, Bool, update_insert_deduplication_token_in_dependent_materialized_views, 0) \
|
MAKE_OBSOLETE(M, Bool, update_insert_deduplication_token_in_dependent_materialized_views, 0) \
|
||||||
MAKE_OBSOLETE(M, UInt64, max_memory_usage_for_all_queries, 0) \
|
MAKE_OBSOLETE(M, UInt64, max_memory_usage_for_all_queries, 0) \
|
||||||
MAKE_OBSOLETE(M, UInt64, multiple_joins_rewriter_version, 0) \
|
MAKE_OBSOLETE(M, UInt64, multiple_joins_rewriter_version, 0) \
|
||||||
|
@@ -66,7 +66,6 @@ class WriteBuffer;
 M(CLASS_NAME, IntervalOutputFormat) \
 M(CLASS_NAME, JoinAlgorithm) \
 M(CLASS_NAME, JoinStrictness) \
-M(CLASS_NAME, JoinInnerTableSelectionMode) \
 M(CLASS_NAME, LightweightMutationProjectionMode) \
 M(CLASS_NAME, LoadBalancing) \
 M(CLASS_NAME, LocalFSReadMethod) \
@@ -64,14 +64,18 @@ static std::initializer_list<std::pair<ClickHouseVersion, SettingsChangesHistory
 },
 {"24.11",
 {
+{"allow_suspicious_types_in_group_by", true, false, "Don't allow Variant/Dynamic types in GROUP BY by default"},
+{"allow_suspicious_types_in_order_by", true, false, "Don't allow Variant/Dynamic types in ORDER BY by default"},
 {"distributed_cache_discard_connection_if_unread_data", true, true, "New setting"},
+{"filesystem_cache_enable_background_download_for_metadata_files_in_packed_storage", true, true, "New setting"},
+{"filesystem_cache_enable_background_download_during_fetch", true, true, "New setting"},
 {"azure_check_objects_after_upload", false, false, "Check each uploaded object in azure blob storage to be sure that upload was successful"},
 {"backup_restore_keeper_max_retries", 20, 1000, "Should be big enough so the whole operation BACKUP or RESTORE operation won't fail because of a temporary [Zoo]Keeper failure in the middle of it."},
 {"backup_restore_failure_after_host_disconnected_for_seconds", 0, 3600, "New setting."},
 {"backup_restore_keeper_max_retries_while_initializing", 0, 20, "New setting."},
 {"backup_restore_keeper_max_retries_while_handling_error", 0, 20, "New setting."},
 {"backup_restore_finish_timeout_after_error_sec", 0, 180, "New setting."},
-{"query_plan_join_inner_table_selection", "auto", "auto", "New setting."},
+{"parallel_replicas_local_plan", false, true, "Use local plan for local replica in a query with parallel replicas"},
 }
 },
 {"24.10",
@@ -55,10 +55,6 @@ IMPLEMENT_SETTING_MULTI_ENUM(JoinAlgorithm, ErrorCodes::UNKNOWN_JOIN,
 {"full_sorting_merge", JoinAlgorithm::FULL_SORTING_MERGE},
 {"grace_hash", JoinAlgorithm::GRACE_HASH}})

-IMPLEMENT_SETTING_ENUM(JoinInnerTableSelectionMode, ErrorCodes::BAD_ARGUMENTS,
-{{"left", JoinInnerTableSelectionMode::Left},
-{"right", JoinInnerTableSelectionMode::Right},
-{"auto", JoinInnerTableSelectionMode::Auto}})

 IMPLEMENT_SETTING_ENUM(TotalsMode, ErrorCodes::UNKNOWN_TOTALS_MODE,
 {{"before_having", TotalsMode::BEFORE_HAVING},
@@ -128,8 +128,8 @@ constexpr auto getEnumValues();
 DECLARE_SETTING_ENUM(LoadBalancing)

 DECLARE_SETTING_ENUM(JoinStrictness)

 DECLARE_SETTING_MULTI_ENUM(JoinAlgorithm)
-DECLARE_SETTING_ENUM(JoinInnerTableSelectionMode)
+

 /// Which rows should be included in TOTALS.
@@ -35,6 +35,11 @@
 namespace DB
 {

+namespace ErrorCodes
+{
+extern const int LOGICAL_ERROR;
+}
+
 /** Cursor allows to compare rows in different blocks (and parts).
 * Cursor moves inside single block.
 * It is used in priority queue.
@@ -83,21 +88,27 @@ struct SortCursorImpl
 SortCursorImpl(
 const Block & header,
 const Columns & columns,
+size_t num_rows,
 const SortDescription & desc_,
 size_t order_ = 0,
 IColumn::Permutation * perm = nullptr)
 : desc(desc_), sort_columns_size(desc.size()), order(order_), need_collation(desc.size())
 {
-reset(columns, header, perm);
+reset(columns, header, num_rows, perm);
 }

 bool empty() const { return rows == 0; }

 /// Set the cursor to the beginning of the new block.
-void reset(const Block & block, IColumn::Permutation * perm = nullptr) { reset(block.getColumns(), block, perm); }
+void reset(const Block & block, IColumn::Permutation * perm = nullptr)
+{
+if (block.getColumns().empty())
+throw Exception(ErrorCodes::LOGICAL_ERROR, "Empty column list in block");
+reset(block.getColumns(), block, block.getColumns()[0]->size(), perm);
+}

 /// Set the cursor to the beginning of the new block.
-void reset(const Columns & columns, const Block & block, IColumn::Permutation * perm = nullptr)
+void reset(const Columns & columns, const Block & block, UInt64 num_rows, IColumn::Permutation * perm = nullptr)
 {
 all_columns.clear();
 sort_columns.clear();
@@ -125,7 +136,7 @@ struct SortCursorImpl
 }

 pos = 0;
-rows = all_columns[0]->size();
+rows = num_rows;
 permutation = perm;
 }

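The `SortCursorImpl` change above threads an explicit `num_rows` through `reset` instead of deriving the count from `all_columns[0]->size()`, and guards the column-less case with a `LOGICAL_ERROR`. A minimal standalone sketch of the same pattern follows; `ToyCursor` and its member types are invented for illustration and are not the real ClickHouse `Block`/`Columns` API:

```cpp
#include <cstddef>
#include <stdexcept>
#include <utility>
#include <vector>

// Toy cursor over a block of equally sized columns. The row count is passed
// in explicitly instead of being derived from the first column, which would
// break on a block with zero columns.
struct ToyCursor
{
    std::vector<std::vector<int>> columns;
    size_t pos = 0;
    size_t rows = 0;

    // Explicit row count: safe even for an empty column list.
    void reset(std::vector<std::vector<int>> cols, size_t num_rows)
    {
        columns = std::move(cols);
        pos = 0;
        rows = num_rows;
    }

    // Convenience overload that derives the count, guarded like the patch does.
    void reset(std::vector<std::vector<int>> cols)
    {
        if (cols.empty())
            throw std::logic_error("Empty column list in block");
        size_t num_rows = cols[0].size();
        reset(std::move(cols), num_rows);
    }

    bool empty() const { return rows == 0; }
};
```

The explicit-count overload also lets callers describe a block of constant rows with no materialized columns, which the derived-count version cannot express.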
@@ -33,9 +33,12 @@ struct FillColumnDescription
 DataTypePtr fill_to_type;
 Field fill_step; /// Default = +1 or -1 according to direction
 std::optional<IntervalKind> step_kind;
+Field fill_staleness; /// Default = Null - should not be considered
+std::optional<IntervalKind> staleness_kind;

-using StepFunction = std::function<void(Field &)>;
+using StepFunction = std::function<void(Field &, Int32 jumps_count)>;
 StepFunction step_func;
+StepFunction staleness_step_func;
 };

 /// Description of the sorting rule by one column.
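The `FillColumnDescription` change extends `StepFunction` with a `jumps_count` argument, so one callback invocation can advance the fill value by several steps at once. A toy sketch of that signature, with `Field` replaced by a plain integer; `makeStep` is an invented helper, not part of the patch:

```cpp
#include <cstdint>
#include <functional>

// Toy analogue of the new StepFunction signature: the callback receives how
// many steps to take, instead of being called once per step.
using StepFunction = std::function<void(int64_t &, int32_t jumps_count)>;

StepFunction makeStep(int64_t step)
{
    return [step](int64_t & value, int32_t jumps_count)
    {
        value += step * jumps_count; // One call covers jumps_count steps.
    };
}
```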
@@ -1,2 +1,2 @@
 clickhouse_add_executable (names_and_types_fuzzer names_and_types_fuzzer.cpp)
-target_link_libraries (names_and_types_fuzzer PRIVATE)
+target_link_libraries (names_and_types_fuzzer PRIVATE dbms)
@@ -1,6 +1,9 @@
 #include <DataTypes/DataTypeFactory.h>
 #include <DataTypes/DataTypeObject.h>
 #include <DataTypes/DataTypeObjectDeprecated.h>
+#include <DataTypes/DataTypeArray.h>
+#include <DataTypes/DataTypeTuple.h>
+#include <DataTypes/DataTypeString.h>
 #include <DataTypes/Serializations/SerializationJSON.h>
 #include <DataTypes/Serializations/SerializationObjectTypedPath.h>
 #include <DataTypes/Serializations/SerializationObjectDynamicPath.h>
@@ -230,6 +233,15 @@ MutableColumnPtr DataTypeObject::createColumn() const
 return ColumnObject::create(std::move(typed_path_columns), max_dynamic_paths, max_dynamic_types);
 }

+void DataTypeObject::forEachChild(const ChildCallback & callback) const
+{
+for (const auto & [path, type] : typed_paths)
+{
+callback(*type);
+type->forEachChild(callback);
+}
+}
+
 namespace
 {

@@ -522,6 +534,13 @@ static DataTypePtr createObject(const ASTPtr & arguments, const DataTypeObject::
 return std::make_shared<DataTypeObject>(schema_format, std::move(typed_paths), std::move(paths_to_skip), std::move(path_regexps_to_skip), max_dynamic_paths, max_dynamic_types);
 }

+const DataTypePtr & DataTypeObject::getTypeOfSharedData()
+{
+/// Array(Tuple(String, String))
+static const DataTypePtr type = std::make_shared<DataTypeArray>(std::make_shared<DataTypeTuple>(DataTypes{std::make_shared<DataTypeString>(), std::make_shared<DataTypeString>()}, Names{"paths", "values"}));
+return type;
+}
+
 static DataTypePtr createJSON(const ASTPtr & arguments)
 {
 auto context = CurrentThread::getQueryContext();
@@ -50,6 +50,8 @@ public:

 bool equals(const IDataType & rhs) const override;

+void forEachChild(const ChildCallback &) const override;
+
 bool hasDynamicSubcolumnsData() const override { return true; }
 std::unique_ptr<SubstreamData> getDynamicSubcolumnData(std::string_view subcolumn_name, const SubstreamData & data, bool throw_if_null) const override;

@@ -63,6 +65,9 @@ public:
 size_t getMaxDynamicTypes() const { return max_dynamic_types; }
 size_t getMaxDynamicPaths() const { return max_dynamic_paths; }

+/// Shared data has type Array(Tuple(String, String)).
+static const DataTypePtr & getTypeOfSharedData();
+
 private:
 SchemaFormat schema_format;
 /// Set of paths with types that were specified in type declaration.
@@ -26,8 +26,8 @@ namespace ErrorCodes

 struct SerializeBinaryBulkStateDynamic : public ISerialization::SerializeBinaryBulkState
 {
-SerializationDynamic::DynamicStructureSerializationVersion structure_version;
-size_t max_dynamic_types;
+SerializationDynamic::DynamicSerializationVersion structure_version;
+size_t num_dynamic_types;
 DataTypePtr variant_type;
 Names variant_names;
 SerializationPtr variant_serialization;
@@ -81,15 +81,15 @@ void SerializationDynamic::enumerateStreams(
 settings.path.pop_back();
 }

-SerializationDynamic::DynamicStructureSerializationVersion::DynamicStructureSerializationVersion(UInt64 version) : value(static_cast<Value>(version))
+SerializationDynamic::DynamicSerializationVersion::DynamicSerializationVersion(UInt64 version) : value(static_cast<Value>(version))
 {
 checkVersion(version);
 }

-void SerializationDynamic::DynamicStructureSerializationVersion::checkVersion(UInt64 version)
+void SerializationDynamic::DynamicSerializationVersion::checkVersion(UInt64 version)
 {
-if (version != VariantTypeName)
-throw Exception(ErrorCodes::INCORRECT_DATA, "Invalid version for Dynamic structure serialization.");
+if (version != V1 && version != V2)
+throw Exception(ErrorCodes::INCORRECT_DATA, "Invalid version for Dynamic structure serialization: {}", version);
 }

 void SerializationDynamic::serializeBinaryBulkStatePrefix(
@@ -108,22 +108,17 @@ void SerializationDynamic::serializeBinaryBulkStatePrefix(
 throw Exception(ErrorCodes::LOGICAL_ERROR, "Missing stream for Dynamic column structure during serialization of binary bulk state prefix");

 /// Write structure serialization version.
-UInt64 structure_version = DynamicStructureSerializationVersion::Value::VariantTypeName;
+UInt64 structure_version = DynamicSerializationVersion::Value::V2;
 writeBinaryLittleEndian(structure_version, *stream);
 auto dynamic_state = std::make_shared<SerializeBinaryBulkStateDynamic>(structure_version);

-dynamic_state->max_dynamic_types = column_dynamic.getMaxDynamicTypes();
-/// Write max_dynamic_types parameter, because it can differ from the max_dynamic_types
-/// that is specified in the Dynamic type (we could decrease it before merge).
-writeVarUInt(dynamic_state->max_dynamic_types, *stream);
-
 dynamic_state->variant_type = variant_info.variant_type;
 dynamic_state->variant_names = variant_info.variant_names;
 const auto & variant_column = column_dynamic.getVariantColumn();

-/// Write information about variants.
-size_t num_variants = dynamic_state->variant_names.size() - 1; /// Don't write shared variant, Dynamic column should always have it.
-writeVarUInt(num_variants, *stream);
+/// Write information about dynamic types.
+dynamic_state->num_dynamic_types = dynamic_state->variant_names.size() - 1; /// -1 for SharedVariant
+writeVarUInt(dynamic_state->num_dynamic_types, *stream);
 if (settings.data_types_binary_encoding)
 {
 const auto & variants = assert_cast<const DataTypeVariant &>(*dynamic_state->variant_type).getVariants();
@@ -251,22 +246,25 @@ ISerialization::DeserializeBinaryBulkStatePtr SerializationDynamic::deserializeD
 UInt64 structure_version;
 readBinaryLittleEndian(structure_version, *structure_stream);
 auto structure_state = std::make_shared<DeserializeBinaryBulkStateDynamicStructure>(structure_version);
-/// Read max_dynamic_types parameter.
-readVarUInt(structure_state->max_dynamic_types, *structure_stream);
+if (structure_state->structure_version.value == DynamicSerializationVersion::Value::V1)
+{
+/// Skip max_dynamic_types parameter in V1 serialization version.
+size_t max_dynamic_types;
+readVarUInt(max_dynamic_types, *structure_stream);
+}
 /// Read information about variants.
 DataTypes variants;
-size_t num_variants;
-readVarUInt(num_variants, *structure_stream);
-variants.reserve(num_variants + 1); /// +1 for shared variant.
+readVarUInt(structure_state->num_dynamic_types, *structure_stream);
+variants.reserve(structure_state->num_dynamic_types + 1); /// +1 for shared variant.
 if (settings.data_types_binary_encoding)
 {
-for (size_t i = 0; i != num_variants; ++i)
+for (size_t i = 0; i != structure_state->num_dynamic_types; ++i)
 variants.push_back(decodeDataType(*structure_stream));
 }
 else
 {
 String data_type_name;
-for (size_t i = 0; i != num_variants; ++i)
+for (size_t i = 0; i != structure_state->num_dynamic_types; ++i)
 {
 readStringBinary(data_type_name, *structure_stream);
 variants.push_back(DataTypeFactory::instance().get(data_type_name));
@@ -364,9 +362,6 @@ void SerializationDynamic::serializeBinaryBulkWithMultipleStreamsAndCountTotalSi
 if (!variant_info.variant_type->equals(*dynamic_state->variant_type))
 throw Exception(ErrorCodes::LOGICAL_ERROR, "Mismatch of internal columns of Dynamic. Expected: {}, Got: {}", dynamic_state->variant_type->getName(), variant_info.variant_type->getName());

-if (column_dynamic.getMaxDynamicTypes() != dynamic_state->max_dynamic_types)
-throw Exception(ErrorCodes::LOGICAL_ERROR, "Mismatch of max_dynamic_types parameter of Dynamic. Expected: {}, Got: {}", dynamic_state->max_dynamic_types, column_dynamic.getMaxDynamicTypes());
-
 settings.path.push_back(Substream::DynamicData);
 assert_cast<const SerializationVariant &>(*dynamic_state->variant_serialization)
 .serializeBinaryBulkWithMultipleStreamsAndUpdateVariantStatistics(
@@ -424,7 +419,7 @@ void SerializationDynamic::deserializeBinaryBulkWithMultipleStreams(

 if (mutable_column->empty())
 {
-column_dynamic.setMaxDynamicPaths(structure_state->max_dynamic_types);
+column_dynamic.setMaxDynamicPaths(structure_state->num_dynamic_types);
 column_dynamic.setVariantType(structure_state->variant_type);
 column_dynamic.setStatistics(structure_state->statistics);
 }
@@ -16,18 +16,28 @@ public:
 {
 }

-struct DynamicStructureSerializationVersion
+struct DynamicSerializationVersion
 {
 enum Value
 {
-VariantTypeName = 1,
+/// V1 serialization:
+/// - DynamicStructure stream:
+/// <max_dynamic_types parameter>
+/// <actual number of dynamic types>
+/// <list of dynamic types (list of variants in nested Variant column without SharedVariant)>
+/// <statistics with number of values for each dynamic type> (only in MergeTree serialization)
+/// <statistics with number of values for some types in SharedVariant> (only in MergeTree serialization)
+/// - DynamicData stream: contains the data of nested Variant column.
+V1 = 1,
+/// V2 serialization: the same as V1 but without max_dynamic_types parameter in DynamicStructure stream.
+V2 = 2,
 };

 Value value;

 static void checkVersion(UInt64 version);

-explicit DynamicStructureSerializationVersion(UInt64 version);
+explicit DynamicSerializationVersion(UInt64 version);
 };

 void enumerateStreams(
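The V1/V2 split above keeps old data readable: a reader that understands both versions must still consume (and discard) the legacy `max_dynamic_types` value when it sees a V1 prefix, and only then read the number of dynamic types. A toy sketch of that version-gated read; `ToyReader` and `readDynamicTypeCount` are invented for illustration and are not the real ClickHouse stream API:

```cpp
#include <cstddef>
#include <cstdint>
#include <stdexcept>
#include <vector>

// Minimal stand-in for a read buffer: a flat sequence of already-decoded
// integers, consumed one at a time.
struct ToyReader
{
    const std::vector<uint64_t> & data;
    size_t pos = 0;
    uint64_t read() { return data.at(pos++); }
};

// Reads the structure prefix and returns the number of dynamic types,
// handling both layouts:
//   V1: <version=1> <max_dynamic_types> <num_dynamic_types> ...
//   V2: <version=2> <num_dynamic_types> ...
uint64_t readDynamicTypeCount(ToyReader & in)
{
    uint64_t version = in.read();
    if (version != 1 && version != 2)
        throw std::runtime_error("Invalid version for Dynamic structure serialization");
    if (version == 1)
        in.read(); // Skip the legacy max_dynamic_types parameter.
    return in.read();
}
```

Writers always emit the newest version (V2 here), while readers accept every version they know how to skip past, which is the usual forward-migration pattern for on-disk formats.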
@@ -113,9 +123,9 @@ private:

 struct DeserializeBinaryBulkStateDynamicStructure : public ISerialization::DeserializeBinaryBulkState
 {
-DynamicStructureSerializationVersion structure_version;
+DynamicSerializationVersion structure_version;
 DataTypePtr variant_type;
-size_t max_dynamic_types;
+size_t num_dynamic_types;
 ColumnDynamic::StatisticsPtr statistics;

 explicit DeserializeBinaryBulkStateDynamicStructure(UInt64 structure_version_)
@@ -25,7 +25,7 @@ SerializationObject::SerializationObject(
 : typed_path_serializations(std::move(typed_path_serializations_))
 , paths_to_skip(paths_to_skip_)
 , dynamic_serialization(std::make_shared<SerializationDynamic>())
-, shared_data_serialization(getTypeOfSharedData()->getDefaultSerialization())
+, shared_data_serialization(DataTypeObject::getTypeOfSharedData()->getDefaultSerialization())
 {
 /// We will need sorted order of typed paths to serialize them in order for consistency.
 sorted_typed_paths.reserve(typed_path_serializations.size());
@@ -38,13 +38,6 @@ SerializationObject::SerializationObject(
 path_regexps_to_skip.emplace_back(regexp_str);
 }

-const DataTypePtr & SerializationObject::getTypeOfSharedData()
-{
-/// Array(Tuple(String, String))
-static const DataTypePtr type = std::make_shared<DataTypeArray>(std::make_shared<DataTypeTuple>(DataTypes{std::make_shared<DataTypeString>(), std::make_shared<DataTypeString>()}, Names{"paths", "values"}));
-return type;
-}
-
 bool SerializationObject::shouldSkipPath(const String & path) const
 {
 if (paths_to_skip.contains(path))
@@ -70,14 +63,13 @@ SerializationObject::ObjectSerializationVersion::ObjectSerializationVersion(UInt

 void SerializationObject::ObjectSerializationVersion::checkVersion(UInt64 version)
 {
-if (version != V1 && version != STRING)
+if (version != V1 && version != V2 && version != STRING)
 throw Exception(ErrorCodes::INCORRECT_DATA, "Invalid version for Object structure serialization.");
 }

 struct SerializeBinaryBulkStateObject: public ISerialization::SerializeBinaryBulkState
 {
 SerializationObject::ObjectSerializationVersion serialization_version;
-size_t max_dynamic_paths;
 std::vector<String> sorted_dynamic_paths;
 std::unordered_map<String, ISerialization::SerializeBinaryBulkStatePtr> typed_path_states;
 std::unordered_map<String, ISerialization::SerializeBinaryBulkStatePtr> dynamic_path_states;
@@ -168,7 +160,7 @@ void SerializationObject::enumerateStreams(EnumerateStreamsSettings & settings,

 settings.path.push_back(Substream::ObjectSharedData);
 auto shared_data_substream_data = SubstreamData(shared_data_serialization)
-.withType(getTypeOfSharedData())
+.withType(DataTypeObject::getTypeOfSharedData())
 .withColumn(column_object ? column_object->getSharedDataPtr() : nullptr)
 .withSerializationInfo(data.serialization_info)
 .withDeserializeState(deserialize_state ? deserialize_state->shared_data_state : nullptr);
@@ -195,7 +187,7 @@ void SerializationObject::serializeBinaryBulkStatePrefix(
 throw Exception(ErrorCodes::LOGICAL_ERROR, "Missing stream for Object column structure during serialization of binary bulk state prefix");

 /// Write serialization version.
-UInt64 serialization_version = settings.write_json_as_string ? ObjectSerializationVersion::Value::STRING : ObjectSerializationVersion::Value::V1;
+UInt64 serialization_version = settings.write_json_as_string ? ObjectSerializationVersion::Value::STRING : ObjectSerializationVersion::Value::V2;
 writeBinaryLittleEndian(serialization_version, *stream);

 auto object_state = std::make_shared<SerializeBinaryBulkStateObject>(serialization_version);
@@ -205,9 +197,6 @@ void SerializationObject::serializeBinaryBulkStatePrefix(
 return;
 }

-object_state->max_dynamic_paths = column_object.getMaxDynamicPaths();
-/// Write max_dynamic_paths parameter.
-writeVarUInt(object_state->max_dynamic_paths, *stream);
 /// Write all dynamic paths in sorted order.
 object_state->sorted_dynamic_paths.reserve(dynamic_paths.size());
 for (const auto & [path, _] : dynamic_paths)
@@ -367,10 +356,15 @@ ISerialization::DeserializeBinaryBulkStatePtr SerializationObject::deserializeOb
 UInt64 serialization_version;
 readBinaryLittleEndian(serialization_version, *structure_stream);
 auto structure_state = std::make_shared<DeserializeBinaryBulkStateObjectStructure>(serialization_version);
-if (structure_state->serialization_version.value == ObjectSerializationVersion::Value::V1)
+if (structure_state->serialization_version.value == ObjectSerializationVersion::Value::V1 || structure_state->serialization_version.value == ObjectSerializationVersion::Value::V2)
 {
-/// Read max_dynamic_paths parameter.
-readVarUInt(structure_state->max_dynamic_paths, *structure_stream);
+if (structure_state->serialization_version.value == ObjectSerializationVersion::Value::V1)
+{
+/// Skip max_dynamic_paths parameter in V1 serialization version.
+size_t max_dynamic_paths;
+readVarUInt(max_dynamic_paths, *structure_stream);
+}
+
 /// Read the sorted list of dynamic paths.
 size_t dynamic_paths_size;
 readVarUInt(dynamic_paths_size, *structure_stream);
@@ -453,9 +447,6 @@ void SerializationObject::serializeBinaryBulkWithMultipleStreams(
 const auto & dynamic_paths = column_object.getDynamicPaths();
 const auto & shared_data = column_object.getSharedDataPtr();

-if (column_object.getMaxDynamicPaths() != object_state->max_dynamic_paths)
-throw Exception(ErrorCodes::LOGICAL_ERROR, "Mismatch of max_dynamic_paths parameter of Object. Expected: {}, Got: {}", object_state->max_dynamic_paths, column_object.getMaxDynamicPaths());
-
 if (column_object.getDynamicPaths().size() != object_state->sorted_dynamic_paths.size())
 throw Exception(ErrorCodes::LOGICAL_ERROR, "Mismatch of number of dynamic paths in Object. Expected: {}, Got: {}", object_state->sorted_dynamic_paths.size(), column_object.getDynamicPaths().size());

@ -604,7 +595,7 @@ void SerializationObject::deserializeBinaryBulkWithMultipleStreams(
|
|||||||
/// If it's a new object column, set dynamic paths and statistics.
|
/// If it's a new object column, set dynamic paths and statistics.
|
||||||
if (column_object.empty())
|
if (column_object.empty())
|
||||||
{
|
{
|
||||||
column_object.setMaxDynamicPaths(structure_state->max_dynamic_paths);
|
column_object.setMaxDynamicPaths(structure_state->sorted_dynamic_paths.size());
|
||||||
column_object.setDynamicPaths(structure_state->sorted_dynamic_paths);
|
column_object.setDynamicPaths(structure_state->sorted_dynamic_paths);
|
||||||
column_object.setStatistics(structure_state->statistics);
|
column_object.setStatistics(structure_state->statistics);
|
||||||
}
|
}
|
||||||
@@ -31,6 +31,8 @@ public:
         /// - ObjectDynamicPath stream for each column in dynamic paths
         /// - ObjectSharedData stream shared data column.
         V1 = 0,
+        /// V2 serialization: the same as V1 but without max_dynamic_paths parameter in ObjectStructure stream.
+        V2 = 2,
         /// String serialization:
         /// - ObjectData stream with single String column containing serialized JSON.
         STRING = 1,
@@ -98,7 +100,6 @@ private:
     struct DeserializeBinaryBulkStateObjectStructure : public ISerialization::DeserializeBinaryBulkState
     {
         ObjectSerializationVersion serialization_version;
-        size_t max_dynamic_paths;
         std::vector<String> sorted_dynamic_paths;
         std::unordered_set<String> dynamic_paths;
         /// Paths statistics. Map (dynamic path) -> (number of non-null values in this path).
@@ -111,9 +112,6 @@ private:
         DeserializeBinaryBulkSettings & settings,
         SubstreamsDeserializeStatesCache * cache);

-    /// Shared data has type Array(Tuple(String, String)).
-    static const DataTypePtr & getTypeOfSharedData();
-
     struct TypedPathSubcolumnCreator : public ISubcolumnCreator
     {
         String path;
@@ -18,7 +18,7 @@ SerializationObjectDynamicPath::SerializationObjectDynamicPath(
     , path(path_)
     , path_subcolumn(path_subcolumn_)
     , dynamic_serialization(std::make_shared<SerializationDynamic>())
-    , shared_data_serialization(SerializationObject::getTypeOfSharedData()->getDefaultSerialization())
+    , shared_data_serialization(DataTypeObject::getTypeOfSharedData()->getDefaultSerialization())
     , max_dynamic_types(max_dynamic_types_)
 {
 }
@@ -67,8 +67,8 @@ void SerializationObjectDynamicPath::enumerateStreams(
     {
         settings.path.push_back(Substream::ObjectSharedData);
         auto shared_data_substream_data = SubstreamData(shared_data_serialization)
-            .withType(data.type ? SerializationObject::getTypeOfSharedData() : nullptr)
-            .withColumn(data.column ? SerializationObject::getTypeOfSharedData()->createColumn() : nullptr)
+            .withType(data.type ? DataTypeObject::getTypeOfSharedData() : nullptr)
+            .withColumn(data.column ? DataTypeObject::getTypeOfSharedData()->createColumn() : nullptr)
             .withSerializationInfo(data.serialization_info)
             .withDeserializeState(deserialize_state->nested_state);
         settings.path.back().data = shared_data_substream_data;
@@ -164,7 +164,7 @@ void SerializationObjectDynamicPath::deserializeBinaryBulkWithMultipleStreams(
         settings.path.push_back(Substream::ObjectSharedData);
         /// Initialize shared_data column if needed.
         if (result_column->empty())
-            dynamic_path_state->shared_data = SerializationObject::getTypeOfSharedData()->createColumn();
+            dynamic_path_state->shared_data = DataTypeObject::getTypeOfSharedData()->createColumn();
         size_t prev_size = result_column->size();
         shared_data_serialization->deserializeBinaryBulkWithMultipleStreams(dynamic_path_state->shared_data, limit, settings, dynamic_path_state->nested_state, cache);
         /// If we need to read a subcolumn from Dynamic column, create an empty Dynamic column, fill it and extract subcolumn.
@@ -17,7 +17,7 @@ SerializationSubObject::SerializationSubObject(
     : path_prefix(path_prefix_)
     , typed_paths_serializations(typed_paths_serializations_)
     , dynamic_serialization(std::make_shared<SerializationDynamic>())
-    , shared_data_serialization(SerializationObject::getTypeOfSharedData()->getDefaultSerialization())
+    , shared_data_serialization(DataTypeObject::getTypeOfSharedData()->getDefaultSerialization())
 {
 }

@@ -64,8 +64,8 @@ void SerializationSubObject::enumerateStreams(
     /// We will need to read shared data to find all paths with requested prefix.
     settings.path.push_back(Substream::ObjectSharedData);
     auto shared_data_substream_data = SubstreamData(shared_data_serialization)
-        .withType(data.type ? SerializationObject::getTypeOfSharedData() : nullptr)
-        .withColumn(data.column ? SerializationObject::getTypeOfSharedData()->createColumn() : nullptr)
+        .withType(data.type ? DataTypeObject::getTypeOfSharedData() : nullptr)
+        .withColumn(data.column ? DataTypeObject::getTypeOfSharedData()->createColumn() : nullptr)
         .withSerializationInfo(data.serialization_info)
         .withDeserializeState(deserialize_state ? deserialize_state->shared_data_state : nullptr);
     settings.path.back().data = shared_data_substream_data;
@@ -208,7 +208,7 @@ void SerializationSubObject::deserializeBinaryBulkWithMultipleStreams(
     settings.path.push_back(Substream::ObjectSharedData);
     /// If it's a new object column, reinitialize column for shared data.
     if (result_column->empty())
-        sub_object_state->shared_data = SerializationObject::getTypeOfSharedData()->createColumn();
+        sub_object_state->shared_data = DataTypeObject::getTypeOfSharedData()->createColumn();
     size_t prev_size = column_object.size();
     shared_data_serialization->deserializeBinaryBulkWithMultipleStreams(sub_object_state->shared_data, limit, settings, sub_object_state->shared_data_state, cache);
     settings.path.pop_back();
@@ -1,2 +1,3 @@
 clickhouse_add_executable(data_type_deserialization_fuzzer data_type_deserialization_fuzzer.cpp ${SRCS})
-target_link_libraries(data_type_deserialization_fuzzer PRIVATE clickhouse_aggregate_functions)
+target_link_libraries(data_type_deserialization_fuzzer PRIVATE clickhouse_aggregate_functions dbms)
@@ -3,6 +3,7 @@
 #include <IO/ReadBufferFromMemory.h>
 #include <IO/ReadHelpers.h>

+#include <DataTypes/IDataType.h>
 #include <DataTypes/DataTypeFactory.h>

 #include <Common/MemoryTracker.h>
@@ -307,6 +307,13 @@ PostgreSQLTableStructure fetchPostgreSQLTableStructure(
     if (!columns.empty())
         columns_part = fmt::format(" AND attname IN ('{}')", boost::algorithm::join(columns, "','"));

+    /// Bypassing the error of the missing column `attgenerated` in the system table `pg_attribute` for PostgreSQL versions below 12.
+    /// This trick involves executing a special query to the DBMS in advance to obtain the correct line with comment /// if column has GENERATED.
+    /// The result of the query will be the name of the column `attgenerated` or an empty string declaration for PostgreSQL version 11 and below.
+    /// This change does not degrade the function's performance but restores support for older versions and fix ERROR: column "attgenerated" does not exist.
+    pqxx::result gen_result{tx.exec("select case when current_setting('server_version_num')::int < 120000 then '''''' else 'attgenerated' end as generated")};
+    std::string generated = gen_result[0][0].as<std::string>();
+
     std::string query = fmt::format(
         "SELECT attname AS name, " /// column name
         "format_type(atttypid, atttypmod) AS type, " /// data type
@@ -315,11 +322,11 @@ PostgreSQLTableStructure fetchPostgreSQLTableStructure(
         "atttypid as type_id, "
         "atttypmod as type_modifier, "
         "attnum as att_num, "
-        "attgenerated as generated " /// if column has GENERATED
+        "{} as generated " /// if column has GENERATED
         "FROM pg_attribute "
         "WHERE attrelid = (SELECT oid FROM pg_class WHERE {}) {}"
         "AND NOT attisdropped AND attnum > 0 "
-        "ORDER BY attnum ASC", where, columns_part);
+        "ORDER BY attnum ASC", generated, where, columns_part); /// Now we use variable `generated` to form query string. End of trick.

     auto postgres_table_with_schema = postgres_schema.empty() ? postgres_table : doubleQuoteString(postgres_schema) + '.' + doubleQuoteString(postgres_table);
     table.physical_columns = readNamesAndTypesList(tx, postgres_table_with_schema, query, use_nulls, false);
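The hunk above substitutes either the column name or an empty-string literal into the catalog query, depending on the server version. An illustrative sketch of that substitution (`buildQuery` is a made-up helper, not the real `fetchPostgreSQLTableStructure` code): on servers older than PostgreSQL 12 the probe yields the literal `''` instead of the column name `attgenerated`, so the final query selects an empty string rather than referencing a column that does not exist.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Build the attribute query with a version-dependent "generated" expression.
std::string buildQuery(int server_version_num)
{
    // Mirrors the probe's CASE expression: '' below v12, attgenerated otherwise.
    std::string generated = server_version_num < 120000 ? "''" : "attgenerated";
    char buf[128];
    std::snprintf(buf, sizeof(buf), "SELECT attname, %s as generated FROM pg_attribute", generated.c_str());
    return buf;
}
```

Either way the result set has a `generated` column, so the downstream reader needs no version checks of its own.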
@@ -32,6 +32,8 @@ void enableAllExperimentalSettings(ContextMutablePtr context)

     context->setSetting("allow_suspicious_low_cardinality_types", 1);
     context->setSetting("allow_suspicious_fixed_string_types", 1);
+    context->setSetting("allow_suspicious_types_in_group_by", 1);
+    context->setSetting("allow_suspicious_types_in_order_by", 1);
     context->setSetting("allow_suspicious_indices", 1);
     context->setSetting("allow_suspicious_codecs", 1);
     context->setSetting("allow_hyperscan", 1);
@@ -535,7 +535,7 @@ bool CachedOnDiskReadBufferFromFile::completeFileSegmentAndGetNext()
     chassert(file_offset_of_buffer_end > completed_range.right);
     cache_file_reader.reset();

-    file_segments->popFront();
+    file_segments->completeAndPopFront(settings.filesystem_cache_allow_background_download);
     if (file_segments->empty() && !nextFileSegmentsBatch())
         return false;

@@ -556,6 +556,12 @@ CachedOnDiskReadBufferFromFile::~CachedOnDiskReadBufferFromFile()
     {
         appendFilesystemCacheLog(file_segments->front(), read_type);
     }

+    if (file_segments && !file_segments->empty() && !file_segments->front().isCompleted())
+    {
+        file_segments->completeAndPopFront(settings.filesystem_cache_allow_background_download);
+        file_segments = {};
+    }
 }

 void CachedOnDiskReadBufferFromFile::predownload(FileSegment & file_segment)
@@ -784,6 +790,7 @@ bool CachedOnDiskReadBufferFromFile::writeCache(char * data, size_t size, size_t
         LOG_INFO(log, "Insert into cache is skipped due to insufficient disk space. ({})", e.displayText());
         return false;
     }
+    chassert(file_segment.state() == FileSegment::State::PARTIALLY_DOWNLOADED_NO_CONTINUATION);
     throw;
 }

@@ -196,7 +196,7 @@ void FileSegmentRangeWriter::completeFileSegment()
     if (file_segment.isDetached() || file_segment.isCompleted())
         return;

-    file_segment.complete();
+    file_segment.complete(false);
     appendFilesystemCacheLog(file_segment);
 }

@@ -210,7 +210,7 @@ void FileSegmentRangeWriter::jumpToPosition(size_t position)
         if (position < current_write_offset)
             throw Exception(ErrorCodes::LOGICAL_ERROR, "Cannot jump backwards: {} < {}", position, current_write_offset);

-        file_segment.complete();
+        file_segment.complete(false);
         file_segments.reset();
     }
     expected_write_offset = position;
@@ -1,2 +1,2 @@
 clickhouse_add_executable(format_fuzzer format_fuzzer.cpp ${SRCS})
-target_link_libraries(format_fuzzer PRIVATE clickhouse_aggregate_functions)
+target_link_libraries(format_fuzzer PRIVATE clickhouse_aggregate_functions dbms)
@@ -3921,7 +3921,7 @@ private:
         }
     }

-    WrapperType createTupleToObjectWrapper(const DataTypeTuple & from_tuple, bool has_nullable_subcolumns) const
+    WrapperType createTupleToObjectDeprecatedWrapper(const DataTypeTuple & from_tuple, bool has_nullable_subcolumns) const
     {
         if (!from_tuple.haveExplicitNames())
             throw Exception(ErrorCodes::TYPE_MISMATCH,
@@ -3968,7 +3968,7 @@ private:
         };
     }

-    WrapperType createMapToObjectWrapper(const DataTypeMap & from_map, bool has_nullable_subcolumns) const
+    WrapperType createMapToObjectDeprecatedWrapper(const DataTypeMap & from_map, bool has_nullable_subcolumns) const
     {
         auto key_value_types = from_map.getKeyValueTypes();

@@ -4048,11 +4048,11 @@ private:
     {
         if (const auto * from_tuple = checkAndGetDataType<DataTypeTuple>(from_type.get()))
         {
-            return createTupleToObjectWrapper(*from_tuple, to_type->hasNullableSubcolumns());
+            return createTupleToObjectDeprecatedWrapper(*from_tuple, to_type->hasNullableSubcolumns());
         }
         else if (const auto * from_map = checkAndGetDataType<DataTypeMap>(from_type.get()))
        {
-            return createMapToObjectWrapper(*from_map, to_type->hasNullableSubcolumns());
+            return createMapToObjectDeprecatedWrapper(*from_map, to_type->hasNullableSubcolumns());
         }
         else if (checkAndGetDataType<DataTypeString>(from_type.get()))
         {
@@ -4081,23 +4081,43 @@ private:
             "Cast to Object can be performed only from flatten named Tuple, Map or String. Got: {}", from_type->getName());
     }


     WrapperType createObjectWrapper(const DataTypePtr & from_type, const DataTypeObject * to_object) const
     {
         if (checkAndGetDataType<DataTypeString>(from_type.get()))
         {
             return [this](ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, const ColumnNullable * nullable_source, size_t input_rows_count)
             {
-                auto res = ConvertImplGenericFromString<true>::execute(arguments, result_type, nullable_source, input_rows_count, context)->assumeMutable();
-                res->finalize();
-                return res;
+                return ConvertImplGenericFromString<true>::execute(arguments, result_type, nullable_source, input_rows_count, context);
             };
         }

+        /// Cast Tuple/Object/Map to JSON type through serializing into JSON string and parsing back into JSON column.
+        /// Potentially we can do smarter conversion Tuple -> JSON with type preservation, but it's questionable how exactly Tuple should be
+        /// converted to JSON (for example, should we recursively convert nested Array(Tuple) to Array(JSON) or not, should we infer types from String fields, etc).
+        if (checkAndGetDataType<DataTypeObjectDeprecated>(from_type.get()) || checkAndGetDataType<DataTypeTuple>(from_type.get()) || checkAndGetDataType<DataTypeMap>(from_type.get()))
+        {
+            return [this](ColumnsWithTypeAndName & arguments, const DataTypePtr & result_type, const ColumnNullable * nullable_source, size_t input_rows_count)
+            {
+                auto json_string = ColumnString::create();
+                ColumnStringHelpers::WriteHelper write_helper(assert_cast<ColumnString &>(*json_string), input_rows_count);
+                auto & write_buffer = write_helper.getWriteBuffer();
+                FormatSettings format_settings = context ? getFormatSettings(context) : FormatSettings{};
+                auto serialization = arguments[0].type->getDefaultSerialization();
+                for (size_t i = 0; i < input_rows_count; ++i)
+                {
+                    serialization->serializeTextJSON(*arguments[0].column, i, write_buffer, format_settings);
+                    write_helper.rowWritten();
+                }
+                write_helper.finalize();
+
+                ColumnsWithTypeAndName args_with_json_string = {ColumnWithTypeAndName(json_string->getPtr(), std::make_shared<DataTypeString>(), "")};
+                return ConvertImplGenericFromString<true>::execute(args_with_json_string, result_type, nullable_source, input_rows_count, context);
+            };
+        }

         /// TODO: support CAST between JSON types with different parameters
-        /// support CAST from Map to JSON
-        /// support CAST from Tuple to JSON
-        /// support CAST from Object('json') to JSON
-        throw Exception(ErrorCodes::TYPE_MISMATCH, "Cast to {} can be performed only from String. Got: {}", magic_enum::enum_name(to_object->getSchemaFormat()), from_type->getName());
+        throw Exception(ErrorCodes::TYPE_MISMATCH, "Cast to {} can be performed only from String/Map/Object/Tuple. Got: {}", magic_enum::enum_name(to_object->getSchemaFormat()), from_type->getName());
     }

     WrapperType createVariantToVariantWrapper(const DataTypeVariant & from_variant, const DataTypeVariant & to_variant) const
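The new cast path above goes through an intermediate JSON string per row rather than converting column structures directly. A rough sketch of that first step (toy row type and invented `rowToJson` helper, not the ClickHouse column classes): each source row is rendered as a JSON string, and those strings would then be handed to the existing String -> JSON conversion.

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// Render one "row" of named integer fields as a JSON object string.
std::string rowToJson(const std::vector<std::pair<std::string, int>> & row)
{
    std::ostringstream out;
    out << '{';
    for (size_t i = 0; i < row.size(); ++i)
    {
        if (i)
            out << ',';
        out << '"' << row[i].first << "\":" << row[i].second;
    }
    out << '}';
    return out.str();
}
```

Routing the conversion through a string trades some efficiency for reuse of the single, well-tested JSON parsing path, which is the design choice the comment in the hunk defends.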
@@ -24,92 +24,7 @@ namespace ErrorCodes

 void UserDefinedSQLFunctionVisitor::visit(ASTPtr & ast)
 {
-    if (!ast)
-    {
-        chassert(false);
-        return;
-    }
-
-    /// FIXME: this helper should use updatePointerToChild(), but
-    /// forEachPointerToChild() is not implemented for ASTColumnDeclaration
-    /// (and also some members should be adjusted for this).
-    const auto visit_child_with_shared_ptr = [&](ASTPtr & child)
-    {
-        if (!child)
-            return;
-
-        auto * old_value = child.get();
-        visit(child);
-
-        // child did not change
-        if (old_value == child.get())
-            return;
-
-        // child changed, we need to modify it in the list of children of the parent also
-        for (auto & current_child : ast->children)
-        {
-            if (current_child.get() == old_value)
-                current_child = child;
-        }
-    };
-
-    if (auto * col_decl = ast->as<ASTColumnDeclaration>())
-    {
-        visit_child_with_shared_ptr(col_decl->default_expression);
-        visit_child_with_shared_ptr(col_decl->ttl);
-        return;
-    }
-
-    if (auto * storage = ast->as<ASTStorage>())
-    {
-        const auto visit_child = [&](IAST * & child)
-        {
-            if (!child)
-                return;
-
-            if (const auto * function = child->template as<ASTFunction>())
-            {
-                std::unordered_set<std::string> udf_in_replace_process;
-                auto replace_result = tryToReplaceFunction(*function, udf_in_replace_process);
-                if (replace_result)
-                    ast->setOrReplace(child, replace_result);
-            }
-
-            visit(child);
-        };
-
-        visit_child(storage->partition_by);
-        visit_child(storage->primary_key);
-        visit_child(storage->order_by);
-        visit_child(storage->sample_by);
-        visit_child(storage->ttl_table);
-
-        return;
-    }
-
-    if (auto * alter = ast->as<ASTAlterCommand>())
-    {
-        /// It is OK to use updatePointerToChild() because ASTAlterCommand implements forEachPointerToChild()
-        const auto visit_child_update_parent = [&](ASTPtr & child)
-        {
-            if (!child)
-                return;
-
-            auto * old_ptr = child.get();
-            visit(child);
-            auto * new_ptr = child.get();
-
-            /// Some AST classes have naked pointers to children elements as members.
-            /// We have to replace them if the child was replaced.
-            if (new_ptr != old_ptr)
-                ast->updatePointerToChild(old_ptr, new_ptr);
-        };
-
-        for (auto & children : alter->children)
-            visit_child_update_parent(children);
-
-        return;
-    }
-
+    chassert(ast);
     if (const auto * function = ast->template as<ASTFunction>())
     {
@@ -120,7 +35,19 @@ void UserDefinedSQLFunctionVisitor::visit(ASTPtr & ast)
     }

     for (auto & child : ast->children)
+    {
+        if (!child)
+            return;
+
+        auto * old_ptr = child.get();
         visit(child);
+        auto * new_ptr = child.get();
+
+        /// Some AST classes have naked pointers to children elements as members.
+        /// We have to replace them if the child was replaced.
+        if (new_ptr != old_ptr)
+            ast->updatePointerToChild(old_ptr, new_ptr);
+    }
 }

 void UserDefinedSQLFunctionVisitor::visit(IAST * ast)
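The simplified visitor above replaces the type-specific helpers with one generic walk: recurse into each child, and if the child's node was swapped out, let the parent repair any raw pointer members that still reference the old node. A toy model of that pattern (`Node` and `primary_key` are illustrative stand-ins, not the real ClickHouse `IAST` members):

```cpp
#include <cassert>
#include <memory>
#include <vector>

struct Node
{
    std::vector<std::shared_ptr<Node>> children;
    Node * primary_key = nullptr; // naked pointer into children, like some AST members

    // Repair a raw pointer member after a child was replaced.
    void updatePointerToChild(Node * old_ptr, Node * new_ptr)
    {
        if (primary_key == old_ptr)
            primary_key = new_ptr;
    }
};

void visit(std::shared_ptr<Node> & node)
{
    // Stand-in for a rewrite: replace every leaf node with a fresh one.
    if (node->children.empty())
    {
        node = std::make_shared<Node>();
        return;
    }
    for (auto & child : node->children)
    {
        auto * old_ptr = child.get();
        visit(child);
        auto * new_ptr = child.get();
        // If the recursion replaced the child, fix the parent's raw pointers.
        if (new_ptr != old_ptr)
            node->updatePointerToChild(old_ptr, new_ptr);
    }
}
```

The before/after pointer comparison is what lets a single generic loop subsume the deleted per-AST-type handlers.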
@@ -25,8 +25,10 @@ struct BitShiftLeftImpl
     {
         if constexpr (is_big_int_v<B>)
             throw Exception(ErrorCodes::NOT_IMPLEMENTED, "BitShiftLeft is not implemented for big integers as second argument");
-        else if (b < 0 || static_cast<UInt256>(b) > 8 * sizeof(A))
-            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value and less or equal to the bit width of the value to shift");
+        else if (b < 0)
+            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value");
+        else if (static_cast<UInt256>(b) > 8 * sizeof(A))
+            return static_cast<Result>(0);
         else if constexpr (is_big_int_v<A>)
             return static_cast<Result>(a) << static_cast<UInt32>(b);
         else
@@ -43,9 +45,10 @@ struct BitShiftLeftImpl
         const UInt8 word_size = 8 * sizeof(*pos);
         size_t n = end - pos;
         const UInt128 bit_limit = static_cast<UInt128>(word_size) * n;
-        if (b < 0 || static_cast<decltype(bit_limit)>(b) > bit_limit)
-            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value and less or equal to the bit width of the value to shift");
-        if (b == bit_limit)
+        if (b < 0)
+            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value");
+
+        if (b == bit_limit || static_cast<decltype(bit_limit)>(b) > bit_limit)
         {
             // insert default value
             out_vec.push_back(0);
@@ -111,9 +114,10 @@ struct BitShiftLeftImpl
         const UInt8 word_size = 8;
         size_t n = end - pos;
         const UInt128 bit_limit = static_cast<UInt128>(word_size) * n;
-        if (b < 0 || static_cast<decltype(bit_limit)>(b) > bit_limit)
-            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value and less or equal to the bit width of the value to shift");
-        if (b == bit_limit)
+        if (b < 0)
+            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value");
+
+        if (b == bit_limit || static_cast<decltype(bit_limit)>(b) > bit_limit)
         {
             // insert default value
             out_vec.resize_fill(out_vec.size() + n);
|
@ -26,8 +26,10 @@ struct BitShiftRightImpl
|
|||||||
{
|
{
|
||||||
if constexpr (is_big_int_v<B>)
|
if constexpr (is_big_int_v<B>)
|
||||||
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "BitShiftRight is not implemented for big integers as second argument");
|
throw Exception(ErrorCodes::NOT_IMPLEMENTED, "BitShiftRight is not implemented for big integers as second argument");
|
||||||
else if (b < 0 || static_cast<UInt256>(b) > 8 * sizeof(A))
|
else if (b < 0)
|
||||||
throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value and less or equal to the bit width of the value to shift");
|
throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value");
|
||||||
|
else if (static_cast<UInt256>(b) > 8 * sizeof(A))
|
||||||
|
return static_cast<Result>(0);
|
||||||
else if constexpr (is_big_int_v<A>)
|
else if constexpr (is_big_int_v<A>)
|
||||||
return static_cast<Result>(a) >> static_cast<UInt32>(b);
|
return static_cast<Result>(a) >> static_cast<UInt32>(b);
|
||||||
else
|
else
|
||||||
@@ -59,9 +61,10 @@ struct BitShiftRightImpl
         const UInt8 word_size = 8;
         size_t n = end - pos;
         const UInt128 bit_limit = static_cast<UInt128>(word_size) * n;
-        if (b < 0 || static_cast<decltype(bit_limit)>(b) > bit_limit)
-            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value and less or equal to the bit width of the value to shift");
-        if (b == bit_limit)
+        if (b < 0)
+            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value");
+        if (b == bit_limit || static_cast<decltype(bit_limit)>(b) > bit_limit)
         {
             /// insert default value
             out_vec.push_back(0);
@@ -99,9 +102,10 @@ struct BitShiftRightImpl
         const UInt8 word_size = 8;
         size_t n = end - pos;
         const UInt128 bit_limit = static_cast<UInt128>(word_size) * n;
-        if (b < 0 || static_cast<decltype(bit_limit)>(b) > bit_limit)
-            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value and less or equal to the bit width of the value to shift");
-        if (b == bit_limit)
+        if (b < 0)
+            throw Exception(ErrorCodes::ARGUMENT_OUT_OF_BOUND, "The number of shift positions needs to be a non-negative value");
+        if (b == bit_limit || static_cast<decltype(bit_limit)>(b) > bit_limit)
         {
             // insert default value
             out_vec.resize_fill(out_vec.size() + n);
@@ -58,6 +58,9 @@ struct ReadSettings
     bool enable_filesystem_cache_log = false;
     size_t filesystem_cache_segments_batch_size = 20;
     size_t filesystem_cache_reserve_space_wait_lock_timeout_milliseconds = 1000;
+    bool filesystem_cache_allow_background_download = true;
+    bool filesystem_cache_allow_background_download_for_metadata_files_in_packed_storage = true;
+    bool filesystem_cache_allow_background_download_during_fetch = true;
 
     bool use_page_cache_for_disks_without_file_cache = false;
     bool read_from_page_cache_if_exists_otherwise_bypass_cache = false;
@@ -83,7 +83,8 @@ void EvictionCandidates::removeQueueEntries(const CachePriorityGuard::Lock & loc
             queue_iterator->invalidate();
 
             chassert(candidate->releasable());
-            candidate->file_segment->resetQueueIterator();
+            candidate->file_segment->markDelayedRemovalAndResetQueueIterator();
+
             /// We need to set removed flag in file segment metadata,
             /// because in dynamic cache resize we first remove queue entries,
             /// then evict which also removes file segment metadata,
@@ -37,6 +37,11 @@ namespace ProfileEvents
     extern const Event FilesystemCacheFailToReserveSpaceBecauseOfCacheResize;
 }
 
+namespace CurrentMetrics
+{
+    extern const Metric FilesystemCacheDownloadQueueElements;
+}
+
 namespace DB
 {
 
@@ -918,7 +923,13 @@ bool FileCache::tryReserve(
         if (!query_priority->collectCandidatesForEviction(
                 size, required_elements_num, reserve_stat, eviction_candidates, {}, user.user_id, cache_lock))
         {
-            failure_reason = "cannot evict enough space for query limit";
+            const auto & stat = reserve_stat.total_stat;
+            failure_reason = fmt::format(
+                "cannot evict enough space for query limit "
+                "(non-releasable count: {}, non-releasable size: {}, "
+                "releasable count: {}, releasable size: {}, background download elements: {})",
+                stat.non_releasable_count, stat.non_releasable_size, stat.releasable_count, stat.releasable_size,
+                CurrentMetrics::get(CurrentMetrics::FilesystemCacheDownloadQueueElements));
             return false;
         }
 
@@ -933,7 +944,13 @@ bool FileCache::tryReserve(
         if (!main_priority->collectCandidatesForEviction(
                 size, required_elements_num, reserve_stat, eviction_candidates, queue_iterator, user.user_id, cache_lock))
         {
-            failure_reason = "cannot evict enough space";
+            const auto & stat = reserve_stat.total_stat;
+            failure_reason = fmt::format(
+                "cannot evict enough space "
+                "(non-releasable count: {}, non-releasable size: {}, "
+                "releasable count: {}, releasable size: {}, background download elements: {})",
+                stat.non_releasable_count, stat.non_releasable_size, stat.releasable_count, stat.releasable_size,
+                CurrentMetrics::get(CurrentMetrics::FilesystemCacheDownloadQueueElements));
             return false;
         }
 
@@ -28,6 +28,7 @@ namespace ProfileEvents
     extern const Event FileSegmentFailToIncreasePriority;
     extern const Event FilesystemCacheHoldFileSegments;
     extern const Event FilesystemCacheUnusedHoldFileSegments;
+    extern const Event FilesystemCacheBackgroundDownloadQueuePush;
 }
 
 namespace CurrentMetrics
@@ -171,10 +172,11 @@ void FileSegment::setQueueIterator(Priority::IteratorPtr iterator)
     queue_iterator = iterator;
 }
 
-void FileSegment::resetQueueIterator()
+void FileSegment::markDelayedRemovalAndResetQueueIterator()
 {
     auto lk = lock();
-    queue_iterator.reset();
+    on_delayed_removal = true;
+    queue_iterator = {};
 }
 
 size_t FileSegment::getCurrentWriteOffset() const
@@ -627,7 +629,7 @@ void FileSegment::completePartAndResetDownloader()
     LOG_TEST(log, "Complete batch. ({})", getInfoForLogUnlocked(lk));
 }
 
-void FileSegment::complete()
+void FileSegment::complete(bool allow_background_download)
 {
     ProfileEventTimeIncrement<Microseconds> watch(ProfileEvents::FileSegmentCompleteMicroseconds);
 
@@ -700,12 +702,15 @@ void FileSegment::complete()
         case State::PARTIALLY_DOWNLOADED:
         {
             chassert(current_downloaded_size > 0);
+            chassert(fs::exists(getPath()));
+            chassert(fs::file_size(getPath()) > 0);
 
             if (is_last_holder)
             {
                 bool added_to_download_queue = false;
-                if (background_download_enabled && remote_file_reader)
+                if (allow_background_download && background_download_enabled && remote_file_reader)
                 {
+                    ProfileEvents::increment(ProfileEvents::FilesystemCacheBackgroundDownloadQueuePush);
                     added_to_download_queue = locked_key->addToDownloadQueue(offset(), segment_lock); /// Finish download in background.
                 }
 
@@ -841,29 +846,60 @@ bool FileSegment::assertCorrectnessUnlocked(const FileSegmentGuard::Lock & lock)
         }
     }
 
-    if (download_state == State::DOWNLOADED)
+    switch (download_state.load())
     {
-        chassert(downloader_id.empty());
-        chassert(downloaded_size == reserved_size);
-        chassert(downloaded_size == range().size());
-        chassert(downloaded_size > 0);
-        chassert(std::filesystem::file_size(getPath()) > 0);
-        check_iterator(queue_iterator);
-    }
-    else
-    {
-        if (download_state == State::DOWNLOADING)
-        {
-            chassert(!downloader_id.empty());
-        }
-        else if (download_state == State::PARTIALLY_DOWNLOADED
-            || download_state == State::EMPTY)
+        case State::EMPTY:
         {
             chassert(downloader_id.empty());
+            chassert(!fs::exists(getPath()));
+            chassert(!queue_iterator);
+            break;
         }
-        chassert(reserved_size >= downloaded_size);
-        check_iterator(queue_iterator);
+        case State::DOWNLOADED:
+        {
+            chassert(downloader_id.empty());
+
+            chassert(downloaded_size == reserved_size);
+            chassert(downloaded_size == range().size());
+            chassert(downloaded_size > 0);
+            chassert(fs::file_size(getPath()) > 0);
+
+            chassert(queue_iterator || on_delayed_removal);
+            check_iterator(queue_iterator);
+            break;
+        }
+        case State::DOWNLOADING:
+        {
+            chassert(!downloader_id.empty());
+            if (downloaded_size)
+            {
+                chassert(queue_iterator);
+                chassert(fs::file_size(getPath()) > 0);
+            }
+            break;
+        }
+        case State::PARTIALLY_DOWNLOADED:
+        {
+            chassert(downloader_id.empty());
+
+            chassert(reserved_size >= downloaded_size);
+            chassert(downloaded_size > 0);
+            chassert(fs::file_size(getPath()) > 0);
+
+            chassert(queue_iterator);
+            check_iterator(queue_iterator);
+            break;
+        }
+        case State::PARTIALLY_DOWNLOADED_NO_CONTINUATION:
+        {
+            chassert(reserved_size >= downloaded_size);
+            check_iterator(queue_iterator);
+            break;
+        }
+        case State::DETACHED:
+        {
+            break;
+        }
     }
 
     return true;
@@ -991,7 +1027,12 @@ FileSegmentsHolder::FileSegmentsHolder(FileSegments && file_segments_)
 FileSegmentPtr FileSegmentsHolder::getSingleFileSegment() const
 {
     if (file_segments.size() != 1)
-        throw Exception(ErrorCodes::LOGICAL_ERROR, "Expected single file segment, got: {} in holder {}", file_segments.size(), toString());
+    {
+        throw Exception(
+            ErrorCodes::LOGICAL_ERROR,
+            "Expected single file segment, got: {} in holder {}",
+            file_segments.size(), toString());
+    }
     return file_segments.front();
 }
 
@@ -1001,7 +1042,23 @@ void FileSegmentsHolder::reset()
 
     ProfileEvents::increment(ProfileEvents::FilesystemCacheUnusedHoldFileSegments, file_segments.size());
     for (auto file_segment_it = file_segments.begin(); file_segment_it != file_segments.end();)
-        file_segment_it = completeAndPopFrontImpl();
+    {
+        try
+        {
+            /// One might think it would have been more correct to do `false` here,
+            /// not to allow background download for file segments that we actually did not start reading.
+            /// But actually we would only do that, if those file segments were already read partially by some other thread/query
+            /// but they were not put to the download queue, because current thread was holding them in Holder.
+            /// So as a culprit, we need to allow to happen what would have happened if we did not exist.
+            file_segment_it = completeAndPopFrontImpl(true);
+        }
+        catch (...)
+        {
+            tryLogCurrentException(__PRETTY_FUNCTION__);
+            chassert(false);
+            continue;
+        }
+    }
     file_segments.clear();
 }
 
@@ -1010,9 +1067,9 @@ FileSegmentsHolder::~FileSegmentsHolder()
     reset();
 }
 
-FileSegments::iterator FileSegmentsHolder::completeAndPopFrontImpl()
+FileSegments::iterator FileSegmentsHolder::completeAndPopFrontImpl(bool allow_background_download)
 {
-    front().complete();
+    front().complete(allow_background_download);
     CurrentMetrics::sub(CurrentMetrics::FilesystemCacheHoldFileSegments);
     return file_segments.erase(file_segments.begin());
 }
@@ -177,7 +177,7 @@ public:
 
     void setQueueIterator(Priority::IteratorPtr iterator);
 
-    void resetQueueIterator();
+    void markDelayedRemovalAndResetQueueIterator();
 
     KeyMetadataPtr tryGetKeyMetadata() const;
 
@@ -189,7 +189,7 @@ public:
      * ========== Methods that must do cv.notify() ==================
      */
 
-    void complete();
+    void complete(bool allow_background_download);
 
     void completePartAndResetDownloader();
 
@@ -249,12 +249,13 @@ private:
 
     String tryGetPath() const;
 
-    Key file_key;
+    const Key file_key;
     Range segment_range;
     const FileSegmentKind segment_kind;
     /// Size of the segment is not known until it is downloaded and
     /// can be bigger than max_file_segment_size.
-    const bool is_unbound = false;
+    /// is_unbound == true for temporary data in cache.
+    const bool is_unbound;
     const bool background_download_enabled;
 
     std::atomic<State> download_state;
@@ -279,6 +280,8 @@ private:
     std::atomic<size_t> hits_count = 0; /// cache hits.
     std::atomic<size_t> ref_count = 0; /// Used for getting snapshot state
 
+    bool on_delayed_removal = false;
+
     CurrentMetrics::Increment metric_increment{CurrentMetrics::CacheFileSegments};
 };
 
@@ -297,7 +300,7 @@ struct FileSegmentsHolder final : private boost::noncopyable
 
     String toString(bool with_state = false) const;
 
-    void popFront() { completeAndPopFrontImpl(); }
+    void completeAndPopFront(bool allow_background_download) { completeAndPopFrontImpl(allow_background_download); }
 
     FileSegment & front() { return *file_segments.front(); }
     const FileSegment & front() const { return *file_segments.front(); }
@@ -319,7 +322,7 @@ struct FileSegmentsHolder final : private boost::noncopyable
 private:
     FileSegments file_segments{};
 
-    FileSegments::iterator completeAndPopFrontImpl();
+    FileSegments::iterator completeAndPopFrontImpl(bool allow_background_download);
 };
 
 using FileSegmentsHolderPtr = std::unique_ptr<FileSegmentsHolder>;
@@ -940,7 +940,16 @@ KeyMetadata::iterator LockedKey::removeFileSegmentImpl(
     if (file_segment->queue_iterator && invalidate_queue_entry)
         file_segment->queue_iterator->invalidate();
 
-    file_segment->detach(segment_lock, *this);
+    try
+    {
+        file_segment->detach(segment_lock, *this);
+    }
+    catch (...)
+    {
+        tryLogCurrentException(__PRETTY_FUNCTION__);
+        chassert(false);
+        /// Do not rethrow, we must delete the file below.
+    }
 
     try
     {
@@ -990,8 +999,8 @@ void LockedKey::shrinkFileSegmentToDownloadedSize(
      * because of no space left in cache, we need to be able to cut file segment's size to downloaded_size.
      */
 
-    auto metadata = getByOffset(offset);
-    const auto & file_segment = metadata->file_segment;
+    auto file_segment_metadata = getByOffset(offset);
+    const auto & file_segment = file_segment_metadata->file_segment;
     chassert(file_segment->assertCorrectnessUnlocked(segment_lock));
 
     const size_t downloaded_size = file_segment->getDownloadedSize();
@@ -1006,15 +1015,15 @@ void LockedKey::shrinkFileSegmentToDownloadedSize(
     chassert(file_segment->reserved_size >= downloaded_size);
     int64_t diff = file_segment->reserved_size - downloaded_size;
 
-    metadata->file_segment = std::make_shared<FileSegment>(
+    file_segment_metadata->file_segment = std::make_shared<FileSegment>(
         getKey(), offset, downloaded_size, FileSegment::State::DOWNLOADED,
         CreateFileSegmentSettings(file_segment->getKind()), false,
         file_segment->cache, key_metadata, file_segment->queue_iterator);
 
     if (diff)
-        metadata->getQueueIterator()->decrementSize(diff);
+        file_segment_metadata->getQueueIterator()->decrementSize(diff);
 
-    chassert(file_segment->assertCorrectnessUnlocked(segment_lock));
+    chassert(file_segment_metadata->file_segment->assertCorrectnessUnlocked(segment_lock));
 }
 
 bool LockedKey::addToDownloadQueue(size_t offset, const FileSegmentGuard::Lock &)
@@ -60,17 +60,6 @@ public:
     IBlocksStreamPtr
     getNonJoinedBlocks(const Block & left_sample_block, const Block & result_sample_block, UInt64 max_block_size) const override;
 
-
-    bool isCloneSupported() const override
-    {
-        return !getTotals() && getTotalRowCount() == 0;
-    }
-
-    std::shared_ptr<IJoin> clone(const std::shared_ptr<TableJoin> & table_join_, const Block &, const Block & right_sample_block_) const override
-    {
-        return std::make_shared<ConcurrentHashJoin>(context, table_join_, slots, right_sample_block_, stats_collecting_params);
-    }
-
 private:
     struct InternalHashJoin
     {
@@ -194,6 +194,8 @@ namespace Setting
     extern const SettingsUInt64 filesystem_cache_max_download_size;
     extern const SettingsUInt64 filesystem_cache_reserve_space_wait_lock_timeout_milliseconds;
     extern const SettingsUInt64 filesystem_cache_segments_batch_size;
+    extern const SettingsBool filesystem_cache_enable_background_download_for_metadata_files_in_packed_storage;
+    extern const SettingsBool filesystem_cache_enable_background_download_during_fetch;
     extern const SettingsBool http_make_head_request;
     extern const SettingsUInt64 http_max_fields;
     extern const SettingsUInt64 http_max_field_name_size;
@@ -5746,6 +5748,9 @@ ReadSettings Context::getReadSettings() const
     res.filesystem_cache_segments_batch_size = settings_ref[Setting::filesystem_cache_segments_batch_size];
     res.filesystem_cache_reserve_space_wait_lock_timeout_milliseconds
         = settings_ref[Setting::filesystem_cache_reserve_space_wait_lock_timeout_milliseconds];
+    res.filesystem_cache_allow_background_download_for_metadata_files_in_packed_storage
+        = settings_ref[Setting::filesystem_cache_enable_background_download_for_metadata_files_in_packed_storage];
+    res.filesystem_cache_allow_background_download_during_fetch = settings_ref[Setting::filesystem_cache_enable_background_download_during_fetch];
 
     res.filesystem_cache_max_download_size = settings_ref[Setting::filesystem_cache_max_download_size];
     res.skip_download_if_exceeds_query_cache = settings_ref[Setting::skip_download_if_exceeds_query_cache];
@@ -105,6 +105,8 @@ namespace Setting
     extern const SettingsBool query_plan_aggregation_in_order;
     extern const SettingsBool query_plan_read_in_order;
     extern const SettingsUInt64 use_index_for_in_with_subqueries_max_values;
+    extern const SettingsBool allow_suspicious_types_in_group_by;
+    extern const SettingsBool allow_suspicious_types_in_order_by;
 }
 
 
@@ -118,6 +120,7 @@ namespace ErrorCodes
     extern const int NOT_IMPLEMENTED;
     extern const int UNKNOWN_IDENTIFIER;
     extern const int UNKNOWN_TYPE_OF_AST_NODE;
+    extern const int ILLEGAL_COLUMN;
 }
 
 namespace
@@ -1368,6 +1371,7 @@ bool SelectQueryExpressionAnalyzer::appendGroupBy(ExpressionActionsChain & chain
     ExpressionActionsChain::Step & step = chain.lastStep(columns_after_join);
 
     ASTs asts = select_query->groupBy()->children;
+    NameSet group_by_keys;
     if (select_query->group_by_with_grouping_sets)
     {
         for (const auto & ast : asts)
@@ -1375,6 +1379,7 @@ bool SelectQueryExpressionAnalyzer::appendGroupBy(ExpressionActionsChain & chain
             for (const auto & ast_element : ast->children)
             {
                 step.addRequiredOutput(ast_element->getColumnName());
+                group_by_keys.insert(ast_element->getColumnName());
                 getRootActions(ast_element, only_types, step.actions()->dag);
             }
         }
@@ -1384,10 +1389,17 @@ bool SelectQueryExpressionAnalyzer::appendGroupBy(ExpressionActionsChain & chain
         for (const auto & ast : asts)
         {
             step.addRequiredOutput(ast->getColumnName());
+            group_by_keys.insert(ast->getColumnName());
             getRootActions(ast, only_types, step.actions()->dag);
         }
     }
 
+    for (const auto & result_column : step.getResultColumns())
+    {
+        if (group_by_keys.contains(result_column.name))
+            validateGroupByKeyType(result_column.type);
+    }
+
     if (optimize_aggregation_in_order)
     {
         for (auto & child : asts)
@@ -1402,6 +1414,26 @@ bool SelectQueryExpressionAnalyzer::appendGroupBy(ExpressionActionsChain & chain
     return true;
 }
 
+void SelectQueryExpressionAnalyzer::validateGroupByKeyType(const DB::DataTypePtr & key_type) const
+{
+    if (getContext()->getSettingsRef()[Setting::allow_suspicious_types_in_group_by])
+        return;
+
+    auto check = [](const IDataType & type)
+    {
+        if (isDynamic(type) || isVariant(type))
+            throw Exception(
+                ErrorCodes::ILLEGAL_COLUMN,
+                "Data types Variant/Dynamic are not allowed in GROUP BY keys, because it can lead to unexpected results. "
+                "Consider using a subcolumn with a specific data type instead (for example 'column.Int64' or 'json.some.path.:Int64' if "
+                "its a JSON path subcolumn) or casting this column to a specific data type. "
+                "Set setting allow_suspicious_types_in_group_by = 1 in order to allow it");
+    };
+
+    check(*key_type);
+    key_type->forEachChild(check);
+}
+
 void SelectQueryExpressionAnalyzer::appendAggregateFunctionsArguments(ExpressionActionsChain & chain, bool only_types)
 {
     const auto * select_query = getAggregatingQuery();
@@ -1599,6 +1631,12 @@ ActionsAndProjectInputsFlagPtr SelectQueryExpressionAnalyzer::appendOrderBy(
         with_fill = true;
     }
 
+    for (const auto & result_column : step.getResultColumns())
+    {
+        if (order_by_keys.contains(result_column.name))
+            validateOrderByKeyType(result_column.type);
+    }
+
     if (auto interpolate_list = select_query->interpolate())
     {
 
@@ -1664,6 +1702,26 @@ ActionsAndProjectInputsFlagPtr SelectQueryExpressionAnalyzer::appendOrderBy(
     return actions;
 }
 
+void SelectQueryExpressionAnalyzer::validateOrderByKeyType(const DataTypePtr & key_type) const
+{
+    if (getContext()->getSettingsRef()[Setting::allow_suspicious_types_in_order_by])
+        return;
+
+    auto check = [](const IDataType & type)
+    {
+        if (isDynamic(type) || isVariant(type))
+            throw Exception(
+                ErrorCodes::ILLEGAL_COLUMN,
+                "Data types Variant/Dynamic are not allowed in ORDER BY keys, because it can lead to unexpected results. "
+                "Consider using a subcolumn with a specific data type instead (for example 'column.Int64' or 'json.some.path.:Int64' if "
+                "its a JSON path subcolumn) or casting this column to a specific data type. "
+                "Set setting allow_suspicious_types_in_order_by = 1 in order to allow it");
+    };
+
+    check(*key_type);
+    key_type->forEachChild(check);
+}
+
 bool SelectQueryExpressionAnalyzer::appendLimitBy(ExpressionActionsChain & chain, bool only_types)
 {
     const auto * select_query = getSelectQuery();
@@ -1981,7 +2039,9 @@ ExpressionAnalysisResult::ExpressionAnalysisResult(
         Block before_prewhere_sample = source_header;
         if (sanitizeBlock(before_prewhere_sample))
         {
-            before_prewhere_sample = prewhere_dag_and_flags->dag.updateHeader(before_prewhere_sample);
+            ExpressionActions(
+                prewhere_dag_and_flags->dag.clone(),
+                ExpressionActionsSettings::fromSettings(context->getSettingsRef())).execute(before_prewhere_sample);
             auto & column_elem = before_prewhere_sample.getByName(query.prewhere()->getColumnName());
             /// If the filter column is a constant, record it.
             if (column_elem.column)
|
|||||||
before_where_sample = source_header;
|
before_where_sample = source_header;
|
||||||
if (sanitizeBlock(before_where_sample))
|
if (sanitizeBlock(before_where_sample))
|
||||||
{
|
{
|
||||||
before_where_sample = before_where->dag.updateHeader(before_where_sample);
|
ExpressionActions(
|
||||||
|
before_where->dag.clone(),
|
||||||
|
ExpressionActionsSettings::fromSettings(context->getSettingsRef())).execute(before_where_sample);
|
||||||
|
|
||||||
auto & column_elem
|
auto & column_elem
|
||||||
= before_where_sample.getByName(query.where()->getColumnName());
|
= before_where_sample.getByName(query.where()->getColumnName());
|
||||||
|
@@ -396,6 +396,7 @@ private:
     ActionsAndProjectInputsFlagPtr appendPrewhere(ExpressionActionsChain & chain, bool only_types);
     bool appendWhere(ExpressionActionsChain & chain, bool only_types);
     bool appendGroupBy(ExpressionActionsChain & chain, bool only_types, bool optimize_aggregation_in_order, ManyExpressionActions &);
+    void validateGroupByKeyType(const DataTypePtr & key_type) const;
     void appendAggregateFunctionsArguments(ExpressionActionsChain & chain, bool only_types);
     void appendWindowFunctionsArguments(ExpressionActionsChain & chain, bool only_types);
@@ -408,6 +409,7 @@ private:
     bool appendHaving(ExpressionActionsChain & chain, bool only_types);
     /// appendSelect
     ActionsAndProjectInputsFlagPtr appendOrderBy(ExpressionActionsChain & chain, bool only_types, bool optimize_read_in_order, ManyExpressionActions &);
+    void validateOrderByKeyType(const DataTypePtr & key_type) const;
    bool appendLimitBy(ExpressionActionsChain & chain, bool only_types);
     /// appendProjectResult
 };
@@ -1,11 +1,24 @@
-#include <Interpreters/FillingRow.h>
-#include <Common/FieldVisitorsAccurateComparison.h>
+#include <cstddef>
 #include <IO/Operators.h>
+#include <Common/Logger.h>
+#include <Common/logger_useful.h>
+#include <Common/FieldVisitorsAccurateComparison.h>
+#include <Interpreters/FillingRow.h>
+
 
 namespace DB
 {
 
+constexpr static bool debug_logging_enabled = false;
+
+template <class... Args>
+inline static void logDebug(const char * fmt_str, Args&&... args)
+{
+    if constexpr (debug_logging_enabled)
+        LOG_DEBUG(getLogger("FillingRow"), "{}", fmt::format(fmt::runtime(fmt_str), std::forward<Args>(args)...));
+}
+
 bool less(const Field & lhs, const Field & rhs, int direction)
 {
     if (direction == -1)
@@ -28,6 +41,10 @@ FillingRow::FillingRow(const SortDescription & sort_description_)
     : sort_description(sort_description_)
 {
     row.resize(sort_description.size());
+
+    constraints.reserve(sort_description.size());
+    for (size_t i = 0; i < size(); ++i)
+        constraints.push_back(getFillDescription(i).fill_to);
 }
 
 bool FillingRow::operator<(const FillingRow & other) const
@@ -63,71 +80,254 @@ bool FillingRow::isNull() const
     return true;
 }
 
-std::pair<bool, bool> FillingRow::next(const FillingRow & to_row)
+std::optional<Field> FillingRow::doLongJump(const FillColumnDescription & descr, size_t column_ind, const Field & to)
 {
+    Field shifted_value = row[column_ind];
+
+    if (less(to, shifted_value, getDirection(column_ind)))
+        return std::nullopt;
+
+    for (int32_t step_len = 1, step_no = 0; step_no < 100 && step_len > 0; ++step_no)
+    {
+        Field next_value = shifted_value;
+        descr.step_func(next_value, step_len);
+
+        if (less(to, next_value, getDirection(0)))
+        {
+            step_len /= 2;
+        }
+        else
+        {
+            shifted_value = std::move(next_value);
+            step_len *= 2;
+        }
+    }
+
+    return shifted_value;
+}
+
+bool FillingRow::hasSomeConstraints(size_t pos) const
+{
+    return !constraints[pos].isNull();
+}
+
+bool FillingRow::isConstraintsSatisfied(size_t pos) const
+{
+    chassert(!row[pos].isNull());
+    chassert(hasSomeConstraints(pos));
+
+    int direction = getDirection(pos);
+    logDebug("constraint: {}, row: {}, direction: {}", constraints[pos], row[pos], direction);
+
+    return less(row[pos], constraints[pos], direction);
+}
+
+static const Field & findBorder(const Field & constraint, const Field & next_original, int direction)
+{
+    if (constraint.isNull())
+        return next_original;
+
+    if (next_original.isNull())
+        return constraint;
+
+    if (less(constraint, next_original, direction))
+        return constraint;
+
+    return next_original;
+}
+
+bool FillingRow::next(const FillingRow & next_original_row, bool& value_changed)
+{
     const size_t row_size = size();
     size_t pos = 0;
 
     /// Find position we need to increment for generating next row.
     for (; pos < row_size; ++pos)
-        if (!row[pos].isNull() && !to_row.row[pos].isNull() && !equals(row[pos], to_row.row[pos]))
+    {
+        if (row[pos].isNull())
+            continue;
+
+        const Field & border = findBorder(constraints[pos], next_original_row[pos], getDirection(pos));
+        logDebug("border: {}", border);
+
+        if (!border.isNull() && !equals(row[pos], border))
             break;
+    }
 
-    if (pos == row_size || less(to_row.row[pos], row[pos], getDirection(pos)))
-        return {false, false};
+    logDebug("pos: {}", pos);
+
+    if (pos == row_size)
+        return false;
+
+    if (!next_original_row[pos].isNull() && less(next_original_row[pos], row[pos], getDirection(pos)))
+        return false;
+
+    if (!constraints[pos].isNull() && !less(row[pos], constraints[pos], getDirection(pos)))
+        return false;
 
-    /// If we have any 'fill_to' value at position greater than 'pos',
-    /// we need to generate rows up to 'fill_to' value.
+    /// If we have any 'fill_to' value at position greater than 'pos' or configured staleness,
+    /// we need to generate rows up to one of this borders.
     for (size_t i = row_size - 1; i > pos; --i)
     {
         auto & fill_column_desc = getFillDescription(i);
 
-        if (fill_column_desc.fill_to.isNull() || row[i].isNull())
+        if (row[i].isNull())
+            continue;
+
+        if (constraints[i].isNull())
             continue;
 
         Field next_value = row[i];
-        fill_column_desc.step_func(next_value);
-        if (less(next_value, fill_column_desc.fill_to, getDirection(i)))
-        {
-            row[i] = next_value;
-            initFromDefaults(i + 1);
-            return {true, true};
-        }
+        fill_column_desc.step_func(next_value, 1);
+
+        if (!less(next_value, constraints[i], getDirection(i)))
+            continue;
+
+        row[i] = next_value;
+        initUsingFrom(i + 1);
+
+        value_changed = true;
+        return true;
     }
 
     auto next_value = row[pos];
-    getFillDescription(pos).step_func(next_value);
+    getFillDescription(pos).step_func(next_value, 1);
 
-    if (less(to_row.row[pos], next_value, getDirection(pos)) || equals(next_value, getFillDescription(pos).fill_to))
-        return {false, false};
+    if (!next_original_row[pos].isNull() && less(next_original_row[pos], next_value, getDirection(pos)))
+        return false;
+
+    if (!constraints[pos].isNull() && !less(next_value, constraints[pos], getDirection(pos)))
+        return false;
 
     row[pos] = next_value;
-    if (equals(row[pos], to_row.row[pos]))
+    if (equals(row[pos], next_original_row[pos]))
     {
         bool is_less = false;
         for (size_t i = pos + 1; i < row_size; ++i)
         {
-            const auto & fill_from = getFillDescription(i).fill_from;
-            if (!fill_from.isNull())
-                row[i] = fill_from;
+            const auto & descr = getFillDescription(i);
+            if (!descr.fill_from.isNull())
+                row[i] = descr.fill_from;
             else
-                row[i] = to_row.row[i];
-            is_less |= less(row[i], to_row.row[i], getDirection(i));
+                row[i] = next_original_row[i];
+
+            is_less |= (
+                (next_original_row[i].isNull() || less(row[i], next_original_row[i], getDirection(i))) &&
+                (constraints[i].isNull() || less(row[i], constraints[i], getDirection(i)))
+            );
         }
 
-        return {is_less, true};
+        value_changed = true;
+        return is_less;
     }
 
-    initFromDefaults(pos + 1);
-    return {true, true};
+    initUsingFrom(pos + 1);
+
+    value_changed = true;
+    return true;
 }
 
-void FillingRow::initFromDefaults(size_t from_pos)
+bool FillingRow::shift(const FillingRow & next_original_row, bool& value_changed)
+{
+    logDebug("next_original_row: {}, current: {}", next_original_row, *this);
+
+    for (size_t pos = 0; pos < size(); ++pos)
+    {
+        if (row[pos].isNull() || next_original_row[pos].isNull() || equals(row[pos], next_original_row[pos]))
+            continue;
+
+        if (less(next_original_row[pos], row[pos], getDirection(pos)))
+            return false;
+
+        std::optional<Field> next_value = doLongJump(getFillDescription(pos), pos, next_original_row[pos]);
+        logDebug("jumped to next value: {}", next_value.value_or("Did not complete"));
+
+        row[pos] = std::move(next_value.value());
+
+        if (equals(row[pos], next_original_row[pos]))
+        {
+            bool is_less = false;
+            for (size_t i = pos + 1; i < size(); ++i)
+            {
+                const auto & descr = getFillDescription(i);
+                if (!descr.fill_from.isNull())
+                    row[i] = descr.fill_from;
+                else
+                    row[i] = next_original_row[i];
+
+                is_less |= (
+                    (next_original_row[i].isNull() || less(row[i], next_original_row[i], getDirection(i))) &&
+                    (constraints[i].isNull() || less(row[i], constraints[i], getDirection(i)))
+                );
+            }
+
+            logDebug("is less: {}", is_less);
+
+            value_changed = true;
+            return is_less;
+        }
+        else
+        {
+            initUsingTo(/*from_pos=*/pos + 1);
+
+            value_changed = false;
+            return false;
+        }
+    }
+
+    return false;
+}
+
+bool FillingRow::hasSomeConstraints() const
+{
+    for (size_t pos = 0; pos < size(); ++pos)
+        if (hasSomeConstraints(pos))
+            return true;
+
+    return false;
+}
+
+bool FillingRow::isConstraintsSatisfied() const
+{
+    for (size_t pos = 0; pos < size(); ++pos)
+    {
+        if (row[pos].isNull() || !hasSomeConstraints(pos))
+            continue;
+
+        return isConstraintsSatisfied(pos);
+    }
+
+    return true;
+}
+
+void FillingRow::initUsingFrom(size_t from_pos)
 {
     for (size_t i = from_pos; i < sort_description.size(); ++i)
         row[i] = getFillDescription(i).fill_from;
 }
 
+void FillingRow::initUsingTo(size_t from_pos)
+{
+    for (size_t i = from_pos; i < sort_description.size(); ++i)
+        row[i] = getFillDescription(i).fill_to;
+}
+
+void FillingRow::updateConstraintsWithStalenessRow(const Columns& base_row, size_t row_ind)
+{
+    for (size_t i = 0; i < size(); ++i)
+    {
+        const auto& descr = getFillDescription(i);
+
+        if (!descr.fill_staleness.isNull())
+        {
+            Field staleness_border = (*base_row[i])[row_ind];
+            descr.staleness_step_func(staleness_border, 1);
+            constraints[i] = findBorder(descr.fill_to, staleness_border, getDirection(i));
+        }
+    }
+}
+
 String FillingRow::dump() const
 {
     WriteBufferFromOwnString out;
@@ -147,3 +347,12 @@ WriteBuffer & operator<<(WriteBuffer & out, const FillingRow & row)
 }
 
 }
+
+template <>
+struct fmt::formatter<DB::FillingRow> : fmt::formatter<string_view>
+{
+    constexpr auto format(const DB::FillingRow & row, format_context & ctx) const
+    {
+        return fmt::format_to(ctx.out(), "{}", row.dump());
+    }
+};
@@ -1,6 +1,6 @@
 #pragma once
-#include <Core/SortDescription.h>
 
+#include <Core/SortDescription.h>
 
 namespace DB
 {
@@ -15,16 +15,28 @@ bool equals(const Field & lhs, const Field & rhs);
  */
 class FillingRow
 {
+    /// finds last value <= to
+    std::optional<Field> doLongJump(const FillColumnDescription & descr, size_t column_ind, const Field & to);
+
+    bool hasSomeConstraints(size_t pos) const;
+    bool isConstraintsSatisfied(size_t pos) const;
+
 public:
     explicit FillingRow(const SortDescription & sort_description);
 
     /// Generates next row according to fill 'from', 'to' and 'step' values.
-    /// Return pair of boolean
-    /// apply - true if filling values should be inserted into result set
-    /// value_changed - true if filling row value was changed
-    std::pair<bool, bool> next(const FillingRow & to_row);
+    /// Returns true if filling values should be inserted into result set
+    bool next(const FillingRow & next_original_row, bool& value_changed);
 
-    void initFromDefaults(size_t from_pos = 0);
+    /// Returns true if need to generate some prefix for to_row
+    bool shift(const FillingRow & next_original_row, bool& value_changed);
+
+    bool hasSomeConstraints() const;
+    bool isConstraintsSatisfied() const;
+
+    void initUsingFrom(size_t from_pos = 0);
+    void initUsingTo(size_t from_pos = 0);
+    void updateConstraintsWithStalenessRow(const Columns& base_row, size_t row_ind);
 
     Field & operator[](size_t index) { return row[index]; }
     const Field & operator[](size_t index) const { return row[index]; }
@@ -42,6 +54,7 @@ public:
 
 private:
     Row row;
+    Row constraints;
     SortDescription sort_description;
 };