Merge branch 'master' into final_no_copy
Resolve conflicts

Signed-off-by: Duc Canh Le <duccanh.le@ahrefs.com>
commit 6331d8a6f2

.gitmodules (vendored): 9 changes
@@ -245,6 +245,12 @@
[submodule "contrib/idxd-config"]
	path = contrib/idxd-config
	url = https://github.com/intel/idxd-config
[submodule "contrib/QAT-ZSTD-Plugin"]
	path = contrib/QAT-ZSTD-Plugin
	url = https://github.com/intel/QAT-ZSTD-Plugin
[submodule "contrib/qatlib"]
	path = contrib/qatlib
	url = https://github.com/intel/qatlib
[submodule "contrib/wyhash"]
	path = contrib/wyhash
	url = https://github.com/wangyi-fudan/wyhash
@@ -360,3 +366,6 @@
[submodule "contrib/sqids-cpp"]
	path = contrib/sqids-cpp
	url = https://github.com/sqids/sqids-cpp.git
[submodule "contrib/idna"]
	path = contrib/idna
	url = https://github.com/ada-url/idna.git
@@ -22,7 +22,7 @@
* The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for the `OPTIMIZE` is not allowed by default (it can be unlocked with the `allow_experimental_replacing_merge_with_cleanup` setting). [#58267](https://github.com/ClickHouse/ClickHouse/pull/58267) ([Alexander Tokmakov](https://github.com/tavplubix)). This fixes [#57930](https://github.com/ClickHouse/ClickHouse/issues/57930). This closes [#54988](https://github.com/ClickHouse/ClickHouse/issues/54988). This closes [#54570](https://github.com/ClickHouse/ClickHouse/issues/54570). This closes [#50346](https://github.com/ClickHouse/ClickHouse/issues/50346). This closes [#47579](https://github.com/ClickHouse/ClickHouse/issues/47579). The feature has to be removed because it is not good. We have to remove it as quickly as possible, because there is no other option. [#57932](https://github.com/ClickHouse/ClickHouse/pull/57932) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
#### New Feature
* Implement Refreshable Materialized Views, requested in [#33919](https://github.com/ClickHouse/ClickHouse/issues/57995). [#56946](https://github.com/ClickHouse/ClickHouse/pull/56946) ([Michael Kolupaev](https://github.com/al13n321), [Michael Guzov](https://github.com/koloshmet)).
* Implement Refreshable Materialized Views, requested in [#33919](https://github.com/ClickHouse/ClickHouse/issues/33919). [#56946](https://github.com/ClickHouse/ClickHouse/pull/56946) ([Michael Kolupaev](https://github.com/al13n321), [Michael Guzov](https://github.com/koloshmet)).
* Introduce `PASTE JOIN`, which allows users to join tables without `ON` clause simply by row numbers. Example: `SELECT * FROM (SELECT number AS a FROM numbers(2)) AS t1 PASTE JOIN (SELECT number AS a FROM numbers(2) ORDER BY a DESC) AS t2`. [#57995](https://github.com/ClickHouse/ClickHouse/pull/57995) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* The `ORDER BY` clause now supports specifying `ALL`, meaning that ClickHouse sorts by all columns in the `SELECT` clause. Example: `SELECT col1, col2 FROM tab WHERE [...] ORDER BY ALL`. [#57875](https://github.com/ClickHouse/ClickHouse/pull/57875) ([zhongyuankai](https://github.com/zhongyuankai)).
* Added a new mutation command `ALTER TABLE <table> APPLY DELETED MASK`, which allows to enforce applying of mask written by lightweight delete and to remove rows marked as deleted from disk. [#57433](https://github.com/ClickHouse/ClickHouse/pull/57433) ([Anton Popov](https://github.com/CurtizJ)).
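A minimal invocation of the `APPLY DELETED MASK` mutation mentioned in the entry above might look like this; the table name is hypothetical, and only the statement syntax comes from the changelog entry.

```bash
# Hypothetical example: physically remove rows hidden by lightweight DELETEs.
clickhouse-client --query "ALTER TABLE orders APPLY DELETED MASK"
```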
@@ -375,6 +375,7 @@
* Do not interpret the `send_timeout` set on the client side as the `receive_timeout` on the server side and vice versa. [#56035](https://github.com/ClickHouse/ClickHouse/pull/56035) ([Azat Khuzhin](https://github.com/azat)).
* Comparison of time intervals with different units will throw an exception. This closes [#55942](https://github.com/ClickHouse/ClickHouse/issues/55942). You might have occasionally relied on the previous behavior, when the underlying numeric values were compared regardless of the units. [#56090](https://github.com/ClickHouse/ClickHouse/pull/56090) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Rewrote the experimental `S3Queue` table engine completely: changed the way we keep information in ZooKeeper, which allows making fewer ZooKeeper requests; added caching of ZooKeeper state in cases when we know the state will not change; made the polling from the S3 process less aggressive; and changed the way the TTL and max set for tracked files are maintained, which is now a background process. Added `system.s3queue` and `system.s3queue_log` tables. Closes [#54998](https://github.com/ClickHouse/ClickHouse/issues/54998). [#54422](https://github.com/ClickHouse/ClickHouse/pull/54422) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Arbitrary paths on HTTP endpoint are no longer interpreted as a request to the `/query` endpoint. [#55521](https://github.com/ClickHouse/ClickHouse/pull/55521) ([Konstantin Bogdanov](https://github.com/thevar1able)).
#### New Feature
* Add function `arrayFold(accumulator, x1, ..., xn -> expression, initial, array1, ..., arrayn)` which applies a lambda function to multiple arrays of the same cardinality and collects the result in an accumulator. [#49794](https://github.com/ClickHouse/ClickHouse/pull/49794) ([Lirikl](https://github.com/Lirikl)).
@@ -33,7 +33,7 @@ curl https://clickhouse.com/ | sh
## Upcoming Events
Keep an eye out for upcoming meetups around the world. Somewhere else you want us to be? Please feel free to reach out to tyler <at> clickhouse <dot> com.
Keep an eye out for upcoming meetups around the world. Somewhere else you want us to be? Please feel free to reach out to tyler `<at>` clickhouse `<dot>` com.
## Recent Recordings
* **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible, recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Currently featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"
contrib/CMakeLists.txt (vendored): 27 changes
@@ -154,6 +154,7 @@ add_contrib (libpqxx-cmake libpqxx)
add_contrib (libpq-cmake libpq)
add_contrib (nuraft-cmake NuRaft)
add_contrib (fast_float-cmake fast_float)
add_contrib (idna-cmake idna)
add_contrib (datasketches-cpp-cmake datasketches-cpp)
add_contrib (incbin-cmake incbin)
add_contrib (sqids-cpp-cmake sqids-cpp)
@@ -171,9 +172,9 @@ add_contrib (s2geometry-cmake s2geometry)
add_contrib (c-ares-cmake c-ares)

if (OS_LINUX AND ARCH_AMD64 AND ENABLE_SSE42)
    option (ENABLE_QPL "Enable Intel® Query Processing Library" ${ENABLE_LIBRARIES})
    option (ENABLE_QPL "Enable Intel® Query Processing Library (QPL)" ${ENABLE_LIBRARIES})
elseif(ENABLE_QPL)
    message (${RECONFIGURE_MESSAGE_LEVEL} "QPL library is only supported on x86_64 arch with SSE 4.2 or higher")
    message (${RECONFIGURE_MESSAGE_LEVEL} "QPL library is only supported on x86_64 with SSE 4.2 or higher")
endif()
if (ENABLE_QPL)
    add_contrib (idxd-config-cmake idxd-config)
@@ -182,6 +183,28 @@ else()
    message(STATUS "Not using QPL")
endif ()

if (OS_LINUX AND ARCH_AMD64)
    option (ENABLE_QATLIB "Enable Intel® QuickAssist Technology Library (QATlib)" ${ENABLE_LIBRARIES})
elseif(ENABLE_QATLIB)
    message (${RECONFIGURE_MESSAGE_LEVEL} "QATLib is only supported on x86_64")
endif()
if (ENABLE_QATLIB)
    option (ENABLE_QAT_USDM_DRIVER "A User Space DMA-able Memory (USDM) component which allocates/frees DMA-able memory" OFF)
    option (ENABLE_QAT_OUT_OF_TREE_BUILD "Using out-of-tree driver, user needs to customize ICP_ROOT variable" OFF)
    set(ICP_ROOT "" CACHE STRING "ICP_ROOT variable to define the path of out-of-tree driver package")
    if (ENABLE_QAT_OUT_OF_TREE_BUILD)
        if (ICP_ROOT STREQUAL "")
            message(FATAL_ERROR "Please define the path of out-of-tree driver package with -DICP_ROOT=xxx or disable out-of-tree build with -DENABLE_QAT_OUT_OF_TREE_BUILD=OFF; \
            If you want out-of-tree build but have no package available, please download and build ICP package from: https://www.intel.com/content/www/us/en/download/765501.html")
        endif ()
    else()
        add_contrib (qatlib-cmake qatlib) # requires: isa-l
    endif ()
    add_contrib (QAT-ZSTD-Plugin-cmake QAT-ZSTD-Plugin)
else()
    message(STATUS "Not using QATLib")
endif ()

add_contrib (morton-nd-cmake morton-nd)
if (ARCH_S390X)
    add_contrib(crc32-s390x-cmake crc32-s390x)
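The hunk above adds the `ENABLE_QATLIB`, `ENABLE_QAT_USDM_DRIVER`, `ENABLE_QAT_OUT_OF_TREE_BUILD` and `ICP_ROOT` options. A sketch of how they might be passed at configure time follows; the build directory and driver path are placeholders, and only the option names come from the diff.

```bash
# In-tree build against the bundled qatlib submodule (x86_64 Linux assumed).
cmake -S . -B build -DENABLE_QATLIB=ON -DENABLE_QAT_USDM_DRIVER=ON

# Out-of-tree build against an externally built QAT driver package (hypothetical path).
cmake -S . -B build -DENABLE_QATLIB=ON -DENABLE_QAT_OUT_OF_TREE_BUILD=ON -DICP_ROOT=/opt/intel/qat-driver
```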
contrib/QAT-ZSTD-Plugin (new vendored submodule): 1 change
@@ -0,0 +1 @@
Subproject commit e5a134e12d2ea8a5b0f3b83c5b1c325fda4eb0a8
contrib/QAT-ZSTD-Plugin-cmake/CMakeLists.txt (new file): 85 lines
@@ -0,0 +1,85 @@
# Intel® QuickAssist Technology ZSTD Plugin (QAT ZSTD Plugin) is a plugin to Zstandard* (ZSTD*) for accelerating compression by QAT.
# ENABLE_QAT_OUT_OF_TREE_BUILD = 1 means the kernel does not have native support, so the user builds and installs the driver from the external package: https://www.intel.com/content/www/us/en/download/765501.html
# In that case the user also needs to set the ICP_ROOT environment variable to point to the root directory of the QAT driver source tree.
# ENABLE_QAT_OUT_OF_TREE_BUILD = 0 means the kernel has a built-in QAT driver, and QAT-ZSTD-Plugin only depends on qatlib.
if (ENABLE_QAT_OUT_OF_TREE_BUILD)
    message(STATUS "Intel QATZSTD out-of-tree build, ICP_ROOT:${ICP_ROOT}")

    set(QATZSTD_SRC_DIR "${ClickHouse_SOURCE_DIR}/contrib/QAT-ZSTD-Plugin/src")
    set(QATZSTD_SRC "${QATZSTD_SRC_DIR}/qatseqprod.c")
    set(ZSTD_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/zstd/lib")
    set(QAT_INCLUDE_DIR "${ICP_ROOT}/quickassist/include")
    set(QAT_DC_INCLUDE_DIR "${ICP_ROOT}/quickassist/include/dc")
    set(QAT_AL_INCLUDE_DIR "${ICP_ROOT}/quickassist/lookaside/access_layer/include")
    set(QAT_USDM_INCLUDE_DIR "${ICP_ROOT}/quickassist/utilities/libusdm_drv")
    set(USDM_LIBRARY "${ICP_ROOT}/build/libusdm_drv_s.so")
    set(QAT_S_LIBRARY "${ICP_ROOT}/build/libqat_s.so")
    if (ENABLE_QAT_USDM_DRIVER)
        add_definitions(-DENABLE_USDM_DRV)
    endif()
    add_library(_qatzstd_plugin ${QATZSTD_SRC})
    target_link_libraries (_qatzstd_plugin PUBLIC ${USDM_LIBRARY} ${QAT_S_LIBRARY})
    target_include_directories(_qatzstd_plugin
        SYSTEM PUBLIC "${QATZSTD_SRC_DIR}"
        PRIVATE ${QAT_INCLUDE_DIR}
            ${QAT_DC_INCLUDE_DIR}
            ${QAT_AL_INCLUDE_DIR}
            ${QAT_USDM_INCLUDE_DIR}
            ${ZSTD_LIBRARY_DIR})
    target_compile_definitions(_qatzstd_plugin PRIVATE -DDEBUGLEVEL=0 PUBLIC -DENABLE_ZSTD_QAT_CODEC)
    add_library (ch_contrib::qatzstd_plugin ALIAS _qatzstd_plugin)
else () # In-tree build
    message(STATUS "Intel QATZSTD in-tree build")
    set(QATZSTD_SRC_DIR "${ClickHouse_SOURCE_DIR}/contrib/QAT-ZSTD-Plugin/src")
    set(QATZSTD_SRC "${QATZSTD_SRC_DIR}/qatseqprod.c")
    set(ZSTD_LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/zstd/lib")

    # please download&build ICP package from: https://www.intel.com/content/www/us/en/download/765501.html
    set(ICP_ROOT "${ClickHouse_SOURCE_DIR}/contrib/qatlib")
    set(QAT_INCLUDE_DIR "${ICP_ROOT}/quickassist/include")
    set(QAT_DC_INCLUDE_DIR "${ICP_ROOT}/quickassist/include/dc")
    set(QAT_AL_INCLUDE_DIR "${ICP_ROOT}/quickassist/lookaside/access_layer/include")
    set(QAT_USDM_INCLUDE_DIR "${ICP_ROOT}/quickassist/utilities/libusdm_drv")
    set(USDM_LIBRARY "${ICP_ROOT}/build/libusdm_drv_s.so")
    set(QAT_S_LIBRARY "${ICP_ROOT}/build/libqat_s.so")
    set(LIBQAT_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/qatlib")
    set(LIBQAT_HEADER_DIR "${CMAKE_CURRENT_BINARY_DIR}/include")

    file(MAKE_DIRECTORY
        "${LIBQAT_HEADER_DIR}/qat"
    )
    file(COPY "${LIBQAT_ROOT_DIR}/quickassist/include/cpa.h"
        DESTINATION "${LIBQAT_HEADER_DIR}/qat/"
    )
    file(COPY "${LIBQAT_ROOT_DIR}/quickassist/include/dc/cpa_dc.h"
        DESTINATION "${LIBQAT_HEADER_DIR}/qat/"
    )
    file(COPY "${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/include/icp_sal_poll.h"
        DESTINATION "${LIBQAT_HEADER_DIR}/qat/"
    )
    file(COPY "${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/include/icp_sal_user.h"
        DESTINATION "${LIBQAT_HEADER_DIR}/qat/"
    )
    file(COPY "${LIBQAT_ROOT_DIR}/quickassist/utilities/libusdm_drv/qae_mem.h"
        DESTINATION "${LIBQAT_HEADER_DIR}/qat/"
    )

    if (ENABLE_QAT_USDM_DRIVER)
        add_definitions(-DENABLE_USDM_DRV)
    endif()

    add_library(_qatzstd_plugin ${QATZSTD_SRC})
    target_link_libraries (_qatzstd_plugin PUBLIC ch_contrib::qatlib ch_contrib::usdm)
    target_include_directories(_qatzstd_plugin PRIVATE
        ${QAT_INCLUDE_DIR}
        ${QAT_DC_INCLUDE_DIR}
        ${QAT_AL_INCLUDE_DIR}
        ${QAT_USDM_INCLUDE_DIR}
        ${ZSTD_LIBRARY_DIR}
        ${LIBQAT_HEADER_DIR})
    target_compile_definitions(_qatzstd_plugin PRIVATE -DDEBUGLEVEL=0 PUBLIC -DENABLE_ZSTD_QAT_CODEC -DINTREE)
    target_include_directories(_qatzstd_plugin SYSTEM PUBLIC $<BUILD_INTERFACE:${QATZSTD_SRC_DIR}> $<INSTALL_INTERFACE:include>)
    add_library (ch_contrib::qatzstd_plugin ALIAS _qatzstd_plugin)
endif ()
contrib/idna (new vendored submodule): 1 change
@@ -0,0 +1 @@
Subproject commit 3c8be01d42b75649f1ac9b697d0ef757eebfe667
contrib/idna-cmake/CMakeLists.txt (new file): 24 lines
@@ -0,0 +1,24 @@
option(ENABLE_IDNA "Enable idna support" ${ENABLE_LIBRARIES})
if ((NOT ENABLE_IDNA))
    message (STATUS "Not using idna")
    return()
endif()
set (LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/idna")

set (SRCS
    "${LIBRARY_DIR}/src/idna.cpp"
    "${LIBRARY_DIR}/src/mapping.cpp"
    "${LIBRARY_DIR}/src/mapping_tables.cpp"
    "${LIBRARY_DIR}/src/normalization.cpp"
    "${LIBRARY_DIR}/src/normalization_tables.cpp"
    "${LIBRARY_DIR}/src/punycode.cpp"
    "${LIBRARY_DIR}/src/to_ascii.cpp"
    "${LIBRARY_DIR}/src/to_unicode.cpp"
    "${LIBRARY_DIR}/src/unicode_transcoding.cpp"
    "${LIBRARY_DIR}/src/validity.cpp"
)

add_library (_idna ${SRCS})
target_include_directories(_idna PUBLIC "${LIBRARY_DIR}/include")

add_library (ch_contrib::idna ALIAS _idna)
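Like the other contrib libraries, the new `ENABLE_IDNA` option defaults to `ENABLE_LIBRARIES`; a minimal sketch of toggling it at configure time (the build directory is a placeholder):

```bash
# Disable the bundled ada-url/idna library explicitly.
cmake -S . -B build -DENABLE_IDNA=OFF
```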
contrib/qatlib (new vendored submodule): 1 change
@@ -0,0 +1 @@
Subproject commit abe15d7bfc083117bfbb4baee0b49ffcd1c03c5c
contrib/qatlib-cmake/CMakeLists.txt (new file): 213 lines
@@ -0,0 +1,213 @@
# Intel® QuickAssist Technology Library (QATlib).

message(STATUS "Intel QATlib ON")
set(LIBQAT_ROOT_DIR "${ClickHouse_SOURCE_DIR}/contrib/qatlib")
set(LIBQAT_DIR "${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src")
set(LIBOSAL_DIR "${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/src")
set(OPENSSL_DIR "${ClickHouse_SOURCE_DIR}/contrib/openssl")

# Build 3 libraries: _qatmgr, _osal, _qatlib
# Produce ch_contrib::qatlib by linking these libraries.

# _qatmgr

SET(LIBQATMGR_sources ${LIBQAT_DIR}/qat_direct/vfio/qat_mgr_client.c
    ${LIBQAT_DIR}/qat_direct/vfio/qat_mgr_lib.c
    ${LIBQAT_DIR}/qat_direct/vfio/qat_log.c
    ${LIBQAT_DIR}/qat_direct/vfio/vfio_lib.c
    ${LIBQAT_DIR}/qat_direct/vfio/adf_pfvf_proto.c
    ${LIBQAT_DIR}/qat_direct/vfio/adf_pfvf_vf_msg.c
    ${LIBQAT_DIR}/qat_direct/vfio/adf_vfio_pf.c)

add_library(_qatmgr ${LIBQATMGR_sources})

target_include_directories(_qatmgr PRIVATE
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/qat_direct/vfio
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/include
    ${LIBQAT_ROOT_DIR}/quickassist/include
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/include
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/src/linux/user_space/include
    ${LIBQAT_ROOT_DIR}/quickassist/qat/drivers/crypto/qat/qat_common
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/qat_direct/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/qat_direct/common/include
    ${ClickHouse_SOURCE_DIR}/contrib/sysroot/linux-x86_64-musl/include)

target_compile_definitions(_qatmgr PRIVATE -DUSER_SPACE)
target_compile_options(_qatmgr PRIVATE -Wno-error=int-conversion)
# _osal

SET(LIBOSAL_sources
    ${LIBOSAL_DIR}/linux/user_space/OsalSemaphore.c
    ${LIBOSAL_DIR}/linux/user_space/OsalThread.c
    ${LIBOSAL_DIR}/linux/user_space/OsalMutex.c
    ${LIBOSAL_DIR}/linux/user_space/OsalSpinLock.c
    ${LIBOSAL_DIR}/linux/user_space/OsalAtomic.c
    ${LIBOSAL_DIR}/linux/user_space/OsalServices.c
    ${LIBOSAL_DIR}/linux/user_space/OsalUsrKrnProxy.c
    ${LIBOSAL_DIR}/linux/user_space/OsalCryptoInterface.c)

add_library(_osal ${LIBOSAL_sources})

target_include_directories(_osal PRIVATE
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/src/linux/user_space
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/include
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/src/linux/user_space/include
    ${OPENSSL_DIR}/include
    ${ClickHouse_SOURCE_DIR}/contrib/openssl-cmake/linux_x86_64/include
    ${ClickHouse_SOURCE_DIR}/contrib/qatlib-cmake/include)

target_compile_definitions(_osal PRIVATE -DOSAL_ENSURE_ON -DUSE_OPENSSL)

# _qatlib
SET(LIBQAT_sources
    ${LIBQAT_DIR}/common/compression/dc_buffers.c
    ${LIBQAT_DIR}/common/compression/dc_chain.c
    ${LIBQAT_DIR}/common/compression/dc_datapath.c
    ${LIBQAT_DIR}/common/compression/dc_dp.c
    ${LIBQAT_DIR}/common/compression/dc_header_footer.c
    ${LIBQAT_DIR}/common/compression/dc_header_footer_lz4.c
    ${LIBQAT_DIR}/common/compression/dc_session.c
    ${LIBQAT_DIR}/common/compression/dc_stats.c
    ${LIBQAT_DIR}/common/compression/dc_err_sim.c
    ${LIBQAT_DIR}/common/compression/dc_ns_datapath.c
    ${LIBQAT_DIR}/common/compression/dc_ns_header_footer.c
    ${LIBQAT_DIR}/common/compression/dc_crc32.c
    ${LIBQAT_DIR}/common/compression/dc_crc64.c
    ${LIBQAT_DIR}/common/compression/dc_xxhash32.c
    ${LIBQAT_DIR}/common/compression/icp_sal_dc_err_sim.c
    ${LIBQAT_DIR}/common/crypto/asym/diffie_hellman/lac_dh_control_path.c
    ${LIBQAT_DIR}/common/crypto/asym/diffie_hellman/lac_dh_data_path.c
    ${LIBQAT_DIR}/common/crypto/asym/diffie_hellman/lac_dh_interface_check.c
    ${LIBQAT_DIR}/common/crypto/asym/diffie_hellman/lac_dh_stats.c
    ${LIBQAT_DIR}/common/crypto/asym/dsa/lac_dsa.c
    ${LIBQAT_DIR}/common/crypto/asym/dsa/lac_dsa_interface_check.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_ec.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_ec_common.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_ec_montedwds.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_ec_nist_curves.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_ecdh.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_ecdsa.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_ecsm2.c
    ${LIBQAT_DIR}/common/crypto/asym/ecc/lac_kpt_ecdsa.c
    ${LIBQAT_DIR}/common/crypto/asym/large_number/lac_ln.c
    ${LIBQAT_DIR}/common/crypto/asym/large_number/lac_ln_interface_check.c
    ${LIBQAT_DIR}/common/crypto/asym/pke_common/lac_pke_mmp.c
    ${LIBQAT_DIR}/common/crypto/asym/pke_common/lac_pke_qat_comms.c
    ${LIBQAT_DIR}/common/crypto/asym/pke_common/lac_pke_utils.c
    ${LIBQAT_DIR}/common/crypto/asym/prime/lac_prime.c
    ${LIBQAT_DIR}/common/crypto/asym/prime/lac_prime_interface_check.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_rsa.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_rsa_control_path.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_rsa_decrypt.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_rsa_encrypt.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_rsa_interface_check.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_rsa_keygen.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_rsa_stats.c
    ${LIBQAT_DIR}/common/crypto/asym/rsa/lac_kpt_rsa_decrypt.c
    ${LIBQAT_DIR}/common/crypto/sym/drbg/lac_sym_drbg_api.c
    ${LIBQAT_DIR}/common/crypto/sym/key/lac_sym_key.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_alg_chain.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_api.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_auth_enc.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_cb.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_cipher.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_compile_check.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_dp.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_hash.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_partial.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_queue.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_stats.c
    ${LIBQAT_DIR}/common/crypto/sym/nrbg/lac_sym_nrbg_api.c
    ${LIBQAT_DIR}/common/crypto/sym/qat/lac_sym_qat.c
    ${LIBQAT_DIR}/common/crypto/sym/qat/lac_sym_qat_cipher.c
    ${LIBQAT_DIR}/common/crypto/sym/qat/lac_sym_qat_constants_table.c
    ${LIBQAT_DIR}/common/crypto/sym/qat/lac_sym_qat_hash.c
    ${LIBQAT_DIR}/common/crypto/sym/qat/lac_sym_qat_hash_defs_lookup.c
    ${LIBQAT_DIR}/common/crypto/sym/qat/lac_sym_qat_key.c
    ${LIBQAT_DIR}/common/crypto/sym/lac_sym_hash_sw_precomputes.c
    ${LIBQAT_DIR}/common/crypto/kpt/provision/lac_kpt_provision.c
    ${LIBQAT_DIR}/common/ctrl/sal_compression.c
    ${LIBQAT_DIR}/common/ctrl/sal_create_services.c
    ${LIBQAT_DIR}/common/ctrl/sal_ctrl_services.c
    ${LIBQAT_DIR}/common/ctrl/sal_list.c
    ${LIBQAT_DIR}/common/ctrl/sal_crypto.c
    ${LIBQAT_DIR}/common/ctrl/sal_dc_chain.c
    ${LIBQAT_DIR}/common/ctrl/sal_instances.c
    ${LIBQAT_DIR}/common/qat_comms/sal_qat_cmn_msg.c
    ${LIBQAT_DIR}/common/utils/lac_buffer_desc.c
    ${LIBQAT_DIR}/common/utils/lac_log_message.c
    ${LIBQAT_DIR}/common/utils/lac_mem.c
    ${LIBQAT_DIR}/common/utils/lac_mem_pools.c
    ${LIBQAT_DIR}/common/utils/lac_sw_responses.c
    ${LIBQAT_DIR}/common/utils/lac_sync.c
    ${LIBQAT_DIR}/common/utils/sal_service_state.c
    ${LIBQAT_DIR}/common/utils/sal_statistics.c
    ${LIBQAT_DIR}/common/utils/sal_misc_error_stats.c
    ${LIBQAT_DIR}/common/utils/sal_string_parse.c
    ${LIBQAT_DIR}/common/utils/sal_user_process.c
    ${LIBQAT_DIR}/common/utils/sal_versions.c
    ${LIBQAT_DIR}/common/device/sal_dev_info.c
    ${LIBQAT_DIR}/user/sal_user.c
    ${LIBQAT_DIR}/user/sal_user_dyn_instance.c
    ${LIBQAT_DIR}/qat_direct/common/adf_process_proxy.c
    ${LIBQAT_DIR}/qat_direct/common/adf_user_cfg.c
    ${LIBQAT_DIR}/qat_direct/common/adf_user_device.c
    ${LIBQAT_DIR}/qat_direct/common/adf_user_dyn.c
    ${LIBQAT_DIR}/qat_direct/common/adf_user_ETring_mgr_dp.c
    ${LIBQAT_DIR}/qat_direct/common/adf_user_init.c
    ${LIBQAT_DIR}/qat_direct/common/adf_user_ring.c
    ${LIBQAT_DIR}/qat_direct/common/adf_user_transport_ctrl.c
    ${LIBQAT_DIR}/qat_direct/vfio/adf_vfio_cfg.c
    ${LIBQAT_DIR}/qat_direct/vfio/adf_vfio_ring.c
    ${LIBQAT_DIR}/qat_direct/vfio/adf_vfio_user_bundles.c
    ${LIBQAT_DIR}/qat_direct/vfio/adf_vfio_user_proxy.c
    ${LIBQAT_DIR}/common/compression/dc_crc_base.c)
add_library(_qatlib ${LIBQAT_sources})

target_include_directories(_qatlib PRIVATE
    ${CMAKE_SYSROOT}/usr/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/include
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/libusdm_drv
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/include
    ${LIBOSAL_DIR}/linux/user_space/include
    ${LIBQAT_ROOT_DIR}/quickassist/include
    ${LIBQAT_ROOT_DIR}/quickassist/include/lac
    ${LIBQAT_ROOT_DIR}/quickassist/include/dc
    ${LIBQAT_ROOT_DIR}/quickassist/qat/drivers/crypto/qat/qat_common
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/common/compression/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/common/crypto/sym/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/common/crypto/asym/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/firmware/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/common/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/qat_direct/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/qat_direct/common/include
    ${LIBQAT_ROOT_DIR}/quickassist/lookaside/access_layer/src/qat_direct/vfio
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/src/linux/user_space
    ${LIBQAT_ROOT_DIR}/quickassist/utilities/osal/src/linux/user_space/include
    ${ClickHouse_SOURCE_DIR}/contrib/sysroot/linux-x86_64-musl/include)

target_link_libraries(_qatlib PRIVATE _qatmgr _osal OpenSSL::SSL ch_contrib::isal)
target_compile_definitions(_qatlib PRIVATE -DUSER_SPACE -DLAC_BYTE_ORDER=__LITTLE_ENDIAN -DOSAL_ENSURE_ON)
target_link_options(_qatlib PRIVATE -pie -z relro -z now -z noexecstack)
target_compile_options(_qatlib PRIVATE -march=native)
add_library (ch_contrib::qatlib ALIAS _qatlib)

# _usdm

set(LIBUSDM_DIR "${ClickHouse_SOURCE_DIR}/contrib/qatlib/quickassist/utilities/libusdm_drv")
set(LIBUSDM_sources
    ${LIBUSDM_DIR}/user_space/vfio/qae_mem_utils_vfio.c
    ${LIBUSDM_DIR}/user_space/qae_mem_utils_common.c
    ${LIBUSDM_DIR}/user_space/vfio/qae_mem_hugepage_utils_vfio.c)

add_library(_usdm ${LIBUSDM_sources})

target_include_directories(_usdm PRIVATE
    ${ClickHouse_SOURCE_DIR}/contrib/sysroot/linux-x86_64-musl/include
    ${LIBUSDM_DIR}
    ${LIBUSDM_DIR}/include
    ${LIBUSDM_DIR}/user_space)

add_library (ch_contrib::usdm ALIAS _usdm)
contrib/qatlib-cmake/include/mqueue.h (new file): 14 lines
@@ -0,0 +1,14 @@
/* This is a workaround for a build conflict:
   1. __GLIBC_PREREQ (referenced in OsalServices.c) is only defined in './sysroot/linux-x86_64/include/features.h'.
   2. mqueue.h only exists under './sysroot/linux-x86_64-musl/'.
   This causes the target_include_directories for _osal to conflict between './sysroot/linux-x86_64/include' and './sysroot/linux-x86_64-musl/',
   hence this mqueue.h is created separately under ./qatlib-cmake/include as an alternative. */

/* Major and minor version number of the GNU C library package. Use
   these macros to test for features in specific releases. */
#define __GLIBC__ 2
#define __GLIBC_MINOR__ 27

#define __GLIBC_PREREQ(maj, min) \
    ((__GLIBC__ << 16) + __GLIBC_MINOR__ >= ((maj) << 16) + (min))
contrib/sqids-cpp (vendored submodule): 2 changes
@@ -1 +1 @@
Subproject commit 3756e537d4d48cc0dd4176801fe19f99601439b0
Subproject commit a471f53672e98d49223f598528a533b07b085c61
@@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="23.12.1.1368"
ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-keeper"
ARG DIRECT_DOWNLOAD_URLS=""

@@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="23.12.1.1368"
ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
ARG DIRECT_DOWNLOAD_URLS=""

@@ -30,7 +30,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="23.12.1.1368"
ARG VERSION="23.12.2.59"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

# set non-empty deb_location_url url to create a docker image
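The three hunks above bump the default image version from 23.12.1.1368 to 23.12.2.59. Assuming the usual `clickhouse/<package>:<version>` tag scheme, pulling the corresponding images would look roughly like this:

```bash
docker pull clickhouse/clickhouse-keeper:23.12.2.59
docker pull clickhouse/clickhouse-server:23.12.2.59
```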
@@ -41,6 +41,10 @@ readarray -t DISKS_PATHS < <(clickhouse extract-from-config --config-file "$CLIC
readarray -t DISKS_METADATA_PATHS < <(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key='storage_configuration.disks.*.metadata_path' || true)

CLICKHOUSE_USER="${CLICKHOUSE_USER:-default}"
CLICKHOUSE_PASSWORD_FILE="${CLICKHOUSE_PASSWORD_FILE:-}"
if [[ -n "${CLICKHOUSE_PASSWORD_FILE}" && -f "${CLICKHOUSE_PASSWORD_FILE}" ]]; then
    CLICKHOUSE_PASSWORD="$(cat "${CLICKHOUSE_PASSWORD_FILE}")"
fi
CLICKHOUSE_PASSWORD="${CLICKHOUSE_PASSWORD:-}"
CLICKHOUSE_DB="${CLICKHOUSE_DB:-}"
CLICKHOUSE_ACCESS_MANAGEMENT="${CLICKHOUSE_DEFAULT_ACCESS_MANAGEMENT:-0}"
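The new `CLICKHOUSE_PASSWORD_FILE` handling above lets the entrypoint read the default user's password from a file instead of an environment variable. A hypothetical way to use it with the server image (file path, password and tag are placeholders):

```bash
echo 'S3cret!' > ./clickhouse-password
docker run -d --name clickhouse \
  -v "$PWD/clickhouse-password:/run/secrets/clickhouse-password:ro" \
  -e CLICKHOUSE_PASSWORD_FILE=/run/secrets/clickhouse-password \
  clickhouse/clickhouse-server:23.12.2.59
```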
@@ -44,6 +44,9 @@ if [[ -n "$USE_S3_STORAGE_FOR_MERGE_TREE" ]] && [[ "$USE_S3_STORAGE_FOR_MERGE_TR
    # It is not needed, we will explicitly create tables on s3.
    # We do not have stateful tests with s3 storage run in the public repository, but this is needed for another repository.
    rm /etc/clickhouse-server/config.d/s3_storage_policy_for_merge_tree_by_default.xml

    rm /etc/clickhouse-server/config.d/storage_metadata_with_full_object_key.xml
    rm /etc/clickhouse-server/config.d/s3_storage_policy_with_template_object_key.xml
fi

function start()
@@ -236,6 +236,10 @@ function check_logs_for_critical_errors()
        && echo -e "S3_ERROR No such key thrown (see clickhouse-server.log or no_such_key_errors.txt)$FAIL$(trim_server_logs no_such_key_errors.txt)" >> /test_output/test_results.tsv \
        || echo -e "No lost s3 keys$OK" >> /test_output/test_results.tsv

    rg -Fa "it is lost forever" /var/log/clickhouse-server/clickhouse-server*.log | grep 'SharedMergeTreePartCheckThread' > /dev/null \
        && echo -e "Lost forever for SharedMergeTree$FAIL" >> /test_output/test_results.tsv \
        || echo -e "No SharedMergeTree lost forever in clickhouse-server.log$OK" >> /test_output/test_results.tsv

    # Remove file no_such_key_errors.txt if it's empty
    [ -s /test_output/no_such_key_errors.txt ] || rm /test_output/no_such_key_errors.txt
@@ -193,6 +193,7 @@ stop

# Let's enable S3 storage by default
export USE_S3_STORAGE_FOR_MERGE_TREE=1
export RANDOMIZE_OBJECT_KEY_TYPE=1
export ZOOKEEPER_FAULT_INJECTION=1
configure
docs/changelogs/v23.10.6.60-stable.md (new file): 51 lines
@ -0,0 +1,51 @@
|
||||
---
|
||||
sidebar_position: 1
|
||||
sidebar_label: 2024
|
||||
---
|
||||
|
||||
# 2024 Changelog
|
||||
|
||||
### ClickHouse release v23.10.6.60-stable (68907bbe643) FIXME as compared to v23.10.5.20-stable (e84001e5c61)
|
||||
|
||||
#### Improvement
|
||||
* Backported in [#58493](https://github.com/ClickHouse/ClickHouse/issues/58493): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
|
||||
* Backported in [#57659](https://github.com/ClickHouse/ClickHouse/issues/57659): Handle sigabrt case when getting PostgreSQl table structure with empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).
|
||||
|
||||
#### Build/Testing/Packaging Improvement
|
||||
* Backported in [#57586](https://github.com/ClickHouse/ClickHouse/issues/57586): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
|
||||
#### Bug Fix (user-visible misbehavior in an official stable release)
|
||||
|
||||
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Background merges correctly use temporary data storage in the cache [#57275](https://github.com/ClickHouse/ClickHouse/pull/57275) ([vdimir](https://github.com/vdimir)).
|
||||
* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
* Fix function jsonMergePatch for partially const columns [#57379](https://github.com/ClickHouse/ClickHouse/pull/57379) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
|
||||
* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
|
||||
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
|
||||
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
|
||||
#### NO CL CATEGORY
|
||||
|
||||
* Backported in [#57916](https://github.com/ClickHouse/ClickHouse/issues/57916):. [#57909](https://github.com/ClickHouse/ClickHouse/pull/57909) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
|
||||
#### NOT FOR CHANGELOG / INSIGNIFICANT
|
||||
|
||||
* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
|
docs/changelogs/v23.11.4.24-stable.md (new file): 26 lines
@ -0,0 +1,26 @@
|
||||
---
|
||||
sidebar_position: 1
|
||||
sidebar_label: 2024
|
||||
---
|
||||
|
||||
# 2024 Changelog
|
||||
|
||||
### ClickHouse release v23.11.4.24-stable (e79d840d7fe) FIXME as compared to v23.11.3.23-stable (a14ab450b0e)
|
||||
|
||||
#### Bug Fix (user-visible misbehavior in an official stable release)
|
||||
|
||||
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
|
||||
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
|
||||
#### NOT FOR CHANGELOG / INSIGNIFICANT
|
||||
|
||||
* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
|
docs/changelogs/v23.12.2.59-stable.md (new file): 32 lines
@ -0,0 +1,32 @@
|
||||
---
|
||||
sidebar_position: 1
|
||||
sidebar_label: 2024
|
||||
---
|
||||
|
||||
# 2024 Changelog
|
||||
|
||||
### ClickHouse release v23.12.2.59-stable (17ab210e761) FIXME as compared to v23.12.1.1368-stable (a2faa65b080)
|
||||
|
||||
#### Backward Incompatible Change
|
||||
* Backported in [#58389](https://github.com/ClickHouse/ClickHouse/issues/58389): The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (unless `allow_experimental_replacing_merge_with_cleanup` is enabled). [#58316](https://github.com/ClickHouse/ClickHouse/pull/58316) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
|
||||
#### Bug Fix (user-visible misbehavior in an official stable release)
|
||||
|
||||
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix lost blobs after dropping a replica with broken detached parts [#58333](https://github.com/ClickHouse/ClickHouse/pull/58333) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Fix segfault when graphite table does not have agg function [#58453](https://github.com/ClickHouse/ClickHouse/pull/58453) ([Duc Canh Le](https://github.com/canhld94)).
|
||||
* MergeTreePrefetchedReadPool disable for LIMIT only queries [#58505](https://github.com/ClickHouse/ClickHouse/pull/58505) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
|
||||
#### NO CL ENTRY
|
||||
|
||||
* NO CL ENTRY: 'Revert "Refreshable materialized views (takeover)"'. [#58296](https://github.com/ClickHouse/ClickHouse/pull/58296) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
|
||||
#### NOT FOR CHANGELOG / INSIGNIFICANT
|
||||
|
||||
* Fix an error in the release script - it didn't allow to make 23.12. [#58288](https://github.com/ClickHouse/ClickHouse/pull/58288) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Update version_date.tsv and changelogs after v23.12.1.1368-stable [#58290](https://github.com/ClickHouse/ClickHouse/pull/58290) ([robot-clickhouse](https://github.com/robot-clickhouse)).
|
||||
* Fix test_storage_s3_queue/test.py::test_drop_table [#58293](https://github.com/ClickHouse/ClickHouse/pull/58293) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Handle another case for preprocessing in Keeper [#58308](https://github.com/ClickHouse/ClickHouse/pull/58308) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix test_user_valid_until [#58409](https://github.com/ClickHouse/ClickHouse/pull/58409) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
|
docs/changelogs/v23.3.19.32-lts.md (new file): 36 lines
@ -0,0 +1,36 @@
|
||||
---
|
||||
sidebar_position: 1
|
||||
sidebar_label: 2024
|
||||
---
|
||||
|
||||
# 2024 Changelog
|
||||
|
||||
### ClickHouse release v23.3.19.32-lts (c4d4ca8ec02) FIXME as compared to v23.3.18.15-lts (7228475d77a)
|
||||
|
||||
#### Backward Incompatible Change
|
||||
* Backported in [#57840](https://github.com/ClickHouse/ClickHouse/issues/57840): Remove function `arrayFold` because it has a bug. This closes [#57816](https://github.com/ClickHouse/ClickHouse/issues/57816). This closes [#57458](https://github.com/ClickHouse/ClickHouse/issues/57458). [#57836](https://github.com/ClickHouse/ClickHouse/pull/57836) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
|
||||
#### Improvement
|
||||
* Backported in [#58489](https://github.com/ClickHouse/ClickHouse/issues/58489): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
|
||||
* Backported in [#57653](https://github.com/ClickHouse/ClickHouse/issues/57653): Handle sigabrt case when getting PostgreSQl table structure with empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).
|
||||
|
||||
#### Build/Testing/Packaging Improvement
|
||||
* Backported in [#57580](https://github.com/ClickHouse/ClickHouse/issues/57580): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
|
||||
#### Bug Fix (user-visible misbehavior in an official stable release)
|
||||
|
||||
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
|
||||
* MergeTree mutations reuse source part index granularity [#57352](https://github.com/ClickHouse/ClickHouse/pull/57352) ([Maksim Kita](https://github.com/kitaisreal)).
|
||||
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
|
||||
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
|
||||
#### NOT FOR CHANGELOG / INSIGNIFICANT
|
||||
|
||||
* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
|
||||
|
docs/changelogs/v23.8.9.54-lts.md (new file): 47 lines
@ -0,0 +1,47 @@
|
||||
---
|
||||
sidebar_position: 1
|
||||
sidebar_label: 2024
|
||||
---
|
||||
|
||||
# 2024 Changelog
|
||||
|
||||
### ClickHouse release v23.8.9.54-lts (192a1d231fa) FIXME as compared to v23.8.8.20-lts (5e012a03bf2)
|
||||
|
||||
#### Improvement
|
||||
* Backported in [#57668](https://github.com/ClickHouse/ClickHouse/issues/57668): Output valid JSON/XML on exception during HTTP query execution. Add setting `http_write_exception_in_output_format` to enable/disable this behaviour (enabled by default). [#52853](https://github.com/ClickHouse/ClickHouse/pull/52853) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Backported in [#58491](https://github.com/ClickHouse/ClickHouse/issues/58491): Fix transfer query to MySQL compatible query. Fixes [#57253](https://github.com/ClickHouse/ClickHouse/issues/57253). Fixes [#52654](https://github.com/ClickHouse/ClickHouse/issues/52654). Fixes [#56729](https://github.com/ClickHouse/ClickHouse/issues/56729). [#56456](https://github.com/ClickHouse/ClickHouse/pull/56456) ([flynn](https://github.com/ucasfl)).
|
||||
* Backported in [#57238](https://github.com/ClickHouse/ClickHouse/issues/57238): Fetching a part waits when that part is fully committed on remote replica. It is better not send part in PreActive state. In case of zero copy this is mandatory restriction. [#56808](https://github.com/ClickHouse/ClickHouse/pull/56808) ([Sema Checherinda](https://github.com/CheSema)).
|
||||
* Backported in [#57655](https://github.com/ClickHouse/ClickHouse/issues/57655): Handle sigabrt case when getting PostgreSQl table structure with empty array. [#57618](https://github.com/ClickHouse/ClickHouse/pull/57618) ([Mike Kot (Михаил Кот)](https://github.com/myrrc)).
|
||||
|
||||
#### Build/Testing/Packaging Improvement
|
||||
* Backported in [#57582](https://github.com/ClickHouse/ClickHouse/issues/57582): Fix issue caught in https://github.com/docker-library/official-images/pull/15846. [#57571](https://github.com/ClickHouse/ClickHouse/pull/57571) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
|
||||
#### Bug Fix (user-visible misbehavior in an official stable release)
|
||||
|
||||
* Flatten only true Nested type if flatten_nested=1, not all Array(Tuple) [#56132](https://github.com/ClickHouse/ClickHouse/pull/56132) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* Fix ALTER COLUMN with ALIAS [#56493](https://github.com/ClickHouse/ClickHouse/pull/56493) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Prevent incompatible ALTER of projection columns [#56948](https://github.com/ClickHouse/ClickHouse/pull/56948) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Fix segfault after ALTER UPDATE with Nullable MATERIALIZED column [#57147](https://github.com/ClickHouse/ClickHouse/pull/57147) ([Nikolay Degterinsky](https://github.com/evillique)).
|
||||
* Fix incorrect JOIN plan optimization with partially materialized normal projection [#57196](https://github.com/ClickHouse/ClickHouse/pull/57196) ([Amos Bird](https://github.com/amosbird)).
|
||||
* Fix `ReadonlyReplica` metric for all cases [#57267](https://github.com/ClickHouse/ClickHouse/pull/57267) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix working with read buffers in StreamingFormatExecutor [#57438](https://github.com/ClickHouse/ClickHouse/pull/57438) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
* bugfix: correctly parse SYSTEM STOP LISTEN TCP SECURE [#57483](https://github.com/ClickHouse/ClickHouse/pull/57483) ([joelynch](https://github.com/joelynch)).
|
||||
* Ignore ON CLUSTER clause in grant/revoke queries for management of replicated access entities. [#57538](https://github.com/ClickHouse/ClickHouse/pull/57538) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
|
||||
* Disable system.kafka_consumers by default (due to possible live memory leak) [#57822](https://github.com/ClickHouse/ClickHouse/pull/57822) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Fix invalid memory access in BLAKE3 (Rust) [#57876](https://github.com/ClickHouse/ClickHouse/pull/57876) ([Raúl Marín](https://github.com/Algunenano)).
|
||||
* Normalize function names in CREATE INDEX [#57906](https://github.com/ClickHouse/ClickHouse/pull/57906) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Fix invalid preprocessing on Keeper [#58069](https://github.com/ClickHouse/ClickHouse/pull/58069) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* Fix Integer overflow in Poco::UTF32Encoding [#58073](https://github.com/ClickHouse/ClickHouse/pull/58073) ([Andrey Fedotov](https://github.com/anfedotoff)).
|
||||
* Remove parallel parsing for JSONCompactEachRow [#58181](https://github.com/ClickHouse/ClickHouse/pull/58181) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Fix parallel parsing for JSONCompactEachRow [#58250](https://github.com/ClickHouse/ClickHouse/pull/58250) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
|
||||
#### NO CL ENTRY
|
||||
|
||||
* NO CL ENTRY: 'Update PeekableWriteBuffer.cpp'. [#57701](https://github.com/ClickHouse/ClickHouse/pull/57701) ([Kruglov Pavel](https://github.com/Avogar)).
|
||||
|
||||
#### NOT FOR CHANGELOG / INSIGNIFICANT
|
||||
|
||||
* Pin alpine version of integration tests helper container [#57669](https://github.com/ClickHouse/ClickHouse/pull/57669) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Remove heavy rust stable toolchain [#57905](https://github.com/ClickHouse/ClickHouse/pull/57905) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).
|
||||
* Fix docker image for integration tests (fixes CI) [#57952](https://github.com/ClickHouse/ClickHouse/pull/57952) ([Azat Khuzhin](https://github.com/azat)).
|
||||
|
@ -11,7 +11,7 @@ This engine provides integration with [Amazon S3](https://aws.amazon.com/s3/) ec
|
||||
|
||||
``` sql
|
||||
CREATE TABLE s3_queue_engine_table (name String, value UInt32)
|
||||
ENGINE = S3Queue(path [, NOSIGN | aws_access_key_id, aws_secret_access_key,] format, [compression])
|
||||
ENGINE = S3Queue(path, [NOSIGN, | aws_access_key_id, aws_secret_access_key,] format, [compression])
|
||||
[SETTINGS]
|
||||
[mode = 'unordered',]
|
||||
[after_processing = 'keep',]
|
||||
|
@@ -504,24 +504,25 @@ Indexes of type `set` can be utilized by all functions. The other index types ar
| Function (operator) / Index | primary key | minmax | ngrambf_v1 | tokenbf_v1 | bloom_filter | inverted |
|------------------------------------------------------------------------------------------------------------|-------------|--------|------------|------------|--------------|----------|
| [equals (=, ==)](/docs/en/sql-reference/functions/comparison-functions.md/#equals) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notEquals(!=, <>)](/docs/en/sql-reference/functions/comparison-functions.md/#notequals) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| [like](/docs/en/sql-reference/functions/string-search-functions.md/#function-like) | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| [notLike](/docs/en/sql-reference/functions/string-search-functions.md/#function-notlike) | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| [equals (=, ==)](/docs/en/sql-reference/functions/comparison-functions.md/#equals) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notEquals(!=, <>)](/docs/en/sql-reference/functions/comparison-functions.md/#notequals) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| [like](/docs/en/sql-reference/functions/string-search-functions.md/#like) | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| [notLike](/docs/en/sql-reference/functions/string-search-functions.md/#notlike) | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| [match](/docs/en/sql-reference/functions/string-search-functions.md/#match) | ✗ | ✗ | ✔ | ✔ | ✗ | ✗ |
| [startsWith](/docs/en/sql-reference/functions/string-functions.md/#startswith) | ✔ | ✔ | ✔ | ✔ | ✗ | ✔ |
| [endsWith](/docs/en/sql-reference/functions/string-functions.md/#endswith) | ✗ | ✗ | ✔ | ✔ | ✗ | ✔ |
| [multiSearchAny](/docs/en/sql-reference/functions/string-search-functions.md/#function-multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ | ✔ |
| [in](/docs/en/sql-reference/functions/in-functions#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| [notIn](/docs/en/sql-reference/functions/in-functions#in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| [less (<)](/docs/en/sql-reference/functions/comparison-functions.md/#less) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| [greater (>)](/docs/en/sql-reference/functions/comparison-functions.md/#greater) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| [lessOrEquals (<=)](/docs/en/sql-reference/functions/comparison-functions.md/#lessorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| [greaterOrEquals (>=)](/docs/en/sql-reference/functions/comparison-functions.md/#greaterorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| [empty](/docs/en/sql-reference/functions/array-functions#function-empty) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| [notEmpty](/docs/en/sql-reference/functions/array-functions#function-notempty) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
| [has](/docs/en/sql-reference/functions/array-functions#function-has) | ✗ | ✗ | ✔ | ✔ | ✔ | ✔ |
| [hasAny](/docs/en/sql-reference/functions/array-functions#function-hasAny) | ✗ | ✗ | ✔ | ✔ | ✔ | ✗ |
| [hasAll](/docs/en/sql-reference/functions/array-functions#function-hasAll) | ✗ | ✗ | ✗ | ✗ | ✔ | ✗ |
| [multiSearchAny](/docs/en/sql-reference/functions/string-search-functions.md/#multisearchany) | ✗ | ✗ | ✔ | ✗ | ✗ | ✔ |
|
||||
| [in](/docs/en/sql-reference/functions/in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
|
||||
| [notIn](/docs/en/sql-reference/functions/in-functions) | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
|
||||
| [less (<)](/docs/en/sql-reference/functions/comparison-functions.md/#less) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
|
||||
| [greater (>)](/docs/en/sql-reference/functions/comparison-functions.md/#greater) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
|
||||
| [lessOrEquals (<=)](/docs/en/sql-reference/functions/comparison-functions.md/#lessorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
|
||||
| [greaterOrEquals (>=)](/docs/en/sql-reference/functions/comparison-functions.md/#greaterorequals) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
|
||||
| [empty](/docs/en/sql-reference/functions/array-functions/#empty) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
|
||||
| [notEmpty](/docs/en/sql-reference/functions/array-functions/#notempty) | ✔ | ✔ | ✗ | ✗ | ✗ | ✗ |
|
||||
| [has](/docs/en/sql-reference/functions/array-functions/#has) | ✗ | ✗ | ✔ | ✔ | ✔ | ✔ |
|
||||
| [hasAny](/docs/en/sql-reference/functions/array-functions/#hasany) | ✗ | ✗ | ✔ | ✔ | ✔ | ✗ |
|
||||
| [hasAll](/docs/en/sql-reference/functions/array-functions/#hasall) | ✗ | ✗ | ✗ | ✗ | ✔ | ✗ |
|
||||
| hasToken | ✗ | ✗ | ✗ | ✔ | ✗ | ✔ |
|
||||
| hasTokenOrNull | ✗ | ✗ | ✗ | ✔ | ✗ | ✔ |
|
||||
| hasTokenCaseInsensitive (*) | ✗ | ✗ | ✗ | ✔ | ✗ | ✗ |
|
||||
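To illustrate how the table above is typically applied, the sketch below uses a hypothetical table (not from the original document) with a `tokenbf_v1` skip index that the `hasToken` lookup listed above can use.

```sql
CREATE TABLE access_log
(
    ts DateTime,
    message String,
    -- Bloom filter over tokens: 10240 bytes, 3 hash functions, seed 0
    INDEX message_tokens message TYPE tokenbf_v1(10240, 3, 0) GRANULARITY 4
)
ENGINE = MergeTree
ORDER BY ts;

-- According to the table above, this predicate can be served by the tokenbf_v1 index.
SELECT count() FROM access_log WHERE hasToken(message, 'timeout');
```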
@ -1143,6 +1144,8 @@ Optional parameters:
|
||||
- `s3_max_get_burst` — Max number of requests that can be issued simultaneously before hitting request per second limit. By default (`0` value) equals to `s3_max_get_rps`.
|
||||
- `read_resource` — Resource name to be used for [scheduling](/docs/en/operations/workload-scheduling.md) of read requests to this disk. Default value is an empty string (IO scheduling is not enabled for this disk).
|
||||
- `write_resource` — Resource name to be used for [scheduling](/docs/en/operations/workload-scheduling.md) of write requests to this disk. Default value is an empty string (IO scheduling is not enabled for this disk).
|
||||
- `key_template` — Defines the format with which the object keys are generated. By default, ClickHouse takes the `root path` from the `endpoint` option and adds a randomly generated suffix. That suffix is a directory with 3 random symbols and a file name with 29 random symbols. With this option you have full control over how the object keys are generated. Some usage scenarios require having random symbols in the prefix or in the middle of the object key, for example: `[a-z]{3}-prefix-random/constant-part/random-middle-[a-z]{3}/random-suffix-[a-z]{29}`. The value is parsed with [`re2`](https://github.com/google/re2/wiki/Syntax). Only a subset of the syntax is supported; check whether your preferred format is supported before using this option. The disk is not initialized if ClickHouse is unable to generate a key from the value of `key_template`. It requires the feature flag [storage_metadata_write_full_object_key](/docs/en/operations/settings/settings#storage_metadata_write_full_object_key) to be enabled. It forbids declaring the `root path` in the `endpoint` option. It requires the option `key_compatibility_prefix` to be defined.
|
||||
- `key_compatibility_prefix` — This option is required when the option `key_template` is in use. In order to be able to read object keys that were stored in metadata files with a metadata version lower than `VERSION_FULL_OBJECT_KEY`, the previous `root path` from the `endpoint` option should be set here.
|
||||
|
||||
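To make the scheduling options above concrete, here is a hedged sketch using an inline disk definition; the endpoint, credentials and resource names are placeholders, and whether every listed option is accepted in the inline `disk(...)` form is an assumption of this sketch.

```sql
CREATE TABLE t_on_s3 (id UInt64, payload String)
ENGINE = MergeTree
ORDER BY id
SETTINGS disk = disk(
    type = 's3',
    endpoint = 'https://s3.example.com/bucket/clickhouse/',
    access_key_id = '...',
    secret_access_key = '...',
    read_resource = 'network_read',    -- placeholder resource name
    write_resource = 'network_write'); -- placeholder resource name
```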
### Configuring the cache
|
||||
|
||||
|
@ -1262,6 +1262,7 @@ SELECT * FROM json_each_row_nested
|
||||
|
||||
- [input_format_import_nested_json](/docs/en/operations/settings/settings-formats.md/#input_format_import_nested_json) - map nested JSON data to nested tables (it works for JSONEachRow format). Default value - `false`.
|
||||
- [input_format_json_read_bools_as_numbers](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_bools_as_numbers) - allow to parse bools as numbers in JSON input formats. Default value - `true`.
|
||||
- [input_format_json_read_bools_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_bools_as_strings) - allow to parse bools as strings in JSON input formats. Default value - `true`.
|
||||
- [input_format_json_read_numbers_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_numbers_as_strings) - allow to parse numbers as strings in JSON input formats. Default value - `true`.
|
||||
- [input_format_json_read_arrays_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_arrays_as_strings) - allow to parse JSON arrays as strings in JSON input formats. Default value - `true`.
|
||||
- [input_format_json_read_objects_as_strings](/docs/en/operations/settings/settings-formats.md/#input_format_json_read_objects_as_strings) - allow to parse JSON objects as strings in JSON input formats. Default value - `true`.
|
||||
|
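By analogy with the examples further below, a minimal sketch of the numbers-as-strings setting (the inferred type is an assumption of this sketch, not taken from the original document):

```sql
SET input_format_json_read_numbers_as_strings = 1;
DESC format(JSONEachRow, $$
{"value" : 42}
{"value" : "Hello, World"}
$$);
-- With the setting enabled, the mixed column is expected to be inferred as
-- Nullable(String) instead of causing a type-inference error.
```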
@ -614,6 +614,26 @@ DESC format(JSONEachRow, $$
|
||||
└───────┴─────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
|
||||
```
|
||||
|
||||
##### input_format_json_read_bools_as_strings
|
||||
|
||||
Enabling this setting allows reading Bool values as strings.
|
||||
|
||||
This setting is enabled by default.
|
||||
|
||||
**Example:**
|
||||
|
||||
```sql
|
||||
SET input_format_json_read_bools_as_strings = 1;
|
||||
DESC format(JSONEachRow, $$
|
||||
{"value" : true}
|
||||
{"value" : "Hello, World"}
|
||||
$$)
|
||||
```
|
||||
```response
|
||||
┌─name──┬─type─────────────┬─default_type─┬─default_expression─┬─comment─┬─codec_expression─┬─ttl_expression─┐
|
||||
│ value │ Nullable(String) │ │ │ │ │ │
|
||||
└───────┴──────────────────┴──────────────┴────────────────────┴─────────┴──────────────────┴────────────────┘
|
||||
```
|
||||
##### input_format_json_read_arrays_as_strings
|
||||
|
||||
Enabling this setting allows reading JSON array values as strings.
|
||||
|
176
docs/en/operations/settings/mysql-binlog-client.md
Normal file
@ -0,0 +1,176 @@
|
||||
# The MySQL Binlog Client
|
||||
|
||||
The MySQL Binlog Client provides a mechanism in ClickHouse to share the binlog from a MySQL instance among multiple [MaterializedMySQL](../../engines/database-engines/materialized-mysql.md) databases. This avoids consuming unnecessary bandwidth and CPU when replicating more than one schema/database.
|
||||
|
||||
The implementation is resilient against crashes and disk issues. The executed GTID sets of the binlog itself and of the consuming databases are persisted only after the data they describe has been safely persisted as well. The implementation also tolerates re-doing aborted operations (at-least-once delivery).
|
||||
|
||||
# Settings
|
||||
|
||||
## use_binlog_client
|
||||
|
||||
Forces reuse of an existing MySQL binlog connection, or creates a new one if none exists. The connection is identified by `user:pass@host:port`.
|
||||
|
||||
Default value: 0
|
||||
|
||||
**Example**
|
||||
|
||||
```sql
|
||||
-- create MaterializedMySQL databases that read the events from the binlog client
|
||||
CREATE DATABASE db1 ENGINE = MaterializedMySQL('host:port', 'db1', 'user', 'password') SETTINGS use_binlog_client=1
|
||||
CREATE DATABASE db2 ENGINE = MaterializedMySQL('host:port', 'db2', 'user', 'password') SETTINGS use_binlog_client=1
|
||||
CREATE DATABASE db3 ENGINE = MaterializedMySQL('host:port', 'db3', 'user2', 'password2') SETTINGS use_binlog_client=1
|
||||
```
|
||||
|
||||
Databases `db1` and `db2` will use the same binlog connection, since they use the same `user:pass@host:port`. Database `db3` will use a separate binlog connection.
|
||||
|
||||
## max_bytes_in_binlog_queue
|
||||
|
||||
Defines the limit, in bytes, of the binlog events queue. If the number of bytes in the queue exceeds this limit, reading new events from MySQL stops until space is freed. This acts as a memory limit: a very high value could consume all available memory, while a very low value could make the databases wait for new events.
|
||||
|
||||
Default value: 67108864
|
||||
|
||||
**Example**
|
||||
|
||||
```sql
|
||||
CREATE DATABASE db1 ENGINE = MaterializedMySQL('host:port', 'db1', 'user', 'password') SETTINGS use_binlog_client=1, max_bytes_in_binlog_queue=33554432
|
||||
CREATE DATABASE db2 ENGINE = MaterializedMySQL('host:port', 'db2', 'user', 'password') SETTINGS use_binlog_client=1
|
||||
```
|
||||
|
||||
If database `db1` is unable to consume binlog events fast enough and the size of the events queue exceeds `33554432` bytes, reading of new events from MySQL is postponed until `db1` consumes the events and releases some space.
|
||||
|
||||
NOTE: This also affects `db2`, which will wait for new events too, since they share the same connection.
|
||||
|
||||
## max_milliseconds_to_wait_in_binlog_queue
|
||||
|
||||
Defines the maximum number of milliseconds to wait when `max_bytes_in_binlog_queue` is exceeded. After that, the database is detached from the current binlog connection and a new one is established, so that other databases do not have to wait for this database.
|
||||
|
||||
Default value: 10000
|
||||
|
||||
**Example**
|
||||
|
||||
```sql
|
||||
CREATE DATABASE db1 ENGINE = MaterializedMySQL('host:port', 'db1', 'user', 'password') SETTINGS use_binlog_client=1, max_bytes_in_binlog_queue=33554432, max_milliseconds_to_wait_in_binlog_queue=1000
|
||||
CREATE DATABASE db2 ENGINE = MaterializedMySQL('host:port', 'db2', 'user', 'password') SETTINGS use_binlog_client=1
|
||||
```
|
||||
|
||||
If the event queue of database `db1` is full, the binlog connection waits for `1000` ms, and if the database is still unable to consume the events, it is detached from the shared connection and a new one is created for it.
|
||||
|
||||
NOTE: If database `db1` has been detached from the shared connection and a new one has been created, then once the binlog connections for `db1` and `db2` reach the same position they are merged into one, and `db1` and `db2` use the same connection again.
|
||||
|
||||
## max_bytes_in_binlog_dispatcher_buffer
|
||||
|
||||
Defines the maximum number of bytes in the binlog dispatcher's buffer before it is flushed to the attached binlogs. Events from the MySQL binlog connection are buffered before being sent to the attached databases, which increases the event throughput from the binlog to the databases.
|
||||
|
||||
Default value: 1048576
|
||||
|
||||
## max_flush_milliseconds_in_binlog_dispatcher
|
||||
|
||||
Defines the maximum number of milliseconds the binlog dispatcher's buffer waits before it is flushed to the attached binlogs. If no events are received from the MySQL binlog connection for a while, the buffered events are eventually sent to the attached databases anyway.
|
||||
|
||||
Default value: 1000
|
||||
|
||||
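A hedged sketch of tuning both dispatcher settings on a single database, assuming they are set per database in the same way as the settings shown above; the values are illustrative only.

```sql
CREATE DATABASE db_tuned ENGINE = MaterializedMySQL('host:port', 'db', 'user', 'password')
SETTINGS use_binlog_client = 1,
         max_bytes_in_binlog_dispatcher_buffer = 4194304,
         max_flush_milliseconds_in_binlog_dispatcher = 500;
```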
# Design
|
||||
|
||||
## The Binlog Events Dispatcher
|
||||
|
||||
Currently each MaterializedMySQL database opens its own connection to MySQL to subscribe to binlog events. There is a need to have only one connection and _dispatch_ the binlog events to all databases that replicate from the same MySQL instance.
|
||||
|
||||
## Each MaterializedMySQL Database Has Its Own Event Queue
|
||||
|
||||
To prevent slowing down other instances there should be an _event queue_ per MaterializedMySQL database to handle the events independently of the speed of other instances. The dispatcher reads an event from the binlog, and sends it to every MaterializedMySQL database that needs it. Each database handles its events in separate threads.
|
||||
|
||||
## Catching up
|
||||
|
||||
If several databases have the same binlog position, they can use the same dispatcher. If a newly created database (or one that has been detached for some time) requests events that have been already processed, we need to create another communication _channel_ to the binlog. We do this by creating another temporary dispatcher for such databases. When the new dispatcher _catches up with_ the old one, the new/temporary dispatcher is not needed anymore and all databases getting events from this dispatcher can be moved to the old one.
|
||||
|
||||
## Memory Limit
|
||||
|
||||
There is a _memory limit_ to control event queue memory consumption per MySQL Client. If a database is not able to handle events fast enough, and the event queue is getting full, we have the following options:
|
||||
|
||||
1. The dispatcher is blocked until the slowest database frees up space for new events. All other databases are waiting for the slowest one. (Preferred)
|
||||
2. The dispatcher is _never_ blocked, but suspends incremental sync for the slow database and continues dispatching events to the remaining databases.
|
||||
|
||||
## Performance
|
||||
|
||||
A lot of CPU can be saved by not processing every event in every database. The binlog contains events for all databases, so it is wasteful to distribute row events to a database that will not process them, especially if there are many databases. This requires some sort of per-database binlog filtering and buffering.
|
||||
|
||||
Currently, all events are sent to all MaterializedMySQL databases, but parsing an event, which consumes CPU, is left to each database.
|
||||
|
||||
# Detailed Design
|
||||
|
||||
1. If a client (e.g. a database) wants to read a stream of events from the MySQL binlog, it creates a connection to the remote binlog using host/user/password and _executed GTID set_ parameters.
2. If another client wants to read events from the binlog but for a different _executed GTID set_, it is **not** possible to reuse the existing connection to MySQL, so another connection to the same remote binlog has to be created. (_This is how it is implemented today_.)
3. When these two connections reach the same binlog position, they read the same events. It is then logical to drop the duplicate connection and move all its users to the remaining one, so that one connection dispatches binlog events to several clients. Obviously, only connections to the same binlog should be merged.
|
||||
|
||||
## Classes
|
||||
|
||||
1. One connection can send (or dispatch) events to several clients; it might be called `BinlogEventsDispatcher`.
2. Several dispatchers are grouped by _user:password@host:port_ into a `BinlogClient`, since they point to the same binlog.
3. Clients should communicate only with the public API of `BinlogClient`. The result of using `BinlogClient` is an object that implements `IBinlog` to read events from. This implementation of `IBinlog` must be compatible with the old implementation `MySQLFlavor`: when replacing the old implementation with the new one, the behavior must not change.
|
||||
|
||||
## SQL
|
||||
|
||||
```sql
|
||||
-- create MaterializedMySQL databases that read the events from the binlog client
|
||||
CREATE DATABASE db1_client1 ENGINE = MaterializedMySQL('host:port', 'db', 'user', 'password') SETTINGS use_binlog_client=1, max_bytes_in_binlog_queue=1024;
|
||||
CREATE DATABASE db2_client1 ENGINE = MaterializedMySQL('host:port', 'db', 'user', 'password') SETTINGS use_binlog_client=1;
|
||||
CREATE DATABASE db3_client1 ENGINE = MaterializedMySQL('host:port', 'db2', 'user', 'password') SETTINGS use_binlog_client=1;
|
||||
CREATE DATABASE db4_client2 ENGINE = MaterializedMySQL('host2:port', 'db', 'user', 'password') SETTINGS use_binlog_client=1;
|
||||
CREATE DATABASE db5_client3 ENGINE = MaterializedMySQL('host:port', 'db', 'user1', 'password') SETTINGS use_binlog_client=1;
|
||||
CREATE DATABASE db6_old ENGINE = MaterializedMySQL('host:port', 'db', 'user1', 'password') SETTINGS use_binlog_client=0;
|
||||
```
|
||||
|
||||
Databases `db1_client1`, `db2_client1` and `db3_client1` share one instance of `BinlogClient` since they have the same parameters. `BinlogClient` will create 3 connections to the MySQL server and thus 3 instances of `BinlogEventsDispatcher`, but if these connections end up at the same binlog position, they should be merged into one connection: all clients are moved to one dispatcher and the others are closed. Databases `db4_client2` and `db5_client3` would use 2 different independent `BinlogClient` instances. Database `db6_old` will use the old implementation. NOTE: By default `use_binlog_client` is disabled. Setting `max_bytes_in_binlog_queue` defines the max allowed bytes in the binlog queue. By default, it is `1073741824` bytes. If the number of bytes exceeds this limit, dispatching is stopped until space is freed for new events.
|
||||
|
||||
## Binlog Table Structure
|
||||
|
||||
To see the status of all `BinlogClient` instances there is the `system.mysql_binlogs` system table. It shows the list of all created and _alive_ `IBinlog` instances with information about their `BinlogEventsDispatcher` and `BinlogClient`.
|
||||
|
||||
Example:
|
||||
|
||||
```
|
||||
SELECT * FROM system.mysql_binlogs FORMAT Vertical
|
||||
Row 1:
|
||||
──────
|
||||
binlog_client_name: root@127.0.0.1:3306
|
||||
name: test_Clickhouse1
|
||||
mysql_binlog_name: binlog.001154
|
||||
mysql_binlog_pos: 7142294
|
||||
mysql_binlog_timestamp: 1660082447
|
||||
mysql_binlog_executed_gtid_set: a9d88f83-c14e-11ec-bb36-244bfedf7766:1-30523304
|
||||
dispatcher_name: Applier
|
||||
dispatcher_mysql_binlog_name: binlog.001154
|
||||
dispatcher_mysql_binlog_pos: 7142294
|
||||
dispatcher_mysql_binlog_timestamp: 1660082447
|
||||
dispatcher_mysql_binlog_executed_gtid_set: a9d88f83-c14e-11ec-bb36-244bfedf7766:1-30523304
|
||||
size: 0
|
||||
bytes: 0
|
||||
max_bytes: 0
|
||||
```
|
||||
|
||||
### Tests
|
||||
|
||||
Unit tests:
|
||||
|
||||
```
|
||||
$ ./unit_tests_dbms --gtest_filter=MySQLBinlog.*
|
||||
```
|
||||
|
||||
Integration tests:
|
||||
|
||||
```
|
||||
$ pytest -s -vv test_materialized_mysql_database/test.py::test_binlog_client
|
||||
```
|
||||
|
||||
Dump events from a file:
|
||||
|
||||
```
|
||||
$ ./utils/check-mysql-binlog/check-mysql-binlog --binlog binlog.001392
|
||||
```
|
||||
|
||||
Dump events from a server:
|
||||
|
||||
```
|
||||
$ ./utils/check-mysql-binlog/check-mysql-binlog --host 127.0.0.1 --port 3306 --user root --password pass --gtid a9d88f83-c14e-11ec-bb36-244bfedf7766:1-30462856
|
||||
```
|
@ -377,6 +377,12 @@ Allow parsing bools as numbers in JSON input formats.
|
||||
|
||||
Enabled by default.
|
||||
|
||||
## input_format_json_read_bools_as_strings {#input_format_json_read_bools_as_strings}
|
||||
|
||||
Allow parsing bools as strings in JSON input formats.
|
||||
|
||||
Enabled by default.
|
||||
|
||||
## input_format_json_read_numbers_as_strings {#input_format_json_read_numbers_as_strings}
|
||||
|
||||
Allow parsing numbers as strings in JSON input formats.
|
||||
|
@ -3847,6 +3847,8 @@ Possible values:
|
||||
- `none` — Similar to `throw`, but the distributed DDL query returns no result set.
|
||||
- `null_status_on_timeout` — Returns `NULL` as execution status in some rows of result set instead of throwing `TIMEOUT_EXCEEDED` if query is not finished on the corresponding hosts.
|
||||
- `never_throw` — Do not throw `TIMEOUT_EXCEEDED` and do not rethrow exceptions if query has failed on some hosts.
|
||||
- `null_status_on_timeout_only_active` — similar to `null_status_on_timeout`, but doesn't wait for inactive replicas of the `Replicated` database
|
||||
- `throw_only_active` — similar to `throw`, but doesn't wait for inactive replicas of the `Replicated` database
|
||||
|
||||
Default value: `throw`.
|
||||
|
||||
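A hedged sketch, assuming the values above belong to the `distributed_ddl_output_mode` setting (the cluster name is a placeholder): return `NULL` status rows instead of failing when some hosts time out.

```sql
SET distributed_ddl_output_mode = 'null_status_on_timeout';

CREATE TABLE default.ddl_demo ON CLUSTER my_cluster
(
    id UInt64
)
ENGINE = MergeTree
ORDER BY id;
```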
@ -4771,6 +4773,45 @@ Type: Int64
|
||||
|
||||
Default: 0
|
||||
|
||||
## enable_deflate_qpl_codec {#enable_deflate_qpl_codec}
|
||||
|
||||
If turned on, the DEFLATE_QPL codec may be used to compress columns.
|
||||
|
||||
Possible values:
|
||||
|
||||
- 0 - Disabled
|
||||
- 1 - Enabled
|
||||
|
||||
Type: Bool
|
||||
|
||||
## enable_zstd_qat_codec {#enable_zstd_qat_codec}
|
||||
|
||||
If turned on, the ZSTD_QAT codec may be used to compress columns.
|
||||
|
||||
Possible values:
|
||||
|
||||
- 0 - Disabled
|
||||
- 1 - Enabled
|
||||
|
||||
Type: Bool
|
||||
|
||||
## output_format_compression_level
|
||||
|
||||
Default compression level if query output is compressed. The setting is applied when `SELECT` query has `INTO OUTFILE` or when writing to table functions `file`, `url`, `hdfs`, `s3`, or `azureBlobStorage`.
|
||||
|
||||
Possible values: from `1` to `22`
|
||||
|
||||
Default: `3`
|
||||
|
||||
|
||||
## output_format_compression_zstd_window_log
|
||||
|
||||
Can be used when the output compression method is `zstd`. If greater than `0`, this setting explicitly sets compression window size (power of `2`) and enables a long-range mode for zstd compression. This can help to achieve a better compression ratio.
|
||||
|
||||
Possible values: non-negative numbers. Note that if the value is too small or too big, `zstdlib` will throw an exception. Typical values are from `20` (window size = `1MB`) to `30` (window size = `1GB`).
|
||||
|
||||
Default: `0`
|
||||
|
||||
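A minimal sketch combining both settings in a `clickhouse-client` session; the file name is a placeholder and the chosen values are illustrative only.

```sql
SET output_format_compression_level = 10;
SET output_format_compression_zstd_window_log = 27;

-- The .zst extension selects zstd compression for the output file.
SELECT number
FROM numbers(1000000)
INTO OUTFILE 'numbers.tsv.zst';
```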
## rewrite_count_distinct_if_with_count_distinct_implementation
|
||||
|
||||
Allows you to rewrite `countDistinctIf` with the [count_distinct_implementation](#count_distinct_implementation) setting.
|
||||
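A hedged sketch of the rewrite, assuming `count_distinct_implementation` is set to the desired aggregate function:

```sql
SET count_distinct_implementation = 'uniqCombined';
SET rewrite_count_distinct_if_with_count_distinct_implementation = 1;

-- With the setting enabled, this is expected to be rewritten internally
-- to uniqCombinedIf(number, number % 2 = 0).
SELECT countDistinctIf(number, number % 2 = 0) FROM numbers(1000);
```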
@ -5155,4 +5196,4 @@ The value 0 means that you can delete all tables without any restrictions.
|
||||
|
||||
:::note
|
||||
This query setting overwrites its server setting equivalent, see [max_table_size_to_drop](/docs/en/operations/server-configuration-parameters/settings.md/#max-table-size-to-drop)
|
||||
:::
|
||||
:::
|
||||
|
@ -196,7 +196,7 @@ These settings should be defined in the disk configuration section.
|
||||
|
||||
- `max_elements` - a limit for a number of cache files. Default: `10000000`.
|
||||
|
||||
- `load_metadata_threads` - number of threads being used to load cache metadata on starting time. Default: `1`.
|
||||
- `load_metadata_threads` - number of threads being used to load cache metadata on starting time. Default: `16`.
|
||||
|
||||
File Cache **query/profile settings**:
|
||||
|
||||
|
14
docs/en/operations/system-tables/dropped_tables_parts.md
Normal file
@ -0,0 +1,14 @@
|
||||
---
|
||||
slug: /en/operations/system-tables/dropped_tables_parts
|
||||
---
|
||||
# dropped_tables_parts {#system_tables-dropped_tables_parts}
|
||||
|
||||
Contains information about parts of dropped [MergeTree](../../engines/table-engines/mergetree-family/mergetree.md) tables, i.e. tables listed in [system.dropped_tables](./dropped_tables.md).
|
||||
|
||||
The schema of this table is the same as [system.parts](./parts.md).
|
||||
|
||||
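A minimal sketch of querying the table (the selected columns are assumed to match `system.parts`, as stated above):

```sql
SELECT database, table, name, rows, bytes_on_disk
FROM system.dropped_tables_parts
LIMIT 5;
```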
**See Also**
|
||||
|
||||
- [MergeTree family](../../engines/table-engines/mergetree-family/mergetree.md)
|
||||
- [system.parts](./parts.md)
|
||||
- [system.dropped_tables](./dropped_tables.md)
|
@ -42,7 +42,7 @@ Columns:
|
||||
- `'ExceptionWhileProcessing' = 4` — Exception during the query execution.
|
||||
- `event_date` ([Date](../../sql-reference/data-types/date.md)) — Query starting date.
|
||||
- `event_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Query starting time.
|
||||
- `event_time_microseconds` ([DateTime](../../sql-reference/data-types/datetime.md)) — Query starting time with microseconds precision.
|
||||
- `event_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Query starting time with microseconds precision.
|
||||
- `query_start_time` ([DateTime](../../sql-reference/data-types/datetime.md)) — Start time of query execution.
|
||||
- `query_start_time_microseconds` ([DateTime64](../../sql-reference/data-types/datetime64.md)) — Start time of query execution with microsecond precision.
|
||||
- `query_duration_ms` ([UInt64](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Duration of query execution in milliseconds.
|
||||
|
@ -14,6 +14,11 @@ Columns:
|
||||
- `changed` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) — Shows whether a setting was specified in `config.xml`
|
||||
- `description` ([String](../../sql-reference/data-types/string.md)) — Short server setting description.
|
||||
- `type` ([String](../../sql-reference/data-types/string.md)) — Server setting value type.
|
||||
- `changeable_without_restart` ([Enum8](../../sql-reference/data-types/enum.md)) — Whether the setting can be changed at server runtime. Values:
|
||||
- `'No' `
|
||||
- `'IncreaseOnly'`
|
||||
- `'DecreaseOnly'`
|
||||
- `'Yes'`
|
||||
- `is_obsolete` ([UInt8](../../sql-reference/data-types/int-uint.md#uint-ranges)) - Shows whether a setting is obsolete.
|
||||
|
||||
**Example**
|
||||
@ -27,22 +32,21 @@ WHERE name LIKE '%thread_pool%'
|
||||
```
|
||||
|
||||
``` text
|
||||
┌─name────────────────────────────────────────┬─value─┬─default─┬─changed─┬─description─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─type───┬─is_obsolete─┐
|
||||
│ max_thread_pool_size │ 10000 │ 10000 │ 1 │ The maximum number of threads that could be allocated from the OS and used for query execution and background operations. │ UInt64 │ 0 │
|
||||
│ max_thread_pool_free_size │ 1000 │ 1000 │ 0 │ The maximum number of threads that will always stay in a global thread pool once allocated and remain idle in case of insufficient number of tasks. │ UInt64 │ 0 │
|
||||
│ thread_pool_queue_size │ 10000 │ 10000 │ 0 │ The maximum number of tasks that will be placed in a queue and wait for execution. │ UInt64 │ 0 │
|
||||
│ max_io_thread_pool_size │ 100 │ 100 │ 0 │ The maximum number of threads that would be used for IO operations │ UInt64 │ 0 │
|
||||
│ max_io_thread_pool_free_size │ 0 │ 0 │ 0 │ Max free size for IO thread pool. │ UInt64 │ 0 │
|
||||
│ io_thread_pool_queue_size │ 10000 │ 10000 │ 0 │ Queue size for IO thread pool. │ UInt64 │ 0 │
|
||||
│ max_active_parts_loading_thread_pool_size │ 64 │ 64 │ 0 │ The number of threads to load active set of data parts (Active ones) at startup. │ UInt64 │ 0 │
|
||||
│ max_outdated_parts_loading_thread_pool_size │ 32 │ 32 │ 0 │ The number of threads to load inactive set of data parts (Outdated ones) at startup. │ UInt64 │ 0 │
|
||||
│ max_parts_cleaning_thread_pool_size │ 128 │ 128 │ 0 │ The number of threads for concurrent removal of inactive data parts. │ UInt64 │ 0 │
|
||||
│ max_backups_io_thread_pool_size │ 1000 │ 1000 │ 0 │ The maximum number of threads that would be used for IO operations for BACKUP queries │ UInt64 │ 0 │
|
||||
│ max_backups_io_thread_pool_free_size │ 0 │ 0 │ 0 │ Max free size for backups IO thread pool. │ UInt64 │ 0 │
|
||||
│ backups_io_thread_pool_queue_size │ 0 │ 0 │ 0 │ Queue size for backups IO thread pool. │ UInt64 │ 0 │
|
||||
└─────────────────────────────────────────────┴───────┴─────────┴─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┴─────────────┘
|
||||
┌─name────────────────────────────────────────┬─value─┬─default─┬─changed─┬─description─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─type───┬─changeable_without_restart─┬─is_obsolete─┐
|
||||
│ max_thread_pool_size │ 10000 │ 10000 │ 0 │ The maximum number of threads that could be allocated from the OS and used for query execution and background operations. │ UInt64 │ No │ 0 │
|
||||
│ max_thread_pool_free_size │ 1000 │ 1000 │ 0 │ The maximum number of threads that will always stay in a global thread pool once allocated and remain idle in case of insufficient number of tasks. │ UInt64 │ No │ 0 │
|
||||
│ thread_pool_queue_size │ 10000 │ 10000 │ 0 │ The maximum number of tasks that will be placed in a queue and wait for execution. │ UInt64 │ No │ 0 │
|
||||
│ max_io_thread_pool_size │ 100 │ 100 │ 0 │ The maximum number of threads that would be used for IO operations │ UInt64 │ No │ 0 │
|
||||
│ max_io_thread_pool_free_size │ 0 │ 0 │ 0 │ Max free size for IO thread pool. │ UInt64 │ No │ 0 │
|
||||
│ io_thread_pool_queue_size │ 10000 │ 10000 │ 0 │ Queue size for IO thread pool. │ UInt64 │ No │ 0 │
|
||||
│ max_active_parts_loading_thread_pool_size │ 64 │ 64 │ 0 │ The number of threads to load active set of data parts (Active ones) at startup. │ UInt64 │ No │ 0 │
|
||||
│ max_outdated_parts_loading_thread_pool_size │ 32 │ 32 │ 0 │ The number of threads to load inactive set of data parts (Outdated ones) at startup. │ UInt64 │ No │ 0 │
|
||||
│ max_parts_cleaning_thread_pool_size │ 128 │ 128 │ 0 │ The number of threads for concurrent removal of inactive data parts. │ UInt64 │ No │ 0 │
|
||||
│ max_backups_io_thread_pool_size │ 1000 │ 1000 │ 0 │ The maximum number of threads that would be used for IO operations for BACKUP queries │ UInt64 │ No │ 0 │
|
||||
│ max_backups_io_thread_pool_free_size │ 0 │ 0 │ 0 │ Max free size for backups IO thread pool. │ UInt64 │ No │ 0 │
|
||||
│ backups_io_thread_pool_queue_size │ 0 │ 0 │ 0 │ Queue size for backups IO thread pool. │ UInt64 │ No │ 0 │
|
||||
└─────────────────────────────────────────────┴───────┴─────────┴─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────┴────────────────────────────┴─────────────┘
|
||||
|
||||
```
|
||||
|
||||
Using `WHERE changed` can be useful, for example, when you want to check
|
||||
|
@ -10,7 +10,7 @@ Columns:
|
||||
- `hostname` ([LowCardinality(String)](../../sql-reference/data-types/string.md)) — Hostname of the server executing the query.
|
||||
- `event_date` (Date) — Date of the entry.
|
||||
- `event_time` (DateTime) — Time of the entry.
|
||||
- `event_time_microseconds` (DateTime) — Time of the entry with microseconds precision.
|
||||
- `event_time_microseconds` (DateTime64) — Time of the entry with microseconds precision.
|
||||
- `microseconds` (UInt32) — Microseconds of the entry.
|
||||
- `thread_name` (String) — Name of the thread from which the logging was done.
|
||||
- `thread_id` (UInt64) — OS thread ID.
|
||||
|
@ -11,6 +11,8 @@ Keys:
|
||||
- `--query` — Format queries of any length and complexity.
|
||||
- `--hilite` — Add syntax highlight with ANSI terminal escape sequences.
|
||||
- `--oneline` — Format in single line.
|
||||
- `--max_line_length` — Format queries whose length is less than the specified value as a single line.
|
||||
- `--comments` — Keep comments in the output.
|
||||
- `--quiet` or `-q` — Just check syntax, no output on success.
|
||||
- `--multiquery` or `-n` — Allow multiple queries in the same file.
|
||||
- `--obfuscate` — Obfuscate instead of formatting.
|
||||
@ -27,7 +29,7 @@ $ clickhouse-format --query "select number from numbers(10) where number%2 order
|
||||
|
||||
Result:
|
||||
|
||||
```sql
|
||||
```bash
|
||||
SELECT number
|
||||
FROM numbers(10)
|
||||
WHERE number % 2
|
||||
@ -49,22 +51,20 @@ SELECT sum(number) FROM numbers(5)
|
||||
3. Multiqueries:
|
||||
|
||||
```bash
|
||||
$ clickhouse-format -n <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELECT 1 UNION DISTINCT SELECT 3);"
|
||||
$ clickhouse-format -n <<< "SELECT min(number) FROM numbers(5); SELECT max(number) FROM numbers(5);"
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```sql
|
||||
SELECT *
|
||||
FROM
|
||||
(
|
||||
SELECT 1 AS x
|
||||
UNION ALL
|
||||
SELECT 1
|
||||
UNION DISTINCT
|
||||
SELECT 3
|
||||
)
|
||||
```
|
||||
SELECT min(number)
|
||||
FROM numbers(5)
|
||||
;
|
||||
|
||||
SELECT max(number)
|
||||
FROM numbers(5)
|
||||
;
|
||||
|
||||
```
|
||||
|
||||
4. Obfuscating:
|
||||
@ -75,7 +75,7 @@ $ clickhouse-format --seed Hello --obfuscate <<< "SELECT cost_first_screen BETWE
|
||||
|
||||
Result:
|
||||
|
||||
```sql
|
||||
```
|
||||
SELECT treasury_mammoth_hazelnut BETWEEN nutmeg AND span, CASE WHEN chive >= 116 THEN switching ELSE ANYTHING END;
|
||||
```
|
||||
|
||||
@ -87,7 +87,7 @@ $ clickhouse-format --seed World --obfuscate <<< "SELECT cost_first_screen BETWE
|
||||
|
||||
Result:
|
||||
|
||||
```sql
|
||||
```
|
||||
SELECT horse_tape_summer BETWEEN folklore AND moccasins, CASE WHEN intestine >= 116 THEN nonconformist ELSE FORESTRY END;
|
||||
```
|
||||
|
||||
@ -99,7 +99,7 @@ $ clickhouse-format --backslash <<< "SELECT * FROM (SELECT 1 AS x UNION ALL SELE
|
||||
|
||||
Result:
|
||||
|
||||
```sql
|
||||
```
|
||||
SELECT * \
|
||||
FROM \
|
||||
( \
|
||||
|
@ -24,7 +24,7 @@ A client application to interact with clickhouse-keeper by its native protocol.
|
||||
## Example {#clickhouse-keeper-client-example}
|
||||
|
||||
```bash
|
||||
./clickhouse-keeper-client -h localhost:9181 --connection-timeout 30 --session-timeout 30 --operation-timeout 30
|
||||
./clickhouse-keeper-client -h localhost -p 9181 --connection-timeout 30 --session-timeout 30 --operation-timeout 30
|
||||
Connected to ZooKeeper at [::1]:9181 with session_id 137
|
||||
/ :) ls
|
||||
keeper foo bar
|
||||
|
@ -6,7 +6,7 @@ sidebar_label: Arrays
|
||||
|
||||
# Array Functions
|
||||
|
||||
## empty
|
||||
## empty {#empty}
|
||||
|
||||
Checks whether the input array is empty.
|
||||
|
||||
@ -50,7 +50,7 @@ Result:
|
||||
└────────────────┘
|
||||
```
|
||||
|
||||
## notEmpty
|
||||
## notEmpty {#notempty}
|
||||
|
||||
Checks whether the input array is non-empty.
|
||||
|
||||
@ -221,7 +221,7 @@ SELECT has([1, 2, NULL], NULL)
|
||||
└─────────────────────────┘
|
||||
```
|
||||
|
||||
## hasAll
|
||||
## hasAll {#hasall}
|
||||
|
||||
Checks whether one array is a subset of another.
|
||||
|
||||
@ -261,7 +261,7 @@ Raises an exception `NO_COMMON_TYPE` if the set and subset elements do not share
|
||||
|
||||
`SELECT hasAll([[1, 2], [3, 4]], [[1, 2], [3, 5]])` returns 0.
|
||||
|
||||
## hasAny
|
||||
## hasAny {#hasany}
|
||||
|
||||
Checks whether two arrays have intersection by some elements.
|
||||
|
||||
|
@ -1483,7 +1483,9 @@ For mode values with a meaning of “with 4 or more days this year,” weeks are
|
||||
|
||||
- Otherwise, it is the last week of the previous year, and the next week is week 1.
|
||||
|
||||
For mode values with a meaning of “contains January 1”, the week contains January 1 is week 1. It does not matter how many days in the new year the week contained, even if it contained only one day.
|
||||
For mode values with a meaning of “contains January 1”, the week that contains January 1 is week 1.
|
||||
It does not matter how many days in the new year the week contained, even if it contained only one day.
|
||||
I.e. if the last week of December contains January 1 of the next year, it will be week 1 of the next year.
|
||||
|
||||
**Syntax**
|
||||
|
||||
|
@ -1777,32 +1777,67 @@ Result:
|
||||
└────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## sqid
|
||||
## sqidEncode
|
||||
|
||||
Transforms numbers into YouTube-like short URL hash called [Sqid](https://sqids.org/).
|
||||
Encodes numbers as a [Sqid](https://sqids.org/), which is a YouTube-like ID string.
|
||||
The output alphabet is `abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789`.
|
||||
Do not use this function for hashing - the generated IDs can be decoded back into the original numbers.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
sqid(number1, ...)
|
||||
sqidEncode(number1, ...)
|
||||
```
|
||||
|
||||
Alias: `sqid`
|
||||
|
||||
**Arguments**
|
||||
|
||||
- A variable number of UInt8, UInt16, UInt32 or UInt64 numbers.
|
||||
|
||||
**Returned Value**
|
||||
|
||||
A hash id [String](/docs/en/sql-reference/data-types/string.md).
|
||||
A sqid [String](/docs/en/sql-reference/data-types/string.md).
|
||||
|
||||
**Example**
|
||||
|
||||
```sql
|
||||
SELECT sqid(1, 2, 3, 4, 5);
|
||||
SELECT sqidEncode(1, 2, 3, 4, 5);
|
||||
```
|
||||
|
||||
```response
|
||||
┌─sqid(1, 2, 3, 4, 5)─┐
|
||||
│ gXHfJ1C6dN │
|
||||
└─────────────────────┘
|
||||
┌─sqidEncode(1, 2, 3, 4, 5)─┐
|
||||
│ gXHfJ1C6dN │
|
||||
└───────────────────────────┘
|
||||
```
|
||||
|
||||
## sqidDecode
|
||||
|
||||
Decodes a [Sqid](https://sqids.org/) back into its original numbers.
|
||||
Returns an empty array in case the input string is not a valid sqid.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
sqidDecode(sqid)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- A sqid - [String](/docs/en/sql-reference/data-types/string.md)
|
||||
|
||||
**Returned Value**
|
||||
|
||||
The sqid transformed to numbers [Array(UInt64)](/docs/en/sql-reference/data-types/array.md).
|
||||
|
||||
**Example**
|
||||
|
||||
```sql
|
||||
SELECT sqidDecode('gXHfJ1C6dN');
|
||||
```
|
||||
|
||||
```response
|
||||
┌─sqidDecode('gXHfJ1C6dN')─┐
|
||||
│ [1,2,3,4,5] │
|
||||
└──────────────────────────┘
|
||||
```
|
||||
|
@ -53,7 +53,7 @@ The rounded number of the same type as the input number.
|
||||
**Example of use with Float**
|
||||
|
||||
``` sql
|
||||
SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
|
||||
SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3;
|
||||
```
|
||||
|
||||
``` text
|
||||
@ -67,7 +67,22 @@ SELECT number / 2 AS x, round(x) FROM system.numbers LIMIT 3
|
||||
**Example of use with Decimal**
|
||||
|
||||
``` sql
|
||||
SELECT cast(number / 2 AS Decimal(10,4)) AS x, round(x) FROM system.numbers LIMIT 3
|
||||
SELECT cast(number / 2 AS Decimal(10,4)) AS x, round(x) FROM system.numbers LIMIT 3;
|
||||
```
|
||||
|
||||
``` text
|
||||
┌───x─┬─round(CAST(divide(number, 2), 'Decimal(10, 4)'))─┐
|
||||
│ 0 │ 0 │
|
||||
│ 0.5 │ 1 │
|
||||
│ 1 │ 1 │
|
||||
└─────┴──────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
If you want to keep the trailing zeros, you need to enable `output_format_decimal_trailing_zeros`
|
||||
|
||||
``` sql
|
||||
SELECT cast(number / 2 AS Decimal(10,4)) AS x, round(x) FROM system.numbers LIMIT 3 settings output_format_decimal_trailing_zeros=1;
|
||||
|
||||
```
|
||||
|
||||
``` text
|
||||
|
@ -731,7 +731,7 @@ Alias: `FROM_BASE64`.
|
||||
|
||||
Like `base64Decode` but returns an empty string in case of error.
|
||||
|
||||
## endsWith
|
||||
## endsWith {#endswith}
|
||||
|
||||
Returns whether string `str` ends with `suffix`.
|
||||
|
||||
@ -765,7 +765,7 @@ Result:
|
||||
└──────────────────────────┴──────────────────────┘
|
||||
```
|
||||
|
||||
## startsWith
|
||||
## startsWith {#startswith}
|
||||
|
||||
Returns whether string `str` starts with `prefix`.
|
||||
|
||||
@ -1383,6 +1383,148 @@ Result:
|
||||
└──────────────────┘
|
||||
```
|
||||
|
||||
## punycodeEncode
|
||||
|
||||
Returns the [Punycode](https://en.wikipedia.org/wiki/Punycode) representation of a string.
|
||||
The string must be UTF8-encoded, otherwise the behavior is undefined.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
punycodeEncode(val)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `val` - Input value. [String](../data-types/string.md)
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A Punycode representation of the input value. [String](../data-types/string.md)
|
||||
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
select punycodeEncode('München');
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```result
|
||||
┌─punycodeEncode('München')─┐
|
||||
│ Mnchen-3ya │
|
||||
└───────────────────────────┘
|
||||
```
|
||||
|
||||
## punycodeDecode
|
||||
|
||||
Returns the UTF8-encoded plaintext of a [Punycode](https://en.wikipedia.org/wiki/Punycode)-encoded string.
|
||||
If no valid Punycode-encoded string is given, an exception is thrown.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
punycodeDecode(val)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `val` - Punycode-encoded string. [String](../data-types/string.md)
|
||||
|
||||
**Returned value**
|
||||
|
||||
- The plaintext of the input value. [String](../data-types/string.md)
|
||||
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
select punycodeDecode('Mnchen-3ya');
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```result
|
||||
┌─punycodeDecode('Mnchen-3ya')─┐
|
||||
│ München │
|
||||
└──────────────────────────────┘
|
||||
```
|
||||
|
||||
## tryPunycodeDecode
|
||||
|
||||
Like `punycodeDecode` but returns an empty string if no valid Punycode-encoded string is given.
|
||||
|
||||
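A minimal sketch, reusing the value from the example above:

```sql
-- An invalid Punycode input would yield an empty string instead of an exception.
SELECT tryPunycodeDecode('Mnchen-3ya') AS decoded;
```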
## idnaEncode
|
||||
|
||||
Returns the ASCII representation (ToASCII algorithm) of a domain name according to the [Internationalized Domain Names in Applications](https://en.wikipedia.org/wiki/Internationalized_domain_name#Internationalizing_Domain_Names_in_Applications) (IDNA) mechanism.
The input string must be UTF8-encoded and translatable to an ASCII string, otherwise an exception is thrown.
|
||||
Note: No percent decoding or trimming of tabs, spaces or control characters is performed.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
idnaEncode(val)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `val` - Input value. [String](../data-types/string.md)
|
||||
|
||||
**Returned value**
|
||||
|
||||
- An ASCII representation of the input value according to the IDNA mechanism. [String](../data-types/string.md)
|
||||
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
select idnaEncode('straße.münchen.de');
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```result
|
||||
┌─idnaEncode('straße.münchen.de')─────┐
|
||||
│ xn--strae-oqa.xn--mnchen-3ya.de │
|
||||
└─────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## tryIdnaEncode
|
||||
|
||||
Like `idnaEncode` but returns an empty string in case of an error instead of throwing an exception.
|
||||
|
||||
## idnaDecode
|
||||
|
||||
Returns the Unicode (UTF-8) representation (ToUnicode algorithm) of a domain name according to the [Internationalized Domain Names in Applications](https://en.wikipedia.org/wiki/Internationalized_domain_name#Internationalizing_Domain_Names_in_Applications) (IDNA) mechanism.
|
||||
In case of an error (e.g. because the input is invalid), the input string is returned.
|
||||
Note that repeated application of `idnaEncode()` and `idnaDecode()` does not necessarily return the original string due to case normalization.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
idnaDecode(val)
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `val` - Input value. [String](../data-types/string.md)
|
||||
|
||||
**Returned value**
|
||||
|
||||
- A Unicode (UTF-8) representation of the input value according to the IDNA mechanism. [String](../data-types/string.md)
|
||||
|
||||
**Example**
|
||||
|
||||
``` sql
|
||||
select idnaDecode('xn--strae-oqa.xn--mnchen-3ya.de');
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
```result
|
||||
┌─idnaDecode('xn--strae-oqa.xn--mnchen-3ya.de')─┐
|
||||
│ straße.münchen.de │
|
||||
└───────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## byteHammingDistance
|
||||
|
||||
Calculates the [hamming distance](https://en.wikipedia.org/wiki/Hamming_distance) between two byte strings.
|
||||
@ -1463,6 +1605,78 @@ Result:
|
||||
|
||||
Alias: levenshteinDistance
|
||||
|
||||
## damerauLevenshteinDistance
|
||||
|
||||
Calculates the [Damerau-Levenshtein distance](https://en.wikipedia.org/wiki/Damerau%E2%80%93Levenshtein_distance) between two byte strings.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
damerauLevenshteinDistance(string1, string2)
|
||||
```
|
||||
|
||||
**Examples**
|
||||
|
||||
``` sql
|
||||
SELECT damerauLevenshteinDistance('clickhouse', 'mouse');
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─damerauLevenshteinDistance('clickhouse', 'mouse')─┐
|
||||
│ 6 │
|
||||
└───────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## jaroSimilarity
|
||||
|
||||
Calculates the [Jaro similarity](https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance#Jaro_similarity) between two byte strings.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
jaroSimilarity(string1, string2)
|
||||
```
|
||||
|
||||
**Examples**
|
||||
|
||||
``` sql
|
||||
SELECT jaroSimilarity('clickhouse', 'click');
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─jaroSimilarity('clickhouse', 'click')─┐
|
||||
│ 0.8333333333333333 │
|
||||
└───────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## jaroWinklerSimilarity
|
||||
|
||||
Calculates the [Jaro-Winkler similarity](https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler_distance#Jaro%E2%80%93Winkler_similarity) between two byte strings.
|
||||
|
||||
**Syntax**
|
||||
|
||||
```sql
|
||||
jaroWinklerSimilarity(string1, string2)
|
||||
```
|
||||
|
||||
**Examples**
|
||||
|
||||
``` sql
|
||||
SELECT jaroWinklerSimilarity('clickhouse', 'click');
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌─jaroWinklerSimilarity('clickhouse', 'click')─┐
|
||||
│ 0.8999999999999999 │
|
||||
└──────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## initcap
|
||||
|
||||
Convert the first letter of each word to upper case and the rest to lower case. Words are sequences of alphanumeric characters separated by non-alphanumeric characters.
|
||||
|
@ -207,7 +207,7 @@ Functions `multiSearchFirstIndexCaseInsensitive`, `multiSearchFirstIndexUTF8` an
|
||||
multiSearchFirstIndex(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, …, needle<sub>n</sub>\])
|
||||
```
|
||||
|
||||
## multiSearchAny
|
||||
## multiSearchAny {#multisearchany}
|
||||
|
||||
Returns 1, if at least one string needle<sub>i</sub> matches the string `haystack` and 0 otherwise.
|
||||
|
||||
@ -219,7 +219,7 @@ Functions `multiSearchAnyCaseInsensitive`, `multiSearchAnyUTF8` and `multiSearch
|
||||
multiSearchAny(haystack, \[needle<sub>1</sub>, needle<sub>2</sub>, …, needle<sub>n</sub>\])
|
||||
```
|
||||
|
||||
## match
|
||||
## match {#match}
|
||||
|
||||
Returns whether string `haystack` matches the regular expression `pattern` in [re2 regular syntax](https://github.com/google/re2/wiki/Syntax).
|
||||
|
||||
@ -414,7 +414,7 @@ Result:
|
||||
└────────────────────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
||||
## like
|
||||
## like {#like}
|
||||
|
||||
Returns whether string `haystack` matches the LIKE expression `pattern`.
|
||||
|
||||
@ -445,7 +445,7 @@ like(haystack, pattern)
|
||||
|
||||
Alias: `haystack LIKE pattern` (operator)
|
||||
|
||||
## notLike
|
||||
## notLike {#notlike}
|
||||
|
||||
Like `like` but negates the result.
|
||||
|
||||
|
@ -57,3 +57,56 @@ Result:
|
||||
│ 6 │
|
||||
└─────────┘
|
||||
```
|
||||
|
||||
## seriesDecomposeSTL
|
||||
|
||||
Decomposes a time series using STL [(Seasonal-Trend Decomposition Procedure Based on Loess)](https://www.wessa.net/download/stl.pdf) into a seasonal, a trend and a residual component.
|
||||
|
||||
**Syntax**
|
||||
|
||||
``` sql
|
||||
seriesDecomposeSTL(series, period);
|
||||
```
|
||||
|
||||
**Arguments**
|
||||
|
||||
- `series` - An array of numeric values
|
||||
- `period` - A positive integer
|
||||
|
||||
The number of data points in `series` should be at least twice the value of `period`.
|
||||
|
||||
**Returned value**
|
||||
|
||||
- An array of three arrays where the first array contains the seasonal component, the second the trend, and the third the residual component.
|
||||
|
||||
Type: [Array](../../sql-reference/data-types/array.md).
|
||||
|
||||
**Examples**
|
||||
|
||||
Query:
|
||||
|
||||
``` sql
|
||||
SELECT seriesDecomposeSTL([10.1, 20.45, 40.34, 10.1, 20.45, 40.34, 10.1, 20.45, 40.34, 10.1, 20.45, 40.34, 10.1, 20.45, 40.34, 10.1, 20.45, 40.34, 10.1, 20.45, 40.34, 10.1, 20.45, 40.34], 3) AS print_0;
|
||||
```
|
||||
|
||||
Result:
|
||||
|
||||
``` text
|
||||
┌───────────print_0──────────────────────────────────────────────────────────────────────────────────────────────────────┐
|
||||
│ [[
|
||||
-13.529999, -3.1799996, 16.71, -13.53, -3.1799996, 16.71, -13.53, -3.1799996,
|
||||
16.71, -13.530001, -3.18, 16.710001, -13.530001, -3.1800003, 16.710001, -13.530001,
|
||||
-3.1800003, 16.710001, -13.530001, -3.1799994, 16.71, -13.529999, -3.1799994, 16.709997
|
||||
],
|
||||
[
|
||||
23.63, 23.63, 23.630003, 23.630001, 23.630001, 23.630001, 23.630001, 23.630001,
|
||||
23.630001, 23.630001, 23.630001, 23.63, 23.630001, 23.630001, 23.63, 23.630001,
|
||||
23.630001, 23.63, 23.630001, 23.630001, 23.630001, 23.630001, 23.630001, 23.630003
|
||||
],
|
||||
[
|
||||
0, 0.0000019073486, -0.0000019073486, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, -0.0000019073486, 0,
|
||||
0
|
||||
]] │
|
||||
└────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
|
||||
```
|
||||
|
@ -372,15 +372,23 @@ ClickHouse supports general purpose codecs and specialized codecs.
|
||||
|
||||
#### ZSTD
|
||||
|
||||
`ZSTD[(level)]` — [ZSTD compression algorithm](https://en.wikipedia.org/wiki/Zstandard) with configurable `level`. Possible levels: \[1, 22\]. Default value: 1.
|
||||
`ZSTD[(level)]` — [ZSTD compression algorithm](https://en.wikipedia.org/wiki/Zstandard) with configurable `level`. Possible levels: \[1, 22\]. Default level: 1.
|
||||
|
||||
High compression levels are useful for asymmetric scenarios, like compress once, decompress repeatedly. Higher levels mean better compression and higher CPU usage.
|
||||
|
||||
#### ZSTD_QAT
|
||||
|
||||
`ZSTD_QAT[(level)]` — [ZSTD compression algorithm](https://en.wikipedia.org/wiki/Zstandard) with configurable level, implemented by [Intel® QATlib](https://github.com/intel/qatlib) and [Intel® QAT ZSTD Plugin](https://github.com/intel/QAT-ZSTD-Plugin). Possible levels: \[1, 12\]. Default level: 1. Recommended level range: \[6, 12\]. Some limitations apply:
|
||||
|
||||
- ZSTD_QAT is disabled by default and can only be used after enabling configuration setting [enable_zstd_qat_codec](../../../operations/settings/settings.md#enable_zstd_qat_codec).
|
||||
- For compression, ZSTD_QAT tries to use an Intel® QAT offloading device ([QuickAssist Technology](https://www.intel.com/content/www/us/en/developer/topic-technology/open/quick-assist-technology/overview.html)). If no such device was found, it will fallback to ZSTD compression in software.
|
||||
- Decompression is always performed in software.
|
||||
|
||||
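A hedged sketch of enabling the codec and applying it to a column; the table and column names are placeholders, and level 6 is just one value from the recommended range.

```sql
SET enable_zstd_qat_codec = 1;

CREATE TABLE hits_qat
(
    event_time DateTime,
    url String CODEC(ZSTD_QAT(6))
)
ENGINE = MergeTree
ORDER BY event_time;
```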
#### DEFLATE_QPL
|
||||
|
||||
`DEFLATE_QPL` — [Deflate compression algorithm](https://github.com/intel/qpl) implemented by Intel® Query Processing Library. Some limitations apply:
|
||||
|
||||
- DEFLATE_QPL is disabled by default and can only be used after setting configuration parameter `enable_deflate_qpl_codec = 1`.
|
||||
- DEFLATE_QPL is disabled by default and can only be used after enabling configuration setting [enable_deflate_qpl_codec](../../../operations/settings/settings.md#enable_deflate_qpl_codec).
|
||||
- DEFLATE_QPL requires a ClickHouse build compiled with SSE 4.2 instructions (by default, this is the case). Refer to [Build Clickhouse with DEFLATE_QPL](/docs/en/development/building_and_benchmarking_deflate_qpl.md/#Build-Clickhouse-with-DEFLATE_QPL) for more details.
|
||||
- DEFLATE_QPL works best if the system has a Intel® IAA (In-Memory Analytics Accelerator) offloading device. Refer to [Accelerator Configuration](https://intel.github.io/qpl/documentation/get_started_docs/installation.html#accelerator-configuration) and [Benchmark with DEFLATE_QPL](/docs/en/development/building_and_benchmarking_deflate_qpl.md/#Run-Benchmark-with-DEFLATE_QPL) for more details.
|
||||
- DEFLATE_QPL-compressed data can only be transferred between ClickHouse nodes compiled with SSE 4.2 enabled.
|
||||
|
@ -11,7 +11,7 @@ Its name comes from the fact that it can be looked at as executing `JOIN` with a
|
||||
|
||||
Syntax:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT <expr_list>
|
||||
FROM <left_subquery>
|
||||
[LEFT] ARRAY JOIN <array>
|
||||
@ -30,7 +30,7 @@ Supported types of `ARRAY JOIN` are listed below:
|
||||
|
||||
The examples below demonstrate the usage of the `ARRAY JOIN` and `LEFT ARRAY JOIN` clauses. Let’s create a table with an [Array](../../../sql-reference/data-types/array.md) type column and insert values into it:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
CREATE TABLE arrays_test
|
||||
(
|
||||
s String,
|
||||
@ -41,7 +41,7 @@ INSERT INTO arrays_test
|
||||
VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []);
|
||||
```
|
||||
|
||||
``` text
|
||||
```response
|
||||
┌─s───────────┬─arr─────┐
|
||||
│ Hello │ [1,2] │
|
||||
│ World │ [3,4,5] │
|
||||
@ -51,13 +51,13 @@ VALUES ('Hello', [1,2]), ('World', [3,4,5]), ('Goodbye', []);
|
||||
|
||||
The example below uses the `ARRAY JOIN` clause:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT s, arr
|
||||
FROM arrays_test
|
||||
ARRAY JOIN arr;
|
||||
```
|
||||
|
||||
``` text
|
||||
```response
|
||||
┌─s─────┬─arr─┐
|
||||
│ Hello │ 1 │
|
||||
│ Hello │ 2 │
|
||||
@ -69,13 +69,13 @@ ARRAY JOIN arr;
|
||||
|
||||
The next example uses the `LEFT ARRAY JOIN` clause:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT s, arr
|
||||
FROM arrays_test
|
||||
LEFT ARRAY JOIN arr;
|
||||
```
|
||||
|
||||
``` text
|
||||
```response
|
||||
┌─s───────────┬─arr─┐
|
||||
│ Hello │ 1 │
|
||||
│ Hello │ 2 │
|
||||
@ -90,13 +90,13 @@ LEFT ARRAY JOIN arr;
|
||||
|
||||
An alias can be specified for an array in the `ARRAY JOIN` clause. In this case, an array item can be accessed by this alias, but the array itself is accessed by the original name. Example:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT s, arr, a
|
||||
FROM arrays_test
|
||||
ARRAY JOIN arr AS a;
|
||||
```
|
||||
|
||||
``` text
|
||||
```response
|
||||
┌─s─────┬─arr─────┬─a─┐
|
||||
│ Hello │ [1,2] │ 1 │
|
||||
│ Hello │ [1,2] │ 2 │
|
||||
@ -108,13 +108,13 @@ ARRAY JOIN arr AS a;
|
||||
|
||||
Using aliases, you can perform `ARRAY JOIN` with an external array. For example:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT s, arr_external
|
||||
FROM arrays_test
|
||||
ARRAY JOIN [1, 2, 3] AS arr_external;
|
||||
```
|
||||
|
||||
``` text
|
||||
```response
|
||||
┌─s───────────┬─arr_external─┐
|
||||
│ Hello │ 1 │
|
||||
│ Hello │ 2 │
|
||||
@ -130,13 +130,13 @@ ARRAY JOIN [1, 2, 3] AS arr_external;
|
||||
|
||||
Multiple arrays can be comma-separated in the `ARRAY JOIN` clause. In this case, `JOIN` is performed with them simultaneously (the direct sum, not the cartesian product). Note that all the arrays must have the same size by default. Example:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT s, arr, a, num, mapped
|
||||
FROM arrays_test
|
||||
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS mapped;
|
||||
```
|
||||
|
||||
``` text
|
||||
```response
|
||||
┌─s─────┬─arr─────┬─a─┬─num─┬─mapped─┐
|
||||
│ Hello │ [1,2] │ 1 │ 1 │ 2 │
|
||||
│ Hello │ [1,2] │ 2 │ 2 │ 3 │
|
||||
@ -148,13 +148,13 @@ ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num, arrayMap(x -> x + 1, arr) AS ma
|
||||
|
||||
The example below uses the [arrayEnumerate](../../../sql-reference/functions/array-functions.md#array_functions-arrayenumerate) function:
|
||||
|
||||
``` sql
|
||||
```sql
|
||||
SELECT s, arr, a, num, arrayEnumerate(arr)
|
||||
FROM arrays_test
|
||||
ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
|
||||
```
|
||||
|
||||
``` text
|
||||
```response
|
||||
┌─s─────┬─arr─────┬─a─┬─num─┬─arrayEnumerate(arr)─┐
|
||||
│ Hello │ [1,2] │ 1 │ 1 │ [1,2] │
|
||||
│ Hello │ [1,2] │ 2 │ 2 │ [1,2] │
|
||||
@ -163,6 +163,7 @@ ARRAY JOIN arr AS a, arrayEnumerate(arr) AS num;
|
||||
│ World │ [3,4,5] │ 5 │ 3 │ [1,2,3] │
|
||||
└───────┴─────────┴───┴─────┴─────────────────────┘
|
||||
```
|
||||
|
||||
Multiple arrays with different sizes can be joined by using: `SETTINGS enable_unaligned_array_join = 1`. Example:

```sql
@@ -171,7 +172,7 @@ FROM arrays_test ARRAY JOIN arr as a, [['a','b'],['c']] as b
SETTINGS enable_unaligned_array_join = 1;
```

```text
```response
┌─s───────┬─arr─────┬─a─┬─b─────────┐
│ Hello   │ [1,2]   │ 1 │ ['a','b'] │
│ Hello   │ [1,2]   │ 2 │ ['c']     │
@@ -187,7 +188,7 @@ SETTINGS enable_unaligned_array_join = 1;

`ARRAY JOIN` also works with [nested data structures](../../../sql-reference/data-types/nested-data-structures/index.md):

``` sql
```sql
CREATE TABLE nested_test
(
    s String,
@@ -200,7 +201,7 @@ INSERT INTO nested_test
VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', [], []);
```

``` text
```response
┌─s───────┬─nest.x──┬─nest.y─────┐
│ Hello   │ [1,2]   │ [10,20]    │
│ World   │ [3,4,5] │ [30,40,50] │
@@ -208,13 +209,13 @@ VALUES ('Hello', [1,2], [10,20]), ('World', [3,4,5], [30,40,50]), ('Goodbye', []
└─────────┴─────────┴────────────┘
```

``` sql
```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest;
```

``` text
```response
┌─s─────┬─nest.x─┬─nest.y─┐
│ Hello │      1 │     10 │
│ Hello │      2 │     20 │
@@ -226,13 +227,13 @@ ARRAY JOIN nest;

When specifying names of nested data structures in `ARRAY JOIN`, the meaning is the same as `ARRAY JOIN` with all the array elements that it consists of. Examples are listed below:

``` sql
```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`, `nest.y`;
```

``` text
```response
┌─s─────┬─nest.x─┬─nest.y─┐
│ Hello │      1 │     10 │
│ Hello │      2 │     20 │
@@ -244,13 +245,13 @@ ARRAY JOIN `nest.x`, `nest.y`;

This variation also makes sense:

``` sql
```sql
SELECT s, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN `nest.x`;
```

``` text
```response
┌─s─────┬─nest.x─┬─nest.y─────┐
│ Hello │      1 │ [10,20]    │
│ Hello │      2 │ [10,20]    │
@@ -262,13 +263,13 @@ ARRAY JOIN `nest.x`;

An alias may be used for a nested data structure, in order to select either the `JOIN` result or the source array. Example:

``` sql
```sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`
FROM nested_test
ARRAY JOIN nest AS n;
```

``` text
```response
┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┐
│ Hello │   1 │  10 │ [1,2]   │ [10,20]    │
│ Hello │   2 │  20 │ [1,2]   │ [10,20]    │
@@ -280,13 +281,13 @@ ARRAY JOIN nest AS n;

Example of using the [arrayEnumerate](../../../sql-reference/functions/array-functions.md#array_functions-arrayenumerate) function:

``` sql
```sql
SELECT s, `n.x`, `n.y`, `nest.x`, `nest.y`, num
FROM nested_test
ARRAY JOIN nest AS n, arrayEnumerate(`nest.x`) AS num;
```

``` text
```response
┌─s─────┬─n.x─┬─n.y─┬─nest.x──┬─nest.y─────┬─num─┐
│ Hello │   1 │  10 │ [1,2]   │ [10,20]    │   1 │
│ Hello │   2 │  20 │ [1,2]   │ [10,20]    │   2 │
@@ -300,6 +301,11 @@ ARRAY JOIN nest AS n, arrayEnumerate(`nest.x`) AS num;

The query execution order is optimized when running `ARRAY JOIN`. Although `ARRAY JOIN` must always be specified before the [WHERE](../../../sql-reference/statements/select/where.md)/[PREWHERE](../../../sql-reference/statements/select/prewhere.md) clause in a query, technically they can be performed in any order, unless the result of `ARRAY JOIN` is used for filtering. The processing order is controlled by the query optimizer.
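
For instance, in the following sketch (reusing the `arrays_test` table from the examples above), the filter references the `ARRAY JOIN` alias, so the unrolling has to happen before the `WHERE` condition can be evaluated:

```sql
SELECT s, arr, a
FROM arrays_test
ARRAY JOIN arr AS a
WHERE a > 1;   -- the filter uses the alias `a`, so ARRAY JOIN cannot be reordered after it
```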
### Incompatibility with short-circuit function evaluation

[Short-circuit function evaluation](../../../operations/settings/index.md#short-circuit-function-evaluation) is a feature that optimizes the execution of complex expressions in specific functions such as `if`, `multiIf`, `and`, and `or`. It prevents potential exceptions, such as division by zero, from occurring during the execution of these functions.

`arrayJoin` is always executed and is not supported for short-circuit function evaluation. That's because it's a unique function, processed separately from all other functions during query analysis and execution, and it requires additional logic that doesn't work with short-circuit function execution. The reason is that the number of rows in the result depends on the `arrayJoin` result, and it's too complex and expensive to implement lazy execution of `arrayJoin`.
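
A small illustration (a sketch added here, not taken from the reference text): because `arrayJoin` defines how many rows the query returns, it runs regardless of any surrounding conditional and cannot be skipped the way a short-circuited `if` branch can:

```sql
SELECT
    arrayJoin([1, 2, 3]) AS x,          -- always evaluated: it alone determines that three rows are produced
    if(x > 1, 'big', 'small') AS label  -- short-circuiting applies to branches like these, never to arrayJoin
```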
## Related content

@@ -12,7 +12,7 @@ Join produces a new table by combining columns from one or multiple tables by us
``` sql
SELECT <expr_list>
FROM <left_table>
[GLOBAL] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER|SEMI|ANTI|ANY|ASOF] JOIN <right_table>
[GLOBAL] [INNER|LEFT|RIGHT|FULL|CROSS] [OUTER|SEMI|ANTI|ANY|ALL|ASOF] JOIN <right_table>
(ON <expr_list>)|(USING <column_list>) ...
```

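As a side note on the `ALL` keyword added to the grammar above, here is a hypothetical example (the `orders` and `customers` tables are made up for illustration): `ALL` is the default strictness and returns every matching pair of rows:

```sql
-- `orders` and `customers` are hypothetical tables used only for illustration.
SELECT o.id, c.name, o.amount
FROM orders AS o
ALL INNER JOIN customers AS c ON o.customer_id = c.id;
```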
@@ -296,6 +296,34 @@ PASTE JOIN
│ 1 │    0 │
└───┴──────┘
```
Note: In this case, the result can be nondeterministic if the reading is parallel. Example:
```SQL
SELECT *
FROM
(
    SELECT number AS a
    FROM numbers_mt(5)
) AS t1
PASTE JOIN
(
    SELECT number AS a
    FROM numbers(10)
    ORDER BY a DESC
) AS t2
SETTINGS max_block_size = 2;

┌─a─┬─t2.a─┐
│ 2 │    9 │
│ 3 │    8 │
└───┴──────┘
┌─a─┬─t2.a─┐
│ 0 │    7 │
│ 1 │    6 │
└───┴──────┘
┌─a─┬─t2.a─┐
│ 4 │    5 │
└───┴──────┘
```

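One way to avoid the effect shown above (a sketch, not part of the reference text) is to keep the left-hand side single-threaded, for example by reading from `numbers` instead of the parallel `numbers_mt`, so that blocks arrive and are paired in a stable order:

```sql
SELECT *
FROM
(
    SELECT number AS a
    FROM numbers(5)      -- single-stream source: rows arrive in their natural order
) AS t1
PASTE JOIN
(
    SELECT number AS a
    FROM numbers(10)
    ORDER BY a DESC
) AS t2
SETTINGS max_block_size = 2;
```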
## Distributed JOIN

@@ -578,7 +578,9 @@ SELECT

- Otherwise, it is the last week of the previous year, and the next week is week 1.

For modes with the meaning "contains January 1", week 1 is the week that contains January 1. It does not matter how many days of the new year that week contained, even if it contained only one day.
For modes with the meaning "contains January 1", week 1 is the week containing January 1.
It does not matter how many days of the new year this week contains, even if it contains only one day.
Thus, if the last week of December contains January 1 of the following year, it is counted as week 1 of the following year.

**Example**

@@ -1559,7 +1559,7 @@ TaskStatus ClusterCopier::processPartitionPieceTaskImpl(
        QueryPipeline input;
        QueryPipeline output;
        {
            BlockIO io_insert = InterpreterFactory::get(query_insert_ast, context_insert)->execute();
            BlockIO io_insert = InterpreterFactory::instance().get(query_insert_ast, context_insert)->execute();

            InterpreterSelectWithUnionQuery select(query_select_ast, context_select, SelectQueryOptions{});
            QueryPlan plan;
@@ -1944,7 +1944,7 @@ bool ClusterCopier::checkShardHasPartition(const ConnectionTimeouts & timeouts,

    auto local_context = Context::createCopy(context);
    local_context->setSettings(task_cluster->settings_pull);
    auto pipeline = InterpreterFactory::get(query_ast, local_context)->execute().pipeline;
    auto pipeline = InterpreterFactory::instance().get(query_ast, local_context)->execute().pipeline;
    PullingPipelineExecutor executor(pipeline);
    Block block;
    executor.pull(block);
@@ -1989,7 +1989,7 @@ bool ClusterCopier::checkPresentPartitionPiecesOnCurrentShard(const ConnectionTi

    auto local_context = Context::createCopy(context);
    local_context->setSettings(task_cluster->settings_pull);
    auto pipeline = InterpreterFactory::get(query_ast, local_context)->execute().pipeline;
    auto pipeline = InterpreterFactory::instance().get(query_ast, local_context)->execute().pipeline;
    PullingPipelineExecutor executor(pipeline);
    Block result;
    executor.pull(result);

@ -4,6 +4,7 @@
|
||||
#include <Common/TerminalSize.h>
|
||||
#include <Databases/registerDatabases.h>
|
||||
#include <IO/ConnectionTimeouts.h>
|
||||
#include <Interpreters/registerInterpreters.h>
|
||||
#include <Formats/registerFormats.h>
|
||||
#include <Common/scope_guard_safe.h>
|
||||
#include <unistd.h>
|
||||
@ -157,6 +158,7 @@ void ClusterCopierApp::mainImpl()
|
||||
context->setApplicationType(Context::ApplicationType::LOCAL);
|
||||
context->setPath(process_path + "/");
|
||||
|
||||
registerInterpreters();
|
||||
registerFunctions();
|
||||
registerAggregateFunctions();
|
||||
registerTableFunctions();
|
||||
|
@ -17,15 +17,7 @@
|
||||
#include <Common/Config/ConfigProcessor.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/parseGlobs.h>
|
||||
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic push
|
||||
# pragma clang diagnostic ignored "-Wzero-as-null-pointer-constant"
|
||||
#endif
|
||||
#include <re2/re2.h>
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic pop
|
||||
#endif
|
||||
#include <Common/re2.h>
|
||||
|
||||
static void setupLogging(const std::string & log_level)
|
||||
{
|
||||
|
@ -3,16 +3,19 @@
|
||||
#include <string_view>
|
||||
#include <boost/program_options.hpp>
|
||||
|
||||
#include <IO/copyData.h>
|
||||
#include <IO/ReadBufferFromFileDescriptor.h>
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/WriteBufferFromFileDescriptor.h>
|
||||
#include <IO/WriteBufferFromOStream.h>
|
||||
#include <Interpreters/registerInterpreters.h>
|
||||
#include <Parsers/ASTInsertQuery.h>
|
||||
#include <Parsers/ParserQuery.h>
|
||||
#include <Parsers/formatAST.h>
|
||||
#include <Parsers/obfuscateQueries.h>
|
||||
#include <Parsers/parseQuery.h>
|
||||
#include <Common/ErrorCodes.h>
|
||||
#include <Common/StringUtils/StringUtils.h>
|
||||
#include <Common/TerminalSize.h>
|
||||
|
||||
#include <Interpreters/Context.h>
|
||||
@ -29,22 +32,49 @@
|
||||
#include <DataTypes/DataTypeFactory.h>
|
||||
#include <Formats/FormatFactory.h>
|
||||
#include <Formats/registerFormats.h>
|
||||
#include <Processors/Transforms/getSourceFromASTInsertQuery.h>
|
||||
|
||||
|
||||
namespace DB::ErrorCodes
|
||||
{
|
||||
extern const int NOT_IMPLEMENTED;
|
||||
}
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
void skipSpacesAndComments(const char*& pos, const char* end, bool print_comments)
|
||||
{
|
||||
do
|
||||
{
|
||||
/// skip spaces to avoid throw exception after last query
|
||||
while (pos != end && std::isspace(*pos))
|
||||
++pos;
|
||||
|
||||
const char * comment_begin = pos;
|
||||
/// for skip comment after the last query and to not throw exception
|
||||
if (end - pos > 2 && *pos == '-' && *(pos + 1) == '-')
|
||||
{
|
||||
pos += 2;
|
||||
/// skip until the end of the line
|
||||
while (pos != end && *pos != '\n')
|
||||
++pos;
|
||||
if (print_comments)
|
||||
std::cout << std::string_view(comment_begin, pos - comment_begin) << "\n";
|
||||
}
|
||||
/// need to parse next sql
|
||||
else
|
||||
break;
|
||||
} while (pos != end);
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
#pragma GCC diagnostic ignored "-Wunused-function"
|
||||
#pragma GCC diagnostic ignored "-Wmissing-declarations"
|
||||
|
||||
extern const char * auto_time_zones[];
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
{
|
||||
extern const int INVALID_FORMAT_INSERT_QUERY_WITH_DATA;
|
||||
}
|
||||
}
|
||||
|
||||
int mainEntryClickHouseFormat(int argc, char ** argv)
|
||||
{
|
||||
using namespace DB;
|
||||
@ -55,8 +85,10 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
|
||||
desc.add_options()
|
||||
("query", po::value<std::string>(), "query to format")
|
||||
("help,h", "produce help message")
|
||||
("comments", "keep comments in the output")
|
||||
("hilite", "add syntax highlight with ANSI terminal escape sequences")
|
||||
("oneline", "format in single line")
|
||||
("max_line_length", po::value<size_t>()->default_value(0), "format in single line queries with length less than specified")
|
||||
("quiet,q", "just check syntax, no output on success")
|
||||
("multiquery,n", "allow multiple queries in the same file")
|
||||
("obfuscate", "obfuscate instead of formatting")
|
||||
@ -88,6 +120,8 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
|
||||
bool oneline = options.count("oneline");
|
||||
bool quiet = options.count("quiet");
|
||||
bool multiple = options.count("multiquery");
|
||||
bool print_comments = options.count("comments");
|
||||
size_t max_line_length = options["max_line_length"].as<size_t>();
|
||||
bool obfuscate = options.count("obfuscate");
|
||||
bool backslash = options.count("backslash");
|
||||
bool allow_settings_after_format_in_insert = options.count("allow_settings_after_format_in_insert");
|
||||
@ -104,6 +138,19 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
|
||||
return 2;
|
||||
}
|
||||
|
||||
if (oneline && max_line_length)
|
||||
{
|
||||
std::cerr << "Options 'oneline' and 'max_line_length' are mutually exclusive." << std::endl;
|
||||
return 2;
|
||||
}
|
||||
|
||||
if (max_line_length > 255)
|
||||
{
|
||||
std::cerr << "Option 'max_line_length' must be less than 256." << std::endl;
|
||||
return 2;
|
||||
}
|
||||
|
||||
|
||||
String query;
|
||||
|
||||
if (options.count("query"))
|
||||
@ -124,10 +171,10 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
|
||||
|
||||
if (options.count("seed"))
|
||||
{
|
||||
std::string seed;
|
||||
hash_func.update(options["seed"].as<std::string>());
|
||||
}
|
||||
|
||||
registerInterpreters();
|
||||
registerFunctions();
|
||||
registerAggregateFunctions();
|
||||
registerTableFunctions();
|
||||
@ -179,30 +226,75 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
|
||||
{
|
||||
const char * pos = query.data();
|
||||
const char * end = pos + query.size();
|
||||
skipSpacesAndComments(pos, end, print_comments);
|
||||
|
||||
ParserQuery parser(end, allow_settings_after_format_in_insert);
|
||||
do
|
||||
while (pos != end)
|
||||
{
|
||||
size_t approx_query_length = multiple ? find_first_symbols<';'>(pos, end) - pos : end - pos;
|
||||
|
||||
ASTPtr res = parseQueryAndMovePosition(
|
||||
parser, pos, end, "query", multiple, cmd_settings.max_query_size, cmd_settings.max_parser_depth);
|
||||
|
||||
/// For insert query with data(INSERT INTO ... VALUES ...), that will lead to the formatting failure,
|
||||
/// we should throw an exception early, and make exception message more readable.
|
||||
if (const auto * insert_query = res->as<ASTInsertQuery>(); insert_query && insert_query->data)
|
||||
std::unique_ptr<ReadBuffer> insert_query_payload = nullptr;
|
||||
/// If the query is INSERT ... VALUES, then we will try to parse the data.
|
||||
if (auto * insert_query = res->as<ASTInsertQuery>(); insert_query && insert_query->data)
|
||||
{
|
||||
throw Exception(DB::ErrorCodes::INVALID_FORMAT_INSERT_QUERY_WITH_DATA,
|
||||
"Can't format ASTInsertQuery with data, since data will be lost");
|
||||
if ("Values" != insert_query->format)
|
||||
throw Exception(DB::ErrorCodes::NOT_IMPLEMENTED, "Can't format INSERT query with data format '{}'", insert_query->format);
|
||||
|
||||
/// Reset format to default to have `INSERT INTO table VALUES` instead of `INSERT INTO table VALUES FORMAT Values`
|
||||
insert_query->format = {};
|
||||
|
||||
/// We assume that data ends with a newline character (same as client does)
|
||||
const char * this_query_end = find_first_symbols<'\n'>(insert_query->data, end);
|
||||
insert_query->end = this_query_end;
|
||||
pos = this_query_end;
|
||||
insert_query_payload = getReadBufferFromASTInsertQuery(res);
|
||||
}
|
||||
|
||||
if (!quiet)
|
||||
{
|
||||
if (!backslash)
|
||||
{
|
||||
WriteBufferFromOStream res_buf(std::cout, 4096);
|
||||
formatAST(*res, res_buf, hilite, oneline);
|
||||
res_buf.finalize();
|
||||
if (multiple)
|
||||
std::cout << "\n;\n";
|
||||
WriteBufferFromOwnString str_buf;
|
||||
formatAST(*res, str_buf, hilite, oneline || approx_query_length < max_line_length);
|
||||
|
||||
if (insert_query_payload)
|
||||
{
|
||||
str_buf.write(' ');
|
||||
copyData(*insert_query_payload, str_buf);
|
||||
}
|
||||
|
||||
String res_string = str_buf.str();
|
||||
const char * s_pos = res_string.data();
|
||||
const char * s_end = s_pos + res_string.size();
|
||||
/// remove trailing spaces
|
||||
while (s_end > s_pos && isWhitespaceASCIIOneLine(*(s_end - 1)))
|
||||
--s_end;
|
||||
WriteBufferFromOStream res_cout(std::cout, 4096);
|
||||
/// For multiline queries we print ';' at new line,
|
||||
/// but for single line queries we print ';' at the same line
|
||||
bool has_multiple_lines = false;
|
||||
while (s_pos != s_end)
|
||||
{
|
||||
if (*s_pos == '\n')
|
||||
has_multiple_lines = true;
|
||||
res_cout.write(*s_pos++);
|
||||
}
|
||||
res_cout.finalize();
|
||||
|
||||
if (multiple && !insert_query_payload)
|
||||
{
|
||||
if (oneline || !has_multiple_lines)
|
||||
std::cout << ";\n";
|
||||
else
|
||||
std::cout << "\n;\n";
|
||||
}
|
||||
else if (multiple && insert_query_payload)
|
||||
/// Do not need to add ; because it's already in the insert_query_payload
|
||||
std::cout << "\n";
|
||||
|
||||
std::cout << std::endl;
|
||||
}
|
||||
/// add additional '\' at the end of each line;
|
||||
@ -230,27 +322,10 @@ int mainEntryClickHouseFormat(int argc, char ** argv)
|
||||
std::cout << std::endl;
|
||||
}
|
||||
}
|
||||
|
||||
do
|
||||
{
|
||||
/// skip spaces to avoid throw exception after last query
|
||||
while (pos != end && std::isspace(*pos))
|
||||
++pos;
|
||||
|
||||
/// for skip comment after the last query and to not throw exception
|
||||
if (end - pos > 2 && *pos == '-' && *(pos + 1) == '-')
|
||||
{
|
||||
pos += 2;
|
||||
/// skip until the end of the line
|
||||
while (pos != end && *pos != '\n')
|
||||
++pos;
|
||||
}
|
||||
/// need to parse next sql
|
||||
else
|
||||
break;
|
||||
} while (pos != end);
|
||||
|
||||
} while (multiple && pos != end);
|
||||
skipSpacesAndComments(pos, end, print_comments);
|
||||
if (!multiple)
|
||||
break;
|
||||
}
|
||||
}
|
||||
}
|
||||
catch (...)
|
||||
|
@ -16,6 +16,7 @@
|
||||
#include <Common/SipHash.h>
|
||||
#include <Common/StringUtils/StringUtils.h>
|
||||
#include <Common/ShellCommand.h>
|
||||
#include <Common/re2.h>
|
||||
#include <base/find_symbols.h>
|
||||
|
||||
#include <IO/copyData.h>
|
||||
@ -24,15 +25,6 @@
|
||||
#include <IO/WriteBufferFromFile.h>
|
||||
#include <IO/WriteBufferFromFileDescriptor.h>
|
||||
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic push
|
||||
# pragma clang diagnostic ignored "-Wzero-as-null-pointer-constant"
|
||||
#endif
|
||||
#include <re2/re2.h>
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic pop
|
||||
#endif
|
||||
|
||||
static constexpr auto documentation = R"(
|
||||
A tool to extract information from Git repository for analytics.
|
||||
|
||||
|
@ -335,7 +335,7 @@ try
|
||||
else if (std::filesystem::is_directory(std::filesystem::path{config().getString("path", DBMS_DEFAULT_PATH)} / "coordination"))
|
||||
{
|
||||
throw Exception(ErrorCodes::NO_ELEMENTS_IN_CONFIG,
|
||||
"By default 'keeper.storage_path' could be assigned to {}, but the directory {} already exists. Please specify 'keeper.storage_path' in the keeper configuration explicitly",
|
||||
"By default 'keeper_server.storage_path' could be assigned to {}, but the directory {} already exists. Please specify 'keeper_server.storage_path' in the keeper configuration explicitly",
|
||||
KEEPER_DEFAULT_PATH, String{std::filesystem::path{config().getString("path", DBMS_DEFAULT_PATH)} / "coordination"});
|
||||
}
|
||||
else
|
||||
|
@ -20,6 +20,7 @@
|
||||
#include <Interpreters/JIT/CompiledExpressionCache.h>
|
||||
#include <Interpreters/ProcessList.h>
|
||||
#include <Interpreters/loadMetadata.h>
|
||||
#include <Interpreters/registerInterpreters.h>
|
||||
#include <base/getFQDNOrHostName.h>
|
||||
#include <Common/scope_guard_safe.h>
|
||||
#include <Interpreters/Session.h>
|
||||
@ -486,6 +487,7 @@ try
|
||||
Poco::ErrorHandler::set(&error_handler);
|
||||
}
|
||||
|
||||
registerInterpreters();
|
||||
/// Don't initialize DateLUT
|
||||
registerFunctions();
|
||||
registerAggregateFunctions();
|
||||
@ -728,12 +730,7 @@ void LocalServer::processConfig()
|
||||
/// We load temporary database first, because projections need it.
|
||||
DatabaseCatalog::instance().initializeAndLoadTemporaryDatabase();
|
||||
|
||||
/** Init dummy default DB
|
||||
* NOTE: We force using isolated default database to avoid conflicts with default database from server environment
|
||||
* Otherwise, metadata of temporary File(format, EXPLICIT_PATH) tables will pollute metadata/ directory;
|
||||
* if such tables will not be dropped, clickhouse-server will not be able to load them due to security reasons.
|
||||
*/
|
||||
std::string default_database = config().getString("default_database", "_local");
|
||||
std::string default_database = config().getString("default_database", "default");
|
||||
DatabaseCatalog::instance().attachDatabase(default_database, createClickHouseLocalDatabaseOverlay(default_database, global_context));
|
||||
global_context->setCurrentDatabase(default_database);
|
||||
|
||||
|
@ -58,6 +58,7 @@
|
||||
#include <Interpreters/ExternalDictionariesLoader.h>
|
||||
#include <Interpreters/ProcessList.h>
|
||||
#include <Interpreters/loadMetadata.h>
|
||||
#include <Interpreters/registerInterpreters.h>
|
||||
#include <Interpreters/JIT/CompiledExpressionCache.h>
|
||||
#include <Access/AccessControl.h>
|
||||
#include <Storages/StorageReplicatedMergeTree.h>
|
||||
@ -646,6 +647,7 @@ try
|
||||
}
|
||||
#endif
|
||||
|
||||
registerInterpreters();
|
||||
registerFunctions();
|
||||
registerAggregateFunctions();
|
||||
registerTableFunctions();
|
||||
@ -1260,11 +1262,11 @@ try
|
||||
{
|
||||
Settings::checkNoSettingNamesAtTopLevel(*config, config_path);
|
||||
|
||||
ServerSettings server_settings_;
|
||||
server_settings_.loadSettingsFromConfig(*config);
|
||||
ServerSettings new_server_settings;
|
||||
new_server_settings.loadSettingsFromConfig(*config);
|
||||
|
||||
size_t max_server_memory_usage = server_settings_.max_server_memory_usage;
|
||||
double max_server_memory_usage_to_ram_ratio = server_settings_.max_server_memory_usage_to_ram_ratio;
|
||||
size_t max_server_memory_usage = new_server_settings.max_server_memory_usage;
|
||||
double max_server_memory_usage_to_ram_ratio = new_server_settings.max_server_memory_usage_to_ram_ratio;
|
||||
|
||||
size_t current_physical_server_memory = getMemoryAmount(); /// With cgroups, the amount of memory available to the server can be changed dynamically.
|
||||
size_t default_max_server_memory_usage = static_cast<size_t>(current_physical_server_memory * max_server_memory_usage_to_ram_ratio);
|
||||
@ -1294,9 +1296,9 @@ try
|
||||
total_memory_tracker.setDescription("(total)");
|
||||
total_memory_tracker.setMetric(CurrentMetrics::MemoryTracking);
|
||||
|
||||
size_t merges_mutations_memory_usage_soft_limit = server_settings_.merges_mutations_memory_usage_soft_limit;
|
||||
size_t merges_mutations_memory_usage_soft_limit = new_server_settings.merges_mutations_memory_usage_soft_limit;
|
||||
|
||||
size_t default_merges_mutations_server_memory_usage = static_cast<size_t>(current_physical_server_memory * server_settings_.merges_mutations_memory_usage_to_ram_ratio);
|
||||
size_t default_merges_mutations_server_memory_usage = static_cast<size_t>(current_physical_server_memory * new_server_settings.merges_mutations_memory_usage_to_ram_ratio);
|
||||
if (merges_mutations_memory_usage_soft_limit == 0)
|
||||
{
|
||||
merges_mutations_memory_usage_soft_limit = default_merges_mutations_server_memory_usage;
|
||||
@ -1304,7 +1306,7 @@ try
|
||||
" ({} available * {:.2f} merges_mutations_memory_usage_to_ram_ratio)",
|
||||
formatReadableSizeWithBinarySuffix(merges_mutations_memory_usage_soft_limit),
|
||||
formatReadableSizeWithBinarySuffix(current_physical_server_memory),
|
||||
server_settings_.merges_mutations_memory_usage_to_ram_ratio);
|
||||
new_server_settings.merges_mutations_memory_usage_to_ram_ratio);
|
||||
}
|
||||
else if (merges_mutations_memory_usage_soft_limit > default_merges_mutations_server_memory_usage)
|
||||
{
|
||||
@ -1313,7 +1315,7 @@ try
|
||||
" ({} available * {:.2f} merges_mutations_memory_usage_to_ram_ratio)",
|
||||
formatReadableSizeWithBinarySuffix(merges_mutations_memory_usage_soft_limit),
|
||||
formatReadableSizeWithBinarySuffix(current_physical_server_memory),
|
||||
server_settings_.merges_mutations_memory_usage_to_ram_ratio);
|
||||
new_server_settings.merges_mutations_memory_usage_to_ram_ratio);
|
||||
}
|
||||
|
||||
LOG_INFO(log, "Merges and mutations memory limit is set to {}",
|
||||
@ -1322,7 +1324,7 @@ try
|
||||
background_memory_tracker.setDescription("(background)");
|
||||
background_memory_tracker.setMetric(CurrentMetrics::MergesMutationsMemoryTracking);
|
||||
|
||||
total_memory_tracker.setAllowUseJemallocMemory(server_settings_.allow_use_jemalloc_memory);
|
||||
total_memory_tracker.setAllowUseJemallocMemory(new_server_settings.allow_use_jemalloc_memory);
|
||||
|
||||
auto * global_overcommit_tracker = global_context->getGlobalOvercommitTracker();
|
||||
total_memory_tracker.setOvercommitTracker(global_overcommit_tracker);
|
||||
@ -1346,26 +1348,26 @@ try
|
||||
global_context->setRemoteHostFilter(*config);
|
||||
global_context->setHTTPHeaderFilter(*config);
|
||||
|
||||
global_context->setMaxTableSizeToDrop(server_settings_.max_table_size_to_drop);
|
||||
global_context->setMaxPartitionSizeToDrop(server_settings_.max_partition_size_to_drop);
|
||||
global_context->setMaxTableNumToWarn(server_settings_.max_table_num_to_warn);
|
||||
global_context->setMaxDatabaseNumToWarn(server_settings_.max_database_num_to_warn);
|
||||
global_context->setMaxPartNumToWarn(server_settings_.max_part_num_to_warn);
|
||||
global_context->setMaxTableSizeToDrop(new_server_settings.max_table_size_to_drop);
|
||||
global_context->setMaxPartitionSizeToDrop(new_server_settings.max_partition_size_to_drop);
|
||||
global_context->setMaxTableNumToWarn(new_server_settings.max_table_num_to_warn);
|
||||
global_context->setMaxDatabaseNumToWarn(new_server_settings.max_database_num_to_warn);
|
||||
global_context->setMaxPartNumToWarn(new_server_settings.max_part_num_to_warn);
|
||||
|
||||
ConcurrencyControl::SlotCount concurrent_threads_soft_limit = ConcurrencyControl::Unlimited;
|
||||
if (server_settings_.concurrent_threads_soft_limit_num > 0 && server_settings_.concurrent_threads_soft_limit_num < concurrent_threads_soft_limit)
|
||||
concurrent_threads_soft_limit = server_settings_.concurrent_threads_soft_limit_num;
|
||||
if (server_settings_.concurrent_threads_soft_limit_ratio_to_cores > 0)
|
||||
if (new_server_settings.concurrent_threads_soft_limit_num > 0 && new_server_settings.concurrent_threads_soft_limit_num < concurrent_threads_soft_limit)
|
||||
concurrent_threads_soft_limit = new_server_settings.concurrent_threads_soft_limit_num;
|
||||
if (new_server_settings.concurrent_threads_soft_limit_ratio_to_cores > 0)
|
||||
{
|
||||
auto value = server_settings_.concurrent_threads_soft_limit_ratio_to_cores * std::thread::hardware_concurrency();
|
||||
auto value = new_server_settings.concurrent_threads_soft_limit_ratio_to_cores * std::thread::hardware_concurrency();
|
||||
if (value > 0 && value < concurrent_threads_soft_limit)
|
||||
concurrent_threads_soft_limit = value;
|
||||
}
|
||||
ConcurrencyControl::instance().setMaxConcurrency(concurrent_threads_soft_limit);
|
||||
|
||||
global_context->getProcessList().setMaxSize(server_settings_.max_concurrent_queries);
|
||||
global_context->getProcessList().setMaxInsertQueriesAmount(server_settings_.max_concurrent_insert_queries);
|
||||
global_context->getProcessList().setMaxSelectQueriesAmount(server_settings_.max_concurrent_select_queries);
|
||||
global_context->getProcessList().setMaxSize(new_server_settings.max_concurrent_queries);
|
||||
global_context->getProcessList().setMaxInsertQueriesAmount(new_server_settings.max_concurrent_insert_queries);
|
||||
global_context->getProcessList().setMaxSelectQueriesAmount(new_server_settings.max_concurrent_select_queries);
|
||||
|
||||
if (config->has("keeper_server"))
|
||||
global_context->updateKeeperConfiguration(*config);
|
||||
@ -1376,68 +1378,68 @@ try
|
||||
/// This is done for backward compatibility.
|
||||
if (global_context->areBackgroundExecutorsInitialized())
|
||||
{
|
||||
auto new_pool_size = server_settings_.background_pool_size;
|
||||
auto new_ratio = server_settings_.background_merges_mutations_concurrency_ratio;
|
||||
auto new_pool_size = new_server_settings.background_pool_size;
|
||||
auto new_ratio = new_server_settings.background_merges_mutations_concurrency_ratio;
|
||||
global_context->getMergeMutateExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, static_cast<size_t>(new_pool_size * new_ratio));
|
||||
global_context->getMergeMutateExecutor()->updateSchedulingPolicy(server_settings_.background_merges_mutations_scheduling_policy.toString());
|
||||
global_context->getMergeMutateExecutor()->updateSchedulingPolicy(new_server_settings.background_merges_mutations_scheduling_policy.toString());
|
||||
}
|
||||
|
||||
if (global_context->areBackgroundExecutorsInitialized())
|
||||
{
|
||||
auto new_pool_size = server_settings_.background_move_pool_size;
|
||||
auto new_pool_size = new_server_settings.background_move_pool_size;
|
||||
global_context->getMovesExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, new_pool_size);
|
||||
}
|
||||
|
||||
if (global_context->areBackgroundExecutorsInitialized())
|
||||
{
|
||||
auto new_pool_size = server_settings_.background_fetches_pool_size;
|
||||
auto new_pool_size = new_server_settings.background_fetches_pool_size;
|
||||
global_context->getFetchesExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, new_pool_size);
|
||||
}
|
||||
|
||||
if (global_context->areBackgroundExecutorsInitialized())
|
||||
{
|
||||
auto new_pool_size = server_settings_.background_common_pool_size;
|
||||
auto new_pool_size = new_server_settings.background_common_pool_size;
|
||||
global_context->getCommonExecutor()->increaseThreadsAndMaxTasksCount(new_pool_size, new_pool_size);
|
||||
}
|
||||
|
||||
global_context->getBufferFlushSchedulePool().increaseThreadsCount(server_settings_.background_buffer_flush_schedule_pool_size);
|
||||
global_context->getSchedulePool().increaseThreadsCount(server_settings_.background_schedule_pool_size);
|
||||
global_context->getMessageBrokerSchedulePool().increaseThreadsCount(server_settings_.background_message_broker_schedule_pool_size);
|
||||
global_context->getDistributedSchedulePool().increaseThreadsCount(server_settings_.background_distributed_schedule_pool_size);
|
||||
global_context->getBufferFlushSchedulePool().increaseThreadsCount(new_server_settings.background_buffer_flush_schedule_pool_size);
|
||||
global_context->getSchedulePool().increaseThreadsCount(new_server_settings.background_schedule_pool_size);
|
||||
global_context->getMessageBrokerSchedulePool().increaseThreadsCount(new_server_settings.background_message_broker_schedule_pool_size);
|
||||
global_context->getDistributedSchedulePool().increaseThreadsCount(new_server_settings.background_distributed_schedule_pool_size);
|
||||
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderForegroundPoolId, server_settings_.tables_loader_foreground_pool_size);
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundLoadPoolId, server_settings_.tables_loader_background_pool_size);
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundStartupPoolId, server_settings_.tables_loader_background_pool_size);
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderForegroundPoolId, new_server_settings.tables_loader_foreground_pool_size);
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundLoadPoolId, new_server_settings.tables_loader_background_pool_size);
|
||||
global_context->getAsyncLoader().setMaxThreads(TablesLoaderBackgroundStartupPoolId, new_server_settings.tables_loader_background_pool_size);
|
||||
|
||||
getIOThreadPool().reloadConfiguration(
|
||||
server_settings.max_io_thread_pool_size,
|
||||
server_settings.max_io_thread_pool_free_size,
|
||||
server_settings.io_thread_pool_queue_size);
|
||||
new_server_settings.max_io_thread_pool_size,
|
||||
new_server_settings.max_io_thread_pool_free_size,
|
||||
new_server_settings.io_thread_pool_queue_size);
|
||||
|
||||
getBackupsIOThreadPool().reloadConfiguration(
|
||||
server_settings.max_backups_io_thread_pool_size,
|
||||
server_settings.max_backups_io_thread_pool_free_size,
|
||||
server_settings.backups_io_thread_pool_queue_size);
|
||||
new_server_settings.max_backups_io_thread_pool_size,
|
||||
new_server_settings.max_backups_io_thread_pool_free_size,
|
||||
new_server_settings.backups_io_thread_pool_queue_size);
|
||||
|
||||
getActivePartsLoadingThreadPool().reloadConfiguration(
|
||||
server_settings.max_active_parts_loading_thread_pool_size,
|
||||
new_server_settings.max_active_parts_loading_thread_pool_size,
|
||||
0, // We don't need any threads once all the parts will be loaded
|
||||
server_settings.max_active_parts_loading_thread_pool_size);
|
||||
new_server_settings.max_active_parts_loading_thread_pool_size);
|
||||
|
||||
getOutdatedPartsLoadingThreadPool().reloadConfiguration(
|
||||
server_settings.max_outdated_parts_loading_thread_pool_size,
|
||||
new_server_settings.max_outdated_parts_loading_thread_pool_size,
|
||||
0, // We don't need any threads once all the parts will be loaded
|
||||
server_settings.max_outdated_parts_loading_thread_pool_size);
|
||||
new_server_settings.max_outdated_parts_loading_thread_pool_size);
|
||||
|
||||
/// It could grow if we need to synchronously wait until all the data parts will be loaded.
|
||||
getOutdatedPartsLoadingThreadPool().setMaxTurboThreads(
|
||||
server_settings.max_active_parts_loading_thread_pool_size
|
||||
new_server_settings.max_active_parts_loading_thread_pool_size
|
||||
);
|
||||
|
||||
getPartsCleaningThreadPool().reloadConfiguration(
|
||||
server_settings.max_parts_cleaning_thread_pool_size,
|
||||
new_server_settings.max_parts_cleaning_thread_pool_size,
|
||||
0, // We don't need any threads one all the parts will be deleted
|
||||
server_settings.max_parts_cleaning_thread_pool_size);
|
||||
new_server_settings.max_parts_cleaning_thread_pool_size);
|
||||
|
||||
if (config->has("resources"))
|
||||
{
|
||||
|
@ -713,11 +713,11 @@
|
||||
For example, if there two users A, B and a row policy is defined only for A, then
|
||||
if this setting is true the user B will see all rows, and if this setting is false the user B will see no rows.
|
||||
By default this setting is false for compatibility with earlier access configurations. -->
|
||||
<users_without_row_policies_can_read_rows>false</users_without_row_policies_can_read_rows>
|
||||
<users_without_row_policies_can_read_rows>true</users_without_row_policies_can_read_rows>
|
||||
|
||||
<!-- By default, for backward compatibility ON CLUSTER queries ignore CLUSTER grant,
|
||||
however you can change this behaviour by setting this to true -->
|
||||
<on_cluster_queries_require_cluster_grant>false</on_cluster_queries_require_cluster_grant>
|
||||
<on_cluster_queries_require_cluster_grant>true</on_cluster_queries_require_cluster_grant>
|
||||
|
||||
<!-- By default, for backward compatibility "SELECT * FROM system.<table>" doesn't require any grants and can be executed
|
||||
by any user. You can change this behaviour by setting this to true.
|
||||
@ -725,19 +725,19 @@
|
||||
Exceptions: a few system tables ("tables", "columns", "databases", and some constant tables like "one", "contributors")
|
||||
are still accessible for everyone; and if there is a SHOW privilege (e.g. "SHOW USERS") granted the corresponding system
|
||||
table (i.e. "system.users") will be accessible. -->
|
||||
<select_from_system_db_requires_grant>false</select_from_system_db_requires_grant>
|
||||
<select_from_system_db_requires_grant>true</select_from_system_db_requires_grant>
|
||||
|
||||
<!-- By default, for backward compatibility "SELECT * FROM information_schema.<table>" doesn't require any grants and can be
|
||||
executed by any user. You can change this behaviour by setting this to true.
|
||||
If it's set to true then this query requires "GRANT SELECT ON information_schema.<table>" just like as for ordinary tables. -->
|
||||
<select_from_information_schema_requires_grant>false</select_from_information_schema_requires_grant>
|
||||
<select_from_information_schema_requires_grant>true</select_from_information_schema_requires_grant>
|
||||
|
||||
<!-- By default, for backward compatibility a settings profile constraint for a specific setting inherit every not set field from
|
||||
previous profile. You can change this behaviour by setting this to true.
|
||||
If it's set to true then if settings profile has a constraint for a specific setting, then this constraint completely cancels all
|
||||
actions of previous constraint (defined in other profiles) for the same specific setting, including fields that are not set by new constraint.
|
||||
It also enables 'changeable_in_readonly' constraint type -->
|
||||
<settings_constraints_replace_previous>false</settings_constraints_replace_previous>
|
||||
<settings_constraints_replace_previous>true</settings_constraints_replace_previous>
|
||||
|
||||
<!-- Number of seconds since last access a role is stored in the Role Cache -->
|
||||
<role_cache_expiration_time_seconds>600</role_cache_expiration_time_seconds>
|
||||
@ -1379,6 +1379,9 @@
|
||||
|
||||
<!-- Controls how many tasks could be in the queue -->
|
||||
<!-- <max_tasks_in_queue>1000</max_tasks_in_queue> -->
|
||||
|
||||
<!-- Host name of the current node. If specified, will only compare and not resolve hostnames inside the DDL tasks -->
|
||||
<!-- <host_name>replica</host_name> -->
|
||||
</distributed_ddl>
|
||||
|
||||
<!-- Settings to fine tune MergeTree tables. See documentation in source code, in MergeTreeSettings.h -->
|
||||
|
@ -1,5 +1,6 @@
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/TerminalSize.h>
|
||||
#include <Common/re2.h>
|
||||
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <IO/ReadBufferFromFile.h>
|
||||
@ -12,15 +13,6 @@
|
||||
#include <boost/program_options.hpp>
|
||||
#include <filesystem>
|
||||
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic push
|
||||
# pragma clang diagnostic ignored "-Wzero-as-null-pointer-constant"
|
||||
#endif
|
||||
#include <re2/re2.h>
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic pop
|
||||
#endif
|
||||
|
||||
namespace fs = std::filesystem;
|
||||
|
||||
#define EXTRACT_PATH_PATTERN ".*\\/store/(.*)"
|
||||
|
@ -24,20 +24,12 @@
|
||||
#include <Storages/MergeTree/MergeTreeSettings.h>
|
||||
#include <base/defines.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <Common/re2.h>
|
||||
#include <Poco/AccessExpireCache.h>
|
||||
#include <boost/algorithm/string/join.hpp>
|
||||
#include <filesystem>
|
||||
#include <mutex>
|
||||
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic push
|
||||
# pragma clang diagnostic ignored "-Wzero-as-null-pointer-constant"
|
||||
#endif
|
||||
#include <re2/re2.h>
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic pop
|
||||
#endif
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
|
@ -140,8 +140,7 @@ void SettingsProfilesCache::mergeSettingsAndConstraintsFor(EnabledSettings & ena
|
||||
|
||||
auto info = std::make_shared<SettingsProfilesInfo>(access_control);
|
||||
|
||||
info->profiles = merged_settings.toProfileIDs();
|
||||
substituteProfiles(merged_settings, info->profiles_with_implicit, info->names_of_profiles);
|
||||
substituteProfiles(merged_settings, info->profiles, info->profiles_with_implicit, info->names_of_profiles);
|
||||
|
||||
info->settings = merged_settings.toSettingsChanges();
|
||||
info->constraints = merged_settings.toSettingsConstraints(access_control);
|
||||
@ -152,9 +151,12 @@ void SettingsProfilesCache::mergeSettingsAndConstraintsFor(EnabledSettings & ena
|
||||
|
||||
void SettingsProfilesCache::substituteProfiles(
|
||||
SettingsProfileElements & elements,
|
||||
std::vector<UUID> & profiles,
|
||||
std::vector<UUID> & substituted_profiles,
|
||||
std::unordered_map<UUID, String> & names_of_substituted_profiles) const
|
||||
{
|
||||
profiles = elements.toProfileIDs();
|
||||
|
||||
/// We should substitute profiles in reversive order because the same profile can occur
|
||||
/// in `elements` multiple times (with some other settings in between) and in this case
|
||||
/// the last occurrence should override all the previous ones.
|
||||
@ -184,6 +186,11 @@ void SettingsProfilesCache::substituteProfiles(
|
||||
names_of_substituted_profiles.emplace(profile_id, profile->getName());
|
||||
}
|
||||
std::reverse(substituted_profiles.begin(), substituted_profiles.end());
|
||||
|
||||
std::erase_if(profiles, [&substituted_profiles_set](const UUID & profile_id)
|
||||
{
|
||||
return !substituted_profiles_set.contains(profile_id);
|
||||
});
|
||||
}
|
||||
|
||||
std::shared_ptr<const EnabledSettings> SettingsProfilesCache::getEnabledSettings(
|
||||
@ -225,13 +232,13 @@ std::shared_ptr<const SettingsProfilesInfo> SettingsProfilesCache::getSettingsPr
|
||||
if (auto pos = this->profile_infos_cache.get(profile_id))
|
||||
return *pos;
|
||||
|
||||
SettingsProfileElements elements = all_profiles[profile_id]->elements;
|
||||
SettingsProfileElements elements;
|
||||
auto & element = elements.emplace_back();
|
||||
element.parent_profile = profile_id;
|
||||
|
||||
auto info = std::make_shared<SettingsProfilesInfo>(access_control);
|
||||
|
||||
info->profiles.push_back(profile_id);
|
||||
info->profiles_with_implicit.push_back(profile_id);
|
||||
substituteProfiles(elements, info->profiles_with_implicit, info->names_of_profiles);
|
||||
substituteProfiles(elements, info->profiles, info->profiles_with_implicit, info->names_of_profiles);
|
||||
info->settings = elements.toSettingsChanges();
|
||||
info->constraints.merge(elements.toSettingsConstraints(access_control));
|
||||
|
||||
|
@ -37,7 +37,11 @@ private:
|
||||
void profileRemoved(const UUID & profile_id);
|
||||
void mergeSettingsAndConstraints();
|
||||
void mergeSettingsAndConstraintsFor(EnabledSettings & enabled) const;
|
||||
void substituteProfiles(SettingsProfileElements & elements, std::vector<UUID> & substituted_profiles, std::unordered_map<UUID, String> & names_of_substituted_profiles) const;
|
||||
|
||||
void substituteProfiles(SettingsProfileElements & elements,
|
||||
std::vector<UUID> & profiles,
|
||||
std::vector<UUID> & substituted_profiles,
|
||||
std::unordered_map<UUID, String> & names_of_substituted_profiles) const;
|
||||
|
||||
const AccessControl & access_control;
|
||||
std::unordered_map<UUID, SettingsProfilePtr> all_profiles;
|
||||
|
@ -14,8 +14,9 @@
|
||||
#include <DataTypes/DataTypesDecimal.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
#include <IO/ReadHelpers.h>
|
||||
#include <Common/PODArray.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Common/PODArray.h>
|
||||
#include <Common/iota.h>
|
||||
#include <base/types.h>
|
||||
|
||||
#include <boost/math/distributions/normal.hpp>
|
||||
@ -48,7 +49,7 @@ struct LargestTriangleThreeBucketsData : public StatisticalSample<Float64, Float
|
||||
// sort the this->x and this->y in ascending order of this->x using index
|
||||
std::vector<size_t> index(this->x.size());
|
||||
|
||||
std::iota(index.begin(), index.end(), 0);
|
||||
iota(index.data(), index.size(), size_t(0));
|
||||
::sort(index.begin(), index.end(), [&](size_t i1, size_t i2) { return this->x[i1] < this->x[i2]; });
|
||||
|
||||
SampleX temp_x{};
|
||||
|
@ -1,7 +1,8 @@
|
||||
#include <AggregateFunctions/AggregateFunctionFactory.h>
|
||||
#include <AggregateFunctions/FactoryHelpers.h>
|
||||
#include <AggregateFunctions/HelpersMinMaxAny.h>
|
||||
#include <AggregateFunctions/findNumeric.h>
|
||||
#include <Common/Concepts.h>
|
||||
#include <Common/findExtreme.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
@ -19,7 +20,7 @@ public:
|
||||
explicit AggregateFunctionsSingleValueMax(const DataTypePtr & type) : Parent(type) { }
|
||||
|
||||
/// Specializations for native numeric types
|
||||
ALWAYS_INLINE inline void addBatchSinglePlace(
|
||||
void addBatchSinglePlace(
|
||||
size_t row_begin,
|
||||
size_t row_end,
|
||||
AggregateDataPtr __restrict place,
|
||||
@ -27,7 +28,7 @@ public:
|
||||
Arena * arena,
|
||||
ssize_t if_argument_pos) const override;
|
||||
|
||||
ALWAYS_INLINE inline void addBatchSinglePlaceNotNull(
|
||||
void addBatchSinglePlaceNotNull(
|
||||
size_t row_begin,
|
||||
size_t row_end,
|
||||
AggregateDataPtr __restrict place,
|
||||
@ -53,10 +54,10 @@ void AggregateFunctionsSingleValueMax<typename DB::AggregateFunctionMaxData<Sing
|
||||
if (if_argument_pos >= 0) \
|
||||
{ \
|
||||
const auto & flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData(); \
|
||||
opt = findNumericMaxIf(column.getData().data(), flags.data(), row_begin, row_end); \
|
||||
opt = findExtremeMaxIf(column.getData().data(), flags.data(), row_begin, row_end); \
|
||||
} \
|
||||
else \
|
||||
opt = findNumericMax(column.getData().data(), row_begin, row_end); \
|
||||
opt = findExtremeMax(column.getData().data(), row_begin, row_end); \
|
||||
if (opt.has_value()) \
|
||||
this->data(place).changeIfGreater(opt.value()); \
|
||||
}
|
||||
@ -74,7 +75,57 @@ void AggregateFunctionsSingleValueMax<Data>::addBatchSinglePlace(
|
||||
Arena * arena,
|
||||
ssize_t if_argument_pos) const
|
||||
{
|
||||
return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
|
||||
if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
|
||||
{
|
||||
/// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
|
||||
/// faster than doing a permutation
|
||||
return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
|
||||
}
|
||||
|
||||
constexpr int nan_direction_hint = 1;
|
||||
auto const & column = *columns[0];
|
||||
if (if_argument_pos >= 0)
|
||||
{
|
||||
size_t index = row_begin;
|
||||
const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
|
||||
while (if_flags[index] == 0 && index < row_end)
|
||||
index++;
|
||||
if (index >= row_end)
|
||||
return;
|
||||
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if ((if_flags[i] != 0) && (column.compareAt(i, index, column, nan_direction_hint) > 0))
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfGreater(column, index, arena);
|
||||
}
|
||||
else
|
||||
{
|
||||
if (row_begin >= row_end)
|
||||
return;
|
||||
|
||||
/// TODO: Introduce row_begin and row_end to getPermutation
|
||||
if (row_begin != 0 || row_end != column.size())
|
||||
{
|
||||
size_t index = row_begin;
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if (column.compareAt(i, index, column, nan_direction_hint) > 0)
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfGreater(column, index, arena);
|
||||
}
|
||||
else
|
||||
{
|
||||
constexpr IColumn::PermutationSortDirection direction = IColumn::PermutationSortDirection::Descending;
|
||||
constexpr IColumn::PermutationSortStability stability = IColumn::PermutationSortStability::Unstable;
|
||||
IColumn::Permutation permutation;
|
||||
constexpr UInt64 limit = 1;
|
||||
column.getPermutation(direction, stability, limit, nan_direction_hint, permutation);
|
||||
this->data(place).changeIfGreater(column, permutation[0], arena);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// NOLINTBEGIN(bugprone-macro-parentheses)
|
||||
@ -97,10 +148,10 @@ void AggregateFunctionsSingleValueMax<typename DB::AggregateFunctionMaxData<Sing
|
||||
auto final_flags = std::make_unique<UInt8[]>(row_end); \
|
||||
for (size_t i = row_begin; i < row_end; ++i) \
|
||||
final_flags[i] = (!null_map[i]) & !!if_flags[i]; \
|
||||
opt = findNumericMaxIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
|
||||
opt = findExtremeMaxIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
|
||||
} \
|
||||
else \
|
||||
opt = findNumericMaxNotNull(column.getData().data(), null_map, row_begin, row_end); \
|
||||
opt = findExtremeMaxNotNull(column.getData().data(), null_map, row_begin, row_end); \
|
||||
if (opt.has_value()) \
|
||||
this->data(place).changeIfGreater(opt.value()); \
|
||||
}
|
||||
@ -119,7 +170,46 @@ void AggregateFunctionsSingleValueMax<Data>::addBatchSinglePlaceNotNull(
|
||||
Arena * arena,
|
||||
ssize_t if_argument_pos) const
|
||||
{
|
||||
return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
|
||||
if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
|
||||
{
|
||||
/// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
|
||||
/// faster than doing a permutation
|
||||
return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
|
||||
}
|
||||
|
||||
constexpr int nan_direction_hint = 1;
|
||||
auto const & column = *columns[0];
|
||||
if (if_argument_pos >= 0)
|
||||
{
|
||||
size_t index = row_begin;
|
||||
const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
|
||||
while ((if_flags[index] == 0 || null_map[index] != 0) && (index < row_end))
|
||||
index++;
|
||||
if (index >= row_end)
|
||||
return;
|
||||
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if ((if_flags[i] != 0) && (null_map[i] == 0) && (column.compareAt(i, index, column, nan_direction_hint) > 0))
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfGreater(column, index, arena);
|
||||
}
|
||||
else
|
||||
{
|
||||
size_t index = row_begin;
|
||||
while ((null_map[index] != 0) && (index < row_end))
|
||||
index++;
|
||||
if (index >= row_end)
|
||||
return;
|
||||
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if ((null_map[i] == 0) && (column.compareAt(i, index, column, nan_direction_hint) > 0))
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfGreater(column, index, arena);
|
||||
}
|
||||
}
|
||||
|
||||
AggregateFunctionPtr createAggregateFunctionMax(
|
||||
|
@ -1,7 +1,8 @@
|
||||
#include <AggregateFunctions/AggregateFunctionFactory.h>
|
||||
#include <AggregateFunctions/FactoryHelpers.h>
|
||||
#include <AggregateFunctions/HelpersMinMaxAny.h>
|
||||
#include <AggregateFunctions/findNumeric.h>
|
||||
#include <Common/Concepts.h>
|
||||
#include <Common/findExtreme.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -20,7 +21,7 @@ public:
|
||||
explicit AggregateFunctionsSingleValueMin(const DataTypePtr & type) : Parent(type) { }
|
||||
|
||||
/// Specializations for native numeric types
|
||||
ALWAYS_INLINE inline void addBatchSinglePlace(
|
||||
void addBatchSinglePlace(
|
||||
size_t row_begin,
|
||||
size_t row_end,
|
||||
AggregateDataPtr __restrict place,
|
||||
@ -28,7 +29,7 @@ public:
|
||||
Arena * arena,
|
||||
ssize_t if_argument_pos) const override;
|
||||
|
||||
ALWAYS_INLINE inline void addBatchSinglePlaceNotNull(
|
||||
void addBatchSinglePlaceNotNull(
|
||||
size_t row_begin,
|
||||
size_t row_end,
|
||||
AggregateDataPtr __restrict place,
|
||||
@ -54,10 +55,10 @@ public:
|
||||
if (if_argument_pos >= 0) \
|
||||
{ \
|
||||
const auto & flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData(); \
|
||||
opt = findNumericMinIf(column.getData().data(), flags.data(), row_begin, row_end); \
|
||||
opt = findExtremeMinIf(column.getData().data(), flags.data(), row_begin, row_end); \
|
||||
} \
|
||||
else \
|
||||
opt = findNumericMin(column.getData().data(), row_begin, row_end); \
|
||||
opt = findExtremeMin(column.getData().data(), row_begin, row_end); \
|
||||
if (opt.has_value()) \
|
||||
this->data(place).changeIfLess(opt.value()); \
|
||||
}
|
||||
@ -75,7 +76,57 @@ void AggregateFunctionsSingleValueMin<Data>::addBatchSinglePlace(
|
||||
Arena * arena,
|
||||
ssize_t if_argument_pos) const
|
||||
{
|
||||
return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
|
||||
if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
|
||||
{
|
||||
/// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
|
||||
/// faster than doing a permutation
|
||||
return Parent::addBatchSinglePlace(row_begin, row_end, place, columns, arena, if_argument_pos);
|
||||
}
|
||||
|
||||
constexpr int nan_direction_hint = 1;
|
||||
auto const & column = *columns[0];
|
||||
if (if_argument_pos >= 0)
|
||||
{
|
||||
size_t index = row_begin;
|
||||
const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
|
||||
while (if_flags[index] == 0 && index < row_end)
|
||||
index++;
|
||||
if (index >= row_end)
|
||||
return;
|
||||
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if ((if_flags[i] != 0) && (column.compareAt(i, index, column, nan_direction_hint) < 0))
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfLess(column, index, arena);
|
||||
}
|
||||
else
|
||||
{
|
||||
if (row_begin >= row_end)
|
||||
return;
|
||||
|
||||
/// TODO: Introduce row_begin and row_end to getPermutation
|
||||
if (row_begin != 0 || row_end != column.size())
|
||||
{
|
||||
size_t index = row_begin;
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if (column.compareAt(i, index, column, nan_direction_hint) < 0)
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfLess(column, index, arena);
|
||||
}
|
||||
else
|
||||
{
|
||||
constexpr IColumn::PermutationSortDirection direction = IColumn::PermutationSortDirection::Ascending;
|
||||
constexpr IColumn::PermutationSortStability stability = IColumn::PermutationSortStability::Unstable;
|
||||
IColumn::Permutation permutation;
|
||||
constexpr UInt64 limit = 1;
|
||||
column.getPermutation(direction, stability, limit, nan_direction_hint, permutation);
|
||||
this->data(place).changeIfLess(column, permutation[0], arena);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// NOLINTBEGIN(bugprone-macro-parentheses)
|
||||
@ -98,10 +149,10 @@ void AggregateFunctionsSingleValueMin<Data>::addBatchSinglePlace(
|
||||
auto final_flags = std::make_unique<UInt8[]>(row_end); \
|
||||
for (size_t i = row_begin; i < row_end; ++i) \
|
||||
final_flags[i] = (!null_map[i]) & !!if_flags[i]; \
|
||||
opt = findNumericMinIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
|
||||
opt = findExtremeMinIf(column.getData().data(), final_flags.get(), row_begin, row_end); \
|
||||
} \
|
||||
else \
|
||||
opt = findNumericMinNotNull(column.getData().data(), null_map, row_begin, row_end); \
|
||||
opt = findExtremeMinNotNull(column.getData().data(), null_map, row_begin, row_end); \
|
||||
if (opt.has_value()) \
|
||||
this->data(place).changeIfLess(opt.value()); \
|
||||
}
|
||||
@ -120,7 +171,46 @@ void AggregateFunctionsSingleValueMin<Data>::addBatchSinglePlaceNotNull(
|
||||
Arena * arena,
|
||||
ssize_t if_argument_pos) const
|
||||
{
|
||||
return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
|
||||
if constexpr (!is_any_of<typename Data::Impl, SingleValueDataString, SingleValueDataGeneric>)
|
||||
{
|
||||
/// Leave other numeric types (large integers, decimals, etc) to keep doing the comparison as it's
|
||||
/// faster than doing a permutation
|
||||
return Parent::addBatchSinglePlaceNotNull(row_begin, row_end, place, columns, null_map, arena, if_argument_pos);
|
||||
}
|
||||
|
||||
constexpr int nan_direction_hint = 1;
|
||||
auto const & column = *columns[0];
|
||||
if (if_argument_pos >= 0)
|
||||
{
|
||||
size_t index = row_begin;
|
||||
const auto & if_flags = assert_cast<const ColumnUInt8 &>(*columns[if_argument_pos]).getData();
|
||||
while ((if_flags[index] == 0 || null_map[index] != 0) && (index < row_end))
|
||||
index++;
|
||||
if (index >= row_end)
|
||||
return;
|
||||
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if ((if_flags[i] != 0) && (null_map[index] == 0) && (column.compareAt(i, index, column, nan_direction_hint) < 0))
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfLess(column, index, arena);
|
||||
}
|
||||
else
|
||||
{
|
||||
size_t index = row_begin;
|
||||
while ((null_map[index] != 0) && (index < row_end))
|
||||
index++;
|
||||
if (index >= row_end)
|
||||
return;
|
||||
|
||||
for (size_t i = index + 1; i < row_end; i++)
|
||||
{
|
||||
if ((null_map[i] == 0) && (column.compareAt(i, index, column, nan_direction_hint) < 0))
|
||||
index = i;
|
||||
}
|
||||
this->data(place).changeIfLess(column, index, arena);
|
||||
}
|
||||
}
|
||||
|
||||
AggregateFunctionPtr createAggregateFunctionMin(
|
||||
|
@@ -965,6 +965,7 @@ template <typename Data>
struct AggregateFunctionMinData : Data
{
using Self = AggregateFunctionMinData;
using Impl = Data;

bool changeIfBetter(const IColumn & column, size_t row_num, Arena * arena) { return this->changeIfLess(column, row_num, arena); }
bool changeIfBetter(const Self & to, Arena * arena) { return this->changeIfLess(to, arena); }
@@ -993,6 +994,7 @@ template <typename Data>
struct AggregateFunctionMaxData : Data
{
using Self = AggregateFunctionMaxData;
using Impl = Data;

bool changeIfBetter(const IColumn & column, size_t row_num, Arena * arena) { return this->changeIfGreater(column, row_num, arena); }
bool changeIfBetter(const Self & to, Arena * arena) { return this->changeIfGreater(to, arena); }
@ -6,6 +6,7 @@
|
||||
|
||||
#include <Common/FieldVisitorConvertToNumber.h>
|
||||
#include <Common/NaNUtils.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -63,10 +64,9 @@ struct QuantileLevels
|
||||
|
||||
if (isNaN(levels[i]) || levels[i] < 0 || levels[i] > 1)
|
||||
throw Exception(ErrorCodes::PARAMETER_OUT_OF_BOUND, "Quantile level is out of range [0..1]");
|
||||
|
||||
permutation[i] = i;
|
||||
}
|
||||
|
||||
iota(permutation.data(), size, Permutation::value_type(0));
|
||||
::sort(permutation.begin(), permutation.end(), [this] (size_t a, size_t b) { return levels[a] < levels[b]; });
|
||||
}
|
||||
};
|
||||
|
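
The same mechanical change repeats across the AggregateFunctions and Columns hunks below: hand-rolled index-filling loops and `std::iota` calls are replaced by the `iota()` helper from Common/iota.h. A minimal sketch of the pattern, assuming only the signature visible at the call sites in this diff (not part of the commit itself):

#include <Common/iota.h>      // assumed location, as used in the hunks below
#include <Columns/IColumn.h>

void fillIdentityPermutation(DB::IColumn::Permutation & res, size_t data_size)
{
    res.resize(data_size);
    /// Before this commit: an explicit loop (or std::iota over iterators)
    ///     for (size_t i = 0; i < data_size; ++i)
    ///         res[i] = i;
    /// After: a single call that writes 0, 1, 2, ... and can be optimized in one place.
    DB::iota(res.data(), data_size, DB::IColumn::Permutation::value_type(0));
}
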
@ -7,6 +7,7 @@
|
||||
#include <base/sort.h>
|
||||
|
||||
#include <Common/ArenaAllocator.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <IO/ReadHelpers.h>
|
||||
@ -30,7 +31,7 @@ std::pair<RanksArray, Float64> computeRanksAndTieCorrection(const Values & value
|
||||
const size_t size = values.size();
|
||||
/// Save initial positions, then sort indices according to the values.
|
||||
std::vector<size_t> indexes(size);
|
||||
std::iota(indexes.begin(), indexes.end(), 0);
|
||||
iota(indexes.data(), indexes.size(), size_t(0));
|
||||
std::sort(indexes.begin(), indexes.end(),
|
||||
[&] (size_t lhs, size_t rhs) { return values[lhs] < values[rhs]; });
|
||||
|
||||
|
@@ -1,15 +0,0 @@
#include <AggregateFunctions/findNumeric.h>

namespace DB
{
#define INSTANTIATION(T) \
template std::optional<T> findNumericMin(const T * __restrict ptr, size_t start, size_t end); \
template std::optional<T> findNumericMinNotNull(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end); \
template std::optional<T> findNumericMinIf(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end); \
template std::optional<T> findNumericMax(const T * __restrict ptr, size_t start, size_t end); \
template std::optional<T> findNumericMaxNotNull(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end); \
template std::optional<T> findNumericMaxIf(const T * __restrict ptr, const UInt8 * __restrict condition_map, size_t start, size_t end);

FOR_BASIC_NUMERIC_TYPES(INSTANTIATION)
#undef INSTANTIATION
}
@ -3,15 +3,7 @@
|
||||
#include <Analyzer/Identifier.h>
|
||||
#include <Analyzer/IQueryTreeNode.h>
|
||||
#include <Analyzer/ListNode.h>
|
||||
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic push
|
||||
# pragma clang diagnostic ignored "-Wzero-as-null-pointer-constant"
|
||||
#endif
|
||||
#include <re2/re2.h>
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic pop
|
||||
#endif
|
||||
#include <Common/re2.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
@@ -143,9 +143,17 @@ public:
return alias;
}

const String & getOriginalAlias() const
{
return original_alias.empty() ? alias : original_alias;
}

/// Set node alias
void setAlias(String alias_value)
{
if (original_alias.empty())
original_alias = std::move(alias);

alias = std::move(alias_value);
}

@@ -276,6 +284,9 @@ protected:

private:
String alias;
/// An alias from query. Alias can be replaced by query passes,
/// but we need to keep the original one to support additional_table_filters.
String original_alias;
ASTPtr original_ast;
};
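
A short illustration of the new bookkeeping (not part of the diff; `getAlias()` is assumed to be the existing accessor returning `alias` shown above):

// node is any IQueryTreeNode; behaviour follows setAlias()/getOriginalAlias() as defined above.
node->setAlias("t");           // alias becomes "t"; original_alias was empty and stays empty
node->setAlias("__table1");    // original_alias captures the previous alias "t" before it is overwritten
// node->getAlias()         == "__table1"   (what later query passes see)
// node->getOriginalAlias() == "t"          (what additional_table_filters keeps matching against)
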
@ -4,15 +4,7 @@
|
||||
#include <Analyzer/IQueryTreeNode.h>
|
||||
#include <Analyzer/ColumnTransformers.h>
|
||||
#include <Parsers/ASTAsterisk.h>
|
||||
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic push
|
||||
# pragma clang diagnostic ignored "-Wzero-as-null-pointer-constant"
|
||||
#endif
|
||||
#include <re2/re2.h>
|
||||
#ifdef __clang__
|
||||
# pragma clang diagnostic pop
|
||||
#endif
|
||||
#include <Common/re2.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
@ -64,39 +64,43 @@ public:
|
||||
auto lhs_argument_node_type = lhs_argument->getNodeType();
|
||||
auto rhs_argument_node_type = rhs_argument->getNodeType();
|
||||
|
||||
QueryTreeNodePtr candidate;
|
||||
|
||||
if (lhs_argument_node_type == QueryTreeNodeType::FUNCTION && rhs_argument_node_type == QueryTreeNodeType::FUNCTION)
|
||||
tryOptimizeComparisonTupleFunctions(node, lhs_argument, rhs_argument, comparison_function_name);
|
||||
candidate = tryOptimizeComparisonTupleFunctions(lhs_argument, rhs_argument, comparison_function_name);
|
||||
else if (lhs_argument_node_type == QueryTreeNodeType::FUNCTION && rhs_argument_node_type == QueryTreeNodeType::CONSTANT)
|
||||
tryOptimizeComparisonTupleFunctionAndConstant(node, lhs_argument, rhs_argument, comparison_function_name);
|
||||
candidate = tryOptimizeComparisonTupleFunctionAndConstant(lhs_argument, rhs_argument, comparison_function_name);
|
||||
else if (lhs_argument_node_type == QueryTreeNodeType::CONSTANT && rhs_argument_node_type == QueryTreeNodeType::FUNCTION)
|
||||
tryOptimizeComparisonTupleFunctionAndConstant(node, rhs_argument, lhs_argument, comparison_function_name);
|
||||
candidate = tryOptimizeComparisonTupleFunctionAndConstant(rhs_argument, lhs_argument, comparison_function_name);
|
||||
|
||||
if (candidate != nullptr && node->getResultType()->equals(*candidate->getResultType()))
|
||||
node = candidate;
|
||||
}
|
||||
|
||||
private:
|
||||
void tryOptimizeComparisonTupleFunctions(QueryTreeNodePtr & node,
|
||||
QueryTreeNodePtr tryOptimizeComparisonTupleFunctions(
|
||||
const QueryTreeNodePtr & lhs_function_node,
|
||||
const QueryTreeNodePtr & rhs_function_node,
|
||||
const std::string & comparison_function_name) const
|
||||
{
|
||||
const auto & lhs_function_node_typed = lhs_function_node->as<FunctionNode &>();
|
||||
if (lhs_function_node_typed.getFunctionName() != "tuple")
|
||||
return;
|
||||
return {};
|
||||
|
||||
const auto & rhs_function_node_typed = rhs_function_node->as<FunctionNode &>();
|
||||
if (rhs_function_node_typed.getFunctionName() != "tuple")
|
||||
return;
|
||||
return {};
|
||||
|
||||
const auto & lhs_tuple_function_arguments_nodes = lhs_function_node_typed.getArguments().getNodes();
|
||||
size_t lhs_tuple_function_arguments_nodes_size = lhs_tuple_function_arguments_nodes.size();
|
||||
|
||||
const auto & rhs_tuple_function_arguments_nodes = rhs_function_node_typed.getArguments().getNodes();
|
||||
if (lhs_tuple_function_arguments_nodes_size != rhs_tuple_function_arguments_nodes.size())
|
||||
return;
|
||||
return {};
|
||||
|
||||
if (lhs_tuple_function_arguments_nodes_size == 1)
|
||||
{
|
||||
node = makeComparisonFunction(lhs_tuple_function_arguments_nodes[0], rhs_tuple_function_arguments_nodes[0], comparison_function_name);
|
||||
return;
|
||||
return makeComparisonFunction(lhs_tuple_function_arguments_nodes[0], rhs_tuple_function_arguments_nodes[0], comparison_function_name);
|
||||
}
|
||||
|
||||
QueryTreeNodes tuple_arguments_equals_functions;
|
||||
@ -108,45 +112,44 @@ private:
|
||||
tuple_arguments_equals_functions.push_back(std::move(equals_function));
|
||||
}
|
||||
|
||||
node = makeEquivalentTupleComparisonFunction(std::move(tuple_arguments_equals_functions), comparison_function_name);
|
||||
return makeEquivalentTupleComparisonFunction(std::move(tuple_arguments_equals_functions), comparison_function_name);
|
||||
}
|
||||
|
||||
void tryOptimizeComparisonTupleFunctionAndConstant(QueryTreeNodePtr & node,
|
||||
QueryTreeNodePtr tryOptimizeComparisonTupleFunctionAndConstant(
|
||||
const QueryTreeNodePtr & function_node,
|
||||
const QueryTreeNodePtr & constant_node,
|
||||
const std::string & comparison_function_name) const
|
||||
{
|
||||
const auto & function_node_typed = function_node->as<FunctionNode &>();
|
||||
if (function_node_typed.getFunctionName() != "tuple")
|
||||
return;
|
||||
return {};
|
||||
|
||||
auto & constant_node_typed = constant_node->as<ConstantNode &>();
|
||||
const auto & constant_node_value = constant_node_typed.getValue();
|
||||
if (constant_node_value.getType() != Field::Types::Which::Tuple)
|
||||
return;
|
||||
return {};
|
||||
|
||||
const auto & constant_tuple = constant_node_value.get<const Tuple &>();
|
||||
|
||||
const auto & function_arguments_nodes = function_node_typed.getArguments().getNodes();
|
||||
size_t function_arguments_nodes_size = function_arguments_nodes.size();
|
||||
if (function_arguments_nodes_size != constant_tuple.size())
|
||||
return;
|
||||
return {};
|
||||
|
||||
auto constant_node_result_type = constant_node_typed.getResultType();
|
||||
const auto * tuple_data_type = typeid_cast<const DataTypeTuple *>(constant_node_result_type.get());
|
||||
if (!tuple_data_type)
|
||||
return;
|
||||
return {};
|
||||
|
||||
const auto & tuple_data_type_elements = tuple_data_type->getElements();
|
||||
if (tuple_data_type_elements.size() != function_arguments_nodes_size)
|
||||
return;
|
||||
return {};
|
||||
|
||||
if (function_arguments_nodes_size == 1)
|
||||
{
|
||||
auto comparison_argument_constant_value = std::make_shared<ConstantValue>(constant_tuple[0], tuple_data_type_elements[0]);
|
||||
auto comparison_argument_constant_node = std::make_shared<ConstantNode>(std::move(comparison_argument_constant_value));
|
||||
node = makeComparisonFunction(function_arguments_nodes[0], std::move(comparison_argument_constant_node), comparison_function_name);
|
||||
return;
|
||||
return makeComparisonFunction(function_arguments_nodes[0], std::move(comparison_argument_constant_node), comparison_function_name);
|
||||
}
|
||||
|
||||
QueryTreeNodes tuple_arguments_equals_functions;
|
||||
@ -160,7 +163,7 @@ private:
|
||||
tuple_arguments_equals_functions.push_back(std::move(equals_function));
|
||||
}
|
||||
|
||||
node = makeEquivalentTupleComparisonFunction(std::move(tuple_arguments_equals_functions), comparison_function_name);
|
||||
return makeEquivalentTupleComparisonFunction(std::move(tuple_arguments_equals_functions), comparison_function_name);
|
||||
}
|
||||
|
||||
QueryTreeNodePtr makeEquivalentTupleComparisonFunction(QueryTreeNodes tuple_arguments_equals_functions,
|
||||
|
@ -1,5 +1,6 @@
|
||||
#include <Analyzer/Passes/FuseFunctionsPass.h>
|
||||
|
||||
#include <Common/iota.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
#include <DataTypes/DataTypeArray.h>
|
||||
#include <DataTypes/DataTypeTuple.h>
|
||||
@ -184,7 +185,7 @@ FunctionNodePtr createFusedQuantilesNode(std::vector<QueryTreeNodePtr *> & nodes
|
||||
{
|
||||
/// Sort nodes and parameters in ascending order of quantile level
|
||||
std::vector<size_t> permutation(nodes.size());
|
||||
std::iota(permutation.begin(), permutation.end(), 0);
|
||||
iota(permutation.data(), permutation.size(), size_t(0));
|
||||
std::sort(permutation.begin(), permutation.end(), [&](size_t i, size_t j) { return parameters[i].get<Float64>() < parameters[j].get<Float64>(); });
|
||||
|
||||
std::vector<QueryTreeNodePtr *> new_nodes;
|
||||
|
@ -52,6 +52,7 @@
|
||||
|
||||
#include <Processors/Executors/PullingAsyncPipelineExecutor.h>
|
||||
|
||||
#include <Analyzer/createUniqueTableAliases.h>
|
||||
#include <Analyzer/Utils.h>
|
||||
#include <Analyzer/SetUtils.h>
|
||||
#include <Analyzer/AggregationUtils.h>
|
||||
@ -1198,7 +1199,7 @@ private:
|
||||
|
||||
static void mergeWindowWithParentWindow(const QueryTreeNodePtr & window_node, const QueryTreeNodePtr & parent_window_node, IdentifierResolveScope & scope);
|
||||
|
||||
static void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope);
|
||||
void replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_list, const QueryTreeNodes & projection_nodes, IdentifierResolveScope & scope);
|
||||
|
||||
static void convertLimitOffsetExpression(QueryTreeNodePtr & expression_node, const String & expression_description, IdentifierResolveScope & scope);
|
||||
|
||||
@ -2168,7 +2169,12 @@ void QueryAnalyzer::replaceNodesWithPositionalArguments(QueryTreeNodePtr & node_
|
||||
scope.scope_node->formatASTForErrorMessage());
|
||||
|
||||
--positional_argument_number;
|
||||
*node_to_replace = projection_nodes[positional_argument_number];
|
||||
*node_to_replace = projection_nodes[positional_argument_number]->clone();
|
||||
if (auto it = resolved_expressions.find(projection_nodes[positional_argument_number]);
|
||||
it != resolved_expressions.end())
|
||||
{
|
||||
resolved_expressions[*node_to_replace] = it->second;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@ -7366,6 +7372,7 @@ void QueryAnalysisPass::run(QueryTreeNodePtr query_tree_node, ContextPtr context
|
||||
{
|
||||
QueryAnalyzer analyzer;
|
||||
analyzer.resolve(query_tree_node, table_expression, context);
|
||||
createUniqueTableAliases(query_tree_node, table_expression, context);
|
||||
}
|
||||
|
||||
}
|
||||
|
@ -326,7 +326,7 @@ void addTableExpressionOrJoinIntoTablesInSelectQuery(ASTPtr & tables_in_select_q
|
||||
}
|
||||
}
|
||||
|
||||
QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node)
|
||||
QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node, bool add_array_join)
|
||||
{
|
||||
QueryTreeNodes result;
|
||||
|
||||
@ -357,6 +357,8 @@ QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node)
|
||||
{
|
||||
auto & array_join_node = node_to_process->as<ArrayJoinNode &>();
|
||||
nodes_to_process.push_front(array_join_node.getTableExpression());
|
||||
if (add_array_join)
|
||||
result.push_back(std::move(node_to_process));
|
||||
break;
|
||||
}
|
||||
case QueryTreeNodeType::JOIN:
|
||||
|
@ -51,7 +51,7 @@ std::optional<bool> tryExtractConstantFromConditionNode(const QueryTreeNodePtr &
|
||||
void addTableExpressionOrJoinIntoTablesInSelectQuery(ASTPtr & tables_in_select_query_ast, const QueryTreeNodePtr & table_expression, const IQueryTreeNode::ConvertToASTOptions & convert_to_ast_options);
|
||||
|
||||
/// Extract table, table function, query, union from join tree
|
||||
QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node);
|
||||
QueryTreeNodes extractTableExpressions(const QueryTreeNodePtr & join_tree_node, bool add_array_join = false);
|
||||
|
||||
/// Extract left table expression from join tree
|
||||
QueryTreeNodePtr extractLeftTableExpression(const QueryTreeNodePtr & join_tree_node);
|
||||
|
141 src/Analyzer/createUniqueTableAliases.cpp Normal file
@ -0,0 +1,141 @@
|
||||
#include <memory>
|
||||
#include <unordered_map>
|
||||
#include <Analyzer/createUniqueTableAliases.h>
|
||||
#include <Analyzer/FunctionNode.h>
|
||||
#include <Analyzer/InDepthQueryTreeVisitor.h>
|
||||
#include <Analyzer/IQueryTreeNode.h>
|
||||
#include <Analyzer/LambdaNode.h>
|
||||
#include <Analyzer/Utils.h>
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
namespace
|
||||
{
|
||||
|
||||
class CreateUniqueTableAliasesVisitor : public InDepthQueryTreeVisitorWithContext<CreateUniqueTableAliasesVisitor>
|
||||
{
|
||||
public:
|
||||
using Base = InDepthQueryTreeVisitorWithContext<CreateUniqueTableAliasesVisitor>;
|
||||
|
||||
explicit CreateUniqueTableAliasesVisitor(const ContextPtr & context)
|
||||
: Base(context)
|
||||
{
|
||||
// Insert a fake node on top of the stack.
|
||||
scope_nodes_stack.push_back(std::make_shared<LambdaNode>(Names{}, nullptr));
|
||||
}
|
||||
|
||||
void enterImpl(QueryTreeNodePtr & node)
|
||||
{
|
||||
auto node_type = node->getNodeType();
|
||||
|
||||
switch (node_type)
|
||||
{
|
||||
case QueryTreeNodeType::QUERY:
|
||||
[[fallthrough]];
|
||||
case QueryTreeNodeType::UNION:
|
||||
{
|
||||
/// Queries like `(SELECT 1) as t` have invalid syntax. To avoid creating such queries (e.g. in StorageDistributed)
|
||||
/// we need to remove aliases for top level queries.
|
||||
/// N.B. Subquery depth starts counting from 1, so the following condition checks if it's a top level.
|
||||
if (getSubqueryDepth() == 1)
|
||||
{
|
||||
node->removeAlias();
|
||||
break;
|
||||
}
|
||||
[[fallthrough]];
|
||||
}
|
||||
case QueryTreeNodeType::TABLE:
|
||||
[[fallthrough]];
|
||||
case QueryTreeNodeType::TABLE_FUNCTION:
|
||||
[[fallthrough]];
|
||||
case QueryTreeNodeType::ARRAY_JOIN:
|
||||
{
|
||||
auto & alias = table_expression_to_alias[node];
|
||||
if (alias.empty())
|
||||
{
|
||||
scope_to_nodes_with_aliases[scope_nodes_stack.back()].push_back(node);
|
||||
alias = fmt::format("__table{}", ++next_id);
|
||||
node->setAlias(alias);
|
||||
}
|
||||
break;
|
||||
}
|
||||
default:
|
||||
break;
|
||||
}
|
||||
|
||||
switch (node_type)
|
||||
{
|
||||
case QueryTreeNodeType::QUERY:
|
||||
[[fallthrough]];
|
||||
case QueryTreeNodeType::UNION:
|
||||
[[fallthrough]];
|
||||
case QueryTreeNodeType::LAMBDA:
|
||||
scope_nodes_stack.push_back(node);
|
||||
break;
|
||||
default:
|
||||
break;
|
||||
}
|
||||
}
|
||||
|
||||
void leaveImpl(QueryTreeNodePtr & node)
|
||||
{
|
||||
if (scope_nodes_stack.back() == node)
|
||||
{
|
||||
if (auto it = scope_to_nodes_with_aliases.find(scope_nodes_stack.back());
|
||||
it != scope_to_nodes_with_aliases.end())
|
||||
{
|
||||
for (const auto & node_with_alias : it->second)
|
||||
{
|
||||
table_expression_to_alias.erase(node_with_alias);
|
||||
}
|
||||
scope_to_nodes_with_aliases.erase(it);
|
||||
}
|
||||
scope_nodes_stack.pop_back();
|
||||
}
|
||||
|
||||
/// Here we revisit subquery for IN function. Reasons:
|
||||
/// * For remote query execution, query tree may be traversed a few times.
|
||||
/// In such a case, it is possible to get AST like
|
||||
/// `IN ((SELECT ... FROM table AS __table4) AS __table1)` which result in
|
||||
/// `Multiple expressions for the alias` exception
|
||||
/// * Tables in subqueries could have different aliases => different tree hashes,
|
||||
/// which is important to be able to find a set in PreparedSets
|
||||
/// See 01253_subquery_in_aggregate_function_JustStranger.
|
||||
///
|
||||
/// So, we revisit this subquery to make aliases stable.
|
||||
/// This should be safe cause columns from IN subquery can't be used in main query anyway.
|
||||
if (node->getNodeType() == QueryTreeNodeType::FUNCTION)
|
||||
{
|
||||
auto * function_node = node->as<FunctionNode>();
|
||||
if (isNameOfInFunction(function_node->getFunctionName()))
|
||||
{
|
||||
auto arg = function_node->getArguments().getNodes().back();
|
||||
/// Avoid aliasing IN `table`
|
||||
if (arg->getNodeType() != QueryTreeNodeType::TABLE)
|
||||
CreateUniqueTableAliasesVisitor(getContext()).visit(function_node->getArguments().getNodes().back());
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
private:
|
||||
size_t next_id = 0;
|
||||
|
||||
// Stack of nodes which create scopes: QUERY, UNION and LAMBDA.
|
||||
QueryTreeNodes scope_nodes_stack;
|
||||
|
||||
std::unordered_map<QueryTreeNodePtr, QueryTreeNodes> scope_to_nodes_with_aliases;
|
||||
|
||||
// We need to use raw pointer as a key, not a QueryTreeNodePtrWithHash.
|
||||
std::unordered_map<QueryTreeNodePtr, String> table_expression_to_alias;
|
||||
};
|
||||
|
||||
}
|
||||
|
||||
|
||||
void createUniqueTableAliases(QueryTreeNodePtr & node, const QueryTreeNodePtr & /*table_expression*/, const ContextPtr & context)
|
||||
{
|
||||
CreateUniqueTableAliasesVisitor(context).visit(node);
|
||||
}
|
||||
|
||||
}
|
18
src/Analyzer/createUniqueTableAliases.h
Normal file
18
src/Analyzer/createUniqueTableAliases.h
Normal file
@ -0,0 +1,18 @@
|
||||
#pragma once
|
||||
|
||||
#include <memory>
|
||||
#include <Interpreters/Context_fwd.h>
|
||||
|
||||
class IQueryTreeNode;
|
||||
using QueryTreeNodePtr = std::shared_ptr<IQueryTreeNode>;
|
||||
|
||||
namespace DB
|
||||
{
|
||||
|
||||
/*
|
||||
* For each table expression in the Query Tree generate and add a unique alias.
|
||||
* If table expression had an alias in initial query tree, override it.
|
||||
*/
|
||||
void createUniqueTableAliases(QueryTreeNodePtr & node, const QueryTreeNodePtr & table_expression, const ContextPtr & context);
|
||||
|
||||
}
|
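
A minimal sketch of how the new pass is used and what it produces (illustrative only; the SQL in the comments and the wrapper function are not part of the diff):

#include <Analyzer/createUniqueTableAliases.h>

/// Hypothetical helper: after QueryAnalyzer::resolve() (see the QueryAnalysisPass hunk above),
/// every table expression gets a generated alias __table1, __table2, ... per scope, e.g.
///     SELECT id FROM events AS e JOIN users AS u ON e.user_id = u.id
/// is rewritten in the query tree roughly as
///     ... FROM events AS __table1 JOIN users AS __table2 ...
/// while getOriginalAlias() still returns "e" / "u".
void addUniqueAliases(DB::QueryTreeNodePtr & query_tree_node, const DB::ContextPtr & context)
{
    /// The second argument is ignored by the implementation above, so nullptr is fine for this sketch.
    DB::createUniqueTableAliases(query_tree_node, /*table_expression=*/ nullptr, context);
}
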
@ -396,7 +396,7 @@ OperationID BackupsWorker::startMakingBackup(const ASTPtr & query, const Context
|
||||
String backup_name_for_logging = backup_info.toStringForLogging();
|
||||
String base_backup_name;
|
||||
if (backup_settings.base_backup_info)
|
||||
base_backup_name = backup_settings.base_backup_info->toString();
|
||||
base_backup_name = backup_settings.base_backup_info->toStringForLogging();
|
||||
|
||||
try
|
||||
{
|
||||
@ -750,7 +750,7 @@ OperationID BackupsWorker::startRestoring(const ASTPtr & query, ContextMutablePt
|
||||
String backup_name_for_logging = backup_info.toStringForLogging();
|
||||
String base_backup_name;
|
||||
if (restore_settings.base_backup_info)
|
||||
base_backup_name = restore_settings.base_backup_info->toString();
|
||||
base_backup_name = restore_settings.base_backup_info->toStringForLogging();
|
||||
|
||||
addInfo(restore_id, backup_name_for_logging, base_backup_name, restore_settings.internal, BackupStatus::RESTORING);
|
||||
|
||||
|
@ -573,11 +573,12 @@ void RestorerFromBackup::createDatabase(const String & database_name) const
|
||||
create_database_query->if_not_exists = (restore_settings.create_table == RestoreTableCreationMode::kCreateIfNotExists);
|
||||
|
||||
LOG_TRACE(log, "Creating database {}: {}", backQuoteIfNeed(database_name), serializeAST(*create_database_query));
|
||||
|
||||
auto query_context = Context::createCopy(context);
|
||||
query_context->setSetting("allow_deprecated_database_ordinary", 1);
|
||||
try
|
||||
{
|
||||
/// Execute CREATE DATABASE query.
|
||||
InterpreterCreateQuery interpreter{create_database_query, context};
|
||||
InterpreterCreateQuery interpreter{create_database_query, query_context};
|
||||
interpreter.setInternal(true);
|
||||
interpreter.execute();
|
||||
}
|
||||
|
@@ -551,13 +551,18 @@ endif ()
target_link_libraries (clickhouse_common_io PRIVATE ch_contrib::lz4)

if (TARGET ch_contrib::qpl)
dbms_target_link_libraries(PUBLIC ch_contrib::qpl)
endif ()

if (TARGET ch_contrib::accel-config)
dbms_target_link_libraries(PUBLIC ch_contrib::accel-config)
endif ()

if (TARGET ch_contrib::qatzstd_plugin)
dbms_target_link_libraries(PUBLIC ch_contrib::qatzstd_plugin)
target_link_libraries(clickhouse_common_io PUBLIC ch_contrib::qatzstd_plugin)
endif ()

target_link_libraries(clickhouse_common_io PUBLIC boost::context)
dbms_target_link_libraries(PUBLIC boost::context)
||||
|
@@ -651,7 +651,13 @@ void Connection::sendQuery(
if (method == "ZSTD")
level = settings->network_zstd_compression_level;

CompressionCodecFactory::instance().validateCodec(method, level, !settings->allow_suspicious_codecs, settings->allow_experimental_codecs, settings->enable_deflate_qpl_codec);
CompressionCodecFactory::instance().validateCodec(
method,
level,
!settings->allow_suspicious_codecs,
settings->allow_experimental_codecs,
settings->enable_deflate_qpl_codec,
settings->enable_zstd_qat_codec);
compression_codec = CompressionCodecFactory::instance().get(method, level);
}
else
||||
|
@@ -77,7 +77,6 @@ static String getLoadSuggestionQuery(Int32 suggestion_limit, bool basic_suggesti
};

add_column("name", "functions", false, {});
add_column("name", "database_engines", false, {});
add_column("name", "table_engines", false, {});
add_column("name", "formats", false, {});
add_column("name", "table_functions", false, {});
||||
|
@ -1,18 +1,19 @@
|
||||
#include <Columns/ColumnAggregateFunction.h>
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Columns/MaskOperations.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Processors/Transforms/ColumnGathererTransform.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <IO/WriteBufferFromArena.h>
|
||||
#include <IO/WriteBufferFromString.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <Common/FieldVisitorToString.h>
|
||||
#include <Common/SipHash.h>
|
||||
#include <Processors/Transforms/ColumnGathererTransform.h>
|
||||
#include <Common/AlignedBuffer.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
#include <Common/Arena.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/FieldVisitorToString.h>
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Common/SipHash.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Common/iota.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -626,8 +627,7 @@ void ColumnAggregateFunction::getPermutation(PermutationSortDirection /*directio
|
||||
{
|
||||
size_t s = data.size();
|
||||
res.resize(s);
|
||||
for (size_t i = 0; i < s; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), s, IColumn::Permutation::value_type(0));
|
||||
}
|
||||
|
||||
void ColumnAggregateFunction::updatePermutation(PermutationSortDirection, PermutationSortStability,
|
||||
|
@ -2,9 +2,10 @@
|
||||
|
||||
#include <Columns/ColumnConst.h>
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/iota.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
|
||||
#include <base/defines.h>
|
||||
|
||||
@ -128,8 +129,7 @@ void ColumnConst::getPermutation(PermutationSortDirection /*direction*/, Permuta
|
||||
size_t /*limit*/, int /*nan_direction_hint*/, Permutation & res) const
|
||||
{
|
||||
res.resize(s);
|
||||
for (size_t i = 0; i < s; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), s, IColumn::Permutation::value_type(0));
|
||||
}
|
||||
|
||||
void ColumnConst::updatePermutation(PermutationSortDirection /*direction*/, PermutationSortStability /*stability*/,
|
||||
|
@ -1,10 +1,11 @@
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/Arena.h>
|
||||
#include <Common/SipHash.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Common/RadixSort.h>
|
||||
#include <Common/SipHash.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
#include <base/sort.h>
|
||||
|
||||
@ -163,8 +164,7 @@ void ColumnDecimal<T>::getPermutation(IColumn::PermutationSortDirection directio
|
||||
if (limit >= data_size)
|
||||
limit = 0;
|
||||
|
||||
for (size_t i = 0; i < data_size; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), data_size, IColumn::Permutation::value_type(0));
|
||||
|
||||
if constexpr (is_arithmetic_v<NativeT> && !is_big_int_v<NativeT>)
|
||||
{
|
||||
@ -183,8 +183,7 @@ void ColumnDecimal<T>::getPermutation(IColumn::PermutationSortDirection directio
|
||||
/// Thresholds on size. Lower threshold is arbitrary. Upper threshold is chosen by the type for histogram counters.
|
||||
if (data_size >= 256 && data_size <= std::numeric_limits<UInt32>::max() && use_radix_sort)
|
||||
{
|
||||
for (size_t i = 0; i < data_size; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), data_size, IColumn::Permutation::value_type(0));
|
||||
|
||||
bool try_sort = false;
|
||||
|
||||
|
@ -5,6 +5,7 @@
|
||||
#include <Core/ColumnsWithTypeAndName.h>
|
||||
#include <Columns/IColumn.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
{
|
||||
namespace ErrorCodes
|
||||
@ -16,7 +17,7 @@ class IFunctionBase;
|
||||
using FunctionBasePtr = std::shared_ptr<const IFunctionBase>;
|
||||
|
||||
/** A column containing a lambda expression.
|
||||
* Behaves like a constant-column. Contains an expression, but not input or output data.
|
||||
* Contains an expression and captured columns, but not input arguments.
|
||||
*/
|
||||
class ColumnFunction final : public COWHelper<IColumn, ColumnFunction>
|
||||
{
|
||||
@ -207,8 +208,6 @@ private:
|
||||
bool is_function_compiled;
|
||||
|
||||
void appendArgument(const ColumnWithTypeAndName & column);
|
||||
|
||||
void addOffsetsForReplication(const IColumn::Offsets & offsets);
|
||||
};
|
||||
|
||||
const ColumnFunction * checkAndGetShortCircuitArgument(const ColumnPtr & column);
|
||||
|
@ -2,6 +2,7 @@
|
||||
#include <Columns/ColumnObject.h>
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
#include <Columns/ColumnArray.h>
|
||||
#include <Common/iota.h>
|
||||
#include <DataTypes/ObjectUtils.h>
|
||||
#include <DataTypes/getLeastSupertype.h>
|
||||
#include <DataTypes/DataTypeNothing.h>
|
||||
@ -838,7 +839,7 @@ MutableColumnPtr ColumnObject::cloneResized(size_t new_size) const
|
||||
void ColumnObject::getPermutation(PermutationSortDirection, PermutationSortStability, size_t, int, Permutation & res) const
|
||||
{
|
||||
res.resize(num_rows);
|
||||
std::iota(res.begin(), res.end(), 0);
|
||||
iota(res.data(), res.size(), size_t(0));
|
||||
}
|
||||
|
||||
void ColumnObject::compareColumn(const IColumn & rhs, size_t rhs_row_num,
|
||||
|
@ -1,11 +1,12 @@
|
||||
#include <Columns/ColumnSparse.h>
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Columns/ColumnCompressed.h>
|
||||
#include <Columns/ColumnSparse.h>
|
||||
#include <Columns/ColumnTuple.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/SipHash.h>
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Processors/Transforms/ColumnGathererTransform.h>
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Common/SipHash.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
#include <algorithm>
|
||||
#include <bit>
|
||||
@ -499,8 +500,7 @@ void ColumnSparse::getPermutationImpl(IColumn::PermutationSortDirection directio
|
||||
res.resize(_size);
|
||||
if (offsets->empty())
|
||||
{
|
||||
for (size_t i = 0; i < _size; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), _size, IColumn::Permutation::value_type(0));
|
||||
return;
|
||||
}
|
||||
|
||||
|
@ -1,16 +1,17 @@
|
||||
#include <Columns/ColumnTuple.h>
|
||||
|
||||
#include <base/sort.h>
|
||||
#include <Columns/IColumnImpl.h>
|
||||
#include <Columns/ColumnCompressed.h>
|
||||
#include <Columns/IColumnImpl.h>
|
||||
#include <Core/Field.h>
|
||||
#include <Processors/Transforms/ColumnGathererTransform.h>
|
||||
#include <DataTypes/Serializations/SerializationInfoTuple.h>
|
||||
#include <IO/Operators.h>
|
||||
#include <IO/WriteBufferFromString.h>
|
||||
#include <Processors/Transforms/ColumnGathererTransform.h>
|
||||
#include <base/sort.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <Common/iota.h>
|
||||
#include <Common/typeid_cast.h>
|
||||
#include <DataTypes/Serializations/SerializationInfoTuple.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -378,8 +379,7 @@ void ColumnTuple::getPermutationImpl(IColumn::PermutationSortDirection direction
|
||||
{
|
||||
size_t rows = size();
|
||||
res.resize(rows);
|
||||
for (size_t i = 0; i < rows; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), rows, IColumn::Permutation::value_type(0));
|
||||
|
||||
if (limit >= rows)
|
||||
limit = 0;
|
||||
|
@ -1,24 +1,25 @@
|
||||
#include "ColumnVector.h"
|
||||
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Columns/ColumnCompressed.h>
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Columns/MaskOperations.h>
|
||||
#include <Columns/RadixSortHelper.h>
|
||||
#include <Processors/Transforms/ColumnGathererTransform.h>
|
||||
#include <IO/WriteHelpers.h>
|
||||
#include <Processors/Transforms/ColumnGathererTransform.h>
|
||||
#include <base/bit_cast.h>
|
||||
#include <base/scope_guard.h>
|
||||
#include <base/sort.h>
|
||||
#include <base/unaligned.h>
|
||||
#include <Common/Arena.h>
|
||||
#include <Common/Exception.h>
|
||||
#include <Common/HashTable/Hash.h>
|
||||
#include <Common/NaNUtils.h>
|
||||
#include <Common/RadixSort.h>
|
||||
#include <Common/SipHash.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/TargetSpecific.h>
|
||||
#include <Common/WeakHash.h>
|
||||
#include <Common/assert_cast.h>
|
||||
#include <base/sort.h>
|
||||
#include <base/unaligned.h>
|
||||
#include <base/bit_cast.h>
|
||||
#include <base/scope_guard.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
#include <bit>
|
||||
#include <cmath>
|
||||
@ -244,8 +245,7 @@ void ColumnVector<T>::getPermutation(IColumn::PermutationSortDirection direction
|
||||
if (limit >= data_size)
|
||||
limit = 0;
|
||||
|
||||
for (size_t i = 0; i < data_size; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), data_size, IColumn::Permutation::value_type(0));
|
||||
|
||||
if constexpr (is_arithmetic_v<T> && !is_big_int_v<T>)
|
||||
{
|
||||
|
@ -1,7 +1,8 @@
|
||||
#include <Common/Arena.h>
|
||||
#include <Core/Field.h>
|
||||
#include <Columns/IColumnDummy.h>
|
||||
#include <Columns/ColumnsCommon.h>
|
||||
#include <Columns/IColumnDummy.h>
|
||||
#include <Core/Field.h>
|
||||
#include <Common/Arena.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -87,8 +88,7 @@ void IColumnDummy::getPermutation(IColumn::PermutationSortDirection /*direction*
|
||||
size_t /*limit*/, int /*nan_direction_hint*/, Permutation & res) const
|
||||
{
|
||||
res.resize(s);
|
||||
for (size_t i = 0; i < s; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), s, IColumn::Permutation::value_type(0));
|
||||
}
|
||||
|
||||
ColumnPtr IColumnDummy::replicate(const Offsets & offsets) const
|
||||
|
@ -6,10 +6,11 @@
|
||||
* implementation.
|
||||
*/
|
||||
|
||||
#include <Columns/IColumn.h>
|
||||
#include <Common/PODArray.h>
|
||||
#include <base/sort.h>
|
||||
#include <algorithm>
|
||||
#include <Columns/IColumn.h>
|
||||
#include <base/sort.h>
|
||||
#include <Common/PODArray.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
|
||||
namespace DB
|
||||
@ -299,8 +300,7 @@ void IColumn::getPermutationImpl(
|
||||
if (limit >= data_size)
|
||||
limit = 0;
|
||||
|
||||
for (size_t i = 0; i < data_size; ++i)
|
||||
res[i] = i;
|
||||
iota(res.data(), data_size, Permutation::value_type(0));
|
||||
|
||||
if (limit)
|
||||
{
|
||||
|
@ -1,6 +1,7 @@
|
||||
#include <Columns/ColumnSparse.h>
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
|
||||
#include <Common/iota.h>
|
||||
#include <Common/randomSeed.h>
|
||||
#include <pcg_random.hpp>
|
||||
#include <gtest/gtest.h>
|
||||
@ -191,7 +192,7 @@ TEST(ColumnSparse, Permute)
|
||||
auto [sparse_src, full_src] = createColumns(n, k);
|
||||
|
||||
IColumn::Permutation perm(n);
|
||||
std::iota(perm.begin(), perm.end(), 0);
|
||||
iota(perm.data(), perm.size(), size_t(0));
|
||||
std::shuffle(perm.begin(), perm.end(), rng);
|
||||
|
||||
auto sparse_dst = sparse_src->permute(perm, limit);
|
||||
|
@ -9,7 +9,6 @@
|
||||
#include <Columns/ColumnUnique.h>
|
||||
#include <Columns/ColumnVector.h>
|
||||
#include <Columns/ColumnsNumber.h>
|
||||
|
||||
#include <DataTypes/DataTypeArray.h>
|
||||
#include <DataTypes/DataTypeLowCardinality.h>
|
||||
#include <DataTypes/DataTypeMap.h>
|
||||
@ -17,6 +16,7 @@
|
||||
#include <DataTypes/DataTypeString.h>
|
||||
#include <DataTypes/DataTypeTuple.h>
|
||||
#include <DataTypes/DataTypesNumber.h>
|
||||
#include <Common/iota.h>
|
||||
|
||||
|
||||
using namespace DB;
|
||||
@ -32,8 +32,7 @@ void stableGetColumnPermutation(
|
||||
|
||||
size_t size = column.size();
|
||||
out_permutation.resize(size);
|
||||
for (size_t i = 0; i < size; ++i)
|
||||
out_permutation[i] = i;
|
||||
iota(out_permutation.data(), size, IColumn::Permutation::value_type(0));
|
||||
|
||||
std::stable_sort(
|
||||
out_permutation.begin(),
|
||||
@ -146,10 +145,7 @@ void assertColumnPermutations(ColumnCreateFunc column_create_func, ValueTransfor
|
||||
|
||||
std::vector<std::vector<Field>> ranges(ranges_size);
|
||||
std::vector<size_t> ranges_permutations(ranges_size);
|
||||
for (size_t i = 0; i < ranges_size; ++i)
|
||||
{
|
||||
ranges_permutations[i] = i;
|
||||
}
|
||||
iota(ranges_permutations.data(), ranges_size, IColumn::Permutation::value_type(0));
|
||||
|
||||
IColumn::Permutation actual_permutation;
|
||||
IColumn::Permutation expected_permutation;
|
||||
|
@ -43,6 +43,19 @@ void logAboutProgress(Poco::Logger * log, size_t processed, size_t total, Atomic
|
||||
}
|
||||
}
|
||||
|
||||
void cancelOnDependencyFailure(const LoadJobPtr & self, const LoadJobPtr & dependency, std::exception_ptr & cancel)
|
||||
{
|
||||
cancel = std::make_exception_ptr(Exception(ErrorCodes::ASYNC_LOAD_CANCELED,
|
||||
"Load job '{}' -> {}",
|
||||
self->name,
|
||||
getExceptionMessage(dependency->exception(), /* with_stacktrace = */ false)));
|
||||
}
|
||||
|
||||
void ignoreDependencyFailure(const LoadJobPtr &, const LoadJobPtr &, std::exception_ptr &)
|
||||
{
|
||||
// No-op
|
||||
}
|
||||
|
||||
LoadStatus LoadJob::status() const
|
||||
{
|
||||
std::unique_lock lock{mutex};
|
||||
@ -96,7 +109,10 @@ size_t LoadJob::canceled(const std::exception_ptr & ptr)
|
||||
|
||||
size_t LoadJob::finish()
|
||||
{
|
||||
func = {}; // To ensure job function is destructed before `AsyncLoader::wait()` return
|
||||
// To ensure functions are destructed before `AsyncLoader::wait()` return
|
||||
func = {};
|
||||
dependency_failure = {};
|
||||
|
||||
finish_time = std::chrono::system_clock::now();
|
||||
if (waiters > 0)
|
||||
finished.notify_all();
|
||||
@ -327,17 +343,19 @@ void AsyncLoader::schedule(const LoadJobSet & jobs_to_schedule)
|
||||
|
||||
if (dep_status == LoadStatus::FAILED || dep_status == LoadStatus::CANCELED)
|
||||
{
|
||||
// Dependency on already failed or canceled job -- it's okay. Cancel all dependent jobs.
|
||||
std::exception_ptr e;
|
||||
// Dependency on already failed or canceled job -- it's okay.
|
||||
// Process as usual (may lead to cancel of all dependent jobs).
|
||||
std::exception_ptr cancel;
|
||||
NOEXCEPT_SCOPE({
|
||||
ALLOW_ALLOCATIONS_IN_SCOPE;
|
||||
e = std::make_exception_ptr(Exception(ErrorCodes::ASYNC_LOAD_CANCELED,
|
||||
"Load job '{}' -> {}",
|
||||
job->name,
|
||||
getExceptionMessage(dep->exception(), /* with_stacktrace = */ false)));
|
||||
if (job->dependency_failure)
|
||||
job->dependency_failure(job, dep, cancel);
|
||||
});
|
||||
finish(job, LoadStatus::CANCELED, e, lock);
|
||||
break; // This job is now finished, stop its dependencies processing
|
||||
if (cancel)
|
||||
{
|
||||
finish(job, LoadStatus::CANCELED, cancel, lock);
|
||||
break; // This job is now finished, stop its dependencies processing
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
@ -515,63 +533,76 @@ String AsyncLoader::checkCycle(const LoadJobPtr & job, LoadJobSet & left, LoadJo
|
||||
return {};
|
||||
}
|
||||
|
||||
void AsyncLoader::finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job, std::unique_lock<std::mutex> & lock)
|
||||
void AsyncLoader::finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr reason, std::unique_lock<std::mutex> & lock)
|
||||
{
|
||||
chassert(scheduled_jobs.contains(job)); // Job was pending
|
||||
|
||||
// Notify waiters
|
||||
size_t resumed_workers = 0; // Number of workers resumed in the execution pool of the job
|
||||
if (status == LoadStatus::OK)
|
||||
{
|
||||
// Notify waiters
|
||||
resumed_workers += job->ok();
|
||||
resumed_workers = job->ok();
|
||||
else if (status == LoadStatus::FAILED)
|
||||
resumed_workers = job->failed(reason);
|
||||
else if (status == LoadStatus::CANCELED)
|
||||
resumed_workers = job->canceled(reason);
|
||||
|
||||
// Update dependent jobs and enqueue if ready
|
||||
for (const auto & dep : scheduled_jobs[job].dependent_jobs)
|
||||
// Adjust suspended workers count
|
||||
if (resumed_workers)
|
||||
{
|
||||
Pool & pool = pools[job->executionPool()];
|
||||
pool.suspended_workers -= resumed_workers;
|
||||
}
|
||||
|
||||
Info & info = scheduled_jobs[job];
|
||||
if (info.isReady())
|
||||
{
|
||||
// Job could be in ready queue (on cancel) -- must be dequeued
|
||||
pools[job->pool_id].ready_queue.erase(info.ready_seqno);
|
||||
info.ready_seqno = 0;
|
||||
}
|
||||
|
||||
// To avoid container modification during recursion (during clean dependency graph edges below)
|
||||
LoadJobSet dependent;
|
||||
dependent.swap(info.dependent_jobs);
|
||||
|
||||
// Update dependent jobs
|
||||
for (const auto & dpt : dependent)
|
||||
{
|
||||
if (auto dpt_info = scheduled_jobs.find(dpt); dpt_info != scheduled_jobs.end())
|
||||
{
|
||||
chassert(scheduled_jobs.contains(dep)); // All depended jobs must be pending
|
||||
Info & dep_info = scheduled_jobs[dep];
|
||||
dep_info.dependencies_left--;
|
||||
if (!dep_info.isBlocked())
|
||||
enqueue(dep_info, dep, lock);
|
||||
dpt_info->second.dependencies_left--;
|
||||
if (!dpt_info->second.isBlocked())
|
||||
enqueue(dpt_info->second, dpt, lock);
|
||||
|
||||
if (status != LoadStatus::OK)
|
||||
{
|
||||
std::exception_ptr cancel;
|
||||
NOEXCEPT_SCOPE({
|
||||
ALLOW_ALLOCATIONS_IN_SCOPE;
|
||||
if (dpt->dependency_failure)
|
||||
dpt->dependency_failure(dpt, job, cancel);
|
||||
});
|
||||
// Recurse into dependent job if it should be canceled
|
||||
if (cancel)
|
||||
finish(dpt, LoadStatus::CANCELED, cancel, lock);
|
||||
}
|
||||
}
|
||||
else
|
||||
{
|
||||
// Job has already been canceled. Do not enter twice into the same job during finish recursion.
|
||||
// This happens in {A<-B; A<-C; B<-D; C<-D} graph for D if A is failed or canceled.
|
||||
chassert(status == LoadStatus::CANCELED);
|
||||
}
|
||||
}
|
||||
else
|
||||
|
||||
// Clean dependency graph edges pointing to canceled jobs
|
||||
if (status != LoadStatus::OK)
|
||||
{
|
||||
// Notify waiters
|
||||
if (status == LoadStatus::FAILED)
|
||||
resumed_workers += job->failed(exception_from_job);
|
||||
else if (status == LoadStatus::CANCELED)
|
||||
resumed_workers += job->canceled(exception_from_job);
|
||||
|
||||
Info & info = scheduled_jobs[job];
|
||||
if (info.isReady())
|
||||
{
|
||||
pools[job->pool_id].ready_queue.erase(info.ready_seqno);
|
||||
info.ready_seqno = 0;
|
||||
}
|
||||
|
||||
// Recurse into all dependent jobs
|
||||
LoadJobSet dependent;
|
||||
dependent.swap(info.dependent_jobs); // To avoid container modification during recursion
|
||||
for (const auto & dep : dependent)
|
||||
{
|
||||
if (!scheduled_jobs.contains(dep))
|
||||
continue; // Job has already been canceled
|
||||
std::exception_ptr e;
|
||||
NOEXCEPT_SCOPE({
|
||||
ALLOW_ALLOCATIONS_IN_SCOPE;
|
||||
e = std::make_exception_ptr(
|
||||
Exception(ErrorCodes::ASYNC_LOAD_CANCELED,
|
||||
"Load job '{}' -> {}",
|
||||
dep->name,
|
||||
getExceptionMessage(exception_from_job, /* with_stacktrace = */ false)));
|
||||
});
|
||||
finish(dep, LoadStatus::CANCELED, e, lock);
|
||||
}
|
||||
|
||||
// Clean dependency graph edges pointing to canceled jobs
|
||||
for (const auto & dep : job->dependencies)
|
||||
{
|
||||
if (auto dep_info = scheduled_jobs.find(dep); dep_info != scheduled_jobs.end())
|
||||
dep_info->second.dependent_jobs.erase(job);
|
||||
}
|
||||
}
|
||||
|
||||
// Job became finished
|
||||
@ -582,12 +613,6 @@ void AsyncLoader::finish(const LoadJobPtr & job, LoadStatus status, std::excepti
|
||||
if (log_progress)
|
||||
logAboutProgress(log, finished_jobs.size() - old_jobs, finished_jobs.size() + scheduled_jobs.size() - old_jobs, stopwatch);
|
||||
});
|
||||
|
||||
if (resumed_workers)
|
||||
{
|
||||
Pool & pool = pools[job->executionPool()];
|
||||
pool.suspended_workers -= resumed_workers;
|
||||
}
|
||||
}
|
||||
|
||||
void AsyncLoader::prioritize(const LoadJobPtr & job, size_t new_pool_id, std::unique_lock<std::mutex> & lock)
|
||||
@ -612,6 +637,9 @@ void AsyncLoader::prioritize(const LoadJobPtr & job, size_t new_pool_id, std::un
|
||||
}
|
||||
|
||||
job->pool_id.store(new_pool_id);
|
||||
// TODO(serxa): we should adjust suspended_workers and suspended_waiters here.
|
||||
// Otherwise suspended_workers will be left inconsistent. Fix it and add a test.
|
||||
// Scenario: schedule a job A, wait for it from a job B in the same pool, prioritize A
|
||||
|
||||
// Recurse into dependencies
|
||||
for (const auto & dep : job->dependencies)
|
||||
|
@ -1,6 +1,7 @@
|
||||
#pragma once
|
||||
|
||||
#include <condition_variable>
|
||||
#include <concepts>
|
||||
#include <exception>
|
||||
#include <memory>
|
||||
#include <map>
|
||||
@ -57,12 +58,13 @@ enum class LoadStatus
|
||||
class LoadJob : private boost::noncopyable
|
||||
{
|
||||
public:
|
||||
template <class Func, class LoadJobSetType>
|
||||
LoadJob(LoadJobSetType && dependencies_, String name_, size_t pool_id_, Func && func_)
|
||||
template <class LoadJobSetType, class Func, class DFFunc>
|
||||
LoadJob(LoadJobSetType && dependencies_, String name_, size_t pool_id_, DFFunc && dependency_failure_, Func && func_)
|
||||
: dependencies(std::forward<LoadJobSetType>(dependencies_))
|
||||
, name(std::move(name_))
|
||||
, execution_pool_id(pool_id_)
|
||||
, pool_id(pool_id_)
|
||||
, dependency_failure(std::forward<DFFunc>(dependency_failure_))
|
||||
, func(std::forward<Func>(func_))
|
||||
{}
|
||||
|
||||
@ -108,6 +110,14 @@ private:
|
||||
std::atomic<UInt64> job_id{0};
|
||||
std::atomic<size_t> execution_pool_id;
|
||||
std::atomic<size_t> pool_id;
|
||||
|
||||
// Handler for failed or canceled dependencies.
|
||||
// If job needs to be canceled on `dependency` failure, then function should set `cancel` to a specific reason.
|
||||
// Note that implementation should be fast and cannot use AsyncLoader, because it is called under `AsyncLoader::mutex`.
|
||||
// Note that `dependency_failure` is called only on pending jobs.
|
||||
std::function<void(const LoadJobPtr & self, const LoadJobPtr & dependency, std::exception_ptr & cancel)> dependency_failure;
|
||||
|
||||
// Function to be called to execute the job.
|
||||
std::function<void(AsyncLoader & loader, const LoadJobPtr & self)> func;
|
||||
|
||||
mutable std::mutex mutex;
|
||||
@ -123,35 +133,54 @@ private:
|
||||
std::atomic<TimePoint> finish_time{TimePoint{}};
|
||||
};
|
||||
|
||||
struct EmptyJobFunc
|
||||
{
|
||||
void operator()(AsyncLoader &, const LoadJobPtr &) {}
|
||||
};
|
||||
// For LoadJob::dependency_failure. Cancels the job on the first dependency failure or cancel.
|
||||
void cancelOnDependencyFailure(const LoadJobPtr & self, const LoadJobPtr & dependency, std::exception_ptr & cancel);
|
||||
|
||||
template <class Func = EmptyJobFunc>
|
||||
LoadJobPtr makeLoadJob(LoadJobSet && dependencies, String name, Func && func = EmptyJobFunc())
|
||||
// For LoadJob::dependency_failure. Never cancels the job due to dependency failure or cancel.
|
||||
void ignoreDependencyFailure(const LoadJobPtr & self, const LoadJobPtr & dependency, std::exception_ptr & cancel);
|
||||
|
||||
template <class F> concept LoadJobDependencyFailure = std::invocable<F, const LoadJobPtr &, const LoadJobPtr &, std::exception_ptr &>;
|
||||
template <class F> concept LoadJobFunc = std::invocable<F, AsyncLoader &, const LoadJobPtr &>;
|
||||
|
||||
LoadJobPtr makeLoadJob(LoadJobSet && dependencies, String name, LoadJobDependencyFailure auto && dependency_failure, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), 0, std::forward<Func>(func));
|
||||
return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), 0, std::forward<decltype(dependency_failure)>(dependency_failure), std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
||||
template <class Func = EmptyJobFunc>
|
||||
LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, String name, Func && func = EmptyJobFunc())
|
||||
LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, String name, LoadJobDependencyFailure auto && dependency_failure, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(dependencies, std::move(name), 0, std::forward<Func>(func));
|
||||
return std::make_shared<LoadJob>(dependencies, std::move(name), 0, std::forward<decltype(dependency_failure)>(dependency_failure), std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
||||
template <class Func = EmptyJobFunc>
|
||||
LoadJobPtr makeLoadJob(LoadJobSet && dependencies, size_t pool_id, String name, Func && func = EmptyJobFunc())
|
||||
LoadJobPtr makeLoadJob(LoadJobSet && dependencies, size_t pool_id, String name, LoadJobDependencyFailure auto && dependency_failure, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), pool_id, std::forward<Func>(func));
|
||||
return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), pool_id, std::forward<decltype(dependency_failure)>(dependency_failure), std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
||||
template <class Func = EmptyJobFunc>
|
||||
LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, size_t pool_id, String name, Func && func = EmptyJobFunc())
|
||||
LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, size_t pool_id, String name, LoadJobDependencyFailure auto && dependency_failure, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(dependencies, std::move(name), pool_id, std::forward<Func>(func));
|
||||
return std::make_shared<LoadJob>(dependencies, std::move(name), pool_id, std::forward<decltype(dependency_failure)>(dependency_failure), std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
||||
LoadJobPtr makeLoadJob(LoadJobSet && dependencies, String name, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), 0, cancelOnDependencyFailure, std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
||||
LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, String name, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(dependencies, std::move(name), 0, cancelOnDependencyFailure, std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
||||
LoadJobPtr makeLoadJob(LoadJobSet && dependencies, size_t pool_id, String name, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(std::move(dependencies), std::move(name), pool_id, cancelOnDependencyFailure, std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
||||
LoadJobPtr makeLoadJob(const LoadJobSet & dependencies, size_t pool_id, String name, LoadJobFunc auto && func)
|
||||
{
|
||||
return std::make_shared<LoadJob>(dependencies, std::move(name), pool_id, cancelOnDependencyFailure, std::forward<decltype(func)>(func));
|
||||
}
|
||||
|
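
A usage sketch for the two families of overloads above (illustrative only; the job names and bodies are made up):

/// Default policy: constructed with cancelOnDependencyFailure, so if `parent` fails or is
/// canceled, `child` is canceled with a "Load job '...' -> ..." exception.
auto parent = makeLoadJob({}, "load metadata", [] (AsyncLoader &, const LoadJobPtr &) { /* ... */ });
auto child  = makeLoadJob({parent}, "load data", [] (AsyncLoader &, const LoadJobPtr &) { /* ... */ });

/// Explicit policy: pass ignoreDependencyFailure to keep the job scheduled even when a
/// dependency fails; it only waits for the dependency to finish in any state.
auto best_effort = makeLoadJob({parent}, "warm cache", ignoreDependencyFailure,
    [] (AsyncLoader &, const LoadJobPtr &) { /* ... */ });
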
||||
// Represents a logically connected set of LoadJobs required to achieve some goals (final LoadJob in the set).
|
||||
class LoadTask : private boost::noncopyable
|
||||
@ -277,7 +306,7 @@ private:
|
||||
{
|
||||
size_t dependencies_left = 0; // Current number of dependencies on pending jobs.
|
||||
UInt64 ready_seqno = 0; // Zero means that job is not in ready queue.
|
||||
LoadJobSet dependent_jobs; // Set of jobs dependent on this job.
|
||||
LoadJobSet dependent_jobs; // Set of jobs dependent on this job. Contains only scheduled jobs.
|
||||
|
||||
// Three independent states of a scheduled job.
|
||||
bool isBlocked() const { return dependencies_left > 0; }
|
||||
@ -371,7 +400,7 @@ public:
|
||||
private:
|
||||
void checkCycle(const LoadJobSet & jobs, std::unique_lock<std::mutex> & lock);
|
||||
String checkCycle(const LoadJobPtr & job, LoadJobSet & left, LoadJobSet & visited, std::unique_lock<std::mutex> & lock);
|
||||
void finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr exception_from_job, std::unique_lock<std::mutex> & lock);
|
||||
void finish(const LoadJobPtr & job, LoadStatus status, std::exception_ptr reason, std::unique_lock<std::mutex> & lock);
|
||||
void gatherNotScheduled(const LoadJobPtr & job, LoadJobSet & jobs, std::unique_lock<std::mutex> & lock);
|
||||
void prioritize(const LoadJobPtr & job, size_t new_pool_id, std::unique_lock<std::mutex> & lock);
|
||||
void enqueue(Info & info, const LoadJobPtr & job, std::unique_lock<std::mutex> & lock);
|
||||
|
@ -1,10 +1,11 @@
|
||||
#pragma once
|
||||
|
||||
#include <list>
|
||||
#include <memory>
|
||||
#include <mutex>
|
||||
#include <optional>
|
||||
#include <base/types.h>
|
||||
#include <boost/core/noncopyable.hpp>
|
||||
#include <mutex>
|
||||
#include <memory>
|
||||
#include <list>
|
||||
|
||||
|
||||
namespace DB
|
||||
|
@@ -88,7 +88,7 @@ public:
{
/// A more understandable error message.
if (e.code() == DB::ErrorCodes::CANNOT_READ_ALL_DATA || e.code() == DB::ErrorCodes::ATTEMPT_TO_READ_AFTER_EOF)
throw DB::ParsingException(e.code(), "File {} is empty. You must fill it manually with appropriate value.", path);
throw DB::Exception(e.code(), "File {} is empty. You must fill it manually with appropriate value.", path);
else
throw;
}
|