commit c3c1ed01d9
Merge branch 'master' into bg-pool-catch
.github/PULL_REQUEST_TEMPLATE.md | 19
@@ -18,5 +18,24 @@ tests/ci/run_check.py
### Changelog entry (a user-readable short description of the changes that goes to CHANGELOG.md):

...

+### Documentation entry for user-facing changes
+
+- [ ] Documentation is written (mandatory for new features)
+
+<!---
+Directly edit documentation source files in the "docs" folder with the same pull-request as code changes
+
+or
+
+Add a user-readable short description of the changes that should be added to docs.clickhouse.com below.
+
+At a minimum, the following information should be added (but add more as needed).
+- Motivation: Why is this function, table engine, etc. useful to ClickHouse users?
+
+- Parameters: If the feature being added takes arguments, options or is influenced by settings, please list them below with a brief explanation.
+
+- Example use: A query or command.
+-->
+
> Information about CI checks: https://clickhouse.com/docs/en/development/continuous-integration/
.github/workflows/backport_branches.yml | 1
@@ -683,3 +683,4 @@ jobs:
      run: |
        cd "$GITHUB_WORKSPACE/tests/ci"
        python3 finish_check.py
+       python3 merge_pr.py
.github/workflows/debug.yml | 2
@@ -8,4 +8,4 @@ jobs:
  DebugInfo:
    runs-on: ubuntu-latest
    steps:
-     - uses: hmarr/debug-action@1201a20fc9d278ddddd5f0f46922d06513892491
+     - uses: hmarr/debug-action@a701ed95a46e6f2fb0df25e1a558c16356fae35a
.github/workflows/docs_check.yml | 1
@@ -169,3 +169,4 @@ jobs:
      run: |
        cd "$GITHUB_WORKSPACE/tests/ci"
        python3 finish_check.py
+       python3 merge_pr.py --check-approved
.github/workflows/nightly.yml | 2
@@ -107,7 +107,7 @@ jobs:
      run: |
        curl --form token="${COVERITY_TOKEN}" \
          --form email='security+coverity@clickhouse.com' \
-         --form file="@$TEMP_PATH/$BUILD_NAME/coverity-scan.tgz" \
+         --form file="@$TEMP_PATH/$BUILD_NAME/coverity-scan.tar.zst" \
          --form version="${GITHUB_REF#refs/heads/}-${GITHUB_SHA::6}" \
          --form description="Nighly Scan: $(date +'%Y-%m-%dT%H:%M:%S')" \
          https://scan.coverity.com/builds?project=ClickHouse%2FClickHouse
.github/workflows/pull_request.yml | 1
@@ -4388,3 +4388,4 @@ jobs:
      run: |
        cd "$GITHUB_WORKSPACE/tests/ci"
        python3 finish_check.py
+       python3 merge_pr.py --check-approved
.github/workflows/release.yml | 34
@@ -12,38 +12,10 @@ jobs:
  ReleasePublish:
    runs-on: [self-hosted, style-checker]
    steps:
-     - name: Set envs
+     - name: Deploy packages and assets
      run: |
-       cat >> "$GITHUB_ENV" << 'EOF'
-       JFROG_API_KEY=${{ secrets.JFROG_ARTIFACTORY_API_KEY }}
-       TEMP_PATH=${{runner.temp}}/release_packages
-       REPO_COPY=${{runner.temp}}/release_packages/ClickHouse
-       EOF
-     - name: Check out repository code
-       uses: ClickHouse/checkout@v1
-       with:
-         # Always use the most recent script version
-         ref: master
-     - name: Download packages and push to Artifactory
-       run: |
-         rm -rf "$TEMP_PATH" && mkdir -p "$TEMP_PATH"
-         cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
-         cd "$REPO_COPY"
-         # Download and push packages to artifactory
-         python3 ./tests/ci/push_to_artifactory.py --release '${{ github.ref }}' \
-           --commit '${{ github.sha }}' --artifactory-url '${{ secrets.JFROG_ARTIFACTORY_URL }}' --all
-         # Download macos binaries to ${{runner.temp}}/download_binary
-         python3 ./tests/ci/download_binary.py --version '${{ github.ref }}' \
-           --commit '${{ github.sha }}' binary_darwin binary_darwin_aarch64
-         mv '${{runner.temp}}/download_binary/'clickhouse-* '${{runner.temp}}/push_to_artifactory'
-     - name: Upload packages to release assets
-       uses: svenstaro/upload-release-action@v2
-       with:
-         repo_token: ${{ secrets.GITHUB_TOKEN }}
-         file: ${{runner.temp}}/push_to_artifactory/*
-         overwrite: true
-         tag: ${{ github.ref }}
-         file_glob: true
+       GITHUB_TAG="${GITHUB_REF#refs/tags/}"
+       curl '${{ secrets.PACKAGES_RELEASE_URL }}/release/'"${GITHUB_TAG}"'?binary=binary_darwin&binary=binary_darwin_aarch64&sync=true' -d ''
  ############################################################################################
  ##################################### Docker images #######################################
  ############################################################################################
.gitignore | 2
@@ -154,6 +154,8 @@ website/package-lock.json
/programs/server/data
/programs/server/metadata
/programs/server/store
/programs/server/uuid
/programs/server/coordination

+# temporary test files
+tests/queries/0_stateless/test_*
.gitmodules | 176
@@ -1,88 +1,88 @@
[submodule "contrib/poco"]
    path = contrib/poco
-   url = https://github.com/ClickHouse/poco.git
+   url = https://github.com/ClickHouse/poco
    branch = clickhouse
[submodule "contrib/zstd"]
    path = contrib/zstd
-   url = https://github.com/facebook/zstd.git
+   url = https://github.com/facebook/zstd
[submodule "contrib/lz4"]
    path = contrib/lz4
-   url = https://github.com/lz4/lz4.git
+   url = https://github.com/lz4/lz4
[submodule "contrib/librdkafka"]
    path = contrib/librdkafka
-   url = https://github.com/ClickHouse/librdkafka.git
+   url = https://github.com/ClickHouse/librdkafka
[submodule "contrib/cctz"]
    path = contrib/cctz
-   url = https://github.com/ClickHouse/cctz.git
+   url = https://github.com/ClickHouse/cctz
[submodule "contrib/zlib-ng"]
    path = contrib/zlib-ng
-   url = https://github.com/ClickHouse/zlib-ng.git
+   url = https://github.com/ClickHouse/zlib-ng
    branch = clickhouse-2.0.x
[submodule "contrib/googletest"]
    path = contrib/googletest
-   url = https://github.com/google/googletest.git
+   url = https://github.com/google/googletest
[submodule "contrib/capnproto"]
    path = contrib/capnproto
-   url = https://github.com/capnproto/capnproto.git
+   url = https://github.com/capnproto/capnproto
[submodule "contrib/double-conversion"]
    path = contrib/double-conversion
-   url = https://github.com/google/double-conversion.git
+   url = https://github.com/google/double-conversion
[submodule "contrib/re2"]
    path = contrib/re2
-   url = https://github.com/google/re2.git
+   url = https://github.com/google/re2
[submodule "contrib/mariadb-connector-c"]
    path = contrib/mariadb-connector-c
-   url = https://github.com/ClickHouse/mariadb-connector-c.git
+   url = https://github.com/ClickHouse/mariadb-connector-c
[submodule "contrib/jemalloc"]
    path = contrib/jemalloc
-   url = https://github.com/jemalloc/jemalloc.git
+   url = https://github.com/jemalloc/jemalloc
[submodule "contrib/unixodbc"]
    path = contrib/unixodbc
-   url = https://github.com/ClickHouse/UnixODBC.git
+   url = https://github.com/ClickHouse/UnixODBC
[submodule "contrib/protobuf"]
    path = contrib/protobuf
-   url = https://github.com/ClickHouse/protobuf.git
+   url = https://github.com/ClickHouse/protobuf
    branch = v3.13.0.1
[submodule "contrib/boost"]
    path = contrib/boost
-   url = https://github.com/ClickHouse/boost.git
+   url = https://github.com/ClickHouse/boost
[submodule "contrib/base64"]
    path = contrib/base64
-   url = https://github.com/ClickHouse/Turbo-Base64.git
+   url = https://github.com/ClickHouse/Turbo-Base64
[submodule "contrib/arrow"]
    path = contrib/arrow
-   url = https://github.com/ClickHouse/arrow.git
+   url = https://github.com/ClickHouse/arrow
    branch = blessed/release-6.0.1
[submodule "contrib/thrift"]
    path = contrib/thrift
-   url = https://github.com/apache/thrift.git
+   url = https://github.com/apache/thrift
[submodule "contrib/libhdfs3"]
    path = contrib/libhdfs3
-   url = https://github.com/ClickHouse/libhdfs3.git
+   url = https://github.com/ClickHouse/libhdfs3
[submodule "contrib/libxml2"]
    path = contrib/libxml2
-   url = https://github.com/GNOME/libxml2.git
+   url = https://github.com/GNOME/libxml2
[submodule "contrib/libgsasl"]
    path = contrib/libgsasl
-   url = https://github.com/ClickHouse/libgsasl.git
+   url = https://github.com/ClickHouse/libgsasl
[submodule "contrib/snappy"]
    path = contrib/snappy
-   url = https://github.com/ClickHouse/snappy.git
+   url = https://github.com/ClickHouse/snappy
[submodule "contrib/cppkafka"]
    path = contrib/cppkafka
-   url = https://github.com/mfontanini/cppkafka.git
+   url = https://github.com/mfontanini/cppkafka
[submodule "contrib/brotli"]
    path = contrib/brotli
-   url = https://github.com/google/brotli.git
+   url = https://github.com/google/brotli
[submodule "contrib/h3"]
    path = contrib/h3
    url = https://github.com/ClickHouse/h3
[submodule "contrib/libunwind"]
    path = contrib/libunwind
-   url = https://github.com/ClickHouse/libunwind.git
+   url = https://github.com/ClickHouse/libunwind
[submodule "contrib/simdjson"]
    path = contrib/simdjson
-   url = https://github.com/simdjson/simdjson.git
+   url = https://github.com/simdjson/simdjson
[submodule "contrib/rapidjson"]
    path = contrib/rapidjson
    url = https://github.com/ClickHouse/rapidjson
@@ -94,68 +94,68 @@
    url = https://github.com/ClickHouse/orc
[submodule "contrib/sparsehash-c11"]
    path = contrib/sparsehash-c11
-   url = https://github.com/sparsehash/sparsehash-c11.git
+   url = https://github.com/sparsehash/sparsehash-c11
[submodule "contrib/grpc"]
    path = contrib/grpc
-   url = https://github.com/ClickHouse/grpc.git
+   url = https://github.com/ClickHouse/grpc
    branch = v1.33.2
[submodule "contrib/aws"]
    path = contrib/aws
-   url = https://github.com/ClickHouse/aws-sdk-cpp.git
+   url = https://github.com/ClickHouse/aws-sdk-cpp
[submodule "aws-c-event-stream"]
    path = contrib/aws-c-event-stream
-   url = https://github.com/awslabs/aws-c-event-stream.git
+   url = https://github.com/awslabs/aws-c-event-stream
[submodule "aws-c-common"]
    path = contrib/aws-c-common
-   url = https://github.com/ClickHouse/aws-c-common.git
+   url = https://github.com/ClickHouse/aws-c-common
[submodule "aws-checksums"]
    path = contrib/aws-checksums
-   url = https://github.com/awslabs/aws-checksums.git
+   url = https://github.com/awslabs/aws-checksums
[submodule "contrib/curl"]
    path = contrib/curl
-   url = https://github.com/curl/curl.git
+   url = https://github.com/curl/curl
[submodule "contrib/icudata"]
    path = contrib/icudata
-   url = https://github.com/ClickHouse/icudata.git
+   url = https://github.com/ClickHouse/icudata
[submodule "contrib/icu"]
    path = contrib/icu
-   url = https://github.com/unicode-org/icu.git
+   url = https://github.com/unicode-org/icu
[submodule "contrib/flatbuffers"]
    path = contrib/flatbuffers
-   url = https://github.com/ClickHouse/flatbuffers.git
+   url = https://github.com/ClickHouse/flatbuffers
[submodule "contrib/replxx"]
    path = contrib/replxx
-   url = https://github.com/ClickHouse/replxx.git
+   url = https://github.com/ClickHouse/replxx
[submodule "contrib/avro"]
    path = contrib/avro
-   url = https://github.com/ClickHouse/avro.git
+   url = https://github.com/ClickHouse/avro
    ignore = untracked
[submodule "contrib/msgpack-c"]
    path = contrib/msgpack-c
    url = https://github.com/msgpack/msgpack-c
[submodule "contrib/libcpuid"]
    path = contrib/libcpuid
-   url = https://github.com/ClickHouse/libcpuid.git
+   url = https://github.com/ClickHouse/libcpuid
[submodule "contrib/openldap"]
    path = contrib/openldap
-   url = https://github.com/ClickHouse/openldap.git
+   url = https://github.com/ClickHouse/openldap
[submodule "contrib/AMQP-CPP"]
    path = contrib/AMQP-CPP
-   url = https://github.com/ClickHouse/AMQP-CPP.git
+   url = https://github.com/ClickHouse/AMQP-CPP
[submodule "contrib/cassandra"]
    path = contrib/cassandra
-   url = https://github.com/ClickHouse/cpp-driver.git
+   url = https://github.com/ClickHouse/cpp-driver
    branch = clickhouse
[submodule "contrib/libuv"]
    path = contrib/libuv
-   url = https://github.com/ClickHouse/libuv.git
+   url = https://github.com/ClickHouse/libuv
    branch = clickhouse
[submodule "contrib/fmtlib"]
    path = contrib/fmtlib
-   url = https://github.com/fmtlib/fmt.git
+   url = https://github.com/fmtlib/fmt
[submodule "contrib/sentry-native"]
    path = contrib/sentry-native
-   url = https://github.com/ClickHouse/sentry-native.git
+   url = https://github.com/ClickHouse/sentry-native
[submodule "contrib/krb5"]
    path = contrib/krb5
    url = https://github.com/ClickHouse/krb5
@@ -172,17 +172,17 @@
    url = https://github.com/danlark1/miniselect
[submodule "contrib/rocksdb"]
    path = contrib/rocksdb
-   url = https://github.com/ClickHouse/rocksdb.git
+   url = https://github.com/ClickHouse/rocksdb
[submodule "contrib/xz"]
    path = contrib/xz
    url = https://github.com/xz-mirror/xz
[submodule "contrib/abseil-cpp"]
    path = contrib/abseil-cpp
-   url = https://github.com/abseil/abseil-cpp.git
+   url = https://github.com/abseil/abseil-cpp
    branch = lts_2021_11_02
[submodule "contrib/dragonbox"]
    path = contrib/dragonbox
-   url = https://github.com/ClickHouse/dragonbox.git
+   url = https://github.com/ClickHouse/dragonbox
[submodule "contrib/fast_float"]
    path = contrib/fast_float
    url = https://github.com/fastfloat/fast_float
@@ -191,44 +191,44 @@
    url = https://github.com/ClickHouse/libpq
[submodule "contrib/boringssl"]
    path = contrib/boringssl
-   url = https://github.com/ClickHouse/boringssl.git
+   url = https://github.com/ClickHouse/boringssl
    branch = unknown_branch_from_artur
[submodule "contrib/NuRaft"]
    path = contrib/NuRaft
-   url = https://github.com/ClickHouse/NuRaft.git
+   url = https://github.com/ClickHouse/NuRaft
[submodule "contrib/nanodbc"]
    path = contrib/nanodbc
-   url = https://github.com/ClickHouse/nanodbc.git
+   url = https://github.com/ClickHouse/nanodbc
[submodule "contrib/datasketches-cpp"]
    path = contrib/datasketches-cpp
-   url = https://github.com/ClickHouse/datasketches-cpp.git
+   url = https://github.com/ClickHouse/datasketches-cpp
[submodule "contrib/yaml-cpp"]
    path = contrib/yaml-cpp
-   url = https://github.com/ClickHouse/yaml-cpp.git
+   url = https://github.com/ClickHouse/yaml-cpp
[submodule "contrib/cld2"]
    path = contrib/cld2
-   url = https://github.com/ClickHouse/cld2.git
+   url = https://github.com/ClickHouse/cld2
[submodule "contrib/libstemmer_c"]
    path = contrib/libstemmer_c
-   url = https://github.com/ClickHouse/libstemmer_c.git
+   url = https://github.com/ClickHouse/libstemmer_c
[submodule "contrib/wordnet-blast"]
    path = contrib/wordnet-blast
-   url = https://github.com/ClickHouse/wordnet-blast.git
+   url = https://github.com/ClickHouse/wordnet-blast
[submodule "contrib/lemmagen-c"]
    path = contrib/lemmagen-c
-   url = https://github.com/ClickHouse/lemmagen-c.git
+   url = https://github.com/ClickHouse/lemmagen-c
[submodule "contrib/libpqxx"]
    path = contrib/libpqxx
-   url = https://github.com/ClickHouse/libpqxx.git
+   url = https://github.com/ClickHouse/libpqxx
[submodule "contrib/sqlite-amalgamation"]
    path = contrib/sqlite-amalgamation
-   url = https://github.com/azadkuh/sqlite-amalgamation
+   url = https://github.com/ClickHouse/sqlite-amalgamation
[submodule "contrib/s2geometry"]
    path = contrib/s2geometry
-   url = https://github.com/ClickHouse/s2geometry.git
+   url = https://github.com/ClickHouse/s2geometry
[submodule "contrib/bzip2"]
    path = contrib/bzip2
-   url = https://github.com/ClickHouse/bzip2.git
+   url = https://github.com/ClickHouse/bzip2
[submodule "contrib/magic_enum"]
    path = contrib/magic_enum
    url = https://github.com/Neargye/magic_enum
@@ -237,90 +237,96 @@
    url = https://github.com/google/libprotobuf-mutator
[submodule "contrib/sysroot"]
    path = contrib/sysroot
-   url = https://github.com/ClickHouse/sysroot.git
+   url = https://github.com/ClickHouse/sysroot
[submodule "contrib/nlp-data"]
    path = contrib/nlp-data
-   url = https://github.com/ClickHouse/nlp-data.git
+   url = https://github.com/ClickHouse/nlp-data
[submodule "contrib/hive-metastore"]
    path = contrib/hive-metastore
    url = https://github.com/ClickHouse/hive-metastore
[submodule "contrib/azure"]
    path = contrib/azure
-   url = https://github.com/ClickHouse/azure-sdk-for-cpp.git
+   url = https://github.com/ClickHouse/azure-sdk-for-cpp
[submodule "contrib/minizip-ng"]
    path = contrib/minizip-ng
    url = https://github.com/zlib-ng/minizip-ng
[submodule "contrib/annoy"]
    path = contrib/annoy
-   url = https://github.com/ClickHouse/annoy.git
+   url = https://github.com/ClickHouse/annoy
    branch = ClickHouse-master
[submodule "contrib/qpl"]
    path = contrib/qpl
-   url = https://github.com/intel/qpl.git
+   url = https://github.com/intel/qpl
[submodule "contrib/wyhash"]
    path = contrib/wyhash
-   url = https://github.com/wangyi-fudan/wyhash.git
+   url = https://github.com/wangyi-fudan/wyhash
[submodule "contrib/hashidsxx"]
    path = contrib/hashidsxx
-   url = https://github.com/schoentoon/hashidsxx.git
+   url = https://github.com/schoentoon/hashidsxx
[submodule "contrib/nats-io"]
    path = contrib/nats-io
-   url = https://github.com/ClickHouse/nats.c.git
+   url = https://github.com/ClickHouse/nats.c
[submodule "contrib/vectorscan"]
    path = contrib/vectorscan
-   url = https://github.com/VectorCamp/vectorscan.git
+   url = https://github.com/VectorCamp/vectorscan
[submodule "contrib/c-ares"]
    path = contrib/c-ares
    url = https://github.com/ClickHouse/c-ares
[submodule "contrib/llvm-project"]
    path = contrib/llvm-project
-   url = https://github.com/ClickHouse/llvm-project.git
+   url = https://github.com/ClickHouse/llvm-project
[submodule "contrib/corrosion"]
    path = contrib/corrosion
-   url = https://github.com/corrosion-rs/corrosion.git
+   url = https://github.com/corrosion-rs/corrosion
[submodule "contrib/morton-nd"]
    path = contrib/morton-nd
    url = https://github.com/morton-nd/morton-nd
[submodule "contrib/xxHash"]
    path = contrib/xxHash
-   url = https://github.com/Cyan4973/xxHash.git
+   url = https://github.com/Cyan4973/xxHash
+[submodule "contrib/crc32-s390x"]
+   path = contrib/crc32-s390x
+   url = https://github.com/linux-on-ibm-z/crc32-s390x
[submodule "contrib/openssl"]
    path = contrib/openssl
    url = https://github.com/openssl/openssl
    branch = openssl-3.0
[submodule "contrib/google-benchmark"]
    path = contrib/google-benchmark
-   url = https://github.com/google/benchmark.git
+   url = https://github.com/google/benchmark
[submodule "contrib/libdivide"]
    path = contrib/libdivide
-   url = https://github.com/ridiculousfish/libdivide.git
+   url = https://github.com/ridiculousfish/libdivide
[submodule "contrib/aws-crt-cpp"]
    path = contrib/aws-crt-cpp
-   url = https://github.com/ClickHouse/aws-crt-cpp.git
+   url = https://github.com/ClickHouse/aws-crt-cpp
[submodule "contrib/aws-c-io"]
    path = contrib/aws-c-io
-   url = https://github.com/ClickHouse/aws-c-io.git
+   url = https://github.com/ClickHouse/aws-c-io
[submodule "contrib/aws-c-mqtt"]
    path = contrib/aws-c-mqtt
-   url = https://github.com/awslabs/aws-c-mqtt.git
+   url = https://github.com/awslabs/aws-c-mqtt
[submodule "contrib/aws-c-auth"]
    path = contrib/aws-c-auth
-   url = https://github.com/awslabs/aws-c-auth.git
+   url = https://github.com/awslabs/aws-c-auth
[submodule "contrib/aws-c-cal"]
    path = contrib/aws-c-cal
-   url = https://github.com/ClickHouse/aws-c-cal.git
+   url = https://github.com/ClickHouse/aws-c-cal
[submodule "contrib/aws-c-sdkutils"]
    path = contrib/aws-c-sdkutils
-   url = https://github.com/awslabs/aws-c-sdkutils.git
+   url = https://github.com/awslabs/aws-c-sdkutils
[submodule "contrib/aws-c-http"]
    path = contrib/aws-c-http
-   url = https://github.com/awslabs/aws-c-http.git
+   url = https://github.com/awslabs/aws-c-http
[submodule "contrib/aws-c-s3"]
    path = contrib/aws-c-s3
-   url = https://github.com/awslabs/aws-c-s3.git
+   url = https://github.com/awslabs/aws-c-s3
[submodule "contrib/aws-c-compression"]
    path = contrib/aws-c-compression
-   url = https://github.com/awslabs/aws-c-compression.git
+   url = https://github.com/awslabs/aws-c-compression
[submodule "contrib/aws-s2n-tls"]
    path = contrib/aws-s2n-tls
-   url = https://github.com/aws/s2n-tls.git
+   url = https://github.com/ClickHouse/s2n-tls
+[submodule "contrib/crc32-vpmsum"]
+   path = contrib/crc32-vpmsum
+   url = https://github.com/antonblanchard/crc32-vpmsum.git
CHANGELOG.md | 2013
File diff suppressed because it is too large
@@ -16,6 +16,5 @@ ClickHouse® is an open-source column-oriented database management system that a
* [Contacts](https://clickhouse.com/company/contact) can help to get your questions answered if there are any.

## Upcoming events
-* **Recording available**: [**v22.12 Release Webinar**](https://www.youtube.com/watch?v=sREupr6uc2k) 22.12 is the ClickHouse Christmas release. There are plenty of gifts (a new JOIN algorithm among them) and we adopted something from MongoDB. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release.
-* [**ClickHouse Meetup at the CHEQ office in Tel Aviv**](https://www.meetup.com/clickhouse-tel-aviv-user-group/events/289599423/) - Jan 16 - We are very excited to be holding our next in-person ClickHouse meetup at the CHEQ office in Tel Aviv! Hear from CHEQ, ServiceNow and Contentsquare, as well as a deep dive presentation from ClickHouse CTO Alexey Milovidov. Join us for a fun evening of talks, food and discussion!
-* [**ClickHouse Meetup at Microsoft Office in Seattle**](https://www.meetup.com/clickhouse-seattle-user-group/events/290310025/) - Jan 18 - Keep an eye on this space as we will be announcing speakers soon!
+* **Recording available**: [**v23.1 Release Webinar**](https://www.youtube.com/watch?v=zYSZXBnTMSE) 23.1 is the ClickHouse New Year release. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release. Inverted indices, query cache, and so -- very -- much more.
+* **Recording available**: [**ClickHouse Meetup at the CHEQ office in Tel Aviv**](https://www.meetup.com/clickhouse-tel-aviv-user-group/events/289599423/) - We are very excited to be holding our next in-person ClickHouse meetup at the CHEQ office in Tel Aviv! Hear from CHEQ, ServiceNow and Contentsquare, as well as a deep dive presentation from ClickHouse CTO Alexey Milovidov. Join us for a fun evening of talks, food and discussion!
SECURITY.md | 16
@@ -13,9 +13,10 @@ The following versions of ClickHouse server are currently being supported with s

| Version | Supported |
|:-|:-|
+| 23.1 | ✔️ |
| 22.12 | ✔️ |
| 22.11 | ✔️ |
-| 22.10 | ✔️ |
+| 22.10 | ❌ |
| 22.9 | ❌ |
| 22.8 | ✔️ |
| 22.7 | ❌ |
@@ -25,18 +26,7 @@ The following versions of ClickHouse server are currently being supported with s
| 22.3 | ✔️ |
| 22.2 | ❌ |
| 22.1 | ❌ |
-| 21.12 | ❌ |
-| 21.11 | ❌ |
-| 21.10 | ❌ |
-| 21.9 | ❌ |
-| 21.8 | ❌ |
-| 21.7 | ❌ |
-| 21.6 | ❌ |
-| 21.5 | ❌ |
-| 21.4 | ❌ |
-| 21.3 | ❌ |
-| 21.2 | ❌ |
-| 21.1 | ❌ |
+| 21.* | ❌ |
| 20.* | ❌ |
| 19.* | ❌ |
| 18.* | ❌ |
base/base/IPv4andIPv6.h | 53 (new file)
@@ -0,0 +1,53 @@
+#pragma once
+
+#include <base/strong_typedef.h>
+#include <base/extended_types.h>
+#include <Common/memcmpSmall.h>
+
+namespace DB
+{
+
+using IPv4 = StrongTypedef<UInt32, struct IPv4Tag>;
+
+struct IPv6 : StrongTypedef<UInt128, struct IPv6Tag>
+{
+    constexpr IPv6() = default;
+    constexpr explicit IPv6(const UInt128 & x) : StrongTypedef(x) {}
+    constexpr explicit IPv6(UInt128 && x) : StrongTypedef(std::move(x)) {}
+
+    IPv6 & operator=(const UInt128 & rhs) { StrongTypedef::operator=(rhs); return *this; }
+    IPv6 & operator=(UInt128 && rhs) { StrongTypedef::operator=(std::move(rhs)); return *this; }
+
+    bool operator<(const IPv6 & rhs) const
+    {
+        return
+            memcmp16(
+                reinterpret_cast<const unsigned char *>(toUnderType().items),
+                reinterpret_cast<const unsigned char *>(rhs.toUnderType().items)
+            ) < 0;
+    }
+
+    bool operator>(const IPv6 & rhs) const
+    {
+        return
+            memcmp16(
+                reinterpret_cast<const unsigned char *>(toUnderType().items),
+                reinterpret_cast<const unsigned char *>(rhs.toUnderType().items)
+            ) > 0;
+    }
+
+    bool operator==(const IPv6 & rhs) const
+    {
+        return
+            memcmp16(
+                reinterpret_cast<const unsigned char *>(toUnderType().items),
+                reinterpret_cast<const unsigned char *>(rhs.toUnderType().items)
+            ) == 0;
+    }
+
+    bool operator<=(const IPv6 & rhs) const { return !operator>(rhs); }
+    bool operator>=(const IPv6 & rhs) const { return !operator<(rhs); }
+    bool operator!=(const IPv6 & rhs) const { return !operator==(rhs); }
+};
+
+}
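The new header derives all six IPv6 comparisons from a 16-byte comparison of the underlying UInt128 (memcmp16 and StrongTypedef are ClickHouse internals). A minimal self-contained sketch of the same pattern, with std::memcmp and a plain byte array standing in for those internals:

```cpp
#include <cassert>
#include <cstring>

/// Stand-in for StrongTypedef<UInt128, IPv6Tag>: 16 opaque bytes whose
/// ordering is defined purely by byte-wise comparison, as in the new header.
struct IPv6Like
{
    unsigned char bytes[16] = {};

    bool operator<(const IPv6Like & rhs) const { return std::memcmp(bytes, rhs.bytes, 16) < 0; }
    bool operator>(const IPv6Like & rhs) const { return std::memcmp(bytes, rhs.bytes, 16) > 0; }
    bool operator==(const IPv6Like & rhs) const { return std::memcmp(bytes, rhs.bytes, 16) == 0; }

    /// The remaining operators are derived, exactly as in IPv4andIPv6.h.
    bool operator<=(const IPv6Like & rhs) const { return !operator>(rhs); }
    bool operator>=(const IPv6Like & rhs) const { return !operator<(rhs); }
    bool operator!=(const IPv6Like & rhs) const { return !operator==(rhs); }
};

int main()
{
    IPv6Like a, b;
    a.bytes[15] = 1;
    b.bytes[15] = 2;
    assert(a < b && b > a && a != b && a <= a && a >= a);
}
```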
@@ -2,6 +2,7 @@

#include "Decimal.h"
#include "UUID.h"
+#include "IPv4andIPv6.h"

namespace DB
{
@@ -35,6 +36,8 @@ TN_MAP(Float32)
TN_MAP(Float64)
TN_MAP(String)
TN_MAP(UUID)
+TN_MAP(IPv4)
+TN_MAP(IPv6)
TN_MAP(Decimal32)
TN_MAP(Decimal64)
TN_MAP(Decimal128)
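TN_MAP is an X-macro: each invocation stamps out one definition that maps a type to its printable name, which is why registering the new IPv4/IPv6 types is a one-line change per type. A hedged sketch of the pattern (the real TypeName machinery lives elsewhere in base/; the empty IPv4/IPv6 structs here are stand-ins):

```cpp
#include <string_view>

struct IPv4 {}; /// stand-ins for the real ClickHouse types
struct IPv6 {};

template <typename T> inline constexpr std::string_view TypeName = "";

/// Each TN_MAP(T) line stamps out one specialization, as in the diff above.
#define TN_MAP(T) template <> inline constexpr std::string_view TypeName<T> = #T;

TN_MAP(IPv4)
TN_MAP(IPv6)

static_assert(TypeName<IPv4> == "IPv4");
static_assert(TypeName<IPv6> == "IPv6");
```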
@@ -144,6 +144,13 @@
# define TSA_REQUIRES_SHARED(...) __attribute__((requires_shared_capability(__VA_ARGS__))) /// thread needs shared possession of given capability
# define TSA_ACQUIRED_AFTER(...) __attribute__((acquired_after(__VA_ARGS__))) /// annotated lock must be locked after given lock
# define TSA_NO_THREAD_SAFETY_ANALYSIS __attribute__((no_thread_safety_analysis)) /// disable TSA for a function
+# define TSA_CAPABILITY(...) __attribute__((capability(__VA_ARGS__))) /// object of a class can be used as capability
+# define TSA_ACQUIRE(...) __attribute__((acquire_capability(__VA_ARGS__))) /// function acquires a capability, but does not release it
+# define TSA_TRY_ACQUIRE(...) __attribute__((try_acquire_capability(__VA_ARGS__))) /// function tries to acquire a capability and returns a boolean value indicating success or failure
+# define TSA_RELEASE(...) __attribute__((release_capability(__VA_ARGS__))) /// function releases the given capability
+# define TSA_ACQUIRE_SHARED(...) __attribute__((acquire_shared_capability(__VA_ARGS__))) /// function acquires a shared capability, but does not release it
+# define TSA_TRY_ACQUIRE_SHARED(...) __attribute__((try_acquire_shared_capability(__VA_ARGS__))) /// function tries to acquire a shared capability and returns a boolean value indicating success or failure
+# define TSA_RELEASE_SHARED(...) __attribute__((release_shared_capability(__VA_ARGS__))) /// function releases the given shared capability

/// Macros for suppressing TSA warnings for specific reads/writes (instead of suppressing it for the whole function)
/// They use a lambda function to apply function attribute to a single statement. This enable us to suppress warnings locally instead of
@@ -164,6 +171,13 @@
# define TSA_REQUIRES(...)
# define TSA_REQUIRES_SHARED(...)
# define TSA_NO_THREAD_SAFETY_ANALYSIS
+# define TSA_CAPABILITY(...)
+# define TSA_ACQUIRE(...)
+# define TSA_TRY_ACQUIRE(...)
+# define TSA_RELEASE(...)
+# define TSA_ACQUIRE_SHARED(...)
+# define TSA_TRY_ACQUIRE_SHARED(...)
+# define TSA_RELEASE_SHARED(...)

# define TSA_SUPPRESS_WARNING_FOR_READ(x) (x)
# define TSA_SUPPRESS_WARNING_FOR_WRITE(x) (x)
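These macros wrap Clang's thread-safety-analysis attributes, and the hunk above adds the capability-related ones (plus empty fallbacks when TSA is unavailable). A minimal usage sketch, assuming clang with -Wthread-safety; SpinLock and increment are hypothetical examples, not ClickHouse code:

```cpp
/// Subset of the macros added above (clang-only spellings).
#define TSA_CAPABILITY(...) __attribute__((capability(__VA_ARGS__)))
#define TSA_ACQUIRE(...) __attribute__((acquire_capability(__VA_ARGS__)))
#define TSA_RELEASE(...) __attribute__((release_capability(__VA_ARGS__)))
#define TSA_REQUIRES(...) __attribute__((requires_capability(__VA_ARGS__)))

/// A type marked as a capability can guard other state.
class TSA_CAPABILITY("SpinLock") SpinLock
{
public:
    void lock() TSA_ACQUIRE() {}
    void unlock() TSA_RELEASE() {}
};

SpinLock mutex;
int counter = 0;

/// With -Wthread-safety, callers that do not hold `mutex` are rejected at compile time.
void increment() TSA_REQUIRES(mutex) { ++counter; }
```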
@@ -2,11 +2,11 @@

# NOTE: has nothing common with DBMS_TCP_PROTOCOL_VERSION,
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
-SET(VERSION_REVISION 54470)
-SET(VERSION_MAJOR 22)
-SET(VERSION_MINOR 13)
+SET(VERSION_REVISION 54471)
+SET(VERSION_MAJOR 23)
+SET(VERSION_MINOR 2)
SET(VERSION_PATCH 1)
-SET(VERSION_GITHASH 688e488e930c83eefeac4f87c4cc029cc5b231e3)
-SET(VERSION_DESCRIBE v22.13.1.1-testing)
-SET(VERSION_STRING 22.13.1.1)
+SET(VERSION_GITHASH dcaac47702510cc87ddf266bc524f6b7ce0a8e6e)
+SET(VERSION_DESCRIBE v23.2.1.1-testing)
+SET(VERSION_STRING 23.2.1.1)
# end of autochange
contrib/CMakeLists.txt | 5
@@ -55,6 +55,7 @@ else ()
endif ()
add_contrib (miniselect-cmake miniselect)
add_contrib (pdqsort-cmake pdqsort)
+add_contrib (crc32-vpmsum-cmake crc32-vpmsum)
add_contrib (sparsehash-c11-cmake sparsehash-c11)
add_contrib (abseil-cpp-cmake abseil-cpp)
add_contrib (magic-enum-cmake magic_enum)
@@ -179,6 +180,10 @@ add_contrib (c-ares-cmake c-ares)
add_contrib (qpl-cmake qpl)
add_contrib (morton-nd-cmake morton-nd)

+if (ARCH_S390X)
+    add_contrib(crc32-s390x-cmake crc32-s390x)
+endif()
+
add_contrib (annoy-cmake annoy)

add_contrib (xxHash-cmake xxHash)
contrib/NuRaft | 2
@@ -1 +1 @@
-Subproject commit afc36dfa9b0beb45bc4cd935060631cc80ba04a5
+Subproject commit 545b8c810a956b2efdc116e86be219af7e83d68a
contrib/arrow | 2
@@ -1 +1 @@
-Subproject commit 450a5638704386356f8e520080468fc9bc8bcaf8
+Subproject commit d03245f801f798c63ee9a7d2b8914a9e5c5cd666
contrib/aws-s2n-tls | 2
@@ -1 +1 @@
-Subproject commit 15d534e8a9ca1eda6bacee514e37d08b4f38a526
+Subproject commit 0f1ba9e5c4a67cb3898de0c0b4f911d4194dc8de
contrib/azure | 2
@@ -1 +1 @@
-Subproject commit ef75afc075fc71fbcd8fe28dcda3794ae265fd1c
+Subproject commit ea8c3044f43f5afa7016d2d580ed201f495d7e94
contrib/crc32-s390x | 1 (new submodule)
@@ -0,0 +1 @@
+Subproject commit 30980583bf9ed3fa193abb83a1849705ff457f70
contrib/crc32-s390x-cmake/CMakeLists.txt | 27 (new file)
@@ -0,0 +1,27 @@
+if(ARCH_S390X)
+    option (ENABLE_CRC32_S390X "Enable crc32 on s390x platform" ON)
+endif()
+
+if (NOT ENABLE_CRC32_S390X)
+    return()
+endif()
+
+set(CRC32_S390X_SOURCE_DIR ${ClickHouse_SOURCE_DIR}/contrib/crc32-s390x)
+set(CRC32_S390X_INCLUDE_DIR ${ClickHouse_SOURCE_DIR}/contrib/crc32-s390x)
+
+set(CRC32_SRCS
+    "${CRC32_S390X_SOURCE_DIR}/crc32-s390x.c"
+    "${CRC32_S390X_SOURCE_DIR}/crc32be-vx.S"
+    "${CRC32_S390X_SOURCE_DIR}/crc32le-vx.S"
+)
+
+set(CRC32_HDRS
+    "${CRC32_S390X_INCLUDE_DIR}/crc32-s390x.h"
+)
+
+add_library(_crc32_s390x ${CRC32_SRCS} ${CRC32_HDRS})
+
+target_include_directories(_crc32_s390x SYSTEM PUBLIC "${CRC32_S390X_INCLUDE_DIR}")
+target_compile_definitions(_crc32_s390x PUBLIC)
+
+add_library(ch_contrib::crc32_s390x ALIAS _crc32_s390x)
contrib/crc32-vpmsum | 1 (new submodule)
@@ -0,0 +1 @@
+Subproject commit 452155439389311fc7d143621eaf56a258e02476
contrib/crc32-vpmsum-cmake/CMakeLists.txt | 14 (new file)
@@ -0,0 +1,14 @@
+# module crc32-vpmsum gets build along with the files vec_crc32.h and crc32_constants.h in crc32-vpmsum-cmake
+# Please see README.md for information about how to generate crc32_constants.h
+if (NOT ARCH_PPC64LE)
+    message (STATUS "crc32-vpmsum library is only supported on ppc64le")
+    return()
+endif()
+
+SET(LIBRARY_DIR "${ClickHouse_SOURCE_DIR}/contrib/crc32-vpmsum")
+
+add_library(_crc32-vpmsum
+    "${LIBRARY_DIR}/vec_crc32.c"
+)
+target_include_directories(_crc32-vpmsum SYSTEM BEFORE PUBLIC "${CMAKE_CURRENT_SOURCE_DIR}")
+add_library(ch_contrib::crc32-vpmsum ALIAS _crc32-vpmsum)
contrib/crc32-vpmsum-cmake/README.md | 9 (new file)
@@ -0,0 +1,9 @@
+# To Generate crc32_constants.h
+
+- Run make file in `../crc32-vpmsum` directory using following options and CRC polynomial. These options should use the same polynomial and order used by intel intrinisic functions
+```bash
+make crc32_constants.h CRC="0x11EDC6F41" OPTIONS="-x -r -c"
+```
+- move the generated `crc32_constants.h` into this directory
+- To understand more about this go here: https://masterchef2209.wordpress.com/2020/06/17/guide-to-intel-sse4-2-crc-intrinisics-implementation-for-simde/
+- Here is the link to information about intel intrinsic functions: https://www.intel.com/content/www/us/en/docs/intrinsics-guide/index.html#text=_mm_crc32_u64&ig_expand=1492,1493,1559
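The polynomial 0x11EDC6F41 in that make invocation is CRC-32C (Castagnoli), the same polynomial the Intel _mm_crc32_* intrinsics implement. For reference, a bit-at-a-time software sketch using the reflected form 0x82F63B78 — any hardware path (SSE4.2 on x86, or the vpmsum code vendored here on POWER) must produce the same values as this:

```cpp
#include <cstddef>
#include <cstdint>

/// Reference CRC-32C, one bit per iteration. 0x82F63B78 is the reflected
/// form of the 0x11EDC6F41 polynomial passed to make above.
uint32_t crc32c(uint32_t crc, const unsigned char * data, size_t len)
{
    crc = ~crc;
    while (len--)
    {
        crc ^= *data++;
        for (int i = 0; i < 8; ++i)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}
```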
contrib/crc32-vpmsum-cmake/crc32_constants.h | 1206 (new file)
File diff suppressed because it is too large
contrib/crc32-vpmsum-cmake/vec_crc32.h | 26 (new file)
@@ -0,0 +1,26 @@
+#ifndef VEC_CRC32
+#define VEC_CRC32
+
+#if ! ((defined(__PPC64__) || defined(__powerpc64__)) && __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
+#    error PowerPC architecture is expected
+#endif
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+unsigned int crc32_vpmsum(unsigned int crc, const unsigned char *p, unsigned long len);
+
+static inline uint32_t crc32_ppc(uint64_t crc, unsigned char const *buffer, size_t len)
+{
+    assert(buffer);
+    crc = crc32_vpmsum(crc, buffer, (unsigned long)len);
+
+    return crc;
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif
contrib/googletest | 2
@@ -1 +1 @@
-Subproject commit e7e591764baba0a0c3c9ad0014430e7a27331d16
+Subproject commit 71140c3ca7a87bb1b5b9c9f1500fea8858cce344
contrib/poco | 2
@@ -1 +1 @@
-Subproject commit 799234226187c0ae0b8c90f23465b25ed7956e56
+Subproject commit 7fefdf30244a9bf8eb58562a9b2a51cc59a8877a
contrib/sqlite-amalgamation | 2
@@ -1 +1 @@
-Subproject commit 9818baa5d027ffb26d57f810dc4c597d4946781c
+Subproject commit 400ad7152a0c7ee07756d96ab4f6a8f6d1080916
@@ -63,10 +63,6 @@
    "name": "clickhouse/integration-tests-runner",
    "dependent": []
},
-"docker/test/testflows/runner": {
-    "name": "clickhouse/testflows-runner",
-    "dependent": []
-},
"docker/test/fasttest": {
    "name": "clickhouse/fasttest",
    "dependent": []
@@ -22,7 +22,8 @@ RUN apt-get update && \
    build-essential \
    libc6 \
    libc6-dev \
-   libc6-dev-arm64-cross && \
+   libc6-dev-arm64-cross \
+   zstd && \
    apt-get clean

ENV CC=clang-${LLVM_VERSION}
@@ -159,7 +159,7 @@ then
    git -C "$PERF_OUTPUT"/ch log -5
    (
        cd "$PERF_OUTPUT"/..
-       tar -cv -I pigz -f /output/performance.tgz output
+       tar -cv --zstd -f /output/performance.tar.zst output
    )
fi

@@ -167,15 +167,15 @@ fi
if [ "" != "$COMBINED_OUTPUT" ]
then
    prepare_combined_output /output
-   tar -cv -I pigz -f "$COMBINED_OUTPUT.tgz" /output
+   tar -cv --zstd -f "$COMBINED_OUTPUT.tar.zst" /output
    rm -r /output/*
-   mv "$COMBINED_OUTPUT.tgz" /output
+   mv "$COMBINED_OUTPUT.tar.zst" /output
fi

if [ "coverity" == "$COMBINED_OUTPUT" ]
then
-   tar -cv -I pigz -f "coverity-scan.tgz" cov-int
-   mv "coverity-scan.tgz" /output
+   tar -cv --zstd -f "coverity-scan.tar.zst" cov-int
+   mv "coverity-scan.tar.zst" /output
fi

ccache_status
@@ -33,7 +33,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc
ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="22.12.1.1752"
+ARG VERSION="23.1.1.3077"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

# user/group precreated explicitly with fixed uid/gid on purpose.
@@ -21,7 +21,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="22.12.1.1752"
+ARG VERSION="23.1.1.3077"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

# set non-empty deb_location_url url to create a docker image
@@ -58,7 +58,7 @@ echo 'SELECT version()' | curl 'http://localhost:18123/' --data-binary @-
22.6.3.35
```

-or by allowing the container to use [host ports directly](https://docs.docker.com/network/host/) using `--network=host` (also allows archiving better network performance):
+or by allowing the container to use [host ports directly](https://docs.docker.com/network/host/) using `--network=host` (also allows achieving better network performance):

```bash
docker run -d --network=host --name some-clickhouse-server --ulimit nofile=262144:262144 clickhouse/clickhouse-server
@@ -9,6 +9,8 @@ RUN apt-get update \
        netbase \
        perl \
        pv \
+       ripgrep \
+       zstd \
        --yes --no-install-recommends

# Sanitizer options for services (clickhouse-server)
@@ -17,6 +17,7 @@ RUN apt-get update \
        python3-termcolor \
        unixodbc \
        pv \
+       zstd \
        --yes --no-install-recommends

# Install CMake 3.20+ for Rust compilation
@@ -138,6 +138,7 @@ function clone_submodules
        contrib/c-ares
        contrib/morton-nd
        contrib/xxHash
+       contrib/simdjson
    )

    git submodule sync
@@ -158,6 +159,7 @@ function run_cmake
        "-DENABLE_THINLTO=0"
        "-DUSE_UNWIND=1"
        "-DENABLE_NURAFT=1"
+       "-DENABLE_SIMDJSON=1"
        "-DENABLE_JEMALLOC=1"
    )

@@ -188,7 +190,7 @@ function build
        cp programs/clickhouse "$FASTTEST_OUTPUT/clickhouse"

        strip programs/clickhouse -o "$FASTTEST_OUTPUT/clickhouse-stripped"
-       gzip "$FASTTEST_OUTPUT/clickhouse-stripped"
+       zstd --threads=0 "$FASTTEST_OUTPUT/clickhouse-stripped"
    fi
    ccache --show-stats ||:
    ccache --evict-older-than 1d ||:
@@ -234,6 +236,7 @@ function run_tests
        --check-zookeeper-session
        --order random
        --print-time
+       --report-logs-stats
        --jobs "${NPROC}"
    )
    time clickhouse-test "${test_opts[@]}" -- "$FASTTEST_FOCUS" 2>&1 \
@@ -2,6 +2,13 @@
<profiles>
    <default>
        <max_execution_time>10</max_execution_time>
+       <max_memory_usage>10G</max_memory_usage>
+
+       <!--
+           Otherwise we will get the TOO_MANY_SIMULTANEOUS_QUERIES errors,
+           they are ok, but complicate debugging.
+       -->
+       <table_function_remote_max_addresses>200</table_function_remote_max_addresses>

        <!--
            Don't let the fuzzer change this setting (I've actually seen it
@@ -20,6 +27,10 @@
            <allow_experimental_analyzer>
                <readonly/>
            </allow_experimental_analyzer>
+
+           <table_function_remote_max_addresses>
+               <max>200</max>
+           </table_function_remote_max_addresses>
        </constraints>
    </default>
</profiles>
@@ -5,6 +5,7 @@ set -x

# core.COMM.PID-TID
sysctl kernel.core_pattern='core.%e.%p-%P'
+dmesg --clear ||:

set -e
set -u
@@ -17,13 +18,25 @@ repo_dir=ch
BINARY_TO_DOWNLOAD=${BINARY_TO_DOWNLOAD:="clang-15_debug_none_unsplitted_disable_False_binary"}
BINARY_URL_TO_DOWNLOAD=${BINARY_URL_TO_DOWNLOAD:="https://clickhouse-builds.s3.amazonaws.com/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/$BINARY_TO_DOWNLOAD/clickhouse"}

+function git_clone_with_retry
+{
+    for _ in 1 2 3 4; do
+        if git clone --depth 1 https://github.com/ClickHouse/ClickHouse.git -- "$1" 2>&1 | ts '%Y-%m-%d %H:%M:%S';then
+            return 0
+        else
+            sleep 0.5
+        fi
+    done
+    return 1
+}
+
function clone
{
    # For local runs, start directly from the "fuzz" stage.
    rm -rf "$repo_dir" ||:
    mkdir "$repo_dir" ||:

-   git clone --depth 1 https://github.com/ClickHouse/ClickHouse.git -- "$repo_dir" 2>&1 | ts '%Y-%m-%d %H:%M:%S'
+   git_clone_with_retry "$repo_dir"
    (
        cd "$repo_dir"
        if [ "$PR_TO_TEST" != "0" ]; then
@@ -241,13 +254,29 @@ quit
    # clickhouse-client. We don't check for existence of server process, because
    # the process is still present while the server is terminating and not
    # accepting the connections anymore.
-   if clickhouse-client --query "select 1 format Null"
-   then
-       server_died=0
-   else
-       echo "Server live check returns $?"
-       server_died=1
-   fi
+
+   for _ in {1..100}
+   do
+       if clickhouse-client --query "SELECT 1" 2> err
+       then
+           server_died=0
+           break
+       else
+           # There are legitimate queries leading to this error, example:
+           # SELECT * FROM remote('127.0.0.{1..255}', system, one)
+           if grep -F 'TOO_MANY_SIMULTANEOUS_QUERIES' err
+           then
+               # Give it some time to cool down
+               clickhouse-client --query "SHOW PROCESSLIST"
+               sleep 1
+           else
+               echo "Server live check returns $?"
+               cat err
+               server_died=1
+               break
+           fi
+       fi
+   done

    # wait in background to call wait in foreground and ensure that the
    # process is alive, since w/o job control this is the only way to obtain
@@ -262,14 +291,17 @@ quit
    if [ "$server_died" == 1 ]
    then
        # The server has died.
-       if ! grep --text -ao "Received signal.*\|Logical error.*\|Assertion.*failed\|Failed assertion.*\|.*runtime error: .*\|.*is located.*\|SUMMARY: AddressSanitizer:.*\|SUMMARY: MemorySanitizer:.*\|SUMMARY: ThreadSanitizer:.*\|.*_LIBCPP_ASSERT.*" server.log > description.txt
+       if ! rg --text -o 'Received signal.*|Logical error.*|Assertion.*failed|Failed assertion.*|.*runtime error: .*|.*is located.*|(SUMMARY|ERROR): [a-zA-Z]+Sanitizer:.*|.*_LIBCPP_ASSERT.*' server.log > description.txt
        then
            echo "Lost connection to server. See the logs." > description.txt
        fi

-       if grep -E --text 'Sanitizer: (out-of-memory|failed to allocate)' description.txt
+       IS_SANITIZED=$(clickhouse-local --query "SELECT value LIKE '%-fsanitize=%' FROM system.build_options WHERE name = 'CXX_FLAGS'")
+
+       if [ "${IS_SANITIZED}" -eq "1" ] && rg --text 'Sanitizer:? (out-of-memory|out of memory|failed to allocate)|Child process was terminated by signal 9' description.txt
        then
            # OOM of sanitizer is not a problem we can handle - treat it as success, but preserve the description.
            # Why? Because sanitizers have the memory overhead, that is not controllable from inside clickhouse-server.
            task_exit_code=0
            echo "success" > status.txt
        else
@@ -299,18 +331,18 @@ quit
        # which is confusing.
        task_exit_code=$fuzzer_exit_code
        echo "failure" > status.txt
-       { grep --text -o "Found error:.*" fuzzer.log \
-           || grep --text -ao "Exception:.*" fuzzer.log \
+       { rg --text -o "Found error:.*" fuzzer.log \
+           || rg --text -ao "Exception:.*" fuzzer.log \
            || echo "Fuzzer failed ($fuzzer_exit_code). See the logs." ; } \
            | tail -1 > description.txt
    fi

    if test -f core.*; then
-       pigz core.*
-       mv core.*.gz core.gz
+       zstd --threads=0 core.*
+       mv core.*.zst core.zst
    fi

-   dmesg -T | grep -q -F -e 'Out of memory: Killed process' -e 'oom_reaper: reaped process' -e 'oom-kill:constraint=CONSTRAINT_NONE' && echo "OOM in dmesg" ||:
+   dmesg -T | rg -q -F -e 'Out of memory: Killed process' -e 'oom_reaper: reaped process' -e 'oom-kill:constraint=CONSTRAINT_NONE' && echo "OOM in dmesg" ||:
}

case "$stage" in
@@ -344,13 +376,14 @@ case "$stage" in
    "report")

    CORE_LINK=''
-   if [ -f core.gz ]; then
-       CORE_LINK='<a href="core.gz">core.gz</a>'
+   if [ -f core.zst ]; then
+       CORE_LINK='<a href="core.zst">core.zst</a>'
    fi

-   grep --text -F '<Fatal>' server.log > fatal.log ||:
+   rg --text -F '<Fatal>' server.log > fatal.log ||:
    dmesg -T > dmesg.log ||:

-   pigz server.log
+   zstd --threads=0 server.log

    cat > report.html <<EOF ||:
    <!DOCTYPE html>
@@ -375,8 +408,9 @@ p.links a { padding: 5px; margin: 3px; background: #FFF; line-height: 2; white-s
    <p class="links">
    <a href="run.log">run.log</a>
    <a href="fuzzer.log">fuzzer.log</a>
-   <a href="server.log.gz">server.log.gz</a>
+   <a href="server.log.zst">server.log.zst</a>
    <a href="main.log">main.log</a>
    <a href="dmesg.log">dmesg.log</a>
    ${CORE_LINK}
    </p>
    <table>
@@ -49,7 +49,7 @@ RUN arch=${TARGETARCH:-amd64} \
    && curl -o mysql-odbc.rpm "https://cdn.mysql.com/archives/mysql-connector-odbc-8.0/mysql-connector-odbc-8.0.27-1.el8.${rarch}.rpm" \
    && rpm2archive mysql-odbc.rpm \
    && tar xf mysql-odbc.rpm.tgz -C / ./usr/lib64/ \
-   && LINK_DIR=$(dpkg -L libodbc1 | grep '^/usr/lib/.*-linux-gnu/odbc$') \
+   && LINK_DIR=$(dpkg -L libodbc1 | rg '^/usr/lib/.*-linux-gnu/odbc$') \
    && ln -s /usr/lib64/libmyodbc8a.so "$LINK_DIR" \
    && ln -s /usr/lib64/libmyodbc8a.so "$LINK_DIR"/libmyodbc.so

@@ -57,14 +57,17 @@ RUN arch=${TARGETARCH:-amd64} \
# ZooKeeper is not started by default, but consumes some space in containers.
# 777 perms used to allow anybody to start/stop ZooKeeper
ENV ZOOKEEPER_VERSION='3.6.3'
-RUN curl -O "https://dlcdn.apache.org/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz"
-RUN tar -zxvf apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz && mv apache-zookeeper-${ZOOKEEPER_VERSION}-bin /opt/zookeeper && chmod -R 777 /opt/zookeeper && rm apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz
-RUN echo $'tickTime=2500 \n\
+RUN curl "https://archive.apache.org/dist/zookeeper/zookeeper-${ZOOKEEPER_VERSION}/apache-zookeeper-${ZOOKEEPER_VERSION}-bin.tar.gz" | \
+    tar -C opt -zxv && \
+    mv /opt/apache-zookeeper-${ZOOKEEPER_VERSION}-bin /opt/zookeeper && \
+    chmod -R 777 /opt/zookeeper && \
+    echo $'tickTime=2500 \n\
+tickTime=2500 \n\
dataDir=/zookeeper \n\
clientPort=2181 \n\
-maxClientCnxns=80' > /opt/zookeeper/conf/zoo.cfg
-RUN mkdir /zookeeper && chmod -R 777 /zookeeper
+maxClientCnxns=80' > /opt/zookeeper/conf/zoo.cfg && \
+    mkdir /zookeeper && \
+    chmod -R 777 /zookeeper

ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
@@ -8,6 +8,7 @@ RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list

RUN apt-get update \
    && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
+       adduser \
        ca-certificates \
        bash \
        btrfs-progs \
@@ -0,0 +1,5 @@
+version: '2.3'
+# Used to pre-pull images with docker-compose
+services:
+  clickhouse1:
+    image: clickhouse/integration-test
@@ -5,10 +5,10 @@ services:
    hostname: hdfs1
    restart: always
    expose:
-     - ${HDFS_NAME_PORT}
-     - ${HDFS_DATA_PORT}
+     - ${HDFS_NAME_PORT:-50070}
+     - ${HDFS_DATA_PORT:-50075}
    entrypoint: /etc/bootstrap.sh -d
    volumes:
      - type: ${HDFS_FS:-tmpfs}
        source: ${HDFS_LOGS:-}
-       target: /usr/local/hadoop/logs
+       target: /usr/local/hadoop/logs
@@ -15,7 +15,7 @@ services:
    image: confluentinc/cp-kafka:5.2.0
    hostname: kafka1
    ports:
-     - ${KAFKA_EXTERNAL_PORT}:${KAFKA_EXTERNAL_PORT}
+     - ${KAFKA_EXTERNAL_PORT:-8081}:${KAFKA_EXTERNAL_PORT:-8081}
    environment:
      KAFKA_ADVERTISED_LISTENERS: INSIDE://localhost:${KAFKA_EXTERNAL_PORT},OUTSIDE://kafka1:19092
      KAFKA_ADVERTISED_HOST_NAME: kafka1
@@ -35,7 +35,7 @@ services:
    image: confluentinc/cp-schema-registry:5.2.0
    hostname: schema-registry
    ports:
-     - ${SCHEMA_REGISTRY_EXTERNAL_PORT}:${SCHEMA_REGISTRY_INTERNAL_PORT}
+     - ${SCHEMA_REGISTRY_EXTERNAL_PORT:-12313}:${SCHEMA_REGISTRY_INTERNAL_PORT:-12313}
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL: PLAINTEXT
@@ -15,8 +15,8 @@ services:
        source: ${KERBERIZED_HDFS_LOGS:-}
        target: /var/log/hadoop-hdfs
    expose:
-     - ${KERBERIZED_HDFS_NAME_PORT}
-     - ${KERBERIZED_HDFS_DATA_PORT}
+     - ${KERBERIZED_HDFS_NAME_PORT:-50070}
+     - ${KERBERIZED_HDFS_DATA_PORT:-1006}
    depends_on:
      - hdfskerberos
    entrypoint: /etc/bootstrap.sh -d
@@ -23,7 +23,7 @@ services:
    # restart: always
    hostname: kerberized_kafka1
    ports:
-     - ${KERBERIZED_KAFKA_EXTERNAL_PORT}:${KERBERIZED_KAFKA_EXTERNAL_PORT}
+     - ${KERBERIZED_KAFKA_EXTERNAL_PORT:-19092}:${KERBERIZED_KAFKA_EXTERNAL_PORT:-19092}
    environment:
      KAFKA_LISTENERS: OUTSIDE://:19092,UNSECURED_OUTSIDE://:19093,UNSECURED_INSIDE://0.0.0.0:${KERBERIZED_KAFKA_EXTERNAL_PORT}
      KAFKA_ADVERTISED_LISTENERS: OUTSIDE://kerberized_kafka1:19092,UNSECURED_OUTSIDE://kerberized_kafka1:19093,UNSECURED_INSIDE://localhost:${KERBERIZED_KAFKA_EXTERNAL_PORT}
@@ -41,7 +41,7 @@ services:
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_OPTS: "-Djava.security.auth.login.config=/etc/kafka/secrets/broker_jaas.conf -Djava.security.krb5.conf=/etc/kafka/secrets/krb.conf -Dsun.security.krb5.debug=true"
    volumes:
-     - ${KERBERIZED_KAFKA_DIR}/secrets:/etc/kafka/secrets
+     - ${KERBERIZED_KAFKA_DIR:-}/secrets:/etc/kafka/secrets
      - /dev/urandom:/dev/random
    depends_on:
      - kafka_kerberized_zookeeper
@@ -0,0 +1,11 @@
+version: '2.3'
+
+services:
+  kerberoskdc:
+    image: clickhouse/kerberos-kdc:${DOCKER_KERBEROS_KDC_TAG:-latest}
+    hostname: kerberoskdc
+    volumes:
+      - ${KERBEROS_KDC_DIR}/secrets:/tmp/keytab
+      - ${KERBEROS_KDC_DIR}/../kerberos_image_config.sh:/config.sh
+      - /dev/urandom:/dev/random
+    ports: [88, 749]
@@ -4,13 +4,13 @@ services:
    image: getmeili/meilisearch:v0.27.0
    restart: always
    ports:
-     - ${MEILI_EXTERNAL_PORT}:${MEILI_INTERNAL_PORT}
+     - ${MEILI_EXTERNAL_PORT:-7700}:${MEILI_INTERNAL_PORT:-7700}

  meili_secure:
    image: getmeili/meilisearch:v0.27.0
    restart: always
    ports:
-     - ${MEILI_SECURE_EXTERNAL_PORT}:${MEILI_SECURE_INTERNAL_PORT}
+     - ${MEILI_SECURE_EXTERNAL_PORT:-7700}:${MEILI_SECURE_INTERNAL_PORT:-7700}
    environment:
      MEILI_MASTER_KEY: "password"
@@ -9,7 +9,7 @@ services:
      - data1-1:/data1
      - ${MINIO_CERTS_DIR:-}:/certs
    expose:
-     - ${MINIO_PORT}
+     - ${MINIO_PORT:-9001}
    environment:
      MINIO_ACCESS_KEY: minio
      MINIO_SECRET_KEY: minio123
@@ -7,11 +7,11 @@ services:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: clickhouse
    ports:
-     - ${MONGO_EXTERNAL_PORT}:${MONGO_INTERNAL_PORT}
+     - ${MONGO_EXTERNAL_PORT:-27017}:${MONGO_INTERNAL_PORT:-27017}
    command: --profile=2 --verbose

  mongo2:
    image: mongo:5.0
    restart: always
    ports:
-     - ${MONGO_NO_CRED_EXTERNAL_PORT}:${MONGO_NO_CRED_INTERNAL_PORT}
+     - ${MONGO_NO_CRED_EXTERNAL_PORT:-27017}:${MONGO_NO_CRED_INTERNAL_PORT:-27017}
@@ -7,7 +7,7 @@ services:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: clickhouse
    volumes:
-     - ${MONGO_CONFIG_PATH}:/mongo/
+     - ${MONGO_CONFIG_PATH:-}:/mongo/
    ports:
-     - ${MONGO_EXTERNAL_PORT}:${MONGO_INTERNAL_PORT}
+     - ${MONGO_EXTERNAL_PORT:-27017}:${MONGO_INTERNAL_PORT:-27017}
    command: --config /mongo/mongo_secure.conf --profile=2 --verbose
@@ -8,7 +8,7 @@ services:
      MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST}
      DATADIR: /mysql/
    expose:
-     - ${MYSQL_PORT}
+     - ${MYSQL_PORT:-3306}
    command: --server_id=100
      --log-bin='mysql-bin-1.log'
      --default-time-zone='+3:00'
@@ -1,21 +0,0 @@
-version: '2.3'
-services:
-    mysql1:
-        image: mysql:5.7
-        restart: 'no'
-        environment:
-            MYSQL_ROOT_PASSWORD: clickhouse
-        ports:
-            - 3308:3306
-        command: --server_id=100 --log-bin='mysql-bin-1.log'
-            --default-time-zone='+3:00'
-            --gtid-mode="ON"
-            --enforce-gtid-consistency
-            --log-error-verbosity=3
-            --log-error=/var/log/mysqld/error.log
-            --general-log=ON
-            --general-log-file=/var/log/mysqld/general.log
-        volumes:
-            - type: ${MYSQL_LOGS_FS:-tmpfs}
-              source: ${MYSQL_LOGS:-}
-              target: /var/log/mysqld/
@@ -8,7 +8,7 @@ services:
      MYSQL_ROOT_HOST: ${MYSQL_ROOT_HOST}
      DATADIR: /mysql/
    expose:
-     - ${MYSQL8_PORT}
+     - ${MYSQL8_PORT:-3306}
    command: --server_id=100 --log-bin='mysql-bin-1.log'
      --default_authentication_plugin='mysql_native_password'
      --default-time-zone='+3:00' --gtid-mode="ON"
@ -8,7 +8,7 @@ services:
|
||||
MYSQL_ROOT_HOST: ${MYSQL_CLUSTER_ROOT_HOST}
|
||||
DATADIR: /mysql/
|
||||
expose:
|
||||
- ${MYSQL_CLUSTER_PORT}
|
||||
- ${MYSQL_CLUSTER_PORT:-3306}
|
||||
command: --server_id=100
|
||||
--log-bin='mysql-bin-2.log'
|
||||
--default-time-zone='+3:00'
|
||||
@ -30,7 +30,7 @@ services:
|
||||
MYSQL_ROOT_HOST: ${MYSQL_CLUSTER_ROOT_HOST}
|
||||
DATADIR: /mysql/
|
||||
expose:
|
||||
- ${MYSQL_CLUSTER_PORT}
|
||||
- ${MYSQL_CLUSTER_PORT:-3306}
|
||||
command: --server_id=100
|
||||
--log-bin='mysql-bin-3.log'
|
||||
--default-time-zone='+3:00'
|
||||
@ -52,7 +52,7 @@ services:
|
||||
MYSQL_ROOT_HOST: ${MYSQL_CLUSTER_ROOT_HOST}
|
||||
DATADIR: /mysql/
|
||||
expose:
|
||||
- ${MYSQL_CLUSTER_PORT}
|
||||
- ${MYSQL_CLUSTER_PORT:-3306}
|
||||
command: --server_id=100
|
||||
--log-bin='mysql-bin-4.log'
|
||||
--default-time-zone='+3:00'
|
||||
|
@ -3,9 +3,9 @@ services:
|
||||
nats1:
|
||||
image: nats
|
||||
ports:
|
||||
- "${NATS_EXTERNAL_PORT}:${NATS_INTERNAL_PORT}"
|
||||
- "${NATS_EXTERNAL_PORT:-4444}:${NATS_INTERNAL_PORT:-4444}"
|
||||
command: "-p 4444 --user click --pass house --tls --tlscert=/etc/certs/server-cert.pem --tlskey=/etc/certs/server-key.pem"
|
||||
volumes:
|
||||
- type: bind
|
||||
source: "${NATS_CERT_DIR}/nats"
|
||||
source: "${NATS_CERT_DIR:-}/nats"
|
||||
target: /etc/certs
|
||||
|
@ -5,7 +5,7 @@ services:
|
||||
command: ["postgres", "-c", "wal_level=logical", "-c", "max_replication_slots=2", "-c", "logging_collector=on", "-c", "log_directory=/postgres/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all", "-c", "max_connections=200"]
|
||||
restart: always
|
||||
expose:
|
||||
- ${POSTGRES_PORT}
|
||||
- ${POSTGRES_PORT:-5432}
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "pg_isready -U postgres"]
|
||||
interval: 10s
|
||||
|
@ -9,7 +9,7 @@ services:
|
||||
POSTGRES_PASSWORD: mysecretpassword
|
||||
PGDATA: /postgres/data
|
||||
expose:
|
||||
- ${POSTGRES_PORT}
|
||||
- ${POSTGRES_PORT:-5432}
|
||||
volumes:
|
||||
- type: ${POSTGRES_LOGS_FS:-tmpfs}
|
||||
source: ${POSTGRES2_DIR:-}
|
||||
@ -23,7 +23,7 @@ services:
|
||||
POSTGRES_PASSWORD: mysecretpassword
|
||||
PGDATA: /postgres/data
|
||||
expose:
|
||||
- ${POSTGRES_PORT}
|
||||
- ${POSTGRES_PORT:-5432}
|
||||
volumes:
|
||||
- type: ${POSTGRES_LOGS_FS:-tmpfs}
|
||||
source: ${POSTGRES3_DIR:-}
|
||||
@ -37,7 +37,7 @@ services:
|
||||
POSTGRES_PASSWORD: mysecretpassword
|
||||
PGDATA: /postgres/data
|
||||
expose:
|
||||
- ${POSTGRES_PORT}
|
||||
- ${POSTGRES_PORT:-5432}
|
||||
volumes:
|
||||
- type: ${POSTGRES_LOGS_FS:-tmpfs}
|
||||
source: ${POSTGRES4_DIR:-}
|
||||
|
@ -5,7 +5,7 @@ services:
|
||||
image: rabbitmq:3.8-management-alpine
|
||||
hostname: rabbitmq1
|
||||
expose:
|
||||
- ${RABBITMQ_PORT}
|
||||
- ${RABBITMQ_PORT:-5672}
|
||||
environment:
|
||||
RABBITMQ_DEFAULT_USER: "root"
|
||||
RABBITMQ_DEFAULT_PASS: "clickhouse"
|
||||
|
@ -4,5 +4,5 @@ services:
|
||||
image: redis
|
||||
restart: always
|
||||
ports:
|
||||
- ${REDIS_EXTERNAL_PORT}:${REDIS_INTERNAL_PORT}
|
||||
- ${REDIS_EXTERNAL_PORT:-6379}:${REDIS_INTERNAL_PORT:-6379}
|
||||
command: redis-server --requirepass "clickhouse" --databases 32
|
||||
|
@ -11,7 +11,7 @@ set -eu
|
||||
for module; do
|
||||
if [ "${module#-}" = "$module" ]; then
|
||||
ip link show "$module" || true
|
||||
lsmod | grep "$module" || true
|
||||
lsmod | rg "$module" || true
|
||||
fi
|
||||
done
|
||||
|
||||
|
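The guard in that loop relies on a POSIX parameter-expansion trick: "${module#-}" strips one leading "-", so the comparison holds only when the argument does not start with a dash. A small illustrative sketch (hypothetical helper, not from the diff):

# check() echoes whether an argument looks like a module name or an option.
check() {
  if [ "${1#-}" = "$1" ]; then
    echo "$1: module name"
  else
    echo "$1: option, skipped"
  fi
}
check ip_tables   # -> module name
check -v          # -> option, skipped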
@ -37,6 +37,8 @@ RUN apt-get update \
wget \
rustc \
cargo \
ripgrep \
zstd \
&& pip3 --no-cache-dir install 'clickhouse-driver==0.2.1' scipy \
&& apt-get purge --yes python3-dev g++ \
&& apt-get autoremove --yes \

@ -50,7 +50,7 @@ Action required for every item -- these are errors that must be fixed.

A query is supposed to run longer than 0.1 second. If your query runs faster, increase the amount of processed data to bring the run time above this threshold. You can use a bigger table (e.g. `hits_100m` instead of `hits_10m`), increase a `LIMIT`, make a query single-threaded, and so on. Queries that are too fast suffer from poor stability and precision.

#### Partial Queries
#### Backward-incompatible Queries
Action required for the cells marked in red.

Shows the queries we are unable to run on an old server -- probably because they contain a new function. You should see this table when you add a new function and a performance test for it. Check that the run time and variance are acceptable (run time between 0.1 and 1 seconds, variance below 10%). If not, they will be highlighted in red.
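As an illustrative sketch of that guidance (not part of the diff; the hits_100m/hits_10m table names come from the quoted document, the queries themselves are hypothetical), a too-fast test query can be pushed into the 0.1-1 s band by scanning more data or pinning it to one thread:

# Bigger table instead of the small one:
clickhouse-client --query "SELECT count() FROM hits_100m WHERE URL LIKE '%google%'"
# Or force single-threaded execution of the original query:
clickhouse-client --max_threads=1 --query "SELECT uniq(SearchPhrase) FROM hits_10m"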
@ -193,7 +193,7 @@ function run_tests
then
# Run only explicitly specified tests, if any.
# shellcheck disable=SC2010
test_files=($(ls "$test_prefix" | grep "$CHPC_TEST_GREP" | xargs -I{} -n1 readlink -f "$test_prefix/{}"))
test_files=($(ls "$test_prefix" | rg "$CHPC_TEST_GREP" | xargs -I{} -n1 readlink -f "$test_prefix/{}"))
elif [ "$PR_TO_TEST" -ne 0 ] \
&& [ "$(wc -l < changed-test-definitions.txt)" -gt 0 ] \
&& [ "$(wc -l < other-changed-files.txt)" -eq 0 ]

@ -210,7 +210,7 @@ function run_tests
# We can filter out certain tests
if [ -v CHPC_TEST_GREP_EXCLUDE ]; then
# filter tests array in bash https://stackoverflow.com/a/40375567
filtered_test_files=( $( for i in ${test_files[@]} ; do echo $i ; done | grep -v ${CHPC_TEST_GREP_EXCLUDE} ) )
filtered_test_files=( $( for i in ${test_files[@]} ; do echo $i ; done | rg -v ${CHPC_TEST_GREP_EXCLUDE} ) )
test_files=("${filtered_test_files[@]}")
fi
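The echo-pipe-rg idiom above rebuilds the array via word splitting, which breaks on filenames containing spaces. A hedged sketch of a pure-bash alternative (hypothetical filenames; the pipeline in the diff is kept as the project wrote it):

# Filter a bash array without a subshell pipeline or word splitting.
files=(test_a.xml test_b.xml skip_me.xml)
filtered=()
for f in "${files[@]}"; do
  [[ $f == *skip* ]] || filtered+=("$f")
done
files=("${filtered[@]}")
printf '%s\n' "${files[@]}"   # test_a.xml, test_b.xml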
@ -284,7 +284,7 @@ function run_tests
# Use awk because bash doesn't support floating point arithmetic.
profile_seconds=$(awk "BEGIN { print ($profile_seconds_left > 0 ? 10 : 0) }")

if [ "$(grep -c $(basename $test) changed-test-definitions.txt)" -gt 0 ]
if [ "$(rg -c $(basename $test) changed-test-definitions.txt)" -gt 0 ]
then
# Run all queries from changed test files to ensure that all new queries will be tested.
max_queries=0
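The awk delegation exists because bash arithmetic is integer-only; a one-line sketch (illustrative value, not from the diff):

# left=0.3; echo $(( left > 0 ))            # fails: bash cannot parse 0.3
left=0.3
awk "BEGIN { print ($left > 0 ? 10 : 0) }"  # prints 10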
@ -399,7 +399,7 @@ clickhouse-local --query "
create view query_runs as select * from file('analyze/query-runs.tsv', TSV,
'test text, query_index int, query_id text, version UInt8, time float');

-- Separately process 'partial' queries which we could only run on the new server
-- Separately process backward-incompatible ('partial') queries which we could only run on the new server
-- because they use new functions. We can't make normal stats for them, but still
-- have to show some stats so that the PR author can tweak them.
create view partial_queries as select test, query_index

@ -518,7 +518,7 @@ IFS=$'\n'
for prefix in $(cut -f1,2 "analyze/query-run-metrics-for-stats.tsv" | sort | uniq)
do
file="analyze/tmp/${prefix// /_}.tsv"
grep "^$prefix " "analyze/query-run-metrics-for-stats.tsv" > "$file" &
rg "^$prefix " "analyze/query-run-metrics-for-stats.tsv" > "$file" &
printf "%s\0\n" \
"clickhouse-local \
--file \"$file\" \

@ -650,7 +650,7 @@ create view partial_query_times as select * from
'test text, query_index int, time_stddev float, time_median double')
;

-- Report for partial queries that we could only run on the new server (e.g.
-- Report for backward-incompatible ('partial') queries that we could only run on the new server (e.g.
-- queries with new functions added in the tested PR).
create table partial_queries_report engine File(TSV, 'report/partial-queries-report.tsv')
settings output_format_decimal_trailing_zeros = 1

@ -829,7 +829,7 @@ create view query_runs as select * from file('analyze/query-runs.tsv', TSV,
-- Guess the number of query runs used for this test. The number is required to
-- calculate and check the average query run time in the report.
-- We have to be careful, because we will encounter:
-- 1) partial queries which run only on one server
-- 1) backward-incompatible ('partial') queries which run only on one server
-- 3) some errors that make query run for a different number of times on a
-- particular server.
--

@ -1088,7 +1088,7 @@ do
# Build separate .svg flamegraph for each query.
# -F is somewhat unsafe because it might match not the beginning of the
# string, but this is unlikely and escaping the query for grep is a pain.
grep -F "$query " "report/stacks.$version.tsv" \
rg -F "$query " "report/stacks.$version.tsv" \
| cut -f 5- \
| sed 's/\t/ /g' \
| tee "report/tmp/$query_file.stacks.$version.tsv" \

@ -1117,7 +1117,7 @@ do
query_file=$(echo "$query" | cut -c-120 | sed 's/[/ ]/_/g')

# Ditto the above comment about -F.
grep -F "$query " "report/metric-deviation.$version.tsv" \
rg -F "$query " "report/metric-deviation.$version.tsv" \
| cut -f4- > "$query_file.$version.metrics.rep" &
done
done

@ -1132,8 +1132,8 @@ do
{
# The second grep is a heuristic for error messages like
# "socket.timeout: timed out".
grep -h -m2 -i '\(Exception\|Error\):[^:]' "$log" \
|| grep -h -m2 -i '^[^ ]\+: ' "$log" \
rg --no-filename --max-count=2 -i '\(Exception\|Error\):[^:]' "$log" \
|| rg --no-filename --max-count=2 -i '^[^ ]\+: ' "$log" \
|| head -2 "$log"
} | sed "s/^/$test\t/" >> run-errors.tsv ||:
done
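The grep-to-ripgrep substitutions throughout these hunks are close to mechanical; a hedged summary of the flag mapping this diff relies on, based on common ripgrep usage (rg searches recursively by default, so explicit file arguments are kept, as the diff does):

grep -F "needle " file.tsv      #  rg -F "needle " file.tsv            (fixed string)
grep -h -m2 -i 'pat' file.log   #  rg --no-filename --max-count=2 -i 'pat' file.log
grep -v '#' zone.tab            #  rg -v '#' zone.tab                  (invert match)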
@ -1180,7 +1180,7 @@ IFS=$'\n'
for prefix in $(cut -f1 "metrics/metrics.tsv" | sort | uniq)
do
file="metrics/$prefix.tsv"
grep "^$prefix " "metrics/metrics.tsv" | cut -f2- > "$file"
rg "^$prefix " "metrics/metrics.tsv" | cut -f2- > "$file"

gnuplot -e "
set datafile separator '\t';

@ -28,8 +28,8 @@ function download
# Historically there were various paths for the performance test package.
# Test all of them.
declare -a urls_to_try=(
"https://s3.amazonaws.com/clickhouse-builds/$left_pr/$left_sha/$BUILD_NAME/performance.tar.zst"
"https://s3.amazonaws.com/clickhouse-builds/$left_pr/$left_sha/$BUILD_NAME/performance.tgz"
"https://s3.amazonaws.com/clickhouse-builds/$left_pr/$left_sha/performance/performance.tgz"
)

for path in "${urls_to_try[@]}"

@ -45,7 +45,7 @@ function download
# download anything, for example in some manual runs. In this case, SHAs are not set.
if ! [ "$left_sha" = "$right_sha" ]
then
wget -nv -nd -c "$left_path" -O- | tar -C left --no-same-owner --strip-components=1 -zxv &
wget -nv -nd -c "$left_path" -O- | tar -C left --no-same-owner --strip-components=1 --zstd --extract --verbose &
elif [ "$right_sha" != "" ]
then
mkdir left ||:
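The tar flag change mirrors the package rename from .tgz to .tar.zst. A minimal round-trip sketch (hypothetical paths; assumes GNU tar built with --zstd support, i.e. version 1.31 or later with zstd installed):

mkdir -p demo dest && echo hi > demo/file.txt
tar --zstd --create --file=demo.tar.zst demo                 # was: tar -zcf demo.tar.gz demo
tar --zstd --extract --verbose --file=demo.tar.zst -C dest   # was: tar -zxv -f demo.tar.gz -C dest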
@ -60,7 +60,7 @@ function download
>&2 echo "Unknown dataset '$dataset_name'"
exit 1
fi
cd db0 && wget -nv -nd -c "$dataset_path" -O- | tar -xv &
cd db0 && wget -nv -nd -c "$dataset_path" -O- | tar --extract --verbose &
done

mkdir ~/fg ||:

@ -66,10 +66,8 @@ function find_reference_sha
# test all of them.
unset found
declare -a urls_to_try=(
"https://s3.amazonaws.com/clickhouse-builds/0/$REF_SHA/$BUILD_NAME/performance.tar.zst"
"https://s3.amazonaws.com/clickhouse-builds/0/$REF_SHA/$BUILD_NAME/performance.tgz"
# FIXME: the following link is left there for backward compatibility.
# We should remove it after 2022-11-01
"https://s3.amazonaws.com/clickhouse-builds/0/$REF_SHA/performance/performance.tgz"
)
for path in "${urls_to_try[@]}"
do

@ -94,13 +92,13 @@ chmod 777 workspace output
cd workspace

# Download the package for the version we are going to test.
if curl_with_retry "$S3_URL/$PR_TO_TEST/$SHA_TO_TEST$COMMON_BUILD_PREFIX/$BUILD_NAME/performance.tgz"
if curl_with_retry "$S3_URL/$PR_TO_TEST/$SHA_TO_TEST$COMMON_BUILD_PREFIX/$BUILD_NAME/performance.tar.zst"
then
right_path="$S3_URL/$PR_TO_TEST/$SHA_TO_TEST$COMMON_BUILD_PREFIX/$BUILD_NAME/performance.tgz"
right_path="$S3_URL/$PR_TO_TEST/$SHA_TO_TEST$COMMON_BUILD_PREFIX/$BUILD_NAME/performance.tar.zst"
fi

mkdir right
wget -nv -nd -c "$right_path" -O- | tar -C right --no-same-owner --strip-components=1 -zxv
wget -nv -nd -c "$right_path" -O- | tar -C right --no-same-owner --strip-components=1 --zstd --extract --verbose

# Find reference revision if not specified explicitly
if [ "$REF_SHA" == "" ]; then find_reference_sha; fi

@ -30,7 +30,7 @@ faster_queries = 0
slower_queries = 0
unstable_queries = 0
very_unstable_queries = 0
unstable_partial_queries = 0
unstable_backward_incompatible_queries = 0

# max seconds to run one query by itself, not counting preparation
allowed_single_run_time = 2

@ -378,13 +378,13 @@ if args.report == "main":
]
)

def add_partial():
def add_backward_incompatible():
rows = tsvRows("report/partial-queries-report.tsv")
if not rows:
return

global unstable_partial_queries, slow_average_tests, tables
text = tableStart("Partial Queries")
global unstable_backward_incompatible_queries, slow_average_tests, tables
text = tableStart("Backward-incompatible queries")
columns = ["Median time, s", "Relative time variance", "Test", "#", "Query"]
text += tableHeader(columns)
attrs = ["" for c in columns]

@ -392,7 +392,7 @@ if args.report == "main":
anchor = f"{currentTableAnchor()}.{row[2]}.{row[3]}"
if float(row[1]) > 0.10:
attrs[1] = f'style="background: {color_bad}"'
unstable_partial_queries += 1
unstable_backward_incompatible_queries += 1
errors_explained.append(
[
f"<a href=\"#{anchor}\">The query no. {row[3]} of test '{row[2]}' has excessive variance of run time. Keep it below 10%</a>"

@ -414,7 +414,7 @@ if args.report == "main":
text += tableEnd()
tables.append(text)

add_partial()
add_backward_incompatible()

def add_changes():
rows = tsvRows("report/changed-perf.tsv")

@ -630,8 +630,8 @@ if args.report == "main":
status = "failure"
message_array.append(str(slower_queries) + " slower")

if unstable_partial_queries:
very_unstable_queries += unstable_partial_queries
if unstable_backward_incompatible_queries:
very_unstable_queries += unstable_backward_incompatible_queries
status = "failure"

# Don't show mildly unstable queries, only the very unstable ones we

@ -5,12 +5,18 @@ FROM ubuntu:22.04
ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list

RUN apt-get update --yes && env DEBIAN_FRONTEND=noninteractive apt-get install wget unzip git default-jdk maven python3 --yes --no-install-recommends
RUN wget https://github.com/sqlancer/sqlancer/archive/master.zip -O /sqlancer.zip
RUN apt-get update --yes && \
env DEBIAN_FRONTEND=noninteractive apt-get install wget git default-jdk maven python3 --yes --no-install-recommends && \
apt-get clean

# We need to get the repository's HEAD each time despite, so we invalidate layers' cache
ARG CACHE_INVALIDATOR=0
RUN mkdir /sqlancer && \
cd /sqlancer && \
unzip /sqlancer.zip
RUN cd /sqlancer/sqlancer-master && mvn package -DskipTests
wget -q -O- https://github.com/sqlancer/sqlancer/archive/master.tar.gz | \
tar zx -C /sqlancer && \
cd /sqlancer/sqlancer-master && \
mvn package -DskipTests && \
rm -r /root/.m2

COPY run.sh /
COPY process_sqlancer_result.py /
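The ARG CACHE_INVALIDATOR line works as a cache-busting knob: changing a build argument invalidates that layer and everything after it, forcing a fresh download of sqlancer's master branch instead of reusing a stale cached layer. A hedged usage sketch (the image tag is hypothetical, not from the diff):

# Pass a fresh value (e.g. a timestamp) to re-fetch sqlancer master on each build.
docker build --build-arg CACHE_INVALIDATOR="$(date +%s)" -t clickhouse/sqlancer-test .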
@ -3,7 +3,7 @@
set -e -x

# Choose random timezone for this test run
TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
TZ="$(rg -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
echo "Choosen random timezone $TZ"
ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone

@ -152,21 +152,21 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
sudo clickhouse stop --pid-path /var/run/clickhouse-server2 ||:
fi

grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server.log ||:
rg -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server.log ||:

pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz ||:
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.zst ||:
# FIXME: remove once only github actions will be left
rm /var/log/clickhouse-server/clickhouse-server.log
mv /var/log/clickhouse-server/stderr.log /test_output/ ||:

if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
tar -chf /test_output/clickhouse_coverage.tar.gz /profraw ||:
tar --zstd -c -h -f /test_output/clickhouse_coverage.tar.zst /profraw ||:
fi
if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server1.log ||:
grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server2.log ||:
pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||:
pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||:
rg -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server1.log ||:
rg -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server2.log ||:
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.zst ||:
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.zst ||:
# FIXME: remove once only github actions will be left
rm /var/log/clickhouse-server/clickhouse-server1.log
rm /var/log/clickhouse-server/clickhouse-server2.log
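The pigz-to-zstd swap is a drop-in replacement in these pipelines: both tools read stdin and write a compressed stream, and zstd's --threads=0 auto-detects the core count much like pigz parallelizes gzip. A one-line sketch (hypothetical log name):

pigz             < server.log > server.log.gz    # before: parallel gzip
zstd --threads=0 < server.log > server.log.zst   # after: parallel zstd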
@ -4,7 +4,7 @@
set -e -x -a

# Choose random timezone for this test run.
TZ="$(grep -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
TZ="$(rg -v '#' /usr/share/zoneinfo/zone.tab | awk '{print $3}' | shuf | head -n1)"
echo "Choosen random timezone $TZ"
ln -snf "/usr/share/zoneinfo/$TZ" /etc/localtime && echo "$TZ" > /etc/timezone

@ -130,6 +130,8 @@ function run_tests()
ADDITIONAL_OPTIONS+=('--report-coverage')
fi

ADDITIONAL_OPTIONS+=('--report-logs-stats')

set +e
clickhouse-test --testname --shard --zookeeper --check-zookeeper-session --hung-check --print-time \
--test-runs "$NUM_TRIES" "${ADDITIONAL_OPTIONS[@]}" 2>&1 \

@ -167,8 +169,8 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
sudo clickhouse stop --pid-path /var/run/clickhouse-server2 ||:
fi

grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server.log ||:
pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.gz &
rg -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server.log ||:
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.zst &

# Compress tables.
#

@ -179,10 +181,10 @@ pigz < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhous
# for files >64MB, we want this files to be compressed explicitly
for table in query_log zookeeper_log trace_log transactions_info_log
do
clickhouse-local --path /var/lib/clickhouse/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | pigz > /test_output/$table.tsv.gz ||:
clickhouse-local --path /var/lib/clickhouse/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.tsv.zst ||:
if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
clickhouse-local --path /var/lib/clickhouse1/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | pigz > /test_output/$table.1.tsv.gz ||:
clickhouse-local --path /var/lib/clickhouse2/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | pigz > /test_output/$table.2.tsv.gz ||:
clickhouse-local --path /var/lib/clickhouse1/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.1.tsv.zst ||:
clickhouse-local --path /var/lib/clickhouse2/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.2.tsv.zst ||:
fi
done

@ -199,7 +201,7 @@ do
order by samples desc
settings allow_introspection_functions = 1
format TabSeparated" \
| pigz > "/test_output/trace-log-$trace_type-flamegraph.tsv.gz" ||:
| zstd --threads=0 > "/test_output/trace-log-$trace_type-flamegraph.tsv.zst" ||:
done

@ -207,16 +209,16 @@ done
rm /var/log/clickhouse-server/clickhouse-server.log
mv /var/log/clickhouse-server/stderr.log /test_output/ ||:
if [[ -n "$WITH_COVERAGE" ]] && [[ "$WITH_COVERAGE" -eq 1 ]]; then
tar -chf /test_output/clickhouse_coverage.tar.gz /profraw ||:
tar --zstd -chf /test_output/clickhouse_coverage.tar.zst /profraw ||:
fi

tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:

if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server1.log ||:
grep -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server2.log ||:
pigz < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.gz ||:
pigz < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.gz ||:
rg -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server1.log ||:
rg -Fa "Fatal" /var/log/clickhouse-server/clickhouse-server2.log ||:
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server1.log > /test_output/clickhouse-server1.log.zst ||:
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server2.log > /test_output/clickhouse-server2.log.zst ||:
# FIXME: remove once only github actions will be left
rm /var/log/clickhouse-server/clickhouse-server1.log
rm /var/log/clickhouse-server/clickhouse-server2.log

@ -22,7 +22,7 @@ Still alive
2018-10-22 13:49:16,195 Stress is ok
2018-10-22 13:49:16,195 Copying server log files
$ ls $HOME/test_result
clickhouse-server.err.log clickhouse-server.log.0.gz stderr.log stress_test_run_0.txt stress_test_run_11.txt stress_test_run_13.txt
clickhouse-server.err.log clickhouse-server.log.0.zst stderr.log stress_test_run_0.txt stress_test_run_11.txt stress_test_run_13.txt
stress_test_run_15.txt stress_test_run_2.txt stress_test_run_4.txt stress_test_run_6.txt stress_test_run_8.txt clickhouse-server.log
perf_stress_run.txt stdout.log stress_test_run_10.txt stress_test_run_12.txt
stress_test_run_14.txt stress_test_run_1.txt

@ -77,11 +77,12 @@ EOL

local max_users_mem
max_users_mem=$((total_mem*30/100)) # 30%
echo "Setting max_memory_usage_for_user=$max_users_mem"
echo "Setting max_memory_usage_for_user=$max_users_mem and max_memory_usage for queries to 10G"
cat > /etc/clickhouse-server/users.d/max_memory_usage_for_user.xml <<EOL
<clickhouse>
<profiles>
<default>
<max_memory_usage>10G</max_memory_usage>
<max_memory_usage_for_user>${max_users_mem}</max_memory_usage_for_user>
</default>
</profiles>
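The 30% computation above uses bash's integer-only, truncating arithmetic; a tiny check (the byte count is an assumed value for illustration):

total_mem=16000000000              # assumed total memory in bytes
echo $(( total_mem * 30 / 100 ))   # 4800000000, truncated integer division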
@ -127,18 +128,12 @@ EOL

function stop()
{
local max_tries="${1:-90}"
local pid
# Preserve the pid, since the server can hung after the PID will be deleted.
pid="$(cat /var/run/clickhouse-server/clickhouse-server.pid)"

clickhouse stop $max_tries --do-not-kill && return

if [ -n "$1" ]
then
# temporarily disable it in BC check
clickhouse stop --force
return
fi
clickhouse stop --max-tries "$max_tries" --do-not-kill && return

# We failed to stop the server with SIGTERM. Maybe it hang, let's collect stacktraces.
kill -TERM "$(pidof gdb)" ||:

@ -159,7 +154,7 @@ function start()
echo -e "Cannot start clickhouse-server\tFAIL" >> /test_output/test_results.tsv
cat /var/log/clickhouse-server/stdout.log
tail -n1000 /var/log/clickhouse-server/stderr.log
tail -n100000 /var/log/clickhouse-server/clickhouse-server.log | grep -F -v -e '<Warning> RaftInstance:' -e '<Information> RaftInstance' | tail -n1000
tail -n100000 /var/log/clickhouse-server/clickhouse-server.log | rg -F -v -e '<Warning> RaftInstance:' -e '<Information> RaftInstance' | tail -n1000
break
fi
# use root to match with current uid

@ -302,7 +297,7 @@ start

clickhouse-client --query "SELECT 'Server successfully started', 'OK'" >> /test_output/test_results.tsv \
|| (echo -e 'Server failed to start (see application_errors.txt and clickhouse-server.clean.log)\tFAIL' >> /test_output/test_results.tsv \
&& grep -a "<Error>.*Application" /var/log/clickhouse-server/clickhouse-server.log > /test_output/application_errors.txt)
&& rg --text "<Error>.*Application" /var/log/clickhouse-server/clickhouse-server.log > /test_output/application_errors.txt)

stop

@ -312,20 +307,20 @@ stop
# Grep logs for sanitizer asserts, crashes and other critical errors

# Sanitizer asserts
grep -Fa "==================" /var/log/clickhouse-server/stderr.log | grep -v "in query:" >> /test_output/tmp
grep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
zgrep -Fav -e "ASan doesn't fully support makecontext/swapcontext functions" -e "DB::Exception" /test_output/tmp > /dev/null \
rg -Fa "==================" /var/log/clickhouse-server/stderr.log | rg -v "in query:" >> /test_output/tmp
rg -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
rg -Fav -e "ASan doesn't fully support makecontext/swapcontext functions" -e "DB::Exception" /test_output/tmp > /dev/null \
&& echo -e 'Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'No sanitizer asserts\tOK' >> /test_output/test_results.tsv
rm -f /test_output/tmp

# OOM
zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server*.log > /dev/null \
rg -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server*.log > /dev/null \
&& echo -e 'OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

# Logical errors
zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server*.log > /test_output/logical_errors.txt \
rg -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server*.log > /test_output/logical_errors.txt \
&& echo -e 'Logical error thrown (see clickhouse-server.log or logical_errors.txt)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'No logical errors\tOK' >> /test_output/test_results.tsv

@ -333,7 +328,7 @@ zgrep -Fa "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-serve
[ -s /test_output/logical_errors.txt ] || rm /test_output/logical_errors.txt

# No such key errors
zgrep -Ea "Code: 499.*The specified key does not exist" /var/log/clickhouse-server/clickhouse-server*.log > /test_output/no_such_key_errors.txt \
rg --text "Code: 499.*The specified key does not exist" /var/log/clickhouse-server/clickhouse-server*.log > /test_output/no_such_key_errors.txt \
&& echo -e 'S3_ERROR No such key thrown (see clickhouse-server.log or no_such_key_errors.txt)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'No lost s3 keys\tOK' >> /test_output/test_results.tsv

@ -341,29 +336,29 @@ zgrep -Ea "Code: 499.*The specified key does not exist" /var/log/clickhouse-serv
[ -s /test_output/no_such_key_errors.txt ] || rm /test_output/no_such_key_errors.txt

# Crash
zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server*.log > /dev/null \
rg -Fa "########################################" /var/log/clickhouse-server/clickhouse-server*.log > /dev/null \
&& echo -e 'Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Not crashed\tOK' >> /test_output/test_results.tsv

# It also checks for crash without stacktrace (printed by watchdog)
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server*.log > /test_output/fatal_messages.txt \
rg -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server*.log > /test_output/fatal_messages.txt \
&& echo -e 'Fatal message in clickhouse-server.log (see fatal_messages.txt)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

# Remove file fatal_messages.txt if it's empty
[ -s /test_output/fatal_messages.txt ] || rm /test_output/fatal_messages.txt

zgrep -Fa "########################################" /test_output/* > /dev/null \
rg -Fa "########################################" /test_output/* > /dev/null \
&& echo -e 'Killed by signal (output files)\tFAIL' >> /test_output/test_results.tsv

zgrep -Fa " received signal " /test_output/gdb.log > /dev/null \
rg -Fa " received signal " /test_output/gdb.log > /dev/null \
&& echo -e 'Found signal in gdb.log\tFAIL' >> /test_output/test_results.tsv

if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
echo -e "Backward compatibility check\n"

echo "Get previous release tag"
previous_release_tag=$(clickhouse-client --version | grep -o "[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*" | get_previous_release_tag)
previous_release_tag=$(clickhouse-client --version | rg -o "[0-9]*\.[0-9]*\.[0-9]*\.[0-9]*" | get_previous_release_tag)
echo $previous_release_tag

echo "Clone previous release repository"

@ -378,7 +373,7 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/clickhouse-server.clean.log
for table in query_log trace_log
do
clickhouse-local --path /var/lib/clickhouse/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | pigz > /test_output/$table.tsv.gz ||:
clickhouse-local --path /var/lib/clickhouse/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.tsv.zst ||:
done

tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:

@ -464,7 +459,8 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
clickhouse stop --force
)

stop 1
# Use bigger timeout for previous version
stop 300
mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/clickhouse-server.backward.stress.log

# Start new server

@ -476,7 +472,7 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
start 500
clickhouse-client --query "SELECT 'Backward compatibility check: Server successfully started', 'OK'" >> /test_output/test_results.tsv \
|| (echo -e 'Backward compatibility check: Server failed to start\tFAIL' >> /test_output/test_results.tsv \
&& grep -a "<Error>.*Application" /var/log/clickhouse-server/clickhouse-server.log >> /test_output/bc_check_application_errors.txt)
&& rg --text "<Error>.*Application" /var/log/clickhouse-server/clickhouse-server.log >> /test_output/bc_check_application_errors.txt)

clickhouse-client --query="SELECT 'Server version: ', version()"

@ -496,7 +492,7 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
# ("This engine is deprecated and is not supported in transactions", "[Queue = DB::MergeMutateRuntimeQueue]: Code: 235. DB::Exception: Part")
# FIXME https://github.com/ClickHouse/ClickHouse/issues/39174 - bad mutation does not indicate backward incompatibility
echo "Check for Error messages in server log:"
zgrep -Fav -e "Code: 236. DB::Exception: Cancelled merging parts" \
rg -Fav -e "Code: 236. DB::Exception: Cancelled merging parts" \
-e "Code: 236. DB::Exception: Cancelled mutating parts" \
-e "REPLICA_IS_ALREADY_ACTIVE" \
-e "REPLICA_ALREADY_EXISTS" \

@ -532,7 +528,8 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
-e "MutateFromLogEntryTask" \
-e "No connection to ZooKeeper, cannot get shared table ID" \
-e "Session expired" \
/var/log/clickhouse-server/clickhouse-server.backward.dirty.log | zgrep -Fa "<Error>" > /test_output/bc_check_error_messages.txt \
-e "TOO_MANY_PARTS" \
/var/log/clickhouse-server/clickhouse-server.backward.dirty.log | rg -Fa "<Error>" > /test_output/bc_check_error_messages.txt \
&& echo -e 'Backward compatibility check: Error message in clickhouse-server.log (see bc_check_error_messages.txt)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Backward compatibility check: No Error messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

@ -540,21 +537,21 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
[ -s /test_output/bc_check_error_messages.txt ] || rm /test_output/bc_check_error_messages.txt

# Sanitizer asserts
zgrep -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
zgrep -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
zgrep -Fav -e "ASan doesn't fully support makecontext/swapcontext functions" -e "DB::Exception" /test_output/tmp > /dev/null \
rg -Fa "==================" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
rg -Fa "WARNING" /var/log/clickhouse-server/stderr.log >> /test_output/tmp
rg -Fav -e "ASan doesn't fully support makecontext/swapcontext functions" -e "DB::Exception" /test_output/tmp > /dev/null \
&& echo -e 'Backward compatibility check: Sanitizer assert (in stderr.log)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Backward compatibility check: No sanitizer asserts\tOK' >> /test_output/test_results.tsv
rm -f /test_output/tmp

# OOM
zgrep -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.backward.*.log > /dev/null \
rg -Fa " <Fatal> Application: Child process was terminated by signal 9" /var/log/clickhouse-server/clickhouse-server.backward.*.log > /dev/null \
&& echo -e 'Backward compatibility check: OOM killer (or signal 9) in clickhouse-server.log\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Backward compatibility check: No OOM messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

# Logical errors
echo "Check for Logical errors in server log:"
zgrep -Fa -A20 "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.backward.*.log > /test_output/bc_check_logical_errors.txt \
rg -Fa -A20 "Code: 49, e.displayText() = DB::Exception:" /var/log/clickhouse-server/clickhouse-server.backward.*.log > /test_output/bc_check_logical_errors.txt \
&& echo -e 'Backward compatibility check: Logical error thrown (see clickhouse-server.log or bc_check_logical_errors.txt)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Backward compatibility check: No logical errors\tOK' >> /test_output/test_results.tsv

@ -562,13 +559,13 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
[ -s /test_output/bc_check_logical_errors.txt ] || rm /test_output/bc_check_logical_errors.txt

# Crash
zgrep -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.backward.*.log > /dev/null \
rg -Fa "########################################" /var/log/clickhouse-server/clickhouse-server.backward.*.log > /dev/null \
&& echo -e 'Backward compatibility check: Killed by signal (in clickhouse-server.log)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Backward compatibility check: Not crashed\tOK' >> /test_output/test_results.tsv

# It also checks for crash without stacktrace (printed by watchdog)
echo "Check for Fatal message in server log:"
zgrep -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.backward.*.log > /test_output/bc_check_fatal_messages.txt \
rg -Fa " <Fatal> " /var/log/clickhouse-server/clickhouse-server.backward.*.log > /test_output/bc_check_fatal_messages.txt \
&& echo -e 'Backward compatibility check: Fatal message in clickhouse-server.log (see bc_check_fatal_messages.txt)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Backward compatibility check: No fatal messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv

@ -578,7 +575,7 @@ if [ "$DISABLE_BC_CHECK" -ne "1" ]; then
tar -chf /test_output/coordination.backward.tar /var/lib/clickhouse/coordination ||:
for table in query_log trace_log
do
clickhouse-local --path /var/lib/clickhouse/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | pigz > /test_output/$table.backward.tsv.gz ||:
clickhouse-local --path /var/lib/clickhouse/ --only-system-tables -q "select * from system.$table format TSVWithNamesAndTypes" | zstd --threads=0 > /test_output/$table.backward.tsv.zst ||:
done
fi
fi

@ -597,7 +594,7 @@ clickhouse-local --structure "test String, res String" -q "SELECT 'failure', tes
[ -s /test_output/check_status.tsv ] || echo -e "success\tNo errors found" > /test_output/check_status.tsv

# Core dumps
for core in core.*; do
pigz $core
mv $core.gz /test_output/
find . -type f -maxdepth 1 -name 'core.*' | while read core; do
zstd --threads=0 $core
mv $core.zst /test_output/
done
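Beyond the compressor swap, moving from a glob loop to find here likely also sidesteps a classic shell pitfall: with default options an unmatched glob stays literal, so the old loop would iterate once over the string core.* when no dumps exist. A hedged sketch (run in an empty directory, nullglob unset):

for core in core.*; do echo "got: $core"; done    # prints "got: core.*"
find . -maxdepth 1 -type f -name 'core.*' | while read -r core; do
  echo "got: $core"                               # prints nothing at all
done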
@ -89,7 +89,7 @@ def run_func_test(


def compress_stress_logs(output_path, files_prefix):
cmd = f"cd {output_path} && tar -zcf stress_run_logs.tar.gz {files_prefix}* && rm {files_prefix}*"
cmd = f"cd {output_path} && tar --zstd --create --file=stress_run_logs.tar.zst {files_prefix}* && rm {files_prefix}*"
check_output(cmd, shell=True)


@ -146,6 +146,12 @@ def prepare_for_hung_check(drop_databases):
"KILL QUERY WHERE query LIKE 'SELECT URL, uniq(SearchPhrase) AS u FROM test.hits GROUP BY URL ORDER BY u %'"
)
)
# Long query from 02136_kill_scalar_queries
call_with_retry(
make_query_command(
"KILL QUERY WHERE query LIKE 'SELECT (SELECT number FROM system.numbers WHERE number = 1000000000000)%'"
)
)

if drop_databases:
for i in range(5):

@ -289,6 +295,7 @@ if __name__ == "__main__":
"--database=system",
"--hung-check",
"--stress",
"--report-logs-stats",
"00001_select_1",
]
)

@ -6,6 +6,8 @@ import argparse
import csv


# TODO: add typing and log files to the fourth column, think about launching
# everything from the python and not bash
def process_result(result_folder):
status = "success"
description = ""

@ -126,12 +126,6 @@ Contribute all new information in English language. Other languages are translat

### Adding a New File

When you add a new file, it should end with a link like:

`[Original article](https://clickhouse.com/docs/<path-to-the-page>) <!--hide-->`

and there should be **a new empty line** after it.

{## When adding a new file:

- Make symbolic links for all other languages. You can use the following commands:

@ -194,4 +194,3 @@
* NO CL ENTRY: 'Revert "Test and doc for PR12771 krb5 + cyrus-sasl + kerberized kafka"'. [#15232](https://github.com/ClickHouse/ClickHouse/pull/15232) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Avoid deadlocks in Log/TinyLog"'. [#15259](https://github.com/ClickHouse/ClickHouse/pull/15259) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Bump mkdocs-macros-plugin from 0.4.13 to 0.4.17 in /docs/tools'. [#15460](https://github.com/ClickHouse/ClickHouse/pull/15460) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).

@ -177,4 +177,3 @@
* NO CL ENTRY: 'Revert "Test and doc for PR12771 krb5 + cyrus-sasl + kerberized kafka"'. [#15232](https://github.com/ClickHouse/ClickHouse/pull/15232) ([Alexander Tokmakov](https://github.com/tavplubix)).
* NO CL ENTRY: 'Revert "Avoid deadlocks in Log/TinyLog"'. [#15259](https://github.com/ClickHouse/ClickHouse/pull/15259) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* NO CL ENTRY: 'Bump mkdocs-macros-plugin from 0.4.13 to 0.4.17 in /docs/tools'. [#15460](https://github.com/ClickHouse/ClickHouse/pull/15460) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).

@ -14,4 +14,3 @@
* Backported in [#16374](https://github.com/ClickHouse/ClickHouse/issues/16374): Fix async Distributed INSERT w/ prefer_localhost_replica=0 and internal_replication. [#16358](https://github.com/ClickHouse/ClickHouse/pull/16358) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#16419](https://github.com/ClickHouse/ClickHouse/issues/16419): Fix group by with totals/rollup/cube modifers and min/max functions over group by keys. Fixes [#16393](https://github.com/ClickHouse/ClickHouse/issues/16393). [#16397](https://github.com/ClickHouse/ClickHouse/pull/16397) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#16448](https://github.com/ClickHouse/ClickHouse/issues/16448): Fix double free in case of exception in function `dictGet`. It could have happened if dictionary was loaded with error. [#16429](https://github.com/ClickHouse/ClickHouse/pull/16429) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).

@ -14,4 +14,3 @@
* Backported in [#16760](https://github.com/ClickHouse/ClickHouse/issues/16760): This will fix optimize_read_in_order/optimize_aggregation_in_order with max_threads>0 and expression in ORDER BY. [#16637](https://github.com/ClickHouse/ClickHouse/pull/16637) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#16741](https://github.com/ClickHouse/ClickHouse/issues/16741): Fix `IN` operator over several columns and tuples with enabled `transform_null_in` setting. Fixes [#15310](https://github.com/ClickHouse/ClickHouse/issues/15310). [#16722](https://github.com/ClickHouse/ClickHouse/pull/16722) ([Anton Popov](https://github.com/CurtizJ)).
* Backported in [#16893](https://github.com/ClickHouse/ClickHouse/issues/16893): Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

@ -15,4 +15,3 @@
* Backported in [#17038](https://github.com/ClickHouse/ClickHouse/issues/17038): Reresolve the IP of the `format_avro_schema_registry_url` in case of errors. [#16985](https://github.com/ClickHouse/ClickHouse/pull/16985) ([filimonov](https://github.com/filimonov)).
* Backported in [#17092](https://github.com/ClickHouse/ClickHouse/issues/17092): Fixed wrong result in big integers (128, 256 bit) when casting from double. [#16986](https://github.com/ClickHouse/ClickHouse/pull/16986) ([Mike Kot](https://github.com/myrrc)).
* Backported in [#17169](https://github.com/ClickHouse/ClickHouse/issues/17169): Fix bug when `ON CLUSTER` queries may hang forever for non-leader ReplicatedMergeTreeTables. [#17089](https://github.com/ClickHouse/ClickHouse/pull/17089) ([alesapin](https://github.com/alesapin)).

@ -28,4 +28,3 @@

#### Build/Testing/Packaging Improvement
* Backported in [#17289](https://github.com/ClickHouse/ClickHouse/issues/17289): Update embedded timezone data to version 2020d (also update cctz to the latest master). [#17204](https://github.com/ClickHouse/ClickHouse/pull/17204) ([filimonov](https://github.com/filimonov)).

@ -10,4 +10,3 @@
* Backported in [#18361](https://github.com/ClickHouse/ClickHouse/issues/18361): fixes [#18186](https://github.com/ClickHouse/ClickHouse/issues/18186) fixes [#16372](https://github.com/ClickHouse/ClickHouse/issues/16372) fix unique key convert crash in MaterializeMySQL database engine. [#18211](https://github.com/ClickHouse/ClickHouse/pull/18211) ([Winter Zhang](https://github.com/zhang2014)).
* Backported in [#18292](https://github.com/ClickHouse/ClickHouse/issues/18292): Fix key comparison between Enum and Int types. This fixes [#17989](https://github.com/ClickHouse/ClickHouse/issues/17989). [#18214](https://github.com/ClickHouse/ClickHouse/pull/18214) ([Amos Bird](https://github.com/amosbird)).
* Backported in [#18295](https://github.com/ClickHouse/ClickHouse/issues/18295): - Fixed issue when `clickhouse-odbc-bridge` process is unreachable by server on machines with dual IPv4/IPv6 stack; - Fixed issue when ODBC dictionary updates are performed using malformed queries and/or cause crashes; Possibly closes [#14489](https://github.com/ClickHouse/ClickHouse/issues/14489). [#18278](https://github.com/ClickHouse/ClickHouse/pull/18278) ([Denis Glazachev](https://github.com/traceon)).

@ -154,4 +154,3 @@
* NO CL ENTRY: 'minor fix.'. [#16335](https://github.com/ClickHouse/ClickHouse/pull/16335) ([Xianda Ke](https://github.com/kexianda)).
* NO CL ENTRY: 'Bump tornado from 5.1.1 to 6.1 in /docs/tools'. [#16590](https://github.com/ClickHouse/ClickHouse/pull/16590) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).
* NO CL ENTRY: 'Bump mkdocs-macros-plugin from 0.4.17 to 0.4.20 in /docs/tools'. [#16692](https://github.com/ClickHouse/ClickHouse/pull/16692) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).

@ -159,4 +159,3 @@
* NO CL ENTRY: 'minor fix.'. [#16335](https://github.com/ClickHouse/ClickHouse/pull/16335) ([Xianda Ke](https://github.com/kexianda)).
* NO CL ENTRY: 'Bump tornado from 5.1.1 to 6.1 in /docs/tools'. [#16590](https://github.com/ClickHouse/ClickHouse/pull/16590) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).
* NO CL ENTRY: 'Bump mkdocs-macros-plugin from 0.4.17 to 0.4.20 in /docs/tools'. [#16692](https://github.com/ClickHouse/ClickHouse/pull/16692) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).

@ -2,4 +2,3 @@

#### Bug Fix
* Backported in [#16891](https://github.com/ClickHouse/ClickHouse/issues/16891): Fix rare silent crashes when query profiler is on and ClickHouse is installed on OS with glibc version that has (supposedly) broken asynchronous unwind tables for some functions. This fixes [#15301](https://github.com/ClickHouse/ClickHouse/issues/15301). This fixes [#13098](https://github.com/ClickHouse/ClickHouse/issues/13098). [#16846](https://github.com/ClickHouse/ClickHouse/pull/16846) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

@ -17,4 +17,3 @@
* Backported in [#17127](https://github.com/ClickHouse/ClickHouse/issues/17127): Avoid unnecessary network errors for remote queries which may be cancelled while execution, like queries with `LIMIT`. [#17006](https://github.com/ClickHouse/ClickHouse/pull/17006) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#17132](https://github.com/ClickHouse/ClickHouse/issues/17132): Fixed crash on `CREATE TABLE ... AS some_table` query when `some_table` was created `AS table_function()` Fixes [#16944](https://github.com/ClickHouse/ClickHouse/issues/16944). [#17072](https://github.com/ClickHouse/ClickHouse/pull/17072) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Backported in [#17170](https://github.com/ClickHouse/ClickHouse/issues/17170): Fix bug when `ON CLUSTER` queries may hang forever for non-leader ReplicatedMergeTreeTables. [#17089](https://github.com/ClickHouse/ClickHouse/pull/17089) ([alesapin](https://github.com/alesapin)).

@ -30,4 +30,3 @@

#### Build/Testing/Packaging Improvement
* Backported in [#17290](https://github.com/ClickHouse/ClickHouse/issues/17290): Update embedded timezone data to version 2020d (also update cctz to the latest master). [#17204](https://github.com/ClickHouse/ClickHouse/pull/17204) ([filimonov](https://github.com/filimonov)).

@ -11,4 +11,3 @@
* Backported in [#18358](https://github.com/ClickHouse/ClickHouse/issues/18358): fixes [#18186](https://github.com/ClickHouse/ClickHouse/issues/18186) fixes [#16372](https://github.com/ClickHouse/ClickHouse/issues/16372) fix unique key convert crash in MaterializeMySQL database engine. [#18211](https://github.com/ClickHouse/ClickHouse/pull/18211) ([Winter Zhang](https://github.com/zhang2014)).
* Backported in [#18259](https://github.com/ClickHouse/ClickHouse/issues/18259): Fix key comparison between Enum and Int types. This fixes [#17989](https://github.com/ClickHouse/ClickHouse/issues/17989). [#18214](https://github.com/ClickHouse/ClickHouse/pull/18214) ([Amos Bird](https://github.com/amosbird)).
* Backported in [#18297](https://github.com/ClickHouse/ClickHouse/issues/18297): - Fixed issue when `clickhouse-odbc-bridge` process is unreachable by server on machines with dual IPv4/IPv6 stack; - Fixed issue when ODBC dictionary updates are performed using malformed queries and/or cause crashes; Possibly closes [#14489](https://github.com/ClickHouse/ClickHouse/issues/14489). [#18278](https://github.com/ClickHouse/ClickHouse/pull/18278) ([Denis Glazachev](https://github.com/traceon)).

@ -59,4 +59,3 @@

#### Build/Testing/Packaging Improvement
* Backported in [#18543](https://github.com/ClickHouse/ClickHouse/issues/18543): Update timezones info to 2020e. [#18531](https://github.com/ClickHouse/ClickHouse/pull/18531) ([alesapin](https://github.com/alesapin)).

@ -95,4 +95,3 @@
* NO CL ENTRY: 'Enabling existing testflows RBAC tests.'. [#16773](https://github.com/ClickHouse/ClickHouse/pull/16773) ([MyroTk](https://github.com/MyroTk)).
* NO CL ENTRY: 'Bump protobuf from 3.13.0 to 3.14.0 in /docs/tools'. [#17056](https://github.com/ClickHouse/ClickHouse/pull/17056) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).
* NO CL ENTRY: 'Fixed a problem with the translation of the document'. [#17218](https://github.com/ClickHouse/ClickHouse/pull/17218) ([qianmoQ](https://github.com/qianmoQ)).

@ -124,4 +124,3 @@
* NO CL ENTRY: 'Enabling existing testflows RBAC tests.'. [#16773](https://github.com/ClickHouse/ClickHouse/pull/16773) ([MyroTk](https://github.com/MyroTk)).
* NO CL ENTRY: 'Bump protobuf from 3.13.0 to 3.14.0 in /docs/tools'. [#17056](https://github.com/ClickHouse/ClickHouse/pull/17056) ([dependabot-preview[bot]](https://github.com/apps/dependabot-preview)).
* NO CL ENTRY: 'Fixed a problem with the translation of the document'. [#17218](https://github.com/ClickHouse/ClickHouse/pull/17218) ([qianmoQ](https://github.com/qianmoQ)).

@ -1,2 +1 @@
### ClickHouse release v20.12.3.3-stable FIXME as compared to v20.12.2.1-stable

@ -11,4 +11,3 @@
* Backported in [#18359](https://github.com/ClickHouse/ClickHouse/issues/18359): fixes [#18186](https://github.com/ClickHouse/ClickHouse/issues/18186) fixes [#16372](https://github.com/ClickHouse/ClickHouse/issues/16372) fix unique key convert crash in MaterializeMySQL database engine. [#18211](https://github.com/ClickHouse/ClickHouse/pull/18211) ([Winter Zhang](https://github.com/zhang2014)).
* Backported in [#18258](https://github.com/ClickHouse/ClickHouse/issues/18258): Fix key comparison between Enum and Int types. This fixes [#17989](https://github.com/ClickHouse/ClickHouse/issues/17989). [#18214](https://github.com/ClickHouse/ClickHouse/pull/18214) ([Amos Bird](https://github.com/amosbird)).
* Backported in [#18296](https://github.com/ClickHouse/ClickHouse/issues/18296): - Fixed issue when `clickhouse-odbc-bridge` process is unreachable by server on machines with dual IPv4/IPv6 stack; - Fixed issue when ODBC dictionary updates are performed using malformed queries and/or cause crashes; Possibly closes [#14489](https://github.com/ClickHouse/ClickHouse/issues/14489). [#18278](https://github.com/ClickHouse/ClickHouse/pull/18278) ([Denis Glazachev](https://github.com/traceon)).

@ -11,4 +11,3 @@

#### Build/Testing/Packaging Improvement
* Backported in [#18546](https://github.com/ClickHouse/ClickHouse/issues/18546): Update timezones info to 2020e. [#18531](https://github.com/ClickHouse/ClickHouse/pull/18531) ([alesapin](https://github.com/alesapin)).

@ -54,4 +54,3 @@
* Backported in [#19811](https://github.com/ClickHouse/ClickHouse/issues/19811): In previous versions, unusual arguments for function arrayEnumerateUniq may cause crash or infinite loop. This closes [#19787](https://github.com/ClickHouse/ClickHouse/issues/19787). [#19788](https://github.com/ClickHouse/ClickHouse/pull/19788) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#19941](https://github.com/ClickHouse/ClickHouse/issues/19941): Deadlock was possible if system.text_log is enabled. This fixes [#19874](https://github.com/ClickHouse/ClickHouse/issues/19874). [#19875](https://github.com/ClickHouse/ClickHouse/pull/19875) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Backported in [#19937](https://github.com/ClickHouse/ClickHouse/issues/19937): BloomFilter index crash fix. Fixes [#19757](https://github.com/ClickHouse/ClickHouse/issues/19757). [#19884](https://github.com/ClickHouse/ClickHouse/pull/19884) ([Maksim Kita](https://github.com/kitaisreal)).

@ -15,4 +15,3 @@
#### NO CL ENTRY

* NO CL ENTRY: 'Revert "Backport [#20224](https://github.com/ClickHouse/ClickHouse/issues/20224) to 20.12: Fix access control manager destruction order"'. [#20396](https://github.com/ClickHouse/ClickHouse/pull/20396) ([alesapin](https://github.com/alesapin)).

@ -6,4 +6,3 @@
* Backported in [#20617](https://github.com/ClickHouse/ClickHouse/issues/20617): Check if table function `view` is used in expression list and throw an error. This fixes [#20342](https://github.com/ClickHouse/ClickHouse/issues/20342). [#20350](https://github.com/ClickHouse/ClickHouse/pull/20350) ([Amos Bird](https://github.com/amosbird)).
* Backported in [#20487](https://github.com/ClickHouse/ClickHouse/issues/20487): Fix `LOGICAL_ERROR` for `join_use_nulls=1` when JOIN contains const from SELECT. [#20461](https://github.com/ClickHouse/ClickHouse/pull/20461) ([Azat Khuzhin](https://github.com/azat)).
* Backported in [#20614](https://github.com/ClickHouse/ClickHouse/issues/20614): Add proper checks while parsing directory names for async INSERT (fixes SIGSEGV). [#20498](https://github.com/ClickHouse/ClickHouse/pull/20498) ([Azat Khuzhin](https://github.com/azat)).
Some files were not shown because too many files have changed in this diff.