Mirror of https://github.com/ClickHouse/ClickHouse.git (synced 2024-11-25 17:12:03 +00:00)

Commit a72ee0b85e: Merge remote-tracking branch 'origin/master' into revert-46909-revert-45911-mutations_rename_hang

97 .clang-tidy
@@ -23,9 +23,12 @@ Checks: '*,
-bugprone-implicit-widening-of-multiplication-result,
-bugprone-narrowing-conversions,
-bugprone-not-null-terminated-result,
-bugprone-reserved-identifier,
-bugprone-unchecked-optional-access,

-cert-dcl16-c,
-cert-dcl37-c,
-cert-dcl51-cpp,
-cert-err58-cpp,
-cert-msc32-c,
-cert-msc51-cpp,
@@ -38,6 +41,8 @@ Checks: '*,
-clang-analyzer-security.insecureAPI.strcpy,

-cppcoreguidelines-avoid-c-arrays,
-cppcoreguidelines-avoid-const-or-ref-data-members,
-cppcoreguidelines-avoid-do-while,
-cppcoreguidelines-avoid-goto,
-cppcoreguidelines-avoid-magic-numbers,
-cppcoreguidelines-avoid-non-const-global-variables,
@@ -105,6 +110,7 @@ Checks: '*,
-misc-const-correctness,
-misc-no-recursion,
-misc-non-private-member-variables-in-classes,
-misc-confusable-identifiers, # useful but slooow

-modernize-avoid-c-arrays,
-modernize-concat-nested-namespaces,
@@ -125,10 +131,12 @@ Checks: '*,
-portability-simd-intrinsics,

-readability-braces-around-statements,
-readability-convert-member-functions-to-static,
-readability-else-after-return,
-readability-function-cognitive-complexity,
-readability-function-size,
-readability-identifier-length,
-readability-identifier-naming,
-readability-implicit-bool-conversion,
-readability-isolate-declaration,
-readability-magic-numbers,
@@ -141,73 +149,32 @@ Checks: '*,
-readability-use-anyofallof,

-zirkon-*,

-misc-*, # temporarily disabled due to being too slow
# also disable checks in other categories which are aliases of checks in misc-*:
# https://releases.llvm.org/15.0.0/tools/clang/tools/extra/docs/clang-tidy/checks/list.html
-cert-dcl54-cpp, # alias of misc-new-delete-overloads
-hicpp-new-delete-operators, # alias of misc-new-delete-overloads
-cert-fio38-c, # alias of misc-non-copyable-objects
-cert-dcl03-c, # alias of misc-static-assert
-hicpp-static-assert, # alias of misc-static-assert
-cert-err09-cpp, # alias of misc-throw-by-value-catch-by-reference
-cert-err61-cpp, # alias of misc-throw-by-value-catch-by-reference
-cppcoreguidelines-c-copy-assignment-signature, # alias of misc-unconventional-assign-operator
-cppcoreguidelines-non-private-member-variables-in-classes, # alias of misc-non-private-member-variables-in-classes
'

WarningsAsErrors: '*'

# TODO: use dictionary syntax for CheckOptions when minimum clang-tidy level rose to 15
# some-check.SomeOption: 'some value'
# instead of
# - key: some-check.SomeOption
#   value: 'some value'
CheckOptions:
- key: readability-identifier-naming.ClassCase
value: CamelCase
- key: readability-identifier-naming.EnumCase
value: CamelCase
- key: readability-identifier-naming.LocalVariableCase
value: lower_case
- key: readability-identifier-naming.StaticConstantCase
value: aNy_CasE
- key: readability-identifier-naming.MemberCase
value: lower_case
- key: readability-identifier-naming.PrivateMemberPrefix
value: ''
- key: readability-identifier-naming.ProtectedMemberPrefix
value: ''
- key: readability-identifier-naming.PublicMemberCase
value: lower_case
- key: readability-identifier-naming.MethodCase
value: camelBack
- key: readability-identifier-naming.PrivateMethodPrefix
value: ''
- key: readability-identifier-naming.ProtectedMethodPrefix
value: ''
- key: readability-identifier-naming.ParameterPackCase
value: lower_case
- key: readability-identifier-naming.StructCase
value: CamelCase
- key: readability-identifier-naming.TemplateTemplateParameterCase
value: CamelCase
- key: readability-identifier-naming.TemplateUsingCase
value: lower_case
- key: readability-identifier-naming.TypeTemplateParameterCase
value: CamelCase
- key: readability-identifier-naming.TypedefCase
value: CamelCase
- key: readability-identifier-naming.UnionCase
value: CamelCase
- key: readability-identifier-naming.UsingCase
value: CamelCase
- key: modernize-loop-convert.UseCxx20ReverseRanges
value: false
- key: performance-move-const-arg.CheckTriviallyCopyableMove
value: false
# Workaround clang-tidy bug: https://github.com/llvm/llvm-project/issues/46097
- key: readability-identifier-naming.TypeTemplateParameterIgnoredRegexp
value: expr-type
- key: cppcoreguidelines-avoid-do-while.IgnoreMacros
value: true
readability-identifier-naming.ClassCase: CamelCase
readability-identifier-naming.EnumCase: CamelCase
readability-identifier-naming.LocalVariableCase: lower_case
readability-identifier-naming.StaticConstantCase: aNy_CasE
readability-identifier-naming.MemberCase: lower_case
readability-identifier-naming.PrivateMemberPrefix: ''
readability-identifier-naming.ProtectedMemberPrefix: ''
readability-identifier-naming.PublicMemberCase: lower_case
readability-identifier-naming.MethodCase: camelBack
readability-identifier-naming.PrivateMethodPrefix: ''
readability-identifier-naming.ProtectedMethodPrefix: ''
readability-identifier-naming.ParameterPackCase: lower_case
readability-identifier-naming.StructCase: CamelCase
readability-identifier-naming.TemplateTemplateParameterCase: CamelCase
readability-identifier-naming.TemplateUsingCase: lower_case
readability-identifier-naming.TypeTemplateParameterCase: CamelCase
readability-identifier-naming.TypedefCase: CamelCase
readability-identifier-naming.UnionCase: CamelCase
readability-identifier-naming.UsingCase: CamelCase
modernize-loop-convert.UseCxx20ReverseRanges: false
performance-move-const-arg.CheckTriviallyCopyableMove: false
# Workaround clang-tidy bug: https://github.com/llvm/llvm-project/issues/46097
readability-identifier-naming.TypeTemplateParameterIgnoredRegexp: expr-type
cppcoreguidelines-avoid-do-while.IgnoreMacros: true
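The hunk above replaces the old list-style `CheckOptions` (the `- key:` / `value:` pairs) with the compact dictionary syntax available from clang-tidy 15 onward, resolving the TODO in the removed block. As a small illustrative sketch (not part of the commit; requires PyYAML), both spellings parse to the same mapping:

```python
# Illustrative only: show that the old and new CheckOptions syntaxes are equivalent.
import yaml  # pip install pyyaml

old_style = yaml.safe_load("""
CheckOptions:
  - key: readability-identifier-naming.ClassCase
    value: CamelCase
""")
new_style = yaml.safe_load("""
CheckOptions:
  readability-identifier-naming.ClassCase: CamelCase
""")

# Normalize the old list-of-pairs form into a plain dict and compare.
normalized = {e["key"]: e["value"] for e in old_style["CheckOptions"]}
assert normalized == new_style["CheckOptions"]
print(normalized)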
2 .github/PULL_REQUEST_TEMPLATE.md (vendored)
@@ -7,7 +7,7 @@ tests/ci/run_check.py
### Changelog category (leave one):
- New Feature
- Improvement
- Bug Fix (user-visible misbehavior in official stable or prestable release)
- Bug Fix (user-visible misbehavior in an official stable release)
- Performance Improvement
- Backward Incompatible Change
- Build/Testing/Packaging Improvement
25 .github/workflows/backport_branches.yml (vendored)
@@ -9,8 +9,22 @@ on: # yamllint disable-line rule:truthy
branches:
- 'backport/**'
jobs:
CheckLabels:
runs-on: [self-hosted, style-checker]
# Run the first check always, even if the CI is cancelled
if: ${{ always() }}
steps:
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Labels check
run: |
cd "$GITHUB_WORKSPACE/tests/ci"
python3 run_check.py
PythonUnitTests:
runs-on: [self-hosted, style-checker]
needs: CheckLabels
steps:
- name: Check out repository code
uses: ClickHouse/checkout@v1
@@ -22,6 +36,7 @@ jobs:
python3 -m unittest discover -s . -p '*_test.py'
DockerHubPushAarch64:
runs-on: [self-hosted, style-checker-aarch64]
needs: CheckLabels
steps:
- name: Check out repository code
uses: ClickHouse/checkout@v1
@@ -38,6 +53,7 @@ jobs:
path: ${{ runner.temp }}/docker_images_check/changed_images_aarch64.json
DockerHubPushAmd64:
runs-on: [self-hosted, style-checker]
needs: CheckLabels
steps:
- name: Check out repository code
uses: ClickHouse/checkout@v1
@@ -333,6 +349,13 @@ jobs:
with:
clear-repository: true
submodules: true
- name: Apply sparse checkout for contrib # in order to check that it doesn't break build
run: |
rm -rf "$GITHUB_WORKSPACE/contrib" && echo 'removed'
git -C "$GITHUB_WORKSPACE" checkout . && echo 'restored'
"$GITHUB_WORKSPACE/contrib/update-submodules.sh" && echo 'OK'
du -hs "$GITHUB_WORKSPACE/contrib" ||:
find "$GITHUB_WORKSPACE/contrib" -type f | wc -l ||:
- name: Build
run: |
sudo rm -fr "$TEMP_PATH"
@@ -454,7 +477,7 @@ jobs:
cd "$GITHUB_WORKSPACE/tests/ci"
python3 docker_server.py --release-type head --no-push \
--image-repo clickhouse/clickhouse-server --image-path docker/server
python3 docker_server.py --release-type head --no-push --no-ubuntu \
python3 docker_server.py --release-type head --no-push \
--image-repo clickhouse/clickhouse-keeper --image-path docker/keeper
- name: Cleanup
if: always()
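The "Apply sparse checkout for contrib" step added in this and the following workflows can be reproduced locally. A rough Python equivalent of the shell step above (assuming you run it from a full ClickHouse checkout; the wrapper itself is illustrative):

```python
# Rough local equivalent of the CI step above; assumes a ClickHouse checkout.
import os
import shutil
import subprocess

workspace = os.environ.get("GITHUB_WORKSPACE", os.getcwd())
contrib = os.path.join(workspace, "contrib")

shutil.rmtree(contrib, ignore_errors=True)  # rm -rf "$GITHUB_WORKSPACE/contrib"
subprocess.run(["git", "-C", workspace, "checkout", "."], check=True)  # restore the tree
subprocess.run([os.path.join(contrib, "update-submodules.sh")], check=True)  # sparse re-clone

total = sum(len(files) for _, _, files in os.walk(contrib))
print(f"{total} files under contrib")  # mirrors the `find ... | wc -l` sanity check
```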
796 .github/workflows/master.yml (vendored)
@@ -487,6 +487,13 @@ jobs:
with:
clear-repository: true
submodules: true
- name: Apply sparse checkout for contrib # in order to check that it doesn't break build
run: |
rm -rf "$GITHUB_WORKSPACE/contrib" && echo 'removed'
git -C "$GITHUB_WORKSPACE" checkout . && echo 'restored'
"$GITHUB_WORKSPACE/contrib/update-submodules.sh" && echo 'OK'
du -hs "$GITHUB_WORKSPACE/contrib" ||:
find "$GITHUB_WORKSPACE/contrib" -type f | wc -l ||:
- name: Build
run: |
sudo rm -fr "$TEMP_PATH"
@@ -862,7 +869,7 @@ jobs:
cd "$GITHUB_WORKSPACE/tests/ci"
python3 docker_server.py --release-type head \
--image-repo clickhouse/clickhouse-server --image-path docker/server
python3 docker_server.py --release-type head --no-ubuntu \
python3 docker_server.py --release-type head \
--image-repo clickhouse/clickhouse-keeper --image-path docker/keeper
- name: Cleanup
if: always()
@@ -1131,7 +1138,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_database_replicated/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1167,6 +1174,114 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_database_replicated/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestReleaseDatabaseReplicated2:
needs: [BuilderDebRelease]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_database_replicated
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (release, DatabaseReplicated)
REPO_COPY=${{runner.temp}}/stateless_database_replicated/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestReleaseDatabaseReplicated3:
needs: [BuilderDebRelease]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_database_replicated
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (release, DatabaseReplicated)
REPO_COPY=${{runner.temp}}/stateless_database_replicated/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestReleaseS3_0:
needs: [BuilderDebRelease]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_s3_storage
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (release, s3 storage)
REPO_COPY=${{runner.temp}}/stateless_s3_storage/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=2
EOF
- name: Download json reports
@@ -1190,7 +1305,7 @@ jobs:
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestReleaseS3:
FunctionalStatelessTestReleaseS3_1:
needs: [BuilderDebRelease]
runs-on: [self-hosted, func-tester]
steps:
@@ -1202,6 +1317,8 @@ jobs:
CHECK_NAME=Stateless tests (release, s3 storage)
REPO_COPY=${{runner.temp}}/stateless_s3_storage/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=2
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1271,7 +1388,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1307,7 +1424,79 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestAsan2:
needs: [BuilderDebAsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_debug
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (asan)
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestAsan3:
needs: [BuilderDebAsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_debug
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (asan)
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1343,7 +1532,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_tsan/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1379,7 +1568,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_tsan/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1415,7 +1604,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_tsan/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1438,7 +1627,79 @@ jobs:
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestUBsan:
FunctionalStatelessTestTsan3:
needs: [BuilderDebTsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_tsan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (tsan)
REPO_COPY=${{runner.temp}}/stateless_tsan/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestTsan4:
needs: [BuilderDebTsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_tsan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (tsan)
REPO_COPY=${{runner.temp}}/stateless_tsan/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=4
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestUBsan0:
needs: [BuilderDebUBsan]
runs-on: [self-hosted, func-tester]
steps:
@@ -1450,6 +1711,44 @@ jobs:
CHECK_NAME=Stateless tests (ubsan)
REPO_COPY=${{runner.temp}}/stateless_ubsan/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=2
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestUBsan1:
needs: [BuilderDebUBsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_ubsan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (ubsan)
REPO_COPY=${{runner.temp}}/stateless_ubsan/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=2
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1485,7 +1784,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_memory/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1521,7 +1820,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_memory/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1557,7 +1856,115 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_memory/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestMsan3:
needs: [BuilderDebMsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_memory
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (msan)
REPO_COPY=${{runner.temp}}/stateless_memory/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestMsan4:
needs: [BuilderDebMsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_memory
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (msan)
REPO_COPY=${{runner.temp}}/stateless_memory/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=4
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestMsan5:
needs: [BuilderDebMsan]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_memory
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (msan)
REPO_COPY=${{runner.temp}}/stateless_memory/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=5
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1593,7 +2000,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1629,7 +2036,7 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -1665,7 +2072,79 @@ jobs:
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestDebug3:
needs: [BuilderDebDebug]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_debug
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (debug)
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Functional test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 functional_test_check.py "$CHECK_NAME" "$KILL_TIMEOUT"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
FunctionalStatelessTestDebug4:
needs: [BuilderDebDebug]
runs-on: [self-hosted, func-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/stateless_debug
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Stateless tests (debug)
REPO_COPY=${{runner.temp}}/stateless_debug/ClickHouse
KILL_TIMEOUT=10800
RUN_BY_HASH_NUM=4
RUN_BY_HASH_TOTAL=5
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2116,7 +2595,7 @@ jobs:
CHECK_NAME=Integration tests (asan)
REPO_COPY=${{runner.temp}}/integration_tests_asan/ClickHouse
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2151,7 +2630,7 @@ jobs:
CHECK_NAME=Integration tests (asan)
REPO_COPY=${{runner.temp}}/integration_tests_asan/ClickHouse
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2186,7 +2665,112 @@ jobs:
CHECK_NAME=Integration tests (asan)
REPO_COPY=${{runner.temp}}/integration_tests_asan/ClickHouse
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Integration test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 integration_test_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
IntegrationTestsAsan3:
needs: [BuilderDebAsan]
runs-on: [self-hosted, stress-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/integration_tests_asan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Integration tests (asan)
REPO_COPY=${{runner.temp}}/integration_tests_asan/ClickHouse
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Integration test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 integration_test_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
IntegrationTestsAsan4:
needs: [BuilderDebAsan]
runs-on: [self-hosted, stress-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/integration_tests_asan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Integration tests (asan)
REPO_COPY=${{runner.temp}}/integration_tests_asan/ClickHouse
RUN_BY_HASH_NUM=4
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Integration test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 integration_test_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
IntegrationTestsAsan5:
needs: [BuilderDebAsan]
runs-on: [self-hosted, stress-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/integration_tests_asan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Integration tests (asan)
REPO_COPY=${{runner.temp}}/integration_tests_asan/ClickHouse
RUN_BY_HASH_NUM=5
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2221,7 +2805,7 @@ jobs:
CHECK_NAME=Integration tests (tsan)
REPO_COPY=${{runner.temp}}/integration_tests_tsan/ClickHouse
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=4
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2256,7 +2840,7 @@ jobs:
CHECK_NAME=Integration tests (tsan)
REPO_COPY=${{runner.temp}}/integration_tests_tsan/ClickHouse
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=4
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2291,7 +2875,7 @@ jobs:
CHECK_NAME=Integration tests (tsan)
REPO_COPY=${{runner.temp}}/integration_tests_tsan/ClickHouse
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=4
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2326,7 +2910,77 @@ jobs:
CHECK_NAME=Integration tests (tsan)
REPO_COPY=${{runner.temp}}/integration_tests_tsan/ClickHouse
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=4
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Integration test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 integration_test_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
IntegrationTestsTsan4:
needs: [BuilderDebTsan]
runs-on: [self-hosted, stress-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/integration_tests_tsan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Integration tests (tsan)
REPO_COPY=${{runner.temp}}/integration_tests_tsan/ClickHouse
RUN_BY_HASH_NUM=4
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Integration test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 integration_test_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
IntegrationTestsTsan5:
needs: [BuilderDebTsan]
runs-on: [self-hosted, stress-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/integration_tests_tsan
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Integration tests (tsan)
REPO_COPY=${{runner.temp}}/integration_tests_tsan/ClickHouse
RUN_BY_HASH_NUM=5
RUN_BY_HASH_TOTAL=6
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2361,7 +3015,7 @@ jobs:
CHECK_NAME=Integration tests (release)
REPO_COPY=${{runner.temp}}/integration_tests_release/ClickHouse
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -2396,7 +3050,77 @@ jobs:
CHECK_NAME=Integration tests (release)
REPO_COPY=${{runner.temp}}/integration_tests_release/ClickHouse
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Integration test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 integration_test_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
IntegrationTestsRelease2:
needs: [BuilderDebRelease]
runs-on: [self-hosted, stress-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/integration_tests_release
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Integration tests (release)
REPO_COPY=${{runner.temp}}/integration_tests_release/ClickHouse
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
with:
path: ${{ env.REPORTS_PATH }}
- name: Check out repository code
uses: ClickHouse/checkout@v1
with:
clear-repository: true
- name: Integration test
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 integration_test_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
docker ps --quiet | xargs --no-run-if-empty docker kill ||:
docker ps --all --quiet | xargs --no-run-if-empty docker rm -f ||:
sudo rm -fr "$TEMP_PATH"
IntegrationTestsRelease3:
needs: [BuilderDebRelease]
runs-on: [self-hosted, stress-tester]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/integration_tests_release
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Integration tests (release)
REPO_COPY=${{runner.temp}}/integration_tests_release/ClickHouse
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v3
@@ -3116,23 +3840,36 @@ jobs:
- FunctionalStatelessTestDebug0
- FunctionalStatelessTestDebug1
- FunctionalStatelessTestDebug2
- FunctionalStatelessTestDebug3
- FunctionalStatelessTestDebug4
- FunctionalStatelessTestRelease
- FunctionalStatelessTestReleaseDatabaseOrdinary
- FunctionalStatelessTestReleaseDatabaseReplicated0
- FunctionalStatelessTestReleaseDatabaseReplicated1
- FunctionalStatelessTestReleaseDatabaseReplicated2
- FunctionalStatelessTestReleaseDatabaseReplicated3
- FunctionalStatelessTestAarch64
- FunctionalStatelessTestAsan0
- FunctionalStatelessTestAsan1
- FunctionalStatelessTestAsan2
- FunctionalStatelessTestAsan3
- FunctionalStatelessTestTsan0
- FunctionalStatelessTestTsan1
- FunctionalStatelessTestTsan2
- FunctionalStatelessTestTsan3
- FunctionalStatelessTestTsan4
- FunctionalStatelessTestMsan0
- FunctionalStatelessTestMsan1
- FunctionalStatelessTestMsan2
- FunctionalStatelessTestUBsan
- FunctionalStatelessTestMsan3
- FunctionalStatelessTestMsan4
- FunctionalStatelessTestMsan5
- FunctionalStatelessTestUBsan0
- FunctionalStatelessTestUBsan1
- FunctionalStatefulTestDebug
- FunctionalStatefulTestRelease
- FunctionalStatelessTestReleaseS3
- FunctionalStatelessTestReleaseS3_0
- FunctionalStatelessTestReleaseS3_1
- FunctionalStatefulTestAarch64
- FunctionalStatefulTestAsan
- FunctionalStatefulTestTsan
@@ -3146,12 +3883,19 @@ jobs:
- IntegrationTestsAsan0
- IntegrationTestsAsan1
- IntegrationTestsAsan2
- IntegrationTestsAsan3
- IntegrationTestsAsan4
- IntegrationTestsAsan5
- IntegrationTestsRelease0
- IntegrationTestsRelease1
- IntegrationTestsRelease2
- IntegrationTestsRelease3
- IntegrationTestsTsan0
- IntegrationTestsTsan1
- IntegrationTestsTsan2
- IntegrationTestsTsan3
- IntegrationTestsTsan4
- IntegrationTestsTsan5
- PerformanceComparisonX86-0
- PerformanceComparisonX86-1
- PerformanceComparisonX86-2
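A recurring theme in this file is raising `RUN_BY_HASH_TOTAL` (2 to 4, 3 to 5, 3 or 4 to 6) and adding the matching numbered job copies. As a hedged sketch of the mechanism (the real logic lives in `tests/ci/functional_test_check.py` and may differ in detail): each runner hashes every test name and keeps only the bucket matching its `RUN_BY_HASH_NUM`.

```python
# Hedged sketch of hash-based test sharding; not the actual tests/ci code.
import zlib

def bucket_of(test_name: str, total: int) -> int:
    """Stable bucket assignment: the same test always lands on the same runner."""
    return zlib.crc32(test_name.encode()) % total

def my_tests(all_tests, num: int, total: int):
    return [t for t in all_tests if bucket_of(t, total) == num]

tests = ["00001_select", "00002_insert", "00003_alter", "00004_join", "00005_union"]
for num in range(4):  # RUN_BY_HASH_TOTAL=4, as in the DatabaseReplicated jobs above
    print(f"runner {num}: {my_tests(tests, num, 4)}")
```

Raising the total while adding jobs `N..TOTAL-1` keeps each shard smaller without any central scheduler: the partition is a pure function of the test name.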
9 .github/workflows/pull_request.yml (vendored)
@@ -550,6 +550,13 @@ jobs:
with:
clear-repository: true
submodules: true
- name: Apply sparse checkout for contrib # in order to check that it doesn't break build
run: |
rm -rf "$GITHUB_WORKSPACE/contrib" && echo 'removed'
git -C "$GITHUB_WORKSPACE" checkout . && echo 'restored'
"$GITHUB_WORKSPACE/contrib/update-submodules.sh" && echo 'OK'
du -hs "$GITHUB_WORKSPACE/contrib" ||:
find "$GITHUB_WORKSPACE/contrib" -type f | wc -l ||:
- name: Build
run: |
sudo rm -fr "$TEMP_PATH"
@@ -918,7 +925,7 @@ jobs:
cd "$GITHUB_WORKSPACE/tests/ci"
python3 docker_server.py --release-type head --no-push \
--image-repo clickhouse/clickhouse-server --image-path docker/server
python3 docker_server.py --release-type head --no-push --no-ubuntu \
python3 docker_server.py --release-type head --no-push \
--image-repo clickhouse/clickhouse-keeper --image-path docker/keeper
- name: Cleanup
if: always()
2 .github/workflows/release.yml (vendored)
@@ -55,7 +55,7 @@ jobs:
cd "$GITHUB_WORKSPACE/tests/ci"
python3 docker_server.py --release-type auto --version "$GITHUB_TAG" \
--image-repo clickhouse/clickhouse-server --image-path docker/server
python3 docker_server.py --release-type auto --version "$GITHUB_TAG" --no-ubuntu \
python3 docker_server.py --release-type auto --version "$GITHUB_TAG" \
--image-repo clickhouse/clickhouse-keeper --image-path docker/keeper
- name: Cleanup
if: always()
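This change mirrors the workflows above: the keeper image is now built with the same `docker_server.py` arguments as the server image, with the `--no-ubuntu` flag dropped. A hypothetical wrapper (the wrapper itself is illustrative; the arguments are quoted from the diff) showing the updated pair of release invocations:

```python
# Illustrative wrapper around the two invocations quoted in the diff above.
import os
import subprocess

def build_image(repo: str, path: str) -> None:
    subprocess.run(
        ["python3", "docker_server.py",
         "--release-type", "auto", "--version", os.environ["GITHUB_TAG"],
         "--image-repo", repo, "--image-path", path],
        check=True,
    )

build_image("clickhouse/clickhouse-server", "docker/server")
build_image("clickhouse/clickhouse-keeper", "docker/keeper")  # previously passed --no-ubuntu
```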
9 .github/workflows/release_branches.yml (vendored)
@@ -406,6 +406,13 @@ jobs:
with:
clear-repository: true
submodules: true
- name: Apply sparse checkout for contrib # in order to check that it doesn't break build
run: |
rm -rf "$GITHUB_WORKSPACE/contrib" && echo 'removed'
git -C "$GITHUB_WORKSPACE" checkout . && echo 'restored'
"$GITHUB_WORKSPACE/contrib/update-submodules.sh" && echo 'OK'
du -hs "$GITHUB_WORKSPACE/contrib" ||:
find "$GITHUB_WORKSPACE/contrib" -type f | wc -l ||:
- name: Build
run: |
sudo rm -fr "$TEMP_PATH"
@@ -527,7 +534,7 @@ jobs:
cd "$GITHUB_WORKSPACE/tests/ci"
python3 docker_server.py --release-type head --no-push \
--image-repo clickhouse/clickhouse-server --image-path docker/server
python3 docker_server.py --release-type head --no-push --no-ubuntu \
python3 docker_server.py --release-type head --no-push \
--image-repo clickhouse/clickhouse-keeper --image-path docker/keeper
- name: Cleanup
if: always()
187 CHANGELOG.md
|
||||
### Table of Contents
|
||||
**[ClickHouse release v23.3 LTS, 2023-03-30](#233)**<br/>
|
||||
**[ClickHouse release v23.2, 2023-02-23](#232)**<br/>
|
||||
**[ClickHouse release v23.1, 2023-01-25](#231)**<br/>
|
||||
**[Changelog for 2022](https://clickhouse.com/docs/en/whats-new/changelog/2022/)**<br/>
|
||||
|
||||
# 2023 Changelog
|
||||
|
||||
### <a id="233"></a> ClickHouse release 23.3 LTS, 2023-03-30
|
||||
|
||||
#### Upgrade Notes
|
||||
* Lightweight DELETEs are production ready and enabled by default. The `DELETE` query for MergeTree tables is now available by default.
|
||||
* The behavior of `*domain*RFC` and `netloc` functions is slightly changed: relaxed the set of symbols that are allowed in the URL authority for better conformance. [#46841](https://github.com/ClickHouse/ClickHouse/pull/46841) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Prohibited creating tables based on KafkaEngine with DEFAULT/EPHEMERAL/ALIAS/MATERIALIZED statements for columns. [#47138](https://github.com/ClickHouse/ClickHouse/pull/47138) ([Aleksandr Musorin](https://github.com/AVMusorin)).
|
||||
* An "asynchronous connection drain" feature is removed. Related settings and metrics are removed as well. It was an internal feature, so the removal should not affect users who had never heard about that feature. [#47486](https://github.com/ClickHouse/ClickHouse/pull/47486) ([Alexander Tokmakov](https://github.com/tavplubix)).
|
||||
* Support 256-bit Decimal data type (more than 38 digits) in `arraySum`/`Min`/`Max`/`Avg`/`Product`, `arrayCumSum`/`CumSumNonNegative`, `arrayDifference`, array construction, IN operator, query parameters, `groupArrayMovingSum`, statistical functions, `min`/`max`/`any`/`argMin`/`argMax`, PostgreSQL wire protocol, MySQL table engine and function, `sumMap`, `mapAdd`, `mapSubtract`, `arrayIntersect`. Add support for big integers in `arrayIntersect`. Statistical aggregate functions involving moments (such as `corr` or various `TTest`s) will use `Float64` as their internal representation (they were using `Decimal128` before this change, but it was pointless), and these functions can return `nan` instead of `inf` in case of infinite variance. Some functions were allowed on `Decimal256` data types but returned `Decimal128` in previous versions - now it is fixed. This closes [#47569](https://github.com/ClickHouse/ClickHouse/issues/47569). This closes [#44864](https://github.com/ClickHouse/ClickHouse/issues/44864). This closes [#28335](https://github.com/ClickHouse/ClickHouse/issues/28335). [#47594](https://github.com/ClickHouse/ClickHouse/pull/47594) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
|
||||
* Make backup_threads/restore_threads server settings (instead of user settings). [#47881](https://github.com/ClickHouse/ClickHouse/pull/47881) ([Azat Khuzhin](https://github.com/azat)).
|
||||
* Do not allow const and non-deterministic secondary indices [#46839](https://github.com/ClickHouse/ClickHouse/pull/46839) ([Anton Popov](https://github.com/CurtizJ)).
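For the first upgrade note above: lightweight `DELETE` on MergeTree tables now works out of the box, with no experimental setting required. A minimal illustration using the third-party `clickhouse-driver` package (table name and data are invented for the example):

```python
# Minimal illustration with the third-party clickhouse-driver package;
# assumes a local server, and the table/data here are invented.
from clickhouse_driver import Client

client = Client("localhost")
client.execute("""
    CREATE TABLE IF NOT EXISTS demo_events (id UInt64, payload String)
    ENGINE = MergeTree ORDER BY id
""")
client.execute("INSERT INTO demo_events VALUES", [(1, "keep"), (2, "drop")])
client.execute("DELETE FROM demo_events WHERE payload = 'drop'")  # lightweight delete
print(client.execute("SELECT * FROM demo_events"))  # only (1, 'keep') remains
```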
#### New Feature
|
||||
* Add a new mode for splitting the work on replicas using settings `parallel_replicas_custom_key` and `parallel_replicas_custom_key_filter_type`. If the cluster consists of a single shard with multiple replicas, up to `max_parallel_replicas` will be randomly picked and turned into shards. For each shard, a corresponding filter is added to the query on the initiator before being sent to the shard. If the cluster consists of multiple shards, it will behave the same as `sample_key` but with the possibility to define an arbitrary key. [#45108](https://github.com/ClickHouse/ClickHouse/pull/45108) ([Antonio Andelic](https://github.com/antonio2368)).
|
||||
* An option to display partial result on cancel: Added query setting `partial_result_on_first_cancel` allowing the canceled query (e.g. due to Ctrl-C) to return a partial result. [#45689](https://github.com/ClickHouse/ClickHouse/pull/45689) ([Alexey Perevyshin](https://github.com/alexX512)).
|
||||
* Added support of arbitrary tables engines for temporary tables (except for Replicated and KeeperMap engines). Close [#31497](https://github.com/ClickHouse/ClickHouse/issues/31497). [#46071](https://github.com/ClickHouse/ClickHouse/pull/46071) ([Roman Vasin](https://github.com/rvasin)).
|
||||
* Add support for replication of user-defined SQL functions using centralized storage in Keeper. [#46085](https://github.com/ClickHouse/ClickHouse/pull/46085) ([Aleksei Filatov](https://github.com/aalexfvk)).
|
||||
* Implement `system.server_settings` (similar to `system.settings`), which will contain server configurations. [#46550](https://github.com/ClickHouse/ClickHouse/pull/46550) ([pufit](https://github.com/pufit)).
|
||||
* Support for `UNDROP TABLE` query. Closes [#46811](https://github.com/ClickHouse/ClickHouse/issues/46811). [#47241](https://github.com/ClickHouse/ClickHouse/pull/47241) ([chen](https://github.com/xiedeyantu)).
|
||||
* Allow separate grants for named collections (e.g. to be able to give `SHOW/CREATE/ALTER/DROP named collection` access only to certain collections, instead of all at once). Closes [#40894](https://github.com/ClickHouse/ClickHouse/issues/40894). Add new access type `NAMED_COLLECTION_CONTROL` which is not given to user default unless explicitly added to the user config (is required to be able to do `GRANT ALL`), also `show_named_collections` is no longer obligatory to be manually specified for user default to be able to have full access rights as was in 23.2. [#46241](https://github.com/ClickHouse/ClickHouse/pull/46241) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Allow nested custom disks. Previously custom disks supported only flat disk structure. [#47106](https://github.com/ClickHouse/ClickHouse/pull/47106) ([Kseniia Sumarokova](https://github.com/kssenii)).
|
||||
* Introduce a function `widthBucket` (with a `WIDTH_BUCKET` alias for compatibility). [#42974](https://github.com/ClickHouse/ClickHouse/issues/42974). [#46790](https://github.com/ClickHouse/ClickHouse/pull/46790) ([avoiderboi](https://github.com/avoiderboi)).
|
||||
* Add new function `parseDateTime`/`parseDateTimeInJodaSyntax` according to the specified format string. parseDateTime parses String to DateTime in MySQL syntax, parseDateTimeInJodaSyntax parses in Joda syntax. [#46815](https://github.com/ClickHouse/ClickHouse/pull/46815) ([李扬](https://github.com/taiyang-li)).
|
||||
* Use `dummy UInt8` for the default structure of table function `null`. Closes [#46930](https://github.com/ClickHouse/ClickHouse/issues/46930). [#47006](https://github.com/ClickHouse/ClickHouse/pull/47006) ([flynn](https://github.com/ucasfl)).
|
||||
* Support for date format with a comma, like `Dec 15, 2021` in the `parseDateTimeBestEffort` function. Closes [#46816](https://github.com/ClickHouse/ClickHouse/issues/46816). [#47071](https://github.com/ClickHouse/ClickHouse/pull/47071) ([chen](https://github.com/xiedeyantu)).
|
||||
* Add settings `http_wait_end_of_query` and `http_response_buffer_size` that corresponds to URL params `wait_end_of_query` and `buffer_size` for the HTTP interface. This allows changing these settings in the profiles. [#47108](https://github.com/ClickHouse/ClickHouse/pull/47108) ([Vladimir C](https://github.com/vdimir)).
|
||||
* Add `system.dropped_tables` table that shows tables that were dropped from `Atomic` databases but were not completely removed yet. [#47364](https://github.com/ClickHouse/ClickHouse/pull/47364) ([chen](https://github.com/xiedeyantu)).
|
||||
* Add `INSTR` as alias of `positionCaseInsensitive` for MySQL compatibility. Closes [#47529](https://github.com/ClickHouse/ClickHouse/issues/47529). [#47535](https://github.com/ClickHouse/ClickHouse/pull/47535) ([flynn](https://github.com/ucasfl)).
|
||||
* Added `toDecimalString` function allowing to convert numbers to string with fixed precision. [#47838](https://github.com/ClickHouse/ClickHouse/pull/47838) ([Andrey Zvonov](https://github.com/zvonand)).
|
||||
* Add a merge tree setting `max_number_of_mutations_for_replica`. It limits the number of part mutations per replica to the specified amount. Zero means no limit on the number of mutations per replica (the execution can still be constrained by other settings). [#48047](https://github.com/ClickHouse/ClickHouse/pull/48047) ([Vladimir C](https://github.com/vdimir)).
|
||||
* Add the Map-related function `mapFromArrays`, which allows the creation of a map from a pair of arrays. [#31125](https://github.com/ClickHouse/ClickHouse/pull/31125) ([李扬](https://github.com/taiyang-li)).
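  For example:

  ```sql
  -- Keys come from the first array, values from the second:
  SELECT mapFromArrays(['a', 'b', 'c'], [1, 2, 3]) AS m; -- {'a':1,'b':2,'c':3}
  ```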
* Allow control of compression in Parquet/ORC/Arrow output formats, and add support for more compression methods in input formats. This closes [#13541](https://github.com/ClickHouse/ClickHouse/issues/13541). [#47114](https://github.com/ClickHouse/ClickHouse/pull/47114) ([Kruglov Pavel](https://github.com/Avogar)).
* Add SSL User Certificate authentication to the native protocol. Closes [#47077](https://github.com/ClickHouse/ClickHouse/issues/47077). [#47596](https://github.com/ClickHouse/ClickHouse/pull/47596) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add `*OrNull` and `*OrZero` variants for `parseDateTime`, and add the alias `str_to_date` for MySQL parity. [#48000](https://github.com/ClickHouse/ClickHouse/pull/48000) ([Robert Schulze](https://github.com/rschu1ze)).
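  A short sketch of the lenient variants:

  ```sql
  SELECT parseDateTimeOrNull('not a date', '%Y-%m-%d'); -- NULL instead of an exception
  SELECT str_to_date('2021-01-04', '%Y-%m-%d');         -- MySQL-compatible alias
  ```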
* Added an operator `REGEXP` (similar to the operators `LIKE`, `IN`, `MOD` etc.) for better compatibility with MySQL. [#47869](https://github.com/ClickHouse/ClickHouse/pull/47869) ([Robert Schulze](https://github.com/rschu1ze)).
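  For example (equivalent to the `match` function):

  ```sql
  SELECT 'Hello World' REGEXP '^[Hh]ello' AS matched; -- 1
  ```
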
#### Performance Improvement

* Marks in memory are now compressed, using 3-6x less memory. [#47290](https://github.com/ClickHouse/ClickHouse/pull/47290) ([Michael Kolupaev](https://github.com/al13n321)).
* Backups for large numbers of files were unbelievably slow in previous versions. Not anymore. Now they are unbelievably fast. [#47251](https://github.com/ClickHouse/ClickHouse/pull/47251) ([Alexey Milovidov](https://github.com/alexey-milovidov)). Introduced a separate thread pool for backup IO operations. This allows scaling it independently of other pools and increases performance. [#47174](https://github.com/ClickHouse/ClickHouse/pull/47174) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). Use MultiRead requests and retries for collecting metadata at the final stage of backup processing. [#47243](https://github.com/ClickHouse/ClickHouse/pull/47243) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)). If both the backup and the data being restored are in S3, server-side copy is used from now on. [#47546](https://github.com/ClickHouse/ClickHouse/pull/47546) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fixed excessive reading in queries with `FINAL`. [#47801](https://github.com/ClickHouse/ClickHouse/pull/47801) ([Nikita Taranov](https://github.com/nickitat)).
* The setting `max_final_threads` is now set to the number of CPU cores at server startup (by the same algorithm as used for `max_threads`). This improves the concurrency of `FINAL` execution on servers with a high number of CPUs. [#47915](https://github.com/ClickHouse/ClickHouse/pull/47915) ([Nikita Taranov](https://github.com/nickitat)).
* Allow executing the reading pipeline for a `DIRECT` dictionary with a `CLICKHOUSE` source in multiple threads. To enable, set `dictionary_use_async_executor=1` in the `SETTINGS` section of the source in the `CREATE DICTIONARY` statement. [#47986](https://github.com/ClickHouse/ClickHouse/pull/47986) ([Vladimir C](https://github.com/vdimir)).
* Optimize the performance of aggregation with a single nullable key. [#45772](https://github.com/ClickHouse/ClickHouse/pull/45772) ([LiuNeng](https://github.com/liuneng1994)).
* Implemented lowercase `tokenbf_v1` index utilization for `hasTokenOrNull`, `hasTokenCaseInsensitive` and `hasTokenCaseInsensitiveOrNull`. [#46252](https://github.com/ClickHouse/ClickHouse/pull/46252) ([ltrk2](https://github.com/ltrk2)).
* Optimize functions `position` and `LIKE` by searching the first two chars using SIMD. [#46289](https://github.com/ClickHouse/ClickHouse/pull/46289) ([Jiebin Sun](https://github.com/jiebinn)).
* Optimize queries over `system.detached_parts`, which can be significantly large. Added several sources with respect to the block size limitation; in each block, an IO thread pool is used to calculate the part size, i.e. to make syscalls in parallel. [#46624](https://github.com/ClickHouse/ClickHouse/pull/46624) ([Sema Checherinda](https://github.com/CheSema)).
* Increase the default value of `max_replicated_merges_in_queue` for ReplicatedMergeTree tables from 16 to 1000. It allows faster background merge operation on clusters with a very large number of replicas, such as clusters with shared storage in ClickHouse Cloud. [#47050](https://github.com/ClickHouse/ClickHouse/pull/47050) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Updated `clickhouse-copier` to use `GROUP BY` instead of `DISTINCT` to get the list of partitions. For large tables, this reduced the select time from over 500s to under 1s. [#47386](https://github.com/ClickHouse/ClickHouse/pull/47386) ([Clayton McClure](https://github.com/cmcclure-twilio)).
* Fix performance degradation in `ASOF JOIN`. [#47544](https://github.com/ClickHouse/ClickHouse/pull/47544) ([Ongkong](https://github.com/ongkong)).
* Even more batching in Keeper. Improve performance by avoiding breaking batches on read requests. [#47978](https://github.com/ClickHouse/ClickHouse/pull/47978) ([Antonio Andelic](https://github.com/antonio2368)).
* Allow PREWHERE for Merge tables with different DEFAULT expressions for columns. [#46831](https://github.com/ClickHouse/ClickHouse/pull/46831) ([Azat Khuzhin](https://github.com/azat)).

#### Experimental Feature

* Parallel replicas: improved the overall performance by better utilizing the local replica; reading with parallel replicas from non-replicated MergeTree tables is now forbidden by default. [#47858](https://github.com/ClickHouse/ClickHouse/pull/47858) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
* Support filter push-down to the left table for JOIN with `Join`, `Dictionary` and `EmbeddedRocksDB` tables if the experimental Analyzer is enabled. [#47280](https://github.com/ClickHouse/ClickHouse/pull/47280) ([Maksim Kita](https://github.com/kitaisreal)).
* ReplicatedMergeTree with zero-copy replication now puts less load on Keeper. [#47676](https://github.com/ClickHouse/ClickHouse/pull/47676) ([alesapin](https://github.com/alesapin)).
* Fix creating a materialized view with MaterializedPostgreSQL. [#40807](https://github.com/ClickHouse/ClickHouse/pull/40807) ([Maksim Buren](https://github.com/maks-buren630501)).

#### Improvement

* Enable `input_format_json_ignore_unknown_keys_in_named_tuple` by default. [#46742](https://github.com/ClickHouse/ClickHouse/pull/46742) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow errors to be ignored while pushing to a MATERIALIZED VIEW (a new setting `materialized_views_ignore_errors`, `false` by default, but set to `true` unconditionally for flushing logs to `system.*_log` tables). [#46658](https://github.com/ClickHouse/ClickHouse/pull/46658) ([Azat Khuzhin](https://github.com/azat)).
* Track the file queue of distributed sends in memory. [#45491](https://github.com/ClickHouse/ClickHouse/pull/45491) ([Azat Khuzhin](https://github.com/azat)).
* The `X-ClickHouse-Query-Id` and `X-ClickHouse-Timezone` headers are now added to responses for all queries via the HTTP protocol. Previously this was done only for `SELECT` queries. [#46364](https://github.com/ClickHouse/ClickHouse/pull/46364) ([Anton Popov](https://github.com/CurtizJ)).
* External tables from `MongoDB`: support for connection to a replica set via a URI with a host:port list, and support for the `readPreference` option in MongoDB dictionaries. Example URI: `mongodb://db0.example.com:27017,db1.example.com:27017,db2.example.com:27017/?replicaSet=myRepl&readPreference=primary`. [#46524](https://github.com/ClickHouse/ClickHouse/pull/46524) ([artem-yadr](https://github.com/artem-yadr)).
* This improvement should be invisible to users. Re-implement projection analysis on top of the query plan. Added the setting `query_plan_optimize_projection` to switch between the old and new versions. Fixes [#44963](https://github.com/ClickHouse/ClickHouse/issues/44963). [#46537](https://github.com/ClickHouse/ClickHouse/pull/46537) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Use Parquet format v2 instead of v1 in the output format by default. Added the setting `output_format_parquet_version` to control the parquet version; possible values are `1.0`, `2.4`, `2.6`, `2.latest` (default). [#46617](https://github.com/ClickHouse/ClickHouse/pull/46617) ([Kruglov Pavel](https://github.com/Avogar)).
* It is now possible to use the new configuration syntax to configure Kafka topics with periods (`.`) in their name. [#46752](https://github.com/ClickHouse/ClickHouse/pull/46752) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix heuristics that check hyperscan patterns for problematic repeats. [#46819](https://github.com/ClickHouse/ClickHouse/pull/46819) ([Robert Schulze](https://github.com/rschu1ze)).
* Don't report `ZK node exists` to `system.errors` when a block was created concurrently by a different replica. [#46820](https://github.com/ClickHouse/ClickHouse/pull/46820) ([Raúl Marín](https://github.com/Algunenano)).
* Increase the limit for opened files in `clickhouse-local`. It will be able to read from `web` tables on servers with a huge number of CPU cores. Do not back off reading from the URL table engine in case of too many opened files. This closes [#46852](https://github.com/ClickHouse/ClickHouse/issues/46852). [#46853](https://github.com/ClickHouse/ClickHouse/pull/46853) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Exceptions thrown when numbers cannot be parsed now have an easier-to-read exception message. [#46917](https://github.com/ClickHouse/ClickHouse/pull/46917) ([Robert Schulze](https://github.com/rschu1ze)).
* Update `system.backups` after every processed task to track the progress of backups. [#46989](https://github.com/ClickHouse/ClickHouse/pull/46989) ([Aleksandr Musorin](https://github.com/AVMusorin)).
* Allow type conversion in the Native input format. Added the setting `input_format_native_allow_types_conversion` that controls it (enabled by default). [#46990](https://github.com/ClickHouse/ClickHouse/pull/46990) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow IPv4 in the `range` function to generate IP ranges. [#46995](https://github.com/ClickHouse/ClickHouse/pull/46995) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
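  A hypothetical sketch (assuming the usual end-exclusive `range` semantics carry over to IPv4 values):

  ```sql
  SELECT range(toIPv4('192.168.0.1'), toIPv4('192.168.0.5'));
  ```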
* Improve the exception message when it's impossible to move a part from one volume/disk to another. [#47032](https://github.com/ClickHouse/ClickHouse/pull/47032) ([alesapin](https://github.com/alesapin)).
* Support `Bool` type in the `JSONType` function. Previously `Null` type was mistakenly returned for bool values. [#47046](https://github.com/ClickHouse/ClickHouse/pull/47046) ([Anton Popov](https://github.com/CurtizJ)).
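  For example:

  ```sql
  SELECT JSONType('{"flag": true}', 'flag'); -- 'Bool' (was 'Null' before this fix)
  ```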
* Use the `_request_body` parameter to configure predefined HTTP queries. [#47086](https://github.com/ClickHouse/ClickHouse/pull/47086) ([Constantine Peresypkin](https://github.com/pkit)).
* Automatic indentation in the built-in UI SQL editor when Enter is pressed. [#47113](https://github.com/ClickHouse/ClickHouse/pull/47113) ([Alexey Korepanov](https://github.com/alexkorep)).
* Self-extraction with `sudo` will attempt to set the uid and gid of extracted files to the running user. [#47116](https://github.com/ClickHouse/ClickHouse/pull/47116) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Previously, the `repeat` function's second argument only accepted an unsigned integer type, which meant it could not accept values such as -1. This behavior differed from that of the Spark function. In this update, the `repeat` function has been modified to match the behavior of the Spark function. It now accepts the same types of inputs, including negative integers. Extensive testing has been performed to verify the correctness of the updated implementation. [#47134](https://github.com/ClickHouse/ClickHouse/pull/47134) ([KevinyhZou](https://github.com/KevinyhZou)). Note: the changelog entry was rewritten by ChatGPT.
* Remove the `::__1` part from stacktraces. Display `std::basic_string<char, ...` as `String` in stacktraces. [#47171](https://github.com/ClickHouse/ClickHouse/pull/47171) ([Mike Kot](https://github.com/myrrc)).
* Reimplement interserver mode to avoid replay attacks (note that this change is backward compatible with older servers). [#47213](https://github.com/ClickHouse/ClickHouse/pull/47213) ([Azat Khuzhin](https://github.com/azat)).
* Improve recognition of regular expression groups and refine the regexp_tree dictionary. [#47218](https://github.com/ClickHouse/ClickHouse/pull/47218) ([Han Fei](https://github.com/hanfei1991)).
* Keeper improvement: add a new 4LW `clrs` to clean resources used by Keeper (e.g. release unused memory). [#47256](https://github.com/ClickHouse/ClickHouse/pull/47256) ([Antonio Andelic](https://github.com/antonio2368)).
* Add optional arguments to codecs `DoubleDelta(bytes_size)`, `Gorilla(bytes_size)`, `FPC(level, float_size)`; this allows using these codecs without a column type in `clickhouse-compressor`. Fix possible aborts and arithmetic errors in `clickhouse-compressor` with these codecs. Fixes [#47262](https://github.com/ClickHouse/ClickHouse/discussions/47262). [#47271](https://github.com/ClickHouse/ClickHouse/pull/47271) ([Kruglov Pavel](https://github.com/Avogar)).
* Add support for big int types to the `runningDifference` function. Closes [#47194](https://github.com/ClickHouse/ClickHouse/issues/47194). [#47322](https://github.com/ClickHouse/ClickHouse/pull/47322) ([Nikolay Degterinsky](https://github.com/evillique)).
* Add an expiration window for S3 credentials that have an expiration time to avoid `ExpiredToken` errors in some edge cases. It can be controlled with the `expiration_window_seconds` config; the default is 120 seconds. [#47423](https://github.com/ClickHouse/ClickHouse/pull/47423) ([Antonio Andelic](https://github.com/antonio2368)).
* Support Decimals and Date32 in the `Avro` format. [#47434](https://github.com/ClickHouse/ClickHouse/pull/47434) ([Kruglov Pavel](https://github.com/Avogar)).
* Do not start the server if an interrupted conversion from `Ordinary` to `Atomic` was detected; print a better error message with troubleshooting instructions. [#47487](https://github.com/ClickHouse/ClickHouse/pull/47487) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Add a new column `kind` to `system.opentelemetry_span_log`. This column holds the value of [SpanKind](https://opentelemetry.io/docs/reference/specification/trace/api/#spankind) defined in OpenTelemetry. [#47499](https://github.com/ClickHouse/ClickHouse/pull/47499) ([Frank Chen](https://github.com/FrankChen021)).
* Allow reading/writing nested arrays in `Protobuf` format with only the root field name as the column name. Previously, the column name had to contain all nested field names (like `a.b.c Array(Array(Array(UInt32)))`); now you can use just `a Array(Array(Array(UInt32)))`. [#47650](https://github.com/ClickHouse/ClickHouse/pull/47650) ([Kruglov Pavel](https://github.com/Avogar)).
* Added an optional `STRICT` modifier for `SYSTEM SYNC REPLICA` which makes the query wait for the replication queue to become empty (just like it worked before https://github.com/ClickHouse/ClickHouse/pull/45648). [#47659](https://github.com/ClickHouse/ClickHouse/pull/47659) ([Alexander Tokmakov](https://github.com/tavplubix)).
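  A short sketch (`db.table` is a placeholder):

  ```sql
  -- Waits until the replication queue of this replica becomes completely empty:
  SYSTEM SYNC REPLICA db.table STRICT;
  ```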
* Improve the naming of some OpenTelemetry span logs. [#47667](https://github.com/ClickHouse/ClickHouse/pull/47667) ([Frank Chen](https://github.com/FrankChen021)).
* Prevent using too long chains of aggregate function combinators (they can lead to slow queries in the analysis stage). This closes [#47715](https://github.com/ClickHouse/ClickHouse/issues/47715). [#47716](https://github.com/ClickHouse/ClickHouse/pull/47716) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Support for subqueries in parameterized views; resolves [#46741](https://github.com/ClickHouse/ClickHouse/issues/46741). [#47725](https://github.com/ClickHouse/ClickHouse/pull/47725) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix a memory leak in the MySQL integration (reproduces with `connection_auto_close=1`). [#47732](https://github.com/ClickHouse/ClickHouse/pull/47732) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Improved error handling in the code related to Decimal parameters, resulting in more informative error messages. Previously, when incorrect Decimal parameters were supplied, the error message generated was unclear or unhelpful. With this update, the error message printed has been fixed to provide more detailed and useful information, making it easier to identify and correct issues related to Decimal parameters. [#47812](https://github.com/ClickHouse/ClickHouse/pull/47812) ([Yu Feng](https://github.com/Vigor-jpg)). Note: this changelog entry is rewritten by ChatGPT.
* The setting `exact_rows_before_limit` makes the `rows_before_limit_at_least` statistic accurately reflect the number of rows returned before the limit was reached. This pull request addresses issues encountered when the query involves distributed processing across multiple shards or sorting operations; prior to this update, these scenarios did not work as intended. [#47874](https://github.com/ClickHouse/ClickHouse/pull/47874) ([Amos Bird](https://github.com/amosbird)).
* Added introspection of thread pool metrics. [#47880](https://github.com/ClickHouse/ClickHouse/pull/47880) ([Azat Khuzhin](https://github.com/azat)).
* Add `WriteBufferFromS3Microseconds` and `WriteBufferFromS3RequestsErrors` profile events. [#47885](https://github.com/ClickHouse/ClickHouse/pull/47885) ([Antonio Andelic](https://github.com/antonio2368)).
* Add `--link` and `--noninteractive` (`-y`) options to the ClickHouse installer. Closes [#47750](https://github.com/ClickHouse/ClickHouse/issues/47750). [#47887](https://github.com/ClickHouse/ClickHouse/pull/47887) ([Nikolay Degterinsky](https://github.com/evillique)).
* Fixed the `UNKNOWN_TABLE` exception when attaching to a materialized view that has dependent tables that are not available. This might be useful when trying to restore state from a backup. [#47975](https://github.com/ClickHouse/ClickHouse/pull/47975) ([MikhailBurdukov](https://github.com/MikhailBurdukov)).
* Fix the case when an (optional) path is not added to an encrypted disk configuration. [#47981](https://github.com/ClickHouse/ClickHouse/pull/47981) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Support for CTEs in parameterized views. Implementation: updated to allow query parameters while evaluating scalar subqueries. [#48065](https://github.com/ClickHouse/ClickHouse/pull/48065) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Support big integers `(U)Int128`/`(U)Int256`, `Map` with any key type, and `DateTime64` with any precision (not only 3 and 6). [#48119](https://github.com/ClickHouse/ClickHouse/pull/48119) ([Kruglov Pavel](https://github.com/Avogar)).
* Allow skipping errors related to unknown enum values in row input formats. [#48133](https://github.com/ClickHouse/ClickHouse/pull/48133) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

#### Build/Testing/Packaging Improvement

* ClickHouse now builds with `C++23`. [#47424](https://github.com/ClickHouse/ClickHouse/pull/47424) ([Robert Schulze](https://github.com/rschu1ze)).
* Fuzz `EXPLAIN` queries in the AST Fuzzer. [#47803](https://github.com/ClickHouse/ClickHouse/pull/47803) [#47852](https://github.com/ClickHouse/ClickHouse/pull/47852) ([flynn](https://github.com/ucasfl)).
* Split the stress test and the automated backward compatibility check (now the Upgrade check). [#44879](https://github.com/ClickHouse/ClickHouse/pull/44879) ([Kruglov Pavel](https://github.com/Avogar)).
* Updated the Ubuntu image for Docker to calm down some bogus security reports. [#46784](https://github.com/ClickHouse/ClickHouse/pull/46784) ([Julio Jimenez](https://github.com/juliojimenez)). Please note that ClickHouse has no dependencies and does not require Docker.
* Add a prompt to allow the removal of an existing `clickhouse` download when using the "curl | sh" download of ClickHouse. The prompt is "ClickHouse binary clickhouse already exists. Overwrite? \[y/N\]". [#46859](https://github.com/ClickHouse/ClickHouse/pull/46859) ([Dan Roscigno](https://github.com/DanRoscigno)).
* Fix an error during server startup on old distros (e.g. Amazon Linux 2) and on ARM where glibc 2.28 symbols are not found. [#47008](https://github.com/ClickHouse/ClickHouse/pull/47008) ([Robert Schulze](https://github.com/rschu1ze)).
* Prepare for clang 16. [#47027](https://github.com/ClickHouse/ClickHouse/pull/47027) ([Amos Bird](https://github.com/amosbird)).
* Added a CI check which ensures ClickHouse can run with an old glibc on ARM. [#47063](https://github.com/ClickHouse/ClickHouse/pull/47063) ([Robert Schulze](https://github.com/rschu1ze)).
* Add a style check to prevent incorrect usage of the `NDEBUG` macro. [#47699](https://github.com/ClickHouse/ClickHouse/pull/47699) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Speed up the build a little. [#47714](https://github.com/ClickHouse/ClickHouse/pull/47714) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Bump `vectorscan` to 5.4.9. [#47955](https://github.com/ClickHouse/ClickHouse/pull/47955) ([Robert Schulze](https://github.com/rschu1ze)).
* Add a unit test to assert that Apache Arrow's fatal logging does not abort. It covers the changes in [ClickHouse/arrow#16](https://github.com/ClickHouse/arrow/pull/16). [#47958](https://github.com/ClickHouse/ClickHouse/pull/47958) ([Arthur Passos](https://github.com/arthurpassos)).
* Restore the ability of the native macOS debug server build to start. [#48050](https://github.com/ClickHouse/ClickHouse/pull/48050) ([Robert Schulze](https://github.com/rschu1ze)). Note: this change is only relevant for development, as official ClickHouse builds are done with cross-compilation.

#### Bug Fix (user-visible misbehavior in an official stable release)

* Fix format parser resetting, test processing of bad messages in `Kafka`. [#45693](https://github.com/ClickHouse/ClickHouse/pull/45693) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix data size calculation in Keeper. [#46086](https://github.com/ClickHouse/ClickHouse/pull/46086) ([Antonio Andelic](https://github.com/antonio2368)).
* Fixed a bug in automatic retries of the `DROP TABLE` query with `ReplicatedMergeTree` tables and `Atomic` databases. In rare cases it could lead to `Can't get data for node /zk_path/log_pointer` and `The specified key does not exist` errors if the ZooKeeper session expired during DROP and a new replicated table with the same path in ZooKeeper was created in parallel. [#46384](https://github.com/ClickHouse/ClickHouse/pull/46384) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix incorrect alias recursion while normalizing queries, which prevented some queries from running. [#46609](https://github.com/ClickHouse/ClickHouse/pull/46609) ([Raúl Marín](https://github.com/Algunenano)).
* Fix IPv4/IPv6 serialization/deserialization in binary formats. [#46616](https://github.com/ClickHouse/ClickHouse/pull/46616) ([Kruglov Pavel](https://github.com/Avogar)).
* ActionsDAG: do not change the result of `and` during optimization. [#46653](https://github.com/ClickHouse/ClickHouse/pull/46653) ([Salvatore Mesoraca](https://github.com/aiven-sal)).
* Improve query cancellation when a client dies. [#46681](https://github.com/ClickHouse/ClickHouse/pull/46681) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix arithmetic operations in aggregate optimization. [#46705](https://github.com/ClickHouse/ClickHouse/pull/46705) ([Duc Canh Le](https://github.com/canhld94)).
* Fix a possible `clickhouse-local` abort on JSONEachRow schema inference. [#46731](https://github.com/ClickHouse/ClickHouse/pull/46731) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix changing an expired role. [#46772](https://github.com/ClickHouse/ClickHouse/pull/46772) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix combined PREWHERE column accumulation from multiple steps. [#46785](https://github.com/ClickHouse/ClickHouse/pull/46785) ([Alexander Gololobov](https://github.com/davenger)).
* Use the initial range for fetching the file size in the HTTP read buffer. Without this change, some remote files couldn't be processed. [#46824](https://github.com/ClickHouse/ClickHouse/pull/46824) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix the incorrect progress bar while using URL tables. [#46830](https://github.com/ClickHouse/ClickHouse/pull/46830) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix an MSan report in the `maxIntersections` function. [#46847](https://github.com/ClickHouse/ClickHouse/pull/46847) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix a bug in the `Map` data type. [#46856](https://github.com/ClickHouse/ClickHouse/pull/46856) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix wrong results of some LIKE searches when the LIKE pattern contains quoted non-quotable characters. [#46875](https://github.com/ClickHouse/ClickHouse/pull/46875) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix an abort in `WITH FILL` when the filling transform processes an empty block. [#46897](https://github.com/ClickHouse/ClickHouse/pull/46897) ([Yakov Olkhovskiy](https://github.com/yakov-olkhovskiy)).
* Fix date and int inference from string in JSON. [#46972](https://github.com/ClickHouse/ClickHouse/pull/46972) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix a bug in zero-copy replication disk choice during fetch. [#47010](https://github.com/ClickHouse/ClickHouse/pull/47010) ([alesapin](https://github.com/alesapin)).
* Fix a typo in the systemd service definition. [#47051](https://github.com/ClickHouse/ClickHouse/pull/47051) ([Palash Goel](https://github.com/palash-goel)).
* Fix the NOT_IMPLEMENTED error with CROSS JOIN and `algorithm = auto`. [#47068](https://github.com/ClickHouse/ClickHouse/pull/47068) ([Vladimir C](https://github.com/vdimir)).
* Fix the problem that a `ReplicatedMergeTree` table failed to insert two similar pieces of data when `part_type` is configured as `InMemory` (an experimental feature). [#47121](https://github.com/ClickHouse/ClickHouse/pull/47121) ([liding1992](https://github.com/liding1992)).
* External dictionaries / library-bridge: fix the error "unknown library method 'extDict_libClone'". [#47136](https://github.com/ClickHouse/ClickHouse/pull/47136) ([alex filatov](https://github.com/phil-88)).
* Fix a race condition in a grace hash join with limit. [#47153](https://github.com/ClickHouse/ClickHouse/pull/47153) ([Vladimir C](https://github.com/vdimir)).
* Fix concrete columns PREWHERE support. [#47154](https://github.com/ClickHouse/ClickHouse/pull/47154) ([Azat Khuzhin](https://github.com/azat)).
* Fix a possible deadlock in Query Status. [#47161](https://github.com/ClickHouse/ClickHouse/pull/47161) ([Kruglov Pavel](https://github.com/Avogar)).
* Forbid INSERT SELECT into the same `Join` table, as it leads to a deadlock. [#47260](https://github.com/ClickHouse/ClickHouse/pull/47260) ([Vladimir C](https://github.com/vdimir)).
* Skip merged partitions for `min_age_to_force_merge_seconds` merges. [#47303](https://github.com/ClickHouse/ClickHouse/pull/47303) ([Antonio Andelic](https://github.com/antonio2368)).
* Modify `find_first_symbols` so it works as expected for `find_first_not_symbols`. [#47304](https://github.com/ClickHouse/ClickHouse/pull/47304) ([Arthur Passos](https://github.com/arthurpassos)).
* Fix big numbers inference in CSV. [#47410](https://github.com/ClickHouse/ClickHouse/pull/47410) ([Kruglov Pavel](https://github.com/Avogar)).
* Disable the logical expression optimizer for expressions with aliases. [#47451](https://github.com/ClickHouse/ClickHouse/pull/47451) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix an error in `decodeURLComponent`. [#47457](https://github.com/ClickHouse/ClickHouse/pull/47457) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix EXPLAIN graph with projection. [#47473](https://github.com/ClickHouse/ClickHouse/pull/47473) ([flynn](https://github.com/ucasfl)).
* Fix query parameters. [#47488](https://github.com/ClickHouse/ClickHouse/pull/47488) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Parameterized view: a bug fix. [#47495](https://github.com/ClickHouse/ClickHouse/pull/47495) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fuzzer of data formats, and the corresponding fixes. [#47519](https://github.com/ClickHouse/ClickHouse/pull/47519) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
* Fix the monotonicity check for `DateTime64`. [#47526](https://github.com/ClickHouse/ClickHouse/pull/47526) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix "block structure mismatch" for a Nullable LowCardinality column. [#47537](https://github.com/ClickHouse/ClickHouse/pull/47537) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Proper fix for a bug in Apache Parquet. [#45878](https://github.com/ClickHouse/ClickHouse/issues/45878) [#47538](https://github.com/ClickHouse/ClickHouse/pull/47538) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix `BSONEachRow` parallel parsing when the document size is invalid. [#47540](https://github.com/ClickHouse/ClickHouse/pull/47540) ([Kruglov Pavel](https://github.com/Avogar)).
* Preserve the error in `system.distribution_queue` on `SYSTEM FLUSH DISTRIBUTED`. [#47541](https://github.com/ClickHouse/ClickHouse/pull/47541) ([Azat Khuzhin](https://github.com/azat)).
* Check for duplicate columns in the `BSONEachRow` format. [#47609](https://github.com/ClickHouse/ClickHouse/pull/47609) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix waiting for the zero-copy lock during a move. [#47631](https://github.com/ClickHouse/ClickHouse/pull/47631) ([alesapin](https://github.com/alesapin)).
* Fix aggregation by partitions. [#47634](https://github.com/ClickHouse/ClickHouse/pull/47634) ([Nikita Taranov](https://github.com/nickitat)).
* Fix a bug in tuple-as-array serialization in the `BSONEachRow` format. [#47690](https://github.com/ClickHouse/ClickHouse/pull/47690) ([Kruglov Pavel](https://github.com/Avogar)).
* Fix a crash in `polygonsSymDifferenceCartesian`. [#47702](https://github.com/ClickHouse/ClickHouse/pull/47702) ([pufit](https://github.com/pufit)).
* Fix reading from storage `File` of compressed files with `zlib` and `gzip` compression. [#47796](https://github.com/ClickHouse/ClickHouse/pull/47796) ([Anton Popov](https://github.com/CurtizJ)).
* Improve empty query detection for PostgreSQL (for the pgx golang driver). [#47854](https://github.com/ClickHouse/ClickHouse/pull/47854) ([Azat Khuzhin](https://github.com/azat)).
* Fix the DateTime monotonicity check for LowCardinality types. [#47860](https://github.com/ClickHouse/ClickHouse/pull/47860) ([Antonio Andelic](https://github.com/antonio2368)).
* Use `restore_threads` (not `backup_threads`) for RESTORE ASYNC. [#47861](https://github.com/ClickHouse/ClickHouse/pull/47861) ([Azat Khuzhin](https://github.com/azat)).
* Fix DROP COLUMN with ReplicatedMergeTree containing projections. [#47883](https://github.com/ClickHouse/ClickHouse/pull/47883) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix for Replicated database recovery. [#47901](https://github.com/ClickHouse/ClickHouse/pull/47901) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Hotfix for too verbose warnings in HTTP. [#47903](https://github.com/ClickHouse/ClickHouse/pull/47903) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix "Field value too long" in `catboostEvaluate`. [#47970](https://github.com/ClickHouse/ClickHouse/pull/47970) ([Robert Schulze](https://github.com/rschu1ze)).
* Fix [#36971](https://github.com/ClickHouse/ClickHouse/issues/36971): Watchdog: exit with a non-zero code if the child process exits. [#47973](https://github.com/ClickHouse/ClickHouse/pull/47973) ([Коренберг Марк](https://github.com/socketpair)).
* Fix for "index file `cidx` is unexpectedly long". [#48010](https://github.com/ClickHouse/ClickHouse/pull/48010) ([SmitaRKulkarni](https://github.com/SmitaRKulkarni)).
* Fix the MaterializedPostgreSQL query to get attributes (replica-identity). [#48015](https://github.com/ClickHouse/ClickHouse/pull/48015) ([Solomatov Sergei](https://github.com/solomatovs)).
* `parseDateTime()`: fix UB (signed integer overflow). [#48019](https://github.com/ClickHouse/ClickHouse/pull/48019) ([Robert Schulze](https://github.com/rschu1ze)).
* Use unique names for Records in Avro to avoid reusing their schema. [#48057](https://github.com/ClickHouse/ClickHouse/pull/48057) ([Kruglov Pavel](https://github.com/Avogar)).
* Correctly set TCP/HTTP socket timeouts in Keeper. [#48108](https://github.com/ClickHouse/ClickHouse/pull/48108) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix a possible member call on null pointer in the `Avro` format. [#48184](https://github.com/ClickHouse/ClickHouse/pull/48184) ([Kruglov Pavel](https://github.com/Avogar)).

### <a id="232"></a> ClickHouse release 23.2, 2023-02-23

#### Backward Incompatible Change

@@ -140,7 +325,7 @@

* Upgrade Intel QPL from v0.3.0 to v1.0.0. Also build libaccel-config and link it statically to the QPL library instead of dynamically. [#45809](https://github.com/ClickHouse/ClickHouse/pull/45809) ([jasperzhu](https://github.com/jinjunzh)).

#### Bug Fix (user-visible misbehavior in official stable or prestable release)
#### Bug Fix (user-visible misbehavior in official stable release)

* Flush data exactly by `rabbitmq_flush_interval_ms` or by `rabbitmq_max_block_size` in `StorageRabbitMQ`. Closes [#42389](https://github.com/ClickHouse/ClickHouse/issues/42389). Closes [#45160](https://github.com/ClickHouse/ClickHouse/issues/45160). [#44404](https://github.com/ClickHouse/ClickHouse/pull/44404) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Use PODArray to render in the sparkBar function, so we can control the memory usage. Closes [#44467](https://github.com/ClickHouse/ClickHouse/issues/44467). [#44489](https://github.com/ClickHouse/ClickHouse/pull/44489) ([Duc Canh Le](https://github.com/canhld94)).

@@ -121,6 +121,7 @@ if (ENABLE_COLORED_BUILD AND CMAKE_GENERATOR STREQUAL "Ninja")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fdiagnostics-color=always")
# ... such manually setting of flags can be removed once CMake supports a variable to
# activate colors in *all* build systems: https://gitlab.kitware.com/cmake/cmake/-/issues/15502
# --> available since CMake 3.24: https://stackoverflow.com/a/73349744
endif ()

include (cmake/check_flags.cmake)

@@ -134,24 +135,15 @@ if (COMPILER_CLANG)
set(COMPILER_FLAGS "${COMPILER_FLAGS} -gdwarf-aranges")
endif ()

if (HAS_USE_CTOR_HOMING)
# For more info see https://blog.llvm.org/posts/2021-04-05-constructor-homing-for-debug-info/
if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG" OR CMAKE_BUILD_TYPE_UC STREQUAL "RELWITHDEBINFO")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Xclang -fuse-ctor-homing")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Xclang -fuse-ctor-homing")
endif()
# See https://blog.llvm.org/posts/2021-04-05-constructor-homing-for-debug-info/
if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG" OR CMAKE_BUILD_TYPE_UC STREQUAL "RELWITHDEBINFO")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Xclang -fuse-ctor-homing")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Xclang -fuse-ctor-homing")
endif()

no_warning(enum-constexpr-conversion) # breaks Protobuf in clang-16
endif ()

# If compiler has support for -Wreserved-identifier. It is difficult to detect by clang version,
# because there are two different branches of clang: clang and AppleClang.
# (AppleClang is not supported by ClickHouse, but some developers have misfortune to use it).
if (HAS_RESERVED_IDENTIFIER)
add_compile_definitions (HAS_RESERVED_IDENTIFIER)
endif ()

option(ENABLE_TESTS "Provide unit_test_dbms target with Google.Test unit tests" ON)
option(ENABLE_EXAMPLES "Build all example programs in 'examples' subdirectories" OFF)
option(ENABLE_BENCHMARKS "Build all benchmark programs in 'benchmarks' subdirectories" OFF)

@@ -184,26 +176,11 @@ if (OS_DARWIN)
set (ENABLE_CURL_BUILD OFF)
endif ()

# Ignored if `lld` is used
option(ADD_GDB_INDEX_FOR_GOLD "Add .gdb-index to resulting binaries for gold linker.")

if (NOT CMAKE_BUILD_TYPE_UC STREQUAL "RELEASE")
# Can be lld or ld-lld or lld-13 or /path/to/lld.
if (LINKER_NAME MATCHES "lld" AND OS_LINUX)
if (LINKER_NAME MATCHES "lld")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--gdb-index")
set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -Wl,--gdb-index")
message (STATUS "Adding .gdb-index via --gdb-index linker option.")
# we use another tool for gdb-index, because gold linker removes section .debug_aranges, which used inside clickhouse stacktraces
# http://sourceware-org.1504.n7.nabble.com/gold-No-debug-aranges-section-when-linking-with-gdb-index-td540965.html#a556932
elseif (LINKER_NAME MATCHES "gold$" AND ADD_GDB_INDEX_FOR_GOLD)
find_program (GDB_ADD_INDEX_EXE NAMES "gdb-add-index" DOC "Path to gdb-add-index executable")
if (NOT GDB_ADD_INDEX_EXE)
set (USE_GDB_ADD_INDEX 0)
message (WARNING "Cannot add gdb index to binaries, because gold linker is used, but gdb-add-index executable not found.")
else()
set (USE_GDB_ADD_INDEX 1)
message (STATUS "gdb-add-index found: ${GDB_ADD_INDEX_EXE}")
endif()
endif ()
endif()

@@ -235,7 +212,7 @@ endif ()

# Create BuildID when using lld. For other linkers it is created by default.
# (NOTE: LINKER_NAME can be either path or name, and in different variants)
if (LINKER_NAME MATCHES "lld" AND OS_LINUX)
if (LINKER_NAME MATCHES "lld")
# SHA1 is not cryptographically secure but it is the best what lld is offering.
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--build-id=sha1")
endif ()

@@ -301,16 +278,17 @@ if (ENABLE_BUILD_PROFILING)
endif ()
endif ()

set (CMAKE_CXX_STANDARD 20)
set (CMAKE_CXX_EXTENSIONS ON) # Same as gnu++2a (ON) vs c++2a (OFF): https://cmake.org/cmake/help/latest/prop_tgt/CXX_EXTENSIONS.html
set (CMAKE_CXX_STANDARD 23)
set (CMAKE_CXX_EXTENSIONS OFF)
set (CMAKE_CXX_STANDARD_REQUIRED ON)

set (CMAKE_C_STANDARD 11)
set (CMAKE_C_EXTENSIONS ON)
set (CMAKE_C_EXTENSIONS ON) # required by most contribs written in C
set (CMAKE_C_STANDARD_REQUIRED ON)

if (COMPILER_GCC OR COMPILER_CLANG)
# Enable C++14 sized global deallocation functions. It should be enabled by setting -std=c++14 but I'm not sure.
# See https://reviews.llvm.org/D112921
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fsized-deallocation")
endif ()

@@ -329,11 +307,7 @@ if (ARCH_AMD64)
set(BRANCHES_WITHIN_32B_BOUNDARIES "-Wa,${BRANCHES_WITHIN_32B_BOUNDARIES}")
endif()

include(CheckCXXCompilerFlag)
check_cxx_compiler_flag("${BRANCHES_WITHIN_32B_BOUNDARIES}" HAS_BRANCHES_WITHIN_32B_BOUNDARIES)
if (HAS_BRANCHES_WITHIN_32B_BOUNDARIES)
set(COMPILER_FLAGS "${COMPILER_FLAGS} ${BRANCHES_WITHIN_32B_BOUNDARIES}")
endif()
set(COMPILER_FLAGS "${COMPILER_FLAGS} ${BRANCHES_WITHIN_32B_BOUNDARIES}")
endif()

if (COMPILER_GCC)

@@ -445,6 +419,7 @@ option(WERROR "Enable -Werror compiler option" ON)
if (WERROR)
# Don't pollute CMAKE_CXX_FLAGS with -Werror as it will break some CMake checks.
# Instead, adopt modern cmake usage requirement.
# TODO: Set CMAKE_COMPILE_WARNING_AS_ERROR (cmake 3.24)
target_compile_options(global-group INTERFACE "-Werror")
endif ()

@@ -593,7 +568,7 @@ if (NATIVE_BUILD_TARGETS
COMMAND ${CMAKE_COMMAND}
"-DCMAKE_C_COMPILER=${CMAKE_C_COMPILER}"
"-DCMAKE_CXX_COMPILER=${CMAKE_CXX_COMPILER}"
"-DENABLE_CCACHE=${ENABLE_CCACHE}"
"-DCOMPILER_CACHE=${COMPILER_CACHE}"
# Avoid overriding .cargo/config.toml with native toolchain.
"-DENABLE_RUST=OFF"
"-DENABLE_CLICKHOUSE_SELF_EXTRACTING=${ENABLE_CLICKHOUSE_SELF_EXTRACTING}"

@@ -19,8 +19,8 @@ endif()
if (NOT "$ENV{CFLAGS}" STREQUAL ""
OR NOT "$ENV{CXXFLAGS}" STREQUAL ""
OR NOT "$ENV{LDFLAGS}" STREQUAL ""
OR CMAKE_C_FLAGS OR CMAKE_CXX_FLAGS OR CMAKE_EXE_LINKER_FLAGS OR CMAKE_SHARED_LINKER_FLAGS OR CMAKE_MODULE_LINKER_FLAGS
OR CMAKE_C_FLAGS_INIT OR CMAKE_CXX_FLAGS_INIT OR CMAKE_EXE_LINKER_FLAGS_INIT OR CMAKE_SHARED_LINKER_FLAGS_INIT OR CMAKE_MODULE_LINKER_FLAGS_INIT)
OR CMAKE_C_FLAGS OR CMAKE_CXX_FLAGS OR CMAKE_EXE_LINKER_FLAGS OR CMAKE_MODULE_LINKER_FLAGS
OR CMAKE_C_FLAGS_INIT OR CMAKE_CXX_FLAGS_INIT OR CMAKE_EXE_LINKER_FLAGS_INIT OR CMAKE_MODULE_LINKER_FLAGS_INIT)

# if $ENV
message("CFLAGS: $ENV{CFLAGS}")

@@ -36,7 +36,6 @@ if (NOT "$ENV{CFLAGS}" STREQUAL ""
message("CMAKE_C_FLAGS_INIT: ${CMAKE_C_FLAGS_INIT}")
message("CMAKE_CXX_FLAGS_INIT: ${CMAKE_CXX_FLAGS_INIT}")
message("CMAKE_EXE_LINKER_FLAGS_INIT: ${CMAKE_EXE_LINKER_FLAGS_INIT}")
message("CMAKE_SHARED_LINKER_FLAGS_INIT: ${CMAKE_SHARED_LINKER_FLAGS_INIT}")
message("CMAKE_MODULE_LINKER_FLAGS_INIT: ${CMAKE_MODULE_LINKER_FLAGS_INIT}")

message(FATAL_ERROR "
README.md

@@ -1,4 +1,4 @@
[![ClickHouse — open source distributed column-oriented DBMS](https://github.com/ClickHouse/clickhouse-presentations/raw/master/images/logo-400x240.png)](https://clickhouse.com)
[<img alt="ClickHouse — open source distributed column-oriented DBMS" width="400px" src="https://clickhouse.com/images/ch_gh_logo_rounded.png" />](https://clickhouse.com?utm_source=github)

ClickHouse® is an open-source column-oriented database management system that allows generating analytical data reports in real-time.

@@ -21,11 +21,10 @@ curl https://clickhouse.com/ | sh
* [Contacts](https://clickhouse.com/company/contact) can help to get your questions answered if there are any.

## Upcoming Events
* [**v23.2 Release Webinar**](https://clickhouse.com/company/events/v23-2-release-webinar?utm_source=github&utm_medium=social&utm_campaign=release-webinar-2023-02) - Feb 23 - 23.2 is rapidly approaching. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release.
* [**ClickHouse Meetup in Amsterdam**](https://www.meetup.com/clickhouse-netherlands-user-group/events/291485868/) - Mar 9 - The first ClickHouse Amsterdam Meetup of 2023 is here! 🎉 Join us for short lightning talks and long discussions. Food, drinks & good times on us.
* [**ClickHouse Meetup in SF Bay Area**](https://www.meetup.com/clickhouse-silicon-valley-meetup-group/events/291490121/) - Mar 14 - A night to meet with ClickHouse team in the San Francisco area! Food and drink are a given...but networking is the primary focus.
* [**ClickHouse Meetup in Austin**](https://www.meetup.com/clickhouse-austin-user-group/events/291486654/) - Mar 16 - The first ClickHouse Meetup in Austin is happening soon! Interested in speaking, let us know!
* [**ClickHouse Meetup in Austin**](https://www.meetup.com/clickhouse-austin-user-group/events/291486654/) - Mar 30 - The first ClickHouse Meetup in Austin is happening soon! Interested in speaking, let us know!
* [**v23.3 Release Webinar**](https://clickhouse.com/company/events/v23-3-release-webinar?utm_source=github&utm_medium=social&utm_campaign=release-webinar-2023-02) - Mar 30 - 23.3 is rapidly approaching. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release.

## Recent Recordings
* **FOSDEM 2023**: In the "Fast and Streaming Data" room Alexey gave a talk entitled "Building Analytical Apps With ClickHouse" that looks at the landscape of data tools, an interesting data set, and how you can interact with data quickly. Check out the recording on **[YouTube](https://www.youtube.com/watch?v=JlcI2Vfz_uk)**.
* **Recording available**: [**v23.1 Release Webinar**](https://www.youtube.com/watch?v=zYSZXBnTMSE) 23.1 is the ClickHouse New Year release. Original creator, co-founder, and CTO of ClickHouse Alexey Milovidov will walk us through the highlights of the release. Inverted indices, query cache, and so -- very -- much more.
* **Recording available**: [**v23.2 Release Webinar**](https://www.youtube.com/watch?v=2o0vRMMIrkY) NTILE Window Function support, Partition Key for GROUP By, io_uring, Apache Iceberg support, Dynamic Disks, integrations updates! Watch it now!
* **All release webinar recordings**: [YouTube playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3jAlSy1JxyP8zluvXaN3nxU)
@@ -13,9 +13,10 @@ The following versions of ClickHouse server are currently being supported with s

| Version | Supported |
|:-|:-|
| 23.3 | ✔️ |
| 23.2 | ✔️ |
| 23.1 | ✔️ |
| 22.12 | ✔️ |
| 22.12 | ❌ |
| 22.11 | ❌ |
| 22.10 | ❌ |
| 22.9 | ❌ |

@@ -24,7 +25,7 @@ The following versions of ClickHouse server are currently being supported with s
| 22.6 | ❌ |
| 22.5 | ❌ |
| 22.4 | ❌ |
| 22.3 | ✔️ |
| 22.3 | ❌ |
| 22.2 | ❌ |
| 22.1 | ❌ |
| 21.* | ❌ |
@@ -2,6 +2,10 @@ if (USE_CLANG_TIDY)
set (CMAKE_CXX_CLANG_TIDY "${CLANG_TIDY_PATH}")
endif ()

# TODO: Remove this. We like to compile with C++23 (set by top-level CMakeLists) but Clang crashes with our libcxx
# when instantiated from JSON.cpp. Try again when libcxx(abi) and Clang are upgraded to 16.
set (CMAKE_CXX_STANDARD 20)

set (SRCS
argsToConfig.cpp
coverage.cpp

@@ -466,9 +466,8 @@ JSON::Pos JSON::searchField(const char * data, size_t size) const
{
if (!it->hasEscapes())
{
if (static_cast<int>(size) + 2 > it->dataEnd() - it->data())
continue;
if (!strncmp(data, it->data() + 1, size))
const auto current_name = it->getRawName();
if (current_name.size() == size && 0 == memcmp(current_name.data(), data, size))
break;
}
else

@@ -2,6 +2,8 @@

#if WITH_COVERAGE

#pragma GCC diagnostic ignored "-Wreserved-identifier"

# include <mutex>
# include <unistd.h>

@@ -36,7 +36,7 @@

namespace detail
{
template <char ...chars> constexpr bool is_in(char x) { return ((x == chars) || ...); }
template <char ...chars> constexpr bool is_in(char x) { return ((x == chars) || ...); } // NOLINT(misc-redundant-expression)

#if defined(__SSE2__)
template <char s0>

@@ -159,22 +159,22 @@ inline const char * find_first_symbols_sse42(const char * const begin, const cha
#endif

for (; pos < end; ++pos)
if (   (num_chars >= 1 && maybe_negate<positive>(*pos == c01))
    || (num_chars >= 2 && maybe_negate<positive>(*pos == c02))
    || (num_chars >= 3 && maybe_negate<positive>(*pos == c03))
    || (num_chars >= 4 && maybe_negate<positive>(*pos == c04))
    || (num_chars >= 5 && maybe_negate<positive>(*pos == c05))
    || (num_chars >= 6 && maybe_negate<positive>(*pos == c06))
    || (num_chars >= 7 && maybe_negate<positive>(*pos == c07))
    || (num_chars >= 8 && maybe_negate<positive>(*pos == c08))
    || (num_chars >= 9 && maybe_negate<positive>(*pos == c09))
    || (num_chars >= 10 && maybe_negate<positive>(*pos == c10))
    || (num_chars >= 11 && maybe_negate<positive>(*pos == c11))
    || (num_chars >= 12 && maybe_negate<positive>(*pos == c12))
    || (num_chars >= 13 && maybe_negate<positive>(*pos == c13))
    || (num_chars >= 14 && maybe_negate<positive>(*pos == c14))
    || (num_chars >= 15 && maybe_negate<positive>(*pos == c15))
    || (num_chars >= 16 && maybe_negate<positive>(*pos == c16)))
if (   (num_chars == 1 && maybe_negate<positive>(is_in<c01>(*pos)))
    || (num_chars == 2 && maybe_negate<positive>(is_in<c01, c02>(*pos)))
    || (num_chars == 3 && maybe_negate<positive>(is_in<c01, c02, c03>(*pos)))
    || (num_chars == 4 && maybe_negate<positive>(is_in<c01, c02, c03, c04>(*pos)))
    || (num_chars == 5 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05>(*pos)))
    || (num_chars == 6 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06>(*pos)))
    || (num_chars == 7 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07>(*pos)))
    || (num_chars == 8 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08>(*pos)))
    || (num_chars == 9 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09>(*pos)))
    || (num_chars == 10 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09, c10>(*pos)))
    || (num_chars == 11 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09, c10, c11>(*pos)))
    || (num_chars == 12 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09, c10, c11, c12>(*pos)))
    || (num_chars == 13 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09, c10, c11, c12, c13>(*pos)))
    || (num_chars == 14 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09, c10, c11, c12, c13, c14>(*pos)))
    || (num_chars == 15 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09, c10, c11, c12, c13, c14, c15>(*pos)))
    || (num_chars == 16 && maybe_negate<positive>(is_in<c01, c02, c03, c04, c05, c06, c07, c08, c09, c10, c11, c12, c13, c14, c15, c16>(*pos))))
return pos;
return return_mode == ReturnMode::End ? end : nullptr;
}

@@ -1,5 +1,6 @@
#pragma once

#include <bit>
#include <cstring>
#include "types.h"

@@ -1,6 +1,4 @@
#ifdef HAS_RESERVED_IDENTIFIER
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif

/// This code was based on the code by Fedor Korotkiy https://www.linkedin.com/in/fedor-korotkiy-659a1838/

@@ -35,7 +35,7 @@ public:
Self & operator=(T && rhs) { t = std::move(rhs); return *this;}

// NOLINTBEGIN(google-explicit-constructor)
operator const T & () const { return t; }
constexpr operator const T & () const { return t; }
operator T & () { return t; }
// NOLINTEND(google-explicit-constructor)

@@ -5,10 +5,8 @@ constexpr size_t KiB = 1024;
constexpr size_t MiB = 1024 * KiB;
constexpr size_t GiB = 1024 * MiB;

#ifdef HAS_RESERVED_IDENTIFIER
# pragma clang diagnostic push
# pragma clang diagnostic ignored "-Wreserved-identifier"
#endif
#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wreserved-identifier"

// NOLINTBEGIN(google-runtime-int)
constexpr size_t operator"" _KiB(unsigned long long val) { return val * KiB; }

@@ -16,6 +14,4 @@ constexpr size_t operator"" _MiB(unsigned long long val) { return val * MiB; }
constexpr size_t operator"" _GiB(unsigned long long val) { return val * GiB; }
// NOLINTEND(google-runtime-int)

#ifdef HAS_RESERVED_IDENTIFIER
# pragma clang diagnostic pop
#endif
#pragma clang diagnostic pop

@@ -155,13 +155,13 @@ struct common_type<wide::integer<Bits, Signed>, Arithmetic>
std::is_floating_point_v<Arithmetic>,
Arithmetic,
std::conditional_t<
sizeof(Arithmetic) < Bits * sizeof(long),
sizeof(Arithmetic) * 8 < Bits,
wide::integer<Bits, Signed>,
std::conditional_t<
Bits * sizeof(long) < sizeof(Arithmetic),
Bits < sizeof(Arithmetic) * 8,
Arithmetic,
std::conditional_t<
Bits * sizeof(long) == sizeof(Arithmetic) && (std::is_same_v<Signed, signed> || std::is_signed_v<Arithmetic>),
Bits == sizeof(Arithmetic) * 8 && (std::is_same_v<Signed, signed> || std::is_signed_v<Arithmetic>),
Arithmetic,
wide::integer<Bits, Signed>>>>>;
};

@@ -732,9 +732,10 @@ public:
if (std::numeric_limits<T>::is_signed && (is_negative(lhs) != is_negative(rhs)))
return is_negative(rhs);

integer<Bits, Signed> t = rhs;
for (unsigned i = 0; i < item_count; ++i)
{
base_type rhs_item = get_item(rhs, big(i));
base_type rhs_item = get_item(t, big(i));

if (lhs.items[big(i)] != rhs_item)
return lhs.items[big(i)] > rhs_item;

@@ -757,9 +758,10 @@ public:
if (std::numeric_limits<T>::is_signed && (is_negative(lhs) != is_negative(rhs)))
return is_negative(lhs);

integer<Bits, Signed> t = rhs;
for (unsigned i = 0; i < item_count; ++i)
{
base_type rhs_item = get_item(rhs, big(i));
base_type rhs_item = get_item(t, big(i));

if (lhs.items[big(i)] != rhs_item)
return lhs.items[big(i)] < rhs_item;

@@ -779,9 +781,10 @@ public:
{
if constexpr (should_keep_size<T>())
{
integer<Bits, Signed> t = rhs;
for (unsigned i = 0; i < item_count; ++i)
{
base_type rhs_item = get_item(rhs, any(i));
base_type rhs_item = get_item(t, any(i));

if (lhs.items[any(i)] != rhs_item)
return false;

@@ -1239,7 +1242,7 @@ constexpr integer<Bits, Signed>::operator long double() const noexcept
for (unsigned i = 0; i < _impl::item_count; ++i)
{
long double t = res;
res *= std::numeric_limits<base_type>::max();
res *= static_cast<long double>(std::numeric_limits<base_type>::max());
res += t;
res += tmp.items[_impl::big(i)];
}

@@ -64,6 +64,6 @@ struct fmt::formatter<wide::integer<Bits, Signed>>
template <typename FormatContext>
auto format(const wide::integer<Bits, Signed> & value, FormatContext & ctx)
{
return format_to(ctx.out(), "{}", to_string(value));
return fmt::format_to(ctx.out(), "{}", to_string(value));
}
};
81
base/glibc-compatibility/musl/expf.c
Normal file
81
base/glibc-compatibility/musl/expf.c
Normal file
@ -0,0 +1,81 @@
|
||||
/* origin: FreeBSD /usr/src/lib/msun/src/e_expf.c */
/*
 * Conversion to float by Ian Lance Taylor, Cygnus Support, ian@cygnus.com.
 */
/*
 * ====================================================
 * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
 *
 * Developed at SunPro, a Sun Microsystems, Inc. business.
 * Permission to use, copy, modify, and distribute this
 * software is freely granted, provided that this notice
 * is preserved.
 * ====================================================
 */

#include "libm.h"

static const float
half[2] = {0.5,-0.5},
ln2hi = 6.9314575195e-1f, /* 0x3f317200 */
ln2lo = 1.4286067653e-6f, /* 0x35bfbe8e */
invln2 = 1.4426950216e+0f, /* 0x3fb8aa3b */
/*
 * Domain [-0.34568, 0.34568], range ~[-4.278e-9, 4.447e-9]:
 * |x*(exp(x)+1)/(exp(x)-1) - p(x)| < 2**-27.74
 */
P1 = 1.6666625440e-1f, /* 0xaaaa8f.0p-26 */
P2 = -2.7667332906e-3f; /* -0xb55215.0p-32 */

float expf(float x)
{
    float_t hi, lo, c, xx, y;
    int k, sign;
    uint32_t hx;

    GET_FLOAT_WORD(hx, x);
    sign = hx >> 31; /* sign bit of x */
    hx &= 0x7fffffff; /* high word of |x| */

    /* special cases */
    if (hx >= 0x42aeac50) { /* if |x| >= -87.33655f or NaN */
        if (hx >= 0x42b17218 && !sign) { /* x >= 88.722839f */
            /* overflow */
            x *= 0x1p127f;
            return x;
        }
        if (sign) {
            /* underflow */
            FORCE_EVAL(-0x1p-149f/x);
            if (hx >= 0x42cff1b5) /* x <= -103.972084f */
                return 0;
        }
    }

    /* argument reduction */
    if (hx > 0x3eb17218) { /* if |x| > 0.5 ln2 */
        if (hx > 0x3f851592) /* if |x| > 1.5 ln2 */
            k = invln2*x + half[sign];
        else
            k = 1 - sign - sign;
        hi = x - k*ln2hi; /* k*ln2hi is exact here */
        lo = k*ln2lo;
        x = hi - lo;
    } else if (hx > 0x39000000) { /* |x| > 2**-14 */
        k = 0;
        hi = x;
        lo = 0;
    } else {
        /* raise inexact */
        FORCE_EVAL(0x1p127f + x);
        return 1 + x;
    }

    /* x is now in primary range */
    xx = x*x;
    c = x - xx*(P1+xx*P2);
    y = 1 + (x*c/(2-c) - lo + hi);
    if (k == 0)
        return y;
    return scalbnf(y, k);
}
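
The reduction implemented above is the classic one: choose k ≈ round(x/ln 2), then split the remainder in two pieces so that k*ln2hi cancels exactly,

    r = (x - k*ln2hi) - k*ln2lo,    exp(x) = 2^k * exp(r),    |r| <= ln(2)/2,

approximate exp(r) on that small primary range with the rational form 1 + (x*c/(2-c) - lo + hi), and finally scale by 2^k via scalbnf. The hex constants in the branches are the bit patterns of the |x| thresholds (0.5*ln 2, 1.5*ln 2, and the overflow/underflow bounds noted in the comments).
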
31
base/glibc-compatibility/musl/scalbnf.c
Normal file
@ -0,0 +1,31 @@
#include <math.h>
#include <stdint.h>

float scalbnf(float x, int n)
{
    union {float f; uint32_t i;} u;
    float_t y = x;

    if (n > 127) {
        y *= 0x1p127f;
        n -= 127;
        if (n > 127) {
            y *= 0x1p127f;
            n -= 127;
            if (n > 127)
                n = 127;
        }
    } else if (n < -126) {
        y *= 0x1p-126f;
        n += 126;
        if (n < -126) {
            y *= 0x1p-126f;
            n += 126;
            if (n < -126)
                n = -126;
        }
    }
    u.i = (uint32_t)(0x7f+n)<<23;
    x = y * u.f;
    return x;
}
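
The last three lines of scalbnf build 2^n straight from its IEEE-754 bit pattern: a single-precision float with a zero mantissa and biased exponent 0x7f + n has the value 2^n, which is why the loops above first clamp n into the normal range [-126, 127] (scaling y by 2^127 or 2^-126 along the way). A minimal sketch of the same bit trick, with memcpy instead of the union to stay well-defined in C++ (pow2f is an illustrative name):

#include <cstdint>
#include <cstring>
#include <cstdio>

// Construct 2^n from its IEEE-754 single-precision bit pattern:
// zero mantissa, biased exponent 127 + n. Valid for -126 <= n <= 127.
static float pow2f(int n)
{
    uint32_t bits = (uint32_t)(127 + n) << 23;
    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}

int main()
{
    std::printf("%g %g %g\n", pow2f(0), pow2f(10), pow2f(-3)); // 1 1024 0.125
}
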
@ -466,7 +466,7 @@ namespace Data
bool extractManualImpl(std::size_t pos, T & val, SQLSMALLINT cType)
{
SQLRETURN rc = 0;
T value = (T)0;
T value;

resizeLengths(pos);

@ -105,6 +105,8 @@ public:
const std::string & getText() const;
/// Returns the text of the message.

void appendText(const std::string & text);

void setPriority(Priority prio);
/// Sets the priority of the message.

@ -27,8 +27,7 @@ Message::Message():
_tid(0),
_file(0),
_line(0),
_pMap(0),
_fmt_str(0)
_pMap(0)
{
init();
}
@ -157,6 +156,12 @@ void Message::setText(const std::string& text)
}


void Message::appendText(const std::string & text)
{
_text.append(text);
}


void Message::setPriority(Priority prio)
{
_prio = prio;
@ -27,7 +27,7 @@
#include "Poco/Exception.h"
#include "Poco/NumberParser.h"
#include "Poco/NumberFormatter.h"
#include <set>
#include <unordered_map>


namespace Poco {
@ -27,9 +27,7 @@
#define _PATH_TTY "/dev/tty"
#endif

#ifdef HAS_RESERVED_IDENTIFIER
#pragma clang diagnostic ignored "-Wreserved-identifier"
#endif

#include <termios.h>
#include <signal.h>
@ -2,11 +2,11 @@

# NOTE: has nothing in common with DBMS_TCP_PROTOCOL_VERSION,
# only DBMS_TCP_PROTOCOL_VERSION should be incremented on protocol changes.
SET(VERSION_REVISION 54472)
SET(VERSION_REVISION 54473)
SET(VERSION_MAJOR 23)
SET(VERSION_MINOR 3)
SET(VERSION_MINOR 4)
SET(VERSION_PATCH 1)
SET(VERSION_GITHASH 52bf836e03a6ba7cf2d654eaaf73231701abc3a2)
SET(VERSION_DESCRIBE v23.3.1.2537-testing)
SET(VERSION_STRING 23.3.1.2537)
SET(VERSION_GITHASH 46e85357ce2da2a99f56ee83a079e892d7ec3726)
SET(VERSION_DESCRIBE v23.4.1.1-testing)
SET(VERSION_STRING 23.4.1.1)
# end of autochange

@ -1,5 +1,6 @@
# Setup integration with ccache to speed up builds, see https://ccache.dev/

# Matches both ccache and sccache
if (CMAKE_CXX_COMPILER_LAUNCHER MATCHES "ccache" OR CMAKE_C_COMPILER_LAUNCHER MATCHES "ccache")
# custom compiler launcher already defined, most likely because cmake was invoked with something like "-DCMAKE_CXX_COMPILER_LAUNCHER=ccache" or
# via environment variable --> respect the setting and trust that the launcher was specified correctly
@ -8,45 +9,65 @@ if (CMAKE_CXX_COMPILER_LAUNCHER MATCHES "ccache" OR CMAKE_C_COMPILER_LAUNCHER MA
return()
endif()

option(ENABLE_CCACHE "Speedup re-compilations using ccache (external tool)" ON)

if (NOT ENABLE_CCACHE)
message(STATUS "Using ccache: no (disabled via configuration)")
return()
set(ENABLE_CCACHE "default" CACHE STRING "Deprecated, use COMPILER_CACHE=(auto|ccache|sccache|disabled)")
if (NOT ENABLE_CCACHE STREQUAL "default")
message(WARNING "The -DENABLE_CCACHE is deprecated in favor of -DCOMPILER_CACHE")
endif()

set(COMPILER_CACHE "auto" CACHE STRING "Speedup re-compilations using the caching tools; valid options are 'auto' (ccache, then sccache), 'ccache', 'sccache', or 'disabled'")

# The logic is pretty complex because ENABLE_CCACHE is deprecated but should still
# control the COMPILER_CACHE
# After it is completely removed, the following block will be much simpler
if (COMPILER_CACHE STREQUAL "ccache" OR (ENABLE_CCACHE AND NOT ENABLE_CCACHE STREQUAL "default"))
find_program (CCACHE_EXECUTABLE ccache)
elseif(COMPILER_CACHE STREQUAL "disabled" OR NOT ENABLE_CCACHE STREQUAL "default")
message(STATUS "Using *ccache: no (disabled via configuration)")
return()
elseif(COMPILER_CACHE STREQUAL "auto")
find_program (CCACHE_EXECUTABLE ccache sccache)
elseif(COMPILER_CACHE STREQUAL "sccache")
find_program (CCACHE_EXECUTABLE sccache)
else()
message(${RECONFIGURE_MESSAGE_LEVEL} "The COMPILER_CACHE must be one of (auto|ccache|sccache|disabled), given '${COMPILER_CACHE}'")
endif()

find_program (CCACHE_EXECUTABLE ccache)

if (NOT CCACHE_EXECUTABLE)
message(${RECONFIGURE_MESSAGE_LEVEL} "Using ccache: no (Could not find ccache. To significantly reduce compile times for the 2nd, 3rd, etc. build, it is highly recommended to install ccache. To suppress this message, run cmake with -DENABLE_CCACHE=0)")
message(${RECONFIGURE_MESSAGE_LEVEL} "Using *ccache: no (Could not find ccache or sccache. To significantly reduce compile times for the 2nd, 3rd, etc. build, it is highly recommended to install one of them. To suppress this message, run cmake with -DCOMPILER_CACHE=disabled)")
return()
endif()

execute_process(COMMAND ${CCACHE_EXECUTABLE} "-V" OUTPUT_VARIABLE CCACHE_VERSION)
string(REGEX REPLACE "ccache version ([0-9\\.]+).*" "\\1" CCACHE_VERSION ${CCACHE_VERSION})
if (CCACHE_EXECUTABLE MATCHES "/ccache$")
execute_process(COMMAND ${CCACHE_EXECUTABLE} "-V" OUTPUT_VARIABLE CCACHE_VERSION)
string(REGEX REPLACE "ccache version ([0-9\\.]+).*" "\\1" CCACHE_VERSION ${CCACHE_VERSION})

set (CCACHE_MINIMUM_VERSION 3.3)
set (CCACHE_MINIMUM_VERSION 3.3)

if (CCACHE_VERSION VERSION_LESS_EQUAL ${CCACHE_MINIMUM_VERSION})
message(${RECONFIGURE_MESSAGE_LEVEL} "Using ccache: no (found ${CCACHE_EXECUTABLE} (version ${CCACHE_VERSION}), the minimum required version is ${CCACHE_MINIMUM_VERSION})")
return()
endif()
if (CCACHE_VERSION VERSION_LESS_EQUAL ${CCACHE_MINIMUM_VERSION})
message(${RECONFIGURE_MESSAGE_LEVEL} "Using ccache: no (found ${CCACHE_EXECUTABLE} (version ${CCACHE_VERSION}), the minimum required version is ${CCACHE_MINIMUM_VERSION})")
return()
endif()

message(STATUS "Using ccache: ${CCACHE_EXECUTABLE} (version ${CCACHE_VERSION})")
set(LAUNCHER ${CCACHE_EXECUTABLE})
message(STATUS "Using ccache: ${CCACHE_EXECUTABLE} (version ${CCACHE_VERSION})")
set(LAUNCHER ${CCACHE_EXECUTABLE})

# Work around a well-intended but unfortunate behavior of ccache 4.0 & 4.1 with
# environment variable SOURCE_DATE_EPOCH. This variable provides an alternative
# to source-code embedded timestamps (__DATE__/__TIME__) and therefore helps with
# reproducible builds (*). SOURCE_DATE_EPOCH is set automatically by the
# distribution, e.g. Debian. Ccache 4.0 & 4.1 incorporate SOURCE_DATE_EPOCH into
# the hash calculation regardless of whether they contain timestamps or not. This invalidates
# the cache whenever SOURCE_DATE_EPOCH changes. As a fix, ignore SOURCE_DATE_EPOCH.
#
# (*) https://reproducible-builds.org/specs/source-date-epoch/
if (CCACHE_VERSION VERSION_GREATER_EQUAL "4.0" AND CCACHE_VERSION VERSION_LESS "4.2")
message(STATUS "Ignore SOURCE_DATE_EPOCH for ccache 4.0 / 4.1")
set(LAUNCHER env -u SOURCE_DATE_EPOCH ${CCACHE_EXECUTABLE})
# Work around a well-intended but unfortunate behavior of ccache 4.0 & 4.1 with
# environment variable SOURCE_DATE_EPOCH. This variable provides an alternative
# to source-code embedded timestamps (__DATE__/__TIME__) and therefore helps with
# reproducible builds (*). SOURCE_DATE_EPOCH is set automatically by the
# distribution, e.g. Debian. Ccache 4.0 & 4.1 incorporate SOURCE_DATE_EPOCH into
# the hash calculation regardless of whether they contain timestamps or not. This invalidates
# the cache whenever SOURCE_DATE_EPOCH changes. As a fix, ignore SOURCE_DATE_EPOCH.
#
# (*) https://reproducible-builds.org/specs/source-date-epoch/
if (CCACHE_VERSION VERSION_GREATER_EQUAL "4.0" AND CCACHE_VERSION VERSION_LESS "4.2")
message(STATUS "Ignore SOURCE_DATE_EPOCH for ccache 4.0 / 4.1")
set(LAUNCHER env -u SOURCE_DATE_EPOCH ${CCACHE_EXECUTABLE})
endif()
elseif(CCACHE_EXECUTABLE MATCHES "/sccache$")
message(STATUS "Using sccache: ${CCACHE_EXECUTABLE}")
set(LAUNCHER ${CCACHE_EXECUTABLE})
endif()

set (CMAKE_CXX_COMPILER_LAUNCHER ${LAUNCHER} ${CMAKE_CXX_COMPILER_LAUNCHER})

@ -1,7 +1,5 @@
include (CheckCXXCompilerFlag)
include (CheckCCompilerFlag)

check_cxx_compiler_flag("-Wreserved-identifier" HAS_RESERVED_IDENTIFIER)
check_cxx_compiler_flag("-Wsuggest-destructor-override" HAS_SUGGEST_DESTRUCTOR_OVERRIDE)
check_cxx_compiler_flag("-Wsuggest-override" HAS_SUGGEST_OVERRIDE)
check_cxx_compiler_flag("-Xclang -fuse-ctor-homing" HAS_USE_CTOR_HOMING)
# Set/unset variable based on existence of compiler flags. Example:
# check_cxx_compiler_flag("-Wreserved-identifier" HAS_RESERVED_IDENTIFIER)
@ -5,14 +5,14 @@ if (ENABLE_CLANG_TIDY)

find_program (CLANG_TIDY_CACHE_PATH NAMES "clang-tidy-cache")
if (CLANG_TIDY_CACHE_PATH)
find_program (_CLANG_TIDY_PATH NAMES "clang-tidy-15" "clang-tidy-14" "clang-tidy-13" "clang-tidy-12" "clang-tidy")
find_program (_CLANG_TIDY_PATH NAMES "clang-tidy-16" "clang-tidy-15" "clang-tidy-14" "clang-tidy")

# Why do we use ';' here?
# It's cmake black magic: https://cmake.org/cmake/help/latest/prop_tgt/LANG_CLANG_TIDY.html#prop_tgt:%3CLANG%3E_CLANG_TIDY
# The CLANG_TIDY_PATH is passed to CMAKE_CXX_CLANG_TIDY, which follows CXX_CLANG_TIDY syntax.
set (CLANG_TIDY_PATH "${CLANG_TIDY_CACHE_PATH};${_CLANG_TIDY_PATH}" CACHE STRING "A combined command to run clang-tidy with caching wrapper")
else ()
find_program (CLANG_TIDY_PATH NAMES "clang-tidy-15" "clang-tidy-14" "clang-tidy-13" "clang-tidy-12" "clang-tidy")
find_program (CLANG_TIDY_PATH NAMES "clang-tidy-16" "clang-tidy-15" "clang-tidy-14" "clang-tidy")
endif ()

if (CLANG_TIDY_PATH)
@ -22,7 +22,6 @@ set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (CMAKE_ASM_FLAGS "${CMAKE_ASM_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")

set (CMAKE_EXE_LINKER_FLAGS_INIT "-fuse-ld=bfd")
set (CMAKE_SHARED_LINKER_FLAGS_INIT "-fuse-ld=bfd")

# Currently, lld does not work with the error:
# ld.lld: error: section size decrease is too large
@ -30,7 +30,6 @@ set (CMAKE_SYSROOT "${TOOLCHAIN_PATH}/x86_64-linux-gnu/libc")
set (CMAKE_C_FLAGS "${CMAKE_C_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (CMAKE_ASM_FLAGS "${CMAKE_ASM_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")
set (CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} --gcc-toolchain=${TOOLCHAIN_PATH}")

@ -50,69 +50,49 @@ endif ()
string (REGEX MATCHALL "[0-9]+" COMPILER_VERSION_LIST ${CMAKE_CXX_COMPILER_VERSION})
list (GET COMPILER_VERSION_LIST 0 COMPILER_VERSION_MAJOR)

# Example values: `lld-10`, `gold`.
# Example values: `lld-10`
option (LINKER_NAME "Linker name or full path")

# s390x doesn't support lld
if (NOT ARCH_S390X)
if (NOT LINKER_NAME)
if (COMPILER_GCC)
find_program (LLD_PATH NAMES "ld.lld")
find_program (GOLD_PATH NAMES "ld.gold")
elseif (COMPILER_CLANG)
# llvm lld is a generic driver.
# Invoke ld.lld (Unix), ld64.lld (macOS), lld-link (Windows), wasm-ld (WebAssembly) instead
if (OS_LINUX)
if (LINKER_NAME MATCHES "gold")
message (FATAL_ERROR "Linking with gold is unsupported. Please use lld.")
endif ()

if (NOT LINKER_NAME)
if (COMPILER_GCC)
find_program (LLD_PATH NAMES "ld.lld")
elseif (COMPILER_CLANG)
if (OS_LINUX)
if (NOT ARCH_S390X) # s390x doesn't support lld
find_program (LLD_PATH NAMES "ld.lld-${COMPILER_VERSION_MAJOR}" "ld.lld")
elseif (OS_DARWIN)
find_program (LLD_PATH NAMES "ld64.lld-${COMPILER_VERSION_MAJOR}" "ld64.lld")
endif ()
find_program (GOLD_PATH NAMES "ld.gold" "gold")
endif ()
endif ()
if (OS_LINUX)
if (LLD_PATH)
if (COMPILER_GCC)
# GCC driver requires one of supported linker names like "lld".
set (LINKER_NAME "lld")
else ()
# Clang driver simply allows full linker path.
set (LINKER_NAME ${LLD_PATH})
endif ()
endif ()
endif()
endif()

if ((OS_LINUX OR OS_DARWIN) AND NOT LINKER_NAME)
# prefer lld linker over gold or ld on linux and macos
if (LLD_PATH)
if (COMPILER_GCC)
# GCC driver requires one of supported linker names like "lld".
set (LINKER_NAME "lld")
else ()
# Clang driver simply allows full linker path.
set (LINKER_NAME ${LLD_PATH})
endif ()
endif ()

if (NOT LINKER_NAME)
if (GOLD_PATH)
message (FATAL_ERROR "Linking with gold is unsupported. Please use lld.")
if (COMPILER_GCC)
set (LINKER_NAME "gold")
else ()
set (LINKER_NAME ${GOLD_PATH})
endif ()
endif ()
endif ()
endif ()
# TODO: allow different linker on != OS_LINUX

if (LINKER_NAME)
find_program (LLD_PATH NAMES ${LINKER_NAME})
if (NOT LLD_PATH)
message (FATAL_ERROR "Using linker ${LINKER_NAME} but can't find its path.")
endif ()
if (COMPILER_CLANG)
find_program (LLD_PATH NAMES ${LINKER_NAME})
if (NOT LLD_PATH)
message (FATAL_ERROR "Using linker ${LINKER_NAME} but can't find its path.")
endif ()

# This is a temporary quirk to emit .debug_aranges with ThinLTO
# This is a temporary quirk to emit .debug_aranges with ThinLTO, can be removed after upgrade to clang-16
set (LLD_WRAPPER "${CMAKE_CURRENT_BINARY_DIR}/ld.lld")
configure_file ("${CMAKE_CURRENT_SOURCE_DIR}/cmake/ld.lld.in" "${LLD_WRAPPER}" @ONLY)

set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} --ld-path=${LLD_WRAPPER}")
set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} --ld-path=${LLD_WRAPPER}")
else ()
set (CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -fuse-ld=${LINKER_NAME}")
set (CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -fuse-ld=${LINKER_NAME}")
endif ()

endif ()
2
contrib/arrow
vendored
@ -1 +1 @@
Subproject commit d03245f801f798c63ee9a7d2b8914a9e5c5cd666
Subproject commit 1f1b3d35fb6eb73e6492d3afd8a85cde848d174f
@ -202,6 +202,7 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/builder.cc"
"${LIBRARY_DIR}/buffer.cc"
"${LIBRARY_DIR}/chunked_array.cc"
"${LIBRARY_DIR}/chunk_resolver.cc"
"${LIBRARY_DIR}/compare.cc"
"${LIBRARY_DIR}/config.cc"
"${LIBRARY_DIR}/datum.cc"
@ -268,6 +269,10 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/util/uri.cc"
"${LIBRARY_DIR}/util/utf8.cc"
"${LIBRARY_DIR}/util/value_parsing.cc"
"${LIBRARY_DIR}/util/byte_size.cc"
"${LIBRARY_DIR}/util/debug.cc"
"${LIBRARY_DIR}/util/tracing.cc"
"${LIBRARY_DIR}/util/atfork_internal.cc"
"${LIBRARY_DIR}/vendored/base64.cpp"
"${LIBRARY_DIR}/vendored/datetime/tz.cpp"

@ -301,9 +306,11 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/compute/exec/source_node.cc"
"${LIBRARY_DIR}/compute/exec/sink_node.cc"
"${LIBRARY_DIR}/compute/exec/order_by_impl.cc"
"${LIBRARY_DIR}/compute/exec/partition_util.cc"
"${LIBRARY_DIR}/compute/function.cc"
"${LIBRARY_DIR}/compute/function_internal.cc"
"${LIBRARY_DIR}/compute/kernel.cc"
"${LIBRARY_DIR}/compute/light_array.cc"
"${LIBRARY_DIR}/compute/registry.cc"
"${LIBRARY_DIR}/compute/kernels/aggregate_basic.cc"
"${LIBRARY_DIR}/compute/kernels/aggregate_mode.cc"
@ -317,21 +324,28 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/compute/kernels/scalar_cast_boolean.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_dictionary.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_internal.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_extension.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_nested.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_numeric.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_string.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_cast_temporal.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_compare.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_nested.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_random.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_round.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_set_lookup.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_string.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_temporal_binary.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_temporal_unary.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_validity.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_if_else.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_string_ascii.cc"
"${LIBRARY_DIR}/compute/kernels/scalar_string_utf8.cc"
"${LIBRARY_DIR}/compute/kernels/util_internal.cc"
"${LIBRARY_DIR}/compute/kernels/vector_array_sort.cc"
"${LIBRARY_DIR}/compute/kernels/vector_cumulative_ops.cc"
"${LIBRARY_DIR}/compute/kernels/vector_hash.cc"
"${LIBRARY_DIR}/compute/kernels/vector_rank.cc"
"${LIBRARY_DIR}/compute/kernels/vector_select_k.cc"
"${LIBRARY_DIR}/compute/kernels/vector_nested.cc"
"${LIBRARY_DIR}/compute/kernels/vector_replace.cc"
"${LIBRARY_DIR}/compute/kernels/vector_selection.cc"
@ -340,13 +354,15 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/compute/exec/union_node.cc"
"${LIBRARY_DIR}/compute/exec/key_hash.cc"
"${LIBRARY_DIR}/compute/exec/key_map.cc"
"${LIBRARY_DIR}/compute/exec/key_compare.cc"
"${LIBRARY_DIR}/compute/exec/key_encode.cc"
"${LIBRARY_DIR}/compute/exec/util.cc"
"${LIBRARY_DIR}/compute/exec/hash_join_dict.cc"
"${LIBRARY_DIR}/compute/exec/hash_join.cc"
"${LIBRARY_DIR}/compute/exec/hash_join_node.cc"
"${LIBRARY_DIR}/compute/exec/task_util.cc"
"${LIBRARY_DIR}/compute/row/encode_internal.cc"
"${LIBRARY_DIR}/compute/row/grouper.cc"
"${LIBRARY_DIR}/compute/row/compare_internal.cc"
"${LIBRARY_DIR}/compute/row/row_internal.cc"

"${LIBRARY_DIR}/ipc/dictionary.cc"
"${LIBRARY_DIR}/ipc/feather.cc"
@ -357,7 +373,8 @@ set(ARROW_SRCS
"${LIBRARY_DIR}/ipc/writer.cc"

"${ARROW_SRC_DIR}/arrow/adapters/orc/adapter.cc"
"${ARROW_SRC_DIR}/arrow/adapters/orc/adapter_util.cc"
"${ARROW_SRC_DIR}/arrow/adapters/orc/util.cc"
"${ARROW_SRC_DIR}/arrow/adapters/orc/options.cc"
)

add_definitions(-DARROW_WITH_LZ4)
2
contrib/boost
vendored
@ -1 +1 @@
Subproject commit 03d9ec9cd159d14bd0b17c05138098451a1ea606
Subproject commit 8fe7b3326ef482ee6ecdf5a4f698f2b8c2780f98
2
contrib/cctz
vendored
@ -1 +1 @@
Subproject commit 7c78edd52b4d65acc103c2f195818ffcabe6fe0d
Subproject commit 5e05432420f9692418e2e12aff09859e420b14a2
2
contrib/croaring
vendored
@ -1 +1 @@
Subproject commit 2c867e9f9c9e2a3a7032791f94c4c7ae3013f6e0
Subproject commit f40ed52bcdd635840a79877cef4857315dba817c
@ -17,7 +17,8 @@ set(SRCS
"${LIBRARY_DIR}/src/containers/run.c"
"${LIBRARY_DIR}/src/roaring.c"
"${LIBRARY_DIR}/src/roaring_priority_queue.c"
"${LIBRARY_DIR}/src/roaring_array.c")
"${LIBRARY_DIR}/src/roaring_array.c"
"${LIBRARY_DIR}/src/memory.c")

add_library(_roaring ${SRCS})

@ -48,6 +48,9 @@ set(gRPC_ABSL_PROVIDER "clickhouse" CACHE STRING "" FORCE)
# We don't want to build C# extensions.
set(gRPC_BUILD_CSHARP_EXT OFF)

# TODO: Remove this. We generally like to compile with C++23 but grpc isn't ready yet.
set (CMAKE_CXX_STANDARD 20)

set(_gRPC_CARES_LIBRARIES ch_contrib::c-ares)
set(gRPC_CARES_PROVIDER "clickhouse" CACHE STRING "" FORCE)
add_subdirectory("${_gRPC_SOURCE_DIR}" "${_gRPC_BINARY_DIR}")
2
contrib/krb5
vendored
@ -1 +1 @@
Subproject commit f8262a1b548eb29d97e059260042036255d07f8d
Subproject commit b56ce6ba690e1f320df1a64afa34980c3e462617
@ -15,10 +15,6 @@ if(NOT AWK_PROGRAM)
message(FATAL_ERROR "You need the awk program to build ClickHouse with krb5 enabled.")
endif()

if (NOT (ENABLE_OPENSSL OR ENABLE_OPENSSL_DYNAMIC))
add_compile_definitions(USE_BORINGSSL=1)
endif ()

set(KRB5_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/krb5/src")
set(KRB5_ET_BIN_DIR "${CMAKE_CURRENT_BINARY_DIR}/include_private")

@ -160,6 +156,13 @@ set(ALL_SRCS

# "${KRB5_SOURCE_DIR}/lib/gssapi/spnego/negoex_trace.c"

"${KRB5_SOURCE_DIR}/lib/crypto/builtin/kdf.c"
"${KRB5_SOURCE_DIR}/lib/crypto/builtin/cmac.c"
"${KRB5_SOURCE_DIR}/lib/crypto/builtin/des/des_keys.c"
"${KRB5_SOURCE_DIR}/lib/crypto/builtin/des/f_parity.c"
"${KRB5_SOURCE_DIR}/lib/crypto/builtin/enc_provider/rc4.c"
"${KRB5_SOURCE_DIR}/lib/crypto/builtin/hash_provider/hash_md4.c"
"${KRB5_SOURCE_DIR}/lib/crypto/builtin/md4/md4.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/prng.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/enc_dk_cmac.c"
# "${KRB5_SOURCE_DIR}/lib/crypto/krb/crc32.c"
@ -183,7 +186,6 @@ set(ALL_SRCS
"${KRB5_SOURCE_DIR}/lib/crypto/krb/block_size.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/string_to_key.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/verify_checksum.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/crypto_libinit.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/derive.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/random_to_key.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/verify_checksum_iov.c"
@ -217,9 +219,7 @@ set(ALL_SRCS
"${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_rc4.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/valid_cksumtype.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/nfold.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/prng_fortuna.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/encrypt_length.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/cmac.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/keyblocks.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/prf_rc4.c"
"${KRB5_SOURCE_DIR}/lib/crypto/krb/s2k_pbkdf2.c"
@ -227,12 +227,11 @@ set(ALL_SRCS
# "${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/des.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/rc4.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/des3.c"
#"${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/camellia.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/cmac.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/sha256.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/hmac.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/kdf.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/pbkdf2.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/init.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/stubs.c"
# "${KRB5_SOURCE_DIR}/lib/crypto/openssl/hash_provider/hash_crc32.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/hash_provider/hash_evp.c"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/des/des_keys.c"
@ -312,7 +311,6 @@ set(ALL_SRCS
"${KRB5_SOURCE_DIR}/lib/krb5/krb/allow_weak.c"
"${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_rep.c"
"${KRB5_SOURCE_DIR}/lib/krb5/krb/mk_priv.c"
"${KRB5_SOURCE_DIR}/lib/krb5/krb/s4u_authdata.c"
"${KRB5_SOURCE_DIR}/lib/krb5/krb/preauth_otp.c"
"${KRB5_SOURCE_DIR}/lib/krb5/krb/init_keyblock.c"
"${KRB5_SOURCE_DIR}/lib/krb5/krb/ser_addr.c"
@ -476,6 +474,14 @@ set(ALL_SRCS
"${KRB5_SOURCE_DIR}/lib/krb5/krb5_libinit.c"
)

if (NOT (ENABLE_OPENSSL OR ENABLE_OPENSSL_DYNAMIC))
add_compile_definitions(USE_BORINGSSL=1)
else()
set(ALL_SRCS ${ALL_SRCS}
"${KRB5_SOURCE_DIR}/lib/crypto/openssl/enc_provider/camellia.c"
)
endif()

add_custom_command(
OUTPUT "${CMAKE_CURRENT_BINARY_DIR}/compile_et"
COMMAND /bin/sh
@ -675,6 +681,7 @@ target_include_directories(_krb5 PRIVATE
"${KRB5_SOURCE_DIR}/lib/gssapi/krb5"
"${KRB5_SOURCE_DIR}/lib/gssapi/spnego"
"${KRB5_SOURCE_DIR}/util/et"
"${KRB5_SOURCE_DIR}/lib/crypto/builtin/md4"
"${KRB5_SOURCE_DIR}/lib/crypto/openssl"
"${KRB5_SOURCE_DIR}/lib/crypto/krb"
"${KRB5_SOURCE_DIR}/util/profile"
@ -688,6 +695,7 @@ target_include_directories(_krb5 PRIVATE

target_compile_definitions(_krb5 PRIVATE
KRB5_PRIVATE
CRYPTO_OPENSSL
_GSS_STATIC_LINK=1
KRB5_DEPRECATED=1
LOCALEDIR="/usr/local/share/locale"
2
contrib/llvm-project
vendored
@ -1 +1 @@
Subproject commit a8bf69e9cd39a23140a2b633c172d201484172da
Subproject commit e0accd517933ebb44aff84bc8db448ffd8ef1929
@ -31,6 +31,40 @@
#define BIG_CONSTANT(x) (x##LLU)

#endif // !defined(_MSC_VER)
//
//-----------------------------------------------------------------------------
// Block read - on little-endian machines this is a single load,
// while on big-endian or unknown machines the byte accesses should
// still get optimized into the most efficient instruction.
static inline uint32_t getblock ( const uint32_t * p )
{
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
return *p;
#else
const uint8_t *c = (const uint8_t *)p;
return (uint32_t)c[0] |
(uint32_t)c[1] << 8 |
(uint32_t)c[2] << 16 |
(uint32_t)c[3] << 24;
#endif
}

static inline uint64_t getblock ( const uint64_t * p )
{
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
return *p;
#else
const uint8_t *c = (const uint8_t *)p;
return (uint64_t)c[0] |
(uint64_t)c[1] << 8 |
(uint64_t)c[2] << 16 |
(uint64_t)c[3] << 24 |
(uint64_t)c[4] << 32 |
(uint64_t)c[5] << 40 |
(uint64_t)c[6] << 48 |
(uint64_t)c[7] << 56;
#endif
}

//-----------------------------------------------------------------------------

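The getblock overloads above make the hash independent of host byte order: on little-endian targets the read is a single load, while elsewhere the bytes are assembled explicitly in little-endian order, so both paths yield the same value for the same input bytes. A compact sketch of the equivalent loop form (load_le64 is an illustrative name; a memcpy-based variant would additionally tolerate unaligned pointers):

#include <cstdint>
#include <cstdio>

// Assemble a 64-bit value from bytes in little-endian order; the result
// is identical on any host byte order.
static uint64_t load_le64(const unsigned char * c)
{
    uint64_t v = 0;
    for (int i = 7; i >= 0; --i)
        v = (v << 8) | c[i]; // c[0] ends up as the least significant byte
    return v;
}

int main()
{
    unsigned char bytes[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    std::printf("%llx\n", (unsigned long long)load_le64(bytes)); // 807060504030201
}
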
@ -52,7 +86,7 @@ uint32_t MurmurHash2 ( const void * key, size_t len, uint32_t seed )

while(len >= 4)
{
uint32_t k = *(uint32_t*)data;
uint32_t k = getblock((const uint32_t *)data);

k *= m;
k ^= k >> r;
@ -105,7 +139,7 @@ uint64_t MurmurHash64A ( const void * key, size_t len, uint64_t seed )

while(data != end)
{
uint64_t k = *data++;
uint64_t k = getblock(data++);

k *= m;
k ^= k >> r;
@ -151,12 +185,12 @@ uint64_t MurmurHash64B ( const void * key, size_t len, uint64_t seed )

while(len >= 8)
{
uint32_t k1 = *data++;
uint32_t k1 = getblock(data++);
k1 *= m; k1 ^= k1 >> r; k1 *= m;
h1 *= m; h1 ^= k1;
len -= 4;

uint32_t k2 = *data++;
uint32_t k2 = getblock(data++);
k2 *= m; k2 ^= k2 >> r; k2 *= m;
h2 *= m; h2 ^= k2;
len -= 4;
@ -164,7 +198,7 @@ uint64_t MurmurHash64B ( const void * key, size_t len, uint64_t seed )

if(len >= 4)
{
uint32_t k1 = *data++;
uint32_t k1 = getblock(data++);
k1 *= m; k1 ^= k1 >> r; k1 *= m;
h1 *= m; h1 ^= k1;
len -= 4;
@ -215,7 +249,7 @@ uint32_t MurmurHash2A ( const void * key, size_t len, uint32_t seed )

while(len >= 4)
{
uint32_t k = *(uint32_t*)data;
uint32_t k = getblock((const uint32_t *)data);

mmix(h,k);

@ -278,7 +312,7 @@ public:

while(len >= 4)
{
uint32_t k = *(uint32_t*)data;
uint32_t k = getblock((const uint32_t *)data);

mmix(m_hash,k);

@ -427,7 +461,7 @@ uint32_t MurmurHashAligned2 ( const void * key, size_t len, uint32_t seed )

while(len >= 4)
{
d = *(uint32_t *)data;
d = getblock((const uint32_t *)data);
t = (t >> sr) | (d << sl);

uint32_t k = t;
@ -492,7 +526,7 @@ uint32_t MurmurHashAligned2 ( const void * key, size_t len, uint32_t seed )
{
while(len >= 4)
{
uint32_t k = *(uint32_t *)data;
uint32_t k = getblock((const uint32_t *)data);

MIX(h,k,m);

@ -55,14 +55,32 @@ inline uint64_t rotl64 ( uint64_t x, int8_t r )

FORCE_INLINE uint32_t getblock32 ( const uint32_t * p, int i )
{
uint32_t res;
memcpy(&res, p + i, sizeof(res));
return res;
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
return p[i];
#else
const uint8_t *c = (const uint8_t *)&p[i];
return (uint32_t)c[0] |
(uint32_t)c[1] << 8 |
(uint32_t)c[2] << 16 |
(uint32_t)c[3] << 24;
#endif
}

FORCE_INLINE uint64_t getblock64 ( const uint64_t * p, int i )
{
#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
return p[i];
#else
const uint8_t *c = (const uint8_t *)&p[i];
return (uint64_t)c[0] |
(uint64_t)c[1] << 8 |
(uint64_t)c[2] << 16 |
(uint64_t)c[3] << 24 |
(uint64_t)c[4] << 32 |
(uint64_t)c[5] << 40 |
(uint64_t)c[6] << 48 |
(uint64_t)c[7] << 56;
#endif
}

//-----------------------------------------------------------------------------
@ -329,9 +347,13 @@ void MurmurHash3_x64_128 ( const void * key, const size_t len,

h1 += h2;
h2 += h1;

#if defined(__BYTE_ORDER__) && (__BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__)
((uint64_t*)out)[0] = h1;
((uint64_t*)out)[1] = h2;
#else
((uint64_t*)out)[0] = h2;
((uint64_t*)out)[1] = h1;
#endif
}

//-----------------------------------------------------------------------------

530
contrib/qpl-cmake/benchmark_sample/client_scripts/allin1_ssb.sh
Normal file
@ -0,0 +1,530 @@
#!/bin/bash
ckhost="localhost"
ckport=("9000" "9001" "9002" "9003")
WORKING_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/.."
OUTPUT_DIR="${WORKING_DIR}/output"
LOG_DIR="${OUTPUT_DIR}/log"
RAWDATA_DIR="${WORKING_DIR}/rawdata_dir"
database_dir="${WORKING_DIR}/database_dir"
CLIENT_SCRIPTS_DIR="${WORKING_DIR}/client_scripts"
LOG_PACK_FILE="$(date +%Y-%m-%d-%H-%M-%S)"
QUERY_FILE="queries_ssb.sql"
SERVER_BIND_CMD[0]="numactl -m 0 -N 0"
SERVER_BIND_CMD[1]="numactl -m 0 -N 0"
SERVER_BIND_CMD[2]="numactl -m 1 -N 1"
SERVER_BIND_CMD[3]="numactl -m 1 -N 1"
CLIENT_BIND_CMD=""
SSB_GEN_FACTOR=20
TABLE_NAME="lineorder_flat"
TABLE_ROWS="119994608"
CODEC_CONFIG="lz4 deflate zstd"

# define instance number
inst_num=$1
if [ ! -n "$1" ]; then
    echo "Please specify the instance number: 1, 2, 3 or 4"
    exit 1
else
    echo "Benchmarking with instance number:$1"
fi

if [ ! -d "$OUTPUT_DIR" ]; then
    mkdir $OUTPUT_DIR
fi
if [ ! -d "$LOG_DIR" ]; then
    mkdir $LOG_DIR
fi
if [ ! -d "$RAWDATA_DIR" ]; then
    mkdir $RAWDATA_DIR
fi

# define different directories
dir_server=("" "_s2" "_s3" "_s4")
ckreadSql="
CREATE TABLE customer
(
    C_CUSTKEY UInt32,
    C_NAME String,
    C_ADDRESS String,
    C_CITY LowCardinality(String),
    C_NATION LowCardinality(String),
    C_REGION LowCardinality(String),
    C_PHONE String,
    C_MKTSEGMENT LowCardinality(String)
)
ENGINE = MergeTree ORDER BY (C_CUSTKEY);

CREATE TABLE lineorder
(
    LO_ORDERKEY UInt32,
    LO_LINENUMBER UInt8,
    LO_CUSTKEY UInt32,
    LO_PARTKEY UInt32,
    LO_SUPPKEY UInt32,
    LO_ORDERDATE Date,
    LO_ORDERPRIORITY LowCardinality(String),
    LO_SHIPPRIORITY UInt8,
    LO_QUANTITY UInt8,
    LO_EXTENDEDPRICE UInt32,
    LO_ORDTOTALPRICE UInt32,
    LO_DISCOUNT UInt8,
    LO_REVENUE UInt32,
    LO_SUPPLYCOST UInt32,
    LO_TAX UInt8,
    LO_COMMITDATE Date,
    LO_SHIPMODE LowCardinality(String)
)
ENGINE = MergeTree PARTITION BY toYear(LO_ORDERDATE) ORDER BY (LO_ORDERDATE, LO_ORDERKEY);

CREATE TABLE part
(
    P_PARTKEY UInt32,
    P_NAME String,
    P_MFGR LowCardinality(String),
    P_CATEGORY LowCardinality(String),
    P_BRAND LowCardinality(String),
    P_COLOR LowCardinality(String),
    P_TYPE LowCardinality(String),
    P_SIZE UInt8,
    P_CONTAINER LowCardinality(String)
)
ENGINE = MergeTree ORDER BY P_PARTKEY;

CREATE TABLE supplier
(
    S_SUPPKEY UInt32,
    S_NAME String,
    S_ADDRESS String,
    S_CITY LowCardinality(String),
    S_NATION LowCardinality(String),
    S_REGION LowCardinality(String),
    S_PHONE String
)
ENGINE = MergeTree ORDER BY S_SUPPKEY;
"
supplier_table="
CREATE TABLE supplier
(
    S_SUPPKEY UInt32,
    S_NAME String,
    S_ADDRESS String,
    S_CITY LowCardinality(String),
    S_NATION LowCardinality(String),
    S_REGION LowCardinality(String),
    S_PHONE String
)
ENGINE = MergeTree ORDER BY S_SUPPKEY;
"
part_table="
CREATE TABLE part
(
    P_PARTKEY UInt32,
    P_NAME String,
    P_MFGR LowCardinality(String),
    P_CATEGORY LowCardinality(String),
    P_BRAND LowCardinality(String),
    P_COLOR LowCardinality(String),
    P_TYPE LowCardinality(String),
    P_SIZE UInt8,
    P_CONTAINER LowCardinality(String)
)
ENGINE = MergeTree ORDER BY P_PARTKEY;
"
lineorder_table="
CREATE TABLE lineorder
(
    LO_ORDERKEY UInt32,
    LO_LINENUMBER UInt8,
    LO_CUSTKEY UInt32,
    LO_PARTKEY UInt32,
    LO_SUPPKEY UInt32,
    LO_ORDERDATE Date,
    LO_ORDERPRIORITY LowCardinality(String),
    LO_SHIPPRIORITY UInt8,
    LO_QUANTITY UInt8,
    LO_EXTENDEDPRICE UInt32,
    LO_ORDTOTALPRICE UInt32,
    LO_DISCOUNT UInt8,
    LO_REVENUE UInt32,
    LO_SUPPLYCOST UInt32,
    LO_TAX UInt8,
    LO_COMMITDATE Date,
    LO_SHIPMODE LowCardinality(String)
)
ENGINE = MergeTree PARTITION BY toYear(LO_ORDERDATE) ORDER BY (LO_ORDERDATE, LO_ORDERKEY);
"
customer_table="
CREATE TABLE customer
(
    C_CUSTKEY UInt32,
    C_NAME String,
    C_ADDRESS String,
    C_CITY LowCardinality(String),
    C_NATION LowCardinality(String),
    C_REGION LowCardinality(String),
    C_PHONE String,
    C_MKTSEGMENT LowCardinality(String)
)
ENGINE = MergeTree ORDER BY (C_CUSTKEY);
"

lineorder_flat_table="
SET max_memory_usage = 20000000000;
CREATE TABLE lineorder_flat
ENGINE = MergeTree
PARTITION BY toYear(LO_ORDERDATE)
ORDER BY (LO_ORDERDATE, LO_ORDERKEY) AS
SELECT
    l.LO_ORDERKEY AS LO_ORDERKEY,
    l.LO_LINENUMBER AS LO_LINENUMBER,
    l.LO_CUSTKEY AS LO_CUSTKEY,
    l.LO_PARTKEY AS LO_PARTKEY,
    l.LO_SUPPKEY AS LO_SUPPKEY,
    l.LO_ORDERDATE AS LO_ORDERDATE,
    l.LO_ORDERPRIORITY AS LO_ORDERPRIORITY,
    l.LO_SHIPPRIORITY AS LO_SHIPPRIORITY,
    l.LO_QUANTITY AS LO_QUANTITY,
    l.LO_EXTENDEDPRICE AS LO_EXTENDEDPRICE,
    l.LO_ORDTOTALPRICE AS LO_ORDTOTALPRICE,
    l.LO_DISCOUNT AS LO_DISCOUNT,
    l.LO_REVENUE AS LO_REVENUE,
    l.LO_SUPPLYCOST AS LO_SUPPLYCOST,
    l.LO_TAX AS LO_TAX,
    l.LO_COMMITDATE AS LO_COMMITDATE,
    l.LO_SHIPMODE AS LO_SHIPMODE,
    c.C_NAME AS C_NAME,
    c.C_ADDRESS AS C_ADDRESS,
    c.C_CITY AS C_CITY,
    c.C_NATION AS C_NATION,
    c.C_REGION AS C_REGION,
    c.C_PHONE AS C_PHONE,
    c.C_MKTSEGMENT AS C_MKTSEGMENT,
    s.S_NAME AS S_NAME,
    s.S_ADDRESS AS S_ADDRESS,
    s.S_CITY AS S_CITY,
    s.S_NATION AS S_NATION,
    s.S_REGION AS S_REGION,
    s.S_PHONE AS S_PHONE,
    p.P_NAME AS P_NAME,
    p.P_MFGR AS P_MFGR,
    p.P_CATEGORY AS P_CATEGORY,
    p.P_BRAND AS P_BRAND,
    p.P_COLOR AS P_COLOR,
    p.P_TYPE AS P_TYPE,
    p.P_SIZE AS P_SIZE,
    p.P_CONTAINER AS P_CONTAINER
FROM lineorder AS l
INNER JOIN customer AS c ON c.C_CUSTKEY = l.LO_CUSTKEY
INNER JOIN supplier AS s ON s.S_SUPPKEY = l.LO_SUPPKEY
INNER JOIN part AS p ON p.P_PARTKEY = l.LO_PARTKEY;
show settings ilike 'max_memory_usage';
"

function insert_data(){
    echo "insert_data:$1"
    create_table_prefix="clickhouse client --host ${ckhost} --port $2 --multiquery -q"
    insert_data_prefix="clickhouse client --query "
    case $1 in
    all)
        clickhouse client --host ${ckhost} --port $2 --multiquery -q"$ckreadSql" && {
            ${insert_data_prefix} "INSERT INTO customer FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/customer.tbl --port=$2
            ${insert_data_prefix} "INSERT INTO part FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/part.tbl --port=$2
            ${insert_data_prefix} "INSERT INTO supplier FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/supplier.tbl --port=$2
            ${insert_data_prefix} "INSERT INTO lineorder FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/lineorder.tbl --port=$2
        }
        ${create_table_prefix}"${lineorder_flat_table}"
        ;;
    customer)
        echo ${create_table_prefix}\"${customer_table}\"
        ${create_table_prefix}"${customer_table}" && {
            echo "${insert_data_prefix} \"INSERT INTO $1 FORMAT CSV\" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2"
            ${insert_data_prefix} "INSERT INTO $1 FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2
        }
        ;;
    part)
        echo ${create_table_prefix}\"${part_table}\"
        ${create_table_prefix}"${part_table}" && {
            echo "${insert_data_prefix} \"INSERT INTO $1 FORMAT CSV\" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2"
            ${insert_data_prefix} "INSERT INTO $1 FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2
        }
        ;;
    supplier)
        echo ${create_table_prefix}"${supplier_table}"
        ${create_table_prefix}"${supplier_table}" && {
            echo "${insert_data_prefix} \"INSERT INTO $1 FORMAT CSV\" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2"
            ${insert_data_prefix} "INSERT INTO $1 FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2
        }
        ;;
    lineorder)
        echo ${create_table_prefix}"${lineorder_table}"
        ${create_table_prefix}"${lineorder_table}" && {
            echo "${insert_data_prefix} \"INSERT INTO $1 FORMAT CSV\" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2"
            ${insert_data_prefix} "INSERT INTO $1 FORMAT CSV" < ${RAWDATA_DIR}/ssb-dbgen/$1.tbl --port=$2
        }
        ;;
    lineorder_flat)
        echo ${create_table_prefix}"${lineorder_flat_table}"
        ${create_table_prefix}"${lineorder_flat_table}"
        return 0
        ;;
    *)
        exit 0
        ;;

    esac
}

function check_sql(){
    select_sql="select * from "$1" limit 1"
    clickhouse client --host ${ckhost} --port $2 --multiquery -q"${select_sql}"
}

function check_table(){
    checknum=0
    source_tables="customer part supplier lineorder lineorder_flat"
    test_tables=${1:-${source_tables}}
    echo "Checking table data required in server..."
    for i in $(seq 0 $[inst_num-1])
    do
        for j in `echo ${test_tables}`
        do
            check_sql $j ${ckport[i]} &> /dev/null || {
                let checknum+=1 && insert_data "$j" ${ckport[i]}
            }
        done
    done

    for i in $(seq 0 $[inst_num-1])
    do
        echo "clickhouse client --host ${ckhost} --port ${ckport[i]} -m -q\"select count() from ${TABLE_NAME};\""
        var=$(clickhouse client --host ${ckhost} --port ${ckport[i]} -m -q"select count() from ${TABLE_NAME};")
        if [ $var -eq $TABLE_ROWS ];then
            echo "Instance_${i} Table data integrity check OK -> Rows:$var"
        else
            echo "Instance_${i} Table data integrity check Failed -> Rows:$var"
            exit 1
        fi
    done
    if [ $checknum -gt 0 ];then
        echo "Need to sleep 10s after first table data insertion...$checknum"
        sleep 10
    fi
}

function check_instance(){
    instance_alive=0
    for i in {1..10}
    do
        sleep 1
        netstat -nltp | grep ${1} > /dev/null
        if [ $? -ne 1 ];then
            instance_alive=1
            break
        fi

    done

    if [ $instance_alive -eq 0 ];then
        echo "check_instance -> clickhouse server instance failed to launch due to 10s timeout!"
        exit 1
    else
        echo "check_instance -> clickhouse server instance launched successfully!"
    fi
}

function start_clickhouse_for_insertion(){
    echo "start_clickhouse_for_insertion"
    for i in $(seq 0 $[inst_num-1])
    do
        echo "cd ${database_dir}/$1${dir_server[i]}"
        echo "${SERVER_BIND_CMD[i]} clickhouse server -C config_${1}${dir_server[i]}.xml >&${LOG_DIR}/${1}_${i}_server_log& > /dev/null"

        cd ${database_dir}/$1${dir_server[i]}
        ${SERVER_BIND_CMD[i]} clickhouse server -C config_${1}${dir_server[i]}.xml >&${LOG_DIR}/${1}_${i}_server_log& > /dev/null
        check_instance ${ckport[i]}
    done
}

function start_clickhouse_for_stressing(){
    echo "start_clickhouse_for_stressing"
    for i in $(seq 0 $[inst_num-1])
    do
        echo "cd ${database_dir}/$1${dir_server[i]}"
        echo "${SERVER_BIND_CMD[i]} clickhouse server -C config_${1}${dir_server[i]}.xml >&/dev/null&"

        cd ${database_dir}/$1${dir_server[i]}
        ${SERVER_BIND_CMD[i]} clickhouse server -C config_${1}${dir_server[i]}.xml >&/dev/null&
        check_instance ${ckport[i]}
    done
}
yum -y install git make gcc sudo net-tools &> /dev/null
pip3 install clickhouse_driver numpy &> /dev/null
test -d ${RAWDATA_DIR}/ssb-dbgen || git clone https://github.com/vadimtk/ssb-dbgen.git ${RAWDATA_DIR}/ssb-dbgen && cd ${RAWDATA_DIR}/ssb-dbgen

if [ ! -f ${RAWDATA_DIR}/ssb-dbgen/dbgen ];then
    make && {
        test -f ${RAWDATA_DIR}/ssb-dbgen/customer.tbl || echo y |./dbgen -s ${SSB_GEN_FACTOR} -T c
        test -f ${RAWDATA_DIR}/ssb-dbgen/part.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T p
        test -f ${RAWDATA_DIR}/ssb-dbgen/supplier.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T s
        test -f ${RAWDATA_DIR}/ssb-dbgen/date.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T d
        test -f ${RAWDATA_DIR}/ssb-dbgen/lineorder.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T l
    }
else
    test -f ${RAWDATA_DIR}/ssb-dbgen/customer.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T c
    test -f ${RAWDATA_DIR}/ssb-dbgen/part.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T p
    test -f ${RAWDATA_DIR}/ssb-dbgen/supplier.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T s
    test -f ${RAWDATA_DIR}/ssb-dbgen/date.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T d
    test -f ${RAWDATA_DIR}/ssb-dbgen/lineorder.tbl || echo y | ./dbgen -s ${SSB_GEN_FACTOR} -T l

fi

filenum=`find ${RAWDATA_DIR}/ssb-dbgen/ -name "*.tbl" | wc -l`

if [ $filenum -ne 5 ];then
    echo "generating ssb data files *.tbl failed"
    exit 1
fi

function kill_instance(){
    instance_alive=1
    for i in {1..2}
    do
        pkill clickhouse && sleep 5
        instance_alive=0
        for i in $(seq 0 $[inst_num-1])
        do
            netstat -nltp | grep ${ckport[i]} > /dev/null
            if [ $? -ne 1 ];then
                instance_alive=1
                break;
            fi
        done
        if [ $instance_alive -eq 0 ];then
            break;
        fi
    done
    if [ $instance_alive -eq 0 ];then
        echo "kill_instance OK!"
    else
        echo "kill_instance Failed -> clickhouse server instance still alive due to 10s timeout"
        exit 1
    fi
}

function run_test(){
    is_xml=0
    for i in $(seq 0 $[inst_num-1])
    do
        if [ -f ${database_dir}/${1}${dir_server[i]}/config_${1}${dir_server[i]}.xml ]; then
            is_xml=$[is_xml+1]
        fi
    done
    if [ $is_xml -eq $inst_num ];then
        echo "Benchmark with $inst_num instance"
        start_clickhouse_for_insertion ${1}

        for i in $(seq 0 $[inst_num-1])
        do
            clickhouse client --host ${ckhost} --port ${ckport[i]} -m -q"show databases;" >/dev/null
        done

        if [ $? -eq 0 ];then
            check_table
        fi
        kill_instance

        if [ $1 == "deflate" ];then
            test -f ${LOG_DIR}/${1}_server_log && deflatemsg=`cat ${LOG_DIR}/${1}_server_log | grep DeflateJobHWPool`
            if [ -n "$deflatemsg" ];then
                echo ------------------------------------------------------
                echo $deflatemsg
                echo ------------------------------------------------------
            fi
        fi
        echo "Check table data required in server_${1} -> Done! "

        start_clickhouse_for_stressing ${1}
        for i in $(seq 0 $[inst_num-1])
        do
            clickhouse client --host ${ckhost} --port ${ckport[i]} -m -q"show databases;" >/dev/null
        done
        if [ $? -eq 0 ];then
            test -d ${CLIENT_SCRIPTS_DIR} && cd ${CLIENT_SCRIPTS_DIR}
            echo "Client stressing... "
            echo "${CLIENT_BIND_CMD} python3 client_stressing_test.py ${QUERY_FILE} $inst_num &> ${LOG_DIR}/${1}.log"
            ${CLIENT_BIND_CMD} python3 client_stressing_test.py ${QUERY_FILE} $inst_num &> ${LOG_DIR}/${1}.log
            echo "Completed client stressing, checking log... "
            finish_log=`grep "Finished" ${LOG_DIR}/${1}.log | wc -l`
            if [ $finish_log -eq 1 ] ;then
                kill_instance
                test -f ${LOG_DIR}/${1}.log && echo "${1}.log ===> ${LOG_DIR}/${1}.log"
            else
                kill_instance
                echo "'Finished' not found in client log -> Performance test may have failed"
                exit 1

            fi

        else
            echo "${1} clickhouse server failed to start"
            exit 1
        fi
    else
        echo "clickhouse server failed to start -> Please check the xml files required in ${database_dir} for each instance"
        exit 1

    fi
}
function clear_log(){
    if [ -d "$LOG_DIR" ]; then
        cd ${LOG_DIR} && rm -rf *
    fi
}

function gather_log_for_codec(){
    cd ${OUTPUT_DIR} && mkdir -p ${LOG_PACK_FILE}/${1}
    cp -rf ${LOG_DIR} ${OUTPUT_DIR}/${LOG_PACK_FILE}/${1}
}

function pack_log(){
    if [ -e "${OUTPUT_DIR}/run.log" ]; then
        cp ${OUTPUT_DIR}/run.log ${OUTPUT_DIR}/${LOG_PACK_FILE}/
    fi
    echo "Please check all log information in ${OUTPUT_DIR}/${LOG_PACK_FILE}"
}

function setup_check(){

    iax_dev_num=`accel-config list | grep iax | wc -l`
    if [ $iax_dev_num -eq 0 ] ;then
        iax_dev_num=`accel-config list | grep iax | wc -l`
        if [ $iax_dev_num -eq 0 ] ;then
            echo "No IAA devices available -> Please check IAA hardware setup manually!"
            exit 1
        else
            echo "IAA enabled devices number:$iax_dev_num"
        fi
    else
        echo "IAA enabled devices number:$iax_dev_num"
    fi
    libaccel_version=`accel-config -v`
    clickhouse_version=`clickhouse server --version`
    kernel_dxd_log=`dmesg | grep dxd`
    echo "libaccel_version:$libaccel_version"
    echo "clickhouse_version:$clickhouse_version"
    echo -e "idxd section in kernel log:\n$kernel_dxd_log"
}

setup_check
export CLICKHOUSE_WATCHDOG_ENABLE=0
for i in ${CODEC_CONFIG[@]}
do
    clear_log
    codec=${i}
    echo "run test------------$codec"
    run_test $codec
    gather_log_for_codec $codec
done

pack_log
echo "Done."
@ -0,0 +1,278 @@
|
||||
from operator import eq
|
||||
import os
|
||||
import random
|
||||
import time
|
||||
import sys
|
||||
from clickhouse_driver import Client
|
||||
import numpy as np
|
||||
import subprocess
|
||||
import multiprocessing
|
||||
from multiprocessing import Manager
|
||||
|
||||
warmup_runs = 10
|
||||
calculated_runs = 10
|
||||
seconds = 30
|
||||
max_instances_number = 8
|
||||
retest_number = 3
|
||||
retest_tolerance = 10
|
||||
|
||||
|
||||
def checkInt(str):
|
||||
try:
|
||||
int(str)
|
||||
return True
|
||||
except ValueError:
|
||||
return False
|
||||
|
||||
|
||||
def setup_client(index):
|
||||
if index < 4:
|
||||
port_idx = index
|
||||
else:
|
||||
port_idx = index + 4
|
||||
client = Client(
|
||||
host="localhost",
|
||||
database="default",
|
||||
user="default",
|
||||
password="",
|
||||
port="900%d" % port_idx,
|
||||
)
|
||||
union_mode_query = "SET union_default_mode='DISTINCT'"
|
||||
client.execute(union_mode_query)
|
||||
return client
|
||||
|
||||
|
||||
def warm_client(clientN, clientL, query, loop):
|
||||
for c_idx in range(clientN):
|
||||
for _ in range(loop):
|
||||
clientL[c_idx].execute(query)
|
||||
|
||||
|
||||
def read_queries(queries_list):
|
||||
queries = list()
|
||||
queries_id = list()
|
||||
with open(queries_list, "r") as f:
|
||||
for line in f:
|
||||
line = line.rstrip()
|
||||
line = line.split("$")
|
||||
queries_id.append(line[0])
|
||||
queries.append(line[1])
|
||||
return queries_id, queries
|
||||
|
||||
|
||||
def run_task(client, cname, query, loop, query_latency):
|
||||
start_time = time.time()
|
||||
for i in range(loop):
|
||||
client.execute(query)
|
||||
query_latency.append(client.last_query.elapsed)
|
||||
|
||||
end_time = time.time()
|
||||
p95 = np.percentile(query_latency, 95)
|
||||
print(
|
||||
"CLIENT: {0} end. -> P95: %f, qps: %f".format(cname)
|
||||
% (p95, loop / (end_time - start_time))
|
||||
)
|
||||
|
||||
|
||||
def run_multi_clients(clientN, clientList, query, loop):
|
||||
client_pids = {}
|
||||
start_time = time.time()
|
||||
manager = multiprocessing.Manager()
|
||||
query_latency_list0 = manager.list()
|
||||
query_latency_list1 = manager.list()
|
||||
query_latency_list2 = manager.list()
|
||||
query_latency_list3 = manager.list()
|
||||
query_latency_list4 = manager.list()
|
||||
query_latency_list5 = manager.list()
|
||||
query_latency_list6 = manager.list()
|
||||
query_latency_list7 = manager.list()
|
||||
|
||||
for c_idx in range(clientN):
|
||||
client_name = "Role_%d" % c_idx
|
||||
if c_idx == 0:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list0),
|
||||
)
|
||||
elif c_idx == 1:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list1),
|
||||
)
|
||||
elif c_idx == 2:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list2),
|
||||
)
|
||||
elif c_idx == 3:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list3),
|
||||
)
|
||||
elif c_idx == 4:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list4),
|
||||
)
|
||||
elif c_idx == 5:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list5),
|
||||
)
|
||||
elif c_idx == 6:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list6),
|
||||
)
|
||||
elif c_idx == 7:
|
||||
client_pids[c_idx] = multiprocessing.Process(
|
||||
target=run_task,
|
||||
args=(clientList[c_idx], client_name, query, loop, query_latency_list7),
|
||||
)
|
||||
else:
|
||||
print("ERROR: CLIENT number dismatch!!")
|
||||
exit()
|
||||
print("CLIENT: %s start" % client_name)
|
||||
client_pids[c_idx].start()
|
||||
|
||||
for c_idx in range(clientN):
|
||||
client_pids[c_idx].join()
|
||||
end_time = time.time()
|
||||
totalT = end_time - start_time
|
||||
|
||||
query_latencyTotal = list()
|
||||
for item in query_latency_list0:
|
||||
query_latencyTotal.append(item)
|
||||
for item in query_latency_list1:
|
||||
query_latencyTotal.append(item)
|
||||
for item in query_latency_list2:
|
||||
query_latencyTotal.append(item)
|
||||
for item in query_latency_list3:
|
||||
query_latencyTotal.append(item)
|
||||
for item in query_latency_list4:
|
||||
query_latencyTotal.append(item)
|
||||
for item in query_latency_list5:
|
||||
query_latencyTotal.append(item)
|
||||
for item in query_latency_list6:
|
||||
query_latencyTotal.append(item)
|
||||
for item in query_latency_list7:
|
||||
query_latencyTotal.append(item)
|
||||
|
||||
totalP95 = np.percentile(query_latencyTotal, 95) * 1000
|
||||
return totalT, totalP95
|
||||
|
||||
|
||||
def run_task_calculated(client, cname, query, loop):
    query_latency = list()
    start_time = time.time()
    for i in range(loop):
        client.execute(query)
        query_latency.append(client.last_query.elapsed)
    end_time = time.time()
    # The per-client P95 is computed but intentionally not reported: this run
    # only serves to measure wall time for calibration.
    p95 = np.percentile(query_latency, 95)

def run_multi_clients_calculated(clientN, clientList, query, loop):
    client_pids = {}
    start_time = time.time()
    for c_idx in range(clientN):
        client_name = "Role_%d" % c_idx
        client_pids[c_idx] = multiprocessing.Process(
            target=run_task_calculated,
            args=(clientList[c_idx], client_name, query, loop),
        )
        client_pids[c_idx].start()
    for c_idx in range(clientN):
        client_pids[c_idx].join()
    end_time = time.time()
    totalT = end_time - start_time
    return totalT

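The main block below uses this measured wall time to size the stress run so that it lasts roughly `seconds` seconds. A worked example with made-up numbers:

# Illustrative calibration; the real values come from the script's configuration.
seconds = 30           # target duration of the stress run
calculated_runs = 100  # queries per client during the timing run
totalT = 12.0          # measured wall time of the timing run
curr_loop = int(seconds * calculated_runs / totalT) + 1  # 30 * 100 / 12 -> 251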
if __name__ == "__main__":
    client_number = 1
    queries = list()
    queries_id = list()

    if len(sys.argv) != 3:
        print(
            "usage: python3 client_stressing_test.py [queries_file_path] [client_number]"
        )
        sys.exit()
    else:
        queries_list = sys.argv[1]
        client_number = int(sys.argv[2])
        print(
            "queries_file_path: %s, client_number: %d" % (queries_list, client_number)
        )
        if not os.path.isfile(queries_list) or not os.access(queries_list, os.R_OK):
            print("please check that the queries file path is correct and readable")
            sys.exit()
        if (
            not checkInt(sys.argv[2])
            or int(sys.argv[2]) > max_instances_number
            or int(sys.argv[2]) < 1
        ):
            print("client_number should be in [1~%d]" % max_instances_number)
            sys.exit()

    client_list = {}
    queries_id, queries = read_queries(queries_list)

    for c_idx in range(client_number):
        client_list[c_idx] = setup_client(c_idx)
    # Clear the OS page cache so every query starts cold.
    os.system("sync; echo 3 > /proc/sys/vm/drop_caches")

    print("###Pilot Run Begin")
    for i in queries:
        warm_client(client_number, client_list, i, 1)
    print("###Pilot Run End -> Start stressing....")

    query_index = 0
    for q in queries:
        print(
            "\n###START -> Index: %d, ID: %s, Query: %s"
            % (query_index, queries_id[query_index], q)
        )
        warm_client(client_number, client_list, q, warmup_runs)
        print("###Warm Done!")
        for j in range(0, retest_number):
            totalT = run_multi_clients_calculated(
                client_number, client_list, q, calculated_runs
            )
            # Scale the iteration count so the stress run lasts ~`seconds` seconds.
            curr_loop = int(seconds * calculated_runs / totalT) + 1
            print(
                "###Calculation Done! -> loopN: %d, expected seconds: %d"
                % (curr_loop, seconds)
            )

            print("###Stress Running! -> %d iterations......" % curr_loop)

            totalT, totalP95 = run_multi_clients(
                client_number, client_list, q, curr_loop
            )

            if totalT > (seconds - retest_tolerance) and totalT < (
                seconds + retest_tolerance
            ):
                break
            else:
                print(
                    "###totalT: %d is far away from expected seconds: %d. Run again -> j: %d!"
                    % (totalT, seconds, j)
                )

        print(
            "###Completed! -> ID: %s, clientN: %d, totalT: %.2f s, latencyAVG: %.2f ms, P95: %.2f ms, QPS_Final: %.2f"
            % (
                queries_id[query_index],
                client_number,
                totalT,
                totalT * 1000 / (curr_loop * client_number),
                totalP95,
                ((curr_loop * client_number) / totalT),
            )
        )
        query_index += 1
    print("###Finished!")
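For reference, a typical invocation of this harness might be python3 client_stressing_test.py queries_ssb.sql 4, where the file name and client count are illustrative. The queries file added below pairs each query ID with its SQL, separated by a $ character.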
@ -0,0 +1,10 @@
Q1.1$SELECT sum(LO_EXTENDEDPRICE * LO_DISCOUNT) AS revenue FROM lineorder_flat WHERE toYear(LO_ORDERDATE) = 1993 AND LO_DISCOUNT BETWEEN 1 AND 3 AND LO_QUANTITY < 25;
Q2.1$SELECT sum(LO_REVENUE),toYear(LO_ORDERDATE) AS year,P_BRAND FROM lineorder_flat WHERE P_CATEGORY = 'MFGR#12' AND S_REGION = 'AMERICA' GROUP BY year,P_BRAND ORDER BY year,P_BRAND;
Q2.2$SELECT sum(LO_REVENUE),toYear(LO_ORDERDATE) AS year,P_BRAND FROM lineorder_flat WHERE P_BRAND >= 'MFGR#2221' AND P_BRAND <= 'MFGR#2228' AND S_REGION = 'ASIA' GROUP BY year,P_BRAND ORDER BY year,P_BRAND;
Q2.3$SELECT sum(LO_REVENUE),toYear(LO_ORDERDATE) AS year,P_BRAND FROM lineorder_flat WHERE P_BRAND = 'MFGR#2239' AND S_REGION = 'EUROPE' GROUP BY year,P_BRAND ORDER BY year,P_BRAND;
Q3.1$SELECT C_NATION,S_NATION,toYear(LO_ORDERDATE) AS year,sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE C_REGION = 'ASIA' AND S_REGION = 'ASIA' AND year >= 1992 AND year <= 1997 GROUP BY C_NATION,S_NATION,year ORDER BY year ASC,revenue DESC;
Q3.2$SELECT C_CITY,S_CITY,toYear(LO_ORDERDATE) AS year,sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE C_NATION = 'UNITED STATES' AND S_NATION = 'UNITED STATES' AND year >= 1992 AND year <= 1997 GROUP BY C_CITY,S_CITY,year ORDER BY year ASC,revenue DESC;
Q3.3$SELECT C_CITY,S_CITY,toYear(LO_ORDERDATE) AS year,sum(LO_REVENUE) AS revenue FROM lineorder_flat WHERE (C_CITY = 'UNITED KI1' OR C_CITY = 'UNITED KI5') AND (S_CITY = 'UNITED KI1' OR S_CITY = 'UNITED KI5') AND year >= 1992 AND year <= 1997 GROUP BY C_CITY,S_CITY,year ORDER BY year ASC,revenue DESC;
Q4.1$SELECT toYear(LO_ORDERDATE) AS year,C_NATION,sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2') GROUP BY year,C_NATION ORDER BY year ASC,C_NATION ASC;
Q4.2$SELECT toYear(LO_ORDERDATE) AS year,S_NATION,P_CATEGORY,sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE C_REGION = 'AMERICA' AND S_REGION = 'AMERICA' AND (year = 1997 OR year = 1998) AND (P_MFGR = 'MFGR#1' OR P_MFGR = 'MFGR#2') GROUP BY year,S_NATION,P_CATEGORY ORDER BY year ASC,S_NATION ASC,P_CATEGORY ASC;
Q4.3$SELECT toYear(LO_ORDERDATE) AS year,S_CITY,P_BRAND,sum(LO_REVENUE - LO_SUPPLYCOST) AS profit FROM lineorder_flat WHERE S_NATION = 'UNITED STATES' AND (year = 1997 OR year = 1998) AND P_CATEGORY = 'MFGR#14' GROUP BY year,S_CITY,P_BRAND ORDER BY year ASC,S_CITY ASC,P_BRAND ASC;
@ -0,0 +1,6 @@
WORKING_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)/.."
if [ ! -d "${WORKING_DIR}/output" ]; then
    mkdir "${WORKING_DIR}/output"
fi
bash allin1_ssb.sh 2 > "${WORKING_DIR}/output/run.log"
echo "Please check log in: ${WORKING_DIR}/output/run.log"
@ -0,0 +1,49 @@
<!-- This file was generated automatically.
 Do not edit it: it is likely to be discarded and generated again before it's read next time.
 Files used to generate this file:
 config.xml -->

<!-- Config that is used when server is run without config file. --><clickhouse>
    <logger>
        <level>trace</level>
        <console>true</console>
    </logger>

    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <mysql_port>9004</mysql_port>

    <path>./</path>

    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>
    <mlock_executable>true</mlock_executable>

    <compression>
        <case>
            <method>deflate_qpl</method>
        </case>
    </compression>

    <users>
        <default>
            <password/>

            <networks>
                <ip>::/0</ip>
            </networks>

            <profile>default</profile>
            <quota>default</quota>
            <access_management>1</access_management>
        </default>
    </users>

    <profiles>
        <default/>
    </profiles>

    <quotas>
        <default/>
    </quotas>
</clickhouse>
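A note on the codec above: deflate_qpl refers to ClickHouse's DEFLATE_QPL codec, built on the Intel Query Processing Library, which can offload compression to an Intel In-Memory Analytics Accelerator when one is available. The five configs that follow repeat this file with the remaining combinations of codec (deflate_qpl, lz4, zstd) and server instance (ports 8123/9000/9004 versus 8124/9001/9005).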
@ -0,0 +1,49 @@
<!-- This file was generated automatically.
 Do not edit it: it is likely to be discarded and generated again before it's read next time.
 Files used to generate this file:
 config.xml -->

<!-- Config that is used when server is run without config file. --><clickhouse>
    <logger>
        <level>trace</level>
        <console>true</console>
    </logger>

    <http_port>8124</http_port>
    <tcp_port>9001</tcp_port>
    <mysql_port>9005</mysql_port>

    <path>./</path>

    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>
    <mlock_executable>true</mlock_executable>

    <compression>
        <case>
            <method>deflate_qpl</method>
        </case>
    </compression>

    <users>
        <default>
            <password/>

            <networks>
                <ip>::/0</ip>
            </networks>

            <profile>default</profile>
            <quota>default</quota>
            <access_management>1</access_management>
        </default>
    </users>

    <profiles>
        <default/>
    </profiles>

    <quotas>
        <default/>
    </quotas>
</clickhouse>
@ -0,0 +1,49 @@
<!-- This file was generated automatically.
 Do not edit it: it is likely to be discarded and generated again before it's read next time.
 Files used to generate this file:
 config.xml -->

<!-- Config that is used when server is run without config file. --><clickhouse>
    <logger>
        <level>trace</level>
        <console>true</console>
    </logger>

    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <mysql_port>9004</mysql_port>

    <path>./</path>

    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>
    <mlock_executable>true</mlock_executable>

    <compression>
        <case>
            <method>lz4</method>
        </case>
    </compression>

    <users>
        <default>
            <password/>

            <networks>
                <ip>::/0</ip>
            </networks>

            <profile>default</profile>
            <quota>default</quota>
            <access_management>1</access_management>
        </default>
    </users>

    <profiles>
        <default/>
    </profiles>

    <quotas>
        <default/>
    </quotas>
</clickhouse>
@ -0,0 +1,49 @@
<!-- This file was generated automatically.
 Do not edit it: it is likely to be discarded and generated again before it's read next time.
 Files used to generate this file:
 config.xml -->

<!-- Config that is used when server is run without config file. --><clickhouse>
    <logger>
        <level>trace</level>
        <console>true</console>
    </logger>

    <http_port>8124</http_port>
    <tcp_port>9001</tcp_port>
    <mysql_port>9005</mysql_port>

    <path>./</path>

    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>
    <mlock_executable>true</mlock_executable>

    <compression>
        <case>
            <method>lz4</method>
        </case>
    </compression>

    <users>
        <default>
            <password/>

            <networks>
                <ip>::/0</ip>
            </networks>

            <profile>default</profile>
            <quota>default</quota>
            <access_management>1</access_management>
        </default>
    </users>

    <profiles>
        <default/>
    </profiles>

    <quotas>
        <default/>
    </quotas>
</clickhouse>
@ -0,0 +1,49 @@
<!-- This file was generated automatically.
 Do not edit it: it is likely to be discarded and generated again before it's read next time.
 Files used to generate this file:
 config.xml -->

<!-- Config that is used when server is run without config file. --><clickhouse>
    <logger>
        <level>trace</level>
        <console>true</console>
    </logger>

    <http_port>8123</http_port>
    <tcp_port>9000</tcp_port>
    <mysql_port>9004</mysql_port>

    <path>./</path>

    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>
    <mlock_executable>true</mlock_executable>

    <compression>
        <case>
            <method>zstd</method>
        </case>
    </compression>

    <users>
        <default>
            <password/>

            <networks>
                <ip>::/0</ip>
            </networks>

            <profile>default</profile>
            <quota>default</quota>
            <access_management>1</access_management>
        </default>
    </users>

    <profiles>
        <default/>
    </profiles>

    <quotas>
        <default/>
    </quotas>
</clickhouse>
@ -0,0 +1,49 @@
<!-- This file was generated automatically.
 Do not edit it: it is likely to be discarded and generated again before it's read next time.
 Files used to generate this file:
 config.xml -->

<!-- Config that is used when server is run without config file. --><clickhouse>
    <logger>
        <level>trace</level>
        <console>true</console>
    </logger>

    <http_port>8124</http_port>
    <tcp_port>9001</tcp_port>
    <mysql_port>9005</mysql_port>

    <path>./</path>

    <uncompressed_cache_size>8589934592</uncompressed_cache_size>
    <mark_cache_size>5368709120</mark_cache_size>
    <mlock_executable>true</mlock_executable>

    <compression>
        <case>
            <method>zstd</method>
        </case>
    </compression>

    <users>
        <default>
            <password/>

            <networks>
                <ip>::/0</ip>
            </networks>

            <profile>default</profile>
            <quota>default</quota>
            <access_management>1</access_management>
        </default>
    </users>

    <profiles>
        <default/>
    </profiles>

    <quotas>
        <default/>
    </quotas>
</clickhouse>
19
contrib/sparse-checkout/setup-sparse-checkout.sh
Executable file
@ -0,0 +1,19 @@
#!/bin/sh

set -e

git config submodule."contrib/llvm-project".update '!../sparse-checkout/update-llvm-project.sh'
git config submodule."contrib/croaring".update '!../sparse-checkout/update-croaring.sh'
git config submodule."contrib/aws".update '!../sparse-checkout/update-aws.sh'
git config submodule."contrib/openssl".update '!../sparse-checkout/update-openssl.sh'
git config submodule."contrib/boringssl".update '!../sparse-checkout/update-boringssl.sh'
git config submodule."contrib/arrow".update '!../sparse-checkout/update-arrow.sh'
git config submodule."contrib/grpc".update '!../sparse-checkout/update-grpc.sh'
git config submodule."contrib/orc".update '!../sparse-checkout/update-orc.sh'
git config submodule."contrib/h3".update '!../sparse-checkout/update-h3.sh'
git config submodule."contrib/icu".update '!../sparse-checkout/update-icu.sh'
git config submodule."contrib/boost".update '!../sparse-checkout/update-boost.sh'
git config submodule."contrib/aws-s2n-tls".update '!../sparse-checkout/update-aws-s2n-tls.sh'
git config submodule."contrib/protobuf".update '!../sparse-checkout/update-protobuf.sh'
git config submodule."contrib/libxml2".update '!../sparse-checkout/update-libxml2.sh'
git config submodule."contrib/brotli".update '!../sparse-checkout/update-brotli.sh'
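The '!' prefix in these values is git's escape hatch: when submodule.<name>.update begins with an exclamation mark, git submodule update runs the remainder as a command inside the submodule's directory, with the target commit as its argument. That is why each update-*.sh script below references ../sparse-checkout/ (a path relative to contrib/<submodule>) and finishes with git checkout $1.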
12
contrib/sparse-checkout/update-arrow.sh
Executable file
@ -0,0 +1,12 @@
#!/bin/sh

echo "Using sparse checkout for arrow"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/*/*' >> $FILES_TO_CHECKOUT
echo '/cpp/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
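The patterns written to $GIT_DIR/info/sparse-checkout use gitignore-style matching: '/*' pulls in everything at the top level, '!/*/*' then drops the contents of every subdirectory, and later lines such as '/cpp/*' re-include only the trees the build actually needs. The remaining update scripts follow the same include-then-prune shape.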
13
contrib/sparse-checkout/update-aws-s2n-tls.sh
Executable file
@ -0,0 +1,13 @@
#!/bin/sh

echo "Using sparse checkout for aws-s2n-tls"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/test/*' >> $FILES_TO_CHECKOUT
echo '!/docs/*' >> $FILES_TO_CHECKOUT
echo '!/compliance/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
13
contrib/sparse-checkout/update-aws.sh
Executable file
@ -0,0 +1,13 @@
#!/bin/sh

echo "Using sparse checkout for aws"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/*/*' >> $FILES_TO_CHECKOUT
echo '/aws-cpp-sdk-core/*' >> $FILES_TO_CHECKOUT
echo '/aws-cpp-sdk-s3/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
85
contrib/sparse-checkout/update-boost.sh
Executable file
@ -0,0 +1,85 @@
#!/bin/sh

echo "Using sparse checkout for boost"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/*/*' >> $FILES_TO_CHECKOUT
echo '/boost/*' >> $FILES_TO_CHECKOUT
echo '!/boost/*/*' >> $FILES_TO_CHECKOUT
echo '/boost/algorithm/*' >> $FILES_TO_CHECKOUT
echo '/boost/any/*' >> $FILES_TO_CHECKOUT
echo '/boost/atomic/*' >> $FILES_TO_CHECKOUT
echo '/boost/assert/*' >> $FILES_TO_CHECKOUT
echo '/boost/bind/*' >> $FILES_TO_CHECKOUT
echo '/boost/concept/*' >> $FILES_TO_CHECKOUT
echo '/boost/config/*' >> $FILES_TO_CHECKOUT
echo '/boost/container/*' >> $FILES_TO_CHECKOUT
echo '/boost/container_hash/*' >> $FILES_TO_CHECKOUT
echo '/boost/context/*' >> $FILES_TO_CHECKOUT
echo '/boost/convert/*' >> $FILES_TO_CHECKOUT
echo '/boost/coroutine/*' >> $FILES_TO_CHECKOUT
echo '/boost/core/*' >> $FILES_TO_CHECKOUT
echo '/boost/detail/*' >> $FILES_TO_CHECKOUT
echo '/boost/dynamic_bitset/*' >> $FILES_TO_CHECKOUT
echo '/boost/exception/*' >> $FILES_TO_CHECKOUT
echo '/boost/filesystem/*' >> $FILES_TO_CHECKOUT
echo '/boost/functional/*' >> $FILES_TO_CHECKOUT
echo '/boost/function/*' >> $FILES_TO_CHECKOUT
echo '/boost/geometry/*' >> $FILES_TO_CHECKOUT
echo '/boost/graph/*' >> $FILES_TO_CHECKOUT
echo '/boost/heap/*' >> $FILES_TO_CHECKOUT
echo '/boost/integer/*' >> $FILES_TO_CHECKOUT
echo '/boost/intrusive/*' >> $FILES_TO_CHECKOUT
echo '/boost/iostreams/*' >> $FILES_TO_CHECKOUT
echo '/boost/io/*' >> $FILES_TO_CHECKOUT
echo '/boost/iterator/*' >> $FILES_TO_CHECKOUT
echo '/boost/math/*' >> $FILES_TO_CHECKOUT
echo '/boost/move/*' >> $FILES_TO_CHECKOUT
echo '/boost/mpl/*' >> $FILES_TO_CHECKOUT
echo '/boost/multi_index/*' >> $FILES_TO_CHECKOUT
echo '/boost/multiprecision/*' >> $FILES_TO_CHECKOUT
echo '/boost/numeric/*' >> $FILES_TO_CHECKOUT
echo '/boost/predef/*' >> $FILES_TO_CHECKOUT
echo '/boost/preprocessor/*' >> $FILES_TO_CHECKOUT
echo '/boost/program_options/*' >> $FILES_TO_CHECKOUT
echo '/boost/range/*' >> $FILES_TO_CHECKOUT
echo '/boost/regex/*' >> $FILES_TO_CHECKOUT
echo '/boost/smart_ptr/*' >> $FILES_TO_CHECKOUT
echo '/boost/type_index/*' >> $FILES_TO_CHECKOUT
echo '/boost/type_traits/*' >> $FILES_TO_CHECKOUT
echo '/boost/system/*' >> $FILES_TO_CHECKOUT
echo '/boost/tti/*' >> $FILES_TO_CHECKOUT
echo '/boost/utility/*' >> $FILES_TO_CHECKOUT
echo '/boost/lexical_cast/*' >> $FILES_TO_CHECKOUT
echo '/boost/optional/*' >> $FILES_TO_CHECKOUT
echo '/boost/property_map/*' >> $FILES_TO_CHECKOUT
echo '/boost/pending/*' >> $FILES_TO_CHECKOUT
echo '/boost/multi_array/*' >> $FILES_TO_CHECKOUT
echo '/boost/tuple/*' >> $FILES_TO_CHECKOUT
echo '/boost/icl/*' >> $FILES_TO_CHECKOUT
echo '/boost/unordered/*' >> $FILES_TO_CHECKOUT
echo '/boost/typeof/*' >> $FILES_TO_CHECKOUT
echo '/boost/parameter/*' >> $FILES_TO_CHECKOUT
echo '/boost/mp11/*' >> $FILES_TO_CHECKOUT
echo '/boost/archive/*' >> $FILES_TO_CHECKOUT
echo '/boost/function_types/*' >> $FILES_TO_CHECKOUT
echo '/boost/serialization/*' >> $FILES_TO_CHECKOUT
echo '/boost/fusion/*' >> $FILES_TO_CHECKOUT
echo '/boost/variant/*' >> $FILES_TO_CHECKOUT
echo '/boost/format/*' >> $FILES_TO_CHECKOUT
echo '/boost/locale/*' >> $FILES_TO_CHECKOUT
echo '/boost/random/*' >> $FILES_TO_CHECKOUT
echo '/boost/spirit/*' >> $FILES_TO_CHECKOUT
echo '/boost/uuid/*' >> $FILES_TO_CHECKOUT
echo '/boost/xpressive/*' >> $FILES_TO_CHECKOUT
echo '/boost/asio/*' >> $FILES_TO_CHECKOUT
echo '/boost/circular_buffer/*' >> $FILES_TO_CHECKOUT
echo '/boost/proto/*' >> $FILES_TO_CHECKOUT
echo '/boost/qvm/*' >> $FILES_TO_CHECKOUT
echo '/boost/property_tree/*' >> $FILES_TO_CHECKOUT
echo '/libs/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
14
contrib/sparse-checkout/update-boringssl.sh
Executable file
@ -0,0 +1,14 @@
#!/bin/sh

echo "Using sparse checkout for boringssl"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/fuzz/*' >> $FILES_TO_CHECKOUT
echo '!/crypto/cipher_extra/test/*' >> $FILES_TO_CHECKOUT
echo '!/third_party/wycheproof_testvectors/*' >> $FILES_TO_CHECKOUT
echo '!/third_party/googletest/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
12
contrib/sparse-checkout/update-brotli.sh
Executable file
@ -0,0 +1,12 @@
#!/bin/sh

echo "Using sparse checkout for brotli"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/*/*' >> $FILES_TO_CHECKOUT
echo '/c/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
12
contrib/sparse-checkout/update-croaring.sh
Executable file
@ -0,0 +1,12 @@
#!/bin/sh

echo "Using sparse checkout for croaring"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/benchmarks/*' >> $FILES_TO_CHECKOUT
echo '!/tests/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
22
contrib/sparse-checkout/update-grpc.sh
Executable file
@ -0,0 +1,22 @@
#!/bin/sh

echo "Using sparse checkout for grpc"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/test/*' >> $FILES_TO_CHECKOUT
echo '/test/build/*' >> $FILES_TO_CHECKOUT
echo '!/tools/*' >> $FILES_TO_CHECKOUT
echo '/tools/codegen/*' >> $FILES_TO_CHECKOUT
echo '!/examples/*' >> $FILES_TO_CHECKOUT
echo '!/doc/*' >> $FILES_TO_CHECKOUT
# FIXME why do we need csharp?
#echo '!/src/csharp/*' >> $FILES_TO_CHECKOUT
echo '!/src/python/*' >> $FILES_TO_CHECKOUT
echo '!/src/objective-c/*' >> $FILES_TO_CHECKOUT
echo '!/src/php/*' >> $FILES_TO_CHECKOUT
echo '!/src/ruby/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
12
contrib/sparse-checkout/update-h3.sh
Executable file
@ -0,0 +1,12 @@
#!/bin/sh

echo "Using sparse checkout for h3"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/tests/*' >> $FILES_TO_CHECKOUT
echo '!/website/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
12
contrib/sparse-checkout/update-icu.sh
Executable file
@ -0,0 +1,12 @@
#!/bin/sh

echo "Using sparse checkout for icu"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/*/*' >> $FILES_TO_CHECKOUT
echo '/icu4c/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
16
contrib/sparse-checkout/update-libxml2.sh
Executable file
@ -0,0 +1,16 @@
#!/bin/sh

echo "Using sparse checkout for libxml2"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/result/*' >> $FILES_TO_CHECKOUT
echo '!/test/*' >> $FILES_TO_CHECKOUT
echo '!/doc/*' >> $FILES_TO_CHECKOUT
echo '!/os400/*' >> $FILES_TO_CHECKOUT
echo '!/fuzz/*' >> $FILES_TO_CHECKOUT
echo '!/python/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
27
contrib/sparse-checkout/update-llvm-project.sh
Executable file
@ -0,0 +1,27 @@
#!/bin/sh

echo "Using sparse checkout for llvm-project"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/*/*' >> $FILES_TO_CHECKOUT
echo '/llvm/*' >> $FILES_TO_CHECKOUT
echo '!/llvm/*/*' >> $FILES_TO_CHECKOUT
echo '/llvm/cmake/*' >> $FILES_TO_CHECKOUT
echo '/llvm/projects/*' >> $FILES_TO_CHECKOUT
echo '/llvm/include/*' >> $FILES_TO_CHECKOUT
echo '/llvm/lib/*' >> $FILES_TO_CHECKOUT
echo '/llvm/utils/TableGen/*' >> $FILES_TO_CHECKOUT
echo '/libcxxabi/*' >> $FILES_TO_CHECKOUT
echo '!/libcxxabi/test/*' >> $FILES_TO_CHECKOUT
echo '/libcxx/*' >> $FILES_TO_CHECKOUT
echo '!/libcxx/test/*' >> $FILES_TO_CHECKOUT
echo '/libunwind/*' >> $FILES_TO_CHECKOUT
echo '!/libunwind/test/*' >> $FILES_TO_CHECKOUT
echo '/compiler-rt/*' >> $FILES_TO_CHECKOUT
echo '!/compiler-rt/test/*' >> $FILES_TO_CHECKOUT
echo '/cmake/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
15
contrib/sparse-checkout/update-openssl.sh
Executable file
@ -0,0 +1,15 @@
#!/bin/sh

echo "Using sparse checkout for openssl"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/fuzz/*' >> $FILES_TO_CHECKOUT
echo '!/test/*' >> $FILES_TO_CHECKOUT
echo '!/doc/*' >> $FILES_TO_CHECKOUT
echo '!/providers/*' >> $FILES_TO_CHECKOUT
echo '!/apps/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
13
contrib/sparse-checkout/update-orc.sh
Executable file
@ -0,0 +1,13 @@
#!/bin/sh

echo "Using sparse checkout for orc"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '/*' > $FILES_TO_CHECKOUT
echo '!/*/*' >> $FILES_TO_CHECKOUT
echo '/c++/*' >> $FILES_TO_CHECKOUT
echo '/proto/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
13
contrib/sparse-checkout/update-protobuf.sh
Executable file
@ -0,0 +1,13 @@
#!/bin/sh

echo "Using sparse checkout for protobuf"

FILES_TO_CHECKOUT=$(git rev-parse --git-dir)/info/sparse-checkout
echo '!/*' > $FILES_TO_CHECKOUT
echo '/*/*' >> $FILES_TO_CHECKOUT
echo '/src/*' >> $FILES_TO_CHECKOUT
echo '/cmake/*' >> $FILES_TO_CHECKOUT

git config core.sparsecheckout true
git checkout $1
git read-tree -mu HEAD
2
contrib/sqlite-amalgamation
vendored
@ -1 +1 @@
-Subproject commit 400ad7152a0c7ee07756d96ab4f6a8f6d1080916
+Subproject commit 20598079891d27ef1a3ad3f66bbfa3f983c25268
11
contrib/update-submodules.sh
vendored
Executable file
@ -0,0 +1,11 @@
#!/bin/sh

set -e

WORKDIR=$(dirname "$0")
WORKDIR=$(readlink -f "${WORKDIR}")

"$WORKDIR/sparse-checkout/setup-sparse-checkout.sh"
git submodule init
git submodule sync
git submodule update --depth=1
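Taken together with setup-sparse-checkout.sh, this gives shallow (--depth=1), sparsely checked-out submodules: init and sync register the URLs, and update invokes each submodule's custom update command configured above.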
2
contrib/vectorscan
vendored
@ -1 +1 @@
-Subproject commit f6250ae3e5a3085000239313ad0689cc1e00cdc2
+Subproject commit b4bba94b1a250603b0b198e0394946e32f6c3f30
@ -1,3 +1,6 @@
+# The Dockerfile.ubuntu exists for the tests/ci/docker_server.py script
+# If the image is built from Dockerfile.alpine, then the `-alpine` suffix is added automatically,
+# so the only purpose of Dockerfile.ubuntu is to push `latest`, `head` and so on w/o suffixes
 FROM ubuntu:20.04 AS glibc-donor

 ARG TARGETARCH
@ -29,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
     esac

 ARG REPOSITORY="https://s3.amazonaws.com/clickhouse-builds/22.4/31c367d3cd3aefd316778601ff6565119fe36682/package_release"
-ARG VERSION="23.2.4.12"
+ARG VERSION="23.3.1.2823"
 ARG PACKAGES="clickhouse-keeper"

 # user/group precreated explicitly with fixed uid/gid on purpose.
1
docker/keeper/Dockerfile.ubuntu
Symbolic link
@ -0,0 +1 @@
Dockerfile
@ -69,13 +69,14 @@ RUN add-apt-repository ppa:ubuntu-toolchain-r/test --yes \
     libc6 \
     libc6-dev \
     libc6-dev-arm64-cross \
+    python3-boto3 \
     yasm \
     zstd \
     && apt-get clean \
     && rm -rf /var/lib/apt/lists

 # Download toolchain and SDK for Darwin
-RUN wget -nv https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.0.sdk.tar.xz
+RUN curl -sL -O https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.0.sdk.tar.xz

 # Architecture of the image when BuildKit/buildx is used
 ARG TARGETARCH
@ -97,7 +98,7 @@ ENV PATH="$PATH:/usr/local/go/bin"
 ENV GOPATH=/workdir/go
 ENV GOCACHE=/workdir/

-ARG CLANG_TIDY_SHA1=03644275e794b0587849bfc2ec6123d5ae0bdb1c
+ARG CLANG_TIDY_SHA1=c191254ea00d47ade11d7170ef82fe038c213774
 RUN curl -Lo /usr/bin/clang-tidy-cache \
     "https://raw.githubusercontent.com/matus-chochlik/ctcache/$CLANG_TIDY_SHA1/clang-tidy-cache" \
     && chmod +x /usr/bin/clang-tidy-cache
@ -6,6 +6,7 @@ exec &> >(ts)
 ccache_status () {
     ccache --show-config ||:
     ccache --show-stats ||:
+    SCCACHE_NO_DAEMON=1 sccache --show-stats ||:
 }

 [ -O /build ] || git config --global --add safe.directory /build
@ -5,13 +5,19 @@ import os
import argparse
import logging
import sys
from typing import List
from pathlib import Path
from typing import List, Optional

SCRIPT_PATH = os.path.realpath(__file__)
SCRIPT_PATH = Path(__file__).absolute()
IMAGE_TYPE = "binary"
IMAGE_NAME = f"clickhouse/{IMAGE_TYPE}-builder"


def check_image_exists_locally(image_name):
class BuildException(Exception):
    pass


def check_image_exists_locally(image_name: str) -> bool:
    try:
        output = subprocess.check_output(
            f"docker images -q {image_name} 2> /dev/null", shell=True
@ -21,17 +27,17 @@ def check_image_exists_locally(image_name):
        return False


def pull_image(image_name):
def pull_image(image_name: str) -> bool:
    try:
        subprocess.check_call(f"docker pull {image_name}", shell=True)
        return True
    except subprocess.CalledProcessError:
        logging.info(f"Cannot pull image {image_name}".format())
        logging.info("Cannot pull image %s", image_name)
        return False


def build_image(image_name, filepath):
    context = os.path.dirname(filepath)
def build_image(image_name: str, filepath: Path) -> None:
    context = filepath.parent
    build_cmd = f"docker build --network=host -t {image_name} -f {filepath} {context}"
    logging.info("Will build image with cmd: '%s'", build_cmd)
    subprocess.check_call(
@ -40,7 +46,7 @@ def build_image(image_name, filepath):
    )


def pre_build(repo_path: str, env_variables: List[str]):
def pre_build(repo_path: Path, env_variables: List[str]):
    if "WITH_PERFORMANCE=1" in env_variables:
        current_branch = subprocess.check_output(
            "git branch --show-current", shell=True, encoding="utf-8"
@ -56,7 +62,9 @@ def pre_build(repo_path: str, env_variables: List[str]):
            # conclusion is: in the current state the easiest way to go is to force
            # unshallow repository for performance artifacts.
            # To change it we need to rework our performance tests docker image
            raise Exception("shallow repository is not suitable for performance builds")
            raise BuildException(
                "shallow repository is not suitable for performance builds"
            )
        if current_branch != "master":
            cmd = (
                f"git -C {repo_path} fetch --no-recurse-submodules "
@ -67,14 +75,14 @@


def run_docker_image_with_env(
    image_name,
    as_root,
    output,
    env_variables,
    ch_root,
    ccache_dir,
    docker_image_version,
    image_name: str,
    as_root: bool,
    output_dir: Path,
    env_variables: List[str],
    ch_root: Path,
    ccache_dir: Optional[Path],
):
    output_dir.mkdir(parents=True, exist_ok=True)
    env_part = " -e ".join(env_variables)
    if env_part:
        env_part = " -e " + env_part
@ -89,10 +97,14 @@
    else:
        user = f"{os.geteuid()}:{os.getegid()}"

    ccache_mount = f"--volume={ccache_dir}:/ccache"
    if ccache_dir is None:
        ccache_mount = ""

    cmd = (
        f"docker run --network=host --user={user} --rm --volume={output}:/output "
        f"--volume={ch_root}:/build --volume={ccache_dir}:/ccache {env_part} "
        f"{interactive} {image_name}:{docker_image_version}"
        f"docker run --network=host --user={user} --rm {ccache_mount}"
        f"--volume={output_dir}:/output --volume={ch_root}:/build {env_part} "
        f"{interactive} {image_name}"
    )

    logging.info("Will build ClickHouse pkg with cmd: '%s'", cmd)
@ -100,24 +112,25 @@
    subprocess.check_call(cmd, shell=True)


def is_release_build(build_type, package_type, sanitizer):
def is_release_build(build_type: str, package_type: str, sanitizer: str) -> bool:
    return build_type == "" and package_type == "deb" and sanitizer == ""


def parse_env_variables(
    build_type,
    compiler,
    sanitizer,
    package_type,
    cache,
    distcc_hosts,
    clang_tidy,
    version,
    author,
    official,
    additional_pkgs,
    with_coverage,
    with_binaries,
    build_type: str,
    compiler: str,
    sanitizer: str,
    package_type: str,
    cache: str,
    s3_bucket: str,
    s3_directory: str,
    s3_rw_access: bool,
    clang_tidy: bool,
    version: str,
    official: bool,
    additional_pkgs: bool,
    with_coverage: bool,
    with_binaries: str,
):
    DARWIN_SUFFIX = "-darwin"
    DARWIN_ARM_SUFFIX = "-darwin-aarch64"
@ -243,32 +256,43 @@
    else:
        result.append("BUILD_TYPE=None")

    if cache == "distcc":
        result.append(f"CCACHE_PREFIX={cache}")
    if not cache:
        cmake_flags.append("-DCOMPILER_CACHE=disabled")

    if cache:
        if cache == "ccache":
            cmake_flags.append("-DCOMPILER_CACHE=ccache")
        result.append("CCACHE_DIR=/ccache")
        result.append("CCACHE_COMPRESSLEVEL=5")
        result.append("CCACHE_BASEDIR=/build")
        result.append("CCACHE_NOHASHDIR=true")
        result.append("CCACHE_COMPILERCHECK=content")
        cache_maxsize = "15G"
        if clang_tidy:
            # 15G is not enough for tidy build
            cache_maxsize = "25G"
        result.append("CCACHE_MAXSIZE=15G")

        # `CTCACHE_DIR` has the same purpose as the `CCACHE_DIR` above.
        # It's there to have the clang-tidy cache embedded into our standard `CCACHE_DIR`
        if cache == "sccache":
            cmake_flags.append("-DCOMPILER_CACHE=sccache")
            # see https://github.com/mozilla/sccache/blob/main/docs/S3.md
            result.append(f"SCCACHE_BUCKET={s3_bucket}")
            sccache_dir = "sccache"
            if s3_directory:
                sccache_dir = f"{s3_directory}/{sccache_dir}"
            result.append(f"SCCACHE_S3_KEY_PREFIX={sccache_dir}")
            if not s3_rw_access:
                result.append("SCCACHE_S3_NO_CREDENTIALS=true")

    if clang_tidy:
        # `CTCACHE_DIR` has the same purpose as the `CCACHE_DIR` above.
        # It's there to have the clang-tidy cache embedded into our standard `CCACHE_DIR`
        if cache == "ccache":
            result.append("CTCACHE_DIR=/ccache/clang-tidy-cache")
        result.append(f"CCACHE_MAXSIZE={cache_maxsize}")

    if distcc_hosts:
        hosts_with_params = [f"{host}/24,lzo" for host in distcc_hosts] + [
            "localhost/`nproc`"
        ]
        result.append('DISTCC_HOSTS="' + " ".join(hosts_with_params) + '"')
    elif cache == "distcc":
        result.append('DISTCC_HOSTS="localhost/`nproc`"')
    if s3_bucket:
        # see https://github.com/matus-chochlik/ctcache#environment-variables
        ctcache_dir = "clang-tidy-cache"
        if s3_directory:
            ctcache_dir = f"{s3_directory}/{ctcache_dir}"
        result.append(f"CTCACHE_S3_BUCKET={s3_bucket}")
        result.append(f"CTCACHE_S3_FOLDER={ctcache_dir}")
        if not s3_rw_access:
            result.append("CTCACHE_S3_NO_CREDENTIALS=true")

    if additional_pkgs:
        # NOTE: These are the env for packages/build script
@ -300,9 +324,6 @@
    if version:
        result.append(f"VERSION_STRING='{version}'")

    if author:
        result.append(f"AUTHOR='{author}'")

    if official:
        cmake_flags.append("-DCLICKHOUSE_OFFICIAL_BUILD=1")

@ -312,14 +333,14 @@
    return result


def dir_name(name: str) -> str:
    if not os.path.isabs(name):
        name = os.path.abspath(os.path.join(os.getcwd(), name))
    return name
def dir_name(name: str) -> Path:
    path = Path(name)
    if not path.is_absolute():
        path = Path.cwd() / name
    return path


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
def parse_args() -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter,
        description="ClickHouse building script using prebuilt Docker image",
@ -331,7 +352,7 @@
    )
    parser.add_argument(
        "--clickhouse-repo-path",
        default=os.path.join(os.path.dirname(SCRIPT_PATH), os.pardir, os.pardir),
        default=SCRIPT_PATH.parents[2],
        type=dir_name,
        help="ClickHouse git repository",
    )
@ -361,17 +382,34 @@
    )

    parser.add_argument("--clang-tidy", action="store_true")
    parser.add_argument("--cache", choices=("ccache", "distcc", ""), default="")
    parser.add_argument(
        "--ccache_dir",
        default=os.getenv("HOME", "") + "/.ccache",
        "--cache",
        choices=("ccache", "sccache", ""),
        default="",
        help="ccache or sccache for objects caching; sccache uses only S3 buckets",
    )
    parser.add_argument(
        "--ccache-dir",
        default=Path.home() / ".ccache",
        type=dir_name,
        help="a directory with ccache",
    )
    parser.add_argument("--distcc-hosts", nargs="+")
    parser.add_argument(
        "--s3-bucket",
        help="an S3 bucket used for sccache and clang-tidy-cache",
    )
    parser.add_argument(
        "--s3-directory",
        default="ccache",
        help="an S3 directory prefix used for sccache and clang-tidy-cache",
    )
    parser.add_argument(
        "--s3-rw-access",
        action="store_true",
        help="if set, the build fails on errors writing cache to S3",
    )
    parser.add_argument("--force-build-image", action="store_true")
    parser.add_argument("--version")
    parser.add_argument("--author", default="clickhouse", help="a package author")
    parser.add_argument("--official", action="store_true")
    parser.add_argument("--additional-pkgs", action="store_true")
    parser.add_argument("--with-coverage", action="store_true")
@ -387,34 +425,54 @@

    args = parser.parse_args()

    image_name = f"clickhouse/{IMAGE_TYPE}-builder"
    if args.additional_pkgs and args.package_type != "deb":
        raise argparse.ArgumentTypeError(
            "Can build additional packages only in deb build"
        )

    if args.cache != "ccache":
        args.ccache_dir = None

    if args.with_binaries != "":
        if args.package_type != "deb":
            raise argparse.ArgumentTypeError(
                "Can add additional binaries only in deb build"
            )
        logging.info("Should place %s to output", args.with_binaries)

    if args.cache == "sccache":
        if not args.s3_bucket:
            raise argparse.ArgumentTypeError("sccache must have --s3-bucket set")

    return args


def main():
    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
    args = parse_args()

    ch_root = args.clickhouse_repo_path

    if args.additional_pkgs and args.package_type != "deb":
        raise Exception("Can build additional packages only in deb build")
    dockerfile = ch_root / "docker/packager" / IMAGE_TYPE / "Dockerfile"
    image_with_version = IMAGE_NAME + ":" + args.docker_image_version
    if args.force_build_image:
        build_image(image_with_version, dockerfile)
    elif not (
        check_image_exists_locally(image_with_version) or pull_image(image_with_version)
    ):
        build_image(image_with_version, dockerfile)

    if args.with_binaries != "" and args.package_type != "deb":
        raise Exception("Can add additional binaries only in deb build")

    if args.with_binaries != "" and args.package_type == "deb":
        logging.info("Should place %s to output", args.with_binaries)

    dockerfile = os.path.join(ch_root, "docker/packager", IMAGE_TYPE, "Dockerfile")
    image_with_version = image_name + ":" + args.docker_image_version
    if not check_image_exists_locally(image_name) or args.force_build_image:
        if not pull_image(image_with_version) or args.force_build_image:
            build_image(image_with_version, dockerfile)
    env_prepared = parse_env_variables(
        args.build_type,
        args.compiler,
        args.sanitizer,
        args.package_type,
        args.cache,
        args.distcc_hosts,
        args.s3_bucket,
        args.s3_directory,
        args.s3_rw_access,
        args.clang_tidy,
        args.version,
        args.author,
        args.official,
        args.additional_pkgs,
        args.with_coverage,
@ -423,12 +481,15 @@

    pre_build(args.clickhouse_repo_path, env_prepared)
    run_docker_image_with_env(
        image_name,
        image_with_version,
        args.as_root,
        args.output_dir,
        env_prepared,
        ch_root,
        args.ccache_dir,
        args.docker_image_version,
    )
    logging.info("Output placed into %s", args.output_dir)


if __name__ == "__main__":
    main()
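With the refactored argument parser above, a build invocation might look like python3 docker/packager/packager.py --package-type deb --cache sccache --s3-bucket some-bucket (the bucket name is illustrative); per the new validation, --s3-bucket is required whenever --cache sccache is used.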
@ -33,7 +33,7 @@ RUN arch=${TARGETARCH:-amd64} \
 # lts / testing / prestable / etc
 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
-ARG VERSION="23.2.4.12"
+ARG VERSION="23.3.1.2823"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 # user/group precreated explicitly with fixed uid/gid on purpose.
@ -22,7 +22,7 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list

 ARG REPO_CHANNEL="stable"
 ARG REPOSITORY="deb https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
-ARG VERSION="23.2.4.12"
+ARG VERSION="23.3.1.2823"
 ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"

 # set non-empty deb_location_url url to create a docker image
@ -20,12 +20,6 @@ RUN apt-get update \
     zstd \
     --yes --no-install-recommends

-# Install CMake 3.20+ for Rust compilation
-RUN apt purge cmake --yes
-RUN wget -O - https://apt.kitware.com/keys/kitware-archive-latest.asc 2>/dev/null | gpg --dearmor - | tee /etc/apt/trusted.gpg.d/kitware.gpg >/dev/null
-RUN apt-add-repository 'deb https://apt.kitware.com/ubuntu/ focal main'
-RUN apt update && apt install cmake --yes
-
 RUN pip3 install numpy scipy pandas Jinja2

 ARG odbc_driver_url="https://github.com/ClickHouse/clickhouse-odbc/releases/download/v1.1.4.20200302/clickhouse-odbc-1.1.4-Linux.tar.gz"
@ -16,7 +16,8 @@ export LLVM_VERSION=${LLVM_VERSION:-13}
 # it being undefined. Also read it as array so that we can pass an empty list
 # of additional variable to cmake properly, and it doesn't generate an extra
 # empty parameter.
-read -ra FASTTEST_CMAKE_FLAGS <<< "${FASTTEST_CMAKE_FLAGS:-}"
+# Read it as CMAKE_FLAGS to not lose exported FASTTEST_CMAKE_FLAGS on subsequent launches
+read -ra CMAKE_FLAGS <<< "${FASTTEST_CMAKE_FLAGS:-}"

 # Run only matching tests.
 FASTTEST_FOCUS=${FASTTEST_FOCUS:-""}
@ -37,6 +38,13 @@ export FASTTEST_DATA
export FASTTEST_OUT
export PATH

function ccache_status
{
    ccache --show-config ||:
    ccache --show-stats ||:
    SCCACHE_NO_DAEMON=1 sccache --show-stats ||:
}

function start_server
{
    set -m # Spawn server in its own process groups
@ -171,14 +179,14 @@ function run_cmake
    export CCACHE_COMPILERCHECK=content
    export CCACHE_MAXSIZE=15G

    ccache --show-stats ||:
    ccache_status
    ccache --zero-stats ||:

    mkdir "$FASTTEST_BUILD" ||:

    (
        cd "$FASTTEST_BUILD"
        cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER="clang++-${LLVM_VERSION}" -DCMAKE_C_COMPILER="clang-${LLVM_VERSION}" "${CMAKE_LIBS_CONFIG[@]}" "${FASTTEST_CMAKE_FLAGS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt"
        cmake "$FASTTEST_SOURCE" -DCMAKE_CXX_COMPILER="clang++-${LLVM_VERSION}" -DCMAKE_C_COMPILER="clang-${LLVM_VERSION}" "${CMAKE_LIBS_CONFIG[@]}" "${CMAKE_FLAGS[@]}" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee "$FASTTEST_OUTPUT/cmake_log.txt"
    )
}

@ -193,7 +201,7 @@ function build
        strip programs/clickhouse -o "$FASTTEST_OUTPUT/clickhouse-stripped"
        zstd --threads=0 "$FASTTEST_OUTPUT/clickhouse-stripped"
    fi
    ccache --show-stats ||:
    ccache_status
    ccache --evict-older-than 1d ||:
    )
}
@ -3,7 +3,9 @@ set -ex
set -o pipefail
trap "exit" INT TERM
trap 'kill $(jobs -pr) ||:' EXIT
S3_URL=${S3_URL:="https://clickhouse-builds.s3.amazonaws.com"}
BUILD_NAME=${BUILD_NAME:-package_release}
export S3_URL BUILD_NAME

mkdir db0 ||:
mkdir left ||:
@ -28,8 +30,9 @@ function download
    # Historically there were various paths for the performance test package.
    # Test all of them.
    declare -a urls_to_try=(
        "https://s3.amazonaws.com/clickhouse-builds/$left_pr/$left_sha/$BUILD_NAME/performance.tar.zst"
        "https://s3.amazonaws.com/clickhouse-builds/$left_pr/$left_sha/$BUILD_NAME/performance.tgz"
        "$S3_URL/PRs/$left_pr/$left_sha/$BUILD_NAME/performance.tar.zst"
        "$S3_URL/$left_pr/$left_sha/$BUILD_NAME/performance.tar.zst"
        "$S3_URL/$left_pr/$left_sha/$BUILD_NAME/performance.tgz"
    )

    for path in "${urls_to_try[@]}"
@ -6,11 +6,7 @@ export CHPC_CHECK_START_TIMESTAMP

S3_URL=${S3_URL:="https://clickhouse-builds.s3.amazonaws.com"}
BUILD_NAME=${BUILD_NAME:-package_release}

COMMON_BUILD_PREFIX="/clickhouse_build_check"
if [[ $S3_URL == *"s3.amazonaws.com"* ]]; then
    COMMON_BUILD_PREFIX=""
fi
export S3_URL BUILD_NAME

# Sometimes AWS responds with a DNS error and it's impossible to retry it with
# current curl version options.
@ -66,8 +62,9 @@ function find_reference_sha
    # test all of them.
    unset found
    declare -a urls_to_try=(
        "https://s3.amazonaws.com/clickhouse-builds/0/$REF_SHA/$BUILD_NAME/performance.tar.zst"
        "https://s3.amazonaws.com/clickhouse-builds/0/$REF_SHA/$BUILD_NAME/performance.tgz"
        "$S3_URL/PRs/0/$REF_SHA/$BUILD_NAME/performance.tar.zst"
        "$S3_URL/0/$REF_SHA/$BUILD_NAME/performance.tar.zst"
        "$S3_URL/0/$REF_SHA/$BUILD_NAME/performance.tgz"
    )
    for path in "${urls_to_try[@]}"
    do
@ -92,10 +89,15 @@ chmod 777 workspace output
cd workspace

# Download the package for the version we are going to test.
if curl_with_retry "$S3_URL/$PR_TO_TEST/$SHA_TO_TEST$COMMON_BUILD_PREFIX/$BUILD_NAME/performance.tar.zst"
then
    right_path="$S3_URL/$PR_TO_TEST/$SHA_TO_TEST$COMMON_BUILD_PREFIX/$BUILD_NAME/performance.tar.zst"
fi
# A temporary solution for migrating into PRs directory
for prefix in "$S3_URL/PRs" "$S3_URL";
do
    if curl_with_retry "$prefix/$PR_TO_TEST/$SHA_TO_TEST/$BUILD_NAME/performance.tar.zst"
    then
        right_path="$prefix/$PR_TO_TEST/$SHA_TO_TEST/$BUILD_NAME/performance.tar.zst"
        break
    fi
done

mkdir right
wget -nv -nd -c "$right_path" -O- | tar -C right --no-same-owner --strip-components=1 --zstd --extract --verbose
@ -26,6 +26,7 @@ logging.basicConfig(
total_start_seconds = time.perf_counter()
stage_start_seconds = total_start_seconds


# Thread executor that does not hide exceptions that happen during function
# execution, and rethrows them after join()
class SafeThread(Thread):
@ -158,6 +159,7 @@ for e in subst_elems:

    available_parameters[name] = values


# Takes parallel lists of templates, substitutes them with all combos of
# parameters. The set of parameters is determined based on the first list.
# Note: keep the order of queries -- sometimes we have DROP IF EXISTS
@ -670,7 +670,6 @@ if args.report == "main":
    )

elif args.report == "all-queries":

    print((header_template.format()))

    add_tested_commits()
@ -128,7 +128,7 @@ function run_tests()
     set +e

     if [[ -n "$USE_PARALLEL_REPLICAS" ]] && [[ "$USE_PARALLEL_REPLICAS" -eq 1 ]]; then
-        clickhouse-test --client="clickhouse-client --use_hedged_requests=0 --allow_experimental_parallel_reading_from_replicas=1 \
+        clickhouse-test --client="clickhouse-client --use_hedged_requests=0 --allow_experimental_parallel_reading_from_replicas=1 --parallel_replicas_for_non_replicated_merge_tree=1 \
             --max_parallel_replicas=100 --cluster_for_parallel_replicas='parallel_replicas'" \
             -j 2 --testname --shard --zookeeper --check-zookeeper-session --no-stateless --no-parallel-replicas --hung-check --print-time "${ADDITIONAL_OPTIONS[@]}" \
             "$SKIP_TESTS_OPTION" 2>&1 | ts '%Y-%m-%d %H:%M:%S' | tee test_output/test_result.txt
@ -10,31 +10,38 @@ import requests
import tempfile


DEFAULT_URL = 'https://clickhouse-datasets.s3.amazonaws.com'
DEFAULT_URL = "https://clickhouse-datasets.s3.amazonaws.com"

AVAILABLE_DATASETS = {
    'hits': 'hits_v1.tar',
    'visits': 'visits_v1.tar',
    "hits": "hits_v1.tar",
    "visits": "visits_v1.tar",
}

RETRIES_COUNT = 5


def _get_temp_file_name():
    return os.path.join(tempfile._get_default_tempdir(), next(tempfile._get_candidate_names()))
    return os.path.join(
        tempfile._get_default_tempdir(), next(tempfile._get_candidate_names())
    )


def build_url(base_url, dataset):
    return os.path.join(base_url, dataset, 'partitions', AVAILABLE_DATASETS[dataset])
    return os.path.join(base_url, dataset, "partitions", AVAILABLE_DATASETS[dataset])


def dowload_with_progress(url, path):
    logging.info("Downloading from %s to temp path %s", url, path)
    for i in range(RETRIES_COUNT):
        try:
            with open(path, 'wb') as f:
            with open(path, "wb") as f:
                response = requests.get(url, stream=True)
                response.raise_for_status()
                total_length = response.headers.get('content-length')
                total_length = response.headers.get("content-length")
                if total_length is None or int(total_length) == 0:
                    logging.info("No content-length, will download file without progress")
                    logging.info(
                        "No content-length, will download file without progress"
                    )
                    f.write(response.content)
                else:
                    dl = 0
@ -46,7 +53,11 @@ def dowload_with_progress(url, path):
                        if sys.stdout.isatty():
                            done = int(50 * dl / total_length)
                            percent = int(100 * float(dl) / total_length)
                            sys.stdout.write("\r[{}{}] {}%".format('=' * done, ' ' * (50-done), percent))
                            sys.stdout.write(
                                "\r[{}{}] {}%".format(
                                    "=" * done, " " * (50 - done), percent
                                )
                            )
                            sys.stdout.flush()
                break
            except Exception as ex:
@@ -56,14 +67,21 @@ def dowload_with_progress(url, path):
        if os.path.exists(path):
            os.remove(path)
    else:
-        raise Exception("Cannot download dataset from {}, all retries exceeded".format(url))
+        raise Exception(
+            "Cannot download dataset from {}, all retries exceeded".format(url)
+        )

    sys.stdout.write("\n")
    logging.info("Downloading finished")


def unpack_to_clickhouse_directory(tar_path, clickhouse_path):
-    logging.info("Will unpack data from temp path %s to clickhouse db %s", tar_path, clickhouse_path)
-    with tarfile.open(tar_path, 'r') as comp_file:
+    logging.info(
+        "Will unpack data from temp path %s to clickhouse db %s",
+        tar_path,
+        clickhouse_path,
+    )
+    with tarfile.open(tar_path, "r") as comp_file:
        comp_file.extractall(path=clickhouse_path)
    logging.info("Unpack finished")

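Condensed, the function above is the usual retry-plus-streaming pattern: fetch with `stream=True`, write chunk by chunk, and draw a progress bar only when a content length is known and stdout is a TTY. A standalone sketch of that pattern (assuming the `requests` package; the chunk size is an arbitrary illustrative value):

```python
import sys

import requests


def download(url, path, chunk_size=1 << 20):
    with requests.get(url, stream=True) as response:
        response.raise_for_status()
        total = int(response.headers.get("content-length") or 0)
        done = 0
        with open(path, "wb") as f:
            for chunk in response.iter_content(chunk_size=chunk_size):
                f.write(chunk)
                done += len(chunk)
                if total and sys.stdout.isatty():
                    bar = int(50 * done / total)
                    percent = int(100 * done / total)
                    sys.stdout.write(
                        "\r[{}{}] {}%".format("=" * bar, " " * (50 - bar), percent)
                    )
                    sys.stdout.flush()
    sys.stdout.write("\n")
```

Streaming via `iter_content` keeps memory use bounded regardless of archive size.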
@@ -72,15 +90,21 @@ if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)

    parser = argparse.ArgumentParser(
-        description="Simple tool for dowloading datasets for clickhouse from S3")
+        description="Simple tool for dowloading datasets for clickhouse from S3"
+    )

-    parser.add_argument('--dataset-names', required=True, nargs='+', choices=list(AVAILABLE_DATASETS.keys()))
-    parser.add_argument('--url-prefix', default=DEFAULT_URL)
-    parser.add_argument('--clickhouse-data-path', default='/var/lib/clickhouse/')
+    parser.add_argument(
+        "--dataset-names",
+        required=True,
+        nargs="+",
+        choices=list(AVAILABLE_DATASETS.keys()),
+    )
+    parser.add_argument("--url-prefix", default=DEFAULT_URL)
+    parser.add_argument("--clickhouse-data-path", default="/var/lib/clickhouse/")

    args = parser.parse_args()
    datasets = args.dataset_names
-    logging.info("Will fetch following datasets: %s", ', '.join(datasets))
+    logging.info("Will fetch following datasets: %s", ", ".join(datasets))
    for dataset in datasets:
        logging.info("Processing %s", dataset)
        temp_archive_path = _get_temp_file_name()
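For illustration, `choices=list(AVAILABLE_DATASETS.keys())` means argparse itself rejects unknown dataset names before any download starts. A self-contained sketch of that behavior (the argv values are hypothetical):

```python
import argparse

AVAILABLE_DATASETS = {"hits": "hits_v1.tar", "visits": "visits_v1.tar"}

parser = argparse.ArgumentParser()
parser.add_argument(
    "--dataset-names",
    required=True,
    nargs="+",
    choices=list(AVAILABLE_DATASETS.keys()),
)

args = parser.parse_args(["--dataset-names", "hits", "visits"])
print(args.dataset_names)  # ['hits', 'visits']
# parser.parse_args(["--dataset-names", "logs"]) would exit with
# "invalid choice: 'logs'".
```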
@@ -92,10 +116,11 @@ if __name__ == "__main__":
            logging.info("Some exception occured %s", str(ex))
            raise
        finally:
-            logging.info("Will remove downloaded file %s from filesystem if it exists", temp_archive_path)
+            logging.info(
+                "Will remove downloaded file %s from filesystem if it exists",
+                temp_archive_path,
+            )
            if os.path.exists(temp_archive_path):
                os.remove(temp_archive_path)
        logging.info("Processing of %s finished", dataset)
    logging.info("Fetch finished, enjoy your tables!")

@@ -170,6 +170,7 @@ if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]
fi

rg -Fa "<Fatal>" /var/log/clickhouse-server/clickhouse-server.log ||:
+rg -A50 -Fa "============" /var/log/clickhouse-server/stderr.log ||:
zstd --threads=0 < /var/log/clickhouse-server/clickhouse-server.log > /test_output/clickhouse-server.log.zst &

# Compress tables.
@@ -41,9 +41,14 @@ if [ "$is_tsan_build" -eq "0" ]; then
    export THREAD_FUZZER_pthread_mutex_lock_AFTER_SLEEP_TIME_US=10000
    export THREAD_FUZZER_pthread_mutex_unlock_BEFORE_SLEEP_TIME_US=10000
    export THREAD_FUZZER_pthread_mutex_unlock_AFTER_SLEEP_TIME_US=10000
+
+    export THREAD_FUZZER_EXPLICIT_SLEEP_PROBABILITY=0.01
+    export THREAD_FUZZER_EXPLICIT_MEMORY_EXCEPTION_PROBABILITY=0.01
fi

export ZOOKEEPER_FAULT_INJECTION=1
+# Initial run without S3 to create system.*_log on local file system to make it
+# available for dump via clickhouse-local
configure

azurite-blob --blobHost 0.0.0.0 --blobPort 10000 --debug /azurite_log &
@@ -11,13 +11,14 @@ RUN apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
    aspell \
    curl \
    git \
+    file \
    libxml2-utils \
    moreutils \
    python3-fuzzywuzzy \
    python3-pip \
    shellcheck \
    yamllint \
-    && pip3 install black==22.8.0 boto3 codespell==2.2.1 dohq-artifactory mypy PyGithub unidiff pylint==2.6.2 \
+    && pip3 install black==23.1.0 boto3 codespell==2.2.1 dohq-artifactory mypy PyGithub unidiff pylint==2.6.2 \
    && apt-get clean \
    && rm -rf /root/.cache/pip

@@ -49,17 +49,19 @@ echo -e "Successfully cloned previous release tests$OK" >> /test_output/test_res
echo -e "Successfully downloaded previous release packages$OK" >> /test_output/test_results.tsv

# Make upgrade check more funny by forcing Ordinary engine for system database
-mkdir /var/lib/clickhouse/metadata
+mkdir -p /var/lib/clickhouse/metadata
echo "ATTACH DATABASE system ENGINE=Ordinary" > /var/lib/clickhouse/metadata/system.sql

# Install previous release packages
install_packages previous_release_package_folder

-# Start server from previous release
-# Let's enable S3 storage by default
-export USE_S3_STORAGE_FOR_MERGE_TREE=1
-# Previous version may not be ready for fault injections
-export ZOOKEEPER_FAULT_INJECTION=0
+# Initial run without S3 to create system.*_log on local file system to make it
+# available for dump via clickhouse-local
configure

+start
+stop
+mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/clickhouse-server.initial.log
+
# force_sync=false doesn't work correctly on some older versions
sudo cat /etc/clickhouse-server/config.d/keeper_port.xml \
@@ -67,8 +69,6 @@ sudo cat /etc/clickhouse-server/config.d/keeper_port.xml \
    > /etc/clickhouse-server/config.d/keeper_port.xml.tmp
sudo mv /etc/clickhouse-server/config.d/keeper_port.xml.tmp /etc/clickhouse-server/config.d/keeper_port.xml
-
-configure

# But we still need default disk because some tables loaded only into it
sudo cat /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml \
    | sed "s|<main><disk>s3</disk></main>|<main><disk>s3</disk></main><default><disk>default</disk></default>|" \
@@ -76,6 +76,13 @@ sudo cat /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml \
sudo chown clickhouse /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml
sudo chgrp clickhouse /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml

+# Start server from previous release
+# Let's enable S3 storage by default
+export USE_S3_STORAGE_FOR_MERGE_TREE=1
+# Previous version may not be ready for fault injections
+export ZOOKEEPER_FAULT_INJECTION=0
+configure
+
start

clickhouse-client --query="SELECT 'Server version: ', version()"
@@ -102,8 +109,7 @@ mv /var/log/clickhouse-server/clickhouse-server.log /var/log/clickhouse-server/c

# Install and start new server
install_packages package_folder
-# Disable fault injections on start (we don't test them here, and it can lead to tons of requests in case of huge number of tables).
-export ZOOKEEPER_FAULT_INJECTION=0
+export ZOOKEEPER_FAULT_INJECTION=1
configure
start 500
clickhouse-client --query "SELECT 'Server successfully started', 'OK', NULL, ''" >> /test_output/test_results.tsv \
@@ -185,8 +191,6 @@ tar -chf /test_output/coordination.tar /var/lib/clickhouse/coordination ||:

collect_query_and_trace_logs

-check_oom_in_dmesg
-
mv /var/log/clickhouse-server/stderr.log /test_output/

# Write check result into check_status.tsv
@@ -92,4 +92,17 @@ RUN mkdir /tmp/ccache \
    && cd / \
    && rm -rf /tmp/ccache

+ARG TARGETARCH
+ARG SCCACHE_VERSION=v0.4.1
+RUN arch=${TARGETARCH:-amd64} \
+  && case $arch in \
+    amd64) rarch=x86_64 ;; \
+    arm64) rarch=aarch64 ;; \
+  esac \
+  && curl -Ls "https://github.com/mozilla/sccache/releases/download/$SCCACHE_VERSION/sccache-$SCCACHE_VERSION-$rarch-unknown-linux-musl.tar.gz" | \
+    tar xz -C /tmp \
+  && mv "/tmp/sccache-$SCCACHE_VERSION-$rarch-unknown-linux-musl/sccache" /usr/bin \
+  && rm "/tmp/sccache-$SCCACHE_VERSION-$rarch-unknown-linux-musl" -r
+
COPY process_functional_tests_result.py /
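The case statement above maps Docker's `TARGETARCH` values onto the Rust target architecture embedded in sccache release tarball names. The same mapping as a Python sketch (illustrative only, not part of this change):

```python
# Map Docker TARGETARCH -> architecture used in sccache release asset names.
RUST_ARCH = {"amd64": "x86_64", "arm64": "aarch64"}


def sccache_url(version, target_arch):
    rarch = RUST_ARCH[target_arch]
    return (
        "https://github.com/mozilla/sccache/releases/download/"
        "{v}/sccache-{v}-{r}-unknown-linux-musl.tar.gz".format(v=version, r=rarch)
    )


print(sccache_url("v0.4.1", "amd64"))
```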
1
docs/.gitignore
vendored
@@ -1 +1,2 @@
build
+clickhouse-docs
@@ -40,6 +40,8 @@ The documentation contains information about all the aspects of the ClickHouse l

At the moment, [documentation](https://clickhouse.com/docs) exists in English, Russian, and Chinese. We store the reference documentation alongside the ClickHouse source code in the [GitHub repository](https://github.com/ClickHouse/ClickHouse/tree/master/docs), and user guides in a separate repository [Clickhouse/clickhouse-docs](https://github.com/ClickHouse/clickhouse-docs).

+To get the latter, launch the `get-clickhouse-docs.sh` script.
+
Each language lives in its corresponding folder. Files that are not translated from English are symbolic links to the English ones.

<a name="how-to-contribute"/>
@@ -108,7 +108,7 @@ sidebar_label: 2022
* Print out git status information at CMake configure stage. [#28047](https://github.com/ClickHouse/ClickHouse/pull/28047) ([Braulio Valdivielso Martínez](https://github.com/BraulioVM)).
* Add new log level `<test>` for testing environments. [#28559](https://github.com/ClickHouse/ClickHouse/pull/28559) ([alesapin](https://github.com/alesapin)).

-#### Bug Fix (user-visible misbehaviour in official stable or prestable release)
+#### Bug Fix (user-visible misbehaviour in official stable release)

* Fix handling null value with type of Nullable(String) in function JSONExtract. This fixes [#27929](https://github.com/ClickHouse/ClickHouse/issues/27929) and [#27930](https://github.com/ClickHouse/ClickHouse/issues/27930). This was introduced in https://github.com/ClickHouse/ClickHouse/pull/25452. [#27939](https://github.com/ClickHouse/ClickHouse/pull/27939) ([Amos Bird](https://github.com/amosbird)).
* Fix extremely rare segfaults on shutdown due to incorrect order of context/config reloader shutdown. [#28088](https://github.com/ClickHouse/ClickHouse/pull/28088) ([nvartolomei](https://github.com/nvartolomei)).
Some files were not shown because too many files have changed in this diff.