Merge branch 'master' into metadata_storage

Commit ac03e3af3e by alesapin, 2022-06-03 14:42:21 +02:00 (committed by GitHub).
847 changed files with 7109 additions and 5619 deletions.


@ -62,6 +62,7 @@ Checks: '*,
-google-build-using-namespace,
-google-readability-braces-around-statements,
-google-readability-casting,
-google-readability-function-size,
-google-readability-namespace-comments,
-google-readability-todo,


@ -215,8 +215,8 @@ jobs:
fetch-depth: 0 # For a proper version and performance artifacts
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -259,8 +259,8 @@ jobs:
fetch-depth: 0 # For a proper version and performance artifacts
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -305,8 +305,8 @@ jobs:
fetch-depth: 0 # otherwise we will have no info about contributors
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -350,8 +350,8 @@ jobs:
# uses: actions/checkout@v2
# - name: Build
# run: |
# git -C "$GITHUB_WORKSPACE" submodule sync
# git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
# git -C "$GITHUB_WORKSPACE" submodule sync --recursive
# git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
# sudo rm -fr "$TEMP_PATH"
# mkdir -p "$TEMP_PATH"
# cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -395,8 +395,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -440,8 +440,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -485,8 +485,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -530,8 +530,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -575,8 +575,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -623,8 +623,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -668,8 +668,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -715,8 +715,8 @@ jobs:
fetch-depth: 0 # otherwise we will have no info about contributors
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -762,8 +762,8 @@ jobs:
fetch-depth: 0 # otherwise we will have no info about contributors
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -809,8 +809,8 @@ jobs:
fetch-depth: 0 # otherwise we will have no info about contributors
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -856,8 +856,8 @@ jobs:
fetch-depth: 0 # otherwise we will have no info about contributors
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -903,8 +903,8 @@ jobs:
fetch-depth: 0 # otherwise we will have no info about contributors
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -2911,7 +2911,7 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=4
@ -2949,7 +2949,7 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=4
@ -2987,7 +2987,7 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=4
@ -3025,7 +3025,7 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=4
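A note on the RUN_BY_HASH_NUM / RUN_BY_HASH_TOTAL pairs above: each performance check is split into four shards, and every runner keeps only its own bucket of tests. A minimal Python sketch of how such hash-based sharding typically works (the hash function and file names here are illustrative, not the actual CI code):

```python
# Hypothetical sketch of hash-based test sharding, not the actual ClickHouse
# CI code: each of the RUN_BY_HASH_TOTAL jobs keeps only the tests whose
# name hashes into its RUN_BY_HASH_NUM bucket.
import zlib

def shard(tests, num, total):
    # zlib.crc32 is a stand-in; any stable hash gives a deterministic split.
    return [t for t in tests if zlib.crc32(t.encode()) % total == num]

tests = ["query_1.xml", "query_2.xml", "query_3.xml", "query_4.xml"]
print(shard(tests, num=0, total=4))
```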


@ -81,7 +81,6 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
BUILD_NAME=coverity
CACHES_PATH=${{runner.temp}}/../ccaches
CHECK_NAME=ClickHouse build check (actions)
IMAGES_PATH=${{runner.temp}}/images_path
REPO_COPY=${{runner.temp}}/build_check/ClickHouse
TEMP_PATH=${{runner.temp}}/build_check
@ -99,13 +98,15 @@ jobs:
id: coverity-checkout
uses: actions/checkout@v2
with:
submodules: 'true'
fetch-depth: 0 # otherwise we will have no info about contributors
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci" && python3 build_check.py "$CHECK_NAME" "$BUILD_NAME"
cd "$REPO_COPY/tests/ci" && python3 build_check.py "$BUILD_NAME"
- name: Upload Coverity Analysis
if: ${{ success() || failure() }}
run: |


@ -277,8 +277,8 @@ jobs:
fetch-depth: 0 # for performance artifact
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -322,8 +322,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -367,8 +367,8 @@ jobs:
# uses: actions/checkout@v2
# - name: Build
# run: |
# git -C "$GITHUB_WORKSPACE" submodule sync
# git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
# git -C "$GITHUB_WORKSPACE" submodule sync --recursive
# git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
# sudo rm -fr "$TEMP_PATH"
# mkdir -p "$TEMP_PATH"
# cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -414,8 +414,8 @@ jobs:
fetch-depth: 0 # for performance artifact
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -459,8 +459,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -504,8 +504,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -549,8 +549,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -594,8 +594,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -639,8 +639,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -687,8 +687,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -732,8 +732,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -777,8 +777,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -822,8 +822,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -867,8 +867,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -912,8 +912,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -957,8 +957,8 @@ jobs:
uses: actions/checkout@v2
- name: Build
run: |
git -C "$GITHUB_WORKSPACE" submodule sync
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --init --jobs=10
git -C "$GITHUB_WORKSPACE" submodule sync --recursive
git -C "$GITHUB_WORKSPACE" submodule update --depth=1 --recursive --init --jobs=10
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
@ -3127,7 +3127,7 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=4
@ -3165,7 +3165,7 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=4
@ -3203,7 +3203,7 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=4
@ -3241,7 +3241,159 @@ jobs:
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison (actions)
CHECK_NAME=Performance Comparison
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v2
with:
path: ${{ env.REPORTS_PATH }}
- name: Clear repository
run: |
sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
- name: Check out repository code
uses: actions/checkout@v2
- name: Performance Comparison
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 performance_comparison_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
# shellcheck disable=SC2046
docker kill $(docker ps -q) ||:
# shellcheck disable=SC2046
docker rm -f $(docker ps -a -q) ||:
sudo rm -fr "$TEMP_PATH"
PerformanceComparisonAarch0:
needs: [BuilderDebAarch64]
runs-on: [self-hosted, func-tester-aarch64]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison Aarch64
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=0
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v2
with:
path: ${{ env.REPORTS_PATH }}
- name: Clear repository
run: |
sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
- name: Check out repository code
uses: actions/checkout@v2
- name: Performance Comparison
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 performance_comparison_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
# shellcheck disable=SC2046
docker kill $(docker ps -q) ||:
# shellcheck disable=SC2046
docker rm -f $(docker ps -a -q) ||:
sudo rm -fr "$TEMP_PATH"
PerformanceComparisonAarch1:
needs: [BuilderDebAarch64]
runs-on: [self-hosted, func-tester-aarch64]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison Aarch64
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=1
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v2
with:
path: ${{ env.REPORTS_PATH }}
- name: Clear repository
run: |
sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
- name: Check out repository code
uses: actions/checkout@v2
- name: Performance Comparison
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 performance_comparison_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
# shellcheck disable=SC2046
docker kill $(docker ps -q) ||:
# shellcheck disable=SC2046
docker rm -f $(docker ps -a -q) ||:
sudo rm -fr "$TEMP_PATH"
PerformanceComparisonAarch2:
needs: [BuilderDebAarch64]
runs-on: [self-hosted, func-tester-aarch64]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison Aarch64
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=2
RUN_BY_HASH_TOTAL=4
EOF
- name: Download json reports
uses: actions/download-artifact@v2
with:
path: ${{ env.REPORTS_PATH }}
- name: Clear repository
run: |
sudo rm -fr "$GITHUB_WORKSPACE" && mkdir "$GITHUB_WORKSPACE"
- name: Check out repository code
uses: actions/checkout@v2
- name: Performance Comparison
run: |
sudo rm -fr "$TEMP_PATH"
mkdir -p "$TEMP_PATH"
cp -r "$GITHUB_WORKSPACE" "$TEMP_PATH"
cd "$REPO_COPY/tests/ci"
python3 performance_comparison_check.py "$CHECK_NAME"
- name: Cleanup
if: always()
run: |
# shellcheck disable=SC2046
docker kill $(docker ps -q) ||:
# shellcheck disable=SC2046
docker rm -f $(docker ps -a -q) ||:
sudo rm -fr "$TEMP_PATH"
PerformanceComparisonAarch3:
needs: [BuilderDebAarch64]
runs-on: [self-hosted, func-tester-aarch64]
steps:
- name: Set envs
run: |
cat >> "$GITHUB_ENV" << 'EOF'
TEMP_PATH=${{runner.temp}}/performance_comparison
REPORTS_PATH=${{runner.temp}}/reports_dir
CHECK_NAME=Performance Comparison Aarch64
REPO_COPY=${{runner.temp}}/performance_comparison/ClickHouse
RUN_BY_HASH_NUM=3
RUN_BY_HASH_TOTAL=4
@ -3333,6 +3485,10 @@ jobs:
- PerformanceComparison1
- PerformanceComparison2
- PerformanceComparison3
- PerformanceComparisonAarch0
- PerformanceComparisonAarch1
- PerformanceComparisonAarch2
- PerformanceComparisonAarch3
- UnitTestsAsan
- UnitTestsTsan
- UnitTestsMsan

.gitmodules (vendored)

@ -79,10 +79,10 @@
url = https://github.com/ClickHouse/snappy.git
[submodule "contrib/cppkafka"]
path = contrib/cppkafka
url = https://github.com/ClickHouse/cppkafka.git
url = https://github.com/mfontanini/cppkafka.git
[submodule "contrib/brotli"]
path = contrib/brotli
url = https://github.com/ClickHouse/brotli.git
url = https://github.com/google/brotli.git
[submodule "contrib/h3"]
path = contrib/h3
url = https://github.com/ClickHouse/h3
@ -144,7 +144,7 @@
ignore = untracked
[submodule "contrib/msgpack-c"]
path = contrib/msgpack-c
url = https://github.com/ClickHouse/msgpack-c
url = https://github.com/msgpack/msgpack-c
[submodule "contrib/libcpuid"]
path = contrib/libcpuid
url = https://github.com/ClickHouse/libcpuid.git


@ -45,14 +45,16 @@ std::string replxx_now_ms_str()
time_t t = ms.count() / 1000;
tm broken;
if (!localtime_r(&t, &broken))
{
return std::string();
}
return {};
static int const BUFF_SIZE(32);
char str[BUFF_SIZE];
strftime(str, BUFF_SIZE, "%Y-%m-%d %H:%M:%S.", &broken);
snprintf(str + sizeof("YYYY-mm-dd HH:MM:SS"), 5, "%03d", static_cast<int>(ms.count() % 1000));
if (strftime(str, BUFF_SIZE, "%Y-%m-%d %H:%M:%S.", &broken) <= 0)
return {};
if (snprintf(str + sizeof("YYYY-mm-dd HH:MM:SS"), 5, "%03d", static_cast<int>(ms.count() % 1000)) <= 0)
return {};
return str;
}
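The rewritten replxx_now_ms_str above now returns an empty string when strftime or snprintf fails, rather than using their output unchecked. For reference, the same millisecond-precision formatting expressed as a small Python sketch (illustrative only, not the library's code):

```python
# Illustrative Python equivalent of replxx_now_ms_str(): a wall-clock
# timestamp with millisecond precision, "YYYY-mm-dd HH:MM:SS.mmm".
from datetime import datetime

def now_ms_str() -> str:
    now = datetime.now()
    return now.strftime("%Y-%m-%d %H:%M:%S.") + f"{now.microsecond // 1000:03d}"

print(now_ms_str())
```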


@ -576,8 +576,8 @@ private:
else if constexpr (Bits == 128 && sizeof(base_type) == 8)
{
using CompilerUInt128 = unsigned __int128;
CompilerUInt128 a = (CompilerUInt128(lhs.items[1]) << 64) + lhs.items[0];
CompilerUInt128 b = (CompilerUInt128(rhs.items[1]) << 64) + rhs.items[0];
CompilerUInt128 a = (CompilerUInt128(lhs.items[1]) << 64) + lhs.items[0]; // NOLINT(clang-analyzer-core.UndefinedBinaryOperatorResult)
CompilerUInt128 b = (CompilerUInt128(rhs.items[1]) << 64) + rhs.items[0]; // NOLINT(clang-analyzer-core.UndefinedBinaryOperatorResult)
CompilerUInt128 c = a * b;
integer<Bits, Signed> res;
res.items[0] = c;
@ -841,8 +841,8 @@ public:
{
using CompilerUInt128 = unsigned __int128;
CompilerUInt128 a = (CompilerUInt128(numerator.items[1]) << 64) + numerator.items[0];
CompilerUInt128 b = (CompilerUInt128(denominator.items[1]) << 64) + denominator.items[0];
CompilerUInt128 a = (CompilerUInt128(numerator.items[1]) << 64) + numerator.items[0]; // NOLINT(clang-analyzer-core.UndefinedBinaryOperatorResult)
CompilerUInt128 b = (CompilerUInt128(denominator.items[1]) << 64) + denominator.items[0]; // NOLINT(clang-analyzer-core.UndefinedBinaryOperatorResult)
CompilerUInt128 c = a / b; // NOLINT
integer<Bits, Signed> res;
@ -1204,7 +1204,7 @@ constexpr integer<Bits, Signed>::operator T() const noexcept
UnsignedT res{};
for (unsigned i = 0; i < _impl::item_count && i < (sizeof(T) + sizeof(base_type) - 1) / sizeof(base_type); ++i)
res += UnsignedT(items[i]) << (sizeof(base_type) * 8 * i);
res += UnsignedT(items[i]) << (sizeof(base_type) * 8 * i); // NOLINT(clang-analyzer-core.UndefinedBinaryOperatorResult)
return res;
}
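The NOLINT annotations in the wide_integer hunks above suppress clang-analyzer false positives on the 64-bit limb arithmetic: composing a 128-bit value as `(hi << 64) + lo` and reassembling items in the integer conversion. A short Python sketch of the same two-limb arithmetic, which is easy to check because Python integers are arbitrary-precision (illustrative, not project code):

```python
# Illustrative check of the limb arithmetic the NOLINTed lines perform:
# a 128-bit value is rebuilt from two 64-bit items as (hi << 64) + lo.
def compose(hi: int, lo: int) -> int:
    return (hi << 64) + lo

MASK64 = (1 << 64) - 1
a = compose(0x0123456789ABCDEF, 0xFEDCBA9876543210)
b = compose(0x0000000000000002, 0x0000000000000003)
c = (a * b) & ((1 << 128) - 1)        # wrap like unsigned __int128
print(hex(c & MASK64), hex(c >> 64))  # low item, high item
```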

contrib/arrow (vendored)

@ -1 +1 @@
Subproject commit 6f274b737c66a6c39bab0d3bdf6cf7d139ef06f5
Subproject commit efdcd015cfdee1b6aa349c9ca227ca12c3d697f5

contrib/brotli (vendored)

@ -1 +1 @@
Subproject commit 5bd78768449751a78d4b4c646b0612917986f5b1
Subproject commit 63be8a99401992075c23e99f7c84de1c653e39e2

contrib/cppkafka (vendored)

@ -1 +1 @@
Subproject commit 64bd67db12b9c705e9127439a5b05b351d9df7da
Subproject commit 5a119f689f8a4d90d10a9635e7ee2bee5c127de1

contrib/libxml2 (vendored)

@ -1 +1 @@
Subproject commit a075d256fd9ff15590b86d981b75a50ead124fca
Subproject commit 7846b0a677f8d3ce72486125fa281e92ac9970e8

contrib/msgpack-c (vendored)

@ -1 +1 @@
Subproject commit 790b3fe58ebded7a8bd130782ef28bec5784c248
Subproject commit 46684265d50b5d1b062d4c5c428ba08462844b1d

contrib/rapidjson (vendored)

@ -1 +1 @@
Subproject commit b571bd5c1a3b1fc931d77ae36932537a3c9018c3
Subproject commit c4ef90ccdbc21d5d5a628d08316bfd301e32d6fa

contrib/snappy (vendored)

@ -1 +1 @@
Subproject commit 3786173af204d21da97180977ad6ab4321138b3d
Subproject commit fb057edfed820212076239fd32cb2ff23e9016bf


@ -6,7 +6,7 @@ FROM ubuntu:20.04
ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=14
RUN apt-get update \
&& apt-get install \


@ -319,25 +319,16 @@ if __name__ == "__main__":
)
parser.add_argument("--output-dir", type=dir_name, required=True)
parser.add_argument("--build-type", choices=("debug", ""), default="")
parser.add_argument(
"--compiler",
choices=(
"clang-11",
"clang-11-darwin",
"clang-11-darwin-aarch64",
"clang-11-aarch64",
"clang-12",
"clang-12-darwin",
"clang-12-darwin-aarch64",
"clang-12-aarch64",
"clang-13",
"clang-13-darwin",
"clang-13-darwin-aarch64",
"clang-13-aarch64",
"clang-13-ppc64le",
"clang-11-freebsd",
"clang-12-freebsd",
"clang-13-freebsd",
"clang-14",
"clang-14-darwin",
"clang-14-darwin-aarch64",
"clang-14-aarch64",
"clang-14-ppc64le",
"clang-14-freebsd",
"gcc-11",
),
default="clang-13",
@ -348,6 +339,7 @@ if __name__ == "__main__":
choices=("address", "thread", "memory", "undefined", ""),
default="",
)
parser.add_argument("--split-binary", action="store_true")
parser.add_argument("--clang-tidy", action="store_true")
parser.add_argument("--cache", choices=("ccache", "distcc", ""), default="")


@ -7,7 +7,7 @@ FROM clickhouse/test-util:$FROM_TAG
ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=14
RUN apt-get update \
&& apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \


@ -8,14 +8,18 @@ FROM clickhouse/binary-builder:$FROM_TAG
ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
RUN apt-get update && apt-get --yes --allow-unauthenticated install clang-13 libllvm13 libclang-13-dev libmlir-13-dev
RUN apt-get update && apt-get --yes --allow-unauthenticated install clang-14 libllvm14 libclang-14-dev libmlir-14-dev
# repo versions doesn't work correctly with C++17
# also we push reports to s3, so we add index.html to subfolder urls
# https://github.com/ClickHouse-Extras/woboq_codebrowser/commit/37e15eaf377b920acb0b48dbe82471be9203f76b
RUN git clone https://github.com/ClickHouse-Extras/woboq_codebrowser
RUN cd woboq_codebrowser && cmake . -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=clang\+\+-13 -DCMAKE_C_COMPILER=clang-13 && make -j
# TODO: remove branch in a few weeks after merge, e.g. in May or June 2022
RUN git clone https://github.com/ClickHouse-Extras/woboq_codebrowser --branch llvm-14 \
&& cd woboq_codebrowser \
&& cmake . -DCMAKE_BUILD_TYPE=Release -DCMAKE_CXX_COMPILER=clang\+\+-14 -DCMAKE_C_COMPILER=clang-14 \
&& make -j \
&& cd .. \
&& rm -rf woboq_codebrowser
ENV CODEGEN=/woboq_codebrowser/generator/codebrowser_generator
ENV CODEINDEX=/woboq_codebrowser/indexgenerator/codebrowser_indexgenerator
@ -28,7 +32,7 @@ ENV SHA=nosha
ENV DATA="https://s3.amazonaws.com/clickhouse-test-reports/codebrowser/data"
CMD mkdir -p $BUILD_DIRECTORY && cd $BUILD_DIRECTORY && \
cmake $SOURCE_DIRECTORY -DCMAKE_CXX_COMPILER=/usr/bin/clang\+\+-13 -DCMAKE_C_COMPILER=/usr/bin/clang-13 -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DENABLE_EMBEDDED_COMPILER=0 -DENABLE_S3=0 && \
cmake $SOURCE_DIRECTORY -DCMAKE_CXX_COMPILER=/usr/bin/clang\+\+-14 -DCMAKE_C_COMPILER=/usr/bin/clang-14 -DCMAKE_EXPORT_COMPILE_COMMANDS=ON -DENABLE_EMBEDDED_COMPILER=0 -DENABLE_S3=0 && \
mkdir -p $HTML_RESULT_DIRECTORY && \
$CODEGEN -b $BUILD_DIRECTORY -a -o $HTML_RESULT_DIRECTORY -p ClickHouse:$SOURCE_DIRECTORY:$SHA -d $DATA | ts '%Y-%m-%d %H:%M:%S' && \
cp -r $STATIC_DATA $HTML_RESULT_DIRECTORY/ &&\


@ -7,7 +7,7 @@ FROM clickhouse/test-util:$FROM_TAG
ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=13
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=14
RUN apt-get update \
&& apt-get install ca-certificates lsb-release wget gnupg apt-transport-https \


@ -12,7 +12,7 @@ stage=${stage:-}
script_dir="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
echo "$script_dir"
repo_dir=ch
BINARY_TO_DOWNLOAD=${BINARY_TO_DOWNLOAD:="clang-13_debug_none_bundled_unsplitted_disable_False_binary"}
BINARY_TO_DOWNLOAD=${BINARY_TO_DOWNLOAD:="clang-14_debug_none_bundled_unsplitted_disable_False_binary"}
BINARY_URL_TO_DOWNLOAD=${BINARY_URL_TO_DOWNLOAD:="https://clickhouse-builds.s3.amazonaws.com/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/$BINARY_TO_DOWNLOAD/clickhouse"}
function clone


@ -2,7 +2,7 @@
set -euo pipefail
CLICKHOUSE_PACKAGE=${CLICKHOUSE_PACKAGE:="https://clickhouse-builds.s3.amazonaws.com/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/clang-13_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"}
CLICKHOUSE_PACKAGE=${CLICKHOUSE_PACKAGE:="https://clickhouse-builds.s3.amazonaws.com/$PR_TO_TEST/$SHA_TO_TEST/clickhouse_build_check/clang-14_relwithdebuginfo_none_bundled_unsplitted_disable_False_binary/clickhouse"}
CLICKHOUSE_REPO_PATH=${CLICKHOUSE_REPO_PATH:=""}


@ -207,6 +207,13 @@ function run_tests
test_files=($(ls "$test_prefix"/*.xml))
fi
# We can filter out certain tests
if [ -v CHPC_TEST_GREP_EXCLUDE ]; then
# filter tests array in bash https://stackoverflow.com/a/40375567
filtered_test_files=( $( for i in ${test_files[@]} ; do echo $i ; done | grep -v ${CHPC_TEST_GREP_EXCLUDE} ) )
test_files=("${filtered_test_files[@]}")
fi
# We split perf tests into multiple checks to make them faster
if [ -v CHPC_TEST_RUN_BY_HASH_TOTAL ]; then
# filter tests array in bash https://stackoverflow.com/a/40375567
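The new CHPC_TEST_GREP_EXCLUDE branch filters the test list through `grep -v`, mirroring the hash-based filter that follows it. The same exclusion logic as a small Python sketch (illustrative only):

```python
# Illustrative Python version of the CHPC_TEST_GREP_EXCLUDE filter above:
# drop every test file whose name matches the exclusion pattern.
import re

def filter_tests(test_files, exclude_pattern):
    rx = re.compile(exclude_pattern)
    return [f for f in test_files if not rx.search(f)]

print(filter_tests(["join.xml", "hash_join.xml", "sort.xml"], "join"))
```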


@ -338,6 +338,7 @@ then
-e "Code: 1000, e.code() = 111, Connection refused" \
-e "UNFINISHED" \
-e "Renaming unexpected part" \
-e "PART_IS_TEMPORARILY_LOCKED" \
/var/log/clickhouse-server/clickhouse-server.backward.clean.log | zgrep -Fa "<Error>" > /test_output/bc_check_error_messages.txt \
&& echo -e 'Backward compatibility check: Error message in clickhouse-server.log (see bc_check_error_messages.txt)\tFAIL' >> /test_output/test_results.tsv \
|| echo -e 'Backward compatibility check: No Error messages in clickhouse-server.log\tOK' >> /test_output/test_results.tsv


@ -6,8 +6,8 @@ Minimal ClickHouse build example:
```bash
cmake .. \
-DCMAKE_C_COMPILER=$(which clang-13) \
-DCMAKE_CXX_COMPILER=$(which clang++-13) \
-DCMAKE_C_COMPILER=$(which clang-14) \
-DCMAKE_CXX_COMPILER=$(which clang++-14) \
-DCMAKE_BUILD_TYPE=Debug \
-DENABLE_UTILS=OFF \
-DENABLE_TESTS=OFF


@ -10,7 +10,7 @@ This is intended for continuous integration checks that run on Linux servers.
The cross-build for AARCH64 is based on the [Build instructions](../development/build.md), follow them first.
## Install Clang-13
## Install Clang-14 or newer
Follow the instructions from https://apt.llvm.org/ for your Ubuntu or Debian setup or do
```
@ -31,7 +31,7 @@ tar xJf gcc-arm-8.3-2019.03-x86_64-aarch64-linux-gnu.tar.xz -C build-aarch64/cma
``` bash
cd ClickHouse
mkdir build-arm64
CC=clang-13 CXX=clang++-13 cmake . -Bbuild-arm64 -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-aarch64.cmake
CC=clang-14 CXX=clang++-14 cmake . -Bbuild-arm64 -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-aarch64.cmake
ninja -C build-arm64
```


@ -10,14 +10,14 @@ This is intended for continuous integration checks that run on Linux servers. If
The cross-build for Mac OS X is based on the [Build instructions](../development/build.md), follow them first.
## Install Clang-13
## Install Clang-14
Follow the instructions from https://apt.llvm.org/ for your Ubuntu or Debian setup.
For example the commands for Bionic are like:
``` bash
sudo echo "deb [trusted=yes] http://apt.llvm.org/bionic/ llvm-toolchain-bionic-13 main" >> /etc/apt/sources.list
sudo apt-get install clang-13
sudo echo "deb [trusted=yes] http://apt.llvm.org/bionic/ llvm-toolchain-bionic-14 main" >> /etc/apt/sources.list
sudo apt-get install clang-14
```
## Install Cross-Compilation Toolset {#install-cross-compilation-toolset}


@ -23,7 +23,7 @@ sudo bash -c "$(wget -O - https://apt.llvm.org/llvm.sh)"
``` bash
cd ClickHouse
mkdir build-riscv64
CC=clang-13 CXX=clang++-13 cmake . -Bbuild-riscv64 -G Ninja -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-riscv64.cmake -DGLIBC_COMPATIBILITY=OFF -DENABLE_LDAP=OFF -DOPENSSL_NO_ASM=ON -DENABLE_JEMALLOC=ON -DENABLE_PARQUET=OFF -DENABLE_ORC=OFF -DUSE_UNWIND=OFF -DENABLE_GRPC=OFF -DENABLE_HDFS=OFF -DENABLE_MYSQL=OFF
CC=clang-14 CXX=clang++-14 cmake . -Bbuild-riscv64 -G Ninja -DCMAKE_TOOLCHAIN_FILE=cmake/linux/toolchain-riscv64.cmake -DGLIBC_COMPATIBILITY=OFF -DENABLE_LDAP=OFF -DOPENSSL_NO_ASM=ON -DENABLE_JEMALLOC=ON -DENABLE_PARQUET=OFF -DENABLE_ORC=OFF -DUSE_UNWIND=OFF -DENABLE_GRPC=OFF -DENABLE_HDFS=OFF -DENABLE_MYSQL=OFF
ninja -C build-riscv64
```


@ -77,7 +77,7 @@ The build requires the following components:
- Git (is used only to checkout the sources, it's not needed for the build)
- CMake 3.14 or newer
- Ninja
- C++ compiler: clang-13 or newer
- C++ compiler: clang-14 or newer
- Linker: lld
If all the components are installed, you may build in the same way as the steps above.


@ -155,7 +155,7 @@ While inside the `build` directory, configure your build by running CMake. Befor
export CC=clang CXX=clang++
cmake ..
If you installed clang using the automatic installation script above, also specify the version of clang installed in the first command, e.g. `export CC=clang-13 CXX=clang++-13`. The clang version will be in the script output.
If you installed clang using the automatic installation script above, also specify the version of clang installed in the first command, e.g. `export CC=clang-14 CXX=clang++-14`. The clang version will be in the script output.
The `CC` variable specifies the compiler for C (short for C Compiler), and `CXX` variable instructs which C++ compiler is to be used for building.


@ -3,6 +3,6 @@ sidebar_label: Development
sidebar_position: 58
---
# ClickHouse Development {#clickhouse-development}
# ClickHouse Development
[Original article](https://clickhouse.com/docs/en/development/) <!--hide-->


@ -81,7 +81,7 @@ $ ./src/unit_tests_dbms --gtest_filter=LocalAddress*
## Performance Tests {#performance-tests}
Performance tests allow to measure and compare performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by `.xml` file with description of test case. Tests are run with `docker/tests/performance-comparison` tool . See the readme file for invocation.
Performance tests allow to measure and compare performance of some isolated part of ClickHouse on synthetic queries. Tests are located at `tests/performance`. Each test is represented by `.xml` file with description of test case. Tests are run with `docker/test/performance-comparison` tool . See the readme file for invocation.
Each test run one or multiple queries (possibly with combinations of parameters) in a loop.


@ -4,7 +4,7 @@ toc_priority: 27
toc_title: Introduction
---
# Database Engines {#database-engines}
# Database Engines
Database engines allow you to work with tables. By default, ClickHouse uses the [Atomic](../../engines/database-engines/atomic.md) database engine, which provides configurable [table engines](../../engines/table-engines/index.md) and an [SQL dialect](../../sql-reference/syntax.md).


@ -3,7 +3,7 @@ sidebar_label: Lazy
sidebar_position: 20
---
# Lazy {#lazy}
# Lazy
Keeps tables in RAM only `expiration_time_in_seconds` seconds after last access. Can be used only with \*Log tables.


@ -3,7 +3,7 @@ sidebar_label: MaterializedPostgreSQL
sidebar_position: 60
---
# [experimental] MaterializedPostgreSQL {#materialize-postgresql}
# [experimental] MaterializedPostgreSQL
Creates a ClickHouse database with tables from PostgreSQL database. Firstly, database with engine `MaterializedPostgreSQL` creates a snapshot of PostgreSQL database and loads required tables. Required tables can include any subset of tables from any subset of schemas from specified database. Along with the snapshot database engine acquires LSN and once initial dump of tables is performed - it starts pulling updates from WAL. After database is created, newly added tables to PostgreSQL database are not automatically added to replication. They have to be added manually with `ATTACH TABLE db.table` query.


@ -3,7 +3,7 @@ sidebar_position: 40
sidebar_label: PostgreSQL
---
# PostgreSQL {#postgresql}
# PostgreSQL
Allows to connect to databases on a remote [PostgreSQL](https://www.postgresql.org) server. Supports read and write operations (`SELECT` and `INSERT` queries) to exchange data between ClickHouse and PostgreSQL.


@ -3,7 +3,7 @@ sidebar_position: 30
sidebar_label: Replicated
---
# [experimental] Replicated {#replicated}
# [experimental] Replicated
The engine is based on the [Atomic](../../engines/database-engines/atomic.md) engine. It supports replication of metadata via DDL log being written to ZooKeeper and executed on all of the replicas for a given database.


@ -3,7 +3,7 @@ sidebar_position: 55
sidebar_label: SQLite
---
# SQLite {#sqlite}
# SQLite
Allows to connect to [SQLite](https://www.sqlite.org/index.html) database and perform `INSERT` and `SELECT` queries to exchange data between ClickHouse and SQLite.


@ -4,7 +4,7 @@ toc_priority: 26
toc_title: Introduction
---
# Table Engines {#table_engines}
# Table Engines
The table engine (type of table) determines:


@ -3,7 +3,7 @@ sidebar_position: 12
sidebar_label: ExternalDistributed
---
# ExternalDistributed {#externaldistributed}
# ExternalDistributed
The `ExternalDistributed` engine allows to perform `SELECT` queries on data that is stored on a remote servers MySQL or PostgreSQL. Accepts [MySQL](../../../engines/table-engines/integrations/mysql.md) or [PostgreSQL](../../../engines/table-engines/integrations/postgresql.md) engines as an argument so sharding is possible.


@ -3,7 +3,7 @@ sidebar_position: 9
sidebar_label: EmbeddedRocksDB
---
# EmbeddedRocksDB Engine {#EmbeddedRocksDB-engine}
# EmbeddedRocksDB Engine
This engine allows integrating ClickHouse with [rocksdb](http://rocksdb.org/).


@ -3,7 +3,7 @@ sidebar_position: 6
sidebar_label: HDFS
---
# HDFS {#table_engines-hdfs}
# HDFS
This engine provides integration with the [Apache Hadoop](https://en.wikipedia.org/wiki/Apache_Hadoop) ecosystem by allowing to manage data on [HDFS](https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html) via ClickHouse. This engine is similar to the [File](../../../engines/table-engines/special/file.md#table_engines-file) and [URL](../../../engines/table-engines/special/url.md#table_engines-url) engines, but provides Hadoop-specific features.


@ -3,7 +3,7 @@ sidebar_position: 4
sidebar_label: Hive
---
# Hive {#hive}
# Hive
The Hive engine allows you to perform `SELECT` queries on HDFS Hive table. Currently it supports input formats as below:


@ -3,7 +3,7 @@ sidebar_position: 40
sidebar_label: Integrations
---
# Table Engines for Integrations {#table-engines-for-integrations}
# Table Engines for Integrations
ClickHouse provides various means for integrating with external systems, including table engines. Like with all other table engines, the configuration is done using `CREATE TABLE` or `ALTER TABLE` queries. Then from a user perspective, the configured integration looks like a normal table, but queries to it are proxied to the external system. This transparent querying is one of the key advantages of this approach over alternative integration methods, like external dictionaries or table functions, which require to use custom query methods on each use.


@ -3,7 +3,7 @@ sidebar_position: 3
sidebar_label: JDBC
---
# JDBC {#table-engine-jdbc}
# JDBC
Allows ClickHouse to connect to external databases via [JDBC](https://en.wikipedia.org/wiki/Java_Database_Connectivity).


@ -3,7 +3,7 @@ sidebar_position: 8
sidebar_label: Kafka
---
# Kafka {#kafka}
# Kafka
This engine works with [Apache Kafka](http://kafka.apache.org/).


@ -3,7 +3,7 @@ sidebar_position: 12
sidebar_label: MaterializedPostgreSQL
---
# MaterializedPostgreSQL {#materialize-postgresql}
# MaterializedPostgreSQL
Creates ClickHouse table with an initial data dump of PostgreSQL table and starts replication process, i.e. executes background job to apply new changes as they happen on PostgreSQL table in the remote PostgreSQL database.


@ -3,7 +3,7 @@ sidebar_position: 5
sidebar_label: MongoDB
---
# MongoDB {#mongodb}
# MongoDB
MongoDB engine is read-only table engine which allows to read data (`SELECT` queries) from remote MongoDB collection. Engine supports only non-nested data types. `INSERT` queries are not supported.


@ -3,7 +3,7 @@ sidebar_position: 4
sidebar_label: MySQL
---
# MySQL {#mysql}
# MySQL
The MySQL engine allows you to perform `SELECT` and `INSERT` queries on data that is stored on a remote MySQL server.


@ -3,7 +3,7 @@ sidebar_position: 2
sidebar_label: ODBC
---
# ODBC {#table-engine-odbc}
# ODBC
Allows ClickHouse to connect to external databases via [ODBC](https://en.wikipedia.org/wiki/Open_Database_Connectivity).


@ -3,7 +3,7 @@ sidebar_position: 11
sidebar_label: PostgreSQL
---
# PostgreSQL {#postgresql}
# PostgreSQL
The PostgreSQL engine allows to perform `SELECT` and `INSERT` queries on data that is stored on a remote PostgreSQL server.


@ -3,7 +3,7 @@ sidebar_position: 10
sidebar_label: RabbitMQ
---
# RabbitMQ Engine {#rabbitmq-engine}
# RabbitMQ Engine
This engine allows integrating ClickHouse with [RabbitMQ](https://www.rabbitmq.com).


@ -3,7 +3,7 @@ sidebar_position: 7
sidebar_label: S3
---
# S3 Table Engine {#table-engine-s3}
# S3 Table Engine
This engine provides integration with [Amazon S3](https://aws.amazon.com/s3/) ecosystem. This engine is similar to the [HDFS](../../../engines/table-engines/special/file.md#table_engines-hdfs) engine, but provides S3-specific features.


@ -3,7 +3,7 @@ sidebar_position: 7
sidebar_label: SQLite
---
# SQLite {#sqlite}
# SQLite
The engine allows to import and export data to SQLite and supports queries to SQLite tables directly from ClickHouse.


@ -3,7 +3,7 @@ sidebar_position: 20
sidebar_label: Log Family
---
# Log Engine Family {#log-engine-family}
# Log Engine Family
These engines were developed for scenarios when you need to quickly write many small tables (up to about 1 million rows) and read them later as a whole.


@ -3,7 +3,7 @@ toc_priority: 33
toc_title: Log
---
# Log {#log}
# Log
The engine belongs to the family of `Log` engines. See the common properties of `Log` engines and their differences in the [Log Engine Family](../../../engines/table-engines/log-family/index.md) article.


@ -3,7 +3,7 @@ toc_priority: 32
toc_title: StripeLog
---
# Stripelog {#stripelog}
# Stripelog
This engine belongs to the family of log engines. See the common properties of log engines and their differences in the [Log Engine Family](../../../engines/table-engines/log-family/index.md) article.


@ -3,7 +3,7 @@ toc_priority: 34
toc_title: TinyLog
---
# TinyLog {#tinylog}
# TinyLog
The engine belongs to the log engine family. See [Log Engine Family](../../../engines/table-engines/log-family/index.md) for common properties of log engines and their differences.


@ -3,7 +3,7 @@ sidebar_position: 60
sidebar_label: AggregatingMergeTree
---
# AggregatingMergeTree {#aggregatingmergetree}
# AggregatingMergeTree
The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree), altering the logic for data parts merging. ClickHouse replaces all rows with the same primary key (or more accurately, with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md)) with a single row (within a one data part) that stores a combination of states of aggregate functions.


@ -3,7 +3,7 @@ sidebar_position: 70
sidebar_label: CollapsingMergeTree
---
# CollapsingMergeTree {#table_engine-collapsingmergetree}
# CollapsingMergeTree
The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md) and adds the logic of rows collapsing to data parts merge algorithm.


@ -3,7 +3,7 @@ sidebar_position: 30
sidebar_label: Custom Partitioning Key
---
# Custom Partitioning Key {#custom-partitioning-key}
# Custom Partitioning Key
:::warning
In most cases you do not need a partition key, and in most other cases you do not need a partition key more granular than by months. Partitioning does not speed up queries (in contrast to the ORDER BY expression).


@ -3,7 +3,7 @@ sidebar_position: 90
sidebar_label: GraphiteMergeTree
---
# GraphiteMergeTree {#graphitemergetree}
# GraphiteMergeTree
This engine is designed for thinning and aggregating/averaging (rollup) [Graphite](http://graphite.readthedocs.io/en/latest/index.html) data. It may be helpful to developers who want to use ClickHouse as a data store for Graphite.


@ -3,7 +3,7 @@ sidebar_position: 10
sidebar_label: MergeTree Family
---
# MergeTree Engine Family {#mergetree-engine-family}
# MergeTree Engine Family
Table engines from the MergeTree family are the core of ClickHouse data storage capabilities. They provide most features for resilience and high-performance data retrieval: columnar storage, custom partitioning, sparse primary index, secondary data-skipping indexes, etc.


@ -3,7 +3,7 @@ sidebar_position: 11
sidebar_label: MergeTree
---
# MergeTree {#table_engines-mergetree}
# MergeTree
The `MergeTree` engine and other engines of this family (`*MergeTree`) are the most robust ClickHouse table engines.


@ -3,7 +3,7 @@ sidebar_position: 40
sidebar_label: ReplacingMergeTree
---
# ReplacingMergeTree {#replacingmergetree}
# ReplacingMergeTree
The engine differs from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree) in that it removes duplicate entries with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md) value (`ORDER BY` table section, not `PRIMARY KEY`).


@ -3,7 +3,7 @@ sidebar_position: 20
sidebar_label: Data Replication
---
# Data Replication {#table_engines-replication}
# Data Replication
Replication is only supported for tables in the MergeTree family:


@ -3,7 +3,7 @@ sidebar_position: 50
sidebar_label: SummingMergeTree
---
# SummingMergeTree {#summingmergetree}
# SummingMergeTree
The engine inherits from [MergeTree](../../../engines/table-engines/mergetree-family/mergetree.md#table_engines-mergetree). The difference is that when merging data parts for `SummingMergeTree` tables ClickHouse replaces all the rows with the same primary key (or more accurately, with the same [sorting key](../../../engines/table-engines/mergetree-family/mergetree.md)) with one row which contains summarized values for the columns with the numeric data type. If the sorting key is composed in a way that a single key value corresponds to large number of rows, this significantly reduces storage volume and speeds up data selection.


@ -3,7 +3,7 @@ sidebar_position: 80
sidebar_label: VersionedCollapsingMergeTree
---
# VersionedCollapsingMergeTree {#versionedcollapsingmergetree}
# VersionedCollapsingMergeTree
This engine:


@ -3,7 +3,7 @@ sidebar_position: 120
sidebar_label: Buffer
---
# Buffer Table Engine {#buffer}
# Buffer Table Engine
Buffers the data to write in RAM, periodically flushing it to another table. During the read operation, data is read from the buffer and the other table simultaneously.


@ -3,7 +3,7 @@ sidebar_position: 20
sidebar_label: Dictionary
---
# Dictionary Table Engine {#dictionary}
# Dictionary Table Engine
The `Dictionary` engine displays the [dictionary](../../../sql-reference/dictionaries/external-dictionaries/external-dicts.md) data as a ClickHouse table.


@ -3,7 +3,7 @@ sidebar_position: 10
sidebar_label: Distributed
---
# Distributed Table Engine {#distributed}
# Distributed Table Engine
Tables with Distributed engine do not store any data of their own, but allow distributed query processing on multiple servers.
Reading is automatically parallelized. During a read, the table indexes on remote servers are used, if there are any.


@ -3,7 +3,7 @@ sidebar_position: 130
sidebar_label: External Data
---
# External Data for Query Processing {#external-data-for-query-processing}
# External Data for Query Processing
ClickHouse allows sending a server the data that is needed for processing a query, together with a `SELECT` query. This data is put in a temporary table (see the section “Temporary tables”) and can be used in the query (for example, in `IN` operators).


@ -3,7 +3,7 @@ sidebar_position: 40
sidebar_label: File
---
# File Table Engine {#table_engines-file}
# File Table Engine
The File table engine keeps the data in a file in one of the supported [file formats](../../../interfaces/formats.md#formats) (`TabSeparated`, `Native`, etc.).


@ -3,7 +3,7 @@ sidebar_position: 140
sidebar_label: GenerateRandom
---
# GenerateRandom Table Engine {#table_engines-generate}
# GenerateRandom Table Engine
The GenerateRandom table engine produces random data for given table schema.


@ -3,7 +3,7 @@ sidebar_position: 50
sidebar_label: Special
---
# Special Table Engines {#special-table-engines}
# Special Table Engines
There are three main categories of table engines:


@ -3,7 +3,7 @@ sidebar_position: 70
sidebar_label: Join
---
# Join Table Engine {#join}
# Join Table Engine
Optional prepared data structure for usage in [JOIN](../../../sql-reference/statements/select/join.md#select-join) operations.


@ -3,7 +3,7 @@ sidebar_position: 100
sidebar_label: MaterializedView
---
# MaterializedView Table Engine {#materializedview}
# MaterializedView Table Engine
Used for implementing materialized views (for more information, see [CREATE VIEW](../../../sql-reference/statements/create/view.md#materialized)). For storing data, it uses a different engine that was specified when creating the view. When reading from a table, it just uses that engine.


@ -3,7 +3,7 @@ sidebar_position: 110
sidebar_label: Memory
---
# Memory Table Engine {#memory}
# Memory Table Engine
The Memory engine stores data in RAM, in uncompressed form. Data is stored in exactly the same form as it is received when read. In other words, reading from this table is completely free.
Concurrent data access is synchronized. Locks are short: read and write operations do not block each other.


@ -3,7 +3,7 @@ sidebar_position: 30
sidebar_label: Merge
---
# Merge Table Engine {#merge}
# Merge Table Engine
The `Merge` engine (not to be confused with `MergeTree`) does not store data itself, but allows reading from any number of other tables simultaneously.


@ -3,7 +3,7 @@ sidebar_position: 50
sidebar_label: 'Null'
---
# Null Table Engine {#null}
# Null Table Engine
When writing to a `Null` table, data is ignored. When reading from a `Null` table, the response is empty.


@ -3,7 +3,7 @@ sidebar_position: 60
sidebar_label: Set
---
# Set Table Engine {#set}
# Set Table Engine
A data set that is always in RAM. It is intended for use on the right side of the `IN` operator (see the section “IN operators”).


@ -3,7 +3,7 @@ sidebar_position: 80
sidebar_label: URL
---
# URL Table Engine {#table_engines-url}
# URL Table Engine
Queries data to/from a remote HTTP/HTTPS server. This engine is similar to the [File](../../../engines/table-engines/special/file.md) engine.


@ -3,7 +3,7 @@ sidebar_position: 90
sidebar_label: View
---
# View Table Engine {#table_engines-view}
# View Table Engine
Used for implementing views (for more information, see the `CREATE VIEW query`). It does not store data, but only stores the specified `SELECT` query. When reading from a table, it runs this query (and deletes all unnecessary columns from the query).


@ -3,7 +3,7 @@ sidebar_label: AMPLab Big Data Benchmark
description: A benchmark dataset used for comparing the performance of data warehousing solutions.
---
# AMPLab Big Data Benchmark {#amplab-big-data-benchmark}
# AMPLab Big Data Benchmark
See https://amplab.cs.berkeley.edu/benchmark/


@ -6,7 +6,7 @@ description: ClickHouse can run on any Linux, FreeBSD, or Mac OS X with x86_64,
slug: /en/getting-started/install
---
# Installation {#installation}
# Installation
## System Requirements {#system-requirements}


@ -6,7 +6,7 @@ description: The ClickHouse Playground allows people to experiment with ClickHou
slug: /en/getting-started/playground
---
# ClickHouse Playground {#clickhouse-playground}
# ClickHouse Playground
[ClickHouse Playground](https://play.clickhouse.com/play?user=play) allows people to experiment with ClickHouse by running queries instantly, without setting up their server or cluster.
Several example datasets are available in Playground.


@ -3,7 +3,7 @@ sidebar_position: 17
sidebar_label: Command-Line Client
---
# Command-line Client {#command-line-client}
# Command-line Client
ClickHouse provides a native command-line client: `clickhouse-client`. The client supports command-line options and configuration files. For more information, see [Configuring](#interfaces_cli_configuration).


@ -3,7 +3,7 @@ sidebar_position: 24
sidebar_label: C++ Client Library
---
# C++ Client Library {#c-client-library}
# C++ Client Library
See README at [clickhouse-cpp](https://github.com/ClickHouse/clickhouse-cpp) repository.


@ -3,7 +3,7 @@ sidebar_position: 21
sidebar_label: Input and Output Formats
---
# Formats for Input and Output Data {#formats}
# Formats for Input and Output Data
ClickHouse can accept and return data in various formats. A format supported for input can be used to parse the data provided to `INSERT`s, to perform `SELECT`s from a file-backed table such as File, URL or HDFS, or to read an external dictionary. A format supported for output can be used to arrange the
results of a `SELECT`, and to perform `INSERT`s into a file-backed table.


@ -3,7 +3,7 @@ sidebar_position: 19
sidebar_label: gRPC Interface
---
# gRPC Interface {#grpc-interface}
# gRPC Interface
## Introduction {#grpc-interface-introduction}


@ -3,7 +3,7 @@ sidebar_position: 19
sidebar_label: HTTP Interface
---
# HTTP Interface {#http-interface}
# HTTP Interface
The HTTP interface lets you use ClickHouse on any platform from any programming language in a form of REST API. The HTTP interface is more limited than the native interface, but it has better language support.


@ -3,7 +3,7 @@ sidebar_position: 22
sidebar_label: JDBC Driver
---
# JDBC Driver {#jdbc-driver}
# JDBC Driver
Use the [official JDBC driver](https://github.com/ClickHouse/clickhouse-jdbc) (and Java client) to access ClickHouse from your Java applications.


@ -3,7 +3,7 @@ sidebar_position: 20
sidebar_label: MySQL Interface
---
# MySQL Interface {#mysql-interface}
# MySQL Interface
ClickHouse supports MySQL wire protocol. It can be enabled by [mysql_port](../operations/server-configuration-parameters/settings.md#server_configuration_parameters-mysql_port) setting in configuration file:


@ -3,7 +3,7 @@ sidebar_position: 23
sidebar_label: ODBC Driver
---
# ODBC Driver {#odbc-driver}
# ODBC Driver
Use the [official ODBC driver](https://github.com/ClickHouse/clickhouse-odbc) for accessing ClickHouse as a data source.


@ -5,7 +5,7 @@ keywords: [clickhouse, network, interfaces, http, tcp, grpc, command-line, clien
description: ClickHouse provides three network interfaces
---
# Interfaces {#interfaces}
# Interfaces
ClickHouse provides three network interfaces (they can be optionally wrapped in TLS for additional security):


@ -3,7 +3,7 @@ sidebar_position: 18
sidebar_label: Native Interface (TCP)
---
# Native Interface (TCP) {#native-interface-tcp}
# Native Interface (TCP)
The native protocol is used in the [command-line client](../interfaces/cli.md), for inter-server communication during distributed query processing, and also in other C++ programs. Unfortunately, native ClickHouse protocol does not have formal specification yet, but it can be reverse-engineered from ClickHouse source code (starting [around here](https://github.com/ClickHouse/ClickHouse/tree/master/src/Client)) and/or by intercepting and analyzing TCP traffic.


@ -3,7 +3,7 @@ sidebar_position: 26
sidebar_label: Client Libraries
---
# Client Libraries from Third-party Developers {#client-libraries-from-third-party-developers}
# Client Libraries from Third-party Developers
:::warning
ClickHouse Inc does **not** maintain the libraries listed below and hasn't done any extensive testing to ensure their quality.


@ -3,7 +3,7 @@ sidebar_position: 28
sidebar_label: Visual Interfaces
---
# Visual Interfaces from Third-party Developers {#visual-interfaces-from-third-party-developers}
# Visual Interfaces from Third-party Developers
## Open-Source {#open-source}


@ -3,7 +3,7 @@ toc_folder_title: Third-Party
sidebar_position: 24
---
# Third-Party Interfaces {#third-party-interfaces}
# Third-Party Interfaces
This is a collection of links to third-party tools that provide some sort of interface to ClickHouse. It can be either visual interface, command-line interface or an API:


@ -3,7 +3,7 @@ sidebar_position: 27
sidebar_label: Integrations
---
# Integration Libraries from Third-party Developers {#integration-libraries-from-third-party-developers}
# Integration Libraries from Third-party Developers
:::warning Disclaimer
ClickHouse, Inc. does **not** maintain the tools and libraries listed below and haven't done extensive testing to ensure their quality.


@ -3,7 +3,7 @@ sidebar_position: 29
sidebar_label: Proxies
---
# Proxy Servers from Third-party Developers {#proxy-servers-from-third-party-developers}
# Proxy Servers from Third-party Developers
## chproxy {#chproxy}

Some files were not shown because too many files have changed in this diff.