Merge branch 'master' into hanfei/keeperrocks

Han Fei 2024-02-28 15:30:52 +01:00
commit 658d074c39
1152 changed files with 26010 additions and 10154 deletions

View File

@@ -17,7 +17,7 @@ assignees: ''
 > A link to reproducer in [https://fiddle.clickhouse.com/](https://fiddle.clickhouse.com/).
-**Does it reproduce on recent release?**
+**Does it reproduce on the most recent release?**
 [The list of releases](https://github.com/ClickHouse/ClickHouse/blob/master/utils/list-versions/version_date.tsv)
@@ -34,11 +34,11 @@ assignees: ''
 **How to reproduce**
 * Which ClickHouse server version to use
-* Which interface to use, if matters
+* Which interface to use, if it matters
 * Non-default settings, if any
 * `CREATE TABLE` statements for all tables involved
 * Sample data for all these tables, use [clickhouse-obfuscator](https://github.com/ClickHouse/ClickHouse/blob/master/programs/obfuscator/Obfuscator.cpp#L42-L80) if necessary
-* Queries to run that lead to unexpected result
+* Queries to run that lead to an unexpected result
 **Expected behavior**

View File

@@ -11,7 +11,7 @@ on: # yamllint disable-line rule:truthy
       - 'backport/**'
 jobs:
   RunConfig:
-    runs-on: [self-hosted, style-checker]
+    runs-on: [self-hosted, style-checker-aarch64]
     outputs:
       data: ${{ steps.runconfig.outputs.CI_DATA }}
     steps:

View File

@@ -11,7 +11,7 @@ on: # yamllint disable-line rule:truthy
       - 'master'
 jobs:
   RunConfig:
-    runs-on: [self-hosted, style-checker]
+    runs-on: [self-hosted, style-checker-aarch64]
     outputs:
       data: ${{ steps.runconfig.outputs.CI_DATA }}
     steps:
@@ -35,7 +35,7 @@ jobs:
       - name: PrepareRunConfig
        id: runconfig
        run: |
-          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --configure --rebuild-all-binaries --outfile ${{ runner.temp }}/ci_run_data.json
+          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --configure --outfile ${{ runner.temp }}/ci_run_data.json
          echo "::group::CI configuration"
          python3 -m json.tool ${{ runner.temp }}/ci_run_data.json
@@ -55,7 +55,6 @@ jobs:
     uses: ./.github/workflows/reusable_docker.yml
     with:
       data: ${{ needs.RunConfig.outputs.data }}
-      set_latest: true
   StyleCheck:
     needs: [RunConfig, BuildDockers]
     if: ${{ !failure() && !cancelled() }}
@@ -98,6 +97,14 @@ jobs:
       build_name: package_release
       checkout_depth: 0
       data: ${{ needs.RunConfig.outputs.data }}
+  BuilderDebReleaseCoverage:
+    needs: [RunConfig, BuildDockers]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_build.yml
+    with:
+      build_name: package_release_coverage
+      checkout_depth: 0
+      data: ${{ needs.RunConfig.outputs.data }}
   BuilderDebAarch64:
     needs: [RunConfig, BuildDockers]
     if: ${{ !failure() && !cancelled() }}
@@ -278,6 +285,7 @@ jobs:
       - BuilderDebDebug
       - BuilderDebMsan
       - BuilderDebRelease
+      - BuilderDebReleaseCoverage
       - BuilderDebTsan
       - BuilderDebUBsan
     uses: ./.github/workflows/reusable_test.yml
@@ -319,7 +327,7 @@ jobs:
       run_command: |
         python3 build_report_check.py "$CHECK_NAME"
   MarkReleaseReady:
-    if: ${{ !failure() && !cancelled() }}
+    if: ${{ ! (contains(needs.*.result, 'skipped') || contains(needs.*.result, 'failure')) }}
     needs:
       - BuilderBinDarwin
       - BuilderBinDarwinAarch64
@@ -329,8 +337,6 @@ jobs:
     steps:
       - name: Check out repository code
        uses: ClickHouse/checkout@v1
-        with:
-          clear-repository: true
       - name: Mark Commit Release Ready
        run: |
          cd "$GITHUB_WORKSPACE/tests/ci"
@@ -369,14 +375,6 @@ jobs:
       test_name: Stateless tests (release)
       runner_type: func-tester
       data: ${{ needs.RunConfig.outputs.data }}
-  FunctionalStatelessTestReleaseDatabaseOrdinary:
-    needs: [RunConfig, BuilderDebRelease]
-    if: ${{ !failure() && !cancelled() }}
-    uses: ./.github/workflows/reusable_test.yml
-    with:
-      test_name: Stateless tests (release, DatabaseOrdinary)
-      runner_type: func-tester
-      data: ${{ needs.RunConfig.outputs.data }}
   FunctionalStatelessTestReleaseDatabaseReplicated:
     needs: [RunConfig, BuilderDebRelease]
     if: ${{ !failure() && !cancelled() }}
@@ -401,6 +399,22 @@ jobs:
       test_name: Stateless tests (release, s3 storage)
       runner_type: func-tester
       data: ${{ needs.RunConfig.outputs.data }}
+  FunctionalStatelessTestS3Debug:
+    needs: [RunConfig, BuilderDebDebug]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateless tests (debug, s3 storage)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+  FunctionalStatelessTestS3Tsan:
+    needs: [RunConfig, BuilderDebTsan]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateless tests (tsan, s3 storage)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
   FunctionalStatelessTestAarch64:
     needs: [RunConfig, BuilderDebAarch64]
     if: ${{ !failure() && !cancelled() }}
@@ -509,6 +523,55 @@ jobs:
       test_name: Stateful tests (debug)
       runner_type: func-tester
       data: ${{ needs.RunConfig.outputs.data }}
+  # Parallel replicas
+  FunctionalStatefulTestDebugParallelReplicas:
+    needs: [RunConfig, BuilderDebDebug]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateful tests (debug, ParallelReplicas)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+  FunctionalStatefulTestUBsanParallelReplicas:
+    needs: [RunConfig, BuilderDebUBsan]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateful tests (ubsan, ParallelReplicas)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+  FunctionalStatefulTestMsanParallelReplicas:
+    needs: [RunConfig, BuilderDebMsan]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateful tests (msan, ParallelReplicas)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+  FunctionalStatefulTestTsanParallelReplicas:
+    needs: [RunConfig, BuilderDebTsan]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateful tests (tsan, ParallelReplicas)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+  FunctionalStatefulTestAsanParallelReplicas:
+    needs: [RunConfig, BuilderDebAsan]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateful tests (asan, ParallelReplicas)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+  FunctionalStatefulTestReleaseParallelReplicas:
+    needs: [RunConfig, BuilderDebRelease]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Stateful tests (release, ParallelReplicas)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
 ##############################################################################################
 ########################### ClickBench #######################################################
 ##############################################################################################
@@ -716,6 +779,28 @@ jobs:
       runner_type: func-tester-aarch64
       data: ${{ needs.RunConfig.outputs.data }}
 ##############################################################################################
+############################ SQLLOGIC TEST ###################################################
+##############################################################################################
+  SQLLogicTestRelease:
+    needs: [RunConfig, BuilderDebRelease]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Sqllogic test (release)
+      runner_type: func-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+##############################################################################################
+##################################### SQL TEST ###############################################
+##############################################################################################
+  SQLTest:
+    needs: [RunConfig, BuilderDebRelease]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: SQLTest
+      runner_type: fuzzer-unit-tester
+      data: ${{ needs.RunConfig.outputs.data }}
+##############################################################################################
 ###################################### SQLANCER FUZZERS ######################################
 ##############################################################################################
   SQLancerTestRelease:
@@ -740,7 +825,6 @@ jobs:
       - MarkReleaseReady
       - FunctionalStatelessTestDebug
       - FunctionalStatelessTestRelease
-      - FunctionalStatelessTestReleaseDatabaseOrdinary
       - FunctionalStatelessTestReleaseDatabaseReplicated
       - FunctionalStatelessTestReleaseAnalyzer
       - FunctionalStatelessTestReleaseS3
@@ -749,6 +833,8 @@ jobs:
       - FunctionalStatelessTestTsan
       - FunctionalStatelessTestMsan
       - FunctionalStatelessTestUBsan
+      - FunctionalStatelessTestS3Debug
+      - FunctionalStatelessTestS3Tsan
       - FunctionalStatefulTestDebug
       - FunctionalStatefulTestRelease
       - FunctionalStatefulTestAarch64
@@ -756,6 +842,12 @@ jobs:
       - FunctionalStatefulTestTsan
       - FunctionalStatefulTestMsan
       - FunctionalStatefulTestUBsan
+      - FunctionalStatefulTestDebugParallelReplicas
+      - FunctionalStatefulTestUBsanParallelReplicas
+      - FunctionalStatefulTestMsanParallelReplicas
+      - FunctionalStatefulTestTsanParallelReplicas
+      - FunctionalStatefulTestAsanParallelReplicas
+      - FunctionalStatefulTestReleaseParallelReplicas
       - StressTestDebug
       - StressTestAsan
       - StressTestTsan
@@ -781,6 +873,8 @@ jobs:
       - UnitTestsReleaseClang
       - SQLancerTestRelease
       - SQLancerTestDebug
+      - SQLLogicTestRelease
+      - SQLTest
     runs-on: [self-hosted, style-checker]
     steps:
       - name: Check out repository code

View File

@@ -14,7 +14,7 @@ jobs:
     # The task for having a preserved ENV and event.json for later investigation
     uses: ./.github/workflows/debug.yml
   RunConfig:
-    runs-on: [self-hosted, style-checker]
+    runs-on: [self-hosted, style-checker-aarch64]
     outputs:
       data: ${{ steps.runconfig.outputs.CI_DATA }}
     steps:
@@ -28,7 +28,7 @@ jobs:
        id: runconfig
        run: |
          echo "::group::configure CI run"
-          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --configure --skip-jobs --rebuild-all-docker --outfile ${{ runner.temp }}/ci_run_data.json
+          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --configure --skip-jobs --outfile ${{ runner.temp }}/ci_run_data.json
          echo "::endgroup::"
          echo "::group::CI run configure results"

View File

@@ -18,7 +18,7 @@ on: # yamllint disable-line rule:truthy
 ##########################################################################################
 jobs:
   RunConfig:
-    runs-on: [self-hosted, style-checker]
+    runs-on: [self-hosted, style-checker-aarch64]
     outputs:
       data: ${{ steps.runconfig.outputs.CI_DATA }}
     steps:
@@ -147,6 +147,14 @@ jobs:
       build_name: package_release
       checkout_depth: 0
       data: ${{ needs.RunConfig.outputs.data }}
+  BuilderDebReleaseCoverage:
+    needs: [RunConfig, FastTest]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_build.yml
+    with:
+      build_name: package_release_coverage
+      checkout_depth: 0
+      data: ${{ needs.RunConfig.outputs.data }}
   BuilderDebAarch64:
     needs: [RunConfig, FastTest]
     if: ${{ !failure() && !cancelled() }}
@@ -309,6 +317,7 @@ jobs:
       - BuilderDebDebug
       - BuilderDebMsan
       - BuilderDebRelease
+      - BuilderDebReleaseCoverage
       - BuilderDebTsan
       - BuilderDebUBsan
     uses: ./.github/workflows/reusable_test.yml
@@ -483,21 +492,9 @@ jobs:
     if: ${{ !failure() && !cancelled() }}
     uses: ./.github/workflows/reusable_test.yml
     with:
-      test_name: tests bugfix validate check
+      test_name: Bugfix validation
       runner_type: func-tester
       data: ${{ needs.RunConfig.outputs.data }}
-      additional_envs: |
-        KILL_TIMEOUT=3600
-      run_command: |
-        TEMP_PATH="${TEMP_PATH}/integration" \
-          python3 integration_test_check.py "Integration $CHECK_NAME" \
-          --validate-bugfix --post-commit-status=file || echo 'ignore exit code'
-        TEMP_PATH="${TEMP_PATH}/stateless" \
-          python3 functional_test_check.py "Stateless $CHECK_NAME" "$KILL_TIMEOUT" \
-          --validate-bugfix --post-commit-status=file || echo 'ignore exit code'
-        python3 bugfix_validate_check.py "${TEMP_PATH}/stateless/functional_commit_status.tsv" "${TEMP_PATH}/integration/integration_commit_status.tsv"
 ##############################################################################################
 ############################ FUNCTIONAl STATEFUL TESTS #######################################
 ##############################################################################################
@@ -785,6 +782,15 @@ jobs:
       test_name: Integration tests (release)
       runner_type: stress-tester
       data: ${{ needs.RunConfig.outputs.data }}
+  IntegrationTestsAarch64:
+    needs: [RunConfig, BuilderDebAarch64]
+    if: ${{ !failure() && !cancelled() }}
+    uses: ./.github/workflows/reusable_test.yml
+    with:
+      test_name: Integration tests (aarch64)
+      # FIXME: there is no stress-tester for aarch64. func-tester-aarch64 is ok?
+      runner_type: func-tester-aarch64
+      data: ${{ needs.RunConfig.outputs.data }}
   IntegrationTestsFlakyCheck:
     needs: [RunConfig, BuilderDebAsan]
     if: ${{ !failure() && !cancelled() }}
@@ -881,6 +887,7 @@ jobs:
       - BuilderSpecialReport
       - DocsCheck
       - FastTest
+      - TestsBugfixCheck
       - FunctionalStatelessTestDebug
       - FunctionalStatelessTestRelease
       - FunctionalStatelessTestReleaseDatabaseReplicated
@@ -924,6 +931,7 @@ jobs:
       - IntegrationTestsAnalyzerAsan
       - IntegrationTestsTsan
       - IntegrationTestsRelease
+      - IntegrationTestsAarch64
       - IntegrationTestsFlakyCheck
       - PerformanceComparisonX86
       - PerformanceComparisonAarch
@@ -992,7 +1000,7 @@ jobs:
 ####################################### libFuzzer ###########################################
 #############################################################################################
   libFuzzer:
-    if: ${{ !failure() && !cancelled() && contains(github.event.pull_request.labels.*.name, 'libFuzzer') }}
+    if: ${{ !failure() && !cancelled() }}
     needs: [RunConfig, StyleCheck]
     uses: ./.github/workflows/libfuzzer.yml
     with:

View File

@@ -14,7 +14,7 @@ on: # yamllint disable-line rule:truthy
 jobs:
   RunConfig:
-    runs-on: [self-hosted, style-checker]
+    runs-on: [self-hosted, style-checker-aarch64]
     outputs:
       data: ${{ steps.runconfig.outputs.CI_DATA }}
     steps:
@@ -41,7 +41,7 @@ jobs:
        id: runconfig
        run: |
          echo "::group::configure CI run"
-          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --configure --rebuild-all-binaries --outfile ${{ runner.temp }}/ci_run_data.json
+          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --configure --outfile ${{ runner.temp }}/ci_run_data.json
          echo "::endgroup::"
          echo "::group::CI run configure results"
          python3 -m json.tool ${{ runner.temp }}/ci_run_data.json
@@ -91,6 +91,8 @@ jobs:
       build_name: package_release
       checkout_depth: 0
       data: ${{ needs.RunConfig.outputs.data }}
+      # always rebuild on release branches to be able to publish from any commit
+      force: true
   BuilderDebAarch64:
     needs: [RunConfig, BuildDockers]
     if: ${{ !failure() && !cancelled() }}
@@ -99,6 +101,8 @@ jobs:
       build_name: package_aarch64
       checkout_depth: 0
       data: ${{ needs.RunConfig.outputs.data }}
+      # always rebuild on release branches to be able to publish from any commit
+      force: true
   BuilderDebAsan:
     needs: [RunConfig, BuildDockers]
     if: ${{ !failure() && !cancelled() }}
@@ -142,6 +146,8 @@ jobs:
       build_name: binary_darwin
       checkout_depth: 0
       data: ${{ needs.RunConfig.outputs.data }}
+      # always rebuild on release branches to be able to publish from any commit
+      force: true
   BuilderBinDarwinAarch64:
     needs: [RunConfig, BuildDockers]
     if: ${{ !failure() && !cancelled() }}
@@ -150,6 +156,8 @@ jobs:
       build_name: binary_darwin_aarch64
       checkout_depth: 0
       data: ${{ needs.RunConfig.outputs.data }}
+      # always rebuild on release branches to be able to publish from any commit
+      force: true
 ############################################################################################
 ##################################### Docker images #######################################
 ############################################################################################
@@ -206,13 +214,8 @@ jobs:
     if: ${{ !cancelled() }}
     needs:
       - RunConfig
-      - BuilderDebRelease
-      - BuilderDebAarch64
-      - BuilderDebAsan
-      - BuilderDebTsan
-      - BuilderDebUBsan
-      - BuilderDebMsan
-      - BuilderDebDebug
+      - BuilderBinDarwin
+      - BuilderBinDarwinAarch64
     uses: ./.github/workflows/reusable_test.yml
     with:
       test_name: ClickHouse special build check
@@ -225,7 +228,7 @@ jobs:
       run_command: |
         python3 build_report_check.py "$CHECK_NAME"
   MarkReleaseReady:
-    if: ${{ !failure() && !cancelled() }}
+    if: ${{ ! (contains(needs.*.result, 'skipped') || contains(needs.*.result, 'failure')) }}
     needs:
       - BuilderBinDarwin
       - BuilderBinDarwinAarch64
@@ -235,8 +238,6 @@ jobs:
     steps:
       - name: Check out repository code
        uses: ClickHouse/checkout@v1
-        with:
-          clear-repository: true
       - name: Mark Commit Release Ready
        run: |
          cd "$GITHUB_WORKSPACE/tests/ci"

View File

@@ -26,6 +26,10 @@ name: Build ClickHouse
         description: json ci data
         type: string
         required: true
+      force:
+        description: disallow job skipping
+        type: boolean
+        default: false
       additional_envs:
         description: additional ENV variables to setup the job
         type: string
@@ -33,7 +37,7 @@ name: Build ClickHouse
 jobs:
   Build:
     name: Build-${{inputs.build_name}}
-    if: contains(fromJson(inputs.data).jobs_data.jobs_to_do, inputs.build_name)
+    if: ${{ contains(fromJson(inputs.data).jobs_data.jobs_to_do, inputs.build_name) || inputs.force }}
     env:
       GITHUB_JOB_OVERRIDDEN: Build-${{inputs.build_name}}
     runs-on: [self-hosted, '${{inputs.runner_type}}']
@@ -78,13 +82,15 @@ jobs:
          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" \
            --infile ${{ toJson(inputs.data) }} \
            --job-name "$BUILD_NAME" \
-            --run
+            --run \
+            ${{ inputs.force && '--force' || '' }}
       - name: Post
        # it still be build report to upload for failed build job
        if: ${{ !cancelled() }}
        run: |
          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --infile ${{ toJson(inputs.data) }} --post --job-name '${{inputs.build_name}}'
       - name: Mark as done
+        if: ${{ !cancelled() }}
        run: |
          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --infile ${{ toJson(inputs.data) }} --mark-success --job-name '${{inputs.build_name}}'
       - name: Clean

View File

@@ -46,7 +46,7 @@ jobs:
     needs: [DockerBuildAmd64, DockerBuildAarch64]
     runs-on: [self-hosted, style-checker]
     if: |
-      !failure() && !cancelled() && toJson(fromJson(inputs.data).docker_data.missing_multi) != '[]'
+      !failure() && !cancelled() && (toJson(fromJson(inputs.data).docker_data.missing_multi) != '[]' || inputs.set_latest)
     steps:
       - name: Check out repository code
        uses: ClickHouse/checkout@v1
@@ -55,14 +55,12 @@ jobs:
       - name: Build images
        run: |
          cd "$GITHUB_WORKSPACE/tests/ci"
+          FLAG_LATEST=''
          if [ "${{ inputs.set_latest }}" == "true" ]; then
+            FLAG_LATEST='--set-latest'
            echo "latest tag will be set for resulting manifests"
-            python3 docker_manifests_merge.py --suffix amd64 --suffix aarch64 \
-              --image-tags '${{ toJson(fromJson(inputs.data).docker_data.images) }}' \
-              --missing-images '${{ toJson(fromJson(inputs.data).docker_data.missing_multi) }}' \
-              --set-latest
-          else
-            python3 docker_manifests_merge.py --suffix amd64 --suffix aarch64 \
-              --image-tags '${{ toJson(fromJson(inputs.data).docker_data.images) }}' \
-              --missing-images '${{ toJson(fromJson(inputs.data).docker_data.missing_multi) }}'
          fi
+          python3 docker_manifests_merge.py --suffix amd64 --suffix aarch64 \
+            --image-tags '${{ toJson(fromJson(inputs.data).docker_data.images) }}' \
+            --missing-images '${{ toJson(fromJson(inputs.data).docker_data.missing_multi) }}' \
+            $FLAG_LATEST

View File

@@ -107,6 +107,7 @@ jobs:
        run: |
          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --infile ${{ toJson(inputs.data) }} --post --job-name '${{inputs.test_name}}'
       - name: Mark as done
+        if: ${{ !cancelled() }}
        run: |
          python3 "$GITHUB_WORKSPACE/tests/ci/ci.py" --infile ${{ toJson(inputs.data) }} --mark-success --job-name '${{inputs.test_name}}' --batch ${{matrix.batch}}
       - name: Clean

View File

@@ -55,7 +55,7 @@ jobs:
          python3 ./utils/security-generator/generate_security.py > SECURITY.md
          git diff HEAD
       - name: Create Pull Request
-        uses: peter-evans/create-pull-request@v3
+        uses: peter-evans/create-pull-request@v6
        with:
          author: "robot-clickhouse <robot-clickhouse@users.noreply.github.com>"
          token: ${{ secrets.ROBOT_CLICKHOUSE_COMMIT_TOKEN }}

View File

@@ -1,6 +1,6 @@
-### CI modificators (add a leading space to apply):
+### CI modificators (add a leading space to apply) ###
 ## To avoid a merge commit in CI:
 #no_merge_commit
@@ -8,12 +8,21 @@
 ## To discard CI cache:
 #no_ci_cache
+## To not test (only style check):
+#do_not_test
 ## To run specified set of tests in CI:
 #ci_set_<SET_NAME>
 #ci_set_reduced
+#ci_set_arm
+#ci_set_integration
 ## To run specified job in CI:
 #job_<JOB NAME>
 #job_stateless_tests_release
 #job_package_debug
 #job_integration_tests_asan
+## To run only specified batches for multi-batch job(s)
+#batch_2
+#btach_1_2_3

.gitmodules vendored
View File

@@ -99,7 +99,7 @@
     url = https://github.com/awslabs/aws-c-event-stream
 [submodule "aws-c-common"]
     path = contrib/aws-c-common
-    url = https://github.com/ClickHouse/aws-c-common
+    url = https://github.com/awslabs/aws-c-common.git
 [submodule "aws-checksums"]
     path = contrib/aws-checksums
     url = https://github.com/awslabs/aws-checksums

View File

@@ -6,8 +6,6 @@
 ### <a id="241"></a> ClickHouse release 24.1, 2024-01-30
-### ClickHouse release master (b4a5b6060ea) FIXME as compared to v23.12.1.1368-stable (a2faa65b080)
 #### Backward Incompatible Change
 * The setting `print_pretty_type_names` is turned on by default. You can turn it off to keep the old behavior or `SET compatibility = '23.12'`. [#57726](https://github.com/ClickHouse/ClickHouse/pull/57726) ([Alexey Milovidov](https://github.com/alexey-milovidov)).
 * The MergeTree setting `clean_deleted_rows` is deprecated, it has no effect anymore. The `CLEANUP` keyword for `OPTIMIZE` is not allowed by default (unless `allow_experimental_replacing_merge_with_cleanup` is enabled). [#58316](https://github.com/ClickHouse/ClickHouse/pull/58316) ([Alexander Tokmakov](https://github.com/tavplubix)).
@@ -24,7 +22,6 @@
 * Add `quantileDD` aggregate function as well as the corresponding `quantilesDD` and `medianDD`. It is based on the DDSketch https://www.vldb.org/pvldb/vol12/p2195-masson.pdf. ### Documentation entry for user-facing changes. [#56342](https://github.com/ClickHouse/ClickHouse/pull/56342) ([Srikanth Chekuri](https://github.com/srikanthccv)).
 * Allow to configure any kind of object storage with any kind of metadata type. [#58357](https://github.com/ClickHouse/ClickHouse/pull/58357) ([Kseniia Sumarokova](https://github.com/kssenii)).
 * Added `null_status_on_timeout_only_active` and `throw_only_active` modes for `distributed_ddl_output_mode` that allow to avoid waiting for inactive replicas. [#58350](https://github.com/ClickHouse/ClickHouse/pull/58350) ([Alexander Tokmakov](https://github.com/tavplubix)).
-* Allow partitions from tables with different partition expressions to be attached when the destination table partition expression doesn't re-partition/split the part. [#39507](https://github.com/ClickHouse/ClickHouse/pull/39507) ([Arthur Passos](https://github.com/arthurpassos)).
 * Add function `arrayShingles` to compute subarrays, e.g. `arrayShingles([1, 2, 3, 4, 5], 3)` returns `[[1,2,3],[2,3,4],[3,4,5]]`. [#58396](https://github.com/ClickHouse/ClickHouse/pull/58396) ([Zheng Miao](https://github.com/zenmiao7)).
 * Added functions `punycodeEncode`, `punycodeDecode`, `idnaEncode` and `idnaDecode` which are useful for translating international domain names to an ASCII representation according to the IDNA standard. [#58454](https://github.com/ClickHouse/ClickHouse/pull/58454) ([Robert Schulze](https://github.com/rschu1ze)).
 * Added string similarity functions `dramerauLevenshteinDistance`, `jaroSimilarity` and `jaroWinklerSimilarity`. [#58531](https://github.com/ClickHouse/ClickHouse/pull/58531) ([Robert Schulze](https://github.com/rschu1ze)).

View File

@@ -254,10 +254,17 @@ endif()
 include(cmake/cpu_features.cmake)

-# Asynchronous unwind tables are needed for Query Profiler.
-# They are already by default on some platforms but possibly not on all platforms.
-# Enable it explicitly.
-set (COMPILER_FLAGS "${COMPILER_FLAGS} -fasynchronous-unwind-tables")
+# Query Profiler doesn't work on MacOS for several reasons
+# - PHDR cache is not available
+# - We use native functionality to get stacktraces which is not async signal safe
+#   and thus we don't need to generate asynchronous unwind tables
+if (NOT OS_DARWIN)
+    # Asynchronous unwind tables are needed for Query Profiler.
+    # They are already by default on some platforms but possibly not on all platforms.
+    # Enable it explicitly.
+    set (COMPILER_FLAGS "${COMPILER_FLAGS} -fasynchronous-unwind-tables")
+endif()

 # Reproducible builds.
 if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG")
@@ -348,7 +355,7 @@ if (COMPILER_CLANG)
     set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -fdiagnostics-absolute-paths")
     set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -fdiagnostics-absolute-paths")

-    if (NOT ENABLE_TESTS AND NOT SANITIZE AND OS_LINUX)
+    if (NOT ENABLE_TESTS AND NOT SANITIZE AND NOT SANITIZE_COVERAGE AND OS_LINUX)
         # https://clang.llvm.org/docs/ThinLTO.html
         # Applies to clang and linux only.
         # Disabled when building with tests or sanitizers.
@@ -546,7 +553,7 @@ if (ENABLE_RUST)
     endif()
 endif()

-if (CMAKE_BUILD_TYPE_UC STREQUAL "RELWITHDEBINFO" AND NOT SANITIZE AND OS_LINUX AND (ARCH_AMD64 OR ARCH_AARCH64))
+if (CMAKE_BUILD_TYPE_UC STREQUAL "RELWITHDEBINFO" AND NOT SANITIZE AND NOT SANITIZE_COVERAGE AND OS_LINUX AND (ARCH_AMD64 OR ARCH_AARCH64))
     set(CHECK_LARGE_OBJECT_SIZES_DEFAULT ON)
 else ()
     set(CHECK_LARGE_OBJECT_SIZES_DEFAULT OFF)

View File

@@ -37,7 +37,7 @@ Keep an eye out for upcoming meetups around the world. Somewhere else you want u
 ## Recent Recordings
 * **Recent Meetup Videos**: [Meetup Playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3iNDUzpY1S3L_iV4nARda_U) Whenever possible recordings of the ClickHouse Community Meetups are edited and presented as individual talks. Current featuring "Modern SQL in 2023", "Fast, Concurrent, and Consistent Asynchronous INSERTS in ClickHouse", and "Full-Text Indices: Design and Experiments"
-* **Recording available**: [**v23.10 Release Webinar**](https://www.youtube.com/watch?v=PGQS6uPb970) All the features of 23.10, one convenient video! Watch it now!
+* **Recording available**: [**v24.1 Release Webinar**](https://www.youtube.com/watch?v=pBF9g0wGAGs) All the features of 24.1, one convenient video! Watch it now!
 * **All release webinar recordings**: [YouTube playlist](https://www.youtube.com/playlist?list=PL0Z2YDlm0b3jAlSy1JxyP8zluvXaN3nxU)

View File

@@ -17,6 +17,7 @@ set (SRCS
     getMemoryAmount.cpp
     getPageSize.cpp
     getThreadId.cpp
+    int8_to_string.cpp
     JSON.cpp
     mremap.cpp
     phdr_cache.cpp

View File

@@ -1,6 +1,7 @@
 #pragma once

 #include <base/types.h>
+#include <base/extended_types.h>

 namespace wide
 {
@@ -44,3 +45,8 @@ concept is_over_big_int =
     || std::is_same_v<T, Decimal128>
     || std::is_same_v<T, Decimal256>;
 }
+
+template <> struct is_signed<DB::Decimal32> { static constexpr bool value = true; };
+template <> struct is_signed<DB::Decimal64> { static constexpr bool value = true; };
+template <> struct is_signed<DB::Decimal128> { static constexpr bool value = true; };
+template <> struct is_signed<DB::Decimal256> { static constexpr bool value = true; };
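For orientation only, a minimal sketch (not part of this commit) of what such trait specializations buy: generic numeric code can ask `is_signed<T>::value` instead of special-casing every Decimal wrapper. The `Decimal64` struct and the `is_signed` primary template below are illustrative stand-ins for the definitions in this header and in the included `base/extended_types.h`.

```cpp
#include <type_traits>

// Stand-in for the real Decimal wrapper; illustrative only.
struct Decimal64 { long long value; };

// Stand-in for the project-local is_signed trait plus its Decimal specialization.
template <typename T> struct is_signed : std::is_signed<T> {};
template <> struct is_signed<Decimal64> { static constexpr bool value = true; };

// Generic code can now rely on the trait instead of listing Decimal types explicitly.
template <typename T>
constexpr bool can_hold_negative_values() { return is_signed<T>::value; }

static_assert(can_hold_negative_values<int>());
static_assert(can_hold_negative_values<Decimal64>()); // true only because of the specialization
static_assert(!can_hold_negative_values<unsigned>());

int main() {}
```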

View File

@ -1,5 +1,6 @@
#pragma once #pragma once
#include <bit>
#include <cstring> #include <cstring>
#include <algorithm> #include <algorithm>
#include <type_traits> #include <type_traits>

View File

@@ -1,4 +1,5 @@
 #include "coverage.h"
+#include <sys/mman.h>

 #pragma GCC diagnostic ignored "-Wreserved-identifier"
@@ -52,11 +53,21 @@ namespace
 uint32_t * guards_start = nullptr;
 uint32_t * guards_end = nullptr;

-uintptr_t * coverage_array = nullptr;
+uintptr_t * current_coverage_array = nullptr;
+uintptr_t * cumulative_coverage_array = nullptr;
 size_t coverage_array_size = 0;

 uintptr_t * all_addresses_array = nullptr;
 size_t all_addresses_array_size = 0;
+
+uintptr_t * allocate(size_t size)
+{
+    /// Note: mmap return zero-initialized memory, and we count on that.
+    void * map = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+    if (MAP_FAILED == map)
+        return nullptr;
+    return static_cast<uintptr_t*>(map);
+}
 }

 extern "C"
@@ -79,7 +90,8 @@ void __sanitizer_cov_trace_pc_guard_init(uint32_t * start, uint32_t * stop)
     coverage_array_size = stop - start;

     /// Note: we will leak this.
-    coverage_array = static_cast<uintptr_t*>(malloc(sizeof(uintptr_t) * coverage_array_size));
+    current_coverage_array = allocate(sizeof(uintptr_t) * coverage_array_size);
+    cumulative_coverage_array = allocate(sizeof(uintptr_t) * coverage_array_size);

     resetCoverage();
 }
@@ -92,8 +104,8 @@ void __sanitizer_cov_pcs_init(const uintptr_t * pcs_begin, const uintptr_t * pcs
         return;
     pc_table_initialized = true;

-    all_addresses_array = static_cast<uintptr_t*>(malloc(sizeof(uintptr_t) * coverage_array_size));
     all_addresses_array_size = pcs_end - pcs_begin;
+    all_addresses_array = allocate(sizeof(uintptr_t) * all_addresses_array_size);

     /// They are not a real pointers, but also contain a flag in the most significant bit,
     /// in which we are not interested for now. Reset it.
@@ -115,17 +127,24 @@ void __sanitizer_cov_trace_pc_guard(uint32_t * guard)
     /// The values of `*guard` are as you set them in
     /// __sanitizer_cov_trace_pc_guard_init and so you can make them consecutive
     /// and use them to dereference an array or a bit vector.
-    void * pc = __builtin_return_address(0);
+    intptr_t pc = reinterpret_cast<uintptr_t>(__builtin_return_address(0));

-    coverage_array[guard - guards_start] = reinterpret_cast<uintptr_t>(pc);
+    current_coverage_array[guard - guards_start] = pc;
+    cumulative_coverage_array[guard - guards_start] = pc;
 }
 }

 __attribute__((no_sanitize("coverage")))
-std::span<const uintptr_t> getCoverage()
+std::span<const uintptr_t> getCurrentCoverage()
 {
-    return {coverage_array, coverage_array_size};
+    return {current_coverage_array, coverage_array_size};
+}
+
+__attribute__((no_sanitize("coverage")))
+std::span<const uintptr_t> getCumulativeCoverage()
+{
+    return {cumulative_coverage_array, coverage_array_size};
 }

 __attribute__((no_sanitize("coverage")))
@@ -137,7 +156,7 @@ std::span<const uintptr_t> getAllInstrumentedAddresses()
 __attribute__((no_sanitize("coverage")))
 void resetCoverage()
 {
-    memset(coverage_array, 0, coverage_array_size * sizeof(*coverage_array));
+    memset(current_coverage_array, 0, coverage_array_size * sizeof(*current_coverage_array));

     /// The guard defines whether the __sanitizer_cov_trace_pc_guard should be called.
     /// For example, you can unset it after first invocation to prevent excessive work.

View File

@@ -15,7 +15,10 @@ void dumpCoverageReportIfPossible();
 /// Get accumulated unique program addresses of the instrumented parts of the code,
 /// seen so far after program startup or after previous reset.
 /// The returned span will be represented as a sparse map, containing mostly zeros, which you should filter away.
-std::span<const uintptr_t> getCoverage();
+std::span<const uintptr_t> getCurrentCoverage();
+
+/// Similar but not being reset.
+std::span<const uintptr_t> getCumulativeCoverage();

 /// Get all instrumented addresses that could be in the coverage.
 std::span<const uintptr_t> getAllInstrumentedAddresses();
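As the comments above note, the returned spans are sparse and mostly zeros. A hedged, illustrative sketch of how a consumer might filter them; the helper name and the sample data are assumptions, not part of the commit:

```cpp
#include <cstdint>
#include <cstdio>
#include <span>
#include <vector>

// Keep only the addresses that were actually hit (the non-zero entries of a sparse coverage span).
std::vector<uintptr_t> hitAddresses(std::span<const uintptr_t> coverage)
{
    std::vector<uintptr_t> result;
    for (uintptr_t addr : coverage)
        if (addr != 0)
            result.push_back(addr);
    return result;
}

int main()
{
    // Stand-in data; in the real code this would come from getCurrentCoverage()
    // or getCumulativeCoverage() declared above.
    const uintptr_t sample[] = {0, 0x1234, 0, 0, 0x5678};
    auto hits = hitAddresses(sample);
    std::printf("%zu addresses hit\n", hits.size());
}
```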

View File

@@ -1,8 +1,11 @@
-#include <stdexcept>
-#include <fstream>
-
 #include <base/getMemoryAmount.h>
 #include <base/getPageSize.h>

+#include <fstream>
+#include <sstream>
+#include <stdexcept>
+
 #include <unistd.h>
 #include <sys/types.h>
 #include <sys/param.h>
@@ -11,6 +14,80 @@
 #endif

+namespace
+{
+
+std::optional<uint64_t> getCgroupsV2MemoryLimit()
+{
+#if defined(OS_LINUX)
+    const std::filesystem::path default_cgroups_mount = "/sys/fs/cgroup";
+
+    /// This file exists iff the host has cgroups v2 enabled.
+    std::ifstream controllers_file(default_cgroups_mount / "cgroup.controllers");
+    if (!controllers_file.is_open())
+        return {};
+
+    /// Make sure that the memory controller is enabled.
+    /// - cgroup.controllers defines which controllers *can* be enabled.
+    /// - cgroup.subtree_control defines which controllers *are* enabled.
+    /// (see https://docs.kernel.org/admin-guide/cgroup-v2.html)
+    /// Caveat: nested groups may disable controllers. For simplicity, check only the top-level group.
+    /// ReadBufferFromFile subtree_control_file(default_cgroups_mount / "cgroup.subtree_control");
+    /// std::string subtree_control;
+    /// readString(subtree_control, subtree_control_file);
+    /// if (subtree_control.find("memory") == std::string::npos)
+    ///     return {};
+    std::ifstream subtree_control_file(default_cgroups_mount / "cgroup.subtree_control");
+    std::stringstream subtree_control_buf;
+    subtree_control_buf << subtree_control_file.rdbuf();
+    std::string subtree_control = subtree_control_buf.str();
+    if (subtree_control.find("memory") == std::string::npos)
+        return {};
+
+    /// Identify the cgroup the process belongs to
+    /// All PIDs assigned to a cgroup are in /sys/fs/cgroups/{cgroup_name}/cgroup.procs
+    /// A simpler way to get the membership is:
+    std::ifstream cgroup_name_file("/proc/self/cgroup");
+    if (!cgroup_name_file.is_open())
+        return {};
+
+    std::stringstream cgroup_name_buf;
+    cgroup_name_buf << cgroup_name_file.rdbuf();
+    std::string cgroup_name = cgroup_name_buf.str();
+    if (!cgroup_name.empty() && cgroup_name.back() == '\n')
+        cgroup_name.pop_back(); /// remove trailing newline, if any
+    /// With cgroups v2, there will be a *single* line with prefix "0::/"
+    const std::string v2_prefix = "0::/";
+    if (!cgroup_name.starts_with(v2_prefix))
+        return {};
+    cgroup_name = cgroup_name.substr(v2_prefix.length());
+
+    std::filesystem::path current_cgroup = cgroup_name.empty() ? default_cgroups_mount : (default_cgroups_mount / cgroup_name);
+
+    /// Open the bottom-most nested memory limit setting file. If there is no such file at the current
+    /// level, try again at the parent level as memory settings are inherited.
+    while (current_cgroup != default_cgroups_mount.parent_path())
+    {
+        std::ifstream setting_file(current_cgroup / "memory.max");
+        if (setting_file.is_open())
+        {
+            uint64_t value;
+            if (setting_file >> value)
+                return {value};
+            else
+                return {}; /// e.g. the cgroups default "max"
+        }
+        current_cgroup = current_cgroup.parent_path();
+    }
+
+    return {};
+#else
+    return {};
+#endif
+}
+
+}
+
 /** Returns the size of physical memory (RAM) in bytes.
   * Returns 0 on unsupported platform
   */
@@ -26,34 +103,27 @@ uint64_t getMemoryAmountOrZero()
     uint64_t memory_amount = num_pages * page_size;

-#if defined(OS_LINUX)
-    // Try to lookup at the Cgroup limit
-
-    // CGroups v2
-    std::ifstream cgroupv2_limit("/sys/fs/cgroup/memory.max");
-    if (cgroupv2_limit.is_open())
-    {
-        uint64_t memory_limit = 0;
-        cgroupv2_limit >> memory_limit;
-        if (memory_limit > 0 && memory_limit < memory_amount)
-            memory_amount = memory_limit;
-    }
+    /// Respect the memory limit set by cgroups v2.
+    auto limit_v2 = getCgroupsV2MemoryLimit();
+    if (limit_v2.has_value() && *limit_v2 < memory_amount)
+        memory_amount = *limit_v2;
     else
     {
-        // CGroups v1
-        std::ifstream cgroup_limit("/sys/fs/cgroup/memory/memory.limit_in_bytes");
-        if (cgroup_limit.is_open())
+        /// Cgroups v1 were replaced by v2 in 2015. The only reason we keep supporting v1 is that the transition to v2
+        /// has been slow. Caveat : Hierarchical groups as in v2 are not supported for v1, the location of the memory
+        /// limit (virtual) file is hard-coded.
+        /// TODO: check at the end of 2024 if we can get rid of v1.
+        std::ifstream limit_file_v1("/sys/fs/cgroup/memory/memory.limit_in_bytes");
+        if (limit_file_v1.is_open())
         {
-            uint64_t memory_limit = 0; // in case of read error
-            cgroup_limit >> memory_limit;
-            if (memory_limit > 0 && memory_limit < memory_amount)
-                memory_amount = memory_limit;
+            uint64_t limit_v1;
+            if (limit_file_v1 >> limit_v1)
+                if (limit_v1 < memory_amount)
+                    memory_amount = limit_v1;
         }
     }
-#endif

     return memory_amount;
 }

View File

@@ -0,0 +1,9 @@
+#include <base/int8_to_string.h>
+
+namespace std
+{
+std::string to_string(Int8 v) /// NOLINT (cert-dcl58-cpp)
+{
+    return to_string(int8_t{v});
+}
+}

View File

@@ -0,0 +1,17 @@
+#pragma once
+
+#include <base/defines.h>
+#include <base/types.h>
+
+#include <fmt/format.h>
+
+template <>
+struct fmt::formatter<Int8> : fmt::formatter<int8_t>
+{
+};
+
+namespace std
+{
+std::string to_string(Int8 v); /// NOLINT (cert-dcl58-cpp)
+}

View File

@@ -3,14 +3,29 @@
 #include <cstdint>
 #include <string>

-/// This is needed for more strict aliasing. https://godbolt.org/z/xpJBSb https://stackoverflow.com/a/57453713
+/// Using char8_t more strict aliasing (https://stackoverflow.com/a/57453713)
 using UInt8 = char8_t;

+/// Same for using signed _BitInt(8) (there isn't a signed char8_t, which would be more convenient)
+/// See https://godbolt.org/z/fafnWEnnf
+#pragma clang diagnostic push
+#pragma clang diagnostic ignored "-Wbit-int-extension"
+using Int8 = signed _BitInt(8);
+#pragma clang diagnostic pop
+
+namespace std
+{
+    template <>
+    struct hash<Int8> /// NOLINT (cert-dcl58-cpp)
+    {
+        size_t operator()(const Int8 x) const { return std::hash<int8_t>()(int8_t{x}); }
+    };
+}
+
 using UInt16 = uint16_t;
 using UInt32 = uint32_t;
 using UInt64 = uint64_t;

-using Int8 = int8_t;
 using Int16 = int16_t;
 using Int32 = int32_t;
 using Int64 = int64_t;
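A hedged illustration (not part of the diff) of why the `std::hash` specialization above and the `fmt::formatter`/`std::to_string` helpers in `int8_to_string.h` become necessary: `signed _BitInt(8)` is a distinct type, so it no longer picks up the standard-library machinery that the old `int8_t` alias got for free. The `main` body below is an assumption used only to demonstrate the point; it requires Clang.

```cpp
#include <cstdint>
#include <type_traits>

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wbit-int-extension"
using Int8 = signed _BitInt(8);   // the new definition, as in the diff above
#pragma clang diagnostic pop

// Unlike the old `using Int8 = int8_t;`, the new Int8 is not an alias of any
// standard integer or character type. That is the point (stricter aliasing),
// but it is also why extra specializations (std::hash, fmt::formatter) are needed.
static_assert(!std::is_same_v<Int8, int8_t>);
static_assert(!std::is_same_v<Int8, signed char>);
static_assert(sizeof(Int8) == 1);

int main()
{
    Int8 x = 42;
    return static_cast<int>(x) - 42;   // values convert cleanly; only the type identity changed
}
```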

View File

@@ -6,6 +6,7 @@
 #include "throwError.h"

+#include <bit>
 #include <cmath>
 #include <cfloat>
 #include <cassert>

View File

@@ -166,12 +166,6 @@ set (SRCS
 )

 add_library (_poco_foundation ${SRCS})
-
-target_link_libraries (_poco_foundation
-  PUBLIC
-    boost::headers_only
-    boost::system
-)

 add_library (Poco::Foundation ALIAS _poco_foundation)

 # TODO: remove these warning exclusions

View File

@@ -23,8 +23,6 @@
 #include <map>
 #include <vector>

-#include <boost/smart_ptr/intrusive_ptr.hpp>
-
 #include "Poco/Channel.h"
 #include "Poco/Format.h"
 #include "Poco/Foundation.h"
@@ -37,7 +35,7 @@ namespace Poco
 class Exception;
 class Logger;
-using LoggerPtr = boost::intrusive_ptr<Logger>;
+using LoggerPtr = std::shared_ptr<Logger>;

 class Foundation_API Logger : public Channel
     /// Logger is a special Channel that acts as the main
@@ -953,9 +951,6 @@ private:
     static std::optional<LoggerMapIterator> find(const std::string & name);
     static Logger * findRawPtr(const std::string & name);

-    friend void intrusive_ptr_add_ref(Logger * ptr);
-    friend void intrusive_ptr_release(Logger * ptr);
-
     Logger();
     Logger(const Logger &);
     Logger & operator=(const Logger &);

View File

@@ -53,10 +53,11 @@ protected:
     virtual ~RefCountedObject();
     /// Destroys the RefCountedObject.

+    mutable std::atomic<size_t> _counter;
+
 private:
     RefCountedObject(const RefCountedObject &);
     RefCountedObject & operator=(const RefCountedObject &);
-
-    mutable std::atomic<size_t> _counter;
 };

View File

@@ -302,9 +302,40 @@ void Logger::formatDump(std::string& message, const void* buffer, std::size_t le
 namespace
 {

-inline LoggerPtr makeLoggerPtr(Logger & logger)
+struct LoggerDeleter
 {
-    return LoggerPtr(&logger, false /*add_ref*/);
+    void operator()(Poco::Logger * logger)
+    {
+        std::lock_guard<std::mutex> lock(getLoggerMutex());
+
+        /// If logger infrastructure is destroyed just decrement logger reference count
+        if (!_pLoggerMap)
+        {
+            logger->release();
+            return;
+        }
+
+        auto it = _pLoggerMap->find(logger->name());
+        assert(it != _pLoggerMap->end());
+
+        /** If reference count is 1, this means this shared pointer owns logger
+          * and need destroy it.
+          */
+        size_t reference_count_before_release = logger->release();
+        if (reference_count_before_release == 1)
+        {
+            assert(it->second.owned_by_shared_ptr);
+            _pLoggerMap->erase(it);
+        }
+    }
+};
+
+inline LoggerPtr makeLoggerPtr(Logger & logger, bool owned_by_shared_ptr)
+{
+    if (owned_by_shared_ptr)
+        return LoggerPtr(&logger, LoggerDeleter());
+
+    return LoggerPtr(std::shared_ptr<void>{}, &logger);
 }

 }
@@ -327,15 +358,10 @@ LoggerPtr Logger::getShared(const std::string & name, bool should_be_owned_by_sh
     /** If during `unsafeGet` logger was created, then this shared pointer owns it.
       * If logger was already created, then this shared pointer does not own it.
       */
-    if (inserted)
-    {
-        if (should_be_owned_by_shared_ptr_if_created)
-            it->second.owned_by_shared_ptr = true;
-        else
-            it->second.logger->duplicate();
-    }
+    if (inserted && should_be_owned_by_shared_ptr_if_created)
+        it->second.owned_by_shared_ptr = true;

-    return makeLoggerPtr(*it->second.logger);
+    return makeLoggerPtr(*it->second.logger, it->second.owned_by_shared_ptr);
 }
@@ -343,29 +369,20 @@ std::pair<Logger::LoggerMapIterator, bool> Logger::unsafeGet(const std::string&
 {
     std::optional<Logger::LoggerMapIterator> optional_logger_it = find(name);

-    bool should_recreate_logger = false;
-
     if (optional_logger_it)
     {
         auto & logger_it = *optional_logger_it;
-        std::optional<size_t> reference_count_before;

-        if (get_shared)
+        if (logger_it->second.owned_by_shared_ptr)
         {
-            reference_count_before = logger_it->second.logger->duplicate();
-        }
-        else if (logger_it->second.owned_by_shared_ptr)
-        {
-            reference_count_before = logger_it->second.logger->duplicate();
-            logger_it->second.owned_by_shared_ptr = false;
-        }
+            logger_it->second.logger->duplicate();

-        /// Other thread already decided to delete this logger, but did not yet remove it from map
-        if (reference_count_before && reference_count_before == 0)
-            should_recreate_logger = true;
+            if (!get_shared)
+                logger_it->second.owned_by_shared_ptr = false;
+        }
     }

-    if (!optional_logger_it || should_recreate_logger)
+    if (!optional_logger_it)
     {
         Logger * logger = nullptr;
@@ -379,12 +396,6 @@ std::pair<Logger::LoggerMapIterator, bool> Logger::unsafeGet(const std::string&
             logger = new Logger(name, par.getChannel(), par.getLevel());
         }

-        if (should_recreate_logger)
-        {
-            (*optional_logger_it)->second.logger = logger;
-            return std::make_pair(*optional_logger_it, true);
-        }
-
         return add(logger);
     }
@@ -412,7 +423,7 @@ LoggerPtr Logger::createShared(const std::string & name, Channel * pChannel, int
     auto [it, inserted] = unsafeCreate(name, pChannel, level);
     it->second.owned_by_shared_ptr = true;

-    return makeLoggerPtr(*it->second.logger);
+    return makeLoggerPtr(*it->second.logger, it->second.owned_by_shared_ptr);
 }

 Logger& Logger::root()
@@ -479,43 +490,6 @@ Logger * Logger::findRawPtr(const std::string & name)
 }

-void intrusive_ptr_add_ref(Logger * ptr)
-{
-    ptr->duplicate();
-}
-
-void intrusive_ptr_release(Logger * ptr)
-{
-    size_t reference_count_before = ptr->_counter.fetch_sub(1, std::memory_order_acq_rel);
-    if (reference_count_before != 1)
-        return;
-
-    {
-        std::lock_guard<std::mutex> lock(getLoggerMutex());
-
-        if (_pLoggerMap)
-        {
-            auto it = _pLoggerMap->find(ptr->name());
-
-            /** It is possible that during release other thread created logger and
-              * updated iterator in map.
-              */
-            if (it != _pLoggerMap->end() && ptr == it->second.logger)
-            {
-                /** If reference count is 0, this means this intrusive pointer owns logger
-                  * and need destroy it.
-                  */
-                assert(it->second.owned_by_shared_ptr);
-                _pLoggerMap->erase(it);
-            }
-        }
-    }
-
-    delete ptr;
-}
-
 void Logger::names(std::vector<std::string>& names)
 {
     std::lock_guard<std::mutex> lock(getLoggerMutex());

View File

@ -63,14 +63,14 @@ endif()
option(WITH_COVERAGE "Instrumentation for code coverage with default implementation" OFF) option(WITH_COVERAGE "Instrumentation for code coverage with default implementation" OFF)
if (WITH_COVERAGE) if (WITH_COVERAGE)
message (INFORMATION "Enabled instrumentation for code coverage") message (STATUS "Enabled instrumentation for code coverage")
set(COVERAGE_FLAGS "-fprofile-instr-generate -fcoverage-mapping") set(COVERAGE_FLAGS "-fprofile-instr-generate -fcoverage-mapping")
endif() endif()
option (SANITIZE_COVERAGE "Instrumentation for code coverage with custom callbacks" OFF) option (SANITIZE_COVERAGE "Instrumentation for code coverage with custom callbacks" OFF)
if (SANITIZE_COVERAGE) if (SANITIZE_COVERAGE)
message (INFORMATION "Enabled instrumentation for code coverage") message (STATUS "Enabled instrumentation for code coverage")
# We set this define for whole build to indicate that at least some parts are compiled with coverage. # We set this define for whole build to indicate that at least some parts are compiled with coverage.
# And to expose it in system.build_options. # And to expose it in system.build_options.

2
contrib/NuRaft vendored

@ -1 +1 @@
Subproject commit 1278e32bb0d5dc489f947e002bdf8c71b0ddaa63 Subproject commit 5bb3a0e8257bacd65b099cb1b7239bd6b9a2c477

2
contrib/aws vendored

@ -1 +1 @@
Subproject commit ca02358dcc7ce3ab733dd4cbcc32734eecfa4ee3 Subproject commit 9eb5097a0abfa837722cca7a5114a25837817bf2

2
contrib/aws-c-auth vendored

@ -1 +1 @@
Subproject commit 97133a2b5dbca1ccdf88cd6f44f39d0531d27d12 Subproject commit baeffa791d9d1cf61460662a6d9ac2186aaf05df

2
contrib/aws-c-cal vendored

@ -1 +1 @@
Subproject commit 85dd7664b786a389c6fb1a6f031ab4bb2282133d Subproject commit 9453687ff5493ba94eaccf8851200565c4364c77

@ -1 +1 @@
Subproject commit 45dcb2849c891dba2100b270b4676765c92949ff Subproject commit 80f21b3cac5ac51c6b8a62c7d2a5ef58a75195ee

@ -1 +1 @@
Subproject commit b517b7decd0dac30be2162f5186c250221c53aff Subproject commit 99ec79ee2970f1a045d4ced1501b97ee521f2f85

@ -1 +1 @@
Subproject commit 2f9b60c42f90840ec11822acda3d8cdfa97a773d Subproject commit 08f24e384e5be20bcffa42b49213d24dad7881ae

2
contrib/aws-c-http vendored

@ -1 +1 @@
Subproject commit dd34461987947672444d0bc872c5a733dfdb9711 Subproject commit a082f8a2067e4a31db73f1d4ffd702a8dc0f7089

2
contrib/aws-c-io vendored

@ -1 +1 @@
Subproject commit d58ed4f272b1cb4f89ac9196526ceebe5f2b0d89 Subproject commit 11ce3c750a1dac7b04069fc5bff89e97e91bad4d

2
contrib/aws-c-mqtt vendored

@ -1 +1 @@
Subproject commit 33c3455cec82b16feb940e12006cefd7b3ef4194 Subproject commit 6d36cd3726233cb757468d0ea26f6cd8dad151ec

2
contrib/aws-c-s3 vendored

@ -1 +1 @@
Subproject commit d7bfe602d6925948f1fff95784e3613cca6a3900 Subproject commit de36fee8fe7ab02f10987877ae94a805bf440c1f

@ -1 +1 @@
Subproject commit 208a701fa01e99c7c8cc3dcebc8317da71362972 Subproject commit fd8c0ba2e233997eaaefe82fb818b8b444b956d3

@ -1 +1 @@
Subproject commit ad53be196a25bbefa3700a01187fdce573a7d2d0 Subproject commit 321b805559c8e911be5bddba13fcbd222a3e2d3a

View File

@ -25,6 +25,7 @@ include("${ClickHouse_SOURCE_DIR}/contrib/aws-cmake/AwsFeatureTests.cmake")
include("${ClickHouse_SOURCE_DIR}/contrib/aws-cmake/AwsThreadAffinity.cmake") include("${ClickHouse_SOURCE_DIR}/contrib/aws-cmake/AwsThreadAffinity.cmake")
include("${ClickHouse_SOURCE_DIR}/contrib/aws-cmake/AwsThreadName.cmake") include("${ClickHouse_SOURCE_DIR}/contrib/aws-cmake/AwsThreadName.cmake")
include("${ClickHouse_SOURCE_DIR}/contrib/aws-cmake/AwsSIMD.cmake") include("${ClickHouse_SOURCE_DIR}/contrib/aws-cmake/AwsSIMD.cmake")
include("${ClickHouse_SOURCE_DIR}/contrib/aws-crt-cpp/cmake/AwsGetVersion.cmake")
# Gather sources and options. # Gather sources and options.
@ -35,6 +36,8 @@ set(AWS_PUBLIC_COMPILE_DEFS)
set(AWS_PRIVATE_COMPILE_DEFS) set(AWS_PRIVATE_COMPILE_DEFS)
set(AWS_PRIVATE_LIBS) set(AWS_PRIVATE_LIBS)
list(APPEND AWS_PRIVATE_COMPILE_DEFS "-DINTEL_NO_ITTNOTIFY_API")
if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG") if (CMAKE_BUILD_TYPE_UC STREQUAL "DEBUG")
list(APPEND AWS_PRIVATE_COMPILE_DEFS "-DDEBUG_BUILD") list(APPEND AWS_PRIVATE_COMPILE_DEFS "-DDEBUG_BUILD")
endif() endif()
@ -85,14 +88,20 @@ file(GLOB AWS_SDK_CORE_SRC
"${AWS_SDK_CORE_DIR}/source/external/cjson/*.cpp" "${AWS_SDK_CORE_DIR}/source/external/cjson/*.cpp"
"${AWS_SDK_CORE_DIR}/source/external/tinyxml2/*.cpp" "${AWS_SDK_CORE_DIR}/source/external/tinyxml2/*.cpp"
"${AWS_SDK_CORE_DIR}/source/http/*.cpp" "${AWS_SDK_CORE_DIR}/source/http/*.cpp"
"${AWS_SDK_CORE_DIR}/source/http/crt/*.cpp"
"${AWS_SDK_CORE_DIR}/source/http/standard/*.cpp" "${AWS_SDK_CORE_DIR}/source/http/standard/*.cpp"
"${AWS_SDK_CORE_DIR}/source/internal/*.cpp" "${AWS_SDK_CORE_DIR}/source/internal/*.cpp"
"${AWS_SDK_CORE_DIR}/source/monitoring/*.cpp" "${AWS_SDK_CORE_DIR}/source/monitoring/*.cpp"
"${AWS_SDK_CORE_DIR}/source/net/*.cpp"
"${AWS_SDK_CORE_DIR}/source/net/linux-shared/*.cpp"
"${AWS_SDK_CORE_DIR}/source/platform/linux-shared/*.cpp"
"${AWS_SDK_CORE_DIR}/source/smithy/tracing/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/*.cpp" "${AWS_SDK_CORE_DIR}/source/utils/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/base64/*.cpp" "${AWS_SDK_CORE_DIR}/source/utils/base64/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/component-registry/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/crypto/*.cpp" "${AWS_SDK_CORE_DIR}/source/utils/crypto/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/crypto/openssl/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/crypto/factory/*.cpp" "${AWS_SDK_CORE_DIR}/source/utils/crypto/factory/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/crypto/openssl/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/event/*.cpp" "${AWS_SDK_CORE_DIR}/source/utils/event/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/json/*.cpp" "${AWS_SDK_CORE_DIR}/source/utils/json/*.cpp"
"${AWS_SDK_CORE_DIR}/source/utils/logging/*.cpp" "${AWS_SDK_CORE_DIR}/source/utils/logging/*.cpp"
@ -115,9 +124,8 @@ OPTION(USE_AWS_MEMORY_MANAGEMENT "Aws memory management" OFF)
configure_file("${AWS_SDK_CORE_DIR}/include/aws/core/SDKConfig.h.in" configure_file("${AWS_SDK_CORE_DIR}/include/aws/core/SDKConfig.h.in"
"${CMAKE_CURRENT_BINARY_DIR}/include/aws/core/SDKConfig.h" @ONLY) "${CMAKE_CURRENT_BINARY_DIR}/include/aws/core/SDKConfig.h" @ONLY)
list(APPEND AWS_PUBLIC_COMPILE_DEFS "-DAWS_SDK_VERSION_MAJOR=1") aws_get_version(AWS_CRT_CPP_VERSION_MAJOR AWS_CRT_CPP_VERSION_MINOR AWS_CRT_CPP_VERSION_PATCH FULL_VERSION GIT_HASH)
list(APPEND AWS_PUBLIC_COMPILE_DEFS "-DAWS_SDK_VERSION_MINOR=10") configure_file("${AWS_CRT_DIR}/include/aws/crt/Config.h.in" "${AWS_CRT_DIR}/include/aws/crt/Config.h" @ONLY)
list(APPEND AWS_PUBLIC_COMPILE_DEFS "-DAWS_SDK_VERSION_PATCH=36")
list(APPEND AWS_SOURCES ${AWS_SDK_CORE_SRC} ${AWS_SDK_CORE_NET_SRC} ${AWS_SDK_CORE_PLATFORM_SRC}) list(APPEND AWS_SOURCES ${AWS_SDK_CORE_SRC} ${AWS_SDK_CORE_NET_SRC} ${AWS_SDK_CORE_PLATFORM_SRC})
@ -176,6 +184,7 @@ file(GLOB AWS_COMMON_SRC
"${AWS_COMMON_DIR}/source/*.c" "${AWS_COMMON_DIR}/source/*.c"
"${AWS_COMMON_DIR}/source/external/*.c" "${AWS_COMMON_DIR}/source/external/*.c"
"${AWS_COMMON_DIR}/source/posix/*.c" "${AWS_COMMON_DIR}/source/posix/*.c"
"${AWS_COMMON_DIR}/source/linux/*.c"
) )
file(GLOB AWS_COMMON_ARCH_SRC file(GLOB AWS_COMMON_ARCH_SRC

2
contrib/aws-crt-cpp vendored

@ -1 +1 @@
Subproject commit 8a301b7e842f1daed478090c869207300972379f Subproject commit f532d6abc0d2b0d8b5d6fe9e7c51eaedbe4afbd0

2
contrib/aws-s2n-tls vendored

@ -1 +1 @@
Subproject commit 71f4794b7580cf780eb4aca77d69eded5d3c7bb4 Subproject commit 9a1e75454023e952b366ce1eab9c54007250119f

2
contrib/curl vendored

@ -1 +1 @@
Subproject commit 7161cb17c01dcff1dc5bf89a18437d9d729f1ecd Subproject commit 5ce164e0e9290c96eb7d502173426c0a135ec008

2
contrib/libssh vendored

@ -1 +1 @@
Subproject commit 2c76332ef56d90f55965ab24da6b6dbcbef29c4c Subproject commit ed4011b91873836713576475a98cd132cd834539

View File

@ -8,24 +8,12 @@ endif()
set(LIB_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libssh") set(LIB_SOURCE_DIR "${ClickHouse_SOURCE_DIR}/contrib/libssh")
set(LIB_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/libssh") set(LIB_BINARY_DIR "${ClickHouse_BINARY_DIR}/contrib/libssh")
project(libssh VERSION 0.9.7 LANGUAGES C) # Set CMake variables which are used in libssh_version.h.cmake
project(libssh VERSION 0.9.8 LANGUAGES C)
# global needed variable set(LIBRARY_VERSION "4.8.8")
set(APPLICATION_NAME ${PROJECT_NAME})
# SOVERSION scheme: CURRENT.AGE.REVISION
# If there was an incompatible interface change:
# Increment CURRENT. Set AGE and REVISION to 0
# If there was a compatible interface change:
# Increment AGE. Set REVISION to 0
# If the source code was changed, but there were no interface changes:
# Increment REVISION.
set(LIBRARY_VERSION "4.8.7")
set(LIBRARY_SOVERSION "4") set(LIBRARY_SOVERSION "4")
# Copy library files to a lib sub-directory
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY "${LIB_BINARY_DIR}/lib")
set(CMAKE_THREAD_PREFER_PTHREADS ON) set(CMAKE_THREAD_PREFER_PTHREADS ON)
set(THREADS_PREFER_PTHREAD_FLAG ON) set(THREADS_PREFER_PTHREAD_FLAG ON)
@ -33,7 +21,87 @@ set(WITH_ZLIB OFF)
set(WITH_SYMBOL_VERSIONING OFF) set(WITH_SYMBOL_VERSIONING OFF)
set(WITH_SERVER ON) set(WITH_SERVER ON)
include(IncludeSources.cmake) set(libssh_SRCS
${LIB_SOURCE_DIR}/src/agent.c
${LIB_SOURCE_DIR}/src/auth.c
${LIB_SOURCE_DIR}/src/base64.c
${LIB_SOURCE_DIR}/src/bignum.c
${LIB_SOURCE_DIR}/src/buffer.c
${LIB_SOURCE_DIR}/src/callbacks.c
${LIB_SOURCE_DIR}/src/channels.c
${LIB_SOURCE_DIR}/src/client.c
${LIB_SOURCE_DIR}/src/config.c
${LIB_SOURCE_DIR}/src/connect.c
${LIB_SOURCE_DIR}/src/connector.c
${LIB_SOURCE_DIR}/src/curve25519.c
${LIB_SOURCE_DIR}/src/dh.c
${LIB_SOURCE_DIR}/src/ecdh.c
${LIB_SOURCE_DIR}/src/error.c
${LIB_SOURCE_DIR}/src/getpass.c
${LIB_SOURCE_DIR}/src/init.c
${LIB_SOURCE_DIR}/src/kdf.c
${LIB_SOURCE_DIR}/src/kex.c
${LIB_SOURCE_DIR}/src/known_hosts.c
${LIB_SOURCE_DIR}/src/knownhosts.c
${LIB_SOURCE_DIR}/src/legacy.c
${LIB_SOURCE_DIR}/src/log.c
${LIB_SOURCE_DIR}/src/match.c
${LIB_SOURCE_DIR}/src/messages.c
${LIB_SOURCE_DIR}/src/misc.c
${LIB_SOURCE_DIR}/src/options.c
${LIB_SOURCE_DIR}/src/packet.c
${LIB_SOURCE_DIR}/src/packet_cb.c
${LIB_SOURCE_DIR}/src/packet_crypt.c
${LIB_SOURCE_DIR}/src/pcap.c
${LIB_SOURCE_DIR}/src/pki.c
${LIB_SOURCE_DIR}/src/pki_container_openssh.c
${LIB_SOURCE_DIR}/src/poll.c
${LIB_SOURCE_DIR}/src/session.c
${LIB_SOURCE_DIR}/src/scp.c
${LIB_SOURCE_DIR}/src/socket.c
${LIB_SOURCE_DIR}/src/string.c
${LIB_SOURCE_DIR}/src/threads.c
${LIB_SOURCE_DIR}/src/wrapper.c
${LIB_SOURCE_DIR}/src/external/bcrypt_pbkdf.c
${LIB_SOURCE_DIR}/src/external/blowfish.c
${LIB_SOURCE_DIR}/src/external/chacha.c
${LIB_SOURCE_DIR}/src/external/poly1305.c
${LIB_SOURCE_DIR}/src/chachapoly.c
${LIB_SOURCE_DIR}/src/config_parser.c
${LIB_SOURCE_DIR}/src/token.c
${LIB_SOURCE_DIR}/src/pki_ed25519_common.c
${LIB_SOURCE_DIR}/src/threads/noop.c
${LIB_SOURCE_DIR}/src/threads/pthread.c
# LIBCRYPT specific
${libssh_SRCS}
${LIB_SOURCE_DIR}/src/threads/libcrypto.c
${LIB_SOURCE_DIR}/src/pki_crypto.c
${LIB_SOURCE_DIR}/src/ecdh_crypto.c
${LIB_SOURCE_DIR}/src/libcrypto.c
${LIB_SOURCE_DIR}/src/dh_crypto.c
${LIB_SOURCE_DIR}/src/options.c
${LIB_SOURCE_DIR}/src/server.c
${LIB_SOURCE_DIR}/src/bind.c
${LIB_SOURCE_DIR}/src/bind_config.c
)
if (NOT (ENABLE_OPENSSL OR ENABLE_OPENSSL_DYNAMIC))
add_compile_definitions(USE_BORINGSSL=1)
endif()
configure_file(${LIB_SOURCE_DIR}/include/libssh/libssh_version.h.cmake ${LIB_BINARY_DIR}/include/libssh/libssh_version.h @ONLY)
add_library(_ssh STATIC ${libssh_SRCS})
add_library(ch_contrib::ssh ALIAS _ssh)
target_link_libraries(_ssh PRIVATE OpenSSL::Crypto)
target_include_directories(_ssh PUBLIC "${LIB_SOURCE_DIR}/include" "${LIB_BINARY_DIR}/include")
# These headers need to be generated using the native build system on each platform.
if (OS_LINUX) if (OS_LINUX)
if (ARCH_AMD64) if (ARCH_AMD64)
if (USE_MUSL) if (USE_MUSL)
@ -63,7 +131,3 @@ elseif (OS_FREEBSD)
else () else ()
message(FATAL_ERROR "Platform is not supported") message(FATAL_ERROR "Platform is not supported")
endif() endif()
configure_file(${LIB_SOURCE_DIR}/include/libssh/libssh_version.h.cmake
${LIB_BINARY_DIR}/include/libssh/libssh_version.h
@ONLY)

View File

@ -1,98 +0,0 @@
set(LIBSSH_LINK_LIBRARIES
${LIBSSH_LINK_LIBRARIES}
OpenSSL::Crypto
)
set(libssh_SRCS
${LIB_SOURCE_DIR}/src/agent.c
${LIB_SOURCE_DIR}/src/auth.c
${LIB_SOURCE_DIR}/src/base64.c
${LIB_SOURCE_DIR}/src/bignum.c
${LIB_SOURCE_DIR}/src/buffer.c
${LIB_SOURCE_DIR}/src/callbacks.c
${LIB_SOURCE_DIR}/src/channels.c
${LIB_SOURCE_DIR}/src/client.c
${LIB_SOURCE_DIR}/src/config.c
${LIB_SOURCE_DIR}/src/connect.c
${LIB_SOURCE_DIR}/src/connector.c
${LIB_SOURCE_DIR}/src/curve25519.c
${LIB_SOURCE_DIR}/src/dh.c
${LIB_SOURCE_DIR}/src/ecdh.c
${LIB_SOURCE_DIR}/src/error.c
${LIB_SOURCE_DIR}/src/getpass.c
${LIB_SOURCE_DIR}/src/init.c
${LIB_SOURCE_DIR}/src/kdf.c
${LIB_SOURCE_DIR}/src/kex.c
${LIB_SOURCE_DIR}/src/known_hosts.c
${LIB_SOURCE_DIR}/src/knownhosts.c
${LIB_SOURCE_DIR}/src/legacy.c
${LIB_SOURCE_DIR}/src/log.c
${LIB_SOURCE_DIR}/src/match.c
${LIB_SOURCE_DIR}/src/messages.c
${LIB_SOURCE_DIR}/src/misc.c
${LIB_SOURCE_DIR}/src/options.c
${LIB_SOURCE_DIR}/src/packet.c
${LIB_SOURCE_DIR}/src/packet_cb.c
${LIB_SOURCE_DIR}/src/packet_crypt.c
${LIB_SOURCE_DIR}/src/pcap.c
${LIB_SOURCE_DIR}/src/pki.c
${LIB_SOURCE_DIR}/src/pki_container_openssh.c
${LIB_SOURCE_DIR}/src/poll.c
${LIB_SOURCE_DIR}/src/session.c
${LIB_SOURCE_DIR}/src/scp.c
${LIB_SOURCE_DIR}/src/socket.c
${LIB_SOURCE_DIR}/src/string.c
${LIB_SOURCE_DIR}/src/threads.c
${LIB_SOURCE_DIR}/src/wrapper.c
${LIB_SOURCE_DIR}/src/external/bcrypt_pbkdf.c
${LIB_SOURCE_DIR}/src/external/blowfish.c
${LIB_SOURCE_DIR}/src/external/chacha.c
${LIB_SOURCE_DIR}/src/external/poly1305.c
${LIB_SOURCE_DIR}/src/chachapoly.c
${LIB_SOURCE_DIR}/src/config_parser.c
${LIB_SOURCE_DIR}/src/token.c
${LIB_SOURCE_DIR}/src/pki_ed25519_common.c
)
set(libssh_SRCS
${libssh_SRCS}
${LIB_SOURCE_DIR}/src/threads/noop.c
${LIB_SOURCE_DIR}/src/threads/pthread.c
)
# LIBCRYPT specific
set(libssh_SRCS
${libssh_SRCS}
${LIB_SOURCE_DIR}/src/threads/libcrypto.c
${LIB_SOURCE_DIR}/src/pki_crypto.c
${LIB_SOURCE_DIR}/src/ecdh_crypto.c
${LIB_SOURCE_DIR}/src/libcrypto.c
${LIB_SOURCE_DIR}/src/dh_crypto.c
)
if (NOT (ENABLE_OPENSSL OR ENABLE_OPENSSL_DYNAMIC))
add_compile_definitions(USE_BORINGSSL=1)
endif()
set(libssh_SRCS
${libssh_SRCS}
${LIB_SOURCE_DIR}/src/options.c
${LIB_SOURCE_DIR}/src/server.c
${LIB_SOURCE_DIR}/src/bind.c
${LIB_SOURCE_DIR}/src/bind_config.c
)
add_library(_ssh STATIC ${libssh_SRCS})
target_include_directories(_ssh PRIVATE ${LIB_BINARY_DIR})
target_include_directories(_ssh PUBLIC "${LIB_SOURCE_DIR}/include" "${LIB_BINARY_DIR}/include")
target_link_libraries(_ssh
PRIVATE ${LIBSSH_LINK_LIBRARIES})
add_library(ch_contrib::ssh ALIAS _ssh)
target_compile_options(_ssh
PRIVATE
${DEFAULT_C_COMPILE_FLAGS}
-D_GNU_SOURCE)

View File

@ -1,6 +1,10 @@
#include <libunwind.h> #include <libunwind.h>
/// On macOS this function is replaced with a dynamic symbol
/// from the system library.
#if !defined(OS_DARWIN)
int backtrace(void ** buffer, int size) int backtrace(void ** buffer, int size)
{ {
return unw_backtrace(buffer, size); return unw_backtrace(buffer, size);
} }
#endif
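
The wrapper above keeps the usual backtrace() contract: fill a buffer with return addresses and return the number of frames captured. A minimal usage sketch, assuming this wrapper (or the system implementation on macOS) is available at link time:

    #include <cstdio>

    /// Same signature as the libunwind-backed wrapper above and as the system backtrace().
    extern "C" int backtrace(void ** buffer, int size);

    int main()
    {
        void * frames[64];
        int count = backtrace(frames, 64); /// number of return addresses captured
        for (int i = 0; i < count; ++i)
            std::printf("#%d %p\n", i, frames[i]);
        return 0;
    }
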

2
contrib/libuv vendored

@ -1 +1 @@
Subproject commit 3a85b2eb3d83f369b8a8cafd329d7e9dc28f60cf Subproject commit 4482964660c77eec1166cd7d14fb915e3dbd774a

2
contrib/libxml2 vendored

@ -1 +1 @@
Subproject commit 8292f361458fcffe0bff515a385be02e9d35582c Subproject commit 223cb03a5d27b1b2393b266a8657443d046139d6

View File

@ -21,7 +21,7 @@ extern "C" {
* your library and includes mismatch * your library and includes mismatch
*/ */
#ifndef LIBXML2_COMPILING_MSCCDEF #ifndef LIBXML2_COMPILING_MSCCDEF
XMLPUBFUN void xmlCheckVersion(int version); XMLPUBFUN void XMLCALL xmlCheckVersion(int version);
#endif /* LIBXML2_COMPILING_MSCCDEF */ #endif /* LIBXML2_COMPILING_MSCCDEF */
/** /**
@ -29,28 +29,28 @@ XMLPUBFUN void xmlCheckVersion(int version);
* *
* the version string like "1.2.3" * the version string like "1.2.3"
*/ */
#define LIBXML_DOTTED_VERSION "2.12.4" #define LIBXML_DOTTED_VERSION "2.10.3"
/** /**
* LIBXML_VERSION: * LIBXML_VERSION:
* *
* the version number: 1.2.3 value is 10203 * the version number: 1.2.3 value is 10203
*/ */
#define LIBXML_VERSION 21204 #define LIBXML_VERSION 21003
/** /**
* LIBXML_VERSION_STRING: * LIBXML_VERSION_STRING:
* *
* the version number string, 1.2.3 value is "10203" * the version number string, 1.2.3 value is "10203"
*/ */
#define LIBXML_VERSION_STRING "21204" #define LIBXML_VERSION_STRING "21003"
/** /**
* LIBXML_VERSION_EXTRA: * LIBXML_VERSION_EXTRA:
* *
* extra version information, used to show a git commit description * extra version information, used to show a git commit description
*/ */
#define LIBXML_VERSION_EXTRA "-GITv2.12.4" #define LIBXML_VERSION_EXTRA ""
/** /**
* LIBXML_TEST_VERSION: * LIBXML_TEST_VERSION:
@ -58,7 +58,7 @@ XMLPUBFUN void xmlCheckVersion(int version);
* Macro to check that the libxml version in use is compatible with * Macro to check that the libxml version in use is compatible with
* the version the software has been compiled against * the version the software has been compiled against
*/ */
#define LIBXML_TEST_VERSION xmlCheckVersion(21204); #define LIBXML_TEST_VERSION xmlCheckVersion(21003);
#ifndef VMS #ifndef VMS
#if 0 #if 0
@ -270,7 +270,7 @@ XMLPUBFUN void xmlCheckVersion(int version);
* *
* Whether iconv support is available * Whether iconv support is available
*/ */
#if 1 #if 0
#define LIBXML_ICONV_ENABLED #define LIBXML_ICONV_ENABLED
#endif #endif
@ -313,7 +313,7 @@ XMLPUBFUN void xmlCheckVersion(int version);
/** /**
* LIBXML_DEBUG_RUNTIME: * LIBXML_DEBUG_RUNTIME:
* *
* Removed * Whether the runtime debugging is configured in
*/ */
#if 0 #if 0
#define LIBXML_DEBUG_RUNTIME #define LIBXML_DEBUG_RUNTIME
@ -409,7 +409,12 @@ XMLPUBFUN void xmlCheckVersion(int version);
#endif #endif
#ifdef __GNUC__ #ifdef __GNUC__
/** DOC_DISABLE */
/**
* ATTRIBUTE_UNUSED:
*
* Macro used to signal to GCC unused function parameters
*/
#ifndef ATTRIBUTE_UNUSED #ifndef ATTRIBUTE_UNUSED
# if ((__GNUC__ > 2) || ((__GNUC__ == 2) && (__GNUC_MINOR__ >= 7))) # if ((__GNUC__ > 2) || ((__GNUC__ == 2) && (__GNUC_MINOR__ >= 7)))
@ -419,6 +424,12 @@ XMLPUBFUN void xmlCheckVersion(int version);
# endif # endif
#endif #endif
/**
* LIBXML_ATTR_ALLOC_SIZE:
*
* Macro used to indicate to GCC this is an allocator function
*/
#ifndef LIBXML_ATTR_ALLOC_SIZE #ifndef LIBXML_ATTR_ALLOC_SIZE
# if (!defined(__clang__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ >= 3)))) # if (!defined(__clang__) && ((__GNUC__ > 4) || ((__GNUC__ == 4) && (__GNUC_MINOR__ >= 3))))
# define LIBXML_ATTR_ALLOC_SIZE(x) __attribute__((alloc_size(x))) # define LIBXML_ATTR_ALLOC_SIZE(x) __attribute__((alloc_size(x)))
@ -429,6 +440,12 @@ XMLPUBFUN void xmlCheckVersion(int version);
# define LIBXML_ATTR_ALLOC_SIZE(x) # define LIBXML_ATTR_ALLOC_SIZE(x)
#endif #endif
/**
* LIBXML_ATTR_FORMAT:
*
* Macro used to indicate to GCC the parameter are printf like
*/
#ifndef LIBXML_ATTR_FORMAT #ifndef LIBXML_ATTR_FORMAT
# if ((__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3))) # if ((__GNUC__ > 3) || ((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)))
# define LIBXML_ATTR_FORMAT(fmt,args) __attribute__((__format__(__printf__,fmt,args))) # define LIBXML_ATTR_FORMAT(fmt,args) __attribute__((__format__(__printf__,fmt,args)))
@ -440,69 +457,44 @@ XMLPUBFUN void xmlCheckVersion(int version);
#endif #endif
#ifndef XML_DEPRECATED #ifndef XML_DEPRECATED
# if defined (IN_LIBXML) || (__GNUC__ * 100 + __GNUC_MINOR__ < 301) # ifdef IN_LIBXML
# define XML_DEPRECATED # define XML_DEPRECATED
/* Available since at least GCC 3.1 */
# else # else
/* Available since at least GCC 3.1 */
# define XML_DEPRECATED __attribute__((deprecated)) # define XML_DEPRECATED __attribute__((deprecated))
# endif # endif
#endif #endif
#if defined(__clang__) || (__GNUC__ * 100 + __GNUC_MINOR__ >= 406)
#if defined(__clang__) || (__GNUC__ * 100 + __GNUC_MINOR__ >= 800)
#define XML_IGNORE_FPTR_CAST_WARNINGS \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wpedantic\"") \
_Pragma("GCC diagnostic ignored \"-Wcast-function-type\"")
#else
#define XML_IGNORE_FPTR_CAST_WARNINGS \
_Pragma("GCC diagnostic push") \
_Pragma("GCC diagnostic ignored \"-Wpedantic\"")
#endif
#define XML_POP_WARNINGS \
_Pragma("GCC diagnostic pop")
#else
#define XML_IGNORE_FPTR_CAST_WARNINGS
#define XML_POP_WARNINGS
#endif
#else /* ! __GNUC__ */ #else /* ! __GNUC__ */
/**
* ATTRIBUTE_UNUSED:
*
* Macro used to signal to GCC unused function parameters
*/
#define ATTRIBUTE_UNUSED #define ATTRIBUTE_UNUSED
/**
* LIBXML_ATTR_ALLOC_SIZE:
*
* Macro used to indicate to GCC this is an allocator function
*/
#define LIBXML_ATTR_ALLOC_SIZE(x) #define LIBXML_ATTR_ALLOC_SIZE(x)
/**
* LIBXML_ATTR_FORMAT:
*
* Macro used to indicate to GCC the parameter are printf like
*/
#define LIBXML_ATTR_FORMAT(fmt,args) #define LIBXML_ATTR_FORMAT(fmt,args)
/**
* XML_DEPRECATED:
*
* Macro used to indicate that a function, variable, type or struct member
* is deprecated.
*/
#ifndef XML_DEPRECATED #ifndef XML_DEPRECATED
# if defined (IN_LIBXML) || !defined (_MSC_VER) #define XML_DEPRECATED
# define XML_DEPRECATED
/* Available since Visual Studio 2005 */
# elif defined (_MSC_VER) && (_MSC_VER >= 1400)
# define XML_DEPRECATED __declspec(deprecated)
# endif
#endif
#if defined (_MSC_VER) && (_MSC_VER >= 1400)
# define XML_IGNORE_FPTR_CAST_WARNINGS __pragma(warning(push))
#else
# define XML_IGNORE_FPTR_CAST_WARNINGS
#endif
#ifndef XML_POP_WARNINGS
# if defined (_MSC_VER) && (_MSC_VER >= 1400)
# define XML_POP_WARNINGS __pragma(warning(pop))
# else
# define XML_POP_WARNINGS
# endif
#endif #endif
#endif /* __GNUC__ */ #endif /* __GNUC__ */
#define XML_NO_ATTR
#ifdef LIBXML_THREAD_ENABLED
#define XML_DECLARE_GLOBAL(name, type, attrs) \
attrs XMLPUBFUN type *__##name(void);
#define XML_GLOBAL_MACRO(name) (*__##name())
#else
#define XML_DECLARE_GLOBAL(name, type, attrs) \
attrs XMLPUBVAR type name;
#endif
#ifdef __cplusplus #ifdef __cplusplus
} }
#endif /* __cplusplus */ #endif /* __cplusplus */

@ -1 +1 @@
Subproject commit 2568a7cd1297c7c3044b0f3cc0c23a6f6444d856 Subproject commit d2142eed98046a47ff7112e3cc1e197c8a5cd80f

2
contrib/lz4 vendored

@ -1 +1 @@
Subproject commit 92ebf1870b9acbefc0e7970409a181954a10ff40 Subproject commit ce45a9dbdb059511a3e9576b19db3e7f1a4f172e

View File

@ -24,7 +24,7 @@ git config --file .gitmodules --get-regexp '.*path' | sed 's/[^ ]* //' | xargs -
# We don't want to depend on any third-party CMake files. # We don't want to depend on any third-party CMake files.
# To check it, find and delete them. # To check it, find and delete them.
grep -o -P '"contrib/[^"]+"' .gitmodules | grep -o -P '"contrib/[^"]+"' .gitmodules |
grep -v -P 'contrib/(llvm-project|google-protobuf|grpc|abseil-cpp|corrosion)' | grep -v -P 'contrib/(llvm-project|google-protobuf|grpc|abseil-cpp|corrosion|aws-crt-cpp)' |
xargs -I@ find @ \ xargs -I@ find @ \
-'(' -name 'CMakeLists.txt' -or -name '*.cmake' -')' -and -not -name '*.h.cmake' \ -'(' -name 'CMakeLists.txt' -or -name '*.cmake' -')' -and -not -name '*.h.cmake' \
-delete -delete

View File

@ -62,7 +62,6 @@
"dependent": [] "dependent": []
}, },
"docker/test/integration/runner": { "docker/test/integration/runner": {
"only_amd64": true,
"name": "clickhouse/integration-tests-runner", "name": "clickhouse/integration-tests-runner",
"dependent": [] "dependent": []
}, },

View File

@ -34,7 +34,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc # lts / testing / prestable / etc
ARG REPO_CHANNEL="stable" ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}" ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.1.1.2048" ARG VERSION="24.1.5.6"
ARG PACKAGES="clickhouse-keeper" ARG PACKAGES="clickhouse-keeper"
ARG DIRECT_DOWNLOAD_URLS="" ARG DIRECT_DOWNLOAD_URLS=""

View File

@ -72,7 +72,7 @@ RUN add-apt-repository ppa:ubuntu-toolchain-r/test --yes \
zstd \ zstd \
zip \ zip \
&& apt-get clean \ && apt-get clean \
&& rm -rf /var/lib/apt/lists && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# Download toolchain and SDK for Darwin # Download toolchain and SDK for Darwin
RUN curl -sL -O https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.0.sdk.tar.xz RUN curl -sL -O https://github.com/phracker/MacOSX-SDKs/releases/download/11.3/MacOSX11.0.sdk.tar.xz

View File

@ -115,12 +115,17 @@ def run_docker_image_with_env(
subprocess.check_call(cmd, shell=True) subprocess.check_call(cmd, shell=True)
def is_release_build(debug_build: bool, package_type: str, sanitizer: str) -> bool: def is_release_build(
return not debug_build and package_type == "deb" and sanitizer == "" debug_build: bool, package_type: str, sanitizer: str, coverage: bool
) -> bool:
return (
not debug_build and package_type == "deb" and sanitizer == "" and not coverage
)
def parse_env_variables( def parse_env_variables(
debug_build: bool, debug_build: bool,
coverage: bool,
compiler: str, compiler: str,
sanitizer: str, sanitizer: str,
package_type: str, package_type: str,
@ -261,7 +266,7 @@ def parse_env_variables(
build_target = ( build_target = (
f"{build_target} clickhouse-odbc-bridge clickhouse-library-bridge" f"{build_target} clickhouse-odbc-bridge clickhouse-library-bridge"
) )
if is_release_build(debug_build, package_type, sanitizer): if is_release_build(debug_build, package_type, sanitizer, coverage):
cmake_flags.append("-DSPLIT_DEBUG_SYMBOLS=ON") cmake_flags.append("-DSPLIT_DEBUG_SYMBOLS=ON")
result.append("WITH_PERFORMANCE=1") result.append("WITH_PERFORMANCE=1")
if is_cross_arm: if is_cross_arm:
@ -287,6 +292,9 @@ def parse_env_variables(
else: else:
result.append("BUILD_TYPE=None") result.append("BUILD_TYPE=None")
if coverage:
cmake_flags.append("-DSANITIZE_COVERAGE=1 -DBUILD_STANDALONE_KEEPER=0")
if not cache: if not cache:
cmake_flags.append("-DCOMPILER_CACHE=disabled") cmake_flags.append("-DCOMPILER_CACHE=disabled")
@ -415,6 +423,11 @@ def parse_args() -> argparse.Namespace:
choices=("address", "thread", "memory", "undefined", ""), choices=("address", "thread", "memory", "undefined", ""),
default="", default="",
) )
parser.add_argument(
"--coverage",
action="store_true",
help="enable granular coverage with introspection",
)
parser.add_argument("--clang-tidy", action="store_true") parser.add_argument("--clang-tidy", action="store_true")
parser.add_argument( parser.add_argument(
@ -507,6 +520,7 @@ def main() -> None:
env_prepared = parse_env_variables( env_prepared = parse_env_variables(
args.debug_build, args.debug_build,
args.coverage,
args.compiler, args.compiler,
args.sanitizer, args.sanitizer,
args.package_type, args.package_type,

View File

@ -32,7 +32,7 @@ RUN arch=${TARGETARCH:-amd64} \
# lts / testing / prestable / etc # lts / testing / prestable / etc
ARG REPO_CHANNEL="stable" ARG REPO_CHANNEL="stable"
ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}" ARG REPOSITORY="https://packages.clickhouse.com/tgz/${REPO_CHANNEL}"
ARG VERSION="24.1.1.2048" ARG VERSION="24.1.5.6"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static" ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
ARG DIRECT_DOWNLOAD_URLS="" ARG DIRECT_DOWNLOAD_URLS=""

View File

@ -23,14 +23,11 @@ RUN sed -i "s|http://archive.ubuntu.com|${apt_archive}|g" /etc/apt/sources.list
tzdata \ tzdata \
wget \ wget \
&& apt-get clean \ && apt-get clean \
&& rm -rf \ && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
/var/lib/apt/lists/* \
/var/cache/debconf \
/tmp/*
ARG REPO_CHANNEL="stable" ARG REPO_CHANNEL="stable"
ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main" ARG REPOSITORY="deb [signed-by=/usr/share/keyrings/clickhouse-keyring.gpg] https://packages.clickhouse.com/deb ${REPO_CHANNEL} main"
ARG VERSION="24.1.1.2048" ARG VERSION="24.1.5.6"
ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static" ARG PACKAGES="clickhouse-client clickhouse-server clickhouse-common-static"
# set non-empty deb_location_url url to create a docker image # set non-empty deb_location_url url to create a docker image

View File

@ -118,13 +118,19 @@ if [ -n "$CLICKHOUSE_USER" ] && [ "$CLICKHOUSE_USER" != "default" ] || [ -n "$CL
EOT EOT
fi fi
CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS="${CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS:-}"
# checking $DATA_DIR for initialization # checking $DATA_DIR for initialization
if [ -d "${DATA_DIR%/}/data" ]; then if [ -d "${DATA_DIR%/}/data" ]; then
DATABASE_ALREADY_EXISTS='true' DATABASE_ALREADY_EXISTS='true'
fi fi
# only run initialization on an empty data directory # run initialization if the CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS flag is set to a non-empty value or the data directory is empty
if [ -z "${DATABASE_ALREADY_EXISTS}" ]; then if [[ -n "${CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS}" || -z "${DATABASE_ALREADY_EXISTS}" ]]; then
RUN_INITDB_SCRIPTS='true'
fi
if [ -n "${RUN_INITDB_SCRIPTS}" ]; then
if [ -n "$(ls /docker-entrypoint-initdb.d/)" ] || [ -n "$CLICKHOUSE_DB" ]; then if [ -n "$(ls /docker-entrypoint-initdb.d/)" ] || [ -n "$CLICKHOUSE_DB" ]; then
# port is needed to check if clickhouse-server is ready for connections # port is needed to check if clickhouse-server is ready for connections
HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port --try)" HTTP_PORT="$(clickhouse extract-from-config --config-file "$CLICKHOUSE_CONFIG" --key=http_port --try)"

View File

@ -13,7 +13,10 @@ RUN apt-get update \
zstd \ zstd \
locales \ locales \
sudo \ sudo \
--yes --no-install-recommends --yes --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# Sanitizer options for services (clickhouse-server) # Sanitizer options for services (clickhouse-server)
# Set resident memory limit for TSAN to 45GiB (46080MiB) to avoid OOMs in Stress tests # Set resident memory limit for TSAN to 45GiB (46080MiB) to avoid OOMs in Stress tests

View File

@ -17,16 +17,20 @@ CLICKHOUSE_CI_LOGS_CLUSTER=${CLICKHOUSE_CI_LOGS_CLUSTER:-system_logs_export}
EXTRA_COLUMNS=${EXTRA_COLUMNS:-"pull_request_number UInt32, commit_sha String, check_start_time DateTime('UTC'), check_name LowCardinality(String), instance_type LowCardinality(String), instance_id String, INDEX ix_pr (pull_request_number) TYPE set(100), INDEX ix_commit (commit_sha) TYPE set(100), INDEX ix_check_time (check_start_time) TYPE minmax, "} EXTRA_COLUMNS=${EXTRA_COLUMNS:-"pull_request_number UInt32, commit_sha String, check_start_time DateTime('UTC'), check_name LowCardinality(String), instance_type LowCardinality(String), instance_id String, INDEX ix_pr (pull_request_number) TYPE set(100), INDEX ix_commit (commit_sha) TYPE set(100), INDEX ix_check_time (check_start_time) TYPE minmax, "}
EXTRA_COLUMNS_EXPRESSION=${EXTRA_COLUMNS_EXPRESSION:-"CAST(0 AS UInt32) AS pull_request_number, '' AS commit_sha, now() AS check_start_time, toLowCardinality('') AS check_name, toLowCardinality('') AS instance_type, '' AS instance_id"} EXTRA_COLUMNS_EXPRESSION=${EXTRA_COLUMNS_EXPRESSION:-"CAST(0 AS UInt32) AS pull_request_number, '' AS commit_sha, now() AS check_start_time, toLowCardinality('') AS check_name, toLowCardinality('') AS instance_type, '' AS instance_id"}
EXTRA_ORDER_BY_COLUMNS=${EXTRA_ORDER_BY_COLUMNS:-"check_name, "} EXTRA_ORDER_BY_COLUMNS=${EXTRA_ORDER_BY_COLUMNS:-"check_name"}
# trace_log needs more columns for symbolization # trace_log needs more columns for symbolization
EXTRA_COLUMNS_TRACE_LOG="${EXTRA_COLUMNS} symbols Array(LowCardinality(String)), lines Array(LowCardinality(String)), " EXTRA_COLUMNS_TRACE_LOG="${EXTRA_COLUMNS} symbols Array(LowCardinality(String)), lines Array(LowCardinality(String)), "
EXTRA_COLUMNS_EXPRESSION_TRACE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayMap(x -> demangle(addressToSymbol(x)), trace)::Array(LowCardinality(String)) AS symbols, arrayMap(x -> addressToLine(x), trace)::Array(LowCardinality(String)) AS lines" EXTRA_COLUMNS_EXPRESSION_TRACE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayMap(x -> demangle(addressToSymbol(x)), trace)::Array(LowCardinality(String)) AS symbols, arrayMap(x -> addressToLine(x), trace)::Array(LowCardinality(String)) AS lines"
# coverage_log needs more columns for symbolization, but only symbol names (line numbers are too expensive to compute)
EXTRA_COLUMNS_COVERAGE_LOG="${EXTRA_COLUMNS} symbols Array(LowCardinality(String)), "
EXTRA_COLUMNS_EXPRESSION_COVERAGE_LOG="${EXTRA_COLUMNS_EXPRESSION}, arrayMap(x -> demangle(addressToSymbol(x)), coverage)::Array(LowCardinality(String)) AS symbols"
function __set_connection_args function __set_connection_args
{ {
# It's impossible to use generous $CONNECTION_ARGS string, it's unsafe from word splitting perspective. # It's impossible to use a generic $CONNECTION_ARGS string; it's unsafe from a word-splitting perspective.
# That's why we must stick to the generated option # That's why we must stick to the generated option
CONNECTION_ARGS=( CONNECTION_ARGS=(
--receive_timeout=45 --send_timeout=45 --secure --receive_timeout=45 --send_timeout=45 --secure
@ -129,6 +133,19 @@ function setup_logs_replication
debug_or_sanitizer_build=$(clickhouse-client -q "WITH ((SELECT value FROM system.build_options WHERE name='BUILD_TYPE') AS build, (SELECT value FROM system.build_options WHERE name='CXX_FLAGS') as flags) SELECT build='Debug' OR flags LIKE '%fsanitize%'") debug_or_sanitizer_build=$(clickhouse-client -q "WITH ((SELECT value FROM system.build_options WHERE name='BUILD_TYPE') AS build, (SELECT value FROM system.build_options WHERE name='CXX_FLAGS') as flags) SELECT build='Debug' OR flags LIKE '%fsanitize%'")
echo "Build is debug or sanitizer: $debug_or_sanitizer_build" echo "Build is debug or sanitizer: $debug_or_sanitizer_build"
# We will pre-create the table system.coverage_log.
# It is normally created by clickhouse-test rather than the server,
# so we create it in advance so that it is picked up by the commands below:
clickhouse-client --query "
CREATE TABLE IF NOT EXISTS system.coverage_log
(
time DateTime COMMENT 'The time of the test run',
test_name String COMMENT 'The name of the test',
coverage Array(UInt64) COMMENT 'An array of addresses of the code (a subset of addresses instrumented for coverage) that were encountered during the test run'
) ENGINE = Null COMMENT 'Contains information about per-test coverage from the CI, but used only for exporting to the CI cluster'
"
# For each system log table: # For each system log table:
echo 'Create %_log tables' echo 'Create %_log tables'
clickhouse-client --query "SHOW TABLES FROM system LIKE '%\\_log'" | while read -r table clickhouse-client --query "SHOW TABLES FROM system LIKE '%\\_log'" | while read -r table
@ -139,11 +156,16 @@ function setup_logs_replication
# Do not try to resolve stack traces in case of debug/sanitizers # Do not try to resolve stack traces in case of debug/sanitizers
# build, since it is too slow (flushing of trace_log can take ~1min # build, since it is too slow (flushing of trace_log can take ~1min
# with such MV attached) # with such MV attached)
if [[ "$debug_or_sanitizer_build" = 1 ]]; then if [[ "$debug_or_sanitizer_build" = 1 ]]
then
EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION}" EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION}"
else else
EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION_TRACE_LOG}" EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION_TRACE_LOG}"
fi fi
elif [[ "$table" = "coverage_log" ]]
then
EXTRA_COLUMNS_FOR_TABLE="${EXTRA_COLUMNS_COVERAGE_LOG}"
EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION_COVERAGE_LOG}"
else else
EXTRA_COLUMNS_FOR_TABLE="${EXTRA_COLUMNS}" EXTRA_COLUMNS_FOR_TABLE="${EXTRA_COLUMNS}"
EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION}" EXTRA_COLUMNS_EXPRESSION_FOR_TABLE="${EXTRA_COLUMNS_EXPRESSION}"
@ -160,7 +182,7 @@ function setup_logs_replication
# Create the destination table with adapted name and structure: # Create the destination table with adapted name and structure:
statement=$(clickhouse-client --format TSVRaw --query "SHOW CREATE TABLE system.${table}" | sed -r -e ' statement=$(clickhouse-client --format TSVRaw --query "SHOW CREATE TABLE system.${table}" | sed -r -e '
s/^\($/('"$EXTRA_COLUMNS_FOR_TABLE"'/; s/^\($/('"$EXTRA_COLUMNS_FOR_TABLE"'/;
s/ORDER BY \(/ORDER BY ('"$EXTRA_ORDER_BY_COLUMNS"'/; s/^ORDER BY (([^\(].+?)|\((.+?)\))$/ORDER BY ('"$EXTRA_ORDER_BY_COLUMNS"', \2\3)/;
s/^CREATE TABLE system\.\w+_log$/CREATE TABLE IF NOT EXISTS '"$table"'_'"$hash"'/; s/^CREATE TABLE system\.\w+_log$/CREATE TABLE IF NOT EXISTS '"$table"'_'"$hash"'/;
/^TTL /d /^TTL /d
') ')
@ -168,7 +190,7 @@ function setup_logs_replication
echo -e "Creating remote destination table ${table}_${hash} with statement:\n${statement}" >&2 echo -e "Creating remote destination table ${table}_${hash} with statement:\n${statement}" >&2
echo "$statement" | clickhouse-client --database_replicated_initial_query_timeout_sec=10 \ echo "$statement" | clickhouse-client --database_replicated_initial_query_timeout_sec=10 \
--distributed_ddl_task_timeout=30 \ --distributed_ddl_task_timeout=30 --distributed_ddl_output_mode=throw_only_active \
"${CONNECTION_ARGS[@]}" || continue "${CONNECTION_ARGS[@]}" || continue
echo "Creating table system.${table}_sender" >&2 echo "Creating table system.${table}_sender" >&2

View File

@ -20,7 +20,9 @@ RUN apt-get update \
pv \ pv \
jq \ jq \
zstd \ zstd \
--yes --no-install-recommends --yes --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
RUN pip3 install numpy==1.26.3 scipy==1.12.0 pandas==1.5.3 Jinja2==3.1.3 RUN pip3 install numpy==1.26.3 scipy==1.12.0 pandas==1.5.3 Jinja2==3.1.3
@ -31,12 +33,14 @@ RUN mkdir -p /tmp/clickhouse-odbc-tmp \
&& cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \ && cp /tmp/clickhouse-odbc-tmp/lib64/*.so /usr/local/lib/ \
&& odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \ && odbcinst -i -d -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbcinst.ini.sample \
&& odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \ && odbcinst -i -s -l -f /tmp/clickhouse-odbc-tmp/share/doc/clickhouse-odbc/config/odbc.ini.sample \
&& rm -rf /tmp/clickhouse-odbc-tmp \ && rm -rf /tmp/clickhouse-odbc-tmp
# Give suid to gdb to grant it attach permissions
# chmod 777 to make the container user independent
RUN chmod u+s /usr/bin/gdb \
&& mkdir -p /var/lib/clickhouse \ && mkdir -p /var/lib/clickhouse \
&& chmod 777 /var/lib/clickhouse && chmod 777 /var/lib/clickhouse
# chmod 777 to make the container user independent
ENV TZ=Europe/Amsterdam ENV TZ=Europe/Amsterdam
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

View File

@ -29,7 +29,7 @@ RUN apt-get update \
wget \ wget \
&& apt-get autoremove --yes \ && apt-get autoremove --yes \
&& apt-get clean \ && apt-get clean \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
RUN pip3 install Jinja2 RUN pip3 install Jinja2

View File

@ -389,8 +389,8 @@ fi
rg --text -F '<Fatal>' server.log > fatal.log ||: rg --text -F '<Fatal>' server.log > fatal.log ||:
dmesg -T > dmesg.log ||: dmesg -T > dmesg.log ||:
zstd --threads=0 server.log zstd --threads=0 --rm server.log
zstd --threads=0 fuzzer.log zstd --threads=0 --rm fuzzer.log
cat > report.html <<EOF ||: cat > report.html <<EOF ||:
<!DOCTYPE html> <!DOCTYPE html>

View File

@ -10,13 +10,13 @@ ENV \
init=/lib/systemd/systemd init=/lib/systemd/systemd
# install systemd packages # install systemd packages
RUN apt-get update && \ RUN apt-get update \
apt-get install -y --no-install-recommends \ && apt-get install -y --no-install-recommends \
sudo \ sudo \
systemd \ systemd \
&& \ \
apt-get clean && \ && apt-get clean \
rm -rf /var/lib/apt/lists && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# configure systemd # configure systemd
# remove systemd 'wants' triggers # remove systemd 'wants' triggers

View File

@ -1,31 +1,27 @@
FROM ubuntu:20.04 FROM ubuntu:20.04
MAINTAINER lgbo-ustc <lgbo.ustc@gmail.com> MAINTAINER lgbo-ustc <lgbo.ustc@gmail.com>
RUN apt-get update RUN apt-get update \
RUN apt-get install -y wget openjdk-8-jre && apt-get install -y wget openjdk-8-jre \
&& wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz \
RUN wget https://archive.apache.org/dist/hadoop/common/hadoop-3.1.0/hadoop-3.1.0.tar.gz && \ && tar -xf hadoop-3.1.0.tar.gz && rm -rf hadoop-3.1.0.tar.gz \
tar -xf hadoop-3.1.0.tar.gz && rm -rf hadoop-3.1.0.tar.gz && wget https://apache.apache.org/dist/hive/hive-2.3.9/apache-hive-2.3.9-bin.tar.gz \
RUN wget https://apache.apache.org/dist/hive/hive-2.3.9/apache-hive-2.3.9-bin.tar.gz && \ && tar -xf apache-hive-2.3.9-bin.tar.gz && rm -rf apache-hive-2.3.9-bin.tar.gz \
tar -xf apache-hive-2.3.9-bin.tar.gz && rm -rf apache-hive-2.3.9-bin.tar.gz && apt install -y vim \
RUN apt install -y vim && apt install -y openssh-server openssh-client \
&& apt install -y mysql-server \
RUN apt install -y openssh-server openssh-client && mkdir -p /root/.ssh \
&& ssh-keygen -t rsa -b 2048 -P '' -f /root/.ssh/id_rsa \
RUN apt install -y mysql-server && cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys \
&& cp /root/.ssh/id_rsa /etc/ssh/ssh_host_rsa_key \
RUN mkdir -p /root/.ssh && \ && cp /root/.ssh/id_rsa.pub /etc/ssh/ssh_host_rsa_key.pub \
ssh-keygen -t rsa -b 2048 -P '' -f /root/.ssh/id_rsa && \ && wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.27.tar.gz \
cat /root/.ssh/id_rsa.pub > /root/.ssh/authorized_keys && \ && tar -xf mysql-connector-java-8.0.27.tar.gz \
cp /root/.ssh/id_rsa /etc/ssh/ssh_host_rsa_key && \ && mv mysql-connector-java-8.0.27/mysql-connector-java-8.0.27.jar /apache-hive-2.3.9-bin/lib/ \
cp /root/.ssh/id_rsa.pub /etc/ssh/ssh_host_rsa_key.pub && rm -rf mysql-connector-java-8.0.27.tar.gz mysql-connector-java-8.0.27 \
&& apt install -y iputils-ping net-tools \
RUN wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-8.0.27.tar.gz &&\ && apt-get clean \
tar -xf mysql-connector-java-8.0.27.tar.gz && \ && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
mv mysql-connector-java-8.0.27/mysql-connector-java-8.0.27.jar /apache-hive-2.3.9-bin/lib/ && \
rm -rf mysql-connector-java-8.0.27.tar.gz mysql-connector-java-8.0.27
RUN apt install -y iputils-ping net-tools
ENV JAVA_HOME=/usr ENV JAVA_HOME=/usr
ENV HADOOP_HOME=/hadoop-3.1.0 ENV HADOOP_HOME=/hadoop-3.1.0
@ -44,4 +40,3 @@ COPY demo_data.txt /
ENV PATH=/apache-hive-2.3.9-bin/bin:/hadoop-3.1.0/bin:/hadoop-3.1.0/sbin:$PATH ENV PATH=/apache-hive-2.3.9-bin/bin:/hadoop-3.1.0/bin:/hadoop-3.1.0/sbin:$PATH
RUN service ssh start && sed s/HOSTNAME/$HOSTNAME/ /hadoop-3.1.0/etc/hadoop/core-site.xml.template > /hadoop-3.1.0/etc/hadoop/core-site.xml && hdfs namenode -format RUN service ssh start && sed s/HOSTNAME/$HOSTNAME/ /hadoop-3.1.0/etc/hadoop/core-site.xml.template > /hadoop-3.1.0/etc/hadoop/core-site.xml && hdfs namenode -format
COPY start.sh / COPY start.sh /

View File

@ -3,14 +3,10 @@
FROM ubuntu:18.04 FROM ubuntu:18.04
RUN apt-get update && \ RUN apt-get update \
apt-get install -y software-properties-common build-essential openjdk-8-jdk curl && apt-get install -y software-properties-common build-essential openjdk-8-jdk curl \
&& apt-get clean \
RUN rm -rf \ && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
/var/lib/apt/lists/* \
/var/cache/debconf \
/tmp/* \
RUN apt-get clean
ARG ver=42.2.12 ARG ver=42.2.12
RUN curl -L -o /postgresql-java-${ver}.jar https://repo1.maven.org/maven2/org/postgresql/postgresql/${ver}/postgresql-${ver}.jar RUN curl -L -o /postgresql-java-${ver}.jar https://repo1.maven.org/maven2/org/postgresql/postgresql/${ver}/postgresql-${ver}.jar

View File

@ -37,11 +37,8 @@ RUN apt-get update \
libkrb5-dev \ libkrb5-dev \
krb5-user \ krb5-user \
g++ \ g++ \
&& rm -rf \ && apt-get clean \
/var/lib/apt/lists/* \ && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
/var/cache/debconf \
/tmp/* \
&& apt-get clean
ENV TZ=Etc/UTC ENV TZ=Etc/UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
@ -62,47 +59,49 @@ RUN curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
&& dockerd --version; docker --version && dockerd --version; docker --version
# kazoo 2.10.0 is broken
# https://s3.amazonaws.com/clickhouse-test-reports/59337/524625a1d2f4cc608a3f1059e3df2c30f353a649/integration_tests__asan__analyzer__[5_6].html
RUN python3 -m pip install --no-cache-dir \ RUN python3 -m pip install --no-cache-dir \
PyMySQL \ PyMySQL==1.1.0 \
aerospike==11.1.0 \ asyncio==3.4.3 \
asyncio \
avro==1.10.2 \ avro==1.10.2 \
azure-storage-blob \ azure-storage-blob==12.19.0 \
boto3 \ boto3==1.34.24 \
cassandra-driver \ cassandra-driver==3.29.0 \
confluent-kafka==1.9.2 \ confluent-kafka==2.3.0 \
delta-spark==2.3.0 \ delta-spark==2.3.0 \
dict2xml \ dict2xml==1.7.4 \
dicttoxml \ dicttoxml==1.7.16 \
docker==6.1.3 \ docker==6.1.3 \
docker-compose==1.29.2 \ docker-compose==1.29.2 \
grpcio \ grpcio==1.60.0 \
grpcio-tools \ grpcio-tools==1.60.0 \
kafka-python \ kafka-python==2.0.2 \
kazoo \ lz4==4.3.3 \
lz4 \ minio==7.2.3 \
minio \ nats-py==2.6.0 \
nats-py \ protobuf==4.25.2 \
protobuf \ kazoo==2.9.0 \
psycopg2-binary==2.9.6 \ psycopg2-binary==2.9.6 \
pyhdfs \ pyhdfs==0.3.1 \
pymongo==3.11.0 \ pymongo==3.11.0 \
pyspark==3.3.2 \ pyspark==3.3.2 \
pytest \ pytest==7.4.4 \
pytest-order==1.0.0 \ pytest-order==1.0.0 \
pytest-random \ pytest-random==0.2 \
pytest-repeat \ pytest-repeat==0.9.3 \
pytest-timeout \ pytest-timeout==2.2.0 \
pytest-xdist \ pytest-xdist==3.5.0 \
pytz \ pytest-reportlog==0.4.0 \
pytz==2023.3.post1 \
pyyaml==5.3.1 \ pyyaml==5.3.1 \
redis \ redis==5.0.1 \
requests-kerberos \ requests-kerberos==0.14.0 \
tzlocal==2.1 \ tzlocal==2.1 \
retry \ retry==0.9.2 \
bs4 \ bs4==0.0.2 \
lxml \ lxml==5.1.0 \
urllib3 urllib3==2.0.7
# bs4, lxml are for cloud tests, do not delete # bs4, lxml are for cloud tests, do not delete
# Hudi supports only spark 3.3.*, not 3.4 # Hudi supports only spark 3.3.*, not 3.4

View File

@ -1,7 +1,7 @@
version: '2.3' version: '2.3'
services: services:
mysql2: mysql2:
image: mysql:5.7 image: mysql:8.0
restart: always restart: always
environment: environment:
MYSQL_ROOT_PASSWORD: clickhouse MYSQL_ROOT_PASSWORD: clickhouse
@ -23,7 +23,7 @@ services:
source: ${MYSQL_CLUSTER_LOGS:-} source: ${MYSQL_CLUSTER_LOGS:-}
target: /mysql/ target: /mysql/
mysql3: mysql3:
image: mysql:5.7 image: mysql:8.0
restart: always restart: always
environment: environment:
MYSQL_ROOT_PASSWORD: clickhouse MYSQL_ROOT_PASSWORD: clickhouse
@ -45,7 +45,7 @@ services:
source: ${MYSQL_CLUSTER_LOGS:-} source: ${MYSQL_CLUSTER_LOGS:-}
target: /mysql/ target: /mysql/
mysql4: mysql4:
image: mysql:5.7 image: mysql:8.0
restart: always restart: always
environment: environment:
MYSQL_ROOT_PASSWORD: clickhouse MYSQL_ROOT_PASSWORD: clickhouse

View File

@ -24,7 +24,10 @@ RUN mkdir "/root/.ssh"
RUN touch "/root/.ssh/known_hosts" RUN touch "/root/.ssh/known_hosts"
# install java # install java
RUN apt-get update && apt-get install default-jre default-jdk libjna-java libjna-jni ssh gnuplot graphviz --yes --no-install-recommends RUN apt-get update && \
apt-get install default-jre default-jdk libjna-java libjna-jni ssh gnuplot graphviz --yes --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# install clojure # install clojure
RUN curl -O "https://download.clojure.org/install/linux-install-${CLOJURE_VERSION}.sh" && \ RUN curl -O "https://download.clojure.org/install/linux-install-${CLOJURE_VERSION}.sh" && \

View File

@ -27,7 +27,7 @@ RUN apt-get update \
wget \ wget \
&& apt-get autoremove --yes \ && apt-get autoremove --yes \
&& apt-get clean \ && apt-get clean \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
RUN pip3 install Jinja2 RUN pip3 install Jinja2

View File

@ -37,7 +37,7 @@ RUN apt-get update \
&& apt-get purge --yes python3-dev g++ \ && apt-get purge --yes python3-dev g++ \
&& apt-get autoremove --yes \ && apt-get autoremove --yes \
&& apt-get clean \ && apt-get clean \
&& rm -rf /var/lib/apt/lists/* && rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
COPY run.sh / COPY run.sh /

View File

@ -31,7 +31,9 @@ RUN mkdir "/root/.ssh"
RUN touch "/root/.ssh/known_hosts" RUN touch "/root/.ssh/known_hosts"
# install java # install java
RUN apt-get update && apt-get install default-jre default-jdk libjna-java libjna-jni ssh gnuplot graphviz --yes --no-install-recommends RUN apt-get update && apt-get install default-jre default-jdk libjna-java libjna-jni ssh gnuplot graphviz --yes --no-install-recommends \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# install clojure # install clojure
RUN curl -O "https://download.clojure.org/install/linux-install-${CLOJURE_VERSION}.sh" && \ RUN curl -O "https://download.clojure.org/install/linux-install-${CLOJURE_VERSION}.sh" && \

View File

@ -5,9 +5,10 @@ FROM ubuntu:22.04
ARG apt_archive="http://archive.ubuntu.com" ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
RUN apt-get update --yes && \ RUN apt-get update --yes \
env DEBIAN_FRONTEND=noninteractive apt-get install wget git default-jdk maven python3 --yes --no-install-recommends && \ && env DEBIAN_FRONTEND=noninteractive apt-get install wget git default-jdk maven python3 --yes --no-install-recommends \
apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# We need to get the repository's HEAD each time despite, so we invalidate layers' cache # We need to get the repository's HEAD each time despite, so we invalidate layers' cache
ARG CACHE_INVALIDATOR=0 ARG CACHE_INVALIDATOR=0

View File

@ -15,7 +15,8 @@ RUN apt-get update --yes \
unixodbc-dev \ unixodbc-dev \
odbcinst \ odbcinst \
sudo \ sudo \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
RUN pip3 install \ RUN pip3 install \
numpy \ numpy \

View File

@ -11,7 +11,8 @@ RUN apt-get update --yes \
python3-dev \ python3-dev \
python3-pip \ python3-pip \
sudo \ sudo \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
RUN pip3 install \ RUN pip3 install \
pyyaml \ pyyaml \

View File

@ -9,7 +9,8 @@ RUN apt-get update -y \
python3-requests \ python3-requests \
nodejs \ nodejs \
npm \ npm \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
COPY create.sql / COPY create.sql /
COPY run.sh / COPY run.sh /

View File

@ -44,9 +44,10 @@ RUN apt-get update -y \
pv \ pv \
zip \ zip \
p7zip-full \ p7zip-full \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
RUN pip3 install numpy scipy pandas Jinja2 pyarrow RUN pip3 install numpy==1.26.3 scipy==1.12.0 pandas==1.5.3 Jinja2==3.1.3 pyarrow==15.0.0
RUN mkdir -p /tmp/clickhouse-odbc-tmp \ RUN mkdir -p /tmp/clickhouse-odbc-tmp \
&& wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \ && wget -nv -O - ${odbc_driver_url} | tar --strip-components=1 -xz -C /tmp/clickhouse-odbc-tmp \
@ -73,7 +74,6 @@ RUN arch=${TARGETARCH:-amd64} \
&& wget "https://dl.min.io/client/mc/release/linux-${arch}/archive/mc.RELEASE.${MINIO_CLIENT_VERSION}" -O ./mc \ && wget "https://dl.min.io/client/mc/release/linux-${arch}/archive/mc.RELEASE.${MINIO_CLIENT_VERSION}" -O ./mc \
&& chmod +x ./mc ./minio && chmod +x ./mc ./minio
RUN wget --no-verbose 'https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz' \ RUN wget --no-verbose 'https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz' \
&& tar -xvf hadoop-3.3.1.tar.gz \ && tar -xvf hadoop-3.3.1.tar.gz \
&& rm -rf hadoop-3.3.1.tar.gz && rm -rf hadoop-3.3.1.tar.gz

View File

@ -9,6 +9,8 @@ FROM ubuntu:20.04 as clickhouse-test-runner-base
VOLUME /packages VOLUME /packages
CMD apt-get update ;\ CMD apt-get update ;\
DEBIAN_FRONTEND=noninteractive \ DEBIAN_FRONTEND=noninteractive \
apt install -y /packages/clickhouse-common-static_*.deb \ apt install -y /packages/clickhouse-common-static_*.deb \
/packages/clickhouse-client_*.deb /packages/clickhouse-client_*.deb \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*

View File

@ -185,11 +185,15 @@ function run_tests()
if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then if [[ -n "$USE_DATABASE_REPLICATED" ]] && [[ "$USE_DATABASE_REPLICATED" -eq 1 ]]; then
ADDITIONAL_OPTIONS+=('--replicated-database') ADDITIONAL_OPTIONS+=('--replicated-database')
# Too many tests fail for DatabaseReplicated in parallel.
ADDITIONAL_OPTIONS+=('--jobs') ADDITIONAL_OPTIONS+=('--jobs')
ADDITIONAL_OPTIONS+=('2') ADDITIONAL_OPTIONS+=('2')
elif [[ 1 == $(clickhouse-client --query "SELECT value LIKE '%SANITIZE_COVERAGE%' FROM system.build_options WHERE name = 'CXX_FLAGS'") ]]; then
# Coverage on a per-test basis can only be collected sequentially.
# Do not set the --jobs parameter.
echo "Running tests with coverage collection."
else else
# Too many tests fail for DatabaseReplicated in parallel. All other # All other configurations are OK.
# configurations are OK.
ADDITIONAL_OPTIONS+=('--jobs') ADDITIONAL_OPTIONS+=('--jobs')
ADDITIONAL_OPTIONS+=('8') ADDITIONAL_OPTIONS+=('8')
fi fi

View File

@ -19,7 +19,8 @@ RUN apt-get update -y \
openssl \ openssl \
netcat-openbsd \ netcat-openbsd \
brotli \ brotli \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
COPY run.sh / COPY run.sh /

View File

@ -21,6 +21,7 @@ RUN apt-get update && env DEBIAN_FRONTEND=noninteractive apt-get install --yes \
locales \ locales \
&& pip3 install black==23.1.0 boto3 codespell==2.2.1 mypy==1.3.0 PyGithub unidiff pylint==2.6.2 \ && pip3 install black==23.1.0 boto3 codespell==2.2.1 mypy==1.3.0 PyGithub unidiff pylint==2.6.2 \
&& apt-get clean \ && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/* \
&& rm -rf /root/.cache/pip && rm -rf /root/.cache/pip
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen en_US.UTF-8 RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen en_US.UTF-8

View File

@ -19,7 +19,8 @@ RUN apt-get update -y \
openssl \ openssl \
netcat-openbsd \ netcat-openbsd \
brotli \ brotli \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
COPY run.sh / COPY run.sh /

View File

@ -77,6 +77,12 @@ remove_keeper_config "async_replication" "1"
# create_if_not_exists feature flag doesn't exist on some older versions # create_if_not_exists feature flag doesn't exist on some older versions
remove_keeper_config "create_if_not_exists" "[01]" remove_keeper_config "create_if_not_exists" "[01]"
# latest_logs_cache_size_threshold setting doesn't exist on some older versions
remove_keeper_config "latest_logs_cache_size_threshold" "[[:digit:]]\+"
# commit_logs_cache_size_threshold setting doesn't exist on some older versions
remove_keeper_config "commit_logs_cache_size_threshold" "[[:digit:]]\+"
# it contains some new settings, but we can safely remove it # it contains some new settings, but we can safely remove it
rm /etc/clickhouse-server/config.d/merge_tree.xml rm /etc/clickhouse-server/config.d/merge_tree.xml
rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml rm /etc/clickhouse-server/config.d/enable_wait_for_shutdown_replicated_tables.xml
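The `remove_keeper_config` helper used above is defined earlier in the same script; a minimal sketch of what such a helper could look like (the config path and the exact sed expression here are assumptions for illustration, not the script's actual code):

```bash
# Hypothetical sketch: drop a <name>value</name> element from the Keeper config
# so that an older server version does not fail on an unknown setting.
remove_keeper_config()
{
    local name="$1" value_pattern="$2"
    sudo sed -i "/<${name}>${value_pattern}<\/${name}>/d" \
        /etc/clickhouse-server/config.d/keeper_port.xml
}
```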
@ -109,6 +115,12 @@ remove_keeper_config "async_replication" "1"
# create_if_not_exists feature flag doesn't exist on some older versions # create_if_not_exists feature flag doesn't exist on some older versions
remove_keeper_config "create_if_not_exists" "[01]" remove_keeper_config "create_if_not_exists" "[01]"
# latest_logs_cache_size_threshold setting doesn't exist on some older versions
remove_keeper_config "latest_logs_cache_size_threshold" "[[:digit:]]\+"
# commit_logs_cache_size_threshold setting doesn't exist on some older versions
remove_keeper_config "commit_logs_cache_size_threshold" "[[:digit:]]\+"
# But we still need the default disk because some tables are loaded only into it # But we still need the default disk because some tables are loaded only into it
sudo cat /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml \ sudo cat /etc/clickhouse-server/config.d/s3_storage_policy_by_default.xml \
| sed "s|<main><disk>s3</disk></main>|<main><disk>s3</disk></main><default><disk>default</disk></default>|" \ | sed "s|<main><disk>s3</disk></main>|<main><disk>s3</disk></main><default><disk>default</disk></default>|" \

View File

@ -5,7 +5,6 @@ FROM ubuntu:22.04
ARG apt_archive="http://archive.ubuntu.com" ARG apt_archive="http://archive.ubuntu.com"
RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list RUN sed -i "s|http://archive.ubuntu.com|$apt_archive|g" /etc/apt/sources.list
# 15.0.2
ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=17 ENV DEBIAN_FRONTEND=noninteractive LLVM_VERSION=17
RUN apt-get update \ RUN apt-get update \
@ -27,9 +26,10 @@ RUN apt-get update \
&& export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \ && export CODENAME="$(lsb_release --codename --short | tr 'A-Z' 'a-z')" \
&& echo "deb https://apt.llvm.org/${CODENAME}/ llvm-toolchain-${CODENAME}-${LLVM_VERSION} main" >> \ && echo "deb https://apt.llvm.org/${CODENAME}/ llvm-toolchain-${CODENAME}-${LLVM_VERSION} main" >> \
/etc/apt/sources.list \ /etc/apt/sources.list \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# Install cmake 3.20+ for rust support # Install cmake 3.20+ for Rust support
# Used https://askubuntu.com/a/1157132 as reference # Used https://askubuntu.com/a/1157132 as reference
RUN curl -s https://apt.kitware.com/keys/kitware-archive-latest.asc | \ RUN curl -s https://apt.kitware.com/keys/kitware-archive-latest.asc | \
gpg --dearmor - > /etc/apt/trusted.gpg.d/kitware.gpg && \ gpg --dearmor - > /etc/apt/trusted.gpg.d/kitware.gpg && \
@ -60,9 +60,10 @@ RUN apt-get update \
software-properties-common \ software-properties-common \
tzdata \ tzdata \
--yes --no-install-recommends \ --yes --no-install-recommends \
&& apt-get clean && apt-get clean \
&& rm -rf /var/lib/apt/lists/* /var/cache/debconf /tmp/*
# This symlink required by gcc to find lld compiler # This symlink is required by gcc to find the lld linker
RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld RUN ln -s /usr/bin/lld-${LLVM_VERSION} /usr/bin/ld.lld
# for external_symbolizer_path # for external_symbolizer_path
RUN ln -s /usr/bin/llvm-symbolizer-${LLVM_VERSION} /usr/bin/llvm-symbolizer RUN ln -s /usr/bin/llvm-symbolizer-${LLVM_VERSION} /usr/bin/llvm-symbolizer
@ -107,5 +108,4 @@ RUN arch=${TARGETARCH:-amd64} \
&& mv "/tmp/sccache-$SCCACHE_VERSION-$rarch-unknown-linux-musl/sccache" /usr/bin \ && mv "/tmp/sccache-$SCCACHE_VERSION-$rarch-unknown-linux-musl/sccache" /usr/bin \
&& rm "/tmp/sccache-$SCCACHE_VERSION-$rarch-unknown-linux-musl" -r && rm "/tmp/sccache-$SCCACHE_VERSION-$rarch-unknown-linux-musl" -r
COPY process_functional_tests_result.py / COPY process_functional_tests_result.py /

View File

@ -0,0 +1,31 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v23.11.5.29-stable (d83b108deca) FIXME as compared to v23.11.4.24-stable (e79d840d7fe)
#### Improvement
* Backported in [#58815](https://github.com/ClickHouse/ClickHouse/issues/58815): Add `SYSTEM JEMALLOC PURGE` for purging unused jemalloc pages, and `SYSTEM JEMALLOC [ ENABLE | DISABLE | FLUSH ] PROFILE` for controlling the jemalloc profile if the profiler is enabled. Add jemalloc-related 4LW commands in Keeper: `jmst` for dumping jemalloc stats, and `jmfp`, `jmep`, `jmdp` for controlling the jemalloc profile if the profiler is enabled. [#58665](https://github.com/ClickHouse/ClickHouse/pull/58665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#59234](https://github.com/ClickHouse/ClickHouse/issues/59234): Allow ignoring schema evolution in the Iceberg table engine and reading all data using either the schema specified by the user on table creation or the latest schema parsed from metadata on table creation. This is controlled by the setting `iceberg_engine_ignore_schema_evolution`, which is disabled by default. Note that enabling this setting can lead to incorrect results, because with an evolved schema all data files will be read using the same schema. [#59133](https://github.com/ClickHouse/ClickHouse/pull/59133) ([Kruglov Pavel](https://github.com/Avogar)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix a stupid case of intersecting parts [#58482](https://github.com/ClickHouse/ClickHouse/pull/58482) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix stream partitioning in parallel window functions [#58739](https://github.com/ClickHouse/ClickHouse/pull/58739) ([Dmitry Novik](https://github.com/novikd)).
* Fix double destroy call on exception throw in addBatchLookupTable8 [#58745](https://github.com/ClickHouse/ClickHouse/pull/58745) ([Raúl Marín](https://github.com/Algunenano)).
* Fix JSONExtract function for LowCardinality(Nullable) columns [#58808](https://github.com/ClickHouse/ClickHouse/pull/58808) ([vdimir](https://github.com/vdimir)).
* Fix: LIMIT BY and LIMIT in distributed query [#59153](https://github.com/ClickHouse/ClickHouse/pull/59153) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix not-ready set for system.tables [#59351](https://github.com/ClickHouse/ClickHouse/pull/59351) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix translate() with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* refine error message [#57991](https://github.com/ClickHouse/ClickHouse/pull/57991) ([Han Fei](https://github.com/hanfei1991)).
* Fix rare race in external sort/aggregation with temporary data in cache [#58013](https://github.com/ClickHouse/ClickHouse/pull/58013) ([Anton Popov](https://github.com/CurtizJ)).
* Follow-up to [#58482](https://github.com/ClickHouse/ClickHouse/issues/58482) [#58574](https://github.com/ClickHouse/ClickHouse/pull/58574) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix possible race in ManyAggregatedData dtor. [#58624](https://github.com/ClickHouse/ClickHouse/pull/58624) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Decrease log level for one log message [#59168](https://github.com/ClickHouse/ClickHouse/pull/59168) ([Kseniia Sumarokova](https://github.com/kssenii)).

View File

@ -0,0 +1,36 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v23.12.3.40-stable (a594704ae75) FIXME as compared to v23.12.2.59-stable (17ab210e761)
#### Improvement
* Backported in [#58660](https://github.com/ClickHouse/ClickHouse/issues/58660): When executing some queries that require many streams for reading data, the error `"Paste JOIN requires sorted tables only"` was previously thrown. Now the number of streams is reduced to 1 in that case. [#58608](https://github.com/ClickHouse/ClickHouse/pull/58608) ([Yarik Briukhovetskyi](https://github.com/yariks5s)).
* Backported in [#58817](https://github.com/ClickHouse/ClickHouse/issues/58817): Add `SYSTEM JEMALLOC PURGE` for purging unused jemalloc pages, and `SYSTEM JEMALLOC [ ENABLE | DISABLE | FLUSH ] PROFILE` for controlling the jemalloc profile if the profiler is enabled. Add jemalloc-related 4LW commands in Keeper: `jmst` for dumping jemalloc stats, and `jmfp`, `jmep`, `jmdp` for controlling the jemalloc profile if the profiler is enabled. [#58665](https://github.com/ClickHouse/ClickHouse/pull/58665) ([Antonio Andelic](https://github.com/antonio2368)).
* Backported in [#59235](https://github.com/ClickHouse/ClickHouse/issues/59235): Allow ignoring schema evolution in the Iceberg table engine and reading all data using either the schema specified by the user on table creation or the latest schema parsed from metadata on table creation. This is controlled by the setting `iceberg_engine_ignore_schema_evolution`, which is disabled by default. Note that enabling this setting can lead to incorrect results, because with an evolved schema all data files will be read using the same schema. [#59133](https://github.com/ClickHouse/ClickHouse/pull/59133) ([Kruglov Pavel](https://github.com/Avogar)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Delay reading from StorageKafka to allow multiple reads in materialized views [#58477](https://github.com/ClickHouse/ClickHouse/pull/58477) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fix a stupid case of intersecting parts [#58482](https://github.com/ClickHouse/ClickHouse/pull/58482) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Disable max_joined_block_rows in ConcurrentHashJoin [#58595](https://github.com/ClickHouse/ClickHouse/pull/58595) ([vdimir](https://github.com/vdimir)).
* Fix stream partitioning in parallel window functions [#58739](https://github.com/ClickHouse/ClickHouse/pull/58739) ([Dmitry Novik](https://github.com/novikd)).
* Fix double destroy call on exception throw in addBatchLookupTable8 [#58745](https://github.com/ClickHouse/ClickHouse/pull/58745) ([Raúl Marín](https://github.com/Algunenano)).
* Fix JSONExtract function for LowCardinality(Nullable) columns [#58808](https://github.com/ClickHouse/ClickHouse/pull/58808) ([vdimir](https://github.com/vdimir)).
* Multiple read file log storage in mv [#58877](https://github.com/ClickHouse/ClickHouse/pull/58877) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Fix: LIMIT BY and LIMIT in distributed query [#59153](https://github.com/ClickHouse/ClickHouse/pull/59153) ([Igor Nikonov](https://github.com/devcrafter)).
* Fix not-ready set for system.tables [#59351](https://github.com/ClickHouse/ClickHouse/pull/59351) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Fix translate() with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Follow-up to [#58482](https://github.com/ClickHouse/ClickHouse/issues/58482) [#58574](https://github.com/ClickHouse/ClickHouse/pull/58574) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix possible race in ManyAggregatedData dtor. [#58624](https://github.com/ClickHouse/ClickHouse/pull/58624) ([Nikolai Kochetov](https://github.com/KochetovNicolai)).
* Change log level for super important message in Keeper [#59010](https://github.com/ClickHouse/ClickHouse/pull/59010) ([alesapin](https://github.com/alesapin)).
* Decrease log level for one log message [#59168](https://github.com/ClickHouse/ClickHouse/pull/59168) ([Kseniia Sumarokova](https://github.com/kssenii)).
* Fix fasttest by pinning pip dependencies [#59256](https://github.com/ClickHouse/ClickHouse/pull/59256) ([Azat Khuzhin](https://github.com/azat)).
* No debug symbols in Rust [#59306](https://github.com/ClickHouse/ClickHouse/pull/59306) ([Alexey Milovidov](https://github.com/alexey-milovidov)).

View File

@ -0,0 +1,21 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v23.12.4.15-stable (4233d111d20) FIXME as compared to v23.12.3.40-stable (a594704ae75)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix incorrect result of arrayElement / map[] on empty value [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)).
* Fix crash in topK when merging empty states [#59603](https://github.com/ClickHouse/ClickHouse/pull/59603) ([Raúl Marín](https://github.com/Algunenano)).
* Fix distributed table with a constant sharding key [#59606](https://github.com/ClickHouse/ClickHouse/pull/59606) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix leftPad / rightPad function with FixedString input [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Fix 02720_row_policy_column_with_dots [#59453](https://github.com/ClickHouse/ClickHouse/pull/59453) ([Duc Canh Le](https://github.com/canhld94)).
* Pin python dependencies in stateless tests [#59663](https://github.com/ClickHouse/ClickHouse/pull/59663) ([Raúl Marín](https://github.com/Algunenano)).

View File

@ -0,0 +1,14 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.1.2.5-stable (b2605dd4a5a) FIXME as compared to v24.1.1.2048-stable (5a024dfc093)
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix translate() with FixedString input [#59356](https://github.com/ClickHouse/ClickHouse/pull/59356) ([Raúl Marín](https://github.com/Algunenano)).
* Fix stacktraces for binaries without debug symbols [#59444](https://github.com/ClickHouse/ClickHouse/pull/59444) ([Azat Khuzhin](https://github.com/azat)).

View File

@ -0,0 +1,34 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.1.3.31-stable (135b08cbd28) FIXME as compared to v24.1.2.5-stable (b2605dd4a5a)
#### Improvement
* Backported in [#59569](https://github.com/ClickHouse/ClickHouse/issues/59569): Now the dashboard understands both compressed and uncompressed states of the URL's #hash (backward compatibility). Continuation of [#59124](https://github.com/ClickHouse/ClickHouse/issues/59124). [#59548](https://github.com/ClickHouse/ClickHouse/pull/59548) ([Amos Bird](https://github.com/amosbird)).
* Backported in [#59776](https://github.com/ClickHouse/ClickHouse/issues/59776): Added settings `split_parts_ranges_into_intersecting_and_non_intersecting_final` and `split_intersecting_parts_ranges_into_layers_final`. These settings are needed to disable optimizations for queries with `FINAL` and are intended for debugging only. [#59705](https://github.com/ClickHouse/ClickHouse/pull/59705) ([Maksim Kita](https://github.com/kitaisreal)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix `ASTAlterCommand::formatImpl` in case of column specific settings… [#59445](https://github.com/ClickHouse/ClickHouse/pull/59445) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
* Make MAX use the same rules as permutation for complex types [#59498](https://github.com/ClickHouse/ClickHouse/pull/59498) ([Raúl Marín](https://github.com/Algunenano)).
* Fix corner case when passing `update_insert_deduplication_token_in_dependent_materialized_views` [#59544](https://github.com/ClickHouse/ClickHouse/pull/59544) ([Jordi Villar](https://github.com/jrdi)).
* Fix incorrect result of arrayElement / map[] on empty value [#59594](https://github.com/ClickHouse/ClickHouse/pull/59594) ([Raúl Marín](https://github.com/Algunenano)).
* Fix crash in topK when merging empty states [#59603](https://github.com/ClickHouse/ClickHouse/pull/59603) ([Raúl Marín](https://github.com/Algunenano)).
* Maintain function alias in RewriteSumFunctionWithSumAndCountVisitor [#59658](https://github.com/ClickHouse/ClickHouse/pull/59658) ([Raúl Marín](https://github.com/Algunenano)).
* Fix leftPad / rightPad function with FixedString input [#59739](https://github.com/ClickHouse/ClickHouse/pull/59739) ([Raúl Marín](https://github.com/Algunenano)).
#### NO CL ENTRY
* NO CL ENTRY: 'Revert "Backport [#59650](https://github.com/ClickHouse/ClickHouse/issues/59650) to 24.1: MergeTree FINAL optimization diagnostics and settings"'. [#59701](https://github.com/ClickHouse/ClickHouse/pull/59701) ([Raúl Marín](https://github.com/Algunenano)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Fix 02720_row_policy_column_with_dots [#59453](https://github.com/ClickHouse/ClickHouse/pull/59453) ([Duc Canh Le](https://github.com/canhld94)).
* Refactoring of dashboard state encoding [#59554](https://github.com/ClickHouse/ClickHouse/pull/59554) ([Sergei Trifonov](https://github.com/serxa)).
* MergeTree FINAL optimization diagnostics and settings [#59650](https://github.com/ClickHouse/ClickHouse/pull/59650) ([Maksim Kita](https://github.com/kitaisreal)).
* Pin python dependencies in stateless tests [#59663](https://github.com/ClickHouse/ClickHouse/pull/59663) ([Raúl Marín](https://github.com/Algunenano)).

View File

@ -0,0 +1,28 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.1.4.20-stable (f59d842b3fa) FIXME as compared to v24.1.3.31-stable (135b08cbd28)
#### Improvement
* Backported in [#59826](https://github.com/ClickHouse/ClickHouse/issues/59826): In the case when `merge_max_block_size_bytes` is small enough and tables contain wide rows (strings or tuples), background merges may get stuck in an endless loop. This behaviour is fixed. Follow-up for https://github.com/ClickHouse/ClickHouse/pull/59340. [#59812](https://github.com/ClickHouse/ClickHouse/pull/59812) ([Nikita Mikhaylov](https://github.com/nikitamikhaylov)).
#### Build/Testing/Packaging Improvement
* Backported in [#59885](https://github.com/ClickHouse/ClickHouse/issues/59885): If you want to run initdb scripts every time the ClickHouse container starts, you should set the environment variable CLICKHOUSE_ALWAYS_RUN_INITDB_SCRIPTS. [#59808](https://github.com/ClickHouse/ClickHouse/pull/59808) ([Alexander Nikolaev](https://github.com/AlexNik)).
#### Bug Fix (user-visible misbehavior in an official stable release)
* Fix digest calculation in Keeper [#59439](https://github.com/ClickHouse/ClickHouse/pull/59439) ([Antonio Andelic](https://github.com/antonio2368)).
* Fix distributed table with a constant sharding key [#59606](https://github.com/ClickHouse/ClickHouse/pull/59606) ([Vitaly Baranov](https://github.com/vitlibar)).
* Fix query start time on non initial queries [#59662](https://github.com/ClickHouse/ClickHouse/pull/59662) ([Raúl Marín](https://github.com/Algunenano)).
* Fix parsing of partition expressions surrounded by parens [#59901](https://github.com/ClickHouse/ClickHouse/pull/59901) ([János Benjamin Antal](https://github.com/antaljanosbenjamin)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* Temporarily remove a feature that doesn't work [#59688](https://github.com/ClickHouse/ClickHouse/pull/59688) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Make ZooKeeper actually sequentially consistent [#59735](https://github.com/ClickHouse/ClickHouse/pull/59735) ([Alexander Tokmakov](https://github.com/tavplubix)).
* Fix special build reports in release branches [#59797](https://github.com/ClickHouse/ClickHouse/pull/59797) ([Mikhail f. Shiryaev](https://github.com/Felixoid)).

View File

@ -0,0 +1,17 @@
---
sidebar_position: 1
sidebar_label: 2024
---
# 2024 Changelog
### ClickHouse release v24.1.5.6-stable (7f67181ff31) FIXME as compared to v24.1.4.20-stable (f59d842b3fa)
#### Bug Fix (user-visible misbehavior in an official stable release)
* UniqExactSet read crash fix [#59928](https://github.com/ClickHouse/ClickHouse/pull/59928) ([Maksim Kita](https://github.com/kitaisreal)).
#### NOT FOR CHANGELOG / INSIGNIFICANT
* CI: do not reuse builds on release branches [#59798](https://github.com/ClickHouse/ClickHouse/pull/59798) ([Max K.](https://github.com/maxknv)).

View File

@ -166,11 +166,11 @@ For most external applications, we recommend using the HTTP interface because it
## Configuration {#configuration} ## Configuration {#configuration}
ClickHouse Server is based on POCO C++ Libraries and uses `Poco::Util::AbstractConfiguration` to represent it's configuration. Configuration is held by `Poco::Util::ServerApplication` class inherited by `DaemonBase` class, which in turn is inherited by `DB::Server` class, implementing clickhouse-server itself. So config can be accessed by `ServerApplication::config()` method. ClickHouse Server is based on POCO C++ Libraries and uses `Poco::Util::AbstractConfiguration` to represent its configuration. Configuration is held by `Poco::Util::ServerApplication` class inherited by `DaemonBase` class, which in turn is inherited by `DB::Server` class, implementing clickhouse-server itself. So config can be accessed by `ServerApplication::config()` method.
Config is read from multiple files (in XML or YAML format) and merged into single `AbstractConfiguration` by `ConfigProcessor` class. Configuration is loaded at server startup and can be reloaded later if one of config files is updated, removed or added. `ConfigReloader` class is responsible for periodic monitoring of these changes and reload procedure as well. `SYSTEM RELOAD CONFIG` query also triggers config to be reloaded. Config is read from multiple files (in XML or YAML format) and merged into single `AbstractConfiguration` by `ConfigProcessor` class. Configuration is loaded at server startup and can be reloaded later if one of config files is updated, removed or added. `ConfigReloader` class is responsible for periodic monitoring of these changes and reload procedure as well. `SYSTEM RELOAD CONFIG` query also triggers config to be reloaded.
For queries and subsystems other than `Server` config is accessible using `Context::getConfigRef()` method. Every subsystem that is capable of reloading it's config without server restart should register itself in reload callback in `Server::main()` method. Note that if newer config has an error, most subsystems will ignore new config, log warning messages and keep working with previously loaded config. Due to the nature of `AbstractConfiguration` it is not possible to pass reference to specific section, so `String config_prefix` is usually used instead. For queries and subsystems other than `Server` config is accessible using `Context::getConfigRef()` method. Every subsystem that is capable of reloading its config without server restart should register itself in reload callback in `Server::main()` method. Note that if newer config has an error, most subsystems will ignore new config, log warning messages and keep working with previously loaded config. Due to the nature of `AbstractConfiguration` it is not possible to pass reference to specific section, so `String config_prefix` is usually used instead.
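For example, a new override can be dropped into `config.d` and applied without restarting the server; a minimal sketch (the file path below follows the default package layout, and the chosen setting and value are only illustrative):

```bash
# Add a config override, ask the server to re-read its config, then check the result.
echo '<clickhouse><max_connections>2048</max_connections></clickhouse>' \
    | sudo tee /etc/clickhouse-server/config.d/max_connections.xml

clickhouse-client --query "SYSTEM RELOAD CONFIG"
clickhouse-client --query "SELECT name, value FROM system.server_settings WHERE name = 'max_connections'"
```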
## Threads and jobs {#threads-and-jobs} ## Threads and jobs {#threads-and-jobs}
@ -255,7 +255,7 @@ When we are going to read something from a part in `MergeTree`, we look at `prim
When you `INSERT` a bunch of data into `MergeTree`, that bunch is sorted by primary key order and forms a new part. There are background threads that periodically select some parts and merge them into a single sorted part to keep the number of parts relatively low. Thats why it is called `MergeTree`. Of course, merging leads to “write amplification”. All parts are immutable: they are only created and deleted, but not modified. When SELECT is executed, it holds a snapshot of the table (a set of parts). After merging, we also keep old parts for some time to make a recovery after failure easier, so if we see that some merged part is probably broken, we can replace it with its source parts. When you `INSERT` a bunch of data into `MergeTree`, that bunch is sorted by primary key order and forms a new part. There are background threads that periodically select some parts and merge them into a single sorted part to keep the number of parts relatively low. Thats why it is called `MergeTree`. Of course, merging leads to “write amplification”. All parts are immutable: they are only created and deleted, but not modified. When SELECT is executed, it holds a snapshot of the table (a set of parts). After merging, we also keep old parts for some time to make a recovery after failure easier, so if we see that some merged part is probably broken, we can replace it with its source parts.
`MergeTree` is not an LSM tree because it does not contain MEMTABLE and LOG: inserted data is written directly to the filesystem. This behavior makes MergeTree much more suitable to insert data in batches. Therefore frequently inserting small amounts of rows is not ideal for MergeTree. For example, a couple of rows per second is OK, but doing it a thousand times a second is not optimal for MergeTree. However, there is an async insert mode for small inserts to overcome this limitation. We did it this way for simplicitys sake, and because we are already inserting data in batches in our applications `MergeTree` is not an LSM tree because it does not contain MEMTABLE and LOG: inserted data is written directly to the filesystem. This behavior makes MergeTree much more suitable to insert data in batches. Therefore, frequently inserting small amounts of rows is not ideal for MergeTree. For example, a couple of rows per second is OK, but doing it a thousand times a second is not optimal for MergeTree. However, there is an async insert mode for small inserts to overcome this limitation. We did it this way for simplicitys sake, and because we are already inserting data in batches in our applications
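As an illustration of that async insert mode (the table name and values below are placeholders, not part of the original text):

```bash
# Small inserts are buffered server-side and flushed as a larger batch.
clickhouse-client --query "
    INSERT INTO events SETTINGS async_insert = 1, wait_for_async_insert = 1
    VALUES (now(), 'page_view')"
```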
There are MergeTree engines that are doing additional work during background merges. Examples are `CollapsingMergeTree` and `AggregatingMergeTree`. This could be treated as special support for updates. Keep in mind that these are not real updates because users usually have no control over the time when background merges are executed, and data in a `MergeTree` table is almost always stored in more than one part, not in completely merged form. There are MergeTree engines that are doing additional work during background merges. Examples are `CollapsingMergeTree` and `AggregatingMergeTree`. This could be treated as special support for updates. Keep in mind that these are not real updates because users usually have no control over the time when background merges are executed, and data in a `MergeTree` table is almost always stored in more than one part, not in completely merged form.

View File

@ -38,7 +38,7 @@ ninja
## Running ## Running
Once built, the binary can be run with, eg.: Once built, the binary can be run with, e.g.:
```bash ```bash
qemu-s390x-static -L /usr/s390x-linux-gnu ./clickhouse qemu-s390x-static -L /usr/s390x-linux-gnu ./clickhouse

View File

@ -95,7 +95,7 @@ Complete below three steps mentioned in [Star Schema Benchmark](https://clickhou
- Inserting data. Here you should use `./benchmark_sample/rawdata_dir/ssb-dbgen/*.tbl` as input data. - Inserting data. Here you should use `./benchmark_sample/rawdata_dir/ssb-dbgen/*.tbl` as input data.
- Converting “star schema” to de-normalized “flat schema” - Converting “star schema” to de-normalized “flat schema”
Set up database with with IAA Deflate codec Set up database with IAA Deflate codec
``` bash ``` bash
$ cd ./database_dir/deflate $ cd ./database_dir/deflate
@ -104,7 +104,7 @@ $ [CLICKHOUSE_EXE] client
``` ```
Complete the same three steps as for lz4 above Complete the same three steps as for lz4 above
Set up database with with ZSTD codec Set up database with ZSTD codec
``` bash ``` bash
$ cd ./database_dir/zstd $ cd ./database_dir/zstd

Some files were not shown because too many files have changed in this diff.